How to deal with bad data in production

Suppose we want to clean up data in our Cassandra database via CQL operations: either upsert the corrected values, or remove the bad row and insert it again with fixed data. There is no schema evolution taking place here, just data cleanup in our Cassandra tables. How does this affect Lagom when the service is started and the clean data is read instead of the old/bad data?
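
For concreteness, the kind of cleanup meant here looks roughly like the following (table and column names are made up purely for illustration):

    -- Hypothetical table; names are illustrative only.
    -- Option 1: upsert the corrected value in place.
    UPDATE user_profiles SET email = 'fixed@example.com' WHERE user_id = 42;

    -- Option 2: remove the bad row and insert a corrected one.
    DELETE FROM user_profiles WHERE user_id = 42;
    INSERT INTO user_profiles (user_id, email) VALUES (42, 'fixed@example.com');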

  • The Lagom persistent entity would still hold the old data; will it automatically pick up the good/clean data? Most probably not, but what can be done then?
  • Is there a command to flush the persistent entities from the messages table and re-create them from the new/clean data? It might not be possible to keep the full history of the persistent entities, but could a new messages table, with all events as if freshly created, be generated in some way? (A rough sketch of this idea follows below.)
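
To make the second bullet concrete, here is a very rough cqlsh sketch of what generating a fresh messages table might involve. Everything in it is an assumption: the keyspace name my_service is made up (Lagom's journal keyspace is configurable), the real messages table has more columns than the sketch implies, and events are stored as serialized blobs, so any actual correction of the payloads would have to happen offline, outside cqlsh:

    -- Export the journal to CSV (cqlsh COPY; keyspace name is assumed).
    COPY my_service.messages TO 'journal_export.csv';
    -- ...correct the exported event rows offline, in some external tool...
    -- Flush the old events, then load the corrected ones back in.
    TRUNCATE my_service.messages;
    COPY my_service.messages FROM 'journal_export.csv';
    -- Note: snapshots would presumably also need the same treatment, otherwise
    -- entities may recover from a stale snapshot instead of the corrected events.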