I have a question about the granularity of events for a read-side processor that inserts data into Cassandra.
I could, of course, put everything I want to write into one event, and then have the read-side processor generate N prepared statements (N = A + B + C…) and execute them.
I am more interested in another approach, where I'd split my original event into several smaller ones, each of which would be translated into its own A, B, C… prepared statements.
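To make the two options concrete, here is roughly what I have in mind, sketched against Lagom's Scala API (`OrderEvent`, `OrderPlaced`, the table, and all other names are placeholders I made up for illustration):

```scala
import akka.Done
import com.datastax.driver.core.{BoundStatement, PreparedStatement}
import com.lightbend.lagom.scaladsl.persistence.cassandra.{CassandraReadSide, CassandraSession}
import com.lightbend.lagom.scaladsl.persistence.{AggregateEvent, AggregateEventTag, AggregateEventTagger, EventStreamElement, ReadSideProcessor}
import scala.concurrent.{ExecutionContext, Future}

// Made-up domain: one "big" OrderPlaced event that fans out to N item inserts.
sealed trait OrderEvent extends AggregateEvent[OrderEvent] {
  override def aggregateTag: AggregateEventTagger[OrderEvent] = OrderEvent.Tag
}
object OrderEvent {
  // Sharded tag; relevant to the parallelism question below.
  val Tag = AggregateEventTag.sharded[OrderEvent](10)
}
final case class OrderPlaced(orderId: String, itemIds: List[String]) extends OrderEvent

class OrderEventProcessor(readSide: CassandraReadSide, session: CassandraSession)
                         (implicit ec: ExecutionContext)
    extends ReadSideProcessor[OrderEvent] {

  private var insertItem: PreparedStatement = _

  override def buildHandler(): ReadSideProcessor.ReadSideHandler[OrderEvent] =
    readSide.builder[OrderEvent]("orderEventOffset")
      .setGlobalPrepare(() => createTable())
      .setPrepare(_ => prepareStatements())
      .setEventHandler[OrderPlaced](processOrderPlaced)
      .build()

  private def createTable(): Future[Done] =
    session.executeCreateTable(
      "CREATE TABLE IF NOT EXISTS order_items (order_id text, item_id text, " +
        "PRIMARY KEY (order_id, item_id))")

  private def prepareStatements(): Future[Done] =
    session.prepare("INSERT INTO order_items (order_id, item_id) VALUES (?, ?)").map { ps =>
      insertItem = ps
      Done
    }

  // Option 1: one event produces N bound statements. With option 2 the event
  // itself would be smaller and this handler would return only the few
  // statements belonging to that slice.
  private def processOrderPlaced(elem: EventStreamElement[OrderPlaced]): Future[List[BoundStatement]] =
    Future.successful(elem.event.itemIds.map(itemId => insertItem.bind(elem.event.orderId, itemId)))

  override def aggregateTags: Set[AggregateEventTag[OrderEvent]] = OrderEvent.Tag.allTags
}
```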
Am I right to assume that in production I'd have multiple read-side consumers/writers running, which could share the load of writing data into Cassandra? I don't really need atomic/batch inserts, but I do need the data to eventually end up there.
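This is how I picture the wiring, continuing the sketch above and assuming Lagom's `ReadSide.register` (again, the class name is a placeholder):

```scala
import com.lightbend.lagom.scaladsl.persistence.ReadSide
import com.lightbend.lagom.scaladsl.persistence.cassandra.{CassandraReadSide, CassandraSession}
import scala.concurrent.ExecutionContext

class OrderReadSideWiring(readSide: ReadSide,
                          cassandraReadSide: CassandraReadSide,
                          session: CassandraSession)
                         (implicit ec: ExecutionContext) {
  // With the sharded tag above (10 shards), my understanding is that Lagom
  // runs one processor instance per shard, distributed across the cluster,
  // each consuming its own slice of the event stream with its own stored
  // offset: at-least-once delivery, no atomicity across statements, but
  // everything eventually lands in Cassandra.
  readSide.register(new OrderEventProcessor(cassandraReadSide, session))
}
```

So my assumption is that the parallelism comes from the number of shard tags rather than from the event granularity itself. Is that correct?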