Your basic understanding is right: what actors bring to the table is processing one message at a time.
Without knowing more, the use case seems simple enough that in reality messages should not queue up much; other things, such as serialization from an outside protocol (gRPC, HTTP), will likely dominate. Always benchmark and verify that the problem is actually caused by what you think is causing it before optimizing for throughput.
If you have verified that the messages are in fact the bottleneck, there are various strategies (not a complete list):
- Split the state up into smaller parts that can be interacted with individually - perhaps the details shouldn't be part of the bidding state? (You are already doing this in one respect, since you model each item as an actor, and each can run and process messages individually, potentially scaling out over several nodes.)
- Publish the non-mutable/seldom-changing state to subscribers so that they do not need to query (aka a cache).
- Re-think exactly how your modelling optimizes for and focuses on the most important part of the problem - perhaps there is a tradeoff to be made: should the modelled thing be an actor, or a bid, for example?
- Fall back to, or interact with, non-lock concurrency tools in the JDK - volatile fields, Atomics - though note that these will be a hindrance if you need to scale out.
- Avoid defensive copying by using immutable data structures that can be sent in messages as actor state - this can make querying very cheap, if that is what will happen most often.
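To make the lock-free JDK tooling point concrete, here is a minimal sketch of a highest-bid tracker using `AtomicLong` with a compare-and-set loop. The class and method names are made up for illustration; the point is that readers never queue behind a mailbox and writers never take a lock (but, as noted above, this only works within a single JVM):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical lock-free highest-bid tracker. Readers call current()
// without blocking; writers race via compare-and-set instead of a lock.
class HighestBid {
    private final AtomicLong highest = new AtomicLong(0);

    // Returns true if the offer became the new highest bid.
    boolean offer(long amount) {
        long current;
        do {
            current = highest.get();
            if (amount <= current) {
                return false; // not an improvement, nothing to update
            }
            // Retry if another thread raised the bid in the meantime.
        } while (!highest.compareAndSet(current, amount));
        return true;
    }

    long current() {
        return highest.get();
    }
}
```

The retry loop is the standard optimistic pattern: read, decide, attempt the swap, and start over if someone else got there first.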
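And a sketch of the immutable-state idea, assuming a Java `record` for the auction snapshot (the names are invented for this example): because the snapshot can never change after construction, the actor can hand the very same instance to any number of queriers in messages without defensive copying, and each update simply produces a new snapshot:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical immutable auction state. Safe to share in messages:
// no caller can mutate it, so no defensive copy is needed per query.
record AuctionSnapshot(String itemId, long highestBid, List<String> bidders) {
    AuctionSnapshot {
        bidders = List.copyOf(bidders); // copy once, at construction only
    }

    // Updates return a fresh snapshot; older ones stay valid for readers.
    AuctionSnapshot withBid(String bidder, long amount) {
        List<String> updated = new ArrayList<>(bidders);
        updated.add(bidder);
        return new AuctionSnapshot(itemId, amount, updated);
    }
}
```

This is exactly the tradeoff mentioned above: writes pay for a copy, but if querying dominates, reads become as cheap as passing a reference.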