CQRS (Command Query Responsibility Segregation; with our current design/architecture this might more accurately be called Command–Query Separation 😀) is a software design pattern that emphasizes separating a system's read and write operations. Instead of using a single model to handle both queries and commands, CQRS advocates distinct models, each optimized for its specific task (Greg Young).
The main benefit of CQRS is improved scalability and performance. Another important benefit is improved maintainability and extensibility: by separating read and write concerns, CQRS promotes a more modular, decoupled architecture, making it easier to change or add features without disrupting existing functionality.
There are no changes for querying data, so the current process remains the same:
- The Frontend sends a request to the Backend system
- The Backend collects data from storage (MongoDB, PostgreSQL, or wherever the data lives) and processes it (transforms, etc.)
- The Backend returns the data to the Frontend, and that's it.
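The read path above can be sketched in a few lines. This is a minimal, hypothetical handler: the `storage` dict stands in for MongoDB/PostgreSQL, and the field filtering is an illustrative example of the "transform" step, not anything prescribed by the article.

```python
# Minimal sketch of the unchanged query path. `storage` is an in-memory
# stand-in for the real database; names here are illustrative assumptions.

def get_user(storage: dict, user_id: str) -> dict:
    """Backend handler for a Frontend read request."""
    record = storage.get(user_id)  # collect data from storage
    if record is None:
        return {"status": 404, "body": None}
    # "transform" step: e.g. strip internal fields before returning
    body = {k: v for k, v in record.items() if k != "password_hash"}
    return {"status": 200, "body": body}  # return data to the Frontend
```

Nothing asynchronous is needed here, which is why CQRS leaves the query side untouched.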
Instead of directly executing commands on our storage (such as CREATE, UPDATE, or DELETE), we can use other tools to help our backend system work asynchronously. This way, we avoid making our users wait too long and blocking user actions.
Tools needed:
- Message broker: (Kafka, RabbitMQ, Redis Pub/Sub, etc.)
- Cache Tools: (Redis, etc.) to store temporary data
So how does this work?
- Frontend sends a request to the Backend system with a payload to execute a command (such as update, delete, or insert).
- Backend processes the frontend request. The backend publishes the event to the message broker and stores the payload data in the cache.
- The Backend returns a response to the Frontend with a message along the lines of: "Your request is being processed; please wait a moment, we'll notify you when it's ready." And that's it.
- The next action, behind the scenes, is described in the Job section.
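The command path above can be sketched like this. An in-memory `Queue` stands in for the message broker (Kafka, RabbitMQ, etc.) and a plain dict for the cache (Redis, etc.); the event shape and key names are assumptions for illustration, not part of the article's design.

```python
import json
import uuid
from queue import Queue

def handle_command(broker: Queue, cache: dict, action: str, payload: dict) -> dict:
    """Backend handler for a Frontend command request (CREATE/UPDATE/DELETE)."""
    request_id = str(uuid.uuid4())
    cache[request_id] = json.dumps(payload)                   # store payload in the cache
    broker.put({"request_id": request_id, "action": action})  # publish the event
    # Respond immediately; the real work happens asynchronously in the job.
    return {
        "status": 202,  # Accepted
        "message": "Your request is being processed; we'll notify you when it's ready.",
        "request_id": request_id,
    }
```

Note the handler never touches the storage system itself; it only publishes, caches, and returns, which is what keeps the user from being blocked.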
By using a message broker and a cache, and by implementing this method, we can improve the performance and reliability of our backend system, enabling it to process requests asynchronously without blocking user actions.
The job function itself runs in the background of our Backend system. So, how does it work?
- The job function subscribes to events from the message broker (img:3.1)
- The job function processes the event, fetches the data from temporary storage (the cache) (img:3.2), and runs the command action (CREATE, UPDATE, or DELETE) against the storage system (MongoDB, PostgreSQL, etc.) (img:3.3). It then removes the event from the message broker (3.3.1) and the temporary data from the cache (3.3.2).
- If the action succeeds, notify the user, either via SSE (Server-Sent Events) or FCM (Firebase Cloud Messaging).
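The job steps above can be sketched as a single consume-process-cleanup function. As before, an in-memory `Queue` stands in for the broker and dicts for the cache and storage; the `notify` callback is a hypothetical stand-in for the SSE/FCM push.

```python
import json
from queue import Queue

def run_job(broker: Queue, cache: dict, storage: dict, notify) -> None:
    """Background worker: consume one event and apply it to storage."""
    event = broker.get()                                # 3.1: receive an event
    payload = json.loads(cache[event["request_id"]])    # 3.2: fetch temp data from cache
    action = event["action"]
    if action in ("CREATE", "UPDATE"):                  # 3.3: run the command on storage
        storage[payload["id"]] = payload
    elif action == "DELETE":
        storage.pop(payload["id"], None)
    broker.task_done()                                  # 3.3.1: ack/remove the event
    del cache[event["request_id"]]                      # 3.3.2: remove the temp data
    notify(event["request_id"], "done")                 # success: notify via SSE/FCM
```

Cleanup happens only after the storage write, so an event is never lost between receiving it and applying it.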
So what happens when the job function fails to run the command action and the data is not modified in our storage system (a failure at img:3.3)? In that case, the job function re-publishes the event (3.3.1) to the message broker and re-inserts the payload/body data (3.3.2) into the cache, and then repeats steps 1–3. This avoids data loss.
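The failure path can be sketched by wrapping the storage write in a try/except: on failure the event goes back to the broker and the payload back to the cache, so the next run retries from step 1. The `storage_write` and `notify` callables are hypothetical stand-ins, and an in-memory `Queue`/dict again play the broker and cache.

```python
import json
from queue import Queue

def run_job_with_retry(broker: Queue, cache: dict, storage_write, notify) -> bool:
    """Process one event; on storage failure, re-publish and re-cache for retry."""
    event = broker.get()
    raw = cache.pop(event["request_id"])
    try:
        storage_write(event["action"], json.loads(raw))  # 3.3: may fail (storage down, etc.)
    except Exception:
        broker.put(event)                    # 3.3.1: re-publish the event
        cache[event["request_id"]] = raw     # 3.3.2: re-insert the payload
        return False                         # will be retried on the next run
    notify(event["request_id"], "done")
    return True
```

A real system would also cap the retry count or route repeated failures to a dead-letter queue, so one poison message cannot loop forever; that refinement is beyond this sketch.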