The Art of Asynchronous Communication: A Deep Dive into Message Queuing


Introduction

Photo: a restaurant with an order list and a busy chef

In a typical restaurant, when a customer places an order with their waiter, the waiter must go to the kitchen and verbally relay the order to the chef. The chef must then prepare the order, and once it's ready, the waiter returns to the kitchen to retrieve it and bring it to the customer. This is a synchronous process, meaning that the waiter must wait for the chef to finish preparing the first order before they can place the next order.

This synchronous process can lead to delays and inefficiencies, especially during busy periods when there are many orders to be processed. If the chef is busy with a complex order, the waiter must wait for the chef to finish before they can place the next order, leading to long wait times for customers.

By using message queuing, however, the waiter can add each order to a queue, a list of pending orders, as it is placed. The chef monitors this queue and starts preparing orders as they become available. The orders in the queue act as messages that the chef can process asynchronously, without waiting for the waiter to come back with each individual order.

This can greatly improve efficiency in the restaurant, reduce wait times for customers, and ensure that orders are processed in a timely and efficient manner. It also allows for greater scalability, as the restaurant can handle a larger volume of orders without adding additional staff.

The Problem with Synchronous Processing

The same problem happens with servers. If a server executes instructions synchronously, each request must wait for the previous one to finish before it can start executing. Now, imagine that each request takes 1 second to execute. In that case, the 10th user waits 10 seconds, the 100th user waits 100 seconds, and so on.

One solution is horizontal scaling, i.e. having multiple servers handle requests at the same time, with a load balancer distributing requests across them. However, this still suffers when a single process takes too much time, and it does not scale well: it is not feasible to add servers every time the number of requests increases. Message queues solve this issue.

Dataflow Techniques

Before introducing message queues, let's take a quick look at dataflow techniques. Data can flow between different processes in multiple ways:

  • Through databases: A process encodes the data and writes it to the database and another process reads the data from the database.

  • Through services (REST and RPC): A process encodes the data and sends it over the network, and another process receives the data and decodes it.

  • Asynchronously through message-passing: A process sends a message to a queue and another process reads the message from the queue.

Introducing Message Queuing

Message queuing is a method of communication between processes: one process sends a message to a queue and another process reads the message from the queue. The process sending the message is called the Producer and the one receiving it is called the Consumer. Message queuing allows instructions to be processed asynchronously. So, going back to the earlier problem of handling requests synchronously on the server side, the user no longer needs to wait for the server to execute all previous requests. The flow instead goes as follows:

  • The client makes a request to the server

  • The server (acting as the producer) enqueues the message in a queue

  • Then, the server responds to the client with a message saying that the request has been enqueued

  • The server loops over the queue in FIFO order and executes the requests
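To make this flow concrete, here is a minimal sketch in Python, with an in-memory queue.Queue standing in for a real message broker; the names handle_request and worker are hypothetical and exist only to illustrate the steps above.

```python
import queue
import threading
import time

# A process-local FIFO queue standing in for a real message broker.
task_queue = queue.Queue()

def handle_request(request_id: int) -> str:
    """Producer side: the server enqueues the work and replies immediately."""
    task_queue.put(request_id)
    return f"Request {request_id} enqueued; you will be notified when it is done."

def worker() -> None:
    """Consumer side: loop over the queue in FIFO order and execute requests."""
    while True:
        request_id = task_queue.get()  # blocks until a message is available
        time.sleep(1)                  # simulate 1 second of work
        print(f"Processed request {request_id}")
        task_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# Ten clients arrive at once; each gets an immediate response instead of
# waiting for the previous requests to finish.
for i in range(1, 11):
    print(handle_request(i))

task_queue.join()  # wait for the worker to drain the queue
```

The key point is that handle_request returns as soon as the message is enqueued; the actual work happens later, in FIFO order, on the consumer side.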

Message Queuing Architectures

There are three main architectures for message queuing systems:

  • Point-to-point (P2P)

    Communication in a point-to-point architecture involves exchanging messages between a sender and a specific recipient, i.e. a producer and a consumer. The producer sends a message to the queue, and when the consumer is free, it checks the queue for new messages and starts processing them. An important point to know is that in a P2P architecture, a message is removed from the queue as soon as a consumer starts processing it. P2P architectures can also have multiple consumers working on the same queue; in this case, horizontal scaling can be achieved alongside asynchronous execution.

  • Publisher-subscriber (Pub/sub)

    The Pub/sub architecture, on the other hand, involves a one-to-many communication pattern, where messages are published to a specific topic or channel (queue), and multiple subscribers can receive the same message simultaneously. An important distinction between Pub/sub and P2P is that in Pub/sub, a message is not deleted when a subscriber (consumer) reads it, while in P2P it is removed. This approach is useful for broadcasting information or events to many components at once (a small sketch follows this list).

  • Request-reply

    Request-reply is a two-way communication pattern, where a sender sends a request to a specific recipient, and the recipient sends a response back to the sender. This is done by having two different queues: one for communication from the sender to the receiver and one for the reverse direction. In this case, the client acts as the producer when making the request and the server acts as the consumer; when the server sends the response back, the roles are reversed.
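To illustrate the difference between these patterns, here is a toy in-memory Pub/sub broker in Python; PubSubBroker, subscribe, and publish are hypothetical names, and a real system would use a dedicated broker rather than process-local queues.

```python
import queue
from collections import defaultdict

class PubSubBroker:
    """Toy broker: each topic keeps one queue per subscriber, so every
    subscriber receives its own copy of each published message."""

    def __init__(self) -> None:
        self.topics: dict[str, list[queue.Queue]] = defaultdict(list)

    def subscribe(self, topic: str) -> queue.Queue:
        q: queue.Queue = queue.Queue()
        self.topics[topic].append(q)
        return q

    def publish(self, topic: str, message: str) -> None:
        for q in self.topics[topic]:
            q.put(message)  # unlike P2P, the message is not consumed away

broker = PubSubBroker()
billing = broker.subscribe("order.created")
shipping = broker.subscribe("order.created")

broker.publish("order.created", "order #42")
print(billing.get())   # order #42
print(shipping.get())  # order #42 -- both subscribers see the same event
```

Request-reply, in turn, can be sketched with two plain FIFO queues, one per direction; again, the queue names are only illustrative:

```python
import queue

requests: queue.Queue = queue.Queue()  # client -> server
replies: queue.Queue = queue.Queue()   # server -> client

requests.put("what is 2 + 2?")             # client acts as producer
question = requests.get()                  # server acts as consumer...
replies.put(f"answer to '{question}': 4")  # ...then as producer on the reply queue
print(replies.get())                       # client consumes the reply
```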

Benefits of Message Queuing

Message queuing offers a wide range of benefits that make it an attractive choice for building distributed systems:

  • Guaranteed delivery: messages are delivered reliably and in the correct order, even in the face of failures or network issues.

  • Asynchronous messaging: applications can communicate independently and asynchronously, improving performance and scalability.

  • Load balancing: workloads can be distributed evenly across multiple systems or applications.

  • Language and protocol support: message queuing works across different programming languages and protocols, making it a flexible and versatile tool for building distributed systems.

  • Decoupling: different teams or developers can work independently on different components of a system without interfering with each other. This also makes systems more maintainable and evolvable over time, as changes can be made to individual components without affecting the entire system.

  • Single responsibility: each application or system is responsible for only one task, which makes systems easier to understand and maintain.

Frontend User Experience

At this point in the article, you may be wondering about the end-user experience: how will users interact with such a system, and what will they see? From the end-user perspective, they usually get a message telling them that their request has been received and is being processed, e.g. "We received your request and it is being processed. You will receive a message once it is completed". The user may then receive an email or a notification when the request has been processed. Another option is to show a progress bar or a loading indicator. This way, the user is not stuck waiting for the request to be handled immediately, yet still gets confirmation that it is being processed.
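One common way to implement this is for the API to respond immediately with HTTP 202 Accepted and a job id, and to expose a status endpoint the frontend can poll (or replace with a push notification). The sketch below is a minimal illustration assuming Flask; the /reports routes and the in-memory jobs store are hypothetical, and a real system would persist job status and enqueue the work on an actual message queue.

```python
import uuid
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory job store; a real system would track status in a
# database and hand the work off to a message queue.
jobs: dict[str, str] = {}

@app.route("/reports", methods=["POST"])
def create_report():
    job_id = str(uuid.uuid4())
    jobs[job_id] = "processing"  # in reality: enqueue the work here
    # 202 Accepted: "we received your request and it is being processed"
    return jsonify({"id": job_id, "status": "processing"}), 202

@app.route("/reports/<job_id>", methods=["GET"])
def report_status(job_id: str):
    # The frontend can poll this endpoint to drive a progress indicator,
    # or the backend can push a notification once the job completes.
    return jsonify({"id": job_id, "status": jobs.get(job_id, "unknown")})
```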

When to use Message Queues

Message queuing should only be used when your program actually needs it. Do not adopt it just because it is a cool technology or the latest trend. Instead, focus on the specific problems that message queuing can help solve, such as reliable message delivery, asynchronous messaging, and load balancing, and weigh them against the specific needs of your program.

Tools and Examples

Here are four popular tools used for message queuing in the software industry:

  • Apache Kafka, a distributed streaming platform that is commonly used for real-time data processing.

  • Sidekiq, a simple background processing library for Ruby that uses Redis to manage its queue.

  • RabbitMQ, a message broker that implements the Advanced Message Queuing Protocol (AMQP) and is widely used for reliable message delivery.

  • AWS SQS, a distributed message queue service for decoupling and scaling software components.

These tools have been well-documented and have gained a large following in the development community due to their effectiveness and ease of use.
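As a small taste of what using one of these tools looks like, here is a minimal sketch of publishing and consuming a message with RabbitMQ through the pika Python client; it assumes a broker running on localhost and a queue arbitrarily named "tasks", and omits error handling.

```python
import pika

# Connect to a RabbitMQ broker assumed to be running on localhost.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare the queue (idempotent); "tasks" is just an example name.
channel.queue_declare(queue="tasks", durable=True)

# Producer side: publish a message to the default exchange, routed to "tasks".
channel.basic_publish(
    exchange="",
    routing_key="tasks",
    body=b"resize-image:42",
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)

# Consumer side: process one message at a time and acknowledge on success.
def handle(ch, method, properties, body):
    print("processing", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue="tasks", on_message_callback=handle)
channel.start_consuming()
```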

Conclusion

In conclusion, message queuing is an effective method of communication between processes that allows for asynchronous messaging, load balancing, and reliable message delivery. It can greatly improve efficiency and scalability in distributed systems and is supported by a variety of tools and protocols. It is important to use message queuing only when your program's requirements call for it. By adopting message queuing, developers can build more reliable, scalable, and maintainable distributed systems that handle a large volume of requests without sacrificing performance or reliability.
