09/03/2026
7 min read
ChaiCode
System Design
queue
message queue
rabbitmq
A queue is a fundamental concept in software engineering used to manage tasks or messages that need to be processed in a specific order (FIFO). It provides a mechanism for decoupling the components of a system, allowing them to operate independently and asynchronously.
Analogy: Imagine a line of customers waiting to order at a McDonald's counter.
When customers arrive, they join the line at the back. The customer who has been waiting the longest gets served first, while new customers must wait their turn. No one can skip ahead in the line.
This is exactly how a queue works in software. Tasks enter the queue one by one, and the system processes them in the same order they arrived, ensuring fairness and predictable execution.
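The FIFO behavior described above can be sketched with a plain array (a minimal illustration, not a production queue):

```javascript
// Minimal FIFO queue: tasks leave in the same order they arrived.
const queue = [];

queue.push('order #1'); // first customer joins the line
queue.push('order #2');
queue.push('order #3');

const served = queue.shift(); // the longest-waiting task is served first
console.log(served); // order #1
```

New tasks are always appended at the back (`push`) and removed from the front (`shift`), so no task can "skip ahead in the line."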
Let's imagine what YouTube's system design would look like without a message queue, i.e., if every operation in the upload pipeline had to be performed synchronously.
Each service waits for the next one to finish before returning a response.
That means the original client request stays open the entire time until the whole pipeline finishes.
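That blocking chain can be sketched as follows (hypothetical service names; each step calls the next directly and nothing returns until the last step finishes):

```javascript
// Hypothetical synchronous pipeline: each service calls the next and blocks
// until it returns, so the client's request stays open the whole time.
const steps = [];

function compressVideo(video) { steps.push('compress'); return video; }
function formatVideo(video)   { steps.push('format');   return compressVideo(video); }
function uploadVideo(video)   { steps.push('upload');   return formatVideo(video); }

uploadVideo('cat.mp4');
console.log(steps.join(' -> ')); // upload -> format -> compress
```

The call to `uploadVideo` only returns after `formatVideo` and `compressVideo` have both completed, which is exactly why the client's HTTP request stays open for the entire pipeline.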
The Slow Response: Video processing is heavy. Converting formats and compressing video can take seconds or even minutes, depending on the file size.
Tight Coupling Between Services: In a synchronous pipeline, every service depends directly on the next one. If one service goes down, the entire chain breaks.
Cascading Failures: Synchronous chains can cause cascading failures.
Imagine this situation:
Compress service becomes slow
Format service starts waiting longer
Upload service keeps connections open
Threads start piling up
Eventually the whole system becomes overloaded
One slow service can cause the entire pipeline to degrade.
This is a common failure pattern in distributed systems.
Because requests stay open for a long time:
API threads stay occupied
HTTP connections remain open
Memory usage increases
If thousands of users upload videos simultaneously, the server could easily run out of resources.
Now that you know the problem, let's understand how a message queue makes the processing asynchronous.
A message queue is a specific type of queue used in distributed systems for inter-process communication. It acts as an intermediary between different services or applications, allowing them to send and receive messages without direct knowledge of each other.
Producer/Publisher: A component that creates and sends messages to the message queue.
Consumer/Subscriber: A component that retrieves and processes messages from the message queue.
Decoupling: Message queues decouple producers and consumers, meaning they don't need to be available at the same time or know each other's direct addresses. This enhances system resilience and scalability.
Asynchronous Processing: Messages are processed asynchronously, allowing the producer to continue its work without waiting for the consumer to complete the message processing.
Buffering: Message queues act as a buffer, storing messages when consumers are busy or offline, and delivering them when they become available. This prevents system overload and data loss.
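The three properties above can be illustrated with a toy in-memory queue (the `MessageQueue` class here is illustrative, not a real library API): the producer publishes without knowing whether a consumer exists, and messages are buffered until one subscribes.

```javascript
// Toy in-memory message queue illustrating decoupling and buffering.
class MessageQueue {
  constructor() {
    this.messages = [];
    this.consumer = null;
  }
  publish(msg) {
    if (this.consumer) this.consumer(msg); // consumer online: deliver now
    else this.messages.push(msg);          // buffer until a consumer attaches
  }
  subscribe(fn) {
    this.consumer = fn;
    // Drain anything that was buffered while the consumer was offline.
    while (this.messages.length) fn(this.messages.shift());
  }
}

const q = new MessageQueue();
q.publish('video-1'); // producer keeps working: no consumer exists yet
q.publish('video-2');

const processed = [];
q.subscribe(msg => processed.push(msg)); // consumer comes online later
console.log(processed); // ['video-1', 'video-2']
```

Note that the producer returned immediately from both `publish` calls, and the consumer still received every message in order once it subscribed — the two sides never had to be available at the same time.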
RabbitMQ is an open-source, distributed message queue written in Erlang that supports various communication protocols, primarily the Advanced Message Queuing Protocol (AMQP). It was designed to solve the "spaghetti mesh" architecture problem where every client directly talks to every other client.
RabbitMQ Server (Broker):
This is the central component that receives messages from producers and delivers them to consumers.
It listens on port 5672 by default.
It communicates with clients over a stateful, bi-directional TCP connection, leveraging the AMQP protocol.
The server pushes messages to consumers when they are ready.
Publisher (Producer):
An application that sends messages to the RabbitMQ server.
It establishes a stateful, two-way TCP connection with the server.
Publishers declare the queue they intend to publish to (even if RabbitMQ internally routes it via an exchange).
Consumer (Subscriber):
An application that receives and processes messages from the RabbitMQ server.
It also establishes a stateful, two-way TCP connection with the server.
Consumers express interest in a specific queue and the server pushes messages to them.
AMQP (Advanced Message Queuing Protocol):
The primary protocol used by RabbitMQ for communication.
It defines the format of messages and the rules for their exchange between the server and clients.
AMQP connections are stateful and bi-directional.
Channels:
An abstraction on top of a single TCP connection. Multiple channels can be opened over a single TCP connection, allowing for concurrent communication and multiplexing.
Channels are similar in concept to HTTP/2 streams, where multiple concurrent requests/responses are tagged with a unique ID and sent over a single connection.
Queues:
Named buffers within the RabbitMQ server where messages are stored.
Publishers send messages to queues, and consumers retrieve messages from them.
Queues can be durable (persisted to disk) or non-durable.
Exchanges:
Receive messages from producers and route them to one or more queues based on predefined rules (e.g., direct, fanout, topic).
While publishers typically specify a queue name, messages are internally routed via an exchange, providing an additional layer of abstraction.
In RabbitMQ, producers do not send messages directly to queues. Instead, messages are first sent to an exchange (router), which decides how to route them to one or more queues.
RabbitMQ supports several exchange types.
A direct exchange routes messages to a queue based on an exact routing key match.
For example, if a message has the routing key video.compress, it will only be delivered to queues that are bound with the same key.
This is commonly used when messages need to be delivered to specific services.
A fanout exchange broadcasts messages to all queues bound to it, ignoring routing keys.
This is useful when multiple services need to react to the same event.
Example:
send notifications
update analytics
trigger logging
A topic exchange routes messages based on pattern matching between the routing key and the binding pattern, where `*` matches exactly one word and `#` matches zero or more words.
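The wildcard semantics of a topic exchange can be sketched with a small matcher (assuming the standard AMQP rules: words are separated by dots, `*` matches exactly one word, `#` matches zero or more words):

```javascript
// Sketch of AMQP-style topic matching.
function topicMatches(pattern, routingKey) {
  const p = pattern.split('.');
  const k = routingKey.split('.');
  function match(i, j) {
    if (i === p.length) return j === k.length;
    if (p[i] === '#') {
      // '#' may consume zero or more of the remaining words.
      for (let n = j; n <= k.length; n++) {
        if (match(i + 1, n)) return true;
      }
      return false;
    }
    if (j === k.length) return false;
    if (p[i] !== '*' && p[i] !== k[j]) return false;
    return match(i + 1, j + 1);
  }
  return match(0, 0);
}

console.log(topicMatches('video.*', 'video.compress'));      // true
console.log(topicMatches('video.#', 'video.compress.done')); // true
console.log(topicMatches('video.*', 'video.compress.done')); // false
```

By comparison, a direct exchange is just an exact string comparison on the routing key, and a fanout exchange skips the comparison entirely and delivers to every bound queue.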
RabbitMQ pushes messages to consumers.
Consumers are responsible for acknowledging messages after successful processing. This informs the server to remove the message from the queue.
If a consumer fails to acknowledge a message (e.g., due to a crash), RabbitMQ can redeliver the message to another available consumer, ensuring "at least once" delivery guarantee.
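The redelivery behavior can be simulated with a toy queue (the `AckQueue` class is illustrative, not RabbitMQ's actual API): a message that is never acknowledged goes back on the queue and reaches another consumer.

```javascript
// Toy illustration of at-least-once delivery: an unacknowledged
// message is requeued and redelivered to another consumer.
class AckQueue {
  constructor() { this.pending = []; }
  publish(msg) { this.pending.push(msg); }
  deliver(consumer) {
    const msg = this.pending.shift();
    if (msg === undefined) return;
    const acked = consumer(msg);       // consumer returns true to ack
    if (!acked) this.pending.push(msg); // no ack: requeue for redelivery
  }
}

const q = new AckQueue();
q.publish('encode video-42');

q.deliver(() => false); // first consumer crashes before acking

const handled = [];
q.deliver(msg => { handled.push(msg); return true; }); // redelivered, acked
console.log(handled); // ['encode video-42']
```

Because the message can be delivered more than once (the first consumer may have partially processed it before crashing), this is an "at least once" guarantee, not "exactly once."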
Pros:
Decoupling: Effectively decouples system components.
Asynchronous Processing: Enables non-blocking operations.
Reliable Delivery: Offers mechanisms like acknowledgments to ensure messages are processed.
Channels: Efficiently multiplexes communication over a single TCP connection, reducing overhead.
Mature Technology: Widely adopted and well-supported.
Cons:
Complexity: Can be complex to set up and manage due to its many components and concepts.
Push Model Scalability: The push model to consumers can be challenging to scale, as it's hard to guarantee that consumers won't be overwhelmed if the server pushes too many messages at once.
Guarantees: While it offers "at least once" delivery, achieving "exactly once" delivery is very complex and often requires additional logic on the application side.
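One common piece of that application-side logic is an idempotent consumer: since redelivery can produce duplicates, the consumer tracks message IDs it has already processed and ignores repeats. A minimal sketch (the `id`/`body` message shape is assumed for illustration):

```javascript
// Deduplicating consumer: approximates exactly-once *processing*
// on top of at-least-once *delivery* by tracking seen message IDs.
const seen = new Set();
const results = [];

function handle(msg) {
  if (seen.has(msg.id)) return; // already processed: drop the duplicate
  seen.add(msg.id);
  results.push(msg.body);
}

handle({ id: 1, body: 'compress video-7' });
handle({ id: 1, body: 'compress video-7' }); // redelivered duplicate
handle({ id: 2, body: 'compress video-8' });

console.log(results); // ['compress video-7', 'compress video-8']
```

In a real system the `seen` set would need to be persisted (e.g., in the same database transaction as the side effect), otherwise a consumer restart reintroduces duplicates.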
Spinning up a RabbitMQ server can be easily done using Docker containers.
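For example, the official `rabbitmq` image with the management plugin can be started like this (5672 is the AMQP port, 15672 is the management UI):

```shell
docker run -d --name rabbitmq \
  -p 5672:5672 -p 15672:15672 \
  rabbitmq:3-management
```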
Node.js (and other languages) has official AMQP client libraries (e.g., amqplib) to interact with RabbitMQ, allowing developers to create publishers and consumers.
Cloud-based RabbitMQ services (like CloudAMQP) simplify deployment and management.
RabbitMQ is particularly well-suited for asynchronous job systems where publishers submit tasks and consumers pick them up for processing.