Previously, we analyzed serverless functions and AWS Lambda. In part 2 of this four-part series, we'll be looking into message queues and AWS SQS. Let's begin!
What are Message Queues?
Message queues store “messages”—packets of data that applications create for other applications to consume—in the order they are transmitted until the consuming application can process them. Message queues can provide asynchronous service-to-service communication in serverless and microservices architectures.
But why is this important? To answer this, let’s see how monoliths fail.
Ways in which monoliths can fail
- Too much code. A larger code base can slow down development, lengthen builds, increase context switching, and stretch out onboarding for new team members.
- Fast Producer/Slow Consumer. Monoliths that manage messages in memory rather than with a buffered queue can run out of memory. For example, a flood of HTTP requests can overflow a web application's memory.
- Coupling. An unhandled error in non-critical processing can bring down your entire application.
- Complexity. Changing one part of the system can unexpectedly affect other parts even though they’re logically unrelated, which leads to some nasty bugs.
Possible solutions to the monolith problems
1. Add parallelism on the consumer side. Background workers can speed up the message processing rate so that fewer messages are held in memory. Downside: this adds complexity, and you can still run out of memory. You may also exhaust the thread pool or break the order of processing.
2. Add a fault-tolerant, scalable queue like SQS between the producer and the consumer, and split them into two services.
Let’s go with the latter and investigate SQS.
How do message queues solve these problems?
Producers can push messages and consumers can pull them, each at its own rate. With a queue in between, the two sides process asynchronously.
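Here is a minimal sketch of that decoupling using Python's standard library, with `queue.Queue` standing in for SQS: the producer enqueues as fast as it likes, the consumer drains at its own pace, and nothing is lost in between. (The names and rates are illustrative, not from a real workload.)

```python
import queue
import threading

buffer = queue.Queue()  # stand-in for the message queue
processed = []

def producer(n):
    # Fast producer: enqueue n messages as quickly as possible.
    for i in range(n):
        buffer.put(f"message-{i}")

def consumer(n):
    # Slow consumer: pull messages one at a time at its own pace.
    for _ in range(n):
        msg = buffer.get()  # blocks until a message is available
        processed.append(msg)
        buffer.task_done()

t_prod = threading.Thread(target=producer, args=(100,))
t_cons = threading.Thread(target=consumer, args=(100,))
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()

print(len(processed))  # 100: every message produced was consumed
```

The producer never waits for the consumer, and a burst of messages simply accumulates in the buffer instead of overwhelming the consumer.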
Additional benefits of message queues:
- By separating components with message queues, you make the system more fault tolerant. If one part of the system becomes unreachable, the others can still interact with the queue.
- Message queues create an implicit, data-based interface that both sides can adhere to. This allows developers to modify the implementation of these applications independently as long as the data contract is upheld.
- Queues are great for handling traffic spikes. A message queue lets you put web requests in a durable queue and process every one of them, even when requests arrive faster than you can handle them.
How queues process messages:
- Typically, each message in the queue is processed only once, and only one consumer processes a given message. For this reason, this messaging pattern is often called one-to-one, or point-to-point, communications. When a message needs to be processed by more than one consumer, message queues can be combined with Pub/Sub messaging in a fanout design pattern (AWS SNS can be combined with SQS to achieve this fanout pattern).
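The fanout pattern described above can be sketched in plain Python: a topic (standing in for SNS) copies each published message into every subscribed queue (standing in for SQS), so each consumer gets its own copy. The class and queue names here are illustrative.

```python
from collections import deque

class Topic:
    def __init__(self):
        self.queues = []

    def subscribe(self):
        # Each subscription gets its own independent queue.
        q = deque()
        self.queues.append(q)
        return q

    def publish(self, message):
        # Fanout: deliver a copy of the message to every queue.
        for q in self.queues:
            q.append(message)

orders = Topic()
billing_queue = orders.subscribe()
shipping_queue = orders.subscribe()

orders.publish({"order_id": 1, "total": 9.99})

print(billing_queue[0])   # both subscribers received the message
print(shipping_queue[0])
```

Each queue then behaves point-to-point on its own side: the billing consumer and the shipping consumer process their copies independently.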
When to avoid Message Queues
If chosen for the wrong reasons, a message queue can become a burden. Queues are not as easy to use as they sound. Why?
- There’s a learning curve. Generally, the more separate integrated components you have, the more problems can arise. How do you scale each component? How do you distribute traffic? How do you authenticate message senders?
- Size and scale considerations. How many queue instances do you need? How do you scale up and down? Extra capacity costs money, and multiple instances add complexity. How would you replicate messages?
- Handling errors and ensuring message receipt. How do you ensure message redundancy? Should consumers explicitly acknowledge receipt? Should they explicitly signal failure to process a message? Should multiple consumers get the same message or not? Should messages have a TTL (time to live)?
- Network and message transfer overhead. Queues add latency to your system since they require network calls, and the message format (JSON, for example) adds overhead to message size.
- And last, but not least: it’s harder to track logs and execution flow when things go wrong. You can’t just read a stack trace in your log, because once you send a message to the queue, you need to go and find where it was handled. And that’s not always as trivial as it sounds.
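The acknowledgment questions above are easier to reason about with a concrete sketch. Here a received message is only hidden (in flight), not deleted: it is removed on a successful ack and returned to the queue on failure, roughly the way SQS's visibility timeout makes unacknowledged messages reappear. This is a simplified model, not SQS's actual API.

```python
from collections import deque
import itertools

class AckQueue:
    def __init__(self):
        self.messages = deque()   # visible messages
        self.in_flight = {}       # received but not yet acknowledged
        self.receipts = itertools.count()

    def send(self, body):
        self.messages.append(body)

    def receive(self):
        # Hide the message and hand the consumer a receipt handle.
        body = self.messages.popleft()
        receipt = next(self.receipts)
        self.in_flight[receipt] = body
        return receipt, body

    def ack(self, receipt):
        # Successful processing: delete the message for good.
        del self.in_flight[receipt]

    def nack(self, receipt):
        # Failed processing: make the message visible again.
        self.messages.append(self.in_flight.pop(receipt))

q = AckQueue()
q.send("charge-card")
receipt, body = q.receive()
q.nack(receipt)             # processing failed: message is redelivered
receipt, body = q.receive()
q.ack(receipt)              # processed successfully: message is gone
```

Note that redelivery after a failure is exactly why consumers often need to be idempotent: the same message body can be handed out more than once.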
Since decoupling does add complexity, it is possible to go overboard. Don’t decouple just because you think “decoupling is good”. The costs may not be worth it, and a plain method call to the other system may serve you better.
Sure, if you just call “sendEmail” from your order processing system, your email system is still coupled to it. Coupled, yes. But not inconveniently coupled.
Even if you can’t afford to lose messages, there’s often still a simple solution: the database. Persist the pending work in a table and let a background job process it.
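A sketch of that database-as-queue idea, using SQLite for brevity: pending work is written to a jobs table, and a worker claims the oldest pending job and marks it done. The table and column names are illustrative, not from any particular system.

```python
import sqlite3

db = sqlite3.connect(":memory:")  # a real system would use a durable database
db.execute("""
    CREATE TABLE jobs (
        id      INTEGER PRIMARY KEY,
        payload TEXT NOT NULL,
        status  TEXT NOT NULL DEFAULT 'pending'
    )
""")

def enqueue(payload):
    # Producer: a durable insert instead of a network call to a queue.
    db.execute("INSERT INTO jobs (payload) VALUES (?)", (payload,))
    db.commit()

def work_one():
    # Worker: claim the oldest pending job and mark it done.
    row = db.execute(
        "SELECT id, payload FROM jobs WHERE status = 'pending' "
        "ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None               # queue is empty
    job_id, payload = row
    db.execute("UPDATE jobs SET status = 'done' WHERE id = ?", (job_id,))
    db.commit()
    return payload

enqueue("send-welcome-email")
print(work_one())   # 'send-welcome-email'
print(work_one())   # None: no pending jobs left
```

You keep durability and retryability (a failed job can simply stay 'pending'), without running a second piece of infrastructure.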
Why Use AWS SQS?
- Fault tolerance and scale. AWS manages all ongoing operations and underlying infrastructure needed to provide a highly available and scalable message queue service. SQS scales elastically with traffic, so you don’t have to worry about capacity planning, pre-provisioning, or downscaling.
- It’s relatively easy. With SQS, there is no upfront cost, no need to acquire, install, and configure messaging software, and no time-consuming build-out and maintenance of supporting infrastructure like servers.
- Message redundancy. Multiple copies of every message are stored redundantly across multiple availability zones so that they are available whenever needed.
Let’s wrap up by taking a look at an SQS queue in the AWS console:
Let’s look at the “Details” section. The above queue has a retention period of four days (listed next to “Message Retention Period”), and it’s a standard queue: ordering is best-effort and delivery is at-least-once, so a message may occasionally arrive out of order or be delivered more than once.
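Because a standard queue can deliver the same message twice, consumers are usually written to be idempotent. A minimal sketch, assuming each message carries a stable ID (the `inventory` state and message shape here are made up for illustration):

```python
seen_ids = set()               # IDs we have already processed
inventory = {"widget": 10}     # some side-effecting state

def handle(message):
    # Deduplicate on a stable message ID before applying side effects.
    if message["id"] in seen_ids:
        return False           # duplicate delivery: ignore it
    seen_ids.add(message["id"])
    inventory["widget"] -= message["quantity"]
    return True

# The queue redelivered message 1, so the consumer sees it twice.
deliveries = [
    {"id": 1, "quantity": 2},
    {"id": 1, "quantity": 2},  # duplicate copy
    {"id": 2, "quantity": 3},
]
for m in deliveries:
    handle(m)

print(inventory["widget"])  # 5: each message was applied exactly once
```

In production the seen-ID set would live somewhere durable (a database, or a FIFO queue's built-in deduplication), but the principle is the same: make reprocessing a duplicate a no-op.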
And that's all we need to know to get started with AWS SQS. In part 3, we'll cover AWS Kinesis. See you then!