- The Cloud Economist
7 AWS Lambda Cost Optimizations I Learned After 6 Years
I share 7 tried and tested cost and performance techniques I personally use for all my serverless functions
“The most important system architectural decision is always the one that involves your wallet.”
Who said that?
I don’t know but it couldn’t be more true.
Every infrastructure decision you make should be about costs.
Let’s explore 7 cost optimizations I learned after years of working with serverless functions, and that you can use immediately yourself.
1. Higher memory can mean lower costs
Higher memory doesn’t always mean you pay more per invocation.
Lambda bills in GB-seconds (memory × duration), but more memory also brings proportionally more CPU, so invocations finish faster, and that can translate into a lower bill.
This isn’t always true: past a certain point, extra memory no longer speeds the function up and only makes it more expensive.
AWS Lambda lets you configure anywhere from 128 MB to 10,240 MB (10 GB) of memory per function.

For most workloads, the sweet spot is usually between 500 MB and 1 GB.
Some may need a few more GB and some may need less than 500 MB.
The rule of thumb: give your functions 500 MB by default and monitor their actual usage with Amazon CloudWatch. From there you can adjust each function’s memory granularly according to its needs.
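As a sketch of why this works, here’s a toy cost model. The duration rate is a snapshot of the public x86 on-demand price at the time of writing, and the measured durations are invented to illustrate a CPU-bound handler that stops speeding up past roughly 1 GB:

```python
# Per-invocation Lambda cost: memory (GB) x billed duration (s) x rate.
# The rate is a snapshot of public x86 pricing at the time of writing;
# check current AWS pricing. Durations below are made up for illustration.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(memory_mb: int, billed_ms: float) -> float:
    """USD cost of one invocation (request fee excluded)."""
    return (memory_mb / 1024) * (billed_ms / 1000) * PRICE_PER_GB_SECOND

# memory (MB) -> hypothetical measured billed duration (ms)
measurements = {512: 1000, 1024: 520, 2048: 480}

for memory_mb, billed_ms in measurements.items():
    cost = invocation_cost(memory_mb, billed_ms)
    print(f"{memory_mb} MB: {billed_ms} ms -> ${cost:.9f}")
```

In this made-up run, 1 GB finishes in roughly half the time of 512 MB for about the same cost, while 2 GB nearly doubles the cost for a marginal speedup, which is exactly the pattern to look for in your own CloudWatch numbers.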
2. Enable provisioned concurrency to reduce cold starts
Provisioned concurrency keeps a set number of execution environments initialized ahead of time, which eliminates cold starts for requests served by that capacity. It also bills duration at a lower per-GB-second rate than on-demand, so busy functions can come out cheaper overall.
However, you pay for the provisioned capacity even when your Lambdas are idle. So you gotta plan that provisioning wisely.
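A quick way to reason about “wisely” is a break-even calculation. The rates below are hypothetical snapshots of public us-east-1 x86 prices at the time of writing; verify them against current AWS pricing before deciding:

```python
# Back-of-the-envelope break-even for provisioned concurrency.
# All three rates are snapshots of public us-east-1 x86 pricing at the
# time of writing; verify against current AWS pricing.
ON_DEMAND_GB_S = 0.0000166667       # on-demand duration rate
PC_DURATION_GB_S = 0.0000097222     # duration rate under provisioned concurrency
PC_PROVISIONED_GB_S = 0.0000041667  # charged for every provisioned GB-second, busy or idle

def breakeven_utilization() -> float:
    """Fraction of the provisioned capacity's time that must be busy before
    provisioned concurrency beats on-demand on cost, from:
    u * on_demand > u * pc_duration + pc_provisioned."""
    return PC_PROVISIONED_GB_S / (ON_DEMAND_GB_S - PC_DURATION_GB_S)

print(f"break-even utilization: {breakeven_utilization():.0%}")
```

Under these rates the break-even lands around 60% utilization: below that, on-demand is cheaper despite the cold starts; above it, provisioned concurrency saves money on top of removing them.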
The functions most affected by cold starts are those that either run large workloads, such as batch processes, or ship large dependency bundles.
This brings us to the next cost optimization…
3. Trim dependencies to remove dead weight
One of the most effective things you can do to reduce cold start times (and hence cost per invocation) is to remove as many of your function’s dependencies as possible.
A general rule: if a few lines of your own code can do the job, don’t pull in a library for it.
Additionally, when several libraries do the same thing, compare them and favor the smaller package.
Finally, and needless to say, any dependency that’s unused or isn’t bringing much value should be removed from your package.json (and with it, your node_modules folder).
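One way to find trimming candidates is to rank the entries of node_modules by size. Here’s a hypothetical Python sketch of that idea (the functions themselves are Node, but the audit is language-agnostic):

```python
# Hypothetical helper: rank node_modules entries by on-disk size so you
# know which dependencies to scrutinize first. Scoped packages show up
# as their "@scope" folder.
import os

def dir_size(path: str) -> int:
    """Total size in bytes of all regular files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total

def heaviest_dependencies(node_modules: str, top: int = 10):
    """Return the `top` largest subfolders of node_modules as (name, bytes)."""
    sizes = []
    for entry in os.listdir(node_modules):
        full = os.path.join(node_modules, entry)
        if os.path.isdir(full):
            sizes.append((entry, dir_size(full)))
    return sorted(sizes, key=lambda kv: kv[1], reverse=True)[:top]
```

Point it at a function’s node_modules and the biggest offenders at the top of the list are the ones worth replacing with smaller packages or a few lines of your own code.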
4. Monitor function duration and adjust timeout accordingly
Function duration, capped by the timeout, is a big cost factor. Since Lambda charges per GB-second, you need to keep invocations short and set timeouts tight enough that a hung function can’t run (and bill) far longer than it should.
For most CRUD operations a 3–5 second timeout is sufficient. However, if your code is more complex and uses a few dependencies you’ll inevitably need longer timeouts.
For these larger workloads, your functions may be running anywhere between 30 seconds to a few minutes.
However, rather than setting an arbitrary timeout duration, here’s the more efficient strategy: monitor your functions using CloudWatch and set appropriate timeouts.
Here’s how to do this:
1. In the Monitor tab of your Lambda function, click the button labelled “View CloudWatch logs”.
2. This takes you to the CloudWatch log group for that function.
3. In the Log streams section, open any log stream.
4. Look for a REPORT RequestId line and expand that message.
5. Duration shows how long the function actually ran; Billed duration shows the time you were charged for.
Once you’ve done this across a large sample of invocations (making sure to capture cold starts too), you’ll know the average and maximum duration the function typically takes.
Then set the function’s timeout based on that maximum invocation duration, with a reasonable safety margin.
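The steps above can be scripted. Here’s a rough Python sketch (a hypothetical helper, not an official tool) that parses exported REPORT log lines and suggests a timeout with a 50% margin over the slowest observed invocation:

```python
# Hypothetical helper: extract Duration values from Lambda REPORT lines
# (as exported from CloudWatch Logs) and suggest a timeout in seconds.
import math
import re

REPORT_RE = re.compile(r"REPORT RequestId:.*?\bDuration: ([\d.]+) ms")

def durations_ms(log_lines):
    """Invocation durations (ms) pulled from REPORT log lines."""
    out = []
    for line in log_lines:
        m = REPORT_RE.search(line)
        if m:
            out.append(float(m.group(1)))
    return out

def suggested_timeout_s(log_lines, margin: float = 1.5) -> int:
    """Slowest observed duration times a safety margin, rounded up to whole seconds."""
    worst_ms = max(durations_ms(log_lines))
    return math.ceil(worst_ms * margin / 1000)
```

Feed it a few thousand lines that include cold starts and the suggestion should be tight enough to catch runaway invocations without cutting off legitimate slow ones.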
5. Use ARM based architecture
One reflex I have when creating a new Lambda function is selecting the ARM architecture in the Architecture options.
The ARM (Graviton2) architecture is priced roughly 20% lower per GB-second than x86, and AWS advertises up to 34% better price/performance.
The AWS website says this about choosing its Graviton2 processors:
Achieve up to 34% better price/performance with AWS Lambda Functions powered by AWS Graviton2 processor.
The pros of ARM-based processors: better performance per dollar, at a lower price.
The cons: a small number of (npm) packages with native bindings don’t ship ARM builds.
So when choosing the ARM architecture, always make sure your dependencies support ARM.
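As a quick sanity check on the pricing difference, here’s a tiny sketch comparing the two duration rates (both are snapshots of public pricing at the time of writing, so verify before relying on them):

```python
# x86 vs ARM (Graviton2) Lambda duration rates; snapshots of public
# pricing at the time of writing, so verify against current AWS pricing.
X86_GB_SECOND = 0.0000166667
ARM_GB_SECOND = 0.0000133334

def arm_duration_savings() -> float:
    """Fractional saving on the duration charge alone from switching to ARM."""
    return 1 - ARM_GB_SECOND / X86_GB_SECOND

print(f"ARM saves about {arm_duration_savings():.0%} on the duration rate")
```

That’s the roughly 20% price cut; the “up to 34%” AWS quotes is price/performance, because Graviton functions often also finish the same work faster.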
6. Offload long running tasks
For operations like file processing or large database writes, you may sometimes have to offload the work through a queue like SQS or an event bus like EventBridge to a service better suited to long-running jobs.
You’ll want to avoid Lambda for these longer-running workloads because the costs can skyrocket, especially at scale. These workloads typically require prolonged invocations and may even blow past Lambda’s 15-minute maximum timeout.
Just because Lambda has a 15 minute maximum timeout doesn’t mean you should use all of those 15 minutes. That will get expensive very fast.
A general rule of thumb I follow: any workload that regularly runs over 2–3 minutes (not just as a rare occurrence) gets offloaded to another service such as Step Functions or EventBridge.
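To see why, here’s a quick sketch of the duration charge for the same function as a short CRUD call versus a long batch job (the x86 rate is a snapshot of public pricing at the time of writing; request fees and free tier ignored):

```python
# Why long-running jobs don't belong in Lambda: the same 1 GB function at
# one million invocations a month, as a 3-second CRUD call vs a 10-minute
# batch job. Rate is a snapshot of public x86 pricing at time of writing.
RATE_GB_SECOND = 0.0000166667

def monthly_duration_cost(memory_gb: float, seconds: float, invocations: int) -> float:
    """Monthly duration charge in USD (request fees and free tier ignored)."""
    return memory_gb * seconds * invocations * RATE_GB_SECOND

crud = monthly_duration_cost(1, 3, 1_000_000)     # a 3-second CRUD call
batch = monthly_duration_cost(1, 600, 1_000_000)  # a 10-minute batch job
print(f"CRUD: ${crud:,.0f}/month, batch: ${batch:,.0f}/month")
```

Roughly $10,000 a month versus roughly $50: same invocation count, wildly different bill, which is why the 2–3 minute threshold is worth enforcing.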
7. Architect for costs, not just features
Last, but most importantly, architect your solutions with cost in mind from the start.
Every tradeoff in every system should weigh that system’s costs.
When presented with two options, don’t automatically choose the more powerful, lower-latency, or more scalable one.
Instead, evaluate whether the extra performance, latency, or scalability is worth the higher cost, or whether sacrificing a little of it for a more affordable solution is the better deal.
You would be surprised how often it is a reasonable tradeoff.
Enjoying the newsletter?
🚀 Follow me on LinkedIn for daily posts on AWS and DynamoDB.
🔥 Invite your colleagues to subscribe to help them save on their AWS costs.
✍️ Check out my blog on AWS here.