
Determining the Optimal Lambda Function Size


Choosing a memory setting

It may not always be obvious what memory setting to choose for an AWS Lambda function. However, with a little bit of data, you can make an educated decision on cost vs. processing time. To test this, I set up a simple function calculating the Fibonacci number F35.

// Naive recursive Fibonacci — intentionally CPU-heavy
function fibonacci(num) {
  if (num <= 1) return 1;
  return fibonacci(num - 1) + fibonacci(num - 2);
}

exports.handler = async (event) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify(fibonacci(35)),
  };
  return response;
};

In my case, I was able to determine that by allowing a 0.16% increase in cost, I could reduce the runtime by 84%. Put another way: at the higher memory setting I saw a 612% performance boost!

Gathering the data

I used Dashbird to make gathering the data super easy. It has some pretty excellent functionality beyond what I'm using here. One feature I hadn't experienced (thankfully) before this testing was the error notification email. I attempted to calculate a Fibonacci number large enough to trigger a timeout, and without any additional setup I was getting emailed when the function failed! Pretty sweet!

Anyway, at the bottom of the in-depth metrics screen for a function, you can see each individual invocation. The only part of this table I'm using is the "Duration" column, and more specifically the actual runtime (not the billed time on the right). To extract it easily, I copied the whole view from the frontend and pasted it into VS Code. I separated the executions by memory setting, deleted the data I wasn't using, and then moved the rest into Excel. If you do this frequently enough, it might be worth building a tool to automate the process.
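If you'd rather skip the spreadsheet step, a quick script can average the durations once you've pasted them into a file. Here's a minimal sketch; it assumes one duration value per line (e.g. "1748.31"), so adjust the parsing for whatever extra text comes along with your copy/paste:

```javascript
// Average a list of "Duration" values copied out of the invocations table.
// Assumes one millisecond value per line; non-numeric lines are skipped.
function averageDurations(text) {
  const durations = text
    .split('\n')
    .map((line) => parseFloat(line))
    .filter((ms) => !Number.isNaN(ms));
  const total = durations.reduce((sum, ms) => sum + ms, 0);
  return total / durations.length;
}

console.log(averageDurations('1748.31\n1764.57\n1750.22').toFixed(2)); // → 1754.37
```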

Also, not pictured here is the cold start tagging which I've found particularly useful to see "at a glance" how often a function is being run from a cold start.

Partial view of the Dashbird dashboard

The Calculations

Here is the data that was used for the test...

Individual Runtimes in ms

128MB 256MB 768MB
1748.31 884.56 286.43
1764.57 867.82 273.80
1750.22 883.65 285.36
1747.35 877.76 281.77
1748.00 876.84 289.42
1743.46 882.74 273.34
1748.59 882.45 278.84
1757.74 881.38 290.05
1751.44 879.65 300.03
1749.20 870.38 277.82
1777.77 884.32 281.79
1749.19 878.96 294.04
1751.31 881.42 285.53
1766.34 846.63 298.09
1751.19 877.72 287.65
1751.73 859.12 290.68
1751.64 878.23 282.35
1754.77 867.64 290.53
1750.70 877.95 278.78
1806.56 894.31 313.86

The Calculated Data

Size Ave. Time (ms) Ave. Billed Time (ms) $ per 100ms Cost per 1B Executions
128MB 1756.0040 1800 $0.000000208 $3,744.00
768MB 287.0080 300 $0.000001250 $3,750.00
256MB 876.6765 900 $0.000000417 $3,753.00
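The derived columns fall straight out of Lambda's pricing at the time of this test: execution time was billed in 100ms increments, rounded up, at AWS's published per-100ms rate for each memory size. A sketch of that math, using those published rates:

```javascript
// AWS-published Lambda price per 100ms at each memory size
// (pricing at the time of this test; billing was in 100ms increments).
const PRICE_PER_100MS = {
  128: 0.000000208,
  256: 0.000000417,
  768: 0.00000125,
};

// Total cost of `invocations` executions at a given memory size,
// where avgMs is the average runtime in milliseconds.
function totalCost(memoryMB, avgMs, invocations) {
  const billedUnits = Math.ceil(avgMs / 100); // billed time rounds up to the next 100ms
  return billedUnits * PRICE_PER_100MS[memoryMB] * invocations;
}

// 128MB averaging 1756.004ms, for one billion executions:
console.log(totalCost(128, 1756.004, 1e9).toFixed(2)); // → 3744.00
```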

The Results

Cost Increase Performance Improvement
0.16% 612%
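Those two headline numbers come straight from the cost totals and average runtimes in the table above; the arithmetic is just:

```javascript
// Cost totals and average runtimes from the calculated-data table
const at128MB = { cost: 3744.0, avgMs: 1756.004 };
const at768MB = { cost: 3750.0, avgMs: 287.008 };

// Cost increase: how much more the faster setting costs, as a percentage
const costIncrease = ((at768MB.cost - at128MB.cost) / at128MB.cost) * 100;

// Performance boost: how many times faster 768MB runs, as a percentage
const perfBoost = (at128MB.avgMs / at768MB.avgMs) * 100;

console.log(costIncrease.toFixed(2)); // → 0.16
console.log(perfBoost.toFixed(0));    // → 612
```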

Using the data

Now that we have the results, we can determine a couple of things. If this is a background process that isn't going to impact the end user, we know we can safely run the function at the minimum 128MB and see some marginal cost savings. However, if the user is waiting on this function to finish, then for an extra $6 (per billion executions) we can provide a 612% performance boost! That's a massive time saver for next to nothing, price-wise!


A couple of caveats:

  • This example is super contrived, but it demonstrates how easy these calculations are to perform
  • Since this test only cares about the raw compute allocated to the function, extra services like API Gateway were not factored in


You can gather a few bits of data and turn that into $$$. Causing delays for your users costs money, so finding the sweet spot for compute time vs. cost can be a big deal! By the same token, overpaying for a function that doesn't impact a user wastes money. I used Dashbird to gather the data. You can dig through the AWS logs to get the same numbers, but this gave me one less thing to deal with. They currently offer a free tier that can get you started, so if you don't already have the necessary reports set up in AWS, I'd recommend giving them a go! Even if you do have those reports, the alerting and data layout in Dashbird are excellent and still worth checking out!

Did this help? Did I miss something? Either way, get in touch!