AWS Lambda has been a huge success and it’s easy to see why. The whole point of the cloud, especially the public cloud, is to facilitate flexible, scalable working. AWS Lambda takes this to a whole new level by eliminating servers and making it possible for companies to build systems using what are essentially micro-units of functionality available on demand.
The Factors Used By The AWS Lambda Cost Calculator
At a high level, AWS Lambda pricing is based on the number of requests you make for your functions, how much memory you allocate, and how long your functions take to execute. This last figure is rounded up to the nearest 100 milliseconds — always up. In other words, a function that takes 101 milliseconds to execute is billed exactly the same as one that takes 200 milliseconds, because both durations round up to 200 milliseconds.
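To make the model concrete, here is a minimal sketch of a cost estimator following the three factors above. The per-request and per-GB-second rates are illustrative placeholders, not current AWS prices — check the AWS Lambda pricing page for real figures.

```python
import math

# Illustrative rates only (assumed for this example, not current AWS prices).
PRICE_PER_REQUEST = 0.20 / 1_000_000     # $ per request
PRICE_PER_GB_SECOND = 0.0000166667       # $ per GB-second

def lambda_cost(requests: int, memory_mb: int, duration_ms: float) -> float:
    """Estimate the cost of running one function, per the model above."""
    # Execution time is always rounded UP to the nearest 100 ms.
    billed_ms = math.ceil(duration_ms / 100) * 100
    gb_seconds = requests * (memory_mb / 1024) * (billed_ms / 1000)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND
```

Note that, under this model, `lambda_cost(1, 128, 101)` and `lambda_cost(1, 128, 200)` return the same figure: both durations bill as 200 ms.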
In addition to this, you pay data-transfer charges as usual, and you may also be charged if your Lambda function uses other AWS services and/or transfers data to or from them. Last but not least, you have the option to enable Provisioned Concurrency. This keeps functions initialized, which means they can respond immediately instead of incurring a cold start.
AWS Lambda does not use instances as such, so you can’t buy Reserved Instances or look for Spot Instances. You can, however, sign up for a Compute Savings Plan. Compute Savings Plans operate along the same lines as Reserved Instances: in exchange for a commitment to a consistent amount of usage (measured in $/hour) over a one- or three-year term, you can save up to 17% on AWS Lambda. Savings apply to Duration, Provisioned Concurrency, and Duration (Provisioned Concurrency).
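As a rough back-of-the-envelope sketch of what that headline discount can mean, assume (purely for illustration) a steady committed Lambda spend and the full 17% discount:

```python
# All figures are assumed for illustration; actual Savings Plan rates vary.
hourly_commitment = 0.50      # assumed steady Lambda spend, $/hour
discount = 0.17               # the "up to 17%" headline figure

effective_hourly = hourly_commitment * (1 - discount)
annual_savings = hourly_commitment * discount * 24 * 365
print(f"effective rate: ${effective_hourly:.4f}/hr, "
      f"saving about ${annual_savings:.2f}/year")
```

The real discount depends on region, term length, and payment option, so treat this as arithmetic, not a quote.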
The Significance Of Memory Usage In AWS Lambda Cost Calculation
Although AWS Lambda pricing is based on memory, it is the memory you allocate, not the memory your function actually consumes, that counts — and increasing that allocation automatically increases the CPU allocation by a proportionate amount. The key to making the most of AWS Lambda, both in terms of functionality and in terms of cost optimization, is optimizing the relationship between memory allocation and execution time. Sometimes the right balance is obvious but, in some cases, it requires a thorough look at your CloudWatch logs.
For example, if a function is processing-intensive, then increasing the memory (and therefore the CPU) will probably have a beneficial effect, at least up to a certain point. In many other situations, however, any “drag” is caused by factors external to Lambda, so increasing the memory allocation simply increases the expense of running the function without any meaningful benefit.
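The contrast can be sketched numerically. The workload figures below are assumed for illustration: a CPU-bound function whose duration halves as memory (and therefore CPU) doubles, versus an I/O-bound one whose duration is fixed by an external wait.

```python
# Assumed rate and workload numbers, for illustration only.
PRICE_PER_GB_SECOND = 0.0000166667

def duration_cost(memory_mb: int, duration_ms: float) -> float:
    """Duration charge only (per-request charge ignored)."""
    return (memory_mb / 1024) * (duration_ms / 1000) * PRICE_PER_GB_SECOND

for memory_mb in (128, 256, 512):
    cpu_bound_ms = 800 * 128 / memory_mb   # speeds up with extra CPU
    io_bound_ms = 800                      # external wait: no speed-up
    print(memory_mb, "MB ->",
          f"CPU-bound: ${duration_cost(memory_mb, cpu_bound_ms):.8f}",
          f"I/O-bound: ${duration_cost(memory_mb, io_bound_ms):.8f}")
```

For the CPU-bound case, memory × duration stays constant, so the cost is flat while the function gets faster; for the I/O-bound case, each doubling of memory simply doubles the bill.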
Suppose, for instance, you have a function that needs to retrieve data from an external source. It has to contact a third-party API and wait for that API to respond before completing and terminating. This requires very little in the way of processing power, so increasing the memory allocation is highly unlikely to reduce the execution time. Far more likely, the bulk of the execution time comes from waiting on the external API, so your only choices are to live with it or find another source for the data.
For completeness, if you had a similar situation with a function or instance within your own system, then the way to address it would probably be to improve the data transfer between the two components rather than to increase the memory allocation.
Optimizing AWS Lambda For Speed Versus Optimizing AWS Lambda For Cost
At first glance, you might think that in AWS Lambda, optimizing for speed also means optimizing for cost. After all, execution time is a factor in billing, so anything that reduces it should also reduce your AWS Lambda bill. It’s a perfectly logical thought, but sadly it’s often wrong.
The key point to understand is that even when increasing memory allocation does increase the speed of execution (which is not guaranteed), the reduction in billed duration does not necessarily offset the cost of the extra memory, let alone reduce the overall cost of running the function. This means that, in practice, much of the time you increase memory allocation to reduce a function’s execution time, you are doing so either to increase user/end-customer satisfaction or to improve the productivity of human staff.
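That break-even condition can be stated as a one-line rule. Ignoring the flat per-request charge, the duration cost is proportional to memory × billed duration, so a memory increase only pays for itself if the billed duration shrinks by at least the same factor. A minimal sketch (durations below are assumed figures, already rounded to 100 ms multiples):

```python
def breaks_even(old_mem_mb: int, old_ms: float,
                new_mem_mb: int, new_ms: float) -> bool:
    """True if the new configuration costs no more than the old one
    (per-request charge ignored, since it is identical in both)."""
    return new_mem_mb * new_ms <= old_mem_mb * old_ms

assert not breaks_even(128, 400, 256, 300)  # 25% faster: still costs more
assert breaks_even(128, 400, 256, 200)      # duration halved: cost unchanged
```

In other words, doubling the memory must at least halve the billed duration just to keep the bill flat — anything less, and the faster function costs more.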
Both of these outcomes are clear benefits to your business and they almost certainly improve your bottom line, but it’s hard to see how you would effectively measure the relationship between the cost of AWS Lambda and either of these rather intangible benefits.
In short, when increasing memory allocation does improve a function’s execution time, you usually only want to increase it enough to make that time acceptable to your staff or customers, rather than trying to make the function as fast as possible.