- You can pin frequently used services to the top bar in the AWS Console
- By default there's a limit of 1,000 concurrent Lambda executions; this can be raised with a support ticket. Some companies have had this limit raised to tens of thousands of concurrent executions.
- By default you get 75GB of code storage (so up to 10 React apps, lol), which can also be raised
- Looking at the Throttles graph is useful - we don't want our functions to be throttled
- The `ConcurrentExecutions` graph is useful as well, to see whether we're approaching the limit
- You can search for Lambda functions by function name (adding prefixes helps!) or by tags, which are really useful
- It's possible to use custom runtimes for Lambda (apart from Node.js, .NET, Python, etc.), so if you really want to use Haskell you can do that
- An ARN (Amazon Resource Name) is a unique identifier for a resource in AWS
- Layers can be used to avoid re-including something (like an external SDK) in every single lambda function
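As a sketch, a layer can be attached in a Serverless Framework `serverless.yml` (covered later in these notes) like this - the function name and layer ARN are placeholders:

```yaml
functions:
  hello:
    handler: handler.hello
    layers:
      # hypothetical ARN of a layer holding a shared SDK,
      # so it doesn't need to be bundled into every function
      - arn:aws:lambda:us-east-1:123456789012:layer:shared-sdk:1
```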
- You can get the memory limit of a function from within the function using the `context` object
- A Lambda function is allocated CPU power proportional to the memory configured for it, so 128MB (the default) is not going to have a lot of CPU power
- You get charged for the lambda execution time in 100ms blocks (so if your function takes 10ms to execute, you'll pay for 100ms)
- You also pay for the amount of memory given to the lambda function
- With more memory you get more consistent performance but you may end up paying more
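To make the 100ms rounding concrete, here's a rough cost sketch. The per-GB-second price is an assumption (roughly the published rate at the time of writing) - check current AWS pricing:

```javascript
// Assumed price per GB-second - verify against current AWS pricing
const PRICE_PER_GB_SECOND = 0.0000166667;

function invocationCost(memoryMB, durationMs) {
  // Duration is rounded UP to the nearest 100ms block
  const billedMs = Math.ceil(durationMs / 100) * 100;
  const gbSeconds = (memoryMB / 1024) * (billedMs / 1000);
  return gbSeconds * PRICE_PER_GB_SECOND;
}

// A 10ms invocation at 128MB costs exactly as much as a 100ms one:
console.log(invocationCost(128, 10) === invocationCost(128, 100)); // true
```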
- It's possible to run Lambda functions on a custom VPC, which is useful when you need to work with RDS, EC2, containers etc.
- When using a custom VPC you should create dedicated subnets for Lambda functions, to avoid exhausting the available IP addresses when the function scales massively
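A sketch of attaching a function to a custom VPC in `serverless.yml` - all IDs below are placeholders:

```yaml
functions:
  queryRds:
    handler: handler.queryRds
    vpc:
      securityGroupIds:
        - sg-0a1b2c3d4e5f67890       # placeholder security group
      subnetIds:
        # dedicated subnets with enough spare IPs for scaling
        - subnet-0a1b2c3d4e5f67890
        - subnet-0f9e8d7c6b5a43210
```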
- You can use `reserved concurrency` to limit the number of concurrent invocations - for instance, setting it to 1 ensures that at any given moment only a single instance of the function can be running
- Provisioned concurrency allows you to ensure that you always have a number of containers available (so that you won't see cold starts)
- You cannot configure provisioned concurrency for the `$LATEST` version (or an alias pointing to it)
- Provisioned concurrency is not free, so you have to figure out what's cheaper for you
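In `serverless.yml` both knobs look roughly like this sketch (the function name is a placeholder; the framework applies `provisionedConcurrency` to a published version, not `$LATEST`):

```yaml
functions:
  checkout:
    handler: handler.checkout
    reservedConcurrency: 1       # at most 1 concurrent execution
    provisionedConcurrency: 5    # 5 warm instances, no cold starts for them
```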
- Creating versions allows you to have 'point-in-time' versions of your lambda functions
- You can push failed async Lambda function invocations to a dead letter queue (for instance, SQS)
- Database proxies are available in preview
- With aliases you can run different versions of Lambda function with a different probability (for instance run version A 90% of the time and version B 10% of the time)
- You can use lambda destinations to create simple lambda -> lambda workflows, for something more complicated you should use Step Functions
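A sketch of Lambda destinations in `serverless.yml` (this needs a fairly recent framework version; the function names and ARN are placeholders):

```yaml
functions:
  processOrder:
    handler: handler.processOrder
    destinations:
      onSuccess: sendConfirmation   # another function in the same service
      onFailure: arn:aws:sqs:us-east-1:123456789012:failed-orders
```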
- In the Monitoring tab it's useful to watch the async delivery failure metrics, to make sure you're not missing any events sent to Lambda destinations or the dead letter queue
- The Serverless Framework supports not only AWS but other cloud providers as well
- You can set up environment variables to be used in the stack
- You can define different `events` for a Lambda function; these define the different triggers for the function
- SLS by default deploys via CloudFormation in two steps: it first creates a minimal stack (with the deployment bucket) and then updates it with the actual resources
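The `events` section mentioned above might look like this sketch (the path and schedule rate are arbitrary examples):

```yaml
functions:
  hello:
    handler: handler.hello
    events:
      - http:                        # API Gateway trigger
          path: hello
          method: get
      - schedule: rate(10 minutes)   # scheduled trigger
```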
- It's going to automatically package the lambda function in a .zip file and upload it to S3
- Serverless framework has different values for default timeout and memory size than AWS Console
- New versions of the function are automatically created whenever you deploy
- `sls invoke local` allows you to call a Lambda function locally, even before deployment
- `sls invoke` can be used to execute a function that was already deployed remotely
- You generally want to have different levels of access for different APIs in your system (obviously not everything should be public)
- One way to address that is to use usage plans + API keys. These are designed for rate limiting, not auth; they allow a client to access the selected API at agreed-upon request rates and quotas (like the Google Maps API). The request rate and quota apply to all APIs and stages covered by the usage plan.
- Another option is to allow certain APIs to be accessed only by your own infrastructure, by using IAM authorization
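A sketch of usage plans + API keys in `serverless.yml` (v1-era framework syntax; the key name and limits are arbitrary examples):

```yaml
provider:
  name: aws
  apiKeys:
    - partnerKey              # placeholder key name
  usagePlan:
    throttle:
      rateLimit: 10           # steady-state requests per second
      burstLimit: 20
    quota:
      limit: 10000            # requests per month
      period: MONTH

functions:
  search:
    handler: handler.search
    events:
      - http:
          path: search
          method: get
          private: true       # this endpoint requires an API key
```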
- API Gateway also supports custom authorizers (Lambda authorizers) that you can build yourself
- A VPC Endpoint allows you to securely connect your VPC to another service
- We can think of Cognito as a collection of 3 different services: Cognito User Pools, Cognito Federated Identities and Cognito Sync
- Cognito User Pools is a managed identity service (registration, email verification, password policies, etc. etc.). After signing in, a user can access APIs on API Gateway that require sign-in
- Cognito Federated Identities - allows you to take an auth token issued by an auth provider and exchange it for a set of temporary AWS credentials
- Cognito Sync - nobody uses it lmao, it syncs user data across multiple devices
- In short - when a user registers, confirms their email etc., the client talks with Cognito User Pools, and after a successful sign-in, Cognito User Pools returns a JWT. This token is later used for authorization in API Gateway.
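To illustrate what the client actually receives, here's a sketch that decodes a JWT payload. Note it only decodes - a real API must verify the signature (API Gateway's Cognito authorizer does that for you). The token below is a fake built just for the demo:

```javascript
// Decode (NOT verify!) the payload segment of a JWT.
function decodeJwtPayload(token) {
  const payload = token.split('.')[1];
  return JSON.parse(Buffer.from(payload, 'base64').toString('utf8'));
}

// Fake, unsigned token crafted for this demo only:
const header = Buffer.from(JSON.stringify({ alg: 'none' })).toString('base64');
const claims = Buffer.from(JSON.stringify({ sub: 'user-123', email: 'a@b.co' })).toString('base64');

console.log(decodeJwtPayload(`${header}.${claims}.`)); // { sub: 'user-123', email: 'a@b.co' }
```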
- Blog post: https://theburningmonk.com/2019/11/check-list-for-going-live-with-api-gateway-and-lambda/
- You don't have to take care of everything mentioned in the post, but the more critical the system, the more we should invest in things that can improve the observability, security, performance and resilience of our API
- API Gateway has a timeout of 29 seconds, so even if you set your function timeout to 15 minutes it won't matter, because the request will time out way sooner
- Serverless caching: https://theburningmonk.com/2019/10/all-you-need-to-know-about-caching-for-serverless-applications/