Introduction to AWS Lambda

This lab will give you a basic understanding of AWS Lambda. It demonstrates the basic steps required to create and deploy a Lambda function in an event-driven environment.


45 minutes · Free



SPL-88 Version 2.3.5

© 2020 Amazon Web Services, Inc. and its affiliates. All rights reserved. This work may not be reproduced or redistributed, in whole or in part, without prior written permission from Amazon Web Services, Inc. Commercial copying, lending, or selling is prohibited.

Errors or corrections? Email us at [email protected].



AWS Lambda is a compute service that runs your code in response to events and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information. AWS Lambda starts running your code within milliseconds of an event such as an image upload, in-app activity, website click, or output from a connected device. You can also use AWS Lambda to create new back-end services where compute resources are automatically triggered based on custom requests.
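As a sketch of this event-driven model, the hypothetical handler below receives an event and returns a response. The event fields are invented for illustration, but `handler(event, context)` is the signature Lambda invokes for Python functions.

```python
# Hypothetical event-driven handler; the event fields below are invented for
# illustration, but handler(event, context) is the signature Lambda invokes.
def handler(event, context):
    source = event.get("source", "unknown")
    detail = event.get("detail", {})
    # The return value is handed back to the caller (or the invoking service).
    return {"handled": True, "source": source, "fields": sorted(detail)}

# Simulate an invocation locally; in AWS, Lambda calls handler() for you
# within milliseconds of the triggering event.
result = handler({"source": "image-upload", "detail": {"file": "photo.jpg"}}, None)
print(result)
```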

Topics covered

By the end of this lab you will be able to:

  • Create an AWS Lambda function
  • Configure an Amazon S3 bucket as a Lambda Event Source
  • Trigger a Lambda function by uploading an object to Amazon S3
  • Monitor AWS Lambda functions through Amazon CloudWatch Logs


Prerequisites

Familiarity with Amazon S3 would be beneficial.

Start Lab

  1. At the top of your screen, launch your lab by clicking Start Lab

This will start the process of provisioning your lab resources. An estimated amount of time to provision your lab resources will be displayed. You must wait for your resources to be provisioned before continuing.

 If you are prompted for a token, use the one distributed to you (or credits you have purchased).

  2. Open your lab by clicking Open Console

This will automatically log you into the AWS Management Console.

 Please do not change the Region unless instructed.

Common login errors

Error : Federated login credentials

If you see this message:

  • Close the browser tab to return to your initial lab window
  • Wait a few seconds
  • Click Open Console again

You should now be able to access the AWS Management Console.

Error: You must first log out

If you see the message, You must first log out before logging into a different AWS account:

  • Click the click here link
  • Close your browser tab to return to your initial Qwiklabs window
  • Click Open Console again


This lab demonstrates AWS Lambda by creating a serverless image thumbnail application.

The following diagram illustrates the application flow:

[Diagram: Overview 1]

1 A user uploads an object to the source bucket in Amazon S3 (object-created event).

2 Amazon S3 detects the object-created event.

3 Amazon S3 publishes the object-created event to AWS Lambda by invoking the Lambda function and passing event data as a function parameter.

4 AWS Lambda executes the Lambda function.

5 From the event data it receives, the Lambda function knows the source bucket name and object key name. The Lambda function reads the object and creates a thumbnail using graphics libraries, then saves the thumbnail to the target bucket.
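The flow above can be sketched in Python. The event Amazon S3 passes to Lambda carries the source bucket name and object key (the field names follow the real S3 event shape; the values are examples), and the function derives the target bucket from the source bucket name:

```python
# Trimmed sketch of an S3 object-created event: field names follow the real
# S3 event shape, values are examples from this lab.
sample_event = {
    "Records": [{
        "s3": {
            "bucket": {"name": "images-123"},
            "object": {"key": "HappyFace.jpg"},
        }
    }]
}

for record in sample_event["Records"]:
    bucket = record["s3"]["bucket"]["name"]       # step 5: source bucket name
    key = record["s3"]["object"]["key"]           # step 5: object key name
    target = "{}-resized".format(bucket)          # step 5: target bucket
    print(bucket, key, target)
```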

Upon completing this tutorial, you will have the following resources in your account:

[Diagram: Overview 2]

The steps in this lab will show you how to create the Amazon S3 buckets and the Lambda function. You will then test the service by uploading images for resizing.

Task 1: Create the Amazon S3 Buckets

In this task, you will create two Amazon S3 buckets -- one for input and one for output.

Amazon S3 buckets require unique names, so you will add a random number to the bucket name.
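For example, a name like images-34523452345 can be produced by appending a random number to a fixed prefix (the 11-digit length here is an arbitrary choice):

```python
import random

# Append a random number to a fixed prefix so the bucket name is unlikely to
# collide with existing buckets. The 11-digit length is an arbitrary choice.
def unique_bucket_name(prefix="images"):
    return "{}-{}".format(prefix, random.randint(10**10, 10**11 - 1))

name = unique_bucket_name()
print(name)  # e.g. images-34523452345
```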

  1. In the AWS Management Console, on the Services menu, click S3.

  2. Click Create bucket and then configure:

  • Bucket name: images-NUMBER
  • Replace NUMBER with a random number
  • Copy the name of your bucket to a text editor
  • Click Create bucket

Every bucket in Amazon S3 requires a unique name such as images-34523452345.

 If you receive an error stating The requested bucket name is not available, then click the first Edit link, change the bucket name and try again until it works.

You will now create another bucket for output.

  3. Click Create bucket and then configure:
  • Bucket name: Paste the name of your images bucket
  • At the end of the bucket name, append -resized
  • Click Create bucket

 Do not change the Region.

You should now have buckets named similar to:

  • images-123
  • images-123-resized

You will now upload a picture for testing purposes.

  4. Right-click this link and download the picture to your computer: HappyFace.jpg

  5. Name the file HappyFace.jpg.

 Firefox users: Make sure the saved filename is HappyFace.jpg (not .jpeg).

  6. Open the image on your computer.

It is a large picture, with dimensions of 1280 x 853.

  7. In the S3 Management Console, click the images- bucket (not the -resized bucket).

  8. Click Upload

  9. In the Upload window, click Add files

  10. Browse to and select the HappyFace.jpg picture you downloaded.

  11. Click Upload

Later in this lab you will invoke the Lambda function manually by passing sample event data to the function. The sample data will refer to this HappyFace.jpg image.

Task 2: Create an AWS Lambda Function

In this task, you will create an AWS Lambda function that reads an image from Amazon S3, resizes the image and then stores the new image in Amazon S3.

  1. On the Services menu, click Lambda.

 Do not change the Region. You must use US West (Oregon) for this lab.

  2. Click Create function

 Blueprints are code templates for writing Lambda functions. Blueprints are provided for standard Lambda triggers such as creating Alexa skills and processing Amazon Kinesis Firehose streams. This lab provides you with a pre-written Lambda function, so you will Author from scratch.

  3. In the Create function window, configure:
  • Function name: Create-Thumbnail
  • Runtime: Python 3.7

 Make sure to select Python 3.7 under Other supported runtimes. If you select Python 3.8 from the Latest supported list, the code will fail.

  • Expand  Choose or create an execution role
  • Execution role: Use an existing role
  • Existing role: lambda-execution-role

This role grants permission to the Lambda function to access Amazon S3 to read and write the images.

  4. Click Create function

A page will be displayed with your function configuration.

AWS Lambda functions can be triggered automatically by activities such as data being received by Amazon Kinesis or data being updated in an Amazon DynamoDB database. For this lab, you will trigger the Lambda function whenever a new object is created in your Amazon S3 bucket.

  5. Click Add trigger, then configure:
  • Select a trigger: S3
  • Bucket: Select your images- bucket (e.g. images-123)
  • Event type: All object create events
  6. Scroll to the bottom of the screen, then click Add

  7. Click Create-Thumbnail at the top of the diagram.


You will now configure the Lambda function.

  8. Scroll down to the Function code section and configure the following settings (ignore any settings that aren't listed):
  • Code entry type: Upload a file from Amazon S3
  • Runtime: Python 3.7
  • Handler: 

 Make sure you set the Handler field to the above value, otherwise the Lambda function will not be found.

  • Amazon S3 link URL: Copy and paste this URL into the field: 

The file contains the following Lambda function:

 Do not copy this code -- it is just showing you what is in the Zip file.

import boto3
import os
import sys
import uuid
from PIL import Image
import PIL.Image

s3_client = boto3.client('s3')

def resize_image(image_path, resized_path):
    # Open the downloaded image, shrink it in place to fit within 128 x 128
    # (preserving aspect ratio), then save the thumbnail to resized_path.
    with Image.open(image_path) as image:
        image.thumbnail((128, 128))
        image.save(resized_path)

def handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        download_path = '/tmp/{}{}'.format(uuid.uuid4(), key)
        upload_path = '/tmp/resized-{}'.format(key)

        s3_client.download_file(bucket, key, download_path)
        resize_image(download_path, upload_path)
        s3_client.upload_file(upload_path, '{}-resized'.format(bucket), key)

  9. Examine the above code. It is performing the following steps:
  • Receives an Event, which contains the name of the incoming object (Bucket, Key)
  • Downloads the image to local storage
  • Resizes the image using the Pillow library
  • Uploads the resized image to the -resized bucket
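The resize step can be tried locally, assuming the Pillow library is installed. Image.thumbnail((128, 128)) shrinks the image in place while preserving aspect ratio, so a 1280 x 853 picture like HappyFace.jpg comes out with its longer side at exactly 128 pixels:

```python
from PIL import Image  # assumes Pillow is installed locally

# Stand-in for the downloaded picture: same 1280 x 853 size as HappyFace.jpg.
image = Image.new("RGB", (1280, 853))

# Shrinks in place, preserving aspect ratio: the longer side becomes 128 px.
image.thumbnail((128, 128))
width, height = image.size
print(width, height)
```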
  10. In the Basic settings section towards the bottom of the page, click Edit
  • Description: 
  • Click Save

You will leave the other settings as default, but here is a brief explanation of these settings:

  • Memory defines the resources that will be allocated to your function. Increasing memory also increases CPU allocated to the function.
  • Timeout sets the maximum duration for function execution.
  • VPC (under Network) provides the Lambda function access to resources within a Virtual Private Cloud (VPC) network.

 Ignore the warning message "You don't have permission to configure a VPC.". This lab does not need a VPC.

  • Dead Letter Queue (DLQ) Resource (under Debugging and error handling) defines how to handle failed function executions.
  • Enable active tracing allows tracing and monitoring of distributed code via AWS X-Ray.
  11. Click Save at the top of the window.

Your Lambda function has now been configured.

Task 3: Test Your Function

In this task, you will test your Lambda function. This is done by simulating an event with the same information normally sent from Amazon S3 when a new object is uploaded.

  1. At the top of the screen, click Test then configure:
  • Event template: Amazon S3 Put
  • Event name: 

A sample template will be displayed that shows the event data sent to a Lambda function when it is triggered by an upload into Amazon S3. You will need to edit the bucket name so that it uses the bucket you created earlier.

  2. Replace example-bucket with the name of your images bucket (e.g. images-123) that you copied to your text editor.

Be sure to replace example-bucket in both locations.


  3. Replace test/key with the name of the picture that you uploaded. This should be HappyFace.jpg.


  4. Click Create

  5. Click Test

AWS Lambda will now trigger your function, using HappyFace.jpg as the input image.
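The substitution steps above can be sketched as follows. The template is trimmed to the fields that matter here; the bucket ARN is the second place example-bucket appears, and images-123 stands in for your own bucket name:

```python
import json

# Trimmed "Amazon S3 Put" test event: example-bucket appears in two places
# (the bucket name and its ARN), and test/key is the placeholder object key.
template = {
    "Records": [{
        "s3": {
            "bucket": {"name": "example-bucket",
                       "arn": "arn:aws:s3:::example-bucket"},
            "object": {"key": "test/key"},
        }
    }]
}

# Replace both example-bucket occurrences and the test/key placeholder,
# just as the steps above describe.
edited = json.loads(json.dumps(template)
                    .replace("example-bucket", "images-123")
                    .replace("test/key", "HappyFace.jpg"))
record = edited["Records"][0]["s3"]
print(record["bucket"]["name"], record["bucket"]["arn"], record["object"]["key"])
```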

Towards the top of the page you should see the message: Execution result: succeeded

 If your test did not succeed, the error message will explain the cause of failure. For example, a Forbidden message means that the image was not found, possibly due to an incorrect bucket name. Review the previous steps to confirm that you have configured the function correctly.

  6. Click Details to expand it (towards the top of the screen).

You will be shown information including:

  • Execution duration
  • Resources configured
  • Maximum memory used
  • Log output

You can now view the resized image that was stored in Amazon S3.

  7. On the Services menu, click S3.

  8. Click the name of your -resized bucket (which is the second bucket you created), then:

  • Click HappyFace.jpg
  • Click Open (If the image does not open, disable your pop-up blocker.)

The image should now be a smaller thumbnail of the original image.

You are welcome to upload your own images to the images- bucket and then check for thumbnails in the -resized bucket.

Task 4: Monitoring and Logging

You can monitor AWS Lambda functions to identify problems and view log files to assist in debugging.

  1. On the Services menu, click Lambda.

  2. Click your Create-Thumbnail function.

  3. Click the Monitoring tab.

The console displays graphs showing:

  • Invocations: The number of times the function has been invoked.
  • Duration: How long the function took to execute (in milliseconds).
  • Errors: How many times the function failed.
  • Throttles: When too many functions are invoked simultaneously, they will be throttled. The default is 1000 concurrent executions.
  • Iterator Age: Measures the age of the last record processed from streaming triggers (Amazon Kinesis and Amazon DynamoDB Streams).
  • Dead Letter Errors: Failures when sending messages to the Dead Letter Queue.
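As a toy illustration of the throttling rule: with the default limit of 1000 concurrent executions, any invocations beyond the limit are throttled rather than run.

```python
# Toy model of the throttling rule: invocations beyond the concurrency
# limit (default 1000) are throttled rather than executed.
def throttled_count(concurrent_invocations, limit=1000):
    return max(0, concurrent_invocations - limit)

print(throttled_count(950))   # under the limit: nothing throttled
print(throttled_count(1200))  # 200 invocations throttled
```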

Log messages from Lambda functions are retained in Amazon CloudWatch Logs.

  4. Click View logs in CloudWatch

  5. Click the Log Stream that appears.

  6. Expand each message to view the log message details.

The Event Data includes the Request Id, the duration (in milliseconds), the billed duration (rounded up to the nearest 100 ms), the Memory Size of the function and the Maximum Memory that the function used. In addition, any logging messages or print statements from the function are displayed in the logs. This assists in debugging Lambda functions.
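The billed-duration rounding described above can be computed directly (the 100 ms rounding matches the behavior this 2020-era lab describes):

```python
import math

# Billed duration rounds the measured duration up to the nearest 100 ms,
# as described in the log output above.
def billed_duration_ms(duration_ms):
    return math.ceil(duration_ms / 100) * 100

print(billed_duration_ms(2.5))    # 100
print(billed_duration_ms(142.1))  # 200
print(billed_duration_ms(300.0))  # 300
```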


 Congratulations! You have successfully:

  • Created an AWS Lambda function
  • Configured an Amazon S3 bucket as a Lambda Event Source
  • Triggered a Lambda function by uploading an object to Amazon S3
  • Monitored AWS Lambda functions through Amazon CloudWatch Logs

End Lab

Follow these steps to close the console, end your lab, and evaluate the experience.

  1. Return to the AWS Management Console.

  2. On the navigation bar, click awsstudent@<AccountNumber>, and then click Sign Out.

  3. Click End Lab

  4. Click OK

  5. (Optional):

  • Select the applicable number of stars 
  • Type a comment
  • Click Submit

    • 1 star = Very dissatisfied
    • 2 stars = Dissatisfied
    • 3 stars = Neutral
    • 4 stars = Satisfied
    • 5 stars = Very satisfied

You may close the dialog if you don't want to provide feedback.

Additional Resources

For feedback, suggestions, or corrections, please email us at [email protected].

Ready for more?

Here's another lab we think you'll like.


Introduction to Amazon Virtual Private Cloud (VPC)


