The course's goal is clear: get up to speed on how to run containerised workloads in AWS. Over four weeks, you will tackle the end-to-end lifecycle of several containerised applications, leveraging Infrastructure-as-Code with Terraform.
The course teaches the cloud-native landscape while focusing on production patterns, so you can start using containers immediately. Learning the conceptual landscape will help you build reliable, maintainable services and know when to push for the bleeding edge and take calculated risks.
As our overarching project, we will run a multi-service 'Hello World' application, from Go to Python, and from serverless containers to hand-tuned EC2 instances. We will address the common pain points: CI/CD, connecting to other AWS resources (including databases), security, monitoring, and developer experience. Completing the project will prepare you for the world of running containers and its challenges.
Week 1 covers how we got to containers and container orchestrators, typical use cases and workflows, and some general architecture patterns, followed by a crash course in Infrastructure-as-Code with Terraform. By the end of the first week, you will know the differences between ECS, Fargate, and EKS, and you will be able to manage and deploy resources using Terraform.
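As a taste of the Terraform workflow, a minimal configuration might look like the sketch below. The provider version, region, and repository name are illustrative placeholders, not values taken from the course materials:

```hcl
# A minimal sketch of an IaC setup, assuming the AWS provider.
# Region and names are placeholders; pick your own.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "eu-west-2" # assumption: your preferred region
}

# An ECR repository to hold container images built later on
resource "aws_ecr_repository" "hello_world" {
  name = "hello-world"
}
```

Running `terraform init` followed by `terraform plan` and `terraform apply` would create the repository; `terraform destroy` tears it down again.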
Week 2 looks at what containers are and how to build, store, and run them locally. We will use multiple applications in multiple languages and address best practices along the way. By the end of the week, you will have built containers for several applications, set up CI pipelines using GitHub Actions, and stored the container images in AWS for use in the next two weeks.
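To illustrate the kind of build covered here, a hypothetical multi-stage Dockerfile for a small Go service might look like this (the module layout and binary name are illustrative, not the course's actual code):

```dockerfile
# Hypothetical multi-stage build for a small Go service.
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Build a static binary so the final image needs no libc
RUN CGO_ENABLED=0 go build -o /bin/hello ./cmd/hello

# A minimal runtime image keeps size and attack surface small
FROM gcr.io/distroless/static-debian12
COPY --from=build /bin/hello /hello
ENTRYPOINT ["/hello"]
```

The multi-stage pattern separates the build toolchain from the runtime image, which is one of the best practices the course addresses.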
Week 3 introduces serverless containers with Fargate on ECS. We will run a service on Fargate, monitor and secure it, and connect our application to a database. We will also discuss patterns and best practices, as well as integrations with existing applications and services. By the end of the week, you will have your first service up and running.
The final week focuses on Kubernetes. The most famous name in containers will be discussed in depth, alongside other relevant projects. We will run our final microservice on Kubernetes and connect all our applications to complete the project. By the end of the week, you will know how and why to use Kubernetes, how to deploy applications with Helm, and how to integrate Kubernetes with other AWS services.
We will wrap up the course with an extra lecture on further steps in container workloads and the cloud-native world, presenting more advanced subjects such as service meshes.
The class takes place over four weeks, with an average time commitment of 4-5 hours per week. Each week will follow this structure:
Mondays: Lectures & exercises released via the Homeschool platform; you will need to be logged in to access them.
All week: Peer chat and exercise sharing with your instructor & class cohort.
Thursdays at 9pm and Fridays at 10am (London time): Live recap and Q&A sessions (~1 hour) with the instructors. Exercise answers released.
The Q&A sessions are streamed within the Homeschool platform, with questions submitted via chat. In case you miss them, recordings are added to the platform after the streams end, along with a timecoded log of every question.
A brief list of what you will need to take this class:
A basic understanding of Amazon Web Services (AWS) and previous experience with AWS workloads and management, e.g. VPC, EC2, IAM, and the AWS CLI.
An active AWS account with a valid billing method set. The AWS Free Tier is not sufficient for this course.
An active GitHub account. The Free tier with unlimited Actions minutes for public repositories is sufficient.
A computer with a terminal emulator and administrative rights to install new software (Terraform, Docker, Kubernetes CLI, and more).
A code editor or IDE. For example, VS Code, Sublime Text, Vim, Emacs, Visual Studio, IntelliJ, etc.
Sysadmins or people with an operations-focused background who want to get ready for a cloud-native and containerised world. This course draws several parallels to virtual machines and immutable infrastructure to make the leap more approachable.
Developers who need to run and maintain their applications in production. The course showcases programming and application development practices for containers while considering legacy software issues and relevant patterns, and it draws several parallels to code libraries and local development to make the leap more approachable.
Serverless Engineers who need longer-running jobs, GPU support, or more powerful hardware. Containers are often used together with serverless technologies, and a significant part of the course focuses on serverless containers (Fargate).
Tech Leads, Architects, and Engineering Managers who need to know when to recommend containers, how to leverage the relevant AWS services with minimal friction, and how to help their business by focusing on velocity.