DevEx - Containerized Applications
A few notes from the session at AWS Loft 2019 on containerized applications, in terms of the tools and processes that support the developer experience; slides are available online.
We had an overview of the process to manage and deliver containerized applications and of the key elements in the development process. One practice that becomes rather relevant is Infrastructure as Code; this is, however, not specific to containerized applications but applies to any cloud-based application.
When we practice Infrastructure as Code, a key principle is a clean separation between the application and the infrastructure: we organize and manage these resources separately, so that maintaining one does not affect the other. This directly impacts:
- Repositories: how granular should we go?
- Pipelines: build and delivery mechanisms should target changes that are specific to either area, never across both.
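As a rough illustration of that separation (not from the slides), the sketch below uses the aws-cdk-lib v2 TypeScript API with hypothetical stack names and image URI: the long-lived platform resources live in one stack and the frequently changing service definition in another, so each can sit in its own repository and be deployed by its own pipeline.

```typescript
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import { Construct } from 'constructs';

// Infrastructure stack: long-lived resources that change rarely.
class InfraStack extends Stack {
  readonly cluster: ecs.Cluster;

  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    const vpc = new ec2.Vpc(this, 'Vpc', { maxAzs: 2 });
    this.cluster = new ecs.Cluster(this, 'Cluster', { vpc });
  }
}

// Application stack: changes with every release, owned by the service team.
interface AppStackProps extends StackProps {
  cluster: ecs.Cluster;
}

class AppStack extends Stack {
  constructor(scope: Construct, id: string, props: AppStackProps) {
    super(scope, id, props);

    const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDef');
    taskDefinition.addContainer('app', {
      // Hypothetical image URI; in practice this points at our registry.
      image: ecs.ContainerImage.fromRegistry('example.com/my-app:latest'),
      memoryLimitMiB: 512,
    });

    new ecs.FargateService(this, 'Service', {
      cluster: props.cluster,
      taskDefinition,
    });
  }
}

const app = new App();
const infra = new InfraStack(app, 'InfraStack');
new AppStack(app, 'AppStack', { cluster: infra.cluster });
```

With this split, a change to scaling or networking only touches InfraStack and its pipeline, while a new application release only touches AppStack.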
DevEx
This comprises practices and tools for each of the areas shown in the table below; most of the tooling referenced consists of services offered by AWS.
Flow: Local development => IaC => Deployment safety => Continuous delivery
Element | Description | Tooling |
---|---|---|
Local Development | This is mostly about getting to know some parts of the platform and gaining confidence that it will work in a deployed environment in the Cloud; the goal is to get as close as possible to a live environment. | AWS ECS CLI, or docker/docker-compose if not using ECS.
IaC | Model our infrastructure as a combination of templates and code. | AWS CDK. |
Deployment safety | Have fail-safe mechanisms to roll out new versions of the application without disruption. | B/G deployments with AWS CodeDeploy, AWS ECS TaskSet API. |
Continuous delivery | Establish a strategy to conduct releases which factors in the lifecycle of the application code and the underlying platform. | The different models are covered in the next section.
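For the deployment-safety row, a minimal sketch (again aws-cdk-lib v2, placeholder image, not from the slides) of what opting an ECS service into CodeDeploy-managed blue/green deployments can look like; the CodeDeploy application, deployment group and load balancer wiring that a real B/G setup needs are omitted here.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import { Construct } from 'constructs';

class BlueGreenServiceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, 'Vpc', { maxAzs: 2 });
    const cluster = new ecs.Cluster(this, 'Cluster', { vpc });

    const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDef');
    taskDefinition.addContainer('app', {
      // Placeholder image for the sketch.
      image: ecs.ContainerImage.fromRegistry('public.ecr.aws/nginx/nginx:latest'),
      portMappings: [{ containerPort: 80 }],
    });

    // Handing the rollout to CodeDeploy means a green task set is started,
    // traffic is shifted, and the blue task set is drained only once the
    // new version is confirmed healthy.
    new ecs.FargateService(this, 'Service', {
      cluster,
      taskDefinition,
      deploymentController: { type: ecs.DeploymentControllerType.CODE_DEPLOY },
    });
  }
}
```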
Releasing
We were walked through various models to handle releases:
- Single Source/Single Pipeline - The task definition(s) and application code sit in the same repository and are served by the same pipeline, which is responsible for building, pushing and releasing.
- Multiple Sources/Multiple Pipelines - The task definition(s) and application code sit in different repositories, with one dedicated pipeline for each source. While the pipeline responsible for the application builds and pushes newer images to the registry, the other pipeline is triggered whenever a new deployment is needed due to a change of settings, regardless of which image is used.
A bit on the rationale behind multiple sources / multiple pipelines:
In a containerized solution, changes pertinent to the platform supporting the environment (e.g. scaling settings) have nothing to do with the application code itself; therefore we’d continue to use the image considered latest, i.e. the one actively running.
- Consume base image - This is the typical model: a prebuilt image (not built by us) serves as the base, and our task, which describes the use of such an image, is built and pushed to the registry by a pipeline. Again, the task rollout is handled separately from building a new image.
- Consume side-car image from central team - We build and push a base image to the registry; when deployed, an extra image is deployed alongside it as a side-car to our application. The side-car could provide supporting utilities for our application (a minimal sketch follows this list).
- Dev builds images, Ops pushes them - The only remark here is that the deployment from images is handled exclusively by operations; this is more about delimiting each team's responsibility.
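A minimal sketch of the side-car model, again with the aws-cdk-lib v2 TypeScript API and hypothetical image URIs: the application container and the central team's side-car run in the same task definition, but each image is produced and versioned by a different team.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import { Construct } from 'constructs';

class SidecarTaskStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDef', {
      cpu: 256,
      memoryLimitMiB: 1024,
    });

    // The application container, built and pushed by our own pipeline
    // (hypothetical image URI).
    taskDefinition.addContainer('app', {
      image: ecs.ContainerImage.fromRegistry('example.com/my-app:latest'),
      memoryLimitMiB: 512,
      essential: true,
    });

    // Side-car published by the central team, e.g. a log router or
    // metrics agent (hypothetical image URI).
    taskDefinition.addContainer('platform-sidecar', {
      image: ecs.ContainerImage.fromRegistry('example.com/platform/agent:stable'),
      memoryLimitMiB: 256,
      essential: false,
    });
  }
}
```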
Remarks
About CodeDeploy capabilities:
- Supports B/G
- Coming soon: App Mesh and Canary Releases
Concepts
- AWS CDK Constructs: bundles of rules/code definitions, prebuilt to achieve common tasks (see the sketch below).
- Task is the ECS term for a running group of one or more containers, as described by a task definition.
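As an example of such a prebuilt construct (my own illustration, not from the session), the aws-ecs-patterns module offers ApplicationLoadBalancedFargateService, which stands up a VPC, cluster, load balancer, task definition and service in a few lines; the image below is a placeholder.

```typescript
import { App, Stack } from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecs_patterns from 'aws-cdk-lib/aws-ecs-patterns';

const app = new App();
const stack = new Stack(app, 'WebServiceStack');

// One high-level construct bundles VPC, cluster, ALB, task definition and service.
new ecs_patterns.ApplicationLoadBalancedFargateService(stack, 'Web', {
  desiredCount: 2,
  publicLoadBalancer: true,
  taskImageOptions: {
    // Placeholder image for the sketch.
    image: ecs.ContainerImage.fromRegistry('public.ecr.aws/nginx/nginx:latest'),
  },
});
```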