Modern software projects are hard to develop and maintain without automation. The most effective approach is to create a pipeline that sources, builds, and deploys new changes. This article continues the previous one, which covered the deployment of a public serverless API. Here we will take the automation further.
What was the project about? It provides a REST API to store, retrieve, edit, and delete books. If you are going to follow the instructions, it is worth having the git repository cloned; it is situated [here](https://github.com/Grenguar/aws-cdk-api-workshop). First, it is crucial to establish a common vocabulary.
Terms used:
- implementation code - the main application, situated in the root folder
- implementation stack - the one with the infrastructure code for the Lambdas, API Gateway, and DynamoDB table
- pipeline stack - the magical one with the CI/CD logic in it, including self-mutation.
In a nutshell, a couple of manual steps are needed to succeed. In the end, however, we will have a pipeline that picks up changes and checks both the infrastructure and the implementation. Here are the steps:
- Creating a GitHub connection using AWS CodeStar
- Provisioning the CDKToolkit (new or updated)
- Provisioning the CodePipeline from the CLI
Let's start at the very beginning of the project's build process. The pipeline should be able to get the code from the GitHub repository. In the old days, one had to store the secret of a GitHub webhook. Now there is a second version of the GitHub integration.
To use it, we will set up a connection. It is a region-specific service located under ['Developer Tools -> Connections'](https://console.aws.amazon.com/codesuite/settings/connections). Click 'Create connection.' At this step, there are three options: Bitbucket, GitHub, and AWS Connector for GitHub.
The connection requires the installation of the 'AWS Connector for GitHub' app. There is an option to grant access either to all repositories in the account or only to selected ones.
On the next screen, there is the ARN of this connection. Write it down, because the first stage of the pipeline will use it. From experience, it is better to save it into SSM Parameter Store, situated [here](https://eu-west-1.console.aws.amazon.com/systems-manager/parameters/). The path to the parameter could look like
`/serverless-api/git/connection-arn`.
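For example, the parameter can be created from the CLI; a minimal sketch, where the ARN value is a placeholder:

```bash
aws ssm put-parameter \
  --name "/serverless-api/git/connection-arn" \
  --type String \
  --value "arn:aws:codestar-connections:eu-west-1:111122223333:connection/example-id"
```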
The traditional approach to creating CodePipeline projects is to describe every stage: sourcing, building, and deploying. The main issue was that everything, such as output artifacts and execution roles, had to be described from the ground up.
However, IaC tools are evolving. CDK has the concept of a 'Construct,' which is basically a group of resources. There is a new set of them in `aws-cdk-lib/pipelines`. It solves the above-mentioned problem of provisioning additional resources by hand.
The synth option is the first one worth discussing. It is basically our build stage in a single stop. The connection ARN is read from SSM; putting it into a managed service for secrets is a sound security practice. The first argument of the input connection is `<owner>/<repository_name>`. This also works with corporate accounts.
The commands option is basically a replacement for the buildspec: a sequence of commands executed by the builder, implemented in CDK as an array of strings, which can be handy for parametrization. It is important to note that the webpack project should be built first; only after that will there be a `code` folder with our lambdas. Without it, the infrastructure deployment will fail.
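Putting it together, a minimal sketch of the pipeline definition; the branch name, command list, and output directory are assumptions based on the project layout:

```ts
// infra/lib/pipeline-stack.ts
import * as cdk from 'aws-cdk-lib';
import * as ssm from 'aws-cdk-lib/aws-ssm';
import { Construct } from 'constructs';
import { CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines';

export class PipelineStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // the connection ARN saved earlier in SSM Parameter Store
    const connectionArn = ssm.StringParameter.valueForStringParameter(
      this,
      '/serverless-api/git/connection-arn'
    );

    const pipeline = new CodePipeline(this, 'Pipeline', {
      // synth is the all-in-one build stage
      synth: new ShellStep('Synth', {
        // the first argument has the form <owner>/<repository_name>
        input: CodePipelineSource.connection('Grenguar/aws-cdk-api-workshop', 'main', {
          connectionArn,
        }),
        // build the webpack project first so the `code` folder with the
        // lambdas exists, then synthesize the infrastructure
        commands: ['npm ci', 'npm run build', 'cd infra', 'npm ci', 'npx cdk synth'],
        primaryOutputDirectory: 'infra/cdk.out',
      }),
    });

    // the implementation stage is attached via pipeline.addStage (shown later)
  }
}
```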
Self-mutation gives the pipeline the ability to change itself. Previously, it was painful to change its settings. Now there are two stacks, implementation and pipeline, and both of them are checked and deployed. 'MyPipelineAppStage' is the stage that wraps the stack with the API Gateway, Lambdas, and DynamoDB table.
Previously, there was only one CloudFormation stack: ApiStack. Now the primary stack becomes the pipeline one, and the implementation stack becomes a Stage, a unit of one or more stacks that are deployed together. In this case, `infra/bin/infra.ts` is the application entry point. It provisions only one stack, the PipelineStack, while the implementation one is packed into a stage and added to the pipeline. The code example is below.
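A minimal sketch of the Stage and the application entry point; the file paths and construct ids are assumptions:

```ts
// infra/lib/pipeline-app-stage.ts - packs the implementation stack into a Stage
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { ApiStack } from './api-stack';

export class MyPipelineAppStage extends cdk.Stage {
  constructor(scope: Construct, id: string, props?: cdk.StageProps) {
    super(scope, id, props);
    // one or more stacks can live here; they are deployed together
    new ApiStack(this, 'ApiStack');
  }
}
```

```ts
// infra/bin/infra.ts - the entry point provisions only the pipeline stack;
// the implementation stack is deployed by the pipeline itself
import * as cdk from 'aws-cdk-lib';
import { PipelineStack } from '../lib/pipeline-stack';

const app = new cdk.App();
new PipelineStack(app, 'PipelineStack');
```

Inside the PipelineStack, the stage is then attached with `pipeline.addStage(new MyPipelineAppStage(this, 'Deploy'));`.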
If one wants to use CDK, it is necessary to provision the CDKToolkit. There is a one-line command for that. However, there are multiple versions of this toolkit, and when I was preparing this article, my case was a problematic one: deployments failed because CloudFormation did not have enough rights to provision resources. For that case, I created the **Troubleshooting** part of this article; refer to it if you run into deployment issues. For first-timers:
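A sketch of that one-liner; the account id and region are placeholders, and the execution policy is what CDK Pipelines expects by default:

```bash
npx cdk bootstrap aws://111122223333/eu-west-1 \
  --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess
```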
Why are we using `npx`? In short, it runs cdk without the need to install it globally. From the npx docs:
> Executes `<command>` either from a local node_modules/.bin, or from a central cache, installing any packages needed in order for `<command>` to run.
We are almost done. Now the only thing left is to run the command that provisions the pipeline, from the `infra` folder:
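A sketch of that command; this first deployment is the only manual one, since the pipeline mutates itself afterwards:

```bash
npx cdk deploy
```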
That's it! Now we can relax and wait for a couple of minutes for the pipeline to succeed. The image shows the pipeline generated after that.
It sources the code from the latest git commit. After that, it builds the code according to the instructions in the ShellStep. The third stage checks for changes in the pipeline itself. The next one is interesting because it has two groups of artefacts: one for the implementation code of the lambdas, and another for the handler of the custom Lambda resource that creates log groups and retention policies. The deployment stage will check the changeset of the generated CloudFormation project.
___
In this article, I presented a modern way of creating CI/CD pipelines. The serverless use case is the most natural one for me; however, these findings can also be applied to provisioning EC2 instances or ECS clusters.
**Troubleshooting**

If the stack is failing at the deployment stage, you probably have issues with the 'CDKToolkit'. You will need to take the following actions.
So, what I did:
- Found 'CDKToolkit' in CloudFormation. There I noted the S3 bucket.
- Deleted the toolkit stack.
- Deleted the bucket.
- Bootstrapped again, exactly as the tutorial says (see the sketch after this list).
- Then provisioned the pipeline again (also shown below).
- On the last step of the pipeline, checked that the IAM role of the CFN deployment stack had the AdministratorAccess policy attached.
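For reference, a sketch of those two commands, with the same placeholders as before:

```bash
# re-bootstrap the environment (account id and region are placeholders)
npx cdk bootstrap aws://111122223333/eu-west-1 \
  --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess

# then provision the pipeline again from the infra folder
npx cdk deploy
```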
Useful links:
- Create a connection
- CDK Pipelines