10 ways to build applications faster with Amazon CodeWhisperer

Amazon CodeWhisperer is a powerful generative AI tool that gives me coding superpowers. Ever since I incorporated CodeWhisperer into my workflow, I have become faster, smarter, and even more delighted when building applications. However, learning to use any generative AI tool effectively requires a beginner’s mindset and a willingness to embrace new ways of working.

Best practices for tapping into CodeWhisperer’s power are still emerging. But, as an early explorer, I’ve discovered several techniques that have allowed me to get the most out of this amazing tool. In this article, I’m excited to share these techniques with you, using practical examples to illustrate just how CodeWhisperer can enhance your programming workflow. I’ll explore:

Typing less
Generating functions from signatures
Generating functions from comments
Generating classes
Implementing algorithms
Writing unit tests
Creating sample data
Simplifying regular expressions
Learning third-party code libraries faster
Documenting code

Before we begin

If you would like to try these techniques for yourself, you will need to use a code editor with the AWS Toolkit extension installed. VS Code, AWS Cloud9, and most editors from JetBrains will work. Refer to the CodeWhisperer “Getting Started” resources for setup instructions.

CodeWhisperer will present suggestions automatically as you type. If you aren’t presented with a suggestion, you can always manually trigger a suggestion using the Option + C (Mac) or Alt + C (Windows) shortcut. CodeWhisperer will also sometimes present you with multiple suggestions to choose from. You can press the → and ← keys to cycle through all available suggestions.

The suggestions CodeWhisperer offers are non-deterministic, which means you may receive slightly different suggestions than the ones shown in this article. If you receive a suggestion that doesn’t suit your needs, try cycling through all available suggestions using the arrow keys or try rephrasing any code comment prompts slightly. Even punctuation changes can produce fresh suggestions.

The examples in this article are written in JavaScript and TypeScript. However, the techniques I’ll demonstrate are equally suited to many of the programming languages supported by CodeWhisperer. Now, onto the examples!

Typing less

At the most basic level, CodeWhisperer can be used as a supercharged auto-complete, significantly reducing your typing when writing individual lines or blocks of code. To use CodeWhisperer in this way, just write code as you normally would. CodeWhisperer will make automatic suggestions as you type. Accept suggestions when they’re helpful, and ignore suggestions when they’re not. In my experience, using CodeWhisperer in this way alone reduces my typing by 50% or more.
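
For example, if I start typing a line like the following, CodeWhisperer often proposes the rest of it (this particular completion is hypothetical; your suggestions will vary):

// After I type:
//   const total = prices.
// CodeWhisperer might suggest the completed line:
const total = prices.reduce((sum, price) => sum + price, 0);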

Generating functions

Automatically generating function bodies from signatures

Skilled programmers use clear, descriptive names for their functions and parameters. If you follow this practice in your own programming, CodeWhisperer will have all the context it needs to implement many functions for you.

After I enter the function signature below, CodeWhisperer generates the function body for me.

User input (JavaScript):

function standardDeviation(numbers)

Result:

function standardDeviation(numbers) {
  let mean = numbers.reduce((a, b) => a + b) / numbers.length;
  let variance =
    numbers.reduce((a, b) => a + Math.pow(b - mean, 2)) / numbers.length;
  return Math.sqrt(variance);
}

Generating functions from comments

Sometimes a good function signature alone isn’t enough to convey the desired behavior of a function. In those cases, you can use a detailed code comment as a prompt to CodeWhisperer. I recommend using your language’s doc comment delimiter when using this technique.

User Input (JavaScript):

/**
 * Given a list that contains some numbers and strings, format them into a string
 * in which the numbers are prepended with "#" and the strings are wrapped in
 * double quotes.
 */

Result:

/**
 * Given a list that contains some numbers and strings, format them into a string
 * in which the numbers are prepended with "#" and the strings are wrapped in
 * double quotes.
 */
function formatList(list) {
  return list
    .map((item) => {
      if (typeof item === "number") {
        return `#${item}`;
      } else {
        return `"${item}"`;
      }
    })
    .join(", ");
}

Generating classes

You can generate full or partial class implementations from a single code comment. The comment should describe the key properties and behaviors of the class. Simple descriptions are often all that is required.

User input (TypeScript):

/**
* A folder has a name, children, and an optional parent. A child can’t be added
* twice.
*/

Result:

After I write the above block comment, CodeWhisperer provides multiple suggestions, including the following implementation, complete with documentation comments for each method!

/**
 * A folder has a name, children, and an optional parent. A child can’t be added
 * twice.
 */
export class Folder {
  name: string;
  children: Folder[];
  parent?: Folder;

  constructor(name: string, parent?: Folder) {
    this.name = name;
    this.children = [];
    this.parent = parent;
  }

  /**
   * Add a child to the folder.
   * @param child The child to add.
   * @returns True if the child was added, false if it was already present.
   */
  addChild(child: Folder): boolean {
    if (this.children.includes(child)) {
      return false;
    }
    this.children.push(child);
    child.parent = this;
    return true;
  }

  /**
   * Remove a child from the folder.
   * @param child The child to remove.
   * @returns True if the child was removed, false if it wasn’t present.
   */
  removeChild(child: Folder): boolean {
    const index = this.children.indexOf(child);
    if (index === -1) {
      return false;
    }
    this.children.splice(index, 1);
    child.parent = undefined;
    return true;
  }
}
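
To sanity-check the suggestion, here’s a quick usage example I wrote by hand (not generated by CodeWhisperer):

const root = new Folder("root");
const docs = new Folder("docs");

console.log(root.addChild(docs)); // true (added, and docs.parent now points to root)
console.log(root.addChild(docs)); // false (duplicates are rejected)
console.log(root.removeChild(docs)); // true (removed, and docs.parent is cleared)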

Implementing algorithms

CodeWhisperer enables you to implement well-known algorithms with ease. Simply tell CodeWhisperer which algorithm you want to apply to a particular coding problem, and it will oblige.

User input (JavaScript):

const actors = [ … ];

// Sort the actors by age using the merge sort algorithm.

Result:

const actors = [ … ];

// Sort the actors by age using the merge sort algorithm.
function mergeSort(arr) {
  if (arr.length < 2) {
    return arr;
  }

  const middle = Math.floor(arr.length / 2);
  const left = arr.slice(0, middle);
  const right = arr.slice(middle);

  return merge(mergeSort(left), mergeSort(right));
}

function merge(left, right) {
  const result = [];

  while (left.length && right.length) {
    if (left[0].age < right[0].age) {
      result.push(left.shift());
    } else {
      result.push(right.shift());
    }
  }

  while (left.length) {
    result.push(left.shift());
  }

  while (right.length) {
    result.push(right.shift());
  }

  return result;
}

console.log(mergeSort(actors));

Writing unit tests

CodeWhisperer accelerates the implementation of unit tests. Not only is it able to implement the logic for most of the explicit unit tests I ask for, but it often suggests unit tests for edge cases that hadn’t even occurred to me.

When I want CodeWhisperer to implement a unit test, I start with a declaration of the condition I’m testing and the expected result as in the following example.

User input (JavaScript):

describe("Folder.addChild()", () => {
  it("when adding an existing folder should not add duplicate"

Result:

describe("Folder.addChild()", () => {
  it("when adding an existing folder should not add duplicate", () => {
    const folder = new Folder("root");
    const child = new Folder("child");
    folder.addChild(child);
    folder.addChild(child);
    expect(folder.children.length).toBe(1);
  });
});

After you’ve written a test or two, CodeWhisperer is likely to suggest more tests, even without explicit prompting.

Tip: You’ll notice that my test description includes 1) the function I’m testing, 2) the condition I’m testing, and 3) what I expect the result to be. Following this three-part pattern is a great way to ensure CodeWhisperer has the context it needs to implement high quality test logic.
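
For example, prompting with another description that follows the same three-part pattern, such as the hypothetical one below, typically yields a complete test similar to this:

it("when removing a folder that is not a child should return false", () => {
  const folder = new Folder("root");
  const other = new Folder("other");
  expect(folder.removeChild(other)).toBe(false);
});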

Creating sample data

One of the delightful discoveries I’ve made is that CodeWhisperer can generate dummy data—or really any code that has a repeating pattern. As an illustration, for the following list I entered only the first “user” object as an example, and CodeWhisperer helpfully continued the pattern.

User input (JavaScript):

const fakeUsers = [
  { name: "User 1", id: "user1" },

Result:

const fakeUsers = [
  { name: "User 1", id: "user1" },
  { name: "User 2", id: "user2" },
  { name: "User 3", id: "user3" },
  { name: "User 4", id: "user4" },
  { name: "User 5", id: "user5" },
  { name: "User 6", id: "user6" }
]

It even works for generating random but plausible values like the names seen below.

User input (JavaScript):

const fakeUsers = [
  { name: "Sally", id: "user1" },

Result:

const fakeUsers = [
  { name: "Sally", id: "user1" },
  { name: "Bob", id: "user2" },
  { name: "Joe", id: "user3" },
  { name: "Jane", id: "user4" },
  { name: "Jill", id: "user5" },
  { name: "Jack", id: "user6" }
]

Simplifying regular expressions

While CodeWhisperer can’t translate natural language into custom regular expressions (yet), it does know a lot of commonly used regular expressions. This saves you time by putting these regular expressions at your fingertips whenever you need them.

Let’s look at one notoriously complex but common regular expression.

User input (JavaScript):

/**
* Function that validates whether a string is a valid email address.
*/

Result:

/**
 * Function that validates whether a string is a valid email address.
 */
function validateEmail(email: string): boolean {
  const re =
    /^(([^<>()[\]\\.,;:\s@"]+(\.[^<>()[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
  return re.test(String(email).toLowerCase());
}
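
A couple of quick, illustrative checks of the suggested function:

validateEmail("jane.doe@example.com"); // true
validateEmail("not-an-email"); // false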

Learning third-party code libraries faster

CodeWhisperer helps accelerate my learning when I’m using an unfamiliar library, module, or framework. In the example below, I needed to write a utility function that would take a gzip compressed value coming from an API and translate it into a value that the rest of my code could understand. I knew the “pako” library would help me, but I had no idea how to use it. I just prompted CodeWhisperer with an import statement and a comment describing my goal.

import pako from "pako";

/**
* Accepts a base64-encoded, gzip-compressed JSON string and returns
* a parsed JSON object.
*/

Result:

import pako from "pako";

/**
 * Accepts a base64-encoded, gzip-compressed JSON string and returns
 * a parsed JSON object.
 */
export function decodeGzipJson(base64String) {
  const compressed = atob(base64String);
  const uncompressed = pako.inflate(compressed, { to: "string" });
  return JSON.parse(uncompressed);
}

Documenting code

CodeWhisperer is capable of generating docstrings and comments for the code it generates, as well as for your existing code. For example, let’s say I want CodeWhisperer to document the matches() method of this FavoritesFilter TypeScript class I’ve implemented (I’ve omitted some implementation details for brevity).

class FavoritesFilter implements IAssetFilter {

  matches(asset: Asset): boolean {

  }
}

I can just type a doc comment delimiter (/** */) immediately above the method name and CodeWhisperer will generate the body of the doc comment for me.

Note: When using CodeWhisperer in this way you may have to manually trigger a suggestion using Option + C (Mac) or Alt + C (Windows).

class FavoritesFilter implements IAssetFilter {

  /**
   * Determines whether the asset matches the filter.
   */
  matches(asset: Asset): boolean {

  }
}

Conclusion

I hope the techniques above inspire ideas for how CodeWhisperer can make you a more productive coder. Install CodeWhisperer today to start using these time-saving techniques in your own projects. These examples only scratch the surface. As additional creative minds start applying CodeWhisperer to their daily workflows, I’m sure new techniques and best practices will continue to emerge. If you discover a novel approach that you find useful, post a comment to share what you’ve discovered. Perhaps your technique will make it into a future article and help others in the CodeWhisperer community enhance their superpowers.

Kris Schultz (he/him)

Kris Schultz has spent over 25 years bringing engaging user experiences to life by combining emerging technologies with world class design. In his role as 3D Specialist Solutions Architect, Kris helps customers leverage AWS services to power 3D applications of all sorts.

Create a CI/CD pipeline for .NET Lambda functions with AWS CDK Pipelines

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to define cloud infrastructure in familiar programming languages and provision it through AWS CloudFormation.

In this blog post, we will explore the process of creating a Continuous Integration/Continuous Deployment (CI/CD) pipeline for a .NET AWS Lambda function using the CDK Pipelines. We will cover all the necessary steps to automate the deployment of the .NET Lambda function, including setting up the development environment, creating the pipeline with AWS CDK, configuring the pipeline stages, and publishing the test reports. Additionally, we will show how to promote the deployment from a lower environment to a higher environment with manual approval.

Background

AWS CDK makes it easy to deploy a stack that provisions your infrastructure to AWS from your workstation by simply running cdk deploy. This is useful when you are doing initial development and testing. However, in most real-world scenarios, there are multiple environments, such as development, testing, staging, and production. It may not be the best approach to deploy your CDK application in all these environments using cdk deploy. Deployment to these environments should happen through more reliable, automated pipelines. CDK Pipelines makes it easy to set up a continuous deployment pipeline for your CDK applications, powered by AWS CodePipeline.

The AWS CDK Developer Guide’s Continuous integration and delivery (CI/CD) using CDK Pipelines page shows you how you can use CDK Pipelines to deploy a Node.js based Lambda function. However, .NET based Lambda functions are different from Node.js or Python based Lambda functions in that .NET code first needs to be compiled to create a deployment package. As a result, we decided to write this blog as a step-by-step guide to assist our .NET customers with deploying their Lambda functions utilizing CDK Pipelines.

In this post, we dive deeper into creating a real-world pipeline that runs build and unit tests, and deploys a .NET Lambda function to one or multiple environments.

Architecture

CDK Pipelines is a construct library that allows you to provision a CodePipeline pipeline. The pipeline created by CDK Pipelines is self-mutating. This means you need to run cdk deploy one time to get the pipeline started. After that, the pipeline automatically updates itself if you add new application stages or stacks in the source code.

The following diagram captures the architecture of the CI/CD pipeline created with CDK Pipelines. Let’s explore this architecture at a high level before diving deeper into the details.

Figure 1: Reference architecture diagram

The solution creates a CodePipeline with an AWS CodeCommit repo as the source (CodePipeline Source Stage). When code is checked into CodeCommit, the pipeline is automatically triggered and retrieves the code from the CodeCommit repository branch to proceed to the Build stage.

Build stage compiles the CDK application code and generates the cloud assembly.

Update Pipeline stage updates the pipeline (if necessary).

Publish Assets stage uploads the CDK assets to Amazon S3.

After Publish Assets is complete, the pipeline deploys the Lambda function to both the development and production environments. For added control, the architecture includes a manual approval step for releases that target the production environment.

Prerequisites

For this tutorial, you should have:

An AWS account

Visual Studio 2022
AWS Toolkit for Visual Studio
Node.js 18.x or later
AWS CDK v2 (2.67.0 or later required)
Git

Bootstrapping

Before you use AWS CDK to deploy CDK Pipelines, you must bootstrap the AWS environments where you want to deploy the Lambda function. An environment is the target AWS account and Region into which the stack is intended to be deployed.

In this post, you deploy the Lambda function into a development environment and, optionally, a production environment. This requires bootstrapping both environments. However, deployment to a production environment is optional; you can skip bootstrapping that environment for the time being, as we will cover that later.

This is a one-time activity for each environment to which you want to deploy CDK applications. To bootstrap the development environment, run the command below, substituting in the AWS account ID for your dev account, the Region you will use for your dev environment, and the locally-configured AWS CLI profile you wish to use for that account. See the documentation for additional details.

cdk bootstrap aws://<DEV-ACCOUNT-ID>/<DEV-REGION> \
    --profile <DEV-PROFILE> \
    --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess

--profile specifies the AWS CLI credential profile that will be used to bootstrap the environment. If not specified, the default profile is used. The profile should have sufficient permissions to provision the resources for the AWS CDK during the bootstrap process.

--cloudformation-execution-policies specifies the ARNs of managed policies that should be attached to the deployment role assumed by AWS CloudFormation during deployment of your stacks.

Note: By default, stacks are deployed with full administrator permissions using the AdministratorAccess policy. For real-world usage, you should define a more restrictive IAM policy and use that instead; refer to customizing bootstrapping in the AWS CDK documentation and Secure CDK deployments with IAM permission boundaries to see how to do that.

Create a Git repository in AWS CodeCommit

For this post, you will use CodeCommit to store your source code. First, create a git repository named dotnet-lambda-cdk-pipeline in CodeCommit by following these steps in the CodeCommit documentation.

After you have created the repository, generate git credentials to access the repository from your local machine if you don’t already have them. Follow the steps below to generate git credentials.

Sign in to the AWS Management Console and open the IAM console.
Create an IAM user (for example, git-user).
Once the user is created, attach the AWSCodeCommitPowerUser policy to the user.
Next, open the user details page, choose the Security Credentials tab, and in HTTPS Git credentials for AWS CodeCommit, choose Generate.

Choose Download credentials to save this information as a .CSV file.

Clone the recently created repository to your workstation, then cd into dotnet-lambda-cdk-pipeline directory.

git clone <CODECOMMIT-CLONE-URL>
cd dotnet-lambda-cdk-pipeline

Alternatively, you can use git-remote-codecommit to clone the repository with the git clone codecommit::<REGION>://<PROFILE>@<REPOSITORY-NAME> command, replacing the placeholders with the appropriate values. Using git-remote-codecommit does not require you to create additional IAM users to manage git credentials. To learn more, refer to the AWS CodeCommit with git-remote-codecommit documentation page.
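
For example, assuming Region us-east-1 and the default AWS CLI profile (adjust both to your environment):

pip install git-remote-codecommit
git clone codecommit::us-east-1://default@dotnet-lambda-cdk-pipeline
cd dotnet-lambda-cdk-pipeline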

Initialize the CDK project

From the command prompt, inside the dotnet-lambda-cdk-pipeline directory, initialize an AWS CDK project by running the following command.

cdk init app --language csharp

Open the generated C# solution in Visual Studio, right-click the DotnetLambdaCdkPipeline project and select Properties. Set the Target framework to .NET 6.

Create a CDK stack to provision the CodePipeline

Your CDK Pipelines application includes at least two stacks: one that represents the pipeline itself, and one or more stacks that represent the application(s) deployed via the pipeline. In this step, you create the first stack that deploys a CodePipeline pipeline in your AWS account.

From Visual Studio, open the solution by opening the .sln solution file (in the src/ folder). Once the solution has loaded, open the DotnetLambdaCdkPipelineStack.cs file, and replace its contents with the following code. Note that the filename, namespace and class name all assume you named your Git repository as shown earlier.

Note: be sure to replace "<CODECOMMIT-REPOSITORY-NAME>" in the code below with the name of your CodeCommit repository (in this blog post, we have used dotnet-lambda-cdk-pipeline).

using Amazon.CDK;
using Amazon.CDK.AWS.CodeBuild;
using Amazon.CDK.AWS.CodeCommit;
using Amazon.CDK.AWS.IAM;
using Amazon.CDK.Pipelines;
using Constructs;
using System.Collections.Generic;

namespace DotnetLambdaCdkPipeline
{
    public class DotnetLambdaCdkPipelineStack : Stack
    {
        internal DotnetLambdaCdkPipelineStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props)
        {
            var repository = Repository.FromRepositoryName(this, "repository", "<CODECOMMIT-REPOSITORY-NAME>");

            // This construct creates a pipeline with 3 stages: Source, Build, and UpdatePipeline
            var pipeline = new CodePipeline(this, "pipeline", new CodePipelineProps
            {
                PipelineName = "LambdaPipeline",
                SelfMutation = true,

                // Synth represents a build step that produces the CDK Cloud Assembly.
                // The primary output of this step needs to be the cdk.out directory generated by the cdk synth command.
                Synth = new CodeBuildStep("Synth", new CodeBuildStepProps
                {
                    // The files downloaded from the repository will be placed in the working directory when the script is executed
                    Input = CodePipelineSource.CodeCommit(repository, "master"),

                    // Commands to run to generate CDK Cloud Assembly
                    Commands = new string[] { "npm install -g aws-cdk", "cdk synth" },

                    // Build environment configuration
                    BuildEnvironment = new BuildEnvironment
                    {
                        BuildImage = LinuxBuildImage.AMAZON_LINUX_2_4,
                        ComputeType = ComputeType.MEDIUM,

                        // Specify true to get a privileged container inside the build environment image
                        Privileged = true
                    }
                })
            });
        }
    }
}

In the preceding code, you use CodeBuildStep instead of ShellStep, since ShellStep doesn’t provide a property to specify BuildEnvironment. We need to specify the build environment in order to set privileged mode, which allows access to the Docker daemon in order to build container images in the build environment. This is necessary to use the CDK’s bundling feature, which is explained later in this blog post.

Open the file src/DotnetLambdaCdkPipeline/Program.cs, and edit its contents to reflect the below. Be sure to replace the placeholders with your AWS account ID and region for your dev environment.

using Amazon.CDK;

namespace DotnetLambdaCdkPipeline
{
    sealed class Program
    {
        public static void Main(string[] args)
        {
            var app = new App();
            new DotnetLambdaCdkPipelineStack(app, "DotnetLambdaCdkPipelineStack", new StackProps
            {
                Env = new Amazon.CDK.Environment
                {
                    Account = "<DEV-ACCOUNT-ID>",
                    Region = "<DEV-REGION>"
                }
            });
            app.Synth();
        }
    }
}

Note: Instead of committing the account ID and region to source control, you can set environment variables on the CodeBuild agent and use them; see Environments in the AWS CDK documentation for more information. Because the CodeBuild agent is also configured in your CDK code, you can use the BuildEnvironmentVariableType property to store environment variables in AWS Systems Manager Parameter Store or AWS Secrets Manager.
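
As a rough sketch of that approach (the variable key and parameter path below are illustrative assumptions, not values used elsewhere in this post), the Synth step’s build environment could resolve the dev account ID from Parameter Store, and Program.cs could then read it at synth time with System.Environment.GetEnvironmentVariable("DEV_ACCOUNT_ID"):

// Requires: using Amazon.CDK.AWS.CodeBuild; and using System.Collections.Generic;
BuildEnvironment = new BuildEnvironment
{
    BuildImage = LinuxBuildImage.AMAZON_LINUX_2_4,
    ComputeType = ComputeType.MEDIUM,
    Privileged = true,
    EnvironmentVariables = new Dictionary<string, IBuildEnvironmentVariable>
    {
        // Hypothetical parameter name; create it in Parameter Store ahead of time.
        ["DEV_ACCOUNT_ID"] = new BuildEnvironmentVariable
        {
            Type = BuildEnvironmentVariableType.PARAMETER_STORE,
            Value = "/dotnet-lambda-cdk-pipeline/dev-account-id"
        }
    }
}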

After you make the code changes, build the solution to ensure there are no build issues. Next, commit and push all the changes you just made. Run the following commands (or alternatively use Visual Studio’s built-in Git functionality to commit and push your changes):

git add --all .
git commit -m 'Initial commit'
git push

Then navigate to the root directory of the repository, where your cdk.json file is present, and run the cdk deploy command to deploy the initial version of the CodePipeline. Note that the deployment can take several minutes.

The pipeline created by CDK Pipelines is self-mutating. This means you only need to run cdk deploy one time to get the pipeline started. After that, the pipeline automatically updates itself if you add new CDK applications or stages in the source code.

After the deployment has finished, a CodePipeline is created and automatically runs. The pipeline includes three stages as shown below.

Source – It fetches the source of your AWS CDK app from your CodeCommit repository and triggers the pipeline every time you push new commits to it.

Build – This stage compiles your code (if necessary) and performs a cdk synth. The output of that step is a cloud assembly.

UpdatePipeline – This stage runs the cdk deploy command on the cloud assembly generated in the previous stage. It modifies the pipeline if necessary. For example, if you update your code to add a new deployment stage for your application, the pipeline is automatically updated to reflect the changes you made.

Figure 2: Initial CDK pipeline stages

Define a CodePipeline stage to deploy .NET Lambda function

In this step, you create a stack containing a simple Lambda function and place that stack in a stage. Then you add the stage to the pipeline so it can be deployed.

To create a Lambda project, do the following:

In Visual Studio, right-click on the solution, choose Add, then choose New Project.
In the New Project dialog box, choose the AWS Lambda Project (.NET Core – C#) template, and then choose OK or Next.
For Project Name, enter SampleLambda, and then choose Create.
From the Select Blueprint dialog, choose Empty Function, then choose Finish.
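
For reference, the Empty Function blueprint generates a handler along these lines (your generated file may differ slightly depending on the blueprint version); the handler string in the CDK stack below and the unit test later in this post assume this shape:

using Amazon.Lambda.Core;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace SampleLambda
{
    public class Function
    {
        /// <summary>
        /// A simple function that takes a string and returns it in upper case.
        /// </summary>
        public string FunctionHandler(string input, ILambdaContext context)
        {
            return input?.ToUpper();
        }
    }
}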

Next, create a new file in the CDK project at src/DotnetLambdaCdkPipeline/SampleLambdaStack.cs to define your application stack containing a Lambda function. Update the file with the following contents (adjust the namespace as necessary):

using Amazon.CDK;
using Amazon.CDK.AWS.Lambda;
using Constructs;
using AssetOptions = Amazon.CDK.AWS.S3.Assets.AssetOptions;

namespace DotnetLambdaCdkPipeline
{
    class SampleLambdaStack: Stack
    {
        public SampleLambdaStack(Construct scope, string id, StackProps props = null) : base(scope, id, props)
        {
            // Commands executed in a AWS CDK pipeline to build, package, and extract a .NET function.
            var buildCommands = new[]
            {
                "cd /asset-input",
                "export DOTNET_CLI_HOME=\"/tmp/DOTNET_CLI_HOME\"",
                "export PATH=\"$PATH:/tmp/DOTNET_CLI_HOME/.dotnet/tools\"",
                "dotnet build",
                "dotnet tool install -g Amazon.Lambda.Tools",
                "dotnet lambda package -o output.zip",
                "unzip -o -d /asset-output output.zip"
            };

            new Function(this, "LambdaFunction", new FunctionProps
            {
                Runtime = Runtime.DOTNET_6,
                Handler = "SampleLambda::SampleLambda.Function::FunctionHandler",

                // Asset path should point to the folder where the .csproj file is present.
                // Also, this path should be relative to the cdk.json file.
                Code = Code.FromAsset("./src/SampleLambda", new AssetOptions
                {
                    Bundling = new BundlingOptions
                    {
                        Image = Runtime.DOTNET_6.BundlingImage,
                        Command = new[]
                        {
                            "bash", "-c", string.Join(" && ", buildCommands)
                        }
                    }
                })
            });
        }
    }
}

Building inside a Docker container

The preceding code uses the CDK’s bundling feature to build the Lambda function inside a Docker container. Bundling starts a new Docker container, copies the Lambda source code into the /asset-input directory of the container, and runs the specified commands, which write the package files under the /asset-output directory. The files in /asset-output are copied as assets to the stack’s cloud assembly directory. In a later stage, these files are zipped and uploaded to S3 as the CDK asset.

Building Lambda functions inside Docker containers is preferable to building them locally because it reduces the host machine’s dependencies, resulting in greater consistency and reliability in your build process.

Bundling requires the creation of a docker container on your build machine. For this purpose, the privileged: true setting on the build machine has already been configured.

Adding development stage

Create a new file in the CDK project at src/DotnetLambdaCdkPipeline/DotnetLambdaCdkPipelineStage.cs to hold your stage. This class will create the development stage for your pipeline.

using Amazon.CDK;
using Constructs;

namespace DotnetLambdaCdkPipeline
{
    public class DotnetLambdaCdkPipelineStage : Stage
    {
        internal DotnetLambdaCdkPipelineStage(Construct scope, string id, IStageProps props = null) : base(scope, id, props)
        {
            Stack lambdaStack = new SampleLambdaStack(this, "LambdaStack");
        }
    }
}

Edit src/DotnetLambdaCdkPipeline/DotnetLambdaCdkPipelineStack.cs to add the stage to your pipeline. The new line is the devStage declaration at the end of the constructor in the code below; add it to your file.

using Amazon.CDK;
using Amazon.CDK.Pipelines;

namespace DotnetLambdaCdkPipeline
{
    public class DotnetLambdaCdkPipelineStack : Stack
    {
        internal DotnetLambdaCdkPipelineStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props)
        {
            var repository = Repository.FromRepositoryName(this, "repository", "<CODECOMMIT-REPOSITORY-NAME>");

            // This construct creates a pipeline with 3 stages: Source, Build, and UpdatePipeline
            var pipeline = new CodePipeline(this, "pipeline", new CodePipelineProps
            {
                PipelineName = "LambdaPipeline",
                .
                .
                .
            });

            var devStage = pipeline.AddStage(new DotnetLambdaCdkPipelineStage(this, "Development"));
        }
    }
}

Next, build the solution, then commit and push the changes to the CodeCommit repo. This will trigger the CodePipeline to start.

When the pipeline runs, the UpdatePipeline stage detects the changes and updates the pipeline based on the code it finds there. After the UpdatePipeline stage completes, the pipeline is updated with additional stages.

Let’s observe the changes:

An Assets stage has been added. This stage uploads all the assets you are using in your app to Amazon S3 (the S3 bucket created during bootstrapping) so that they can be used by other deployment stages later in the pipeline. For example, the CloudFormation template used by the development stage includes references to these assets, which is why the assets are first moved to S3 and then referenced in later stages.

A Development stage with two actions has been added. The first action is to create the change set, and the second is to execute it.

Figure 3: CDK pipeline with development stage to deploy .NET Lambda function

After the Deploy stage has completed, you can find the newly-deployed Lambda function by visiting the Lambda console, selecting “Functions” from the left menu, and filtering the functions list with “LambdaStack”. Note the runtime is .NET.
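
Alternatively, you can list the function from the AWS CLI (the profile and query shown here are illustrative):

aws lambda list-functions --profile <DEV-PROFILE> \
    --query "Functions[?contains(FunctionName, 'LambdaStack')].[FunctionName, Runtime]" \
    --output table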

Running Unit Test cases in the CodePipeline

Next, you will add unit test cases to your Lambda function, and run them through the pipeline to generate a test report in CodeBuild.

To create a Unit Test project, do the following:

Right click on the solution, choose Add, then choose New Project.
In the New Project dialog box, choose the xUnit Test Project template, and then choose OK or Next.
For Project Name, enter SampleLambda.Tests, and then choose Create or Next.
Depending on your version of Visual Studio, you may be prompted to select the version of .NET to use. Choose .NET 6.0 (Long Term Support), then choose Create.
Right click on SampleLambda.Tests project, choose Add, then choose Project Reference. Select SampleLambda project, and then choose OK.

Next, edit the src/SampleLambda.Tests/UnitTest1.cs file to add a unit test. You can use the code below, which verifies that the Lambda function returns the input string as upper case.

using Xunit;

namespace SampleLambda.Tests
{
    public class UnitTest1
    {
        [Fact]
        public void TestSuccess()
        {
            var lambda = new SampleLambda.Function();

            var result = lambda.FunctionHandler("test string", context: null);

            Assert.Equal("TEST STRING", result);
        }
    }
}

You can add pre-deployment or post-deployment actions to the stage by calling its AddPre() or AddPost() method. To execute the above test cases, we will use a pre-deployment action.

To add a pre-deployment action, we will edit the src/DotnetLambdaCdkPipeline/DotnetLambdaCdkPipelineStack.cs file in the CDK project, after we add code to generate test reports.

To run the unit test(s) and publish the test report in CodeBuild, we will construct a BuildSpec for our CodeBuild project. We also provide IAM policy statements to be attached to the CodeBuild service role granting it permissions to run the tests and create reports. Update the file by adding the new code (starting with “// Add this code for test reports”) below the devStage declaration you added earlier:

using Amazon.CDK;
using Amazon.CDK.Pipelines;

namespace DotnetLambdaCdkPipeline
{
    public class DotnetLambdaCdkPipelineStack : Stack
    {
        internal DotnetLambdaCdkPipelineStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props)
        {
            // …
            // …
            // …
            var devStage = pipeline.AddStage(new DotnetLambdaCdkPipelineStage(this, "Development"));

            // Add this code for test reports
            var reportGroup = new ReportGroup(this, "TestReports", new ReportGroupProps
            {
                ReportGroupName = "TestReports"
            });

            // Policy statements for CodeBuild Project Role
            var policyProps = new PolicyStatementProps()
            {
                Actions = new string[] {
                    "codebuild:CreateReportGroup",
                    "codebuild:CreateReport",
                    "codebuild:UpdateReport",
                    "codebuild:BatchPutTestCases"
                },
                Effect = Effect.ALLOW,
                Resources = new string[] { reportGroup.ReportGroupArn }
            };

            // PartialBuildSpec in AWS CDK for C# can be created using Dictionary
            var reports = new Dictionary<string, object>()
            {
                {
                    "reports", new Dictionary<string, object>()
                    {
                        {
                            reportGroup.ReportGroupArn, new Dictionary<string, object>()
                            {
                                { "file-format", "VisualStudioTrx" },
                                { "files", "**/*" },
                                { "base-directory", "./testresults" }
                            }
                        }
                    }
                }
            };
            // End of new code block
        }
    }
}

Finally, add the CodeBuildStep as a pre-deployment action to the development stage with necessary CodeBuildStepProps to set up reports. Add this after the new code you added above.

devStage.AddPre(new Step[]
{
    new CodeBuildStep("Unit Test", new CodeBuildStepProps
    {
        Commands = new string[]
        {
            "dotnet test -c Release ./src/SampleLambda.Tests/SampleLambda.Tests.csproj --logger trx --results-directory ./testresults",
        },
        PrimaryOutputDirectory = "./testresults",
        PartialBuildSpec = BuildSpec.FromObject(reports),
        RolePolicyStatements = new PolicyStatement[] { new PolicyStatement(policyProps) },
        BuildEnvironment = new BuildEnvironment
        {
            BuildImage = LinuxBuildImage.AMAZON_LINUX_2_4,
            ComputeType = ComputeType.MEDIUM
        }
    })
});

Build the solution, then commit and push the changes to the repository. Pushing the changes triggers the pipeline, runs the test cases, and publishes the report to the CodeBuild console. To view the report, after the pipeline has completed, navigate to TestReports in CodeBuild’s Report Groups as shown below.

Figure 4: Test report in CodeBuild report group

Deploying to production environment with manual approval

CDK Pipelines makes it very easy to deploy additional stages with different accounts. You have to bootstrap the accounts and Regions you want to deploy to, and they must have a trust relationship added to the pipeline account.

To bootstrap an additional production environment into which AWS CDK applications will be deployed by the pipeline, run the below command, substituting in the AWS account ID for your production account, the region you will use for your production environment, the AWS CLI profile to use with the prod account, and the AWS account ID where the pipeline is already deployed (the account you bootstrapped at the start of this blog).

cdk bootstrap aws://<PROD-ACCOUNT-ID>/<PROD-REGION> \
    --profile <PROD-PROFILE> \
    --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
    --trust <PIPELINE-ACCOUNT-ID>

The --trust option indicates which other account should have permissions to deploy AWS CDK applications into this environment. For this option, specify the pipeline’s AWS account ID.

Use the code below to add a new stage for production deployment with manual approval. Add this code below the devStage.AddPre(…) code block you added in the previous section, and remember to replace the placeholders with your AWS account ID and Region for your prod environment.

var prodStage = pipeline.AddStage(new DotnetLambdaCdkPipelineStage(this, "Production", new StageProps
{
    Env = new Environment
    {
        Account = "<PROD-ACCOUNT-ID>",
        Region = "<PROD-REGION>"
    }
}), new AddStageOpts
{
    Pre = new[] { new ManualApprovalStep("PromoteToProd") }
});

To support deploying CDK applications to another account, the artifact buckets must be encrypted, so add a CrossAccountKeys property to the CodePipeline near the top of the pipeline stack file and set the value to true (see the CrossAccountKeys line in the code snippet below). This creates a KMS key for the artifact bucket, allowing cross-account deployments.

var pipeline = new CodePipeline(this, "pipeline", new CodePipelineProps
{
    PipelineName = "LambdaPipeline",
    SelfMutation = true,
    CrossAccountKeys = true,
    EnableKeyRotation = true, // Enable KMS key rotation for the generated KMS keys

    // …
});

After you commit and push the changes to the repository, a new manual approval step called PromoteToProd is added to the Production stage of the pipeline. The pipeline pauses at this step and awaits manual approval as shown in the screenshot below.

Figure 5: Pipeline waiting for manual review

When you click the Review button, you are presented with the following dialog. From here, you can choose to approve or reject and add comments if needed.

Figure 6: Manual review approval dialog

Once you approve, the pipeline resumes, executes the remaining steps, and completes the deployment to the production environment.

Figure 7: Successful deployment to production environment

Clean up

To avoid incurring future charges, log in to the AWS console for each of the accounts you used, go to the AWS CloudFormation console in the Region(s) where you chose to deploy, and select and delete the stacks created for this activity. Alternatively, you can delete the CloudFormation stack(s) using the cdk destroy command; note that it will not delete the CDKToolkit stack that the bootstrap command created. If you want to delete that as well, you can do so from the AWS Console.
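
For example, to remove the pipeline stack that you deployed from your workstation (the profile flag is illustrative):

cdk destroy DotnetLambdaCdkPipelineStack --profile <DEV-PROFILE>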

Conclusion

In this post, you learned how to use CDK Pipelines for automating the deployment process of .NET Lambda functions. An intuitive and flexible architecture makes it easy to set up a CI/CD pipeline that covers the entire application lifecycle, from build and test to deployment. With CDK Pipelines, you can streamline your development workflow, reduce errors, and ensure consistent and reliable deployments.
For more information on CDK Pipelines and all the ways it can be used, see the CDK Pipelines reference documentation.

About the authors:

Ankush Jain

Ankush Jain is a Cloud Consultant at AWS Professional Services based out of Pune, India. He currently focuses on helping customers migrate their .NET applications to AWS. He is passionate about cloud, with a keen interest in serverless technologies.

Sanjay Chaudhari

Sanjay Chaudhari is a Cloud Consultant with AWS Professional Services. He works with customers to migrate and modernize their Microsoft workloads to the AWS Cloud.

Extending CloudFormation and CDK with Third-Party Extensions

Did you know you can use CloudFormation to manage third-party resources? The AWS CloudFormation Public Registry provides a searchable collection of CloudFormation extensions and makes it easy to discover and provision them in CloudFormation templates and AWS Cloud Development Kit (CDK) applications. In the past three months, we’ve added a number of new, exciting partners to the Public Registry, including GitLab, Okta, and PagerDuty.

The extensions available on the registry are wide-ranging and include third-party resources from partners such as MongoDB; hooks, which are preventative controls that add safeguards to provisioning; and modules, which are re-usable components that take into account best practices and opinionated definitions of resources. AWS Partner Network (APN), third parties, and the developer community contribute these extensions to the Public Registry. Using extensions, customers no longer need to create and maintain custom provisioning logic for resource types from third-party vendors.

Over the last few months, AWS has collaborated with partners to develop and publish over 80 new resources across 14 providers to the Public Registry for CloudFormation. Below is a summary of the new resource type additions.

Recently Updated Third-Party Providers

MongoDB Atlas – Manage components in MongoDB Atlas. Add, edit, or delete administrative objects within Atlas, including projects, users, and database deployments. (Note: You cannot read or write data to Atlas clusters with the Atlas Admin APIs and AWS CloudFormation resources. To read and write data in Atlas, you must use the Atlas Data API.)

GitLab – Manage the users and groups in an organization, set up a new project with the right users, groups, and access token, tag a project automatically for every active CI/CD deployment

New Relic – Create a new dashboard with custom pages, widgets, and layout; add tags to your data to help improve data organization and findability; workloads-related tasks

GitHub – Manage the users and groups in an organization, set up a new project with the right users, groups, and access token, add a webhook to a repo

Dynatrace – Set up a new project with service level objectives, locations, monitors, and metrics

Okta – Onboard a new application into Okta with the right users and groups

PagerDuty – Set up monitoring of a new or existing application

Databricks – Set up a Databricks cluster and jobs

Fastly – Configure Fastly as a CDN for your web app

BigID – Connect S3 and DynamoDB data sources into your BigID application

Rollbar – Set up a new Rollbar project and manage rules, teams, and users

Cloudflare – Configure a DNS record and load balancing using Cloudflare

Lacework – Configure Lacework alert profiles, rules, and channels, and manage queries

Snowflake – Create databases and users, and manage privileges

Key Benefits

Here are some of the benefits for extension builders and consumers when publishing extensions to the public registry:

Discoverability – Publishing your extensions in the public registry will make them discoverable by 1M+ active CloudFormation and CDK customers.

CDK Support – We’re seeing rapid growth in the adoption of the CDK amongst the developer population. Upon publishing to the registry, L1 CDK constructs will automatically be created for your third-party resources, making them compatible with the CDK with no added work required. These constructs will also be listed on Construct Hub, making them discoverable by customers. Note: Automated L1 CDK construct generation is currently an experimental feature.

Drift detection – Third-party resource types in the public registry also integrate with drift detection. After creating a resource from a third-party resource type, CloudFormation will detect changes to the third-party resource from its template configuration, known as configuration drift, just as it would with AWS resources.

AWS Config – You can also use AWS Config to manage compliance for third-party resources consumed from the registry. The resource types are automatically tracked as Configuration Items when you have configured AWS Config to record them, and used CloudFormation to create, update, and delete them. Whether the resource types you use are third-party or AWS resources, you can view configuration history for them, in addition to being able to write AWS Config rules to verify configuration best practices.

Abstraction of Best Practices with Modules – Browse and use modules from the registry when creating your CloudFormation templates to ensure you’re provisioning resources while adhering to best practices.

AWS Cloud Control API – The AWS Cloud Control API allows AWS partners and customers to interface with your resource type through API calls using Create, Read, Update, Delete, and List (CRUD-L) operations. Resource types in the registry are automatically integrated with the AWS Cloud Control API, which expands your third-party resource compatibility to even more AWS services and IaC tools.
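
As a quick illustration of that CRUD-L surface, once a third-party type is activated you can manage it with the AWS CLI’s cloudcontrol commands (the type name and identifier below are placeholders; substitute the type you activated):

# List resources of an activated third-party type
aws cloudcontrol list-resources --type-name MongoDB::Atlas::Project

# Read a single resource by its identifier
aws cloudcontrol get-resource --type-name MongoDB::Atlas::Project --identifier <project-id>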

We’ve seen great momentum from our partners and developer community over the past year. We are looking forward to continued investment and innovation in the Public Registry.

How to Get Started

For Resource Type Users: Explore and Activate Third Party Resource Types

Third-party resource types must be activated before they can be used. To do this, log in to the AWS Console, navigate to CloudFormation > Registry > Public extensions, and set the Publisher filter to Third Party. This shows a list of the available third-party resources in your Region (note that different Regions may have a different set of third-party resource types). Select the radio button next to the resource types you want to activate and choose Activate at the top of the list.

Figure 1:
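
If you prefer the CLI, you can discover and activate public extensions there as well; a sketch (the type name and publisher ID are placeholders):

# Find third-party extensions available in your Region
aws cloudformation list-types --visibility PUBLIC --filters Category=THIRD_PARTY

# Activate a specific extension
aws cloudformation activate-type --type RESOURCE \
    --type-name <TYPE-NAME> --publisher-id <PUBLISHER-ID>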

Don’t see the extension you need in the registry?

You can submit requests for new third-party extensions through our Community Registry Extensions GitHub repo issue tracker! Click the New Issue button and describe the third-party extension along with information about your use case.

For Developers and Publishers: Join the CloudFormation Developer Community and Start Building

You can see several of the community-built registry extensions in the AWS CloudFormation Community Registry Extensions repository and even contribute yourself. You can also read about the experiences and lessons learned from publishing to the Registry through this blog written by Cloudsoft.

For developers looking to create new resource types to add to the public registry, follow this creating resource types walkthrough to help you get started. If you need assistance creating or publishing resources, or just want to join the discussion, you can join the conversation today in our CloudFormation Discord channel. We’d love to hear about your experiences and use cases in developing innovations with registry extensions.

About the authors:

Anuj Sharma

Anuj Sharma is a Sr Container Partner Solution Architect with Amazon Web Services. He works with ISV partners and drives Partner-AWS product development and integrations.

Lucas Chen

Lucas is a Senior Product Manager at Amazon Web Services. He leads the CloudFormation Registry and its integrations with third-party products. Prior to AWS, he spent 9 years at VMware working on its end user computing product, Workspace ONE.

Rahul Sharma

Rahul is a Senior Product Manager-Technical at Amazon Web Services with over two years of product management spanning AWS CloudFormation and AWS Cloud Control API.

Introducing Finch: An Open Source Client for Container Development

Today we are happy to announce a new open source project, Finch. Finch is a new command line client for building, running, and publishing Linux containers. It provides for simple installation of a native macOS client, along with a curated set of de facto standard open source components including Lima, nerdctl, containerd, and BuildKit. With Finch, you can create and run containers locally, and build and publish Open Container Initiative (OCI) container images.

At launch, Finch is a new project in its early days with basic functionality, initially only supporting macOS (on all Mac CPU architectures). Rather than iterating in private and releasing a finished project, we feel open source is most successful when diverse voices come to the party. We have plans for features and innovations, but opening the project this early will lead to a more robust and useful solution for all. We are happy to address issues, and are ready to accept pull requests. We’re also hopeful that with our adoption of these open source components from which Finch is composed, we’ll increase focus and attention on these components, and add more hands to the important work of open source maintenance and stewardship. In particular, Justin Cormack, CTO of Docker shared that “we’re bullish about Finch’s adoption of containerd and BuildKit, and we look forward to AWS working with us on upstream contributions.”

We are excited to build Finch in the open with interested collaborators. We want to expand Finch from its current basic starting point to cover Windows and Linux platforms and additional functionality that we’ve put on our roadmap, but would love your ideas as well. Please open issues or file pull requests and start discussing your ideas with us in the Finch Slack channel. Finch is licensed under the Apache 2.0 license and anyone can freely use it.

Why build Finch?

For building and running Linux containers on non-Linux hosts, there are existing commercial products as well as an array of purpose-built open source projects. While companies may be able to assemble a simple command line tool from existing open source components, most organizations want their developers to focus on building their applications, not on building tools.

At AWS, we began looking at the available open source components for container tooling and were immediately impressed with the progress of Lima, recently included in the Cloud Native Computing Foundation (CNCF) as a sandbox project. The goal of Lima is to promote containerd and nerdctl to Mac users, and this aligns very well with our existing investment in both using and contributing to the CNCF graduated project, containerd. Rather than introducing another tool and fragmenting open source efforts, the team decided to integrate with Lima and is making contributions to the project. Akihiro Suda, creator of nerdctl and Lima and a longtime maintainer of containerd, BuildKit, and runc, added “I’m excited to see AWS contributing to nerdctl and Lima and very happy to see the community growing around these projects. I look forward to collaborating with AWS contributors to improve Lima and nerdctl alongside Finch.”

Finch is our response to the complexity of curating and assembling an open source container development tool for macOS initially, followed by Windows and Linux in the future. We are curating the components, depending directly on Lima and nerdctl, and packaging them together with their dependencies into a simple installer for macOS. Finch, via its macOS-native client, acts as a passthrough to nerdctl which is running in a Lima-managed virtual machine. All of the moving parts are abstracted away behind the simple and easy-to-use Finch client. Finch manages and installs all required open source components and their dependencies, removing any need for you to manage dependency updates and fixes.

The core Finch client will always be a curated distribution composed entirely of open source, vendor-neutral projects. We also want Finch to be customizable for downstream consumers to create their own extensions and value-added features for specific use cases. We know that AWS customers will want extensions that make it easier for local containers to integrate with AWS cloud services. However, these will be opt-in extensions that don’t impact or fragment the open source core or upstream dependencies that Finch depends on. Extensions will be maintained as separate projects with their own release cycles. We feel this model strikes a perfect balance for providing specific features while still collaborating in the open with Finch and its upstream dependencies. Since the project is open source, Finch provides a great starting point for anyone looking to build their own custom-purpose container client.

In summary, with Finch we’ve curated a common stack of open source components that are built and tested to work together, and married it with a simple, native tool. Finch is a project with a lot of collective container knowledge behind it. Our goal is to provide a minimal and simple build/run/push/pull experience, focused on the core workflow commands. As the project evolves, we will be working on making the virtualization component more transparent for developers with a smaller footprint and faster boot times, as well as pursuing an extensibility framework so you can customize Finch however you’d like.

Over time, we hope that Finch will become a proving ground for new ideas as well as a way to support our existing customers who asked us for an open source container development tool. While an AWS account is not required to use Finch, if you’re an AWS customer we will support you under your current AWS Support plans when using Finch along with AWS services.

What can you do with Finch?

Since Finch is integrated directly with nerdctl, all of the typical commands and options that you’ve become fluent with will work the same as if you were running natively on Linux. You can pull images from registries, run containers locally, and build images using your existing Dockerfiles. Finch also enables you to build and run images for either amd64 or arm64 architectures using emulation, which means you can build images for either (or both) architectures from your M1 Apple Silicon or Intel-based Mac. With the initial launch, support for volumes and networks is in place, and Compose is supported to run and test multiple container applications.

Once you have installed Finch from the project repository, you can get started building and running containers. As mentioned previously, for our initial launch only macOS is supported.

To install Finch on macOS, download the latest release package. Opening the package file will walk you through the standard experience of a macOS application installation.

Finch has no GUI at this time and offers a simple command line client without additional integrations for cluster management or other container orchestration tools. Over time, we are interested in adding extensibility to Finch with optional features that you can choose to enable.

After install, you must initialize and start Finch’s virtual environment. Run the following command to start the VM:
finch vm init

To start Finch’s virtual environment (for example, after reboots) run:
finch vm start

Now, let’s run a simple container. The run command will pull an image if not already present, then create and start the container instance. The --rm flag will delete the container once the container command exits.

finch run --rm public.ecr.aws/finch/hello-finch
public.ecr.aws/finch/hello-finch:latest: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:a71e474da9ffd6ec3f8236dbf4ef807dd54531d6f05047edaeefa758f1b1bb7e: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:705cac764e12bd6c5b0c35ee1c9208c6c5998b442587964b1e71c6f5ed3bbe46: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:6cc2bf972f32c6d16519d8916a3dbb3cdb6da97cc1b49565bbeeae9e2591cc60: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 0.9 s total: 0.0 B (0.0 B/s)

@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@ @@@@@@@@@@@
@@@@@@@ @@@@@@@
@@@@@@ @@@@@@
@@@@@@ @@@@@
@@@@@ @@@# @@@@@@@@@
@@@@@ @@ @@@ @@@@@@@@@@
@@@@% @ @@ @@@@@@@@@@@
@@@@ @@@@@@@@
@@@@ @@@@@@@@@@@&
@@@@@ &@@@@@@@@@@@
@@@@@ @@@@@@@@
@@@@@ @@@@@(
@@@@@@ @@@@@@
@@@@@@@ @@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@

Hello from Finch!

Visit us @ github.com/runfinch

Lima supports userspace emulation in the underlying virtual machine. While all the images we create and use in the following example are Linux images, the Lima VM is emulating the CPU architecture of your host system, which might be 64-bit Intel or Apple Silicon-based. In the following examples we will show that no matter which CPU architecture your Mac system uses, you can author, publish, and use images for either CPU family. In the following example we will build an x86_64-architecture image on an Apple Silicon laptop, push it to ECR, and then run it on an Intel-based Mac laptop.

To verify that we are running our commands on an Apple Silicon-based Mac, we can run uname and see the architecture listed as arm64:

uname -sm
Darwin arm64

Let’s create and run an amd64 container using the --platform option to specify the non-native architecture:

finch run --rm --platform=linux/amd64 public.ecr.aws/amazonlinux/amazonlinux uname -sm
Linux x86_64

The --platform option can be used for builds as well. Let’s create a simple Dockerfile with two lines:

FROM public.ecr.aws/amazonlinux/amazonlinux:latest
LABEL maintainer="Chris Short"

By default, Finch would build for the host’s CPU architecture platform, which we showed is arm64 above. Instead, let’s build and push an amd64 container to ECR. To build an amd64 image we add the --platform flag to our command:

finch build --platform linux/amd64 -t public.ecr.aws/cbshort/finch-multiarch .
[+] Building 6.5s (6/6) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 142B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for public.ecr.aws/amazonlinux/amazonlinux:latest 1.2s
=> [auth] aws:: amazonlinux/amazonlinux:pull token for public.ecr.aws 0.0s
=> [1/1] FROM public.ecr.aws/amazonlinux/amazonlinux:latest@sha256:d0cc2f24c888613be336379e7104a216c9aa881c74d6df15e30286f67 3.9s
=> => resolve public.ecr.aws/amazonlinux/amazonlinux:latest@sha256:d0cc2f24c888613be336379e7104a216c9aa881c74d6df15e30286f67 0.0s
=> => sha256:e3cfe889ce0a44ace07ec174bd2a7e9022e493956fba0069812a53f81a6040e2 62.31MB / 62.31MB 5.1s
=> exporting to oci image format 5.2s
=> => exporting layers 0.0s
=> => exporting manifest sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652 0.0s
=> => exporting config sha256:474c401eafe6b05f5a4b5b4128d7b0023f93c705e0328243501e5d6c7d1016a8 0.0s
=> => sending tarball 1.3s
unpacking public.ecr.aws/cbshort/finch-multiarch:latest (sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652)…
Loaded image: public.ecr.aws/cbshort/finch-multiarch:latest%

finch push public.ecr.aws/cbshort/finch-multiarch
INFO[0000] pushing as a reduced-platform image (application/vnd.docker.distribution.manifest.v2+json, sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652)
manifest-sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:474c401eafe6b05f5a4b5b4128d7b0023f93c705e0328243501e5d6c7d1016a8: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 27.9s total: 1.6 Ki (60.0 B/s)

At this point, we’ve created an image on an Apple Silicon-based Mac that can be used on any Intel/AMD CPU architecture Linux host with an OCI-compliant container runtime. This could be an Intel or AMD CPU EC2 instance, an on-premises Intel NUC, or, as we show next, an Intel CPU-based Mac. To show this capability, we’ll run our newly created image on an Intel-based Mac where we have Finch already installed. Note that we have run uname here to show that this Mac’s architecture is x86_64, which the Go ecosystem (and therefore most container tooling) refers to as amd64.

uname -a
Darwin wile.local 21.6.0 Darwin Kernel Version 21.6.0: Thu Sep 29 20:12:57 PDT 2022; root:xnu-8020.240.7~1/RELEASE_X86_64 x86_64

finch run --rm --platform linux/amd64 public.ecr.aws/cbshort/finch-multiarch:latest uname -a
public.ecr.aws/cbshort/finch-multiarch:latest: resolved |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:474c401eafe6b05f5a4b5b4128d7b0023f93c705e0328243501e5d6c7d1016a8: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:e3cfe889ce0a44ace07ec174bd2a7e9022e493956fba0069812a53f81a6040e2: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 9.2 s total: 59.4 M (6.5 MiB/s)
Linux 73bead2f506b 5.17.5-300.fc36.x86_64 #1 SMP PREEMPT Thu Apr 28 15:51:30 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

You can see the commands and options are familiar. Because Finch passes our commands through to the nerdctl client, all of the command syntax and options are what you’d expect, and new users can refer to nerdctl’s docs.

Another use case is multi-container application testing. Let’s use yelb as an example app that we want to run locally. What is yelb? It’s a simple web application with a cache, database, app server, and UI, all run as containers on a network that we’ll create. We will run yelb to demonstrate Finch’s Compose support for microservices:

finch vm init
INFO[0000] Initializing and starting finch virtual machine…
INFO[0079] Finch virtual machine started successfully

finch compose up -d
INFO[0000] Creating network localtest_default
INFO[0000] Ensuring image redis:4.0.2
docker.io/library/redis:4.0.2: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:cd277716dbff2c0211c8366687d275d2b53112fecbf9d6c86e9853edb0900956: done |++++++++++++++++++++++++++++++++++++++|

[ snip ]

layer-sha256:afb6ec6fdc1c3ba04f7a56db32c5ff5ff38962dc4cd0ffdef5beaa0ce2eb77e2: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 11.4s total: 30.1 M (2.6 MiB/s)
INFO[0049] Creating container localtest_yelb-appserver_1
INFO[0049] Creating container localtest_redis-server_1
INFO[0049] Creating container localtest_yelb-db_1
INFO[0049] Creating container localtest_yelb-ui_1

The output indicates that a network was created, the required images were pulled, and the containers were created and are now all running in our local test environment.

In this test case, we’re using Yelb to figure out where a small team should grab lunch. We share the URL with our team, folks vote, and we see the results in the UI.

What’s next for Finch?

The project is just getting started. The team will work on adding features iteratively, and is excited to hear from you. We have ideas on making the virtualization more minimal, with faster boot times to make it more transparent for users. We are also interested in making Finch extensible, allowing for optional add-on functionality. As the project evolves, the team will direct contributions into the upstream dependencies where appropriate. We are excited to support and contribute to the success of our core dependencies: nerdctl, containerd, BuildKit, and Lima. As mentioned previously, one of the exciting things about Finch is shining a light on the projects it depends upon.

Please join us! Start a discussion, open an issue with new ideas, or report any bugs you find, and we are definitely interested in your pull requests. We plan to evolve Finch in public, by building out milestones and a roadmap with input from our users and contributors. We’d also love feedback from you about your experiences building and using containers daily and how Finch might be able to help!


Multi-branch pipeline management and infrastructure deployment using AWS CDK Pipelines

This post describes how to use the AWS CDK Pipelines module to follow a Gitflow development model using AWS Cloud Development Kit (AWS CDK). Software development teams often follow a strict branching strategy during a solutions development lifecycle. Newly-created branches commonly need their own isolated copy of infrastructure resources to develop new features.

CDK Pipelines is a construct library module for continuous delivery of AWS CDK applications. CDK Pipelines are self-updating: if you add application stages or stacks, then the pipeline automatically reconfigures itself to deploy those new stages and/or stacks.
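To make this concrete, here is a minimal sketch of what a self-mutating CDK Pipeline can look like in Python. This snippet is not taken from the solution’s repository; it assumes CDK v2 (aws-cdk-lib), and the repository and branch names are illustrative:

from aws_cdk import Stack, pipelines, aws_codecommit as codecommit
from constructs import Construct


class PipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Source repository (name is illustrative)
        repo = codecommit.Repository.from_repository_name(self, 'Repo', 'my-repo')

        # Self-mutating pipeline: adding stages or stacks later updates the pipeline itself
        pipelines.CodePipeline(
            self,
            'Pipeline',
            synth=pipelines.ShellStep(
                'Synth',
                input=pipelines.CodePipelineSource.code_commit(repo, 'main'),
                commands=[
                    'npm install -g aws-cdk',
                    'pip install -r requirements.txt',
                    'cdk synth'
                ]
            )
        )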

The following solution creates a new AWS CDK Pipeline within a development account for every new branch created in the source repository (AWS CodeCommit). When a branch is deleted, the pipeline and all related resources are also destroyed from the account. This GitFlow model for infrastructure provisioning allows developers to work independently from each other, concurrently, even in the same stack of the application.

Solution overview

The following diagram provides an overview of the solution. There is one default pipeline responsible for deploying resources to the different application environments (e.g., Development, Pre-Prod, and Prod). The code is stored in CodeCommit. When new changes are pushed to the default CodeCommit repository branch, AWS CodePipeline runs the default pipeline. When the default pipeline is deployed, it creates two AWS Lambda functions.

These two Lambda functions are invoked by CodeCommit CloudWatch events when a branch in the repository is created or deleted. The Create Lambda function uses the boto3 CodeBuild module to create an AWS CodeBuild project that builds the pipeline for the feature branch. This feature pipeline consists of a build stage and an optional update pipeline stage for itself. The Destroy Lambda function creates another CodeBuild project which cleans up all of the feature branch’s resources and the feature pipeline.

Figure 1. Architecture diagram.

Prerequisites

Before beginning this walkthrough, you should have the following prerequisites:

An AWS account
AWS CDK installed
Python3 installed
jq (JSON processor) installed
Basic understanding of continuous integration/continuous delivery (CI/CD) pipelines

Initial setup

Download the repository from GitHub:

# Command to clone the repository
git clone https://github.com/aws-samples/multi-branch-cdk-pipelines.git
cd multi-branch-cdk-pipelines

Create a new CodeCommit repository in the AWS Account and region where you want to deploy the pipeline and upload the source code from above to this repository. In the config.ini file, change the repository_name and region variables accordingly.

Make sure that you set up a fresh Python environment. Install the dependencies:

pip install -r requirements.txt

Run the initial-deploy.sh script to bootstrap the development and production environments and to deploy the default pipeline. You’ll be asked to provide the following parameters: (1) Development account ID, (2) Development account AWS profile name, (3) Production account ID, and (4) Production account AWS profile name.

sh ./initial-deploy.sh --dev_account_id <YOUR DEV ACCOUNT ID> \
  --dev_profile_name <YOUR DEV PROFILE NAME> \
  --prod_account_id <YOUR PRODUCTION ACCOUNT ID> \
  --prod_profile_name <YOUR PRODUCTION PROFILE NAME>

Default pipeline

In the CI/CD pipeline, we set up an if condition to deploy the default branch resources only if the current branch is the default one. The default branch is retrieved programmatically from the CodeCommit repository. We deploy an Amazon Simple Storage Service (Amazon S3) Bucket and two Lambda functions. The bucket is responsible for storing the feature branches’ CodeBuild artifacts. The first Lambda function is triggered when a new branch is created in CodeCommit. The second one is triggered when a branch is deleted.

if branch == default_branch:

    # Artifact bucket for feature AWS CodeBuild projects
    artifact_bucket = Bucket(
        self,
        'BranchArtifacts',
        encryption=BucketEncryption.KMS_MANAGED,
        removal_policy=RemovalPolicy.DESTROY,
        auto_delete_objects=True
    )

    # AWS Lambda function triggered upon branch creation
    create_branch_func = aws_lambda.Function(
        self,
        'LambdaTriggerCreateBranch',
        runtime=aws_lambda.Runtime.PYTHON_3_8,
        function_name='LambdaTriggerCreateBranch',
        handler='create_branch.handler',
        code=aws_lambda.Code.from_asset(path.join(this_dir, 'code')),
        environment={
            "ACCOUNT_ID": dev_account_id,
            "CODE_BUILD_ROLE_ARN": iam_stack.code_build_role.role_arn,
            "ARTIFACT_BUCKET": artifact_bucket.bucket_name,
            "CODEBUILD_NAME_PREFIX": codebuild_prefix
        },
        role=iam_stack.create_branch_role)

    # AWS Lambda function triggered upon branch deletion
    destroy_branch_func = aws_lambda.Function(
        self,
        'LambdaTriggerDestroyBranch',
        runtime=aws_lambda.Runtime.PYTHON_3_8,
        function_name='LambdaTriggerDestroyBranch',
        handler='destroy_branch.handler',
        role=iam_stack.delete_branch_role,
        environment={
            "ACCOUNT_ID": dev_account_id,
            "CODE_BUILD_ROLE_ARN": iam_stack.code_build_role.role_arn,
            "ARTIFACT_BUCKET": artifact_bucket.bucket_name,
            "CODEBUILD_NAME_PREFIX": codebuild_prefix,
            "DEV_STAGE_NAME": f'{dev_stage_name}-{dev_stage.main_stack_name}'
        },
        code=aws_lambda.Code.from_asset(path.join(this_dir, 'code')))
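The default branch used in this check can be looked up programmatically rather than hardcoded. A minimal sketch of that lookup with boto3 (assuming the repository name and Region come from config.ini) might look like this:

import boto3


def get_default_branch(repository_name: str, region: str) -> str:
    """Return the default branch configured on the CodeCommit repository."""
    client = boto3.client('codecommit', region_name=region)
    response = client.get_repository(repositoryName=repository_name)
    return response['repositoryMetadata']['defaultBranch']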

Then, the CodeCommit repository is configured to trigger these Lambda functions based on two events:

(1) Reference created

# Configure AWS CodeCommit to trigger the Lambda function when a new branch is created
repo.on_reference_created(
    'BranchCreateTrigger',
    description="AWS CodeCommit reference created event.",
    target=aws_events_targets.LambdaFunction(create_branch_func))

(2) Reference deleted

# Configure AWS CodeCommit to trigger the Lambda function when a branch is deleted
repo.on_reference_deleted(
    'BranchDeleteTrigger',
    description="AWS CodeCommit reference deleted event.",
    target=aws_events_targets.LambdaFunction(destroy_branch_func))

Lambda functions

The two Lambda functions build and destroy application environments mapped to each feature branch. An Amazon CloudWatch event triggers the LambdaTriggerCreateBranch function whenever a new branch is created. The function then uses the boto3 CodeBuild client to create and start a build project that deploys the feature pipeline.
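The handlers shown below also rely on some module-level setup: a logger, a boto3 CodeBuild client, and configuration read from the environment variables that the CDK stack above passes to each function. A minimal sketch of that setup, with variable names matching the handlers, might look like this:

import logging
import os

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# boto3 CodeBuild client used to create, start, and delete the branch build projects
client = boto3.client('codebuild')

# Configuration injected by the CDK stack via Lambda environment variables
account_id = os.environ['ACCOUNT_ID']
role_arn = os.environ['CODE_BUILD_ROLE_ARN']
artifact_bucket_name = os.environ['ARTIFACT_BUCKET']
codebuild_name_prefix = os.environ['CODEBUILD_NAME_PREFIX']
region = os.environ['AWS_REGION']  # provided automatically by the Lambda runtime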

Create function

The create function deploys a feature pipeline which consists of a build stage and an optional update pipeline stage for itself. The pipeline downloads the feature branch code from the CodeCommit repository, initiates the Build and Test action using CodeBuild, and securely saves the built artifact on the S3 bucket.

The Lambda function handler code is as follows:

def handler(event, context):
    """Lambda function handler"""
    logger.info(event)

    reference_type = event['detail']['referenceType']

    try:
        if reference_type == 'branch':
            branch = event['detail']['referenceName']
            repo_name = event['detail']['repositoryName']

            client.create_project(
                name=f'{codebuild_name_prefix}-{branch}-create',
                description="Build project to deploy branch pipeline",
                source={
                    'type': 'CODECOMMIT',
                    'location': f'https://git-codecommit.{region}.amazonaws.com/v1/repos/{repo_name}',
                    'buildspec': generate_build_spec(branch)
                },
                sourceVersion=f'refs/heads/{branch}',
                artifacts={
                    'type': 'S3',
                    'location': artifact_bucket_name,
                    'path': f'{branch}',
                    'packaging': 'NONE',
                    'artifactIdentifier': 'BranchBuildArtifact'
                },
                environment={
                    'type': 'LINUX_CONTAINER',
                    'image': 'aws/codebuild/standard:4.0',
                    'computeType': 'BUILD_GENERAL1_SMALL'
                },
                serviceRole=role_arn
            )

            client.start_build(
                projectName=f'CodeBuild-{branch}-create'
            )
    except Exception as e:
        logger.error(e)

Create branch CodeBuild project’s buildspec.yaml content:

version: 0.2
env:
  variables:
    BRANCH: {branch}
    DEV_ACCOUNT_ID: {account_id}
    PROD_ACCOUNT_ID: {account_id}
    REGION: {region}
phases:
  pre_build:
    commands:
      - npm install -g aws-cdk && pip install -r requirements.txt
  build:
    commands:
      - cdk synth
      - cdk deploy --require-approval=never
artifacts:
  files:
    - '**/*'
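The generate_build_spec helper referenced by the handler is not shown here; conceptually, it just renders a buildspec like the one above with the branch, account, and Region filled in. A minimal sketch (reusing the module-level account_id and region values) might look like this:

def generate_build_spec(branch: str) -> str:
    """Render the buildspec used by the feature branch build project."""
    return f"""version: 0.2
env:
  variables:
    BRANCH: {branch}
    DEV_ACCOUNT_ID: {account_id}
    PROD_ACCOUNT_ID: {account_id}
    REGION: {region}
phases:
  pre_build:
    commands:
      - npm install -g aws-cdk && pip install -r requirements.txt
  build:
    commands:
      - cdk synth
      - cdk deploy --require-approval=never
artifacts:
  files:
    - '**/*'
"""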

Destroy function

The second Lambda function is responsible for the destruction of a feature branch’s resources. Upon the deletion of a feature branch, an Amazon CloudWatch event triggers this Lambda function. The function creates a CodeBuild Project which destroys the feature pipeline and all of the associated resources created by that pipeline. The source property of the CodeBuild Project is the feature branch’s source code saved as an artifact in Amazon S3.

The Lambda function handler code is as follows:

def handler(event, context):
    logger.info(event)
    reference_type = event['detail']['referenceType']

    try:
        if reference_type == 'branch':
            branch = event['detail']['referenceName']
            client.create_project(
                name=f'{codebuild_name_prefix}-{branch}-destroy',
                description="Build project to destroy branch resources",
                source={
                    'type': 'S3',
                    'location': f'{artifact_bucket_name}/{branch}/CodeBuild-{branch}-create/',
                    'buildspec': generate_build_spec(branch)
                },
                artifacts={
                    'type': 'NO_ARTIFACTS'
                },
                environment={
                    'type': 'LINUX_CONTAINER',
                    'image': 'aws/codebuild/standard:4.0',
                    'computeType': 'BUILD_GENERAL1_SMALL'
                },
                serviceRole=role_arn
            )

            client.start_build(
                projectName=f'CodeBuild-{branch}-destroy'
            )

            client.delete_project(
                name=f'CodeBuild-{branch}-destroy'
            )

            client.delete_project(
                name=f'CodeBuild-{branch}-create'
            )
    except Exception as e:
        logger.error(e)

Destroy the branch CodeBuild project’s buildspec.yaml content:

version: 0.2
env:
  variables:
    BRANCH: {branch}
    DEV_ACCOUNT_ID: {account_id}
    PROD_ACCOUNT_ID: {account_id}
    REGION: {region}
phases:
  pre_build:
    commands:
      - npm install -g aws-cdk && pip install -r requirements.txt
  build:
    commands:
      - cdk destroy cdk-pipelines-multi-branch-{branch} --force
      - aws cloudformation delete-stack --stack-name {dev_stage_name}-{branch}
      - aws s3 rm s3://{artifact_bucket_name}/{branch} --recursive

Create a feature branch

On your machine’s local copy of the repository, create a new feature branch using the following git commands. Replace user-feature-123 with a unique name for your feature branch. Note that this feature branch name must comply with the CodePipeline naming restrictions, as it will be used to name a unique pipeline later in this walkthrough.

# Create the feature branch
git checkout -b user-feature-123
git push origin user-feature-123

The first Lambda function will deploy the CodeBuild project, which then deploys the feature pipeline. This can take a few minutes. You can log in to the AWS Console and see the CodeBuild project running under CodeBuild.

Figure 2. AWS Console – CodeBuild projects.

After the build is successfully finished, you can see the deployed feature pipeline under CodePipelines.

Figure 3. AWS Console – CodePipeline pipelines.

The Lambda S3 trigger project from AWS CDK Samples is used as the example infrastructure to demonstrate this solution. The content is placed inside the src directory and is deployed by the pipeline. When visiting the Lambda console page, you can see two functions: one deployed by the default pipeline and one by our feature pipeline.

Figure 4. AWS Console – Lambda functions.

Destroy a feature branch

There are two common ways of removing feature branches. The first one is related to a pull request, also known as a “PR”: when the feature branch is merged back into the default branch, it can then be deleted as part of the merge. The second way is to delete the feature branch explicitly by running the following git commands:

# delete the branch locally
git branch -d user-feature-123

# delete the branch on the remote
git push origin --delete user-feature-123

The CodeBuild project responsible for destroying the feature resources is now triggered. You can see the project’s logs while the resources are being destroyed in CodeBuild, under Build history.

Figure 5. AWS Console – CodeBuild projects.

Cleaning up

To avoid incurring future charges, log in to the AWS console of the different accounts you used, go to the AWS CloudFormation console of the Region(s) where you chose to deploy, and delete the main and branch stacks.

Conclusion

This post showed how you can work with an event-driven strategy and AWS CDK to implement a multi-branch pipeline flow using AWS CDK Pipelines. The described solutions leverage Lambda and CodeBuild to provide a dynamic orchestration of resources for multiple branches and pipelines.
For more information on CDK Pipelines and all the ways it can be used, see the CDK Pipelines reference documentation.

About the authors:

Iris Kraja

Iris is a Cloud Application Architect at AWS Professional Services based in New York City. She is passionate about helping customers design and build modern AWS cloud native solutions, with a keen interest in serverless technology, event-driven architectures and DevOps.  Outside of work, she enjoys hiking and spending as much time as possible in nature.

Jan Bauer

Jan is a Cloud Application Architect at AWS Professional Services. His interests are serverless computing, machine learning, and everything that involves cloud computing.

Rolando Santamaria Maso

Rolando is a senior cloud application development consultant at AWS Professional Services, based in Germany. He helps customers migrate and modernize workloads in the AWS Cloud, with a special focus on modern application architectures and development best practices, but he also creates IaC using AWS CDK. Outside work, he maintains open-source projects and enjoys spending time with family and friends.

Caroline Gluck

Caroline is an AWS Cloud application architect based in New York City, where she helps customers design and build cloud native data science applications. Caroline is a builder at heart, with a passion for serverless architecture and machine learning. In her spare time, she enjoys traveling, cooking, and spending time with family and friends.

Creating a Laravel Project

Laravel architecture
Laravel history
The Pros of Laravel
The Cons of Laravel
Getting started with your own Laravel project
Laravel for backend
Building Laravel Projects with Templates
Building Laravel Projects with Flatlogic

Laravel is a PHP-based backend framework. It follows the Model-View-Controller design pattern, which explains many of Laravel’s pluses and minuses that we’ll detail further on. Users often credit Laravel with responsiveness, scalability, and a good community. But not everything is so simple. Like many backend frameworks, it can be abstract and unintuitive. That is, if you don’t have anyone to guide you.

In this article, we’ll talk about the history of Laravel, how it emerged, and how it won its position. We’ll take a closer look at the peculiarities of working with Laravel, and sum up the reasons to choose it or avoid it. Finally, we’ll dive deeper than usual into the inner mechanism of a simple app, and show you the code so you’ll know how to properly grease the gears. Keep reading!

Laravel Architecture

As mentioned, Laravel follows the Model-View-Controller architecture pattern. In this system, the Model is the part that manages the database and the data logic. The View is the user interface and all its interactive functions. The Controller is what differentiates MVC software from earlier practices. It mediates between the Model and the View, and makes the two largely independent of each other. It means easier development and maintenance, easier project management, and reusability of components.

Laravel History

Laravel was published in 2011, and it is a little over 10 years old now. By that time Taylor Otwell, the creator of Laravel, had been using CodeIgniter for quite a while. CodeIgniter was a solid backend framework and holds a small market share to this day. However, it had issues with authentication. Whenever you wanted the kind of third-party authentication with Google or Facebook that we now take for granted, you needed additional software that was hard to find in ready-made form. Otwell released the beta version of Laravel on June 9, 2011. The early Laravel supported a lot of the things that were missing from most backend frameworks back then. Those included authorization, views, sessions, routing, localization, and more.

Modern Laravel complies with MVC principles. But back in 2011, it did not support controllers. It became a true MVC framework in September 2011, with the Laravel 2 update. Laravel 3 came with a built-in command-line interface called Artisan. Furthermore, it had a lot more capacity for managing databases. At that point, Laravel was something largely similar to what it is today.

Laravel’s Pros

Simplicity

Backend frameworks have a reputation for being harder to grasp. While subjective, this reputation has a grain of truth to it. Backend processes happen behind the scenes. They aren’t as easily demonstrable as front-end processes, and can thus feel unintuitive. Laravel’s simple syntax and extensive use of plain PHP are a nice change of pace and a great opportunity for aspiring backend developers.

Security

Laravel is often credited for data security. One of the contributing solutions is the Eloquent ORM. This object-relational mapper is included in the package and adds another abstraction level to the code. It presents data as objects, making data exchange safer and more efficient. Furthermore, Laravel can store passwords in hashed form out of the box. Together with the overall sturdy build, this makes Laravel a safe and reliable technology.

Time and Resource Efficiency

Laravel’s initial light weight is just one of the reasons why it saves storage space and computing power. Laravel is awesome when it comes to testing separate parts of the software rather than the whole project. Any time you fix a bug, this feature of Laravel will save just a little time. But if you have to fix lots of bugs, that’s a huge asset!

Effective Mapping

Laravel’s mapping is optimized for using relational databases. This makes relational databases easier to connect to Laravel backend, and they run smoother and faster than on some other frameworks.

Built-in CLI

Laravel’s built-in CLI called Artisan is a huge asset in creating command-line applications. Artisan is an advanced CLI that lets you include tasks and migrations without additional tools and resources.

Strict Logic

Laravel complies with the MVC (Model-View-Controller) architecture. This design is helpful for structuring the code into intuitive logical areas. MVC solutions are usually less susceptible to bugs and easier to debug.

Quality Community

Laravel’s community is extensive and helpful. Apart from the extensive FAQs, plenty of forums and dedicated platforms orbit Laravel, making it very hard to come across an issue you won’t find a solution to.

Laravel’s Cons

Possible Compatibility Issues in Older Projects

Laravel has grown tremendously since its introduction but that came at a cost. Newer versions have an array of features that don’t work properly with older versions. This can make older Laravel projects glitchy and slow. In other words, the opposite of what we value the most about Laravel.

Minimum Tools Included

We’ve mentioned what a great CLI Artisan is. However, other parts of Laravel don’t boast the same diversity of tools and components. The downside of choosing a lightweight framework is the likelihood of having to implement additional tools and some glue code to make them work together properly. This doesn’t happen often, but when it does, it can negate Laravel’s light weight.

Inconsistent Speed

Laravel doesn’t shine when it comes to speed. Competitors like Yii and Symfony outrun Laravel in most scenarios. Bear in mind, though, that Laravel’s operation on the latest PHP version with JIT compilation hasn’t been extensively tested. So keep your mind open, the latest Laravel might turn out to be much faster.

Getting started with your own Laravel project

When working with backend frameworks, it’s harder to keep track of your progress. That’s one of the reasons why backend frameworks get a reputation for being hard and unintuitive. We don’t think it’s fundamentally harder. We think it just requires a bit more initial training. Let’s start with basics and progress one step at a time.

Installing pre-requisite software

To fully use Laravel’s arsenal of features, you’ll need to install some useful tools, and learn to use them. Let’s start with Docker. Docker is a virtualization solution. It lets us run software in sandbox-like environments called “containers”. Docker runs your code internally, without affecting any other software on your PC or causing any compatibility issues. What runs in Docker, stays in Docker. We suggest getting Docker Desktop. The real reason we need Docker is Sail, Laravel’s built-in command-line interface for working with its default Docker development environment. This basic setup will let you run intermediate versions of your project with ease.

Setting up a Subsystem

This step is highly recommended for Windows users. The Windows Subsystem for Linux (WSL) allows you to run Linux binary executables natively on Windows. This is the least troublesome way to test-run Laravel code on Windows. Launch your Windows Terminal, preferably in administrator mode, and start the WSL. The process is simple: just type 'wsl' in PowerShell or another CLI.

Creating the Project

Everything’s set for creating our own project. I’ll let myself be vain about it and call it Al’s Laravel project. Except, we want to avoid any possible compatibility issues, so the directory will be spelled 'als-laravel-project'. To create the project, we use the CLI to navigate to the directory where we want the project to live and run:

curl -s https://laravel.build/als-laravel-project | bash

After a short wait while the project is created, point your CLI at the new directory and move on to the next step.

Creating the Project via Composer

This is another way to create a Laravel project. It has gained lots of popularity, and might be the most common method today. First, make sure you’ve installed both PHP and Composer. Then run the following in your terminal:

composer create-project laravel/laravel example-app
cd example-app
php artisan serve

The above will create a local development server for Laravel.

Starting Sail

At this point, we can start Sail with one command: ./vendor/bin/sail up. The 'sail up' command is easy enough to remember, because we’re setting up our Sail, get it? If this is your first time launching Sail, the CLI will build application containers on your device. This takes a while but will make all subsequent Sail launches and operations faster. Once it’s running, you can access your application at http://localhost. In principle, that is; this is just the skeleton of the future application and not the application itself. Let’s see what we can do next!

Primary Configuration

Laravel is often credited with the ease of setting up. Most Laravel projects require little to no initial configuration. However, you can explore the app.php file in the ‘config’ folder. It contains plenty of variables like Time zone, Laravel framework service providers, and URL Debugging. Like we said, most projects don’t require any configuration at this stage. If you’re just learning the ropes, we recommend learning to work with a basic Laravel project first. It will give you some context when you’re deciding how to configure the application.

Environment Configuration

Laravel supports developing, testing, and running your applications in different environments. Those include testing, deployment, production, and more. Adjusting your project for different environments happens by changing underlying parameters like caching directory. Environment variables are found in Laravel’s default .env file (or .env.example, depending on the method of Laravel installation that you’ve chosen).

Laravel for Backend

Laravel can be a great backend solution in many cases. Let’s list some of them!

Single-Page APIs

When building a single-page API, the small scale of the software and the limited time spent on it imply a similarly minimalist approach to choosing the underlying technologies. We’ve mentioned how a Laravel project can be configured, but in many cases that’s unnecessary. Laravel’s ease of configuration lets us create simple APIs in no time.

Next.js Applications

Next.js emerged to solve compatibility issues for Node and React applications, and that’s how it is usually used. With Laravel, however, there’s another way to use Next.js. Laravel runs well as the backend behind a Next.js application’s API. Laravel’s support for notifications and queues is impressive and lets you use these features out of the box.

Semi-Full-Stack Framework

You might come across sources that claim Laravel to be a full-stack technology. That’s true to an extent. Laravel offers extensive possibilities for request routing. Also, if you’re interested in Laravel’s full-stack capabilities, take a closer look at Blade. Blade is Laravel’s integrated templating engine. It uses plain PHP, which means no additional software will inflate your project. You can use Blade to pass data from your backend code directly into your views. Laravel won’t work as a comprehensive front-end framework, but it brings along features that are a great addition to plain JavaScript apps and landing pages.

Building Laravel Projects with Templates

Laravel is a highly popular framework so, naturally, there’s a huge supply of Laravel templates. One example is Flatlogic’s own Sing App Vue Template with Laravel Backend. Templates are possibly the easiest way to create a Laravel application. Especially considering the fact that many of those templates come with pre-installed front-end. The main challenge here is properly connecting all data endpoints to create a completely functional API.

To better understand how it works, we suggest trying the Sing App’s live demo. It is intuitive enough for most users to quickly understand how to manage a template-based application. Plentiful documentation will help resolve any issues and our support team is always ready to help you out here in case the documentation doesn’t cover it.

Building Laravel Projects with Flatlogic

Flatlogic Platform is our way of bridging the gap between developing your own apps and using templates. Applications running on the same technologies typically use the same elements and components. In many cases the main thing that makes them different on a technical level is the database schema that accommodates different mechanisms of data processing and storage. Flatlogic Platform allows creating applications by combining parts and building only the parts that need to be unique. It’s as easy as it sounds, and sometimes even easier. Keep reading to know more!

Step 1

The first page we see when creating our own project requires a name for the project and the tech stack. The name is pretty straightforward. Just pick one that is easy enough to associate with the project. The tech stack is the combination of technologies used in the project. The front-end, the database, and the backend to tie them together. Any combination will work fine, but given the article’s topic, we’ll choose Laravel as the backend technology.

Step 2

Next up, we’ll choose the design for our application. Currently, we’re redeveloping some visual options, so only the Material is available. But don’t worry, the others will soon be back and improved. Material design is Google’s design language used for UI compatibility of new software with Google services. Furthermore, its minimalist, unobtrusive nature works in most cases and for most users.

Step 3

The following page is the Database Schema that we mentioned earlier. The schema is the structure of the database that describes the relationships between columns, rows, and fields. This is an important part that largely defines how your application will process data. However, we’ve explored the more popular demands and included pre-built schemas perfect for eCommerce, Blogs, Social Networks, and more.

Step 4

Here we need to check if everything is according to plan. Check the stack, check the design, check the database schema, decide if you want to connect a Git repository, and hit Finish.

Step 5

The next page offers us a plethora of ways to deploy and run our application. Deploy from scratch, deploy from GitHub… If you’re interested in the inner mechanisms of a Laravel application, you can view the actual code of the app.

Well done, sir or madam! You’ve created your very own Laravel App.

Conclusion

We’ve explained how to install Laravel and create your first project. That’s a solid first step for anyone who wants to learn Laravel development. For everyone else who needs a Laravel-based application but doesn’t have the time or the desire to learn the framework, we’ve offered two other routes. Both Laravel templates and Flatlogic Platform have a lot going for them. We might be biased but we usually recommend the Platform. It offers greater flexibility by allowing you to create applications with any combinations of technologies, designs, and database schemas.

Laravel is a controversial technology. It’s simple and beginner-friendly, but it requires additional research as you master Laravel development. It includes one of the best and most versatile CLIs on the market, yet it can sometimes lack tools in other departments. We can definitely recommend Laravel to anyone who’s willing to learn backend development. Laravel offers plenty of features that speed up the development of both compact single-page applications and large-scale business solutions.


Building a gRPC Server in Go

Intro

In this article, we will create a simple web service with gRPC in Go. We won’t use any third-party tools and only create a single endpoint to interact with the service to keep things simple.

Motivation

This is the second part of an article series on gRPC. If you want to jump ahead, please feel free to do so. The links are down below.

Introduction to gRPC
Building a gRPC server with Go (You are here)
Building a gRPC server with .NET
Building a gRPC client with Go
Building a gRPC client with .NET

The plan

So this is what we are trying to achieve.

Generate the .proto IDL stubs.
Write the business logic for our service methods.
Spin up a gRPC server on a given port.

In a nutshell, we will be covering the following items on our initial diagram.


As always, all the code samples and documentation can be found at [https://github.com/sahansera/go-grpc](https://github.com/sahansera/go-grpc)

Prerequisites

This guide targets Go and assumes you have the necessary Go tools installed locally. Other than that, we will cover gRPC-specific tooling down below. Please note that some of the commands I’m using are macOS-specific. Please follow this link to set things up if you are on a different OS.

To install Protobuf compiler:

brew install protobuf

To install Go specific gRPC dependencies:

go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest

Project structure

There is no universally agreed-upon project structure per se. We will use Go modules and start by initializing a new project. So our business problem is this: we have a bookstore and we want to expose its inventory via an RPC function.

Since this post covers the creation of the server, we will call it bookshop/server.

We can create a new folder called server and initialize the module like so:

go mod init bookshop/server

In a later post, we will also work on the client-side of this app and call it bookshop/client

This is what it’s going to look like at the end of this post.


Creating the service definitions with .proto files

In my previous post, we discussed what are Protobufs and how to write one. We will be using the same example which is shown below.

A common pattern to note here is to keep your .proto files in their own separate folder, so that you can use them to generate the server and client stubs.

bookshop.proto

syntax = "proto3";

option go_package = "bookshop/pb";

message Book {
  string title = 1;
  string author = 2;
  int32 page_count = 3;
  optional string language = 4;
}

message GetBookListRequest {}
message GetBookListResponse { repeated Book books = 1; }

service Inventory {
  rpc GetBookList(GetBookListRequest) returns (GetBookListResponse) {}
}

Note how we have used the [option](https://developers.google.com/protocol-buffers/docs/proto#options) keyword here. We are essentially telling the Protobuf compiler where we want to put the generated stubs. You can have multiple option statements depending on which languages you are generating the stubs for.

You can find a full list of allowed values at [google/protobuf/descriptor.proto](https://github.com/protocolbuffers/protobuf/blob/2f91da585e96a7efe43505f714f03c7716a94ecb/src/google/protobuf/descriptor.proto#L44)

Other than that, we have three messages to represent a Book entity, a request, and a response, respectively. Finally, we have a service called Inventory, which has an RPC named GetBookList that can be called by clients.

If you need to understand how this is structured, please refer to my previous post.

Generating the stubs

Now that we have the IDL created, we can generate the Go stubs for our server. It is a good practice to put this under a make gen target so that we can easily regenerate the stubs with a single command in the future.

protoc --proto_path=proto proto/*.proto --go_out=. --go-grpc_out=.

Once this is done, you will see the generated files under the server/pb folder.


Awesome! Now we can use these stubs in our server to respond to any incoming requests.

Creating the gRPC server

Now, we will create the main.go file to create the server.

main.go

type server struct {
	pb.UnimplementedInventoryServer
}

func (s *server) GetBookList(ctx context.Context, in *pb.GetBookListRequest) (*pb.GetBookListResponse, error) {
	return &pb.GetBookListResponse{
		Books: getSampleBooks(),
	}, nil
}

func main() {
	listener, err := net.Listen("tcp", ":8080")
	if err != nil {
		panic(err)
	}

	s := grpc.NewServer()
	pb.RegisterInventoryServer(s, &server{})
	if err := s.Serve(listener); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}

We first define a struct to represent our server. The reason why we need to embed pb.UnimplementedInventoryServer is to maintain future compatibility when generating gRPC bindings for Go. You can read more on this in this initial proposal and on README.
As we discussed, GetBookList can be called by the clients, and this is where that request will be handled. We have access to context (such as auth tokens, headers etc.) and the request object we defined.
In the main method, we create a listener on TCP port 8080 with the net.Listen method, initialize a new gRPC server instance, register our Inventory service, and then start responding to incoming requests.

Interacting with the Server

Usually, when interacting with an HTTP/1.1 server, we can use cURL to make requests and inspect the responses. However, with gRPC, we can’t do that. (You can make requests to HTTP/2 services, but those won’t be readable.) We will be using gRPCurl instead.

Once you have it up and running, you can now interact with the server we just built.

grpcurl -plaintext localhost:8080 Inventory/GetBookList

Note: gRPC defaults to TLS for transport. However, to keep things simple, I will be using the `-plaintext` flag with `grpcurl` so that we can see a human-readable response.


How do we figure out the endpoints of the service? There are two ways to do this. One is by providing a path to the proto files, while the other option enables reflection through the code.

Enabling reflection

This is a pretty cool feature that gives you introspection capabilities for your API. To enable it, register the reflection service (from the google.golang.org/grpc/reflection package) on the gRPC server instance we created in main:

reflection.Register(s)

The following screenshot displays some of the commands you can use to introspect the API.


grpcurl -plaintext -msg-template localhost:8080 describe .GetBookListResponse

Using proto files

If you don’t want to enable it in code, you can use the Protobuf files to let gRPCurl know which methods are available. Normally, when a team builds a gRPC service, they will make the Protobuf files available if you are integrating with them. So, without having to ask them or resort to trial and error, you can use these proto files to introspect which endpoints are available for consumption.

grpcurl -import-path proto -proto bookshop.proto list

gRPCurl is great for debugging your RPC calls when you don’t have a client built yet.

Conclusion

In this article we looked at how we can create a simple gRPC server with Go. In the next one we will learn how to do the same with .NET.

Feel free to let me know any feedback or questions. Thanks for reading ✌️

References

https://medium.com/@nate510/structuring-go-grpc-microservices-dd176fdf28d0
https://www.youtube.com/watch?v=RHWwMrR8LUs&ab_channel=NicJackson
https://www.oreilly.com/library/view/grpc-up-and/9781492058328/