Announcing Snapchange: An Open Source KVM-backed Snapshot Fuzzing Framework

Fuzz testing or fuzzing is a commonly used technique for discovering bugs in software, and is useful in many different domains. However, the task of configuring a target to be fuzzed can be a laborious one, often involving refactoring large code bases to enable fuzz testing.

Today we are happy to announce Snapchange, a new open source project that makes snapshot-based fuzzing much easier. Snapchange enables a target binary to be fuzzed with minimal modifications, providing useful introspection that aids in fuzzing. Snapchange is a Rust framework for building fuzzers that replay physical memory snapshots in order to increase efficiency and reduce complexity when fuzzing many types of targets. Snapchange utilizes the Linux kernel’s built-in virtual machine manager, the Kernel-based Virtual Machine (KVM). While Snapchange is agnostic to the target operating system, the included snapshot mechanism focuses on Linux-based targets for gathering the necessary debug information.

Snapchange started as an experiment by the AWS Find and Fix (F2) open source security research team to explore the potential of using KVM in enabling snapshot fuzzing. Snapshot fuzzing is a growing area of interest and experimentation among security researchers. Snapchange has now grown into a project that aims to provide a friendly experience for researchers and developers to experiment with snapshot fuzzing. Snapchange is one of a number of tools and techniques used by the F2 team in its research efforts aimed at creating a secure and trustworthy open source supply chain for AWS and its customers. We have found it sufficiently useful that we are sharing it with the broader research community.

Snapchange is available today under the Apache License 2.0 via GitHub. AWS F2 team is actively supporting Snapchange and has plans for new features, but we hope to engage the security research community to produce a more richly-featured and robust tool over the longer term. We welcome pull requests on GitHub and look forward to discussions that help enable future research via the project. In this blog post we’ll walk through a set of tutorials you’ll find in the repository to help provide a deeper understanding of Snapchange.

Note: Snapchange operates within a Linux operating system but requires direct access to underlying KVM primitives. Thus, it is compatible with EC2 bare metal instance types, which run without a hypervisor, but not with EC2 virtualized instances. While we provide an EC2 AMI to make it easier to get started by launching on a bare metal instance (more information on that provided below), users are free to run Snapchange in other environments that meet the basic hardware requirements.

What is snapshot fuzzing?

Fuzzing uncovers software issues by monitoring how the system behaves while processing data, especially data provided as an input to the system. Fuzzing attempts to answer the question: What happens when the data is structured in a way that is outside the scope of what the software expects to receive? If the software is bug-free, there should be no input, no matter how inappropriate or corrupt, that causes it to crash. All input data should be properly validated and either pass validation and be used, or be rejected with a well-defined error result. Any input data that causes the software to crash shows a flaw and a potential weakness in the software. For example, fuzzing an application that renders JPEGs could involve mutating a sample JPEG file and opening the mutated JPEG in the application. If the application crashes or otherwise behaves in unexpected ways, this mutated file might have uncovered an issue.

The chosen mutations, however, are not truly random. We typically guide a fuzzer using “coverage” techniques. Coverage measures which code paths an input has caused to be executed in the target, and it is used to automatically steer the fuzzer so that subsequent inputs reach previously untested portions of the target’s code. Inputs that reach new code are cached in an input corpus and used as the basis for further mutations that explore additional code paths. In this way, variations on the same inputs are applied in the expectation of discovering more previously untested code, until every part of the target that can be reached by a possible execution path has been tested.
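To make that feedback loop concrete, here is a minimal, self-contained sketch of coverage-guided mutation. Everything in it is illustrative: the toy target, the xorshift mutator, and the coverage model stand in for real instrumentation and are not Snapchange APIs.

use std::collections::HashSet;

/// Tiny xorshift PRNG so the sketch has no external dependencies.
struct XorShift(u64);
impl XorShift {
    fn next(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        x
    }
}

/// Stand-in for running one input and reporting which code locations it
/// reached. Here, each correctly guessed leading byte of "fuzz" unlocks one
/// more "block", mimicking a chain of nested if-statements.
fn execute_with_coverage(input: &[u8]) -> HashSet<u64> {
    let secret = b"fuzz";
    let matched = input.iter().zip(secret).take_while(|(a, b)| a == b).count();
    (0..=matched as u64).collect()
}

fn main() {
    let mut rng = XorShift(0x1234_5678_9abc_def0);
    let mut corpus: Vec<Vec<u8>> = vec![vec![0u8; 4]];
    let mut seen: HashSet<u64> = HashSet::new();

    for round in 0..200_000usize {
        // Pick an existing corpus entry and mutate one byte of it.
        let mut candidate = corpus[round % corpus.len()].clone();
        let idx = (rng.next() as usize) % candidate.len();
        candidate[idx] = rng.next() as u8;

        // Keep the input only if it reached code we have not seen before.
        let coverage = execute_with_coverage(&candidate);
        let is_new = coverage.difference(&seen).next().is_some();
        if is_new {
            seen.extend(coverage);
            corpus.push(candidate);
        }
    }
    println!("corpus entries: {}, blocks covered: {}", corpus.len(), seen.len());
}

In Snapchange, the "execute" step is a KVM run of the snapshot, and the coverage signal comes from the breakpoint mechanism described later in this post.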

A snapshot is a pairing of a physical memory dump of a running VM and its accompanying register state. Fuzzing with a snapshot enables granular execution in order to reach code blocks that are traditionally difficult to fuzz without the complexities of managing state within the target. The only information Snapchange needs in order to continue executing the target in a virtual machine is the snapshot itself. Prior work exploring this technique includes brownie, falkervisor, chocolate_milk, Nyx, and what the fuzz. Most of these other tools require booting into a custom hypervisor on bare metal or using a modified KVM and kernel module. Snapchange can be used in environments where booting into a custom hypervisor isn’t straightforward. As noted, it can also be used on EC2 bare metal instances that boot without any hypervisor at all.

How Snapchange Works

Snapchange fuzzes a target by injecting mutated data into the virtual machine and provides a breakpoint-based hooking mechanism, real-time coverage reports in a variety of formats (such as Lighthouse and LCOV), and single-step traces useful for debugging. With Snapchange, you can fuzz a given physical memory snapshot across multiple CPU cores in parallel, while monitoring for crashing states such as a segmentation fault or an AddressSanitizer report.

While Snapchange doesn’t care how a snapshot is obtained, it includes one method which uses a patched QEMU instance via the included qemu_snapshot utility. This snapshot is then used as the initial state of a KVM virtual machine to fuzz a target.

The fuzzing loop starts by initializing the memory of a whole KVM virtual machine with the physical memory and register state of the snapshot. Snapchange then gives the user the ability to write a mutated input into the loaded guest’s memory. The virtual machine is then executed until a crash, timeout, or reset event occurs. At this point, the virtual machine reverts to a clean state: the guest memory is restored to the original snapshot’s memory in preparation for the next input case. To avoid writing the entire snapshot memory on every reset, only pages that were modified during execution are restored. This significantly reduces the amount of memory that needs to be restored, speeding up the fuzzing cycle and allowing more time to be spent fuzzing the target.
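The dirty-page optimization is the key to this reset speed. Below is a simplified, self-contained model of the idea; a real implementation gets the dirty set from KVM's dirty-page log rather than tracking writes by hand, and the structures here are illustrative, not Snapchange internals.

use std::collections::HashSet;

const PAGE_SIZE: usize = 0x1000;

/// Simplified guest memory with dirty-page tracking.
struct GuestMemory {
    snapshot: Vec<u8>,            // pristine copy taken once, at snapshot time
    current: Vec<u8>,             // memory the "guest" mutates while running
    dirty_pages: HashSet<usize>,  // page indexes touched since the last reset
}

impl GuestMemory {
    fn new(snapshot: Vec<u8>) -> Self {
        let current = snapshot.clone();
        Self { snapshot, current, dirty_pages: HashSet::new() }
    }

    /// Write bytes into guest memory and mark the touched pages dirty.
    fn write(&mut self, addr: usize, bytes: &[u8]) {
        if bytes.is_empty() {
            return;
        }
        self.current[addr..addr + bytes.len()].copy_from_slice(bytes);
        for page in (addr / PAGE_SIZE)..=((addr + bytes.len() - 1) / PAGE_SIZE) {
            self.dirty_pages.insert(page);
        }
    }

    /// Reset only the pages that changed since the snapshot, instead of
    /// copying the whole memory image back on every fuzz iteration.
    fn reset(&mut self) {
        for page in self.dirty_pages.drain() {
            let start = page * PAGE_SIZE;
            let end = (start + PAGE_SIZE).min(self.snapshot.len());
            self.current[start..end].copy_from_slice(&self.snapshot[start..end]);
        }
    }
}

fn main() {
    let mut mem = GuestMemory::new(vec![0u8; 16 * PAGE_SIZE]);
    mem.write(0x2345, b"mutated input");
    assert_eq!(mem.dirty_pages.len(), 1); // only one page needs restoring
    mem.reset();
    assert_eq!(&mem.current, &mem.snapshot);
}

Snapchange applies the same idea to guest physical pages, which is why the TUI described below reports the average number of dirty pages needed to reset a guest.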

This ability to arbitrarily reset guest memory enables precise choices when harnessing a fuzz target. With snapshots, the harnessing effort involves discovering where in memory the relevant input resides. For example, instead of having to rewrite a networked application to take input packets from the command line or stdin, we can use a debugger to break immediately after a recv call. Pausing execution at this point, we can note the address of the buffer that was read into, for example 0x6000_0000_0100, and take a snapshot of the system with this memory address in mind. Once the snapshot is loaded via Snapchange, we can write a mutated input packet to address 0x6000_0000_0100 and continue executing the target as if it were a real packet. This precisely mimics what would happen if a corrupt or malicious packet were read off the network in a real-world scenario.

Experimenting with Snapchange

Snapchange, along with several example targets, can be found on GitHub. Because Snapchange relies on KVM for executing a snapshot, Snapchange must be used on a machine that has KVM access. Currently, Snapchange only supports x64 hosts and snapshots. As previously noted, Snapchange can be used in Amazon EC2 on a wide variety of .metal instances based on Intel processors, for example, a c6i.metal instance. There is also a public AMI containing Snapchange, with the examples pre-built and pre-snapshotted. The pre-built AMI is ami-008dec48252956ad5 in the US-East-2 region. For more information about using an AMI, check out the Get started with Amazon EC2 Linux instances tutorial. You can also install Snapchange in your own environment if you have access to supported hardware.

This blog will go over the first example in the Snapchange repository to demonstrate some of the features provided. For a more step-by-step walk-through, check out the full tutorial in the README for the 01_getpid example here.

Example target

We’ll start with the first example in Snapchange to demonstrate some of its features.

There are two goals for this target:

The input data buffer must solve for the string fuzzmetosolveme!

The return value from getpid() must be modified to be 0xdeadbeef

// harness/example1.c

void fuzzme(char* data) {
    int pid    = getpid();

    // Correct solution: data == "fuzzmetosolveme!", pid == 0xdeadbeef
    if (data[0]  == 'f')
    if (data[1]  == 'u')
    if (data[2]  == 'z')
    if (data[3]  == 'z')
    if (data[4]  == 'm')
    if (data[5]  == 'e')
    if (data[6]  == 't')
    if (data[7]  == 'o')
    if (data[8]  == 's')
    if (data[9]  == 'o')
    if (data[10] == 'l')
    if (data[11] == 'v')
    if (data[12] == 'e')
    if (data[13] == 'm')
    if (data[14] == 'e')
    if (data[15] == '!') {
        pid = getpid();
        if (pid == 0xdeadbeef) {
            // BUG
            *(int*)0xcafecafe = 0x41414141;
        }
    }

    return;
}

When taking the snapshot, we logged that the input buffer being fuzzed is located at 0x555555556004.

SNAPSHOT Data buffer: 0x555555556004

It is the fuzzer’s job to write an input test case to address 0x5555_5555_6004 to begin fuzzing. Let’s look at how Snapchange handles coverage with breakpoints.

Coverage Breakpoints

Snapchange gathers its coverage of a target using breakpoints. In the snapshot directory, an optional .covbps file containing virtual addresses in the guest can be created. Because the snapshot is static, we can use hard-coded memory addresses as part of the fuzzing process. During initialization, a breakpoint is inserted into the guest memory at every address found in the coverage breakpoint file. If any coverage breakpoint is hit, it means the current input executed a piece of the target for the first time. The input is saved into the input corpus for future use and the breakpoint is removed. Removing coverage breakpoints as they are encountered means that the fuzzer pays the cost of each coverage breakpoint only once.
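As a mental model for this one-shot behavior, the sketch below swaps in the classic 0xCC software-breakpoint byte and restores the original byte on the first hit. The HashMap-backed "guest memory" and the type names are purely illustrative and are not Snapchange's actual data structures.

use std::collections::HashMap;

/// One-shot coverage breakpoints: each address from the .covbps file gets a
/// software breakpoint (0xCC on x86). The first input to hit one is saved and
/// the original byte is restored, so the cost is paid only once per location.
struct CoverageBreakpoints {
    // virtual address -> original byte that 0xCC replaced
    original_bytes: HashMap<u64, u8>,
}

impl CoverageBreakpoints {
    fn install(addrs: &[u64], guest_mem: &mut HashMap<u64, u8>) -> Self {
        let mut original_bytes = HashMap::new();
        for &addr in addrs {
            // Patch in 0xCC and remember what was there before.
            if let Some(byte) = guest_mem.insert(addr, 0xCC) {
                original_bytes.insert(addr, byte);
            }
        }
        Self { original_bytes }
    }

    /// Called when the guest traps at `addr`. Returns true if this was new
    /// coverage (so the caller saves the input into the corpus).
    fn handle_hit(&mut self, addr: u64, guest_mem: &mut HashMap<u64, u8>) -> bool {
        match self.original_bytes.remove(&addr) {
            Some(orig) => {
                guest_mem.insert(addr, orig); // remove the breakpoint for good
                true
            }
            None => false, // not one of ours, or already handled
        }
    }
}

fn main() {
    // Toy "guest memory": one byte per address.
    let mut mem: HashMap<u64, u8> = [(0x5555_5555_5000u64, 0x55u8)].into_iter().collect();
    let mut cov = CoverageBreakpoints::install(&[0x5555_5555_5000], &mut mem);
    assert!(cov.handle_hit(0x5555_5555_5000, &mut mem));  // first hit: new coverage
    assert!(!cov.handle_hit(0x5555_5555_5000, &mut mem)); // breakpoint already removed
}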

One approach using these coverage breakpoints is to trigger on new basic blocks from the control flow graph of a target. There are a few utility scripts included in Snapchange to gather these basic blocks using Binary Ninja, Ghidra, and radare2.

The example coverage breakpoint file of the basic blocks found in example1.bin is in snapshot/example1.bin.ghidra.covbps:

$ head ./snapshot/example1.bin.ghidra.covbps

0x555555555000
0x555555555014
0x555555555016
0x555555555020
0x555555555070
0x555555555080
0x555555555090
0x5555555550a0
0x5555555550b0
0x5555555550c0

Writing a fuzzer

To begin fuzzing with Snapchange, we can write a fuzzer specific to this target in Rust.

// src/fuzzer.rs

#[derive(Default)]
pub struct Example1Fuzzer;

impl Fuzzer for Example1Fuzzer {
    type Input = Vec<u8>; // [0]
    const START_ADDRESS: u64 = 0x5555_5555_5344;
    const MAX_INPUT_LENGTH: usize = 16; // [1]
    const MAX_MUTATIONS: u64 = 2; // [3]

    fn set_input(&mut self, input: &Self::Input, fuzzvm: &mut FuzzVm<Self>) -> Result<()> {
        // Write the mutated input
        fuzzvm.write_bytes_dirty(VirtAddr(0x5555_5555_6004), CR3, input)?; // [2]

        Ok(())
    }
}

A few notes about this fuzzer:

The fuzzer uses input of type Vec<u8> ([0]). This tells Snapchange to provide the default mutation strategies for a vector of bytes.

Note: This is an abstract type, so the fuzzer can provide a custom mutator/generator if they choose.

The maximum length of a generated input will be 16 bytes ([1])
The fuzzer is passed a mutated Vec<u8> in set_input. This input is then written to the address of the buffer logged during the snapshot (0x5555_5555_6004) via the call to write_bytes_dirty ([2]).

Note: This address is from the printf("SNAPSHOT Data buffer: %p\n", data); line in the harness

The fuzzer will apply, at most, two mutations per input case ([3])

Snapchange provides an entry point to the main command-line utility that takes an abstract Fuzzer, like the one we have written. This will be the entry point for our fuzzer as well.

// src/main.rs

fn main() {
    snapchange_main::<fuzzer::Example1Fuzzer>().expect("Error in Example 1");
}

After building this, we can verify that the project and snapshot directories are set up properly by attempting to translate the starting instruction pointer address from the snapshot. Snapchange provides a project translate command for doing virtual-to-physical memory translations from the snapshot and attempting to disassemble the bytes found at the resulting physical address. We can disassemble from the fuzzme function in the snapshot with the following:

$ cargo run -r -- project translate fuzzme

With the confirmation that the project’s directory structure is set up properly, we can begin fuzzing!

Starting fuzzing!

Snapchange has a fuzz command which can execute across a configurable number of cores in parallel. To begin fuzzing, Snapchange will start a number of virtual machines with the physical memory found in the snapshot directory. Snapchange will then choose an input from the current corpus (or generate one if one doesn’t exist), mutate it with a variety of techniques, and then write it into the guest via the set_input() function we wrote. If any new coverage has been seen, the mutated input will be saved in the corpus for future use. If a crash is found, the crashing input will be saved for further analysis.

The example is looking for the password fuzzmetosolveme! by checking each byte in the input one at a time. This pattern creates a new location for coverage to find for each byte. If the mutation randomly finds the next byte in the password, that input is saved in the corpus to be used later to discover the next byte, until the entire password is uncovered.

We began fuzzing with 8 cores for this example.

$ cargo run -r -- fuzz --cores 8

The fuzz terminal user interface (TUI) is brought up with several pieces of information used to monitor the fuzzing:

Execution time
Basic core statistics for number of executions per second overall and per core
Amount of coverage seen
Number of crashes seen
Average number of dirty pages needed to reset a guest
Number of cores currently alive
Basic coverage graph

The TUI also includes performance information about where time is being spent in the fuzzer as well as information about the reasons a virtual machine is exiting. This information is useful to have for understanding if the fuzzer is actually spending relevant time fuzzing or if the fuzzer is doing extraneous computation that is causing a performance degradation.

For example, this fuzz run is spending only 14% of the total execution time in the guest virtual machine fuzzing the target. For some targets, this could present an opportunity to improve the performance of the fuzzer. Ideally, we want the fuzzer to be working in the guest virtual machine as much as possible. Because this test case is so small, this number is expected, but it is still useful to keep in mind for more complex targets.

Lastly, there is a running list of recently-hit coverage to present a quick glance at what the fuzzer has recently uncovered in the target.

When fuzzing, the current coverage state seen by the fuzzer is written to disk, in real time, in a variety of formats: raw addresses, module+offset for use in tools like Lighthouse, and (if debug information is available) the LCOV format used for graphically annotating source code with coverage information. This allows the developer or researcher to review the coverage, understand what the fuzzer is actually accomplishing, and iterate on the fuzzer for potentially better results.
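As a rough illustration of the module+offset form that tools like Lighthouse consume, the sketch below rebases raw coverage addresses against a module map. The module name and address range are made up to match the earlier covbps listing; real values would come from the snapshot's memory map.

/// Convert a raw coverage address into "module+offset" form. The `modules`
/// slice holds (name, base, end) triples; these are illustrative only.
fn module_plus_offset(addr: u64, modules: &[(&str, u64, u64)]) -> Option<String> {
    modules
        .iter()
        .find(|(_, base, end)| addr >= *base && addr < *end)
        .map(|(name, base, _)| format!("{}+{:#x}", name, addr - base))
}

fn main() {
    // Hypothetical module map and two addresses from the covbps listing above.
    let modules = [("example1.bin", 0x5555_5555_5000u64, 0x5555_5555_6000u64)];
    let coverage = [0x5555_5555_5014u64, 0x5555_5555_5070];
    for addr in coverage {
        if let Some(line) = module_plus_offset(addr, &modules) {
            println!("{line}"); // e.g. example1.bin+0x14
        }
    }
}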

LCOV coverage displayed mid-fuzz session

Coverage displayed using Lighthouse in Binary Ninja

After some time, the fuzzer finds the correct input string to solve the first part of the target. We can look at the current corpus of the fuzzer in ./snapshot/current_corpus.

$ xxd snapshot/current_corpus/c2b9b72428f4059c
┌────────┬─────────────────────────┬─────────────────────────┬────────┬────────┐
│00000000│ 66 75 7a 7a 6d 65 74 6f ┊ 73 6f 6c 76 65 6d 65 21 │fuzzmeto┊solveme!│
└────────┴─────────────────────────┴─────────────────────────┴────────┴────────┘

Snapchange hooks

With the password discovered, the second half of the target revolves around getting getpid() to return an arbitrary value. This value isn’t expected to be returned from getpid(), but we can use Snapchange’s introspection features to force this result to happen. Snapchange includes breakpoint callbacks as a technique to introspect and modify the guest, such as by patching functions. Here is one example of forcing getpid() to always return the value 0xdeadbeef for our fuzzer.

fn breakpoints(&self) -> Option<&[Breakpoint<Self>]> {
    Some(&[
        Breakpoint {
            lookup: BreakpointLookup::SymbolOffset("libc.so.6!__GI___getpid", 0x0),
            bp_type: BreakpointType::Repeated,
            bp_hook: |fuzzvm: &mut FuzzVm<Self>, _input, _fuzzer| {
                // Set the return value to 0xdeadbeef
                fuzzvm.set_rax(0xdead_beef); // [0]

                // Fake an immediate return from the function by setting RIP to the
                // value popped from the stack (this assumes the function was entered
                // via a `call`)
                fuzzvm.fake_immediate_return()?; // [1]

                // Continue execution
                Ok(Execution::Continue) // [2]
            },
        }
    ])
}

The fuzzer sets a breakpoint on the address for the symbol libc.so.6!__GI___getpid. When the breakpoint is triggered, the bp_hook function is called with the guest virtual machine (fuzzvm) as an argument. The return value for the function is stored in register rax, so we can set the value of rax to 0xdeadbeef via fuzzvm.set_rax(0xdeadbeef) [0]. We want the function to immediately return and not continue executing getpid(), so we fake the returning of the function by calling fuzzvm.fake_immediate_return() [1] to set the instruction pointer to the value on the top of the stack and Continue execution of the guest at this point [2] (rather than forcing the guest to reset).

We aren’t restricted to user space breakpoints. We could also force getpid() to return 0xdeadbeef by patching the call in the kernel in __task_pid_nr_ns. At offset 0xb3 in __task_pid_nr_ns, we patch the moment the PID is read from memory and returned to the user from the kernel.

/*
// Single step trace from `cargo run -r -- trace ./snapshot/current_corpus/c2b9b72428f4059c`
INSTRUCTION 1162 0xffffffff810d0ed3 0x6aa48000 | __task_pid_nr_ns+0xb3
    mov eax, dword ptr [rbp+0x50]
    EAX:0x0
    [RBP:0xffff88806c91c000+0x50=0xffff88806c91c050 size:UInt32->0xe6]]
    [8b, 45, 50]
*/
Breakpoint {
    lookup: BreakpointLookup::SymbolOffset("__task_pid_nr_ns", 0xb3),
    bp_type: BreakpointType::Repeated,
    bp_hook: |fuzzvm: &mut FuzzVm<Self>, _input, _fuzzer| {
        // The instruction retrieving the PID is
        // mov eax, dword ptr [rbp+0x50]
        // Write the 0xdeadbeef value into the memory at `rbp + 0x50`

        // Get the current `rbp` value
        let rbp = fuzzvm.rbp();
        let val: u32 = 0xdeadbeef;

        // Write the wanted 0xdeadbeef in the memory location read in the
        // kernel
        fuzzvm.write_bytes_dirty(VirtAddr(rbp + 0x50), CR3, &val.to_le_bytes())?;

        // Continue execution
        Ok(Execution::Continue)
    },
},

With getpid patched, we can continue fuzzing the target and check the Crashes tab in the TUI.

It looks like we’ve detected a segmentation fault (SIGSEGV) for address 0xcafecafe from the bug found in the target:

// BUG
*(int*)0xcafecafe = 0x41414141;

Single Step Traces

With a crash in hand, Snapchange can give us a single step trace using the crash as an input.

$ cargo run -r -- trace ./snapshot/crashes/SIGSEGV_addr_0xcafecafe_code_AddressNotMappedToObject/c2b9b72428f4059c

This will give the state of the system at the time of the reset as well as the single step trace of the execution path. Notice that the guest reset on the force_sig_fault kernel function. This function is hooked by Snapchange to monitor for crashing states.

The single-step trace is written to disk and contains every instruction executed along with the register state at each instruction. The trace includes:

Decoded instruction
State of the involved registers and memory for the given instruction
Assembly bytes for the instruction (useful for patching)
Source code where this assembly originated (if debug information is available)

What’s next for Snapchange?

The team is excited to hear from you and the community at large. We have ideas for more features and other analysis that can aid in fuzzing efforts and are interested in hearing what features the community is looking for in their fuzzing workflows. We’d also love feedback from you about your experience writing fuzzers using Snapchange on Snapchange’s GitHub. If this blog has sparked your curiosity, check out the other real-world examples included in the Snapchange repository.


S3 URI Parsing is now available in AWS SDK for Java 2.x

The AWS SDK for Java team is pleased to announce the general availability of Amazon Simple Storage Service (Amazon S3) URI parsing in the AWS SDK for Java 2.x. You can now parse path-style and virtual-hosted-style S3 URIs to easily retrieve the bucket, key, region, style, and query parameters. The new parseUri() API and S3Uri class provide the highly-requested parsing features that many customers miss from the AWS SDK for Java 1.x. Please note that Amazon S3 AccessPoints and Amazon S3 on Outposts URI parsing are not supported.

Motivation

Users often need to extract important components like bucket and key from stored S3 URIs to use in S3Client operations. The new parsing APIs allow users to conveniently do so, bypassing the need for manual parsing or storing the components separately.

Getting Started

To begin, first add the dependency for S3 to your project.

<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3</artifactId>
    <version>${s3.version}</version>
</dependency>

Next, instantiate S3Client and S3Utilities objects.

S3Client s3Client = S3Client.create();
S3Utilities s3Utilities = s3Client.utilities();

Parsing an S3 URI

To parse your S3 URI, call parseUri() from S3Utilities, passing in the URI. This will return a parsed S3Uri object. If you have a String of the URI, you’ll need to convert it into a URI object first.

String url = "https://s3.us-west-1.amazonaws.com/myBucket/resources/doc.txt?versionId=abc123&partNumber=77&partNumber=88";
URI uri = URI.create(url);
S3Uri s3Uri = s3Utilities.parseUri(uri);

With the S3Uri, you can call the appropriate getter methods to retrieve the bucket, key, region, style, and query parameters. If the bucket, key, or region is not specified in the URI, an empty Optional will be returned. If query parameters are not specified in the URI, an empty map will be returned. If the field is encoded in the URI, it will be returned decoded.

Region region = s3Uri.region().orElse(null);   // Region.US_WEST_1
String bucket = s3Uri.bucket().orElse(null);   // "myBucket"
String key = s3Uri.key().orElse(null);         // "resources/doc.txt"
boolean isPathStyle = s3Uri.isPathStyle();     // true

Retrieving query parameters

There are several APIs for retrieving the query parameters. You can return a Map<String, List<String>> of the query parameters. Alternatively, you can specify a query parameter to return the first value for the given query, or return the list of values for the given query.

Map<String, List<String>> queryParams = s3Uri.rawQueryParameters(); // {versionId=["abc123"], partNumber=["77", "88"]}
String versionId = s3Uri.firstMatchingRawQueryParameter("versionId").orElse(null); // "abc123"
String partNumber = s3Uri.firstMatchingRawQueryParameter("partNumber").orElse(null); // "77"
List<String> partNumbers = s3Uri.firstMatchingRawQueryParameters("partNumber"); // ["77", "88"]

Caveats

Special Characters

If you work with object keys or query parameters with reserved or unsafe characters, they must be URL-encoded, e.g., replace whitespace " " with "%20".

Valid:
"https://s3.us-west-1.amazonaws.com/myBucket/object%20key?query=%5Bbrackets%5D"

Invalid:
"https://s3.us-west-1.amazonaws.com/myBucket/object key?query=[brackets]"

Virtual-hosted-style URIs

If you work with virtual-hosted-style URIs with bucket names that contain a dot, i.e., “.”, the dot must not be URL-encoded.

Valid:
"https://my.Bucket.s3.us-west-1.amazonaws.com/key"

Invalid:
"https://my%2EBucket.s3.us-west-1.amazonaws.com/key"

Conclusion

In this post, I discussed parsing S3 URIs in the AWS SDK for Java 2.x and provided code examples for retrieving the bucket, key, region, style, and query parameters. To learn more about how to set up and begin using the feature, visit our Developer Guide. If you are curious about how it is implemented, check out the source code on GitHub. As always, the AWS SDK for Java team welcomes bug reports, feature requests, and pull requests on the aws-sdk-java-v2 GitHub repository.

Securing PyPI for the Future

We are excited to announce that Amazon Web Services is now the Python Package Index (PyPI) Security Sponsor at the Python Software Foundation, the non-profit devoted to advancing open source technology related to the Python programming language. Through this sponsorship, AWS is providing funding to the PSF to hire a full-time Safety and Security Engineer dedicated to improving the security posture of PyPI. This effort is part of our broader initiative at Amazon Web Services (AWS) to support open source software supply chain security.

Python is an extremely popular open source programming and scripting language among our customers, partners, and Amazon engineers. It is number one on both the TIOBE Index (April 2023) and the PopularitY of Programming Language (PYPL) Index. PyPI is the primary repository of software for the Python programming language. Since Python is modular in nature, most Python applications rely heavily on PyPI to provide the necessary dependencies for core functions rather than reinventing them each time. PyPI is also the primary distribution point for Python applications and libraries.

At AWS, we know that scale and success bring broad responsibility. Amazon and its customers build solutions with Python and we recognize the need to give back to the open source communities that we depend on and help ensure their long term sustainability. AWS is a maintaining sponsor of the PSF and has supported PyPI since 2018, when the index was rewritten to run on AWS in order to address performance and scalability concerns. Today, PyPI scales beautifully due to the significant work from PSF Director of Infrastructure Ee Durbin and the PyPI infrastructure team. AWS is pleased to be able to continue to support PyPI via AWS credits, which offset their infrastructure costs.

PyPI is now facing a new challenge at scale: keeping Python software packages secure. PyPI is regularly threatened by malicious actors, with attacks including typosquatting, dependency injection, and dependency confusion. Companies (including AWS) publish business-critical software on PyPI, and packages are being maliciously published to appear to be from users who represent a large target. These attacks on PyPI have led to a lengthy backlog of support tickets, which is currently addressed by a single part-time volunteer. Their efforts to date to stay on top of this have been nothing short of incredible, but the current approach is not sustainable.

As the first PyPI Security Sponsor, we are providing additional funding which will allow the PSF to hire a full-time Safety and Security Engineer for PyPI. This will provide PyPI with additional resources to take down malware from the site and respond more quickly to support tickets related to security issues. Additionally, it will allow PyPI to shift from a reactive approach to security to a proactive one in which they can develop a security plan with improvement milestones and enable proper security audits of new PyPI features before launch.

Supply chain security is an industry wide concern, and Python is not alone in these challenges. The Python Package Index is critical to countless users around the world. A new safety and security engineer will help alleviate the current bottleneck of support issues, remove malware faster, and keep PyPI secure for the benefit of all its users. We look forward to continuing our work with the Python Software Foundation as we work towards improving open source supply chain security.


Multi-Architecture Container Builds with CodeCatalyst

AWS Graviton Processors are designed by AWS to deliver the best price performance for your cloud workloads running in Amazon Elastic Compute Cloud (Amazon EC2). Amazon CodeCatalyst recently added support to run workflow actions using on-demand or pre-provisioned compute powered by AWS Graviton processors. Customers can now access high performance AWS Graviton processors to build artifacts for Arm, or improve their price performance. In this post I will show you how to create a multi-architecture docker image using CodeCatalyst that can run on both amd64 and arm64 processors.

Background

Container images only run on a system with the same CPU architecture for which they were targeted. For example, an amd64 image runs on Intel and AMD processors, while an arm64 image runs on AWS Graviton. Note that amd64 and x86_64 are often used interchangeably, and I have chosen to use amd64 in this post. Rather than maintaining multiple repositories for each image type, you can combine variants for multiple architectures in the same repository. In addition, you can create a manifest describing which image to use for each architecture. This is known as a multi-architecture, or multi-platform, image.

Let us look at an example to further understand multi-arch images. In this screenshot from Amazon Elastic Container Registry (Amazon ECR), I have created two images for a simple hello-world application. One image is tagged latest-amd64 for AMD architectures and one tagged latest-arm64 for ARM architectures.

In addition, I have created an Image Index tagged latest. The image index is a map describing which image to use for each architecture. This allows my users to simply pull hello-world:latest and the index will identify the correct image based on the target platform. The image index contains the following manifest.

{
    "schemaVersion": 2,
    "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
    "manifests": [
        {
            "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
            "size": 1573,
            "digest": "sha256:eccb6dd2c2dbfc9…",
            "platform": {
                "architecture": "amd64",
                "os": "linux"
            }
        },
        {
            "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
            "size": 1573,
            "digest": "sha256:c64812837fbd43…",
            "platform": {
                "architecture": "arm64",
                "os": "linux"
            }
        }
    ]
}

Now that I have explained what a multi-arch image is, I will explain how to create one in a CodeCatalyst workflow. A CodeCatalyst workflow is an automated procedure that describes how to build, test, and deploy your code as part of a continuous integration and continuous delivery (CI/CD) system. A workflow defines a series of steps, or actions, to take during a workflow run. Let’s get started.

Prerequisites

If you would like to follow along with this walkthrough, you will need:

A CodeCatalyst space and associated AWS account.

An empty CodeCatalyst project and source repository in the space.
An Amazon ECR private repository in the associated AWS account.
A CodeCatalyst environment connected to the associated AWS account.

Walkthrough

In this walkthrough I will create a simple application using an Apache HTTP Server serving a static hello world page. The workload is inconsequential. I will focus on the process of building the container image using a CodeCatalyst workflow. The Workflow will build two container images, one for amd64 and one for arm64. The two build tasks will run in parallel on different compute architectures. When both builds are complete, the workflow will build the docker manifest. At the end of this post, my workflow will look like this.

Note that docker also offers a plugin called buildx that will allow you to build a multi-architecture image with a single command. In a real-world application, the workflow would also build the source code, run unit tests, etc. on each architecture. The sample application used in this post is so simple that there is no need to build and test the source code. Let’s examine the sample application now.

Sample Application

Initially the empty repository will only have a README.md file. By the end of this post, my repository will look like this.

I’ll begin by creating the file named index.html. I used the Create file button in the CodeCatalyst console shown previously. My index.html file has the following content:

<html>
  <head>
    <title>Hello World!</title>
  </head>
  <body>
    <h1>Hello World!</h1>
    <p>Hello from a multi-architecture container created in CodeCatalyst.</p>
  </body>
</html>

I’ll also create a Dockerfile that contains two commands. The first command instructs Docker to build a new image from the Apache HTTP Server Project image called httpd. It is important to note that the httpd image already supports multiple architectures including amd64 and arm64. When creating a multi-architecture image, the base image must also support these architectures. The second command simply copies the index.html file above into the new image. My Dockerfile file has the following content.

FROM httpd
COPY ./index.html /usr/local/apache2/htdocs/

With the source code for my sample application complete, I can turn my attention to the workflow.

CI/CD Workflow

To create a new workflow, select CI/CD from the navigation on the left and then select Workflows (1). Then, select Create workflow (2), leave the default options, and select Create (3).

If the workflow editor opens in YAML mode, select Visual to open the visual designer. Now, I can start adding actions to the workflow.

Build Action for the AMD64 Variant

I’ll begin by adding a build action for the amd64 container. Select “+ Actions” to open the actions list. Find the Build action and click “+” to add a new build action to the workflow.

On the Inputs tab, create three variables named AWS_DEFAULT_REGION, IMAGE_REPO_NAME, and IMAGE_TAG. Set the first two to the Region and name of your Amazon ECR repository. Set the third to latest-amd64. For example:

Now select the Configuration tab and rename the action docker_build_amd64. Select the Environment, AWS account connection, and Role for the associated AWS account where you created the Amazon ECR repository. For example:

Then, copy and paste the following code into the Shell commands. This code will build the image using the Dockerfile you created previously. Then, it logs into Amazon ECR, and finally, pushes the new image to ECR.

- Run: AWS_ACCOUNT_ID=`aws sts get-caller-identity --query "Account" --output text`
- Run: docker build -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG .
- Run: aws ecr get-login-password | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
- Run: docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG

If you switch back to the YAML view, you can see that the designer has added the following action to the workflow definition.

docker_build_amd64:
  Identifier: aws/build@v1
  Compute:
    Type: EC2
  Inputs:
    Sources:
      - WorkflowSource
    Variables:
      - Name: AWS_DEFAULT_REGION
        Value: us-west-2
      - Name: IMAGE_REPO_NAME
        Value: hello-world
      - Name: IMAGE_TAG
        Value: latest-amd64
  Environment:
    Name: demo
    Connections:
      - Role: CodeCatalystPreviewDevelopmentAdministrator
        Name: development
  Configuration:
    Steps:
      - Run: AWS_ACCOUNT_ID=`aws sts get-caller-identity --query "Account" --output text`
      - Run: docker build -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG .
      - Run: aws ecr get-login-password | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      - Run: docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG

With the amd64 image complete, you can move on to the arm64 image.

Build Action for the ARM64 Variant

Add a second build action named docker_build_arm64 for the arm64 container. The configuration is nearly identical to the previous action with two minor changes. First, on the Inputs tab, I set the IMAGE_TAG to latest-arm64.

Second, on the Configuration tab, change the compute fleet to Linux.Arm64.Large. That is all you need to do to run your action on AWS Graviton. For example:

The Shell commands are identical to the amd64 build action. In addition, don’t forget to select the Environment, AWS account connection, and Role on the Configuration tab. The complete configuration for the second action looks like this:

docker_build_arm64:
  Identifier: aws/build@v1
  Compute:
    Type: EC2
    Fleet: Linux.Arm64.Large
  Inputs:
    Sources:
      - WorkflowSource
    Variables:
      - Name: AWS_DEFAULT_REGION
        Value: us-west-2
      - Name: IMAGE_REPO_NAME
        Value: hello-world
      - Name: IMAGE_TAG
        Value: latest-arm64
  Environment:
    Name: demo
    Connections:
      - Role: CodeCatalystPreviewDevelopmentAdministrator
        Name: development
  Configuration:
    Steps:
      - Run: AWS_ACCOUNT_ID=`aws sts get-caller-identity --query "Account" --output text`
      - Run: docker build -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG .
      - Run: aws ecr get-login-password | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      - Run: docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG

Now that you have a build action for the amd64 and arm64 images, you simply need to create a manifest file describing which image to use for each architecture.

Build Action for the Manifest

The final step in the workflow is to create the Docker manifest. Create a third build action named docker_manifest. You want this action to wait for the prior two actions to complete. Therefore, select the prior two actions from the Depends on drop down, like this:

Also configure four variables. AWS_DEFAULT_REGION and IMAGE_REPO_NAME are identical to the prior actions. In addition, IMAGE_TAG_AMD64 and IMAGE_TAG_ARM64 include the tags you created in the prior actions.

On the configuration tab, select the Environment, AWS account connection, and Role as you did in the prior actions. Then, copy and paste the following Shell commands.

- Run: AWS_ACCOUNT_ID=`aws sts get-caller-identity --query "Account" --output text`
- Run: aws ecr get-login-password | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
- Run: docker manifest create $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_ARM64 $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_AMD64
- Run: docker manifest annotate --arch amd64 $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_AMD64
- Run: docker manifest annotate --arch arm64 $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_ARM64
- Run: docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.us-west-2.amazonaws.com/$IMAGE_REPO_NAME

The shell commands create a manifest and then annotate it with the correct image for both amd64 and arm64. The final action looks like this.

docker_manifest:
  Identifier: aws/build@v1
  DependsOn:
    - docker_build_arm64
    - docker_build_amd64
  Compute:
    Type: EC2
  Inputs:
    Sources:
      - WorkflowSource
    Variables:
      - Name: AWS_DEFAULT_REGION
        Value: us-west-2
      - Name: IMAGE_REPO_NAME
        Value: hello-world
      - Name: IMAGE_TAG_AMD64
        Value: latest-amd64
      - Name: IMAGE_TAG_ARM64
        Value: latest-arm64
  Environment:
    Name: demo
    Connections:
      - Role: CodeCatalystPreviewDevelopmentAdministrator
        Name: development
  Configuration:
    Steps:
      - Run: AWS_ACCOUNT_ID=`aws sts get-caller-identity --query "Account" --output text`
      - Run: aws ecr get-login-password | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      - Run: docker manifest create $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_ARM64 $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_AMD64
      - Run: docker manifest annotate --arch amd64 $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_AMD64
      - Run: docker manifest annotate --arch arm64 $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_ARM64
      - Run: docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.us-west-2.amazonaws.com/$IMAGE_REPO_NAME

I now have a complete CI/CD workflow that creates container images for both amd64 and arm64. When I commit the changes, CodeCatalyst will execute my workflow, build the images, and push them to ECR.

Cleanup

If you have been following along with this workflow, you should delete the resources you deployed so you do not continue to incur charges. First, delete the Amazon ECR repository using the AWS console. Second, delete the project from CodeCatalyst by navigating to Project settings and choosing Delete project.

Conclusion

AWS Graviton processors are custom-built by AWS to deliver the best price performance for cloud workloads. In this post I explained how to configure CodeCatalyst workflow actions to run on AWS Graviton. I used CodeCatalyst to create a workflow that builds a multi-architecture container image that can run on both amd64 and arm64 architectures. Get started building your multi-arch containers in Amazon CodeCatalyst today! You can read more about CodeCatalyst workflows in the documentation.

Announcing General Availability of Amazon CodeCatalyst

We are pleased to announce that Amazon CodeCatalyst is now generally available. CodeCatalyst is a unified software development service that brings together everything teams need to get started planning, coding, building, testing, and deploying applications on AWS. CodeCatalyst was designed to make it easier for developers to spend more time developing application features and less time setting up project tools, creating and managing continuous integration and continuous delivery (CI/CD) pipelines, provisioning and configuring various development and deployment environments, and onboarding project collaborators. You can learn more and get started building in minutes on the AWS Free Tier at the CodeCatalyst website.

Launched in preview at AWS re:Invent in December 2022, CodeCatalyst provides an easy way for professional developers to build and deploy applications on AWS. We built CodeCatalyst based on feedback we received from customers looking for a more streamlined way to build using DevOps best practices. They want a complete software development service that lets them start new projects more quickly and gives them confidence that it will continue delivering a great long term experience throughout their application’s lifecycle.

Do more of what you love, and less of what you don’t

Starting a new project is an exciting time of imagining the possibilities: what can you build and how can you enable your end users to do something that wasn’t possible before? However, the joy of creating something new can also come with anxiety about all of the decisions to be made about tooling and integrations. Once your project is in production, managing tools and wrangling project collaborators can take your focus away from being creative and doing your best work. If you are spending too much time keeping brittle pipelines running and your teammates are constantly struggling with tooling, the day to day experience of building new features can start to feel less than joyful.

That is where CodeCatalyst comes in. It isn’t just about developer productivity – it is about helping developers and teams spend more time using the tools they are most comfortable with. Teams deliver better, more impactful outcomes to customers when they have more freedom to focus on their highest-value work and have to concern themselves less with activities that feel like roadblocks. Everything we do stems from that premise, and today’s launch marks a major milestone in helping to enable developers to have a better DevOps experience on AWS.

How CodeCatalyst delivers a great experience

There are four foundational elements of CodeCatalyst that are designed to help minimize distraction and maximize joy in the software development process: blueprints for quick project creation, actions-based CI/CD automation for managing day-to-day software lifecycle tasks, remote Dev Environments for a consistent build experience, and project and issue management for a more streamlined team collaboration.

Blueprints get you started quickly. CodeCatalyst blueprints set up an application code repository (complete with a working sample app), define cloud infrastructure, and run pre-configured CI/CD workflows for your project. Blueprints bring together the elements that are necessary both to begin a new project and deploy it into production. Blueprints can help to significantly reduce the time it takes to set up a new project. They are built by AWS for many use cases, and you can configure them with the programming languages and frameworks that you need both for your application and the underlying infrastructure-as-code. When it comes to incorporating existing tools like Jira or GitHub, CodeCatalyst has extensions that you can use to integrate them into your projects from the beginning without a lot of extra effort. Learn more about blueprints.

“CodeCatalyst helps us spend more time refining our customers’ build, test, and deploy workflows instead of implementing the underlying toolchains,” said Sean Bratcher, CEO of Buildstr. “The tight integration with AWS CDK means that definitions for infrastructure, environments, and configs live alongside the applications themselves as first-class code. This helps reduce friction when integrating with customers’ broader deployment approach.”

Actions-based CI/CD workflows take the pain out of pipeline management. CI/CD workflows in CodeCatalyst run on flexible, managed infrastructure. When you create a project with a blueprint, it comes with a complete CI/CD pipeline composed of actions from the included actions library. You can modify these pipelines with an action from the library or you can use any GitHub Action directly in the project to edit existing pipelines or build new ones from scratch. CodeCatalyst makes composing these actions into pipelines easier: you can switch back and forth between a text-based editor for declaring which actions you want to use through YAML and a visual drag-and-drop pipeline editor. Updating CI/CD workflows with new capabilities is a matter of incorporating new actions. Having CodeCatalyst create pipelines for you, based on your intent, means that you get the benefits of CI/CD automation without the ongoing pain of maintaining disparate tools.

“We needed a streamlined way within AWS to rapidly iterate development of our Reading Partners Connects e-learning platform while maintaining the highest possible quality standards,” said Yaseer Khanani, Senior Product Manager at Reading Partners. “CodeCatalyst’s built-in CI/CD workflows make it easy to efficiently deploy code and conduct testing across a distributed team.”

Automated dev environments make consistency achievable. A big friction point for developers collaborating on a software project is getting everyone on the same set of dependencies and settings on their local machines, and ensuring that all other environments from test to staging to production are also consistent. To help address this, CodeCatalyst has Dev Environments that are hosted in the cloud. Dev Environments are defined using the devfile standard, ensuring that everyone working on a project gets a consistent and repeatable experience. Dev Environments connect to popular IDEs like AWS Cloud9, VS Code, and multiple JetBrains IDEs, giving you a local IDE feel while running in the cloud.

“Working closely with customers in the software developer education space, we value the reproducible and pre-configured environments Amazon CodeCatalyst provides for improving learning outcomes for new developers. CodeCatalyst allows you to personalize student experiences while providing facilitators with control over the entire experience,” said Tia Dubuisson, President of Belle Fleur Technologies.

Issue management and simplified team onboarding streamline collaboration. CodeCatalyst is designed to help provide the benefits of building in a unified software development service by making it easier to onboard and collaborate with teammates. It starts with the process of inviting new collaborators: you can invite people to work together on your project with their email address, bypassing the need for everyone to have an individual AWS account. Once they have access, collaborators can see the history and context of the project and can start contributing by creating a Dev Environment.

CodeCatalyst also has built-in issue management that is tied to your code repo, so that you can assign tasks such as code reviews and pull requests to teammates and help track progress using agile methodologies right in the service. As with the rest of CodeCatalyst, collaboration comes without the distraction of managing separate services with separate logins and disparate commercial agreements. Once you give a new teammate access, they can quickly start contributing.

New to CodeCatalyst since the Preview launch

Along with the announcement of general availability, we are excited to share a few new CodeCatalyst features. First, you can now create a new project from an existing GitHub repository. In addition, CodeCatalyst Dev Environments now support GitHub repositories allowing you to work on code stored in GitHub.

Second, CodeCatalyst Dev Environments now support Amazon CodeWhisperer. CodeWhisperer is an artificial intelligence (AI) coding companion that generates real-time code suggestions in your integrated development environment (IDE) to help you build software more quickly. CodeWhisperer is currently supported in CodeCatalyst Dev Environments using AWS Cloud9 or Visual Studio Code.

Third, Amazon CodeCatalyst recently added support to run workflow actions using on-demand or pre-provisioned compute powered by AWS Graviton processors. AWS Graviton Processors are designed by AWS to deliver the best price performance for your cloud workloads running in Amazon Elastic Compute Cloud (Amazon EC2). Customers can use workflow actions running on AWS Graviton processors to build applications that target Arm architecture, create multi-architecture containers, and modernize legacy applications to help customers reduce costs.

Finally, the library of CodeCatalyst blueprints is continuously growing. The CodeCatalyst preview release included blueprints for common workloads like single-page web applications, serverless applications, and many others. In addition, we have recently added blueprints for Static Websites with Hugo and Jekyll, as well as Intelligent Document Processing workflows.

Learn more about CodeCatalyst at Developer Innovation Day

Next Wednesday, April 26th, we are hosting Developer Innovation Day, a free 7-hour virtual event all about helping developers and teams learn to be productive and collaborate, from discovery to delivery to running software and building applications. Developers can discover how the breadth and depth of AWS tools and the right practices can unlock their team’s ability to find success and take opportunities from ideas to impact.

CodeCatalyst plays a big part in Developer Innovation Day, with five sessions designed to help you see real examples of how you can spend more time doing the work you love best! Get an overview of the service, see how to deploy a working static website in minutes, learn how to collaborate effectively with teammates, and more.

Try CodeCatalyst

Ready to try CodeCatalyst? You can get started on the AWS Free Tier today and quickly deploy a blueprint with working sample code. If you would like to learn more, you can read through a collection of DevOps blogs about CodeCatalyst or read the documentation. We can’t wait to see how you innovate with CodeCatalyst!

The return of ECMAScript 2023 (and Angular)

#​634 — April 13, 2023

Read on the Web

JavaScript Weekly

The JavaScript Equality Table Game — Minesweeper will feel like a walk in the park after this reminder of the horrors of JavaScript’s ==. If you need to go in depth, Section 7.2.14 of the ECMAScript spec will help, but otherwise? Stick to three equals (===) unless you have a good reason not to.

Reinis Ivanovs

htmx 1.9 Released — htmx (homepage) is an increasingly popular library outside of the JavaScript space as it lets folks use things like WebSockets, SSE, AJAX, and CSS transitions by marking up HTML rather than writing lots of JavaScript. v1.9 adds support for view transitions and generalized inline event handling. The code examples are worth a look – htmx makes a lot possible, with rather little tooling or markup needed.

htmx team

Supercharge AWS S3 Video Streaming with ImageKit’s Video API — Get adaptive bitrate streaming, video optimizations, format conversions, and real-time transformations and watermarking by attaching ImageKit with your AWS S3 bucket.

ImageKit sponsor

The ECMAScript® 2023 Language Spec Steps Forward — After prematurely announcing the progression of the ES2023 spec in February, we can now announce: TC39 has approved the ECMAScript 2023 spec, and while it remains a candidate, it’s now a step closer to eventual ECMA General Assembly approval. The finished proposals list for 2023 now includes Array find from last, hashbang support, Symbols as WeakMap keys, and change Array by copy.

ECMA International

IN BRIEF:

▶️ Angular is back with a vengeance, says Fireship.

Serverless platform AWS Lambda has introduced response streaming on its JS runtime (for now) so you can send response data as it becomes available rather than all at once. (→ Via Serverless Status)

/[]/ A look at a seemingly JS-specific quirk in regular expressions when empty character classes are used.

An analysis of languages used in GitHub pull requests shows JavaScript/TypeScript leading the way with Python just slightly behind. The comments went in lots of odd directions here.

Seven folks at Vue Amsterdam 2023 shared their ▶️ tips on getting started with Vue.js.

▶️ An hour-long chat on the State of Node.js with some leading figures.

Node v18.16.0 (LTS) has been released with backported support for compiling JavaScript code into a single executable app. Node 19’s Ada URL parser also appears.

A striking visual introduction to React and its fundamental concepts.

RELEASES:

Node.js v19.9 (Current)

Puppeteer v19.9 – It’s the week for almost 20s.

pnpm 8.2 – Efficient npm alternative.

Redwood 4.5 – Popular app framework.

Storybook 7.0 – With an official release post this time.

Articles & Tutorials

Ranger: Use a Range-Like Syntax for Anything? — const numbers = 1[[…8]], anyone? This is a neat trick for a bit of syntactic sugar, but I’m not sure it would pass the sniff test for most teams. You might find the implementation interesting to check out though. Long may this sort of experimentation continue.

Jon Randy

A proposal for JavaScript to get built-in range support is at stage 2.

Build and Deploy ‘23: May 3rd – Save the Date! — The ultimate CI/CD virtual conference – best practices and end-user success stories from DevOps experts. Plus, a keynote from Emily Freeman, author of DevOps for Dummies.

Codefresh sponsor

Trying Node’s Built-In Test Runner — In 2022, Node gained an experimental built-in test runner (node:test). It’s going to become stable in the forthcoming Node v20, so it’s a good time to look at how it works and how it compares to other solutions you might already be using.

Gleb Bahmutov
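As a quick, hedged taste of the node:test API (the file name and assertions below are illustrative, not taken from the article):

// math.test.ts (compile or use a TS loader, then run with `node --test`)
import { test } from 'node:test';
import assert from 'node:assert/strict';

test('adds two numbers', () => {
  assert.equal(1 + 2, 3);
});

test('supports nested subtests', async (t) => {
  await t.test('throws on invalid JSON', () => {
    assert.throws(() => JSON.parse('not json'));
  });
});

In Node 18 the runner still emits an experimental warning; Node 20 is where it is slated to become stable, as the article notes.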

▶  The Right Way To Merge JavaScript Objects — In just one minute, too.

Jack Herrington

Ref vs. Reactive: What to Choose When Using Vue 3 Composition API?

Michael Hoffmann

How to Stream File Uploads to S3 Object Storage from Node.js

Austin Gil

How to Contribute to a Project You Have No Idea About

Michal Warda

Code & Tools

Reveal.js 4.5: An HTML Presentation Framework — Brings elegant presentations to anyone with a Web browser. v4.5 was just released with support for jumping to specific slides, a few new themes, and with live reload working with files in subfolders.

Hakim El Hattab

List.js: Add Search, Sort, Filters, and More to Tables and Lists — A handy library for adding search, sort, filters and flexibility to tables, lists or other HTML elements. Want an example? Why, of course.

Jonny Strömberg

Quokka.js – #1 JavaScript Scratchpad for VS Code — With 2M+ downloads, Quokka.js is the #1 tool for exploring and testing JavaScript/TypeScript. Code runs immediately as you type.

Wallaby.js sponsor

Queue: Async Function Queue with Adjustable Concurrency — Exports a class Queue that implements most of the Array API.

Jesse Tane

Yet Another React Lightbox — Add a lightbox component to your projects “in minutes” – there are several examples to try, as well as a playground with adjustable settings. GitHub repo.

Igor Danchenko

Sandpack 2.6: Component Toolkit for Creating Live Code Editing Experiences — Created by the folks at CodeSandbox, so they surely know what they’re doing in this space. GitHub repo.

CodeSandbox

Easy to Use, Full-Stack Application Monitoring

TelemetryHub sponsor

TS Writer: A Template String Template Engine for Generating Code at Runtime — Rather niche, but aimed at situations where you might need to generate code at runtime in TypeScript.

tinylibs

Minimatch 9.0
↳ Glob matcher library.
     minimatch("bar.foo", "*.foo")

hls.js 1.4
↳ Play HLS in browsers with support for MSE.

Partytown 0.8
↳ Relocate third-party scripts off the main thread.

Plasmo 0.68
“It’s like Next.js for browser extensions”

Obsidian 8.0 – GraphQL, built for Deno.

MUI X 6.1 – React component suite.

TestCafe 2.5 – Automate end-to-end web testing.

Maquette 3.6 – Lightweight virtual DOM library.

Venom 5.0 – WhatsApp bot library.

Jobs

Find JavaScript Jobs with Hired — Hired makes job hunting easy: instead of chasing recruiters, companies approach you with salary details up front. Create a free profile now.

Hired

Full Stack JavaScript Engineer @ Emerging Cybersecurity Startup — Small team/big results. Fun + flexible + always interesting. Come build our award-winning, all-in-one cybersecurity platform.

Defendify

Got a job listing to share? Here’s how.


Building a CRM System: Does CRM Require Coding?

Are you curious whether CRM requires coding? Perhaps you are researching answers to questions like: Do I need to know how to code to use a CRM system? What coding language is used for CRM systems? How difficult is it to learn CRM coding?

Understanding the complexities of Customer Relationship Management (CRM) requires an in-depth knowledge of how businesses interact with customers. This includes how data is collected, stored, and used to improve customer experience. With a good background in coding, businesses can create custom CRM solutions that are tailored to their customer base.

By reading this article you will learn what a CRM system is, whether it requires coding, and its key benefits. We have also compiled our top 10+ CRM systems in 2023 that don’t require coding. Let’s dive deeper into it!

What is CRM System and Does It Require Coding?

When it comes to maintaining client relationships, the short answer to the question “Does customer relationship management require coding?” is “No, it does not.”

Customer relationship management (CRM) is an effective technology used by businesses to better manage their interactions with prospects and customers. While some CRM systems need coding to be utilized, most do not. The majority of CRM systems are made with an intuitive, user-friendly interface that enables users to manage their customer interactions quickly and effectively without any coding knowledge. This enables even individuals with a minimal level of technical experience to utilize CRM systems to their fullest potential.

Benefits of CRM

CRM systems have several benefits and may assist your business in a variety of ways. Let’s dive deep into the key benefits that CRM could provide.

Benefit #1. Improved customer service

CRM systems were created to enhance relationships between businesses and their customers, and that is still their primary advantage. Tracking customer information such as demographics, purchasing history, and messages sent through all channels allows businesses to gain easy access to all the information they need about their customers. This ensures that employees have all the necessary information to provide a superior customer experience, resulting in higher customer satisfaction.

Benefit #2. Increased sales

Using CRM systems can help you optimize your sales process, create a well-defined sales pipeline, automate time-consuming tasks, and gain visibility into all of your sales data. This can potentially result in increased sales and productivity. You can use CRM to define a dependable sales process that can be modified as needed. It also automates tasks such as lead nurturing and follow-up emails, saving time for your team. Finally, you can use the CRM to track and analyze your sales data, making it easier to find opportunities for improvement and maximize your sales.

Benefit #3. Improved customer loyalty

After you have acquired and converted leads, it is important to keep them as customers and foster customer loyalty. Low customer retention can have harmful effects on your business, such as reduced revenue or a disrupted cash flow. Utilize your CRM and the data it provides about your customers to promote repeat business. The CRM will offer sentiment analysis, automated ticketing, customer support automation, and user behavior tracking to help identify issues and promptly handle them with your customers.

Benefit #4. Extensive analytics

Having plenty of data about your customers is essential, but it is equally important to be able to interpret it and use it to your advantage. CRM systems can help you make sense of the data by providing built-in analytic capabilities. This allows for a deeper understanding of the data, as it is broken down into understandable metrics such as click-through rates, bounce rates, and demographic information. With this knowledge, you can measure the success of your marketing campaigns and make adjustments as needed.  

Benefit #5. Higher efficiency and productivity

CRM software utilizes marketing automation technology to streamline mundane tasks such as drip campaigns, allowing employees to focus on more complex work. This technology can help make sure that nothing falls through the cracks by ensuring that all important emails are sent to the appropriate recipients. Furthermore, CRM software provides a dashboard displaying how your business is running and where your processes can be optimized.

Benefit #6. Centralized information database

CRM software is great for helping businesses manage their customer relationships, providing a single, centralized database with all the information about customers that anyone in the company needs. Sales reps can quickly and easily see what products a customer has shown interest in. If the customer has interacted with the company before, records of those interactions will be included in the CRM and can be used to inform future marketing and sales strategies. This eliminates the need to search through old files and records and ensures a smoother and more efficient customer experience.

Benefit #7. Maintaining communication with prospective leads

Lead nurturing can be a time-consuming and complicated endeavor, with many steps and chances to communicate. Utilizing a CRM simplifies the process, notifying your staff when it is time to contact the prospect and recording every point of contact, from emails to phone calls.

Benefit #8. Improved customer segmentation

Having hundreds of contacts can be daunting and overwhelming. For instance, how do you determine which customers should get your email regarding your new product in-store? A CRM can automatically categorize your contact lists based on your chosen criteria, making it easier to find the ones you need to contact whenever it is needed. You can arrange contacts by place, gender, age, buyer stage, and other factors.

Benefit #9. Automated sales reports

The CRM software’s dashboard and reporting features make it easy for your team to collect and organize data about prospective and current customers. It also helps employees automate and manage their pipelines and processes. With the CRM, team members can track their quotas and goals, evaluate their performance, and check their progress on each project with ease.

Benefit #10. Improved sales forecasting

CRM software’s automated sales reports allow you to review your past performance and strategically plan for the future. With these reports, you can identify trends and gain insight into the potential of your future sales cycle performance. This information can help you adjust your goals and metrics accordingly. 

Benefit #11. Improved internal communication

A CRM not only helps your business communicate with customers but also allows your employees to communicate more effectively with each other. Through the CRM, employees can stay up-to-date on customer interactions, maintain a consistent brand voice, and send notes, alerts, messages, and emails in one easy-to-use system.

Top 10+ CRM Systems That Don’t Require Coding in 2023

Salesforce

When looking for the top CRM applications, you’ll likely come across Salesforce as one of the top choices for small businesses. What makes this software stand out is that it recognizes that businesses have different requirements, unlike other CRM solutions which offer bundles of features. With Salesforce, users can pick and choose which features they need, and can always add or remove features as needed. 

Flatlogic

Flatlogic provides a simple method for developing a custom CRM solution with complete control over the source code and scalability. Flatlogic’s all-in-one CRM software helps businesses streamline their customer interaction and support procedures. It includes effective tools for automating marketing and sales processes as well as managing contacts, leads, and sales opportunities. Moreover, it has capabilities like data visualizations, tailored and automated emails, and automatic client segmentation. Businesses can monitor client interactions, foster connections, and streamline manual procedures using Flatlogic’s CRM to increase customer satisfaction.

Zoho CRM

Zoho started as a cloud-based CRM platform, but over the years it has evolved into an all-in-one suite that helps businesses manage their invoicing and customer management needs. Though not open source, it can be integrated with other platforms via third-party apps. For example, Zapier can be used to connect to eCommerce platforms like Shopify. For small businesses, the Standard plan is recommended, which offers custom reports and analytics, lead scoring, webhooks, and more. This plan is suitable for startups or new businesses.

HubSpot CRM

In 2014, HubSpot launched its free CRM service, broadening its reach in a fresh direction. With this expansion, you can now seamlessly integrate marketing and sales operations into one platform. The HubSpot suite is not only ideal for running inbound marketing campaigns but also for enhancing customer connections and driving more sales. Moreover, the unlimited team feature enables multiple users to come together and form a productive team. Upon signing up for the HubSpot CRM forever free plan, you can choose your level of expertise with CRM software. Subsequently, you can gain a comprehensive orientation of the software and begin importing your earlier data, sorting contacts, and inviting team members. The HubSpot dashboard collection consolidates sales, marketing, service, and CMS, making it easy to manage. Additionally, you can take advantage of six unique sales reports that help track your monthly progress and keep the contact information organized in a centralized, customizable database. As a bonus, HubSpot CRM offers advanced features such as email tracking, deal pipelines, messenger integrations, and email templates – features not typically found in other free CRMs.

Bitrix24

If you’re searching for a free and open-source CRM, Bitrix24 is a great option. Even though the open source side is handled by a Bitrix24 partner, the cost-free features it provides are what make it such an attractive choice. It offers limitless feature records, sales tracking, email marketing, and more – all without charging a top-dollar price. With over 7,000,000 companies using Bitrix24, it has become a go-to tool. The only downside is the learning curve (even with the paid version) compared to platforms like Freshsales. Despite this, it is an ideal CRM software for small businesses that need collaboration software that doesn’t break their budget and still offers almost all of the advanced features. With this in mind, it’s an excellent choice for improving collaboration and communication, as well as managing clients with ease. 

Pipedrive

Pipedrive is an easily accessible CRM tool. It provides detailed sales-based reporting tools and a visually appealing user interface that makes it simple to track leads and contacts. The platform also offers automated call tracking and a chatbot feature, allowing for improved communication both internally and externally. This makes Pipedrive an ideal CRM tool for small to medium-sized businesses, providing a comprehensive solution that covers most of their needs. 

Insightly

Insightly is a great and budget-friendly option for businesses looking for an easy, free, and open-source CRM. It offers fast onboarding, a free lifetime plan, and a comprehensive tracking system, perfect for newcomers. While it may have slightly fewer features than some of its competitors, Insightly is an excellent tool for carefully managing customer relationships and monitoring the sales pipeline to gain more insights into the sales process. The main dashboard provides detailed stats on sales and pipelines, and you can store up to 25,000 records, including contacts, leads, organizations, reports, and projects. All in all, I believe Insightly is a sensible choice for those who don’t need a lot of features and are looking to get started on a CRM right away.

Apptivo

Apptivo is an award-winning cloud-based CRM software that provides users with an extensive suite of 50+ applications to help them reduce inaccuracies, save time, and get access to essential information and tools to collaborate with their customers more effectively. This software offers a variety of services such as CRM, project management, invoicing, and much more. With over 200,000 users, Apptivo is one of the most popular CRMs. It also offers plenty of integrations and solutions that make it stand out from the rest.

SugarCRM

With over 50,000 companies using it and 7 million downloads, SugarCRM is one of the most sought-after CRM applications on the market. The application is available in 9+ languages and is capable of accommodating organizations of all sizes, from small startups to those with more than 10,000 employees. Open-source in nature, this CRM is highly customizable and adaptable to different structures and hierarchies. As such, it requires a dedicated team to manage, operate and maintain its smooth functioning. 

Nutshell

With Nutshell, you can streamline your sales process and optimize your efficiency. It has a variety of views to help you manage your leads and sales, including board view, chart view, list view, and map view. This powerful yet simple CRM tool is used by over 25,000 business professionals around the world to keep track of their customer data. 

Freshsales

Freshsales is a great option for those looking for a free and open-source CRM software suite. Focusing solely on scalability and sales, Freshsales can be used to engage with customers, close deals, and attract new sales. The free forever plan offers unlimited users, contacts, and premium support, including email, chat, and phone support. However, Freshsales Free is missing out on some advanced features like email tracking, behavior analytics, and reports. Though there is a minimal learning curve, it is best for startups, and not necessarily the best choice for small to mid-sized businesses. Overall, Freshsales is a good CRM to start with for getting a new lead and nurturing it throughout its life cycle. 

Copper

Those seeking essential features in CRM software such as email marketing, calendar/reminder systems, and client tracking should consider Copper. It offers a direct import of records from Gmail communications and allows users to sync meetings with contacts. Additionally, Copper’s integrations with most common applications such as Intuit Quickbooks, Slack, DocuSign, Zapier, and Mailchimp are especially beneficial for G Suite users. 

Summing Up

Because most CRM software is designed for non-technical users, coding experience is usually not necessary. Businesses of all sizes can use CRM software to manage client contacts, streamline procedures, and enhance customer service thanks to pre-built templates, automated workflows, and drag-and-drop interfaces. Coding skills may be required only for more intricate adaptations, such as unique integrations or reports.

The post Building a CRM System: Does CRM Require Coding? appeared first on Flatlogic Blog.


Behind the Scenes on AWS Contributions to Cloud Native Open Source Projects

Amazon Elastic Kubernetes Service (Amazon EKS) is well known in the Kubernetes community. But few realize that AWS engineers are closely involved and contributing upstream to Kubernetes and to many more cloud native open source projects.

In the past year alone, AWS contributed significantly to containerd, Cortex, etcd, Fluentd, nerdctl, Notary, OpenTelemetry, Thanos, and Tinkerbell. We employ maintainers and contributors on these projects and we will contribute more to these and other projects in the coming year. Here’s a behind-the-scenes look at our contributions and why we’re investing in the open source projects we support. You can also meet many of our contributors in the AWS booth at KubeCon Europe in Amsterdam, April 18-21, 2023 and hear from them in our virtual Container Day event 9 a.m. – 4 p.m. CEST on April 18.

“Amazon EKS is committed to open source and we are spending a lot of our cycles now focused on contributing back to the community. Kubernetes is part of a community that’s bigger than AWS and so we’re continuing to be committed to maintaining and helping that community to be successful because without it, we wouldn’t exist, either,” said Barry Cooks, Vice President, Kubernetes, at AWS and a Cloud Native Computing Foundation (CNCF) governing board member.

AWS contributes to Kubernetes and etcd

Today, AWS is heavily involved in open source, cloud native projects. Consider, for example, some of our recent key contributions to Kubernetes and etcd, the underlying data store for Kubernetes.

“We’re building the AWS cloud provider, contributing to CAPI (cluster API), and serve as part of the security response committee. We helped implement gzip optimization which improves the performance of Kubernetes clients,” said Nathan Taber who leads the product team for Kubernetes at AWS, in a keynote at KubeCon North America 2022. “With etcd we’re bringing our operational learnings from running just so much etcd at scale, back into the community.”

The AWS cloud provider for Kubernetes is the open source interface between a Kubernetes cluster and AWS service APIs. This project allows a Kubernetes cluster to provision, monitor, and remove AWS resources necessary for operation of the cluster.

As of Kubernetes 1.27, AWS has just finished a multi-year effort to migrate our legacy cloud provider out of tree to an external cloud provider. The cloud provider migration reduces binary bloat in the main kubernetes/kubernetes (k/k) repository, as well as reduces dependency complexity and the surface area for security vulnerabilities.

AWS has also built a webhook framework that allows cloud providers to host webhooks in their cloud-controller-managers, which makes certain migration tasks easier. One use case for this is helping other cloud providers to migrate the persistent volume labeller admission controllers from the API server code, which is one of the last areas of cloud provider specific code that needs to be migrated out of core Kubernetes.

“We’ve included a lot of space in our planning for upstream open source work this year,” said Nick Turner, software developer on the AWS Kubernetes team and a chair in Kubernetes SIG-cloud-provider. “Expect us to keep up our contributions to the cloud provider and the load balancer controller as well as increase our investments in the AWS IAM authenticator for Kubernetes and KMS encryption provider.”

These and other Kubernetes contributions bring value to the entire Kubernetes community as well as to the EKS service and its customers.

Since KubeCon Detroit last fall, the EKS-etcd team has contributed numerous improvements to etcd. Chao Chen contributed to the effort to improve testing mechanisms for etcd by unifying the test frameworks used by etcd tests. Baoming Wang contributed an important metric to the Kubernetes API server code base which will help catch data corruption issues early. We’ve also worked on building a linearizability test suite, made various improvements to the core etcd database and its BoltDB backend, contributed to documentation, made Helm more resilient to transient etcd-side errors, and fixed an issue with the installation script for argo-cd-helmfile.

What’s driving AWS to contribute more to cloud native open source

Like most modern companies, AWS builds many of its services with open source components. There are several business and technical reasons we do this, which we’ve outlined in an article on The New Stack about why we invest in sustainable open source. We recognize that the success of our services depends on the success of those underlying open source projects.

Given that most of the open source projects that AWS supports underpin specific services, AWS tasks all engineers working in services, regardless of their assigned sub-service teams, to contribute in any way that they can to those upstream projects.

The result is a virtuous cycle that promotes mutually beneficial growth. As AWS services grow, so too do the open source projects upon which they are based because of AWS contributions and support. Conversely, as these open source projects grow from the contributions of other companies and developers, so do the benefits to the AWS services that depend upon them.

AWS contributions focus on performance and scale

AWS contributions to open source typically come as a practical matter in the form of bug fixes, code reviews, documentation, new features, or security enhancements. Like many developers working in the open source space, AWS engineers often work to address issues that arise in the course of their day jobs and then share the fixes with the rest of the open source community. Similarly, new features for an open source project are developed by AWS engineers to expand the project’s scale or performance which in turn increases the project’s usability, stability, and overall appeal.

Because AWS has a large number of Kubernetes clusters under management, it has a unique opportunity to test the limits of open source software and strengthen it well beyond its initial core. Many of the contributions that our team members make to upstream Kubernetes, etcd, containerd, and other projects therefore center on making sure that we provide insights to the upstream community on where things break down in scaling, production, and operational readiness.

The resulting insights provide value for the entire open source community as well as our own customers.

Take, for example, the compression feature that curiously acted as a latency expander. AWS engineer Shyam Jeedigunta was looking at the logs and metrics collected from thousands of production EKS clusters. He determined that Gzip compression is enabled inside the Kubernetes API server to reduce demand on network bandwidth and decrease latency. However, the compression was actually increasing the latency of large list requests made by clients to the Kubernetes API server. Shyam, who is also co-chair of the Kubernetes scalability special interest group (SIG), took a deep dive into the issue to investigate whether a particular compression level created the problem and, if so, whether the compression level could be reduced. Could Gzip compression be disabled entirely? What impact would that have on latency and network bandwidth?

Answers to questions like this one lead to contributions upstream in etcd and core Kubernetes from AWS service teams. Customers and others often report these kinds of issues to the project as well, but the nature of the problem isn’t clear until it’s viewed on 1,000 nodes and 200,000 objects of a certain kind. AWS engineers diagnose what’s going on, put together troubleshooting information, and collate information into proposals on how to fix the problem(s) to upstream to Kubernetes. AWS likes to spearhead fixing issues that arise from running the projects at scale.

Key AWS contributions

AWS contributes to many Kubernetes sub projects and SIGs. For example, Micah Hausler and Sri Saran Balaji Vellore Rajakumar serve on the Kubernetes Security Response Committee (SRC), Davanum Srinivas (Dims) chairs SIG-Architecture and SIG-k8s-infra, and Nick Turner is a chair in SIG-cloud-provider.  Key contributions have gone into projects including containerd, Cortex, cdk8s, CNI, nerdctl and Prometheus. Innovations have also been substantial and include TorchServe, improved ARM support through AWS Graviton, and the Virtual GPU plugin. However, this is not an exhaustive or complete list of AWS contributions and innovations in the cloud native community.

On containerd, for example, AWS employs two maintainers who contribute features and help ensure the project’s general health and security. Key contributions from AWS engineers to the containerd project include OpenTelemetry integration in the 1.7.0 release, improved tracing, and improved fuzzing integration.

“It’s been awesome to see the growth on the container runtime team here at AWS these past few years. I love to see the eagerness to learn not just *how* to contribute, but how to do it well and really benefit the broader community,” said Phil Estes, a principal engineer at AWS and a containerd maintainer.

Nerdctl, a Docker-compatible CLI for containerd and a containerd sub-project, is used by other open source projects such as Lima, Finch, and Rancher Desktop. AWS engineers significantly improved nerdctl’s compose support by adding 11 of the 13 missing compose commands. We enhanced nerdctl’s image signing/verification support by contributing cosign support for nerdctl compose and notation support for nerdctl. Engineer Jin Dong recently became the first reviewer for the project from AWS.

AWS services are also standardizing on OpenTelemetry, a set of open source tools and standards for collecting metrics, logs, and traces to measure application performance. AWS Distro for OpenTelemetry (ADOT), OpenSearch, and CloudWatch are all building on OpenTelemetry and contribute back to the upstream project. All ADOT code is 100% open source and contributed upstream. Key contributions include: adding functionality to upstream observability components such as OpenTelemetry language SDKs, collectors, and agents.

“Amazon is the fourth largest contributor to OpenTelemetry with a dedicated maintainer and many contributors working on the project. A key contribution has been improving collector and metric stability, including improved Prometheus interoperability with OpenTelemetry,” said Taber.

A fourth example is Cortex where AWS is the top supporter of the project and employs three maintainers. As AWS runs this project at scale, engineers have the opportunity to identify and fix scaling cliffs before they become a problem for the rest of the community. Some of the key contributions are new features and performance improvements. Examples include partition compactor, Ring DynamoDB Multikey KV, out of order samples ingestion, snappy-block gRPC compression, ARM images, and Thanos PromQL engine integration.

We have also contributed bug fixes to Thanos, a tool for setting up highly available Prometheus instances with long term storage. Thanos is a CNCF incubating project which Cortex depends on. We participated in the development of the new Thanos PromQL engine and open sourced a tool that could use fuzzing for correctness testing which has already caught a few bugs.

AWS employs four maintainers on Tinkerbell, a cloud native open source bare metal provisioning engine for EKS Anywhere and a CNCF Sandbox project. Key contributions include organizing the project roadmap, VLAN support, a Kubernetes native backend, out-of-band management Kubernetes controller, Helm Chart deployment, and Cluster API provider updates.

“Our team has done a lot of work to update the Tinkerbell backend from Postgres to native Kubernetes,” said Taber.

AWS employs three maintainers in Notation, a sub project of Notary under the CNCF, and is the third largest code contributor to Notary. Notation enables the generation of cryptographic signatures for container images so users can verify that they come from a trusted source or process. AWS founded the sub project with other contributors to come up with specifications for signature format, generation, verification, and revocation. As part of this work we also defined a process for evaluating signature envelope formats like COSE ensuring that they met a high security bar before they were used in Notation.

AWS employees have either written or reviewed the majority of code contributions for the core Notation libraries and CLI. AWS also employs a maintainer of Ratify so Kubernetes users can easily enable policies for signature verification with their existing admission controllers. Similarly, we employ a maintainer of ORAS so signatures can easily be pushed to OCI registries. Notation enables users to define granular trust policies specifying which sources they want to trust, to balance deployment safety and security needs, and to choose flexible options for secure signing key storage.

We have contributed to many other open source projects as well, including Crossplane, for which AWS added support for EKS IRSA in the China region and fixed Amazon Route 53 wildcard support, and Backstage, with AWS Proton and AWS Code Suite (AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy).

“We’re very excited about doing more development in the open, sharing that with our customers, and working directly in some cases with customers on their needs in open source projects and working together to make the community stronger in the Kubernetes space,” Cooks said.

AWS is open

We want to hear from you. AWS engineers are open to helping community members through collaboration and contribution opportunities. Tell us how we can help meet your needs.

AWS engineers, solutions architects, and product managers are hanging out on the Kubernetes community and the CNCF community Slack channels. Channels where you can reach out to us include the provider AWS channel and Karpenter channel, and the AWS controllers for Kubernetes channel on the Kubernetes Slack.

Find us and tell us what you’d like us to work on, or let us know about a particular issue you found in one of these upstream projects that you think our engineers can help move the needle on. Come find us and talk to us in the CNCF’s AWS Slack channel and join us for our virtual Container Day on April 18, before KubeCon EU.


Publish Amazon DevOps Guru Insights to ServiceNow for Incident Management

Amazon DevOps Guru is a fully managed AIOps service that uses machine learning (ML) to quickly identify when applications are behaving outside of their normal operating patterns and generates insights from its findings. These insights generated by Amazon DevOps Guru can be used to alert on-call teams to react to anomalies for mission-critical workloads. Many customers already use incident management systems like ServiceNow to identify, analyze, and resolve critical incidents which could impact business operations. ServiceNow is an IT Service Management (ITSM) platform that enables enterprise organizations to improve operational efficiencies. Among its products is Incident Management, which provides customers a single-pane view and allows them to restore services and resolve issues quickly.

This blog post will show you how to integrate Amazon DevOps Guru insights with ServiceNow to automatically create and manage Incidents. We will demonstrate how an insight generated by Amazon DevOps Guru for an anomaly can automatically create a ServiceNow Incident, update the incident when there are new anomalies or recommendations from Amazon DevOps Guru, and close the ServiceNow Incident once the insight is resolved by Amazon DevOps Guru.

Overview of solution

This solution uses a combination of event-driven architecture and serverless technologies to integrate DevOps Guru insights with ServiceNow. When an Amazon DevOps Guru insight is created, an Amazon EventBridge rule captures the insight as an event and routes it to an AWS Lambda function target. The Lambda function interacts with ServiceNow through its REST API to create, update, and close an incident for the corresponding DevOps Guru events captured by EventBridge.

The EventBridge rule can be customized to capture all DevOps Guru insights or narrowed down to specific insights. In this blog post, we capture all DevOps Guru insights and perform actions in ServiceNow for the DevOps Guru events below (an illustrative rule sketch follows the list):

DevOps Guru New Insight Open
DevOps Guru New Anomaly Association
DevOps Guru Insight Severity Upgraded
DevOps Guru New Recommendation Created
DevOps Guru Insight Closed
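For illustration only, here is a rough AWS CDK (TypeScript) sketch of the wiring described above. The sample connector in this post is actually defined with a SAM template and written in Java; the handler name, asset path, and placeholder values below are hypothetical.

import { Stack } from 'aws-cdk-lib';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

class ConnectorStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // The connector function that talks to the ServiceNow REST API.
    const connectorFn = new lambda.Function(this, 'ServiceNowConnector', {
      runtime: lambda.Runtime.JAVA_11,
      handler: 'com.example.Connector::handleRequest',        // hypothetical handler
      code: lambda.Code.fromAsset('./target/connector.zip'),  // hypothetical artifact
      environment: { SERVICE_NOW_HOST: 'SNOW_HOST', SECRET_NAME: 'SNOW_CREDS' },
    });

    // Matching on the event source captures all DevOps Guru events; add
    // detail-type entries (e.g. "DevOps Guru New Insight Open") to narrow it.
    new events.Rule(this, 'DevOpsGuruInsights', {
      eventPattern: { source: ['aws.devops-guru'] },
      targets: [new targets.LambdaFunction(connectorFn)],
    });
  }
}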

Figure 1: Amazon DevOps Guru Integration with ServiceNow using Amazon EventBridge and AWS Lambda

Solution Implementation Steps

Prerequisites

Before you deploy the solution and proceed with this walkthrough, you should have the following prerequisites:

Gather the hostname for your ServiceNow cloud instance. If you do not have a ServiceNow instance, you can request a developer instance through the ServiceNow Developer page.
Gather the credentials of a ServiceNow user who has permissions to make REST API calls to ServiceNow, specifically to the Table API. If you don’t have a user provisioned, you can create one by following the steps in Getting started with the REST API in the ServiceNow documentation.
Create a secret in Secrets Manager to store the ServiceNow credentials created in the previous step. You can choose any name for the secret, but it should have two key/value pairs: one for the username and one for the password.
Enable DevOps Guru for your applications by following these steps or you can follow this blog to deploy a sample serverless application that can be used to generate DevOps Guru insights for anomalies detected in the application.
Install and set up SAM CLI – Install the SAM CLI

Download and set up Java. The version should match the runtime that you defined in the serverless function configuration in the SAM template.yaml – Install the Java SE Development Kit 11

Maven – Install Maven

Docker – Install Docker community edition

You have two options to deploy this solution: one is to deploy from the AWS Serverless Application Repository and the other is to deploy from the Command Line Interface (CLI).

Option 1: Deploy sample ServiceNow Connector App from AWS Serverless Repository

The DevOps Guru ServiceNow Connector application is available in the AWS Serverless Application Repository, a managed repository for serverless applications. The application is packaged with an AWS Serverless Application Model (SAM) template, a definition of the AWS resources used, and a link to the source code.

Follow the steps below to quickly deploy this serverless application in your AWS account:

Log in to the AWS Management Console of the account to which you plan to deploy this solution.
Go to the DevOps Guru ServiceNow Connector application in the AWS Serverless Repository and click on “Deploy”.

Figure 2: Deploy solution through AWS Serverless Repository

The Lambda application deployment screen will be displayed where you can enter the ServiceNow hostname (do not include the https prefix) and the Secret Name you created in the prerequisite steps. Click on the ‘Deploy’ button.

Figure 3: AWS Lambda Application Settings

After successful deployment the AWS Lambda Application page will display the “Create complete” status for the serverlessrepo-DevOps-Guru-ServiceNow-Connector application. The CloudFormation template creates four resources:

Lambda function, which has the logic to integrate with ServiceNow
EventBridge rule for the DevOps Guru insights
Lambda permission
IAM role

Now you can skip Option 2 and follow the steps in the “Test the Solution” section to trigger some DevOps Guru insights and validate that the incidents are created and updated in ServiceNow.

Option 2: Build and Deploy sample ServiceNow Connector App using AWS SAM Command Line Interface

As you have seen above, you can directly deploy the sample serverless application from the Serverless Application Repository with a one-click deployment. Alternatively, you can clone the GitHub source repository and deploy using the SAM CLI from your terminal.

The Serverless Application Model Command Line Interface (SAM CLI) is an extension of the AWS CLI that adds functionality for building and testing serverless applications. The CLI provides commands that enable you to verify that AWS SAM template files are written according to the specification, invoke Lambda functions locally, step-through debug Lambda functions, package and deploy serverless applications to the AWS Cloud, and so on. For details about how to use the AWS SAM CLI, including the full AWS SAM CLI Command Reference, see AWS SAM reference – AWS Serverless Application Model.

Before you proceed, make sure you have completed the Prerequisites section in the beginning which should set up the AWS SAM CLI, Maven and Java on your local terminal. You also need to install and set up Docker to run your functions in an Amazon Linux environment that matches Lambda.

Follow the steps below to build and deploy this serverless application using AWS SAM CLI in your AWS account:

Clone the source code from the GitHub repo

$ git clone https://github.com/aws-samples/amazon-devops-guru-connector-servicenow.git

Before you build the resources defined in the SAM template, you can use the validate command below, which runs cfn-lint validations on your SAM JSON/YAML template

$ sam validate --lint --template template.yaml

3. Build the application with the SAM CLI

$ cd amazon-devops-guru-connector-servicenow
$ sam build

If everything is set up correctly, you should have a success message like shown below:

Build Succeeded

Built Artifacts : .aws-sam/build
Built Template : .aws-sam/build/template.yaml

Commands you can use next
=========================
[*] Validate SAM template: sam validate
[*] Invoke Function: sam local invoke
[*] Test Function in the Cloud: sam sync --stack-name {{stack-name}} --watch
[*] Deploy: sam deploy --guided

4. Deploy the application with the SAM CLI

$ sam deploy --guided

This command will package and deploy your application to AWS, with a series of prompts that you should respond to as shown below:

Stack Name: The name of the stack to deploy to CloudFormation. This should be unique to your account and region, and a good starting point would be something matching your project name – amazon-devops-guru-connector-servicenow

AWS Region: The AWS region you want to deploy your application to.

Parameter ServiceNowHost []: The ServiceNow host name/instance URL you set up. Example: dev92031.service-now.com

Parameter SecretName []: The secret name that you set up for ServiceNow credentials in the Prerequisites.

Confirm changes before deploy: If set to yes, any change sets will be shown to you before execution for manual review. If set to no, the AWS SAM CLI will automatically deploy application changes.

Allow SAM CLI IAM role creation: Many AWS SAM templates, including this example, create AWS IAM roles required for the AWS Lambda function(s) included to access AWS services. By default, these are scoped down to minimum required permissions. To deploy an AWS CloudFormation stack which creates or modifies IAM roles, the CAPABILITY_IAM value for capabilities must be provided. If permission isn’t provided through this prompt, to deploy this example you must explicitly pass --capabilities CAPABILITY_IAM to the sam deploy command.

Disable rollback [y/N]: If set to Y, preserves the state of previously provisioned resources when an operation fails.

Save arguments to configuration file (samconfig.toml): If set to yes, your choices will be saved to a configuration file inside the project, so that in the future you can just re-run sam deploy without parameters to deploy changes to your application.

After you enter your parameters, you should see something like the following if you chose Y to view and confirm change sets. Proceed by entering ‘Y’ to deploy the resources.

Initiating deployment
=====================
Uploading to amazon-devops-guru-connector-servicenow/46bb4841f8f37fd41d3f40f86f31c4d7.template 1918 / 1918 (100.00%)

Waiting for changeset to be created..
CloudFormation stack changeset
—————————————————————————————————————————————————–
Operation LogicalResourceId ResourceType Replacement
—————————————————————————————————————————————————–
+ Add FunctionsDevOpsGuruPermission AWS::Lambda::Permission N/A
+ Add FunctionsDevOpsGuru AWS::Events::Rule N/A
+ Add FunctionsRole AWS::IAM::Role N/A
+ Add Functions AWS::Lambda::Function N/A
—————————————————————————————————————————————————–

Changeset created successfully. arn:aws:cloudformation:us-east-1:123456789012:changeSet/samcli-deploy1669232233/7c97b7f5-369d-400d-89cd-ebabefaa0b57

Previewing CloudFormation changeset before deployment
======================================================
Deploy this changeset? [y/N]:

Once the deployment succeeds, you should be able to see the successful creation of your resources:

CloudFormation events from stack operations (refresh every 0.5 seconds)
—————————————————————————————————————————————————–
ResourceStatus ResourceType LogicalResourceId ResourceStatusReason
—————————————————————————————————————————————————–
CREATE_IN_PROGRESS AWS::CloudFormation::Stack amazon-devops-guru-connector- User Initiated
servicenow
CREATE_IN_PROGRESS AWS::IAM::Role FunctionsRole –
CREATE_IN_PROGRESS AWS::IAM::Role FunctionsRole Resource creation Initiated
CREATE_COMPLETE AWS::IAM::Role FunctionsRole –
CREATE_IN_PROGRESS AWS::Lambda::Function Functions –
CREATE_IN_PROGRESS AWS::Lambda::Function Functions Resource creation Initiated
CREATE_COMPLETE AWS::Lambda::Function Functions –
CREATE_IN_PROGRESS AWS::Events::Rule FunctionsDevOpsGuru –
CREATE_IN_PROGRESS AWS::Events::Rule FunctionsDevOpsGuru Resource creation Initiated
CREATE_COMPLETE AWS::Events::Rule FunctionsDevOpsGuru –
CREATE_IN_PROGRESS AWS::Lambda::Permission FunctionsDevOpsGuruPermission –
CREATE_IN_PROGRESS AWS::Lambda::Permission FunctionsDevOpsGuruPermission Resource creation Initiated
CREATE_COMPLETE AWS::Lambda::Permission FunctionsDevOpsGuruPermission –
CREATE_COMPLETE AWS::CloudFormation::Stack amazon-devops-guru-connector- –
servicenow
—————————————————————————————————————————————————–

Successfully created/updated stack – amazon-devops-guru-connector-servicenow in us-east-1

You can also use the below command to list the resources deployed by passing in the stack name.

$ sam list resources --stack-name amazon-devops-guru-connector-servicenow

You can also choose to test and debug your function locally with sample events using the SAM CLI’s local functionality. Test a single function by invoking it directly with a test event. An event is a JSON document that represents the input that the function receives from the event source. Refer to Invoking Lambda functions locally – AWS Serverless Application Model for more details.

Follow the steps below to test the Lambda function locally with the SAM CLI. You have to create an env.json file with the correct values for your ServiceNow host and the Secrets Manager secret name that was created in the previous step.

Make sure you have created the AWS Secrets Manager secret with the desired name as mentioned in the prerequisites, which should be used here for SECRET_NAME.
Create env.json as below, replacing the values for SERVICE_NOW_HOST and SECRET_NAME with your real values. These will be set as the local Lambda execution environment variables.

{"Parameters": {"SERVICE_NOW_HOST": "SNOW_HOST", "SECRET_NAME": "SNOW_CREDS"}}

Run the command below to invoke the Lambda function locally with a sample DevOps Guru payload. Remember, for this to work you should have a Docker instance running and the Secrets Manager secret created in your AWS account.

$ sam local invoke Functions --event Functions/src/test/Events/CreateIncident.json --env-vars Functions/src/test/Events/env.json

Once you are done with the above steps, move on to “Test the Solution” section below to trigger sample DevOps Guru insights and validate that the incidents are created and updated in ServiceNow.

Test the Solution

To test the solution, we will simulate a DevOps Guru insight. You can also simulate an insight by following the steps in this blog. After an anomaly is detected in the application, DevOps Guru creates an insight as seen below.

Figure 4: DevOps Guru Insight created for anomalous behavior

For the DevOps Guru insight shown above, a corresponding incident is automatically created in ServiceNow, as shown below. In addition to the incident creation, any new anomalies and recommendations from DevOps Guru are also associated with the incident.

Figure 5: Corresponding ServiceNow Incident is created for the DevOps Guru Insight

When the anomalous behavior that generated the DevOps Guru insight is resolved, DevOps Guru automatically closes the insight. The corresponding ServiceNow incident that was created for the insight is also closed, as seen below.

Figure 6: ServiceNow Incident created for DevOps Guru Insight is resolved due to insight closure

Cleaning up

To avoid incurring future charges, delete the resources.

To delete the sample application that you created, use the AWS CLI command below and pass the stack name you provided in the sam deploy step.

$ aws cloudformation delete-stack --stack-name amazon-devops-guru-connector-servicenow

You could also use the AWS CloudFormation Console to delete the stack:

Figure 7: AWS Stack Console with Delete action

Conclusion

This blog post showcased how DevOps Guru continuously monitors resources in a particular region of your AWS account, automatically detects operational issues, predicts impending resource exhaustion, details the likely cause, and recommends remediation actions. The post described a custom solution using a serverless integration pattern with AWS Lambda and Amazon EventBridge to integrate DevOps Guru insights with ServiceNow, a popular ITSM and change management tool, thus streamlining service management governance and oversight of AWS services. This solution helps customers who use ServiceNow improve their operational efficiency and receive customized insights and real-time incident alerts directly from DevOps Guru, providing a single pane of glass to restore services and systems quickly.

This solution was created to help customers who already use ServiceNow Incident Management. If you are already using Incident Manager from AWS Systems Manager, check out how that works with Amazon DevOps Guru here.

To learn more about Amazon DevOps Guru, join us for a free hands-on Immersion Day. Events are virtual and hosted in three global time zones. Register here: April 12th.

About the authors:

Abdullahi Olaoye

Abdullahi is a Senior Cloud Infrastructure Architect at AWS Professional Services where he works with enterprise customers to design and build cloud solutions that solve business challenges. When he’s not working, he enjoys travelling, watching documentaries and listening to history podcasts.

Sreenivas Ganesan

Sreenivas Ganesan is a Sr. DevOps Consultant at AWS experienced in architecting and delivering modernized DevOps solutions for enterprise customers in their journey to AWS Cloud, primarily focused on Infrastructure automation, Security and Compliance, Management and Governance, Provisioning and Orchestration. Outside of work, he enjoys watching new TV series, soccer and spending time with his family outdoors.

Mohan Udyavar

Mohan Udyavar is a Principal Technical Account Manager in the Enterprise Support organization of AWS advising customers in successfully migrating and operating their workloads on AWS. He is primarily focused on the Automotive industry providing prescriptive guidance to customers helping them improve the resilience and operational excellence posture of mission-critical applications. Outside of work, he loves cooking and working on tech projects with his son.

Improve collaboration between teams by using AWS CDK constructs

There are different ways to organize teams to deliver great software products. There are companies that give the end-to-end responsibility for a product to a single team, like Amazon’s Two-Pizza teams, and there are companies where multiple teams split the responsibility between infrastructure (or platform) teams and application development teams. This post provides guidance on how collaboration efficiency can be improved in the case of a split-team approach with the help of the AWS Cloud Development Kit (CDK).

The AWS CDK is an open-source software development framework to define your cloud application resources. You do this by using familiar programming languages like TypeScript, Python, Java, C# or Go. It allows you to mix code to define your application’s infrastructure, traditionally expressed through infrastructure as code tools like AWS CloudFormation or HashiCorp Terraform, with code to bundle, compile, and package your application.

This is great for autonomous teams with end-to-end responsibility, as it helps them keep all code related to that product in a single place and a single programming language. With a single team, there is no need to separate application code and infrastructure code into different repositories, but what about the split-team model?

Larger enterprises commonly split the responsibility between infrastructure (or platform) teams and application development teams. We’ll see how to use the AWS CDK to ensure team independence and agility even with multiple teams involved. We’ll have a look at the different responsibilities of the participating teams and their produced artifacts, and we’ll also discuss how to make the teams work together in a frictionless way.

This blog post assumes a basic level of knowledge on the AWS CDK and its concepts. Additionally, a very high level understanding of event driven architectures is required.

Team Topologies

Let’s first have a quick look at the different team topologies and each team’s responsibilities.

One-Team Approach

In this blog post we will focus on the split-team approach described below. However, it’s still helpful to understand what we mean by the “One-Team” approach: a single team owns an application from end to end. This cross-functional team decides on its own which features to implement next, which technologies to use, and how to build and deploy the resulting infrastructure and application code. The team is responsible for the infrastructure, the application code, its deployment, and the operation of the developed service.

If you’re interested in how to structure your AWS CDK application in such an environment have a look at our colleague Alex Pulver’s blog post Recommended AWS CDK project structure for Python applications.

Split-Team Approach

In reality we see many customers who have separate teams for application development and infrastructure development and deployment.

Infrastructure Team

What I call the infrastructure team is also known as the platform or operations team. It configures, deploys, and operates the shared infrastructure which other teams consume to run their applications on. This can be things like an Amazon SQS queue, an Amazon Elastic Container Service (Amazon ECS) cluster as well as the CI/CD pipelines used to bring new versions of the applications into production.
It is the infrastructure team’s responsibility to get the application package developed by the Application Team deployed and running on AWS, as well as provide operational support for the application.

Application Team

Traditionally the application team just provides the application’s package (for example, a JAR file or an npm package) and it’s the infrastructure team’s responsibility to figure out how to deploy, configure, and run it on AWS. However, this traditional setup often leads to bottlenecks, as the infrastructure team will have to support many different applications developed by multiple teams. Additionally, the infrastructure team often has little knowledge of the internals of those applications. This often leads to solutions which are not optimized for the problem at hand: If the infrastructure team only offers a handful of options to run services on, the application team can’t use options optimized for their workload.

This is why we extend the traditional responsibilities of the application team in this blog post. The team provides the application and additionally the description of the infrastructure required to run the application. With “infrastructure required” we mean the AWS services used to run the application. This infrastructure description needs to be written in a format which can be consumed by the infrastructure team.

While we understand that this shift of responsibility adds additional tasks to the application team, we think that in the long term it is worth the effort. This can be the starting point to introduce DevOps concepts into the organization. However, the concepts described in this blog post are still valid even if you decide that you don’t want to add this responsibility to your application teams. The boundary of who is delivering what would then just move more into the direction of the infrastructure team.

To be successful with the given approach, the two teams need to agree on a common format on how to hand over the application, its infrastructure definition, and how to bring it to production. The AWS CDK with its concept of Constructs provides a perfect means for that.

Primer: AWS CDK Constructs

In this section we take a look at the concepts the AWS CDK provides for structuring our code base and how these concepts can be used to fit a CDK project into your team topology.

Constructs

Constructs are the basic building block of an AWS CDK application. An AWS CDK application is composed of multiple constructs which in the end define how and what is deployed by AWS CloudFormation.

The AWS CDK ships with constructs created to deploy AWS services. However, it is important to understand that you are not limited to the out-of-the-box constructs provided by the AWS CDK. The true power of the AWS CDK lies in the possibility to create your own abstractions on top of the default constructs to build solutions for your specific requirements. To achieve this, you write, publish, and consume your own custom constructs. They codify your specific requirements, create an additional level of abstraction, and allow other teams to consume and use your construct.

We will use a custom construct to separate the responsibilities between the application and the infrastructure team. The application team will release a construct which describes the infrastructure, along with its configuration, required to run the application code. The infrastructure team will consume this construct to deploy and operate the workload on AWS.

How to use the AWS CDK in a Split-Team Setup

Let’s now have a look at how we can use the AWS CDK to split the responsibilities between the application and infrastructure team. I’ll introduce a sample scenario and then illustrate what each team’s responsibility is within this scenario.

Scenario

Our fictitious application development team writes an AWS Lambda function which gets deployed to AWS. Messages in an Amazon SQS queue will invoke the function. Let’s say the function will process orders (whatever this means in detail is irrelevant for the example) and each order is represented by a message in the queue.

The application development team has full flexibility when it comes to creating the AWS Lambda function. They can decide which runtime to use or how much memory to configure. The SQS queue which the function will act upon is created by the infrastructure team. The application team does not have to know how the messages end up in the queue.

With that we can have a look at a sample implementation split between the teams.

Application Team

The application team is responsible for two distinct artifacts: the application code (for example, a Java jar file or an npm module) and the AWS CDK construct used to deploy the required infrastructure on AWS to run the application (an AWS Lambda Function along with its configuration).

The lifecycles of these artifacts differ: the application code changes more frequently than the infrastructure it runs on. That's why we want to keep the artifacts separate, so that each of them can be released at its own pace and only when it has changed.

To achieve these separate lifecycles, it is important to note that a release of the application artifact needs to be completely independent from a release of the CDK construct. This differs from the standard CDK way of building and packaging application code within the CDK construct, but it fits our approach of separate teams.

But how is this done in our example solution? The team builds and publishes an application artifact which does not contain anything related to the CDK.
When a CDK Stack containing this construct is synthesized, it downloads the pre-built artifact with a given version number from AWS CodeArtifact and uses it to create the input zip file for the Lambda function. No build of the application package happens during cdk synth.

With the separation of construct and application code, we need to find a way to tell the CDK construct which specific version of the application code it should fetch from CodeArtifact. We will pass this information to the construct via a property of its constructor.

For dependencies on infrastructure outside of the responsibility of the application team, I follow the pattern of dependency injection. Those dependencies, for example a shared VPC or an Amazon SQS queue, are passed into the construct from the infrastructure team.

Let’s have a look at an example. We pass in the external dependency on an SQS Queue, along with details on the desired appPackageVersion and its CodeArtifact details:

import * as path from 'path';
import { Construct } from 'constructs';
import { aws_sqs, aws_lambda as lambda } from 'aws-cdk-lib';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

export interface OrderProcessingAppConstructProps {
    queue: aws_sqs.Queue,
    appPackageVersion: string,
    codeArtifactDetails: {
        account: string,
        repository: string,
        domain: string
    }
}

export class OrderProcessingAppConstruct extends Construct {

    constructor(scope: Construct, id: string, props: OrderProcessingAppConstructProps) {
        super(scope, id);

        const lambdaFunction = new lambda.Function(this, 'OrderProcessingLambda', {
            code: lambda.Code.fromDockerBuild(path.join(__dirname, '..', 'bundling'), {
                buildArgs: {
                    'PACKAGE_VERSION' : props.appPackageVersion,
                    'CODE_ARTIFACT_ACCOUNT' : props.codeArtifactDetails.account,
                    'CODE_ARTIFACT_REPOSITORY' : props.codeArtifactDetails.repository,
                    'CODE_ARTIFACT_DOMAIN' : props.codeArtifactDetails.domain
                }
            }),
            runtime: lambda.Runtime.NODEJS_16_X,
            handler: 'node_modules/order-processing-app/dist/index.lambdaHandler'
        });
        const eventSource = new SqsEventSource(props.queue);
        lambdaFunction.addEventSource(eventSource);
    }
}

Note the code lambda.Code.fromDockerBuild(…): We use AWS CDK’s functionality to bundle the code of our Lambda function via a Docker build. The only things which happen inside of the provided Dockerfile are:

The login to the AWS CodeArtifact repository which holds the pre-built package of the application code
The download and installation of the application code's artifact from AWS CodeArtifact (in this case via npm)

If you are interested in more details on how you can build, bundle and deploy your AWS CDK assets I highly recommend a blog post by my colleague Cory Hall: Building, bundling, and deploying applications with the AWS CDK. It goes into much more detail than what we are covering here.

Looking at the example Dockerfile we can see the two steps described above:

FROM public.ecr.aws/sam/build-nodejs16.x:latest

ARG PACKAGE_VERSION
ARG CODE_ARTIFACT_AWS_REGION
ARG CODE_ARTIFACT_ACCOUNT
ARG CODE_ARTIFACT_REPOSITORY
ARG CODE_ARTIFACT_DOMAIN

# Log in to the CodeArtifact repository which holds the pre-built application package
RUN aws codeartifact login --tool npm --repository $CODE_ARTIFACT_REPOSITORY --domain $CODE_ARTIFACT_DOMAIN --domain-owner $CODE_ARTIFACT_ACCOUNT --region $CODE_ARTIFACT_AWS_REGION

# Install the application package into /asset, the folder CDK uses as the Docker build output
RUN npm install order-processing-app@$PACKAGE_VERSION --prefix /asset

Please note the following:

We use --prefix /asset with our npm install command. This tells npm to install the dependencies into the folder which the CDK will mount into the container. All files which should go into the output of the Docker build need to be placed here.
The aws codeartifact login command requires credentials with the appropriate permissions. If you run this, for example, on AWS CodeBuild or inside a CDK Pipeline, make sure that the role being used has the appropriate policies attached (see the sketch below).
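
As a rough sketch of what such permissions could look like in CDK, assuming the role is available as buildRole in your stack (scope the resources down to your own CodeArtifact domain and repository ARNs):

import { aws_iam as iam } from 'aws-cdk-lib';

// Sketch: permissions typically needed by a role that runs `aws codeartifact login`
// and installs packages from CodeArtifact. `buildRole` is assumed to exist in your
// stack (e.g. the role of a CodeBuild project or of a CDK Pipelines synth step).
declare const buildRole: iam.IRole;

buildRole.addToPrincipalPolicy(new iam.PolicyStatement({
  actions: [
    'codeartifact:GetAuthorizationToken',
    'codeartifact:GetRepositoryEndpoint',
    'codeartifact:ReadFromRepository',
  ],
  resources: ['*'], // narrow this down to your domain/repository ARNs
}));

buildRole.addToPrincipalPolicy(new iam.PolicyStatement({
  actions: ['sts:GetServiceBearerToken'],
  resources: ['*'],
  conditions: {
    StringEquals: { 'sts:AWSServiceName': 'codeartifact.amazonaws.com' },
  },
}));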

Infrastructure Team

The infrastructure team consumes the AWS CDK construct published by the application team. They own the AWS CDK Stack which composes the whole application. This will likely be only one of several Stacks owned by the infrastructure team; other Stacks might create shared infrastructure (like VPCs and networking) and other applications.

Within the stack for our application, the infrastructure team consumes and instantiates the application team's construct, passes any dependencies into it, and then deploys the stack by whatever means they see fit (e.g. through AWS CodePipeline, GitHub Actions, or any other form of continuous delivery/deployment; one possible setup is sketched after the stack example below).

The dependency on the application team’s construct is manifested in the package.json of the infrastructure team’s CDK app:

{
  "name": "order-processing-infra-app",
  …
  "dependencies": {
    …
    "order-app-construct" : "1.1.0",
    …
  }
  …
}

Within the created CDK Stack we see the dependency version for the application package, as well as how the infrastructure team passes in additional information (such as the queue to use):

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { Queue } from 'aws-cdk-lib/aws-sqs';
// the application team's construct, consumed as a package dependency (see package.json above)
import { OrderProcessingAppConstruct } from 'order-app-construct';

export class OrderProcessingInfraStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const orderProcessingQueue = new Queue(this, 'order-processing-queue');

    new OrderProcessingAppConstruct(this, 'order-processing-app', {
      appPackageVersion: '2.0.36',
      queue: orderProcessingQueue,
      codeArtifactDetails: { … }
    });
  }
}
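
As one illustration of the deployment options mentioned above, here is a minimal sketch of a CDK Pipelines setup that deploys this stack. The repository name, branch, connection ARN, and file path are placeholders, not part of the example solution:

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { pipelines } from 'aws-cdk-lib';
// assumed local file containing the OrderProcessingInfraStack shown above
import { OrderProcessingInfraStack } from './order-processing-infra-stack';

// Sketch: a stage wrapping the infrastructure team's stack so it can be deployed
// through CDK Pipelines.
class OrderProcessingStage extends cdk.Stage {
  constructor(scope: Construct, id: string, props?: cdk.StageProps) {
    super(scope, id, props);
    new OrderProcessingInfraStack(this, 'OrderProcessingInfra');
  }
}

// Sketch: a self-mutating pipeline owned by the infrastructure team. Source
// repository and connection ARN are placeholders.
export class InfraPipelineStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const pipeline = new pipelines.CodePipeline(this, 'Pipeline', {
      synth: new pipelines.ShellStep('Synth', {
        input: pipelines.CodePipelineSource.connection('my-org/order-processing-infra-app', 'main', {
          connectionArn: 'arn:aws:codestar-connections:eu-central-1:111111111111:connection/placeholder',
        }),
        commands: ['npm ci', 'npm run build', 'npx cdk synth'],
      }),
    });

    pipeline.addStage(new OrderProcessingStage(this, 'Prod'));
  }
}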

Propagating New Releases

We now have the responsibilities of each team sorted out along with the artifacts owned by each team. But how do we propagate a change done by the application team all the way to production? Or asked differently: how can we invoke the infrastructure team’s CI/CD pipeline with the updated artifact versions of the application team?

We will need to update the infrastructure team's dependencies on the application team's artifacts whenever a new version of either the application package or the AWS CDK construct is published. With the dependencies updated, we can then start the release pipeline.

One approach is to listen and react to events published by AWS CodeArtifact via Amazon EventBridge. On each release, AWS CodeArtifact publishes an event to Amazon EventBridge. We can listen for that event, extract the version number of the new release from its payload, and start a workflow that updates either our dependency on the CDK construct (e.g. in the package.json of our CDK application) or the appPackageVersion which the infrastructure team passes into the consumed construct.
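
As a minimal sketch of the listening part, an EventBridge rule in CDK could look like the following. The event pattern fields follow CodeArtifact's EventBridge events; the stack itself, the Lambda asset path, and the filter on the Published state are assumptions for illustration:

import * as path from 'path';
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { aws_events as events, aws_events_targets as targets, aws_lambda as lambda } from 'aws-cdk-lib';

// Sketch: a stack owned by the infrastructure team that reacts to new package
// versions published to CodeArtifact. The Lambda below is an assumed placeholder
// that starts the update workflow, e.g. by opening a pull request or starting a
// pipeline execution.
export class ReleaseListenerStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const releaseWorkflowTrigger = new lambda.Function(this, 'ReleaseWorkflowTrigger', {
      runtime: lambda.Runtime.NODEJS_16_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset(path.join(__dirname, 'release-workflow-trigger')),
    });

    new events.Rule(this, 'OnAppPackageRelease', {
      eventPattern: {
        source: ['aws.codeartifact'],
        detailType: ['CodeArtifact Package Version State Change'],
        detail: {
          packageName: ['order-processing-app'], // the application package from our example
          packageVersionState: ['Published'],    // assumption: only react to published versions
        },
      },
      targets: [new targets.LambdaFunction(releaseWorkflowTrigger)],
    });
  }
}

The same pattern works for the construct package; you would simply filter on its package name instead.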

Here’s how a release of a new app version flows through the system:

Figure 1 – A release of the application package triggers a change and deployment of the infrastructure team’s CDK Stack

The application team publishes a new app version into AWS CodeArtifact
CodeArtifact triggers an event on Amazon EventBridge
The infrastructure team listens to this event
The infrastructure team updates its CDK stack to include the latest appPackageVersion
The infrastructure team’s CDK Stack gets deployed

The release of a new version of the CDK construct flows through the system very similarly:

Figure 2 – A release of the application team’s CDK construct triggers a change and deployment of the infrastructure team’s CDK Stack

The application team publishes a new CDK construct version into AWS CodeArtifact
CodeArtifact triggers an event on Amazon EventBridge
The infrastructure team listens to this event
The infrastructure team updates its dependency to the latest CDK construct
The infrastructure team’s CDK Stack gets deployed

We will not go into the details of how such a workflow could look, because it is most likely highly specific to each team (think of the different tools used for code repositories and CI/CD). However, here are some ideas on how it can be accomplished:

Updating the CDK Construct dependency

To update the dependency version of the CDK construct, the infrastructure team's package.json (or other files used for dependency tracking, like pom.xml) needs to be updated. You can build automation to check out the source code and issue a command like npm install order-app-construct@NEW_VERSION (where NEW_VERSION is the value read from the EventBridge event payload). You then automatically create a pull request to incorporate this change into your main branch. For a sample of what this looks like, see the blog post Keeping up with your dependencies: building a feedback loop for shared libraries.
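
A minimal sketch of the event-handling side of that automation could look like the following. The event detail fields follow CodeArtifact's EventBridge event format, and openDependencyUpdatePullRequest is a hypothetical helper you would implement against your own source control and CI tooling:

// Hypothetical Lambda handler (TypeScript) for the workflow described above.
import type { EventBridgeEvent } from 'aws-lambda';

interface CodeArtifactDetail {
  packageName: string;
  packageVersion: string;
  packageVersionState: string;
}

// Placeholder: create a branch, run `npm install order-app-construct@<version>`,
// commit the updated package.json/package-lock.json and open a pull request.
declare function openDependencyUpdatePullRequest(pkg: string, version: string): Promise<void>;

export const handler = async (
  event: EventBridgeEvent<'CodeArtifact Package Version State Change', CodeArtifactDetail>
): Promise<void> => {
  const { packageName, packageVersion, packageVersionState } = event.detail;

  // Only act on newly published versions of the construct package.
  if (packageName !== 'order-app-construct' || packageVersionState !== 'Published') {
    return;
  }

  await openDependencyUpdatePullRequest(packageName, packageVersion);
};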

Updating the appPackageVersion

To update the appPackageVersion used inside the infrastructure team's CDK Stack, you can either follow the same approach outlined above, or you can use the CDK's capability to read from an AWS Systems Manager (SSM) Parameter Store parameter. With that, you wouldn't put the value for appPackageVersion into source control, but rather read it from SSM Parameter Store. There is a how-to for this in the AWS CDK documentation: Get a value from the Systems Manager Parameter Store. You then start the infrastructure team's pipeline based on the event of a change in the parameter.

To have a clear understanding of what is deployed at any given time, and to see the parameter value used in CloudFormation, I'd recommend the option described in Reading Systems Manager values at synthesis time.
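
Assuming a parameter named /order-processing/app-package-version (a made-up name for this example), a synthesis-time lookup could look like this variant of the stack shown earlier. Note that such lookups require the stack to be defined with an explicit account and Region environment:

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { aws_ssm as ssm } from 'aws-cdk-lib';
import { Queue } from 'aws-cdk-lib/aws-sqs';
import { OrderProcessingAppConstruct } from 'order-app-construct';

// Sketch: resolve the application package version from SSM Parameter Store at
// synthesis time instead of hard-coding it. Parameter name and CodeArtifact
// details below are placeholders.
export class OrderProcessingInfraStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const orderProcessingQueue = new Queue(this, 'order-processing-queue');

    const appPackageVersion = ssm.StringParameter.valueFromLookup(
      this,
      '/order-processing/app-package-version'
    );

    new OrderProcessingAppConstruct(this, 'order-processing-app', {
      appPackageVersion,
      queue: orderProcessingQueue,
      codeArtifactDetails: {
        account: '111111111111',     // placeholder
        repository: 'my-repository', // placeholder
        domain: 'my-domain',         // placeholder
      },
    });
  }
}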

Conclusion

You've seen how the AWS Cloud Development Kit and its Construct concept can help ensure team independence and agility, even when multiple teams (in our case an application development team and an infrastructure team) work together to bring a new version of an application into production. To do so, you put the application team in charge of not only their application code, but also of the parts of the infrastructure they use to run their application. This is still in line with the split-team approach discussed here, as all shared infrastructure, as well as the final deployment, remains under the control of the infrastructure team and is only consumed by the application team's construct.

About the Authors

As a Solutions Architect, Jörg works with manufacturing customers in Germany. Before joining AWS in 2019, he held various roles such as Developer, DevOps Engineer, and SRE. Jörg enjoys building and automating things, and he fell in love with the AWS Cloud Development Kit.

Mo joined AWS in 2020 as a Technical Account Manager, bringing with him 7 years of hands-on AWS DevOps experience and 6 years as a systems operations administrator. He is a member of two Technical Field Communities in AWS (Cloud Operation and Builder Experience), focusing on supporting customers with CI/CD pipelines and AI for DevOps to ensure they have the right solutions that fit their business needs.