Introducing the Enhanced Document API for DynamoDB in the AWS SDK for Java 2.x

We are excited to announce that the AWS SDK for Java 2.x now offers the Enhanced Document API for DynamoDB, providing an enhanced way of working with Amazon DynamoDB items.
This post covers using the Enhanced Document API for DynamoDB with the DynamoDB Enhanced Client. By using the Enhanced Document API, you can create an EnhancedDocument instance to represent an item with no fixed schema, and then use the DynamoDB Enhanced Client to read and write to DynamoDB.
Furthermore, unlike the Document APIs of aws-sdk-java 1.x, which provided arguments and return types that were not type-safe, the EnhancedDocument provides strongly-typed APIs for working with documents. This interface simplifies the development process and ensures that the data is correctly typed.

Prerequisites:

Before getting started, ensure you are using an up-to-date version of the AWS Java SDK dependency with all the latest released bug-fixes and features. For Enhanced Document API support, you must use version 2.20.33 or later. See our “Set up an Apache Maven project” guide for details on how to manage the AWS Java SDK dependency in your project.

Add the dynamodb-enhanced dependency to your pom.xml:

<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>dynamodb-enhanced</artifactId>
    <version>2.20.33</version>
</dependency>
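If you build with Gradle instead of Maven, the equivalent dependency declaration is the following one-liner (same artifact coordinates and version; shown as a sketch for Gradle users):

implementation 'software.amazon.awssdk:dynamodb-enhanced:2.20.33'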

Quick walk-through for using the Enhanced Document API to interact with DynamoDB

Step 1: Create a DynamoDB Enhanced Client

Create an instance of the DynamoDbEnhancedClient class, which provides a high-level interface for Amazon DynamoDB that simplifies working with DynamoDB tables.

DynamoDbEnhancedClient enhancedClient = DynamoDbEnhancedClient.builder()
    .dynamoDbClient(DynamoDbClient.create())
    .build();

Step 2: Create a DynamoDbTable resource object with a Document table schema

To execute commands against a DynamoDB table using the Enhanced Document API, you must associate the table with your Document table schema to create a DynamoDbTable resource object. The Document table schema builder requires the primary index key and attribute converter providers. Use AttributeConverterProvider.defaultProvider() to convert document attributes of default types. An optional secondary index key can be added to the builder.

DynamoDbTable<EnhancedDocument> documentTable = enhancedClient.table("my_table",
    TableSchema.documentSchemaBuilder()
        .addIndexPartitionKey(TableMetadata.primaryIndexName(), "hashKey", AttributeValueType.S)
        .addIndexSortKey(TableMetadata.primaryIndexName(), "sortKey", AttributeValueType.N)
        .attributeConverterProviders(AttributeConverterProvider.defaultProvider())
        .build());

// call documentTable.createTable() if "my_table" does not exist in DynamoDB

Step 3: Write a DynamoDB item using an EnhancedDocument

The EnhancedDocument class has static factory methods along with a builder method to add attributes to a document. The following snippet demonstrates the type safety provided by EnhancedDocument when you construct a document item.

EnhancedDocument simpleDoc = EnhancedDocument.builder()
    .attributeConverterProviders(defaultProvider())
    .putString("hashKey", "sampleHash")
    .putNull("nullKey")
    .putNumber("sortKey", 1.0)
    .putBytes("byte", SdkBytes.fromUtf8String("a"))
    .putBoolean("booleanKey", true)
    .build();

documentTable.putItem(simpleDoc);

Step 4: Read a DynamoDB item as an EnhancedDocument

Attributes of documents retrieved from a DynamoDB table can be accessed with getter methods:

EnhancedDocument docGetItem = documentTable.getItem(r -> r.key(k -> k.partitionValue("sampleHash").sortValue(1)));

docGetItem.getString("hashKey");
docGetItem.isNull("nullKey");
docGetItem.getNumber("sortKey").floatValue();
docGetItem.getBytes("byte");
docGetItem.getBoolean("booleanKey");

AttributeConverterProviders for accessing document attributes as custom objects

You can provide a custom AttributeConverterProvider instance to an EnhancedDocument to convert document attributes to a specific object type.
These providers can be set on either DocumentTableSchema or EnhancedDocument to read or write attributes as custom objects.

TableSchema.documentSchemaBuilder()
    .attributeConverterProviders(CustomClassConverterProvider.create(), defaultProvider())
    .build();

// Insert a custom class instance into an EnhancedDocument as attribute 'customMapOfAttribute'.
EnhancedDocument customAttributeDocument =
    EnhancedDocument.builder().put("customMapOfAttribute", customClassInstance, CustomClass.class).build();

// Retrieve attribute 'customMapOfAttribute' as a CustomClass object.
CustomClass customClassObject = customAttributeDocument.get("customMapOfAttribute", CustomClass.class);

Convert Documents to JSON and vice-versa

The Enhanced Document API allows you to convert a JSON string to an EnhancedDocument and vice-versa.

// Enhanced document created from a JSON string using the default converter providers.
EnhancedDocument documentFromJson = EnhancedDocument.fromJson("{\"key\": \"Value\"}");

// Convert an EnhancedDocument to the JSON string {"key": "Value"}.
String jsonFromDocument = documentFromJson.toJson();

Define a Custom Attribute Converter Provider

Custom attribute converter providers are implementations of AttributeConverterProvider that provide converters for custom classes.
Below is an example of a CustomClassForDocumentAPI class, which has a single field stringAttribute of type String, and its corresponding AttributeConverterProvider implementation.

public class CustomClassForDocumentAPI {
    private final String stringAttribute;

    public CustomClassForDocumentAPI(Builder builder) {
        this.stringAttribute = builder.stringAttribute;
    }

    public static Builder builder() {
        return new Builder();
    }

    public String stringAttribute() {
        return stringAttribute;
    }

    public static final class Builder {
        private String stringAttribute;

        private Builder() {
        }

        public Builder stringAttribute(String stringAttribute) {
            this.stringAttribute = stringAttribute;
            return this;
        }

        public CustomClassForDocumentAPI build() {
            return new CustomClassForDocumentAPI(this);
        }
    }
}
import java.util.Map;
import software.amazon.awssdk.enhanced.dynamodb.AttributeConverter;
import software.amazon.awssdk.enhanced.dynamodb.AttributeConverterProvider;
import software.amazon.awssdk.enhanced.dynamodb.EnhancedType;
import software.amazon.awssdk.utils.ImmutableMap;

public class CustomAttributeForDocumentConverterProvider implements AttributeConverterProvider {
    // Converters for additional custom types can be added to this map.
    private final Map<EnhancedType<?>, AttributeConverter<?>> converterCache = ImmutableMap.of(
        EnhancedType.of(CustomClassForDocumentAPI.class), new CustomClassForDocumentAttributeConverter());

    public static CustomAttributeForDocumentConverterProvider create() {
        return new CustomAttributeForDocumentConverterProvider();
    }

    @Override
    public <T> AttributeConverter<T> converterFor(EnhancedType<T> enhancedType) {
        return (AttributeConverter<T>) converterCache.get(enhancedType);
    }
}

A custom attribute converter is an implementation of AttributeConverter that converts a custom class to and from a map of attribute values, as shown below.

import java.util.LinkedHashMap;
import java.util.Map;
import software.amazon.awssdk.enhanced.dynamodb.AttributeConverter;
import software.amazon.awssdk.enhanced.dynamodb.AttributeValueType;
import software.amazon.awssdk.enhanced.dynamodb.EnhancedType;
import software.amazon.awssdk.enhanced.dynamodb.internal.converter.attribute.EnhancedAttributeValue;
import software.amazon.awssdk.enhanced.dynamodb.internal.converter.attribute.StringAttributeConverter;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;

public class CustomClassForDocumentAttributeConverter implements AttributeConverter<CustomClassForDocumentAPI> {

    public static CustomClassForDocumentAttributeConverter create() {
        return new CustomClassForDocumentAttributeConverter();
    }

    @Override
    public AttributeValue transformFrom(CustomClassForDocumentAPI input) {
        Map<String, AttributeValue> attributeValueMap = new LinkedHashMap<>();
        if (input.stringAttribute() != null) {
            attributeValueMap.put("stringAttribute", AttributeValue.fromS(input.stringAttribute()));
        }
        return EnhancedAttributeValue.fromMap(attributeValueMap).toAttributeValue();
    }

    @Override
    public CustomClassForDocumentAPI transformTo(AttributeValue input) {
        Map<String, AttributeValue> customAttr = input.m();
        CustomClassForDocumentAPI.Builder builder = CustomClassForDocumentAPI.builder();
        if (customAttr.get("stringAttribute") != null) {
            builder.stringAttribute(StringAttributeConverter.create().transformTo(customAttr.get("stringAttribute")));
        }
        return builder.build();
    }

    @Override
    public EnhancedType<CustomClassForDocumentAPI> type() {
        return EnhancedType.of(CustomClassForDocumentAPI.class);
    }

    @Override
    public AttributeValueType attributeValueType() {
        return AttributeValueType.M;
    }
}

Attribute Converter Provider for EnhancedDocument Builder

When working outside of a DynamoDB table context, make sure to set the attribute converter providers explicitly on the EnhancedDocument builder. When used within a DynamoDB table context, the table schema’s converter provider will be used automatically for the EnhancedDocument.
The code snippet below shows how to set an AttributeConverterProvider using the EnhancedDocument builder method.

// Enhanced document created from a JSON string using a custom AttributeConverterProvider.
EnhancedDocument documentFromJson = EnhancedDocument.builder()
    .attributeConverterProviders(CustomClassConverterProvider.create())
    .json("{\"key\": \"Values\"}")
    .build();

CustomClassForDocumentAPI customClass = documentFromJson.get("key", CustomClassForDocumentAPI.class);

Conclusion

In this blog post we showed you how to set up and begin using the Enhanced Document API with the DynamoDB Enhanced Client, and standalone with the EnhancedDocument class. The enhanced client is open source and resides in the same repository as the AWS SDK for Java 2.x.
We hope you’ll find this new feature useful. You can always share your feedback on our GitHub issues page.

How Zomato Boosted Performance 25% and Cut Compute Cost 30% Migrating Trino and Druid Workloads to AWS Graviton

Zomato is an India-based restaurant aggregator, food delivery, and dining-out company with over 350,000 listed restaurants across more than 1,000 cities in India. The company relies heavily on data analytics to enrich the customer experience and improve business efficiency. Zomato’s engineering and product teams use data insights to refine their platform’s restaurant and cuisine recommendations, improve the accuracy of waiting times at restaurants, speed up the matching of delivery partners, and improve the overall food delivery process.

At Zomato, different teams have different data discovery requirements based on their business functions: for example, the number of orders placed in a specific area required by a city lead team, queries resolved per minute required by the customer support team, or the most searched dishes on special events or days required by marketing and other teams. Zomato’s Data Platform team is responsible for building and maintaining a reliable platform that serves these data insights to all business units.

Zomato’s Data Platform is powered by AWS services including Amazon EMR, Amazon Aurora MySQL-Compatible Edition, and Amazon DynamoDB, along with the open source software Trino (formerly PrestoSQL) and Apache Druid, for serving the previously mentioned business metrics to different teams. Every week, Trino clusters process over 250K queries by scanning 2 PB of data, and Apache Druid ingests over 20 billion events and serves 8 million queries. To deliver performance at Zomato scale, these massively parallel systems utilize horizontal scaling of nodes running on Amazon Elastic Compute Cloud (Amazon EC2) instances in their clusters on AWS. The performance of both of these data platform components is critical to supporting all business functions reliably and efficiently at Zomato. To improve performance in a cost-effective manner, Zomato migrated these Trino and Druid workloads onto AWS Graviton-based Amazon EC2 instances.

Graviton-based EC2 instances are powered by Arm-based AWS Graviton processors. They deliver up to 40% better price performance than comparable x86-based Amazon EC2 instances. CPU- and memory-intensive Java-based applications such as Trino and Druid are suitable candidates for AWS Graviton-based instances to optimize price performance, as Java is well supported and generally performant out of the box on arm64.

In this blog, we will walk you through an overview of Trino and Druid, how they fit into the overall Data Platform architecture, and the migration journey onto AWS Graviton-based instances for these workloads. We will also cover the challenges faced during migration, how the Zomato team overcame them, and the business gains in terms of cost savings and better performance, along with Zomato’s future plans for Graviton adoption across more workloads.

Trino overview

Trino is a fast, distributed SQL query engine for querying petabyte scale data, implementing massively parallel processing (MPP) architecture. It was designed as an alternative to tools that query Apache Hadoop Distributed File System (HDFS) using pipelines of MapReduce jobs, such as Apache Hive or Apache Pig, but Trino is not limited to querying HDFS only. It has been extended to operate over a multitude of data sources, including Amazon Simple Storage Service (Amazon S3), traditional relational databases and distributed data stores including Apache Cassandra, Apache Druid, MongoDB and more. When Trino executes a query, it does so by breaking up the execution into a hierarchy of stages, which are implemented as a series of tasks distributed over a network of Trino workers. This reduces end-to-end latency and makes Trino a fast tool for ad hoc data exploration over very large data sets.

Figure 1 – Trino architecture overview

The Trino coordinator is responsible for parsing statements, planning queries, and managing Trino worker nodes. Every Trino installation must have a coordinator alongside one or more Trino workers. Client applications, including Apache Superset and Redash, connect to the coordinator via Presto Gateway to submit statements for execution. The coordinator creates a logical model of a query involving a series of stages, which is then translated into a series of connected tasks running on a cluster of Trino workers. Presto Gateway acts as a proxy/load balancer for multiple Trino clusters.

Druid overview

Apache Druid is a real-time database that powers modern analytics applications for use cases where real-time ingest, fast query performance, and high uptime are important. Druid processes are deployed on three types of server nodes: Master nodes govern data availability and ingestion; Query nodes accept queries, execute them across the system, and return the results; and Data nodes ingest and store queryable data. Broker processes receive queries from external clients and forward them to Data servers. Historicals are the workhorses that handle storage and querying of “historical” data. MiddleManager processes handle ingestion of new data into the cluster. Please refer here to learn more about the detailed Druid architecture design.

Figure 2 – Druid architecture overview

Zomato’s Data Platform Architecture on AWS

Figure 3 – Zomato’s Data Platform landscape on AWS

Zomato’s Data Platform covers data ingestion, storage, distributed processing (enrichment and enhancement), unification of batch and real-time data pipelines, and a robust consumption layer, through which petabytes of data are queried daily for ad hoc and near real-time analytics. In this section, we will explain the data flow of the pipelines serving data to the Trino and Druid clusters in the overall Data Platform architecture.

Data Pipeline-1: An Amazon Aurora MySQL-Compatible database is used to store data by various microservices at Zomato. Apache Sqoop on Amazon EMR runs Extract, Transform, Load (ETL) jobs at scheduled intervals to fetch data from the Aurora MySQL-Compatible database and transfer it to Amazon S3 in the Optimized Row Columnar (ORC) format, which is then queried by Trino clusters.

Data Pipeline-2: A Debezium Kafka connector deployed on Amazon Elastic Container Service (Amazon ECS) acts as the producer and continuously polls data from the Aurora MySQL-Compatible database. On detecting changes in the data, it identifies the change type and publishes the change data event to Apache Kafka in Avro format. Apache Flink on Amazon EMR consumes data from the Kafka topic, performs data enrichment and transformation, and writes it in ORC format to Iceberg tables on Amazon S3. Trino clusters then query the data from Amazon S3.

Data Pipeline-3: Moving away from other databases, Zomato decided to go serverless with Amazon DynamoDB because of its high performance (single-digit millisecond latency), request rate (millions per second), extreme scale matching Zomato’s peak expectations, economics (pay as you go), and data volume (TB, PB, EB) for their business-critical apps, including Food Cart, Product Catalog, and Customer Preferences. DynamoDB streams publish data from these apps to Amazon S3 in JSON format to serve this data pipeline. Apache Spark on Amazon EMR reads the JSON data, performs transformations including conversion into ORC format, and writes the data back to Amazon S3, which is used by Trino clusters for querying.

Data Pipeline-4: Zomato’s core business applications serving end users include microservices, web, and mobile applications. Getting near real-time insights from these core applications is critical to serving customers and continuously winning their trust. Services use a custom SDK developed by the data platform team to publish events to an Apache Kafka topic. Two downstream data pipelines then consume these application events from Kafka via Apache Flink on Amazon EMR. Flink converts the data into ORC format and publishes it to Amazon S3, and in a parallel data pipeline, Flink also publishes enriched data onto another Kafka topic, which in turn serves data to an Apache Druid cluster deployed on Amazon EC2 instances.

Performance requirements for querying at scale

All of the described data pipelines ingest data into an Amazon S3 based data lake, which is then leveraged by three types of Trino clusters: Ad-hoc clusters for ad hoc query use cases, with a maximum query runtime of 20 minutes; ETL clusters for creating materialized views to enhance the performance of dashboard queries; and Reporting clusters to run queries for dashboards with various Key Performance Indicators (KPIs), with query runtimes of up to 3 minutes. ETL queries are run via Apache Airflow with a built-in query retry mechanism and a runtime of up to 3 hours.

Druid is used to serve two types of queries: computing aggregated metrics based on recent events, and comparing aggregated metrics to historical data, for example, how a specific metric in the current hour compares to the same hour last week. Depending on the use case, the service level objective for Druid query response time ranges from a few milliseconds to a few seconds.

Graviton migration of Druid cluster

Zomato first moved Druid nodes to AWS Graviton-based instances in their test cluster environment to determine query performance. Nodes running brokers and middle managers were moved from R5 to R6g instances, and nodes running historicals were migrated from i3 to R6gd instances. Zomato logged real-world queries from their production cluster and replayed them in the test cluster to validate the performance. After validation, Zomato saw significant performance gains and reduced cost:

Performance gains

For queries in Druid, performance was measured using a typical business-hours (12:00 to 22:00) load of 14K queries, as shown here, where p99 query runtime was reduced by 25%.

Figure 4 – Overall Druid query performance (Intel x86-64 vs. AWS Graviton)

Also, the query performance improvement on the historical nodes of the Druid cluster is shown here, where p95 query runtime was reduced by 66%.

Figure 5 – Query performance on Druid Historicals (Intel x86-64 vs. AWS Graviton)

Under peak load during business hours (12:00 to 22:00, as shown in the provided graph), with increasingly loaded CPUs, Graviton-based instances demonstrated close to linear performance, resulting in better query runtimes than equivalent Intel x86-based instances. This gave Zomato headroom to reduce the overall node count in the Druid cluster while serving the same peak-load query traffic.

Figure 6 – CPU utilization (Intel x86-64 vs. AWS Graviton)

Cost savings

A cost comparison of Intel x86-based vs. AWS Graviton-based instances running Druid in the test environment, along with the instance counts, instance types, and hourly On-Demand prices in the Singapore Region, is shown here. Running the same number of Graviton-based instances yields cost savings of ~24%. Further, the Druid cluster auto scales in the production environment based on performance metrics, so average cost savings with Graviton-based instances are even higher, at ~30%, due to the better performance.

Figure 7 – Cost savings analysis (Intel x86-64 vs. AWS Graviton)

Graviton migration of Trino clusters

Zomato also moved their Trino cluster in the test environment to AWS Graviton-based instances and monitored query performance for different short- and long-running queries. As shown here, the mean wall (elapsed) time for different Trino queries is lower on AWS Graviton-based instances than on equivalent Intel x86-based instances for most queries (lower is better).

Figure 8 – Mean Wall Time for Trino queries (Intel x86-64 vs. AWS Graviton)

Also, p99 query runtime was reduced by ~33% after migrating the Trino cluster to AWS Graviton-based instances, for a typical business day’s (7am – 7pm) mixed query load of ~15K queries.

Figure 9 – Query performance for a typical day (7am – 7pm) load

Zomato’s team further optimized overall Trino query performance by improving Advanced Encryption Standard (AES) performance on Graviton for TLS negotiation with Amazon S3. This was achieved by enabling the -XX:+UnlockDiagnosticVMOptions and -XX:+UseAESCTRIntrinsics extra JVM flags. As shown here, mean CPU time for queries is lower after enabling the extra JVM flags for most of the queries.
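As a rough sketch of where these flags live: Trino reads JVM options from the etc/jvm.config file in its installation directory, one option per line, and -XX:+UnlockDiagnosticVMOptions must appear before -XX:+UseAESCTRIntrinsics so the diagnostic flag is unlocked first. The surrounding options and heap size below are illustrative assumptions, not Zomato’s actual configuration:

-server
-Xmx54G
-XX:+UnlockDiagnosticVMOptions
-XX:+UseAESCTRIntrinsics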

Figure 10 – Query performance after enabling extra JVM options with Graviton instances

Migration challenges and approach

The Zomato team uses Trino version 359, and a multi-arch or arm64-compatible Docker image was not available for this version. Because the team wanted to migrate their Trino cluster to Graviton-based instances with minimal engineering effort and time, they backported the multi-arch UBI8-based Trino Docker image to their Trino version 359. This approach allowed faster adoption of Graviton-based instances, eliminating the heavy lift of upgrading, testing, and benchmarking the workload on a newer Trino version.

Next Steps

Zomato has already migrated AWS managed services, including Amazon EMR and the Amazon Aurora MySQL-Compatible database, to AWS Graviton-based instances. With the successful migration of two main open source components (Trino and Druid) of their data platform to AWS Graviton, with visible and immediate price-performance gains, the Zomato team plans to replicate that success with other open source applications running on Amazon EC2, including Apache Kafka and Apache Pinot.

Conclusion

This post demonstrated the price/performance benefits of adopting AWS Graviton-based instances for high throughput, near real-time big data analytics workloads running on the Java-based, open source Apache Druid and Trino applications. Overall, Zomato reduced the cost of its Amazon EC2 usage by 30% while improving performance for both time-critical and ad hoc querying by as much as 25%. Due to the better performance, Zomato was also able to right-size the compute footprint for these workloads on a smaller number of Amazon EC2 instances, with the peak capacity of the Apache Druid and Trino clusters reduced by 25% and 20%, respectively.

Zomato migrated these open source applications quickly by rapidly implementing the customizations needed for optimum performance and compatibility with Graviton-based instances. Zomato’s mission is “better food for more people,” and Graviton adoption is helping with this mission by providing a more sustainable, performant, and cost-effective compute platform on AWS. This is certainly “food for thought” for customers looking to improve price performance and sustainability for their business-critical workloads running on Open Source Software (OSS).


Multi-Architecture Container Builds with CodeCatalyst

AWS Graviton processors are designed by AWS to deliver the best price performance for your cloud workloads running in Amazon Elastic Compute Cloud (Amazon EC2). Amazon CodeCatalyst recently added support to run workflow actions using on-demand or pre-provisioned compute powered by AWS Graviton processors. Customers can now access high-performance AWS Graviton processors to build artifacts for Arm, or to improve their price performance. In this post I will show you how to create a multi-architecture Docker image using CodeCatalyst that can run on both amd64 and arm64 processors.

Background

Container images only run on a system with the same CPU architecture for which they were targeted. For example, an amd64 image runs on Intel and AMD processors, while an arm64 image runs on AWS Graviton. Note that amd64 and x86_64 are often used interchangeably, and I have chosen to use amd64 in this post. Rather than maintaining multiple repositories for each image type, you can combine variants for multiple architectures in the same repository. In addition, you can create a manifest describing which image to use for each architecture. This is known as a multi-architecture, or multi-platform, image.

Let us look at an example to further understand multi-arch images. In this screenshot from Amazon Elastic Container Registry (Amazon ECR), I have created two images for a simple hello-world application: one tagged latest-amd64 for the amd64 architecture and one tagged latest-arm64 for the arm64 architecture.

In addition, I have created an Image Index tagged latest. The image index is a map describing which image to use for each architecture. This allows my users to simply pull hello-world:latest and the index will identify the correct image based on the target platform. The image index contains the following manifest.

{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 1573,
      "digest": "sha256:eccb6dd2c2dbfc9…",
      "platform": {
        "architecture": "amd64",
        "os": "linux"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 1573,
      "digest": "sha256:c64812837fbd43…",
      "platform": {
        "architecture": "arm64",
        "os": "linux"
      }
    }
  ]
}
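If you want to examine an image index like this yourself, the docker manifest inspect command prints it. For example, Docker Hub’s public hello-world image is multi-architecture, so the following command (against Docker Hub, not the ECR repository above) shows a similar manifest list:

docker manifest inspect hello-world:latest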

Now that I have explained what a multi-arch image is, I will explain how to create one in a CodeCatalyst workflow. A CodeCatalyst workflow is an automated procedure that describes how to build, test, and deploy your code as part of a continuous integration and continuous delivery (CI/CD) system. A workflow defines a series of steps, or actions, to take during a workflow run. Let’s get started.

Prerequisites

If you would like to follow along with this walkthrough, you will need:

A CodeCatalyst space and associated AWS account.

An empty CodeCatalyst project and source repository in the space.
An Amazon ECR private repository in the associated AWS account.
A CodeCatalyst environment connected to the associated AWS account.

Walkthrough

In this walkthrough I will create a simple application using an Apache HTTP Server serving a static hello world page. The workload is inconsequential; I will focus on the process of building the container image using a CodeCatalyst workflow. The workflow will build two container images, one for amd64 and one for arm64. The two build tasks will run in parallel on different compute architectures. When both builds are complete, the workflow will build the Docker manifest. At the end of this post, my workflow will look like this.

Note that Docker also offers a plugin called buildx that allows you to build a multi-architecture image with a single command, as sketched below. In a real-world application, the workflow would also build the source code, run unit tests, and so on, for each architecture. The sample application used in this post is so simple that there is no need to build and test the source code. Let’s examine the sample application now.
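For reference, a buildx invocation roughly equivalent to this post’s workflow might look like the following. This is a sketch, not part of the walkthrough: the account ID and Region in the image URI are placeholders, and it assumes you have already logged in to ECR and configured a buildx builder that supports both platforms (native nodes or QEMU emulation):

docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t <account-id>.dkr.ecr.<region>.amazonaws.com/hello-world:latest \
  --push .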

Sample Application

Initially the empty repository will only have a README.md file. By the end of this post, my repository will look like this.

I’ll begin by creating the file named index.html, using the Create file button in the CodeCatalyst console shown previously. My index.html file has the following content:

<html>
<head>
<title>Hello World!</title>
</head>
<body>
<h1>Hello World!</h1>
<p>Hello from a multi-architecture container created in CodeCatalyst.</p>
</body>
</html>

I’ll also create a Dockerfile that contains two commands. The first command instructs Docker to build a new image from the Apache HTTP Server Project image called httpd. It is important to note that the httpd image already supports multiple architectures, including amd64 and arm64. When creating a multi-architecture image, the base image must also support these architectures. The second command simply copies the index.html file above into the new image. My Dockerfile has the following content.

FROM httpd
COPY ./index.html /usr/local/apache2/htdocs/

With the source code for my sample application complete, I can turn my attention to the workflow.

CI/CD Workflow

To create a new workflow, select CI/CD from the navigation on the left and then select Workflows (1). Then, select Create workflow (2), leave the default options, and select Create (3).

If the workflow editor opens in YAML mode, select Visual to open the visual designer. Now, I can start adding actions to the workflow.

Build Action for the AMD64 Variant

I’ll begin by adding a build action for the amd64 container. Select “+ Actions” to open the actions list. Find the Build action and click “+” to add a new build action to the workflow.

On the Inputs tab, create three variables named AWS_DEFAULT_REGION, IMAGE_REPO_NAME, and IMAGE_TAG. Set the first two to the Region and name of your Amazon ECR repository, and set the third to latest-amd64. For example:

Now select the Configuration tab and rename the action docker_build_amd64. Select the Environment, AWS account connection, and Role for the associated AWS account where you created the Amazon ECR repository. For example:

Then, copy and paste the following code into the Shell commands. This code will build the image using the Dockerfile you created previously. Then, it logs into Amazon ECR, and finally, pushes the new image to ECR.

- Run: AWS_ACCOUNT_ID=`aws sts get-caller-identity --query "Account" --output text`
- Run: docker build -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG .
- Run: aws ecr get-login-password | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
- Run: docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG

If you switch back to the YAML view, you can see that the designer has added the following action to the workflow definition.

docker_build_amd64:
  Identifier: aws/build@v1
  Compute:
    Type: EC2
  Inputs:
    Sources:
      - WorkflowSource
    Variables:
      - Name: AWS_DEFAULT_REGION
        Value: us-west-2
      - Name: IMAGE_REPO_NAME
        Value: hello-world
      - Name: IMAGE_TAG
        Value: latest-amd64
  Environment:
    Name: demo
    Connections:
      - Role: CodeCatalystPreviewDevelopmentAdministrator
        Name: development
  Configuration:
    Steps:
      - Run: AWS_ACCOUNT_ID=`aws sts get-caller-identity --query "Account" --output text`
      - Run: docker build -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG .
      - Run: aws ecr get-login-password | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      - Run: docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG

With the amd64 image complete, you can move on to the arm64 image.

Build Action for the ARM64 Variant

Add a second build action named docker_build_arm64 for the arm64 container. The configuration is nearly identical to the previous action with two minor changes. First, on the Inputs tab, I set the IMAGE_TAG to latest-arm64.

Second, on the Configuration tab, change the compute fleet to Linux.Arm64.Large. That is all you need to do to run your action on AWS Graviton. For example:

The Shell commands are identical to the amd64 build action. In addition, don’t forget to select the Environment, AWS account connection, and Role on the Configuration tab. The complete configuration for the second action looks like this:

docker_build_arm64:
  Identifier: aws/build@v1
  Compute:
    Type: EC2
    Fleet: Linux.Arm64.Large
  Inputs:
    Sources:
      - WorkflowSource
    Variables:
      - Name: AWS_DEFAULT_REGION
        Value: us-west-2
      - Name: IMAGE_REPO_NAME
        Value: hello-world
      - Name: IMAGE_TAG
        Value: latest-arm64
  Environment:
    Name: demo
    Connections:
      - Role: CodeCatalystPreviewDevelopmentAdministrator
        Name: development
  Configuration:
    Steps:
      - Run: AWS_ACCOUNT_ID=`aws sts get-caller-identity --query "Account" --output text`
      - Run: docker build -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG .
      - Run: aws ecr get-login-password | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      - Run: docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG

Now that you have a build action for the amd64 and arm64 images, you simply need to create a manifest file describing which image to use for each architecture.

Build Action for the Manifest

The final step in the workflow is to create the Docker manifest. Create a third build action named docker_manifest. You want this action to wait for the prior two actions to complete. Therefore, select the prior two actions from the Depends on drop down, like this:

Also configure four variables. AWS_DEFAULT_REGION and IMAGE_REPO_NAME are identical to the prior actions. In addition, IMAGE_TAG_AMD64 and IMAGE_TAG_ARM64 include the tags you created in the prior actions.

On the configuration tab, select the Environment, AWS account connection, and Role as you did in the prior actions. Then, copy and paste the following Shell commands.

- Run: AWS_ACCOUNT_ID=`aws sts get-caller-identity --query "Account" --output text`
- Run: aws ecr get-login-password | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
- Run: docker manifest create $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_ARM64 $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_AMD64
- Run: docker manifest annotate --arch amd64 $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_AMD64
- Run: docker manifest annotate --arch arm64 $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_ARM64
- Run: docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME

The shell commands create a manifest and then annotate it with the correct image for both amd64 and arm64. The final action looks like this.

docker_manifest:
  Identifier: aws/build@v1
  DependsOn:
    - docker_build_arm64
    - docker_build_amd64
  Compute:
    Type: EC2
  Inputs:
    Sources:
      - WorkflowSource
    Variables:
      - Name: AWS_DEFAULT_REGION
        Value: us-west-2
      - Name: IMAGE_REPO_NAME
        Value: hello-world
      - Name: IMAGE_TAG_AMD64
        Value: latest-amd64
      - Name: IMAGE_TAG_ARM64
        Value: latest-arm64
  Environment:
    Name: demo
    Connections:
      - Role: CodeCatalystPreviewDevelopmentAdministrator
        Name: development
  Configuration:
    Steps:
      - Run: AWS_ACCOUNT_ID=`aws sts get-caller-identity --query "Account" --output text`
      - Run: aws ecr get-login-password | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      - Run: docker manifest create $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_ARM64 $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_AMD64
      - Run: docker manifest annotate --arch amd64 $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_AMD64
      - Run: docker manifest annotate --arch arm64 $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG_ARM64
      - Run: docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME

I now have a complete CI/CD workflow that creates container images for both amd64 and arm64. When I commit the changes, CodeCatalyst will execute my workflow, build the images, and push them to Amazon ECR.

Cleanup

If you have been following along with this workflow, you should delete the resources you deployed so you do not continue to incur charges. First, delete the Amazon ECR repository using the AWS console. Second, delete the project from CodeCatalyst by navigating to Project settings and choosing Delete project.

Conclusion

AWS Graviton processors are custom-built by AWS to deliver the best price performance for cloud workloads. In this post I explained how to configure CodeCatalyst workflow actions to run on AWS Graviton. I used CodeCatalyst to create a workflow that builds a multi-architecture container image that can run on both amd64 and arm64 architectures. Get started building your multi-arch containers in Amazon CodeCatalyst today! You can read more about CodeCatalyst workflows in the documentation.

Monitoring Amazon DevOps Guru insights using Amazon Managed Grafana

As organizations operate day-to-day, having insights into their cloud infrastructure state can be crucial for the durability and availability of their systems. Industry research estimates[1] that downtime costs small businesses around $427 per minute, and medium to large businesses an average of $9,000 per minute. Amazon DevOps Guru customers want to monitor and generate alerts using a single dashboard. This allows them to reduce context switching between applications, providing them an opportunity to respond to operational issues faster.

DevOps Guru can integrate with Amazon Managed Grafana to create and display operational insights. Alerts can be created and communicated for any critical events captured by DevOps Guru and notifications can be sent to operation teams to respond to these events. The key telemetry data types of logs and metrics are parsed and filtered to provide the necessary insights into observability.

Furthermore, Amazon Managed Grafana provides plug-ins to popular open-source databases, third-party ISV monitoring tools, and other cloud services. With Amazon Managed Grafana, you can easily visualize information from multiple AWS services, AWS accounts, and Regions in a single Grafana dashboard.

In this post, we will walk you through integrating the insights generated from DevOps Guru with Amazon Managed Grafana.

Solution Overview:

This architecture diagram shows the flow of the logs and metrics that will be utilized by Amazon Managed Grafana. Starting from DevOps Guru, Amazon EventBridge saves the insight event logs to an Amazon CloudWatch log group. Amazon Managed Grafana then parses these logs, along with the DevOps Guru service metrics, to create new dashboards in Grafana.

Now we will walk you through how to do this and set up notifications to your operations team.

Prerequisites:

The following prerequisites are required for this walkthrough:

An AWS account

DevOps Guru enabled on your account, monitoring either an AWS CloudFormation stack or tagged resources

Using Amazon CloudWatch Metrics

DevOps Guru sends service metrics to CloudWatch Metrics. We will use these to track metrics for insights and for your DevOps Guru usage; by default, the DevOps Guru service reports its metrics to the AWS/DevOps-Guru namespace in CloudWatch.
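As a quick check before building the dashboard, you can list the metrics DevOps Guru has published to this namespace with the AWS CLI (the results depend on your account’s DevOps Guru activity):

aws cloudwatch list-metrics --namespace "AWS/DevOps-Guru"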

First, we will provision an Amazon Managed Grafana workspace and then create a Dashboard in the workspace that uses Amazon CloudWatch as a data source.

Setting up Amazon CloudWatch Metrics

1. Create a Grafana workspace
Navigate to Amazon Managed Grafana from the AWS console, then click Create workspace.

a. Select the Authentication mechanism

i. AWS IAM Identity Center (AWS SSO) or SAML v2 based Identity Providers

ii. Service Managed Permission or Customer Managed

iii. Choose Next

b. Under “Data sources and notification channels”, choose Amazon CloudWatch

c. Create the Service.

You can use this post for more information on how to create and configure the Grafana workspace with SAML based authentication.

Next, we will show you how to create a dashboard and parse the Logs and Metrics to display the DevOps Guru insights and recommendations.

2. Configure Amazon Managed Grafana

a. Add CloudWatch as a data source:
From the left navigation menu, hover over AWS and select Data sources.

b. From the Services dropdown, select and configure CloudWatch.

3. Create a Dashboard

a. From the left navigation bar, click Add a new panel.

b. You will see a demo panel.

c. In the demo panel, click Data source and select Amazon CloudWatch.

d. For this panel we will use CloudWatch metrics to display the number of insights.

e. For Namespace, select AWS/DevOps-Guru; for Metric name, select Insights; and for Statistic, select Average. Click Apply.

f. This is our first panel. We can change the panel name from the right-side bar under Title. We will name this panel “Insights”.

g. From the top-right menu, click Save dashboard and give your new dashboard a name.

Using Amazon CloudWatch Logs via Amazon EventBridge

For insights beyond the service metrics, such as the number of insights for a specific service, or the average for a Region or a specific AWS account, we need to parse the event logs. These logs are not sent to Amazon CloudWatch Logs by default, so we will use Amazon EventBridge to deliver them. We will go over the details of how to set this up and how to parse these logs in Amazon Managed Grafana using CloudWatch Logs query syntax. In this post, we will show a couple of examples; for more details, please check out this User Guide documentation.

DevOps Guru logs include other details that can be helpful when building dashboards, such as the Region, insight severity (High, Medium, or Low), associated resources, and the DevOps Guru dashboard URL, among other things. For more information, please check out this User Guide documentation.

EventBridge offers a serverless event bus that helps you receive, filter, transform, route, and deliver events. It provides one to many messaging solutions to support decoupled architectures, and it is easy to integrate with AWS Services and 3rd-party tools. Using Amazon EventBridge with DevOps Guru provides a solution that is easy to extend to create a ticketing system through integrations with ServiceNow, Jira, and other tools. It also makes it easy to set up alert systems through integrations with PagerDuty, Slack, and more.


Setting up Amazon CloudWatch Logs

Let’s dive in to creating the EventBridge rule and enhance our Grafana dashboard:

a. First head to Amazon EventBridge in the AWS console.

b. Click Create rule.

Type in a rule Name and Description. You can leave Event bus set to default and Rule type set to Rule with an event pattern.

c. Select AWS events or EventBridge partner events.

For Event pattern, change to Custom patterns (JSON editor) and use:

{"source": ["aws.devops-guru"]}

This filters for all events generated by DevOps Guru. You can use the same mechanism to filter for specific messages, such as new insights or closed insights, and route them to different channels; a sample pattern is shown below. For this demonstration, let’s capture all events.
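For instance, a pattern that matches only new insights might look like the following sketch; it filters on the same detail.messageType field that the log queries later in this post use:

{
  "source": ["aws.devops-guru"],
  "detail": {
    "messageType": ["NEW_INSIGHT"]
  }
}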

d. Next, for Target, select AWS service, then choose CloudWatch log group.

For the log group, give your group a name, such as “devops-guru”.

e. Click Create rule.

f. Navigate back to Amazon Managed Grafana.
It’s time to add a couple more panels to our dashboard. Click Add panel, then select Amazon CloudWatch, change from Metrics to CloudWatch Logs, and select the log group we created previously.

g. For the query use the following to get the number of closed insights:

fields @detail.messageType
| filter detail.messageType="CLOSED_INSIGHT"
| count(detail.messageType)

You’ll see the new dashboard get updated with “Data is missing a time field”.

You can either open the suggestions and select a gauge that makes sense;

Or choose from multiple visualization options.

Now we have 2 panels:

h. You can repeat the same process to create a third panel for the new insights using this query:

fields @detail.messageType
| filter detail.messageType="NEW_INSIGHT"
| count(detail.messageType)

Now we have 3 panels:

Next, depending on the visualizations, you can work with the Logs and metrics data types to parse and filter the data.

i. For our fourth panel, we will add a direct link to the DevOps Guru dashboard in the AWS console.

Repeat the same process as demonstrated previously, this time with the following query:

fields detail.messageType, detail.insightSeverity, detail.insightUrl
| filter detail.messageType="CLOSED_INSIGHT" or detail.messageType="NEW_INSIGHT"

Switch to the table visualization when prompted on the panel.

This will give us a direct link to the DevOps Guru dashboard and help us get to the insight details and Recommendations.

Save your dashboard.

You can extend observability by sending notifications through alerts on dashboard panels that provide metrics. Alerts are triggered when a condition is met and are communicated through the Amazon SNS notification mechanism. This is our SNS notification channel setup.

The previously created notification channel is then used to communicate any alerts when the condition is met across the observed metrics.

Cleanup

To avoid incurring future charges, delete the resources.

Navigate to EventBridge in the AWS console and delete the rule named “devops-guru” created in steps a–e.
Navigate to CloudWatch Logs in the AWS console and delete the log group named “devops-guru” created in steps a–e.
Navigate to the Amazon Managed Grafana service and delete the Grafana workspace you created in step 1.

Conclusion

In this post, we have demonstrated how to incorporate Amazon DevOps Guru insights into Amazon Managed Grafana and use Grafana as the observability tool. This allows operations teams to observe the state of their AWS resources and be notified through alarms when preset thresholds on DevOps Guru metrics and logs are crossed. You can expand on this to create other panels and dashboards specific to your needs. If you don’t have DevOps Guru, you can start monitoring your AWS applications with AWS DevOps Guru today using this link.

[1] https://www.atlassian.com/incident-management/kpis/cost-of-downtime

About the authors:

MJ Kubba

MJ Kubba is a Solutions Architect who enjoys working with public sector customers to build solutions that meet their business needs. MJ has over 15 years of experience designing and implementing software solutions. He has a keen passion for DevOps and cultural transformation.

David Ernst

David is a Sr. Specialist Solution Architect – DevOps, with 20+ years of experience in designing and implementing software solutions for various industries. David is an automation enthusiast and works with AWS customers to design, deploy, and manage their AWS workloads/architectures.

Sofia Kendall

Sofia Kendall is a Solutions Architect who helps small and medium businesses achieve their goals as they utilize the cloud. Sofia has a background in Software Engineering and enjoys working to make systems reliable, efficient, and scalable.

AWS Now Supports Credentials-fetcher for gMSA on Amazon Linux 2023

In Q1 of 2023, AWS announced the release of the group Managed Service Account (gMSA) credentials-fetcher daemon, with initial support on Amazon Linux 2023, Fedora Linux 36, and Red Hat Enterprise Linux 9. The credentials-fetcher daemon, developed by AWS, is an open source project under the Apache 2.0 License. This release solves a decade-long challenge affecting domain-connected Linux machines: until now, Linux users couldn’t use Microsoft Active Directory (Microsoft AD) gMSAs and thus missed out on the improved security and flexibility that gMSA offers over standard service accounts. With the release of the credentials-fetcher daemon, organizations now gain all of gMSA’s benefits without being tied to Windows-based hosts.

In this blog post, we explain the use case for credentials-fetcher and give simple instructions for using an Active Directory domain-joined Linux server with gMSA. We also demonstrate the interaction with other domain-joined services such as Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server. The new capabilities of credentials-fetcher pave the way for additional use cases, such as using a Linux host in Amazon Elastic Container Service (Amazon ECS) clusters with gMSA. AWS is committed to using the credentials-fetcher open source project in the AWS cloud, though users may choose to run the service elsewhere. The utility of the service is not limited to AWS: the credentials-fetcher daemon can be leveraged on any supported distribution of Linux and in any environment that meets the Microsoft Active Directory version requirement, including on-premises environments, hosted data centers, and other cloud providers.

Solution overview

Organizations running Windows workloads hosted in on-premises data centers use Microsoft AD to authenticate users and services to shared resources over the network. As these organizations migrate workloads into Windows-based environments on AWS and on other clouds, customers traditionally use the domain-join model to access Microsoft AD from Windows instances. In addition, organizations that use Windows containers to scale their applications and reduce their total cost of ownership (TCO) have used gMSAs for Active Directory access by providing Kerberos tickets for container-hosts.

As customers modernize their Windows and Microsoft SQL Server workloads to Linux-based platforms, they still need to authenticate the migrated applications through the organization’s existing Microsoft AD. Although customers can use the domain-join methodology to connect Linux instances to Microsoft AD, it requires a number of steps and has traditionally come with security limitations. The current method involves a sidecar architecture that fails to periodically rotate passwords, unlike gMSA on Windows containers, thus introducing a risk of password exposure. Organizations with stringent security postures have not adopted this method on Linux containers and have been waiting for a “gMSA on Windows containers”-like experience on Linux containers. Active Directory gMSAs were technically infeasible for customers in Linux-based environments, until today.

A brief introduction to gMSA

Windows-based server infrastructure commonly uses Microsoft Active Directory to facilitate authentication and authorization between users, computers, and other computer network resources. Traditionally, enterprise applications running on Windows platforms use either manually managed accounts as service accounts or Managed Service Accounts (MSAs) for authentication and authorization. The use of manually managed service accounts brings with it the overhead of service account password management, including manually updating the password and updating it on all servers. It also introduces increased security risks, as these accounts typically have elevated privileges and are not tied to a specific user, which creates challenges for attributing activity when auditing the account. For this reason, password management of these accounts is critical.

In contrast, Managed Service Accounts have no password management overhead; the passwords for these types of accounts are automatically rotated and updated on your servers. They are also limited to a single computer account, which means they can’t be used on more than one computer and cannot be used for interactive logons. A group Managed Service Account (gMSA) is a special type of service account that extends this functionality: its identity can be shared across multiple computers without needing to know the password. Computers must be part of a Microsoft Active Directory domain, which manages these service accounts on their behalf. Although Windows containers cannot join a domain like an instance can, they can still use a gMSA identity for authentication and authorization.

Credentials-fetcher’s potential scenarios

With the addition of the credentials-fetcher daemon, more organizations can use gMSA. This gives customers more options if they are more familiar with Linux, looking to save on licensing costs, or looking to improve their security posture. Customers can now associate Linux machines with a gMSA and take advantage of the authentication and authorization between members of that group managed service account. Environments hosted on domain-joined, gMSA-associated Linux machines running .NET applications or running in Linux containers can now use the gMSA to authenticate between their own domains and other services like Microsoft SQL Server.

Scenario 1: A Microsoft .NET application is running in Docker containers, with the hosts on a Microsoft Active Directory domain-joined Amazon Elastic Compute Cloud (Amazon EC2) Linux server. The Linux application server is added as a member of the gMSA group. The gMSA account is granted permissions to the domain-joined Microsoft SQL Server or Amazon RDS for Microsoft SQL Server database.

Scenario 2: A Microsoft .NET application is running in Docker containers, with Microsoft SQL Server running in its own Docker container, and the hosts on a Microsoft Active Directory domain-joined Amazon EC2 Linux server. The Linux host servers of the application containers and the Microsoft SQL Server container are added as members of the gMSA group. The gMSA account is granted permissions to the Microsoft SQL Server instance database running in a container.

Scenario 3: A Microsoft .NET application is running on an Amazon Elastic Container Service (Amazon ECS) cluster, hosted on a Microsoft Active Directory domain. The Linux servers within the Amazon ECS cluster are added as members of the gMSA group. The gMSA account is granted permissions to the domain joined Microsoft SQL Server or Amazon RDS for Microsoft SQL Server database.

Here is a visualization of the featured use-case scenarios.

Figure 1 Different use-case scenarios with Credentials-fetcher

Implementing the environment

This section walks you through the prerequisites, environment setup, and installation steps for the credentials-fetcher daemon’s use cases.

Prerequisites

You have properly installed and configured the AWS Command Line Interface (AWS CLI) and PowerShell Core on your workstation. We’ve chosen the AWS CLI for these steps so that the end-to-end workflow can be demonstrated.
This blog post, as of April 4th, 2023, requires an install of Fedora Linux 36 or newer, or the latest Amazon Linux 2023 AMI.
This blog post references AWS Managed Microsoft Active Directory, but it will also work with other self-managed Microsoft Active Directory scenarios as long as the Linux machines can be domain joined.
You have an Amazon Relational Database Service (Amazon RDS) instance that is joined to the domain.
You have elevated administrative Active Directory credentials to configure instances to join a domain and to create a Microsoft AD security group.
You have access to the credentials-fetcher GitHub repository for the latest daemon installation and updated instructions.

Environment Setup for gMSA on Linux use cases

Figure 2 Credentials-fetcher running in Fedora Linux Server.

All instructions assume the use of the Fedora Linux 36 distro, which was available at the time this blog was created. We plan to add gMSA support for additional Linux distributions in the future.

1.     Set up AWS Managed Microsoft Active Directory or Self-hosted Active Directory.

Active Directory setup: You will domain-join the Linux instance to the AD domain. The Linux instance must be part of the AD security group that has access to the gMSA account, as configured by the AD administrator.
AWS Managed Microsoft Active Directory can be deployed using this AWS CloudFormation template.

2.     Create a gMSA account as a Microsoft AD administrator.

Example: Replace ‘LinuxAppFarm’, the computer account names and ‘CORP.EXAMPLE.COM’ with your own gMSA group, instance and domain names, respectively. Three Linux instances are displayed in this example: LinuxInstance01$, LinuxInstance02$ and LinuxInstance03$.

# Create the AD group
New-ADGroup -Name "LinuxAppFarm" -SamAccountName "LinuxAppFarm" -GroupScope DomainLocal

# Create the gMSA
New-ADServiceAccount -Name "gmsamachines" -DnsHostName "gmsamachines.CORP.EXAMPLE.COM" -ServicePrincipalNames "host/LinuxAppFarm", "host/LinuxAppFarm.CORP.EXAMPLE.COM" -PrincipalsAllowedToRetrieveManagedPassword "LinuxAppFarm"

# Add your Linux instances or containers to the AD group
Add-ADGroupMember -Identity "LinuxAppFarm" -Members "LinuxInstance01$", "LinuxInstance02$", "LinuxInstance03$", "MSSQLRDSInstance$"

3.     Verify and test the gMSA account.

PowerShell
# Test the gMSA account from a group member computer
Test-ADServiceAccount gmsamachines

# Get the current computer's group membership
Get-ADComputer $env:computername | Get-ADPrincipalGroupMembership | Select-Object DistinguishedName

# Get the groups allowed to retrieve the gMSA password; replace "gmsamachines" with your own gMSA name
(Get-ADServiceAccount gmsamachines -Properties PrincipalsAllowedToRetrieveManagedPassword).PrincipalsAllowedToRetrieveManagedPassword

Additional detailed instructions can be found in this guide to getting started with group managed service accounts.

4.     Create a credentialspec associated with a gMSA account:

Install the PowerShell CredentialSpec module and create a CredentialSpec:

PowerShell
Install-Module CredentialSpec

New-CredentialSpec -AccountName LinuxAppFarm # Replace 'LinuxAppFarm' with your own gMSA group

You will find the credentialspec in the directory ‘C:\ProgramData\Docker\CredentialSpecs\LinuxAppFarm_CredSpec.json’

5.     Obtain and deploy a supported Fedora Linux 36 or newer AMI (AWS public cloud download).

6.     Manually join your Linux system to the Microsoft Active Directory domain using the following command:

#Install realmd and configure DNS resolver for the Active Directory domain
sudo dnf install realmd sssd oddjob oddjob-mkhomedir adcli krb5-workstation samba-common-tools -y
sudo systemctl stop systemd-resolved
sudo systemctl disable systemd-resolved
sudo unlink /etc/resolv.conf

#Add your DNS nameserver IP and domain name to the resolv.conf and save
sudo nano /etc/resolv.conf

nameserver 10.0.0.20
search corp.example.com

#Join the Linux server to the realm/domain (case-sensitive)

Replace the (upper-case) realm account and domain name with the UPN of your domain user and the FQDN of your domain name.
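A representative join command is shown below (a sketch; the user UPN ADMIN@CORP.EXAMPLE.COM and domain corp.example.com are placeholders for your own values):

#Join the domain (you will be prompted for the domain user's password)
sudo realm join -v -U ADMIN@CORP.EXAMPLE.COM corp.example.com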

Auto-join is not currently supported until the Amazon Linux 2023 distro is updated with the new rpm.

Microsoft SQL Server and Amazon RDS for Microsoft SQL Server can be added for Kerberos database authentication.

Microsoft SQL and Amazon RDS for Microsoft SQL Server must be joined to the AWS Managed Microsoft AD Domain.

See instructions on how to connect Amazon RDS for Microsoft SQL Server to the Microsoft Active Directory domain.

For the highest level of security, constrained Kerberos delegation settings should be applied to the gMSA accounts used for any service access. Replace TestgMSA$ with your gMSA account name in the commands below.

Set-ADAccountControl -Identity TestgMSA$ -TrustedForDelegation $false -TrustedToAuthForDelegation $false
Set-ADServiceAccount -Identity TestgMSA$ -Clear 'msDS-AllowedToDelegateTo'

Detailed instructions can be found here.

7.     Invoke the AddKerberosLease API with the credentialspec input as shown in the following command. This step is important because it allows the credentials-fetcher daemon to make a connection to Microsoft Active Directory; the gMSA account is then used for authentication.
Use this command with Fedora Linux only (grpc_cli is not available on Amazon Linux):

#Replace the gMSA group name, NetBIOS name and DNS names in the command
grpc_cli call unix:/var/credentials-fetcher/socket/credentials_fetcher.sock AddKerberosLease "credspec_contents: '{"CmsPlugins":["ActiveDirectory"],"DomainJoinConfig":{"Sid":"S-1-5-21-1445507628-2856671781-3529916291","MachineAccountName":"gmsamachines","Guid":"af602f85-d754-4eea-9fa8-fd76810485f1","DnsTreeName":"corp.example.com","DnsName":"corp.example.com","NetBiosName":"DEMOCORP"},"ActiveDirectoryConfig":{"GroupManagedServiceAccounts":[{"Name":"gmsamachines","Scope":"corp.example.com"},{"Name":"gmsamachines","Scope":"DEMOCORP"}]}}'"

Response example (note the ticket path in the response for use with your Docker application container):
path to kerberos ticket : /var/credentials-fetcher/krbdir/726837743cc966c7b4da/WebApp01

8.     Invoke the DeleteKerberosLease API with the lease id input as shown here. Use the unique lease_id that was returned when the lease was created. The deleted_kerberos_file_paths in the response are the paths of the Kerberos tickets that were deleted for the corresponding gMSA accounts.
Use this command from the Linux host:

#Delete Kerberos Lease sample command
grpc_cli call unix:/var/credentials-fetcher/socket/credentials_fetcher.sock DeleteKerberosLease "lease_id: '${response_lease_id_from_add_kerberos_lease}'"

Installation of credentials-fetcher on supported Linux distros

In this basic use case for testing the credentials-fetcher rpm, the following architecture is assumed for the purposes of this blog.

An AWS Managed Microsoft Active Directory domain-joined Linux application server.
An AWS Managed Microsoft Active Directory domain-joined Amazon RDS for Microsoft SQL Server instance.
A gMSA account established for account credentials in AWS Managed Microsoft Active Directory.

Fedora Linux 36 Server setup:

Deploy the “Fedora cloud based image for AWS public cloud”, located here, to your AWS account.
Credentials-fetcher is packaged and included as part of the standard Fedora package repositories. Install the credentials-fetcher rpm with the following command:

sudo dnf install credentials-fetcher -y
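If you want the daemon to start automatically at boot and verify that it is running, you can use systemd (a minimal sketch using the service name installed by the rpm):

#Enable and start the credentials-fetcher daemon, then check its status
sudo systemctl enable --now credentials-fetcher
sudo systemctl status credentials-fetcher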

How to use credentials-fetcher per scenario

In these instructions, we will demonstrate the use of credentials-fetcher with an ASP.NET application and Amazon RDS for Microsoft SQL Server. A Microsoft SQL Server container scenario will also be demonstrated as an additional use case.

Scenario 1:  Using .NET Core container application on Linux with Amazon RDS for Microsoft SQL Server backend

Figure 3 Using .NET Core container application on Linux with Amazon RDS for Microsoft SQL Server backend

Once the environment prerequisites have been met, you can install Docker and set up its package repository in preparation for deploying a .NET Core application to a container on the Linux server or servers.

1.     Set up the repository.

Install the dnf-plugins-core package (which provides the commands to manage your DNF repositories) and set up the repository.

sudo dnf -y install dnf-plugins-core

sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo

2.     Install Docker Engine and verify credentials-fetcher is installed and started.

Install the latest version of Docker Engine, containerd, and Docker Compose and start the Docker daemon:

sudo dnf install docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo systemctl start docker

sudo dnf install credentials-fetcher
sudo systemctl start credentials-fetcher

Additional detailed instructions on how to install Docker Engine can be found here.

3.     Create a Kerberos ticket associated with the gMSA account, as described in step 7 of “Environment Setup for gMSA on Linux use cases”.

Take note of the ticket path in the response generated by the Kerberos ticket creation.

4.     Leverage the Dockerfile for environment variables

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-image
WORKDIR /src
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
EXPOSE 80
COPY --from=build-image /src/out ./
RUN apt-get -qq update && \
    apt-get -yqq install krb5-user && \
    apt-get -yqq clean

ENV KRB5CCNAME=/var/credentials-fetcher/krbdir/krb5cc

ENTRYPOINT ["dotnet", "WebApp.dll"]

Example ticket path: /var/credentials-fetcher/krbdir/726837743cc966c7b4da/WebApp01

5.     Build a Docker image for your application on the Linux server:

docker build -t <your_image_name> .

6.     Run Docker with a bind mount to the Kerberos ticket, ensuring that the environment variable KRB5CCNAME (see the example Dockerfile above) points to the destination location of the bind mount inside the application container.

sudo docker run -p 80:80 -d -it --name webapp1 --mount type=bind,source=/var/credentials-fetcher/krbdir/726837743cc966c7b4da/WebApp01,target=/var/credentials-fetcher/krbdir {docker_image}

7.     Add Amazon RDS for Microsoft SQL Server to the gMSA group with the following commands:

#Add the domain-joined SQL RDS server to the gMSA account
Set-ADServiceAccount -Identity "gmsamachines" -PrincipalsAllowedToRetrieveManagedPassword "mssqlrdsname$"

8.     Download and install SQL Server Management Studio (SSMS) on a management Windows machine that is a member of the service account group, and use it to test the Amazon RDS for Microsoft SQL Server connection. The .NET application containers and the Amazon RDS for Microsoft SQL Server instance will then be able to authenticate to each other through the gMSA.

9.     Log in to the Amazon RDS for Microsoft SQL Server instance and grant the gMSA service account the permissions required by the .NET application.

Scenario 2:  Using .NET Core container application on Linux with a Microsoft SQL Server Container

Figure 4 Using .NET Core container application on Linux with a Microsoft SQL Server Container.

As with Scenario 1, the same steps to install Docker on Linux and deploy your application will apply. The difference will be the deployment of Microsoft SQL Server in a container and the confirmation that the server operates as expected running on Linux and leveraging gMSA for authentication.

1.     As with the first scenario, install and run the credentials-fetcher on the Linux server or servers you are deploying your .NET application containers to.

2.     Deploy a Microsoft SQL Server 2022 container on one of the Linux servers in your gMSA group.

Leverage the Dockerfile example in Scenario 1 to set the KRB5CCNAME environment variable for the Microsoft SQL Server container.
Reference Scenario 1 for details on verifying the Dockerfile KRB5CCNAME value.

Run the following command on your Linux server to install the latest Microsoft SQL Server 2022 Docker container available. Replace yourStrong(!)Password with a strong password in the command.

sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=yourStrong(!)Password" -p 1433:1433 --mount type=bind,source=/var/credentials-fetcher/krbdir/726837743cc966c7b4da/WebApp01,target=/var/credentials-fetcher/krbdir -d mcr.microsoft.com/mssql/server:2022-latest

Verify that the Microsoft SQL Server Docker container is running with the following command:

sudo docker ps

Additional reference details for deploying a Microsoft SQL Server container on Linux can be found here.

3.     Download and install SQL Server Management Studio (SSMS) on a management Windows machine that is a member of the service account group, and use it to test the Microsoft SQL Server connection.

4.     Log in to the Microsoft SQL Server instance and grant the gMSA service account the permissions required by the .NET application.

Conclusion

Linux containers have become a key modernization destination for customers running .NET workloads. gMSA for Linux containers will help organizations and the overall Microsoft administrator and developer community to access AD from applications and services hosted on Linux containers using the service account authentication model.

gMSA is a managed account that provides automatic password management, service principal name (SPN) management, and the ability to delegate management to administrators over multiple servers or instances. This unblocks a range of modernization use cases around identity using Microsoft AD, such as connecting .NET Core applications hosted on Linux containers with SQL Server authenticating over Microsoft AD, and securely opening up access to network resources from applications running with service accounts. Based on these capabilities and their usefulness to customers, credentials-fetcher is positioned to be extended into services such as Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS).

This service feature extends support for non-Windows container applications that require gMSA for Microsoft AD Authentication. AWS is dedicated to continuing development and support for the credentials-fetcher daemon open source project. We believe that open source is good for everyone and we are committed to bringing the value of open source to our customers, and the operational excellence of AWS to open source communities. Contributions and feedback are welcome.


Behind the Scenes on AWS Contributions to Cloud Native Open Source Projects

Amazon Elastic Kubernetes Service (Amazon EKS) is well known in the Kubernetes community. But few realize that AWS engineers are closely involved and contributing upstream to Kubernetes and to many more cloud native open source projects.

In the past year alone, AWS contributed significantly to containerd, Cortex, etcd, Fluentd, nerdctl, Notary, OpenTelemetry, Thanos, and Tinkerbell. We employ maintainers and contributors on these projects and we will contribute more to these and other projects in the coming year. Here’s a behind-the-scenes look at our contributions and why we’re investing in the open source projects we support. You can also meet many of our contributors in the AWS booth at KubeCon Europe in Amsterdam, April 18-21, 2023 and hear from them in our virtual Container Day event 9 a.m. – 4 p.m. CEST on April 18.

“Amazon EKS is committed to open source and we are spending a lot of our cycles now focused on contributing back to the community. Kubernetes is part of a community that’s bigger than AWS and so we’re continuing to be committed to maintaining and helping that community to be successful because without it, we wouldn’t exist, either,” said Barry Cooks, Vice President, Kubernetes, at AWS and a Cloud Native Computing Foundation (CNCF) governing board member.

AWS contributes to Kubernetes and Etcd

Today, AWS is heavily involved in open source, cloud native projects. Consider, for example, some of our recent key contributions to Kubernetes and etcd, the underlying data store for Kubernetes.

“We’re building the AWS cloud provider, contributing to CAPI (cluster API), and serve as part of the security response committee. We helped implement gzip optimization which improves the performance of Kubernetes clients,” said Nathan Taber who leads the product team for Kubernetes at AWS, in a keynote at KubeCon North America 2022. “With etcd we’re bringing our operational learnings from running just so much etcd at scale, back into the community.”

The AWS cloud provider for Kubernetes is the open source interface between a Kubernetes cluster and AWS service APIs. This project allows a Kubernetes cluster to provision, monitor, and remove AWS resources necessary for operation of the cluster.

As of Kubernetes 1.27, AWS has just finished a multi-year effort to migrate our legacy cloud provider out of tree to an external cloud provider. The cloud provider migration reduces binary bloat in the main kubernetes/kubernetes (k/k) repository, as well as reduces dependency complexity and the surface area for security vulnerabilities.

AWS has also built a webhook framework that allows cloud providers to host webhooks in their cloud-controller-managers, which makes certain migration tasks easier. One use case for this is helping other cloud providers to migrate the persistent volume labeller admission controllers from the API server code, which is one of the last areas of cloud provider specific code that needs to be migrated out of core Kubernetes.

“We’ve included a lot of space in our planning for upstream open source work this year,” said Nick Turner, software developer on the AWS Kubernetes team and a chair in Kubernetes SIG-cloud-provider. “Expect us to keep up our contributions to the cloud provider and the load balancer controller as well as increase our investments in the AWS IAM authenticator for Kubernetes and KMS encryption provider.”

These and other Kubernetes contributions bring value to the entire Kubernetes community as well as to the EKS service and its customers.

Since KubeCon Detroit last fall, the EKS-etcd team has contributed numerous improvements to etcd. Chao Chen contributed to the effort to improve testing mechanisms for etcd by unifying the test frameworks used by etcd tests. Baoming Wang contributed an important metric to the Kubernetes API server code base which will help catch data corruption issues early. We’ve also worked on building a linearizability test suite, made various improvements to the core etcd database and the etcd backend database BoltDB, contributed to documentation, made Helm more resilient to transient etcd-side errors, and fixed an issue with the installation script for argo-cd-helmfile.

What’s driving AWS to contribute more to cloud native open source

Like most modern companies, AWS builds many of its services with open source components. There are several business and technical reasons we do this, which we’ve outlined in an article on The New Stack about why we invest in sustainable open source. We recognize that the success of our services depends on the success of those underlying open source projects.

Given that most of the open source projects that AWS supports underpin specific services, AWS tasks all engineers working in services, regardless of their assigned sub-service teams, to contribute in any way that they can to those upstream projects.

The result is a virtuous cycle that promotes mutually beneficial growth. As AWS services grow, so too do the open source projects upon which they are based because of AWS contributions and support. Conversely, as these open source projects grow from the contributions of other companies and developers, so do the benefits to the AWS services that depend upon them.

AWS contributions focus on performance and scale

AWS contributions to open source typically come as a practical matter in the form of bug fixes, code reviews, documentation, new features, or security enhancements. Like many developers working in the open source space, AWS engineers often work to address issues that arise in the course of their day jobs and then share the fixes with the rest of the open source community. Similarly, new features for an open source project are developed by AWS engineers to expand the project’s scale or performance which in turn increases the project’s usability, stability, and overall appeal.

Because AWS has a large number of Kubernetes clusters under management, it affords AWS a unique opportunity to test the limitations of open source software and build its edges stronger and further out from its initial core. So many of the contributions that our team members do for upstream Kubernetes, etcd, containerd, and other projects center on making sure that we provide insights to the upstream community on where things break down in scaling, production, and operational readiness.

The resulting insights provide value for the entire open source community as well as our own customers.

Take, for example, the compression feature that curiously acted as a latency expander. AWS engineer Shyam Jeedigunta was looking at the logs and metrics collected from thousands of production EKS clusters. He determined that Gzip compression is enabled inside the Kubernetes API server to reduce the demand on network bandwidth and to decrease latency. However, the compression was actually increasing the latency for large list requests made by clients to the Kubernetes API server. Shyam, who is also co-chair of the Kubernetes scalability special interest group (SIG), took a deep dive into the issue to investigate whether a particular compression level created the problem and, if so, whether the compression level could be reduced. Could Gzip compression be disabled entirely? What impact would that have on latency and network bandwidth?

Answers to questions like this one lead to contributions upstream in etcd and core Kubernetes from AWS service teams. Customers and others often report these kinds of issues to the project as well, but the nature of the problem isn’t clear until it’s viewed on 1,000 nodes and 200,000 objects of a certain kind. AWS engineers diagnose what’s going on, put together troubleshooting information, and collate information into proposals on how to fix the problem(s) to upstream to Kubernetes. AWS likes to spearhead fixing issues that arise from running the projects at scale.

Key AWS contributions

AWS contributes to many Kubernetes sub projects and SIGs. For example, Micah Hausler and Sri Saran Balaji Vellore Rajakumar serve on the Kubernetes Security Response Committee (SRC), Davanum Srinivas (Dims) chairs SIG-Architecture and SIG-k8s-infra, and Nick Turner is a chair in SIG-cloud-provider.  Key contributions have gone into projects including containerd, Cortex, cdk8s, CNI, nerdctl and Prometheus. Innovations have also been substantial and include TorchServe, improved ARM support through AWS Graviton, and the Virtual GPU plugin. However, this is not an exhaustive or complete list of AWS contributions and innovations in the cloud native community.

On containerd, for example, AWS employs two maintainers who contribute features and help ensure the project’s general health and security. Key contributions from AWS engineers to the containerd project include OpenTelemetry integration in the 1.7.0 release, improved tracing, and improved fuzzing integration.

“It’s been awesome to see the growth on the container runtime team here at AWS these past few years. I love to see the eagerness to learn not just *how* to contribute, but how to do it well and really benefit the broader community,” said Phil Estes, a principal engineer at AWS and a containerd maintainer.

Nerdctl, a Docker-compatible CLI for containerd and a containerd sub-project, is used by other open source projects such as Lima, Finch, and Rancher Desktop. AWS engineers significantly improved nerdctl’s compose support by adding 11 of the 13 missing compose commands. We enhanced nerdctl’s image signing/verification support by contributing cosign support for nerdctl compose and notation support for nerdctl. And engineer Jin Dong recently became the first reviewer for the project from AWS.

AWS services are also standardizing on OpenTelemetry, a set of open source tools and standards for collecting metrics, logs, and traces to measure application performance. AWS Distro for OpenTelemetry (ADOT), OpenSearch, and CloudWatch are all building on OpenTelemetry and contribute back to the upstream project. All ADOT code is 100% open source and contributed upstream. Key contributions include: adding functionality to upstream observability components such as OpenTelemetry language SDKs, collectors, and agents.

“Amazon is the fourth largest contributor to OpenTelemetry with a dedicated maintainer and many contributors working on the project. A key contribution has been improving collector and metric stability, including improved Prometheus interoperability with OpenTelemetry,” said Taber.

A fourth example is Cortex where AWS is the top supporter of the project and employs three maintainers. As AWS runs this project at scale, engineers have the opportunity to identify and fix scaling cliffs before they become a problem for the rest of the community. Some of the key contributions are new features and performance improvements. Examples include partition compactor, Ring DynamoDB Multikey KV, out of order samples ingestion, snappy-block gRPC compression, ARM images, and Thanos PromQL engine integration.

We have also contributed bug fixes to Thanos, a tool for setting up highly available Prometheus instances with long term storage. Thanos is a CNCF incubating project which Cortex depends on. We participated in the development of the new Thanos PromQL engine and open sourced a tool that could use fuzzing for correctness testing which has already caught a few bugs.

AWS employs four maintainers on Tinkerbell, a cloud native open source bare metal provisioning engine for EKS Anywhere and a CNCF Sandbox project. Key contributions include organizing the project roadmap, VLAN support, a Kubernetes native backend, out-of-band management Kubernetes controller, Helm Chart deployment, and Cluster API provider updates.

“Our team has done a lot of work to update the Tinkerbell backend from Postgres to native Kubernetes,” said Taber.

AWS employs three maintainers in Notation, a sub project of Notary under the CNCF, and is the third largest code contributor to Notary. Notation enables the generation of cryptographic signatures for container images so users can verify that they come from a trusted source or process. AWS founded the sub project with other contributors to come up with specifications for signature format, generation, verification, and revocation. As part of this work we also defined a process for evaluating signature envelope formats like COSE ensuring that they met a high security bar before they were used in Notation.

AWS employees have either written or reviewed the majority of code contributions for the core Notation libraries and a CLI. AWS also employs a maintainer to Ratify so Kubernetes users can easily enable policies for signature verification with their existing admissions controllers. Similarly we also employ a maintainer to ORAS so signatures can easily be pushed to OCI registries. Notation enables users to define granular trust policies for defining which sources they want to trust, balance deployment safety and security needs, and flexibility on secure signing key storage options.

We have contributed to many other open source projects as well, including Crossplane, for which AWS added support for EKS IRSA in the China region and fixed Amazon Route 53 wildcard support, and Backstage, with AWS Proton and AWS Code Suite (AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy).

“We’re very excited about doing more development in the open, sharing that with our customers, and working directly in some cases with customers on their needs in open source projects and working together to make the community stronger in the Kubernetes space,” Cooks said.

AWS is open

We want to hear from you. AWS engineers are open to helping community members through collaboration and contribution opportunities. Tell us how we can help meet your needs.

AWS engineers, solutions architects, and product managers are hanging out on the Kubernetes community and the CNCF community Slack channels. Channels where you can reach out to us include the provider AWS channel and Karpenter channel, and the AWS controllers for Kubernetes channel on the Kubernetes Slack.

Find us and tell us what you’d like us to work on. Or if you have a particular issue that you found in one of these upstream projects that you think our engineers can help move the needle on. Come find us and talk to us in the CNCF’s AWS Slack channel and join us for our virtual Container Day on April 18, before KubeCon EU.


Strategies to optimize the costs of your builds on AWS CodeBuild

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. You just specify the location of your source code and choose your build settings, and CodeBuild will run your build scripts for compiling, testing, and packaging your code.

CodeBuild uses simple pay-as-you-go pricing. There are no upfront costs or minimum fees. You pay only for the resources you use. You are charged for compute resources based on the duration it takes for your build to execute.

There are three main factors that contribute to build costs with CodeBuild:

Build duration
Compute types
Additional services

Understanding how to balance these factors is key to optimizing costs on AWS, and this blog post will take a look at each.

Compute Types

CodeBuild offers a range of compute types with different amounts of memory and CPU. For example, the Linux GPU Large compute type has 255GB of memory and 32 vCPUs and enables you to execute CI/CD workflows for deep learning purposes (ML/AI) with AWS CodePipeline. Incremental changes in your code, data, and ML models can now be tested for accuracy before the changes are released through your pipeline.

The Linux 2XLarge instance type is another instance type with 145GB of memory and 72 vCPUs and is suitable for building large and complex applications that require high memory and CPU resources. It can help reduce build time, speed up delivery, and support multiple build environments.

The GPU and 2XLarge compute types are powerful but are also the most expensive compute types per minute. For most build tasks the small, medium or large instance compute types are more than adequate. Using the pricing listed in US East (Ohio) we can see the price variance between the small, medium and large Linux instance types in Figure 1 below.

Figure 1. AWS CodeBuild small, medium and large compute types vs cost per minute

Analyzing the CodeBuild compute costs leads us to a number of cost optimization considerations.

Right Sizing AWS CodeBuild Compute Types to Match Workloads

Right sizing is the process of matching instance types and sizes to your workload performance and capacity requirements at the lowest possible cost. It’s also the process of looking at deployed instances and identifying opportunities to eliminate or downsize without compromising capacity or other requirements, which results in lower costs.

Right sizing is a key mechanism for optimizing AWS costs, but it is often ignored by organizations when they first move to the AWS Cloud. They lift and shift their environments and expect to right size later. Speed and performance are often prioritized over cost, which results in oversized instances and a lot of wasted spend on unused resources.

CodeBuild monitors build resource utilization on your behalf and reports metrics through Amazon CloudWatch. These include metrics such as

CPU
Memory
Disk I/O

These metrics can be seen within the CodeBuild console, for an example see Figure 2 below:

Figure 2. Resource utilization metrics

Leveraging observability to measure build resource usage is key to understanding how to rightsize, and CodeBuild makes this easy with CloudWatch metrics readily available through the CodeBuild console.
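You can also retrieve these metrics programmatically to inform right sizing decisions. A minimal AWS CLI sketch follows; the project name my-project, the dates and the Region are placeholders for your own values:

#Fetch the maximum CPU utilization percentage for a build project over one day
aws cloudwatch get-metric-statistics \
  --namespace AWS/CodeBuild \
  --metric-name CPUUtilizedPercent \
  --dimensions Name=ProjectName,Value=my-project \
  --statistics Maximum \
  --period 300 \
  --start-time 2023-04-01T00:00:00Z \
  --end-time 2023-04-02T00:00:00Z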

Consider ARM / Graviton

If we compare the costs of arm1.small and general1.small over a ten minute period we can see that the arm based compute type is 32% less expensive.

Figure 3. Comparison of small arm and general compute types

But cost per minute while building is not the only benefit here: ARM processors are known for their energy efficiency and high performance. Compiling code directly on an ARM processor can potentially lead to faster execution times and improved overall system performance.

The ideal workload to migrate to ARM is one that doesn’t use architecture-specific dependencies or binaries, already runs on Linux and uses open-source components, for example: Migrating AWS Lambda functions to Arm-based AWS Graviton2 processors.

AWS Graviton processors are custom built by Amazon Web Services using 64-bit Arm Neoverse cores to deliver the best price performance for your cloud workloads. The AWS Graviton Fast Start program helps you quickly and easily move your workloads to AWS Graviton in as little as four hours for applications such as serverless, containerized, database, and caching.

Consider migrating Windows workloads to Linux

If we compare the cost of a general1.medium Windows vs Linux compute type we can see that the Linux Compute type is 43% less expensive over ten minutes:

Figure 4. Build times on Windows compared to Linux

Migrating to Linux is one strategy to not only reduce the costs of building and testing code in CodeBuild but also the cost of running in production.

The effort required to re-platform from Windows to Linux varies depending on how the application was implemented. The key is to identify and target workloads with the right characteristics, balancing strategic importance and implementation effort.

For example, older .NET Framework applications may be able to be migrated to later versions of .NET (previously named .NET Core) before deploying to Linux. AWS has the Porting Assistant for .NET, an analysis tool that scans .NET Framework applications and generates a cross-platform compatibility assessment, helping you port your applications to Linux faster.

See our guide on how to escape unfriendly licensing practices by migrating Windows workloads to Linux.

Build duration

One of the dimensions of CodeBuild pricing is the duration of each build. This is calculated in minutes, from the time you submit your build until your build is terminated, rounded up to the nearest minute. For example: if your build takes a total of 35 seconds using one arm1.small Linux instance in US East (Ohio), you will be charged for the full minute, which is $0.0034 in that case. Similarly, if your build takes a total of 5 minutes and 20 seconds, you’ll be charged for 6 minutes.

When you define your CodeBuild project, within a buildspec file, you can specify some of the phases of your builds. The phases you can specify are install, pre-build, build, and post-build. See the documentation to learn more about what each of those phases represents. Beyond that, you can define how and where to upload the reports and artifacts each build generates. In each of those phases, you should do only what is necessary for the task you want to achieve. Installing dependencies that you won’t need, running commands that aren’t related to your task, or performing tests that aren’t necessary will increase your build time and unnecessarily increase your costs. Packaging and uploading target artifacts with unnecessarily large files has a similar effect.
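As an illustration, keeping each phase minimal might look like the following buildspec sketch; the commands and artifact paths are placeholders for a typical Node.js project and should be adapted to your own build:

version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 16
  pre_build:
    commands:
      - npm ci # install only the dependencies the build needs
  build:
    commands:
      - npm run build
      - npm test # run only the tests required at this stage
artifacts:
  files:
    - dist/**/* # package only the files you intend to deploy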

On top of the CodeBuild phases and steps that you are directly in control, each time you start a build, it takes additional time to queue the task, provision the environment, download the source code (if applicable), and finalize. See Figure 5 below a breakdown of a succeeded build:

Figure 5. AWS CodeBuild Phase details

In the above example, for each build, it takes approximately 42 seconds on top of what is specified in the buildspec file. Considering this overhead, having several smaller builds instead of fewer larger builds can potentially increase your costs. With this in mind, you have to keep your builds as short as possible, by doing only what is necessary, so that you can minimize the costs. Furthermore, you have to find a good balance between the duration and the frequency of your builds, so that the overhead doesn’t take a large proportion of your build time. Let’s explore some approaches you can factor in to find this balance.

Build caching

A common way to save time and cost on your CodeBuild builds is with build caching. With build caching, you are able to store reusable pieces of your build environment, so that you can save time the next time you start a new build. There are two types of caching (an example of enabling caching follows the list):

Amazon S3 — Stores the cache in an Amazon S3 bucket that is available across multiple build hosts. If you have build artifacts that are more expensive to build than to download, this is a good option for you. For large build artifacts, this may not be the best option, because it can take longer to transfer over your network.

Local caching — Stores a cache locally on a build host that is available to that build host only. When you choose this option, the cache is immediately available on the build host, making it a good option for large build artifacts that would take long network transfer time. If you choose local caching, there are multiple cache modes you can choose including source cache mode, docker layer cache mode and custom cache mode.
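Caching is configured on the build project itself. As a sketch, enabling local source and Docker layer caching on an existing project named my-project (a placeholder) could look like this:

#Enable local caching on an existing CodeBuild project
aws codebuild update-project \
  --name my-project \
  --cache '{"type": "LOCAL", "modes": ["LOCAL_SOURCE_CACHE", "LOCAL_DOCKER_LAYER_CACHE"]}'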

Docker specific optimizations

Another strategy to optimize your build time and reduce your costs is using custom Docker images. When you specify your CodeBuild project, you can either use one of the default Docker images provided by CodeBuild, or use your own build environment packaged as a Docker image. When you create your own build environment as a Docker image, you can pre-package it with all the tools, test assets, and required dependencies. This can potentially save a significant amount of time, because on the install phase you won’t need to download packages from the internet, and on the build phase, when applicable, you won’t need to download e.g., large static test datasets.

To achieve that, you must specify the image value on the environment configuration when creating or updating your CodeBuild project. See the Docker in custom image sample for CodeBuild to learn more about how to configure that. Keep in mind that larger Docker images can negatively affect your build time, therefore you should aim to keep your custom build environment as lean as possible, with only the mandatory contents. Another aspect to consider is using Amazon Elastic Container Registry (ECR) to store your Docker images. Downloading the image from within the AWS network will, in most cases, be faster than downloading it from the public internet and can avoid bottlenecks from public repositories.
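As a sketch, building a custom build image and pushing it to a private Amazon ECR repository might look like the following; the account ID (123456789012), Region and repository name my-build-env are placeholders:

#Create a repository, authenticate Docker to ECR, then build and push the image
aws ecr create-repository --repository-name my-build-env
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-build-env:latest .
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-build-env:latest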

Consider which tests to run on the feature branch

If you are using a feature-branch approach, consider carefully which build steps and tests you are going to run on your branches. Running unit tests is a good example of what you should run on the feature branches, but unless you have very specific requirements, you probably don’t need penetration or integration tests at this point. Usually the feature branch changes often, hence running all types of tests all the time is a potential waste. Prefer to have your complex, long-running tests at a later stage of your CI/CD pipeline, as you build confidence on the version that you are to release.

Build once, deploy everywhere

It’s widely considered a best practice to avoid environment-specific code builds, so consider a build once, deploy everywhere strategy. There are many benefits to separating environment configuration from the build, including reduced build costs, improved maintainability and scalability, and a reduced risk of errors.

Build once, deploy everywhere can be seen in the AWS Deployment Pipeline Reference Architecture where the Beta, Gamma and Prod stages are created from a single artifact created in the Build Stage:

Figure 6. Application Pipeline reference architecture

Additional Services

CloudWatch Logs and Metrics

Amazon CloudWatch can be used to monitor your builds, report when something goes wrong, take automatic actions when appropriate or simply keep logs of your builds.

CloudWatch metrics show the behavior of your builds over time. For example, you can monitor:

How many builds were attempted in a build project or an AWS account over time.
How many builds were successful in a build project or an AWS account over time.
How many builds failed in a build project or an AWS account over time.
How much time CodeBuild spent running builds in a build project or an AWS account over time.
Build resource utilization for a build or an entire build project. Build resource utilization metrics include metrics such as CPU, memory, and storage utilization.

However, you may incur charges from Amazon CloudWatch Logs for build log streams. For more information, see Monitoring AWS CodeBuild in the CodeBuild User Guide and the CloudWatch pricing page.

Storage Costs

You can create a CodeBuild build project with a set of output artifacts and publish them to S3 buckets. Using S3 as a repository for your artifacts, you only pay for what you use. Check the S3 pricing page.

Encryption

Cloud security at AWS is the highest priority, and encryption is an important part of CodeBuild security. Some encryption, such as for data in transit, is provided by default and does not require you to do anything. Other encryption, such as for data at rest, you can configure when you create your project or build. CodeBuild uses AWS KMS to encrypt the data at rest.

Build artifacts, such as a cache, logs, exported raw test report data files, and build results, are encrypted by default using AWS managed keys and are free of charge. Consider using these keys if you don’t need to create your own key.

If you do not want to use these KMS keys, you can create and configure a customer managed key. For more information, see the documentation on creating KMS Keys and AWS Key Management Service concepts in the AWS Key Management Service User Guide.

Check the KMS pricing page.

Data transfer costs

You may incur additional charges if your builds transfer data, for example:

Avoid routing traffic over the internet when connecting to AWS services from within AWS by using VPC endpoints

Traffic that crosses an Availability Zone boundary typically incurs a data transfer charge. Use resources from the local Availability Zone whenever possible.
Traffic that crosses a Regional boundary will typically incur a data transfer charge. Avoid cross-Region data transfer unless your business case requires it
Use the AWS Pricing Calculator to help estimate the data transfer costs for your solution.
Use a dashboard to better visualize data transfer charges – this workshop will show how.

Here’s an Overview of Data Transfer Costs for Common Architectures on AWS.

Conclusion

In this blog post we discussed how compute types, build duration, and the use of additional services contribute to build costs with AWS CodeBuild.

We highlighted how right sizing compute types is an important practice for teams that want to reduce their build costs while still achieving optimal performance. The key to optimizing is to measure and observe the workload and select the most appropriate compute instance based on requirements.

Further compute type cost optimizations can be found by targeting AWS Graviton processors and Linux environments. AWS Graviton Processors in particular offer several advantages over traditional x86-based instances and are designed by AWS to deliver the best price performance for your cloud workloads.

For further reading, see the summary of CI/CD best practices from the Practicing Continuous Integration and Continuous Delivery on AWS whitepaper,  my CI/CD pipeline is my release captain and also the cost optimization pillar from the AWS Well-Architected Framework which focuses on avoiding unnecessary costs.

About the authors:

Leandro Cavalcante Damascena

Leandro Damascena is a Solutions Architect at AWS, focused on Application Modernization and Serverless. Prior to that, he spent 20 years in various roles across the software delivery lifecycle, from development to architecture and operations. Outside of work, he loves spending time with his family and enjoying water sports like kitesurfing and wakeboarding.

Rafael Ramos

Rafael is a Solutions Architect at AWS, where he helps ISVs on their journey to the cloud. He spent over 13 years working as a software developer, and is passionate about DevOps and serverless. Outside of work, he enjoys playing tabletop RPG, cooking and running marathons.

Matt Laver

Matt Laver is a Solutions Architect at AWS working with SMB customers in the UK. He is passionate about DevOps and loves helping customers find simple solutions to difficult problems.

Publish Amazon DevOps Guru Insights to ServiceNow for Incident Management

Amazon DevOps Guru is a fully managed AIOps service that uses machine learning (ML) to quickly identify when applications are behaving outside of their normal operating patterns and generates insights from its findings. These insights can be used to alert on-call teams to react to anomalies for mission critical workloads. Many customers already utilize incident management systems like ServiceNow to identify, analyze and resolve critical incidents which could impact business operations. ServiceNow is an IT Service Management (ITSM) platform that enables enterprise organizations to improve operational efficiencies. Among its products is Incident Management, which provides a single pane view to customers and allows customers to restore services and resolve issues quickly.

This blog post will show you how to integrate Amazon DevOps Guru insights with ServiceNow to automatically create and manage Incidents. We will demonstrate how an insight generated by Amazon DevOps Guru for an anomaly can automatically create a ServiceNow Incident, update the incident when there are new anomalies or recommendations from Amazon DevOps Guru, and close the ServiceNow Incident once the insight is resolved by Amazon DevOps Guru.

Overview of solution

This solution uses a combination of event-driven architecture and serverless technologies to integrate DevOps Guru insights with ServiceNow. When an Amazon DevOps Guru insight is created, an Amazon EventBridge rule captures the insight as an event and routes it to an AWS Lambda function target. The Lambda function interacts with ServiceNow using a REST API to create, update, and close an incident for the corresponding DevOps Guru events captured by EventBridge.
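For illustration, the kind of call the function makes to the ServiceNow Table API can be sketched with curl; the instance hostname, credentials and field values below are placeholders, not the connector's actual code:

#Create a ServiceNow incident through the Table API
curl -u "apiuser:password" \
  -X POST "https://dev92031.service-now.com/api/now/table/incident" \
  -H "Content-Type: application/json" \
  -d '{"short_description": "DevOps Guru insight: anomalous behavior detected"}'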

The EventBridge rule can be customized to capture all DevOps Guru insights or narrowed down to specific insights. In this blog, we will capture all DevOps Guru insights and perform actions on ServiceNow for the DevOps Guru events below (a sketch of a matching rule follows the list):

DevOps Guru New Insight Open
DevOps Guru New Anomaly Association
DevOps Guru Insight Severity Upgraded
DevOps Guru New Recommendation Created
DevOps Guru Insight Closed
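The SAM template in this post creates the EventBridge rule for you, but as a sketch, an equivalent rule created with the AWS CLI (the rule name is a placeholder) would match those events like this:

#Create an EventBridge rule that matches the DevOps Guru events listed above
aws events put-rule \
  --name devops-guru-insights-to-servicenow \
  --event-pattern '{"source": ["aws.devops-guru"], "detail-type": ["DevOps Guru New Insight Open", "DevOps Guru New Anomaly Association", "DevOps Guru Insight Severity Upgraded", "DevOps Guru New Recommendation Created", "DevOps Guru Insight Closed"]}'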

Figure 1: Amazon DevOps Guru Integration with ServiceNow using Amazon EventBridge and AWS Lambda

Solution Implementation Steps

Prerequisites

Before you deploy the solution and proceed with this walkthrough, you should have the following prerequisites:

Gather the hostname for your ServiceNow cloud instance. If you do not have a ServiceNow instance, you can request a developer instance through the ServiceNow Developer page.
Gather the credentials of a ServiceNow user who has permissions to make REST API calls to ServiceNow, specifically to the Table API. If you don’t have a user provisioned, you can create one by following the steps in Getting started with the REST API in the ServiceNow documentation.
Create a secret in Secrets Manager to store the ServiceNow credentials created in the previous step. You can choose any name for the secret, but it should have two key/value pairs, one for the username and the other for the password (see the example command after this list).
Enable DevOps Guru for your applications by following these steps, or follow this blog to deploy a sample serverless application that can be used to generate DevOps Guru insights for anomalies detected in the application.
Install and set up SAM CLI – Install the SAM CLI

Download and set up Java. The version should match the runtime that you defined in the SAM template.yaml Serverless function configuration – Install the Java SE Development Kit 11

Maven – Install Maven

Docker – Install Docker community edition
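As a sketch, the secret from the third prerequisite could be created with the AWS CLI as follows; the secret name SNOW_CREDS and the credential values are placeholders:

#Store ServiceNow credentials as a JSON secret with username and password keys
aws secretsmanager create-secret \
  --name SNOW_CREDS \
  --secret-string '{"username": "apiuser", "password": "YourPassword"}'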

You have two options to deploy this solution: deploy from the AWS Serverless Application Repository, or build and deploy from the command line with the AWS SAM Command Line Interface (CLI).

Option 1: Deploy sample ServiceNow Connector App from AWS Serverless Repository

The DevOps Guru ServiceNow Connector application is available in the AWS Serverless Application Repository, which is a managed repository for serverless applications. The application is packaged with an AWS Serverless Application Model (SAM) template, a definition of the AWS resources used, and a link to the source code.

Follow the steps below to quickly deploy this serverless application in your AWS account:

Login to the AWS management console of the account to which you plan to deploy this solution.
Go to the DevOps Guru ServiceNow Connector application in the AWS Serverless Repository and click on “Deploy”.

Figure 2: Deploy solution through AWS Serverless Repository

The Lambda application deployment screen will be displayed where you can enter the ServiceNow hostname (do not include the https prefix) and the Secret Name you created in the prerequisite steps. Click on the ‘Deploy’ button.

Figure 3: AWS Lambda Application Settings

After successful deployment the AWS Lambda Application page will display the “Create complete” status for the serverlessrepo-DevOps-Guru-ServiceNow-Connector application. The CloudFormation template creates four resources:

Lambda function which has the logic to integrate with ServiceNow
EventBridge rule for the DevOps Guru insights
Lambda permission
IAM role

5.     Now you can skip Option 2 and follow the steps in the “Test the Solution” section to trigger some DevOps Guru insights and validate that the incidents are created and updated in ServiceNow.

Option 2: Build and Deploy sample ServiceNow Connector App using AWS SAM Command Line Interface

As you have seen above, you can directly deploy the sample serverless application from the Serverless Application Repository with one-click deployment. Alternatively, you can choose to clone the GitHub source repository and deploy using the SAM CLI from your terminal.

The Serverless Application Model Command Line Interface (SAM CLI) is an extension of the AWS CLI that adds functionality for building and testing serverless applications. The CLI provides commands that enable you to verify that AWS SAM template files are written according to the specification, invoke Lambda functions locally, step-through debug Lambda functions, package and deploy serverless applications to the AWS Cloud, and so on. For details about how to use the AWS SAM CLI, including the full AWS SAM CLI Command Reference, see AWS SAM reference – AWS Serverless Application Model.

Before you proceed, make sure you have completed the Prerequisites section in the beginning which should set up the AWS SAM CLI, Maven and Java on your local terminal. You also need to install and set up Docker to run your functions in an Amazon Linux environment that matches Lambda.

Follow the steps below to build and deploy this serverless application using AWS SAM CLI in your AWS account:

Clone the source code from the GitHub repo:

$ git clone https://github.com/aws-samples/amazon-devops-guru-connector-servicenow.git

Before you build the resources defined in the SAM template, you can use the validate command below, which runs cfn-lint validations on your SAM JSON/YAML template:

$ sam validate --lint --template template.yaml

3.     Build the application with SAM CLI

$ cd amazon-devops-guru-connector-servicenow
$ sam build

If everything is set up correctly, you should have a success message like shown below:

Build Succeeded

Built Artifacts : .aws-sam/build
Built Template : .aws-sam/build/template.yaml

Commands you can use next
=========================
[*] Validate SAM template: sam validate
[*] Invoke Function: sam local invoke
[*] Test Function in the Cloud: sam sync --stack-name {{stack-name}} --watch
[*] Deploy: sam deploy --guided

4.  Deploy the application with SAM CLI

$ sam deploy --guided

This command will package and deploy your application to AWS, with a series of prompts that you should respond to as shown below:

Stack Name: The name of the stack to deploy to CloudFormation. This should be unique to your account and region, and a good starting point would be something matching your project name – amazon-devops-guru-connector-servicenow

AWS Region: The AWS region you want to deploy your application to.

Parameter ServiceNowHost []: The ServiceNow host name/instance URL you set up. Example: dev92031.service-now.com

Parameter SecretName []: The secret name that you set up for ServiceNow credentials in the Prerequisites.

Confirm changes before deploy: If set to yes, any change sets will be shown to you before execution for manual review. If set to no, the AWS SAM CLI will automatically deploy application changes.

Allow SAM CLI IAM role creation: Many AWS SAM templates, including this example, create AWS IAM roles required for the AWS Lambda function(s) included to access AWS services. By default, these are scoped down to minimum required permissions. To deploy an AWS CloudFormation stack which creates or modifies IAM roles, the CAPABILITY_IAM value for capabilities must be provided. If permission isn’t provided through this prompt, to deploy this example you must explicitly pass --capabilities CAPABILITY_IAM to the sam deploy command.

Disable rollback [y/N]: If set to Y, preserves the state of previously provisioned resources when an operation fails.

Save arguments to configuration file (samconfig.toml): If set to yes, your choices will be saved to a configuration file inside the project, so that in the future you can just re-run sam deploy without parameters to deploy changes to your application.

After you enter your parameters, you should see something like this if you have provided Y to view and confirm ChangeSets. Proceed here by providing ‘Y’ for deploying the resources.

Initiating deployment
=====================
Uploading to amazon-devops-guru-connector-servicenow/46bb4841f8f37fd41d3f40f86f31c4d7.template 1918 / 1918 (100.00%)

Waiting for changeset to be created..
CloudFormation stack changeset
—————————————————————————————————————————————————–
Operation LogicalResourceId ResourceType Replacement
—————————————————————————————————————————————————–
+ Add FunctionsDevOpsGuruPermission AWS::Lambda::Permission N/A
+ Add FunctionsDevOpsGuru AWS::Events::Rule N/A
+ Add FunctionsRole AWS::IAM::Role N/A
+ Add Functions AWS::Lambda::Function N/A
—————————————————————————————————————————————————–

Changeset created successfully. arn:aws:cloudformation:us-east-1:123456789012:changeSet/samcli-deploy1669232233/7c97b7f5-369d-400d-89cd-ebabefaa0b57

Previewing CloudFormation changeset before deployment
======================================================
Deploy this changeset? [y/N]:

Once the deployment succeeds, you should be able to see the successful creation of your resources

CloudFormation events from stack operations (refresh every 0.5 seconds)
—————————————————————————————————————————————————–
ResourceStatus ResourceType LogicalResourceId ResourceStatusReason
—————————————————————————————————————————————————–
CREATE_IN_PROGRESS AWS::CloudFormation::Stack amazon-devops-guru-connector- User Initiated
servicenow
CREATE_IN_PROGRESS AWS::IAM::Role FunctionsRole –
CREATE_IN_PROGRESS AWS::IAM::Role FunctionsRole Resource creation Initiated
CREATE_COMPLETE AWS::IAM::Role FunctionsRole –
CREATE_IN_PROGRESS AWS::Lambda::Function Functions –
CREATE_IN_PROGRESS AWS::Lambda::Function Functions Resource creation Initiated
CREATE_COMPLETE AWS::Lambda::Function Functions –
CREATE_IN_PROGRESS AWS::Events::Rule FunctionsDevOpsGuru –
CREATE_IN_PROGRESS AWS::Events::Rule FunctionsDevOpsGuru Resource creation Initiated
CREATE_COMPLETE AWS::Events::Rule FunctionsDevOpsGuru –
CREATE_IN_PROGRESS AWS::Lambda::Permission FunctionsDevOpsGuruPermission –
CREATE_IN_PROGRESS AWS::Lambda::Permission FunctionsDevOpsGuruPermission Resource creation Initiated
CREATE_COMPLETE AWS::Lambda::Permission FunctionsDevOpsGuruPermission –
CREATE_COMPLETE AWS::CloudFormation::Stack amazon-devops-guru-connector- –
servicenow
—————————————————————————————————————————————————–

Successfully created/updated stack - amazon-devops-guru-connector-servicenow in us-east-1

You can also list the deployed resources by passing the stack name to the command below.

$ sam list resources --stack-name amazon-devops-guru-connector-servicenow

You can also test and debug your function locally with sample events using the SAM CLI local functionality. Test a single function by invoking it directly with a test event. An event is a JSON document that represents the input the function receives from the event source; a sample is sketched below. Refer to Invoking Lambda functions locally in the AWS Serverless Application Model documentation for more details.
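
Amazon EventBridge delivers events in a standard envelope, so a DevOps Guru insight event looks roughly like the following abridged sketch. This is illustrative only; the detail fields shown are representative assumptions, not the exact contents of the repository’s CreateIncident.json:

{
  "version": "0",
  "id": "12345678-1234-1234-1234-123456789012",
  "detail-type": "DevOps Guru New Insight Open",
  "source": "aws.devops-guru",
  "account": "123456789012",
  "time": "2022-11-23T12:00:00Z",
  "region": "us-east-1",
  "resources": [],
  "detail": {
    "insightId": "example-insight-id",
    "insightSeverity": "high"
  }
}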

Follow the steps below to test the Lambda function locally with the SAM CLI. You need to create an env.json file containing the correct values for your ServiceNow host and the Secrets Manager secret name created in the previous step.

Make sure you have created the AWS Secrets Manager secret with the desired name as mentioned in the prerequisites; that name should be used here for SECRET_NAME.
Create env.json as below, replacing the values of SERVICE_NOW_HOST and SECRET_NAME with your real values. These will be set as environment variables for the local Lambda execution.

{"Parameters": {"SERVICE_NOW_HOST": "SNOW_HOST", "SECRET_NAME": "SNOW_CREDS"}}

Run the command below to invoke the Lambda function locally with a sample DevOps Guru payload. Remember that for this to work, you must have a Docker instance running and the named secret created in your AWS account.

$ sam local invoke Functions --event Functions/src/test/Events/CreateIncident.json --env-vars Functions/src/test/Events/env.json

Once you are done with the above steps, move on to the “Test the Solution” section below to trigger sample DevOps Guru insights and validate that incidents are created and updated in ServiceNow.

Test the Solution

To test the solution, we will simulate a DevOps Guru insight; you can do so by following the steps in this blog. After an anomaly is detected in the application, DevOps Guru creates an insight, as seen below.

Figure 4: DevOps Guru Insight created for anomalous behavior

For the DevOps Guru insight shown above, a corresponding incident is automatically created in ServiceNow, as shown below. In addition to the incident creation, any new anomalies and recommendations from DevOps Guru are also associated with the incident.

Figure 5: Corresponding ServiceNow Incident is created for the DevOps Guru Insight

When the anomalous behavior that generated the DevOps Guru insight is resolved, DevOps Guru automatically closes the insight. The corresponding ServiceNow incident that was created for the insight is also closed, as seen below.

Figure 6: ServiceNow Incident created for DevOps Guru Insight is resolved due to insight closure

Cleaning up

To avoid incurring future charges, delete the resources.

To delete the sample application that you created, use the AWS CLI command below and pass the stack name you provided in the sam deploy step.

$ aws cloudformation delete-stack --stack-name amazon-devops-guru-connector-servicenow
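
Recent versions of the AWS SAM CLI also provide a dedicated delete command that removes the stack along with the deployment artifacts uploaded by sam deploy (shown here as a convenience sketch; check that your SAM CLI version supports it):

$ sam delete --stack-name amazon-devops-guru-connector-servicenow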

You could also use the AWS CloudFormation Console to delete the stack:

Figure 7: AWS Stack Console with Delete action

Conclusion

This blog post showcased how DevOps Guru continuously monitors resources in a particular region of your AWS account, automatically detects operational issues, predicts impending resource exhaustion, details the likely cause, and recommends remediation actions. The post described a custom solution using a serverless integration pattern with AWS Lambda and Amazon EventBridge to integrate DevOps Guru insights with ServiceNow, a widely used ITSM and change-management tool, thus streamlining service-management governance and oversight of AWS services. Customers who use ServiceNow can apply this solution to improve their operational efficiency and receive customized insights and real-time incident alerts directly from DevOps Guru, giving them a single pane of glass to restore services and systems quickly.

This solution was created to help customers who already use ServiceNow Incident Management. If you are using Incident Manager from AWS Systems Manager instead, check out how it works with Amazon DevOps Guru here.

To learn more about Amazon DevOps Guru, join us for a free hands-on Immersion Day. Events are virtual and hosted in three global time zones. Register here: April 12th.

About the authors:

Abdullahi Olaoye

Abdullahi is a Senior Cloud Infrastructure Architect at AWS Professional Services where he works with enterprise customers to design and build cloud solutions that solve business challenges. When he’s not working, he enjoys travelling, watching documentaries and listening to history podcasts.

Sreenivas Ganesan

Sreenivas Ganesan is a Sr. DevOps Consultant at AWS experienced in architecting and delivering modernized DevOps solutions for enterprise customers in their journey to AWS Cloud, primarily focused on Infrastructure automation, Security and Compliance, Management and Governance, Provisioning and Orchestration. Outside of work, he enjoys watching new TV series, soccer and spending time with his family outdoors.

Mohan Udyavar

Mohan Udyavar is a Principal Technical Account Manager in the Enterprise Support organization of AWS advising customers in successfully migrating and operating their workloads on AWS. He is primarily focused on the Automotive industry providing prescriptive guidance to customers helping them improve the resilience and operational excellence posture of mission-critical applications. Outside of work, he loves cooking and working on tech projects with his son.

Custom CRM System: Benefits, Requirements & Cost of Development

Do you want to explore the potential of a custom CRM system for your business? Are you wondering what characterizes custom CRM systems, or curious about the cost and what it would take to build one?

Customer relationship management (CRM) systems are widely available to businesses today. When one-size-fits-all software doesn’t fit, custom software development is the best choice for creating and running software built from the ground up. Custom software enables businesses to place greater emphasis on the people who are part of the organization, including employees, vendors, clients, and service users. The only way to guarantee that the software precisely meets the company’s demands is to build custom CRM software.
A custom CRM can help you ensure that your business takes advantage of every opportunity to engage, convert, and retain clients. Regardless of the size of your business, a custom CRM system can streamline your operations, improve your interactions with current clients, and generate new leads and business opportunities.

This article will help you understand the advantages of a custom CRM system, the implementation requirements, and the development costs. You’ll also be able to understand why a customized CRM system is perfect for your business.

What is a Custom CRM System?

CRM stands for Customer Relationship Management and refers to all strategies, techniques, tools, and technologies used by enterprises for developing, retaining, and acquiring customers.

Off-the-shelf (OTS) CRM is ready-to-use software that businesses can buy unmodified and use right away. It often includes modules for tracking sales opportunities, managing marketing campaigns, and setting up customer service procedures.

Custom CRM systems can foster customer loyalty and automate several processes, saving businesses time and money. The main purpose of CRM software is to allow salespeople and marketers to better manage and analyze relationships with the business’s customers and potential customers.

Custom CRM vs. Off-the-Shelf CRM: What are the differences?

The biggest difference between a custom CRM and an off-the-shelf CRM is flexibility. An off-the-shelf CRM comes pre-built and ready for use, whereas a custom CRM is created specifically for the requirements of the firm. Off-the-shelf CRMs may offer features that fit your needs less well, while custom CRMs can be designed to satisfy the particular wants of the firm. Custom CRMs do, however, typically require more technical expertise and effort to set up and run than off-the-shelf CRMs.

With a custom CRM, businesses can modify the program to meet their unique needs and requirements, ensuring the application is appropriate for their particular industry. Moreover, custom CRMs give businesses the opportunity to integrate the application with their existing internal hardware and software, creating a seamless user experience. Off-the-shelf CRMs, on the other hand, cannot offer as many customization and integration choices.

Usage of Custom CRM

With a customized CRM, any department in your company, from sales and customer service to business development, recruiting, and marketing, can benefit from improved methods of managing the external connections and activities that are integral to success. As no two businesses are alike, a custom CRM is the only way to get precisely what you need, without unnecessary extras and without making do with whatever features an off-the-shelf solution happens to include.

When it comes to investing in a CRM, you have three choices: 

buying an existing system, 
creating one with an in-house team, 
or outsourcing the development of a custom CRM. 

When you’re considering a CRM investment, it’s important to look into all of your options. Buying an off-the-shelf CRM system could be the most cost-effective option, but it might not be the best fit for the unique requirements of your company. Developing a custom CRM in-house can be a terrific option, yet it often takes a significant amount of time and resources and costs over $50,000. Instead, you might outsource the development of a custom CRM, which can be more cost-effective and efficient. These factors make it crucial to thoroughly consider all of your alternatives before making a choice.

Why is custom CRM important for your business? 

A custom CRM is important for your business because it offers a potent tool for managing client interactions and simplifying sales processes. It gives you the ability to recognize, monitor, and control customer interactions, automate lead management, and examine customer data to learn how customers behave. This helps you better understand customer needs and create tailored customer experiences that drive customer loyalty and boost sales. As a consequence, it improves client retention through higher customer satisfaction and a better understanding of customer needs.

Types of Custom CRM

There are three major types of custom CRMs based on their function and the features they provide.

Collaborative

Collaborative CRM is intended to improve cooperation and communication by establishing a clear framework for data exchange. It may be used internally within a company or between external teams, such as partners, and can be adapted to particular needs. Common features of this kind of system include group discussions, content sharing, and real-time activity updates.

Analytical

Analytical CRM systems are intended to help with planning. Such a system offers helpful data, analytics, and insights. To be useful, it must be able to compile data from several sources, process it, and deliver real-time updates.

Operational

Operational CRMs focus heavily on simplifying and automating company processes to increase productivity. Lead processing, automated messaging to clients via various channels, and follow-ups are frequently included. When creating your solution, you can either stick to the features of one type or mix features from several CRM types.

Benefits of Custom CRM

Because of their numerous benefits, custom CRM systems are becoming increasingly popular among businesses. These systems not only enable businesses to provide great, individualized customer care, but also have a wide range of positive effects for customers. As a result, businesses are increasingly adopting CRM systems to improve customer service. Some of the benefits of custom CRM systems are:

Benefit #1 Time Saving

Custom CRM systems let organizations easily obtain the information they need to complete essential activities and automate monotonous chores, so they spend less time on routine service work. This allows them to focus on other important duties while the custom CRM handles tasks like data processing, analysis, customer care, sales, and marketing.

Benefit #2 Improved Efficiency

Using a custom CRM for task management gives employees a straightforward way to access the information they need to do their jobs, and each employee can use an individualized dashboard. This makes their work easier and more productive; in fact, 60% of businesses report an increase in productivity from implementing a custom CRM.

Benefit #3 Enhanced Customer Relationship

Data about customers, such as their preferences, needs, and pain points, is readily available from a custom CRM. 84% of consumers believe that the experience a business provides is just as important as its goods and services. With all the data available from the system, a custom CRM enables you to provide customized services to your consumers while addressing their pain points. This can improve the way you communicate with your consumers, increasing the likelihood that they become repeat customers and helping you grow your business.

Benefit #4 Access to In-depth Reports

Making data-driven choices, such as adjusting prices or marketing tactics, requires reports produced by a custom CRM from customer data. A custom CRM can give you reports that support your business decision-making; according to studies, using a CRM can boost report accuracy by 42%.

Benefit #5 Increased Income

A custom CRM system can handle practically all of your company’s requirements for attracting new clients, turning them into customers, and offering top-notch customer service. Sales are boosted as a result, since the data it provides can be used to deliver better customer service and foster customer loyalty.

What to Consider Before Building Custom CRM Software

A good CRM tool will let you store contact information for clients and prospects, identify sales opportunities, keep a record of service issues, and manage marketing campaigns and tactics, all in one central location. Easy access to this data about customer interaction will allow anyone in your company to make informed decisions based on analytics. Before beginning to build a custom CRM system, there are several crucial considerations to be made. Addressing them early helps you decide quickly and avoid costly mistakes.

Setting the Goals

Determine the goals of your custom CRM platform, then prioritize the most crucial ones. This will enable you to select the features and design most appropriate for your business.

Types of Custom CRM Systems

Decide on the type of CRM solution you need after you’ve defined your objectives. CRM is divided into three categories: operational, analytical, and collaborative. Each is made for a specific purpose.

Access Roles & Levels

Since several departments could utilize the custom CRM for various purposes, we recommend adding user roles and permissions. For instance, these may differ for top management, marketing directors, sales, and customer care personnel.

Custom CRM Features

Base your decision on which functionalities to include in your custom CRM on your objectives. Pay attention to the features that will be most beneficial for your business needs. Some of the most important features of custom CRM are dashboards, reports, tasks, contact management, lead management, and mobile access.

SaaS or Internal Software

Think about whether you’re going to create a custom CRM system for internal use or whether you’ll eventually transform it into a SaaS platform. It can be challenging and expensive to change the software architecture. Make sure your architecture is adaptable and scalable from the start if you choose the latter.

Cost of CRM Software Development 

Regardless of the size of your business, custom CRM software can benefit you. Estimating the associated costs can be difficult, so we’re here to explain the anticipated expenses. The cost of a custom CRM system depends on the features you want to include, the technical complexity, the development team’s experience, and other functionalities such as security measures.

To put things into perspective, if you manage a mid-sized business with more than 25 employees and your enterprise-sized subscription costs $125 per user each month, it works out to $3,125 per month, $37,500 per year, and $187,500 over five years. The same amount of money, however, might be used to build custom CRM software made to meet your unique needs. We always advise determining the precise cost of your project in consultation with our specialists. We are aware that every customer is unique and needs a customized strategy to enable the smooth growth of their product.
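
Spelled out, the arithmetic behind those subscription figures is:

25 users × $125 per user per month = $3,125 per month
$3,125 per month × 12 months = $37,500 per year
$37,500 per year × 5 years = $187,500 over five years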

Summing Up

Are you looking for a way to improve your business performance? A custom CRM system can be the perfect solution. It provides businesses with a powerful platform for better managing customer relationships and increasing sales. With the right planning and guidance, you can create your own high-quality CRM from scratch that is tailored to your business needs. By centralizing customer data and tracking customer interactions, you can gain insights into customer behavior, create targeted marketing campaigns, and automate specific processes. This can help you improve efficiency, increase customer satisfaction, and boost sales. 

At Flatlogic, we provide full-cycle custom CRM development services to help you turn your concept into a working product. By leveraging our expertise and technical capabilities, you can create, in just a few minutes, a high-quality custom CRM tailored to your business needs that will become the heart of your operations.

Custom CRM with Flatlogic

Flatlogic Platform offers an easy way to generate a custom CRM system with full control over the source code. You can make sure that you have the right features, scalability, and performance to match your business needs. Plus, with no-code development, you don’t need to be an expert programmer to make the necessary changes – making it easier to scale and customize as your business grows. With Flatlogic Platform, you have the flexibility to create a custom CRM solution that is tailored to your needs, while still having the scalability of more traditional development.

How to Create Custom CRM with Flatlogic Platform?

Using the Flatlogic Full-Stack Generator you can create CRUD and static applications in a few minutes. To start using the Platform, you need to register on the Flatlogic website. Clicking the “Sign in” button in the header will allow you to register for a Flatlogic account.

Step 1. Choosing the Tech Stack

In this step, you’re setting the name of your application and choosing the stack: Frontend, Backend, and Database.

Step 2. Choosing the Starter Template

In this step, you’re choosing the design of the web app.

Step 3. Schema Editor

In this step, you can create your database schema from scratch, import an existing schema, or select one of the suggested schemas.

To import your existing database, click the Import SQL button and select your .sql file. After that, your database will be opened in the Schema Editor where you can further edit your data (add/edit/delete entities).

If you are not familiar with database design and find it difficult to understand what tables are, we have prepared some ready-made sample schemas of real applications that you can modify for your application:

E-commerce app;
Time tracking app;
Book store;
Chat (messaging) app;
Blog.

Alternatively, you can generate a database schema from a description by clicking the “Generate with AI” button. Type the application’s description in the text area and hit “Send”; the application’s schema will be ready in around 15 seconds. You may either hit deploy immediately or review the structure to make manual adjustments.

Next, you can connect your GitHub account and push your application code there, or skip this step by clicking the Finish and Deploy button; in a few minutes, your application will be generated.



Custom ERP System: Benefits, Requirements & Cost of Development

If you are looking for an effective and efficient way to manage your business, then a custom enterprise resource planning (ERP) system is the perfect solution. Custom ERP systems allow businesses to collect, store, and analyze information from several departments in one database, making it easier for executives to manage all fundamental business operations. Furthermore, the implementation of such software optimizes various workflows and processes. While there are various commercially available systems, custom ERP development ensures that you get a solution tailored specifically to your needs.

There is always room for improvement, even if your product is excellent. This is especially true for businesses looking for a custom ERP solution: the increasing need for specific software solutions is evidence of the demand for tailored ERP systems.

If you are considering an ERP system for your business, ask yourself when making this important decision: What are the benefits of a custom ERP system? How does a custom ERP system compare to an off-the-shelf version? Is it the right fit for my business needs? Whatever you decide, it’s important to choose the right system for your business. In this article, we’ll cover all of the above and provide some helpful tips on how to choose the right custom ERP system. Let’s dive into the details.

What is custom ERP?

ERP stands for Enterprise Resource Planning, a sophisticated software package that automates business operations procedures. It aids businesses in optimizing efficiency and accuracy by automating data input and offering real-time visibility into their operations, consequently reducing the time and costs associated with managing them.

An off-the-shelf (OTS) ERP system is a pre-packaged, ready-to-deploy solution that comes with verified capabilities. It is designed to meet the needs of a wide range of businesses; however, its “one-size-fits-all” approach may not be the best solution for addressing specific challenges.

A custom ERP system (as opposed to an OTS solution) is designed around the unique requirements of a particular business. It offers a complete solution that unifies all of the business operations into a single platform, created to match that business’s particular needs.

How Custom ERPs Differ From Off-the-shelf ERPs

Custom ERPs are created especially for the individual or business that requires them, in contrast to off-the-shelf ERPs. This makes it possible to develop a solution that is more tailored to the customer’s unique requirements, as well as more effective, efficient, and affordable. Because custom ERPs can be upgraded and modified over time, they typically allow for more flexibility and scalability. Moreover, custom ERPs may be integrated with other systems to increase their versatility. Off-the-shelf ERPs, on the other hand, are pre-packaged, largely unchangeable systems with frequently limited scalability.

Usage of custom ERP

Custom ERP systems are built to assist businesses in managing their operations more effectively. Because they are customized to the particular requirements of the organization, they are more adaptable and offer a greater level of control. Custom ERP systems can manage sales, inventories, financial and operational procedures, customer support, and more. They can also provide data that helps organizations make wiser decisions. Customer Relationship Management (CRM), Human Resources (HR), supply chain management, and other business applications are frequently incorporated into custom ERP systems, which facilitates data access and the automation of numerous business operations. Custom ERP systems can give businesses a complete picture of their operations and can significantly improve employee motivation, customer experience, and revenue.

Benefits of custom ERP

The main benefit of custom software is that it can focus on the specific needs and resources of the business. It is essential to understand how to effectively incorporate and monitor all of the business’s data management and workflows from a single platform. These are some of the benefits of possessing a custom ERP system:

Benefit #1: Improved System Performance and Accessibility 

The goal of a custom ERP is to be as effective and efficient as possible while offering greater application availability than other systems. Thanks to its modular architecture, businesses can easily adapt the software to their changing demands, saving money on features they don’t need.

Benefit #2: Balance Between Integration & Specification

With a custom ERP system, businesses gain an integrated workflow across departments, which leads to better connections between buyers and suppliers. Because custom ERP systems are created to match the specific needs of each business, different departments and teams can each have their own customized view of the system. By designing and building a custom ERP adapted to the company’s particular system needs, workflow, and applications, software developers can ensure the system is suited to the precise specifications and circumstances of the company’s numerous internal divisions and departments.

Benefit #3: No Need for Modifications

Custom ERP systems save businesses the time and effort of converting their current database architectures, applications, and tech stacks to a new system. By designing a custom ERP that suits the company’s current infrastructure, networks, capabilities, and resources, software development businesses can offer a ‘plug and play’ system, allowing enterprises to swiftly adopt it and begin profiting from its features.

Benefit #4: Custom Solutions for Your Business Needs

Custom ERP systems are created to deal with issues frequently encountered by businesses. The developers of such systems build the framework, features, and functionalities based on what they’ve picked up from past customers. This is a major plus point since it draws from the knowledge of many clients and businesses. 

Benefit #5: Automation Without Workflow Changes

Custom ERP can automate multiple operations within the company’s existing systems, databases, infrastructure, networks, and applications, without requiring extensive changes and modifications. This guarantees seamless integration while reducing any gaps between the company’s existing system and custom ERP. One important duty that may be automated with a custom ERP is the periodic report generation process, which makes it simpler to keep track of business activities.

Benefit #6: Improved User Experience and Customer Interface

Without a custom ERP system, many companies suffer from a lack of cohesion between their frontend website and backend database, middleware, and application systems. Even with a custom ERP system, plug-and-play capabilities for productivity applications and workflow systems are not always available. Software development companies can provide a solution to this issue by creating custom ERP frameworks and systems that allow for easy integration of existing infrastructure, workflow processes, and productivity applications.

Benefit #7: Monitor Performance with One Reporting System

Gain total visibility into all business processes, from Finance and Accounts Management and Human Resources to Manufacturing, Marketing and Sales, and Supply Chain and Warehouse Management. Automate departmental workflows and track each department’s activities with a single reporting system, allowing for easy analysis of performance statistics and assurance that nothing is overlooked.

Benefit #8: Customized Reports & BI

Business is all about understanding the past and using that knowledge to make better decisions in the present. Statistics and analytics are essential for this, as they provide the data and insights needed to make informed choices. A custom ERP lets you tailor reports to your exact needs and integrate your business intelligence tools to gain even more statistics and insights.

Benefit #9: Boost Sales Through Enhanced CRM

Businesses can build a CRM that fits their unique demands and operations by deploying a custom ERP system, making their sales teams more efficient and productive. Several businesses are currently looking toward consumer-level ERP solutions. By creating an ERP system that incorporates customer and CRM data, a business can expand its marketing efforts and improve the conversion of leads into sales. The custom ERP’s complete integration with all of the business’s sales platforms and assets enables smooth cooperation between the many departments involved in sales. With this technology in place, sales teams can use customer behavior and purchase history information more effectively.

How to Formulate Clear Requirements for Custom ERP Development

Below are the key factors to consider when creating a custom ERP system that meets your individual needs. They include outlining your core requirements, considering how the system will fit with existing processes, and determining its scalability. With this information, you can build the ideal custom ERP system for your business with confidence.

Functionality 

If you are looking to implement an ERP system, it is necessary to consider the functionality it needs to include. To make this process easier, you can examine the features of your current system and identify what can be improved. It is also beneficial to discuss with your team questions such as:

What processes can be automated for resource optimization?
What visibility or reporting does each department require? Which information do different departments need access to?
Why is a custom ERP system being considered in the first place? 

Answering these questions will help you determine the functionality that needs to be included and shape the user experience.

Industry Research

Success depends on staying current with developments in your industry. Research your most successful competitors and the top-performing apps in your sector. Learn about the benefits and potential pitfalls of these technologies by watching demos and reading reviews. To ensure that your expectations are realistic and that your custom ERP system meets the specific needs of your business, it is best to engage an experienced team of custom ERP development professionals to conduct a comprehensive market analysis.

Integration Requirements

The key to successful development is integrating your custom ERP system with external applications. The development team will have a clear understanding of the project scope if you list all of the software applications that each department in your company uses. Systems for managing client relationships, inventories, online storefronts, project management tools, accounting software, and more are examples of apps that should be on the list. With this information available, your development team will be better equipped to provide accurate time and cost estimates.

Deployment Type

When considering custom ERP development, businesses must choose between an on-premises and a cloud option. On-premise software is installed on the company’s hardware, while cloud-based systems are hosted on the vendor’s server and accessed online. In the past, most companies opted for on-premise ERP systems, but cloud solutions have become increasingly popular due to the numerous advantages they offer:

faster implementation, 
lower initial costs, 
less responsibility for security, 
easier collaboration between teams, 
automated updates provided by the vendor.

Cost of ERP Software Development 

Custom ERP solutions can range significantly in cost depending on the complexity and scope of the project. Factors that influence the total cost include the number of people on the development team, the features and integrations required, and the duration of the project. Generally, custom ERP development projects can cost anywhere from $25,000 to $5,000,000, making them a more expensive option compared to a third-party resource-planning system, which typically runs from $9,000 per user for mid-sized and large-scale enterprises. The most crucial cost factors are:

Size of Company

The size of your business will affect the amount of time and effort needed to create an ERP system. As the size of your business grows, the demand for a robust and comprehensive custom ERP system increases. This entails a more complex organizational structure, various business processes, and the need to integrate multiple software products across departments. To meet these demands, an experienced ERP development provider must be engaged to create a custom ERP system with all the required modules, integrations, and features. This will ensure that the custom ERP system is tailored to meet the specific needs of your business, no matter how large it may be.

Tech Stack 

When considering custom ERP development, there is a distinction between open-source software and proprietary solutions. Open-source software is free to the public, while proprietary solutions require paying the provider a subscription fee for the platform. Additionally, the more sophisticated the technology, the more costly the developer’s time will be. When selecting the right custom ERP solution for your business, it is important to consider the cost implications of open-source versus proprietary solutions, as well as the complexity of the technology. This will ensure that you get the most suitable custom ERP system for your business needs.

Development Team 

For a comprehensive and versatile custom ERP system, it is necessary to hire a larger team of specialists, which helps reduce the overall implementation timeline. Generally, a medium-complexity custom ERP solution costs around $40,000 per module, with comparable costs for testing, deployment, and data migration; for a custom ERP system with 5 modules, the total cost will be approximately $250,000. To ensure that your custom ERP system is cost-efficient, it is important to accurately estimate the amount of time and resources that will be needed.
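
As a rough sketch of how that estimate adds up (assuming, as an interpretation of the figures above, that testing, deployment, and data migration together add roughly another module’s worth of cost):

5 modules × $40,000 per module = $200,000 for development
+ ~$50,000 for testing, deployment, and data migration
≈ $250,000 total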

Summary

Custom ERP development allows companies to have a tailored system that meets their specific needs, allowing them to manage resources more efficiently, streamline their workflows, and modernize the enterprise. By investing in a customized solution, companies can ensure that their unique requirements are taken into account, as well as facilitate the adaptation of staff to the new system and avoid overspending on unnecessary features. 

Custom ERP with Flatlogic

Flatlogic Platform offers an easy way to generate a custom ERP solution with full control over the source code. You can make sure that you have the right features, scalability, and performance to match your business needs. Plus, with no-code development, you don’t need to be an expert programmer to make the necessary changes, making it easier to scale and customize as your business grows. With Flatlogic Platform, you have the flexibility to create a custom ERP solution that is tailored to your needs, while still having the scalability of more traditional development.

How to Create Custom ERP with Flatlogic Platform?

Using the Flatlogic Full-Stack Generator you can create CRUD and static applications in a few minutes. To start using the Platform, you need to register on the Flatlogic website. Clicking the “Sign in” button in the header will allow you to register for a Flatlogic account.

Step 1. Choosing the Tech Stack

In this step, you’re setting the name of your application and choosing the stack: Frontend, Backend, and Database.

Step 2. Choosing the Starter Template

In this step, you’re choosing the design of the web app.

Step 3. Schema Editor

In this step, you can create your database schema from scratch, import an existing schema, or select one of the suggested schemas.

To import your existing database, click the Import SQL button and select your .sql file. After that, your database will be opened in the Schema Editor where you can further edit your data (add/edit/delete entities).

If you are not familiar with database design and find it difficult to understand what tables are, we have prepared some ready-made sample schemas of real applications that you can modify for your application:

E-commerce app;
Time tracking app;
Book store;
Chat (messaging) app;
Blog.

Alternatively, you can generate a database schema from a description by clicking the “Generate with AI” button. Type the application’s description in the text area and hit “Send”; the application’s schema will be ready in around 15 seconds. You may either hit deploy immediately or review the structure to make manual adjustments.

Next, you can connect your GitHub account and push your application code there, or skip this step by clicking the Finish and Deploy button; in a few minutes, your application will be generated.

