Using Open Source Cedar to Write and Enforce Custom Authorization Policies

Cedar is an open source language and software development kit (SDK) for writing and enforcing authorization policies for your applications. You can use Cedar to control access to resources such as photos in a photo-sharing app, compute nodes in a micro-services cluster, or components in a workflow automation system. You specify fine-grained permissions as Cedar policies, and your application authorizes access requests by calling the Cedar SDK’s authorization engine. Cedar has a simple and expressive syntax that supports common authorization paradigms, including both role-based access control (RBAC) and attribute-based access control (ABAC). Because Cedar policies are separate from application code, they can be independently authored, analyzed, and audited, and even shared among multiple applications.

In this blog post, we introduce Cedar and the SDK using an example application, TinyTodo, whose users and teams can organize, track, and share their todo lists. We present examples of TinyTodo permissions as Cedar policies and how TinyTodo uses the Cedar authorization engine to ensure that only intended users are granted access. A more detailed version of this post is included with the TinyTodo code.

TinyTodo

TinyTodo allows individuals, called Users, and groups, called Teams, to organize, track, and share their todo lists. Users create Lists which they can populate with tasks. As tasks are completed, they can be checked off the list.

TinyTodo Permissions

We don’t want to allow TinyTodo users to see or make changes to just any task list. TinyTodo uses Cedar to control who has access to what. A List’s creator, called its owner, can share the list with other Users or Teams. Owners can share lists in two different modes: reader and editor. A reader can get details of a List and the tasks inside it. An editor can do those things too, and can also add new tasks and edit, (un)check, and remove existing tasks.

We specify and enforce these access permissions using Cedar. Here is one of TinyTodo’s Cedar policies.

// policy 1: A User can perform any action on a List they own
permit(principal, action, resource)
when {
    resource has owner && resource.owner == principal
};

This policy states that any principal (a TinyTodo User) can perform any action on any resource (a TinyTodo List) as long as the resource has an owner attribute that matches the requesting principal.

Here’s another TinyTodo Cedar policy.

// policy 2: A User can see a List if they are either a reader or editor
permit (
    principal,
    action == Action::"GetList",
    resource
)
when {
    principal in resource.readers || principal in resource.editors
};

This policy states that any principal can read the contents of a task list (Action::"GetList") so long as they are in either the list’s readers group or its editors group.

Cedar’s authorizer enforces default deny: A request is authorized only if a specific permit policy grants it.
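Because of this default-deny behavior, editor capabilities such as adding or updating tasks need their own permit policies. A sketch of such a policy might look like the following; the action names here are illustrative, and the policies TinyTodo actually ships are in policies.cedar (discussed below).

// sketch: editors may modify a List and its tasks (action names are illustrative)
permit (
    principal,
    action in [Action::"CreateTask", Action::"UpdateTask", Action::"DeleteTask"],
    resource
)
when {
    principal in resource.editors
};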

The full set of policies can be found in the TinyTodo file policies.cedar (discussed below). To learn more about Cedar’s syntax and capabilities, check out the Cedar online tutorial at https://www.cedarpolicy.com/.

Building TinyTodo

To build TinyTodo you need to install Rust and Python3, and the Python3 requests module. Download and build the TinyTodo code by doing the following:

> git clone https://github.com/cedar-policy/tinytodo
…downloading messages here
> cd tinytodo
> cargo build
…build messages here

The cargo build command will automatically download and build the Cedar Rust packages cedar-policy-core, cedar-policy-validator, and others, from Rust’s standard package registry, crates.io, and build the TinyTodo server, tiny-todo-server. The TinyTodo CLI is a Python script, tinytodo.py, which interacts with the server. The basic architecture is shown in Figure 1.

Figure 1: TinyTodo application architecture

Running TinyTodo

Let’s run TinyTodo. To begin, we start the server, assume the identity of user andrew, create a new todo list called Cedar blog post, add two tasks to that list, and then complete one of the tasks.

> python -i tinytodo.py
>>> start_server()
TinyTodo server started on port 8080
>>> set_user(andrew)
User is now andrew
>>> get_lists()
No lists for andrew
>>> create_list("Cedar blog post")
Created list ID 0
>>> get_list(0)
=== Cedar blog post ===
List ID: 0
Owner: User::"andrew"
Tasks:
>>> create_task(0,"Draft the post")
Created task on list ID 0
>>> create_task(0,"Revise and polish")
Created task on list ID 0
>>> get_list(0)
=== Cedar blog post ===
List ID: 0
Owner: User::"andrew"
Tasks:
1. [ ] Draft the post
2. [ ] Revise and polish
>>> toggle_task(0,1)
Toggled task on list ID 0
>>> get_list(0)
=== Cedar blog post ===
List ID: 0
Owner: User::"andrew"
Tasks:
1. [X] Draft the post
2. [ ] Revise and polish

Figure 2: Users and Teams in TinyTodo

The get_list, create_task, and toggle_task commands are all authorized by the Cedar Policy 1 we saw above: since andrew is the owner of List ID 0, he is allowed to carry out any action on it.

Now, continuing as user andrew, we share the list with team interns as a reader. TinyTodo is configured so that the relationship between users and teams is as shown in Figure 2. We switch the user identity to aaron, list the tasks, and attempt to complete another task, but the attempt is denied because aaron is only allowed to view the list (since he’s a member of interns), not edit it. Finally, we switch to user kesha and attempt to view the list, but the attempt is not allowed (interns is a member of temp, but not the reverse).

>>> share_list(0,interns,read_only=True)
Shared list ID 0 with interns as reader
>>> set_user(aaron)
User is now aaron
>>> get_list(0)
=== Cedar blog post ===
List ID: 0
Owner: User::"andrew"
Tasks:
1. [X] Draft the post
2. [ ] Revise and polish
>>> toggle_task(0,2)
Access denied. User aaron is not authorized to Toggle Task on [0, 2]
>>> set_user(kesha)
User is now kesha
>>> get_list(0)
Access denied. User kesha is not authorized to Get List on [0]
>>> stop_server()
TinyTodo server stopped on port 8080

Here, aaron’s get_list command is authorized by the Cedar Policy 2 we saw above, since aaron is a member of the Team interns, which andrew made a reader of List 0. aaron’s toggle_task and kesha’s get_list commands are both denied because no specific policy exists that authorizes them.

Extending TinyTodo’s Policies with Administrator Privileges

We can change the policies with no updates to the application code because they are defined and maintained independently. To see this, add the following policy to the end of the policies.cedar file:

permit(
    principal in Team::"Admin",
    action,
    resource in Application::"TinyTodo");

This policy states that any user who is a member of Team::"Admin" is able to carry out any action on any List (all of which are part of the Application::"TinyTodo" group). Since user emina is defined to be a member of Team::"Admin" (see Figure 2), if we restart TinyTodo to use this new policy, we can see emina is able to view and edit any list:

> python -i tinytodo.py
>>> start_server()
=== TinyTodo started on port 8080
>>> set_user(andrew)
User is now andrew
>>> create_list("Cedar blog post")
Created list ID 0
>>> set_user(emina)
User is now emina
>>> get_list(0)
=== Cedar blog post ===
List ID: 0
Owner: User::"andrew"
Tasks:
>>> delete_list(0)
List Deleted
>>> stop_server()
TinyTodo server stopped on port 8080

Enforcing Access Requests

When the TinyTodo server receives a command from the client, such as get_list or toggle_task, it checks to see if that command is allowed by invoking the Cedar authorization engine. To do so, it translates the command information into a Cedar request and passes it with relevant data to the Cedar authorization engine, which either allows or denies the request.

Here’s what that looks like in the server code, written in Rust. Each command has a corresponding handler, and that handler first calls the function self.is_authorized to authorize the request before continuing with the command logic. Here’s what that function looks like:

pub fn is_authorized(
    &self,
    principal: impl AsRef<EntityUid>,
    action: impl AsRef<EntityUid>,
    resource: impl AsRef<EntityUid>,
) -> Result<()> {
    let es = self.entities.as_entities();
    let q = Request::new(
        Some(principal.as_ref().clone().into()),
        Some(action.as_ref().clone().into()),
        Some(resource.as_ref().clone().into()),
        Context::empty(),
    );
    info!("is_authorized request: …");
    let resp = self.authorizer.is_authorized(&q, &self.policies, &es);
    info!("Auth response: {:?}", resp);
    match resp.decision() {
        Decision::Allow => Ok(()),
        Decision::Deny => Err(Error::AuthDenied(resp.diagnostics().clone())),
    }
}

The Cedar authorization engine is stored in the variable self.authorizer and is invoked via the call self.authorizer.is_authorized(&q, &self.policies, &es). The first argument is the access request &q — can the principal perform action on resource with an empty context? An example from our sample run above is whether User::"kesha" can perform action Action::"GetList" on resource List::"0". (The notation Type::"id" used here denotes a Cedar entity UID, which has Rust type cedar_policy::EntityUid in the code.) The second argument is the set of Cedar policies &self.policies the engine will consult when deciding the request; these were read in by the server when it started up. The last argument &es is the set of entities the engine will consider when consulting the policies. These are data objects that represent TinyTodo’s Users, Teams, and Lists, to which the policies may refer. The Cedar authorizer returns a decision: if Decision::Allow, then the TinyTodo command can proceed; if Decision::Deny, then the server returns that access is denied. The request and its outcome are logged by the calls to info!(…).
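To see these pieces in isolation, here is a minimal, self-contained sketch that parses policy 2 from earlier, builds a request, and asks the authorizer for a decision. It mirrors the calls shown above, but the exact cedar_policy API surface may differ between SDK versions, and because no entity data is supplied the request is denied by default.

use cedar_policy::{Authorizer, Context, Decision, Entities, EntityUid, PolicySet, Request};

fn main() {
    // Policy 2 from earlier: readers and editors may get a list.
    let policies: PolicySet = r#"
        permit(principal, action == Action::"GetList", resource)
        when { principal in resource.readers || principal in resource.editors };
    "#
    .parse()
    .expect("policy should parse");

    // No entity data is provided here, so the `in` checks cannot match.
    let entities = Entities::empty();

    let principal: EntityUid = r#"User::"kesha""#.parse().expect("valid entity UID");
    let action: EntityUid = r#"Action::"GetList""#.parse().expect("valid entity UID");
    let resource: EntityUid = r#"List::"0""#.parse().expect("valid entity UID");

    let request = Request::new(Some(principal), Some(action), Some(resource), Context::empty());
    let response = Authorizer::new().is_authorized(&request, &policies, &entities);

    // Default deny: with no entity data, this request is denied.
    match response.decision() {
        Decision::Allow => println!("allowed"),
        Decision::Deny => println!("denied"),
    }
}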

Learn More

We are just getting started with TinyTodo, and we have only seen some of what the Cedar SDK can do. You can find a full tutorial in TUTORIAL.md in the tinytodo source code directory which explains (1) the full set of TinyTodo Cedar policies; (2) information about TinyTodo’s Cedar data model, i.e., how TinyTodo stores information about users, teams, lists and tasks as Cedar entities; (3) how we specify the expected data model and structure of TinyTodo access requests as a Cedar schema, and use the Cedar SDK’s validator to ensure that policies conform to the schema; and (4) challenge problems for extending TinyTodo to be even more full featured.

Cedar and Open Source

Cedar is the authorization policy language used by customers of the Amazon Verified Permissions and AWS Verified Access managed services. With the release of the Cedar SDK on GitHub, we provide transparency into Cedar’s development, invite community contributions, and hope to build trust in Cedar’s security.

All of Cedar’s code is available at https://github.com/cedar-policy/. Check out the roadmap and issues list on the site to see where it is going and how you could contribute. We welcome submissions of issues and feature requests via GitHub issues. We built the core Cedar SDK components (for example, the authorizer) using a new process called verification-guided development in order to provide extra assurance that they are safe and secure. To contribute to these components, you can submit a “request for comments” and engage with the core team to get your change approved.

To learn more, feel free to submit questions, comments, and suggestions via the public Cedar Slack workspace, https://cedar-policy.slack.com. You can also complete the online Cedar tutorial and play with it via the language playground at https://www.cedarpolicy.com/.


Securing PyPI for the Future

We are excited to announce that Amazon Web Services is now the Python Package Index (PyPI) Security Sponsor at the Python Software Foundation, the non-profit devoted to advancing open source technology related to the Python programming language. Through this sponsorship, AWS is providing funding to the PSF to hire a full-time Safety and Security Engineer dedicated to improving the security posture of PyPI. This effort is part of our broader initiative at Amazon Web Services (AWS) to support open source software supply chain security.

Python is an extremely popular open source programming and scripting language among our customers, partners, and Amazon engineers. It is number one on both the TIOBE Index (April 2023) and the PopularitY of Programming Language (PYPL) Index. PyPI is the primary repository of software for the Python programming language. Since Python is modular in nature, most Python applications rely heavily on PyPI to provide the necessary dependencies for core functions rather than reinventing them each time. PyPI is also the primary distribution point for Python applications and libraries.

At AWS, we know that scale and success bring broad responsibility. Amazon and its customers build solutions with Python and we recognize the need to give back to the open source communities that we depend on and help ensure their long term sustainability. AWS is a maintaining sponsor of the PSF and has supported PyPI since 2018, when the index was rewritten to run on AWS in order to address performance and scalability concerns. Today, PyPI scales beautifully due to the significant work from PSF Director of Infrastructure Ee Durbin and the PyPI infrastructure team. AWS is pleased to be able to continue to support PyPI via AWS credits, which offset their infrastructure costs.

PyPI is now facing a new challenge at scale: keeping Python software packages secure. PyPI is regularly threatened by malicious actors, with attacks including typosquatting, dependency injection, and dependency confusion. Companies (including AWS) publish business-critical software on PyPI, and packages are being maliciously published to appear to be from users who represent a large target. These attacks on PyPI have led to a lengthy support ticket backlog, which is currently addressed by a single part-time volunteer. Their efforts to date to stay on top of this have been nothing short of incredible, but this is not sustainable.

As the first PyPI Security Sponsor, we are providing additional funding which will allow the PSF to hire a full-time Safety and Security Engineer for PyPI. This will provide PyPI with additional resources to take down malware from the site and respond more quickly to support tickets related to security issues. Additionally, it will allow PyPI to shift from a reactive approach to security to a proactive one in which they can develop a security plan with improvement milestones and enable proper security audits of new PyPI features before launch.

Supply chain security is an industry wide concern, and Python is not alone in these challenges. The Python Package Index is critical to countless users around the world. A new safety and security engineer will help alleviate the current bottleneck of support issues, remove malware faster, and keep PyPI secure for the benefit of all its users. We look forward to continuing our work with the Python Software Foundation as we work towards improving open source supply chain security.


Monitoring Amazon DevOps Guru insights using Amazon Managed Grafana

As organizations operate day-to-day, having insights into their cloud infrastructure state can be crucial for the durability and availability of their systems. Industry research estimates[1] that downtime costs small businesses around $427 per minute of downtime, and medium to large businesses an average of $9,000 per minute of downtime. Amazon DevOps Guru customers want to monitor and generate alerts using a single dashboard. This allows them to reduce context switching between applications, providing them an opportunity to respond to operational issues faster.

DevOps Guru can integrate with Amazon Managed Grafana to create and display operational insights. Alerts can be created and communicated for any critical events captured by DevOps Guru and notifications can be sent to operation teams to respond to these events. The key telemetry data types of logs and metrics are parsed and filtered to provide the necessary insights into observability.

Furthermore, Amazon Managed Grafana provides plug-ins to popular open-source databases, third-party ISV monitoring tools, and other cloud services. With Amazon Managed Grafana, you can easily visualize information from multiple AWS services, AWS accounts, and Regions in a single Grafana dashboard.

In this post, we will walk you through integrating the insights generated from DevOps Guru with Amazon Managed Grafana.

Solution Overview:

The architecture diagram shows the flow of the logs and metrics used by Amazon Managed Grafana: DevOps Guru generates insights, Amazon EventBridge saves the insight event logs to an Amazon CloudWatch log group, and Amazon Managed Grafana parses these logs and the DevOps Guru service metrics to create new dashboards in Grafana.

Now we will walk you through how to do this and set up notifications to your operations team.

Prerequisites:

The following prerequisites are required for this walkthrough:

An AWS Account

DevOps Guru enabled on your account, monitoring either a CloudFormation stack or tagged resources.

Using Amazon CloudWatch Metrics

 

DevOps Guru sends service metrics to Amazon CloudWatch. We will use these to track metrics for insights and for your DevOps Guru usage; by default, the DevOps Guru service reports these metrics to the AWS/DevOps-Guru namespace in CloudWatch.
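Optionally, before building the dashboard, you can confirm that DevOps Guru is publishing metrics to that namespace with the AWS CLI (this assumes the CLI is installed and configured):

aws cloudwatch list-metrics --namespace AWS/DevOps-Guru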

First, we will provision an Amazon Managed Grafana workspace and then create a Dashboard in the workspace that uses Amazon CloudWatch as a data source.

Setting up Amazon CloudWatch Metrics

1. Create a Grafana Workspace
Navigate to Amazon Managed Grafana from the AWS console, then click Create workspace.

a. Select the Authentication mechanism

i. AWS IAM Identity Center (AWS SSO) or SAML v2 based Identity Providers

ii. Service Managed Permission or Customer Managed

iii. Choose Next

b. Under “Data sources and notification channels”, choose Amazon CloudWatch

c. Create the Service.

You can use this post for more information on how to create and configure the Grafana workspace with SAML based authentication.

Next, we will show you how to create a dashboard and parse the Logs and Metrics to display the DevOps Guru insights and recommendations.

2. Configure Amazon Managed Grafana

a. Add CloudWatch as a data source:
From the left bar navigation menu, hover over AWS and select Data sources.

b. From the Services dropdown select and configure CloudWatch.

3. Create a Dashboard

a. From the left navigation bar, click on add a new Panel.

b. You will see a demo panel.

c. In the demo panel – Click on Data source and select Amazon CloudWatch.

d. For this panel we will use CloudWatch metrics to display the number of insights.

e. From Namespace, select the AWS/DevOps-Guru namespace, Insights as the metric name, and Average for the statistic.

Click Apply.

f. This is our first panel. We can change the panel name from the right-side bar under Title. We will name this panel “Insights”.

g. From the top right menu, click Save dashboard and give your new dashboard a name.

Using Amazon CloudWatch Logs via Amazon EventBridge

For other insights outside of the service metrics, such as a number of insights per specific service or the average for a region or for a specific AWS account, we will need to parse the event logs. These logs first need to be sent to Amazon CloudWatch Logs. We will go over the details on how to set this up and how we can parse these logs in Amazon Managed Grafana using CloudWatch Logs Query Syntax. In this post, we will show a couple of examples. For more details, please check out this User Guide documentation. This is not done by default and we will need to use Amazon EventBridge to pass these logs to CloudWatch.

DevOps Guru logs include other details that can be helpful when building dashboards, such as the Region, insight severity (High, Medium, or Low), associated resources, and the DevOps Guru dashboard URL, among other things. For more information, please check out this User Guide documentation.

EventBridge offers a serverless event bus that helps you receive, filter, transform, route, and deliver events. It provides one-to-many messaging to support decoupled architectures, and it is easy to integrate with AWS services and third-party tools. Using Amazon EventBridge with DevOps Guru provides a solution that is easy to extend to create a ticketing system through integrations with ServiceNow, Jira, and other tools. It also makes it easy to set up alert systems through integrations with PagerDuty, Slack, and more.

 

Setting up Amazon CloudWatch Logs

Let’s dive into creating the EventBridge rule and enhance our Grafana dashboard:

a. First head to Amazon EventBridge in the AWS console.

b. Click Create rule.

     Type in a rule name and description. You can leave Event bus set to the default and Rule type set to Rule with an event pattern.

c. Select AWS events or EventBridge partner events.

    For the event pattern, change to Custom patterns (JSON editor) and use:

{"source": ["aws.devops-guru"]}

This filters for all events generated by DevOps Guru. You can use the same mechanism to route specific messages, such as new insights or closed insights, to a different channel. For this demonstration, let’s consider extracting all events.
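For example, if you later want a rule that forwards only new insights, the pattern can additionally match on the messageType field that the log queries later in this post rely on (a sketch; adjust the values to the events you need):

{
  "source": ["aws.devops-guru"],
  "detail": {
    "messageType": ["NEW_INSIGHT"]
  }
}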

d. Next, for Target, select AWS service.

    Then select CloudWatch log group.

    For the log group, give your group a name, such as “devops-guru”.

e. Click Create rule.

f. Navigate back to Amazon Managed Grafana.
It’s time to add a couple more panels to our dashboard. Click Add panel.
    Then select Amazon CloudWatch, change from Metrics to CloudWatch Logs, and select the log group we created previously.

g. For the query use the following to get the number of closed insights:

fields @detail.messageType
| filter detail.messageType="CLOSED_INSIGHT"
| count(detail.messageType)

You’ll see the new dashboard get updated with “Data is missing a time field”.

You can either open the suggestions and select a gauge that makes sense, or choose from the multiple visualization options.

Now we have 2 panels:

h. You can repeat the same process to create a third panel for the new insights using this query:

fields @detail.messageType
| filter detail.messageType="NEW_INSIGHT"
| count(detail.messageType)

Now we have 3 panels:

Next, depending on the visualizations, you can work with the Logs and metrics data types to parse and filter the data.

i. For our fourth panel, we will add a direct link to the DevOps Guru dashboard in the AWS console.

Repeat the same process as demonstrated previously one more time with this query:

fields detail.messageType, detail.insightSeverity, detail.insightUrl
| filter detail.messageType="CLOSED_INSIGHT" or detail.messageType="NEW_INSIGHT"

Switch to table when prompted on the panel.

This will give us a direct link to the DevOps Guru dashboard and help us get to the insight details and Recommendations.

Save your dashboard.

You can extend observability by configuring alerts on the dashboard panels that display metrics. The alerts are triggered when a condition is met and are communicated through the Amazon SNS notification mechanism. This is our SNS notification channel setup.

The previously created notification channel is then used to communicate any alerts when a condition is met on the metrics being observed.

Cleanup

To avoid incurring future charges, delete the resources.

Navigate to EventBridge in the AWS console and delete the rule named “devops-guru” created in the Setting up Amazon CloudWatch Logs steps (a-e).
Navigate to CloudWatch Logs in the AWS console and delete the log group named “devops-guru” created in those same steps.
Navigate to the Amazon Managed Grafana service and delete the Grafana workspace you created in step 1.

Conclusion

In this post, we have demonstrated how to incorporate Amazon DevOps Guru insights into Amazon Managed Grafana and use Grafana as the observability tool. This allows operations teams to observe the state of their AWS resources and be notified through alarms when preset thresholds on DevOps Guru metrics and logs are crossed. You can expand on this to create other panels and dashboards specific to your needs. If you don’t have DevOps Guru, you can start monitoring your AWS applications with Amazon DevOps Guru today using this link.

[1] https://www.atlassian.com/incident-management/kpis/cost-of-downtime

About the authors:

MJ Kubba

MJ Kubba is a Solutions Architect who enjoys working with public sector customers to build solutions that meet their business needs. MJ has over 15 years of experience designing and implementing software solutions. He has a keen passion for DevOps and cultural transformation.

David Ernst

David is a Sr. Specialist Solution Architect – DevOps, with 20+ years of experience in designing and implementing software solutions for various industries. David is an automation enthusiast and works with AWS customers to design, deploy, and manage their AWS workloads/architectures.

Sofia Kendall

Sofia Kendall is a Solutions Architect who helps small and medium businesses achieve their goals as they utilize the cloud. Sofia has a background in Software Engineering and enjoys working to make systems reliable, efficient, and scalable.

Introducing AWS Libcrypto for Rust, an Open Source Cryptographic Library for Rust

Today we are excited to announce the availability of AWS Libcrypto for Rust (aws-lc-rs), an open source cryptographic library for Rust software developers with FIPS cryptographic requirements. At our 2022 AWS re:Inforce talk we introduced our customers to AWS Libcrypto (AWS-LC), and our investment in and improvements to open source cryptography. Today we continue that mission by releasing aws-lc-rs, a performant cryptographic library for Linux (x86, x86-64, aarch64) and macOS (x86-64) platforms.

Rust developers increasingly need to deploy applications that meet US and Canadian government cryptographic requirements. We evaluated how to deliver FIPS validated cryptography in idiomatic and performant Rust, built around our AWS-LC offering. We found that the popular ring (v0.16) library fulfilled much of the cryptographic needs in the Rust community, but it did not meet the needs of developers with FIPS requirements. Our intention is to contribute a drop-in replacement for ring that provides FIPS support and is compatible with the ring API. Rust developers with prescribed cryptographic requirements can seamlessly integrate aws-lc-rs into their applications and deploy them into AWS Regions.

AWS-LC is the foundation of aws-lc-rs. AWS-LC is the general-purpose cryptographic library for the C programming language at AWS. It is a fork from Google’s BoringSSL, with features and performance enhancements developed by AWS, such as FIPS support, formal verification for validating implementation correctness, performance improvements on Arm processors for ChaCha20-Poly1305 and NIST P-256 algorithms, and improvements to ECDSA signature verification for NIST P-256 curves on x86 based platforms. aws-lc-rs leverages these AWS-LC optimizations to improve performance in Rust applications. AWS-LC has been submitted to an accredited lab for FIPS validation testing, and upon completion will be submitted to NIST for certification. Once NIST grants a validation certificate to AWS-LC, we will make an announcement to Rust developers on how to leverage the FIPS mode in aws-lc-rs.

We used rustls, a Rust library that provides TLS 1.2 and 1.3 protocol implementations to benchmark aws-lc-rs performance. We ran a set of benchmark scenarios on c7g.metal (Graviton3) and c6i.metal (x86-64) Amazon Elastic Compute Cloud (Amazon EC2) instance types. The graph below shows the improvement of a TLS client negotiating new connections using aws-lc-rs. aws-lc-rs with rustls significantly improves throughput in each of the algorithms tested, and for every hardware platform. We are excited to share aws-lc-rs and these cryptographic improvements with the Rust community today. We are continually evaluating our benchmarks for opportunities to improve aws-lc-rs.

Getting Started

Incorporating aws-lc-rs in your project is straightforward. Let’s look at how you can use aws-lc-rs in your Rust application by creating a SHA256 digest of the message “Hello Blog Readers!” An enhanced version of this digest example is available in the aws-lc-rs repository.

Install the Rust toolchain with rustup, if you do not already have it.
Initialize a new cargo package for your Rust project, if you don’t already have one:
$ cargo new --bin aws-lc-rs-example

Record aws-lc-rs as a dependency in the project Cargo file:
$ cargo add aws-lc-rs

Applications already using the ring (v0.16.x) API: If your application already leverages the ring API, you can easily test and benchmark your application against aws-lc-rs without changing your application’s use declarations:
$ cargo remove ring
$ cargo add --rename ring aws-lc-rs

Edit src/main.rs:

use aws_lc_rs::digest::{digest, SHA256};

fn main() {
    const MESSAGE: &[u8] = b"Hello Blog Readers!";

    let output = digest(&SHA256, MESSAGE);

    for v in output.as_ref() {
        print!("{:02x}", *v);
    }
    println!();
}

Compile and run the program:
$ cargo run
Finished dev [unoptimized + debuginfo] target(s) in 0.04s
Running `target/debug/aws-lc-rs-example`
c10843d459bf8e2fa6000d59a95c0ae57966bd296d9e90531c4ec7261460c6fb
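The same ring-style API covers other primitives as well. As a further sketch (assuming the aws_lc_rs::hmac module mirrors ring’s hmac module the way the digest module does above), signing and verifying a message with HMAC-SHA256 looks like this:

use aws_lc_rs::hmac;

fn main() {
    // Key material is hard-coded only for illustration; derive or generate real keys securely.
    let key = hmac::Key::new(hmac::HMAC_SHA256, b"an example key");
    let tag = hmac::sign(&key, b"Hello Blog Readers!");
    hmac::verify(&key, b"Hello Blog Readers!", tag.as_ref()).expect("HMAC verification failed");
    println!("HMAC-SHA256 tag verified");
}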

Conclusion

Rust developers increasingly need to deploy applications that meet US and Canadian government cryptographic requirements. In this post, you learned how we are building aws-lc-rs in order to bring FIPS compliant cryptography to Rust applications. Together AWS-LC and aws-lc-rs bring performance improvements for Arm and x86-64 processor families for commonly used cryptographic algorithms. If you are interested in using or contributing to aws-lc-rs source code or documentation, they are publicly available under the terms of either the Apache Software License 2.0 or ISC License from our GitHub repository. We use GitHub Issues for managing feature requests or bug reports. You can follow aws-lc-rs on crates.io for notifications about new releases. If you discover a potential security issue in aws-lc-rs or AWS-LC, we ask that you notify AWS Security using our vulnerability reporting page.


Announcing the Simple Database Archival Solution

Today, we are excited to announce the release of the Simple Database Archival Solution (SDAS). SDAS is an open source solution available under Apache License 2.0, which you can deploy in your AWS account to archive data to AWS. SDAS provides organizations with an efficient, easy and cost-effective solution for archiving Oracle, Microsoft SQL, and MySQL databases.

SDAS addresses a common problem faced by many AWS customers, which is the need to efficiently and securely archive data from their databases. Many organizations are required to retain data for long periods, and storing this data on-premises can be costly and complex. Additionally, cloud adoption is becoming more prevalent, and customers often need a solution to easily transfer data from their cloud-hosted databases to the cloud for long-term storage.

SDAS offers a differentiated solution by providing an easy-to-use, open source tool that can be deployed directly into customers’ AWS accounts. With SDAS, customers can quickly and easily map the schema of their database, perform validation, and transfer data to Amazon Simple Storage Service (Amazon S3) for storage. This is accomplished using AWS Step Functions, AWS Glue, Amazon S3, and Amazon Athena, which together provide a highly scalable and reliable solution for transferring data.

With its open source approach, SDAS offers customers a high degree of flexibility to customize and extend the solution to meet their unique requirements. This level of flexibility ensures that the solution can be adapted to the specific needs of any organization, regardless of their size or industry.

What is Simple Database Archival Solution (SDAS)?

As businesses accumulate more and more data over time, the need for effective database archiving solutions has become increasingly important. For example, by moving older, rarely used data to an archive, businesses can reduce the size of their active databases, which can improve performance and reduce storage costs. Archiving can also help organizations meet legal and regulatory requirements for data retention, as well as ensure that important data is available for future use and discovery, if necessary. Out of the box, SDAS provides the following key features:

Support Oracle, MySQL or Microsoft SQL Server
Identify the data type and table schema
Validate the data on the target after the archiving process has completed
Configure WORM (“Write Once Read Many”)
Define a data retention period for the data
Provide detailed information about the status of the data
Perform various data validation and integrity checks
Make it simple for operations to ingest and archive a database
Preview data archived in Amazon S3

SDAS Architecture

SDAS is a solution that provides customers with a robust and scalable mechanism for archiving databases to AWS. SDAS has an intuitive front-end interface at its core, enabling users to easily manage and configure their archives. The front end is built using the Cloudscape Design System, a powerful and flexible framework for developing web applications. This front end is supported by several key AWS services, including Amazon Cognito for authentication and authorization, API Gateway with AWS Lambda functions for user operations, and Amazon S3 for storing the front-end build files.

Amazon Cognito is a fully managed service that provides user authentication and authorization, making it easy to secure user access to web and mobile applications. It supports several authentication methods, including social identity providers such as Facebook, Google, and Amazon, as well as enterprise identity providers via SAML 2.0. With Cognito, users can easily sign up, sign in, and manage their own profiles and settings, without the need for complex user management infrastructure.

API Gateway is a fully managed service that provides a secure, scalable, and reliable API for interacting with SDAS’s front-end interface. It enables users to easily integrate SDAS with other AWS services or third-party applications, while also providing features such as authentication and authorization. Supporting Lambda functions provide serverless compute resources for user operations, enabling SDAS to easily handle varying levels of traffic and user load. Finally, Amazon S3 is used to store the front-end build files, providing a highly durable and scalable object storage service that is optimized for use with AWS applications.

The SDAS platform also includes Amazon DynamoDB, which serves as the primary storage mechanism for archive metadata. DynamoDB provides a highly scalable and durable NoSQL database that is optimized for high-volume workloads. Additionally, the solution leverages AWS Secrets Manager to securely store passwords and other sensitive information.

Archiving of data is performed using AWS Glue, a fully managed ETL service that enables customers to extract, transform, and load data from a variety of sources. SDAS includes pre-built Spark Python scripts that are used to transform the data before archiving it to Amazon S3. The archived data is stored in Parquet file format, which is optimized for query performance and storage efficiency.
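As an illustration of that pattern (this is not the actual SDAS Glue script; the connection settings, table, and bucket names are placeholders), a minimal PySpark job that copies one table into Parquet on Amazon S3 might look like the following:

from pyspark.sql import SparkSession

# A Glue job would receive connection details as job arguments and fetch the
# password from AWS Secrets Manager; values are hard-coded here for illustration.
spark = SparkSession.builder.appName("archive-one-table").getOrCreate()

source_df = (
    spark.read.format("jdbc")  # requires the appropriate JDBC driver on the classpath
    .option("url", "jdbc:sqlserver://legacy-db.example.com:1433;databaseName=legacy")
    .option("dbtable", "dbo.orders")
    .option("user", "archive_user")
    .option("password", "example-password")
    .load()
)

# Write the table in Parquet, the format SDAS stores in Amazon S3.
source_df.write.mode("overwrite").parquet("s3://example-archive-bucket/legacy/dbo/orders/")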

Finally, to provide users with an easy-to-use interface for querying archived data, SDAS leverages Amazon Athena. Athena is a serverless query service that allows users to query data stored in S3 using SQL. By using Athena, users can quickly and easily perform ad-hoc analysis on their archived data without the need for complex setup or maintenance.

In summary, SDAS provides a comprehensive solution for archiving databases to AWS that leverages several key AWS services to enable reliability, scalability, and security. The solution is highly customizable and can be tailored to meet the specific needs of individual customers. With its intuitive front-end and powerful back-end architecture, SDAS is an ideal solution for organizations looking to simplify the process of archiving their data to the cloud.

Sample Use Case

A health care and life sciences company needs to decommission a Microsoft SQL database associated with a legacy application that is no longer in use. The database contains important historical data that the company needs to retain. Keeping the database up and running incurs significant costs, including licensing, maintenance, and hardware expenses, and it is no longer required by day-to-day business operations. To address this challenge, the company decides to decommission the Microsoft SQL database and archive its data to AWS using SDAS.

By archiving the data with SDAS, the company can take advantage of lower storage costs, better data durability, and easy accessibility for analysis and reporting. The business impact of this decision includes:

Cost Savings: Decommissioning the Microsoft SQL database reduces expenses related to licensing, maintenance, and hardware. This frees up resources that can be allocated to more critical business initiatives.

Simplified Data Management: By consolidating the historical data in Amazon S3, the company streamlines its data management processes. This makes it easier to perform data analysis and generate reports when needed, without the complexity of managing a legacy database.

Security: AWS offers advanced security features, such as encryption and access controls, helping the company protect its sensitive historical data.

Scalability: As the company continues to grow, Amazon S3’s scalable storage solution enables them to store and manage increasing amounts of data without worrying about capacity constraints.

By archiving the Microsoft SQL database to Amazon S3 using SDAS, the health care and life sciences company can effectively balance the need to preserve important historical data with the desire to optimize operational costs and improve overall data management.

Steps to Archive a Database with SDAS: A Comprehensive Solution for AWS Customers

Start and Discover

To start the archiving process, gather essential connection information, including the database name, database URI, and credentials. With this information, SDAS attempts to connect to the database, and if successful, proceeds to the next step. In the next step, SDAS collects the tables and associated schema from the target database to be archived.

Figure 1: Screenshot view of SDAS connecting to the Source Database.

To identify the data that needs to be archived, SDAS uses a technique to scan the metadata associated with the table. This process is designed to accurately identify the data type and schema of the table and ensure that the data is properly formatted and validated before being transferred to AWS. The process involves running multiple SQL queries to extract the database schema definition to allow AWS Glue to read and finally write the data to Amazon S3.

Once the data type and schema of the table have been identified, SDAS can begin the process of transferring the data to AWS.

Figure 2: Screenshot view of SDAS performing a scan as well as gathering the database schema definition.

Archive

The archive phase of SDAS is a critical step in the process of archiving data to Amazon S3. SDAS is designed to automatically archive data from Oracle, Microsoft SQL, and MySQL databases, providing flexibility and versatility for customers. The archiving process can be triggered either manually or automatically based on a defined schedule, enabling customers to customize the solution to their specific needs.

Figure 3: Screenshot view of SDAS starting an archive process.

At the core of the archive phase is AWS Glue, a fully managed Extract, Transform, and Load (ETL) service that provides a flexible and scalable solution for copying the database from the source to the target. SDAS leverages the power of AWS Glue to perform necessary transformations on the data, including data cleaning and schema transformations, ensuring that the data is properly formatted and validated before being transferred to Amazon S3.

Once the data is transferred to Amazon S3, it is stored as Parquet files, a columnar storage format that is optimized for query performance and storage efficiency. This makes the archived data easy to query, for instance using Amazon Athena, a serverless query service that allows customers to query data stored in S3 using SQL. By leveraging the power of Amazon Athena, customers can easily perform ad-hoc analysis on their archived data without the need for complex setup or maintenance.

Figure 4: Screenshot view of the AWS Glue jobs running which copies the data from the source to Amazon S3.

Data Validation

The data validation phase of SDAS is a critical step that ensures the accuracy and completeness of the archived data. After the archival process is complete, SDAS automatically triggers a validation process to ensure that the data has been properly transferred and stored in Amazon S3.

The validation process begins by comparing the source data to the archived data stored in Amazon S3, using a variety of techniques such as checksums and data sampling. This process ensures that the data has been accurately transferred and stored, with no data loss or corruption. SDAS does not perform validation on the source data, only on the data stored in Amazon S3.

If any discrepancies are detected, SDAS provides you with the ability to identify the affected table. In addition to ensuring the accuracy of the archived data, SDAS also provides security features to protect against unauthorized access or modification of the data. Passwords are stored in AWS Secrets Manager, which provides a highly secure mechanism for storing and managing secrets, such as database passwords.

Figure 5: Screenshot of the validation processes performed on the target data.

Access to Archived Databases

Access to the archived databases in SDAS is limited to authorized users who can access them through the Amazon Athena Console. To explore and visualize the data using Business Intelligence tools, users can download, install, and configure either an ODBC (Open Database Connectivity) or JDBC (Java Database Connectivity) driver to connect to Amazon Athena.

SDAS also includes a preview mode through the console, which allows users to quickly view the database that has been archived without the need for additional drivers or tools. This preview mode provides users with a quick and easy way to assess the quality and completeness of the archived data before proceeding with further analysis or querying.

Figure 6: Screenshot of the Data Preview feature in SDAS

Object Lock

SDAS includes a powerful feature that enables users to enable Amazon S3 Object Lock, a feature that allows objects to be stored using a WORM (Write Once, Read Many) model. This feature is designed for use in scenarios where it is critical that data is not modified or deleted after it has been written.

By enabling Amazon S3 Object Lock, users can ensure that their archived data is fully protected from accidental or malicious deletion or modification. This feature provides a powerful layer of security that helps to prevent data loss or corruption, ensuring that the archived data remains complete and accurate for future analysis and querying.

Figure 7: Screenshot of the Object Lock feature

Give SDAS a try!

1. Install the Simple Database Archival Solution in your AWS Account.
2. Send any issues, improvements, or suggestions to us at our GitHub page.
3. To help you get started, we have also published a self-guided workshop that walks through the installation and core features of SDAS. Thank you to James Gaines for their support in building the workshop.

Conclusion

The Simple Database Archival Solution (SDAS) offers organizations a comprehensive open source solution for archiving various types of databases, including Oracle, Microsoft SQL, and MySQL. With SDAS, businesses can easily and cost-effectively archive, validate, and securely store their data in Amazon S3. SDAS also provides users with various options for accessing and analyzing their archived data, making it a valuable tool for data analysis and reporting.

Additional thanks to Rohit Jagetia, Joe Cangialosi, James Gaines, and Duverney Tavares for their work on this solution.


AWS Now Supports Credentials-fetcher for gMSA on Amazon Linux 2023

In Q1 of 2023, AWS announced the release of the group Managed Service Account (gMSA) credentials-fetcher daemon, with initial support on Amazon Linux 2023, Fedora Linux 36, and Red Hat Enterprise Linux 9. The credentials-fetcher daemon, developed by AWS, is an open source project under the Apache 2.0 License. This release solves a 10-year, longstanding challenge affecting domain connected Linux machines. Until now, Linux users couldn’t use Microsoft Active Directory (Microsoft AD) gMSA and thus have missed out on the improved security and flexibility that gMSA offers over standard service accounts. With the release of the credentials-fetcher daemon, organizations now gain all of gMSA’s benefits without being tied to Windows based hosts.

In this blog post, we explain the use case for credentials-fetcher and give simple instructions for using an Active Directory domain joined Linux server with gMSA. We also demonstrate the interaction with other domain joined services such as Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server.  The new capabilities of credentials-fetcher pave the way for additional use cases, such as using a Linux host in Amazon Elastic Container Service (Amazon ECS) clusters with gMSA. AWS is committed to using the credentials-fetcher open source project in the AWS cloud, though users may choose to run the service elsewhere. The utility of the service is not limited to AWS. The credentials-fetcher daemon can be leveraged on any supported distribution of Linux and in any environment that meets the Microsoft Active Directory version requirement. This includes on-premise environments, hosted data centers, and other cloud providers.

Solution overview

Organizations running Windows workloads hosted in on-premises data centers use Microsoft AD to authenticate users and services to shared resources over the network. As these organizations migrate workloads into Windows-based environments on AWS and on other clouds, customers traditionally use the domain-join model to access Microsoft AD from Windows instances. In addition, organizations that use Windows containers to scale their applications and reduce their total cost of ownership (TCO) have used gMSAs for Active Directory access by providing Kerberos tickets for container-hosts.

As customers modernize their Windows and Microsoft SQL Server workloads to Linux-based platforms, they still need to authenticate the migrated applications through the organization’s existing Microsoft AD. Although customers can use the domain-join methodology to connect Linux instances to Microsoft AD, it requires a number of steps that traditionally include security limitations. The current method involves a sidecar architecture that fails to periodically rotate passwords, unlike gMSA on Windows containers, thus inducing a security risk of password exposure. Organizations with stringent security postures have not adopted this method on Linux containers and have been waiting for a “gMSA on Windows containers”-like experience on Linux containers.  Active Directory gMSAs have been technically infeasible for customers on Linux-based environments, until today.

A brief introduction to gMSA

Windows-based server infrastructure commonly uses Microsoft Active Directory to facilitate authentication and authorization between users, computers, and other computer network resources. Traditionally, enterprise applications running on Windows platforms use either manually managed accounts used as service accounts or Managed Service Accounts (MSA) for authentication and authorization. The use of manually managed service accounts brings with it the overhead of service account password management, including manually updating the password and updating the password on all servers. It also introduces increased security risks as these accounts typically have elevated privileges and are not tied to a specific user, which creates challenges for attributing activity when auditing the account. For this reason, password management of these accounts is critical.

In contrast, Managed Service Accounts don’t have any password management overhead; the passwords for these types of accounts are automatically rotated and updated on your servers. They are also limited to a single computer account, which means they can’t be used on more than one computer and cannot be used for interactive logons. A Group Managed Service Account (gMSA) is a special type of service account which augments this functionality; its identity can be shared across multiple computers without needing to know the password. To make use of gMSAs, computers must be part of a Microsoft Active Directory domain, which manages these service accounts. Although Windows containers cannot join a domain like an instance, they can still use a gMSA identity for authentication and authorization.

Credentials-fetcher’s potential scenarios

With the addition of the credentials-fetcher daemon, more organizations can use gMSA. This gives customers more options if they are more familiar with Linux, are looking to save on licensing costs, or want to improve their security posture. Customers can now associate Linux machines with a gMSA and take advantage of the authentication and authorization between members of that group managed service account. Environments hosted on domain joined, gMSA associated Linux machines running .NET applications or running in Linux containers can now use the gMSA to authenticate between their own domains and other services like Microsoft SQL Server.

Scenario 1: A Microsoft .NET application is running in Docker containers, with the hosts on a Microsoft Active Directory domain joined Amazon Elastic Compute Cloud (Amazon EC2) Linux server. The Linux application server is added as a member of the gMSA group. The gMSA account is granted permissions to the domain joined Microsoft SQL Server or Amazon RDS for Microsoft SQL Server database.

Scenario 2: A Microsoft .NET application is running in Docker containers and Microsoft SQL server running in its own Docker container, with the hosts on a Microsoft Active Directory domain joined Amazon EC2 Linux server.  The Linux host servers of the application containers and Microsoft SQL Server container are added as members of the gMSA group. The gMSA account is granted permissions to the Microsoft SQL Server instance database running in a container.

Scenario 3: A Microsoft .NET application is running on an Amazon Elastic Container Service (Amazon ECS) cluster, hosted on a Microsoft Active Directory domain. The Linux servers within the Amazon ECS cluster are added as members of the gMSA group. The gMSA account is granted permissions to the domain joined Microsoft SQL Server or Amazon RDS for Microsoft SQL Server database.

Here is a visualization of the featured use-case scenarios.

Figure 1 Different use-case scenarios with Credentials-fetcher

Implementing the environment

This section will walk you through the prerequisites, environment setup and the installation steps for the credentials-fetcher daemon’s use cases.

Prerequisites

You have properly installed and configured the AWS Command Line Interface (AWS CLI) and PowerShell core on your workstation. We’ve chosen to use the AWS CLI for these steps so that the end-to-end workflow can be demonstrated.
This blog post as of April 4th, 2023 requires an install of Fedora Linux 36 or newer and the latest Amazon Linux 2023 AMI.
This blog post references AWS Managed Microsoft Active Directory, but it will also work with other self-managed Microsoft Active Directory scenarios as long as the Linux machines are able to be domain joined.
You have an Amazon Relational Database Service (Amazon RDS) instance that is joined to the domain.
You have elevated administrative Active Directory credentials to configure instances to join a domain and create a Microsoft AD security group.
You have access to the credentials-fetcher GitHub package for installation of the latest daemon and updated instructions.

Environment Setup for gMSA on Linux use cases

Figure 2 Credentials-fetcher running in Fedora Linux Server.

All instructions are assuming the use of the Fedora Linux 36 distro, which has been made available during the time of the blog creation. We plan to add gMSA support for additional Linux distributions in the future.

1.     Set up AWS Managed Microsoft Active Directory or Self-hosted Active Directory.

Active Directory setup: You will set up a domain join from the Linux instance to the AD domain. The Linux instance is part of the AD security group that has access to the gMSA account, as configured by the AD administrator.
AWS Managed Microsoft Active Directory can be deployed using this AWS CloudFormation template.

2.     Create a gMSA account as a Microsoft AD administrator.

Example: Replace ‘LinuxAppFarm’, ‘gmsamachines’, and ‘CORP.EXAMPLE.COM’ with your own AD group, gMSA, and domain names, respectively. Three Linux instances are used in this example: LinuxInstance01$, LinuxInstance02$, and LinuxInstance03$.

# Create the AD group
New-ADGroup -Name "LinuxAppFarm" -SamAccountName "LinuxAppFarm" -GroupScope DomainLocal

# Create the gMSA
New-ADServiceAccount -Name "gmsamachines" -DnsHostName "gmsamachines.CORP.EXAMPLE.COM" -ServicePrincipalNames "host/LinuxAppFarm", "host/LinuxAppFarm.CORP.EXAMPLE.COM" -PrincipalsAllowedToRetrieveManagedPassword "LinuxAppFarm"

# Add your Linux instances or containers to the AD group
Add-ADGroupMember -Identity "LinuxAppFarm" -Members "LinuxInstance01$", "LinuxInstance02$", "LinuxInstance03$", "MSSQLRDSInstance$"

3.     Verify and test the gMSA account.

PowerShell
# Test that this computer can retrieve the gMSA credentials
Test-ADServiceAccount gmsamachines

# Get the current computer’s group membership
Get-ADComputer $env:computername | Get-ADPrincipalGroupMembership | Select-Object DistinguishedName

# Get the groups allowed to retrieve the gMSA password (replace "gmsamachines" with your own gMSA name)
(Get-ADServiceAccount gmsamachines -Properties PrincipalsAllowedToRetrieveManagedPassword).PrincipalsAllowedToRetrieveManagedPassword

Additional reference detailed instructions can be found in this guide to getting started with group managed service accounts.

4.     Create a credentialspec associated with a gMSA account:

Install the PowerShell CredentialSpec module and create a credential spec:

PowerShell
Install-Module CredentialSpec

New-CredentialSpec -AccountName LinuxAppFarm # Replace 'LinuxAppFarm' with your own gMSA group

You will find the credential spec at ‘C:\ProgramData\Docker\CredentialSpecs\LinuxAppFarm_CredSpec.json’.

5.     Obtain and deploy the supported Fedora Linux 36 or newer AMI (AWS Public Cloud download).

6.     Manually join your Linux system to the Microsoft Active Directory domain using the following command:

#Install realmd and configure DNS resolver for the Active Directory domain
sudo dnf install realmd sssd oddjob oddjob-mkhomedir adcli krb5-workstation samba-common-tools -y
sudo systemctl stop systemd-resolved
sudo systemctl disable systemd-resolved
sudo unlink /etc/resolv.conf

#Add your DNS nameserver IP and domain name to the resolv.conf and save
sudo nano /etc/resolv.conf

nameserver 10.0.0.20
search corp.example.com

#Join the Linux Server to the realm/domain case-sensitive

Replace the (upper-case) realm account and domain name with the UPN of a domain user and the FQDN of your domain name.
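As a reference, a typical realmd join for the example domain used earlier looks like the following (illustrative only; the user and domain are placeholders, and you will be prompted for that user’s password):

sudo realm join -v -U 'admin@CORP.EXAMPLE.COM' corp.example.com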

Auto-join is not currently supported until the Amazon Linux 2022 distro is updated with the new rpm.

Microsoft SQL Server and Amazon RDS for Microsoft SQL Server can be added for Kerberos database authentication.

Microsoft SQL Server and Amazon RDS for Microsoft SQL Server must be joined to the AWS Managed Microsoft AD domain.

See instructions on how to connect Amazon RDS for Microsoft SQL Server to the Microsoft Active Directory domain.

For the highest recommended security, restrict Kerberos delegation on the gMSA accounts used for any service access:

Set-ADAccountControl -Identity <TestgMSA$> -TrustedForDelegation $false -TrustedToAuthForDelegation $false
Set-ADServiceAccount -Identity TestgMSA$ -Clear 'msDS-AllowedToDelegateTo'

Detailed instructions can be found here.

7.     Invoke the AddKerberosLease API with the credentialspec input as shown in the following command. This step is important to allow the credentials-fetcher to make a connection to Microsoft Active Directory. The gMSA account is then used for authentication.
Use this command with Fedora Linux only: (grpc_cli is not available on Amazon Linux)

#Replace the gMSA group name, NetBIOS name, and DNS names in the command
grpc_cli call unix:/var/credentials-fetcher/socket/credentials_fetcher.sock AddKerberosLease "credspec_contents: '{\"CmsPlugins\":[\"ActiveDirectory\"],\"DomainJoinConfig\":{\"Sid\":\"S-1-5-21-1445507628-2856671781-3529916291\",\"MachineAccountName\":\"gmsamachines\",\"Guid\":\"af602f85-d754-4eea-9fa8-fd76810485f1\",\"DnsTreeName\":\"corp.example.com\",\"DnsName\":\"corp.example.com\",\"NetBiosName\":\"DEMOCORP\"},\"ActiveDirectoryConfig\":{\"GroupManagedServiceAccounts\":[{\"Name\":\"gmsamachines\",\"Scope\":\"corp.example.com\"},{\"Name\":\"gmsamachines\",\"Scope\":\"DEMOCORP\"}]}}'"

Response example: (Note the response for use with your Docker application container)
path to kerberos ticket : /var/credentials-fetcher/krbdir/726837743cc966c7b4da/WebApp01
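To confirm the ticket was created, you can inspect the credential cache on the host with klist (installed earlier via krb5-workstation). This is an optional check that assumes the cache file inside the returned directory is named krb5cc, matching the KRB5CCNAME value used in the Dockerfile later in this post:

#Verify the Kerberos ticket on the Linux host
sudo klist -c /var/credentials-fetcher/krbdir/726837743cc966c7b4da/WebApp01/krb5cc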

8.     Invoke the DeleteKerberosLease API with the lease id as input, as shown here. The lease_id is the unique identifier associated with the request. The deleted_kerberos_file_paths in the response are the paths of the Kerberos tickets that were deleted for the corresponding gMSA accounts.
Use this command from the Linux host:

#Delete Kerberos Lease sample command
grpc_cli call unix:/var/credentials-fetcher/socket/credentials_fetcher.sock DeleteKerberosLease "lease_id: '${response_lease_id_from_add_kerberos_lease}'"

Installation of credentials-fetcher on supported Linux distros

In this basic use case for testing the credentials-fetcher rpm, the following architecture is assumed for the purposes of this blog.

An AWS Managed Microsoft Active Directory joined Linux Application Server.
An AWS Managed Microsoft Active Directory joined RDS for Microsoft SQL.
gMSA account established for account credentials in an AWS Managed Microsoft Active Directory.

Fedora Linux 36 Server setup:

Deploy the "Fedora cloud based image for AWS public cloud" located here to your AWS account.
Credentials-fetcher is packaged and included as part of the standard Fedora package repositories. Install the credentials-fetcher rpm by typing the command:

sudo dnf install credentials-fetcher -y
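As an optional check that is not part of the original steps, you can enable the daemon at boot and verify it is running before moving on; this is a generic systemd sketch:

sudo systemctl enable --now credentials-fetcher
sudo systemctl status credentials-fetcher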

How to use credentials-fetcher per scenario

In these instructions, we will demonstrate the use of credentials-fetcher with an ASP.NET application and Amazon RDS for Microsoft SQL Server. A Microsoft SQL Server container scenario will also be demonstrated as an additional use case.

Scenario 1:  Using .NET Core container application on Linux with Amazon RDS for Microsoft SQL Server backend

Figure 3 Using .NET Core container application on Linux with Amazon RDS for Microsoft SQL Server backend

Once the environment prerequisites have been met, you can install Docker and a repository in preparation for deploying a .NET Core application to a container on the Linux server or servers.

1.     Set up the repository.

Install the dnf-plugins-core package (which provides the commands to manage your DNF repositories) and set up the repository.

sudo dnf -y install dnf-plugins-core

sudo dnf config-manager \
    --add-repo \
    https://download.docker.com/linux/fedora/docker-ce.repo

2.     Install Docker Engine and verify credentials-fetcher is installed and started.

Install the latest version of Docker Engine, containerd, and Docker Compose and start the Docker daemon:

sudo dnf install docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo systemctl start docker

sudo dnf install credentials-fetcher
sudo systemctl start credentials-fetcher

Additional detailed instructions on how to install Docker Engine can be found here.

1.     Create a Kerberos ticket associated with the gMSA account, as described in step 7 (AddKerberosLease) of "Environment Setup for gMSA on Linux use cases".

Take note of the response generated by the Kerberos ticket creation.

2.     Leverage the Dockerfile for environment variables.

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-image
WORKDIR /src
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
EXPOSE 80
COPY --from=build-image /src/out ./
RUN apt-get -qq update && \
    apt-get -yqq install krb5-user && \
    apt-get -yqq clean

ENV KRB5CCNAME=/var/credentials-fetcher/krbdir/krb5cc

ENTRYPOINT ["dotnet", "WebApp.dll"]

Example ticket directory (from the AddKerberosLease response): /var/credentials-fetcher/krbdir/726837743cc966c7b4da/WebApp01

3.     Build a Docker image for your application on the Linux server:

docker build -t <your_image_name> .

4.     Run Docker with a bind mount to the Kerberos ticket. Ensure that the environment variable KRB5CCNAME (see the example Dockerfile above) points to the destination location of the bind mount inside the application container.

sudo docker run -p 80:80 -d -it --name webapp1 --mount type=bind,source=/var/credentials-fetcher/krbdir/726837743cc966c7b4da/WebApp01,target=/var/credentials-fetcher/krbdir {docker_image}
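To confirm the container can see the ticket, you can run klist inside it; the Dockerfile above installs krb5-user and sets KRB5CCNAME. This optional check assumes the mounted directory contains a cache file at the path KRB5CCNAME points to:

sudo docker exec webapp1 klist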

5.     Add Amazon RDS for Microsoft SQL Server to the gMSA group with the following commands:

#Add the AD-joined SQL RDS server to the gMSA account
Set-ADServiceAccount -Identity "gmsamachines" -PrincipalsAllowedToRetrieveManagedPassword "mssqlrdsname$"

6.     Download and install SQL Server Management Studio (SSMS) on a management Windows machine that is a member of the service account group to test the Amazon RDS for Microsoft SQL Server connection. The .NET application and the Amazon RDS for Microsoft SQL Server instance will then be able to communicate with each other.

7.     Log in to the Amazon RDS for Microsoft SQL Server instance and grant the gMSA service account the permissions required by the .NET application.

Scenario 2:  Using .NET Core container application on Linux with a Microsoft SQL Server Container

Figure 4 Using .NET Core container application on Linux with a Microsoft SQL Server Container.

As with Scenario 1, the same steps to install Docker on Linux and deploy your application will apply. The difference will be the deployment of Microsoft SQL Server in a container and the confirmation that the server operates as expected running on Linux and leveraging gMSA for authentication.

1.     As with the first scenario, install and run the credentials-fetcher on the Linux server or servers you are deploying your .NET application containers to.

2.     Deploy a Microsoft SQL Server 2022 container on one of the Linux servers in your gMSA group.

Leverage the Dockerfile example from Scenario 1 to set the KRB5CCNAME environment variable for the Microsoft SQL Server container.
Refer to Scenario 1 for details on verifying the Dockerfile KRB5CCNAME value.

Run the following command on your Linux server to install the latest Microsoft SQL Server 2022 Docker container available. Replace yourStrong(!)Password with a strong password in the command.

sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=yourStrong(!)Password" -p 1433:1433 --mount type=bind,source=/var/credentials-fetcher/krbdir/726837743cc966c7b4da/WebApp01,target=/var/credentials-fetcher/krbdir -d mcr.microsoft.com/mssql/server:2022-latest

Verify that the Microsoft SQL Server Docker container is running with the following command:

sudo docker ps
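To further confirm that SQL Server started cleanly inside the container, you can inspect its logs. This is an optional check, and <container_id> is a placeholder for the ID shown by docker ps:

#Check the SQL Server container startup logs
sudo docker logs <container_id>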

Additional reference details for deploying a Microsoft SQL Server container on Linux can be found here.

3.     Download and install SQL Server Management Studio (SSMS) on a management Windows machine that is a member of the service account group to test the Microsoft SQL Server connection.

4.     Log in to the Microsoft SQL Server instance and grant the gMSA service account the permissions required by the .NET application.

Conclusion

Linux containers have become a key modernization destination for customers running .NET workloads. gMSA for Linux containers will help organizations and the overall Microsoft administrator and developer community to access AD from applications and services hosted on Linux containers using the service account authentication model.

gMSA is a managed account that provides automatic password management, service principal name (SPN) management, and the ability to delegate management to administrators over multiple servers or instances. This unblocks a range of identity-related modernization use cases with Microsoft AD, such as connecting .NET Core applications hosted on Linux containers to SQL Server authenticating over Microsoft AD, and securely opening up access to network resources from applications running with service accounts. Based on its capabilities and usefulness to customers, credentials-fetcher is positioned to be extended into services such as Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).

This service feature extends support for non-Windows container applications that require gMSA for Microsoft AD Authentication. AWS is dedicated to continuing development and support for the credentials-fetcher daemon open source project. We believe that open source is good for everyone and we are committed to bringing the value of open source to our customers, and the operational excellence of AWS to open source communities. Contributions and feedback are welcome.


Behind the Scenes on AWS Contributions to Cloud Native Open Source Projects

Amazon Elastic Kubernetes Service (Amazon EKS) is well known in the Kubernetes community. But few realize that AWS engineers are closely involved and contributing upstream to Kubernetes and to many more cloud native open source projects.

In the past year alone, AWS contributed significantly to containerd, Cortex, etcd, Fluentd, nerdctl, Notary, OpenTelemetry, Thanos, and Tinkerbell. We employ maintainers and contributors on these projects and we will contribute more to these and other projects in the coming year. Here’s a behind-the-scenes look at our contributions and why we’re investing in the open source projects we support. You can also meet many of our contributors in the AWS booth at KubeCon Europe in Amsterdam, April 18-21, 2023 and hear from them in our virtual Container Day event 9 a.m. – 4 p.m. CEST on April 18.

“Amazon EKS is committed to open source and we are spending a lot of our cycles now focused on contributing back to the community. Kubernetes is part of a community that’s bigger than AWS and so we’re continuing to be committed to maintaining and helping that community to be successful because without it, we wouldn’t exist, either,” said Barry Cooks, Vice President, Kubernetes, at AWS and a Cloud Native Computing Foundation (CNCF) governing board member.

AWS contributes to Kubernetes and etcd

Today, AWS is heavily involved in open source, cloud native projects. Consider, for example, some of our recent key contributions to Kubernetes and etcd, the underlying data store for Kubernetes.

“We’re building the AWS cloud provider, contributing to CAPI (cluster API), and serve as part of the security response committee. We helped implement gzip optimization which improves the performance of Kubernetes clients,” said Nathan Taber who leads the product team for Kubernetes at AWS, in a keynote at KubeCon North America 2022. “With etcd we’re bringing our operational learnings from running just so much etcd at scale, back into the community.”

The AWS cloud provider for Kubernetes is the open source interface between a Kubernetes cluster and AWS service APIs. This project allows a Kubernetes cluster to provision, monitor, and remove AWS resources necessary for operation of the cluster.

As of Kubernetes 1.27, AWS has just finished a multi-year effort to migrate our legacy cloud provider out of tree to an external cloud provider. The cloud provider migration reduces binary bloat in the main kubernetes/kubernetes (k/k) repository, as well as reduces dependency complexity and the surface area for security vulnerabilities.

AWS has also built a webhook framework that allows cloud providers to host webhooks in their cloud-controller-managers, which makes certain migration tasks easier. One use case for this is helping other cloud providers to migrate the persistent volume labeller admission controllers from the API server code, which is one of the last areas of cloud provider specific code that needs to be migrated out of core Kubernetes.

“We’ve included a lot of space in our planning for upstream open source work this year,” said Nick Turner, software developer on the AWS Kubernetes team and a chair in Kubernetes SIG-cloud-provider. “Expect us to keep up our contributions to the cloud provider and the load balancer controller as well as increase our investments in the AWS IAM authenticator for Kubernetes and KMS encryption provider.”

These and other Kubernetes contributions bring value to the entire Kubernetes community as well as to the EKS service and its customers.

Since KubeCon Detroit last fall, the EKS-etcd team has contributed numerous improvements to etcd. Chao Chen contributed to the effort to improve testing mechanisms for etcd by unifying the test frameworks used by etcd tests. Baoming Wang contributed an important metric to the Kubernetes API server code base which will help catch data corruption issues early. We’ve also worked on building a linearizability test suite, made various improvements to the core etcd database and the etcd backend database Bolt-DB, contributed to documentation, made helm more resilient to etcd side transient errors, and fixed an issue with the installation script for argo-cd-helmfile.

What’s driving AWS to contribute more to cloud native open source

Like most modern companies, AWS builds many of its services with open source components. There are several business and technical reasons we do this, which we’ve outlined in an article on The New Stack about why we invest in sustainable open source. We recognize that the success of our services depends on the success of those underlying open source projects.

Given that most of the open source projects that AWS supports underpin specific services, AWS tasks all engineers working in services, regardless of their assigned sub-service teams, to contribute in any way that they can to those upstream projects.

The result is a virtuous cycle that promotes mutually beneficial growth. As AWS services grow, so too do the open source projects upon which they are based because of AWS contributions and support. Conversely, as these open source projects grow from the contributions of other companies and developers, so do the benefits to the AWS services that depend upon them.

AWS contributions focus on performance and scale

AWS contributions to open source typically come as a practical matter in the form of bug fixes, code reviews, documentation, new features, or security enhancements. Like many developers working in the open source space, AWS engineers often work to address issues that arise in the course of their day jobs and then share the fixes with the rest of the open source community. Similarly, new features for an open source project are developed by AWS engineers to expand the project’s scale or performance which in turn increases the project’s usability, stability, and overall appeal.

Because AWS has a large number of Kubernetes clusters under management, it affords AWS a unique opportunity to test the limitations of open source software and build its edges stronger and further out from its initial core. So many of the contributions that our team members do for upstream Kubernetes, etcd, containerd, and other projects center on making sure that we provide insights to the upstream community on where things break down in scaling, production, and operational readiness.

The resulting insights provide value for the entire open source community as well as our own customers.

Take, for example, the fix for a setting that was meant to reduce lag but curiously acted as a latency expander. AWS engineer Shyam Jeedigunta was looking at the logs and metrics collected from thousands of production EKS clusters. He determined that Gzip compression is enabled inside the Kubernetes API server to reduce the demand on network bandwidth and to decrease latency. However, the compression was actually increasing the latency for large list requests made by clients to the Kubernetes API server. Shyam, who is also co-chair of the Kubernetes scalability special interest group (SIG), took a deep dive into the issue to investigate whether a particular compression level created the problem and, if so, whether the compression level could be reduced. Could Gzip compression be disabled entirely? What impact would that have on latency and network bandwidth?

Answers to questions like this one lead to contributions upstream in etcd and core Kubernetes from AWS service teams. Customers and others often report these kinds of issues to the project as well, but the nature of the problem isn’t clear until it’s viewed on 1,000 nodes and 200,000 objects of a certain kind. AWS engineers diagnose what’s going on, put together troubleshooting information, and collate information into proposals on how to fix the problem(s) to upstream to Kubernetes. AWS likes to spearhead fixing issues that arise from running the projects at scale.

Key AWS contributions

AWS contributes to many Kubernetes sub projects and SIGs. For example, Micah Hausler and Sri Saran Balaji Vellore Rajakumar serve on the Kubernetes Security Response Committee (SRC), Davanum Srinivas (Dims) chairs SIG-Architecture and SIG-k8s-infra, and Nick Turner is a chair in SIG-cloud-provider.  Key contributions have gone into projects including containerd, Cortex, cdk8s, CNI, nerdctl and Prometheus. Innovations have also been substantial and include TorchServe, improved ARM support through AWS Graviton, and the Virtual GPU plugin. However, this is not an exhaustive or complete list of AWS contributions and innovations in the cloud native community.

On containerd, for example, AWS employs two maintainers who contribute features and help ensure the project’s general health and security. Key contributions from AWS engineers to the containerd project include OpenTelemetry integration in the 1.7.0 release, improved tracing, and improved fuzzing integration.

“It’s been awesome to see the growth on the container runtime team here at AWS these past few years. I love to see the eagerness to learn not just *how* to contribute, but how to do it well and really benefit the broader community,” said Phil Estes, a principal engineer at AWS and a containerd maintainer.

Nerdctl, a Docker-compatible CLI for containerd and a containerd sub-project, is used by other open source projects Lima, Finch, and Rancher Desktop. AWS engineers significantly improved nerdctl’s compose support by adding 11 out of 13 missing compose commands. We enhanced nerdctl’s image signing/verification support by contributing cosign support for nerdctl compose, and notation support for nerdctl. And engineer Jin Dong recently became the first reviewer for the project from AWS.

AWS services are also standardizing on OpenTelemetry, a set of open source tools and standards for collecting metrics, logs, and traces to measure application performance. AWS Distro for OpenTelemetry (ADOT), OpenSearch, and CloudWatch are all building on OpenTelemetry and contribute back to the upstream project. All ADOT code is 100% open source and contributed upstream. Key contributions include: adding functionality to upstream observability components such as OpenTelemetry language SDKs, collectors, and agents.

“Amazon is the fourth largest contributor to OpenTelemetry with a dedicated maintainer and many contributors working on the project. A key contribution has been improving collector and metric stability, including improved Prometheus interoperability with OpenTelemetry,” said Taber.

A fourth example is Cortex where AWS is the top supporter of the project and employs three maintainers. As AWS runs this project at scale, engineers have the opportunity to identify and fix scaling cliffs before they become a problem for the rest of the community. Some of the key contributions are new features and performance improvements. Examples include partition compactor, Ring DynamoDB Multikey KV, out of order samples ingestion, snappy-block gRPC compression, ARM images, and Thanos PromQL engine integration.

We have also contributed bug fixes to Thanos, a tool for setting up highly available Prometheus instances with long term storage. Thanos is a CNCF incubating project which Cortex depends on. We participated in the development of the new Thanos PromQL engine and open sourced a tool that could use fuzzing for correctness testing which has already caught a few bugs.

AWS employs four maintainers on Tinkerbell, a cloud native open source bare metal provisioning engine for EKS Anywhere and a CNCF Sandbox project. Key contributions include organizing the project roadmap, VLAN support, a Kubernetes native backend, out-of-band management Kubernetes controller, Helm Chart deployment, and Cluster API provider updates.

“Our team has done a lot of work to update the Tinkerbell backend from Postgres to native Kubernetes,” said Taber.

AWS employs three maintainers in Notation, a sub project of Notary under the CNCF, and is the third largest code contributor to Notary. Notation enables the generation of cryptographic signatures for container images so users can verify that they come from a trusted source or process. AWS founded the sub project with other contributors to come up with specifications for signature format, generation, verification, and revocation. As part of this work we also defined a process for evaluating signature envelope formats like COSE ensuring that they met a high security bar before they were used in Notation.

AWS employees have either written or reviewed the majority of code contributions for the core Notation libraries and a CLI. AWS also employs a maintainer to Ratify so Kubernetes users can easily enable policies for signature verification with their existing admissions controllers. Similarly we also employ a maintainer to ORAS so signatures can easily be pushed to OCI registries. Notation enables users to define granular trust policies for defining which sources they want to trust, balance deployment safety and security needs, and flexibility on secure signing key storage options.

We have contributed to many other open source projects as well, including Crossplane, for which AWS added support for EKS IRSA in the China region and fixed Amazon Route 53 wildcard support, and Backstage, with AWS Proton and AWS Code Suite (AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy).

“We’re very excited about doing more development in the open, sharing that with our customers, and working directly in some cases with customers on their needs in open source projects and working together to make the community stronger in the Kubernetes space,” Cooks said.

AWS is open

We want to hear from you. AWS engineers are open to helping community members through collaboration and contribution opportunities. Tell us how we can help meet your needs.

AWS engineers, solutions architects, and product managers are hanging out on the Kubernetes community and the CNCF community Slack channels. Channels where you can reach out to us include the provider AWS channel and Karpenter channel, and the AWS controllers for Kubernetes channel on the Kubernetes Slack.

Find us and tell us what you'd like us to work on, or let us know if there is a particular issue you found in one of these upstream projects that you think our engineers can help move the needle on. Come find us and talk to us in the CNCF's AWS Slack channel and join us for our virtual Container Day on April 18, before KubeCon EU.


Enabling DevSecOps with Amazon CodeCatalyst

DevSecOps is the practice of integrating security testing at every stage of the software development process. Amazon CodeCatalyst includes tools that encourage collaboration between developers, security specialists, and operations teams to build software that is both efficient and secure. DevSecOps brings cultural transformation that makes security a shared responsibility for everyone who is building the software.

Introduction

In a prior post in this series, Maintaining Code Quality with Amazon CodeCatalyst Reports, I discussed how developers can quickly configure test cases, run unit tests, set up code coverage, and generate reports using CodeCatalyst's workflow actions. This was done through the lens of Maxine, the main character of Gene Kim's The Unicorn Project. In the story, Maxine meets Purna, the QA and Release Manager, and Shannon, a Security Engineer. Everyone shares the same goal: integrate security into every stage of the Software Development Lifecycle (SDLC) to ensure secure code deployments. The issue Maxine faces is that security testing is not automated, and the separation of responsibilities by role leads to project stagnation.

In this post, I will focus on how DevSecOps teams can use Amazon CodeCatalyst to easily integrate and automate security using CodeCatalyst workflows. I’ll start by checking for vulnerabilities using OWASP dependency checker and Mend SCA. Then, I’ll conduct Static Analysis (SA) of source code using Pylint. I will also outline how DevSecOps teams can influence the outcome of a build by defining success criteria for Software Composition Analysis (SCA) and Static Analysis actions in the workflow. Last, I’ll show you how to gain insights from CodeCatalyst reports and surface potential issues to development teams through CodeCatalyst Issues for faster remediation.

Prerequisites

If you would like to follow along with this walkthrough, you will need to:

Have an AWS Builder ID for signing in to CodeCatalyst.
Belong to a CodeCatalyst space and have the Space administrator role assigned to you in that space. For more information, see Creating a space in CodeCatalyst, Managing members of your space, and Space administrator role.
Have an AWS account associated with your space and have the IAM role in that account. For more information about the role and role policy, see Creating a CodeCatalyst service role.
Have a Mend Account (required for the optional Mend Section)

Walkthrough

To follow along, you can re-use a project you created previously, or you can refer to a previous post that walks through creating a project using the Modern Three-tier Web Application blueprint. Blueprints provide sample code and CI/CD workflows to help you get started easily across different combinations of programming languages and architectures. The back-end code for this project is written in Python and the front-end code is written in JavaScript.

Figure 1. Modern Three-tier Web Application architecture including a presentation, application and data layer

Once the project is deployed, CodeCatalyst opens the project overview. Select CI/CD → Workflows → ApplicationDeploymentPipeline to view the current workflow.

Figure 2. ApplicationDeploymentPipeline

Modern applications use a wide array of open-source dependencies to speed up feature development, but sometimes these dependencies have unknown exploits within them. As a DevSecOps engineer, I can easily edit this workflow to scan for those vulnerable dependencies to ensure I’m delivering secure code.

Software Composition Analysis (SCA)

Software composition analysis (SCA) is a practice in the fields of information technology and software engineering for analyzing custom-built software applications to detect embedded open-source software and determine whether those components are up to date, contain security flaws, or carry licensing requirements. For this walkthrough, I'll highlight two SCA methods:

I’ll use the open-source OWASP Dependency-Check tool to scan for vulnerable dependencies in my application. It does this by determining if there is a Common Platform Enumeration (CPE) identifier for a given dependency. If found, it will generate a report linking to the associated Common Vulnerabilities and Exposures (CVE) entries.
I’ll use CodeCatalyst’s Mend SCA Action to integrate Mend for SCA into my CI/CD workflow.

Note that developers can replace either of these with a tool of their choice so long as that tool outputs an SCA report format supported by CodeCatalyst.

Software Composition Analysis using OWASP Dependency Checker

To get started, I select Edit at the top-right of the workflows tab. By default, CodeCatalyst opens the YAML tab. I change to the Visual tab to visually edit the workflow and add a CodeCatalyst Action by selecting “+Actions” (1) and then “+” (2). Next select the Configuration (3) tab and edit the Action Name (4). Make sure to select the check mark after you’re done.

Figure 3. New Action Initial Configuration

Scroll down in the Configuration tab to Shell commands. Here, copy and paste the following command snippets that run when the action is invoked.

#Set Source Repo Directory to variable
- Run: sourceRepositoryDirectory=$(pwd)
#Install Node Dependencies
- Run: cd web && npm install
#Install known vulnerable dependency (This is for Demonstrative Purposes Only)
- Run: npm install [email protected]
#Go to parent directory and download OWASP dependency-check CLI tool
- Run: cd .. && wget https://github.com/jeremylong/DependencyCheck/releases/download/v8.1.2/dependency-check-8.1.2-release.zip
#Unzip file
- Run: unzip dependency-check-8.1.2-release.zip
#Navigate to dependency-check script location
- Run: cd dependency-check/bin
#Execute dependency-check shell script. Outputs in SARIF format
- Run: ./dependency-check.sh --scan $sourceRepositoryDirectory/web -o $sourceRepositoryDirectory/web/vulnerabilities -f SARIF --disableYarnAudit

These commands will install the node dependencies, download the OWASP dependency-check tool, and run it to generate findings in a SARIF file. Note the third command, which installs a module with known vulnerabilities (This is for demonstrative purposes only).
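If you want a quick look at the raw findings outside of the CodeCatalyst report view, you can inspect the SARIF output directly. This optional sketch assumes the jq utility is available wherever you run it and uses the report file name referenced later in this post:

jq '[.runs[].results[]] | length' web/vulnerabilities/dependency-check-report.sarif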

On the Outputs (1) tab, I change the Report prefix (2) to owasp-frontend. Then I set the Success criteria (3) for Vulnerabilities to 0 – Critical (4). This configuration will stop the workflow if any critical vulnerabilities are found.

Figure 4: owasp-dependecy-check-frontend

It is a best practice to scan for vulnerable dependencies before deploying resources so I’ll set my owasp-dependency-check-frontend action as the first step in the workflow. Otherwise, I might accidentally deploy vulnerable code. To do this, I select the Build (1) action group and set the Depends on (2) dropdown to my owasp-dependency-check-frontend action. Now, my action will run before any resources are built and deployed to my AWS environment. To save my changes and run the workflow, I select Commit (3) and provide a commit message.

Figure 5: Setting OWASP as the First Workflow Action

Amazon CodeCatalyst shows me the state of the workflow run in real-time. After the workflow completes, I see that the action has entered a failed state. If I were a QA Manager like Purna from the Unicorn Project, I would want to see why the action failed. On the lefthand navigation bar, I select the Reports → owasp-frontend-web/vulnerabilities/dependency-check-report.sarif for more details.

Figure 6: SCA Report Overview

This report view provides metadata such as the workflow name, run ID, action name, repository, and the commit ID. I can also see the report status, a bar graph of vulnerabilities grouped by severity, the number of libraries scanned, and a Findings panel. I had set the success criteria for this report to 0 – Critical so it failed because 1 Critical vulnerability was found. If I select a specific finding ID, I can learn more about that specific finding and even view it on the National Vulnerability Database website.

Figure 7: Critical Vulnerability CVE Finding

Now I can raise this issue with the development team through the Issues board on the left-hand navigation panel. See this previous post to learn more about how teams can collaborate in CodeCatalyst.

Note: Let's remove the [email protected] install command from the owasp-dependency-check-frontend action's list of commands to allow the workflow to proceed and finish successfully.

Software Composition Analysis using Mend

Mend, formerly known as WhiteSource, is an application security company built to secure today’s digital world. Mend secures all aspects of software, providing automated remediation, prevention, and protection from problem to solution versus only detection and suggested fixes. Find more information about Mend here.

Mend Software Composition Analysis (SCA) can be run as an action within Amazon CodeCatalyst CI/CD workflows, making it easy for developers to perform open-source software vulnerability detection when building and deploying their software projects. This makes it easier for development teams to quickly build and deliver secure applications on AWS.

Getting started with CodeCatalyst and Mend is very easy. After logging in to my Mend Account, I need to create a new Mend Product named Amazon-CodeCatalyst and a Project named mythical-misfits.

Next, I navigate back to my existing workflow in CodeCatalyst and add a new action. However, this time I’ll select the Mend SCA action.

Figure 8: Mend Action

All I need to do now is go to the Configuration tab and set the following values:

Mend Project Name: mythical-misfits
Mend Product Name: Amazon-CodeCatalyst
Mend License Key: You can get the License Key from your Mend account in the CI/CD Integration section. You can get more information from here.

Figure 9: Mend Action Configuration

Then I commit the changes and return to Mend.

Figure 10: Mend Console

After successful execution, Mend will automatically update and show a report similar to the screenshot above. It contains useful information about this project like vulnerabilities, licenses, policy violations, etc. To learn more about the various capabilities of Mend SCA, see the documentation here.

Static Analysis (SA)

Static analysis, also called static code analysis, is a method of debugging that is done by examining the code without executing the program. The process provides an understanding of the code structure and can help ensure that the code adheres to industry standards. Static analysis is used in software engineering by software development and quality assurance teams.

Currently, my workflow does not do static analysis. As a DevSecOps engineer, I can add this as a step to the workflow. For this walkthrough, I’ll create an action that uses Pylint to scan my Python source code for Static Analysis. Note that you can also use other static analysis tools or a GitHub Action like SuperLinter, as covered in this previous post.

Static Analysis using Pylint

After navigating back to CI/CD → Workflows → ApplicationDeploymentPipeline and selecting Edit, I create a new test action. I change the action name to pylint and set the Configuration tab to run the following shell commands:

- Run: pip install pylint
- Run: pylint $PWD --recursive=y --output-format=json:pylint-report.json --exit-zero

On the Outputs tab, I change the Report prefix to pylint. Then I set the Success criteria for Static analysis as shown in the figure below:

Figure 11: Static Analysis Report Configuration

Because static analysis is typically run before any execution, the pylint or OWASP action should be the very first action in the workflow. For the sake of this blog, we will use pylint. I select the OWASP or Mend actions I created before, set the Depends on dropdown to my pylint action, and commit the changes. Once the workflow finishes, I can go to Reports > pylint-pylint-report.json for more details.

Figure 12: Pylint Static Analysis Report

The Report status is Failed because more than 1 high-severity or above bug was detected. On the Results tab I can view each finding in greater detail, including the severity, type of finding, message from the linter, and which specific line the error originates from.

Cleanup

If you have been following along with this workflow, you should delete the resources you deployed so you do not continue to incur charges. First, delete the two stacks that AWS Cloud Development Kit (CDK) deployed using the AWS CloudFormation console in the AWS account you associated when you launched the blueprint. These stacks will have names like mysfitsXXXXXWebStack and mysfitsXXXXXAppStack. Second, delete the project from CodeCatalyst by navigating to Project settings and choosing Delete project.
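If you prefer the AWS CLI over the console for the first step, the equivalent commands look like the following; the exact stack names will vary per deployment, so replace them with the names shown in your CloudFormation console:

aws cloudformation delete-stack --stack-name mysfitsXXXXXWebStack
aws cloudformation delete-stack --stack-name mysfitsXXXXXAppStack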

Conclusion

In this post, I demonstrated how DevSecOps teams can easily integrate security into Amazon CodeCatalyst workflows to automate security testing by checking for vulnerabilities using OWASP dependency checker or Mend through Software Composition Analysis (SCA) of dependencies. I also outlined how DevSecOps teams can configure Static Analysis (SA) reports and use success criteria to influence the outcome of a workflow action.

Imtranur Rahman

Imtranur Rahman is an experienced Sr. Solutions Architect on the WWPS team with 14+ years of experience. Imtranur works with large AWS Global SI partners and helps them build their cloud strategy and broad adoption of Amazon's cloud computing platform. Imtranur specializes in containers, DevSecOps, GitOps, microservices-based applications, hybrid application solutions, and application modernization, and loves innovating on behalf of his customers. He is highly customer obsessed and takes pride in providing the best solutions through his extensive expertise.

Wasay Mabood

Wasay is a Partner Solutions Architect based out of New York. He works primarily with AWS Partners on migration, training, and compliance efforts but also dabbles in web development. When he’s not working with customers, he enjoys window-shopping, lounging around at home, and experimenting with new ideas.

Publish Amazon DevOps Guru Insights to ServiceNow for Incident Management

Amazon DevOps Guru is a fully managed AIOps service that uses machine learning (ML) to quickly identify when applications are behaving outside of their normal operating patterns and generates insights from its findings. These insights can be used to alert on-call teams to react to anomalies in mission-critical workloads. Many customers already use incident management systems like ServiceNow to identify, analyze, and resolve critical incidents that could impact business operations. ServiceNow is an IT Service Management (ITSM) platform that enables enterprise organizations to improve operational efficiencies. Among its products is Incident Management, which provides customers a single pane view and allows them to restore services and resolve issues quickly.

This blog post will show you how to integrate Amazon DevOps Guru insights with ServiceNow to automatically create and manage Incidents. We will demonstrate how an insight generated by Amazon DevOps Guru for an anomaly can automatically create a ServiceNow Incident, update the incident when there are new anomalies or recommendations from Amazon DevOps Guru, and close the ServiceNow Incident once the insight is resolved by Amazon DevOps Guru.

Overview of solution

This solution uses a combination of event-driven architecture and serverless technologies to integrate DevOps Guru insights with ServiceNow. When an Amazon DevOps Guru insight is created, an Amazon EventBridge rule captures the insight as an event and routes it to an AWS Lambda function target. The Lambda function interacts with ServiceNow using a REST API to create, update, and close an incident for the corresponding DevOps Guru events captured by EventBridge.
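For context, incident creation in ServiceNow happens through its Table API. The following curl sketch only illustrates that API call; it is not the exact request the Lambda function sends, and the instance name, credentials, and field values are placeholders:

#Create a ServiceNow incident via the Table API (placeholders throughout)
curl -s -u "sn_user:sn_password" \
  -H "Content-Type: application/json" -H "Accept: application/json" \
  -X POST "https://<your-instance>.service-now.com/api/now/table/incident" \
  -d '{"short_description":"DevOps Guru: new insight opened","description":"Insight details from the EventBridge event"}'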

The EventBridge rule can be customized to capture all DevOps Guru insights or narrowed down to specific insights. In this blog, we will be capturing all DevOps Guru insights and will be performing actions on ServiceNow for the below DevOps Guru events:

DevOps Guru New Insight Open
DevOps Guru New Anomaly Association
DevOps Guru Insight Severity Upgraded
DevOps Guru New Recommendation Created
DevOps Guru Insight Closed
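The EventBridge rule that matches these events is created for you by the SAM template deployed later in this post. Purely for illustration, an equivalent rule that captures all DevOps Guru events could be created from the CLI as follows; the rule name is an arbitrary example:

aws events put-rule \
  --name devops-guru-to-servicenow \
  --event-pattern '{"source":["aws.devops-guru"]}'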

Figure 1: Amazon DevOps Guru Integration with ServiceNow using Amazon EventBridge and AWS Lambda

Solution Implementation Steps

Prerequisites

Before you deploy the solution and proceed with this walkthrough, you should have the following prerequisites:

Gather the hostname for your ServiceNow cloud instance. If you do not have a ServiceNow instance, you can request a developer instance through the ServiceNow Developer page.
Gather the credentials of a ServiceNow user who has permissions to make REST API calls to ServiceNow, specifically to the Table API. If you don’t have a user provisioned, you can create one by following the steps in Getting started with the REST API in the ServiceNow documentation.
Create a secret in Secrets Manager to store the ServiceNow credentials created in the previous step. You can choose any name for the secret, but it should have two key/value pairs, one for username and the other for password (see the example CLI command after this list).
Enable DevOps Guru for your applications by following these steps or you can follow this blog to deploy a sample serverless application that can be used to generate DevOps Guru insights for anomalies detected in the application.
Install and set up SAM CLI – Install the SAM CLI

Download and set up Java. The version should match the runtime that you defined in the SAM template.yaml Serverless function configuration – Install the Java SE Development Kit 11

Maven – Install Maven

Docker – Install Docker community edition
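As a sketch of the Secrets Manager prerequisite above, the secret can be created from the CLI as follows; the secret name and the credential values are placeholders, and the key names must remain username and password:

aws secretsmanager create-secret \
  --name ServiceNowCredentials \
  --secret-string '{"username":"<servicenow_user>","password":"<servicenow_password>"}'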

You have two options to deploy this solution: one is to deploy from the AWS Serverless Application Repository and the other is to deploy from the Command Line Interface (CLI).

Option 1: Deploy sample ServiceNow Connector App from AWS Serverless Repository

The DevOps Guru ServiceNow Connector application is available in the AWS Serverless Application Repository, which is a managed repository for serverless applications. The application is packaged with an AWS Serverless Application Model (SAM) template, a definition of the AWS resources used, and a link to the source code.

Follow the steps below to quickly deploy this serverless application in your AWS account:

Log in to the AWS Management Console of the account to which you plan to deploy this solution.
Go to the DevOps Guru ServiceNow Connector application in the AWS Serverless Application Repository and click on "Deploy".

Figure 2: Deploy solution through AWS Serverless Repository

The Lambda application deployment screen will be displayed, where you can enter the ServiceNow hostname (do not include the https prefix) and the Secret Name you created in the prerequisite steps. Click on the "Deploy" button.

Figure 3: AWS Lambda Application Settings

After successful deployment the AWS Lambda Application page will display the “Create complete” status for the serverlessrepo-DevOps-Guru-ServiceNow-Connector application. The CloudFormation template creates four resources:

Lambda function, which has the logic to integrate with ServiceNow
EventBridge rule for the DevOps Guru insights
Lambda permission
IAM role

5.     Now you can skip Option 2 and follow the steps in the “Test the Solution” section to trigger some DevOps Guru insights and validate that the incidents are created and updated in ServiceNow.

Option 2: Build and Deploy sample ServiceNow Connector App using AWS SAM Command Line Interface

As you have seen above, you can directly deploy the sample serverless application from the Serverless Repository with one click deployment. Alternatively, you can choose to clone the github source repository and deploy using the SAM CLI from your terminal.

The Serverless Application Model Command Line Interface (SAM CLI) is an extension of the AWS CLI that adds functionality for building and testing serverless applications. The CLI provides commands that enable you to verify that AWS SAM template files are written according to the specification, invoke Lambda functions locally, step-through debug Lambda functions, package and deploy serverless applications to the AWS Cloud, and so on. For details about how to use the AWS SAM CLI, including the full AWS SAM CLI Command Reference, see AWS SAM reference – AWS Serverless Application Model.

Before you proceed, make sure you have completed the Prerequisites section in the beginning which should set up the AWS SAM CLI, Maven and Java on your local terminal. You also need to install and set up Docker to run your functions in an Amazon Linux environment that matches Lambda.

Follow the steps below to build and deploy this serverless application using AWS SAM CLI in your AWS account:

Clone the source code from the github repo

$ git clone https://github.com/aws-samples/amazon-devops-guru-connector-servicenow.git

Before you build the resources defined in the SAM template, you can use the below validate command which will run cfn-lint validations on your SAM JSON/YAML template

$ sam validate --lint --template template.yaml

3.     Build the application with SAM CLI

$ cd amazon-devops-guru-connector-servicenow
$ sam build

If everything is set up correctly, you should have a success message like shown below:

Build Succeeded

Built Artifacts : .aws-sam/build
Built Template : .aws-sam/build/template.yaml

Commands you can use next
=========================
[*] Validate SAM template: sam validate
[*] Invoke Function: sam local invoke
[*] Test Function in the Cloud: sam sync --stack-name {{stack-name}} --watch
[*] Deploy: sam deploy --guided

4.  Deploy the application with SAM CLI

$ sam deploy --guided

This command will package and deploy your application to AWS, with a series of prompts that you should respond to as shown below:

Stack Name: The name of the stack to deploy to CloudFormation. This should be unique to your account and region, and a good starting point would be something matching your project name – amazon-devops-guru-connector-servicenow

AWS Region: The AWS region you want to deploy your application to.

Parameter ServiceNowHost []: The ServiceNow host name/instance URL you set up. Example: dev92031.service-now.com

Parameter SecretName []: The secret name that you set up for ServiceNow credentials in the Prerequisites.

Confirm changes before deploy: If set to yes, any change sets will be shown to you before execution for manual review. If set to no, the AWS SAM CLI will automatically deploy application changes.

Allow SAM CLI IAM role creation: Many AWS SAM templates, including this example, create AWS IAM roles required for the AWS Lambda function(s) included to access AWS services. By default, these are scoped down to minimum required permissions. To deploy an AWS CloudFormation stack which creates or modifies IAM roles, the CAPABILITY_IAM value for capabilities must be provided. If permission isn't provided through this prompt, to deploy this example you must explicitly pass --capabilities CAPABILITY_IAM to the sam deploy command.

Disable rollback [y/N]: If set to Y, preserves the state of previously provisioned resources when an operation fails.

Save arguments to configuration file (samconfig.toml): If set to yes, your choices will be saved to a configuration file inside the project, so that in the future you can just re-run sam deploy without parameters to deploy changes to your application.

After you enter your parameters, you should see something like this if you have provided Y to view and confirm ChangeSets. Proceed here by providing ‘Y’ for deploying the resources.

Initiating deployment
=====================
Uploading to amazon-devops-guru-connector-servicenow/46bb4841f8f37fd41d3f40f86f31c4d7.template 1918 / 1918 (100.00%)

Waiting for changeset to be created..
CloudFormation stack changeset
—————————————————————————————————————————————————–
Operation LogicalResourceId ResourceType Replacement
—————————————————————————————————————————————————–
+ Add FunctionsDevOpsGuruPermission AWS::Lambda::Permission N/A
+ Add FunctionsDevOpsGuru AWS::Events::Rule N/A
+ Add FunctionsRole AWS::IAM::Role N/A
+ Add Functions AWS::Lambda::Function N/A
—————————————————————————————————————————————————–

Changeset created successfully. arn:aws:cloudformation:us-east-1:123456789012:changeSet/samcli-deploy1669232233/7c97b7f5-369d-400d-89cd-ebabefaa0b57

Previewing CloudFormation changeset before deployment
======================================================
Deploy this changeset? [y/N]:

Once the deployment succeeds, you should be able to see the successful creation of your resources

CloudFormation events from stack operations (refresh every 0.5 seconds)
—————————————————————————————————————————————————–
ResourceStatus ResourceType LogicalResourceId ResourceStatusReason
—————————————————————————————————————————————————–
CREATE_IN_PROGRESS AWS::CloudFormation::Stack amazon-devops-guru-connector- User Initiated
servicenow
CREATE_IN_PROGRESS AWS::IAM::Role FunctionsRole –
CREATE_IN_PROGRESS AWS::IAM::Role FunctionsRole Resource creation Initiated
CREATE_COMPLETE AWS::IAM::Role FunctionsRole –
CREATE_IN_PROGRESS AWS::Lambda::Function Functions –
CREATE_IN_PROGRESS AWS::Lambda::Function Functions Resource creation Initiated
CREATE_COMPLETE AWS::Lambda::Function Functions –
CREATE_IN_PROGRESS AWS::Events::Rule FunctionsDevOpsGuru –
CREATE_IN_PROGRESS AWS::Events::Rule FunctionsDevOpsGuru Resource creation Initiated
CREATE_COMPLETE AWS::Events::Rule FunctionsDevOpsGuru –
CREATE_IN_PROGRESS AWS::Lambda::Permission FunctionsDevOpsGuruPermission –
CREATE_IN_PROGRESS AWS::Lambda::Permission FunctionsDevOpsGuruPermission Resource creation Initiated
CREATE_COMPLETE AWS::Lambda::Permission FunctionsDevOpsGuruPermission –
CREATE_COMPLETE AWS::CloudFormation::Stack amazon-devops-guru-connector- –
servicenow
—————————————————————————————————————————————————–

Successfully created/updated stack – amazon-devops-guru-connector-servicenow in us-east-1

You can also use the below command to list the resources deployed by passing in the stack name.

$ sam list resources --stack-name amazon-devops-guru-connector-servicenow

You can also choose to test and debug your function locally with sample events using the SAM CLI local functionality. Test a single function by invoking it directly with a test event. An event is a JSON document that represents the input that the function receives from the event source. Refer to the Invoking Lambda functions locally – AWS Serverless Application Model link here for more details.

Follow the steps below to test the Lambda function locally with the SAM CLI. You have to create an env.json file with the correct values for your ServiceNow host and the Secrets Manager secret name that was created in the previous step.

Make sure you have created the AWS Secrets Manager secret with the desired name as mentioned in the prerequisites, which should be used here for SECRET_NAME.
Create env.json as below, replacing the values for SERVICE_NOW_HOST and SECRET_NAME with your real values. These will be set as the local Lambda execution environment variables.

{"Parameters": {"SERVICE_NOW_HOST": "SNOW_HOST", "SECRET_NAME": "SNOW_CREDS"}}

Run the command below to invoke the Lambda function locally with a sample DevOps Guru payload. Remember that for this to work, you should have Docker running and the secret created in your AWS account.

$ sam local invoke Functions --event Functions/src/test/Events/CreateIncident.json --env-vars Functions/src/test/Events/env.json

Once you are done with the above steps, move on to “Test the Solution” section below to trigger sample DevOps Guru insights and validate that the incidents are created and updated in ServiceNow.

Test the Solution

To test the solution, we will simulate a DevOps Guru insight. You can also simulate an insight by following the steps in this blog. After an anomaly is detected in the application, DevOps Guru creates an insight as seen below.

Figure 4: DevOps Guru Insight created for anomalous behavior

For the DevOps Guru insight shown above, a corresponding incident is automatically created in ServiceNow as shown below. In addition to the incident creation, any new anomalies and recommendations from DevOps Guru are also associated with the incident.

Figure 5: Corresponding ServiceNow Incident is created for the DevOps Guru Insight

When the anomalous behavior that generated the DevOps Guru insight is resolved, DevOps Guru automatically closes the insight. The corresponding ServiceNow incident that was created for the insight is also closed, as seen below.

Figure 6: ServiceNow Incident created for DevOps Guru Insight is resolved due to insight closure

Cleaning up

To avoid incurring future charges, delete the resources.

To delete the sample application that you created, use the AWS CLI command below and pass the stack name you provided in the sam deploy step.

$ aws cloudformation delete-stack --stack-name amazon-devops-guru-connector-servicenow

You could also use the AWS CloudFormation Console to delete the stack:

Figure 7: AWS Stack Console with Delete action

Conclusion

This blog post showcased how DevOps Guru continuously monitors resources in a particular region of your AWS account, automatically detects operational issues, predicts impending resource exhaustion, details the likely cause, and recommends remediation actions. The post described a custom solution using a serverless integration pattern with AWS Lambda and Amazon EventBridge that integrates DevOps Guru insights with ServiceNow, a widely used ITSM and change management tool, streamlining Service Management governance and oversight of AWS services. Customers who use ServiceNow can apply this solution to improve operational efficiency and get customized insights and real-time incident alerts and management directly from DevOps Guru, providing a single pane of glass to restore services and systems quickly.

This solution was created to help customers who already use ServiceNow Incident Management, if you are already using Incident Manager from AWS Systems Manager, check out how that works with Amazon DevOps Guru here.

To learn more about Amazon DevOps Guru, join us for a free hands-on Immersion Day. Events are virtual and hosted at three global time zones. Register here: April 12th.

About the authors:

Abdullahi Olaoye

Abdullahi is a Senior Cloud Infrastructure Architect at AWS Professional Services where he works with enterprise customers to design and build cloud solutions that solve business challenges. When he’s not working, he enjoys travelling, watching documentaries and listening to history podcasts.

Sreenivas Ganesan

Sreenivas Ganesan is a Sr. DevOps Consultant at AWS experienced in architecting and delivering modernized DevOps solutions for enterprise customers in their journey to AWS Cloud, primarily focused on Infrastructure automation, Security and Compliance, Management and Governance, Provisioning and Orchestration. Outside of work, he enjoys watching new TV series, soccer and spending time with his family outdoors.

Mohan Udyavar

Mohan Udyavar is a Principal Technical Account Manager in the Enterprise Support organization of AWS advising customers in successfully migrating and operating their workloads on AWS. He is primarily focused on the Automotive industry providing prescriptive guidance to customers helping them improve the resilience and operational excellence posture of mission-critical applications. Outside of work, he loves cooking and working on tech projects with his son.

Custom CRM System: Benefits, Requirements & Cost of Development

Do you want to investigate the potential of having a custom CRM system for your business? What are the characteristics of custom CRM systems? Or are you curious about the cost and what it would take to build one?

Customer relationship management (CRM) systems are widely available to businesses today. Because one-size-fits-all software rarely matches every requirement, custom software development is often the best choice for building and running a system from the ground up. Custom software enables businesses to place greater emphasis on the people who are part of the organization, including employees, vendors, clients, and service users. The only way to guarantee that the software precisely meets the company's demands is to create custom CRM software.

A custom CRM can help you ensure that your business takes advantage of all opportunities to engage, convert, and retain clients. Regardless of the size of your business, a custom CRM system can streamline your operations, improve your interactions with current clients, and elicit new leads and business opportunities.

This article will help you understand the advantages of a custom CRM system, the implementation requirements, and the development costs. You’ll also be able to understand why a customized CRM system is perfect for your business.

What is a Custom CRM System?

CRM stands for Customer Relationship Management and refers to all strategies, techniques, tools, and technologies used by enterprises for developing, retaining, and acquiring customers.

Off-the-shelf (OTS) CRM is ready-to-use software that businesses can buy and use right away without modification. It typically includes modules for tracking sales opportunities, managing marketing campaigns, and setting up customer service workflows.

Custom CRM systems can foster customer loyalty and automate several processes, saving businesses time and money. The main purpose of CRM software is to allow salespeople and marketers to better manage and analyze relationships with the business’s customers and potential customers.

Custom CRM vs. Off-the-Shelf CRM: What are the differences?

The biggest difference between a custom CRM and an off-the-shelf CRM is flexibility. An off-the-shelf CRM comes already built and ready to use, whereas a custom CRM is created specifically for the requirements of your company. Off-the-shelf CRMs may offer features that only partially fit, while custom CRMs can be designed to satisfy the particular needs of the business. Custom CRMs do, however, typically require more technical expertise and effort to set up and run than off-the-shelf CRMs.

With a custom CRM, businesses can tailor the application to their unique needs and requirements, ensuring it is appropriate for their particular industry. Custom CRMs also give businesses the opportunity to integrate the application with their existing internal hardware and software, creating a seamless user experience. Off-the-shelf CRMs, on the other hand, cannot offer the same range of customization and integration choices.

Usage of Custom CRM

With a customized CRM, any department in your company – from sales and customer service to business development, recruiting, and marketing – can benefit from improved methods of managing external connections and activities that are integral to success. As no two businesses are alike, a custom CRM is the only way to get precisely what you need without any unnecessary extras or having to make do with features that may not have been included in an off-the-shelf solution.

When it comes to investing in a CRM, you have three choices: 

buying an existing system, 
creating one with an in-house team, 
or outsourcing the development of a custom CRM. 

When you're considering a CRM investment, it's important to weigh all of your options. Buying an off-the-shelf CRM system may be the cheapest upfront, but it might not fit the unique requirements of your company. Developing a custom CRM in-house can be a good option, but it often takes significant time and resources and can cost over $50,000. Outsourcing the development of a custom CRM is often the more cost-effective and efficient route. These trade-offs make it crucial to consider all of your alternatives thoroughly before making a choice.

Why is custom CRM important for your business? 

A custom CRM is important for your business because it offers a powerful tool for managing client interactions and simplifying sales processes. It gives you the ability to recognize, monitor, and manage customer interactions, automate lead management, and examine customer data to learn how customers behave. This helps you better understand customer needs and create tailored customer experiences that drive loyalty and boost sales. As a consequence, it improves client retention by raising customer satisfaction and deepening your understanding of customer needs.

Types of Custom CRM

There are three major types of custom CRMs based on their function and the features they provide.

Collaborative

Collaborative CRM is intended to improve cooperation and communication by establishing a clear framework for data exchange. It may be used internally within a company or between external teams, such as partners, to create a custom CRM adapted to particular needs. Common features of this kind of system include group discussions, content sharing, and real-time activity updates.

Analytical

Analytical CRM systems are intended to support planning. Such a system offers helpful data, analytics, and insights; to be useful, it must be able to compile data from several sources, process it, and deliver real-time updates.

Operational

Operational CRMs focus on simplifying and automating company processes to increase productivity. Typical features include lead processing, automated messaging to clients via various channels, and follow-ups. When creating your own solution, you can either pick specific features or combine elements of the different CRM types.

Benefits of Custom CRM

Custom CRM systems are becoming increasingly popular among businesses because of their numerous benefits. They not only let businesses deliver excellent, personalized customer care, but they also bring a wide range of positive effects for customers themselves, which is why more and more businesses are adopting them to improve customer service. Some of the benefits of custom CRM systems are:

Benefit #1 Time Saving

Custom CRM systems let organizations quickly find the information they need to complete essential activities and automate monotonous chores, so serving clients takes less time. Teams can focus on other important duties while the custom CRM handles tasks like data processing, analysis, customer care, sales, and marketing.

Benefit #2 Improved Efficiency

Using a custom CRM for task management gives employees a straightforward way to access the information they need to do their jobs, and each employee can work from an individualized dashboard. This makes their work easier and more productive; in fact, 60% of businesses report an increase in productivity after implementing a custom CRM.

Benefit #3 Enhanced Customer Relationship

A custom CRM puts data about customers, such as their preferences, needs, and pain points, at your fingertips. 84% of consumers believe that the experience a business provides is just as important as its goods and services. With all the data available in the system, a custom CRM enables you to offer tailored services to your consumers while addressing their pain points. This can improve the way you communicate with your customers, making them more likely to become repeat customers and helping you grow your business.

Benefit #4 Access to In-depth Reports

Data-driven choices, such as adjusting prices or marketing tactics, rely on reports that a custom CRM produces from customer data. According to studies, using a custom CRM can boost report accuracy by 42%, so the reports it gives you can meaningfully support your business decision-making.

Benefit #5 Increased Income

A custom CRM system can handle practically all of your company's requirements for attracting new clients, turning them into customers, and offering top-notch customer service. As a result, sales increase, since the data it provides can be used to deliver better customer service and foster customer loyalty.

What to Consider Before Building Custom CRM Software

A good CRM tool will let you store contact information for clients and prospects, identify sales opportunities, keep a record of service issues, and manage marketing campaigns and tactics, all in one central location. Easy access to this data about customer interactions allows anyone in your company to make informed, analytics-based decisions. Before you begin building a custom CRM system, there are several crucial considerations to work through; settling them early helps you avoid costly mistakes.

Setting the Goals

Determine your custom CRM platform's goals and prioritize the most crucial ones. This will help you select the features and design that are most appropriate for your business.

Types of Custom CRM Systems

Once you've defined your objectives, decide on the type of CRM solution you need. CRMs fall into three categories: operational, analytical, and collaborative, each made for a specific purpose.

Access Roles & Levels

Since several departments may use the custom CRM for different purposes, we recommend adding user roles and permissions, for example for top management, marketing directors, sales, and customer care personnel.

Custom CRM Features

Base the choice of functionalities to include in your custom CRM on your objectives, and pay attention to the features that will be most beneficial for your business needs. Some of the most important features of a custom CRM are dashboards, reports, tasks, contact management, lead management, and mobile access.
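
To make these feature names concrete, here is a minimal, illustrative Python sketch of the kinds of records a custom CRM might track; the entities and fields are hypothetical examples, not a prescribed data model.

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Contact:
    # Basic contact management record
    name: str
    email: str
    company: str = ""

@dataclass
class Lead:
    # Lead management: where the lead came from and where it is in the pipeline
    contact: Contact
    source: str                 # e.g. "website form", "referral"
    status: str = "new"         # new -> qualified -> won / lost

@dataclass
class Task:
    # Tasks assigned to team members, surfaced on dashboards
    title: str
    due: Optional[date] = None
    done: bool = False

@dataclass
class Dashboard:
    # A simple roll-up used for dashboards and reports
    leads: List[Lead] = field(default_factory=list)
    tasks: List[Task] = field(default_factory=list)

    def open_leads(self) -> List[Lead]:
        return [lead for lead in self.leads if lead.status not in ("won", "lost")]

    def overdue_tasks(self, today: date) -> List[Task]:
        return [t for t in self.tasks if not t.done and t.due and t.due < today]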

SaaS or Internal Software

Think about whether you're creating a custom CRM system for internal use only or whether you'll eventually turn it into a SaaS platform. Changing the software architecture later can be challenging and expensive, so if you choose the latter, make sure your architecture is adaptable and scalable from the start.

Cost of CRM Software Development 

Regardless of the size of your business, custom CRM software can benefit you. Estimating the associated costs can be difficult, so here is a breakdown of the anticipated expenses. The cost of a custom CRM system depends on the features you want to include, the technical complexity, the development team's experience, and additional requirements such as security measures.

To put things into perspective, if you manage a mid-sized business with 25 employees and an enterprise-sized subscription costs $125 per user each month, that works out to $3,125 per month, $37,500 per year, and $187,500 over five years. The same amount of money could instead fund custom CRM software built to meet your unique needs. We always recommend determining the precise cost of your project in consultation with our specialists; every customer is unique and needs a customized strategy to enable the smooth growth of their product.
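
As a quick back-of-the-envelope check, the subscription figures above can be reproduced with a few lines of Python; the numbers are the example values from this article, not a quote.

# Example subscription cost from the article: 25 users at $125 per user per month.
USERS = 25
PRICE_PER_USER_PER_MONTH = 125  # USD

monthly = USERS * PRICE_PER_USER_PER_MONTH   # 3,125
yearly = monthly * 12                        # 37,500
five_years = yearly * 5                      # 187,500

print(f"Monthly: ${monthly:,}")        # Monthly: $3,125
print(f"Yearly: ${yearly:,}")          # Yearly: $37,500
print(f"Five years: ${five_years:,}")  # Five years: $187,500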

Summing Up

Are you looking for a way to improve your business performance? A custom CRM system can be the perfect solution. It provides businesses with a powerful platform for better managing customer relationships and increasing sales. With the right planning and guidance, you can create your own high-quality CRM from scratch that is tailored to your business needs. By centralizing customer data and tracking customer interactions, you can gain insights into customer behavior, create targeted marketing campaigns, and automate specific processes. This can help you improve efficiency, increase customer satisfaction, and boost sales. 

At Flatlogic, we provide full-cycle custom CRM development services to help you turn your concept into a working product. By leveraging our expertise and technical capabilities, you can create a high-quality custom CRM tailored to your business needs that will become the heart of your operations in just a few minutes.

Custom CRM with Flatlogic

Flatlogic Platform offers an easy way to generate a custom CRM system with full control over the source code. You can make sure that you have the right features, scalability, and performance to match your business needs. Plus, with no-code development, you don’t need to be an expert programmer to make the necessary changes – making it easier to scale and customize as your business grows. With Flatlogic Platform, you have the flexibility to create a custom CRM solution that is tailored to your needs, while still having the scalability of more traditional development.

How to Create Custom CRM with Flatlogic Platform?

Using the Flatlogic Full-Stack Generator, you can create CRUD and static applications in a few minutes. To start using the Platform, register for a Flatlogic account by clicking the "Sign in" button in the header of the Flatlogic website.

Step 1. Choosing the Tech Stack

In this step, you’re setting the name of your application and choosing the stack: Frontend, Backend, and Database.

Step 2. Choosing the Starter Template

In this step, you’re choosing the design of the web app.

Step 3. Schema Editor

In this step, you can create your database schema from scratch, import an existing schema or select one of the suggested schemas. 

To import your existing database, click the Import SQL button and select your .sql file. Your database will then open in the Schema Editor, where you can further edit your data (add/edit/delete entities).

If you are not familiar with database design and find it difficult to decide which tables you need, we have prepared ready-made sample schemas of real applications that you can modify for your application:

E-commerce app;
Time tracking app;
Book store;
Chat (messaging) app;
Blog.

Or, you can define a database schema from a description by clicking the "Generate with AI" button. Type the application's description in the text area and hit "Send"; the application's schema will be ready in around 15 seconds. You can either deploy it immediately or review the structure and make manual adjustments.

Next, you can connect your GitHub account and push your application code there, or skip this step by clicking the Finish and Deploy button; in a few minutes, your application will be generated.

