Publish Amazon DevOps Guru Insights to ServiceNow for Incident Management

Amazon DevOps Guru is a fully managed AIOps service that uses machine learning (ML) to quickly identify when applications are behaving outside of their normal operating patterns and generates insights from its findings. These insights can be used to alert on-call teams to react to anomalies in mission-critical workloads. Many customers already use incident management systems such as ServiceNow to identify, analyze, and resolve critical incidents that could impact business operations. ServiceNow is an IT Service Management (ITSM) platform that enables enterprise organizations to improve operational efficiency. Among its products is Incident Management, which provides customers with a single pane view and allows them to restore services and resolve issues quickly.

This blog post will show you how to integrate Amazon DevOps Guru insights with ServiceNow to automatically create and manage Incidents. We will demonstrate how an insight generated by Amazon DevOps Guru for an anomaly can automatically create a ServiceNow Incident, update the incident when there are new anomalies or recommendations from Amazon DevOps Guru, and close the ServiceNow Incident once the insight is resolved by Amazon DevOps Guru.

Overview of solution

This solution uses a combination of event-driven architecture and serverless technologies to integrate DevOps Guru insights with ServiceNow. When an Amazon DevOps Guru insight is created, an Amazon EventBridge rule captures the insight as an event and routes it to an AWS Lambda function target. The Lambda function interacts with ServiceNow using its REST API to create, update, and close an incident for the corresponding DevOps Guru events captured by EventBridge.
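The connector function in this post is implemented in Java, but to make the interaction concrete, here is a minimal Python sketch of the kind of call the function makes against the ServiceNow Table API. The secret name, the instance host, and the event detail field used for the description are illustrative assumptions, not the connector's actual code.

import json
import boto3
import requests  # third-party dependency, packaged with the function


def create_incident(event: dict) -> str:
    """Create a ServiceNow incident for a DevOps Guru insight event (illustrative only)."""
    secret = json.loads(
        boto3.client("secretsmanager").get_secret_value(
            SecretId="servicenow-connector-credentials")["SecretString"])

    # The field name below is an assumed element of the DevOps Guru event detail
    description = event.get("detail", {}).get("insightDescription", "Amazon DevOps Guru insight")

    response = requests.post(
        "https://dev92031.service-now.com/api/now/table/incident",  # example instance host
        auth=(secret["username"], secret["password"]),
        headers={"Accept": "application/json"},
        json={"short_description": description},
    )
    response.raise_for_status()
    return response.json()["result"]["sys_id"]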

The EventBridge rule can be customized to capture all DevOps Guru insights or narrowed down to specific insights. In this blog, we will capture all DevOps Guru insights and perform actions on ServiceNow for the following DevOps Guru events (a sketch of the matching event pattern follows the list):

DevOps Guru New Insight Open
DevOps Guru New Anomaly Association
DevOps Guru Insight Severity Upgraded
DevOps Guru New Recommendation Created
DevOps Guru Insight Closed
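For reference, the sketch below shows the event pattern such a rule matches, created here with boto3. The rule name and target ARN are placeholders; the SAM template in this solution creates the equivalent rule for you.

import json
import boto3

events = boto3.client("events")

# Match the DevOps Guru insight lifecycle events listed above
event_pattern = {
    "source": ["aws.devops-guru"],
    "detail-type": [
        "DevOps Guru New Insight Open",
        "DevOps Guru New Anomaly Association",
        "DevOps Guru Insight Severity Upgraded",
        "DevOps Guru New Recommendation Created",
        "DevOps Guru Insight Closed",
    ],
}

events.put_rule(Name="devops-guru-servicenow", EventPattern=json.dumps(event_pattern))

# Route matched events to the connector Lambda function (the ARN is a placeholder)
events.put_targets(
    Rule="devops-guru-servicenow",
    Targets=[{
        "Id": "ServiceNowConnectorFunction",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:ServiceNowConnector",
    }],
)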

Figure 1: Amazon DevOps Guru Integration with ServiceNow using Amazon EventBridge and AWS Lambda

Solution Implementation Steps

Prerequisites

Before you deploy the solution and proceed with this walkthrough, you should have the following prerequisites:

Gather the hostname for your ServiceNow cloud instance. If you do not have a ServiceNow instance, you can request a developer instance through the ServiceNow Developer page.
Gather the credentials of a ServiceNow user who has permissions to make REST API calls to ServiceNow, specifically to the Table API. If you don’t have a user provisioned, you can create one by following the steps in Getting started with the REST API in the ServiceNow documentation.
Create a secret in Secrets Manager to store the ServiceNow credentials created in the previous step. You can choose any name for the secret, but it should have two key/value pairs, one for the username and the other for the password (see the sketch after this prerequisites list).
Enable DevOps Guru for your applications by following these steps or you can follow this blog to deploy a sample serverless application that can be used to generate DevOps Guru insights for anomalies detected in the application.
Install and set up SAM CLI – Install the SAM CLI

Download and set up Java. The version should match the runtime that you defined in the SAM template.yaml serverless function configuration – Install the Java SE Development Kit 11

Maven – Install Maven

Docker – Install Docker community edition
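As an example of the secret mentioned in the prerequisites, the boto3 sketch below creates a secret with the two expected key/value pairs. The secret name and credential values are placeholders for whatever you choose.

import json
import boto3

secrets = boto3.client("secretsmanager")

# Name and values are placeholders; pass the same secret name to the connector later.
secrets.create_secret(
    Name="servicenow-connector-credentials",
    SecretString=json.dumps({
        "username": "snow_api_user",
        "password": "example-password",
    }),
)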

You have two options to deploy this solution: one is to deploy from the AWS Serverless Application Repository, and the other is to build and deploy from the AWS SAM Command Line Interface (CLI).

Option 1: Deploy sample ServiceNow Connector App from AWS Serverless Repository

The DevOps Guru ServiceNow Connector application is available in the AWS Serverless Application Repository, which is a managed repository for serverless applications. The application is packaged with an AWS Serverless Application Model (SAM) template, a definition of the AWS resources used, and a link to the source code.

Follow the steps below to quickly deploy this serverless application in your AWS account:

1. Log in to the AWS Management Console of the account in which you plan to deploy this solution.
2. Go to the DevOps Guru ServiceNow Connector application in the AWS Serverless Application Repository and choose Deploy.

Figure 2: Deploy solution through AWS Serverless Repository

3. The Lambda application deployment screen will be displayed, where you can enter the ServiceNow hostname (do not include the https prefix) and the secret name you created in the prerequisite steps. Choose Deploy.

Figure 3: AWS Lambda Application Settings

4. After successful deployment, the AWS Lambda Application page will display the “Create complete” status for the serverlessrepo-DevOps-Guru-ServiceNow-Connector application. The CloudFormation template creates four resources:

A Lambda function that contains the logic to integrate with ServiceNow
An EventBridge rule for the DevOps Guru insights
A Lambda permission
An IAM role

5. Now you can skip Option 2 and follow the steps in the “Test the Solution” section to trigger some DevOps Guru insights and validate that the incidents are created and updated in ServiceNow.

Option 2: Build and Deploy sample ServiceNow Connector App using AWS SAM Command Line Interface

As you have seen above, you can directly deploy the sample serverless application from the Serverless Application Repository with a one-click deployment. Alternatively, you can clone the GitHub source repository and deploy using the SAM CLI from your terminal.

The Serverless Application Model Command Line Interface (SAM CLI) is an extension of the AWS CLI that adds functionality for building and testing serverless applications. The CLI provides commands that enable you to verify that AWS SAM template files are written according to the specification, invoke Lambda functions locally, step-through debug Lambda functions, package and deploy serverless applications to the AWS Cloud, and so on. For details about how to use the AWS SAM CLI, including the full AWS SAM CLI Command Reference, see AWS SAM reference – AWS Serverless Application Model.

Before you proceed, make sure you have completed the Prerequisites section in the beginning which should set up the AWS SAM CLI, Maven and Java on your local terminal. You also need to install and set up Docker to run your functions in an Amazon Linux environment that matches Lambda.

Follow the steps below to build and deploy this serverless application using AWS SAM CLI in your AWS account:

1. Clone the source code from the GitHub repository:

$ git clone https://github.com/aws-samples/amazon-devops-guru-connector-servicenow.git

2. Before you build the resources defined in the SAM template, you can run the validate command below, which runs cfn-lint validations on your SAM JSON/YAML template:

$ sam validate --lint --template template.yaml

3. Build the application with the SAM CLI:

$ cd amazon-devops-guru-connector-servicenow
$ sam build

If everything is set up correctly, you should have a success message like shown below:

Build Succeeded

Built Artifacts : .aws-sam/build
Built Template : .aws-sam/build/template.yaml

Commands you can use next
=========================
[*] Validate SAM template: sam validate
[*] Invoke Function: sam local invoke
[*] Test Function in the Cloud: sam sync --stack-name {{stack-name}} --watch
[*] Deploy: sam deploy --guided

4. Deploy the application with the SAM CLI:

$ sam deploy --guided

This command will package and deploy your application to AWS, with a series of prompts that you should respond to as shown below:

Stack Name: The name of the stack to deploy to CloudFormation. This should be unique to your account and region, and a good starting point would be something matching your project name – amazon-devops-guru-connector-servicenow

AWS Region: The AWS region you want to deploy your application to.

Parameter ServiceNowHost []: The ServiceNow host name/instance URL you set up. Example: dev92031.service-now.com

Parameter SecretName []: The secret name that you set up for ServiceNow credentials in the Prerequisites.

Confirm changes before deploy: If set to yes, any change sets will be shown to you before execution for manual review. If set to no, the AWS SAM CLI will automatically deploy application changes.

Allow SAM CLI IAM role creation: Many AWS SAM templates, including this example, create AWS IAM roles required for the AWS Lambda function(s) included to access AWS services. By default, these are scoped down to minimum required permissions. To deploy an AWS CloudFormation stack which creates or modifies IAM roles, the CAPABILITY_IAM value for capabilities must be provided. If permission isn’t provided through this prompt, to deploy this example you must explicitly pass --capabilities CAPABILITY_IAM to the sam deploy command.

Disable rollback [y/N]: If set to Y, preserves the state of previously provisioned resources when an operation fails.

Save arguments to configuration file (samconfig.toml): If set to yes, your choices will be saved to a configuration file inside the project, so that in the future you can just re-run sam deploy without parameters to deploy changes to your application.

After you enter your parameters, you should see output like the following if you chose to view and confirm change sets. Proceed by entering ‘Y’ to deploy the resources.

Initiating deployment
=====================
Uploading to amazon-devops-guru-connector-servicenow/46bb4841f8f37fd41d3f40f86f31c4d7.template 1918 / 1918 (100.00%)

Waiting for changeset to be created..
CloudFormation stack changeset
-------------------------------------------------------------------------------------------------------------------
Operation                        LogicalResourceId                  ResourceType                Replacement
-------------------------------------------------------------------------------------------------------------------
+ Add                            FunctionsDevOpsGuruPermission      AWS::Lambda::Permission     N/A
+ Add                            FunctionsDevOpsGuru                AWS::Events::Rule           N/A
+ Add                            FunctionsRole                      AWS::IAM::Role              N/A
+ Add                            Functions                          AWS::Lambda::Function       N/A
-------------------------------------------------------------------------------------------------------------------

Changeset created successfully. arn:aws:cloudformation:us-east-1:123456789012:changeSet/samcli-deploy1669232233/7c97b7f5-369d-400d-89cd-ebabefaa0b57

Previewing CloudFormation changeset before deployment
======================================================
Deploy this changeset? [y/N]:

Once the deployment succeeds, you should see the successful creation of your resources:

CloudFormation events from stack operations (refresh every 0.5 seconds)
-------------------------------------------------------------------------------------------------------------------
ResourceStatus                   ResourceType                       LogicalResourceId           ResourceStatusReason
-------------------------------------------------------------------------------------------------------------------
CREATE_IN_PROGRESS AWS::CloudFormation::Stack amazon-devops-guru-connector- User Initiated
servicenow
CREATE_IN_PROGRESS AWS::IAM::Role FunctionsRole –
CREATE_IN_PROGRESS AWS::IAM::Role FunctionsRole Resource creation Initiated
CREATE_COMPLETE AWS::IAM::Role FunctionsRole –
CREATE_IN_PROGRESS AWS::Lambda::Function Functions –
CREATE_IN_PROGRESS AWS::Lambda::Function Functions Resource creation Initiated
CREATE_COMPLETE AWS::Lambda::Function Functions –
CREATE_IN_PROGRESS AWS::Events::Rule FunctionsDevOpsGuru –
CREATE_IN_PROGRESS AWS::Events::Rule FunctionsDevOpsGuru Resource creation Initiated
CREATE_COMPLETE AWS::Events::Rule FunctionsDevOpsGuru –
CREATE_IN_PROGRESS AWS::Lambda::Permission FunctionsDevOpsGuruPermission –
CREATE_IN_PROGRESS AWS::Lambda::Permission FunctionsDevOpsGuruPermission Resource creation Initiated
CREATE_COMPLETE AWS::Lambda::Permission FunctionsDevOpsGuruPermission –
CREATE_COMPLETE AWS::CloudFormation::Stack amazon-devops-guru-connector- –
servicenow
-------------------------------------------------------------------------------------------------------------------

Successfully created/updated stack – amazon-devops-guru-connector-servicenow in us-east-1

You can also list the deployed resources by passing the stack name to the following command:

$ sam list resources --stack-name amazon-devops-guru-connector-servicenow

You can also choose to test and debug your function locally with sample events using the SAM CLI local functionality. Test a single function by invoking it directly with a test event. An event is a JSON document that represents the input that the function receives from the event source. Refer to Invoking Lambda functions locally in the AWS Serverless Application Model documentation for more details.

Follow the steps below to test the Lambda function locally with the SAM CLI. You will create an env.json file with the correct values for your ServiceNow host and the Secrets Manager secret name that you created in the previous step.

Make sure you have created the AWS Secrets Manager secret with the desired name as mentioned in the prerequisites, which should be used here for SECRET_NAME.
Create env.json as shown below, replacing the values for SERVICE_NOW_HOST and SECRET_NAME with your real values. These will be set as environment variables for the local Lambda execution.

{"Parameters": {"SERVICE_NOW_HOST": "SNOW_HOST", "SECRET_NAME": "SNOW_CREDS"}}

Run the command below to invoke the Lambda function locally with a sample DevOps Guru payload. Remember that for this to work, you need a running Docker instance and the secret created in your AWS account.

$ sam local invoke Functions --event Functions/src/test/Events/CreateIncident.json --env-vars Functions/src/test/Events/env.json

Once you are done with the above steps, move on to “Test the Solution” section below to trigger sample DevOps Guru insights and validate that the incidents are created and updated in ServiceNow.

Test the Solution

To test the solution, we will simulate a DevOps Guru insight. You can simulate an insight by following the steps in this blog. After an anomaly is detected in the application, DevOps Guru creates an insight as seen below.

Figure 4: DevOps Guru Insight created for anomalous behavior

For the DevOps Guru insight shown above, a corresponding incident is automatically created in ServiceNow as shown below. In addition to the incident creation, any new anomalies and recommendations from DevOps Guru are also associated with the incident.

Figure 5: Corresponding ServiceNow Incident is created for the DevOps Guru Insight

When the anomalous behavior that generated the DevOps Guru insight is resolved, DevOps Guru automatically closes the insight. The corresponding ServiceNow incident that was created for the insight is also closed, as seen below.

Figure 6: ServiceNow Incident created for DevOps Guru Insight is resolved due to insight closure

Cleaning up

To avoid incurring future charges, delete the resources.

To delete the sample application that you created, use the AWS CLI command below and pass the stack name you provided in the sam deploy step.

$ aws cloudformation delete-stack --stack-name amazon-devops-guru-connector-servicenow

You could also use the AWS CloudFormation Console to delete the stack:

Figure 7: AWS Stack Console with Delete action

Conclusion

This blog post showcased how DevOps Guru continuously monitors resources in a particular Region of your AWS account, automatically detects operational issues, predicts impending resource exhaustion, details the likely cause, and recommends remediation actions. We described a custom solution using a serverless integration pattern with AWS Lambda and Amazon EventBridge that integrates DevOps Guru insights with ServiceNow, a widely used ITSM and change management tool, streamlining service management governance and oversight of AWS services. This solution helps customers who use ServiceNow improve their operational efficiency and receive customized insights and real-time incident alerts and management directly from DevOps Guru, providing a single pane of glass to restore services and systems quickly.

This solution was created to help customers who already use ServiceNow Incident Management. If you are using Incident Manager from AWS Systems Manager instead, check out how that works with Amazon DevOps Guru here.

To learn more about Amazon DevOps Guru, join us for a free hands-on Immersion Day. Events are virtual and hosted at three global time zones. Register here: April 12th.

About the authors:

Abdullahi Olaoye

Abdullahi is a Senior Cloud Infrastructure Architect at AWS Professional Services where he works with enterprise customers to design and build cloud solutions that solve business challenges. When he’s not working, he enjoys travelling, watching documentaries and listening to history podcasts.

Sreenivas Ganesan

Sreenivas Ganesan is a Sr. DevOps Consultant at AWS experienced in architecting and delivering modernized DevOps solutions for enterprise customers in their journey to AWS Cloud, primarily focused on Infrastructure automation, Security and Compliance, Management and Governance, Provisioning and Orchestration. Outside of work, he enjoys watching new TV series, soccer and spending time with his family outdoors.

Mohan Udyavar

Mohan Udyavar is a Principal Technical Account Manager in the Enterprise Support organization of AWS advising customers in successfully migrating and operating their workloads on AWS. He is primarily focused on the Automotive industry providing prescriptive guidance to customers helping them improve the resilience and operational excellence posture of mission-critical applications. Outside of work, he loves cooking and working on tech projects with his son.

Right-size your Kubernetes Applications Using Open Source Goldilocks for Cost Optimization

In the last few years, as companies have modernized their business applications, many have moved to microservices-based architectures using containers on Kubernetes. A lot of the initial focus was on designing and building new cloud-native architectures to support the applications. As environments have grown, we’ve seen a shift in focus to optimizing resource allocation and right-sizing workloads to reduce costs.

In this blog post we will share guidance on how to optimize resource allocation and right-size applications in Kubernetes environments using Goldilocks. We’ll walk through how to install Goldilocks as well as a sample application to view the suggested resource recommendations. This applies to all Kubernetes applications, including those running on Amazon Elastic Kubernetes Service (Amazon EKS), that are deployed with managed node groups, self-managed node groups, and AWS Fargate.

Right-sizing applications on Kubernetes

In Kubernetes, resource right-sizing is done through setting resource specifications in the application manifest. These settings directly impact:

Performance — Kubernetes applications running on the same node will arbitrarily compete for resources without proper resource specifications. This can adversely impact application performance.
Cost Optimization — Applications deployed with oversized resource specifications will result in increased costs and underutilized infrastructure.
Autoscaling — The Kubernetes Cluster Autoscaler and Horizontal Pod Autoscaling require resource specifications to function.

The most common resource specifications in Kubernetes are for CPU and memory requests and limits.

Requests and Limits

Containerized applications are deployed on Kubernetes as Pods. CPU and memory requests and limits are an optional part of the Pod definition. CPU is specified in units of Kubernetes CPUs while memory is specified in bytes, usually as mebibytes (Mi).

Requests and limits each serve different functions in Kubernetes and affect scheduling and resource enforcement differently.

Scheduling

The Kubernetes scheduler only considers requests when determining where to place Pods in your cluster. Acceptable nodes are those that have enough available resources to satisfy the Pod’s resource requests.  Limits are not considered by the scheduler.

Resource Enforcement

The container runtime on the node where your Pods are running is responsible for resource enforcement.  Both requests and limits are factors in ensuring applications have access to their required compute resources. Their effect on CPU and memory is different:

CPU — If no limits are specified, then each Pod on a node can use all the available CPU on the host. As soon as available CPU is exhausted, Pods are throttled using a Linux primitive called cgroups. This is a resource-sharing primitive that ensures each Pod gets its fair share of CPU time. CPU requests determine that fair share and are weighted to give more CPU time to Pods with larger CPU requests. If a limit is specified, then CPU time will not exceed the specified limit.
Memory — Just like CPU, if no memory limits are specified, then each Pod can use all the available memory on the host. Unlike CPU, when memory is exhausted there is no sharing mechanism: the Pod will either be terminated by the Linux Out-of-memory (OOM) killer or evicted by the kubelet. The same happens if a Pod’s memory usage exceeds its limit. (A sketch of how these settings appear in a container spec follows this list.)
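To make the settings concrete, here is a small sketch that builds a container spec with the Kubernetes Python client, using the same values as the sample tomcat deployment later in this post (100m/180Mi requests, 300m/300Mi limits). In a plain YAML manifest these map to the container's resources.requests and resources.limits fields.

from kubernetes import client

# Container spec with explicit CPU/memory requests and limits
# (values mirror the sample tomcat deployment used later in this post).
container = client.V1Container(
    name="tomcat-example",
    image="public.ecr.aws/u6p4l7a1/sample-java-jmx-app:latest",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "100m", "memory": "180Mi"},
        limits={"cpu": "300m", "memory": "300Mi"},
    ),
)

# Print the equivalent manifest fragment
print(client.ApiClient().sanitize_for_serialization(container))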

Vertical Pod Autoscaler

So how do application owners choose the “right” values for their CPU and memory resource requests? An ideal solution is to load test the application in a development environment and measure resource usage using observability tooling. While that might make sense for your organization’s most critical applications, it’s likely not feasible for every containerized application deployed in your cluster.

Fortunately, there is a Kubernetes project with a feature specifically designed to help provide resource recommendations: the Vertical Pod Autoscaler (VPA). VPA is a Kubernetes sub-project owned by the Autoscaling special interest group (SIG). It’s designed to automatically set Pod requests based on observed application performance. VPA collects resource usage from the Kubernetes Metrics Server by default, but can optionally be configured to use Prometheus as a data source.

VPA has a recommendation engine that measures application performance and makes sizing recommendations. The VPA recommendation engine can be deployed stand-alone so VPA will not perform any autoscaling actions. It’s configured by creating a VerticalPodAutoscaler custom resource for each application and VPA updates the object’s status field with resource sizing recommendations.

Creating VerticalPodAutoscaler objects for every application in your cluster and trying to read and interpret the JSON results is challenging at scale. Goldilocks is an open source project that makes this easy.

Goldilocks

Goldilocks is an open source project from Fairwinds that is designed to help organizations get their Kubernetes application resource requests “just right”. It takes its name, very appropriately, from the well-known fairy tale Goldilocks and the Three Bears. Goldilocks builds on top of the Kubernetes Vertical Pod Autoscaler and provides:

A controller that automates the creation of VerticalPodAutoscaler objects for workloads in your cluster.
A dashboard that displays resource recommendations for all the monitored workloads.

The default configuration of Goldilocks is an opt-in model. You choose which workloads are monitored by adding the goldilocks.fairwinds.com/enabled: true label to a namespace.

Solution Overview

Let’s walk through how to install Goldilocks, including its dependencies Metrics Server and Vertical Pod Autoscaler. Then we’ll install a sample application to view the suggested resource recommendations. The diagram shown here illustrates all of the components on an Amazon EKS cluster and their interactions.

The Metrics Server collects resource metrics from the Kubelet running on worker nodes and exposes them through Metrics API for use by the Vertical Pod Autoscaler. The Goldilocks controller watches for namespaces with the goldilocks.fairwinds.com/enabled: true label and creates VerticalPodAutoscaler objects for each workload in those namespaces.

In this blog post, we will create a namespace called javajmx-sample and a tomcat deployment. We will label this namespace in order to get recommendations from Goldilocks. As soon as we label the namespace, we will see a VPA object called goldilocks-tomcat-example created.

Prerequisites

You will need the following to complete the steps in this post:

AWS Command Line Interface (AWS CLI) version 2
kubectl
helm
If you don’t have an Amazon EKS cluster, you can create one using eksctl

Step 1: Deploying the Metrics Server

In this step, we will deploy the Metrics Server, which provides the resource metrics used by the Vertical Pod Autoscaler.

helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server

helm upgrade --install metrics-server metrics-server/metrics-server

Let’s verify the status of the metrics-server. Once successfully deployed, you should be able to see the resource utilization of the deployments within seconds:

kubectl top pods  -n kube-system

NAME                     CPU(cores)   MEMORY(bytes)  
aws-node-czlb8           2m           35Mi            
aws-node-fs22v           3m           35Mi            
aws-node-nl4js           2m           60Mi            
aws-node-vth4m           2m           59Mi            
coredns-d5b9bfc4-lbhb7   4m           13Mi            
coredns-d5b9bfc4-ngtf9   4m           14Mi            
kube-proxy-5gq76         1m           12Mi            
kube-proxy-mvp6g         1m           12Mi            
kube-proxy-vxpw9         1m           33Mi            
kube-proxy-zsfs4         1m           34Mi  

Step 2: Enable namespaces that need resource recommendations from Goldilocks

We will deploy sample workloads in the javajmx-sample namespace and get resource recommendations for the applications running in it. Let’s create the namespace and label it.

kubectl create ns javajmx-sample
kubectl label ns javajmx-sample goldilocks.fairwinds.com/enabled=true

To ensure the label was applied successfully, run describe on the javajmx-sample namespace:

kubectl describe ns javajmx-sample

Name:         javajmx-sample
Labels:       goldilocks.fairwinds.com/enabled=true
              kubernetes.io/metadata.name=javajmx-sample
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.

Step 3: Deploy Goldilocks

We will use a Helm chart to deploy Goldilocks. The deployment creates three objects:

goldilocks-controller: responsible for creating the VPA objects for workloads in namespaces enabled for Goldilocks recommendations

goldilocks-vpa-recommender: responsible for providing the resource recommendations for the workloads

goldilocks-dashboard: summarizes the resource recommendations for the workloads and also provides the YAML manifest for implementing the recommendations

To deploy Goldilocks, run the following helm commands:

helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm upgrade --install goldilocks fairwinds-stable/goldilocks --namespace goldilocks --create-namespace --set vpa.enabled=true

Now, we can use kubectl to verify that the deployment was successful:

kubectl get pods -n goldilocks

NAME                                          READY   STATUS    RESTARTS   AGE
goldilocks-controller-7bc5788596-q752s        1/1     Running   0          18h
goldilocks-dashboard-7ffff8966b-dphmj         1/1     Running   0          18h
goldilocks-dashboard-7ffff8966b-s2dgf         1/1     Running   0          18h
goldilocks-vpa-recommender-5ddf6dcd66-njgt4   1/1     Running   0          18h

Step 4 : Deploy the sample application

In this step, we will deploy a sample application in the javajmx-sample namespace to get recommendations from Goldilocks. The application tomcat-example is initially provisioned with CPU and memory requests of 100m and 180Mi respectively, and limits of 300m CPU and 300Mi memory.

kubectl apply -f https://raw.githubusercontent.com/aws-observability/aws-o11y-recipes/main/sandbox/javajmx/example/sample-javajmx-app.yaml

nht-admin:~/environment $ kubectl get pods -n javajmx-sample
NAME                              READY   STATUS    RESTARTS   AGE
tomcat-bad-traffic-generator      1/1     Running   0          127m
tomcat-example-5c874c8b8b-zt2tv   1/1     Running   0          127m
tomcat-traffic-generator          1/1     Running   0          127m

As mentioned earlier, Goldilocks creates a VPA for each deployment in a Goldilocks-enabled namespace. Using the kubectl command, we can verify that a VPA was created in the javajmx-sample namespace for the goldilocks-tomcat-example:

nht-admin:~/environment $ kubectl get vpa -n javajmx-sample
NAME                        MODE   CPU   MEM         PROVIDED   AGE
goldilocks-tomcat-example   Off    15m   109814751   True       127m
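If you prefer to read the raw recommendation rather than the dashboard, the sketch below retrieves the VPA object's status with the Kubernetes Python client. The group/version and status field names follow the upstream VPA custom resource, but treat this as a sketch and verify against your installed VPA version.

from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Read the VPA object that Goldilocks created for the sample deployment
vpa = api.get_namespaced_custom_object(
    group="autoscaling.k8s.io",
    version="v1",
    namespace="javajmx-sample",
    plural="verticalpodautoscalers",
    name="goldilocks-tomcat-example",
)

# The recommendation engine writes its sizing suggestions into the status field
print(vpa["status"]["recommendation"]["containerRecommendations"])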

Step 5 : Review the Goldilocks recommendation dashboard

The goldilocks-dashboard service exposes the dashboard, and we can port-forward it to local port 8080 to view the resource recommendations. Run this kubectl command to access the dashboard:

kubectl -n goldilocks port-forward svc/goldilocks-dashboard 8080:80

We can now open a browser to http://localhost:8080 to display the Goldilocks dashboard.

Let’s analyze the javajmx-sample namespace to see the recommendations provided by Goldilocks. We should be able to see the recommendations for the goldilocks-tomcat-example deployment.

Here the screen shows the request and limit recommendations for the javajmx-sample workloads. The Current column under each Quality of Service (QoS) class shows the currently configured CPU and memory requests and limits. The Guaranteed and Burstable columns show the recommended CPU and memory requests and limits for the respective QoS class.

We can clearly see that we have over-provisioned the resources, and Goldilocks has made recommendations to optimize the CPU and memory requests. For the Guaranteed QoS class, the recommended CPU request and limit are both 15m, compared to the current settings of 100m and 300m. The recommended memory request and limit are both 105Mi, compared to the current settings of 180Mi and 300Mi.

Notice that the recommendations are available for two different Quality of Service (QoS) classes: Guaranteed and Burstable. Kubernetes provides different levels of Quality of Service to pods depending on what they request and what limits are set for them. Pods that must stay up and perform consistently can request Guaranteed resources, while pods with less strict requirements can use resources with weaker or no guarantees.

Guaranteed QoS pods are considered top priority and are guaranteed not to be killed until they exceed their limits. A pod is classified as Guaranteed if limits and, optionally, requests (not equal to 0) are set for all resources across all containers, and the limits and requests are equal.

Burstable QoS pods have some form of minimal resource guarantee, but can use more resources when available. Under system memory pressure, these containers are more likely to be killed once they exceed their requests, if no Best-Effort pods exist. A pod is classified as Burstable if requests and, optionally, limits are set (not equal to 0) for one or more resources across one or more containers, and they are not equal.

To follow the recommended resource specifications, customers can simply copy the manifest for the QoS class they are interested in and redeploy the workloads, which will then be right-sized and optimized.

For example, if we decide to apply the recommendations for the Guaranteed QoS class, we can copy the YAML from the dashboard as shown here and apply it to the deployment object.

Let’s run the kubectl edit command on the deployment to apply the recommendations:

kubectl edit deployment tomcat-example -n javajmx-sample

The resources section in the container spec shows that we have successfully applied the recommended requests and limits for CPU and memory.

Once we apply the recommendations, the pod restarts and comes online with the updated resource configuration. Let’s verify this by running the kubectl describe command on the tomcat-example deployment:

kubectl describe deployment tomcat-example -n javajmx-sample

The output should look like the following:

Name:                   tomcat-example
Namespace:              javajmx-sample
CreationTimestamp:      Mon, 06 Feb 2023 17:41:38 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=tomcat-example-pods
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=tomcat-example-pods
  Containers:
   tomcat-example-pod:
    Image:       public.ecr.aws/u6p4l7a1/sample-java-jmx-app:latest
    Ports:       8080/TCP, 9404/TCP
    Host Ports:  0/TCP, 0/TCP
    Limits:
      cpu:     15m
      memory:  105Mi
    Requests:
      cpu:        15m
      memory:     105Mi
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>

Cleanup

To delete the deployments and sample workloads we created in the blog, execute the following commands:

helm delete metrics-server
helm delete goldilocks -n goldilocks
kubectl delete -f https://raw.githubusercontent.com/aws-observability/aws-o11y-recipes/main/sandbox/javajmx/example/sample-javajmx-app.yaml

Conclusion

This post demonstrated how Goldilocks can be used to efficiently rightsize the resource requests for Kubernetes applications. Customers in modernization efforts often have minimal time to decide the resource requirements for their applications, which usually involves a complex process of reviewing monitoring dashboards. By adopting the recommendations from Goldilocks, customers can shorten the time to market for their applications and optimize their Amazon EKS costs.

Further reading

EKS Best practices
Blog: Using Prometheus to Avoid Disasters with Kubernetes CPU Limits

Goldilocks project

Playwright now offers a UI mode

#631 — March 24, 2023


JavaScript Weekly

Speeding Up the JavaScript Ecosystem: npm Scripts — The latest in what has been a fascinating series on finding ‘low hanging fruit’ when it comes to performance in the JavaScript world. The author explains it best himself:

“‘npm scripts’ are executed by JavaScript developers … all the time. Despite their high usage they are not particularly well optimized and add about 400ms of overhead. In this article we were able to bring that down to ~22ms.”
What Marvin does here is a valuable skill for all developers to pick up, and you can enjoy more by going back to the start.

Marvin Hagemeister

Playwright v1.32 – Now with UI Mode — The popular Web testing and automation framework is taking more steps toward ground currently served by tools like Cypress by offering a ‘UI mode’ that lets you explore, run and debug tests in a UI environment, complete with watch mode. ▶️ This video provides a good introduction.

Microsoft

A Grid Component with All the Features & Great Performance — Try our powerful JS data grid component which lets you edit, sort, group and filter datasets with fantastic performance. Includes a TreeGrid, API docs and plenty of demos. Seamless integration with React, Angular & Vue apps.

Bryntum sponsor

Why We Added package.json Support to Deno — Deno shares some provenance with Node.js but till recently it hadn’t focused on supporting Node features like npm modules. But with Node and npm compatibility beginning to improve, the team has faced questions about the runtime’s priorities. Ryan Dahl explains more about their thinking here.

Ryan Dahl

In other Deno news, Deno 1.32 has been released with… improved package.json support, and more.

How to Start a React Project in 2023 — There are lots of ways, but this well-regarded author explains the pros and cons of a few approaches, and gives you a few options targeting specific use cases you might have.

Robin Wieruch

IN BRIEF:

GitHub had to update its RSA SSH host key today so you may see security related warnings when pushing and cloning. It’s easy to fix, but check the new fingerprint matches – it’s for your own security.

The New Stack caught up with Svelte’s Rich Harris on SvelteKit and what’s coming for Svelte 4.

The React team shared some cutting edge updates on what they’re working on including React Server Components and an optimizing compiler.

If you were experiencing errors on the official Node site last week, here’s the (detailed) post mortem of why. Config errors and inappropriate caching, mostly.

✨ Did you know there’s a market in fake GitHub stars? Some developers analyzed some repos to learn more about it.

Congratulations to Lea Verou on her TC39 appointment. Her efforts to push the Web forward are legendary. Prism is one project you may be aware of.

Make your opinions known on what should be in the next version of Vite.

RELEASES:

Docusaurus 2.4
↳ Easy to maintain documentation site generator.

Puppeteer 19.8
↳ Headless Chrome Node.js API.

Neutralinojs 4.11
↳ Lightweight cross-platform desktop app framework.

Qwik 0.23

Articles & Tutorials

Buying a Hard-to-Get Bicycle using Playwright — An unusual use case for JavaScript, Playwright, and GitHub Actions, but Maciek managed to buy his bike.

Maciek Palmowski

Snyk Top 10: JavaScript OSS Vulnerabilities — Dive into the most prevalent critical and high open source vulnerabilities found by Snyk scans of JavaScript apps in 2022.

Snyk sponsor

The ‘End’ of Front-End Development? — A recent narrative doing the rounds suggests that large language models like GPT-4 (or even tools like Copilot X) could soon put some developers out of a job — however, Josh is “optimistic about what these AI advancements mean for the future of software development”.

Josh W. Comeau

In related news, Eric Elliott put ChatGPT through its paces to see if it would make for a good JavaScript tutor. It did well — though with mixed results.

Migrating from ts-node to Bun — A look at adopting performance-oriented Bun when you’re used to using TypeScript with Node.js. John runs us through porting a console app from the ts-node approach over to Bun — “a pretty easy process,” he says.

John Reilly

▶  A Pinia Crash Course for Beginners — Pinia is a store / state management solution for Vue that does believe in pineapple on pizza.

Alexander Gekov

A Practical Guide to Getting Started with Astro — An extensive walkthrough of Astro that covers all the topics you’ll need to get you started.

Mojtaba Seyedi

Test Website Speed Continuously and Rank Higher In Google — You need a fast website to make users happy and meet Google’s Core Web Vitals metrics. Test and optimize with DebugBear.

DebugBear sponsor

Automatic npm Publishing with GitHub Actions and Granular Tokens

Tim Perry

Make Sure You Do This Before Switching to Signals in Angular

Jordan Powell

Six CSS Snippets Every Developer Should Know

Adam Argyle (Google)

Code & Tools

trace.cafe: Easy Webperf Trace Sharing — A quick way to share a performance profile saved from your DevTools, available for up to 90 days with the DevTools perf panel embedded (see example).

paul irish

VueUse: A Collection of Vue Composition Utilities — With over 200 functions targeting both Vue 2 and 3, there’ll be something in this suite of Composition API-based utility functions for you, whether it’s working with state, browser capabilities, animations, Electron, Firebase, and more.

Anthony Fu

Don’t Let Your Issue Tracker Be a Four-Letter Word. Use Shortcut

Shortcut (formerly Clubhouse.io) sponsor

OTPAuth: One Time Password (HOTP/TOTP) Library — When you log in to a site that uses 2FA and you’re asked for some digits from an authentication app, that’s probably a Time-based One-Time Password (or TOTP). This library for Node, Deno, Bun and the browser lets you work with TOTPs and HOTPs from JS.

Héctor Molinero Fernández

Recharts 2.5: Chart Library Built with React and D3 — Easy to deploy with declarative components, native SVG support, and lightweight dependency on D3. Line, bar, scatter, composed, pie, and radar charts are offered. There are lots of examples, complete with code.

recharts

DOCX 8.0: Generate Word .docx Files from JavaScript — The code to lay out documents is verbose but there’s a lot of functionality. Here’s a CodePen example, plus the release notes and GitHub repo.

Dolan Miu

SvHighlight: Code Syntax Highlighter for Svelte — Powered by Highlight.js, it includes a blurring feature to focus attention on specific areas of code, and you can customize it with Tailwind. Try the interactive examples to see the effect.

SvHighlight

eslint-formatter-pretty 5.0: Pretty ESLint Formatter — Nicer output than the default. Sort results by severity. Get stylized inline code blocks, and more.

Sindre Sorhus

AWS JWT Verify: Verify JWTs Signed by Amazon Cognito — In both Node.js and the browser.

Amazon Web Services

Jobs

Software Engineer (Backend) — Join our “kick ass” team. Our software team operates from 17 countries and we’re always looking for more exceptional engineers.

Sticker Mule

Find JavaScript Jobs with Hired — Hired makes job hunting easy: instead of chasing recruiters, companies approach you with salary details up front. Create a free profile now.

Hired

Got a job listing to share? Here’s how.

melonJS 15.0
↳ Mature HTML5 game engine.

Marked 4.3
↳ Markdown parser and compiler. (Demo.)

v8go 0.9
↳ Execute JavaScript from Go(lang).

Million 2.1
↳ Fast Virtual DOM to make React faster.

Partytown 0.7.6
↳ Take third-party scripts off the main thread.

Bonus Item

Make Bookmarklets — Create and test bookmarklets directly in the browser. Makes an irritating task slightly easier if you need to do it.

Cullan Luther

Unit Testing AWS Lambda with Python and Mock AWS Services

When building serverless event-driven applications using AWS Lambda, it is best practice to validate individual components. Unit testing can quickly identify and isolate issues in AWS Lambda function code. The techniques outlined in this blog post demonstrate unit testing for Python-based AWS Lambda functions and their interactions with AWS services.

The full code for this blog is available in the GitHub project as a demonstrative example.

Example use case

Let’s consider unit testing a serverless application which provides an API endpoint to generate a document. When the API endpoint is called with a customer identifier and document type, the Lambda function retrieves the customer’s name from DynamoDB, then retrieves the document text from DynamoDB for the given document type, and finally generates and writes the resulting document to S3.

Figure 1. Example application architecture

Amazon API Gateway provides an endpoint to request the generation of a document for a given customer.  A document type and customer identifier are provided in this API call.
The endpoint invokes an AWS Lambda function that generates a document using the customer identifier and the document type provided.
An Amazon DynamoDB table stores the contents of the documents and the user’s name, which are retrieved by the Lambda function.
The resulting text document is stored to Amazon S3.

Our testing goal is to determine if an isolated “unit” of code works as intended. In this blog, we will be writing tests to provide confidence that the logic written in the above AWS Lambda function behaves as we expect. We will mock the service integrations to Amazon DynamoDB and S3 to isolate and focus our tests on the Lambda function code, and not on the behavior of the AWS Services.

Define the AWS Service resources in the Lambda function

Before writing our first unit test, let’s look at the Lambda function that contains the behavior we wish to test.  The full code for the Lambda function is available in the GitHub repository as src/sample_lambda/app.py.

As part of our best practices for working with AWS Lambda functions, we recommend initializing AWS service resource connections outside of the handler function, in the global scope. Additionally, we can retrieve any relevant environment variables in the global scope so that subsequent invocations of the Lambda function do not repeatedly need to retrieve them. For organization, we can put the resource and variables in a dictionary:

from os import environ
from boto3 import resource
_LAMBDA_DYNAMODB_RESOURCE = { "resource" : resource('dynamodb'),
                              "table_name" : environ.get("DYNAMODB_TABLE_NAME", "NONE") }
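The handler shown later also references an _LAMBDA_S3_RESOURCE dictionary. Its full definition is in the GitHub sample; a sketch following the same pattern (the S3_BUCKET_NAME environment variable name is an assumption) would look like this:

_LAMBDA_S3_RESOURCE = { "resource" : resource('s3'),
                        "bucket_name" : environ.get("S3_BUCKET_NAME", "NONE") }  # variable name assumed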

However, globally scoped code and global variables are challenging to test in Python, as global statements are executed on import, and outside of the controlled test flow.  To facilitate testing, we define classes for supporting AWS resource connections that we can override (patch) during testing.  These classes will accept a dictionary containing the boto3 resource and relevant environment variables.

For example, we create a DynamoDB resource class with a parameter lambda_dynamodb_resource that accepts a dictionary containing the boto3 resource connected to DynamoDB and the table name:

class LambdaDynamoDBClass:
    def __init__(self, lambda_dynamodb_resource):
        self.resource = lambda_dynamodb_resource["resource"]
        self.table_name = lambda_dynamodb_resource["table_name"]
        self.table = self.resource.Table(self.table_name)
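The corresponding S3 class referenced by the handler and the tests is defined in the sample project; a sketch mirroring the DynamoDB class above (attribute names are assumptions based on how the tests use it) would be:

class LambdaS3Class:
    def __init__(self, lambda_s3_resource):
        self.resource = lambda_s3_resource["resource"]
        self.bucket_name = lambda_s3_resource["bucket_name"]
        self.bucket = self.resource.Bucket(self.bucket_name)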

Build the Lambda Handler

The Lambda function handler is the method in the AWS Lambda function code that processes events. When the function is invoked, Lambda runs the handler method. When the handler exits or returns a response, it becomes available to process another event.

To facilitate unit testing of the handler function, move as much logic as possible into other functions that are then called by the Lambda handler entry point. Also, pass the AWS resource global variables to these subsequent function calls. This approach enables us to mock and intercept all resources and calls during testing.

In our example, the handler references the global variables and instantiates the resource classes to set up the connections to specific AWS resources. (We will be able to override and mock these connections during unit testing.)

Then the handler calls the create_letter_in_s3 function to perform the steps of creating the document, passing the resource classes. This downstream function avoids referencing the global context or any AWS resource connections directly.

def lambda_handler(event: APIGatewayProxyEvent, context: LambdaContext) -> Dict[str, Any]:

    global _LAMBDA_DYNAMODB_RESOURCE
    global _LAMBDA_S3_RESOURCE

    dynamodb_resource_class = LambdaDynamoDBClass(_LAMBDA_DYNAMODB_RESOURCE)
    s3_resource_class = LambdaS3Class(_LAMBDA_S3_RESOURCE)

    return create_letter_in_s3(
        dynamo_db = dynamodb_resource_class,
        s3 = s3_resource_class,
        doc_type = event["pathParameters"]["docType"],
        cust_id = event["pathParameters"]["customerId"])

Unit testing with mock AWS services

Our Lambda function code has now been written and is ready to be tested, so let’s take a look at the unit test code. The full code for the unit tests is available in the GitHub repository as tests/unit/src/test_sample_lambda.py.

In production, our Lambda function code will directly access the AWS resources we defined in our function handler; in our unit tests, however, we want to isolate our code and replace the AWS resources with simulations. This substitution lets the unit tests run in an isolated environment and prevents accidental access to actual cloud resources.

Moto is a Python library for mocking AWS services that we will use to simulate AWS resources in our tests. Moto supports many AWS resources, and it allows you to test your code with little or no modification by emulating the functionality of these services.

Moto uses decorators to intercept and simulate responses to and from AWS resources.  By adding a decorator for a given AWS service, subsequent calls from the module to that service will be re-directed to the mock.

@moto.mock_dynamodb
@moto.mock_s3
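These decorators are applied to the test class (they can also be applied to individual test methods), so every test runs against the mocked services. A minimal sketch of that wiring looks like the following:

import moto
from unittest import TestCase

@moto.mock_dynamodb
@moto.mock_s3
class TestSampleLambda(TestCase):
    # setUp(), the test methods, and tearDown() follow as shown in this post
    ...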

Configure Test Setup and Tear-down

The mocked AWS resources will be used during the unit test suite.  Using the setUp() method allows you to define and configure the mocked global AWS Resources before the tests are run.

We define the test class and a setUp() method and initialize the mock AWS resource.  This includes configuring the resource to prepare it for testing, such as defining a mock DynamoDB table or creating a mock S3 Bucket.

class TestSampleLambda(TestCase):
    def setUp(self) -> None:
        dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
        dynamodb.create_table(
            TableName = self.test_ddb_table_name,
            KeySchema = [{"AttributeName": "PK", "KeyType": "HASH"}],
            AttributeDefinitions = [{"AttributeName": "PK",
                                     "AttributeType": "S"}],
            BillingMode = 'PAY_PER_REQUEST')

        s3_client = boto3.client('s3', region_name="us-east-1")
        s3_client.create_bucket(Bucket = self.test_s3_bucket_name)

After creating the mocked resources, the setUp function creates resource class objects referencing those mocked resources, which will be used during testing.

mocked_dynamodb_resource = { "resource" : resource('dynamodb'),
                             "table_name" : self.test_ddb_table_name }
mocked_s3_resource = { "resource" : resource('s3'),
                       "bucket_name" : self.test_s3_bucket_name }
self.mocked_dynamodb_class = LambdaDynamoDBClass(mocked_dynamodb_resource)
self.mocked_s3_class = LambdaS3Class(mocked_s3_resource)

Test #1: Verify the code writes the document to S3

Our first test validates that our Lambda function writes the customer letter to an S3 bucket correctly. We will follow the standard arrange, act, assert format when writing this unit test.

Arrange the data we need in the DynamoDB table:

def test_create_letter_in_s3(self) -> None:

    self.mocked_dynamodb_class.table.put_item(Item={"PK": "D#UnitTestDoc",
                                                    "data": "Unit Test Doc Corpi"})
    self.mocked_dynamodb_class.table.put_item(Item={"PK": "C#UnitTestCust",
                                                    "data": "Unit Test Customer"})

Act by calling the create_letter_in_s3 function.  During these act calls, the test passes the AWS resources as created in the setUp().

test_return_value = create_letter_in_s3(
                        dynamo_db = self.mocked_dynamodb_class,
                        s3 = self.mocked_s3_class,
                        doc_type = "UnitTestDoc",
                        cust_id = "UnitTestCust"
                        )

Assert by reading the data written to the mock S3 bucket, and testing conformity to what we are expecting:

bucket_key = "UnitTestCust/UnitTestDoc.txt"
body = self.mocked_s3_class.bucket.Object(bucket_key).get()['Body'].read()

self.assertEqual(test_return_value["statusCode"], 200)
self.assertIn("UnitTestCust/UnitTestDoc.txt", test_return_value["body"])
self.assertEqual(body.decode('ascii'), "Dear Unit Test Customer;\nUnit Test Doc Corpi")

Tests #2 and #3: Data not found error conditions

We can also test error conditions and handling, such as keys not found in the database.  For example, if a customer identifier is submitted, but does not exist in the database lookup, does the logic handle this and return a “Not Found” code of 404?

To test this in test #2, we add data to the mocked DynamoDB table, but then submit a customer identifier that is not in the database.

This test, and a similar test #3 for “Document Types not found”, are implemented in the example test code on GitHub.
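A minimal sketch of what test #2 might look like, assuming create_letter_in_s3 returns a 404 status code when the customer is not found (the exact assertions in the GitHub example may differ):

def test_create_letter_in_s3_customer_not_found(self) -> None:
    # Arrange: only the document type exists; the customer identifier does not
    self.mocked_dynamodb_class.table.put_item(Item={"PK": "D#UnitTestDoc",
                                                    "data": "Unit Test Doc Corpi"})

    # Act: request a letter for a customer that was never written to the table
    test_return_value = create_letter_in_s3(
                            dynamo_db = self.mocked_dynamodb_class,
                            s3 = self.mocked_s3_class,
                            doc_type = "UnitTestDoc",
                            cust_id = "UnitTestCustNotFound")

    # Assert: the function reports "Not Found" (assumed behavior)
    self.assertEqual(test_return_value["statusCode"], 404)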

Test #4: Validate the handler interface

As the application logic resides in independently tested functions, the Lambda handler function provides only interface validation and function call orchestration.  Therefore, the test for the handler validates that the event is parsed correctly, any functions are invoked as expected, and the return value is passed back.

To emulate the global resource variables and other functions, patch both the global resource classes and logic functions.

@patch("src.sample_lambda.app.LambdaDynamoDBClass")
@patch("src.sample_lambda.app.LambdaS3Class")
@patch("src.sample_lambda.app.create_letter_in_s3")
def test_lambda_handler_valid_event_returns_200(self,
                                patch_create_letter_in_s3 : MagicMock,
                                patch_lambda_s3_class : MagicMock,
                                patch_lambda_dynamodb_class : MagicMock
                                ):

Arrange for the test by setting return values for the patched objects.

patch_lambda_dynamodb_class.return_value = self.mocked_dynamodb_class
patch_lambda_s3_class.return_value = self.mocked_s3_class

return_value_200 = {"statusCode": 200, "body": "OK"}
patch_create_letter_in_s3.return_value = return_value_200

We need to provide event data when invoking the Lambda handler.  A good practice is to save test events as separate JSON files, rather than placing them inline as code. In the example project, test events are located in the folder “tests/events/”. During test execution, the event object is created from the JSON file using the utility function named load_sample_event_from_file.

test_event = self.load_sample_event_from_file("sampleEvent1")
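The blog refers to this small helper by name only; a sketch of such a helper, assuming the tests/events/ folder layout described above, might look like this:

import json

# Defined as a method on the test class in this sketch
def load_sample_event_from_file(self, test_event_file_name: str) -> dict:
    """Load a sample API Gateway event from tests/events/ (path is an assumption)."""
    event_file_name = f"tests/events/{test_event_file_name}.json"
    with open(event_file_name, "r", encoding="UTF-8") as file_handle:
        return json.load(file_handle)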

Act by calling the lambda_handler function.

test_return_value = lambda_handler(event=test_event, context=None)

Assert by ensuring the create_letter_in_s3 function is called with the expected parameters based on the event, and that its return value is passed back to the caller. In our example, this value is passed through with no alterations.

patch_create_letter_in_s3.assert_called_once_with(
    dynamo_db=self.mocked_dynamodb_class,
    s3=self.mocked_s3_class,
    doc_type=test_event["pathParameters"]["docType"],
    cust_id=test_event["pathParameters"]["customerId"])

self.assertEqual(test_return_value, return_value_200)

Tear Down

The tearDown() method is called immediately after the test method has been run and the result is recorded.  In our example tearDown() method, we clean up any data or state created so the next test won’t be impacted.
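A sketch of such a tearDown() method, assuming the table and bucket names used in setUp() (the GitHub example may clean up differently):

def tearDown(self) -> None:
    # Remove the mocked S3 objects and bucket; S3 requires the bucket to be
    # empty before it can be deleted.
    s3_resource = boto3.resource("s3", region_name="us-east-1")
    s3_resource.Bucket(self.test_s3_bucket_name).objects.all().delete()
    s3_resource.Bucket(self.test_s3_bucket_name).delete()

    # Remove the mocked DynamoDB table so the next test starts clean.
    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
    dynamodb.Table(self.test_ddb_table_name).delete()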

Running the unit tests

The unittest testing framework can be run using the Python pytest utility. To ensure network isolation and verify that the unit tests are not accidentally connecting to AWS resources, the pytest-socket project provides the ability to disable network communication during a test.

pytest -v --disable-socket -s tests/unit/src/

The pytest command results in a PASSED or FAILED status for each test. A PASSED status verifies that your unit tests, as written, did not encounter errors or issues.

Conclusion

Unit testing is a software development process in which different parts of an application, called units, are individually and independently tested. Tests validate the quality of the code and confirm that it functions as expected. Other developers can gain familiarity with your code base by consulting the tests. Unit tests reduce future refactoring time, help engineers get up to speed on your code base more quickly, and provide confidence in the expected behaviour.

We’ve seen in this blog how to unit test AWS Lambda functions and mock AWS Services to isolate and test individual logic within our code.

AWS Lambda Powertools for Python is used in the project to validate handler events. Powertools provides a suite of utilities for AWS Lambda functions that makes it easier to adopt best practices such as tracing, structured logging, custom metrics, idempotency, batching, and more.

Learn more about AWS Lambda testing in our prescriptive test guidance, and find additional test examples on GitHub.  For more serverless learning resources, visit Serverless Land.

About the authors:

Tom Romano

Tom Romano is a Solutions Architect for AWS World Wide Public Sector from Tampa, FL, and assists GovTech and EdTech customers as they create new solutions that are cloud-native, event driven, and serverless. He is an enthusiastic Python programmer for both application development and data analytics. In his free time, Tom flies remote control model airplanes and enjoys vacationing with his family around Florida and the Caribbean.

Kevin Hakanson

Kevin Hakanson is a Sr. Solutions Architect for AWS World Wide Public Sector based in Minnesota. He works with EdTech and GovTech customers to ideate, design, validate, and launch products using cloud-native technologies and modern development practices. When not staring at a computer screen, he is probably staring at another screen, either watching TV or playing video games with his family.

Integrating with GitHub Actions – Amazon CodeGuru in your DevSecOps Pipeline

Many organizations have adopted DevOps practices to streamline and automate software delivery and IT operations. A DevOps model can be adopted without sacrificing security by using automated compliance policies, fine-grained controls, and configuration management techniques. However, one of the key challenges customers face is analyzing code and detecting any vulnerabilities in the code pipeline due to a lack of access to the right tool. Amazon CodeGuru addresses this challenge by using machine learning and automated reasoning to identify critical issues and hard-to-find bugs during application development and deployment, thus improving code quality.

We discussed how you can build a CI/CD pipeline to deploy a web application in our previous post “Integrating with GitHub Actions – CI/CD pipeline to deploy a Web App to Amazon EC2”. In this post, we will use that pipeline to include security checks and integrate it with Amazon CodeGuru Reviewer to analyze and detect potential security vulnerabilities in the code before deploying it.

Amazon CodeGuru Reviewer helps you improve code security and provides recommendations based on common vulnerabilities (OWASP Top 10) and AWS security best practices. CodeGuru analyzes Java and Python code and provides recommendations for remediation. CodeGuru Reviewer detects a deviation from best practices when using AWS APIs and SDKs, and also identifies concurrency issues, resource leaks, security vulnerabilities and validates input parameters. For every workflow run, CodeGuru Reviewer’s GitHub Action copies your code and build artifacts into an S3 bucket and calls CodeGuru Reviewer APIs to analyze the artifacts and provide recommendations. Refer to the code detector library here for more information about CodeGuru Reviewer’s security and code quality detectors.

With GitHub Actions, developers can easily integrate CodeGuru Reviewer into their CI workflows, conducting code quality and security analysis. They can view CodeGuru Reviewer recommendations directly within the GitHub user interface to quickly identify and fix code issues and security vulnerabilities. Any pull request or push to the master branch will trigger a scan of the changed lines of code, and scheduled pipeline runs will trigger a full scan of the entire repository, ensuring comprehensive analysis and continuous improvement.

Solution overview

The solution comprises the following components:

GitHub Actions – Workflow Orchestration tool that will host the Pipeline.

AWS CodeDeploy – AWS service to manage deployment on Amazon EC2 Autoscaling Group.

AWS Auto Scaling – AWS service to help maintain application availability and elasticity by automatically adding or removing Amazon EC2 instances.

Amazon EC2 – Destination Compute server for the application deployment.

Amazon CodeGuru – AWS Service to detect security vulnerabilities and automate code reviews.

AWS CloudFormation – AWS infrastructure as code (IaC) service used to orchestrate the infrastructure creation on AWS.

AWS Identity and Access Management (IAM) OIDC identity provider – Federated authentication service to establish trust between GitHub and AWS to allow GitHub Actions to deploy on AWS without maintaining AWS Secrets and credentials.

Amazon Simple Storage Service (Amazon S3) – Amazon S3 to store deployment and code scan artifacts.

The following diagram illustrates the architecture:

Figure 1. Architecture Diagram of the proposed solution in the blog

Developer commits code changes from their local repository to the GitHub repository. In this post, the GitHub action is triggered manually, but this can be automated.
GitHub action triggers the build stage.
GitHub's OpenID Connect (OIDC) provider uses tokens to authenticate to AWS and access resources.
GitHub action uploads the deployment artifacts to Amazon S3.
GitHub action invokes Amazon CodeGuru.
The source code gets uploaded into an S3 bucket when the CodeGuru scan starts.
GitHub action invokes CodeDeploy.
CodeDeploy triggers the deployment to Amazon EC2 instances in an Autoscaling group.
CodeDeploy downloads the artifacts from Amazon S3 and deploys to Amazon EC2 instances.

Prerequisites

This blog post is a continuation of our previous post, Integrating with GitHub Actions – CI/CD pipeline to deploy a Web App to Amazon EC2. You will need to set up your pipeline by following the instructions in that blog.

After completing the steps, you should have a local repository with the below directory structure, and one completed Actions run.

Figure 2. Directory structure

To enable automated deployment upon git push, you will need to make a change to your .github/workflows/deploy.yml file. Specifically, you can activate the automation by modifying the trigger section of the deploy.yml file:

From:

workflow_dispatch: {}

To:

#workflow_dispatch: {}
push:
branches: [ main ]
pull_request:

Solution walkthrough

The following steps provide a high-level overview of the walkthrough:

Create an S3 bucket for the Amazon CodeGuru Reviewer.
Update the IAM role to include permissions for Amazon CodeGuru.
Associate the repository in Amazon CodeGuru.
Add Vulnerable code.
Update GitHub Actions Job to run the Amazon CodeGuru Scan.
Push the code to the repository.
Verify the pipeline.
Check the Amazon CodeGuru recommendations in the GitHub user interface.

1. Create an S3 bucket for the Amazon CodeGuru Reviewer

When you run a CodeGuru scan, your code is first uploaded to an S3 bucket in your AWS account.

Note that CodeGuru Reviewer expects the S3 bucket name to begin with codeguru-reviewer-.

You can create this bucket using the bucket policy outlined in this CloudFormation template (JSON or YAML) or by following these instructions.
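If you prefer to create the bucket from code instead of the console, a minimal boto3 sketch could look like the following; the bucket name and Region are placeholders chosen for this example (only the codeguru-reviewer- prefix is required, and the name must be globally unique):

import boto3

# Placeholder values: keep the required "codeguru-reviewer-" prefix and use your own suffix and Region.
bucket_name = "codeguru-reviewer-my-sample-bucket"
region = "us-east-1"

s3 = boto3.client("s3", region_name=region)

# us-east-1 rejects a LocationConstraint; every other Region requires one.
if region == "us-east-1":
    s3.create_bucket(Bucket=bucket_name)
else:
    s3.create_bucket(
        Bucket=bucket_name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )
print(f"Created bucket {bucket_name}")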

2.  Update the IAM role to add permissions for Amazon CodeGuru

Locate the role created in the prerequisite section, named “CodeDeployRoleforGitHub”.
Next, create an inline policy by following these steps. Give it a name, such as “codegurupolicy”, and add the following permissions to the policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "codeguru-reviewer:ListRepositoryAssociations",
                "codeguru-reviewer:AssociateRepository",
                "codeguru-reviewer:DescribeRepositoryAssociation",
                "codeguru-reviewer:CreateCodeReview",
                "codeguru-reviewer:DescribeCodeReview",
                "codeguru-reviewer:ListRecommendations",
                "iam:CreateServiceLinkedRole"
            ],
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "s3:CreateBucket",
                "s3:GetBucket*",
                "s3:List*",
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::codeguru-reviewer-*",
                "arn:aws:s3:::codeguru-reviewer-*/*"
            ],
            "Effect": "Allow"
        }
    ]
}
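If you would rather attach this inline policy with a script than through the console, a minimal boto3 sketch could look like the following; it assumes the JSON above has been saved locally as codeguru-policy.json (a file name chosen for this example):

import json

import boto3

iam = boto3.client("iam")

# Load the policy document shown above (saved locally for this example).
with open("codeguru-policy.json") as policy_file:
    policy_document = json.load(policy_file)

# Attach the inline policy to the role created in the prerequisite blog post.
iam.put_role_policy(
    RoleName="CodeDeployRoleforGitHub",
    PolicyName="codegurupolicy",
    PolicyDocument=json.dumps(policy_document),
)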

3.  Associate the repository in Amazon CodeGuru

Follow the instructions here to associate your repo – https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/create-github-association.html

Figure 3. Associate the repository

At this point, you will have completed your initial full analysis run. However, since this is a simple “helloWorld” program, you may not receive any recommendations. In the following steps, you will incorporate vulnerable code and trigger the analysis again, allowing CodeGuru to identify and provide recommendations for potential issues.

4.  Add Vulnerable code

Create a file named application.conf at /aws-codedeploy-github-actions-deployment/spring-boot-hello-world-example.

Add the following content to the application.conf file.

db.default.url="postgres://test-ojxarsxivjuyjc:[email protected].com:5432/dcectn1pto16vi?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory"
db.default.url=${?DATABASE_URL}
db.default.port="3000"
db.default.datasource.username="root"
db.default.datasource.password="testsk_live_454kjkj4545FD3434Srere7878"
db.default.jpa.generate-ddl="true"
db.default.jpa.hibernate.ddl-auto="create"

5. Update GitHub Actions Job to run Amazon CodeGuru Scan

You will need to add a new job definition in the GitHub Actions YAML file. This new section should be inserted between the Build and Deploy sections for optimal workflow.
Additionally, you will need to adjust the dependency in the deploy section to reflect the new flow: Build -> CodeScan -> Deploy.
Review the sample GitHub Actions code below for running a security scan with Amazon CodeGuru Reviewer.

codescan:
  needs: build
  runs-on: ubuntu-latest
  permissions:
    id-token: write
    contents: read
    security-events: write

  steps:
    - name: Download an artifact
      uses: actions/[email protected]
      with:
        name: build-file

    - name: Configure AWS credentials
      id: iam-role
      continue-on-error: true
      uses: aws-actions/[email protected]
      with:
        role-to-assume: ${{ secrets.IAMROLE_GITHUB }}
        role-session-name: GitHub-Action-Role
        aws-region: ${{ env.AWS_REGION }}

    - uses: actions/[email protected]
      if: steps.iam-role.outcome == 'success'
      with:
        fetch-depth: 0

    - name: CodeGuru Reviewer
      uses: aws-actions/[email protected]
      if: ${{ always() }}
      continue-on-error: false
      with:
        s3_bucket: ${{ env.S3bucket_CodeGuru }}
        build_path: .

    - name: Store SARIF file
      if: steps.iam-role.outcome == 'success'
      uses: actions/[email protected]
      with:
        name: SARIF_recommendations
        path: ./codeguru-results.sarif.json

    - name: Upload review result
      uses: github/codeql-action/[email protected]
      with:
        sarif_file: codeguru-results.sarif.json

    - run: |
        echo "Check for critical vulnerability"
        count=$(cat codeguru-results.sarif.json | jq '.runs[].results[] | select(.level == "error") | .level' | wc -l)
        if (( $count > 0 )); then
          echo "There are $count critical findings, hence stopping the pipeline."
          exit 1
        fi

Refer to the complete workflow file below. Note that you will need to replace the following environment variables with your own values.

S3bucket_CodeGuru
AWS_REGION
S3BUCKET

name: Build and Deploy

on:
  #workflow_dispatch: {}
  push:
    branches: [ main ]
  pull_request:

env:
  applicationfolder: spring-boot-hello-world-example
  AWS_REGION: us-east-1 # <replace this with your AWS region>
  S3BUCKET: <Replace your bucket name here>
  S3bucket_CodeGuru: codeguru-reviewer-<replace bucket name here> # S3 bucket with "codeguru-reviewer-*" prefix

jobs:
  build:
    name: Build and Package
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/[email protected]
        name: Checkout Repository

      - uses: aws-actions/[email protected]
        with:
          role-to-assume: ${{ secrets.IAMROLE_GITHUB }}
          role-session-name: GitHub-Action-Role
          aws-region: ${{ env.AWS_REGION }}

      - name: Set up JDK 1.8
        uses: actions/[email protected]
        with:
          java-version: 1.8

      - name: chmod
        run: chmod -R +x ./.github

      - name: Build and Package Maven
        id: package
        working-directory: ${{ env.applicationfolder }}
        run: $GITHUB_WORKSPACE/.github/scripts/build.sh

      - name: Upload Artifact to s3
        working-directory: ${{ env.applicationfolder }}/target
        run: aws s3 cp *.war s3://${{ env.S3BUCKET }}/

      - name: Artifacts for codescan action
        uses: actions/[email protected]
        with:
          name: build-file
          path: ${{ env.applicationfolder }}/target/*.war

  codescan:
    needs: build
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
      security-events: write

    steps:
      - name: Download an artifact
        uses: actions/[email protected]
        with:
          name: build-file

      - name: Configure AWS credentials
        id: iam-role
        continue-on-error: true
        uses: aws-actions/[email protected]
        with:
          role-to-assume: ${{ secrets.IAMROLE_GITHUB }}
          role-session-name: GitHub-Action-Role
          aws-region: ${{ env.AWS_REGION }}

      - uses: actions/[email protected]
        if: steps.iam-role.outcome == 'success'
        with:
          fetch-depth: 0

      - name: CodeGuru Reviewer
        uses: aws-actions/[email protected]
        if: ${{ always() }}
        continue-on-error: false
        with:
          s3_bucket: ${{ env.S3bucket_CodeGuru }}
          build_path: .

      - name: Store SARIF file
        if: steps.iam-role.outcome == 'success'
        uses: actions/[email protected]
        with:
          name: SARIF_recommendations
          path: ./codeguru-results.sarif.json

      - name: Upload review result
        uses: github/codeql-action/[email protected]
        with:
          sarif_file: codeguru-results.sarif.json

      - run: |
          echo "Check for critical vulnerability"
          count=$(cat codeguru-results.sarif.json | jq '.runs[].results[] | select(.level == "error") | .level' | wc -l)
          if (( $count > 0 )); then
            echo "There are $count critical findings, hence stopping the pipeline."
            exit 1
          fi

  deploy:
    needs: codescan
    runs-on: ubuntu-latest
    environment: Dev
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/[email protected]
      - uses: aws-actions/[email protected]
        with:
          role-to-assume: ${{ secrets.IAMROLE_GITHUB }}
          role-session-name: GitHub-Action-Role
          aws-region: ${{ env.AWS_REGION }}
      - run: |
          echo "Deploying branch ${{ env.GITHUB_REF }} to ${{ github.event.inputs.environment }}"
          commit_hash=`git rev-parse HEAD`
          aws deploy create-deployment --application-name CodeDeployAppNameWithASG --deployment-group-name CodeDeployGroupName --github-location repository=$GITHUB_REPOSITORY,commitId=$commit_hash --ignore-application-stop-failures

6.  Push the code to the repository:

Remember to save all the files that you have modified.
To ensure that you are in your git repository folder, you can run the command:

git remote -v

The command should return the remote branch address, which should be similar to the following:

[email protected] GitActionsDeploytoAWS % git remote -v
origin git@github.com:<username>/GitActionsDeploytoAWS.git (fetch)
origin git@github.com:<username>/GitActionsDeploytoAWS.git (push)

To push your code to the remote branch, run the following commands:

git add .
git commit -m "Adding Security Scan"
git push

Your code has been pushed to the repository and will trigger the workflow as per the configuration in GitHub Actions.

7.  Verify the pipeline

Your pipeline is set up to fail upon the detection of a critical vulnerability. You can also suppress recommendations from CodeGuru Reviewer if you think a finding is not relevant to your setup. In this example, as there are two critical vulnerabilities, the pipeline will not proceed to the next step.
To view the status of the pipeline, navigate to the Actions tab on your GitHub console. You can refer to the following image for guidance.

Figure 4. GitHub Actions pipeline

To view the details of the error, you can expand the “codescan” job in the GitHub Actions console. This will provide you with more information about the specific vulnerabilities that caused the pipeline to fail and help you to address them accordingly.

Figure 5. Codescan actions logs

8. Check the Amazon CodeGuru recommendations in the GitHub user interface

Once you have run the CodeGuru Reviewer Action, any security findings and recommendations will be displayed on the Security tab within the GitHub user interface. This will provide you with a clear and convenient way to view and address any issues that were identified during the analysis.

Figure 6. Security tab with results

Clean up

To avoid incurring future charges, you should clean up the resources that you created.

Empty the Amazon S3 bucket.
Delete the CloudFormation stack (CodeDeployStack) from the AWS console.

Delete the CodeGuru Reviewer Amazon S3 bucket.

Disassociate the GitHub repository in CodeGuru Reviewer.
Delete the GitHub Secret (‘IAMROLE_GITHUB’)

Go to the repository settings on GitHub Page.
Select Secrets under Actions.
Select IAMROLE_GITHUB, and delete it.

Conclusion

Amazon CodeGuru is a valuable tool for software development teams looking to improve the quality and efficiency of their code. With its advanced AI capabilities, CodeGuru automates the manual parts of code review and helps identify performance, cost, security, and maintainability issues. CodeGuru also integrates with popular development tools and provides customizable recommendations, making it easy to use within existing workflows. By using Amazon CodeGuru, teams can improve code quality, increase development speed, lower costs, and enhance security, ultimately leading to better software and a more successful overall development process.

In this post, we explained how to integrate Amazon CodeGuru Reviewer into your code build pipeline using GitHub actions. This integration serves as a quality gate by performing code analysis and identifying challenges in your code. Now you can access the CodeGuru Reviewer recommendations directly within the GitHub user interface for guidance on resolving identified issues.

About the author:

Mahesh Biradar

Mahesh Biradar is a Solutions Architect at AWS. He is a DevOps enthusiast and enjoys helping customers implement cost-effective architectures that scale.

Suresh Moolya

Suresh Moolya is a Senior Cloud Application Architect with Amazon Web Services. He works with customers to architect, design, and automate business software at scale on AWS cloud.

Shikhar Mishra

Shikhar is a Solutions Architect at Amazon Web Services. He is a cloud security enthusiast and enjoys helping customers design secure, reliable, and cost-effective solutions on AWS.

Custom CRM System: Benefits, Requirements & Cost of Development

Do you want to investigate the potential of having a custom CRM system for your business? What are the characteristics of custom CRM systems? Or are you curious about the cost and what it would take to build a custom CRM system?

Customer relationship management (CRM) systems are widely available to businesses today. Because no one-size-fits-all package covers every requirement, custom software development is often the best choice for creating and running software built from the ground up. Custom software enables businesses to place greater emphasis on the people who are part of the organization, including employees, vendors, clients, and service users. Creating custom CRM software is the only way to guarantee that the software precisely meets the company's demands.
A custom CRM can help you ensure that your business takes advantage of all opportunities to engage, convert, and retain clients. Regardless of the size of your business, a custom CRM system can streamline your operations, improve your interactions with current clients, and generate new leads and business opportunities.

This article will help you understand the advantages of a custom CRM system, the implementation requirements, and the development costs. You’ll also be able to understand why a customized CRM system is perfect for your business.

What is a Custom CRM System?

CRM stands for Customer Relationship Management and refers to all strategies, techniques, tools, and technologies used by enterprises for developing, retaining, and acquiring customers.

Off-the-shelf (OTS) CRM is ready-to-use software that businesses can buy unmodified and use right away. It often includes modules for tracking sales opportunities, managing marketing campaigns, and setting up customer service procedures.

Custom CRM systems can foster customer loyalty and automate several processes, saving businesses time and money. The main purpose of CRM software is to allow salespeople and marketers to better manage and analyze relationships with the business’s customers and potential customers.

Custom CRM vs. Off-the-Shelf CRM: What are the differences?

The biggest difference between a custom CRM and an off-the-shelf CRM is flexibility. An off-the-shelf CRM comes pre-built and ready to use, whereas a custom CRM is created specifically for the requirements of the firm. Off-the-shelf CRMs may offer features that only loosely fit your processes, while custom CRMs can be designed to satisfy the particular needs of the firm. The trade-off is that custom CRMs typically require more technical expertise and effort to set up and run than off-the-shelf CRMs do.

With a custom CRM, businesses can modify the program to meet their unique needs and requirements, ensuring the application is appropriate for their particular industry. Moreover, custom CRMs give businesses the opportunity to integrate the application with their existing internal hardware and software, creating a seamless user experience. Off-the-shelf CRMs, on the other hand, cannot offer as many customization and integration choices.

Usage of Custom CRM

With a customized CRM, any department in your company – from sales and customer service to business development, recruiting, and marketing – can benefit from improved methods of managing external connections and activities that are integral to success. As no two businesses are alike, a custom CRM is the only way to get precisely what you need without any unnecessary extras or having to make do with features that may not have been included in an off-the-shelf solution.

When it comes to investing in a CRM, you have three choices: 

buying an existing system, 
creating one with an in-house team, 
or outsourcing the development of a custom CRM. 

When you’re considering a CRM investment, it’s important to look into all of your options. Although buying an off-the-shelf CRM system could be the most cost-effective option, it might not be the greatest fit for the unique requirements of your company. While developing a custom CRM in-house can be a terrific option, it often takes a significant amount of time and resources and costs over $50,000. Instead, you might outsource the building of a custom CRM, which would be more cost-effective and efficient. These factors make it crucial to thoroughly consider all of your alternatives before making a choice.

Why is custom CRM important for your business? 

A custom CRM is important for your business since it offers a potent tool for managing client interactions and simplifying sales processes. It gives you the ability to recognize, monitor, and control customer interactions, automate lead management, and examine customer data to learn more about how customers behave. This helps you better understand customer needs and create tailored customer experiences that drive customer loyalty and boost sales. As a consequence, it improves client retention by raising customer satisfaction and deepening your understanding of customer needs.

Types of Custom CRM

There are three major types of custom CRMs based on their function and the features they provide.

Collaborative

By establishing a clear framework for data exchange, a collaborative CRM is intended to improve cooperation and communication. It may be used internally within a company or between external teams, such as partners, to create a custom CRM adapted to particular needs. Common features of this kind of system include group discussions, content sharing, and real-time activity updates.

Analytical

Analytical CRM systems are intended to help with planning. Such a system offers helpful data, analytics, and insights. To be useful, it must be able to compile data from several sources, process it, and deliver real-time updates.

Operational

Operational CRMs focus heavily on simplifying and automating company processes to increase productivity. Typical features include lead processing, automated messaging to clients via various channels, and follow-ups. When creating your own solution, you can either pick specific features or mix several CRM types.

Benefits of Custom CRM

Because of their numerous benefits, custom CRM systems are becoming increasingly popular among businesses. These technologies not only enable businesses to provide great, individualized customer care, they also have a wide range of positive effects on customers. As a result, businesses are using CRM systems more often to improve customer service. Some of the benefits of custom CRM systems are:

Benefit #1 Time Saving

Custom CRM systems make it easy for organizations to obtain the information they need for essential activities and to automate monotonous chores, so they spend less time on routine service work. This allows them to focus on other important duties while the custom CRM handles tasks like data processing, analysis, customer care, sales, and marketing.

Benefit #2 Improved Efficiency

Using a custom CRM for task management gives employees a straightforward way to access the information they need to do their jobs and each employee can use their individualized dashboard. This makes their work easier and more productive; in fact, 60% of businesses report an increase in productivity from implementing a custom CRM.

Benefit #3 Enhanced Customer Relationship

Data about customers, such as their preferences, needs, and pain points, is readily available from a custom CRM. 84% of consumers believe that the experience a business provides is just as important as its goods and services. With all the data available from the system, a custom CRM enables you to provide customized services to your consumers while addressing their pain points. This can help you improve the way you communicate with your consumers, which increases their likelihood of becoming repeat customers and helps you grow your business.

Benefit #4 Access to in-depth Report

Making data-driven choices, such as adjusting prices or marketing tactics, requires reports that a custom CRM produces from customer data. A custom CRM can give you reports that support your business decision-making process; according to studies, using a CRM can boost report accuracy by 42%.

Benefit #5 Increased Income

A custom CRM system can handle practically all of your company's requirements for attracting new clients, turning them into customers, and offering top-notch customer service. This boosts sales, since the data it provides can be used to deliver better customer service and foster customer loyalty.

What to Consider Before Building Custom CRM Software

A good CRM tool will let you store contact information for clients and prospects, identify sales opportunities, keep a record of service issues, and manage marketing campaigns and tactics – all in one central location. Easy access to this data about customer interaction will allow anyone in your company to make informed decisions based on analytics. Before beginning to build a custom CRM system, there are several crucial considerations to be made. By doing this, you may decide as soon as possible and avoid making costly mistakes.

Setting the Goals

Determine your custom CRM platform goals and prioritize the most crucial ones. This will enable you to select the custom CRM system features and design that are most appropriate for your business.

Types of Custom CRM Systems

Decide on the type of CRM solution you need after you've defined your objectives. CRM is divided into three categories: operational, analytical, and collaborative. Each is made for a specific purpose.

Access Roles & Levels

Since several departments could utilize the custom CRM for various purposes, we recommend adding user roles and permissions. For instance, this may apply to top management, marketing directors, sales, and customer care personnel.

Custom CRM Features

Base your decision on which functionalities to include in your custom CRM on your objectives. Pay attention to the features that will be most beneficial for your business needs. Some of the most important features of custom CRM are dashboards, reports, tasks, contact management, lead management, and mobile access.

SaaS or Internal Software

Think about whether you’re going to create a custom CRM system for internal use or whether you’ll eventually transform it into a SaaS platform. It can be challenging and expensive to change the software architecture. Make sure your architecture is adaptable and scalable from the start if you choose the latter.

Cost of CRM Software Development 

Regardless of the size of your business, custom CRM software can benefit you. Estimating the associated costs can be difficult, so we’re here to explain the anticipated expenses. The cost of a custom CRM system depends on the features you want to include, the technical complexity, the development team’s experience, and other functionalities such as security measures.

To put things into perspective, if you manage a mid-sized business with more than 25 employees and your enterprise-sized subscription costs $125 per user each month, it works out to $3125 per month, $37,500 per year, and $187,500 over five years. The same amount of money, however, might be used to purchase custom CRM software that would be made to meet your unique needs. Determining the precise cost of your project is something we always advise doing in consultation with our specialists. We are aware that every customer is unique and needs a customized strategy to enable the smooth growth of their product. 

Summing Up

Are you looking for a way to improve your business performance? A custom CRM system can be the perfect solution. It provides businesses with a powerful platform for better managing customer relationships and increasing sales. With the right planning and guidance, you can create your own high-quality CRM from scratch that is tailored to your business needs. By centralizing customer data and tracking customer interactions, you can gain insights into customer behavior, create targeted marketing campaigns, and automate specific processes. This can help you improve efficiency, increase customer satisfaction, and boost sales. 

At Flatlogic, we provide full-cycle custom CRM development services to help you turn your concept into a working product. By leveraging our expertise and technical capabilities, you can create a high-quality custom CRM tailored to your business needs that will become the heart of your operations, in just a few minutes.

Custom CRM with Flatlogic

Flatlogic Platform offers an easy way to generate a custom CRM system with full control over the source code. You can make sure that you have the right features, scalability, and performance to match your business needs. Plus, with no-code development, you don’t need to be an expert programmer to make the necessary changes – making it easier to scale and customize as your business grows. With Flatlogic Platform, you have the flexibility to create a custom CRM solution that is tailored to your needs, while still having the scalability of more traditional development.

How to Create Custom CRM with Flatlogic Platform?

Using the Flatlogic Full-Stack Generator you can create CRUD and static applications in a few minutes. To start using the Platform, you need to register on the Flatlogic website. Clicking the “Sign in” button in the header will allow you to register for a Flatlogic account.

Step 1. Choosing the Tech Stack

In this step, you’re setting the name of your application and choosing the stack: Frontend, Backend, and Database.

Step 2. Choosing the Starter Template

In this step, you’re choosing the design of the web app.

Step 3. Schema Editor

In this step, you can create your database schema from scratch, import an existing schema or select one of the suggested schemas. 

To import your existing database, click the Import SQL button and select your .sql file. After that, your database will be opened in the Schema Editor where you can further edit your data (add/edit/delete entities).

If you are not familiar with database design and find it difficult to understand what tables are, we have prepared some ready-made sample schemas of real applications that you can modify for your application:

E-commerce app;
Time tracking app;
Book store;
Chat (messaging) app;
Blog.

Or, you can define a database schema and add a description by clicking on the “Generate with AI” button. You need to type the application’s description in the text area and hit “Send”. The application’s schema will be ready in around 15 seconds. You may either hit deploy immediately or review the structure to make manual adjustments.

Next, you can connect your GitHub and push your application code there. Or skip this step by clicking the Finish and Deploy button and in a few minutes, your application will be generated.


Create a Managed FFmpeg Workflow for Your Media Jobs Using AWS Batch

FFmpeg is an industry standard, open source, widely used utility for handling video. FFmpeg has many capabilities, including encoding and decoding all video compression formats, encoding and decoding audio, encapsulating and extracting audio and video from transport streams, and much more.

If AWS customers want to use FFmpeg on AWS, they have to maintain FFmpeg by themselves through an Amazon Elastic Compute Cloud (Amazon EC2) instance and develop a workflow manager to ingest and manipulate media assets. It’s painful.

In this post, I will show how to integrate FFmpeg with AWS services to build a more easily managed FFmpeg workflow. We've created an open source solution that deploys FFmpeg packaged in a container and managed by AWS Batch. When finished, you will execute an FFmpeg command as a job through a REST API. This solution improves usability and offers relief from the management learning curve and maintenance costs of running open source FFmpeg on AWS.

Solution overview

AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. There is no additional charge for AWS Batch; you pay only for the AWS compute resources you use. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. With AWS Batch, there is no need to install and manage batch computing software or server clusters, allowing you to focus on analyzing results and solving problems. AWS Batch plans, schedules, and executes your batch computing workloads across the full range of AWS compute services and features, such as Amazon EC2 and Spot Instances.

As of February 2023, AWS offers 15 general purpose EC2 instance families, 11 compute optimized instance families, and 14 accelerated computing instances. By correlating each instance family specification with the FFmpeg hardware acceleration API, we've highlighted the instance families that can optimize the performance of FFmpeg:

NVIDIA GPU-powered Amazon EC2 instances: The P3 instance family comes equipped with the NVIDIA Tesla V100 GPU. The G4dn instance family is powered by NVIDIA T4 GPUs and Intel Cascade Lake CPUs. These GPUs are well suited for video coding workloads and offer enhanced hardware-based encoding/decoding (NVENC/NVDEC).

Xilinx media accelerator cards: VT1 instances are powered by up to 8 Xilinx® Alveo U30 media accelerator cards and support up to 96 vCPUs, 192 GB of memory, 25 Gbps of enhanced networking, and 19 Gbps of EBS bandwidth. The Xilinx Video SDK includes an enhanced version of FFmpeg that can communicate with the hardware accelerated transcode pipeline in Xilinx devices. VT1 instances deliver up to 30% lower cost per stream than Amazon EC2 GPU-based instances and up to 60% lower cost per stream than Amazon EC2 CPU-based instances.
EC2 instances powered by Intel: M6i/C6i instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) with an all-core turbo frequency of 3.5 GHz.
AWS Graviton-based instances: Encoding video on C7g instances, the latest AWS Graviton processor family, costs 29% less for H.264 and 18% less for H.265 compared to C6i, as described in the blog post 'Optimized Video Encoding with FFmpeg on AWS Graviton'.

AMD-powered EC2 instances: M6a instances are powered by 3rd generation AMD EPYC processors (code named Milan).
Serverless compute with Fargate: Fargate allows you to have a completely serverless architecture for your batch jobs. With Fargate, every job receives the exact amount of CPU and memory that it requests.

We are going to create a managed video encoding pipeline using AWS Batch with FFmpeg in container images. For example, you will be able to perform a simple transmuxing operation, add a silent audio track, extract an audio or video track, change the video container file, concatenate video files, generate thumbnails, or create a timelapse. As a starting point, this pipeline uses the Intel (C5), Graviton (C6g), Nvidia (G4dn), AMD (C5a, M5a), and Fargate instance families.

The architecture includes 5 key components:

Container images are stored in Amazon Elastic Container Registry (Amazon ECR). Each container includes an FFmpeg library with a Python wrapper. Container images are specialized per CPU/GPU architecture: ARM64, x86-64, and NVIDIA.

AWS Batch is configured with a queue and compute environment per CPU architecture. AWS Batch schedules job queues using Spot Instance compute environments only, to optimize cost.
Customers submit jobs through AWS SDKs with the ‘SubmitJob’ operation or use the Amazon API Gateway REST API to easily submit a job with any HTTP library.
All media assets ingested and produced are stored in an Amazon Simple Storage Service (Amazon S3) bucket.

Observability is managed by Amazon CloudWatch and AWS X-Ray. All X-Ray traces are exported on Amazon S3 to benchmark which compute architecture is better for a specific FFmpeg command.

Prerequisites

You need the following prerequisites to set up the solution:

An AWS account with privileges to create AWS Identity and Access Management (IAM) roles and policies. For more information, see Overview of access management: Permissions and policies.
Latest version of AWS Cloud Development Kit (AWS CDK) with bootstrapping already done.
Latest version of Task.
Latest version of Docker.
Latest version of Python 3.

Deploy the solution with AWS CDK

To deploy the solution “AWS Batch with FFmpeg” on your account, complete the following steps:

Clone the GitHub repository https://github.com/aws-samples/aws-batch-with-ffmpeg

Execute this list of commands:

# Create a local Python virtual environment and install requirements

task venv

# Activate the Python virtual environment

source .venv/bin/activate

# Deploy the CDK stack

task cdk:deploy

# Collect AWS CloudFormation outputs from the stack

task env

# Build and push docker images for AMD64 processor architecture

task app:docker-amd64

# Build and push docker images for ARM64 processor architecture

task app:docker-arm64

# Build and push docker images for NVIDIA processor architecture

task app:docker-nvidia

AWS CDK outputs the new Amazon S3 bucket where you can upload and download video assets, and the Amazon API Gateway REST endpoint with which you can submit video jobs.

Use the solution

Once the “AWS Batch with FFmpeg” solution is installed, you can execute FFmpeg commands with the AWS SDKs, the AWS Command Line Interface (AWS CLI) or the API. The solution respects the typical syntax of the FFmpeg command described in the official documentation:

ffmpeg [global_options] {[input_file_options] -i input_url} … {[output_file_options] output_url} …

Parameters of the solution are:

global_options: FFmpeg global options described in the official documentation.

input_file_options: FFmpeg input file options described in the official documentation.

input_url: Amazon S3 URL synced to local storage and transformed to a local path by the solution.

output_file_options: FFmpeg output file options described in the official documentation.

output_url: AWS S3 url synced from the local storage to AWS S3 storage.

compute: Instance family used to process the media asset: intel, arm, amd, nvidia, or fargate.

name: metadata of this job for observability.

In this example, we use the AWS SDK for Python (Boto3) and we want to cut a specific part of a video. As a prerequisite, we uploaded a video in the Amazon S3 bucket created by the solution. Now, we complete the parameters below:

import boto3
import requests
from urllib.parse import urlparse
from aws_requests_auth.boto_utils import BotoAWSRequestsAuth

# AWS CloudFormation output of the Amazon S3 bucket created by the solution : s3://batch-ffmpeg-stack-bucketxxxx/
s3_bucket_url = "<S3_BUCKET>"

# Amazon S3 key of the media asset uploaded on the S3 bucket, to compute by FFmpeg command : test/myvideo.mp4
s3_key_input = "<MEDIA_ASSET>"

# Amazon S3 key of the result of the FFmpeg command : test/output.mp4
s3_key_output = "<MEDIA_ASSET>"

# EC2 instance family : `intel`, `arm`, `amd`, `nvidia`, `fargate`
compute = "intel"

job_name = "clip-video"

command = {
    "name": job_name,
    # "global_options": "",
    "input_url": s3_bucket_url + s3_key_input,
    # "input_file_options": "",
    "output_url": s3_bucket_url + s3_key_output,
    "output_file_options": "-ss 00:00:10 -t 00:00:15 -c:v copy -c:a copy",
}

Then, I submit the FFmpeg command with the AWS SDK for Python (Boto3):

batch = boto3.client("batch")

result = batch.submit_job(
    jobName=job_name,
    jobQueue="batch-ffmpeg-job-queue-" + compute,
    jobDefinition="batch-ffmpeg-job-definition-" + compute,
    parameters=command,
)

We can also submit the same FFmpeg command with the REST API through an HTTP POST method. I control access to this Amazon API Gateway API with IAM permissions:

# AWS Signature Version 4 Signing process with Python Requests
def apig_iam_auth(rest_api_url):
    domain = urlparse(rest_api_url).netloc
    auth = BotoAWSRequestsAuth(
        aws_host=domain, aws_region="<AWS_REGION>", aws_service="execute-api"
    )
    return auth

# AWS CloudFormation output of the Amazon API Gateway REST API created by the solution : https://xxxx.execute-api.xx-west-1.amazonaws.com/prod/
api_endpoint = "<API_ENDPOINT>"
auth = apig_iam_auth(api_endpoint)
url = api_endpoint + compute + '/ffmpeg'
response = requests.post(url=url, json=command, auth=auth, timeout=2)

By default, AWS Batch chooses an available EC2 instance type. If you want to override it, you can add the `nodeOverrides` property when you submit a job with the SDK:

instance_type = 'c5.large'

result = batch.submit_job(
    jobName=job_name,
    jobQueue="batch-ffmpeg-job-queue-" + compute,
    jobDefinition="batch-ffmpeg-job-definition-" + compute,
    parameters=command,
    nodeOverrides={
        "nodePropertyOverrides": [
            {
                "targetNodes": "0,n",
                "containerOverrides": {
                    "instanceType": instance_type,
                },
            },
        ]
    },
)

And with the REST API :

command['instance_type'] = instance_type
url = api_endpoint + compute + '/ffmpeg'
response = requests.post(url=url, json=command, auth=auth, timeout=2)

Metrics

AWS Customers also want to use this solution to benchmark the video encoding performance of Amazon EC2 instance families. This solution analyzes performance and video quality metrics with AWS X-Ray.

AWS X-Ray helps developers analyze and debug applications. With X-Ray, we can understand how our application and its underlying services are performing to identify and troubleshoot the cause of performance issues.

We defined 3 X-Ray segments: Amazon S3 download, FFmpeg Execution, and Amazon S3 upload.

In the AWS Management Console (AWS Systems Manager > Parameter Store), switch the AWS Systems Manager parameter /batch-ffmpeg/ffqm to TRUE. The video quality metrics PSNR, SSIM, and VMAF are then calculated by FFmpeg and exported as AWS X-Ray metadata and as a JSON file uploaded to the Amazon S3 bucket under the key prefix /metrics/ffqm.
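If you prefer to flip this parameter from code rather than the console, a minimal boto3 sketch (assuming the parameter already exists as a String parameter created by the stack) could look like this:

import boto3

ssm = boto3.client("ssm")

# Enable the video quality metrics (PSNR, SSIM, VMAF) for subsequent FFmpeg jobs.
ssm.put_parameter(
    Name="/batch-ffmpeg/ffqm",
    Value="TRUE",
    Type="String",
    Overwrite=True,
)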

All JSON files are crawled by an AWS Glue crawler. The crawler provides an Amazon Athena table against which you can run SQL queries to analyze the performance of the workload.
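As an illustration, such a query could be launched with boto3; the database, table, and column names below are hypothetical placeholders (the actual names depend on what the Glue crawler creates in your account), as is the S3 location used for query results:

import boto3

athena = boto3.client("athena")

# Hypothetical names: adjust to the database/table created by the Glue crawler
# and to an S3 location you own for Athena query results.
query = """
SELECT instance_type, AVG(duration) AS avg_duration_seconds
FROM batch_ffmpeg_db.ffqm_metrics
GROUP BY instance_type
ORDER BY avg_duration_seconds
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "batch_ffmpeg_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
print("Query execution id:", response["QueryExecutionId"])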

For example, we created a visual bar chart with Amazon QuickSight where our Athena table is our dataset. As shown in the chart here, for the job name compress-video launched with several instance types, the most efficient instance type is c5a.2xlarge.

Extend the solution

You can customize and extend the solution as you want. For example, you can customize the FFmpeg Docker image by adding libraries or upgrading the FFmpeg version; all Dockerfiles are located in application/docker-images/. You can also customize the list of Amazon EC2 instance types used by the solution to optimize performance by updating the CDK stack located in cdk/batch_job_ffmpeg_stack.py.

Cost

There is no additional charge for AWS Batch. You pay only for AWS resources created to store assets and run the solution. We use Spot instances to optimize the cost. With metrics provided by AWS X-Ray, you can benchmark all instances to find the best one for your use case.

Cleanup

To avoid incurring unnecessary charges, clean up the resources you created for testing this solution.

Delete all objects in the Amazon S3 bucket.
Inside the Git repository, execute this command in a terminal: task cdk:destroy

Summary

In this post, we covered the process of setting up an FFmpeg workflow managed by AWS Batch. The solution includes an option to benchmark the video encoding performance of Amazon EC2 instance families. The solution is managed by AWS Batch, a scalable, and cost effective service using EC2 Spot instances.

This solution is open source and available at https://github.com/aws-samples/aws-batch-with-ffmpeg/. You can give us feedback through GitHub issues.

Transformers: JavaScript in Disguise

#630 — March 17, 2023

Read on the Web

JavaScript Weekly

Transformers.js: Running ML Models in the Browser — Transformers are a type of machine learning model often used for natural language or visual processing, and while running such models directly in the browser is in its infancy, Transformers.js opens up some ML models to you, with some impressive demos here.

Xenova

Celebrating 10 Years of Electron — It feels like Electron pops up everywhere (Slack, Spotify, VS Code, and more), so it might feel surprising it's only been with us for a decade. Slack and Electron developer Erick Zhao gives thanks to Electron's developers and the community, shares a bit of Electron-related history, and reassures us Electron is still going strong.

Erick Zhao

Dynaboard: A Visual Web App IDE Made for Developers — Build high performance public and private web applications in a collaborative — full-stack — development environment.

Dynaboard sponsor

Announcing TypeScript 5.0 — Note that TypeScript doesn't follow semantic versioning, so this is as much a 'major' release as 4.9 was… but 5.0 looks cool anyway. This release of the typed JavaScript superset is packed with features like decorators, improved ESM project support for Node and bundlers, const type parameters, and more.

Daniel Rosenwasser (Microsoft)

Turbowatch: File Change Detector and Task Orchestrator — Not just that but it claims to be extremely fast and “if you ever wanted something like Nodemon but more capable, then you are at the right place.” This looks very promising and the README is full of examples.

Gajus Kuizinas

IN BRIEF:

BREAKING NEWS: The JS Party podcast has just dropped an episode called ▶️ The Future of React – so new, we haven’t listened to it, but it features Dan Abramov and Joe Savona so may make for good weekend listening..

“The most dangerous command you run every day: npm install” says Socket, who are introducing what they call ‘safe npm’, a transparent wrapper around npm designed to, well, make it less dangerous.

CORRECTION: In issue 627 we suggested the ECMAScript 2023 spec had entered a new draft stage. TC39 member Jordan Harband pointed out to us that it has been in such a state for some time. “There’s still a stage 4 PR not yet merged,” he noted, but there will be some progress in the next month.

Defer is a new ‘zero-infrastructure’ background jobs platform for Node.js apps.

Recently we linked to ???? Dittytoy, a fun online JavaScript environment for audio coding/experiments. Someone has somehow implemented an entire Commodore 64 SID synthesizer in it!

Developer Day: A Front-Row Seat to What's New with Retool

Retool sponsor

RELEASES:

Node.js v19.8.0/1 (Current)

Jasmine 4.6
↳ Testing framework for browsers and Node.

pm2 5.3
↳ Popular Node production process manager.

Mongoose 7.0
↳ Popular MongoDB ODM for Node.js.

ESLint 8.36

Articles & Tutorials

Chrome 111 Gains a ‘View Transition’ Feature for SPAs — The View Transition API is only supported by Chrome so far, but allows easy animated page transitions within single-page apps (demo here). Luckily it suits progressive enhancement so you can start using it right now without feeling too guilty 😉 Multi-page app support is forthcoming.

Jake Archibald (Chrome Developers)

Create and Download Text Files with JavaScript — If you want your code to be able to generate a text (such as JSON) file on the fly and have it downloaded by the user’s browser, it’s reasonably easy.

Amit Merchant

Five Mistakes I Made When Starting My First React Project — Richard shares his early React mistakes with the hope you can learn from his misfortunes. He tackles topics like using defaultProps, propTypes, and class components.

Richard Oliver Bray

Too Much Tech Debt in Your node_modules? Our Team of JS Devs Can Help — We are a team of senior software engineers who specialize in tech debt. Let us modernize your JavaScript stack ????

UpgradeJS.com | JavaScript Upgrade Services sponsor

Progressively Enhancing a Table with a Web Component — Building a web component wrapper to add table sorting.

Raymond Camden

Shell-Free Node.js Scripting with Execa 7.1 — Execa is a popular process execution library for Node, and the latest version includes an interesting $ method feature for writing zx-style scripts with it, making it even more useful for shell-scripting-style use cases.

ehmicky

What is Vite and Why Use It Over Create React App?

Luke Twomey

Pointers on Upgrading from Cypress v9 to v12

Gleb Bahmutov

How to Use v-model with Form Inputs in Vue

Dmitri Pavlutin

How to Create and Use Path Aliases in TypeScript Imports with Vite

Hasibul Hasan

What Is Deno and How to Use Its Sandbox?

Roman Zaynetdinov

Code & Tools

Template: A Simple Framework for Webapps — The author built it for his own projects, but notes: “It’s a joy to work in, feels “frameworky” but it’s just web standards with <100 lines of convenience JS wrapped around it. There is no magic beyond what the browser provides – I like it that way.” We do too.

William Blankenship

React ProseMirror: Integrate the ProseMirror Editor with React — ProseMirror is a toolkit for building rich text editors for the Web.

The New York Times

Breakpoints and console.log Is the Past, Time Travel Is the Future — 15x faster JavaScript debugging than with breakpoints and console.log, now with support for Vitest.

Wallaby.js sponsor

Fable 4.0: F# to JavaScript Compiler — If you fancy F#’s flavor of almost-entirely-functional development, this could be for you. GitHub repo.

Fable

MiniSearch: Small In-Memory Fulltext Search Engine for Browser and Node — The strength is that the indexed data is stored locally, allowing it to work offline and giving good performance, as seen in this demo.

Luca Ongaro

css-variable: Tiny Treeshakable Library to Define CSS Custom Properties in JS — Compatible with popular CSS-in-JS libraries like Emotion, styled-components, Linaria, etc., and it boasts better CSS minification and smaller virtual DOM updates, among other features.

Jan Nicklas

Tremor 2.0: The React Library to Build Dashboards Fast — Provides an array of modular components to build data-driven dashboards. v2.0 is the “first step towards a production-ready version of Tremor” and sees a full switch to Tailwind CSS. Homepage.

Tremor Labs

Stable Diffusion Plugin for Photoshop — Writing code that worked with Adobe’s weird JS variant was ghastly, but this uses their new ‘UXP’ based approach, so is interesting enough for that alone. This plugin also opens up the Stable Diffusion generative art system to Photoshop users.

Abdullah Alfaraj

Flexboard: A React Component Library for Resizable Sidebars — Try the live example. The code allows you to set min/max sizes for the resizable parts of the layout.

Dorbus

Jobs

Full Stack JavaScript Engineer @ Emerging Cybersecurity Startup — Small team/big results. Fun + flexible + always interesting. Come build our award-winning, all-in-one cybersecurity platform.

Defendify

Software Engineer (Frontend) — Join our “kick ass” team. Our software team operates from 17 countries and we’re always looking for more exceptional engineers.

Sticker Mule

Find JavaScript Jobs with Hired — Hired makes job hunting easy: instead of chasing recruiters, companies approach you with salary details up front. Create a free profile now.

Hired

Got a job listing to share? Here's how.

Fuite 2.0
↳ Tool for finding memory leaks in web apps.

wavesurfer.js 6.6
↳ Navigable waveform built on Web Audio & canvas.

Svelte-Inview 4.0
↳ Svelte action that monitors when an element enters/leaves the viewport.

Discord.js 14.8
↳ Library for using the Discord chat API.

Plotly.js 2.20
↳ Powerful charting library. (Examples.)

Recharts 2.5
↳ React + D3 charting library. (Examples.)

deepmerge 4.3.1
↳ Merges the enumerable properties of objects.

Vue Testing Library 7.0

React Table Library 4.1

Custom ERP System: Benefits, Requirements & Cost of Development

If you are looking for an effective and efficient way to manage your business, then a custom enterprise resource planning (ERP) system is the perfect solution. Custom ERP systems allow businesses to collect, store, and analyze information from several departments in one database, making it easier for executives to manage all fundamental business operations. Furthermore, the implementation of such software optimizes various workflows and processes. While there are various commercially available systems, custom ERP development ensures that you get a solution tailored specifically to your needs.

There is always room for improvement, even if your product is excellent. This is especially true for businesses looking for a custom ERP solution. The increasing need for specific software solutions serves as evidence of the demand for unique ERP systems.

If you are considering an ERP system for your business, ask yourself when making this important decision: What are the benefits of a custom ERP system? How does a custom ERP system compare to an off-the-shelf version? Is it the right fit for my business needs? Whatever you decide, it's important to choose the right system for your business. In this article, we'll cover all of the above aspects of developing custom ERP systems and provide some helpful tips on how to choose the right one. Let's dive into the details.

What is custom ERP?

ERP stands for Enterprise Resource Planning, a sophisticated software package that automates business operations procedures. It aids businesses in optimizing efficiency and accuracy by automating data input and offering real-time visibility into their operations, consequently reducing the time and costs associated with managing operations.

An Off-the-shelf (OTS) ERP system is a pre-packaged, ready-to-deploy solution that comes with verified capabilities. It is designed to meet the needs of a wide range of businesses, however, its “one-size-fits-all” approach may not be the best solution for addressing specific challenges.

A custom ERP system (as opposed to an OTS solution) is designed around the unique requirements of a particular business. It offers a complete solution that unifies all business operations into a single platform built to match that business's specific needs.

How Custom ERPs Differ From Off-the-Shelf ERPs

Custom ERPs are created especially for the individual or business that requires them, in contrast to off-the-shelf ERPs. This makes it possible to develop a solution that is more tailored to the unique requirements of the customer and is also more effective, efficient, and affordable. Because custom ERPs can be upgraded and modified over time, they typically allow for more flexibility and scalability. Moreover, custom ERPs may be integrated with other systems to increase their versatility. Off-the-shelf ERPs, on the other hand, are pre-packaged, largely unchangeable systems with frequently limited scalability.

Usage of custom ERP

Custom ERP systems are produced to assist businesses in managing their operations more effectively. Because they are customized to the particular requirements of the organization, they are more adaptable and able to offer a greater level of control. Sales, inventory, financial and operational procedures, customer support, and more can all be managed by custom ERP systems. Moreover, they can provide data to help organizations make wiser decisions. Customer Relationship Management (CRM), Human Resources (HR), supply chain management, and other business applications are frequently incorporated into custom ERP systems. This facilitates data access and the automation of numerous business operations. Custom ERP systems can give businesses a complete picture of their operations and can markedly improve employee motivation, customer experience, and revenue.

Benefits of custom ERP

The main benefit of having custom software is that it focuses on the specific needs and resources of your business. It is essential to be able to effectively incorporate and monitor all of the business's data management and workflows from a single platform. These are some of the benefits of possessing a custom ERP system:

Benefit #1: Improved System Performance and Accessibility 

A custom ERP aims to be as effective and efficient as possible while offering greater application availability than other systems. Thanks to its modular architecture, businesses can easily adapt the software to their changing demands and avoid spending money on features that deliver little value.

Benefit #2: Balance Between Integration & Specification

With a custom ERP system, businesses gain an integrated workflow across departments, which leads to better connections between buyers and suppliers. Because custom ERP systems are created to match the specific needs of each business, different departments and teams can each have their own tailored view of the system. By designing and building a custom ERP adapted to the company’s particular system needs, workflow, and applications, software developers can ensure the system suits the precise specifications and circumstances of the company’s internal divisions and departments.

Benefit #3: No Need for Modifications

Businesses may save time and effort by using custom ERP systems instead of converting their current database architectures, applications, and tech stacks to the new system. Software development businesses may offer a ‘plug and play’ system by designing a custom ERP that suits the company’s current infrastructure, networks, capabilities, and resources, allowing enterprises to swiftly adopt the new system and begin profiting from its features.

Benefit #4: Custom Solutions for Your Business Needs

Custom ERP systems are created to deal with issues frequently encountered by businesses. The developers of such systems build the framework, features, and functionalities based on what they’ve picked up from past customers. This is a major plus point since it draws from the knowledge of many clients and businesses. 

Benefit #5: Automation Without Workflow Changes

Custom ERP can automate multiple operations within the company’s existing systems, databases, infrastructure, networks, and applications, without requiring extensive changes or modifications. This ensures seamless integration while reducing any gaps between the company’s existing systems and the custom ERP. One important task that can be automated with a custom ERP is periodic report generation, which makes it simpler to keep track of business activities.

Benefit #6: Improved User Experience and Customer Interface

Without a custom ERP system, many companies suffer from a lack of cohesion between their frontend website and backend database, middleware, and application systems. Even with a custom ERP system, plug-and-play capabilities for productivity applications and workflow systems are not always available. Software development companies can provide a solution to this issue by creating custom ERP frameworks and systems that allow for easy integration of existing infrastructure, workflow processes, and productivity applications.

Benefit #7: Monitor Performance with One Reporting System

Gain total visibility into all business processes, from Finance and Accounts Management and Human Resources to Manufacturing, Marketing and Sales, and Supply Chain and Warehouse Management. Automate departmental workflows and track each department’s activities with a single reporting system, allowing for easy analysis of performance statistics and assurance that nothing is overlooked.

Benefit #8: Customized Reports & BI

Business is all about understanding the past and using that knowledge to make better decisions in the present. Statistics and analytics are essential for this, as they provide the data and insights needed to make informed choices. With a custom ERP, you can tailor reports to your exact needs and integrate your business intelligence tools to gain even more statistics and insights.

Benefit #9: Boost Sales Through Enhanced CRM

By deploying a custom ERP system, businesses can build a CRM that fits their unique demands and operations, making their sales teams more efficient and productive. Several businesses are already moving toward consumer-level ERP solutions. By creating an ERP system that incorporates customer and CRM data, a business can expand its marketing efforts and improve the conversion of leads into sales. Full integration of the custom ERP with all of the business’s sales platforms and assets enables smooth cooperation between the many departments involved in sales. With this technology in place, sales teams can use customer behavior and purchase history information more effectively.

How to Formulate Clear Requirements for Custom ERP Development

Below are the key factors to consider when creating a custom ERP system that meets your individual needs. They include outlining your core requirements, considering how the system will fit with existing processes, and determining its scalability. With this information, you can build the ideal custom ERP system for your business with confidence.

Functionality 

If you are looking to implement an ERP system, it is necessary to consider the functionality it needs to include. To make this process easier, examine the features of your current system and identify what can be improved. It is also beneficial to discuss with your team and ask questions such as:

What processes can be automated for resource optimization?
What visibility and reporting does each department require? Which information do different departments need access to?
Why is a custom ERP system being considered in the first place? 

Answering these questions will help you determine the functionality that needs to be included and shape the user experience down the line.

Industry Research

Success depends on staying current with developments in your industry. Research your most successful competitors and the top-performing apps in your sector, and learn about the benefits and potential pitfalls of these technologies by watching demos and reading reviews. To ensure that your expectations are realistic and your custom ERP system meets the specific needs of your business, it is best to engage an experienced team of custom ERP development professionals to conduct a comprehensive market analysis.

Integration Requirements

The key to successful development is integrating your custom ERP system with external applications. The development team will have a clear understanding of the project scope if you list all of the software applications that each department in your company uses. Systems for managing client relationships, inventories, online storefronts, project management tools, and accounting software are examples of apps that should be on the list. With this information available, your development team will be better equipped to provide accurate time and cost estimates.

Deployment Type

Businesses must choose between an on-premises and a cloud option when considering custom ERP development. On-premise software is installed on the company’s hardware, while cloud-based systems are hosted on the vendor’s servers and accessed online. In the past, most companies opted for on-premise ERP systems, but cloud solutions have become increasingly popular due to the numerous advantages they offer:

faster implementation, 
lower initial costs, 
less responsibility for security, 
easier collaboration between teams, 
automated updates provided by the vendor.

Cost of ERP Software Development 

Custom ERP solutions can range significantly in cost depending on the complexity and scope of the project. Factors that influence the total cost include the size of the development team, the features and integrations required, and the duration of the project. Generally, custom ERP development projects can cost anywhere between $25,000 and $5,000,000, making them a more expensive option than the average third-party resource planning system, which typically starts at around $9,000 per user for mid-sized and large-scale enterprises. The most crucial factors are:

Size of Company

The size of your business will affect the amount of time and effort needed to create an ERP system. As the size of your business grows, the demand for a robust and comprehensive custom ERP system increases. This entails a more complex organizational structure, various business processes, and the need to integrate multiple software products across departments. To meet these demands, an experienced ERP development provider must be engaged to create a custom ERP system with all the required modules, integrations, and features. This will ensure that the custom ERP system is tailored to meet the specific needs of your business, no matter how large it may be.

Tech Stack 

When considering custom ERP development, there is a distinction between open-source software and proprietary solutions. Open-source software is free to use, while proprietary solutions require paying the provider a subscription fee for the platform. Additionally, the more sophisticated the technology, the more costly the developer’s time will be. When selecting the right custom ERP solution for your business, it is important to weigh the cost implications of open-source versus proprietary solutions, as well as the complexity of the technology, to ensure you get the most suitable system for your needs.

Development Team 

For a comprehensive and versatile custom ERP system, it is necessary to hire a larger team of specialists, which helps shorten the overall implementation timeline. Generally, a medium-complexity custom ERP solution costs around $40,000 per module, and testing, deployment, and data migration together add roughly the cost of another module; so for a custom ERP system with 5 modules, the total will be approximately $250,000. To keep your custom ERP system cost-efficient, it is important to accurately estimate the amount of time and resources that will be needed.

Summary

Custom ERP development allows companies to have a tailored system that meets their specific needs, allowing them to manage resources more efficiently, streamline their workflows, and modernize the enterprise. By investing in a customized solution, companies can ensure that their unique requirements are taken into account, as well as facilitate the adaptation of staff to the new system and avoid overspending on unnecessary features. 

Custom ERP with Flatlogic

Flatlogic Platform offers an easy way to generate a custom ERP solution with full control over the source code. That control lets you make sure you have the right features, scalability, and performance to match your business needs. And with no-code development, you don’t need to be an expert programmer to make the necessary changes, which makes it easier to scale and customize the system as your business grows. With Flatlogic Platform, you have the flexibility to create a custom ERP solution tailored to your needs while keeping the scalability of more traditional development.

How to Create Custom ERP with Flatlogic Platform?

Using the Flatlogic Full-Stack Generator, you can create CRUD and static applications in a few minutes. To start using the Platform, register on the Flatlogic website by clicking the “Sign in” button in the header.

Step 1. Choosing the Tech Stack

In this step, you’re setting the name of your application and choosing the stack: Frontend, Backend, and Database.

Step 2. Choosing the Starter Template

In this step, you’re choosing the design of the web app.

Step 3. Schema Editor

In this step, you can create your database schema from scratch, import an existing schema or select one of the suggested schemas. 

To import your existing database, click the Import SQL button and select your .sql file. After that, your database will be opened in the Schema Editor where you can further edit your data (add/edit/delete entities).

If you are not familiar with database design and find it difficult to understand what tables are, we have prepared some ready-made sample schemas of real applications that you can modify for your application:

E-commerce app;
Time tracking app;
Book store;
Chat (messaging) app;
Blog.

Alternatively, you can have a database schema generated from a description by clicking the “Generate with AI” button. Type the application’s description in the text area and hit “Send”; the application’s schema will be ready in around 15 seconds. You can either deploy immediately or review the structure and make manual adjustments.

Next, you can connect your GitHub and push your application code there. Or skip this step by clicking the Finish and Deploy button and in a few minutes, your application will be generated.


Building Automation for Fraud Detection Using OpenSearch and Terraform

Organizations that interface with online payments are continuously monitoring and guarding against fraudulent activity. Transactional fraud usually presents itself as discrete data points, making it challenging to identify multiple actors involved in the same group of transactions. Even a single actor operating over a period of time can be hard to detect. Visibility is key to prevent fraud incidents from occurring and to give meaningful knowledge of the activities within your environment to data, security, and operations engineers.

Understanding the connections between individual data points can reduce the time it takes customers to detect and prevent fraud. You can use a graph database to store transaction information along with the relationships between individual data points. Analyzing those relationships through a graph database can uncover patterns that are difficult to identify with relational tables. Fraud graphs enable customers to find common attributes between transactions, such as phone numbers, locations, and origin and destination accounts. Combining fraud graphs with full-text search provides further benefits, as it can simplify analysis and integration with existing applications.
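As a concrete illustration of the kind of question a fraud graph answers, the following is a minimal sketch of a Gremlin traversal written with the gremlin_python client. It assumes a hypothetical graph model in which transaction vertices connect to phone vertices through “uses” edges; the endpoint, labels, and property names are illustrative and are not taken from the solution’s dataset.

# Minimal sketch: find phone numbers shared by more than one transaction.
# The endpoint, labels ('phone', 'uses'), and properties ('number',
# 'transactionId') are hypothetical placeholders.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.traversal import P
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

conn = DriverRemoteConnection('wss://your-neptune-endpoint:8182/gremlin', 'g')
g = traversal().withRemote(conn)

shared_phones = (
    g.V().hasLabel('phone')                              # start from phone vertices
     .where(__.in_('uses').count().is_(P.gt(1)))         # keep phones used by 2+ transactions
     .project('phone', 'transactions')
     .by('number')
     .by(__.in_('uses').values('transactionId').fold())
     .toList()
)
print(shared_phones)
conn.close()

Traversals like this work well for graph-shaped questions asked directly against Neptune; the replication into OpenSearch Service that this solution adds is what makes the same data reachable from search-oriented tooling and existing applications.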

In our solution, financial analysts can upload graph data, which gets automatically ingested into the Amazon Neptune graph database service and replicated into Amazon OpenSearch Service for analysis. Data ingestion is automated with Amazon Simple Storage Service (Amazon S3) and Amazon Simple Queue Service (Amazon SQS) integration, and data replication is done through AWS Lambda functions with AWS Step Functions for orchestration. The design uses open source tools and AWS managed services, and is available in the neptune-fraud-detection-with-opensearch GitHub repository (https://github.com/aws-samples/neptune-fraud-detection-with-opensearch) under the MIT-0 license. You will use Terraform and Docker to deploy the architecture, and you will be able to send search requests to the system to explore the dataset.

Solution overview

This solution takes advantage of native integration between AWS services for scalability and performance, as well as the Neptune-to-OpenSearch Service replication pattern described in Neptune’s official documentation.

Figure 1: An architectural diagram that illustrates the infrastructure state and workflow as defined in the Terraform templates.

The process for this solution consists of the following steps, also shown in the architecture diagram here:

A financial analyst uploads graph data files to an Amazon S3 bucket.

Note: The data files are in a Gremlin load data format (CSV) and can include vertex files and edge files.

The action of the upload invokes a PUT object event notification with a destination set to an Amazon SQS queue.
The SQS queue is configured as an AWS Lambda event source, which invokes a Lambda function.
This Lambda function sends an HTTP request to an Amazon Neptune database to load the data stored in the S3 bucket (a simplified sketch of this call is shown after this list).
The Neptune database reads data from the S3 endpoint defined in the Lambda request and loads the data into the graph database.
An Amazon EventBridge rule is scheduled to run every 5 minutes. This rule targets an AWS Step Functions state machine to create a new execution.
The Neptune Poller step function (state machine) replicates the data in the Neptune database to an OpenSearch Service cluster.
Note: The Neptune Poller step function is responsible for continually syncing new data after the initial data upload using Neptune Streams.

Users can access the data replicated from the Neptune database through Amazon OpenSearch Service.
Note: A Lambda function is invoked to send a search request or query to an OpenSearch Service endpoint to get results.
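To make step 4 more concrete, here is a simplified, hypothetical sketch of what a Lambda handler calling Neptune’s bulk loader endpoint might look like. The actual function shipped in the repository may differ; the environment variable names are assumptions, while the request shape follows Neptune’s documented /loader API.

# Hypothetical sketch of the Lambda in step 4: ask Neptune's bulk loader
# to ingest the Gremlin CSV files that landed in S3. The environment
# variable names below are assumptions, not the repository's actual names.
import json
import os
import urllib.request

NEPTUNE_ENDPOINT = os.environ["NEPTUNE_ENDPOINT"]          # cluster endpoint
LOADER_ROLE_ARN = os.environ["NEPTUNE_LOADER_ROLE_ARN"]    # role Neptune assumes to read S3
AWS_REGION = os.environ.get("AWS_REGION", "us-west-2")

def handler(event, context):
    results = []
    # Each SQS record wraps an S3 event notification for an uploaded object.
    for record in event["Records"]:
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            payload = {
                "source": f"s3://{bucket}/{key}",
                "format": "csv",                 # Gremlin load data format
                "iamRoleArn": LOADER_ROLE_ARN,
                "region": AWS_REGION,
                "failOnError": "FALSE",
            }
            req = urllib.request.Request(
                f"https://{NEPTUNE_ENDPOINT}:8182/loader",
                data=json.dumps(payload).encode("utf-8"),
                headers={"Content-Type": "application/json"},
                method="POST",
            )
            # If IAM database authentication is enabled on the cluster,
            # this request must additionally be SigV4-signed.
            with urllib.request.urlopen(req) as resp:
                results.append(json.loads(resp.read()))
    return {"loads": results}

Note that the loader API is asynchronous: a successful response only confirms that the load job was accepted, not that it has finished.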

Prerequisites

To implement this solution, you must have the following prerequisites:

An AWS account with local credentials configured. For more information, check the documentation on configuration and credential file settings.
The latest version of the AWS Command Line Interface (AWS CLI).
An IAM user with Git credentials.
A Git client to clone the source code provided.
A Bash shell.
Docker installed on your local machine.
Terraform installed on your local machine.

Deploying the Terraform templates

The solution is available in this GitHub repository with the following structure:

data: Contains a sample dataset to be used with the solution for demonstration purposes. Information on fictional transactions, identities and devices is represented in files within the nodes/ folder, and relationships between them are represented in files in the edges/ folder.
terraform: This folder contains the Terraform modules to deploy the solution.
documents: This folder contains the architecture diagram image file of the solution.

Create a local directory called NeptuneOpenSearchDemo and clone the source code repository:

mkdir -p $HOME/NeptuneOpenSearchDemo

cd $HOME/NeptuneOpenSearchDemo

git clone https://github.com/aws-samples/neptune-fraud-detection-with-opensearch.git

Change directory into the Terraform directory:

cd $HOME/NeptuneOpenSearchDemo/neptune-fraud-detection-with-opensearch/terraform

Make sure that the Docker daemon is running:

docker info

If the previous command outputs an error that is unable to connect to the Docker daemon, start Docker and run the command again.

Initialize the Terraform folder to install required providers:

terraform init

The solution is deployed in us-west-2 by default. You can change this behavior by modifying the "region" variable in the variables.tf file.

Deploy the AWS services:

terraform apply -auto-approve

Note: Deployment will take around 30 minutes due to the time necessary to provision the Neptune and OpenSearch Service clusters.

To retrieve the name of the S3 bucket to upload data to:

aws s3 ls | grep "neptunestream-loader.*[0-9]$"

Upload the node and edge data to the S3 bucket obtained in the previous step (replace the bucket name in the example with your own):

aws s3 cp $HOME/NeptuneOpenSearchDemo/neptune-fraud-detection-with-opensearch/data s3://neptunestream-loader-us-west-2-123456789012 --recursive

Note: This is a sample dataset, created from the IEEE-CIS Fraud Detection dataset, for demonstration purposes only.

Test the solution

After the solution is deployed and the dataset is uploaded to S3, the dataset can be retrieved and explored through a Lambda function that sends a search request to the OpenSearch Service cluster.

Confirm the Lambda function that sends a request to OpenSearch was deployed correctly:

aws lambda get-function --function-name NeptuneStreamOpenSearchRequestLambda --query 'Configuration.[FunctionName, State]'

Invoke the Lambda function to see all records present in OpenSearch that are added from Neptune:

aws lambda invoke --function-name NeptuneStreamOpenSearchRequestLambda response.json

The results of the Lambda invocation are stored in the response.json file. This file contains the total number of records in the cluster and all records ingested up to that point. The solution stores records in the index amazon_neptune. An example of a node with device information looks like this:

{
  "_index": "amazon_neptune",
  "_type": "_doc",
  "_id": "1fb6d4d2936d6f590dc615142a61059e",
  "_score": 1.0,
  "_source": {
    "entity_id": "d3",
    "document_type": "vertex",
    "entity_type": [
      "vertex"
    ],
    "predicates": {
      "deviceType": [
        {
          "value": "desktop"
        }
      ],
      "deviceInfo": [
        {
          "value": "Windows"
        }
      ]
    }
  }
}
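If you prefer to query the replicated index directly instead of going through the request Lambda, a minimal sketch using the opensearch-py client could look like the following. The domain endpoint is an assumption, the field names follow the example document above, and the SigV4 signing shown assumes the domain uses IAM-based access control.

# Hypothetical sketch: query the amazon_neptune index directly.
# The endpoint is a placeholder; field names mirror the example document.
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

region = "us-west-2"
host = "your-opensearch-domain-endpoint.us-west-2.es.amazonaws.com"

credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, region, "es")

client = OpenSearch(
    hosts=[{"host": host, "port": 443}],
    http_auth=auth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

# Find vertex documents whose deviceInfo predicate is 'Windows'.
query = {"query": {"match": {"predicates.deviceInfo.value": "Windows"}}}
response = client.search(index="amazon_neptune", body=query)
print(response["hits"]["total"])
for hit in response["hits"]["hits"]:
    print(hit["_source"]["entity_id"], hit["_source"]["predicates"])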

Cleaning up

To avoid incurring future charges, clean up the resources deployed in the solution:

terraform destroy --auto-approve

The command will output information on resources being destroyed.

Destroy complete! Resources: 101 destroyed.

Conclusion

Fraud graphs are complementary to other techniques organizations can use to detect and prevent fraud. The solution presented in this blog post reduces the time financial analysts would take to access transactional data by automating data ingestion and replication. It also improves performance for systems with growing volumes of data when compared to executing a large number of insert statements or other API calls.