Integrating DevOps Guru Insights with CloudWatch Dashboard

Many customers use Amazon CloudWatch dashboards to monitor applications and often ask how they can integrate Amazon DevOps Guru insights to get a unified monitoring dashboard. This blog post shows how to add DevOps Guru proactive and reactive insights to a CloudWatch dashboard by using Custom Widgets. Displaying related data from different sources side by side helps you correlate trends over time, spot issues more efficiently, and get a single-pane-of-glass view in the CloudWatch dashboard.

Amazon DevOps Guru is a machine learning (ML) powered service that helps developers and operators automatically detect anomalies and improve application availability. DevOps Guru's anomaly detectors can proactively detect anomalous behavior before it causes issues, and detailed insights provide recommendations to mitigate that behavior.

Amazon CloudWatch dashboard is a customizable home page in the CloudWatch console that monitors multiple resources in a single view. You can use CloudWatch dashboards to create customized views of the metrics and alarms for your AWS resources.

Solution overview

This post will help you create a Custom Widget for your Amazon CloudWatch dashboard that displays DevOps Guru insights. A custom widget is part of your CloudWatch dashboard that calls an AWS Lambda function containing your custom code. The Lambda function accepts custom parameters, generates your dataset or visualization, and then returns HTML to the CloudWatch dashboard, which displays this HTML as a widget. In this post, we provide sample code for a Lambda function that calls DevOps Guru APIs to retrieve the insights information and displays it as a widget in the CloudWatch dashboard. The architecture diagram of the solution is below.

Figure 1: Reference architecture diagram

Prerequisites and Assumptions

An AWS account. To sign up, create an AWS account. For instructions, see Sign Up For AWS.

DevOps Guru enabled in the account. For instructions, see DevOps Guru Setup.

Follow this Workshop to deploy a sample application in your AWS Account which can help generate some DevOps Guru insights.

Solution Deployment

We are providing two options to deploy the solution – using the AWS console and AWS CloudFormation. The first section has instructions to deploy using the AWS console followed by instructions for using CloudFormation. The key difference is that we will create one Widget while using the Console, but three Widgets are created when we use AWS CloudFormation.

Using the AWS Console:

We will first create a Lambda function that will retrieve the DevOps Guru insights. We will then modify the default IAM role associated with the Lambda function to add DevOps Guru permissions. Finally we will create a CloudWatch dashboard and add a custom widget to display the DevOps Guru insights.

Navigate to the Lambda console after logging in to your AWS account and click Create function.

Figure 2a: Create Lambda Function

Choose Author from Scratch and use the runtime Node.js 16.x. Leave the rest of the settings at default and create the function.

Figure 2b: Create Lambda Function

After a few seconds, the Lambda function is created and you will see a code source box. Copy the code from the text box below and replace the code in the code source editor, as shown in the screenshot below.

// SPDX-License-Identifier: MIT-0
// CloudWatch Custom Widget sample: displays count of Amazon DevOps Guru Insights
const aws = require('aws-sdk');

const DOCS = `## DevOps Guru Insights Count
Displays the total counts of Proactive and Reactive Insights in DevOps Guru.
`;

async function getProactiveInsightsCount(DevOpsGuru, StartTime, EndTime) {
    let NextToken = null;
    let proactivecount = 0;

    do {
        // List proactive insights in the dashboard's time range, paging through results
        const args = { StatusFilter: { Any: { StartTimeRange: { FromTime: StartTime, ToTime: EndTime }, Type: 'PROACTIVE' } } };
        if (NextToken) {
            args.NextToken = NextToken;
        }
        const result = await DevOpsGuru.listInsights(args).promise();
        console.log(result);
        NextToken = result.NextToken;
        result.ProactiveInsights.forEach(res => {
            proactivecount++;
        });
    } while (NextToken);
    return proactivecount;
}

async function getReactiveInsightsCount(DevOpsGuru, StartTime, EndTime) {
    let NextToken = null;
    let reactivecount = 0;

    do {
        // List reactive insights in the dashboard's time range, paging through results
        const args = { StatusFilter: { Any: { StartTimeRange: { FromTime: StartTime, ToTime: EndTime }, Type: 'REACTIVE' } } };
        if (NextToken) {
            args.NextToken = NextToken;
        }
        const result = await DevOpsGuru.listInsights(args).promise();
        NextToken = result.NextToken;
        result.ReactiveInsights.forEach(res => {
            reactivecount++;
        });
    } while (NextToken);
    return reactivecount;
}

function getHtmlOutput(proactivecount, reactivecount, region, event, context) {
    // The returned HTML is rendered directly inside the custom widget
    return `DevOps Guru Proactive Insights<br><font size="+10" color="#FF9900">${proactivecount}</font>
<p>DevOps Guru Reactive Insights</p><font size="+10" color="#FF9900">${reactivecount}</font>`;
}

exports.handler = async (event, context) => {
    if (event.describe) {
        return DOCS;
    }
    // Use the dashboard's (possibly zoomed) time range for the insight query
    const widgetContext = event.widgetContext;
    const timeRange = widgetContext.timeRange.zoom || widgetContext.timeRange;
    const StartTime = new Date(timeRange.start);
    const EndTime = new Date(timeRange.end);
    const region = event.region || process.env.AWS_REGION;
    const DevOpsGuru = new aws.DevOpsGuru({ region });

    const proactivecount = await getProactiveInsightsCount(DevOpsGuru, StartTime, EndTime);
    const reactivecount = await getReactiveInsightsCount(DevOpsGuru, StartTime, EndTime);

    return getHtmlOutput(proactivecount, reactivecount, region, event, context);
};

Figure 3: Lambda Function Source Code

Click on Deploy to save the function code
Since we used the default settings while creating the function, a default Execution role is created and associated with the function. We will need to modify the IAM role to grant DevOps Guru permissions to retrieve Proactive and Reactive insights.
Click on the Configuration tab and select Permissions from the left side option list. You can see the IAM execution role associated with the function as shown in figure 4.

Figure 4: Lambda function execution role

Click on the IAM role name to open the role in the IAM console. Click on Add Permissions and select Attach policies.

Figure 5: IAM Role Update

Search for DevOps and select the AmazonDevOpsGuruReadOnlyAccess policy. Click Add permissions to update the IAM role.

Figure 6: IAM Role Policy Update
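If you prefer the command line, you can attach the same managed policy with the AWS CLI; in the sketch below, the role name is a placeholder for the execution role shown in Figure 4:

aws iam attach-role-policy \
  --role-name <your-lambda-execution-role> \
  --policy-arn arn:aws:iam::aws:policy/AmazonDevOpsGuruReadOnlyAccess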

Now that we have created the Lambda function for our custom widget and assigned appropriate permissions, we can navigate to CloudWatch to create a Dashboard.
Navigate to CloudWatch and click Dashboards in the left-side list. You can choose to create a new dashboard or add the widget to an existing dashboard.
We will create a new dashboard.

Figure 7: Create New CloudWatch dashboard

Choose Custom Widget in the Add widget page

Figure 8: Add widget

Click Next on the custom widget page without choosing a sample.

Figure 9: Custom Widget Selection

Choose the region where DevOps Guru is enabled. Select the Lambda function that we created earlier. In the preview pane, click Preview to view the DevOps Guru metrics. Once the preview is successful, create the widget.

Figure 10: Create Custom Widget

Congratulations, you have now successfully created a CloudWatch dashboard with a custom widget to get insights from DevOps Guru. The sample code that we provided can be customized to suit your needs.

Using AWS CloudFormation

You may skip this section if you have already created the resources using the AWS console.

In this step we will show you how to deploy the solution using AWS CloudFormation. AWS CloudFormation lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code. Customers define an initial template and then revise it as their requirements change. For more information on CloudFormation stack creation, refer to this blog post.

The CloudFormation template creates the following resources:

Three Lambda functions that will support CloudWatch Dashboard custom widgets
An AWS Identity and Access Management (IAM) role that allows the Lambda functions to access DevOps Guru insights and to publish logs to CloudWatch
Three Log Groups under CloudWatch
A CloudWatch dashboard with widgets to pull data from the Lambda Functions

To deploy the solution by using the CloudFormation template

You can use this downloadable template to set up the resources. To launch directly through the console, choose the Launch Stack button, which creates the stack in the us-east-1 AWS Region.
Choose Next to go to the Specify stack details page.
(Optional) On the Configure Stack Options page, enter any tags, and then choose Next.
On the Review page, select I acknowledge that AWS CloudFormation might create IAM resources.
Choose Create stack.
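Alternatively, if you prefer the AWS CLI, a command along these lines launches the same stack once you have downloaded the template locally (the local file name is an assumption):

aws cloudformation create-stack \
  --stack-name devopsguru-cloudwatch-dashboard \
  --template-body file://devopsguru-cloudwatch-dashboard.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --region us-east-1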

It takes approximately 2-3 minutes for the provisioning to complete. After the stack status is CREATE_COMPLETE, proceed to validate the resources as listed below.

Validate the resources

Now that the stack creation has completed successfully, you should validate the resources that were created.

On the AWS console, go to CloudWatch. Under Dashboards, there will be a dashboard created with the name <StackName-Region>.
On the AWS console, go to CloudWatch. Under Log groups, there will be three new log groups created with the following names:

lambdaProactiveLogGroup
lambdaReactiveLogGroup
lambdaSummaryLogGroup

On the AWS console, go to Lambda. There will be three Lambda functions with the following names:

lambdaFunctionDGProactive
lambdaFunctionDGReactive
lambdaFunctionDGSummary

On the AWS console, go to IAM. Under Roles, there will be a new role created with the name "lambdaIAMRole".

To View Results/Outcome

With the appropriate time range set on the CloudWatch dashboard, you will be able to navigate through the insights that DevOps Guru has generated, directly from the dashboard.

Figure 11: DevOps Guru Insights in CloudWatch Dashboard

Cleanup

For cost optimization, after you complete and test this solution, clean up the resources. If you used the AWS console, delete the resources manually; if you used AWS CloudFormation, delete the stack named devopsguru-cloudwatch-dashboard.
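If you used AWS CloudFormation, the stack can also be removed from the CLI with the following command:

aws cloudformation delete-stack --stack-name devopsguru-cloudwatch-dashboard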

For more information on deleting the stacks, see Deleting a stack on the AWS CloudFormation console.

Conclusion

This blog post outlined how you can integrate DevOps Guru insights into a CloudWatch Dashboard. As a customer, you can start leveraging CloudWatch Custom Widgets to include DevOps Guru Insights in an existing Operational dashboard.

AWS Customers are now using Amazon DevOps Guru to monitor and improve application performance. You can start monitoring your applications by following the instructions in the product documentation. Head over to the Amazon DevOps Guru console to get started today.

To learn more about AIOps for Serverless using Amazon DevOps Guru check out this video.

Suresh Babu

Suresh Babu is a DevOps Consultant at Amazon Web Services (AWS) with 21 years of experience in designing and implementing software solutions across various industries. He helps customers with application modernization and DevOps adoption. Suresh is a passionate public speaker and often speaks about DevOps and Artificial Intelligence (AI).

Venkat Devarajan

Venkat Devarajan is a Senior Solutions Architect at Amazon Web Services (AWS) supporting enterprise automotive customers. He has over 18 years of industry experience in helping customers design, build, implement, and operate enterprise applications.

Ashwin Bhargava

Ashwin is a DevOps Consultant at AWS working in Professional Services Canada. He is a DevOps expert and a security enthusiast with more than 15 years of development and consulting experience.

Murty Chappidi

Murty is an APJ Partner Solutions Architecture Lead at Amazon Web Services with a focus on helping customers with accelerated and seamless journey to AWS by providing solutions through our GSI partners. He has more than 25 years’ experience in software and technology and has worked in multiple industry verticals. He is the APJ SME for AI for DevOps Focus Area. In his free time, he enjoys gardening and cooking.

Dive Deeper into Data Lake for Nonprofits, a New Open Source Solution from AWS for Salesforce for Nonprofits

Nonprofits are using cloud-based solutions for fundraising, donor and member management, and communications. With this move online, they have access to more data than ever. This data has the potential to transform their missions and increase their impact. However, sharing, connecting, and interpreting data from many different sources can be a challenge. To address this challenge, Amazon Web Services (AWS) and AWS Partner Salesforce for Nonprofits announced the general availability of Data Lake for Nonprofits – Powered by AWS.

Data Lake for Nonprofits is an open source application that helps nonprofit organizations set up a data lake in their AWS account and populate it with the data that they have in the Salesforce Non Profit Success Pack (NPSP) schema. This data resides in Amazon Relational Database Service (Amazon RDS), where it can be accessed by other AWS services like Amazon Redshift and Amazon QuickSight, as well as Business Intelligence products such as Tableau.

This post is written for developers and solution integrator partners who want to get a closer look at the architecture and implementation of Data Lake for Nonprofits to better understand the solution. We'll walk you through the architecture and how to set up a data lake using AWS Amplify.

Prerequisites

To follow along with this walkthrough, you must have the following prerequisites:

An AWS Account

An AWS Identity and Access Management (IAM) user with Administrator permissions that enables you to interact with your AWS account
A Salesforce account that has the Non Profit Success Pack managed packages installed
A Salesforce user that has API permissions to interact with your Salesforce org

Git command line interface installed on your computer for cloning the repository

Node.js and Yarn Package Manager installed on your computer for cleanup

Solution Overview

The following diagram shows the solution’s high-level architecture.

Salesforce and AWS have released the source code in GitHub under a BSD 3-Clause license so nonprofits and their cloud partners can use, customize, and innovate on top of it at no cost.

Use your command-line shell to clone the GitHub repository for your own development.

git clone https://github.com/salesforce-misc/Data-Lake-for-Nonprofits

The solution consists of two tiers: a frontend application and a backend. In the following sections, we'll walk you through both architectures and show you how to install a data lake.

Frontend Walkthrough

We have developed the frontend application using React with Typescript, and it has three layers.

The view layer comprises React components.
The model layer is where most of the application logic is maintained. The models are Mobx State Tree (MST) types with properties, actions, and computed values. Relationships between models are expressed as MST trees.
The API layer is used to communicate with AWS Services using AWS SDK for JavaScript v3.

The frontend application has been built and zipped for AWS Amplify using GitHub Workflows. This is done for every change in the repository. The latest build can be found in the GitHub repository.

Backend Walkthrough

We have developed the backend application using AWS CloudFormation templates that can be found under the infra/cf folder in the GitHub repository. These templates provision the resources in your AWS account for your data lake, as described below.

vpc.yaml template is used to provision the Amazon Virtual Private Cloud (Amazon VPC) that the application will be running in.

buckets.yaml template is used to provision several Amazon Simple Storage Service (Amazon S3) buckets.

datastore.yaml is used to provision an Amazon Relational Database Service (Amazon RDS) PostgreSQL database.

athena.yaml is used to provision Amazon Athena with a custom workgroup.

step_function.yaml is used to provision AWS Step Functions and related AWS Lambda functions. Step Functions is used to orchestrate the Lambda functions that are going to import your data from your Salesforce account into Amazon RDS for PostgreSQL using an Amazon AppFlow connection.

Lambda functions can be found in the infra/lambdas folder.

src/cleanupSQL.ts performs the movement of data from the data-loading database schema to the public database schema.

src/filterSchemaListing.ts filters out S3’s “schema/” key as well as queries Salesforce for deleted objects.

src/finalizeSQL.ts drops tables that will be no longer needed in the application to save cost.

src/listEntities.ts calls APIs since the output is too large for Step Functions.

src/processImport.ts is the import Lambda function, which writes the data sent to the Amazon SQS queue into Amazon RDS.

src/pullNewSchema.ts queries Amazon AppFlow and then Salesforce to gather any new or updated fields.

src/setupSQL.ts sets up the data-loading database schema and creates the tables based on the schema file.

src/statusReport.ts performs a status update to S3 based on where it is in the Step Functions State Machine.

src/updateFlowSchema.ts uses the updated schema file on S3 to create or update the Amazon AppFlow flow.

Amazon Simple Queue Service (Amazon SQS) is used to queue the import data so that the Lambda function can use it.

Amazon EventBridge is used to set up a job that syncs your data based on your choice of frequency.

Amazon CloudWatch is used to keep logs during installation as well as synchronization. The application creates a CloudWatch dashboard to track the usage of the data lake.

Deploy the Frontend Application using AWS Amplify

The latest release of the frontend application can be downloaded from the GitHub repository and deployed in AWS Amplify in your AWS account as explained in the User Guide. It typically takes a few minutes to deploy the frontend application, after which a URL to the frontend application is presented. The URL should look like the link here:

https://abc.xyz….amplifyapp.com

When you open the URL, you will see the frontend application, which provides a wizard-like guide to the steps. Each step will guide you through the instructions on how to move forward.

Step 1 will ask for an Access Key ID and Secret Access key for an IAM user of your AWS account. The application shows guidance on how to log in to AWS Management Console and use Identity and Access Management (IAM) to create the IAM user with admin permissions.

This step also requires you to select the AWS region where you would like to create the Amazon AppFlow connection and deploy the data lake.

Step 2 establishes the connection to your Salesforce account. The application guides the user to leverage Amazon AppFlow, which allows AWS to connect to your Salesforce account.

At the end of the page, use the drop down menu to select the connection name and click Next.

Step 3 will help you choose the data objects and set the frequency of data synchronization for your data lake. The data import option can be set to any date and time, and you can choose the frequency from the given options: daily/weekly/monthly.

This page further displays the complete set of objects from your Salesforce account. Choose the necessary objects that you want to import into your data lake.

In Step 4, you can review the data lake configuration and confirm. This step is where you are allowed to go back to the previous step to make any changes if needed.

Step 5 is where the data lake is provisioned, and then your data is imported. This can take half an hour to several hours, depending on the size of the data in your Salesforce account.

Once the data lake is ready, in Step 6, you can find the instructions and information needed to connect to your data lake using business intelligence applications such as Tableau Cloud and Tableau Desktop that will help you visualize your data and analyze it per your business needs.

Cleanup

Keeping the data lake in your AWS account will incur charges due to the resources provisioned. To avoid incurring future charges, run these commands on your terminal where you cloned the GitHub repository and follow the instructions:

cd Data-Lake-for-Nonprofits/app

yarn delete-datalake

Conclusion

This post showed you how AWS services are used to transform your Salesforce data into a data lake. It uses AWS Amplify to host the frontend application and provisions several AWS services for the data lake backend. The architecture is based on the successful collaboration between AWS and Salesforce to build an open source data lake solution using a simple and easy-to-use, wizard-like application.

We invite you to clone the GitHub repository and develop your own solution for your needs, provide feedback, and contribute to the project.


Deliver Operational Insights to Atlassian Opsgenie using DevOps Guru

As organizations continue to grow and scale their applications, the need for teams to be able to quickly and autonomously detect anomalous operational behaviors becomes increasingly important. Amazon DevOps Guru offers a fully managed AIOps service that enables you to improve application availability and resolve operational issues quickly. DevOps Guru helps ease this process by leveraging machine learning (ML) powered recommendations to detect operational insights, identify the exhaustion of resources, and provide suggestions to remediate issues. Many organizations running business-critical applications use different tools to be notified about anomalous events in real time for the remediation of critical issues. Atlassian builds modern team collaboration and productivity software that helps teams organize, discuss, and complete shared work. You can deliver these insights in near-real time to DevOps teams by integrating DevOps Guru with Atlassian Opsgenie. Opsgenie is a modern incident management platform that receives alerts from your monitoring systems and custom applications and categorizes each alert based on importance and timing.

This blog post walks you through how to integrate Amazon DevOps Guru with Atlassian Opsgenie to receive notifications for new operational insights detected by DevOps Guru, with more flexibility and customization, using Amazon EventBridge and AWS Lambda. The Lambda function will be used to demonstrate how to customize insights sent to Opsgenie.

Solution overview

Figure 1: Amazon EventBridge Integration with Opsgenie using AWS Lambda

Amazon DevOps Guru directly integrates with Amazon EventBridge to notify you of events relating to generated insights and updates to insights. To begin routing these notifications to Opsgenie, you can configure routing rules to determine where to send notifications. As outlined below, you can also use predefined DevOps Guru patterns to only send notifications or trigger actions that match that pattern. You can select any of the following predefined patterns to filter events and trigger actions in a supported AWS resource. DevOps Guru supports the following predefined patterns:

DevOps Guru New Insight Open
DevOps Guru New Anomaly Association
DevOps Guru Insight Severity Upgraded
DevOps Guru New Recommendation Created
DevOps Guru Insight Closed

By default, the patterns referenced above are enabled, so we will leave all patterns operational in this implementation. However, you have the flexibility to choose which of these patterns to send to Opsgenie. When EventBridge receives an event, the EventBridge rule matches it and sends it to a target, such as AWS Lambda, to process and send the insight to Opsgenie.
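For reference, the events that DevOps Guru publishes to EventBridge have a shape along these lines; only the fields referenced later in this post are shown, and the values are illustrative:

{
  "source": "aws.devops-guru",
  "detail-type": "DevOps Guru New Insight Open",
  "detail": {
    "insightSeverity": "high",
    "anomalies": [
      {
        "sourceDetails": [
          { "dataIdentifiers": { "namespace": "AWS/RDS" } }
        ]
      }
    ]
  }
}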

Prerequisites

The following prerequisites are required for this walkthrough:

An AWS Account

An Opsgenie Account

Maven
AWS Command Line Interface (CLI)
AWS Serverless Application Model (SAM) CLI

Create a team and add members within your Opsgenie Account

AWS Cloud9 is recommended to create an environment to get access to the AWS Serverless Application Model (SAM) CLI or AWS Command Line Interface (CLI) from a bash terminal.

Push Insights using Amazon EventBridge & AWS Lambda

In this tutorial, you will perform the following steps:

Create an Opsgenie integration
Launch the SAM template to deploy the solution
Test the solution

Create an Opsgenie integration

In this step, you will navigate to Opsgenie to create the integration with DevOps Guru and to obtain the API key and team name within your account. These parameters will be used as inputs in a later section of this blog.

Navigate to Teams, and take note of the team name you have as shown below, as you will need this parameter in a later section.

Figure 2: Opsgenie team names

Click on the team to proceed and navigate to Integrations on the left-hand pane. Click on Add Integration and select the Amazon DevOps Guru option.

Figure 3: Integration option for DevOps Guru

Now, scroll down and take note of the API Key for this integration and copy it to your notes as it will be needed in a later section. Click Save Integration at the bottom of the page to proceed.


Figure 4: API Key for DevOps Guru Integration

Now, the Opsgenie integration has been created and we’ve obtained the API key and team name. The email of any team member will be used in the next section as well.

Review & launch the AWS SAM template to deploy the solution

In this step, you will review and launch the SAM template. The template deploys an AWS Lambda function that is triggered by an Amazon EventBridge rule when Amazon DevOps Guru generates a new event. The Lambda function retrieves the parameters obtained from the deployment and pushes the events to Opsgenie via an API.

Reviewing the template

Below is the SAM template that will be deployed in the next step. This template launches a few key components specified earlier in the blog. The Transform section of the template takes an entire template written in the AWS Serverless Application Model (AWS SAM) syntax and transforms and expands it into a compliant CloudFormation template. Under the Resources section, this solution deploys an AWS Lambda function using the Java runtime as well as an Amazon EventBridge rule and pattern. Another key aspect of the template is the Parameters section. As shown below, ApiKey, Email, and TeamName are parameters of this CloudFormation template, which are then used as environment variables for our Lambda function to pass to Opsgenie.

Figure 5: Review of SAM Template
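As a rough sketch (not the exact template from the repository), a SAM template with these elements can look like the following; the handler name, timeout, and environment variable names are assumptions:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Sketch of a DevOps Guru to Opsgenie connector

Parameters:
  ApiKey:
    Type: String
    NoEcho: true
  Email:
    Type: String
  TeamName:
    Type: String

Resources:
  OpsgenieConnectorFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: java11
      Handler: com.example.Handler::handleRequest   # illustrative handler name
      CodeUri: target/Functions-1.0.jar
      Timeout: 30
      Environment:
        Variables:
          OPSGENIE_API_KEY: !Ref ApiKey
          OPSGENIE_EMAIL: !Ref Email
          OPSGENIE_TEAM_NAME: !Ref TeamName
      Events:
        DevOpsGuruInsight:
          Type: EventBridgeRule
          Properties:
            Pattern:
              source:
                - aws.devops-guru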

Launching the Template

Navigate to the directory of your choice within a terminal and clone the GitHub repository with a command along the following lines (the repository organization shown below is an assumption, inferred from the directory name used in the next step):
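git clone https://github.com/aws-samples/amazon-devops-guru-connector-opsgenie.git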

Change directories with the command below to navigate to the directory of the SAM template.

cd amazon-devops-guru-connector-opsgenie/OpsGenieServerlessTemplate

From the CLI, use the AWS SAM to build and process your AWS SAM template file, application code, and any applicable language-specific files and dependencies.

sam build

From the CLI, use the AWS SAM to deploy the AWS resources for the pattern as specified in the template.yml file.

sam deploy --guided

You will now be prompted to enter the following information below. Use the information obtained from the previous section to enter the Parameter ApiKey, Parameter Email, and Parameter TeamName fields.

 Stack Name
AWS Region
Parameter ApiKey
Parameter Email
Parameter TeamName
Allow SAM CLI IAM Role Creation

Test the solution

Follow this blog to enable DevOps Guru and generate an operational insight.
When DevOps Guru detects a new insight, it will generate an event in EventBridge. EventBridge then triggers Lambda and sends the event to Opsgenie as shown below.

Figure 6: Event published to Opsgenie with details such as the source, alert type, insight type, and a URL to the insight in the AWS console.

Cleaning up

To avoid incurring future charges, delete the resources.

Delete resources deployed from this blog.
From the command line, use AWS SAM to delete the serverless application along with its dependencies.

sam delete

Customizing Insights published using Amazon EventBridge & AWS Lambda

The foundation of the DevOps Guru and Opsgenie integration is based on Amazon EventBridge and AWS Lambda which allows you the flexibility to implement several customizations. An example of this would be the ability to generate an Opsgenie alert when a DevOps Guru insight severity is high. Another example would be the ability to forward appropriate notifications to the AIOps team when there is a serverless-related resource issue or forwarding a database-related resource issue to your DBA team. This section will walk you through how these customizations can be done.

EventBridge customization

EventBridge rules can be used to select specific events by using event patterns. As detailed below, you can trigger the lambda function only if a new insight is opened and the severity is high. The advantage of this kind of customization is that the Lambda function will only be invoked when needed.

{
  "source": [
    "aws.devops-guru"
  ],
  "detail-type": [
    "DevOps Guru New Insight Open"
  ],
  "detail": {
    "insightSeverity": [
      "high"
    ]
  }
}

Applying EventBridge customization

Open the file template.yaml reviewed in the previous section and implement the changes as highlighted below under the Events section within resources (original file on the left, changes on the right hand side).

Figure 7: CloudFormation template file changed so that the EventBridge rule is only triggered when the alert type is “DevOps Guru New Insight Open” and insightSeverity is “high”.

Save the changes and use the following command to apply them:

sam deploy --template-file template.yaml

Accept the changeset deployment

Determining the Ops team based on the resource type

Another customization would be to change the Lambda code to route and control how alerts will be managed. Let's say you want to get your DBA team involved whenever DevOps Guru raises an insight related to an Amazon RDS resource. You can change the AlertType Java class as follows:

To begin this customization of the Lambda code, the following changes need to be made within the AlertType.java file:

At the beginning of the file, the standard java.util.List and java.util.ArrayList packages were imported
Line 60: created a list of CloudWatch metrics namespaces
Line 74: Assigned the dataIdentifiers JsonNode to the variable dataIdentifiersNode
Line 75: Assigned the namespace JsonNode to a variable namespaceNode
Line 77: Added the namespace to the list for each DevOps Guru insight, which is always raised as an EventBridge event with the structure detail.anomalies[0].sourceDetails[0].dataIdentifiers.namespace
Line 88: Assigned the default responder team to the variable defaultResponderTeam
Line 89: Created the list of responders and assigned it to the variable respondersTeam
Line 92: Check if there is at least one AWS/RDS namespace
Line 93: Assigned the DBAOps_Team to the variable dbaopsTeam
Line 93: Included the DBAOps_Team team as part of the responders list
Line 97: Set the OpsGenie request teams to be the responders list

Figure 8: java.util.List and java.util.ArrayList packages were imported

 

Figure 9: AlertType Java class customized to include DBAOps_Team for RDS-related DevOps Guru insights.

 

You then need to generate the jar file by using the mvn clean package command.

The function needs to be updated with:

FUNCTION_NAME=$(aws lambda list-functions --query 'Functions[?contains(FunctionName, `DevOps-Guru`) == `true`].FunctionName' --output text)
aws lambda update-function-code --region us-east-1 --function-name $FUNCTION_NAME --zip-file fileb://target/Functions-1.0.jar

As a result, the DBAOps_Team will be assigned to the Opsgenie alert whenever a DevOps Guru insight is related to RDS.

Figure 10: Opsgenie alert assigned to both DBAOps_Team and AIOps_Team.

Conclusion

In this post, you learned how Amazon DevOps Guru integrates with Amazon EventBridge and publishes insights to Opsgenie using AWS Lambda. By creating an Opsgenie integration with DevOps Guru, you can now leverage Opsgenie's strengths in incident management, team communication, and collaboration when responding to an insight. All of the insight data can be viewed and addressed in Opsgenie's Incident Command Center (ICC). By customizing the data sent to Opsgenie via Lambda, you can empower your organization even more by fine-tuning and displaying the most relevant data, thus decreasing the MTTR (mean time to resolve) of the responding operations team.

About the authors:

Brendan Jenkins

Brendan Jenkins is a solutions architect working with Enterprise AWS customers providing them with technical guidance and helping achieve their business goals. He has an area of interest around DevOps and Machine Learning technology. He enjoys building solutions for customers whenever he can in his spare time.

Pablo Silva

Pablo Silva is a Sr. DevOps consultant who guides customers in their decisions on technology strategy, business model, operating model, technical architecture, and investments.

He holds a master’s degree in Artificial Intelligence and has more than 10 years of experience with telecommunication and financial companies.

Joseph Simon

Joseph Simon is a solutions architect working with mid to large Enterprise AWS customers. He has been in technology for 13 years with 5 of those centered around DevOps. He has a passion for Cloud, DevOps and Automation and in his spare time, likes to travel and spend time with his family.

The AWS Modern Applications and Open Source Zone: Learn, Play, and Relax at AWS re:Invent 2022

AWS re:Invent is filled with fantastic opportunities, but I wanted to tell you about a space that lets you dive deep with some fantastic open source projects and contributors: the AWS Modern Applications and Open Source Zone! Located in the east alcove on the third floor of the Venetian Conference Center, this space exists so that re:Invent attendees can be introduced to some of the amazing projects that power and enhance the AWS solutions you know and use. We’ve divided the space up into three areas: Demos, Experts, and Fun.

Demos: Learn and be curious

We have two dedicated demo stations in the Zone and a deep list of projects that we are excited to show you from Amazonians, AWS Heroes, and AWS Community Builders. Please keep in mind this schedule may be subject to change, and we have some last minute surprises that we can’t share here, so be sure to drop by.

Monday, November 28, 2022

Kiosk #
9 AM – 11 AM
11 AM – 1 PM
1 PM – 3 PM
3 PM – 5 PM

1

Continuous Deployment
and GitOps delivery with Amazon EKS Blueprints and ArgoCD

Tsahi Duek, Dima Breydo

StackGres: An Advanced PostgreSQL Platform on EKS

Alvaro Hernandez

 TBD

Step Functions templates and
prebuilt Lambda Packages for
deploying scalable serverless applications in seconds

Rustem Feyzkhanov

2

Data on EKS

Vara Bonthu, Brian Hammons

 TBD
How to use Amazon Keyspaces (for Apache Cassandra) and Apache Spark to build applications

Meet Bhagdev

Scale your applications beyond IPv4 limits

Sheetal Joshi

Tuesday, November 29, 2022

Kiosk #
9 AM – 11 AM
11 AM – 1 PM
1 PM – 3 PM
3 PM – 5 PM

1

Let’s build a self service developer portal with AWS Proton

Adam Keller

Doing serverless on AWS with Terraform (serverless.tf + terraform-aws-modules)

Anton Babenko

 Steampipe

Chris Farris, Bob Tordella

Using Lambda Powertools for better observability in IoT Applications

Alina Dima

2

Build and run containers on AWS with AWS Copilot

Sergey Generalov

Fargate Surprise
Fargate Surprise

Amplify Libraries Demo

Matt Auerbach

Wednesday, November 30, 2022

Kiosk #
9 AM – 11 AM
11 AM – 1 PM
1 PM – 3 PM
3 PM – 5 PM

1

Building Embedded Devices with FreeRTOS SMP and the Raspberry Pi Pico

Dan Gross

Quantum computing in the cloud with Amazon Braket

Michael Brett, Katharine Hyatt

Leapp

Andrea Cavagna

EKS multicluster management and applications delivery

Nicholas Thomson, Sourav Paul

2

Using SAM CLI and Terraform for local testing

Praneeta Prakash, Suresh Poopandi

How to use Terraform AWS and AWSCC provider in your project

Tyler Lynch, Drew Mullen

How to use Terraform AWS and AWSCC provider in your project

Glenn Chia, Welly Siau

Smart City Monitoring Using AWS IoT and Digital Twin

Syed Rehan

Thursday, December 1, 2022

Kiosk #
9 AM – 11 AM
11 AM – 1 PM
1 PM – 3 PM

1

Modern data exchange using AWS data streaming

Ali Alemi

Learn how to leverage your Amazon EKS cluster as a substrate for execution of distributed Ray programs for Machine Learning.

Apoorva Kulkarni

 TBD

2

Spreading apps, controlling traffic, and optimizing costs in Kubernetes

Lukonde Mwila

 TBD

Terraform IAM policy validator

Bohan Li

Experts

Pull up a chair, grab a drink and a snack, charge your devices, and have a conversation with some of our experts. We’ll have people visiting the zone all throughout re:Invent, with expertise in a variety of open source technologies and AWS services including (but not limited to):

Amazon Athena
Amazon DocumentDB (with MongoDB compatibility)
Amazon DynamoDB
Amazon Elastic Container Service (Amazon ECS)
Amazon Elastic Kubernetes Service (Amazon EKS)
Amazon EventBridge
Amazon Keyspaces (for Apache Cassandra)
Amazon Kinesis
Amazon Linux
Amazon Managed Grafana
Amazon Managed Service for Prometheus
Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
Amazon MQ
Amazon Redshift
Amazon Simple Notification Service (Amazon SNS)
Amazon Simple Queue Service (Amazon SQS)
Amazon Simple Storage Service (Amazon S3)
Apache Flink, Hadoop, Hudi, Iceberg, Kafka, and Spark
Automotive Grade Linux
AWS Amplify
AWS App Mesh
AWS App Runner
AWS CDK
AWS Copilot
AWS Distro for OpenTelemetry
AWS Fargate
AWS Glue
AWS IoT Greengrass
AWS Lambda
AWS Proton
AWS SDKs
AWS Serverless Application Model (AWS SAM)
AWS Step Functions
Bottlerocket
Cloudscape Design System
Embedded Linux
Flutter
FreeRTOS
JavaScript
Karpenter
Lambda Powertools
OpenSearch
Red Hat OpenShift Service on AWS (ROSA)
Rust
Terraform

Fun

Want swag? We’ve got it, but it is protected by THE CLAW. That’s right, we brought back the claw machine, and this year, we might have some extra special items in there for you to catch. No spoilers, but we’ve heard there have been some Rustaceans sighted. You might want to bring an extra (empty) suitcase.

But we’re not done. By popular request, we also brought back Dance Dance Revolution. Warm up your dancing shoes or just cheer on the crowd. You never know who will be showing off their best moves.

Conclusion

The AWS Modern Applications and Open Source Zone is a must-visit destination for your re:Invent journey. With demos, experts, food, drinks, swag, games, and mystery surprises, how can you not stop by?


Adding CDK Constructs to the AWS Analytics Reference Architecture

In 2021, we released the AWS Analytics Reference Architecture, a new AWS Cloud Development Kit (AWS CDK) end-to-end application example, as open source (docs are CC-BY-SA 4.0 International, sample code is MIT-0). It shows how our customers can use the available AWS products and features to implement well-architected analytics solutions. It also brings together AWS best practices for designing, implementing, and operating analytics solutions through different purpose-built patterns. Altogether, the AWS Analytics Reference Architecture answers common requirements and solves customer challenges.

In 2022, we extended the scope of this project with AWS CDK constructs to provide more granular and reusable examples. This project is now composed of:

Reusable core components exposed in an AWS CDK library currently available in TypeScript and Python. This library contains the AWS CDK constructs that can be used to quickly provision prepackaged analytics solutions.
Reference architectures consuming the reusable components in AWS CDK applications, and demonstrating end-to-end examples in a business context. Currently, only the AWS native reference architecture is available but others will follow.

In this blog post, we will first show how to consume the core library to quickly provision analytics solutions using CDK Constructs and experiment with AWS analytics products.

Building solutions with the Core Library

To illustrate how to use the core components, let's see how we can quickly build a data lake, a central piece of most analytics projects. The storage layer is implemented with the DataLakeStorage CDK construct relying on Amazon Simple Storage Service (Amazon S3), a durable, scalable, and cost-effective object storage service. The query layer is implemented with the AthenaDemoSetup construct using Amazon Athena, an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. The data catalog is implemented with the DataLakeCatalog construct using the AWS Glue Data Catalog.

Before getting started, please make sure to follow the instructions available here for setting up the prerequisites (a short command sketch follows this list):

Install the necessary build dependencies
Bootstrap the AWS account
Initialize the CDK application.
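As a rough reference, the setup on a fresh machine typically boils down to commands like these (the directory name and PyPI package name are assumptions; the linked instructions remain authoritative):

mkdir analytics-ref-arch && cd analytics-ref-arch
npm install -g aws-cdk
cdk init app --language python
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt aws-analytics-reference-architecture
cdk bootstrap aws://<ACCOUNT_ID>/<AWS_REGION>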

This architecture diagram depicts the data lake building blocks we are going to deploy using the AWS Analytics Reference Architecture library. These are higher level constructs (commonly called L3 constructs) as they integrate several AWS services together in patterns.

To assemble these components, you can add this code snippet in your app.py file:

import aws_analytics_reference_architecture as ara
from aws_cdk import aws_glue as glue  # needed for the CfnCrawler and CfnTrigger constructs below

# Create a new DataLakeStorage with Raw, Clean and Transform buckets
storage = ara.DataLakeStorage(scope=self, id="storage")

# Create a new DataLakeCatalog with Raw, Clean and Transform databases
catalog = ara.DataLakeCatalog(scope=self, id="catalog")

# Configure a new Athena Workgroup
athena_defaults = ara.AthenaDemoSetup(scope=self, id="demo_setup")

# Generate data from Customer TPC dataset
data_generator = ara.BatchReplayer(
    scope=self,
    id="customer-data",
    dataset=ara.PreparedDataset.RETAIL_1_GB_CUSTOMER,
    sink_object_key="customer",
    sink_bucket=storage.raw_bucket,
)

# Role with default permissions for any Glue service
glue_role = ara.GlueDemoRole.get_or_create(self)

# Crawler to create tables automatically
crawler = glue.CfnCrawler(
    self, id="ara-crawler", name="ara-crawler",
    role=glue_role.iam_role.role_arn, database_name="raw",
    targets={"s3Targets": [{"path": f"s3://{storage.raw_bucket.bucket_name}/{data_generator.sink_object_key}/"}]},
)

# Trigger to kick off the crawler every 5 minutes
cfn_trigger = glue.CfnTrigger(
    self, id="MyCfnTrigger",
    actions=[{"crawlerName": crawler.name}],
    type="SCHEDULED", description="ara_crawler_trigger",
    name="min_based_trigger", schedule="cron(0/5 * * * ? *)", start_on_creation=True,
)

In addition to this library construct, the example also includes lower level constructs (commonly called L1 constructs) from the AWS CDK standard library. This shows that you can combine constructs from any CDK library interchangeably.

For use cases where customers need to adjust the default configurations in order to align with their organization-specific requirements (e.g. data retention rules), the constructs can be changed through the class parameters, as shown in this example:

storage = ara.DataLakeStorage(scope=self, id="storage", raw_archive_delay=180, clean_archive_delay=1095)

Finally, you can deploy the solution using the AWS CDK CLI from the root of the application with this command: cdk deploy. Once you deploy the solution, AWS CDK provisions the AWS resources included in the Constructs and you can log into your AWS account.

Go to the Athena console and start querying the data. The AthenaDemoSetup provides an Athena workgroup called “demo” that you can select to start querying the BatchReplayer data very quickly. Data is stored in the DataLakeStorage and registered in the DataLakeCatalog. Here is an example of an Athena query accessing the customer data from the BatchReplayer:
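For example, a query along the following lines returns rows from the generated customer data; the table name is an assumption based on the crawler configuration and the sink_object_key used in the snippet above:

SELECT * FROM "raw"."customer" LIMIT 10;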

Accelerate the implementation

Earlier in the post we pointed out that the library simplifies and accelerates the development process. First, writing Python code is more appealing than writing CloudFormation markup, whether in JSON or YAML. Second, the CloudFormation template generated by the AWS CDK for the data lake example is 16 times more verbose than the Python script.

❯ cdk synth | wc -w
2483

❯ wc -w ara_demo/ara_demo_stack.py
154

Demonstrating end-to-end examples with reference architectures

The AWS native reference architecture is the first reference architecture available. It explains the journey of a fictitious company, MyStore Inc., as it implements its data platform solution with AWS products and services. Deploying the AWS native reference architecture demonstrates a fully working example of a data platform from data ingestion to business analysis. AWS customers can learn from it, see analytics solutions in action, and play with the retail dataset and business analysis.

More reference architectures will be added to this project on GitHub later.

Business Story

The AWS native reference architecture models a fictitious retail company called MyStore Inc. that is building a new analytics platform on top of AWS products. This example shows how retail data can be ingested, processed, and analyzed in streaming and batch processes to provide business insights like sales analysis. The platform is built on top of the CDK constructs from the core library to minimize development effort and inherit AWS best practices.

Here is the architecture deployed by the AWS native reference architecture:

The platform is implemented in purpose-built modules. They are decoupled and can be independently provisioned but still integrate with each other. MyStore's analytics platform is composed of the following modules:

Data Lake foundations: This mandatory module (based on DataLakeCatalog and DataLakeStorage core constructs) is the core of the analytics platform. It contains the data lake storage and associated metadata for both batch and streaming data. The data lake is organized in multiple Amazon S3 buckets representing different versions of the data. (a) The raw layer contains the data coming from the data sources in the raw format. (b) The cleaned layer contains the raw data that has been cleaned and parsed to a consumable schema. (c) And the curated layer contains refactored data based on business requirements.

Batch analytics: This module is in charge of ingesting and processing data from a Stores channel generated by the legacy systems in batch mode. Data is then exposed to other modules for downstream consumption. The data preparation process leverages various features of AWS Glue, a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development via the Apache Spark framework. The orchestration of the preparation is handled using AWS Glue Workflows that allows managing and monitoring executions of Extract, Transform, and Load (ETL) activities involving multiple crawlers, jobs, and triggers. The metadata management is implemented via AWS Glue Crawlers, a serverless process that crawls data sources and sinks to extract the metadata including schemas, statistics and partitions. It saves them in the AWS Glue Data Catalog.

Streaming analytics: This module ingests and processes real-time data from the Web channel generated by cloud-native systems. The solution minimizes data analysis latency while also feeding the data lake for downstream consumption.

Data Warehouse: This module is ingesting data from the data lake to support reporting, dashboarding and ad hoc querying capabilities. The module is using an Extract, Load, and Transform (ELT) process to transform the data from the Data Lake foundations module. Here are the steps that outline the data pipeline from the data lake into the data warehouse. 1. AWS Glue Workflow reads CSV files from the Raw layer of the data lake and writes them to the Clean layer as Parquet files. 2. Stored procedures in Amazon Redshift’s stg_mystore schema extract data from the Clean layer of the data lake using Amazon Redshift Spectrum. 3. The stored procedures then transform and load the data into a star schema model.

Data Visualization: This module provides dashboarding capabilities to business users like data analysts on top of the Data Warehouse module, but also provides data exploration on top of the Data Lake module. It is implemented with Amazon QuickSight, a scalable, serverless, embeddable, and machine learning-powered business intelligence tool. Amazon QuickSight is connected to the data lake via Amazon Athena and to the data warehouse via Amazon Redshift using direct query mode, as opposed to the caching mode with SPICE.
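To make the ELT flow in the Data Warehouse module above more concrete, a stored procedure of that kind conceptually does something like the following; all object names here are illustrative assumptions, not the reference architecture's actual schema:

-- Expose the Clean layer of the data lake through Redshift Spectrum
CREATE EXTERNAL SCHEMA IF NOT EXISTS clean_spectrum
FROM DATA CATALOG DATABASE 'clean'
IAM_ROLE 'arn:aws:iam::123456789012:role/mystore-redshift-spectrum-role';

-- Load from the external (Spectrum) table into a star-schema fact table
INSERT INTO stg_mystore.fact_sales (sale_id, customer_key, amount, sale_date)
SELECT s.sale_id, c.customer_key, s.amount, s.sale_date
FROM clean_spectrum.sales s
JOIN stg_mystore.dim_customer c ON c.customer_id = s.customer_id;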

Project Materials

The AWS native reference architecture provides both code and documentation about MyStore’s analytics platform:

Documentation is available on GitHub and comes in two different parts:

The high level design describes the overall data platform implemented by MyStore, and the different components involved. This is the recommended entry point to discover the solution.
The analytics solutions provide fine-grained solutions to the challenges MyStore met during the project. These technical patterns can help you choose the right solution for common challenges in analytics.

The code is publicly available here and can be reused as an example for other analytics platform implementations. The code can be deployed in an AWS account by following the getting started guide.

Conclusion

In this blog post, we introduced new AWS CDK content available for customers and partners to easily implement AWS analytics solutions with the AWS Analytics Reference Architecture. The core library provides reusable building blocks with best practices to accelerate the development life cycle on AWS and the reference architecture demonstrates running examples with end-to-end integration in a business context.

Because of its reusable nature, this project will be the foundation for lots of additional content. We plan to extend the technical scope of it with Constructs and reference architectures for a data mesh. We’ll also expand the business scope with industry focused examples. In a future blog post, we will go deeper into the constructs related to Amazon EMR Studio and Amazon EMR on EKS to demonstrate how customers can easily bootstrap an efficient data platform based on Amazon EMR Spark and notebooks.


Flatlogic Introduces Enhanced CSV Export Feature!

We are happy to announce that you can now export data from all your entities in CSV format! The new feature allows users to easily export data from the admin panel tables and view it in any spreadsheet application.

CSV stands for Comma-Separated Values. It is a file format used for storing tabular data, such as a spreadsheet or database, in a plain text format. Each line in a CSV file contains one record, with each record consisting of one or more fields separated by commas. CSV files can be opened and edited in any spreadsheet application, such as Microsoft Excel or Google Sheets. They are commonly used to transfer data between different applications and systems.
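For example, a small export with three fields might look like this (the column names are illustrative):

id,name,email
1,Jane Doe,jane@example.com
2,John Smith,john@example.com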

Unlock the power of your data now with our enhanced CSV export feature! Take advantage of this powerful tool today and make the most of your data! If you’re having trouble, don’t hesitate to reach out to us by posting a message on our forum, Twitter, or Facebook. We’ll get back to you as soon as we can!

The post Flatlogic Introduces Enhanced CSV Export Feature! appeared first on Flatlogic Blog.


10 KPI Templates and Dashboards for Tracking KPI’s

Introduction
What Instruments Do We Need to Build an Effective Dashboard for KPIs?

The Top Dashboards for Tracking KPIs
Sing App Admin Dashboard
Retail Dashboard from Simple KPI
Light Blue React Node.js
Limitless Dashboard
Cork Admin Dashboard
Paper Admin Template
Pick Admin Dashboard Template
Able Pro Admin Dashboard
Architect UI Admin Template
Flatlogic One Admin Dashboard Template

You might also like these articles

Introduction

KPIs, or Key Performance Indicators, are a modern instrument for making a system (a business, for example) work effectively. KPIs show how successful a business is or how well an employee performs. They work with the help of measurable values that are intended to show progress toward your strategic goals. KPIs are measurable indicators that you should track, calculate, analyze, and represent. If you are reading this article, it means you want to find or build an app to help you with all of these operations. But before we list the top dashboard KPI templates, it's essential to understand how exactly to choose a set of indicators that boosts the growth of a business. For KPIs to be useful, they should be relevant to the business. That is crucial not only for entrepreneurs who try to improve their businesses, but also for the developers of software for tracking KPIs. Why?

Developers need to be aware of what instruments they should include in the app so that users can work with KPIs easily and effectively. Since there are far more than a handful of articles and approaches on how to find the right performance indicators, what KPIs to choose, and how to track them, developing a quality web application can be complicated.

However, from our point of view, the most challenging part of such an app is building a dashboard that displays all the necessary KPIs on a single screen. We have explored the Internet, analyzed different types of tools for representing KPIs, found great dashboards, and made two lists: one consists of the charts and instruments you should definitely include in your future app; the other is the top dashboards we found that contain elements from the first list. Each KPI template on the list is a potent tool that will boost your metrics considerably. Let's start with the first list.

Enjoy reading! 

What Instruments Do We Need to Build an Effective Dashboard for KPIs?

Absolute numerical values and percentage (in absolute amount)

With percentages, you can make a KPI more informative by adding a comparison with previous periods.

The source: https://vuestic.epicmax.co/admin/dashboard

Non-linear chart

One of the core charts.

The source: https://visme.co/blog/types-of-graphs/

Bar chart

Another core element to display KPIs.

The source: http://ableproadmin.com/angular/default/charts/apex

Stacked Bar Graphs

It's a more complex instrument, but correspondingly more informative.

Progress bars

Progress bars can be confused with horizontal bar charts. The main difference: a horizontal bar chart compares values across several categories, while a progress bar shows progress in a single category.

The source: https://vinteedois.com.br/progress-bars/

Pie charts

The source: https://www.cleanpng.com/png-pie-chart-finance-accounting-financial-statement-3867064/preview.html

Donut chart

You can replace a pie chart with a donut chart; the meaning is the same.

The source: http://webapplayers.com/luna_admin-v1.4/chartJs.html

Gauge chart

This chart helps users to track their progress towards achieving goals. It’s interchangeable with a progress bar. 

The source: https://www.datapine.com/kpi-examples-and-templates/finance

Pictograms

Instead of using an axis with numbers, it uses pictures to represent a relative or an absolute number of items.

The source: https://www.bootstrapdash.com/demo/corona/jquery/template/modern-vertical/pages/charts/justGage.html

Process behavior chart

Especially valuable for financial KPIs. The main line shows measurements over time or across categories, while the two red lines are control limits that shouldn't be surpassed.

The source: https://www.leanblog.org/2018/12/using-process-behavior-charts-to-compare-red-bead-game-willing-workers-and-baseball-teams/

Combined bar and line graph

The source: https://www.pinterest.co.uk/pin/254031235216555663/

Some additional tools:

These tools are also essential for building a dashboard for tracking KPIs: a calendar, dropdowns, checkboxes, and input fields. The option to create and download a report will also be helpful.

The Top Dashboards for Tracking KPIs

Sing App Admin Dashboard

The source: https://demo.flatlogic.com/sing-app-vue/#/app/main/visits

If you look through a huge number of KPI templates and don't find the one you need, you should take a look at Sing App. Sing is a premium admin dashboard template that offers all the necessary tools to turn data into easy-to-understand graphs and charts. Besides all the charts and functions listed above, Sing gives you options such as downloading graphs in SVG and PNG format, an animated and interactive pointer that highlights the point where the cursor is placed, and changing the period for value calculation inside the graph frame!


Retail Dashboard from Simple KPI

The source: https://dashboards.simplekpi.com/dashboards/shared/NT0B93_AnEG1AD7YKn60zg

This is a dashboard focused on the retail sector. It already contains relevant KPIs and metrics for that sector, so you just need to download it and use it. Since it's an opinionated dashboard, you won't get many customization options. If you are a retailer or trader, try this dashboard to track your performance when selling goods or services.


Light Blue React Node.js

The source: https://flatlogic.com/templates/light-blue-react-node-js/demo

It is a React admin dashboard template with a Node.js backend. The template is best suited for KPIs that reflect goals in web app traffic analysis, revenue and current balance tracking, and sales management. Light Blue contains a lot of ready-to-use working components and charts for building the dashboard you need. It's very easy to customize and implement; both React beginners and professional developers can benefit from this template and keep track of KPIs, metrics, and business data.


Limitless Dashboard

The source: http://demo.interface.club/limitless/demo/Template/layout_1/LTR/default/full/index.html

Limitless is a powerful admin template and a best-seller on ThemeForest. It comes with a modern business KPI dashboard that simplifies monitoring, analyzing, and generating insights. With this dashboard, you can easily monitor the progress of growing sales or traffic and adjust your sales strategy according to customer behavior. Furthermore, the dashboard includes a live-update function to keep you abreast of the latest changes.


Cork Admin Dashboard

The source: https://designreset.com/cork/ltr/demo4/index2.html

This is an awesome Bootstrap-based dashboard template that follows the best design and programming principles. The template provides more than 10 layout options and a Laravel version of the dashboard, which is quite rare. Several pages with charts and two dashboards with different metrics give you the basic elements for building a great KPI-tracking dashboard.


Paper Admin Template

The source: https://xvelopers.com/demos/html/paper-panel/index.html

This template fits you if you are looking for a concrete solution, since Paper comes with eleven dashboards in the package! They are all unnamed, so it will take some time to look through them, but less time than building your own dashboard. Every dashboard provides a simple single-screen view of data and lets you share it with your colleagues.


Pick Admin Dashboard Template

The source: http://html.designstream.co.in/pick/html/index-analytic.html

Pick is a modern and stylish solution for the IT industry. It's a multipurpose dashboard that helps you gain full control over performance.


Able Pro Admin Dashboard

The source: http://ableproadmin.com/angular/default/dashboard/analytics

If you believe that the best products are the highest-rated ones, take a look at Able Pro. Able Pro is one of the best-rated Bootstrap admin templates on ThemeForest. The human eye captures information from a graph blazingly fast! With this dashboard, you can go much deeper into understanding your KPIs and make the decision-making process much easier.


Architect UI Admin Template

The source: https://demo.dashboardpack.com/architectui-html-pro/index.html

Those who download Architect UI make the right choice. This KPI template is built with hundreds of built-in elements and components and three blocks of charts. The modular frontend architecture makes dashboard customization fast and easy, while animated graphs provide insights into KPIs.


Flatlogic One Admin Dashboard Template

The source: https://templates-flatlogic.herokuapp.com/flatlogic-one/html5/dashboard/visits.html

Flatlogic One is a one-size-fits-all solution for any type of dashboard. It is a premium Bootstrap admin dashboard template released in July 2020. It comes with two ready-made dashboards that serve well as KPI templates, analytics and visits, and it also offers four additional pages with smoothly animated charts for any taste and need. The dashboard is flexible and highly customizable, so you can easily get the most out of this template.


Thanks for reading.

You might also like these articles:

14+ Best Node.js Open Source Projects

8 Essential Bootstrap Components for Your Web App

Best 14+ Bootstrap Open-Source Projects

The post 10 KPI Templates and Dashboards for Tracking KPI’s appeared first on Flatlogic Blog.

Welcome Snowflake Scripting!

The Snowflake Data Platform is full of surprises.

Since Snowflake got its start, you have had the ability to create stored procedures. However, this capability was limited to writing those stored procedures in JavaScript. JavaScript opens up great potential, but it may be a new language for some data engineers, who can face a steep learning curve.

Well… not anymore. Let's welcome SNOWFLAKE SCRIPTING!!!!!

What is Snowflake Scripting?

Snowflake has just extended its SQL dialect to allow programmatic instructions. For example, you can now write conditional blocks:

CREATE OR REPLACE TABLE TEST(LINE VARCHAR);
INSERT INTO TEST(LINE) VALUES('FIRST LINE');

EXECUTE IMMEDIATE $$
DECLARE
  COUNT INT;
BEGIN
  SELECT COUNT(*) INTO :COUNT FROM TEST;
  IF (COUNT < 2) THEN
    INSERT INTO TEST(LINE) VALUES('SECOND LINE');
    RETURN 'INSERTED';
  END IF;
  RETURN 'NOT INSERTED. COUNT = ' || :COUNT;
END;
$$;
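
The same kind of block can also serve as the body of a stored procedure written in SQL rather than JavaScript. Here's a minimal sketch along those lines (the procedure name and body are illustrative, not part of the example above):

CREATE OR REPLACE PROCEDURE COUNT_TEST_ROWS()
RETURNS INT
LANGUAGE SQL
AS
$$
DECLARE
  TOTAL_ROWS INT;
BEGIN
  -- Read the current number of rows in TEST into a variable and return it
  SELECT COUNT(*) INTO :TOTAL_ROWS FROM TEST;
  RETURN TOTAL_ROWS;
END;
$$;

-- Invoke it like any other stored procedure
CALL COUNT_TEST_ROWS();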

Ok. Now that we’ve seen a conditional block, let’s see in more detail what Snowflake Scripting brings to the table.

Variable Declaration

Snowflake Scripting provides a DECLARE section just before your BEGIN/END block. You can declare variables in that section. If you want them to have an initial value, you can use the DEFAULT clause. Here's an example:

EXECUTE IMMEDIATE $$
DECLARE
  VAR1 VARCHAR DEFAULT 'Hello World';
BEGIN
  return VAR1;
END;
$$;

You could also declare them inline with your code using a LET statement and ":=". Here's an example of that:

EXECUTE IMMEDIATE $$
BEGIN
  let VAR1 VARCHAR := 'Hello World';
  return VAR1;
END;
$$;

Passing variables to SQL statements in Snowflake Scripting

Binding variables is something that was a little more complicated in the JavaScript world. But in Snowflake Scripting, it's super easy to pass variables to SQL statements. Just remember to use a colon (':') before the variable name, as shown here:

EXECUTE IMMEDIATE
$$
BEGIN
  let VAR1 VARCHAR := 'Hello World';
  CREATE OR REPLACE TABLE TEST AS select :VAR1 as LINE;
END;
$$;

 

Reading values into variables

Retrieving results is also easy. You can read values into a variable as shown below, but just like before, remember to use a colon (':') before the variable name.

EXECUTE IMMEDIATE
$$
BEGIN
  let VAR1 INT := 0;
  select 1000 INTO :VAR1;
  return VAR1;
END;
$$;

This will print something like this:

+-----------------+
| anonymous block |
|-----------------|
| 1000            |
+-----------------+

And what about non-scalar values? What about doing a query? That can be done too:
execute immediate
$$
BEGIN
  CREATE OR REPLACE TABLE MYTABLE as SELECT $1 as ID, $2 as Name FROM VALUES(1, 'John'), (2, 'DeeDee');
  LET res RESULTSET := (select Name from MYTABLE ORDER BY ID);
  LET c1 CURSOR for res;
  LET all_people VARCHAR := '';
  FOR record IN c1 DO
    all_people := all_people || ',' || record.Name;
  END FOR;
  RETURN all_people;
END;
$$;

And it will print something like:

+-----------------+
| anonymous block |
|-----------------|
| ,John,DeeDee    |
+-----------------+

The results from a query are called a RESULTSET. To iterate over the results of a query, you can open a cursor for that RESULTSET and then use a FOR loop with the cursor variable.

Conditional Logic

Snowflake Scripting brings constructs to branch on conditions. You can use both IF and CASE. For example:
EXECUTE IMMEDIATE
$$
DECLARE
  VAR1 INT DEFAULT 10;
BEGIN
  IF (VAR1 > 10) THEN
    RETURN 'more than 10';
  ELSE
    RETURN 'less than 10';
  END IF;
END;
$$;

With this being the result:

+-----------------+
| anonymous block |
|-----------------|
| less than 10    |
+-----------------+
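
A CASE block works in a similar way when you have several conditions to check. Here's a minimal sketch, modeled on the IF example above with an extra branch added for illustration:

EXECUTE IMMEDIATE
$$
DECLARE
  VAR1 INT DEFAULT 10;
BEGIN
  CASE
    WHEN VAR1 > 10 THEN
      RETURN 'more than 10';
    WHEN VAR1 = 10 THEN
      -- This branch is the one that fires for the default value of 10
      RETURN 'exactly 10';
    ELSE
      RETURN 'less than 10';
  END CASE;
END;
$$;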
Snowflake Scripting can be super convenient for simplifying some administrative tasks. For example, I usually keep this code in a Snowflake worksheet, and I have to change it each time I need to create a test database:
create database database1;
create warehouse database1_wh;
create role database1_role;
grant ownership on database database1 to role database1_role;
grant ownership on schema database1.public to role database1_role;
grant ownership on warehouse database1_wh to role database1_role;
grant role database1_role to user USER1;

I can use a Snowflake Scripting block and then I only need to change the database and user 🙂
execute immediate
$$
declare
  client varchar default 'database1';
  user varchar default 'user1';
  sql varchar;
begin
  execute immediate 'create database if not exists ' || client;
  execute immediate 'create warehouse if not exists ' || client || '_wh';
  execute immediate 'create role if not exists ' || client || '_role';
  execute immediate 'grant ownership on database ' || client || ' to role ' || client || '_role';
  execute immediate 'grant ownership on schema ' || client || '.public to role ' || client || '_role';
  execute immediate 'grant ownership on warehouse ' || client || '_wh to role ' || client || '_role';
  execute immediate 'grant role ' || client || '_role to user ' || user;
end;
$$;

Snowflake Scripting, for me and all of us here at Mobilize.Net, is a game changer. It really makes it easier for people coming from Teradata, Oracle, or SQL Server to start enjoying the Snowflake Data Platform. As a matter of fact, Mobilize.Net SnowConvert is being updated so you can start modernizing your Oracle, Teradata, and SQL Server code to Snowflake Scripting. We hope you enjoy it as much as I am enjoying it.
Thanks for reading!
