Deliver Operational Insights to Atlassian Opsgenie using DevOps Guru

As organizations grow and scale their applications, the ability to quickly and autonomously detect anomalous operational behavior becomes increasingly important. Amazon DevOps Guru is a fully managed AIOps service that helps you improve application availability and resolve operational issues quickly. It uses machine learning (ML) powered recommendations to detect operational insights, identify resource exhaustion, and suggest remediations. Many organizations running business-critical applications use dedicated tools to be notified about anomalous events in real time so that critical issues can be remediated quickly. Atlassian builds team collaboration and productivity software that helps teams organize, discuss, and complete shared work. By integrating DevOps Guru with Atlassian Opsgenie, you can deliver these insights to DevOps teams in near-real time. Opsgenie is a modern incident management platform that receives alerts from your monitoring systems and custom applications and categorizes each alert based on importance and timing.

This blog post walks you through how to integrate Amazon DevOps Guru with Atlassian Opsgenie to
receive notifications for new operational insights detected by DevOps Guru with more flexibility and customization using Amazon EventBridge and AWS Lambda. The Lambda function will be used to demonstrate how to customize insights sent to Opsgenie.

Solution overview

Figure 1: Amazon EventBridge Integration with Opsgenie using AWS Lambda

Amazon DevOps Guru integrates directly with Amazon EventBridge to notify you of events relating to generated insights and updates to insights. To route these notifications to Opsgenie, you configure EventBridge rules that determine where to send them. As outlined below, you can also use predefined DevOps Guru patterns so that only matching events send notifications or trigger actions in a supported AWS resource. DevOps Guru supports the following predefined patterns:

DevOps Guru New Insight Open
DevOps Guru New Anomaly Association
DevOps Guru Insight Severity Upgraded
DevOps Guru New Recommendation Created
DevOps Guru Insight Closed

All of the patterns referenced above are enabled by default, and we will leave them all operational in this implementation. However, you have the flexibility to choose which of these patterns to send to Opsgenie. When EventBridge receives an event, the EventBridge rule matches it and sends it to a target, such as AWS Lambda, to process and forward the insight to Opsgenie.
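For illustration, an EventBridge event pattern that matches all five of these detail types might look like the following sketch (the SAM template deployed later defines its own rule):

{
  "source": ["aws.devops-guru"],
  "detail-type": [
    "DevOps Guru New Insight Open",
    "DevOps Guru New Anomaly Association",
    "DevOps Guru Insight Severity Upgraded",
    "DevOps Guru New Recommendation Created",
    "DevOps Guru Insight Closed"
  ]
}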

Prerequisites

The following prerequisites are required for this walkthrough:

An AWS Account

An Opsgenie Account

Maven
AWS Command Line Interface (CLI)
AWS Serverless Application Model (SAM) CLI

Create a team and add members within your Opsgenie Account

AWS Cloud9 is recommended for creating an environment with the AWS Serverless Application Model (SAM) CLI and the AWS Command Line Interface (CLI) available from a bash terminal.

Push Insights using Amazon EventBridge & AWS Lambda

In this tutorial, you will perform the following steps:

Create an Opsgenie integration
Launch the SAM template to deploy the solution
Test the solution

Create an Opsgenie integration

In this step, you will navigate to Opsgenie to create the integration with DevOps Guru and to obtain the API key and team name within your account. These parameters will be used as inputs in a later section of this blog.

Navigate to Teams, and take note of the team name you have as shown below, as you will need this parameter in a later section.

Figure 2: Opsgenie team names

Click on the team to proceed and navigate to Integrations on the left-hand pane. Click on Add Integration and select the Amazon DevOps Guru option.

Figure 3: Integration option for DevOps Guru

Now, scroll down and take note of the API Key for this integration and copy it to your notes as it will be needed in a later section. Click Save Integration at the bottom of the page to proceed.


Figure 4: API Key for DevOps Guru Integration

Now, the Opsgenie integration has been created and we’ve obtained the API key and team name. The email of any team member will be used in the next section as well.

Review & launch the AWS SAM template to deploy the solution

In this step, you will review and launch the SAM template. The template deploys an AWS Lambda function that is triggered by an Amazon EventBridge rule whenever Amazon DevOps Guru generates a new event. The Lambda function retrieves the parameters supplied at deployment and pushes the events to Opsgenie via its API.

Reviewing the template

Below is the SAM template that will be deployed in the next step. This template launches the key components described earlier in the blog. The Transform section takes a template written in AWS Serverless Application Model (AWS SAM) syntax and expands it into a compliant CloudFormation template. Under the Resources section, this solution deploys an AWS Lambda function using the Java runtime as well as an Amazon EventBridge rule and pattern. Another key aspect of the template is its Parameters: as shown below, ApiKey, Email, and TeamName are CloudFormation parameters that are then passed to our Lambda function as environment variables for use with Opsgenie.

Figure 5: Review of SAM Template
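As a rough, abbreviated sketch of that shape (logical IDs, handler, runtime version, and variable names here are assumptions; the repository's template.yaml is the authoritative version):

Transform: AWS::Serverless-2016-10-31

Parameters:
  ApiKey:
    Type: String
  Email:
    Type: String
  TeamName:
    Type: String

Resources:
  OpsgenieConnectorFunction:          # hypothetical logical ID
    Type: AWS::Serverless::Function
    Properties:
      Runtime: java11
      Handler: com.example.Handler::handleRequest   # placeholder
      Environment:
        Variables:
          API_KEY: !Ref ApiKey
          EMAIL: !Ref Email
          TEAM_NAME: !Ref TeamName
      Events:
        DevOpsGuruInsight:
          Type: EventBridgeRule
          Properties:
            Pattern:
              source:
                - aws.devops-guru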

Launching the Template

Navigate to the directory of choice within a terminal and clone the GitHub repository with the following command:
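(The URL below assumes the project is hosted in the aws-samples GitHub organization; substitute the repository link from this blog if it differs.)

git clone https://github.com/aws-samples/amazon-devops-guru-connector-opsgenie.git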

Change directories with the command below to navigate to the directory of the SAM template.

cd amazon-devops-guru-connector-opsgenie/OpsGenieServerlessTemplate

From the CLI, use AWS SAM to build and process your AWS SAM template file, application code, and any applicable language-specific files and dependencies.

sam build

From the CLI, use AWS SAM to deploy the AWS resources for the pattern as specified in the template.yml file.

sam deploy --guided

You will now be prompted to enter the following information. Use the values obtained in the previous section for the Parameter ApiKey, Parameter Email, and Parameter TeamName fields.

Stack Name
AWS Region
Parameter ApiKey
Parameter Email
Parameter TeamName
Allow SAM CLI IAM Role Creation

Test the solution

Follow this blog to enable DevOps Guru and generate an operational insight.
When DevOps Guru detects a new insight, it will generate an event in EventBridge. EventBridge then triggers Lambda and sends the event to Opsgenie as shown below.

Figure 6: Event published to Opsgenie with details such as the source, alert type, insight type, and a URL to the insight in the AWS console.

Cleaning up

To avoid incurring future charges, delete the resources deployed by this blog. From the command line, use AWS SAM to delete the serverless application along with its dependencies:

sam delete

Customizing Insights published using Amazon EventBridge & AWS Lambda

The DevOps Guru and Opsgenie integration is built on Amazon EventBridge and AWS Lambda, which gives you the flexibility to implement several customizations. One example is generating an Opsgenie alert only when a DevOps Guru insight's severity is high. Another is forwarding notifications to the AIOps team when there is a serverless-related resource issue, or routing database-related resource issues to your DBA team. This section walks you through how these customizations can be done.

EventBridge customization

EventBridge rules select specific events by using event patterns. As detailed below, you can trigger the Lambda function only when a new insight is opened and its severity is high. The advantage of this kind of customization is that the Lambda function is only invoked when needed.

{
  "source": [
    "aws.devops-guru"
  ],
  "detail-type": [
    "DevOps Guru New Insight Open"
  ],
  "detail": {
    "insightSeverity": [
      "high"
    ]
  }
}

Applying EventBridge customization

Open the template.yaml file reviewed in the previous section and implement the changes highlighted below under the Events section within Resources (original file on the left, changes on the right-hand side).

Figure 7: CloudFormation template file changed so that the EventBridge rule is only triggered when the alert type is “DevOps Guru New Insight Open” and insightSeverity is “high”.

Save the changes and use the following command to apply them:

sam deploy --template-file template.yaml

Accept the changeset deployment.

Determining the Ops team based on the resource type

Another customization is to change the Lambda code to route and control how alerts are managed. Let's say you want to get your DBA team involved whenever DevOps Guru raises an insight related to an Amazon RDS resource. You can change the AlertType Java class as follows:

To begin this customization of the Lambda code, the following changes need to be made within the AlertType.java file:

At the beginning of the file, import the standard java.util.List and java.util.ArrayList packages
Line 60: Create a list of CloudWatch metrics namespaces
Line 74: Assign the dataIdentifiers JsonNode to the variable dataIdentifiersNode
Line 75: Assign the namespace JsonNode to the variable namespaceNode
Line 77: Add the namespace to the list for each DevOps Guru insight, which is always raised as an EventBridge event with the structure detail > anomalies > 0 > sourceDetails > 0 > dataIdentifiers > namespace
Line 88: Assign the default responder team to the variable defaultResponderTeam
Line 89: Create the list of responders and assign it to the variable respondersTeam
Line 92: Check whether there is at least one AWS/RDS namespace
Line 93: Assign the DBAOps_Team to the variable dbaopsTeam
Line 93: Include the DBAOps_Team team as part of the responders list
Line 97: Set the Opsgenie request teams to be the responders list

Figure 8: java.util.List and java.util.ArrayList packages were imported

 

Figure 9: AlertType Java class customized to include DBAOps_Team for RDS-related DevOps Guru insights.
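Condensed into a sketch, the routing logic described above might look roughly like the following. Class and method names here are illustrative, and team names are returned as plain strings to keep the sketch independent of the Opsgenie SDK's request types; refer to the figures and the repository for the real AlertType implementation.

import java.util.ArrayList;
import java.util.List;
import com.fasterxml.jackson.databind.JsonNode;

public class ResponderRouting {
    // Pick the Opsgenie responder teams for a DevOps Guru event, based on
    // the CloudWatch metric namespaces found in the anomaly details.
    static List<String> resolveResponderTeams(JsonNode detail, String defaultResponderTeam) {
        List<String> namespaces = new ArrayList<>();
        for (JsonNode anomaly : detail.get("anomalies")) {
            JsonNode dataIdentifiersNode = anomaly.get("sourceDetails").get(0).get("dataIdentifiers");
            JsonNode namespaceNode = dataIdentifiersNode.get("namespace");
            if (namespaceNode != null) {
                namespaces.add(namespaceNode.asText());
            }
        }

        List<String> respondersTeam = new ArrayList<>();
        respondersTeam.add(defaultResponderTeam);   // e.g. the team from the TeamName parameter
        if (namespaces.contains("AWS/RDS")) {
            respondersTeam.add("DBAOps_Team");      // loop in the DBAs for RDS-related insights
        }
        return respondersTeam;
    }
}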

 

You then need to generate the jar file by using the mvn clean package command.

The function needs to be updated with:

FUNCTION_NAME=$(aws lambda list-functions --query 'Functions[?contains(FunctionName, `DevOps-Guru`) == `true`].FunctionName' --output text)
aws lambda update-function-code --region us-east-1 --function-name $FUNCTION_NAME --zip-file fileb://target/Functions-1.0.jar

As a result, the DBAOps_Team will be assigned to the Opsgenie alert whenever a DevOps Guru insight is related to RDS.

Figure 10: Opsgenie alert assigned to both DBAOps_Team and AIOps_Team.

Conclusion

In this post, you learned how Amazon DevOps Guru integrates with Amazon EventBridge and publishes insights to Opsgenie using AWS Lambda. By creating an Opsgenie integration with DevOps Guru, you can now leverage Opsgenie's strengths in incident management, team communication, and collaboration when responding to an insight. All of the insight data can be viewed and addressed in Opsgenie's Incident Command Center (ICC). By customizing the data sent to Opsgenie via Lambda, you can further empower your organization by fine-tuning and displaying the most relevant data, thus decreasing the mean time to resolve (MTTR) for the responding operations team.

About the authors:

Brendan Jenkins

Brendan Jenkins is a solutions architect working with enterprise AWS customers, providing them with technical guidance and helping them achieve their business goals. His areas of interest include DevOps and machine learning technology. In his spare time, he enjoys building solutions for customers whenever he can.

Pablo Silva

Pablo Silva is a Sr. DevOps consultant who guides customers in their decisions on technology strategy, business model, operating model, technical architecture, and investments.

He holds a master’s degree in Artificial Intelligence and has more than 10 years of experience with telecommunication and financial companies.

Joseph Simon

Joseph Simon is a solutions architect working with mid-size to large enterprise AWS customers. He has been in technology for 13 years, with 5 of those centered around DevOps. He has a passion for cloud, DevOps, and automation, and in his spare time likes to travel and spend time with his family.

Optimized Video Encoding with FFmpeg on AWS Graviton Processors

If you have not tried video encoding on Graviton lately, now is the time to give it another look. Recent FFmpeg improvements, contributed by AWS and others in the open source community, have increased the performance of fully loaded video workloads on Graviton processors.

Measured on Amazon Elastic Compute Cloud (Amazon EC2) C7g instances, we saw a 63% performance boost for offline H.264 video encoding and 60% for H.265. Encoding video on C7g costs 29% less for H.264 and 18% less for H.265 compared to C6i, the latest x86-based Amazon EC2 instance (both using on-demand instance pricing). This makes C7g the fastest compute-optimized cloud instance for video encoding, as well as the most cost-effective and the most energy-efficient.

When the AWS Graviton2 instances were introduced, they provided 40% better price performance for many workloads compared to similar x86 Amazon EC2 instances. Graviton3 delivers a further 25% performance improvement over Graviton2. Video processing and transcoding have been growing in importance, and Graviton is well suited for this workload. AWS engineers and the open source community have worked on video encoding tools, such as FFmpeg and the codec libraries, to further optimize for Graviton. You can get these improvements on GitHub from a build in the development branch of FFmpeg, or use FFmpeg version 5.2 when it is released.

Use cases

One of the common use cases for video in the cloud is batch transcoding multiple videos concurrently on the same instance, which optimizes for the best throughput and price. Another popular use case is transcoding a single input stream to multiple output formats optimized for different viewing resolutions. Both of these cases require optimizing performance for concurrent processing. For the following benchmarks we scale down the incoming 4K stream and encode multiple target resolutions for each input; each target resolution can serve devices and networks at their native resolution: 1080p, 720p, 480p, 360p, and 160p. A sample command follows the figure below.

Figure 1: Encoding multiple streams in parallel on a single instance.
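As a concrete sketch of that fan-out, one FFmpeg invocation can split a single input and scale it to several rungs of the ladder (file names are illustrative, three of the five rungs shown, and audio handling omitted for brevity):

ffmpeg -i input_4k.mp4 -filter_complex \
  "[0:v]split=3[a][b][c];[a]scale=-2:1080[a1];[b]scale=-2:720[b1];[c]scale=-2:480[c1]" \
  -map "[a1]" -c:v libx264 out_1080p.mp4 \
  -map "[b1]" -c:v libx264 out_720p.mp4 \
  -map "[c1]" -c:v libx264 out_480p.mp4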

We tested encoding the target videos into H.264 and H.265 using the x264 and x265 open source libraries. The H.264 or AVC (Advanced Video Coding) standard was first published in 2004 and enjoys broad compatibility. Devices including mobile phones, tablets, personal computers, smart TVs, and others generally have support for hardware accelerated H.264 decoding. The H.265 or HEVC (High Efficiency Video Coding) standard was first published in 2013 and has better compression at a given level of quality than H.264, but hardware accelerated decoding is not as widely deployed and patents and licensing restrictions have prevented some companies from adopting it in their software. For most video use cases, having more than one video format will be necessary in order to provide the best quality for devices which can play H.265 and also H.264 for devices without H.265 decoding support.

Offline (batch) encoding

Speed: The following diagram shows the encoding speed in frames per second (FPS) for a sample workload. It was tested comparing FFmpeg 4.2 with the development branches of FFmpeg and x265 that include the latest optimizations.

Figure 2: Speed results are the mean frame per second (FPS) for different input samples.
Higher is better.

Cost: The cost of encoding on the latest Graviton instance, C7g, is compared with the latest Amazon EC2 x86 based instances, C6i and C6a, showing better performance and a reduction of 18-29% in cost compared to C6i.

Figure 3: Comparing cost for the latest generations of Amazon EC2 compute instances.

Lower is better. Normalized so that cost of x264, preset ultrafast on c6i is equal to one.

The results show the total cost to transcode 1 million input frames in parallel jobs to five output sizes. Each value is a mean of results for three different input files tested. 1 million frames is about 4 hours and 37 minutes at 60 frames per second.

Live stream encoding

For a live streaming use case, we can measure the maximum number of streams for which an instance can maintain full frame rate while transcoding to 3 output sizes. The results below show the hourly instance cost divided by the number of streams the instance was able to sustain, resulting in 15-35% lower overall cost on C7g vs. C6i. This makes the C7g instance the most cost-effective AWS compute instance type for transcoding streaming video.

Figure 5: Results show the hourly cost per video stream at 24FPS, using -preset ultrafast with x264 and x265.
Lower is better.

The changes

The aarch64 version of the scaling functions initially used the reference implementations written in C. After rewriting these C functions in aarch64 assembly, the performance improved significantly. Video scaling is a component of FFmpeg which consistently takes a high percentage of compute time; most encode jobs will include a scaling step, since it is necessary to create multiple outputs to support different device resolutions, both for offline and live streams. All of these changes have been contributed upstream into FFmpeg. See the table below for some of the changes AWS contributed since the 2019 release of FFmpeg version 4.2. In Figure 6, below, are the sample code changes and their effects on the encoding performance on Graviton.

Function name            Speed up   Commit
ff_yuv2planeX_8_neon     1.08x      https://github.com/FFmpeg/FFmpeg/commit/c3a17ffff6b
ff_hscale_8_to_15_neon   1.39x      https://github.com/FFmpeg/FFmpeg/commit/bd831912712
ff_hscale8to15_4_neon    1.38x      https://github.com/FFmpeg/FFmpeg/commit/0ea61725b1b
ff_pix_abs16_neon        7.00x      https://github.com/FFmpeg/FFmpeg/commit/c471cc74747
ff_hscale8to15_X4_neon   4.00x      https://github.com/FFmpeg/FFmpeg/commit/75ffca7eef5
ff_yuv2planeX_8_neon     1.13x      https://github.com/FFmpeg/FFmpeg/commit/3e708722a2d
ff_yuv2planeX_8_neon     2.00x      https://github.com/FFmpeg/FFmpeg/commit/0d7caa5b09b

Through a series of optimizations to the horizontal and vertical scaling functions, detailed in the commits listed above, AWS engineers were able to improve performance for a variety of input cases. After these optimizations and others were applied to FFmpeg and to x265, Graviton instances perform better than comparable Amazon EC2 x86-based instances. Comparing C7g instances to C6i instances on the mainline branch of FFmpeg, C7g shows higher performance in every category.

Benchmarking method

To benchmark FFmpeg we used three different test files, each 10 seconds long. One was a high-bitrate test with complex motion and lots of high-frequency detail changes, another was mostly a still scene at a low bitrate, and the third was a moderate-bitrate scene from the open source Tears of Steel film. We transcoded each clip into the five target sizes using multiple parallel jobs intended to simulate a service transcoding many sources in parallel. To increase the stability of the measurements, we also executed multiple iterations of these parallel jobs sequentially. The total time to execute these jobs is then used to calculate frames per second and cost per frame. Results are measured in frames per second and use the number of source frames transcoded, rather than the output frames, since the output consists of many different sizes. All input files were 4K and H.264-encoded. We tested with the following software versions: FFmpeg, 2022-08-23; x264, 2022-06-01; x265, 2022-09-12.

Conclusion

Graviton2 and Graviton3 processors are cost efficient and fast for video transcoding, and with the latest improvements to FFmpeg and the codecs, the advantage has only grown. To achieve these results for yourself, the first step is to ensure you are running an optimized build from the latest code. There's a pre-built binary at https://github.com/BtbN/FFmpeg-Builds/releases, a third-party project that maintains builds from the latest source code. VT1 and GPU instances can also be a compelling option, especially for live video, but they have less flexibility than software encoders for getting the best quality at a given bit rate. If a software encoder is right for your workload, Graviton is a great option.
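As a sketch, building an optimized binary from the latest source on a Graviton instance looks roughly like this (assuming the x264 and x265 development packages are already installed):

git clone https://github.com/FFmpeg/FFmpeg.git && cd FFmpeg
./configure --enable-gpl --enable-libx264 --enable-libx265
make -j"$(nproc)"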

There is still more work to do for FFmpeg, especially if you are using HDR content with 10 or 12 bit color depth. If you are, and even if you are not, be sure to keep up to date with FFmpeg and codec releases. If you find use cases where FFmpeg on Graviton does not meet expectations, please open an issue on the Graviton Technical Guide to let us know about it. We will continue to add more performance improvements to make Graviton the most cost effective and efficient general purpose processor for video encoding.


One Weird Trick to Try @parcel/css on CodePen

Ideally, we’d just offer @parcel/css as a CSS processor choice right in our editors. We could absolutely do that, but we’re smack in the middle of a bunch of next-gen CodePen stuff, and we’re keeping our efforts focused there. Never fear, interesting new processors like this will be there along with it. But this CSS processor caught my eye especially because it’s a very fresh, modern, and interesting take on CSS processing. It handles vendor prefixing on its own (something you might otherwise use Autoprefixer for), it handles “syntax lowering” (love that term) for future-syntax CSS (like you’d use postcss-preset-env for), offers scoping, and even has its own built-in minifier, while being super fast. Nice!

So what if you do wanna try it on CodePen? Well, it’s actually possible because they have cleverly released the processor with a Wasm option, not just a backend-language-only thing. So here’s the plan:

Load the processor in the browser as a script (go Wasm go!)
Pull the CSS from the current Pen
Pass that CSS to the in-browser processor we just loaded
Get the transformed CSS
Replace the CSS in the preview with the transformed CSS
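In sketch form, the glue code for that plan looks something like the following, run as a module script. The module URL and the exact shape of the Wasm API are assumptions modeled on the package's documented Node API (transform takes bytes in and hands bytes back); adjust to whichever build you load.

// Load the Wasm build of @parcel/css (URL is illustrative)
import init, { transform } from 'https://unpkg.com/@parcel/css-wasm';
await init();

// Pull the CSS out of the current preview document
const styleEl = document.querySelector('style');

// Run it through the processor; code goes in and comes out as bytes
const { code } = transform({
  filename: 'pen.css',
  code: new TextEncoder().encode(styleEl.textContent),
  minify: true,
});

// Swap the transformed CSS back into the preview
styleEl.textContent = new TextDecoder().decode(code);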

Check it:


The post One Weird Trick to Try @parcel/css on CodePen appeared first on CodePen Blog.

Caching NextJS Apps with Serverless Redis using Upstash

The modern applications we build today are sophisticated. Every time a user loads a webpage, their browser needs to download a bulk of data in order to display that page. A website may consist of millions of data points and serve hundreds of API calls. For data to move smoothly with minimal delay between server and client, we can follow many strategies. As developers, we want our app to deliver the best user experience possible, and to achieve this we can employ a variety of techniques.

There are a number of ways to address this. The best optimization is to apply techniques that reduce the latency of read/write operations on the database. One of the most popular ways to optimize API calls is by implementing a caching mechanism.

What is Caching?

Caching is the process of storing copies of files in a cache, or temporary storage location so that they can be accessed more quickly. Technically, a cache is any temporary storage location for copies of files or data, but the term is often used in reference to Internet technologies.

By Cloudflare.com

The most common example of caching is the browser cache, which stores frequently accessed website resources locally so that the browser does not have to retrieve them over the network each time they are needed. Caching can relieve the performance bottlenecks of our web applications. When dealing with heavy network traffic and large API calls, this technique can be one of the best options for performance optimization.

Redis: Caching in Server-side

When we talk about caching on servers, one of the pioneers among caching-capable databases is Redis. Redis (REmote DIctionary Server) is an open-source NoSQL in-memory key-value data store. One of the best things about Redis is that data persists in the database until we delete or flush it manually. Because it is an in-memory database, its data-access operations are faster than those of any disk-based database, which makes Redis the best choice for caching.

Redis can also be used as a primary database if needed. With the help of Redis, cached data can be accessed and re-accessed as many times as needed without running the database query again. Depending on the Redis cache setup, this data can stay in memory for a few minutes, a few hours, or longer. We can even set an expiration time for our cache, which we will implement in our demo application.
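To make that concrete, at the redis-cli level the operations this boils down to are just a keyed write with a time-to-live and a keyed read (the key name and payload below are illustrative):

SET pokemon:pikachu '{"name":"pikachu"}' EX 86400
GET pokemon:pikachu
TTL pokemon:pikachu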

Redis is able to handle huge amounts of data in real time, making use of its in-memory data storage capabilities to support highly responsive database constructs. Caching with Redis allows for fewer database accesses, which helps to reduce the amount of traffic and the number of instances required, even achieving sub-millisecond latency.

We will implement Redis in our Next application and see the performance gain we can achieve.

Let’s dive into it.

Initializing our Project

Before we begin I assume you have Node installed on your machine so that you can follow along with the steps involved. We will use Next for our project because it helps us write front-end and back-end logic with no configuration needed. We will create a starter project with the following command:

$ npx create-next-app@latest --typescript

When prompted, give the project your desired name. Once the project has been created for us, we can add the dependencies we need for this demo application.

$ npm i ioredis @chakra-ui/core @emotion/core @emotion/styled emotion-theming
$ npm i --save-dev @types/node @types/ioredis

The commands above install all the dependencies we will use in this project. We will make use of ioredis to communicate with our Redis database and style things up with Chakra UI.

As we are using TypeScript for our project, we also need to install the TypeScript type definitions for Node and ioredis, which we did in the second command as local dev dependencies.

Setting up Redis with Upstash

We definitely need to connect our application with Redis. You can use Redis locally and connect to it from your application or use a Redis cloud instance. For this project demo, we will be using Upstash Redis.

Upstash is a serverless database for Redis, with servers/instances, you pay per hour or a fixed price. With Serverless, you pay per request. This means we are not charged when the database is not in use. Upstash configures and manages the database for you.

Head over to the official Upstash website and start with the free plan; for our demo purposes, we don't need to pay. After creating your new account, visit the Upstash console and create a new serverless Redis database.

You can find an example connection string for ioredis in the Upstash dashboard. Copy the URL under the blue overlay. We will use this connection string to connect to the serverless Redis instance provided in the free tier by Upstash.

import Redis from "ioredis";
export const redisConnect = new Redis(process.env.REDIS_URL);
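For this to work, the connection string copied from the Upstash console has to reach the app as the REDIS_URL environment variable. With Next, a .env.local file at the project root is one way to do that (the value below is a placeholder, not a real credential):

REDIS_URL=rediss://default:<password>@<your-instance>.upstash.io:6379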

In the snippet above, we connected our app to the database. We can now use the Redis server instance provided by Upstash inside our app.

Populating static data

The application we are building might not be an exact production use case, but we want to see the caching performance improvement Redis can bring to our application and how it's done.

Here we are making a Pokemon application where users can select from a list of Pokemon and choose to see the details of a Pokemon. We will implement caching for the visited Pokemon. In other words, if users visit the same Pokemon twice, they will receive the cached result.

Let’s populate some data inside of our Pokemon options.

export const getStaticProps: GetStaticProps = async () => {
  const res = await fetch(
    'https://pokeapi.co/api/v2/pokemon?limit=200&offset=200'
  );
  const { results }: GetPokemonResults = await res.json();

  return {
    props: {
      pokemons: results,
    },
  };
};

We are making a call to our endpoint to fetch all the Pokemon names. getStaticProps helps us fetch data at build time: the getStaticProps() function provides the props the Home component needs to render pages that are generated at build time, not at runtime, and are static.

const Home: NextPage<{ pokemons: Pokemons[] }> = ({ pokemons }) => {
  const [selectedPokemon, setSelectedPokemon] = useState<string>('');
  const toast = useToast();
  const router = useRouter();

  const handelSelect = (e: any) => {
    setSelectedPokemon(e.target.value);
  };

  const searchPokemon = () => {
    if (selectedPokemon === '')
      return toast({
        title: 'No pokemon selected',
        description: 'You need to select a pokemon to search.',
        status: 'error',
        duration: 3000,
        isClosable: true,
      });
    router.push(`/details/${selectedPokemon}`);
  };

  return (
    <div className={styles.container}>
      <main className={styles.main}>
        <Box my="10">
          <FormControl>
            <Select
              id="country"
              placeholder={
                selectedPokemon ? selectedPokemon : 'Select a pokemon'
              }
              onChange={handelSelect}
            >
              {pokemons.map((pokemon, index) => {
                return <option key={index}>{pokemon.name}</option>;
              })}
            </Select>
            <Button
              colorScheme="teal"
              size="md"
              ml="3"
              onClick={searchPokemon}
            >
              Search
            </Button>
          </FormControl>
        </Box>
      </main>
    </div>
  );
};

We have successfully populated some static data inside our dropdown to select some Pokemon. Let’s implement a page redirect to a dynamic route when we select a Pokemon name and click the search button.

Adding dynamic page

Creating a dynamic page in Next is simple, as it provides file-system based routing that we can leverage to add our dynamic routes. Let's create a details page for our Pokemon.

const PokemonDetail: NextPage<{ info: PokemonDetailResults }> = ({ info }) => {
  return (
    <div>
      {/* map our data here */}
    </div>
  );
};

export const getServerSideProps: GetServerSideProps = async (context) => {
  const { id } = context.query;
  const name = id as string;
  const data = await fetch(`https://pokeapi.co/api/v2/pokemon/${name}`);
  const response: PokemonDetailResults = await data.json();

  return {
    props: {
      info: response,
    },
  };
};

We made use of getServerSideProps, which is the server-side rendering mechanism provided by Next; it pre-renders the page on each request using the data returned by getServerSideProps. This comes in handy when we want to fetch data that changes often and have the page updated to show the most current data. After receiving the data, we map over it to display it on the screen.

Until now we have not really implemented a caching mechanism in our project. Each time the user visits the page, we hit the API endpoint and send them back the data they requested. Let's move ahead and implement caching in our application.

Caching data

To implement caching, we first want to read from our Redis database. As discussed, Redis stores its data as key-value pairs. We will check whether the key is stored in Redis and serve the client the respective data. To achieve this, we will create a function that reads Redis for the key the client is requesting.

export const fetchCache = async <T>(key: string, fetchData: () => Promise<T>) => {
  // Return the cached copy when the key exists; otherwise fetch and cache it
  const cachedData = await getKey(key);
  if (cachedData) return cachedData;
  return setValue(key, fetchData);
};

When the client requests data they have not visited yet, we will serve them the data from the server and, behind the scenes, store a copy in our Redis database so that we can serve the data quickly through Redis on the next request.

We will write a function that takes in a key parameter; if the key exists in the database, it returns the parsed value to the client.

const getKey = async <T>(key: string): Promise<T | null> => {
  const result = await redisConnect.get(key);
  if (result) return JSON.parse(result);
  return null;
};

We also need a function that takes in a key and sets the new value alongside that key in our database, but only if we don't already have that key stored in Redis.

const setValue = async <T>(key: string, fetchData: () => Promise<T>): Promise<T> => {
  const setValue = await fetchData();
  await redisConnect.set(key, JSON.stringify(setValue));
  return setValue;
};

We have now written everything we need to implement caching. All that's left is to invoke the function in our dynamic pages. Inside [id].tsx we will make a minor tweak so that an API call is made only if we don't have the requested key in Redis.

To make this happen, we need to pass a function as an argument to our fetchCache function.

export const getServerSideProps: GetServerSideProps = async (context) => {
  const { id } = context.query;
  const name = id as string;

  const fetchData = async () => {
    const data = await fetch(`https://pokeapi.co/api/v2/pokemon/${name}`);
    const response: PokemonDetailResults = await data.json();
    return response;
  };

  const cachedData = await fetchCache(name, fetchData);

  return {
    props: {
      info: cachedData,
    },
  };
};

We added some tweaks to the code we wrote before: we imported the fetchCache function and used it inside the dynamic page. This function takes a function as an argument and does the key checking accordingly.

Adding expiry

The expiration policy employed by a cache is another factor that helps determine how long a cached item is retained. The expiration policy is usually assigned to the object when it is added to the cache. This can also be customized according to the type of object that’s being cached. A common strategy involves assigning an absolute time of expiration to each object when it is added to the cache. Once that time passes, the item is removed from the cache accordingly.

Let’s also use the caching expiration feature of Redis in our Application. To implement this we just need to add a parameter to our fetchCache function.

const cachedData = await fetchCache(name, fetchData, 60 * 60 * 24);
return {
  props: {
    info: cachedData,
  },
};

export const fetchCache = async (key: string, fetchData: () => Promise<unknown>, expiresIn: number) => {
  const cachedData = await getKey(key);
  if (cachedData) return cachedData;
  return setValue(key, fetchData, expiresIn);
};

const setValue = async <T>(key: string, fetchData: () => Promise<T>, expiresIn: number): Promise<T> => {
  const setValue = await fetchData();
  // "EX" sets the key's time-to-live in seconds
  await redisConnect.set(key, JSON.stringify(setValue), "EX", expiresIn);
  return setValue;
};

For each key stored in our Redis database, we have added an expiry time of one day. When that time elapses, Redis automatically removes the object from the cache so that it can be refreshed by calling the API again. This really helps when we want to feed the client fresh, updated data every time they call the API.

Performance testing

After all these efforts toward our app's performance and optimization, let's take a look at how the application performs.

This might not be a meaningful performance test for a small application, but apps serving thousands of API calls with big data sets can see a big advantage.

I will make use of the perf_hooks module to measure the time it takes our Next lambda to complete an invocation. This is not provided by Next; it's imported from Node. With these APIs, you can measure the time it takes individual dependencies to load, how long your app takes to initially start, and even how long individual web service API calls take. This allows you to make more informed decisions on the efficiency of specific code blocks or even algorithms.

import { performance } from "perf_hooks";

const startPerfTimer = (): number => {
  return performance.now();
};

const endPerfTimer = (): number => {
  return performance.now();
};

const calculatePerformance = (startTime: number, endTime: number): void => {
  console.log(`Response took ${endTime - startTime} milliseconds`);
};

Creating a function for a single line of code may be overkill, but it means we can reuse these functions in our application when needed. We will add these function calls to our application and observe the milliseconds of latency, since it can impact our app's overall performance.
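For example, wiring the helpers around the cache lookup in getServerSideProps might look like this:

const start = startPerfTimer();
const cachedData = await fetchCache(name, fetchData, 60 * 60 * 24);
const end = endPerfTimer();
calculatePerformance(start, end); // logs the elapsed milliseconds for this request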

In the above screenshot, we can see the improvement of milliseconds in fetching the response. This may be a small improvement in the small application we have built, but it can be a huge time and performance boost when working with large datasets.

Conclusion

Data-heavy applications need caching operations to improve response time and even reduce the costs of data volume and bandwidth. With the help of Redis, we can avoid expensive database operations, third-party API calls, and server-to-server requests by keeping a copy of previous requests in our Redis instance.

There might be cases where we need to delegate caching to other applications or microservices, or to any form of key-value storage system that allows us to store data and use it when needed. We chose Redis since it is open source and very popular in the industry. Redis's other cool features include data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, HyperLogLogs, and many more.

I highly recommend you visit the Redis documentation here to gain an in-depth understanding of the other features provided out of the box. Now we can go forth and use Redis to cache frequently queried data in our applications and gain a considerable performance boost.

Please find the code repository here.

Happy coding!

The post Caching NextJS Apps with Serverless Redis using Upstash appeared first on Flatlogic Blog.

Announcing SQL Server to Snowflake Migration Solutions

It's Spring (or at least it will be soon), and while nature may take the Winter off from growing its product, Mobilize.Net did not. As Snowflake continues to grow, SnowConvert continues to grow as well. Last month, Mobilize.Net announced SnowConvert for Oracle, the first follow-up to the immensely popular SnowConvert for Teradata. This month? It's time for SnowConvert for SQL Server.

SQL Server has been Microsoft's database of choice since before Windows existed. It has provided a lightweight option for the back-end of thousands of applications, and has evolved into a comprehensive database platform for thousands of organizations. As an on-prem solution, SQL Server carried many developers and organizations through the 90s and early 2000s. But as with other on-prem solutions, the cloud has come. Even Microsoft has taken its database-ing to the cloud through Azure and Synapse. Snowflake has taken the lead as the Data Cloud, and SnowConvert is the best and most experienced way to help you get there.

If you have SQL Server, I would hope the SQL you have written for it is not quite as old as the first version of Windows. But even if it is, and the architects of that original SQL are nowhere to be found anymore, SnowConvert's got you covered. SnowConvert automates the conversion of any DDL and DML that you may have to an equivalent in Snowflake. But that's the easy part. The hard problem in a database code migration is the procedural code. That means Transact-SQL for SQL Server. And with T-SQL, SnowConvert again has you covered.

Procedures Transformed

SnowConvert can take your T-SQL to functionally equivalent JavaScript or Snowflake Scripting. Both our product page and documentation have more information on the types of transformations performed, so why not show you what that looks like on this page? Let's take a look at a really basic procedure from the Microsoft AdventureWorks database and convert it into functionally equivalent JavaScript. This is a procedure that does an update to a table:

CREATE PROCEDURE [HumanResources].[uspUpdateEmployeePersonalInfo]
@BusinessEntityID [int],
@NationalIDNumber [nvarchar](15),
@BirthDate [datetime],
@MaritalStatus [nchar](1),
@Gender [nchar](1)
WITH EXECUTE AS CALLER
AS
BEGIN
SET NOCOUNT ON;

BEGIN TRY
UPDATE [HumanResources].[Employee]
SET [NationalIDNumber] = @NationalIDNumber
,[BirthDate] = @BirthDate
,[MaritalStatus] = @MaritalStatus
,[Gender] = @Gender
WHERE [BusinessEntityID] = @BusinessEntityID;
END TRY
BEGIN CATCH
EXECUTE [dbo].[uspLogError];
END CATCH;
END;

Pretty straightforward in SQL Server. But how do you replicate this functionality in JavaScript automatically? Of course, by using SnowConvert. Here’s the output transformation:

CREATE OR REPLACE PROCEDURE HumanResources.uspUpdateEmployeePersonalInfo (BUSINESSENTITYID FLOAT, NATIONALIDNUMBER STRING, BIRTHDATE DATE, MARITALSTATUS STRING, GENDER STRING)
RETURNS STRING
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
// REGION SnowConvert Helpers Code
// This section would be populated by SnowConvert for SQL Server’s JavaScript Helper Classes. If you’d like to see more of the helper classes, fill out the form on the SnowConvert for SQL Server Getting Started Page.
// END REGION

/*** MSC-WARNING – MSCEWI1040 – THE STATEMENT IS NOT SUPPORTED IN SNOWFLAKE ***/
/* SET NOCOUNT ON*/
;
try {
EXEC(` UPDATE HumanResources.Employee
SET NationalIDNumber = ?
, BirthDate = ?
, MaritalStatus = ?
, Gender = ?
WHERE BusinessEntityID = ?`,[NATIONALIDNUMBER,BIRTHDATE,MARITALSTATUS,GENDER,BUSINESSENTITYID]);
} catch(error) {
EXEC(`CALL dbo.uspLogError(/*** MSC-WARNING – MSCEWI4010 – Default value added ***/ 0)`);
}
$$;

SnowConvert creates multiple helper class functions (including the EXEC helper called in the output procedure) to recreate the functionality that is present in the source code. SnowConvert also has finely tuned error messages to give you more information about any issues that may be present. You can actually click on both of the codes in the output procedure above to see the documentation page for that error code.

Want to see the same procedure above in Snowflake Scripting? Interested in getting an inventory of code that you'd like to take to the cloud? Let us know. We can help you get started and understand the codebase you're working with. If you're already familiar with SnowConvert in general, SnowConvert for SQL Server has all the capabilities that you've come to expect. From the ability to generate granular assessment data to functionally equivalent transformations built upon a semantic model of the source code, SnowConvert for SQL Server is ready for whatever you can throw at it. Get started today!
