Maintaining Code Quality with Amazon CodeCatalyst Reports

Amazon CodeCatalyst reports contain details about tests that occur during a workflow run. You can create tests such as unit tests, integration tests, configuration tests, and functional tests. You can use a test report to help troubleshoot a problem during a workflow.

Introduction

In prior posts in this series, I discussed reading The Unicorn Project, by Gene Kim, and how the main character, Maxine, struggles with a complicated Software Development Lifecycle (SDLC) after joining a new team. One of the challenges she encounters is the difficulty of shipping secure, functioning code without an automated testing mechanism. To quote Gene Kim, “Without automated testing, the more code we write, the more money it takes for us to test.”

Software developers know that shipping vulnerable or non-functioning code to a production environment is to be avoided at all costs; the monetary impact is high, and the toll it takes on team morale can be even greater. During the SDLC, developers need a way to easily identify and troubleshoot errors in their code.

In this post, I will focus on how developers can seamlessly run tests as a part of workflow actions as well as configure unit test and code coverage reports with Amazon CodeCatalyst. I will also outline how developers can access these reports to gain insights into their code quality.

Prerequisites

If you would like to follow along with this walkthrough, you will need to:

Have an AWS Builder ID for signing in to CodeCatalyst.
Belong to a CodeCatalyst space and have the Space administrator role assigned to you in that space. For more information, see Creating a space in CodeCatalyst, Managing members of your space, and Space administrator role.
Have an AWS account associated with your space and have the IAM role in that account. For more information about the role and role policy, see Creating a CodeCatalyst service role.

Walkthrough

As with the previous posts in the CodeCatalyst series, I am going to use the Modern Three-tier Web Application blueprint. Blueprints provide sample code and CI/CD workflows to help you get started easily across different combinations of programming languages and architectures. To follow along, you can re-use a project you created previously, or you can refer to a previous post that walks through creating a project using the Three-tier blueprint.

Once the project is deployed, CodeCatalyst opens the project overview. This view shows the content of the README file from the project’s source repository, workflow runs, pull requests, etc. The source repository and workflow are created for me by the project blueprint. To view the source code, I select Code → Source Repositories from the left-hand navigation bar. Then, I select the repository name link from the list of source repositories.

Figure 1. List of source repositories including Mythical Mysfits source code.

From here I can view details such as the number of branches, workflows, commits, pull requests and source code of this repo. In this walkthrough, I’m focused on the testing capabilities of CodeCatalyst. The project already includes unit tests that were created by the blueprint so I will start there.

From the Files list, navigate to web → src → components → __tests__ → TheGrid.spec.js. This file contains the front-end unit tests, which simply check whether the strings “Good”, “Neutral”, “Evil” and “Lawful”, “Neutral”, “Chaotic” have rendered on the web page. Take a moment to examine the code. I will use these tests throughout the walkthrough.

Figure 2. Unit tests for the front-end that check strings have been rendered properly.

Next, I navigate to the workflow that executes the unit tests. From the left-hand navigation bar, select CI/CD → Workflows. Then, find ApplicationDeploymentPipeline, expand Recent runs and select Run-xxxxx. The Visual tab shows a graphical representation of the underlying YAML file that makes up this workflow. It also provides details on what started the workflow run, when it started, how long it took to complete, the source repository and whether it succeeded.

Figure 3. The Deployment workflow open in the visual designer.

Workflows are composed of a source and one or more actions. I examined test reports for the back-end in a prior post, so I will focus on the front-end tests here. Select the build_and_test_frontend action to view logs of what the action ran, its configuration details, and the reports it generated. I’m specifically interested in the Unit Test and Code Coverage reports under the Reports tab:

Figure 4. Reports tab showing line and branch coverage.

Select the report unitTests.xml (you may need to scroll). Here, you can see an overview of this specific report with metrics like pass rate, duration, test suites, and the test cases for those suites:

Figure 5. Detailed report for the front-end tests.

This report has passed all checks. To make this report more interesting, I’ll intentionally edit the unit test to make it fail. First, navigate back to the source repository and open web → src → components → __tests__ → TheGrid.spec.js. This test case is looking for the string “Good”, so change it to say “Best” instead and commit the changes.

Figure 6. Front-End Unit Test Code Change.

This will automatically start a new workflow run. Navigating back to CI/CD → Workflows, you can see a new workflow run is in progress (takes ~7 minutes to complete).

Once complete, you can see that the build_and_test_frontend action failed. Opening the unitTests.xml report again, you can see that the report status is in a Failed state. Notice that the minimum pass rate for this test is 100%, meaning that if any test case in this unit test ever fails, the build fails completely.

There are ways to configure these minimums, which I will explore when looking at Code Coverage reports. To see more details on the error message in this report, select the failed test case.

Figure 7. Failed Test Case Error Message.

As expected, this indicates that the test was looking for the string “Good” but instead, it found the string “Best”. Before continuing, I return to the TheGrid.spec.js file and change the string back to “Good”.

CodeCatalyst also allows me to specify code and branch coverage criteria. Coverage is a metric that can help you understand how much of your source code was tested. This ensures source code is properly tested before shipping to a production environment. Coverage is not configured for the front-end, so I will examine the coverage of the back-end.

I select Reports on the left-hand navigation bar, and open the report called backend-coverage.xml. You can see details such as line coverage, number of lines covered, specific files that were scanned, etc.

Figure 8. Code Coverage Report Succeeded.

The Line coverage minimum is set to 70% but the current coverage is 80%, so it succeeds. I want to push the team to continue improving, so I will edit the workflow to raise the minimum threshold to 90%. Navigating back to CI/CD → Workflows → ApplicationDeploymentPipeline, select the Edit button. On the Visual tab, select build_backend. On the Outputs tab, scroll down to Success Criteria and change Line Coverage to 90%.

Figure 9. Configuring Code Coverage Success Criteria.
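
The visual edit above corresponds to the report’s success criteria in the workflow YAML. The following is a rough sketch only: the field names and format value are assumptions and the report name and paths are illustrative, so consult the CodeCatalyst workflow reference for the exact schema. The relevant portion of the build_backend action might look something like:

Outputs:
  Reports:
    backend-coverage:
      Format: COBERTURAXML
      IncludePaths:
        - "coverage/*.xml"
      SuccessCriteria:
        LineCoverage: 90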

On the top-right, select Commit. This will push the changes to the repository and start a new workflow run. Once the run has finished, navigate back to the Code Coverage report. This time, you can see it reporting a failure to meet the minimum threshold for Line coverage.

Figure 10. Code Coverage Report Failed.

There are other success criteria options available to experiment with. To learn more about success criteria, see Configuring success criteria for tests.

Cleanup

If you have been following along with this workflow, you should delete the resources you deployed so you do not continue to incur charges. First, delete the two stacks that CDK deployed using the AWS CloudFormation console in the AWS account you associated when you launched the blueprint. These stacks will have names like mysfitsXXXXXWebStack and mysfitsXXXXXAppStack. Second, delete the project from CodeCatalyst by navigating to Project settings and choosing Delete project.

Summary

In this post, I demonstrated how Amazon CodeCatalyst can help developers quickly configure test cases, run unit/code coverage tests, and generate reports using CodeCatalyst’s workflow actions. You can use these reports to adhere to your code testing strategy as a software development team. I also outlined how you can use success criteria to influence the outcome of a build in your workflow.  In the next post, I will demonstrate how to configure CodeCatalyst workflows and integrate Software Composition Analysis (SCA) reports. Stay tuned!

About the authors:

Imtranur Rahman

Imtranur Rahman is an experienced Sr. Solutions Architect in the WWPS team with 14+ years of experience. Imtranur works with large AWS Global SI partners and helps them build their cloud strategy and broad adoption of Amazon’s cloud computing platform. Imtranur specializes in containers, DevSecOps, GitOps, microservices-based applications, hybrid application solutions, and application modernization, and loves innovating on behalf of his customers. He is highly customer-obsessed and takes pride in providing the best solutions through his extensive expertise.

Wasay Mabood

Wasay is a Partner Solutions Architect based out of New York. He works primarily with AWS Partners on migration, training, and compliance efforts but also dabbles in web development. When he’s not working with customers, he enjoys window-shopping, lounging around at home, and experimenting with new ideas.

A pattern for dealing with #legacy code in C#

static string legacy_code(int input)
{
    // some magic process
    const int magicNumber = 7;

    var intermediaryValue = input + magicNumber;

    return "The answer is " + intermediaryValue;
}

When dealing with a project more than a few years old, the issue of legacy code crops up time and time again. In this case, you have a function that’s called from lots of different client applications, so you can’t change it without breaking the client apps.

I’m using the code example above to keep the illustration simple, but you have to imagine that this function “legacy_code(int)”, in reality, could be hundreds of lines long, with lots of quirks and complexities. So you really don’t want to duplicate it.

Now, imagine that, as an output, I want just the intermediary value, not the string “The answer is …”. My client could parse the number out of the string, but that’s a horrible extra step to put on the client.

Alternatively, you could create a “legacy_code_internal()” that returns the int, and have legacy_code() call legacy_code_internal() and add the string. This is the most common approach (sketched below), but it can end up with a rat’s nest of _internal() functions.
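
For reference, a minimal sketch of that conventional refactoring might look like this (illustrative only; the real function would carry all the quirks and complexity mentioned above):

static int legacy_code_internal(int input)
{
    // some magic process, now exposed as an int-returning function
    const int magicNumber = 7;
    return input + magicNumber;
}

static string legacy_code(int input)
{
    // original signature preserved for existing clients
    return "The answer is " + legacy_code_internal(input);
}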

Here’s another approach – you can tell me what you think:

static string legacy_code(int input, Action<int> intermediary = null)
{
    // some magic process
    const int magicNumber = 7;

    var intermediaryValue = input + magicNumber;

    if (intermediary != null) intermediary(intermediaryValue);

    return "The answer is " + intermediaryValue;
}

Here, we can pass an optional callback into the legacy_code function that, if present, is called with the intermediaryValue as an int, without interfering with how the code is called by existing clients.

A new client looking to use the new functionality could call:

int intermediaryValue = 0;
var answer = legacy_code(4, i => intermediaryValue = i);
Console.WriteLine(answer);
Console.WriteLine(intermediaryValue);

This approach could return more than one object (a sketch of that extension follows), but it could get very messy.
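
As a rough illustration of that last point (a hypothetical extension, not from the original post), each extra value could get its own optional callback:

static string legacy_code(int input, Action<int> intermediary = null, Action<int> magic = null)
{
    // some magic process
    const int magicNumber = 7;
    if (magic != null) magic(magicNumber);

    var intermediaryValue = input + magicNumber;
    if (intermediary != null) intermediary(intermediaryValue);

    return "The answer is " + intermediaryValue;
}

Every additional callback keeps the original signature intact for existing clients, but the parameter list grows with each value a new client wants to observe, which is where the messiness comes from.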

.NET IdempotentAPI 1.0.0 Release Candidate

Introduction

A distributed system consists of multiple components located on different networked computers, which communicate and coordinate their actions by passing messages to one another. Fault-tolerant applications can continue operating despite system, hardware, and network faults in one or more components.

Idempotency in Web APIs ensures that the API works correctly (as designed) even when consumers (clients) send the same request multiple times. To simplify the integration of idempotency in an API project, we could use the IdempotentAPI open-source NuGet library. IdempotentAPI implements an ASP.NET Core attribute (filter) that handles HTTP write operations (POST and PATCH) so that they take effect only once for a given request payload and idempotency key.

In July 2021, we saw how the IdempotentAPI v0.1.0-beta in .NET Nakama (2021, July 4) provides an easy way to develop idempotent Web APIs in .NET Core. Since then, with the community’s help, several issues and improvements have been identified and implemented. The complete journey of the IdempotentAPI is available in the CHANGELOG.md file.

Now, the IdempotentAPI 1.0.0-RC-01 is available with many improvements ✨. In the following sections, we will see the complete features, details regarding the improvements, the available NuGet packages, and instructions to start using the IdempotentAPI library quickly.

Features

Simple: Support idempotency in your APIs easily with three simple steps 1️⃣2️⃣3️⃣.
Validations: Performs validation of the request’s hash key to ensure that the cached response is returned only for the same combination of idempotency key and request, preventing accidental misuse.
Use it anywhere!: IdempotentAPI targets .NET Standard 2.0, so we can use it in any compatible .NET implementation (.NET Framework, .NET Core, etc.). Click here to see the minimum .NET implementation versions that support each .NET Standard version.
Configurable: Customize the idempotency to your needs.

Configuration options (see the GitHub repository for more details), such as the logging level.

Caching implementation based on your needs:

DistributedCache: A built-in cache based on the standard IDistributedCache interface.
FusionCache: A high-performance and robust cache with an optional distributed 2nd-level layer and advanced features.
… or you could use your own implementation.

Improvement Details

Improving Concurrent Requests Handling

The standard IDistributedCache interface doesn’t provide a way to GetOrSet a cached value atomically; it only defines separate Get and Set methods. In our previous implementation, we used these two methods without locking (i.e., without grouping them into a single logical operation). As a result, we had an issue with concurrent requests that used the same idempotency key: the controller action could be executed multiple times.

As we can observe in Figure 1, this issue happens when the API server receives a second request (with the same idempotency key) before we flag the first idempotency key as Inflight (i.e., execution in progress). Thus, race conditions occur when setting the idempotency key as Inflight.

Figure 1. – The issue of executing the controller more than once when concurrent requests with the same idempotency key are performed.

To overcome this issue, we defined the IIdempotencyCache interface and implemented the GetOrSet method, which performs a lock (locally) for each idempotency key (see code below). In Figure 2, we can see how we used the GetOrSet method to execute the controller action only once on concurrent requests with the same idempotency key.

The idea is to use the GetOrSet method to set an Inflight object with a dynamically generated unique id per request, as a single logical operation, whenever Get returns null (i.e., no value is cached). A second call to GetOrSet will wait for the first call to complete. Thus, only the execution that receives its own unique id back can continue with the execution of the controller action.

Figure 2. – Concurrent requests with the same idempotency key, execute the controller action only once.

public byte[] GetOrSet(
    string key,
    byte[] defaultValue,
    object? options = null,
    CancellationToken token = default)
{
    if (key is null)
    {
        throw new ArgumentNullException(nameof(key));
    }

    if (options is not null && options is not DistributedCacheEntryOptions)
    {
        throw new ArgumentNullException(nameof(options));
    }

    using (var valuelocker = new ValueLocker(key))
    {
        byte[] cachedData = _distributedCache.Get(key);
        if (cachedData is null)
        {
            _distributedCache.Set(key, defaultValue, (DistributedCacheEntryOptions?)options);
            return defaultValue;
        }
        else
        {
            return cachedData;
        }
    }
}
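
The ValueLocker used above is not shown in the post. As a minimal sketch of roughly how a per-key locker could work (an assumption for illustration, not the library’s actual implementation), it might look like this:

using System;
using System.Collections.Concurrent;
using System.Threading;

// Hypothetical per-key locker: every ValueLocker created with the same key
// enters a Monitor on the same shared object, so the Get/Set pair inside
// GetOrSet runs as a single logical operation per idempotency key.
// (Simplified: entries are never removed from the dictionary.)
public sealed class ValueLocker : IDisposable
{
    private static readonly ConcurrentDictionary<string, object> Locks =
        new ConcurrentDictionary<string, object>();

    private readonly object _lockObject;

    public ValueLocker(string key)
    {
        _lockObject = Locks.GetOrAdd(key, _ => new object());
        Monitor.Enter(_lockObject);
    }

    public void Dispose()
    {
        Monitor.Exit(_lockObject);
    }
}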

Caching as Implementation Detail

To overcome the issue of concurrent requests with the same idempotency key, we defined the IIdempotencyCache interface and implemented the GetOrSet method. This is implemented in our built-in DistributedCache caching project, which is based on the standard IDistributedCache interface.

Our implementation provides basic caching functionality. However, by defining the IIdempotencyCache interface, our IdempotentAPI logic becomes independent of the caching implementation. Thus, we can support other caching implementations with advanced features, such as FusionCache.

FusionCache is a high-performance and robust caching .NET library with an optional distributed 2nd layer with advanced features, such as fail-safe mechanism, cache stampede prevention, fine-grained soft/hard timeouts with background factory completion, extensive customizable logging, and more.

BinaryFormatter is Obsolete

The BinaryFormatter serialization methods became obsolete starting with ASP.NET Core 5.0. In the IdempotentAPI project, BinaryFormatter was used in the Utils.cs class for serialization and deserialization. As a result, our library did not work in .NET Core 5.0 and later versions unless we enabled the BinaryFormatterSerialization option in the .csproj file.

The recommended action, based on the .NET documentation, is to stop using BinaryFormatter and use a JSON or XML serializer instead. In our case, we used the Newtonsoft JsonSerializer, which can include type information when serializing JSON and read that type information when deserializing JSON, so the target object is created with the original types.
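
As a rough sketch of the idea (illustrative settings only, not necessarily the exact configuration the library uses), Newtonsoft.Json can embed this type metadata via TypeNameHandling:

using System;
using System.Collections.Generic;
using Newtonsoft.Json;

class TypeNameHandlingExample
{
    static void Main()
    {
        var data = new Dictionary<string, object>
        {
            ["Request.Method"] = "POST",
            ["Response.StatusCode"] = 200,
        };

        // TypeNameHandling.All writes "$type" properties into the JSON so the
        // original CLR types can be reconstructed when deserializing.
        var settings = new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.All };

        string json = JsonConvert.SerializeObject(data, settings);
        var roundTripped = JsonConvert.DeserializeObject<Dictionary<string, object>>(json, settings);

        Console.WriteLine(json);
    }
}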

In the following JSON example, we can see how the type of information is included in the data.

{
    "$type": "System.Collections.Generic.Dictionary`2[[System.String, System.Private.CoreLib],[System.Object, System.Private.CoreLib]], System.Private.CoreLib",
    "Request.Method": "POST",
    "Response.StatusCode": 200,
    "Response.Headers": {
        "$type": "System.Collections.Generic.Dictionary`2[[System.String, System.Private.CoreLib],[System.Collections.Generic.List`1[[System.String, System.Private.CoreLib]], System.Private.CoreLib]], System.Private.CoreLib",
        "myHeader1": {
            "$type": "System.Collections.Generic.List`1[[System.String, System.Private.CoreLib]], System.Private.CoreLib",
            "$values": [
                "value1-1",
                "value1-2"
            ]
        },
        "myHeader2": {
            "$type": "System.Collections.Generic.List`1[[System.String, System.Private.CoreLib]], System.Private.CoreLib",
            "$values": [
                "value2-1",
                "value2-1"
            ]
        }
    }
}

Quick Start

Step 1: Register the Caching Storage

Caching data is necessary for idempotency. Therefore, the IdempotentAPI library needs an implementation of IIdempotencyCache to be registered in the Program.cs or Startup.cs file, depending on the style you use (.NET 6.0 or older). The IIdempotencyCache defines the caching storage service for the idempotency needs.

Currently, we support the following two implementations (see the following table). However, you can use your own implementation. Both implementations support the IDistributedCache, either as primary caching storage (requiring registration) or as secondary (optional registration).

Thus, we can define our caching storage service for the IDistributedCache, such as Memory, SQL Server, Redis, NCache, etc. See the Distributed caching in ASP.NET Core article for more details about the available framework-provided implementations.

IdempotentAPI.Cache | Implementation | Support Concurrent Requests | Primary Cache | 2nd-Level Cache | Advanced Features
DistributedCache (Default) | IDistributedCache | Yes | IDistributedCache | No | No
FusionCache | FusionCache | Yes | Memory Cache | Yes (IDistributedCache) | Yes

Choice 1 (Default): IdempotentAPI.Cache.DistributedCache

Install the IdempotentAPI.Cache.DistributedCache via the NuGet UI or the NuGet package manager console.

// Register an implementation of the IDistributedCache.
// For this example, we are using a Memory Cache.
services.AddDistributedMemoryCache();

// Register the IdempotentAPI.Cache.DistributedCache.
services.AddIdempotentAPIUsingDistributedCache();

Choice 2: IdempotentAPI.Cache.FusionCache

Install the IdempotentAPI.Cache.FusionCache via the NuGet UI or the NuGet package manager console. To use the advanced FusionCache features (2nd-level cache, Fail-Safe, Soft/Hard timeouts, etc.), configure the FusionCacheEntryOptions based on your needs (for more details, visit the FusionCache repository).

// Register the IdempotentAPI.Cache.FusionCache.
// Optionally: Configure the FusionCacheEntryOptions.
services.AddIdempotentAPIUsingFusionCache();

Tip: To use the 2nd-level cache, we should register an implementation for the IDistributedCache and register the FusionCache Serialization (NewtonsoftJson or SystemTextJson). For example, check the following code:

// Register an implementation of the IDistributedCache.
// For this example, we are using Redis.
services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "YOUR CONNECTION STRING HERE, FOR EXAMPLE: localhost:6379";
});

// Register the FusionCache serialization (e.g. NewtonsoftJson).
// This is needed for the 2nd-level cache.
services.AddFusionCacheNewtonsoftJsonSerializer();

// Register the IdempotentAPI.Cache.FusionCache.
// Optionally: Configure the FusionCacheEntryOptions.
services.AddIdempotentAPIUsingFusionCache();

Step 2: Decorate Response Classes as Serializable

The response Data Transfer Objects (DTOs) need to be serialized before caching. For that reason, we will have to decorate the relevant DTOs as [Serializable]. For example, see the code below.

using System;

namespace WebApi_3_1.DTOs
{
    [Serializable]
    public class SimpleResponse
    {
        public int Id { get; set; }
        public string Message { get; set; }
        public DateTime CreatedOn { get; set; }
    }
}

Step 3: Set Controller Operations as Idempotent

In your controller class, add the following using statement. Then choose which operations should be idempotent by setting the [Idempotent()] attribute, either on the controller’s class or on each action separately. The following two sections describe these two cases. In both cases, however, we should define the Consumes and Produces attributes on the controller.

using IdempotentAPI.Filters;

Using the Idempotent Attribute on a Controller’s Class

By using the Idempotent attribute on the API controller’s class, all POST and PATCH actions will work as idempotent operations (requiring the IdempotencyKey header).

[ApiController]
[Route("[controller]")]
[Consumes("application/json")] // We should define this.
[Produces("application/json")] // We should define this.
[Idempotent(Enabled = true)]
public class SimpleController : ControllerBase
{
    // …
}

Using the Idempotent Attribute on a Controller’s Action

By using the Idempotent attribute on each action (HTTP POST or PATCH), we can choose which of them should be Idempotent. In addition, we could use the Idempotent attribute to set different options per action.

[HttpPost]
[Idempotent(ExpireHours = 48)]
public IActionResult Post([FromBody] SimpleRequest simpleRequest)
{
    // …
}
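
For illustration, a hypothetical client call could look like the following (the route, payload, and port are made up; only the IdempotencyKey header name comes from the behavior described above). Re-sending the same request with the same key should return the cached response instead of executing the action again:

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

class IdempotentClientExample
{
    static async Task Main()
    {
        using var client = new HttpClient { BaseAddress = new Uri("https://localhost:5001") };

        // One idempotency key per logical operation; reuse it on retries.
        var idempotencyKey = Guid.NewGuid().ToString();

        using var request = new HttpRequestMessage(HttpMethod.Post, "/Simple")
        {
            Content = JsonContent.Create(new { Message = "Hello" })
        };
        request.Headers.Add("IdempotencyKey", idempotencyKey);

        var response = await client.SendAsync(request);
        Console.WriteLine(response.StatusCode);
    }
}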

NuGet Packages

Package Name
Description

IdempotentAPI
The implementation of the IdempotentAPI library.

IdempotentAPI.Cache
Defines the caching abstraction (IIdempotencyCache) that IdempotentAPI is based on.

IdempotentAPI.Cache.DistributedCache
The default caching implementation, based on the standard IDistributedCache interface.

IdempotentAPI.Cache.FusionCache
Supports caching via the FusionCache third-party library.

Summary

The IdempotentAPI 1.0.0-RC-01 is available with many improvements ✨. With the community’s help, several issues and improvements have been identified and implemented. I want to take this opportunity to thank @apchenjun, @fjsosa, @lvzhuye, @RichardGreen-IS2, and @william-keller for their support, ideas, and time spent improving this library.

Any help with coding, suggestions, questions, giving a GitHub star, etc., is welcome. If you are using this library, don’t hesitate to contact me. I would be happy to know about your use case.