How Traveloka Uses Backstage as an API Developer Portal for Amazon API Gateway

In this post, you will learn how Traveloka, an Amazon Web Services (AWS) customer and one of the largest online travel companies in Southeast Asia, uses Backstage as the developer portal for APIs hosted on Amazon API Gateway (API Gateway).

API Gateway is a fully managed service for creating, publishing, and managing RESTful and WebSocket APIs. An API developer portal is a bridge between API producers and consumers. Consumers use API contracts and documentation published by producers to build client applications.

Prior to Traveloka implementing Backstage, each service team documented APIs in their own way. As a result, developers lost time going back and forth with service teams to build integrations.

With Backstage, developers centralize and standardize API contracts and documentation to accelerate development and avoid bottlenecks.

API portal

Modern application design, such as a microservices architecture, is highly modular. This increases the number of integrations you need to build. APIs are a common way to expose microservices while abstracting away implementation details.

As the number of services and the teams supporting them grows, developers find it difficult to discover APIs and build integrations. APIs also simplify how third parties integrate with your service. An API portal consolidates API documentation in a central place. This image shows the role of an API portal:

Figure 1: Role of an API portal

There are multiple ways to build an API portal for API Gateway. One option is to integrate AWS Partner solutions like ReadMe with API Gateway. Software as a Service (SaaS) solutions like ReadMe let you get started quickly and reduce operational overhead.

But customers who prefer a highly configurable setup or open source solutions choose to build their own. Traveloka chose to host Backstage on Amazon Elastic Kubernetes Service (Amazon EKS) for the portal, in part because the company was already using Backstage as its developer portal.

Backstage

Backstage is an open source framework for developer portals under the Cloud Native Computing Foundation (CNCF). The software catalog, at the heart of Backstage, consists of entities that describe software components such as services, APIs, CI/CD pipelines, and documentation.

Service owners can define entities using metadata YAML files, and developers can use the interface to discover services.

This image shows the API portal:

Figure 2: Example API portal on Backstage

Backstage has a frontend and backend component. It stores state in a relational database. You can customize Backstage using YAML configuration files. It integrates with cloud providers and other external systems using plugins.

For instance, you can use Amazon Simple Storage Service (Amazon S3) as the source of entities that will be ingested into the catalog. This image shows a common production deployment approach using containers:

Figure 3: Backstage architecture

Backstage at Traveloka

The majority of services at Traveloka are written in Java and run on Amazon EKS. Backstage is managed by the cloud infrastructure team in a separate AWS account. This team provides a Java SDK, called the generator, that service teams use to onboard their services to Backstage.

This generator abstracts away Backstage integration details from service developers and generates the YAML entity files automatically from application configuration at run time. This image shows the Backstage API portal deployment at Traveloka:

Figure 4: API portal on Backstage at Traveloka

1. Service teams integrate the Backstage generator SDK with their applications.
2. Changes kick off the CI/CD pipeline.
3. The pipeline runs tests and deploys the service.
4. The generator creates or updates an API entity definition in an Amazon S3 bucket.
5. The Amazon S3 discovery plugin installed on Backstage syncs the catalog from the Amazon S3 bucket.
6. Developers authenticate with GitHub apps to access the portal.

The generator runs on service startup, after deployment. First, it generates the OpenAPI Specification (OAS) for the API if needed; not all services at Traveloka have standardized on OAS yet. Second, it uses runtime application configuration, such as the service name, environment, and custom tags, to generate the metadata YAML files. Finally, it copies the generated files to an Amazon S3 bucket in the Backstage AWS account using a cross-account AWS Identity and Access Management (IAM) role.
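
For illustration only, a metadata file emitted by such a generator might look similar to the following API entity. The service name, tags, owner, and definition path are assumptions; the exact format depends on Traveloka's SDK.

apiVersion: backstage.io/v1alpha1
kind: API
metadata:
  name: payment-service-api # derived from the service name at runtime (illustrative)
  tags:
    - production # environment and custom tags from application configuration
    - payments
spec:
  type: openapi
  lifecycle: production
  owner: payments-team
  definition:
    $text: ./payment-service-oas.json # OAS file generated by the service (illustrative path)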

The generator is non-blocking, and errors in it do not affect the service. Generating documentation from a running application guarantees an accurate representation of service capabilities. It also handles cases where API Gateway is only a proxy and the application framework manages routing. The Amazon S3 discovery plugin installed on Backstage crawls the Amazon S3 bucket and registers new and updated entities.

Backstage deployment is automated with a CI/CD pipeline. Development and production environments are hosted on separate Amazon EKS clusters. Services are always onboarded to the production cluster. The infrastructure team uses the development cluster to test new plugins, features, and version upgrades before deploying them to production.

A two-person team manages the Backstage infrastructure. Another two-person team builds and maintains the SDK and helps service teams onboard.

Since the project began in June 2022, Traveloka has onboarded 204 services in Q1 2023 and plans to onboard 600 more in Q2 2023.

We have seen how Traveloka automates API onboarding to Backstage dynamically based on runtime configuration. Alternatively, you can generate and update entity definitions as part of your API CI/CD pipeline. This is more straightforward if your application has no dependency on runtime configuration.

Solution overview

In this post, we will run Backstage locally as an API portal. Amazon S3 is used as a source of entities that will be ingested into the catalog. State is tracked in a SQLite in-memory database.

Prerequisites

Configuration of AWS Command Line Interface (AWS CLI) v2.
Prerequisites for creating the Backstage app:

Access to a Unix-based operating system, such as Linux, macOS, or Windows Subsystem for Linux.
An account with elevated rights to install the dependencies.
Installation of curl or wget.
Access to a Node.js Active LTS release.
Installation of Yarn (you will need to use Yarn classic to create a new project, but it can then be migrated to Yarn 3).
Installation of Docker.
Installation of Git.

How to deploy the solution:

1. Create Backstage app.
2. Configure Amazon S3 as catalog location.
3. Upload API definition to Amazon S3 and test portal.

Step 1: Create Backstage app

Create a working directory for the code.

mkdir backstage-api-portal-blog
cd backstage-api-portal-blog

We will use npx to create the Backstage app. When prompted for the app name, enter api-portal. This will take a few minutes.

npx @backstage/create-app

Use yarn to run the app.

cd api-portal && yarn dev

Access your Backstage portal at http://localhost:3000. This image shows a sample catalog deployed:

Figure 5: Backstage running locally

Step 2: Configure Amazon S3 as catalog location

There are two steps to set up the Amazon S3 integration. You need to configure the Amazon S3 entity provider in app-config.yaml, and install the plugin.

Step 2.1: Configure Amazon S3 entity provider

Update the catalog providers section in api-portal/app-config.yaml to refer to the Amazon S3 bucket you wish to use as catalog. Create a new bucket using the command below. Replace sample-bucket with a unique name.

aws s3 mb s3://sample-bucket

You can also use an existing bucket. Replace the catalog section, line 73 through the end of the file, with the following code:

catalog:
  providers:
    awsS3:
      # uses "default" as provider ID
      bucketName: sample-bucket # replace with your S3 bucket
      prefix: backstage-api-blog/ # optional
      region: ap-southeast-1 # replace with your bucket region
      schedule: # optional; same options as in TaskScheduleDefinition
        # supports cron, ISO duration, "human duration" as used in code
        frequency: { minutes: 2 }
        # supports ISO duration, "human duration" as used in code
        timeout: { minutes: 3 }

Step 2.2: Install the AWS catalog plugin

Run this command from the Backstage root folder, api-portal, to install the AWS catalog plugin:

yarn add --cwd packages/backend @backstage/plugin-catalog-backend-module-aws

Replace the content of api-portal/packages/backend/src/plugins/catalog.ts with the following code:

import { CatalogBuilder } from '@backstage/plugin-catalog-backend';
import { ScaffolderEntitiesProcessor } from '@backstage/plugin-scaffolder-backend';
import { Router } from 'express';
import { PluginEnvironment } from '../types';
import { AwsS3EntityProvider } from '@backstage/plugin-catalog-backend-module-aws';

export default async function createPlugin(
  env: PluginEnvironment,
): Promise<Router> {
  const builder = await CatalogBuilder.create(env);
  builder.addProcessor(new ScaffolderEntitiesProcessor());
  builder.addEntityProvider(
    AwsS3EntityProvider.fromConfig(env.config, {
      logger: env.logger,
      // optional: same as defined in app-config.yaml
      schedule: env.scheduler.createScheduledTaskRunner({
        frequency: { minutes: 2 },
        timeout: { minutes: 3 },
      }),
    }),
  );
  const { processingEngine, router } = await builder.build();
  await processingEngine.start();
  return router;
}

Step 3: Upload API definition to Amazon S3 and test portal

Create an API kind definition for the Swagger PetStore REST API. Save the following code in the file pet-store-api.yaml.

apiVersion: backstage.io/v1alpha1
kind: API
metadata:
  name: petstore
  description: Swagger PetStore Sample REST API
spec:
  type: openapi
  lifecycle: production
  owner: user:guest
  definition:
    $text: https://petstore.swagger.io/v2/swagger.json

Copy this file to Amazon S3. Replace the bucket name and prefix with the ones you specified in step 2.1.

aws s3 cp pet-store-api.yaml s3://sample-bucket/backstage-api-blog/pet-store-api.yaml

To allow testing the API directly from the portal, Backstage must first trust the swagger.io domain. To enable this, open api-portal/app-config.yaml and allow reading from swagger.io in the backend configuration section starting on line 8. Add this reading config:

backend:
  reading:
    allow:
      - host: '*.swagger.io'

If you have the portal running, restart it with Ctrl+C followed by yarn dev to pick up the configuration change. You do not have to restart to pick up new entities or updates to existing ones in the Amazon S3 bucket.

Now when you navigate to the API section, you will see the PetStore API. Select the API name and navigate to the DEFINITION tab. This image shows the API documentation:

Figure 6: PetStore API documentation

Look for the GET /pet/findByStatus resource and expand the definition. This image shows the resource documentation:

Figure 7: PetStore API /pet/findByStatus resource documentation

Choose Try it out, then select available for status, followed by Execute, as shown in this image:

Figure 8: Test PetStore API /pet/findByStatus resource

You will get a status code of 200 and a list of available pets.

This concludes the steps to test the API portal with Backstage locally. Refer to the instructions for host build from the Backstage documentation to containerize the Backstage app for production deployment.

Cleanup

Delete the folder created for the Backstage app and the working folder. Run the following command from the working folder, backstage-api-portal-blog:

rm -rf api-portal && cd .. && rmdir backstage-api-portal-blog

Delete the API definition from the Amazon S3 bucket.

aws s3 rm s3://sample-bucket/backstage-api-blog/pet-store-api.yaml

Delete the Amazon S3 bucket.

aws s3 rb s3://sample-bucket

Conclusion

In this blog, we have shown you how to use Backstage to build an API portal. API portals are important to simplify the developer experience of discovering API products and building integrations.

Backstage provides this capability for APIs hosted on API Gateway. For production deployment, the containerized Backstage application can be deployed to either Amazon Elastic Container Service (Amazon ECS) or Amazon EKS.

For Amazon ECS, refer to the sample code for deploying Backstage using AWS Fargate and Amazon Aurora PostgreSQL. For Amazon EKS, the Backstage Helm chart is an easy way to get started.

Visit the API Gateway pattern collection on Serverless Land to learn more about designing REST API integrations on AWS.

Optimize software development with Amazon CodeWhisperer

Businesses differentiate themselves by delivering new capabilities to their customers faster. To do so, they must use automation to accelerate software development while optimizing code quality, improving performance, and meeting security and compliance requirements. Trained on billions of lines of Amazon and open source code, Amazon CodeWhisperer is an AI coding companion that helps developers write code by generating real-time whole-line and full-function code suggestions in their IDEs. Amazon CodeWhisperer has two tiers: the Individual tier is free for individual use, and the Professional tier provides administrative capabilities for organizations seeking to grant their developers access to CodeWhisperer. This blog provides a high-level overview of how developers can use CodeWhisperer.

Getting Started

Getting started with CodeWhisperer is straightforward and documented here. After setup, CodeWhisperer integrates with the IDE and provides code suggestions based on comments written in the IDE. Use TAB to accept a suggestion, ESC to reject it, ALT+C (Windows) or Option+C (macOS) to manually trigger a suggestion, and the left and right arrow keys to switch between suggestions.

CodeWhisperer supports code generation for 15 programming languages. CodeWhisperer can be used in various IDEs, such as Amazon SageMaker Studio, Visual Studio Code, AWS Cloud9, AWS Lambda, and many JetBrains IDEs. Refer to the Amazon CodeWhisperer documentation for the latest updates on supported languages and IDEs.

Contextual Code Suggestions

CodeWhisperer continuously examines code and comments for contextual code suggestions. It will generate code snippets using this contextual information and the location of your cursor. Illustrated below is an example of a code suggestion from inline comments in Visual Studio Code that demonstrates how CodeWhisperer can provide context-specific code suggestions without requiring the user to manually replace variables or parameters. In the comment, the file and Amazon Simple Storage Service (Amazon S3) bucket are specified, and CodeWhisperer uses this context to suggest relevant code.
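
As a rough illustration (the function, file, and bucket names below are assumptions rather than the exact prompt from the screenshot), a comment-driven suggestion might resemble the following:

# Comment written by the developer, followed by the kind of completion
# CodeWhisperer may generate. Names are illustrative.
import boto3

# upload the file report.csv to the S3 bucket my-analytics-bucket
def upload_report():
    s3_client = boto3.client("s3")
    s3_client.upload_file("report.csv", "my-analytics-bucket", "report.csv")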

CodeWhisperer also supports and recommends writing declarative and procedural code, such as shell scripting and query languages. The following example shows how CodeWhisperer recommends blocks of code in a shell script that loops through servers, executes the hostname command on each, and saves the responses to an output file.
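
A minimal sketch of such a script, assuming the server names live in servers.txt and the responses go to output.txt (both file names are illustrative):

#!/bin/bash
# Run the hostname command on each server listed in servers.txt
# and append the response to output.txt.
while read -r server; do
  ssh "$server" hostname >> output.txt
done < servers.txt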

In the following example, based on the comment, CodeWhisperer suggests Structured Query Language (SQL) code that uses a common table expression.

CodeWhisperer works with popular integrated development environments (IDEs); for more information on supported IDEs, refer to the CodeWhisperer documentation. Illustrated below is CodeWhisperer integrated with the AWS Lambda console.

Amazon CodeWhisperer is a versatile AI coding assistant that can aid in a variety of tasks, including AWS-related tasks and API integrations, as well as external (non-AWS) API integrations. For example, illustrated below is CodeWhisperer suggesting code for Twilio's APIs.

Now that we have seen how CodeWhisperer can help with writing code faster, the next section explores how to use AI responsibly.

Use AI responsibly

Developers often use open source code; however, they run into license attribution challenges, such as crediting the original authors or maintaining the license text. The difficulty lies in properly identifying and attributing the relevant open source components used within a project. With the abundance of open source libraries and frameworks available, it can be time-consuming and complex to track and attribute each piece of code accurately. Failure to meet license attribution requirements can result in legal issues, violation of intellectual property rights, and damage to a developer's reputation. CodeWhisperer's reference tracking continuously monitors suggested code for similarities with known open source code, allowing developers to make informed decisions about incorporating it into their project and ensuring proper attribution.

Shift left application security

CodeWhisperer can scan code for hard-to-find vulnerabilities, such as those in the Open Web Application Security Project (OWASP) top ten, those that don't meet crypto library best practices, those that violate AWS internal security best practices, and others. As of this writing, CodeWhisperer supports security scanning in Python, Java, and JavaScript. Below is an illustration of identifying the most common CWEs (Common Weakness Enumerations), along with the ability to dive deep into the problematic line of code with the click of a button.

In the following example, CodeWhisperer provides a file-by-file analysis of CWEs and highlights the top 10 OWASP CWEs, such as unsanitized input run as code, cross-site scripting, resource leaks, hardcoded credentials, SQL injection, OS command injection, and insecure hashing.
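
For context, the snippet below is a hand-written illustration (not CodeWhisperer output) of two of the issue types listed above that a security scan would typically flag:

import sqlite3

DB_PASSWORD = "SuperSecret123"  # hardcoded credential (CWE-798)

def find_user(conn: sqlite3.Connection, name: str):
    # Unsanitized input concatenated into a SQL statement (SQL injection, CWE-89).
    return conn.execute("SELECT * FROM users WHERE name = '" + name + "'").fetchall()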

Generating Test Cases

A good developer always writes tests. CodeWhisperer can help suggest test cases and verify the code's functionality. CodeWhisperer considers boundary values, edge cases, and other potential issues that may need to be tested. In the example below, a comment referring to the fact_demo() function leads CodeWhisperer to suggest a unit test for fact_demo() while leveraging contextual details.
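
A sketch of the kind of test CodeWhisperer might suggest, assuming fact_demo() computes a factorial and lives in a module named math_utils (both are assumptions):

import unittest

from math_utils import fact_demo  # hypothetical module containing fact_demo()


class TestFactDemo(unittest.TestCase):
    def test_factorial_of_zero(self):
        self.assertEqual(fact_demo(0), 1)

    def test_factorial_of_positive_number(self):
        self.assertEqual(fact_demo(5), 120)


if __name__ == "__main__":
    unittest.main()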

Also, CodeWhisperer can simplify creating repetitive code for unit testing. For example, if you need to create sample data using INSERT statements, CodeWhisperer can generate the necessary inserts based on a pattern.

CodeWhisperer with Amazon SageMaker Studio and Jupyter Lab

CodeWhisperer works with SageMaker Studio and Jupyter Lab, providing code completion support for Python in code cells. To utilize CodeWhisperer, follow the setup instructions to activate it in Amazon SageMaker Studio and Jupyter Lab. To begin coding, see User actions.
The following illustration showcases CodeWhisperer’s code recommendations in SageMaker Studio. It demonstrates the suggested code based on comments for loading and analyzing a dataset.
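
As an illustration of that flow (the dataset file name is an assumption), a comment like the first line below could yield a suggestion along these lines:

# load the dataset sales.csv into a pandas DataFrame and print summary statistics
import pandas as pd

df = pd.read_csv("sales.csv")
print(df.head())
print(df.describe())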

Conclusion

In conclusion, this blog has highlighted the numerous ways in which developers can use CodeWhisperer to increase productivity, streamline workflows, and develop secure code. By adopting CodeWhisperer's AI-powered features, developers can experience enhanced productivity, accelerated learning, and significant time savings.

To take advantage of CodeWhisperer and optimize your coding process, here are the next steps:

1. Visit the CodeWhisperer feature page to learn more about the benefits of CodeWhisperer.
2. Sign up and start using CodeWhisperer.
3. Read about CodeWhisperer success stories.

About the Authors

Vamsi Cherukuri

Vamsi Cherukuri is a Senior Technical Account Manager at Amazon Web Services (AWS), leveraging over 15 years of developer experience in Analytics, application modernization, and data platforms. With a passion for technology, Vamsi takes joy in helping customers achieve accelerated business outcomes through their cloud transformation journey. In his free time, he finds peace in the pursuits of running and biking, frequently immersing himself in the thrilling realm of marathons.

Dhaval Shah

Dhaval Shah is a Senior Solutions Architect at AWS, specializing in Machine Learning. With a strong focus on digital native businesses, he empowers customers to leverage AWS and drive their business growth. As an ML enthusiast, Dhaval is driven by his passion for creating impactful solutions that bring positive change. In his leisure time, he indulges in his love for travel and cherishes quality moments with his family.

Nikhil Sharma

Nikhil Sharma is a Solutions Architecture Leader at Amazon Web Services (AWS) where he and his team of Solutions Architects help AWS customers solve critical business challenges using AWS cloud technologies and services.

Introducing the Enhanced Document API for DynamoDB in the AWS SDK for Java 2.x

We are excited to announce that the AWS SDK for Java 2.x now offers the Enhanced Document API for DynamoDB, providing an enhanced way of working with Amazon DynamoDB items.
This post covers using the Enhanced Document API for DynamoDB with the DynamoDB Enhanced Client. By using the Enhanced Document API, you can create an EnhancedDocument instance to represent an item with no fixed schema, and then use the DynamoDB Enhanced Client to read and write to DynamoDB.
Furthermore, unlike the Document APIs of aws-sdk-java 1.x, which provided arguments and return types that were not type-safe, the EnhancedDocument provides strongly-typed APIs for working with documents. This interface simplifies the development process and ensures that the data is correctly typed.

Prerequisites:

Before getting started, ensure you are using an up-to-date version of the AWS Java SDK dependency with all the latest released bug-fixes and features. For Enhanced Document API support, you must use version 2.20.33 or later. See our “Set up an Apache Maven project” guide for details on how to manage the AWS Java SDK dependency in your project.

Add dependency for dynamodb-enhanced in pom.xml.

<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>dynamodb-enhanced</artifactId>
    <version>2.20.33</version>
</dependency>

Quick walk-through for using the Enhanced Document API to interact with DynamoDB

Step 1 : Create a DynamoDB Enhanced Client

Create an instance of the DynamoDbEnhancedClient class, which provides a high-level interface for Amazon DynamoDB that simplifies working with DynamoDB tables.

DynamoDbEnhancedClient enhancedClient = DynamoDbEnhancedClient.builder()
    .dynamoDbClient(DynamoDbClient.create())
    .build();

Step 2 : Create a DynamoDbTable resource object with Document table schema

To execute commands against a DynamoDB table using the Enhanced Document API, you must associate the table with your Document table schema to create a DynamoDbTable resource object. The Document table schema builder requires the primary index key and attribute converter providers. Use AttributeConverterProvider.defaultProvider() to convert document attributes of default types. An optional secondary index key can be added to the builder.

DynamoDbTable<EnhancedDocument> documentTable = enhancedClient.table("my_table",
    TableSchema.documentSchemaBuilder()
        .addIndexPartitionKey(TableMetadata.primaryIndexName(), "hashKey", AttributeValueType.S)
        .addIndexSortKey(TableMetadata.primaryIndexName(), "sortKey", AttributeValueType.N)
        .attributeConverterProviders(AttributeConverterProvider.defaultProvider())
        .build());

// call documentTable.createTable() if "my_table" does not exist in DynamoDB

Step 3 : Write a DynamoDB item using an EnhancedDocument

The EnhancedDocument class has static factory methods along with a builder method to add attributes to a document. The following snippet demonstrates the type safety provided by EnhancedDocument when you construct a document item.

EnhancedDocument simpleDoc = EnhancedDocument.builder()
    .attributeConverterProviders(defaultProvider())
    .putString("hashKey", "sampleHash")
    .putNull("nullKey")
    .putNumber("sortKey", 1.0)
    .putBytes("byte", SdkBytes.fromUtf8String("a"))
    .putBoolean("booleanKey", true)
    .build();

documentTable.putItem(simpleDoc);

Step 4 : Read a DynamoDB item as an EnhancedDocument

Attributes of the documents retrieved from a DynamoDB table can be accessed with getter methods:

EnhancedDocument docGetItem = documentTable.getItem(r -> r.key(k -> k.partitionValue("sampleHash").sortValue(1)));

docGetItem.getString("hashKey");
docGetItem.isNull("nullKey");
docGetItem.getNumber("sortKey").floatValue();
docGetItem.getBytes("byte");
docGetItem.getBoolean("booleanKey");

AttributeConverterProviders for accessing document attributes as custom objects

You can provide a custom AttributeConverterProvider instance to an EnhancedDocument to convert document attributes to a specific object type.
These providers can be set on either DocumentTableSchema or EnhancedDocument to read or write attributes as custom objects.

TableSchema.documentSchemaBuilder()
    .attributeConverterProviders(CustomClassConverterProvider.create(), defaultProvider())
    .build();

// Insert a custom class instance into an EnhancedDocument as attribute 'customMapOfAttribute'.
EnhancedDocument customAttributeDocument =
    EnhancedDocument.builder().put("customMapOfAttribute", customClassInstance, CustomClass.class).build();

// Retrieve attribute 'customMapOfAttribute' as a CustomClass object.
CustomClass customClassObject = customAttributeDocument.get("customMapOfAttribute", CustomClass.class);

Convert Documents to JSON and vice-versa

The Enhanced Document API allows you to convert a JSON string to an EnhancedDocument and vice-versa.

// Enhanced document created from JSON string using defaultConverterProviders.
EnhancedDocument documentFromJson = EnhancedDocument.fromJson("{\"key\": \"Value\"}");

// Converting an EnhancedDocument to the JSON string {"key": "Value"}
String jsonFromDocument = documentFromJson.toJson();

Define a Custom Attribute Converter Provider

Custom attribute converter providers are implementations of AttributeConverterProvider that provide converters for custom classes.
Below is an example of a CustomClassForDocumentAPI class, which has a single field stringAttribute of type String, along with its corresponding AttributeConverterProvider implementation.

public class CustomClassForDocumentAPI {
    private final String stringAttribute;

    public CustomClassForDocumentAPI(Builder builder) {
        this.stringAttribute = builder.stringAttribute;
    }

    public static Builder builder() {
        return new Builder();
    }

    public String stringAttribute() {
        return stringAttribute;
    }

    public static final class Builder {
        private String stringAttribute;

        private Builder() {
        }

        public Builder stringAttribute(String stringAttribute) {
            this.stringAttribute = stringAttribute;
            return this;
        }

        public CustomClassForDocumentAPI build() {
            return new CustomClassForDocumentAPI(this);
        }
    }
}

import java.util.Map;
import software.amazon.awssdk.enhanced.dynamodb.AttributeConverter;
import software.amazon.awssdk.enhanced.dynamodb.AttributeConverterProvider;
import software.amazon.awssdk.enhanced.dynamodb.EnhancedType;
import software.amazon.awssdk.utils.ImmutableMap;

public class CustomAttributeForDocumentConverterProvider implements AttributeConverterProvider {
    // Different types of converters can be added to this map.
    private final Map<EnhancedType<?>, AttributeConverter<?>> converterCache = ImmutableMap.of(
        EnhancedType.of(CustomClassForDocumentAPI.class), new CustomClassForDocumentAttributeConverter());

    public static CustomAttributeForDocumentConverterProvider create() {
        return new CustomAttributeForDocumentConverterProvider();
    }

    @Override
    public <T> AttributeConverter<T> converterFor(EnhancedType<T> enhancedType) {
        return (AttributeConverter<T>) converterCache.get(enhancedType);
    }
}

A custom attribute converter is an implementation of AttributeConverter that converts a custom class to and from a map of attribute values, as shown below.

import java.util.LinkedHashMap;
import java.util.Map;
import software.amazon.awssdk.enhanced.dynamodb.AttributeConverter;
import software.amazon.awssdk.enhanced.dynamodb.AttributeValueType;
import software.amazon.awssdk.enhanced.dynamodb.EnhancedType;
import software.amazon.awssdk.enhanced.dynamodb.internal.converter.attribute.EnhancedAttributeValue;
import software.amazon.awssdk.enhanced.dynamodb.internal.converter.attribute.StringAttributeConverter;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;

public class CustomClassForDocumentAttributeConverter implements AttributeConverter<CustomClassForDocumentAPI> {

    public static CustomClassForDocumentAttributeConverter create() {
        return new CustomClassForDocumentAttributeConverter();
    }

    @Override
    public AttributeValue transformFrom(CustomClassForDocumentAPI input) {
        Map<String, AttributeValue> attributeValueMap = new LinkedHashMap<>();
        if (input.stringAttribute() != null) {
            attributeValueMap.put("stringAttribute", AttributeValue.fromS(input.stringAttribute()));
        }
        return EnhancedAttributeValue.fromMap(attributeValueMap).toAttributeValue();
    }

    @Override
    public CustomClassForDocumentAPI transformTo(AttributeValue input) {
        Map<String, AttributeValue> customAttr = input.m();
        CustomClassForDocumentAPI.Builder builder = CustomClassForDocumentAPI.builder();
        if (customAttr.get("stringAttribute") != null) {
            builder.stringAttribute(StringAttributeConverter.create().transformTo(customAttr.get("stringAttribute")));
        }
        return builder.build();
    }

    @Override
    public EnhancedType<CustomClassForDocumentAPI> type() {
        return EnhancedType.of(CustomClassForDocumentAPI.class);
    }

    @Override
    public AttributeValueType attributeValueType() {
        return AttributeValueType.M;
    }
}

Attribute Converter Provider for EnhancedDocument Builder

When working outside of a DynamoDB table context, make sure to set the attribute converter providers explicitly on the EnhancedDocument builder. When used within a DynamoDB table context, the table schema’s converter provider will be used automatically for the EnhancedDocument.
The code snippet below shows how to set an AttributeConverterProvider using the EnhancedDocument builder method.

// Enhanced document created from JSON string using a custom AttributeConverterProvider.
EnhancedDocument documentFromJson = EnhancedDocument.builder()
    .attributeConverterProviders(CustomClassConverterProvider.create())
    .json("{\"key\": \"Values\"}")
    .build();

CustomClassForDocumentAPI customClass = documentFromJson.get("key", CustomClassForDocumentAPI.class);

Conclusion

In this blog post, we showed you how to set up and begin using the Enhanced Document API with the DynamoDB Enhanced Client and standalone with the EnhancedDocument class. The enhanced client is open source and resides in the same repository as the AWS SDK for Java 2.x.
We hope you’ll find this new feature useful. You can always share your feedback on our GitHub issues page.

Microsoft shrunk the TypeScript

#640 — May 25, 2023

Read on the Web

JavaScript Weekly

DeviceScript: TypeScript for Tiny Thingamabobs — DeviceScript is a new Microsoft effort to take the TypeScript experience to low-resource microcontroller-based devices. It’s compiled to a custom VM bytecode which can run in such constrained environments. (A bit like Go’s TinyGo.) It’s aimed at VS Code users but there’s a CLI option too.

Microsoft

The State of Node.js Performance in 2023 — Node 20 gets put through its paces against 18.16 and 16.20 with a few different benchmark suites running on an EC2 instance. It goes into a lot of depth that’s worth checking out, but if you haven’t got time, the conclusion is “Node 20 is faster.” Good.

Rafael Gonzaga

Lightning Fast JavaScript Data Grid Widget — Try a professional JS data grid component which lets you edit, sort, group and filter datasets with fantastic UX & performance. Includes a TreeGrid, API docs and lots of demos. Seamlessly integrates with React, Angular & Vue apps.

Bryntum Grid sponsor

Deno 1.34: Now deno compile Supports npm Packages — Deno isn’t Node, but it increasingly likes to wear a Node-shaped costume. This release focuses on npm and Node compatibility, and Deno’s compile command (for turning projects into single binary executables) now supports npm packages too, which opens up a lot of use cases.

The Deno Team

⚡️ IN BRIEF:

TC39’s Hemanth.HM shares some updates from TC39’s 96th meeting. Atomics.waitAsync, the /v flag for regexes, and a method to detect well-formed Unicode strings all move up to stage 4.

The Angular team shares the results of their annual developer survey. Over 12,000 Angular developers responded.

RELEASES:

Astro 2.5

Preact 10.15 – Fast 3KB React alternative.

TypeScript 5.1 RC

Electron 24.4

MapLibre GL JS v3 – WebGL-powered vector tile maps.

Articles & Tutorials

Demystifying Tupper’s Formula — Tupper’s self-referential formula is a formula that, when plotted, can represent itself. Confused? Luckily Eli shows us how simple the concept is and how to use JavaScript to render your own.

Eli Bendersky

An Introduction to Web Components — A practical and straightforward introduction to using the custom element API now supported in all major browsers to create a basic tabbed panel.

Mohamed Rasvi

▶  Creative Coding with p5.js in Visual Studio Code — p5.js is a ‘creative coding’ library that takes a lot of inspiration from Processing. Dan does a great job at showing it off and sharing his enthusiasm for it. The main content starts at about 8 minutes in.

Daniel Shiffman and Olivia Guzzardo

Auth. Built for Devs, by Devs — Easily add login, registration, SSO, MFA, user controls and more auth features to your app in any framework.

FusionAuth sponsor

▶  Why React is Here to Stay — A rebuttal of sorts to Adam Elmore’s video from two weeks ago: ▶️ I’m Done with React.

Joscha Neske

Comparing Three Ways of Processing Arrays Non-Destructively — for-of, .reduce(), and .flatMap() go up against each other.

Dr. Axel Rauschmayer

Build Your First JavaScript ChatGPT Plugin — Plugins provide a way to extend ChatGPT’s functionality.

Mark O’Neill

How I’ve Shifted My Angular App to a Standalone Components Approach

Kamil Konopka

Code & Tools

Javy 1.0: A JS to WebAssembly Toolchain — Originally built at Shopify, Javy takes your JS code and runs it in a WASM-embedded runtime. It’s worth scanning the example to get a feel for the process. “We’re confident that the Javy CLI is in good enough shape for general use so we’re releasing it as v1.0.0.”

Bytecode Alliance

Inkline 4.0: A Customizable Vue.js 3 UI/UX Library — A design system and numerous customizable components designed for mobile-first (but desktop friendly) and built with accessibility in mind.

Alex Grozav

Dynaboard: A Visual Web App IDE Made for Developers — Build high performance public and private web applications in a collaborative — full-stack — development environment.

Dynaboard sponsor

BlockNote: A ‘Notion-Like’ Block-Based Text Editor — Flexible and presents an extensive API so you can integrate it with whatever you want to do. You can drag and drop blocks, add real-time collaboration, add customizable ‘slash command’ menus, and more. Builds on top of ProseMirror and TipTap.

TypeCell

Windstatic: A Set of 170+ Components and Layouts Made with Tailwind and Alpine.js — Categorized under page sections, nav, and forms, and each category includes multiple components you can drop into projects.

Michael Andreuzza

ls-lint 2.0: A Fast File and Directory Name Linter — Written in Go but aimed at JS/front-end dev use cases, ls-lint provides a way to enforce rules for file naming and directory structures.

Lucas Löffel

Jest Puppeteer 9.0: Run Tests using Jest and Puppeteer — A Jest preset enabling end-to-end testing with Puppeteer.

Argos CI

ts-sql-query: Type-Safe SQL Query Builder — Want to build dynamic SQL queries in a type-safe way with TypeScript verifying queries? This is for you. Supports numerous SQL-based database systems and isn’t an ORM itself.

Juan Luis Paz Rojas

React Authentication, Simplified

Userfront sponsor

Hashids.js 2.3
↳ Generate YouTube-like IDs.

Tabulator 5.5
↳ Interactive table and data grid control.

gridstack.js 8.2
↳ Dashboard layout and creation library.

Cypress GitHub Action 5.8
↳ Action for running Cypress end-to-end tests.

ReacType 16.0
↳ Visual prototyping tool that exports React code.

Mongoose 7.2 – MongoDB modelling library.

Eta (η) 2.2 – Embedded JS template engine.

AVA 5.3 – Popular Node test runner.

MelonJS 15.3 – HTML5 game engine.

Jobs

Find JavaScript Jobs with Hired — Hired makes job hunting easy: instead of chasing recruiters, companies approach you with salary details up front. Create a free profile now.

Hired

Fullstack Engineer at Everfund.com — Push code, change lives. Help us become the center for good causes on the modern web with our dev tools.

Everfund

Got a job listing to share? Here’s how.

Node.js developer? Check out the latest issue of Node Weekly, our sibling newsletter about all things Node.js — from tutorials and screencasts to news and releases. While we include some Node related items here in JavaScript Weekly, we save most of it for there.

→ Check out Node Weekly here.

Configure Continuous Deployment Using Kustomize Components and Spinnaker Operator in Amazon EKS

Spinnaker is a cloud native continuous delivery platform that provides fast, safe, and repeatable deployments for every enterprise.

In the precursor to this blog, we learned how to manage Spinnaker using the Apache-licensed open source Spinnaker Operator and deploy an application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster using a Spinnaker continuous delivery pipeline. We configured the different components using Spinnaker YAML, as well as the KubeConfig using Spinnaker Tools.

In this blog, we will streamline the Spinnaker service configuration using Kustomize components, the Spinnaker Operator, and Amazon EKS Blueprints. We also presented this topic at the 2022 Spinnaker Summit.

Kustomize is an open source tool for customizing Kubernetes configurations that can generate resources from other sources and compose and customize collections of resources. We will also use Kustomize components, a kind of Kustomization that allows users to define reusable Kustomizations, along with Kustomize patches for Spinnaker resources. In this blog, we will use the kustomization.yml file to combine the overlay base components from this repository with the patches from local files.
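
As a rough sketch, and assuming the patch files used later in this walkthrough (s3-bucket.yml, ecr-registry.yml, and gitrepo.yml), the kustomization file might look similar to the following. The component path is illustrative; the actual base components come from the repository mentioned above.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: spinnaker
components:
  - overlays/base # reusable base component from the referenced repository (path illustrative)
patchesStrategicMerge:
  # local patch files that configure the AWS integrations
  - s3-bucket.yml
  - ecr-registry.yml
  - gitrepo.yml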

We will use two personas when talking about continuous deployment using Spinnaker: platform team and development team.

Platform team

In the diagram below, the platform team will setup the infrastructure for Spinnaker using the following steps:

Use Amazon EKS Blueprints to create the Amazon EKS cluster.
Install the Spinnaker Operator, a Kubernetes operator for managing Spinnaker that is built by Armory.
Set up the Amazon Elastic Container Registry (Amazon ECR) repository and Amazon Simple Storage Service (Amazon S3) bucket in your Amazon Web Services (AWS) account. We will create these as part of the walkthrough.
Use Kustomize components to deploy the Spinnaker service on Amazon EKS. We will also use the Kustomize patch configuration to integrate with different AWS services. All of the patch information to configure the components below will live in the kustomization.yml file:

Amazon S3 to persist Spinnaker metadata and pipeline data
An application load balancer to expose the Spinnaker UI
Amazon ECR for the Docker registry

Development team

In this diagram, we document how Spinnaker is used as a CI/CD tool to deploy the application using GitOps.

The DevOps team (who can either be part of the development team or not depending on organizational structure) will be responsible for creating the Spinnaker pipeline. In our case, we have imported the pre-created pipeline.json, which you will see in the walkthrough section.
The developer will commit code changes that should trigger the build and upload Artifact to Amazon ECR.
The Spinnaker pipeline will detect the new artifact with a new tag and start the deployment to the test environment using a Kustomize configuration for that environment.
Once approved, the pipeline will complete the deployment to the production environment using a Kustomize configuration for that environment.

Walkthrough

Prerequisites

You will need the AWS Command Line Interface (AWS CLI), eksctl, kubectl, Terraform, jq, and yq. At the time of writing this blog, the latest version of yq had issues with passing environment variables, so make sure to use version 4.25.1.

Step 1 – Provision Amazon EKS Cluster using Amazon EKS Terraform Blueprints

Follow steps from the Amazon EKS Terraform Blueprint Git repository to create an Amazon EKS cluster. For this example, we have named the Amazon EKS Cluster eksworkshop-eksctl and set the version to 1.24. Refer to Amazon EKS Blueprint for Terraform for more information.

Step 2 – Install Spinnaker CRDs

Pick a release from GitHub and export that version. We are using 1.3.0, the latest Spinnaker Operator that was available at the time of writing this blog. You can see the latest Spinnaker operator update on the Spinnaker blog.

The operator pattern allows us to extend the Kubernetes API to manage applications and their components through constructs such as the control loop. The Spinnaker Operator streamlines the following tasks:

Validate Spinnaker configurations to reduce the incidence of incorrect feature configuration
Create and monitor all Spinnaker microservices
Activate upgrades between versions of Spinnaker

To install the Spinnaker CRDs, run these commands:

export VERSION=1.3.0
echo $VERSION

cd ~/environment
mkdir -p spinnaker-operator && cd spinnaker-operator
bash -c "curl -L https://github.com/armory/spinnaker-operator/releases/download/v${VERSION}/manifests.tgz | tar -xz"
kubectl apply -f deploy/crds/

When successful, you should get the following output:

customresourcedefinition.apiextensions.k8s.io/spinnakerservices.spinnaker.io created

Step 3 – Install Spinnaker Operator

Next, we need to install the Spinnaker Operator in the spinnaker-operator namespace. We use cluster mode for the operator, which works across namespaces and requires a cluster role to perform validation. Run these commands:

kubectl create ns spinnaker-operator
kubectl -n spinnaker-operator apply -f deploy/operator/cluster

Make sure the Spinnaker Operator pod is running. This may take a couple of minutes. To confirm, run this command:

kubectl get pod -n spinnaker-operator

When successful, you should get the following output:

NAME READY STATUS RESTARTS AGE
spinnaker-operator-6d95f9b567-tcq4w 2/2 Running 0 82s

Step 4 – Create an Amazon ECR repository

Now, we need to create an Amazon ECR repository. Make sure you have your AWS Region and account ID ready. Run these commands:

export AWS_REGION=<your region>
export AWS_ACCOUNT_ID=<your aws account id>
echo "export AWS_REGION=${AWS_REGION}" >> ~/.bash_profile
echo "export AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID}" >> ~/.bash_profile

export ECR_REPOSITORY=spinnaker-summit-22
echo "export ECR_REPOSITORY=${ECR_REPOSITORY}" >> ~/.bash_profile
aws --region ${AWS_REGION} ecr get-login-password | docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com

aws ecr create-repository --repository-name ${ECR_REPOSITORY} --region ${AWS_REGION} >/dev/null

Next, push the sample NGINX image into your Amazon ECR repository:

docker pull nginx:latest
docker tag nginx:latest ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${ECR_REPOSITORY}:v1.1.0
docker push ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${ECR_REPOSITORY}:v1.1.0

When successful, you should get output like this:

latest: Pulling from library/nginx

Status: Downloaded newer image for nginx:latest
The push refers to repository [123456789.dkr.ecr.us-east-1.amazonaws.com/spinnaker-summit-22]
d6a3537fc36a: Pushed

v1.1.0: digest: sha256:bab399017a659799204147065aab53838ca6f5aeed88cf7d329bc4fda1d2bac7 size: 1570

Step 5 – Create an Amazon S3 bucket

Using these commands, create and configure an Amazon S3 bucket:

export S3_BUCKET=spinnaker-workshop-$(cat /dev/urandom | LC_ALL=C tr -dc "[:alpha:]" | tr '[:upper:]' '[:lower:]' | head -c 10)
aws s3 mb s3://$S3_BUCKET --region ${AWS_REGION}
aws s3api put-public-access-block \
    --bucket $S3_BUCKET \
    --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
echo ${S3_BUCKET}
echo "export S3_BUCKET=${S3_BUCKET}" >> ~/.bash_profile

Step 6 – Create a service account

Run these commands to create a service account on your Amazon EKS instance:

eksctl utils associate-iam-oidc-provider --cluster eksworkshop-eksctl --approve
kubectl create ns spinnaker

eksctl create iamserviceaccount \
    --name s3-access-sa \
    --namespace spinnaker \
    --cluster eksworkshop-eksctl \
    --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess \
    --approve \
    --override-existing-serviceaccounts

Step 7 – Create a secret

Make sure you have a GitHub token created using the instructions here: https://github.com/settings/tokens. The Spinnaker pipeline will use your username and this token, stored as a secret, to clone the spinnaker-summit-22 Git repo. Create the secret with these commands:

cd ~/environment
kubectl -n spinnaker create secret generic spin-secrets --from-literal=http-password="spinsum22!?" --from-literal=github-token=<Your GitHub Token>

Step 8 – Install Spinnaker

Clone the Spinnaker repository:

cd ~/environment/
git clone https://github.com/armory/spinnaker-summit-22.git
cd spinnaker-summit-22

Change the ~/environment/spinnaker-summit-22/ecr-registry.yml configuration file by adding your account and region.

export ECR_URI=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com
echo "export ECR_URI=${ECR_URI}" >> ~/.bash_profile

ECR_URI=${ECR_URI} yq -i '.[1].value = env(ECR_URI)' ~/environment/spinnaker-summit-22/ecr-registry.yml
sed -i 's|AWS_REGION|'${AWS_REGION}'|g' ~/environment/spinnaker-summit-22/ecr-registry.yml

Change the ~/environment/spinnaker-summit-22/s3-bucket.yml configuration file by adding your Amazon S3 bucket name.

S3_BUCKET=${S3_BUCKET} yq -i '.spec.spinnakerConfig.config.persistentStorage.s3.bucket = env(S3_BUCKET)' ~/environment/spinnaker-summit-22/s3-bucket.yml

Change account/name in the ~/environment/spinnaker-summit-22/gitrepo.yml configuration file by running the command below with your GitHub account.

yq -i '.spec.spinnakerConfig.config.artifacts.gitrepo.accounts[0].name = "<Your GitHub user name>"' ~/environment/spinnaker-summit-22/gitrepo.yml

Delete the validation webhook. This is the current workaround for the Spinnaker Operator having a validation error in Kubernetes 1.22.

kubectl delete ValidatingWebhookConfiguration spinnakervalidatingwebhook 

Create Spinnaker service with these commands:

cd ~/environment/spinnaker-summit-22/
kubectl apply -k .

When successful, you should get the following output:

serviceaccount/spin-sa created
clusterrole.rbac.authorization.k8s.io/spin-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/spin-cluster-role-binding created
spinnakerservice.spinnaker.io/spinnaker created

Check that all pods and services are running with these kubectl commands:

kubectl get svc -n spinnaker
kubectl get pods -n spinnaker

Here is some example output:

NAME               TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)        AGE
spin-clouddriver   ClusterIP      172.20.7.182     <none>                                                                    7002/TCP       5d23h
spin-deck          LoadBalancer   172.20.212.79    ac425b232e35f482c8bb6b0badd2dfbd-1427084503.us-west-2.elb.amazonaws.com   80:30893/TCP   5d23h
spin-echo          ClusterIP      172.20.124.248   <none>                                                                    8089/TCP       5d23h
spin-front50       ClusterIP      172.20.90.183    <none>                                                                    8080/TCP       5d23h
spin-gate          LoadBalancer   172.20.215.95    a751ef0d990564aa29757f2546c55fb9-1757439055.us-west-2.elb.amazonaws.com   80:30591/TCP   5d23h
spin-igor          ClusterIP      172.20.72.15     <none>                                                                    8088/TCP       5d23h
spin-orca          ClusterIP      172.20.187.10    <none>                                                                    8083/TCP       5d23h
spin-redis         ClusterIP      172.20.176.213   <none>                                                                    6379/TCP       5d23h
spin-rosco         ClusterIP      172.20.161.240   <none>                                                                    8087/TCP       5d23h

NAME READY STATUS RESTARTS AGE
spin-clouddriver-865f7d77d5-lxfps 1/1 Running 0 19h
spin-deck-5d546d6f59-psmk8 1/1 Running 0 19h
spin-echo-6579d45865-dlxs2 1/1 Running 0 19h
spin-front50-74646b785d-jqxh5 1/1 Running 0 19h
spin-gate-7f6f86d75f-65rdm 1/1 Running 0 19h
spin-igor-868dbb6656-qqrgh 1/1 Running 0 19h
spin-orca-5458c9c4c4-s4r5x 1/1 Running 0 19h
spin-redis-5b685889fd-mjjjd 1/1 Running 0 19h
spin-rosco-6969544f6b-s4nc8 1/1 Running 0 19h

Step 9 – Configure Spinnaker pipeline

In this example, we will use a pre-created Spinnaker pipeline.json. However, we need to edit the file ~/environment/spinnaker-summit-22/pipeline.json with your Amazon ECR repository information by running the commands below. Replace the Amazon ECR endpoint in these commands:

cd ~/environment/spinnaker-summit-22/
cat <<< "$(jq '.parameterConfig[0].default = "123456789.dkr.ecr.us-west-2.amazonaws.com/spinnaker-summit-22"' pipeline.json)" > ~/environment/spinnaker-summit-22/pipeline.json
cat <<< "$(jq '.triggers[0].registry = "123456789.dkr.ecr.us-west-2.amazonaws.com"' pipeline.json)" > ~/environment/spinnaker-summit-22/pipeline.json

Open the Spinnaker UI by getting the load balancer URL for the Spinnaker service spin-deck from this kubectl command:

kubectl get svc -n spinnaker

The hostname will be listed in the LoadBalancer row in the output, at the end of the line:

NAME               TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)        AGE

spin-deck          LoadBalancer   172.20.60.118    a3a8bfe32c2ef4cbda372c5b689ae020-616300770.us-west-2.elb.amazonaws.com   80:30781/TCP   5m2s
spin-echo          ClusterIP      172.20.113.119   <none>                                                                   8089/TCP       5m2s

Go to the browser and load the Load Balancer hostname as a URL. From the UI, create an application for continuous deployment.

This image demonstrates all of the information that needs to be entered into the UI to create a new application, specifically, Name, Owner Email, Repo Type, Description, Instance Health, Instance Port, and Pipeline Behavior.

Create a new pipeline for your application. The UI will prompt you to enter a type (select Pipeline) and provide a pipeline name (for this example, use NginxApp):

Create the pipeline stages using the JSON file. In the UI, select “Pipeline Actions” then “Edit as JSON”:

Replace the pipeline JSON content with the content from your ~/environment/spinnaker-summit-22/pipeline.json file. Update your pipeline by selecting “Save Changes.”
The Spinnaker UI does not auto-save in the bake stages. Click the drop-down and select your account in “Bake Test” and “Bake Production”:

Verify your stages in the UI. Your Pipeline should be NginxApp, the tag value should be v1.1.0, and the repo_name should reflect your specific repository:

Test each input before going back to the pipeline and manually completing it:

Your pipeline will run until the Promote to Production stage, then pause and wait for approval. Confirm that the application has been deployed to the test environment.

Using this kubectl command, check if the application has been deployed to the test environment:

kubectl get pods -n test-environment

You should get output showing one pod running in the test-environment namespace:

NAME READY STATUS RESTARTS AGE
test-hello-world-5b9c48d997-ksprc 1/1 Running 1 23h

Click on the “Continue on the Spinnaker Pipeline” popup in the UI for Promote to Production and the pipeline will deploy three pods to the prod environment:

Check if the application has been deployed to the prod environment. You should see three pods running in the production-environment namespace as the output from this kubectl command:

kubectl get pods -n production-environment

Step 10 – Activate GitOps based automated deployment

Run deployment.sh. This bash script will create the application container image with a new tag and push the image to your Amazon ECR repository:

cd ~/environment/spinnaker-summit-22
./deployment.sh 1.3.0

From here, the pipeline should trigger automatically. You can confirm this in the UI:

Check the pods after test deployment with this kubectl command:

kubectl get pods -n test-environment

NAME READY STATUS RESTARTS AGE
test-hello-world-777ddbb675-w7s9p 1/1 Running 0 5m55s

Check the image used by the pod with this kubectl command. It should match the tag used in the script:

kubectl get pod test-hello-world-777ddbb675-w7s9p -n test-environment -ojson | jq ".spec.containers[0].image"

"123456789.dkr.ecr.us-west-2.amazonaws.com/spinnaker-summit-22:v1.3.0"

Check the pod after production deployment with this kubectl command. You should see three pods running in the production-environment namespace:

kubectl get pods -n production-environment

Check the image used by the pod with this kubectl command. It should match the tag used in the script:

kubectl get pod production-hello-world-66d9f986c9-45s8b -n production-environment -ojson | jq ".spec.containers[0].image"

"123456789.dkr.ecr.us-west-2.amazonaws.com/spinnaker-summit-22:v1.3.0"

Cleanup

To clean up your environment, run the following commands, being careful to substitute in the correct values for your AWS_REGION, ECR_REPOSITORY, and S3_BUCKET:

cd ~/environment/spinnaker-summit-22
kubectl delete -k .
eksctl delete iamserviceaccount \
    --name s3-access-sa \
    --namespace spinnaker \
    --cluster eksworkshop-eksctl
aws ecr delete-repository --repository-name ${ECR_REPOSITORY} --region ${AWS_REGION} --force
aws s3 rb s3://$S3_BUCKET --region ${AWS_REGION} --force
cd ~/environment/terraform-aws-eks-blueprints/examples/ipv4-prefix-delegation/
terraform destroy --auto-approve
cd ~/environment/
rm -rf spinnaker-summit-22
rm -rf terraform-aws-eks-blueprints
rm -rf spinnaker-operator

Conclusion

In this post, we installed Spinnaker Service using Spinnaker Operator and Kustomize and walked you through the process of setting up a sample application in Spinnaker service using Kustomize. Then we built a Spinnaker CD pipeline which used Kustomize to overlay the test and prod environment during the deployment stage.

We observed how the Spinnaker pipeline got triggered when we pushed a new image into an Amazon ECR repository. Spinnaker then executed the pipeline deployment stage and deployed the sample application artifacts into an Amazon EKS cluster.

To learn more, we recommend you review these additional resources:

Spinnaker Concepts
Spinnaker Architecture Overview
GitHub Spinnaker Operator
Deploy Armory Continuous Deployment or Spinnaker Using Kubernetes Operators
Spinnaker Architecture
Kustomize patches for configuring Armory Continuous Deployment

How Cirrusgo enabled rapid resolution with Amazon DevOps Guru

In this blog, we will walk through how Cirrusgo used Amazon DevOps Guru for RDS to quickly identify and resolve an operational issue related to database performance and reduce its impact on their business. Amazon DevOps Guru for RDS uses machine learning to help organizations identify and resolve operational issues in their applications and infrastructure.

Challenge:

Knowledge Beam, one of Cirrusgo’s managed service customers, has an e-learning web application that serves as a mission-critical tool for nearly 90,000 teachers. The application tracks daily activities, including teaching and evaluating homework and quizzes submitted by students. Any interruption to the availability of this application causes significant inconvenience to teachers and students, as well as damage to the company’s reputation. Ensuring the continuous and reliable performance of customer workloads is of utmost importance to Cirrusgo.

Identification of Operational issues with Amazon DevOps Guru:

To streamline the troubleshooting process and avoid time-consuming manual analysis of logs, Cirrusgo leveraged Amazon DevOps Guru to monitor Knowledge Beam’s stack. With just a few clicks in the AWS console, Cirrusgo enabled DevOps Guru, which uses advanced machine learning techniques to analyze Amazon CloudWatch metrics, AWS CloudTrail, and Amazon Relational Database Service (Amazon RDS) Performance Insights. This enables it to quickly identify behaviors that deviate from standard operating patterns and pinpoint the root cause of operational issues.

When users reported difficulty submitting assignments via the e-learning portal, Cirrusgo’s team launched an investigation. The team discovered 4xx and 5xx Elastic Load Balancing errors in the CloudWatch metrics, but no additional information was available. While examining the load balancer and application logs, the engineers received Amazon DevOps Guru notifications about Amazon RDS replica lag. The team promptly investigated and confirmed the replica lag, then ran commands to stop traffic to the replica instance and shift all traffic to the Amazon RDS primary node. Thanks to DevOps Guru’s recommendations, the team identified and resolved the issue. With the root cause known, the team took additional steps to prevent recurrence, including creating an Amazon RDS read replica and upgrading the instance type based on the current workload.

Cirrusgo quickly identified and addressed critical operational issues in Knowledge Beam’s application. This enabled them to minimize the immediate impact and enhance their customer’s applications’ future reliability and performance.

“Amazon DevOps Guru was very beneficial and helped us identify incidents in Amazon RDS. It provided useful insights we previously didn’t have, and it helped reduce our mitigation time. We implemented it in some of the accounts we are managing and are taking advantage of it,” says Mohammed Douglas Otaibi, Technical Co-Founder of Cirrusgo.

Conclusion:

This post highlights how Cirrusgo leveraged Amazon DevOps Guru to identify and quickly address anomalous behavior.

Are you looking for a way to improve the monitoring of your Amazon RDS databases? Look no further than Amazon DevOps Guru. With DevOps Guru’s RDS monitoring capabilities, you can gain deep insights into the performance and health of your databases. This includes automatic anomaly detection, proactive recommendations, and alerts for issues that require your attention.

About the authors:

Harish Bannai

Harish Bannai is a Sr. Technical Account Manager at Amazon Web Services. He holds the AWS Solutions Architect Professional, Developer Associate, Analytics Specialty, and Database Specialty certifications. He works with enterprise customers, providing technical assistance on Amazon RDS and AWS Database Migration Service operational performance and sharing database best practices.

Adnan Bilwani

Adnan Bilwani is a Sr. Specialist at Amazon Web Services. He focuses on improving application qualification and availability by leveraging AWS DevOps services and tools.

Lucy Hartung

Lucy Hartung is a Senior Specialist at Amazon Web Services. She focuses on improving application qualification and availability by leveraging AWS.

jQuery lives on; major changes teased

#​639 — May 18, 2023

Read on the Web

JavaScript Weekly

Bun’s New Bundler: 220x Faster than webpack? — Bun is one of the newest JavaScript runtimes (built atop the JavaScriptCore engine) and focuses on speed while aiming to be a drop-in replacement for Node.js. This week’s v0.6.0 release is the ‘biggest release yet’ with standalone executable generation and more, but its new JavaScript bundler and minifier may attract most of the attention, and this post digs into why.

Jarred Sumner

If you’d prefer to read what a third party thinks, Shane O’Sullivan gave the new bundler a spin and shared his thoughts. There’s also some discussion on Hacker News. It’s early days and while esbuild may be fast enough for most right now, it’s fantastic to see any progress in bundling.

Deopt Explorer: A VS Code Extension to Inspect V8 Trace Log Info — A thorough introduction to MS’s new tool for performing analysis of the V8 engine’s internals, including CPU profile data, how inline caches operate, deoptimizations, how functions were run (interpreted or compiled) and more. There’s a lot going on.

Ron Buckton (Microsoft)

Supercharge Your Websites and Applications with Cloudflare — Get ready for supercharged speed and reliability with Cloudflare’s suite of performance tools. With ultra-fast CDN, smart traffic routing, media optimization, and more, Cloudflare has everything you need to ensure your site or app runs at peak performance.

Cloudflare sponsor

jQuery 3.7.0 Released — JavaScript Weekly is 638 issues old, or almost 13 years once you take away weeks off, so jQuery was a big deal in our early days. We hold a lot of nostalgia for it, and it remains widely used even if no one is writing about it anymore. v3.7 folds the Sizzle selector engine into the core, adds some unitless CSS properties, gains a new uniqueSort method, and “major changes” are still promised in future. jQuery lives on!

Timmy Willison (jQuery Foundation)

⚡️ IN BRIEF:

TC39’s Hemanth.HM has begun keeping a list of ES2023 code examples like he did for ES2022, ES2021, and ES2020.

The New Stack has a story about Meta supporting the OpenJS Foundation – but who wrote the article is what we found more interesting..

The folks at Meta / Facebook have written about the efficiency gains made in Messenger Desktop by moving from Electron to React Native.

One downside to platforms like Cloudflare Workers using V8 isolates has been a lack of support for opening TCP sockets – quite an impediment if you want to talk to an RDBMS over TCP or something. Fear no more, Cloudflare Workers has introduced a connect() API for creating TCP sockets from Workers functions.

Promise.withResolvers progressed to stage 2 at the latest TC39 meeting.
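
If you haven’t seen the proposal, here’s a rough sketch (ours, not from the newsletter) of what Promise.withResolvers gives you; today you’d write the equivalent helper by hand:

// Hand-rolled equivalent of the proposed Promise.withResolvers()
function withResolvers() {
  let resolve, reject;
  const promise = new Promise((res, rej) => { resolve = res; reject = rej; });
  return { promise, resolve, reject };
}

const { promise, resolve } = withResolvers();
promise.then((value) => console.log(value)); // logs "done" once resolve is called
resolve("done");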

RELEASES:

Node.js 20.2

Rome 12.1
↳ The formatter/linter gains stage 3 decorator support.

Ember.js 5.0 – App framework.

Jasmine 5.0 – Testing framework.

Gatsby 5.10

Articles & Tutorials

How to Get Full Type Support with Plain JavaScript — It’s possible to reap the benefits of TypeScript, yet still write plain JavaScript, as TypeScript’s analyzer understands types written in the JSDoc format.

Pausly

TypeScript’s own JS Projects Utilizing TypeScript page has more info on the different levels of strictness you can follow from mere inference on regular JS code through to full on TypeScript with strict enabled.
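
As a rough illustration (our own sketch, not from the article), JSDoc annotations plus a // @ts-check comment are enough for TypeScript’s analyzer to type-check a plain .js file:

// @ts-check

/**
 * @param {number} price
 * @param {number} [taxRate] optional tax rate, defaults to 0.2
 * @returns {number}
 */
function totalPrice(price, taxRate = 0.2) {
  return price * (1 + taxRate);
}

totalPrice(100);    // OK, inferred as number
totalPrice("100");  // flagged by the analyzer: argument is not a number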

▶  Coding a Working Game of Chess in Pure JavaScript — No canvas, either. All using the DOM, SVG, and JavaScript. No AI and it’s not perfect, but it’s only 88 minutes long and it’ll give you something to work on..

Ania Kubow

Automate Slack and MS Teams Notifications Using Node.js — Quick guide to send and automate messages via Slack, MS Teams, and any other channel from your Node.js applications.

Courier.com sponsor

Your Jest Tests Might Be Wrong — Is your Jest test suite failing you? You might not be using the testing framework’s full potential, especially when it comes to preventing state leakage between tests.

Jamie Magee
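
To make the ‘state leakage’ idea concrete, here’s a minimal sketch of ours (not from the article): a module-level mock keeps its call history across tests unless you reset it:

// sketch.test.js – hypothetical example, not from the article
const fetchPrice = jest.fn();

beforeEach(() => {
  // Without this reset, calls recorded in one test leak into the next
  fetchPrice.mockReset();
});

test("records a single call", () => {
  fetchPrice(42);
  expect(fetchPrice).toHaveBeenCalledTimes(1);
});

test("starts with a clean mock", () => {
  expect(fetchPrice).not.toHaveBeenCalled(); // passes only because of the reset above
});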

A Guide to Visual Regression Testing with Playwright — The Playwright browser control library can form the basis of an end-to-end testing mechanism all written in JavaScript, and comparing the visual output of tests can help show where things are going wrong.

Dima Ivashchuk (Lost Pixel)

Create a Real Time Multi Host Video Chat in a Browser with Amazon IVS

Amazon Web Services (AWS) sponsor

React Server Components, Next.js App Router and Examples — Addy Osmani’s overview of the state of React Server Components, the Next.js App Router implementation, other implementations, the move towards hybrid rendering, plus related links.

Addy Osmani

..and if React is your thing, the latest issue of React Status is for you.

Code & Tools

VanJS: A 1.2KB Reactive UI Framework Without JSX — A new entrant to an increasingly crowded space, VanJS is particularly light and elegant, and its author has put some serious effort into documenting it and offering tools to convert your HTML to its custom format. It’s short for vanilla JavaScript, by the way.. GitHub repo.

Tao Xin

JavaScript Scratchpad for VS Code (2m+ Downloads) — Quokka.js is the #1 tool for exploring/testing JavaScript with edit-continue experience to see realtime execution and runtime values.

Wallaby.js sponsor

Introducing Legend-State 1.0: Faster State for React — Another state management solution? After a year of effort, Legend-State 1.0 claims to be the fastest option “on just about every metric” and they have the benchmarks to prove it. Whatever the case, this thorough intro is worth a look. GitHub repo.

Moo․do

Starry Night: GitHub-Like Syntax Highlighting — Apparently, GitHub’s own syntax highlighting approach isn’t open source, but this takes a similar approach and is. It’s admittedly quite ‘heavy’ (due to using a WASM build of the Oniguruma regex engine) but that’s the price of quality.

Titus Wormer

Garph 0.5: A Fullstack GraphQL Framework for TypeScript — Full-stack ‘batteries included’ GraphQL APIs without codegen. GitHub repo.

Step CI

headless-qr: A Simple, Modern QR Code Library — A slimmer adaptation of an older project without the extra code that isn’t necessary today. Turning the binary into an image is your job, or use something like QRCode.js if you want a canvas-rendered QR code out of the box.

Rich Harris

Scroll Btween: Use Scroll Position to Tween CSS Values on DOM Elements — Scrolling/parallax libraries tend to feel the same but this one demonstrates some diverse examples with colors, images, and text — all with no dependencies.

Olivier Blanc

eslint-plugin-check-file: Rules for Consistent Filename and Folder Names — Allows you to enforce a consistent naming pattern for file and directory names in projects.

Huan

Transformers.js 2.0 – Run Hugging Face transformers directly in browser.

PrimeReact 9.4 – Extensive UI component library.

The Lounge 4.4 – Cross-platform, self-hosted web IRC client.

Faast.js 8.0 – Serverless batch computing made simple.

Jobs

Find JavaScript Jobs with Hired — Hired makes job hunting easy: instead of chasing recruiters, companies approach you with salary details up front. Create a free profile now.

Hired

Fullstack Engineer at Everfund.com — Push code, change lives! Help us become the center for good causes on the modern web with our dev tools.

Everfund

Got a job listing to share? Here’s how.

Go with the flow..

js2flowchart.js — A visualization library to convert JavaScript code into attractive SVG flowcharts. Luckily, there’s a live online version if you want to play without having to install anything.

Bohdan Liashenko


Fully Automated Deployment of an Open Source Mail Server on AWS

Many AWS customers have the requirement to host their own email solution and prefer to operate mail servers over using fully managed solutions (e.g. Amazon WorkMail). While there are certainly merits to either approach, motivations for self-managing often include:

full ownership and control
need for customized configuration
restricting access to internal networks or when connected via VPN.

In order to achieve this, customers frequently rely on open source mail servers due to flexibility and free licensing as compared to proprietary solutions like Microsoft Exchange. However, running and operating open source mail servers can be challenging as several components need to be configured and integrated to provide the desired end-to-end functionality. For example, to provide a fully functional email system, you need to combine dedicated software packages to send and receive email, access and manage inboxes, filter spam, manage users etc. Hence, this can be complex and error-prone to configure and maintain. Troubleshooting issues often calls for expert knowledge. As a result, several open source projects emerged that aim at simplifying setup of open source mail servers, such as Mail-in-a-Box, Docker Mailserver, Mailu, Modoboa, iRedMail, and several others.

In this blog post, we take this one step further by adding infrastructure automation and integrations with AWS services to fully automate the deployment of an open source mail server on AWS. We present an automated setup of a single instance mail server, striving for minimal complexity and cost, while still providing high resiliency by leveraging incremental backups and automations. As such, the solution is best suited for small to medium organizations that are looking to run open source mail servers but do not want to deal with the associated operational complexity.

The solution in this blog uses AWS CloudFormation templates to automatically setup and configure an Amazon Elastic Compute Cloud (Amazon EC2) instance running Mail-in-a-Box, which integrates features such as email, webmail, calendar, contact, and file sharing, thus providing functionality similar to popular SaaS tools or commercial solutions. All resources to reproduce the solution are provided in a public GitHub repository under an open source license (MIT-0).

Amazon Simple Storage Service (Amazon S3) is used both for offloading user data and for storing incremental application-level backups. Aside from high resiliency, this backup strategy paves the way for an immutable infrastructure approach, where new deployments can be rolled out to implement updates and recover from failures, which drastically simplifies operation and enhances security.

We also provide an optional integration with Amazon Simple Email Service (Amazon SES) so customers can relay their emails through reputable AWS servers and have their outgoing email accepted by third-party servers. All of this enables customers to deploy a fully featured open source mail server within minutes from AWS Management Console, or restore an existing server from an Amazon S3 backup for immutable upgrades, migration, or recovery purposes.

Overview of Solution

The following diagram shows an overview of the solution and interactions with users and other AWS services.

After preparing the AWS account and environment, an administrator deploys the solution using an AWS CloudFormation template (1.). Optionally, a backup from Amazon S3 can be referenced during deployment to restore a previous installation of the solution (1a.). The admin can then proceed with setup by accessing the web UI (2.) to, for example, provision TLS certificates and create new users. After the admin has provisioned their accounts, users can access the web interface (3.) to send email, manage their inboxes, access calendar and contacts, and share files. Optionally, outgoing emails are relayed via Amazon SES (3a.) and user data is stored in a dedicated Amazon S3 bucket (3b.). Furthermore, the solution is configured to automatically and periodically create incremental backups and store them in an S3 bucket for backups (4.).

On top of popular open source mail server packages such as Postfix for SMTP and Dovecot for IMAP, Mail-in-a-box integrates Nextcloud for calendar, contacts, and file sharing. However, note that Nextcloud capabilities in this context are limited. It’s primarily intended to be used alongside the core mail server functionalities to maintain calendar and contacts and for lightweight file sharing (e.g. for sharing files via links that are too large for email attachments). If you are looking for a fully featured, customizable and scalable Nextcloud deployment on AWS, have a look at this AWS Sample instead.

Deploying the Solution

Prerequisites

For this walkthrough, you should have the following prerequisites:

An AWS account

An existing external email address to test your new mail server. In the context of this sample, we will use [email protected] as the address.
A domain that can be exclusively used by the mail server in the sample. In the context of this sample, we will use aws-opensource-mailserver.org as the domain. If you don’t have a domain available, you can register a new one with Amazon Route 53. In case you do so, you can go ahead and delete the associated hosted zone that gets automatically created via the Amazon Route 53 Console. We won’t need this hosted zone because the mail server we deploy will also act as Domain Name System (DNS) server for the domain.
An SSH key pair for command line access to the instance. Command line access to the mail server is optional in this tutorial, but a key pair is still required for the setup. If you don’t already have a key pair, go ahead and create one in the EC2 Management Console:

(Optional) In this blog, we verify end-to-end functionality by sending an email to a single email address ([email protected]) leveraging Amazon SES in sandbox mode. If you want to adapt this sample for your use case and send email beyond that, you need to request removal of the email sending limitations for EC2 or, alternatively, if you relay your mail via Amazon SES, request moving out of the Amazon SES sandbox.

Preliminary steps: Setting up DNS and creating S3 Buckets

Before deploying the solution, we need to set up DNS and create Amazon S3 buckets for backups and user data.

1.     Allocate an Elastic IP address: We use the address 52.6.x.y in this sample.

2.     Configure DNS: If you have your domain registered with Amazon Route 53, you can use the AWS Management Console to change the name server and glue records for your domain. Configure two DNS servers ns1.box.<your-domain> and ns2.box.<your-domain> by placing your Elastic IP (allocated in step 1) into the Glue records field for each name server:

If you use a third-party DNS service, check their corresponding documentation on how to set the glue records.

It may take a while until the updates to the glue records propagate through the global DNS system. Optionally, before proceeding with the deployment, you can verify your glue records are set up correctly with the dig command line utility:

# Get a list of root servers for your top level domain
dig +short org. NS
# Query one of the root servers for an NS record of your domain
dig c0.org.afilias-nst.info. aws-opensource-mailserver.org. NS

This should give you output as follows:

;; ADDITIONAL SECTION:
ns1.box.aws-opensource-mailserver.org. 3600 IN A 52.6.x.y
ns2.box.aws-opensource-mailserver.org. 3600 IN A 52.6.x.y

3.     Create S3 buckets for backups and user data: Finally, in the Amazon S3 Console, create a bucket to store Nextcloud data and another bucket for backups, choosing globally unique names for both of them. In context of this sample, we will be using the two buckets (aws-opensource-mailserver-backup and aws-opensource-mailserver-nextcloud) as shown here:

Deploying and Configuring Mail-in-a-Box

Click    to deploy and specify the parameters as shown in the screenshot below to match the resources created in the previous section, leave the other parameters at their default values, then click Next and Submit.

This will deploy your mail server into a public subnet of your default VPC which takes about 10 minutes. You can monitor the progress in the AWS CloudFormation Console. Meanwhile, retrieve and note the admin password for the web UI from AWS Systems Manager Parameter Store via the MailInABoxAdminPassword parameter.

Roughly one minute after your mail server finishes deploying, you can log in at its admin web UI residing at https://52.6.x.y/admin with username admin@<your-domain>, as shown in the following picture (you need to confirm the certificate exception warning from your browser):

Finally, in the admin UI navigate to System > TLS(SSL) Certificates and click Provision to obtain a valid SSL certificate and complete the setup (you might need to click on Provision twice to have all domains included in your certificate, as shown here).

At this point, you could further customize your mail server setup (e.g., by creating inboxes for additional users). However, we will continue to use the admin user in this sample for testing the setup in the next section.

Note: If your AWS account is subject to email sending restrictions on EC2, you will see an error in your admin dashboard under System > System Status Checks that says ‘Incoming Email (SMTP/postfix) is running but not publicly accessible’. You are safe to ignore this and should be able to receive emails regardless.

Testing the Solution

Receiving Email

With your existing email account, compose and send an email to admin@<your-domain>. Then login as admin@<your-domain> to the webmail UI of your AWS mail server at https://box.<your-domain>/mail and verify you received the email:

Test file sharing, calendar and contacts with Nextcloud

Your Nextcloud installation can be accessed under https://box.<your-domain>/cloud, as shown in the next figure. Here you can manage your calendar, contacts, and shared files. Contacts created and managed here are also accessible in your webmail UI when you compose an email. Refer to the Nextcloud documentation for more details. In order to keep your Nextcloud installation consistent and automatically managed by Mail-in-a-box setup scripts, admin users are advised to refrain from changing and customizing the Nextcloud configuration.

Sending Email

For this sample, we use Amazon SES to forward your outgoing email, as this is a simple way to get the emails you send accepted by other mail servers on the web. Achieving this is not trivial otherwise, as several popular email services tend to block public IP ranges of cloud providers.

Alternatively, if your AWS account has the email sending limitations for EC2 removed, you can send emails directly from your mail server. In this case, you can skip the next section and continue with Send test email, but make sure you have deployed your mail server stack with the SesRelay parameter set to false. In that case, you can also bring your own IP addresses to AWS and continue using your reputable addresses or build reputation for addresses you own.

Verify your domain and existing Email address to Amazon SES

In order to use Amazon SES to accept and forward email for your domain, you first need to prove ownership of it. Navigate to Verified Identities in the Amazon SES Console and click Create identity, select domain and enter your domain. You will then be presented with a screen as shown here:

You now need to copy-paste the three CNAME DNS records from this screen over to your mail server admin dashboard. Open the admin web UI of your mail server again, select System > Custom DNS, and add the records as shown in the next screenshot.

Amazon SES will detect these records, thereby recognizing you as the owner and verifying the domain for sending emails. Similarly, while still in sandbox mode, you also need to verify ownership of the recipient email address. Navigate again to Verified Identities in the Amazon SES Console, click Create identity, choose Email Address, and enter your existing email address.

Amazon SES will then send a verification link to this address, and once you’ve confirmed via the link that you own this address, you can send emails to it.

Summing up, your verified identities section should look similar to the next screenshot before sending the test email:

Finally, if you intend to send email to arbitrary addresses with Amazon SES beyond testing in the next step, refer to the documentation on how to request production access.

Send test email

Now you are set to log back into your webmail UI and reply to the test mail you received before:

Checking the inbox of your existing email account, you should see the mail you just sent from your AWS server.

Congratulations! You have now verified full functionality of your open source mail server on AWS.

Restoring from backup

Finally, as a last step, we demonstrate how to roll out immutable deployments and restore from a backup for simple recovery, migration and upgrades. In this context, we test recreating the entire mail server from a backup stored in Amazon S3.

For that, we use the restore feature of the CloudFormation template we deployed earlier to migrate from the initial t2.micro installation to an AWS Graviton arm64-based t4g.micro instance. This exemplifies the power of the immutable infrastructure approach made possible by the automated application level backups, allowing for simple migration between instance types with different CPU architectures.

Verify you have a backup

By default, your server is configured to create an initial backup upon installation and nightly incremental backups. Using your ssh key pair, you can connect to your instance and trigger a manual backup to make sure the emails you just sent and received when testing will be included in the backup:

ssh -i aws-opensource-mailserver.pem [email protected] sudo /opt/mailinabox/management/backup.py

You can then go to your mail server’s admin dashboard at https://box.<your-domain>/admin and verify the backup status under System > Backup Status:

Recreate your mail server and restore from backup

First, double check that you have saved the admin password, as you will no longer be able to retrieve it from Parameter Store once you delete the original installation of your mail server. Then go ahead and delete the aws-opensource-mailserver stack from your CloudFormation Console and redeploy it by clicking on this . However, this time, adapt the parameters as shown below, changing the instance type and corresponding AMI as well as specifying the prefix in your backup S3 bucket to restore from.

Within a couple of minutes, your mail server will be up and running again, in the exact same state it was in before you deleted it, but now running on a completely new instance powered by AWS Graviton. You can verify this by going to your webmail UI at https://box.<your-domain>/mail and logging in with your old admin credentials.

Cleaning up

Delete the mail server stack from the CloudFormation Console

Empty and delete both the backup and Nextcloud data S3 buckets

Release the Elastic IP

In case you registered your domain with Amazon Route 53 and do not want to hold onto it, you need to disable automatic renewal. Further, if you haven’t already, delete the hosted zone that was created automatically when registering it.

Outlook

The solution discussed so far focuses on minimal operational complexity and cost and hence is based on a single Amazon EC2 instance comprising all functions of an open source mail server, including a management UI, user database, Nextcloud, and DNS. With a suitably sized instance, this setup can meet the demands of small to medium organizations. In particular, the continuous incremental backups to Amazon S3 provide high resiliency and can be leveraged in conjunction with the CloudFormation automations to quickly recover from instance or single Availability Zone (AZ) failures.

Depending on your requirements, extending the solution and distributing components across AZs allows you to meet more stringent high availability and scalability needs in the context of larger deployments. Because the solution is based on open source software, there is a straightforward migration path towards these more complex distributed architectures once you outgrow the setup discussed in this post.

Conclusion

In this blog post, we showed how to automate the deployment of an open source mail server on AWS and how to quickly and effortlessly restore from a backup for rolling out immutable updates and providing high resiliency. Using AWS CloudFormation infrastructure automations and integrations with managed services such as Amazon S3 and Amazon SES, the lifecycle management and operation of open source mail servers on AWS can be simplified significantly. Once deployed, the solution provides an end-user experience similar to popular SaaS and commercial offerings.

You can go ahead and use the automations provided in this blog and the corresponding GitHub repository to get started with running your own open source mail server on AWS!


jQuery 3.7.0 Released: Staying in Order

jQuery 3.7.0 is now available! This release has it all: bug fixes, a new method, and a performance improvement! We even dropped our longtime selector engine: Sizzle. Or, I should say, we moved it into jQuery. jQuery no longer depends on Sizzle as a separate project, but has instead dropped its code directly into jQuery core. This helps us prepare for the major changes coming to selection in future jQuery versions. That doesn’t mean much right now, but jQuery did drop a few bytes because Sizzle supports even older browsers than jQuery. As an aside, we do plan on archiving Sizzle, but we’ll have more details on that in a future blog post.

As usual, the release is available on our cdn and the npm package manager. Other third party CDNs will probably have it soon as well, but remember that we don’t control their release schedules and they will need some time. Here are the highlights for jQuery 3.7.0.

New method: .uniqueSort()

Some APIs, like .prevAll(), return elements in reverse order, which can result in some confusing behavior when used with wrapping methods. For example,

$elem.prevAll().wrapAll("<p/>")

The above would wrap all of the elements as expected, but it would write those elements to the DOM in reverse order. To solve this in a way that prevented breaking existing code, we’ve documented that .prevAll() and similar methods return reverse-order collections, which is still desirable in many cases. But we’ve also added a new method to make things easier: a chainable .uniqueSort(), which does the equivalent of the existing but static jQuery.uniqueSort().

So, our previous example would become:

$elem.prevAll().uniqueSort().wrapAll("<p/>")

and the element order in the DOM would remain the same.

Added some unitless CSS properties

jQuery 3.7.0 adds support for more CSS properties that should not automatically have “px” added to them when they are set without units. For instance, .css("aspect-ratio", 5) would result in the CSS aspect-ratio: 5px;. All in all, we added seven more properties, and we got a little help with our list from React. Thanks, React!
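
To make that concrete, here’s a small sketch of ours (not from the release post) showing the difference once a property is on the unitless list:

// Before 3.7.0, aspect-ratio wasn't on the unitless list:
// $( "#video" ).css( "aspect-ratio", 5 );   // wrote "aspect-ratio: 5px" (invalid CSS)

// With jQuery 3.7.0 the number is passed through untouched:
$( "#video" ).css( "aspect-ratio", 5 );      // writes "aspect-ratio: 5"
$( "#video" ).css( "width", 300 );           // still gets a unit: "width: 300px"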

It’s worth noting that jQuery 4.0 will change the way we handle unitless CSS properties. Rather than relying on a list of CSS properties to avoid adding “px”, we’ll instead have a list of properties to which we definitely want to add “px” when there are no units passed. That should be more future-proof.

Performance improvement in manipulation

jQuery 3.7.0 comes with a measurable performance improvement for some use cases when using manipulation methods like .append(). When we removed a support test for a browser we no longer support, it meant that checks against document changes no longer needed to run at all. Essentially, that resulted in a speedup anywhere between 0% and 100%. The most significant speedup will be for some rare cases where users frequently switch contexts between different documents, perhaps by running manipulations across multiple iframes.

Negative margins in outerHeight(true)

Back in jQuery 3.3.0, we fixed an issue to include scroll gutters in the calculations for .innerWidth() and .innerHeight(). However, that fix didn’t take negative margins into account, which meant that .outerWidth(true) and .outerHeight(true) no longer respected negative margins. We’ve fixed that in 3.7.0 by separating the margin calculations from the scroll gutter adjustments.
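
As a small sketch of what the fix means in practice (our example, not from the post), negative margins now reduce the value returned when margins are included:

// <div id="box" style="height: 100px; margin: -10px 0;"></div>
var $box = $( "#box" );

$box.outerHeight();        // 100: border box height, margins ignored
$box.outerHeight( true );  // 80 in 3.7.0: the negative top/bottom margins are respected again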

Using different native focus events in IE

Focus and blur events are probably the most complicated events jQuery has to deal with across browsers. jQuery 3.4.0 introduced some minor regressions when it fixed an issue with the data passed through focus events. We were finally able to close all of those tickets in jQuery 3.7.0!

But, we need to point out a possible breaking change in IE. In all versions of IE, focus & blur events are fired asynchronously. In all other browsers, those events are fired synchronously. The asynchronous behavior in IE caused issues. The fix was to change which events we used natively. Fortunately, focusin & focusout are run synchronously in IE, and so we now simulate focus via focusin and blur via focusout in IE. That one change allowed us to rely on synchronous focus events in IE, which solved a lot of issues (see the changelog for the full list).

If you’re curious, support for IE will be dropped in jQuery 4.0 and many of those changes are already in our main branch.

Upgrading

We do not expect compatibility issues when upgrading from a jQuery 3.0+ version. To upgrade, have a look at the new 3.5 Upgrade Guide. If you haven’t yet upgraded to jQuery 3+, first have a look at the 3.0 Upgrade Guide.

The jQuery Migrate plugin will help you to identify compatibility issues in your code. Please try out this new release and let us know about any issues you experienced.

If you can’t yet upgrade to 3.5+, Daniel Ruf has kindly provided patches for previous jQuery versions.

Download

You can get the files from the jQuery CDN, or link to them directly:

https://code.jquery.com/jquery-3.7.0.js

https://code.jquery.com/jquery-3.7.0.min.js

You can also get this release from npm:

npm install jquery@3.7.0

Slim build

Sometimes you don’t need ajax, or you prefer to use one of the many standalone libraries that focus on ajax requests. And often it is simpler to use a combination of CSS and class manipulation for web animations. Along with the regular version of jQuery that includes the ajax and effects modules, we’ve released a “slim” version that excludes these modules. The size of jQuery is very rarely a load performance concern these days, but the slim build is about 6k gzipped bytes smaller than the regular version. These files are also available in the npm package and on the CDN:

https://code.jquery.com/jquery-3.7.0.slim.js

https://code.jquery.com/jquery-3.7.0.slim.min.js

These updates are already available as the current versions on npm and Bower. Information on all the ways to get jQuery is available at https://jquery.com/download/. Public CDNs receive their copies today, please give them a few days to post the files. If you’re anxious to get a quick start, use the files on our CDN until they have a chance to update.

Thanks

Thank you to all of you who participated in this release by submitting patches, reporting bugs, or testing, including fecore1, Michal Golebiowski-Owczarek and the whole jQuery team.

We’re on Mastodon!

jQuery now has its very own Mastodon account. We will be cross posting to both Twitter and Mastodon from now on. Also, you may be interested in following some of our team members that have Mastodon accounts.

jQuery: https://social.lfx.dev/@jquery

mgol: https://hachyderm.io/@mgol

timmywil: https://hachyderm.io/@timmywil

Changelog

Full changelog: 3.7.0

Build

Only install Playwright dependencies when needed (212b6a4f)
Bump actions/setup-node from 3.5.1 to 3.6.0 (582785e0)
Run GitHub Action browser tests on Playwright WebKit (da7057e9)
Migrate middleware-mockserver to modern JS (6b2abbdc)
remove stale Insight package from custom builds (37b04d5a)

CSS

Make `offsetHeight( true )`, etc. include negative margins (#3982, 7bb48a02)
Add missing jQuery.cssNumber entries (#5179, 3eed2820)

Deferred

Rename `getStackHook` to `getErrorHook` (3.x version) (#5201, cca71186)

Docs

Remove stale badge from README (e062f9cb)
update irc to Libera and fix LAMP dead link (e0c670e6)

Event

Simplify the check for saved data in leverageNative (9ab26aa5)
Make trigger(focus/blur/click) work with native handlers (#5015, 754108fb)
Simulate focus/blur in IE via focusin/focusout (3.x version) (#4856, #4859, #4950, 59f7b55b)

Release

add support for md5 sums in windows (3b7bf199)

Selector

Remove an obsolete comment (14685b31)
Wrap activeElement access in try-catch (3936cf3e)
Stop relying on CSS.supports( “selector(…)” ) (#5194, 63c3af48)
Rename rcombinators to rleadingCombinator (ac1c59a3)
Make selector lists work with `qSA` again (#5177, 848de625)
Implement the `uniqueSort` chainable method (#5166, 0acbe643)
Inline Sizzle into the selector module: 3.x version (#5113) (6306ca49)

Tests

Indicate Chrome 112 & Safari 16.4 pass the cssHas support test (3.x version) (1a4d87af)
Fix tests added in gh-5233 (759232e5)
Add tests for array data in ajax (4837a95b)
Skip jQuery.Deferred.exceptionHook tests in IE 9 (98dd622a)
Test AJAX deprecated event aliases properly (18139213)
Fix selector tests in Chrome (732592c2)
Skip the native :valid tests in IE 9 (6b2094da)


Why Svelte is converting TypeScript to JSDoc

#​638 — May 11, 2023

Read on the Web

JavaScript Weekly

The JavaScript Ecosystem is Delightfully Weird — There are plenty of examples of how JavaScript is weird but Sam focuses on the why. If you’ve been a JS developer for many years you’ll have seen it go through many phases and morph to fit its environment. Sam paints the big picture, concluding with a talk Dan Abramov gave yesterday called “React from Another Dimension.”

Sam Ruby

The New JS Features Coming in ECMAScript 2023 — The next JavaScript update brings smaller additions familiar from other languages, but there are more significant developments waiting in the wings. 

Mary Branscombe (The New Stack)

Full Stack for Front-End Engineers with Jem Young (Netflix) — Learn what it means to become a well-rounded full-stack engineer with this hands-on video course. You’ll dive into servers, work with the command line, understand networking and security, set up continuous integration and deployment, manage databases, build containers, and more.

Frontend Masters sponsor

Vue 3.3 ‘Rurouni Kenshin’ Released — Named after a popular manga series, the latest release of Vue is focused on developer experience improvements, particularly for those using TypeScript.

Evan You

John Komarnicki says ▶️ Vue 3.3’s defineModel macro will change the way you write your components.

Next.js 13.4 Released — Despite the minor version bump, this is a big release for the popular React framework. The new app router and its improved approach to filesystem based routing is now offered as a stable feature, with a new concept of server actions being introduced in alpha as a way to mutate data on the server without needing to create an in-between API layer.

Tim Neutkens and Sebastian Markbåge

⚡️ IN BRIEF:

Svelte is converting from TypeScript to JSDoc (example).. sort of. Rich Harris popped up on Hacker News to provide some all important context but the ultimate result will be smaller package sizes and a better experience for Svelte’s maintainers.

React now has official ‘canary’ releases if you want to use newer features than in the stable releases but still be on an officially supported channel.

Newly released Firefox 113 lets you override JS files in its debugger.

No stranger to controversy, Ruby on Rails’s David Heinemeier Hansson (DHH) tweeted: “TypeScript sucked out much of the joy I had writing JavaScript.”

RELEASES:

Glint 1.0 – TypeScript powered tooling for Glimmer / Ember templates.

Elementary 2.0 – JS/C++ library for building audio apps.

Articles & Tutorials

ES2023’s New Array Copying Methods — The newest ECMAScript spec introduces some new methods on Array that you’ll eventually find useful in your own programs. Phil gives us the tour.

Phil Nash

Private Class Fields Considered Harmful — “As a library author, I’ve decided to avoid private class fields from now on and gradually refactor them out of my existing libraries.” Why? Well, that’s the interesting part..

Lea Verou

▶  I’m Done with React — Going from least-to-most important, the reasons this developer isn’t choosing React for future projects make for interesting watching, particularly if you too are overwhelmed by upheaval in the React world. Solid is one of the alternatives he has warmed to.

Adam Elmore

Constraining Language Runtimes with Deterministic Execution — Explore various challenges encountered while using different language runtimes to execute workflow code deterministically.

Temporal Technologies sponsor

Running JavaScript in Rust with Deno — Deno’s use of Rust makes it a natural choice if you’re building a Rust app and want to integrate a JavaScript engine.

Austin Poor

Regular Expressions in JavaScript — Powerful but often misunderstood, many will benefit from this roundup of the potential regexes offer to JavaScript developers.

Adebayo Adams

How to Measure Page Loading Time with the Performance API — The Performance API is a group of standards used to measure the performance of webapps supported in most modern browsers.

Silvestar Bistrović
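
For instance (a minimal sketch of ours, not from the article), the Navigation Timing entry exposed by the Performance API reports the page load duration directly:

// Run in the browser console after the page has finished loading
const [navigation] = performance.getEntriesByType("navigation");
if (navigation) {
  // duration covers the span from the start of navigation to loadEventEnd, in milliseconds
  console.log(`Page loaded in ${navigation.duration.toFixed(0)} ms`);
}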

How to Build a JS VST or Audio Unit Plugin on macOS — VSTs and Audio Units are both types of audio plugins for audio editing software and they’re usually built in C or C++. This tutorial doesn’t dig into the audio side of things, but more the practicalities of packaging things up to get started.

Chris Mendez

An Introduction to the Bun Runtime — If you’ve not yet played with the newest entrant into the JS runtime space, this is a high level overview.

Craig Buckler

2023 State of the Java Ecosystem

New Relic sponsor

Configuring ESLint, Prettier, and TypeScript Together

Josh Goldberg

DestroyRef: Your New Angular 16 Friend

Ion Prodan

Why Astro is My Favorite Framework

Ryan Trimble

Code & Tools

file-type 18.4: Detect the File Type of a Buffer, Uint8Array or ArrayBuffer — For example, give it the raw data from a PNG file, and it’ll tell you it’s a PNG file. Uses magic numbers so is targeted solely at non text-based formats.

Sindre Sorhus
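
A quick usage sketch (ours, assuming file-type’s ESM API and a local logo.png to test with):

// Node.js ESM – file-type 18.x exposes fileTypeFromBuffer / fileTypeFromFile
import { fileTypeFromBuffer } from "file-type";
import { readFile } from "node:fs/promises";

const buffer = await readFile("logo.png");
const type = await fileTypeFromBuffer(buffer);
console.log(type); // e.g. { ext: 'png', mime: 'image/png' }, or undefined if unrecognized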

Learn How the Rising Trend of Malicious Packages Can Affect Your Apps — Keep your applications secure with Snyk’s article on the increasing number of malicious OS packages and ways to mitigate these risks.

Snyk sponsor

Livefir: Build Reactive HTML Apps with Go and Alpine.js — Go isn’t a language that often pops up in the context of the frontend, but this is a neat integration between Go on the backend and Alpine.js up front.

Adnaan Badr

JZZ.js: A Developer Friendly MIDI library — For both browsers and Node, JZZ.js provides an abstraction over working with MIDI related concepts. There are many examples, but the easter egg in the top left is our favorite.

Sema / Jazz-Soft

htmlparser2 9.0: A ‘Fast and Forgiving’ HTML and XML Parser — Consumes documents and calls callbacks, but it can generate a DOM as well. Works in both Node and browser.

Felix Böhm

cRonstrue: Library to Convert cron Expressions into Human-Readable Form — Given something like */5 * * * *, it’ll return “Every 5 minutes”. No dependencies.

Brady Holt
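
A minimal sketch of how it’s typically used (based on the project’s documented toString API):

import cronstrue from "cronstrue";

cronstrue.toString("*/5 * * * *");      // "Every 5 minutes"
cronstrue.toString("0 9 * * MON-FRI");  // roughly "At 09:00 AM, Monday through Friday"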

Knip: Find Unused Files, Dependencies and Exports in TypeScript Projects — Being Dutch for “snip” is appropriate as Knip can trim away things that aren’t being used in your project.

Lars Kappert

jsPlumb 6.1
↳ Visual connectivity for webapps.

gridstack.js 8.1
↳ Build interactive dashboards quickly.

Jobs

Find JavaScript Jobs with Hired — Hired makes job hunting easy: instead of chasing recruiters, companies approach you with salary details up front. Create a free profile now.

Hired

Team Lead Web Development — Experienced with Node, React, and TS? Join us and lead a motivated team of devs and help grow and shape the future of our web app focused on helping millions explore the outdoors.

Komoot

Got a job listing to share? Here’s how.

Don’t tell Satya Nadella..

Fake Windows 11 in Svelte — This is a cute little side project, and the code is available too. The most common complaint I’ve seen is that it’s actually more responsive than the real Windows.. Be sure to check out both ‘VS Code’ and ‘Microsoft Edge’ in this environment.

Yashash Pugalia

Prefer Windows XP? Maybe RebornXP is more for you. Complete with the classic startup sound!
