Optimized Video Encoding with FFmpeg on AWS Graviton Processors

If you have not tried video encoding on Graviton lately, now is the time to give it another look. Recent FFmpeg improvements, contributed by AWS and others in the open source community, have increased the performance of fully loaded video workloads on Graviton processors.

Measured on Amazon Elastic Compute Cloud (Amazon EC2) C7g instances, we saw a 63% performance boost for offline H.264 video encoding and 60% for H.265. Encoding video on C7g costs 29% less for H.264 and 18% less for H.265 compared to C6i, the latest x86-based Amazon EC2 instance (both using on-demand instance pricing). This makes C7g the fastest compute-optimized cloud instance for video encoding, as well as the most cost-effective and the most energy-efficient.

When the AWS Graviton2 instances were introduced, they provided 40% better price performance for many workloads compared to similar x86-based Amazon EC2 instances. Graviton3 delivers an additional 25% performance improvement over Graviton2. Video processing and transcoding has been growing in importance, and Graviton is well suited for this workload. AWS engineers and the open source community have optimized video encoding tools, such as FFmpeg and the codec libraries, for Graviton. You can get these improvements from a build of the FFmpeg development branch on GitHub, or use FFmpeg version 5.2 when it is released.

Use cases

One of the common use cases for video in the cloud is batch transcoding multiple videos concurrently on the same instance, which optimizes for throughput and price. Another popular use case is transcoding a single input stream into multiple output formats optimized for different viewing resolutions. Both cases require optimizing performance for concurrent processing. For the following benchmarks we scale down the incoming 4K stream and encode multiple target resolutions for each input. Each target resolution can then serve devices and networks with different capabilities at their native resolution: 1080p, 720p, 480p, 360p, and 160p.

Figure 1: Encoding multiple streams in parallel on a single instance.
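As an illustration of this fan-out, a single FFmpeg command along the following lines scales one 4K input into the five target resolutions. This is a minimal sketch: file names and encoder settings are placeholders, and audio handling is omitted.

ffmpeg -i input_4k.mp4 -filter_complex \
  "[0:v]split=5[a][b][c][d][e]; \
   [a]scale=-2:1080[v1]; [b]scale=-2:720[v2]; [c]scale=-2:480[v3]; \
   [d]scale=-2:360[v4]; [e]scale=-2:160[v5]" \
  -map "[v1]" -c:v libx264 out_1080p.mp4 \
  -map "[v2]" -c:v libx264 out_720p.mp4 \
  -map "[v3]" -c:v libx264 out_480p.mp4 \
  -map "[v4]" -c:v libx264 out_360p.mp4 \
  -map "[v5]" -c:v libx264 out_160p.mp4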

We tested encoding the target videos into H.264 and H.265 using the x264 and x265 open source libraries. The H.264 or AVC (Advanced Video Coding) standard was first published in 2004 and enjoys broad compatibility; devices including mobile phones, tablets, personal computers, and smart TVs generally support hardware-accelerated H.264 decoding. The H.265 or HEVC (High Efficiency Video Coding) standard was first published in 2013 and has better compression at a given level of quality than H.264, but hardware-accelerated decoding is not as widely deployed, and patent and licensing restrictions have prevented some companies from adopting it in their software. For most video use cases, more than one format will be necessary: H.265 to provide the best quality on devices that can play it, and H.264 for devices without H.265 decoding support.

Offline (batch) encoding

Speed: The following diagram shows the encoding speed in frames per second (FPS) for a sample workload. It was tested comparing FFmpeg 4.2 with the development branches of FFmpeg and x265 that include the latest optimizations.

Figure 2: Speed results are the mean frame per second (FPS) for different input samples.
Higher is better.

Cost: The cost of encoding on the latest Graviton instance, C7g, is compared with the latest Amazon EC2 x86-based instances, C6i and C6a, showing better performance and an 18-29% cost reduction relative to C6i.

Figure 3: Comparing cost for the latest generations of Amazon EC2 compute instances.

Lower is better. Normalized so that the cost of x264, preset ultrafast, on C6i equals one.

The results show the total cost to transcode 1 million input frames into five output sizes using parallel jobs. Each value is the mean over three different input files. At 60 frames per second, 1 million frames is about 4 hours and 37 minutes of video.
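To put the cost numbers in context, the arithmetic is simple: encode wall-clock hours = 1,000,000 frames ÷ measured aggregate FPS ÷ 3,600, and total cost = hours × the instance's hourly on-demand price. For example, a configuration sustaining 600 FPS across all parallel jobs finishes 1 million frames in roughly 28 minutes, costing about 0.46 of one instance-hour.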

Live stream encoding

For a live streaming use case, we measure the maximum number of streams for which an instance can maintain full frame rate while transcoding to three output sizes. The results below show each instance's hourly cost divided by the number of streams it sustained, yielding a 15-35% lower cost per stream on C7g vs. C6i. This makes the C7g instance the most cost-effective AWS compute instance type for transcoding streaming video.

Figure 5: Results show the hourly cost per video stream at 24FPS, using -preset ultrafast with x264 and x265.
Lower is better.

The changes

The aarch64 versions of the scaling functions initially used the reference implementations written in C. After rewriting these C functions in aarch64 assembly, performance improved significantly. Video scaling is a component of FFmpeg that consistently takes a high percentage of compute time: most encode jobs include a scaling step, since multiple outputs are needed to support different device resolutions, for both offline and live streams. All of these changes have been contributed upstream to FFmpeg. Figure 6, below, lists some of the changes AWS contributed since the 2019 release of FFmpeg version 4.2 and their effect on encoding performance on Graviton.

Figure 6: Contributed scaling-function optimizations and their measured speed-ups, by function name.
Through a series of optimizations to the horizontal and vertical scaling functions, detailed in the pull requests listed here, AWS engineers improved performance for a variety of input cases. With these optimizations and others applied to FFmpeg and x265, Graviton instances perform better than comparable Amazon EC2 x86-based instances. Comparing C7g instances to C6i instances on the mainline branch of FFmpeg, C7g shows higher performance in every category.

Benchmarking method

To benchmark FFmpeg we used three different test files, each 10 seconds long. One was a high-bitrate test with complex motion and lots of high-frequency detail changes, another was a mostly still scene at a low bitrate, and the third was a moderate-bitrate scene from the open source Tears of Steel film. We transcoded each clip into the five target sizes using multiple parallel jobs, intended to simulate a service transcoding many sources in parallel. To increase the stability of the measurements, we also executed multiple iterations of these parallel jobs sequentially. The total time to execute these jobs is then used to calculate frames per second and cost per frame. Results are measured in frames per second and use the number of source frames transcoded, rather than output frames, since the output consists of many different sizes. All input files are 4K and H.264 encoded. We tested with the following software versions: FFmpeg, 2022-08-23; x264, 2022-06-01; x265, 2022-09-12.
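A minimal sketch of such a harness (the job count, preset, and file names are placeholders, not the exact scripts used) runs a batch of parallel jobs under time:

# illustrative harness: run 8 identical transcode jobs in parallel and time the batch
time ( seq 1 8 | xargs -P 8 -I{} \
  ffmpeg -y -i input_4k.mp4 -vf scale=-2:1080 -c:v libx264 -preset veryfast /tmp/out_{}.mp4 )

Dividing the total number of source frames processed by the elapsed seconds yields the frames-per-second figures, and multiplying the elapsed time by the hourly instance price yields the cost per frame.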


Graviton2 and Graviton3 processors are cost efficient and fast for video transcoding, and with the latest improvements to FFmpeg and the codec libraries, that advantage has only grown. To achieve these results yourself, the first step is to ensure you are running an optimized build from the latest code. There's a pre-built binary at https://github.com/BtbN/FFmpeg-Builds/releases, a third-party project that maintains builds from the latest source code. VT1 and GPU instances can also be a compelling option, especially for live video, but they offer less flexibility than software encoders for getting the best quality at a given bit rate. If a software encoder is right for your workload, Graviton is a great option.

There is still more work to do for FFmpeg, especially if you are using HDR content with 10 or 12 bit color depth. If you are, and even if you are not, be sure to keep up to date with FFmpeg and codec releases. If you find use cases where FFmpeg on Graviton does not meet expectations, please open an issue on the Graviton Technical Guide to let us know about it. We will continue to add more performance improvements to make Graviton the most cost effective and efficient general purpose processor for video encoding.


The AWS Modern Applications and Open Source Zone: Learn, Play, and Relax at AWS re:Invent 2022

AWS re:Invent is filled with fantastic opportunities, but I wanted to tell you about a space that lets you dive deep with some fantastic open source projects and contributors: the AWS Modern Applications and Open Source Zone! Located in the east alcove on the third floor of the Venetian Conference Center, this space exists so that re:Invent attendees can be introduced to some of the amazing projects that power and enhance the AWS solutions you know and use. We’ve divided the space up into three areas: Demos, Experts, and Fun.

Demos: Learn and be curious

We have two dedicated demo stations in the Zone and a deep list of projects that we are excited to show you from Amazonians, AWS Heroes, and AWS Community Builders. Please keep in mind this schedule may be subject to change, and we have some last minute surprises that we can’t share here, so be sure to drop by.

Monday, November 28, 2022

Kiosk schedule: 9 AM – 11 AM | 11 AM – 1 PM | 1 PM – 3 PM | 3 PM – 5 PM


Continuous Deployment and GitOps delivery with Amazon EKS Blueprints and ArgoCD

Tsahi Duek, Dima Breydo

StackGres: An Advanced PostgreSQL Platform on EKS

Alvaro Hernandez


Step Functions templates and prebuilt Lambda Packages for deploying scalable serverless applications in seconds

Rustem Feyzkhanov


Data on EKS

Vara Bonthu, Brian Hammons

How to use Amazon Keyspaces (for Apache Cassandra) and Apache Spark to build applications

Meet Bhagdev

Scale your applications beyond IPv4 limits

Sheetal Joshi

Tuesday, November 29, 2022

Kiosk schedule: 9 AM – 11 AM | 11 AM – 1 PM | 1 PM – 3 PM | 3 PM – 5 PM


Let’s build a self service developer portal with AWS Proton

Adam Keller

Doing serverless on AWS with Terraform (serverless.tf + terraform-aws-modules)

Anton Babenko


Chris Farris, Bob Tordella

Using Lambda Powertools for better observability in IoT Applications

Alina Dima


Build and run containers on AWS with AWS Copilot

Sergey Generalov

Fargate Surprise

Amplify Libraries Demo

Matt Auerbach

Wednesday, November 30, 2022

Kiosk schedule: 9 AM – 11 AM | 11 AM – 1 PM | 1 PM – 3 PM | 3 PM – 5 PM


Building Embedded Devices with FreeRTOS SMP and the Raspberry Pi Pico

Dan Gross

Quantum computing in the cloud with Amazon Braket

Michael Brett, Katharine Hyatt


Andrea Cavagna

EKS multicluster management and applications delivery

Nicholas Thomson, Sourav Paul


Using SAM CLI and Terraform for local testing

Praneeta Prakash, Suresh Poopandi

How to use Terraform AWS and AWSCC provider in your project

Tyler Lynch, Drew Mullen

How to use Terraform AWS and AWSCC provider in your project

Glenn Chia, Welly Siau

Smart City Monitoring Using AWS IoT and Digital Twin

Syed Rehan

Thursday, December 1, 2022

Kiosk schedule: 9 AM – 11 AM | 11 AM – 1 PM | 1 PM – 3 PM


Modern data exchange using AWS data streaming

Ali Alemi

Learn how to leverage your Amazon EKS cluster as a substrate for executing distributed Ray programs for machine learning

Apoorva Kulkarni



Spreading apps, controlling traffic, and optimizing costs in Kubernetes

Lukonde Mwila


Terraform IAM policy validator

Bohan Li


Experts

Pull up a chair, grab a drink and a snack, charge your devices, and have a conversation with some of our experts. We'll have people visiting the zone all throughout re:Invent, with expertise in a variety of open source technologies and AWS services including (but not limited to):

Amazon Athena
Amazon DocumentDB (with MongoDB compatibility)
Amazon DynamoDB
Amazon Elastic Container Service (Amazon ECS)
Amazon Elastic Kubernetes Service (Amazon EKS)
Amazon EventBridge
Amazon Keyspaces (for Apache Cassandra)
Amazon Kinesis
Amazon Linux
Amazon Managed Grafana
Amazon Managed Service for Prometheus
Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
Amazon MQ
Amazon Redshift
Amazon Simple Notification Service (Amazon SNS)
Amazon Simple Queue Service (Amazon SQS)
Amazon Simple Storage Service (Amazon S3)
Apache Flink, Hadoop, Hudi, Iceberg, Kafka, and Spark
Automotive Grade Linux
AWS Amplify
AWS App Mesh
AWS App Runner
AWS Copilot
AWS Distro for OpenTelemetry
AWS Fargate
AWS Glue
AWS IoT Greengrass
AWS Lambda
AWS Proton
AWS Serverless Application Model (AWS SAM)
AWS Step Functions
Cloudscape Design System
Embedded Linux
Lambda Powertools
Red Hat OpenShift Service on AWS (ROSA)


Fun

Want swag? We've got it, but it is protected by THE CLAW. That's right, we brought back the claw machine, and this year we might have some extra special items in there for you to catch. No spoilers, but we've heard there have been some Rustaceans sighted. You might want to bring an extra (empty) suitcase.

But we’re not done. By popular request, we also brought back Dance Dance Revolution. Warm up your dancing shoes or just cheer on the crowd. You never know who will be showing off their best moves.


The AWS Modern Applications and Open Source Zone is a must-visit destination for your re:Invent journey. With demos, experts, food, drinks, swag, games, and mystery surprises, how can you not stop by?


Adding CDK Constructs to the AWS Analytics Reference Architecture

In 2021, we released the AWS Analytics Reference Architecture, an end-to-end AWS Cloud Development Kit (AWS CDK) application example, as open source (docs are CC-BY-SA 4.0 International, sample code is MIT-0). It shows how our customers can use available AWS products and features to implement well-architected analytics solutions. It also gathers AWS best practices for designing, implementing, and operating analytics solutions through different purpose-built patterns. Altogether, the AWS Analytics Reference Architecture answers common requirements and solves customer challenges.

In 2022, we extended the scope of this project with AWS CDK constructs to provide more granular and reusable examples. This project is now composed of:

Reusable core components exposed in an AWS CDK library currently available in TypeScript and Python. This library contains the AWS CDK constructs that can be used to quickly provision prepackaged analytics solutions.
Reference architectures consuming the reusable components in AWS CDK applications, and demonstrating end-to-end examples in a business context. Currently, only the AWS native reference architecture is available but others will follow.

In this blog post, we will first show how to consume the core library to quickly provision analytics solutions using CDK Constructs and experiment with AWS analytics products.

Building solutions with the Core Library

To illustrate how to use the core components, let's see how we can quickly build a data lake, a central piece of most analytics projects. The storage layer is implemented with the DataLakeStorage CDK construct, relying on Amazon Simple Storage Service (Amazon S3), a durable, scalable, and cost-effective object storage service. The query layer is implemented with the AthenaDemoSetup construct using Amazon Athena, an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. As for the data catalog, it's implemented with the DataLakeCatalog construct using the AWS Glue Data Catalog.

Before getting started, please make sure to follow the instructions available here for setting up the prerequisites:

Install the necessary build dependencies
Bootstrap the AWS account
Initialize the CDK application.

This architecture diagram depicts the data lake building blocks we are going to deploy using the AWS Analytics Reference Architecture library. These are higher level constructs (commonly called L3 constructs) as they integrate several AWS services together in patterns.

To assemble these components, you can add this code snippet in your app.py file:

import aws_analytics_reference_architecture as ara
from aws_cdk import aws_glue as glue  # imported for the L1 Glue constructs used below

# Create a new DataLakeStorage with Raw, Clean and Transform buckets
storage = ara.DataLakeStorage(scope=self, id="storage")

# Create a new DataLakeCatalog with Raw, Clean and Transform databases
catalog = ara.DataLakeCatalog(scope=self, id="catalog")

# Configure a new Athena Workgroup
athena_defaults = ara.AthenaDemoSetup(scope=self, id="demo_setup")

# Generate data from Customer TPC dataset
data_generator = ara.BatchReplayer(...)  # constructor arguments elided in the original snippet

# Role with default permissions for any Glue service
glue_role = ara.GlueDemoRole.get_or_create(self)

# Crawler to create tables automatically
crawler = glue.CfnCrawler(
    self, id="ara-crawler", name="ara-crawler",
    role=glue_role.iam_role.role_arn, database_name="raw",
    targets={"s3Targets": [{"path": f"s3://{storage.raw_bucket.bucket_name}/{data_generator.sink_object_key}/"}]},
)

# Trigger to kick off the crawler every five minutes
cfn_trigger = glue.CfnTrigger(
    self, id="MyCfnTrigger",
    actions=[{"crawlerName": crawler.name}],
    type="SCHEDULED", description="ara_crawler_trigger",
    name="min_based_trigger", schedule="cron(0/5 * * * ? *)", start_on_creation=True,
)

In addition to this library construct, the example also includes lower level constructs (commonly called L1 constructs) from the AWS CDK standard library. This shows that you can combine constructs from any CDK library interchangeably.

For use cases where customers need to adjust the default configurations to align with their organization-specific requirements (e.g., data retention rules), the constructs can be changed through class parameters, as shown in this example:

storage = ara.DataLakeStorage(scope=self, id="storage", raw_archive_delay=180, clean_archive_delay=1095)

Finally, you can deploy the solution using the AWS CDK CLI from the root of the application with this command: cdk deploy. Once you deploy the solution, AWS CDK provisions the AWS resources included in the Constructs and you can log into your AWS account.

Go to the Athena console and start querying the data. The AthenaDemoSetup provides an Athena workgroup called “demo” that you can select to start querying the BatchReplayer data very quickly. Data is stored in the DataLakeStorage and registered in the DataLakeCatalog. Here is an example of an Athena query accessing the customer data from the BatchReplayer:
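A query along these lines returns a sample of the replayed customer records; the raw database name comes from the DataLakeCatalog, while the customer table name created by the crawler is an assumption:

-- sample the replayed customer data registered by the crawler
SELECT *
FROM "raw"."customer"
LIMIT 10;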

Accelerate the implementation

Earlier in the post we pointed out that the library simplifies and accelerates the development process. First, writing Python code is more appealing than writing CloudFormation markup, whether in JSON or YAML. Second, the CloudFormation template generated by the AWS CDK for the data lake example is 16 times more verbose than the Python script:

❯ cdk synth | wc -w

❯ wc -w ara_demo/ara_demo_stack.py

Demonstrating end-to-end examples with reference architectures

The AWS native reference architecture is the first reference architecture available. It follows the journey of a fictional company, MyStore Inc., as it implements its data platform solution with AWS products and services. Deploying the AWS native reference architecture demonstrates a fully working example of a data platform, from data ingestion to business analysis. AWS customers can learn from it, see analytics solutions in action, and play with the retail dataset and business analysis.

More reference architectures will be added to this project on GitHub later.

Business Story

The AWS native reference architecture models a fictional retail company, MyStore Inc., that is building a new analytics platform on top of AWS products. This example shows how retail data can be ingested, processed, and analyzed in streaming and batch processes to provide business insights like sales analysis. The platform is built on top of the CDK constructs from the core library to minimize development effort and inherit AWS best practices.

Here is the architecture deployed by the AWS native reference architecture:

The platform is implemented in purpose-built modules. They are decoupled and can be independently provisioned but still integrate with each other. Thanks to the core constructs, MyStore's analytics platform is composed of the following modules:

Data Lake foundations: This mandatory module (based on DataLakeCatalog and DataLakeStorage core constructs) is the core of the analytics platform. It contains the data lake storage and associated metadata for both batch and streaming data. The data lake is organized in multiple Amazon S3 buckets representing different versions of the data. (a) The raw layer contains the data coming from the data sources in the raw format. (b) The cleaned layer contains the raw data that has been cleaned and parsed to a consumable schema. (c) And the curated layer contains refactored data based on business requirements.

Batch analytics: This module is in charge of ingesting and processing data from a Stores channel generated by the legacy systems in batch mode. Data is then exposed to other modules for downstream consumption. The data preparation process leverages various features of AWS Glue, a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development via the Apache Spark framework. The orchestration of the preparation is handled using AWS Glue Workflows that allows managing and monitoring executions of Extract, Transform, and Load (ETL) activities involving multiple crawlers, jobs, and triggers. The metadata management is implemented via AWS Glue Crawlers, a serverless process that crawls data sources and sinks to extract the metadata including schemas, statistics and partitions. It saves them in the AWS Glue Data Catalog.

Streaming analytics: This module ingests and processes real-time data from the Web channel generated by cloud-native systems. The solution minimizes data analysis latency while also feeding the data lake for downstream consumption.

Data Warehouse: This module ingests data from the data lake to support reporting, dashboarding, and ad hoc querying capabilities. The module uses an Extract, Load, and Transform (ELT) process to transform the data from the Data Lake foundations module. The pipeline from the data lake into the data warehouse follows these steps:

1. An AWS Glue Workflow reads CSV files from the Raw layer of the data lake and writes them to the Clean layer as Parquet files.
2. Stored procedures in Amazon Redshift's stg_mystore schema extract data from the Clean layer of the data lake using Amazon Redshift Spectrum.
3. The stored procedures then transform and load the data into a star schema model.

Data Visualization: This module provides dashboarding capabilities to business users like data analysts on top of the Data Warehouse module, and also provides data exploration on top of the Data Lake module. It is implemented with Amazon QuickSight, a scalable, serverless, embeddable, and machine learning-powered business intelligence tool. Amazon QuickSight is connected to the data lake via Amazon Athena and to the data warehouse via Amazon Redshift, both using direct query mode, as opposed to the caching mode with SPICE.

Project Materials

The AWS native reference architecture provides both code and documentation about MyStore’s analytics platform:

Documentation is available on GitHub and comes in two different parts:

The high level design describes the overall data platform implemented by MyStore, and the different components involved. This is the recommended entry point to discover the solution.
The analytics solutions provide fine-grained solutions to the challenges MyStore met during the project. These technical patterns can help you choose the right solution for common challenges in analytics.

The code is publicly available here and can be reused as an example for other analytics platform implementations. The code can be deployed in an AWS account by following the getting started guide.


In this blog post, we introduced new AWS CDK content available for customers and partners to easily implement AWS analytics solutions with the AWS Analytics Reference Architecture. The core library provides reusable building blocks with best practices to accelerate the development life cycle on AWS and the reference architecture demonstrates running examples with end-to-end integration in a business context.

Because of its reusable nature, this project will be the foundation for lots of additional content. We plan to extend the technical scope of it with Constructs and reference architectures for a data mesh. We’ll also expand the business scope with industry focused examples. In a future blog post, we will go deeper into the constructs related to Amazon EMR Studio and Amazon EMR on EKS to demonstrate how customers can easily bootstrap an efficient data platform based on Amazon EMR Spark and notebooks.


Introducing Finch: An Open Source Client for Container Development

Today we are happy to announce a new open source project, Finch. Finch is a new command line client for building, running, and publishing Linux containers. It provides simple installation of a native macOS client, along with a curated set of de facto standard open source components including Lima, nerdctl, containerd, and BuildKit. With Finch, you can create and run containers locally, and build and publish Open Container Initiative (OCI) container images.

At launch, Finch is a new project in its early days with basic functionality, initially only supporting macOS (on all Mac CPU architectures). Rather than iterating in private and releasing a finished project, we feel open source is most successful when diverse voices come to the party. We have plans for features and innovations, but opening the project this early will lead to a more robust and useful solution for all. We are happy to address issues, and are ready to accept pull requests. We’re also hopeful that with our adoption of these open source components from which Finch is composed, we’ll increase focus and attention on these components, and add more hands to the important work of open source maintenance and stewardship. In particular, Justin Cormack, CTO of Docker shared that “we’re bullish about Finch’s adoption of containerd and BuildKit, and we look forward to AWS working with us on upstream contributions.”

We are excited to build Finch in the open with interested collaborators. We want to expand Finch from its current basic starting point to cover Windows and Linux platforms and additional functionality that we’ve put on our roadmap, but would love your ideas as well. Please open issues or file pull requests and start discussing your ideas with us in the Finch Slack channel. Finch is licensed under the Apache 2.0 license and anyone can freely use it.

Why build Finch?

For building and running Linux containers on non-Linux hosts, there are existing commercial products as well as an array of purpose-built open source projects. While companies may be able to assemble a simple command line tool from existing open source components, most organizations want their developers to focus on building their applications, not on building tools.

At AWS, we began looking at the available open source components for container tooling and were immediately impressed with the progress of Lima, recently included in the Cloud Native Computing Foundation (CNCF) as a sandbox project. The goal of Lima is to promote containerd and nerdctl to Mac users, and this aligns very well with our existing investment in both using and contributing to the CNCF graduated project, containerd. Rather than introducing another tool and fragmenting open source efforts, the team decided to integrate with Lima and is making contributions to the project. Akihiro Suda, creator of nerdctl and Lima and a longtime maintainer of containerd, BuildKit, and runc, added “I’m excited to see AWS contributing to nerdctl and Lima and very happy to see the community growing around these projects. I look forward to collaborating with AWS contributors to improve Lima and nerdctl alongside Finch.”

Finch is our response to the complexity of curating and assembling an open source container development tool for macOS initially, followed by Windows and Linux in the future. We are curating the components, depending directly on Lima and nerdctl, and packaging them together with their dependencies into a simple installer for macOS. Finch, via its macOS-native client, acts as a passthrough to nerdctl which is running in a Lima-managed virtual machine. All of the moving parts are abstracted away behind the simple and easy-to-use Finch client. Finch manages and installs all required open source components and their dependencies, removing any need for you to manage dependency updates and fixes.

The core Finch client will always be a curated distribution composed entirely of open source, vendor-neutral projects. We also want Finch to be customizable for downstream consumers to create their own extensions and value-added features for specific use cases. We know that AWS customers will want extensions that make it easier for local containers to integrate with AWS cloud services. However, these will be opt-in extensions that don’t impact or fragment the open source core or upstream dependencies that Finch depends on. Extensions will be maintained as separate projects with their own release cycles. We feel this model strikes a perfect balance for providing specific features while still collaborating in the open with Finch and its upstream dependencies. Since the project is open source, Finch provides a great starting point for anyone looking to build their own custom-purpose container client.

In summary, with Finch we’ve curated a common stack of open source components that are built and tested to work together, and married it with a simple, native tool. Finch is a project with a lot of collective container knowledge behind it. Our goal is to provide a minimal and simple build/run/push/pull experience, focused on the core workflow commands. As the project evolves, we will be working on making the virtualization component more transparent for developers with a smaller footprint and faster boot times, as well as pursuing an extensibility framework so you can customize Finch however you’d like.

Over time, we hope that Finch will become a proving ground for new ideas as well as a way to support our existing customers who asked us for an open source container development tool. While an AWS account is not required to use Finch, if you’re an AWS customer we will support you under your current AWS Support plans when using Finch along with AWS services.

What can you do with Finch?

Since Finch is integrated directly with nerdctl, all of the typical commands and options that you’ve become fluent with will work the same as if you were running natively on Linux. You can pull images from registries, run containers locally, and build images using your existing Dockerfiles. Finch also enables you to build and run images for either amd64 or arm64 architectures using emulation, which means you can build images for either (or both) architectures from your M1 Apple Silicon or Intel-based Mac. With the initial launch, support for volumes and networks is in place, and Compose is supported to run and test multiple container applications.

Once you have installed Finch from the project repository, you can get started building and running containers. As mentioned previously, for our initial launch only macOS is supported.

To install Finch on macOS, download the latest release package. Opening the package file will walk you through the standard experience of a macOS application installation.

Finch has no GUI at this time and offers a simple command line client without additional integrations for cluster management or other container orchestration tools. Over time, we are interested in adding extensibility to Finch with optional features that you can choose to enable.

After install, you must initialize and start Finch’s virtual environment. Run the following command to start the VM:
finch vm init

To start Finch’s virtual environment (for example, after reboots) run:
finch vm start

Now, let's run a simple container. The run command will pull an image if it is not already present, then create and start the container instance. The --rm flag will delete the container once the container command exits.

finch run --rm public.ecr.aws/finch/hello-finch
public.ecr.aws/finch/hello-finch:latest: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:a71e474da9ffd6ec3f8236dbf4ef807dd54531d6f05047edaeefa758f1b1bb7e: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:705cac764e12bd6c5b0c35ee1c9208c6c5998b442587964b1e71c6f5ed3bbe46: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:6cc2bf972f32c6d16519d8916a3dbb3cdb6da97cc1b49565bbeeae9e2591cc60: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 0.9 s total: 0.0 B (0.0 B/s)

@@@@@@@@@@@@ @@@@@@@@@@@
@@@@@@@ @@@@@@@
@@@@@@ @@@@@@
@@@@@@ @@@@@
@@@@@ @@@# @@@@@@@@@
@@@@@ @@ @@@ @@@@@@@@@@
@@@@% @ @@ @@@@@@@@@@@
@@@@ @@@@@@@@
@@@@ @@@@@@@@@@@&
@@@@@ &@@@@@@@@@@@
@@@@@ @@@@@@@@
@@@@@ @@@@@(
@@@@@@ @@@@@@
@@@@@@@ @@@@@@@

Hello from Finch!

Visit us @ github.com/runfinch

Lima supports userspace emulation in the underlying virtual machine. While all the images we create and use in the following example are Linux images, the Lima VM is emulating the CPU architecture of your host system, which might be 64-bit Intel or Apple Silicon-based. In the following examples we will show that no matter which CPU architecture your Mac system uses, you can author, publish, and use images for either CPU family. In the following example we will build an x86_64-architecture image on an Apple Silicon laptop, push it to ECR, and then run it on an Intel-based Mac laptop.

To verify that we are running our commands on an Apple Silicon-based Mac, we can run uname and see the architecture listed as arm64:

uname -sm
Darwin arm64

Let's create and run an amd64 container using the --platform option to specify the non-native architecture:

finch run --rm --platform=linux/amd64 public.ecr.aws/amazonlinux/amazonlinux uname -sm
Linux x86_64

The --platform option can be used for builds as well. Let's create a simple Dockerfile with two lines:

FROM public.ecr.aws/amazonlinux/amazonlinux:latest
LABEL maintainer="Chris Short"

By default, Finch would build for the host's CPU architecture platform, which we showed is arm64 above. Instead, let's build and push an amd64 container to ECR. To build an amd64 image we add the --platform flag to our command:

finch build --platform linux/amd64 -t public.ecr.aws/cbshort/finch-multiarch .
[+] Building 6.5s (6/6) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 142B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for public.ecr.aws/amazonlinux/amazonlinux:latest 1.2s
=> [auth] aws:: amazonlinux/amazonlinux:pull token for public.ecr.aws 0.0s
=> [1/1] FROM public.ecr.aws/amazonlinux/amazonlinux:latest@sha256:d0cc2f24c888613be336379e7104a216c9aa881c74d6df15e30286f67 3.9s
=> => resolve public.ecr.aws/amazonlinux/amazonlinux:latest@sha256:d0cc2f24c888613be336379e7104a216c9aa881c74d6df15e30286f67 0.0s
=> => sha256:e3cfe889ce0a44ace07ec174bd2a7e9022e493956fba0069812a53f81a6040e2 62.31MB / 62.31MB 5.1s
=> exporting to oci image format 5.2s
=> => exporting layers 0.0s
=> => exporting manifest sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652 0.0s
=> => exporting config sha256:474c401eafe6b05f5a4b5b4128d7b0023f93c705e0328243501e5d6c7d1016a8 0.0s
=> => sending tarball 1.3s
unpacking public.ecr.aws/cbshort/finch-multiarch:latest (sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652)…
Loaded image: public.ecr.aws/cbshort/finch-multiarch:latest%

finch push public.ecr.aws/cbshort/finch-multiarch
INFO[0000] pushing as a reduced-platform image (application/vnd.docker.distribution.manifest.v2+json, sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652)
manifest-sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:474c401eafe6b05f5a4b5b4128d7b0023f93c705e0328243501e5d6c7d1016a8: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 27.9s total: 1.6 Ki (60.0 B/s)

At this point we've created an image on an Apple Silicon-based Mac that can be used on any Intel/AMD CPU architecture Linux host with an OCI-compliant container runtime. This could be an Intel or AMD CPU EC2 instance, an on-premises Intel NUC, or, as we show next, an Intel CPU-based Mac. To demonstrate this capability, we'll run our newly created image on an Intel-based Mac where Finch is already installed. Note that we have run uname here to show that this Mac's architecture is x86_64, which the Go programming language refers to as amd64.

uname -a
Darwin wile.local 21.6.0 Darwin Kernel Version 21.6.0: Thu Sep 29 20:12:57 PDT 2022; root:xnu-8020.240.7~1/RELEASE_X86_64 x86_64

finch run --rm --platform linux/amd64 public.ecr.aws/cbshort/finch-multiarch:latest uname -a
public.ecr.aws/cbshort/finch-multiarch:latest: resolved |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:474c401eafe6b05f5a4b5b4128d7b0023f93c705e0328243501e5d6c7d1016a8: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:e3cfe889ce0a44ace07ec174bd2a7e9022e493956fba0069812a53f81a6040e2: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 9.2 s total: 59.4 M (6.5 MiB/s)
Linux 73bead2f506b 5.17.5-300.fc36.x86_64 #1 SMP PREEMPT Thu Apr 28 15:51:30 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

You can see the commands and options are familiar. As Finch is passing through our commands to the nerdctl client, all of the command syntax and options are what you’d expect, and new users can refer to nerdctl’s docs.

Another use case is multi-container application testing. Let’s use yelb as an example app that we want to run locally. What is yelb? It’s a simple web application with a cache, database, app server, and UI. These are all run as containers on a network that we’ll create. We will run yelb locally to demonstrate Finch’s compose features for microservices:

finch vm init
INFO[0000] Initializing and starting finch virtual machine…
INFO[0079] Finch virtual machine started successfully

finch compose up -d
INFO[0000] Creating network localtest_default
INFO[0000] Ensuring image redis:4.0.2
docker.io/library/redis:4.0.2: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:cd277716dbff2c0211c8366687d275d2b53112fecbf9d6c86e9853edb0900956: done |++++++++++++++++++++++++++++++++++++++|

[ snip ]

layer-sha256:afb6ec6fdc1c3ba04f7a56db32c5ff5ff38962dc4cd0ffdef5beaa0ce2eb77e2: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 11.4s total: 30.1 M (2.6 MiB/s)
INFO[0049] Creating container localtest_yelb-appserver_1
INFO[0049] Creating container localtest_redis-server_1
INFO[0049] Creating container localtest_yelb-db_1
INFO[0049] Creating container localtest_yelb-ui_1

The output indicates that a network was created, the images were pulled, and the containers were created and are now all running in our local test environment.

In this test case, we’re using Yelb to figure out where a small team should grab lunch. We share the URL with our team, folks vote, and we see the output via the UI:

What’s next for Finch?

The project is just getting started. The team will work on adding features iteratively, and is excited to hear from you. We have ideas on making the virtualization more minimal, with faster boot times to make it more transparent for users. We are also interested in making Finch extensible, allowing for optional add-on functionality. As the project evolves, the team will direct contributions into the upstream dependencies where appropriate. We are excited to support and contribute to the success of our core dependencies: nerdctl, containerd, BuildKit, and Lima. As mentioned previously, one of the exciting things about Finch is shining a light on the projects it depends upon.

Please join us! Start a discussion, open an issue with new ideas, or report any bugs you find, and we are definitely interested in your pull requests. We plan to evolve Finch in public, by building out milestones and a roadmap with input from our users and contributors. We’d also love feedback from you about your experiences building and using containers daily and how Finch might be able to help!


Making it Easier to Build Connectors with Apache Flink: Introducing the Async Sink

Apache Flink is a popular open source framework for stateful computations over data streams. It allows you to formulate queries that are continuously evaluated in near real time against an incoming stream of events. To persist derived insights from these queries in downstream systems, Apache Flink comes with a rich connector ecosystem that supports a wide range of sources and destinations. However, the existing connectors may not always be enough to support all conceivable use cases. Our customers and the community kept asking for more connectors and better integrations with various open source tools and services.

But that’s not an easy problem to solve. Creating and maintaining production-ready sinks for a new destination is a lot of work. For critical use cases, it’s undesirable to lose messages or to compromise on performance when writing into a destination. However, sinks have commonly been developed and maintained independently of each other. This further adds to the complexity and cost of adding sinks to Apache Flink, as more functionality had to be independently reimplemented and optimized for each sink.

To better support our customers and the entire Apache Flink community, we set out to make it easier and less time consuming to build and maintain sinks. We contributed the Async Sink to the Flink 1.15 release, which improved cloud interoperability and added more sink connectors and formats, among other updates. The Async Sink is an abstraction for building sinks with at-least-once semantics. Instead of reimplementing the same core functionality for every new sink that is created, the Async Sink provides common sink functionality that can be extended upon. In the remainder of this post, we’ll explain how the Async Sink works, how you can build a new sink based on the Async Sink, and discuss our plans to continue our contributions to Apache Flink.

Abstracting away common components with the Async Sink

Although sinks have been commonly developed in isolation, their basic functionality is often similar. Sinks buffer multiple messages to send them in a single batch request to improve efficiency. They check completed requests for success and resend messages that were not persisted successfully at a later point. They participate in Flink’s checkpointing mechanism to avoid losing any messages in case the Flink application fails and needs to recover. Lastly, sinks monitor and control the throughput to the destination to not overload it and to fairly divide the capacity amongst multiple concurrent producers. There are usually only two main things that differ between destinations: the structure and information contained in both the destination requests and responses.

Instead of creating independent sinks and duplicating all of this common functionality for every sink, we can abstract away and implement these common requirements once. To implement a new sink, developers then only need to specify those aspects that are specific to the sink: how to build and send requests, and how to identify from the response which records were not persisted successfully and need to be resent. In this way, building a new sink just requires the creation of a lightweight shim that is specific to the destination.

Building a new sink with the Async Sink abstraction

Let's look at what it takes to build a new sink based on the Async Sink abstraction. For this example, we'll implement a simplified sink for Amazon Kinesis Data Streams. Kinesis Data Streams is a streaming data service to capture and store data streams. Data is persisted into a Kinesis stream by means of the PutRecords API, which can persist multiple records with a single batch request.

There are three main aspects that are specific to our sink that we need to implement. First, we need to specify how to extract the information required to make a batch request from the event. In Kinesis Data Streams, this includes the actual payload and a partition key. Second, we need to specify how to construct and make a batch request. And third, we need to inspect the response of the request to know whether all elements of the batch request have been persisted successfully.

Let’s start with extracting the required information from an event. We need to specify how to convert an event to a so-called request entry that forms a batch request. The following code example shows what this looks like for our Kinesis Data Streams sink. The code simply specifies how to extract the actual payload and a partition key from the event and return a PutRecordsRequestEntry object. In this simplified example, we use the string representation of the event as the payload and the hash code of the event as partition key. For a more sophisticated implementation, it may be more desirable to use a serializer that is configurable and provides more flexibility on how to construct the payload and partition key to end users of the sink.

public PutRecordsRequestEntry apply(InputT event, SinkWriter.Context context) {
    return PutRecordsRequestEntry.builder()
            .data(SdkBytes.fromUtf8String(event.toString()))   // payload: string form of the event
            .partitionKey(String.valueOf(event.hashCode()))    // partition key: the event's hash code
            .build();
}
The sink will buffer these objects until it has collected enough of them according to the buffering hints. These buffering hints include a limit on the number of messages, total size of messages, and a timeout condition.

Next, we need to specify how to construct and make the actual batch request. This is, again, specific to the destination we are writing to, and therefore something we need to implement as part of the submitRequestEntries method that you can see in the code example below. The Async Sink invokes this method with a set of buffered request entries that should form the batch request.

For the Kinesis Data Streams sink, we need to specify how to construct and run the PutRecords request from a set of PutRecordsRequestEntry objects. In addition to making the batch request, we also need to check the response of the PutRecords request for entries that were not persisted successfully; these entries are requeued in the internal buffer so the Async Sink can retry them at a later point.

protected void submitRequestEntries(
        List<PutRecordsRequestEntry> requestEntriesToSend,
        Consumer<List<PutRecordsRequestEntry>> requestEntriesToRetry) {

    // construct and run the PutRecords request
    PutRecordsRequest batchRequest = PutRecordsRequest.builder()
            .records(requestEntriesToSend)
            .streamName(streamName) // stream name held by the sink writer
            .build();

    CompletableFuture<PutRecordsResponse> future = kinesisClient.putRecords(batchRequest);

    // check the response of the PutRecords request
    future.whenComplete(
            (response, err) -> {
                if (err != null) {
                    // entire batch request failed, all request entries need to be retried
                    requestEntriesToRetry.accept(requestEntriesToSend);
                } else if (response.failedRecordCount() > 0) {
                    // some request entries in the batch request were not persisted and need to be retried
                    List<PutRecordsRequestEntry> failedRequestEntries =
                            new ArrayList<>(response.failedRecordCount());
                    List<PutRecordsResultEntry> records = response.records();

                    for (int i = 0; i < records.size(); i++) {
                        if (records.get(i).errorCode() != null) {
                            failedRequestEntries.add(requestEntriesToSend.get(i));
                        }
                    }
                    requestEntriesToRetry.accept(failedRequestEntries);
                } else {
                    // all request entries of the batch request have been successfully persisted
                    requestEntriesToRetry.accept(Collections.emptyList());
                }
            });
}
That’s basically it. These are the main components of the sink you need to implement for a basic Kinesis Data Streams sink. These are parts that are specific to the destination and cannot be abstracted away.

For each event the sink receives, it applies the conversion and buffers the result. Once the conditions of the buffering hints are met, the sink will then construct and send a batch request. The buffering hints also help to satisfy constraints of the destination. For instance, the PutRecords API supports up to 500 records with a total size of 5 MiB and the buffering hints help to enforce these limits. From the response of the request, the sink identifies which request entries were not persisted correctly and requeues them in the internal queue. In addition, the sink will automatically adapt the throughput to the limits of the destination and slow down the entire Flink application by applying back pressure in case the destination becomes overloaded.

However, we left out a couple of details for the sake of simplicity. Some additional boilerplate code is required to assemble these individual pieces into a complete sink. For a production-ready sink, we would also need to extract the message size to support size-based buffering hints, implement serialization for request entries to obtain exactly once semantics, and add support for Flink’s Python and Table API. In addition, adding tests to the implementation is highly encouraged to obtain a well-tested implementation.

We have just used Kinesis Data Streams as an example here to explain the basic components that are required to create a simplified sink. We have implemented a complete and production-ready Kinesis Data Streams sink in Flink 1.15. If you want to sink data into a Kinesis data stream or are interested in a complete example, you can find the sources in the official Apache Flink GitHub repository. If you are curious to see additional examples, you can refer to the Amazon Kinesis Data Firehose sink that is also part of Flink 1.15 or a sample implementation of an Amazon CloudWatch sink.
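For a sense of how the finished, production-ready sink is consumed from a Flink job, the Flink 1.15 Kinesis connector exposes a builder over the Async Sink, including its buffering hints. Here is a minimal sketch; the region, stream name, and the surrounding DataStream are placeholders:

// wire the production Kinesis Data Streams sink (Flink 1.15) into a job
Properties clientProps = new Properties();
clientProps.setProperty(AWSConfigConstants.AWS_REGION, "us-east-1"); // placeholder region

KinesisStreamsSink<String> sink =
        KinesisStreamsSink.<String>builder()
                .setKinesisClientProperties(clientProps)
                .setStreamName("my-stream") // placeholder stream name
                .setSerializationSchema(new SimpleStringSchema())
                .setPartitionKeyGenerator(element -> String.valueOf(element.hashCode()))
                .setMaxBatchSize(500) // buffering hint: matches the PutRecords record limit
                .build();

stream.sinkTo(sink); // stream is a DataStream<String> defined elsewhere in the job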

What’s next?

We’ve started the work on the Async Sink to make it easier to build integrations with AWS services. But we soon realized that our contributions could be generalized to be useful to a much wider set of use cases. We are excited to see how the community is already using the Async Sink since it became available with the Flink 1.15 release. In addition to the sinks for Kinesis Data Streams and Amazon Kinesis Data Firehose that we have contributed, the community has been working on a sink for Amazon DynamoDB and Redis Streams. There are also efforts planned to refactor the Apache Cassandra sink implementation with the Async Sink.

We have been working on additional improvements for the Async Sink since the initial release. We’ve implemented a rate-limiting strategy that is slowing down the sink (and the entire Flink application) if the destination becomes overloaded. For the initial release, this strategy cannot be adapted easily and we are currently working to make it easier to configure the default strategy (FLIP-242: Introduce configurable RateLimitingStrategy for Async Sink). We are also seeking feedback from the community on potential future extensions.

Beyond connectors, we want to continue contributing to Apache Flink. There have been efforts in the community to create a Flink Kubernetes operator. We are currently looking to extend the capabilities of that operator with support for additional deployment modes (FLIP-225: Implement standalone mode support in the kubernetes operator). These efforts will help to improve the security posture of Flink deployments in a multi-tenant environment. Moreover, we are adding support for asynchronous job submission (FLIP-236: Asynchronous Job Submission). This will help to reduce friction when deploying Flink applications with expensive initialization work as part of their main method.

We are excited to continue to work with the open source community to improve Apache Flink. It's great to be part of the journey to make Apache Flink even more powerful and to enable more stream processing use cases. We are curious to see how the contributions will be used by others to get value from their streaming data. If you are using the Async Sink to create a sink of your own, please let us know on the Flink mailing list or by creating a ticket on the Apache Flink Jira. We'd love to get your feedback and thoughts.


jQuery 3.6.2 Released!

You probably weren’t expecting another release so soon, but jQuery 3.6.2 has arrived! The main impetus for this release was the introduction of some new selectors in Chrome. More on that below.

As usual, the release is available on our CDN and the npm package manager. Other third-party CDNs will probably have it soon as well, but remember that we don't control their release schedules and they will need some time. Here are the highlights for jQuery 3.6.2.

undefined and whitespace-only CSS variables

jQuery 3.6.1 introduced a minor regression where attempting to retrieve a value for a custom CSS property that didn't exist (i.e. $elem.css("--custom")) threw an error instead of returning undefined. This has been fixed in 3.6.2. Related to that, we also made sure that whitespace-only values return the same thing across all browsers. The spec requires that CSS variable values be trimmed, but browsers are inconsistent in their trimming. We now return undefined for whitespace-only values to make it consistent with older jQuery and across the different browsers.
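To illustrate the fixed behavior, here is a quick sketch (the element and property names are arbitrary):

// a custom property that was never set returns undefined instead of throwing
var $elem = $( "<div>" ).appendTo( "body" );
$elem.css( "--custom" ); // undefined; threw an error in 3.6.1

// whitespace-only values are normalized to undefined in every browser
$elem.css( "--blank", "   " );
$elem.css( "--blank" ); // undefined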

.contains() with <template>

An issue was recently reported that showed that a <template>‘s document had its documentElement property set to null, in compliance with the spec. While it made sense semantically for a template to not yet be tied to a document, it made for an unusual case, specifically in jQuery.contains() and any methods relying on it. That included manipulation and selector methods. Fortunately, the fix was simple.

It wasn’t Ralph that broke the internet

The internet experienced a bit of a rumble when Chrome recently introduced some new selectors, the most pertinent of which being :has(). It was a welcome addition, and one celebrated by the jQuery team, but a change to the spec meant that :has() used what's called "forgiving parsing". Essentially, even if the arguments for :has() were invalid, the browser returned no results instead of throwing an error. That was problematic in cases where :has() contained another jQuery selector extension (e.g. :has(:contains("Item"))) or contained itself (:has(div:has(a))). Sizzle relied on errors like that to know when to trust native querySelectorAll and when to run the selector through Sizzle. As a result, selectors that used to work were broken in all jQuery versions, dating back to the earliest releases.

And yet, this little drama didn’t last long. The Chrome team quickly implemented a workaround to fix previous jQuery versions in the vast majority of cases. Safari handled their implementation of :has() a little differently and didn’t have the same problem. But, there’s still an important issue open to determine how to address this in the CSS spec itself. The CSSWG has since resolved the issue.

jQuery has taken steps to ensure that any forgiving parsing doesn’t break future jQuery versions, even if previous jQuery versions would still be affected.


We do not expect compatibility issues when upgrading from a jQuery 3.0+ version. To upgrade, have a look at the new 3.5 Upgrade Guide. If you haven’t yet upgraded to jQuery 3+, first have a look at the 3.0 Upgrade Guide.

The jQuery Migrate plugin will help you to identify compatibility issues in your code. Please try out this new release and let us know about any issues you experienced.

If you can’t yet upgrade to 3.5+, Daniel Ruf has kindly provided patches for previous jQuery versions.


You can get the files from the jQuery CDN, or link to them directly:

https://code.jquery.com/jquery-3.6.2.js
https://code.jquery.com/jquery-3.6.2.min.js

You can also get this release from npm:

npm install jquery@3.6.2

Slim build

Sometimes you don't need ajax, or you prefer to use one of the many standalone libraries that focus on ajax requests. And often it is simpler to use a combination of CSS and class manipulation for web animations. Along with the regular version of jQuery that includes the ajax and effects modules, we've released a "slim" version that excludes these modules. The size of jQuery is very rarely a load performance concern these days, but the slim build is about 6k gzipped bytes smaller than the regular version. These files are also available in the npm package and on the CDN:

https://code.jquery.com/jquery-3.6.2.slim.js
https://code.jquery.com/jquery-3.6.2.slim.min.js

These updates are already available as the current versions on npm and Bower. Information on all the ways to get jQuery is available at https://jquery.com/download/. Public CDNs receive their copies today, please give them a few days to post the files. If you’re anxious to get a quick start, use the files on our CDN until they have a chance to update.


Thank you to all of you who participated in this release by submitting patches, reporting bugs, or testing, including sashashura, Anders Kaseorg, Michal Golebiowski-Owczarek, and the whole jQuery team.


Full changelog: 3.6.2

CSS:

Return undefined for whitespace-only CSS variable values (#5120, 8bea1dec)
Don’t trim whitespace of undefined custom property (#5105, c0db6d70)

Manipulation & Selector:

Fix DOM manipulation within template contents (#5147, 5318e311)
Update Sizzle from 2.3.7 to 2.3.8 (#5147, a1b7ae3b)
Update Sizzle from 2.3.6 to 2.3.7 (#5098, ee0fec05)

Ajax & Tests:

Remove a workaround for a Firefox XML parsing issue (965391ab)
Make Ajax tests pass in iOS 9 (d051e0e3)


Flatlogic Introduces Enhanced CSV Export Feature!

We are happy to announce that you can now export data from all of your entities in CSV format! The new feature allows users to easily export data from the admin panel tables and view it in any spreadsheet application.

CSV stands for Comma-Separated Values. It is a file format used for storing tabular data, such as a spreadsheet or database, in a plain text format. Each line in a CSV file contains one record, with each record consisting of one or more fields separated by commas. CSV files can be opened and edited in any spreadsheet application, such as Microsoft Excel or Google Sheets. They are commonly used to transfer data between different applications and systems.
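
As a quick illustration, here is how tabular records map to CSV lines (a generic JavaScript sketch, not Flatlogic’s actual export code):

var rows = [
  { id: 1, name: "Alice", email: "alice@example.com" },
  { id: 2, name: "Bob", email: "bob@example.com" }
];

// One header line, then one comma-separated record per line.
// (Real exporters also quote fields that contain commas or newlines.)
var header = Object.keys(rows[0]).join(",");
var lines = rows.map(function (row) {
  return Object.values(row).join(",");
});
var csv = [header].concat(lines).join("\n");

// csv now contains:
// id,name,email
// 1,Alice,alice@example.com
// 2,Bob,bob@example.com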

Unlock the power of your data with the enhanced CSV export feature and take advantage of this powerful tool today! If you’re having trouble, don’t hesitate to reach out to us by posting a message on our forum, Twitter, or Facebook. We’ll get back to you as soon as we can!



Amazon Joins the Open Invention Network

We’re excited to announce that Amazon has joined the Open Invention Network (OIN), a patent non-aggression community formed to safeguard essential open source technologies such as Linux and adjacent technologies.

Amazon Web Services has long benefited from the innovation arising out of the open source community. Today Amazon is committing its entire patent portfolio to the body of patents that are free to use with OIN’s defined open source projects. Once the OIN agrees to protect a piece of software, all members are granted royalty-free, community patent licenses for the use of that software. By adding our patents to the pool, we are helping to reduce the risk of patent aggression for companies that innovate with open source.

Our membership is also a statement that AWS is committed to protecting and fostering open source. All OIN members promise that they won’t use their patents against each other with respect to open source software covered under the umbrella of OIN. OIN protects essential open source technologies included under their Linux System definition. To date, the list includes 3,730 software packages covered by the OIN community license.

“Linux and open source are essential to many of our customers and a key driver of innovation across AWS. We are proud to support a broad range of open source projects, foundations, and partners, and we are committed to the long-term success and sustainability of open source as a whole,” said Nithya Ruff, director, Open Source Program Office at Amazon. “By joining OIN, we are continuing to strengthen open source communities and helping to ensure technologies like Linux remain thriving and accessible to everyone.”

AWS is investing heavily in open source communities to ensure the sustainability of the open source ecosystem as a whole. Our open source investments include hiring dedicated developers and maintainers to work on upstream projects and fine tuning performance of open source projects to run in the cloud. We also provide funding and cloud credits to open source foundations, and sponsor events and other community initiatives. For example, in May we announced $10 million in funding over three years for the Open Source Security Foundation (OpenSSF). And in November we pledged $3 million in cloud credits and dedicated engineering resources to the Cloud Native Computing Foundation (CNCF) to fund infrastructure improvements for Kubernetes.

We look forward to working with OIN, its members, and the broader open source community to further protect Linux and other foundational open source technologies.


jQuery 3.6.3 Released: A Quick Selector Fix

Last week, we released jQuery 3.6.2. There were several changes in that release, but the most important one addressed an issue with some of the new selectors recently introduced in most browsers, like :has(). We wanted to release jQuery 3.6.3 quickly because an issue was reported that revealed a problem with our original fix. More details on that below.

As usual, the release is available on our CDN and the npm package manager. Other third party CDNs will probably have it soon as well, but remember that we don’t control their release schedules and they will need some time. Here are the highlights for jQuery 3.6.3.

Using CSS.supports the right way

After the issue with :has() that was fixed in jQuery 3.6.2, we started using CSS.supports("selector(SELECTOR)") to determine whether a selector would be valid if passed directly to querySelectorAll. When CSS.supports returned false, jQuery would then fall back to its own selector engine (Sizzle). Apparently, our implementation had a bug. In CSS.supports("selector(SELECTOR)"), SELECTOR needed to be a <complex-selector> and not a <complex-selector-list>. For example:

CSS.supports("selector(div)"); // true
CSS.supports("selector(div, span)"); // false

This meant that all complex selector lists were passed through Sizzle instead of querySelectorAll. That’s not necessarily a problem in most cases, but it does mean that some level 4 selectors that were supported in browsers but not in Sizzle, like :valid, no longer worked when they were part of a selector list (e.g. "input:valid, div"). It should be noted that this currently only affects Firefox, but it will be true in all browsers as they roll out changes to CSS.supports.
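
One way to test a whole selector list, assuming a spec-compliant CSS.supports, is to wrap it in :is() so it becomes a single complex selector (a sketch of the idea, not jQuery’s exact internals):

// Wrapping a selector list in :is() turns it into a single complex selector,
// so native support for the whole list can still be detected. This relies on
// selector() parsing its argument non-forgivingly, as the CSSWG resolved.
function supportsSelectorList(list) {
  return CSS.supports("selector(:is(" + list + "))");
}

supportsSelectorList("input:valid, div"); // true where :valid is supported
supportsSelectorList(":contains('x')");   // false: jQuery extension, use Sizzle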

This has now been fixed in jQuery 3.6.3 and it is the only functional change in this release.


We do not expect compatibility issues when upgrading from a jQuery 3.0+ version. To upgrade, have a look at the new 3.5 Upgrade Guide. If you haven’t yet upgraded to jQuery 3+, first have a look at the 3.0 Upgrade Guide.

The jQuery Migrate plugin will help you to identify compatibility issues in your code. Please try out this new release and let us know about any issues you experienced.

If you can’t yet upgrade to 3.5+, Daniel Ruf has kindly provided patches for previous jQuery versions.


You can get the files from the jQuery CDN, or link to them directly:

https://code.jquery.com/jquery-3.6.3.js
https://code.jquery.com/jquery-3.6.3.min.js

You can also get this release from npm:

npm install jquery@3.6.3

Slim build

Sometimes you don’t need ajax, or you prefer to use one of the many standalone libraries that focus on ajax requests. And often it is simpler to use a combination of CSS and class manipulation for web animations. Along with the regular version of jQuery that includes the ajax and effects modules, we’ve released a “slim” version that excludes these modules. The size of jQuery is very rarely a load performance concern these days, but the slim build is about 6 KB smaller gzipped than the regular version. These files are also available in the npm package and on the CDN:

https://code.jquery.com/jquery-3.6.3.slim.js
https://code.jquery.com/jquery-3.6.3.slim.min.js

These updates are already available as the current versions on npm and Bower. Information on all the ways to get jQuery is available at https://jquery.com/download/. Public CDNs receive their copies today; please give them a few days to post the files. If you’re anxious to get a quick start, use the files on our CDN until they have a chance to update.


Thank you to all of you who participated in this release by submitting patches, reporting bugs, or testing, including Michal Golebiowski-Owczarek and the whole jQuery team.


Full changelog: 3.6.3

Build & Release:

Remove stale Insight package from custom builds (81d5bd17)
Update the 3.x-stable version to 3.6.3-pre (2c5b47c4)

Selector:

Update Sizzle from 2.3.8 to 2.3.9 (#5177, 8989500e)


How to Build Your ERP System in Minutes

Do you need an effective way to manage and optimize your business operations? Enterprise Resource Planning (ERP) systems are the perfect tools to help you do just that. With an ERP system, you can manage various tasks, including financial management, supply chain management, project management, and customer relationship management. While commercial ERP systems can be expensive and complex to purchase and implement, you can build your own in a relatively short amount of time. This article will discuss the steps to create your ERP system from scratch, using open-source software and other readily available resources. With this knowledge, you can benefit from the power of an ERP system without the high cost of purchasing a commercial solution. So, if you’re looking for an efficient, cost-effective way to manage your business operations, read on to find out how to create your ERP system from scratch.

What is an ERP?

An ERP (Enterprise Resource Planning) system is a centralized platform that helps organizations manage their business processes and data across various departments and functions, such as finance, human resources, supply chain, and operations. It streamlines multiple processes, such as accounting, inventory management, and project management, and provides real-time data visibility and insights to help organizations make informed decisions.

Modern ERP solutions typically cover key business areas such as:

Financials & Accounting. This includes financial management, budgeting, accounting, and cash flow management. 

Human Resources (HR). This includes payroll, time and attendance, recruitment, benefits, and training management. 

Supply Chain Management. This includes inventory and warehouse management, order management, supplier management, and logistics management. 

Manufacturing. This includes production planning and scheduling, quality control, and cost management. 

Customer Relationship Management (CRM). This includes sales automation, marketing automation, customer service, customer analytics, and customer support. 

Business Intelligence & Analytics. This includes data mining, data visualization, predictive analytics, machine learning, and dashboard reporting.

Building your ERP system from scratch can be a daunting task, but with the right tools and resources, it can be done in a matter of minutes. Here’s how you can get started:

Identify your business needs and objectives. The first step in building an ERP system is determining what you need it to do for your organization. Consider your current business processes and identify the pain points and inefficiencies that an ERP system could address. Determine what features and functionality are essential for your business and prioritize them accordingly.

Choose the right platform. There are many options available when it comes to choosing an ERP platform, including open-source options and commercial options. Consider factors such as the cost, scalability, integrations, and user-friendliness of the platform when making your decision.

Set up and configure the platform. Once you’ve chosen an ERP platform, it’s time to set it up and configure it to meet your specific business needs. This may involve importing data from your existing systems, creating custom modules or integrations, and setting up user roles and permissions.

Train your team. An ERP system is only as good as the people using it, so it’s essential to provide training to your team to ensure they are comfortable and proficient with the new system. This may involve in-person training sessions, online tutorials, or other resources to help your team get up to speed quickly.

Go live and monitor performance. Once your ERP system is set up and configured, it’s time to go live and start using it for your day-to-day operations. Monitor the performance of the system and make adjustments or improvements as needed.

The key steps and processes of ERP software development

Requirements Gathering

Requirements Gathering involves gathering information about the customer’s business needs and processes and identifying specific requirements for the ERP software. This includes understanding the customer’s current business operations, their goals and objectives, any existing software they use, and any changes they may need to make. The requirements-gathering process also involves identifying and documenting functional requirements, such as the types of data that need to be captured, the types of reports that need to be generated, and any other processes or activities that need to be automated. This information is used to help the ERP development team design and develop the software.

System Design

After requirements gathering, the ERP development team will design the system based on the customer’s requirements. System Design is the process of designing the software architecture and user interface to meet the customer’s requirements. This includes designing the database structure, the user interface, the business logic, and any integrations with other systems. The design process also involves creating user stories, which describe the actions a user can take with the software, and identifying potential use cases. This information is used to create the software architecture and user interface that will be used to develop the software.

Development

Once the system design is approved, the development team will begin coding the software according to the design. Development is the process of implementing and testing the software against the system design. This includes writing code to implement the user interface and business logic, as well as testing the software to ensure that it meets the customer’s requirements. This is an iterative process, which involves making changes to the code based on feedback from the customer and testing the changes to ensure that they are functioning correctly.

Deployment

Once the software is developed, it must be deployed to the customer’s environment. Deployment is the process of setting up the hardware and installing the software in the customer’s environment. This includes configuring the hardware, installing the software, and performing data migration and integration with other systems. The deployment process also involves setting up user accounts, configuring security settings, and testing the system to ensure that it is functioning correctly.

Training and Support 

After deployment, the customer must be trained on how to use the software. Training and Support is the process of providing the customer with training and technical support to ensure they can use the software effectively. This includes providing user manuals and tutorials, providing access to customer support teams, and providing ongoing maintenance and updates.

The cost of custom ERP software development

The cost of custom ERP software development depends on the scope of your project. Typically, it ranges from $50,000 to several million dollars. Factors that influence the cost include the complexity of the project, the size of the project, the development team, and the technology used. Additionally, if you are implementing an ERP system within an existing software architecture, there may be additional costs associated with integrating the existing architecture with the ERP.

The cost of custom ERP software development also depends on the features and capabilities included in the software. For example, if you need an ERP system that tracks inventory and accounting information, the cost of development will be higher than for a system that only tracks customer and sales data. Additionally, the cost of custom ERP software development may also depend on the type of software development methodology used. For example, agile software development processes typically cost more than traditional waterfall methods. Furthermore, the cost of custom ERP software development may be higher if you require the software to be hosted in the cloud, or if the system requires custom integrations with other software.

Pros & Cons of building ERP Systems

Pros of Building a Custom ERP System

There are several benefits to implementing an enterprise resource planning (ERP) system in an organization:

Improved efficiency and productivity: ERP systems streamline various business processes, such as accounting, inventory management, and project management, and provide real-time data visibility, which helps organizations make informed decisions and increase efficiency.
Enhanced data accuracy and integrity: With an ERP system, data is entered only once and is then shared across all relevant departments and functions. This eliminates the need for manual data entry and reduces the risk of errors and inconsistencies, resulting in improved data accuracy and integrity.
Greater collaboration and communication: An ERP system provides a centralized platform for all departments to share information and collaborate on projects, which improves communication and decision-making across the organization.
Increased scalability and flexibility: ERP systems are designed to be scalable and flexible, allowing organizations to easily add new modules or integrations as their needs change. This helps organizations adapt to changing market conditions and grow their business.
Reduced costs: ERP systems can help organizations save money by streamlining processes, reducing the need for manual data entry, and eliminating the need for multiple software systems.

Overall, implementing an ERP system can help organizations improve efficiency, increase data accuracy and integrity, enhance collaboration and communication, increase scalability and flexibility, and reduce costs.

Cons of Building a Custom ERP System

The main problems when building ERP systems include: 

Complexity: ERP systems are complex, and require knowledge and skills to set up and maintain them. This complexity can make it difficult for businesses to understand how to use and configure the system correctly. It can also make it difficult to troubleshoot any issues that arise. 
Cost: ERP systems can be expensive to purchase and maintain, and may require additional hardware and software investments. The cost of an ERP system can be high, and the cost of maintaining and updating the system must also be taken into consideration. 
Vendor Lock-in: Once you have chosen an ERP system, you may be locked into that vendor’s products and services, and unable to switch to another vendor without high costs. This can make it difficult to switch vendors if the current vendor’s services are no longer meeting your needs. 
Integration: ERP systems must be integrated with other systems, such as financial and CRM systems, to be effective. This can be a difficult and time-consuming process and may require additional resources and expertise. 
Security: ERP systems must be secure, as they contain valuable business data. This requires careful planning and implementation and may involve additional security measures, such as encryption and two-factor authentication.

How to create an ERP system using the Flatlogic Platform in minutes

Flatlogic is a company that provides a range of web and mobile application templates and UI components. It offers a Full-Stack Generator for building an ERP system yourself, as well as a variety of templates for an ERP system, which is a type of software that helps businesses manage and coordinate various aspects of their operations, such as financials, supply chain, manufacturing, HR, and more. These templates include a range of features and functionality that can be customized to meet the specific needs of an organization, such as CRM, project management, inventory management, and more. They are designed to be responsive and user-friendly and can be easily integrated with other systems and applications. It is worth noting that an ERP template is not a complete ERP system, but rather a starting point that can be customized and developed further to meet an organization’s specific needs.

Flatlogic Full Stack Generator is a tool that helps you create a fully functional full-stack application from scratch. It provides templates for the frontend and backend of the stack, as well as database connections and authentication features. The generator also comes with a range of customizable components, such as user interface elements, forms, and dashboards. Once you’ve chosen your templates and components, you can easily customize and extend the application to meet your specific needs.

Flatlogic Platform provides a ChatGPT+ solution that enables you to create a conversational chatbot for your website or application. The solution can be customized to your exact requirements and includes features such as natural language processing, machine learning, deep learning algorithms, and real-time analytics. Flatlogic’s ChatGPT+ solution can help you create a powerful and sophisticated chatbot that can provide your customers with an engaging and interactive experience.

How does it work?

Using the Flatlogic Platform, you can create CRUD and static applications in a few minutes. Creating a full-stack application takes just a few steps, and a static application even fewer.

Step 1. Choosing the Tech Stack

In this step, you’re setting the name of your application and choosing the stack: Frontend, Backend, and Database.

Step 2. Choosing the Starter Template

In this step, you’re choosing the design of the web app.

Step 3. Schema Editor

In this step, you can create your database schema from scratch, import an existing schema, or select one of the suggested schemas.

To import your existing database, click the Import SQL button and select your .sql file. After that, your database will be opened in the Schema Editor where you can further edit your data (add/edit/delete entities).

If you are not familiar with database design and find it difficult to understand what tables are, we have prepared some ready-made sample schemas of real applications that you can modify for your application:

E-commerce app;
Time tracking app;
Book store;
Chat (messaging) app;

Alternatively, you can generate a database schema from a description by clicking the “Generate with AI” button. Type the application’s description in the text area and hit “Send”; the schema will be ready in around 15 seconds. You may either deploy immediately or review the structure to make manual adjustments.

Step 4. Choose Integration method & Features

Next, you can connect your GitHub account and push your application code there. Or skip this step by clicking the Finish and Deploy button; in a few minutes, your application will be generated.

Benefits of ERP systems created with Flatlogic Platform

An ERP system created using the Flatlogic platform may offer several benefits, such as:

Improved efficiency

An ERP system can help streamline and automate various business processes, reducing the need for manual data entry and reducing the risk of errors. This can save time and resources, and allow employees to focus on more important tasks.

Better decision-making

An ERP system can provide real-time data and insights that can help managers and decision-makers make informed decisions.

Increased visibility

An ERP system can provide a single, centralized source of data that can be accessed by authorized users throughout an organization. This can improve communication and collaboration, and help ensure that everyone is working from the same set of accurate and up-to-date information.

Enhanced security

An ERP system can help protect sensitive data and ensure that it is only accessed by authorized users. It can also help meet regulatory compliance requirements.

Reduced costs

An ERP system can help reduce the costs associated with manual processes, such as paper-based systems and manual data entry. It can also help reduce the costs of training new employees, as they can easily access the information they need to perform their tasks.

It is worth noting that the specific benefits of an ERP system created using the Flatlogic platform will depend on the specific features and functionality implemented, as well as how well the system is customized and integrated with an organization’s existing systems and processes.

Summing Up

In conclusion, building your own ERP system can seem like a daunting task for many businesses. However, with the right tools and resources, it is possible to build a powerful, custom ERP system in minutes. By following the steps outlined in this guide, you can quickly create a customized ERP system that meets your business needs and saves you time and money, and your business can reap the benefits of a centralized, efficient platform that helps increase productivity and profitability.

