Run Open Source FFMPEG at Lower Cost and Better Performance on a VT1 Instance for VOD Encoding Workloads 

FFmpeg is an open source tool commonly used by media technology companies to encode and transcode video and audio formats. FFmpeg users can leverage a cost efficient Amazon Web Services (AWS) instance for their video on demand (VOD) encoding workloads now that AWS offers VT1 support on Amazon Elastic Compute Cloud (Amazon EC2).

VT1 offers improved visual quality for 4K video, support for a newer version of FFmpeg (4.4), expanded OS/kernel support, and bug fixes. These instances are powered by the AMD-Xilinx Alveo U30 media accelerator. A single-line change to an FFmpeg command is enough to offload the transcoding work to the Alveo U30. The Xilinx Video SDK includes an enhanced version of FFmpeg that can communicate with the hardware-accelerated transcode pipeline in Xilinx devices to deliver up to 30% lower cost per stream than Amazon EC2 GPU-based instances and up to 60% lower cost per stream than Amazon EC2 CPU-based instances.

Companies typically use EC2 CPU instances such as the C5 and C6 coupled with FFmpeg for their VOD encoding workloads. These workloads can be costly in cases where companies encode thousands of VOD assets. The cost of an EC2 workload is influenced by the number of concurrent encoding jobs that an instance can support and this subsequently affects the time it takes to encode targeted outputs. As VOD libraries expand, companies typically auto scale to increase the size or number of C5 and C6 instances or allow the instances to operate longer. In both cases, these workloads experience an increase in cost. Important note: There is no additional charge for AWS Auto Scaling. You pay only for the AWS resources needed to run your applications and Amazon CloudWatch monitoring fees.

Amazon EC2 VT1 instances are designed to accelerate real-time video transcoding and deliver low-cost transcoding for live video streams. VT1 is also a cost-effective and performance-enhancing alternative for VOD encoding workloads. Using FFmpeg as the transcoding tool, AWS performed an evaluation of VT1, C5, and C6 instances to compare price performance and speed of encode for VOD assets. When compared to C5 and C6 instances, VT1 instances can achieve up to 75% cost savings. The results show that you could operate two VT1 instances for the price of one C5 or C6 instance.

Benchmarking Method

First, let's determine the best instance type to use for our VOD workload. C5 and C6 instances are commonly used for transcoding. We used C5.9xl and C6i.8xl instances and compared them against VT1.3xl instances to transcode 4K and 1080p VOD assets. The two assets were encoded into the output targets listed in the Evaluation Targets section below. For each instance type, we measured the amount of time it took to complete the encode of those output targets.

As shown here in the screenshot from the AWS console for various instance types, VT1.3xl is the smallest instance type in the VT1 family. Even though VT1.6xl compares closely to C5 and C6 in terms of CPU/memory, we chose VT1.3xl for a closer price/performance comparison.

VT1 family instance type comparison to C5 in terms of CPU/memory.

Input data points

Sample input content

The following table summarizes the key parameters of the source content video files used in measuring encoding performance for the benchmarking.

Clip Duration    Codec                  Chroma Sampling
12 mins          H.264, High Profile    4:2:0 YUV
13 secs          H.264, High Profile    4:2:0 YUV

Evaluation Adaptive Bit Rate (ABR) targets

Adaptive bitrate streaming (ABR or ABS) is technology designed to stream files efficiently over HTTP networks. Multiple files of the same content, in different size files, are offered to a user’s video player, and the client chooses the most suitable file to play back on the device. This involves transcoding a single input stream to multiple output formats optimized for different viewing resolutions.

For the benchmarking tests, the input 4K and 1080p files were transcoded to various target resolutions that can be used to support different device and network capabilities: 1080p, 720p, 540p, and 360p. The bitrate (br) in the graphic shown here indicates the target bitrate associated with each resolution. For example, the 4K input file was transcoded to 360p resolution at a bitrate of 640 kb/s.

Figure 1: Adaptive bitrate ladder used to transcode output video files. Source:
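To make the ladder concrete, here is a hedged sketch of how a single input could be fanned out to these targets with FFmpeg. The bitrates for the upper rungs and the output naming are illustrative assumptions (only the 360p/640 rung is stated in the text), the CPU encoder x264 is used as in the benchmarks below, and the command is assembled and printed rather than executed:

```shell
# Assemble a one-input, many-output ABR transcode command (dry run: prints only).
# Resolution:bitrate pairs above 360p are assumptions, not values from this post.
INPUT=source_4k.mp4
CMD="ffmpeg -i $INPUT"
for target in 1920x1080:4500k 1280x720:2500k 960x540:1200k 640x360:640k; do
  res=${target%%:*}   # e.g. 640x360
  br=${target##*:}    # e.g. 640k
  CMD="$CMD -s $res -c:v libx264 -b:v $br out_${res}.mp4"
done
echo "$CMD"   # print the assembled command instead of running it
```

On a VT1 instance, the Xilinx Video SDK build of FFmpeg would substitute its hardware codec for libx264; the one-to-many output structure stays the same.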

Output results

Target duration analysis

The VT1.3xl instance completed the targeted encodes 15.709 seconds faster than the C5.9xl instance and 12.58 seconds faster than the C6i.8xl instance. The results in the following charts detail that the VT1.3xl instance has better speed and price performance when compared to the C5.9xl and C6i.8xl instances.

% Price Performance = ( (C5/C6 Price Performance - VT1 Price Performance) / C5/C6 Price Performance ) * 100

% Speed = ( (C5/C6 Duration - VT1 Duration) / C5/C6 Duration ) * 100
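Plugging numbers into the % Speed formula: the absolute encode durations are not listed in this post, so the values below are illustrative back-solved figures that are consistent with the published 15.709-second delta and ~52% speed figure for the 4K clip:

```shell
# Illustrative durations in seconds (assumptions back-solved from the published
# 15.709 s delta and ~52% speed figure; not measured values from this post)
C5_DUR=30.18
VT1_DUR=14.47
awk -v c5="$C5_DUR" -v vt1="$VT1_DUR" \
    'BEGIN { printf "%% Speed = %.1f%%\n", (c5 - vt1) / c5 * 100 }'
```

With these inputs the formula yields roughly 52.1%, in line with the percentage reported in the table below.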

H.264 4K Clip (13 seconds duration)
Instance Type



Duration to complete ABR targets (see Evaluation Targets above)

Speed Compared to VT1.3xl (%)
52.043284 %
46.500035 %

Instance Cost ($/hour)

Instance Cost ($/second)

Price Performance: $/(clip transcoded)

Price Performance Compared to VT1.3xl (%)
79.930662 %
71.514488 %

H.264 1080p Clip (12 minutes duration)
Instance Type



Duration to complete ABR targets (see Evaluation Targets above)

Speed Compared to VT1.3xl (%)
41.373538 %
35.641252 %

Instance Cost ($/hour)

Instance Cost ($/second)

Price Performance: $/(clip transcoded)

Price Performance Compared to VT1.3xl (%)
75.465407 %
65.735351 %

The following section explains the encoding parameters used for testing. FFmpeg was installed on 1 x C5.9xl, 1 x C6i.8xl, and 1 x VT1.3xl instances. The two input files described in the sample input content were run in parallel on each instance type, and the total duration to complete the transcoding to the various output target resolutions was calculated.

Technical specifications

EC2 Instances: 1 x C5.9xl, 1 x C6i.8xl, 1 x VT1.3xl
Video framework: FFmpeg
Video codecs: x264 (CPU), XMA (Xilinx U30)
Quality objective: x264 "faster" preset
Operating system

For C5 and C6 (x264): Amazon Linux 2 (Linux kernel 4.14)
For VT1 (Xilinx): Amazon Linux 2 (Linux kernel 5.4.0-1038-aws)

Video codec settings for encoding performance tests

Setting                C5, C6                          VT1
Output Bitrate (CBR)   See Evaluation Targets above    See Evaluation Targets above
Chroma Subsampling     YUV 4:2:0                       YUV 4:2:0
Color Bit Depth        8 bits                          8 bits



Amazon VT1 EC2 instances are typically used for live real-time encoding; however, this blog post demonstrates the VT1 VOD encoding speed and price performance advantages when compared to C5 and C6 EC2 instances. VT1 instances can encode VOD assets up to 52% faster, and achieve up to a 75% reduction in cost, when compared to C5 and C6 instances. VT1 is best utilized in workloads with VOD encoding jobs that require outputs to be completed quickly. Please visit the Amazon EC2 VT1 instances page for more details.

Securely validate business application resilience with AWS FIS and IAM

To avoid high costs of downtime, mission critical applications in the cloud need to achieve resilience against degradation of cloud provider APIs and services.

In 2021, AWS launched AWS Fault Injection Simulator (FIS), a fully managed service to perform fault injection experiments on workloads in AWS to improve their reliability and resilience. At the time of writing, FIS allows you to simulate degradation of Amazon Elastic Compute Cloud (EC2) APIs using API fault injection actions, and thus explore the resilience of workflows where EC2 APIs act as a fault boundary.

In this post we show you how to explore additional fault boundaries in your applications by selectively denying access to any AWS API. This technique is particularly useful for fully managed, “black box” services like Amazon Simple Storage Service (S3) or Amazon Simple Queue Service (SQS), where a failure of read or write operations is sufficient to simulate problems in the service. This technique is also useful for injecting failures in serverless applications without needing to modify code. While similar results could be achieved with network disruption or by modifying code with feature flags, this approach provides fine-grained degradation of an AWS API without the need to re-deploy and re-validate code.


We will explore a common application pattern: user uploads a file, S3 triggers an AWS Lambda function, Lambda transforms the file to a new location and deletes the original:

Figure 1. S3 upload and transform logical workflow: User uploads file to S3, upload triggers AWS Lambda execution, Lambda writes transformed file to a new bucket and deletes original. Workflow can be disrupted at file deletion.

We will simulate the user upload with an Amazon EventBridge rate expression triggering an AWS Lambda function which creates a file in S3:

Figure 2. S3 upload and transform implemented demo workflow: Amazon EventBridge triggers a creator Lambda function, Lambda function creates a file in S3, file creation triggers AWS Lambda execution on transformer function, Lambda writes transformed file to a new bucket and deletes original. Workflow can be disrupted at file deletion.

Using this architecture we can explore the effect of S3 API degradation during file creation and deletion. As shown, the API call to delete a file from S3 is an application fault boundary. The failure could occur, with identical effect, because of S3 degradation or because the AWS IAM role of the Lambda function denies access to the API.

To inject failures we use AWS Systems Manager (AWS SSM) automation documents to attach and detach IAM policies at the API fault boundary and FIS to orchestrate the workflow.

Each Lambda function has an IAM execution role that allows S3 write and delete access, respectively. If the transformer Lambda function fails, the S3 file will remain in the bucket, indicating a failure. Similarly, if the IAM execution role for the transformer function is denied the ability to delete a file after processing, that file will remain in the S3 bucket.


Following this blog post will incur some costs for AWS services. To explore this test application you will need an AWS account. We will also assume that you are using AWS CloudShell or have the AWS CLI installed, and that you have configured a profile with administrator permissions. With that in place you can create the demo application in your AWS account by downloading this template and deploying an AWS CloudFormation stack:

git clone
cd fis-api-failure-injection-using-iam
aws cloudformation deploy --stack-name test-fis-api-faults --template-file template.yaml --capabilities CAPABILITY_NAMED_IAM

Fault injection using IAM

Once the stack has been created, navigate to the Amazon CloudWatch Logs console and filter for /aws/lambda/test-fis-api-faults. Under the EventBridgeTimerHandler log group you should find log events once a minute writing a timestamped file to an S3 bucket named fis-api-failure-ACCOUNT_ID. Under the S3TriggerHandler log group you should find matching deletion events for those files.

Once you have confirmed object creation/deletion, let's take away the S3 trigger handler Lambda function's permission to delete files. To do this you will attach the FISAPI-DenyS3DeleteObject policy that was created with the template:

ROLE_NAME=FISAPI-TARGET-S3TriggerHandlerRole
POLICY_NAME=FISAPI-DenyS3DeleteObject

ROLE_ARN=$( aws iam list-roles --query "Roles[?RoleName=='${ROLE_NAME}'].Arn" --output text )
echo Target Role ARN: $ROLE_ARN

POLICY_ARN=$( aws iam list-policies --query "Policies[?PolicyName=='${POLICY_NAME}'].Arn" --output text )
echo Impact Policy ARN: $POLICY_ARN

aws iam attach-role-policy \
  --role-name ${ROLE_NAME} \
  --policy-arn ${POLICY_ARN}

With the deny policy in place you should now see object deletion fail and objects should start showing up in the S3 bucket. Navigate to the S3 console and find the bucket starting with fis-api-failure. You should see a new object appearing in this bucket once a minute:

Figure 3. S3 bucket listing showing files not being deleted because IAM permissions DENY file deletion during FIS experiment.

If you would like to graph the results you can navigate to AWS CloudWatch, select "Logs Insights", select the log group starting with /aws/lambda/test-fis-api-faults-S3CountObjectsHandler, and run this query:

fields @timestamp, @message
| filter NumObjects >= 0
| sort @timestamp desc
| stats max(NumObjects) by bin(1m)
| limit 20

This will show the number of files in the S3 bucket over time:

Figure 4. AWS CloudWatch Logs Insights graph showing the increase in the number of retained files in S3 bucket over time, demonstrating the effect of the introduced failure.

You can now detach the policy:

ROLE_ARN=$( aws iam list-roles --query "Roles[?RoleName=='${ROLE_NAME}'].Arn" --output text )
echo Target Role ARN: $ROLE_ARN

POLICY_ARN=$( aws iam list-policies --query "Policies[?PolicyName=='${POLICY_NAME}'].Arn" --output text )
echo Impact Policy ARN: $POLICY_ARN

aws iam detach-role-policy \
  --role-name ${ROLE_NAME} \
  --policy-arn ${POLICY_ARN}

We see that newly written files will once again be deleted but the un-processed files will remain in the S3 bucket. From the fault injection we learned that our system does not tolerate request failures when deleting files from S3. To address this, we should add a dead letter queue or some other retry mechanism.
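As a hedged illustration of that remediation, a dead-letter queue could be wired to the transformer function in CloudFormation roughly like this. The resource names are hypothetical and this snippet is not part of the demo template:

```yaml
# Hypothetical sketch: capture failed transformer invocations in an SQS DLQ
TransformerDeadLetterQueue:
  Type: AWS::SQS::Queue

TransformerFunction:
  Type: AWS::Lambda::Function
  Properties:
    # ...existing handler, runtime, and role properties...
    DeadLetterConfig:
      TargetArn: !GetAtt TransformerDeadLetterQueue.Arn
```

Messages landing in the queue could then be re-driven once the S3 API (or, in our experiment, the IAM permission) recovers.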

Note: if the Lambda function does not return a success state on invocation, EventBridge will retry. In our Lambda functions we are cost conscious and explicitly capture the failure states to avoid excessive retries.

Fault injection using SSM

To use this approach from FIS and to always remove the policy at the end of the experiment, we first create an SSM document to automate adding a policy to a role. To inspect this document, open the SSM console, navigate to the "Documents" section, find the FISAPI-IamAttachDetach document under "Owned by me", and examine the "Content" tab (make sure to select the correct region). This document takes the name of the Role you want to impact and the Policy you want to attach as parameters. It also requires an IAM execution role that grants it the power to list, attach, and detach specific policies to specific roles.
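The document's structure can be sketched as follows. This is a hedged reconstruction, not the literal document content: step names and parameter defaults are assumptions, but the attach/wait/detach flow matches the description above:

```yaml
# Hedged reconstruction of the FISAPI-IamAttachDetach automation flow
schemaVersion: '0.3'
assumeRole: '{{ AutomationAssumeRole }}'
parameters:
  AutomationAssumeRole: { type: String }
  TargetApplicationRoleName: { type: String }
  TargetResourceDenyPolicyArn: { type: String }
  Duration: { type: String, default: PT2M }
mainSteps:
  - name: AttachDenyPolicy
    action: aws:executeAwsApi
    inputs:
      Service: iam
      Api: AttachRolePolicy
      RoleName: '{{ TargetApplicationRoleName }}'
      PolicyArn: '{{ TargetResourceDenyPolicyArn }}'
  - name: WaitForDuration
    action: aws:sleep
    inputs:
      Duration: '{{ Duration }}'
  - name: DetachDenyPolicy
    action: aws:executeAwsApi
    inputs:
      Service: iam
      Api: DetachRolePolicy
      RoleName: '{{ TargetApplicationRoleName }}'
      PolicyArn: '{{ TargetResourceDenyPolicyArn }}'
```

Because the detach step is part of the document itself, the deny policy is removed even if the experiment is started outside FIS.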

Let's run the SSM automation document from the console by selecting "Execute Automation". Determine the ARN of the FISAPI-SSM-Automation-Role from CloudFormation or by running:

ASSUME_ROLE_ARN=$( aws iam list-roles --query "Roles[?RoleName=='FISAPI-SSM-Automation-Role'].Arn" --output text )
echo Assume Role ARN: $ASSUME_ROLE_ARN

Use FISAPI-SSM-Automation-Role, a duration of 2 minutes expressed in ISO8601 format as PT2M, the ARN of the deny policy, and the name of the target role FISAPI-TARGET-S3TriggerHandlerRole:

Figure 5. Image of parameter input field reflecting the instructions in blog text.

Alternatively execute this from a shell:

ASSUME_ROLE_NAME=FISAPI-SSM-Automation-Role
ROLE_NAME=FISAPI-TARGET-S3TriggerHandlerRole
POLICY_NAME=FISAPI-DenyS3DeleteObject

ASSUME_ROLE_ARN=$( aws iam list-roles --query "Roles[?RoleName=='${ASSUME_ROLE_NAME}'].Arn" --output text )
echo Assume Role ARN: $ASSUME_ROLE_ARN

ROLE_ARN=$( aws iam list-roles --query "Roles[?RoleName=='${ROLE_NAME}'].Arn" --output text )
echo Target Role ARN: $ROLE_ARN

POLICY_ARN=$( aws iam list-policies --query "Policies[?PolicyName=='${POLICY_NAME}'].Arn" --output text )
echo Impact Policy ARN: $POLICY_ARN

aws ssm start-automation-execution \
  --document-name FISAPI-IamAttachDetach \
  --parameters "{
    \"AutomationAssumeRole\": [ \"${ASSUME_ROLE_ARN}\" ],
    \"Duration\": [ \"PT2M\" ],
    \"TargetResourceDenyPolicyArn\": [ \"${POLICY_ARN}\" ],
    \"TargetApplicationRoleName\": [ \"${ROLE_NAME}\" ]
  }"

Wait two minutes and then examine the content of the S3 bucket starting with fis-api-failure again. You should now see two additional files in the bucket, showing that the policy was attached for 2 minutes during which files could not be deleted, and confirming that our application is not resilient to S3 API degradation.

Permissions for injecting failures with SSM

Fault injection with SSM is controlled by IAM, which is why you had to specify the FISAPI-SSM-Automation-Role:

Figure 6. Visual representation of IAM permission used for fault injections with SSM.

This role needs to contain an assume role policy statement for SSM to allow assuming the role:

- Action:
    - 'sts:AssumeRole'
  Effect: Allow
  Principal:
    Service:
      - ""

The role also needs to contain permissions to describe roles and their attached policies with an optional constraint on which roles and policies are visible:

- Sid: GetRoleAndPolicyDetails
  Effect: Allow
  Action:
    - 'iam:GetRole'
    - 'iam:GetPolicy'
    - 'iam:ListAttachedRolePolicies'
  Resource:
    # Roles
    - !GetAtt EventBridgeTimerHandlerRole.Arn
    - !GetAtt S3TriggerHandlerRole.Arn
    # Policies
    - !Ref AwsFisApiPolicyDenyS3DeleteObject

Finally, the SSM role needs to allow attaching and detaching a policy document. This requires:

an ALLOW statement
a constraint on the policies that can be attached
a constraint on the roles that can be attached to

In the role we collapse the first two requirements into an ALLOW statement with a condition constraint for the Policy ARN. We then express the third requirement in a DENY statement that will limit the ‘*’ resource to only the explicit role ARNs we want to modify:

- Sid: AllowOnlyTargetResourcePolicies
  Effect: Allow
  Action:
    - 'iam:DetachRolePolicy'
    - 'iam:AttachRolePolicy'
  Resource: '*'
  Condition:
    ArnEquals:
      'iam:PolicyARN':
        # Policies that can be attached
        - !Ref AwsFisApiPolicyDenyS3DeleteObject
- Sid: DenyAttachDetachAllRolesExceptApplicationRole
  Effect: Deny
  Action:
    - 'iam:DetachRolePolicy'
    - 'iam:AttachRolePolicy'
  NotResource:
    # Roles that can be attached to
    - !GetAtt EventBridgeTimerHandlerRole.Arn
    - !GetAtt S3TriggerHandlerRole.Arn

We will discuss security considerations in more detail at the end of this post.

Fault injection using FIS

With the SSM document in place you can now create an FIS template that calls the SSM document. Navigate to the FIS console and filter for FISAPI-DENY-S3PutObject. You should see that the experiment template passes the same parameters that you previously used with SSM:

Figure 7. Image of FIS experiment template action summary. This shows the SSM document ARN to be used for fault injection and the JSON parameters passed to the SSM document specifying the IAM Role to modify and the IAM Policy to use.

You can now run the FIS experiment and, after a couple of minutes, once again see new files in the S3 bucket.

Permissions for injecting failures with FIS and SSM

Fault injection with FIS is controlled by IAM, which is why you had to specify the FISAPI-FIS-Injection-ExperimentRole:

Figure 8. Visual representation of IAM permission used for fault injections with FIS and SSM. It shows the SSM execution role permitting access to use SSM automation documents as well as modify IAM roles and policies via the SSM document. It also shows the FIS execution role permitting access to use FIS templates, as well as the pass-role permission to grant the SSM execution role to the SSM service. Finally it shows the FIS user needing to have a pass-role permission to grant the FIS execution role to the FIS service.

This role needs to contain an assume role policy statement for FIS to allow assuming the role:

- Action:
    - 'sts:AssumeRole'
  Effect: Allow
  Principal:
    Service:
      - ""

The role also needs permissions to list and execute SSM documents:

- Sid: RequiredReadActionsforAWSFIS
  Effect: Allow
  Action:
    - 'cloudwatch:DescribeAlarms'
    - 'ssm:GetAutomationExecution'
    - 'ssm:ListCommands'
    - 'iam:ListRoles'
  Resource: '*'
- Sid: RequiredSSMStopActionforAWSFIS
  Effect: Allow
  Action:
    - 'ssm:CancelCommand'
  Resource: '*'
- Sid: RequiredSSMWriteActionsforAWSFIS
  Effect: Allow
  Action:
    - 'ssm:StartAutomationExecution'
    - 'ssm:StopAutomationExecution'
  Resource:
    - !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:automation-definition/${SsmAutomationIamAttachDetachDocument}:$DEFAULT'

Finally, remember that the SSM document needs to use a Role of its own to execute the fault injection actions. Because that Role is different from the Role under which we started the FIS experiment, we need to explicitly allow SSM to assume that role with a PassRole statement which will expand to FISAPI-SSM-Automation-Role:

- Sid: RequiredIAMPassRoleforSSMADocuments
  Effect: Allow
  Action: 'iam:PassRole'
  Resource: !Sub 'arn:aws:iam::${AWS::AccountId}:role/${SsmAutomationRole}'

Secure and flexible permissions

So far, we have used explicit ARNs for our guardrails. To expand flexibility, we can use wildcards in our resource matching. For example, we might change the Policy matching from:

# Explicitly listed policies - secure but inflexible
- !Ref AwsFisApiPolicyDenyS3DeleteObject

or the equivalent:

# Explicitly listed policies - secure but inflexible
- !Sub 'arn:${AWS::Partition}:iam::${AWS::AccountId}:policy/${FullPolicyName}'

to a wildcard notation like this:

# Wildcard policies - secure and flexible
- !Sub 'arn:${AWS::Partition}:iam::${AWS::AccountId}:policy/${PolicyNamePrefix}*'

If we set PolicyNamePrefix to FISAPI-DenyS3 this would now allow invoking FISAPI-DenyS3PutObject and FISAPI-DenyS3DeleteObject but would not allow using a policy named FISAPI-DenyEc2DescribeInstances.
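The prefix behavior can be illustrated with a small pure-shell analogy (no AWS calls; `matches_prefix` is just a local helper for illustration, not an IAM API):

```shell
# Local illustration of prefix matching; IAM evaluates the real ARN wildcard
matches_prefix() { case "$2" in "$1"*) echo allow ;; *) echo deny ;; esac; }

matches_prefix FISAPI-DenyS3 FISAPI-DenyS3PutObject           # prints: allow
matches_prefix FISAPI-DenyS3 FISAPI-DenyEc2DescribeInstances  # prints: deny
```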

Similarly, we could change the Resource matching from:

# Explicitly listed roles - secure but inflexible
- !GetAtt EventBridgeTimerHandlerRole.Arn
- !GetAtt S3TriggerHandlerRole.Arn

to a wildcard equivalent like this:

# Wildcard roles - secure and flexible
- !Sub 'arn:${AWS::Partition}:iam::${AWS::AccountId}:role/${RoleNamePrefixEventBridge}*'
- !Sub 'arn:${AWS::Partition}:iam::${AWS::AccountId}:role/${RoleNamePrefixS3}*'

and setting RoleNamePrefixEventBridge to FISAPI-TARGET-EventBridge and RoleNamePrefixS3 to FISAPI-TARGET-S3.

Finally, we would also change the FIS experiment role to allow SSM documents based on a name prefix by changing the constraint on automation execution from:

- Sid: RequiredSSMWriteActionsforAWSFIS
  Effect: Allow
  Action:
    - 'ssm:StartAutomationExecution'
    - 'ssm:StopAutomationExecution'
  Resource:
    # Explicitly listed resource - secure but inflexible
    # Note: the $DEFAULT at the end could also be an explicit version number
    # Note: the 'automation-definition' is automatically created from 'document' on invocation
    - !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:automation-definition/${SsmAutomationIamAttachDetachDocument}:$DEFAULT'

to:

- Sid: RequiredSSMWriteActionsforAWSFIS
  Effect: Allow
  Action:
    - 'ssm:StartAutomationExecution'
    - 'ssm:StopAutomationExecution'
  Resource:
    # Wildcard resources - secure and flexible
    # Note: the 'automation-definition' is automatically created from 'document' on invocation
    - !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:automation-definition/${SsmAutomationDocumentPrefix}*'

and setting SsmAutomationDocumentPrefix to FISAPI-. Test this by updating the CloudFormation stack with a modified template:

aws cloudformation deploy --stack-name test-fis-api-faults --template-file template2.yaml --capabilities CAPABILITY_NAMED_IAM

Permissions governing users

In production you should not be using administrator access to use FIS. Instead we create two roles FISAPI-AssumableRoleWithCreation and FISAPI-AssumableRoleWithoutCreation for you (see this template). These roles require all FIS and SSM resources to have a Name tag that starts with FISAPI-. Try assuming the role without creation privileges and running an experiment. You will notice that you can only start an experiment if you add a Name tag, e.g. FISAPI-secure-1, and you will only be able to get details of experiments and templates that have proper Name tags.

If you are working with AWS Organizations, you can add further guard rails by defining SCPs that control the use of the FISAPI-* tags similar to this blog post.


For this solution we are choosing to attach policies instead of permission boundaries. The benefit of this is that you can attach multiple independent policies and thus simulate multi-step service degradation. However, this means that it is possible to increase the permission level of a role. While there are situations where this might be of interest, e.g. to simulate security breaches, please implement a thorough security review of any fault injection IAM policies you create. Note that modifying IAM Roles may trigger events in your security monitoring tools.

The AttachRolePolicy and DetachRolePolicy calls from AWS IAM are eventually consistent, meaning that in some cases permission propagation when starting and stopping fault injection may take up to 5 minutes each.


To avoid additional cost, delete the content of the S3 bucket and delete the CloudFormation stack:

# Clean up policy attachments just in case
CLEANUP_ROLES=$(aws iam list-roles --query "Roles[?starts_with(RoleName,'FISAPI-')].RoleName" --output text)
for role in $CLEANUP_ROLES; do
  CLEANUP_POLICIES=$(aws iam list-attached-role-policies --role-name $role --query "AttachedPolicies[?starts_with(PolicyName,'FISAPI-')].PolicyArn" --output text)
  for policy in $CLEANUP_POLICIES; do
    echo Detaching policy $policy from role $role
    aws iam detach-role-policy --role-name $role --policy-arn $policy
  done
done
# Delete S3 bucket content
ACCOUNT_ID=$( aws sts get-caller-identity --query Account --output text )
S3_BUCKET_NAME=fis-api-failure-${ACCOUNT_ID}
aws s3 rm --recursive s3://${S3_BUCKET_NAME}
aws s3 rb s3://${S3_BUCKET_NAME}
# Delete CloudFormation stack
aws cloudformation delete-stack --stack-name test-fis-api-faults
aws cloudformation wait stack-delete-complete --stack-name test-fis-api-faults


AWS Fault Injection Simulator provides the ability to simulate various external impacts to your application to validate and improve resilience. We’ve shown how combining FIS with IAM to selectively deny access to AWS APIs provides a generic path to explore fault boundaries across all AWS services. We’ve shown how this can be used to identify and improve a resilience problem in a common S3 upload workflow. To learn about more ways to use FIS, see this workshop.

About the authors:

Dr. Rudolf Potucek

Dr. Rudolf Potucek is Startup Solutions Architect at Amazon Web Services. Over the past 30 years he gained a PhD and worked in different roles including leading teams in academia and industry, as well as consulting. He brings experience from working with academia, startups, and large enterprises to his current role of guiding startup customers to succeed in the cloud.

Rudolph Wagner

Rudolph Wagner is a Premium Support Engineer at Amazon Web Services who holds the CISSP and OSCP security certifications, in addition to being a certified AWS Solutions Architect Professional. He assists internal and external Customers with multiple AWS services by using his diverse background in SAP, IT, and construction.

Why Do Some Programmers Say Frontend Is Easier Than Backend?

So, you're wondering if frontend development is easier than backend development. Truth be told, the question is harder than it looks. Frontend and backend development are two fairly complicated aspects of web development in 2023. Fortunately for you, we'll determine which type of development is more challenging in this article: frontend or backend!

Do you ever wonder why so many backend developers say that frontend is easier? Have you ever asked yourself questions like: What makes frontend easier than backend? What skills are needed to become a successful frontend developer? What techniques do developers use to make the frontend development process easier? Discover the answers in this article.

It is well known that backend development is more difficult than frontend development. A study by the University of Oxford found that “Backend developers tend to have a higher workload than frontend developers, due to the complexity of the programming language used”. The same study also noted that “The complexity of the backend language also means that backend developers need to have a higher level of technical knowledge than frontend developers”. 

In this article, you will learn why so many backend developers say that frontend is easier, what skills are needed to become a successful frontend developer, and what techniques are used to make the frontend development process easier. After reading this article, you will have a better understanding of why frontend is easier than backend and why it is important to learn both. 

Is Frontend Development Easier Than Backend?

During the past decade, frontend development has grown in popularity as more engineers switch from backend development to frontend. Because it is more accessible and is perceived as being “easier” than backend development, frontend programming tends to attract more developers. The primary reason so many developers like the frontend is its simplicity. Frontend development has a lower learning curve and calls for less technical knowledge than backend programming, which makes it possible for developers to get started straight away with just a basic understanding of HTML, CSS, and JavaScript.

Moreover, several frontend frameworks, such as React and Vue, have made it simpler for developers to create working prototypes of websites fast. The tools available are another reason frontend development is perceived as being simpler. Website development is made simpler for developers by the abundance of tools, libraries, and frameworks available. As an illustration, CSS preprocessors like Sass and LESS may significantly cut down on the time required to develop and maintain CSS code. The same is true for JavaScript build tools like webpack and gulp, which may assist developers in writing task automation and optimized code.

A final consideration is that frontend development is more visible and tangible than backend development. Developers can more easily comprehend and interact with the code they write, since they can view the results of their labor in real time in the browser. This can be highly motivating, and it also makes debugging and troubleshooting much easier. In conclusion, many backend engineers assert that the frontend is simpler because it is more approachable, well supplied with tools, and more visible and concrete. Because of this, a lot of developers are switching from the backend to the frontend, and this trend is likely to continue.

What is Frontend & Backend Development?

Frontend development (client-side development) refers to the development of the parts of a website that the user can see and interact with. This includes code that is responsible for the look, feel, and behavior of the website and includes HTML, CSS, and JavaScript. 

Backend development (server-side development) is the creation of sections of a website that the user does not directly view or interact with. This contains program code for databases, servers, and APIs that manage and handle the website’s data.

What’s the Difference?

The main distinction between frontend and backend development is that the former concentrates on the external components of the website, whilst the latter does so for its internal components. Backend development is in charge of data processing and storage, whereas frontend development is in charge of the appearance, feel, and functionality of the website.

Frontend developers build the aesthetics, style, and interaction of the user interface using HTML, CSS, and JavaScript. The logic, databases, and APIs that power the user interface are created by backend developers in languages like PHP, Python, Java, and Ruby. Backend development is concerned with how the user interface works and interacts with the server-side logic and data, whereas frontend development is concerned with how the user interface appears and feels.

Why Is Frontend Harder Than Backend?

Why is it that some claim frontend development is more difficult than backend development these days? There are several reasons; let's look at them.

Keeping up with a rapidly changing environment

The rapid advancements in frontend development have given it a reputation for being challenging. Every few months, new frameworks and technologies like React, Angular, and Vue are released to improve development. These continual updates mean that staying up to date requires constantly learning new lessons and courses. Angular was once the most popular frontend framework, but now React is the preferred choice for many companies. Even Netflix has gone back to using plain, vanilla JavaScript due to performance concerns. With no indication that these advances will slow down soon, it's important to remember how quickly the industry is moving the next time someone claims that frontend development is easy.

More information to consider

In 2023, frontend development may prove just as challenging as backend development. With opinionated frameworks, state management systems, and intricate logic, there should be no assumption that the workload for backend developers is greater than that of frontend developers. However, frontend development entails more than just programming: it demands creativity, aesthetics, and an understanding of user experience. This includes being adept with design techniques, creating prototypes, and making sure the design looks professional. Furthermore, it requires thinking about how users will interact with the software in order to deliver the best user experience.

More tools to learn

As the workplace evolves, so too must your skillset. Keeping up with the latest tools, such as Webpack, React, Yarn, and NPM can be a challenge, as you may find yourself constantly learning new technologies, leaving less time to learn other programming topics, such as different paradigms, languages, and best practices. Nevertheless, it is important to remain up-to-date and not be discouraged by the ever-changing landscape.

Test suites and testing

Testing the frontend of a web application is more difficult and tedious than testing the back end. In addition to checking the theoretical soundness of functions and objects and assessing edge scenarios, frontend testing requires tests for design components, logical operations, and state changes. As a result, manual testing is often preferred over building a unit test suite, which is more time-consuming and frustrating to maintain. All in all, frontend testing tends to be the more complex and laborious of the two.

Why Is Backend Harder Than Frontend?

Just as with frontend development, there are specific reasons why backend development is considered the harder of the two.

The higher learning curve for beginners

Compared to frontend development, learning backend programming can be more difficult. To build a website’s frontend, only HTML and CSS are needed. However, the backend requires a deep understanding of programming languages. This can be daunting for newcomers and lead them to believe that frontend development is easier. In reality, the learning curve for the backend is much steeper than for the frontend.

Backend is less visually rewarding than the frontend

The backend can be just as elegant as the frontend once you know where to look. However, with frontend development you can often see the effects of your changes in real time, whereas feedback from backend changes is less immediate and can be unpredictable, making it more challenging for a beginner.

Many backend languages

The complexity of learning backend languages can be attributed to their variety and the need to comprehend multiple languages. While frontend development only requires knowledge of JavaScript, HTML, and CSS, backend development often involves working in several languages across the various stacks available. Although the concepts are generally the same, transitioning between languages can be challenging, leading many to stick with the language they are most comfortable with or switch only when necessary for a better career opportunity.


So, which is harder, the backend or the frontend? The truth is that both types of development are difficult, but for different reasons. Frontend development requires an understanding of design concepts and user experience, as well as the ability to produce an aesthetically pleasing user interface. Backend development calls for awareness of server architecture and security, plus strong knowledge of technical languages and frameworks. In the end, both are essential for a successful product, and each demands a unique set of talents. Understanding the distinctions between the two, and the tasks each is suited for, will help you decide which path is right for you.

The post Why Do Some Programmers Say Frontend Is Easier Than Backend? appeared first on Flatlogic Blog.

ECMAScript 2023 for President

#​627 — February 24, 2023

Read on the Web

JavaScript Weekly

Strudel REPL: Live JavaScript Music in the Browser — This is a lot of fun. It’s a little online sandbox for putting together small musical experiments written in JavaScript. Use the ‘shuffle’ button at the top right until you find something you like the sound of. There’s a tutorial on building your own, too. If you make any good ones, send them in to us and we might link to them.

Felix Roos et al.

The ECMAScript 2023 Language Specification — It’s that time of the year again. The latest ECMAScript spec, which standardizes what we know as JavaScript to some extent, is now in draft. This is not bedtime reading, of course, but is a fundamental part of what makes the language tick.

ECMA International

If you do decide to brave the spec, this guide on ‘how to read the ECMAScript specification’ will get you on the right path.

Understand JavaScript in the Background — Learn what happens to your JavaScript when the user closes their browser and how to detect these changes to execute code later. Plus learn the new Web Push API to give your web app new powers!

Frontend Masters sponsor

Let’s Build a Chrome Extension That Steals Everything — Indulging in what they call “DIY whole hog data exfiltration”, Matt, the author of Building Browser Extensions demonstrates that in spite of Manifest v3, a whole lot of bad stuff is still possible when it comes to building browser extensions. Be aware of it and don’t actually do it, of course.

Matt Frisbie

What to Expect from Vue in 2023 — Vue.js creator Evan You explains how Vue 3 is different from Vue 2, and in particular how its use of the Virtual DOM has evolved.

Richard MacManus (The New Stack)


Node.js 19.7.0 (Current) landed this week complete with npm 9.5, a new URL parser called Ada and (experimental) support for packing up Node apps into a single distributable executable.

Colin Ihrig of the Node.js core team gave a ▶️ ‘State of Node.js Core’ presentation earlier this week.

A look at how Storybook 7 has significantly revamped Storybook Docs. This is a great way to show UI components off.


Next.js 13.2

Turborepo 1.8
↳ Rust-powered build system for JS/TS.

Mermaid 10.0
↳ The popular text to diagram rendering toolkit goes ESM only.

Node.js 18.14.2 (LTS)

Preact 10.13

Angular 15.2

Articles & Tutorials

Sandboxing JavaScript Code — Val Town is an interesting, rather minimalist platform for running JavaScript in the cloud, and if you’re going to let folks run JavaScript on your server, good sandboxing is a must.

Andrew Healey

An Intro to ‘HTML-First’ Frontend Frameworks — The post defines HTML-first front-end frameworks as ones that prioritize sending complete functional HTML versus a JavaScript bundle and looks at some of the different approaches taken by different frameworks/tools like Qwik, Marko, Astro, Eleventy, Fresh and Enhance.

SitePen Engineering

▶  Astro 2.0, Island Architecture, and React with Fred K. Schott — Fred talks with us about how Astro uses an HTML-first approach to create content-focused websites and new Astro v2 features.

Whiskey Web and Whatnot sponsor podcast

▶  NPM Library Speedrun – 90 Minutes to Build, CI & Publish — You could throw a bare project up on npm in a few minutes, but it’s fun watching Matt do it with testing, CI, TypeScript, writing a README and building something useful. (He starts around 17-minutes into the video.)

Matt Pocock

You Don’t Need Ruby on Rails to Start Using Hotwire — Hotwire, an ‘HTML over the wire’ approach to making Web pages more dynamic (explained here), is closely tied to the Ruby on Rails framework, but you can use it to add dynamism to a static site with no Ruby in sight, as demonstrated here.

Akshay Khot

Performance Analysis of Type-Driven Data Validation Libraries — The author’s tRPC/React project was getting sluggish and after some investigation he narrowed it down to Zod and decided to benchmark it against Superstruct, Yup, Light-Type and Typebox.

Nick Lucas

Why 2023 is the Time to Migrate from AngularJS to Angular — AngularJS went EOL a year ago, so hopefully this is old news.

Bartosz and Łukasz

Migrating from Enzyme to React Testing Library

Priscila Oliveria and Scott Cooper (Sentry)

Building a Lightbox with the <dialog> Element


Code & Tools

Vuestic 1.6: An Open Source UI Library for Vue 3 — A library of more than 50 customizable components. v1.6 is a big release focused on Tailwind CSS and Nuxt support. Official homepage.


React Libraries for 2023 — The React ecosystem is so large that it’s helpful to be presented with some sound, standard options when selecting libraries for a new project. This is the latest annual update of an established list Robin maintains.

Robin Wieruch

Build Business Software 10x Faster with Retool — Trusted by Amazon and Plaid. Try it for free (up to 5 users) or get $25,000 in credits for paid plans if you’re an early-stage startup.

Retool sponsor

Kobalte: A UI Toolkit for SolidJS — The components are unstyled and follow WAI-ARIA authoring practices. You also have granular access to each component part, allowing you to add event listeners, props, etc. GitHub repo.


Urban Bot 1.0: React-Based Universal Chatbot Library — Rather than messing around with the APIs for Telegram, Discord, Slack or Facebook Messenger, write React components instead and get chatbot functionality on each.

Urban Bot

OrgChart 3.5
↳ Render org charts. (Lots of demos.)

Sortable 2.0
↳ Make tables sortable with class="sortable"

Ruby2JS 5.1
↳ Ruby to JavaScript transpiler.

Don’t Let Your Issue Tracker Be a Four-Letter Word. Use Shortcut

Shortcut (formerly Clubhouse) sponsor

RxDB 14.1
↳ Offline-first, reactive database for JS apps.

tRPC 10.12
↳ End-to-end typesafe APIs made easy.

JavaScript Charting 3.4

OpenPGP.js 5.7

React Testing Library 14.0

Ember.js 4.11

Jobs

Software Engineer — Join our happy team. Stimulus is a social platform started by Sticker Mule to show what’s possible if your mission is to increase human happiness.


Find JavaScript Jobs with Hired — Hired makes job hunting easy: instead of chasing recruiters, companies approach you with salary details up front. Create a free profile now.


Elsewhere This Week

Did you know we have several other newsletters we publish each week? We thought we’d do a quick roundup of what’s going on with each of them in case they’re of interest to you:

REACT: In this week’s React Status we wondered if ‘React is the new IBM’, got to build a word scrambling game, and detected unnecessarily mounted React components.

POSTGRES: In this week’s Postgres Weekly we learnt how to create Mermaid diagrams from SQL, create type constraints at the database level, and found out what the real ‘billion dollar mistake’ is. It’s not nulls!

FRONTEND: In this week’s Frontend Focus we focused on iOS and iPadOS’s forthcoming Web Push API support, creating accessible carousels, and a surprisingly easy way to create a ‘dark mode’ theme for your site.

NODE.JS: In this week’s Node Weekly we discovered two new ways to make type-safe MongoDB queries, and found a way to have Node play endless clicker games on your behalf.

RUBY: In this week’s Ruby Weekly, a Rails core team member told us why we shouldn’t indulge in monkey patching, we learnt to create an intelligent FAQ with GPT-3, and met the Rails Foundation’s new executive director.

JAMSTACK: In this week’s JAMstacked, Brian Rinaldi pondered React’s role in the whole Jam space, and shared a few useful tools with us.

GO: In this week’s Go Weekly, we played a game of Solitaire on the terminal, discovered a Go-powered Web browser, and learnt more about what’s new in Go 1.20.

RUST: Sorry, we don’t have a Rust newsletter, but we get asked that a lot!

Thanks for reading, thanks to everyone who submitted items for this issue, and thanks to Frontend Masters, Whiskey Web and Whatnot, Shortcut, and Retool for supporting this issue. See you next week!

Validating OpenTelemetry Configuration Files with the otel-config-validator

OpenTelemetry provides open source APIs, libraries, and agents to collect distributed traces and metrics for application monitoring. The AWS Distro for OpenTelemetry (ADOT) provides a secure production-ready distribution of OpenTelemetry (OTel) that allows for instrumentation and collecting of metrics and traces.

The ADOT collector can be used to collect and export telemetry data. Within the collector, observability pipelines are defined using a YAML configuration file.

This figure shows a typical architecture for collecting and exporting telemetry using ADOT. The ADOT collector has a number of components to select from that support common observability patterns. For example, the ADOT collector could be configured with a ‘prometheusreceiver’ component to collect Prometheus metrics, and a ‘prometheusremotewriteexporter’ component to export the metrics to a supported backend such as Amazon Managed Service for Prometheus.
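As an illustration, a minimal collector configuration for that Prometheus-to-Amazon Managed Service for Prometheus pattern could look like the sketch below. Note that in the YAML configuration these components are referenced by the keys prometheus and prometheusremotewrite; the scrape target, region, and workspace endpoint are placeholders:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'sample-app'          # placeholder scrape job
          scrape_interval: 30s
          static_configs:
            - targets: ['localhost:8080'] # placeholder target

extensions:
  sigv4auth:                              # SigV4 auth for the AMP endpoint
    region: us-west-2

exporters:
  prometheusremotewrite:
    # placeholder workspace endpoint
    endpoint: https://aps-workspaces.us-west-2.amazonaws.com/workspaces/WORKSPACE_ID/api/v1/remote_write
    auth:
      authenticator: sigv4auth

service:
  extensions: [sigv4auth]
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
```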

When creating an observability pipeline from various components, there is potential for syntactic errors in the YAML file that would prevent your pipeline from operating effectively. A non-functional telemetry pipeline can cause gaps in observability that lead to application downtime.

As part of an ongoing partnership between the ADOT team and observability provider Lightstep, an open source OTel configuration validator has been created by Lightstep that supports all ADOT components. The goal of the validator is to assist with error-checking OTel configuration files during development so potential misconfigurations can be addressed before causing issues. The project, released under the Apache-2.0 license, can be found on GitHub with full source code and usage instructions. In this blog post, we will show an example of how you can use the otel-config-validator in both GUI mode and CLI mode to validate an OpenTelemetry config file.


The validator has two possible modes of operation.

A WebAssembly GUI that you can compile and view in the local browser.
A CLI tool you can use in the terminal.

Local GUI

You can build and deploy the validator as a WebAssembly application for GUI interaction:

Open up a local terminal and run the following:

    $ make
    $ go run cmd/server/main.go
    $ open

Once built and running, the application looks like this:

There are a few example configurations provided and you can paste in your own OTel configuration YAML to validate.

If you use an invalid configuration the error will be displayed. Here is an example of an incorrect configuration. In this configuration we try to build a pipeline using a memory_limiter processor component, but that component is not defined as a processor:

    receivers:
      otlp:
    processors:
      batch:
    exporters:
      awsemf:
        region: 'us-west-2'
        enabled: true
    service:
      pipelines:
        metrics:
          receivers:
            - otlp
          processors:
            - batch
            - memory_limiter
          exporters:
            - awsemf

This YAML configuration would be flagged by the validator as in this screenshot, which shows the error message displayed when validating an incorrect YAML configuration:

If we correct the YAML to include the memory_limiter in the config, we will no longer get the error and our pipeline will now be able to build correctly for telemetry export:

    receivers:
      otlp:
    processors:
      batch:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
    exporters:
      awsemf:
        region: 'us-west-2'
        enabled: true
    service:
      pipelines:
        metrics:
          receivers:
            - otlp
          processors:
            - batch
            - memory_limiter
          exporters:
            - awsemf

The application confirms that the config is now valid:

Command Line

The second method of deployment and operation for the validator is as a command-line utility. Building and running would look like this:

The following commands should be run in your local terminal. Be sure to substitute ‘/path/to/config’ with the full path of the OTel configuration file you are trying to validate.

$ go build -o otel-config-validator ./cmd/cli/main.go
$ ./otel-config-validator -f /path/to/config


OpenTelemetry Collector Configuration file `test-adot.yml` is valid.
Pipeline metrics:
Receivers: [otlp]
Processors: []
Exporters: [logging]
Pipeline traces:
Receivers: [otlp]
Processors: []
Exporters: [awsxray]
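For reference, a minimal `test-adot.yml` that would produce the pipeline summary above might look like the following sketch (assuming default otlp protocol settings):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logging:
  awsxray:

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [logging]
    traces:
      receivers: [otlp]
      exporters: [awsxray]
```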

The CLI can be installed and used within development environments like AWS Cloud9 or Visual Studio Code to run validations against OTel configs before they are checked into a repository.


In this blog post we described how you can validate OpenTelemetry configuration files using Lightstep’s OpenTelemetry validator. Users of AWS Distro for OpenTelemetry can take advantage of this as the validator supports all ADOT components. Using this tool, you can have greater operational insight into the validity of your OpenTelemetry configurations. Catching errors before pipeline build time allows for a reliable OTel deployment and less time spent troubleshooting configuration errors. Feedback and contributions to the otel-config-validator are appreciated and welcomed.

Improve collaboration between teams by using AWS CDK constructs

There are different ways to organize teams to deliver great software products. There are companies that give the end-to-end responsibility for a product to a single team, like Amazon’s Two-Pizza teams, and there are companies where multiple teams split the responsibility between infrastructure (or platform) teams and application development teams. This post provides guidance on how collaboration efficiency can be improved in the case of a split-team approach with the help of the AWS Cloud Development Kit (CDK).

The AWS CDK is an open-source software development framework to define your cloud application resources. You do this by using familiar programming languages like TypeScript, Python, Java, C# or Go. It allows you to mix code to define your application’s infrastructure, traditionally expressed through infrastructure as code tools like AWS CloudFormation or HashiCorp Terraform, with code to bundle, compile, and package your application.

This is great for autonomous teams with end-to-end responsibility, as it helps them to keep all code related to that product in a single place and single programming language. With a single team, there is no need to separate application code into a different repository than infrastructure code, but what about the split-team model?

Larger enterprises commonly split the responsibility between infrastructure (or platform) teams and application development teams. We’ll see how to use the AWS CDK to ensure team independence and agility even with multiple teams involved. We’ll have a look at the different responsibilities of the participating teams and their produced artifacts, and we’ll also discuss how to make the teams work together in a frictionless way.

This blog post assumes a basic level of knowledge on the AWS CDK and its concepts. Additionally, a very high level understanding of event driven architectures is required.

Team Topologies

Let’s first have a quick look at the different team topologies and each team’s responsibilities.

One-Team Approach

In this blog post we will focus on the split-team approach described below. However, it’s still helpful to understand what we mean by “One-Team” Approach: A single team owns an application from end-to-end. This cross-functional team decides on its own on the features to implement next, which technologies to use and how to build and deploy the resulting infrastructure and application code. The team’s responsibility is infrastructure, application code, its deployment and operations of the developed service.

If you’re interested in how to structure your AWS CDK application in such an environment, have a look at our colleague Alex Pulver’s blog post Recommended AWS CDK project structure for Python applications.

Split-Team Approach

In reality we see many customers who have separate teams for application development and infrastructure development and deployment.

Infrastructure Team

What I call the infrastructure team is also known as the platform or operations team. It configures, deploys, and operates the shared infrastructure which other teams consume to run their applications on. This can be things like an Amazon SQS queue, an Amazon Elastic Container Service (Amazon ECS) cluster as well as the CI/CD pipelines used to bring new versions of the applications into production.
It is the infrastructure team’s responsibility to get the application package developed by the Application Team deployed and running on AWS, as well as provide operational support for the application.

Application Team

Traditionally the application team just provides the application’s package (for example, a JAR file or an npm package) and it’s the infrastructure team’s responsibility to figure out how to deploy, configure, and run it on AWS. However, this traditional setup often leads to bottlenecks, as the infrastructure team will have to support many different applications developed by multiple teams. Additionally, the infrastructure team often has little knowledge of the internals of those applications. This often leads to solutions which are not optimized for the problem at hand: If the infrastructure team only offers a handful of options to run services on, the application team can’t use options optimized for their workload.

This is why we extend the traditional responsibilities of the application team in this blog post. The team provides the application and additionally the description of the infrastructure required to run the application. With “infrastructure required” we mean the AWS services used to run the application. This infrastructure description needs to be written in a format which can be consumed by the infrastructure team.

While we understand that this shift of responsibility adds additional tasks to the application team, we think that in the long term it is worth the effort. This can be the starting point to introduce DevOps concepts into the organization. However, the concepts described in this blog post are still valid even if you decide that you don’t want to add this responsibility to your application teams. The boundary of who is delivering what would then just move more into the direction of the infrastructure team.

To be successful with the given approach, the two teams need to agree on a common format on how to hand over the application, its infrastructure definition, and how to bring it to production. The AWS CDK with its concept of Constructs provides a perfect means for that.

Primer: AWS CDK Constructs

In this section we take a look at the concepts the AWS CDK provides for structuring our code base and how these concepts can be used to fit a CDK project into your team topology.


Constructs are the basic building block of an AWS CDK application. An AWS CDK application is composed of multiple constructs which in the end define how and what is deployed by AWS CloudFormation.

The AWS CDK ships with constructs created to deploy AWS services. However, it is important to understand that you are not limited to the out-of-the-box constructs provided by the AWS CDK. The true power of AWS CDK is the possibility to create your own abstractions on top of the default constructs to create solutions for your specific requirements. To achieve this you write, publish, and consume your own custom constructs. They codify your specific requirements, create an additional level of abstraction and allow other teams to consume and use your construct.

We will use a custom construct to separate the responsibilities between the application and the infrastructure team. The application team will release a construct which describes the infrastructure along with its configuration required to run the application code. The infrastructure team will consume this construct to deploy and operate the workload on AWS.

How to use the AWS CDK in a Split-Team Setup

Let’s now have a look at how we can use the AWS CDK to split the responsibilities between the application and infrastructure team. I’ll introduce a sample scenario and then illustrate what each team’s responsibility is within this scenario.


Our fictitious application development team writes an AWS Lambda function which gets deployed to AWS. Messages in an Amazon SQS queue will invoke the function. Let’s say the function will process orders (whatever this means in detail is irrelevant for the example) and each order is represented by a message in the queue.

The application development team has full flexibility when it comes to creating the AWS Lambda function. They can decide which runtime to use or how much memory to configure. The SQS queue which the function will act upon is created by the infrastructure team. The application team does not have to know how the messages end up in the queue.

With that we can have a look at a sample implementation split between the teams.

Application Team

The application team is responsible for two distinct artifacts: the application code (for example, a Java jar file or an npm module) and the AWS CDK construct used to deploy the required infrastructure on AWS to run the application (an AWS Lambda Function along with its configuration).

The lifecycles of these artifacts differ: the application code changes more frequently than the infrastructure it runs in. That’s why we want to keep the artifacts separate. With that each of the artifacts can be released at its own pace and only if it was changed.

In order to achieve these separate lifecycles, it is important to note that a release of the application artifact needs to be completely independent of the release of the CDK construct. This fits our approach of separate teams compared to the standard CDK way of building and packaging application code within the CDK construct.

But how will this be done in our example solution? The team will build and publish an application artifact which does not contain anything related to CDK. When a CDK Stack with this construct is synthesized, it will download the pre-built artifact with the given version number from AWS CodeArtifact and use it to create the input zip file for a Lambda function. No build of the application package happens during CDK synth.

With the separation of construct and application code, we need to find a way to tell the CDK construct which specific version of the application code it should fetch from CodeArtifact. We will pass this information to the construct via a property of its constructor.

For dependencies on infrastructure outside of the responsibility of the application team, I follow the pattern of dependency injection. Those dependencies, for example a shared VPC or an Amazon SQS queue, are passed into the construct from the infrastructure team.

Let’s have a look at an example. We pass in the external dependency on an SQS Queue, along with details on the desired appPackageVersion and its CodeArtifact details:

export interface OrderProcessingAppConstructProps {
    queue: aws_sqs.Queue,
    appPackageVersion: string,
    codeArtifactDetails: {
        account: string,
        repository: string,
        domain: string
    }
}

export class OrderProcessingAppConstruct extends Construct {

    constructor(scope: Construct, id: string, props: OrderProcessingAppConstructProps) {
        super(scope, id);

        const lambdaFunction = new lambda.Function(this, 'OrderProcessingLambda', {
            code: lambda.Code.fromDockerBuild(path.join(__dirname, '..', 'bundling'), {
                buildArgs: {
                    'PACKAGE_VERSION': props.appPackageVersion,
                    'CODE_ARTIFACT_ACCOUNT': props.codeArtifactDetails.account,
                    'CODE_ARTIFACT_REPOSITORY': props.codeArtifactDetails.repository,
                    'CODE_ARTIFACT_DOMAIN': props.codeArtifactDetails.domain
                }
            }),
            runtime: lambda.Runtime.NODEJS_16_X,
            handler: 'node_modules/order-processing-app/dist/index.lambdaHandler'
        });

        // Attach the SQS queue provided by the infrastructure team as the event source
        const eventSource = new SqsEventSource(props.queue);
        lambdaFunction.addEventSource(eventSource);
    }
}

Note the code lambda.Code.fromDockerBuild(…): We use AWS CDK’s functionality to bundle the code of our Lambda function via a Docker build. The only things which happen inside of the provided Dockerfile are:

the login into the AWS CodeArtifact repository which holds the pre-built application code’s package
the download and installation of the application code’s artifact from AWS CodeArtifact (in this case via npm)

If you are interested in more details on how you can build, bundle and deploy your AWS CDK assets I highly recommend a blog post by my colleague Cory Hall: Building, bundling, and deploying applications with the AWS CDK. It goes into much more detail than what we are covering here.

Looking at the example Dockerfile we can see the two steps described above:



RUN aws codeartifact login --tool npm --repository $CODE_ARTIFACT_REPOSITORY --domain $CODE_ARTIFACT_DOMAIN --domain-owner $CODE_ARTIFACT_ACCOUNT --region $CODE_ARTIFACT_AWS_REGION
RUN npm install order-processing-app@$PACKAGE_VERSION --prefix /asset

Please note the following:

we use --prefix /asset with our npm install command. This tells npm to install the dependencies into the folder which CDK will mount into the container. All files which should go into the output of the docker build need to be placed here.
the aws codeartifact login command requires credentials with the appropriate permissions to proceed. In case you run this on, for example, AWS CodeBuild or inside of a CDK Pipeline, you need to make sure that the used role has the appropriate policies attached.

Infrastructure Team

The infrastructure team consumes the AWS CDK construct published by the application team. They own the AWS CDK Stack which composes the whole application. Possibly this will only be one of several Stacks owned by the Infrastructure team. Other Stacks might create shared infrastructure (like VPCs, networking) and other applications.

Within the stack for our application the infrastructure team consumes and instantiates the application team’s construct, passes any dependencies into it and then deploys the stack by whatever means they see fit (e.g. through AWS CodePipeline, GitHub Actions or any other form of continuous delivery/deployment).

The dependency on the application team’s construct is manifested in the package.json of the infrastructure team’s CDK app:

{
  "name": "order-processing-infra-app",
  "dependencies": {
    "order-app-construct": "1.1.0"
  }
}

Within the created CDK Stack we see the dependency version for the application package as well as how the infrastructure team passes in additional information (like e.g. the queue to use):

export class OrderProcessingInfraStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const orderProcessingQueue = new Queue(this, 'order-processing-queue');

    new OrderProcessingAppConstruct(this, 'order-processing-app', {
       appPackageVersion: "2.0.36",
       queue: orderProcessingQueue,
       codeArtifactDetails: { … }
    });
  }
}

Propagating New Releases

We now have the responsibilities of each team sorted out along with the artifacts owned by each team. But how do we propagate a change done by the application team all the way to production? Or asked differently: how can we invoke the infrastructure team’s CI/CD pipeline with the updated artifact versions of the application team?

We will need to update the infrastructure team’s dependencies on the application teams artifacts whenever a new version of either the application package or the AWS CDK construct is published. With the dependencies updated we can then start the release pipeline.

One approach is to listen and react to events published by AWS CodeArtifact via Amazon EventBridge. On each release AWS CodeArtifact will publish an event to Amazon EventBridge. We can listen to that event, extract the version number of the new release from its payload and start a workflow to update either our dependency on the CDK construct (e.g. in the package.json of our CDK application) or update the appPackageVersion which the infrastructure team passes into the consumed construct.
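As a sketch of the payload-handling step, the helper below extracts the package name and newly published version from such an event so a downstream workflow can update the dependency. The field names follow CodeArtifact's documented "CodeArtifact Package Version State Change" EventBridge event; the function name and return shape are hypothetical:

```typescript
// Shape of the relevant parts of a CodeArtifact EventBridge event.
// Field names are taken from the documented CodeArtifact event schema.
interface CodeArtifactEvent {
  'detail-type': string;
  detail: {
    domainName: string;
    repositoryName: string;
    packageName: string;
    packageVersion: string;
    packageVersionState: string;
    operationType: string;
  };
}

// Returns the package and version for a freshly published release,
// or null for events we don't want to act on.
function extractNewRelease(event: CodeArtifactEvent): { pkg: string; version: string } | null {
  if (
    event['detail-type'] !== 'CodeArtifact Package Version State Change' ||
    event.detail.operationType !== 'Created' ||
    event.detail.packageVersionState !== 'Published'
  ) {
    return null;
  }
  return { pkg: event.detail.packageName, version: event.detail.packageVersion };
}
```

The workflow that reacts to the event would then feed the extracted version into the dependency update described below.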

Here’s how a release of a new app version flows through the system:

Figure 1 – A release of the application package triggers a change and deployment of the infrastructure team’s CDK Stack

The application team publishes a new app version into AWS CodeArtifact
CodeArtifact triggers an event on Amazon EventBridge
The infrastructure team listens to this event
The infrastructure team updates its CDK stack to include the latest appPackageVersion

The infrastructure team’s CDK Stack gets deployed

The release of a new version of the CDK construct flows through the system very similarly:

Figure 2 – A release of the application team’s CDK construct triggers a change and deployment of the infrastructure team’s CDK Stack

The application team publishes a new CDK construct version into AWS CodeArtifact
CodeArtifact triggers an event on Amazon EventBridge
The infrastructure team listens to this event
The infrastructure team updates its dependency to the latest CDK construct
The infrastructure team’s CDK Stack gets deployed

We will not go into the details of what such a workflow could look like, because it is most likely highly custom for each team (think of the different tools used for code repositories and CI/CD). However, here are some ideas on how it can be accomplished:

Updating the CDK Construct dependency

To update the dependency version of the CDK construct, the infrastructure team’s package.json (or other files used for dependency tracking, like pom.xml) needs to be updated. You can build automation to check out the source code and issue a command like npm install order-app-construct@NEW_VERSION (where NEW_VERSION is the value read from the EventBridge event payload). You then automatically create a pull request to incorporate this change into your main branch. For a sample of what this looks like, see the blog post Keeping up with your dependencies: building a feedback loop for shared libraries.
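A rough shell sketch of that automation follows. The version is hard-coded here for illustration only (it would come from the event payload), and the clone and pull-request steps depend on your tooling, so they are shown as comments.

```shell
#!/usr/bin/env sh
# Sketch: bump the construct dependency to the version announced in the
# CodeArtifact release event. NEW_VERSION is hard-coded for illustration.
NEW_VERSION="1.2.0"
PKG="order-app-construct"

# git clone <infrastructure team's repo> && cd <repo>
# git checkout -b "deps/${PKG}-${NEW_VERSION}"

echo "npm install ${PKG}@${NEW_VERSION}"  # the dependency bump to run

# git commit -am "chore: bump ${PKG} to ${NEW_VERSION}"
# git push, then open a pull request with your tooling of choice
```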

Updating the appPackageVersion

To update the appPackageVersion used inside of the infrastructure team’s CDK Stack you can either follow the same approach outlined above, or you can use CDK’s capability to read from an AWS Systems Manager (SSM) Parameter Store parameter. With that you wouldn’t put the value for appPackageVersion into source control, but rather read it from SSM Parameter Store. There is a how-to for this in the AWS CDK documentation: Get a value from the Systems Manager Parameter Store. You then start the infrastructure team’s pipeline based on the event of a change in the parameter.

To have a clear understanding of what is deployed at any given time, and to see the used parameter value in CloudFormation, I’d recommend using the option described at Reading Systems Manager values at synthesis time.

Conclusion
You’ve seen how the AWS Cloud Development Kit and its Construct concept can help ensure team independence and agility, even though multiple teams (in our case an application development team and an infrastructure team) work together to bring a new version of an application into production. To do so, you have put the application team in charge of not only their application code, but also of the parts of the infrastructure they use to run their application on. This is still in line with the discussed split-team approach, as all shared infrastructure, as well as the final deployment, remains in the control of the infrastructure team and is only consumed by the application team’s construct.

About the Authors

As a Solutions Architect, Jörg works with manufacturing customers in Germany. Before he joined AWS in 2019, he held various roles such as Developer, DevOps Engineer, and SRE. Jörg enjoys building and automating things and fell in love with the AWS Cloud Development Kit.

Mo joined AWS in 2020 as a Technical Account Manager, bringing with him 7 years of hands-on AWS DevOps experience and 6 years as a systems operations administrator. He is a member of two Technical Field Communities in AWS (Cloud Operation and Builder Experience), focusing on supporting customers with CI/CD pipelines and AI for DevOps to ensure they have the right solutions that fit their business needs.

Maintaining Code Quality with Amazon CodeCatalyst Reports

Amazon CodeCatalyst reports contain details about tests that occur during a workflow run. You can create tests such as unit tests, integration tests, configuration tests, and functional tests. You can use a test report to help troubleshoot a problem during a workflow.


In prior posts in this series, I discussed reading The Unicorn Project, by Gene Kim, and how the main character, Maxine, struggles with a complicated Software Development Lifecycle (SDLC) after joining a new team. One of the challenges she encounters is the difficulties in shipping secure, functioning code without an automated testing mechanism. To quote Gene Kim, “Without automated testing, the more code we write, the more money it takes for us to test.”

Software Developers know that shipping vulnerable or non-functioning code to a production environment is to be avoided at all costs; the monetary impact is high and the toll it takes on team morale can be even greater. During the SDLC, developers need a way to easily identify and troubleshoot errors in their code.

In this post, I will focus on how developers can seamlessly run tests as a part of workflow actions as well as configure unit test and code coverage reports with Amazon CodeCatalyst. I will also outline how developers can access these reports to gain insights into their code quality.

Prerequisites
If you would like to follow along with this walkthrough, you will need to:

Have an AWS Builder ID for signing in to CodeCatalyst.
Belong to a CodeCatalyst space and have the Space administrator role assigned to you in that space. For more information, see Creating a space in CodeCatalyst, Managing members of your space, and Space administrator role.
Have an AWS account associated with your space and have the IAM role in that account. For more information about the role and role policy, see Creating a CodeCatalyst service role.

Walkthrough
As with the previous posts in the CodeCatalyst series, I am going to use the Modern Three-tier Web Application blueprint. Blueprints provide sample code and CI/CD workflows to help you get started easily across different combinations of programming languages and architectures. To follow along, you can re-use a project you created previously, or you can refer to a previous post that walks through creating a project using the Three-tier blueprint.

Once the project is deployed, CodeCatalyst opens the project overview. This view shows the content of the README file from the project’s source repository, workflow runs, pull requests, etc. The source repository and workflow are created for me by the project blueprint. To view the source code, I select Code → Source Repositories from the left-hand navigation bar. Then, I select the repository name link from the list of source repositories.

Figure 1. List of source repositories including Mythical Mysfits source code.

From here I can view details such as the number of branches, workflows, commits, pull requests and source code of this repo. In this walkthrough, I’m focused on the testing capabilities of CodeCatalyst. The project already includes unit tests that were created by the blueprint so I will start there.

From the Files list, navigate to web → src → components → __tests__ → TheGrid.spec.js. This file contains the front-end unit tests which simply check if the strings “Good”, “Neutral”, “Evil” and “Lawful”, “Neutral”, “Chaotic” have rendered on the web page. Take a moment to examine the code. I will use these tests throughout the walkthrough.

Figure 2. Unit test for the front-end that test strings have been rendered properly. 
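Conceptually, the spec verifies that the alignment labels appear in the rendered markup. Here is a framework-free TypeScript sketch of that idea; render() is a stand-in for mounting the real component, and the actual file uses the blueprint’s test framework rather than this hand-rolled check.

```typescript
// Stand-in for mounting TheGrid component and serializing its markup.
function render(): string {
  return "<div>Lawful Good | Neutral | Chaotic Evil</div>";
}

// The check passes only if every expected label was rendered.
const html = render();
for (const label of ["Good", "Neutral", "Evil", "Lawful", "Chaotic"]) {
  if (!html.includes(label)) {
    throw new Error(`expected rendered output to contain "${label}"`);
  }
}
console.log("all labels rendered");
```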

Next, I navigate to the workflow that executes the unit tests. From the left-hand navigation bar, select CI/CD → Workflows. Then, find ApplicationDeploymentPipeline, expand Recent runs and select Run-xxxxx. The Visual tab shows a graphical representation of the underlying YAML file that makes up this workflow. It also provides details on what started the workflow run, when it started, how long it took to complete, the source repository and whether it succeeded.

Figure 3. The Deployment workflow open in the visual designer.

Workflows are composed of a source and one or more actions. I examined test reports for the back-end in a prior post, so I will focus on the front-end tests here. Select the build_and_test_frontend action to view logs of what the action ran, its configuration details, and the reports it generated. I’m specifically interested in the Unit Test and Code Coverage reports under the Reports tab:

Figure 4. Reports tab showing line and branch coverage.

Select the report unitTests.xml (you may need to scroll). Here, you can see an overview of this specific report with metrics like pass rate, duration, test suites, and the test cases for those suites:

Figure 5. Detailed report for the front-end tests.

This report has passed all checks. To make this report more interesting, I’ll intentionally edit the unit test to make it fail. First, navigate back to the source repository and open web → src → components → __tests__ → TheGrid.spec.js. This test case is looking for the string “Good”, so change it to say “Best” instead and commit the changes.

Figure 6. Front-End Unit Test Code Change.

This will automatically start a new workflow run. Navigating back to CI/CD →  Workflows, you can see a new workflow run is in progress (takes ~7 minutes to complete).

Once complete, you can see that the build_and_test_frontend action failed. Opening the unitTests.xml report again, you can see that the report status is in a Failed state. Notice that the minimum pass rate for this test is 100%, meaning that if any test case in this unit test ever fails, the build fails completely.

There are ways to configure these minimums which will be explored when looking at Code Coverage reports. To see more details on the error message in this report, select the failed test case.

Figure 7. Failed Test Case Error Message.

As expected, this indicates that the test was looking for the string “Good” but instead, it found the string “Best”. Before continuing, I return to the TheGrid.spec.js file and change the string back to “Good”.

CodeCatalyst also allows me to specify code and branch coverage criteria. Coverage is a metric that can help you understand how much of your source was tested. This ensures source code is properly tested before shipping to a production environment. Coverage is not configured for the front-end, so I will examine the coverage of the back-end.

I select Reports on the left-hand navigation bar, and open the report called backend-coverage.xml. You can see details such as line coverage, number of lines covered, specific files that were scanned, etc.

Figure 8. Code Coverage Report Succeeded.

The Line coverage minimum is set to 70% but the current coverage is 80%, so it succeeds. I want to push the team to continue improving, so I will edit the workflow to raise the minimum threshold to 90%. Navigating back to CI/CD → Workflows → ApplicationDeploymentPipeline, select the Edit button. On the Visual tab, select build_backend. On the Outputs tab, scroll down to Success Criteria and change Line Coverage to 90%.

Figure 9. Configuring Code Coverage Success Criteria.
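In the workflow’s YAML, that edit corresponds to a change in the action’s success criteria. A sketch of what the relevant fragment might look like, assuming report auto-discovery is used (your workflow may instead declare each report explicitly, and field names should be checked against the workflow reference):

```yaml
Actions:
  build_backend:
    Outputs:
      AutoDiscoverReports:
        Enabled: true
        SuccessCriteria:
          PassRate: 100
          LineCoverage: 90   # raised from the previous minimum of 70
```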

On the top-right, select Commit. This will push the changes to the repository and start a new workflow run. Once the run has finished, navigate back to the Code Coverage report. This time, you can see it reporting a failure to meet the minimum threshold for Line coverage.

Figure 10. Code Coverage Report Failed.

There are other success criteria options available to experiment with. To learn more about success criteria, see Configuring success criteria for tests.

Cleanup
If you have been following along with this workflow, you should delete the resources you deployed so you do not continue to incur charges. First, delete the two stacks that CDK deployed using the AWS CloudFormation console in the AWS account you associated when you launched the blueprint. These stacks will have names like mysfitsXXXXXWebStack and mysfitsXXXXXAppStack. Second, delete the project from CodeCatalyst by navigating to Project settings and choosing Delete project.

Conclusion
In this post, I demonstrated how Amazon CodeCatalyst can help developers quickly configure test cases, run unit/code coverage tests, and generate reports using CodeCatalyst’s workflow actions. You can use these reports to adhere to your code testing strategy as a software development team. I also outlined how you can use success criteria to influence the outcome of a build in your workflow.  In the next post, I will demonstrate how to configure CodeCatalyst workflows and integrate Software Composition Analysis (SCA) reports. Stay tuned!

About the authors:

Imtranur Rahman

Imtranur Rahman is an experienced Sr. Solutions Architect in the WWPS team with 14+ years of experience. Imtranur works with large AWS Global SI partners and helps them build their cloud strategy and broaden adoption of Amazon’s cloud computing platform. Imtranur specializes in containers, DevSecOps, GitOps, microservices-based applications, hybrid application solutions, and application modernization, and loves innovating on behalf of his customers. He is highly customer obsessed and takes pride in providing the best solutions through his extensive expertise.

Wasay Mabood

Wasay is a Partner Solutions Architect based out of New York. He works primarily with AWS Partners on migration, training, and compliance efforts but also dabbles in web development. When he’s not working with customers, he enjoys window-shopping, lounging around at home, and experimenting with new ideas.

Top 10+ OpenAI Alternatives

Are you looking for the best OpenAI alternatives? If you’re wondering what the best options are, how they compare to OpenAI, and what criteria to consider when choosing the right one – this article is for you. 

When researching OpenAI alternatives, you may be asking yourself questions such as: What is the best alternative to OpenAI? What are the differences between OpenAI and its alternatives? How do I choose the best OpenAI alternative for my needs? 

The growth of AI and machine learning technologies has made it increasingly difficult for organizations to keep up with the latest advancements. Despite being a strong tool, OpenAI might not be the ideal choice for everyone. By exploring the many OpenAI alternatives, you can choose which one is best for you.

By reading this article, you’ll learn about the top 10+ OpenAI alternatives, their features and capabilities, and what criteria to consider when selecting the right one for your needs. You’ll also learn which one stands out as the best alternative for OpenAI.

What is OpenAI?

OpenAI is an AI (artificial intelligence) research lab founded in December 2015 by Elon Musk, Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba, and John Schulman. OpenAI aims to advance artificial intelligence by researching and developing innovative technologies that can be used to benefit humanity. Intending to make AI accessible to everyone, OpenAI provides open-source tools and resources to help programmers and researchers develop AI-powered products.

What makes OpenAI different from other organizations? Compared to other AI organizations, OpenAI is different in several ways. First of all, OpenAI is open source, meaning everyone has access to their research results and tools. To develop their AI-driven products and bring them to market, companies and individuals can use OpenAI’s technology. OpenAI also has an outstanding team of researchers, engineers, and scientists who continue to push the boundaries of artificial intelligence research. Furthermore, OpenAI is dedicated to creating AI responsibly and ethically, guaranteeing that AI is safe and secure. The purpose of OpenAI in democratizing AI is to deliver the power of AI to everyone.

OpenAI is a groundbreaking AI research laboratory that has revolutionized the way we think about AI. It has enabled researchers to use its algorithms to develop groundbreaking AI applications. But, many other AI research laboratories are making a name for themselves in the AI space. Let’s discuss the top 10+ OpenAI alternatives that you should consider for your AI projects. 

DeepMind
DeepMind is a UK-based AI research center that has achieved significant advances in the field of AI since its founding in 2010. DeepMind’s goal is to apply artificial intelligence to some of the world’s most difficult challenges, such as climate change and healthcare. Machine learning and related approaches, such as reinforcement learning and unsupervised learning, are the main topics of DeepMind’s research.


DeepMind Technologies is used to develop artificial intelligence (AI) systems for various tasks, such as natural language processing, image recognition, game playing, and robotics. It is also used to develop machine learning algorithms for data analysis, pattern recognition, and decision-making. DeepMind Technologies is used in various industries, such as healthcare, finance, and defense. DeepMind Technologies is utilized in healthcare for medical image analysis, clinical decision support, and healthcare system optimization. In finance, it is used for fraud detection, financial forecasting, and investment analysis. In defense, it is used to improve military operations, surveillance, and targeting.

Pros & Cons of DeepMind

Pros of DeepMind:

DeepMind places great emphasis on research, making it a pioneer in the development of general-purpose AI systems.
DeepMind’s AI solutions are closely interwoven with Google’s goods and services.
There is close collaboration between DeepMind and partners in business and academia.
DeepMind’s advancements in reinforcement learning techniques have improved the area of AI.
A universal AI platform that is available to everyone.

Cons of DeepMind:

DeepMind’s algorithms are not open source, thus they can only be used with Google goods and services.
DeepMind’s AI solutions are still in the early phases of development, thus they may be less effective or efficient than more established AI solutions.
DeepMind’s AI solutions may be too sophisticated for certain people to comprehend and use.

IBM Watson

IBM Watson is an AI platform developed by IBM that enables the development of cognitive applications. It uses natural language processing and machine learning to understand complex unstructured data and is based on data-driven algorithms. Its main purpose is to provide users with an intuitive and automated way to gain insights from their data. IBM Watson can be used in a variety of fields, including healthcare, finance, and retail. Watson provides natural language processing, image recognition, speech recognition, and other cognitive services that can help businesses make better decisions and improve the customer experience.


Healthcare, banking, retail, and education are just a few of the industries where IBM Watson could be used. In medicine, Watson can be used to diagnose diseases, determine the best course of action and even detect cancer tumors in their early stages. In finance, it can examine financial data, identify trends and detect fraud. In retail, it may be applied to consumer behavior analysis, product recommendations, and individualized customer experiences. In the field of education, it may be used to track down students who are in danger of failing classes, design individualized lessons, and provide instructors with individualized feedback.

Pros & Cons of IBM Watson

Pros of IBM Watson

Highly accurate and efficient, allowing organizations to gain insights from their data quickly and accurately. 
Can be used to analyze complex unstructured data, such as images, videos, and natural language. 
Can be used in a variety of fields, such as healthcare, finance, retail, and education. 
Can help organizations make better decisions, improve customer experiences, and increase efficiency. 

Cons of IBM Watson

Expensive and requires a lot of computing power. 
Can be difficult to set up and maintain. 
Can be difficult to integrate into existing systems. 
May not always be able to provide accurate results, as it relies on machine learning algorithms.

Microsoft Azure

The Microsoft Azure cloud computing platform was created by Microsoft, and it includes a variety of cloud services, like processing, storage, networking, analytics, and the creation of mobile and web apps. The Microsoft Azure cloud computing platform is designed to help businesses quickly install and manage cloud apps and services. Microsoft Azure provides a user-friendly platform for programmers and businesses to develop, manage, and deploy cloud-based apps and services. It also provides a variety of services such as big data analytics, the Internet of Things (IoT), artificial intelligence (AI), and machine learning, allowing businesses to optimize their operations and get insights from their data.

Microsoft Azure can be used for a variety of AI-related tasks:

Image recognition and classification 
Natural language processing 
Speech recognition 
Text analytics 
Machine learning 
Predictive analytics
Autonomous systems

Pros & Cons of Microsoft Azure

Pros of Microsoft Azure

Highly scalable and can be used for a variety of tasks.
Provides a wide range of services and tools for developers. 
Easy to use and can be integrated with existing systems. 
Cost-effective and provides a secure environment. 

Cons of Microsoft Azure

Complex and can be difficult to set up and manage. 
Requires a lot of expertise to use effectively. 
May not always provide the best performance for some tasks.

Google Cloud AI

Google Cloud AI is a collection of AI services and tools for developing AI applications. It provides pre-trained models and services for developing intelligent apps that can respond to user input, forecast outcomes, and recognize speech.  Google Cloud AI is designed to make it simpler for programmers to construct AI apps that can solve challenging issues and enhance user experiences.


Building apps that comprehend natural language, recognize pictures, process audio and video, and identify things in photographs is the goal of Google Cloud AI. Applications that promote goods, advise activities and automate customer service chores can all be created using it. Insights and analytics may be produced as a way to streamline business processes and make informed decisions.

Pros & Cons of Google Cloud AI

Pros of Google Cloud AI

Provides a wide range of services and tools that can be used to create powerful AI applications. 
The services are highly scalable and can be used to create applications that can handle large volumes of data. 
Cost-effective and can be used to create applications that can handle large volumes of data without incurring high costs. 

Cons of Google Cloud AI

Google Cloud AI can be complex and difficult to use for developers who are new to the platform. 
Limited support for developers who are new to the platform.

Amazon Machine Learning

Amazon Machine Learning is a cloud-based AI service and product package that allows developers to create predictive applications. It gives you the tools and techniques you need to create apps that can analyze data, spot patterns, and make predictions. Amazon Machine Learning is made to help developers create software that anticipates consumer behavior, recommends products or services, looks for fraud or other abnormalities, and recognizes trends. Software that automates customer service chores, uncovers abnormalities in medical data, or draws insights from huge datasets may all be made using this technique.

Pros and cons of Amazon Machine Learning

Pros of Amazon Machine Learning

Provides an intuitive interface that makes it easy for developers to create predictive applications. 
Scalability: can be used to create applications that can handle large volumes of data without incurring high costs. 
Cost-effective and can be used to create powerful applications without incurring high costs. 

Cons of Amazon Machine Learning

Limited support for developers who are new to the platform. 
Limited flexibility and customization options for developers.

NVIDIA DGX
NVIDIA DGX is a high-performance computing system designed to meet the needs of data-intensive workloads like deep learning.  It is made to speed up AI and machine learning workflows, enabling users to create deep learning models quickly. Along with completely integrated and deep learning-optimized software, it consists of a group of potent GPU servers powered by NVIDIA. The most effective deep learning and AI development platforms are provided by NVIDIA DGX, which enables customers to rapidly and effectively develop, deploy, and manage their applications.


NVIDIA DGX is primarily designed to speed up AI and machine learning processes. Rapid deep learning model development, application deployment and management, and high-performance computer operations are all possible with it. Data scientists may experiment with their deep-learning models more quickly and effectively by using NVIDIA DGX to build virtualized computing environments.

Pros & Cons of NVIDIA DGX

Pros of NVIDIA DGX
High performance in deep learning and AI development.
Fully integrated and optimized software for deep learning.
Easy to use and deploy.
Virtualized computing environment for data scientists.

Cons of NVIDIA DGX
May not be suitable for all types of data-intensive workloads.

Intel AI

Intel AI is a suite of hardware and software solutions for artificial intelligence (AI) designed to deliver performance and flexibility for AI workloads. Intel AI solutions provide the computing power to run deep learning and other AI models, as well as advanced analytics and real-time insights. Intel AI solutions are designed to be used in a wide variety of applications, such as autonomous driving, healthcare, and robotics. Intel AI solutions are optimized for Intel-based platforms and are fully integrated with Intel architecture. The main purpose of Intel AI is to provide the computing power and flexibility needed to run AI models, analytics, and real-time insights.


Intel AI can be used in a variety of applications, including autonomous driving, healthcare, robotics, and more. Intel AI solutions can be used to train deep learning models, run analytics, and generate real-time insights. Intel AI solutions can also be used to develop and deploy AI applications on Intel-based platforms. 

Pros & Cons of Intel AI

Pros of Intel AI

Optimized for Intel-based platforms. 
Fully integrated with Intel architecture.
High performance in deep learning and AI. 
Flexibility to run a variety of AI applications. 

Cons of Intel AI

Not as widely available as other AI solutions.
Potentially more expensive than other AI solutions.

Apple Core ML

Apple Core ML is a machine learning platform developed by Apple to enable developers to quickly and easily integrate machine learning models into their iOS, macOS, watchOS, and tvOS apps. Core ML allows developers to take advantage of the power of machine learning without having to write complex algorithms or deep learning models. Core ML leverages the power of Apple’s hardware and software to enable developers to quickly create and deploy machine learning models for their apps. The main purpose of Apple Core ML is to make it easy for developers to integrate machine learning into their apps. 


Apple Core ML can be used to quickly create and deploy machine learning models for iOS, macOS, watchOS, and tvOS apps. Core ML models can be used for a variety of tasks, including image recognition, natural language processing, and more. Core ML also allows developers to take advantage of the power of Apple’s hardware and software to optimize their models for performance and accuracy. 

Pros & Cons of Apple Core ML

Pros of Apple Core ML

Quick and easy to integrate machine learning into apps. 
Leverages the power of Apple’s hardware and software.
Optimizes models for performance and accuracy. 
Variety of tasks supported, including image recognition and natural language processing.

Cons of Apple Core ML

Limited to Apple-specific platforms. 
May not be suitable for all types of machine-learning tasks.
Dependent on Apple for updates and bug fixes.

The next alternative is an open-source platform for machine learning and artificial intelligence. It provides a range of tools and technologies for data scientists to quickly and easily develop, deploy, and manage machine learning models. It is designed to work with a wide variety of data sources, including relational databases, text files, spreadsheets, and more. Its main purpose is to enable data scientists to quickly and easily develop, deploy, and manage machine learning models.

Usage can be used to develop and deploy machine learning models for a wide variety of applications. provides a range of tools and technologies for data scientists to quickly and easily develop, deploy, and manage their models. can also be used to create virtualized computing environments for data scientists, allowing them to experiment more quickly and efficiently with their models. 

Pros & Cons of

Pros of

Open-source platform
Easy to use and deploy 
Works with a wide variety of data sources 
Ability to create virtualized computing environments 

Cons  of

May not be suitable for all types of machine-learning tasks
Limited support for certain data sources 
Limited scalability for larger datasets

OpenCV
OpenCV is an open-source computer vision library for real-time image and video processing. It provides a wide range of algorithms and functions for image and video analysis, including feature detection, object detection, and tracking. OpenCV is designed to be user-friendly and efficient, allowing developers to quickly and easily create complex applications for vision-based systems. The main purpose of OpenCV is to provide developers with a powerful and easy-to-use library for real-time image and video processing. 


OpenCV can be used to create a wide range of applications for vision-based systems. It can be used for feature detection, object detection, and tracking, as well as a variety of other image and video processing tasks. OpenCV can also be used to create virtualized computing environments for developers, allowing them to experiment quickly and efficiently with their applications. 

Pros and cons of OpenCV

Pros of OpenCV

User-friendly and efficient. 
Open-source library. 
Wide range of algorithms and functions. 
Ability to create virtualized computing environments. 

Cons of OpenCV

Limited support for certain image and video formats. 
May not be suitable for all types of vision-based systems.
Can be difficult to debug and optimize code.

Summing Up

These are some of the top 10+ OpenAI alternatives that you should consider for your AI projects. Each of these AI research facilities has advantages and disadvantages, so you should examine your alternatives before picking which one to utilize. We hope that this article has helped you understand the many alternatives available as well as the benefits and drawbacks of each.

The post Top 10+ OpenAI Alternatives appeared first on Flatlogic Blog.

5+ Secrets Time Estimation Hints in Project Management

Having a clear understanding of time estimation is integral to successful project management. With the right technique, you can be confident that your estimates are exact and valuable to you and your team. Time estimation is the skill of accurately predicting how long a task will take to finish. Using time estimation methods can help you eliminate the guesswork associated with your estimates and give you more confidence in your time management and in when your work will be completed.

While looking for time estimation methods for a project, you probably ask yourself: how to accurately determine the length of a project? How do you avoid overestimating the amount of time needed? How can the timetable of a project be managed more effectively? For successful project management and project completion, time estimation is a necessary step. So, if you’re looking for methods for the perfect time estimation required for your project, this article is for you.

In the world of project management, accurately estimating the time needed for a project is a difficult but necessary task. According to recent surveys, 78% of project managers struggle with project time estimation. Furthermore, research from the Project Management Institute underlined the necessity of precise time estimation, since projects that are delivered late or against a tight deadline can hurt the bottom line.

In this article, we'll share our project manager Erik Kalmykov's tips for accurately estimating how much time a project will take. We'll also show you how to avoid common errors and manage project timeframes more effectively. After reading this article, you'll have a better grasp of how to correctly estimate the time spent on a project, along with useful advice on how to manage project deadlines and avoid frequent errors. 

What is Time Estimation?

Time estimation is about determining how long it will take to complete a project. It is an essential part of project management and can help project managers better plan, manage, and complete projects on time. Accurate time estimation allows project managers to plan for potential risks or delays and better estimate the resources and staff needed to complete the project.

Why are time estimates so important? For several reasons. First, they help project managers plan and manage projects better by giving them a clearer picture of the resources and manpower required to execute the project. Second, they enable project managers to recognize possible risks and delays and plan how to deal with them. Finally, precise time prediction lets project managers budget more effectively for the staff and resources required to finish the project.

Types of Time Estimation

Expert Judgment

Utilizing the knowledge of others in a field where you lack experience is simply smart business. We can't all be experts in everything, after all. When it comes to time estimates, drawing on the knowledge of an expert has extra advantages: the expert you contact will probably have helpful advice for the project as a whole and might be able to identify problems that have bedeviled similar projects in the past. Having a professional help you make more accurate and reliable estimates ensures that your goals are met on time and within budget.

Pros & Cons of Expert Judgment 

Pros: The method enables the consideration of certain elements that cannot be taken into account by an automated analysis.

Cons: This strategy necessitates individual judgment. As a result, the outcome frequently exhibits bias. 


Expert judgment is constructive for managers who lack domain knowledge and is most appropriate for big projects where quantitative estimation alone is insufficient.

Analogous / Comparative

It is possible to estimate how long it will take to complete a task or project by comparing it to similar ones. This approach concentrates on “analogous” or “comparative” reasoning to produce an estimate by drawing on knowledge and past experiences.

Pros & Cons of Analogous / Comparative Estimation

Pros: Comparative / Analogous estimation is one of the quickest and easiest methods for estimating resources.

Cons: It has a poor track record of accuracy and carries a significant risk of incorrect results.


The method is particularly suitable for standard projects with comparable task requirements. To gain a rough idea of the number of resources needed, it is frequently utilized in the early phases of a project’s life cycle.
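As a rough sketch of the comparative idea, an analogous estimate scales a past project's duration by the relative size of the new project. The function name and numbers below are illustrative, not from the article:

```javascript
// Analogous estimation: scale a completed reference project's duration
// by the relative size of the new project. All values are hypothetical.
function analogousEstimate(referenceDays, referenceSize, newSize) {
  return referenceDays * (newSize / referenceSize);
}

// A past 30-day project covered 10 features; the new one covers 15.
console.log(analogousEstimate(30, 10, 15)); // 45 days
```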


Parametric

Using a preset formula, a parametric time estimate is a technique for determining how long an activity or project will take to complete. This formula often requires the input of certain criteria, such as the number of individuals allocated to the work or project, the complexity of the task or project, and/or the level of competence of the persons involved.

Pros & Cons of Parametric Estimation

Pros: This method is reasonably accurate since it considers the complexity of the work or project as well as the skill level of the persons involved.

Cons: The parameters’ values may not always be accurately determined, and the procedure takes longer than other methods. It also disregards any adjustments to materials, methods, or technology that can impact how long it takes to finish a task or project.


The parametric time estimation method works best on projects with standardized work bundles and repeated activities. Therefore, it works best in industries with lesser levels of inventiveness, where early in the planning phase, project parameters can be fairly simply estimated.
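A minimal sketch of a parametric formula. The particular parameters (work units, hours per unit, a productivity factor) are illustrative assumptions; a real formula would be calibrated from historical data:

```javascript
// Parametric estimation: duration = work units x hours per unit,
// adjusted by a team productivity factor. All values are hypothetical.
function parametricEstimate(units, hoursPerUnit, productivityFactor) {
  return (units * hoursPerUnit) / productivityFactor;
}

// 50 work units at 2 hours each, with a team running at 1.25x baseline.
console.log(parametricEstimate(50, 2, 1.25)); // 80 hours
```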


Top-down

The top-down methodology of project estimation is based on breaking down the project activities into major blocks, projecting how long they will take to complete, and summarizing the estimates. Once managers acquire more information during the latter stages of project planning, these generic, big blocks of project work may be divided into smaller parts and then estimated independently to provide more precise predictions.

Pros & Cons of Top-down Estimation

Pros: As it takes into account the difficulty of the work or project and the expertise of the persons involved, this method of estimating may be rather accurate.

Cons: It might be challenging to precisely divide the activity or project into manageable portions using this methodology, which takes more time than other approaches. The time needed to accomplish the activity or project is also not adjusted for advancements in technology, business practices, or material availability.


The top-down estimating approach is widely utilized in project planning when quick outcomes are important. It is most useful during the early stages of project planning, when a rough and rapid estimate is required.


Bottom-up

Bottom-up estimation is a method for determining the price and time needed to complete a project by segmenting it into smaller jobs and estimating each one independently. For projects with several jobs or components that need to be estimated, this method is helpful.

Pros & Cons of Bottom-up Estimation

Pros: High result accuracy and little differences between resources that were estimated and used.

Cons: This method takes a lot of time, effort, and skill to master.


Large software development projects, where several activities and components need to be estimated independently, are the ones that most frequently employ bottom-up estimating. Other project categories, including building projects or significant industrial activities, can also adopt this strategy.
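The mechanics of bottom-up estimation reduce to summing independent task estimates. A minimal sketch, with hypothetical task names and numbers:

```javascript
// Bottom-up estimation: estimate each task independently, then sum.
// Task names and hour figures are illustrative, not from the article.
const tasks = [
  { name: 'design',   hours: 16 },
  { name: 'backend',  hours: 40 },
  { name: 'frontend', hours: 32 },
  { name: 'testing',  hours: 12 },
];

const total = tasks.reduce((sum, t) => sum + t.hours, 0);
console.log(total); // 100 hours for the whole project
```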


Three-point

Three-point estimation is a project management method for estimating the duration, cost, and resources needed to complete a project. An optimistic estimate, a pessimistic estimate, and a most likely estimate are all created using this process based on the idea of ‘triangulation’. The expected value of the project is then calculated from these estimates.

Pros & Cons of Three-point Estimation

Pros: Compared to most estimating procedures, which tend to concentrate just on one point throughout the computation process, the method is more complete. It is risk-oriented and aids managers in reducing the risk of budget and schedule overruns brought on by unanticipated circumstances.

Cons: It requires large volumes of data and careful attention to detail.


Software development and other large-scale initiatives frequently employ a three-point estimate. The time, expense, and resources needed to complete any project or assignment may also be estimated using this method.
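One standard way to combine the three points is the PERT weighted average, (O + 4M + P) / 6, which weights the most likely value most heavily. The input numbers below are illustrative:

```javascript
// PERT three-point estimate: weighted average of the optimistic (O),
// most likely (M), and pessimistic (P) estimates.
function pertEstimate(optimistic, mostLikely, pessimistic) {
  return (optimistic + 4 * mostLikely + pessimistic) / 6;
}

// Optimistic 4 days, most likely 6 days, pessimistic 14 days.
console.log(pertEstimate(4, 6, 14)); // 7 days expected
```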

Time Estimation Statistics in Project Management

In project management, timeline accuracy is assessed using time estimation statistics. They offer data on how close a project came to finishing on time. Time estimation statistics are used to pinpoint areas where the project may be improved, as well as areas where the project timetable was successful and precise. Examples include the number of activities finished on time, the number of tasks delayed, the number of jobs finished ahead of schedule, the time saved due to resource efficiency, and the time added due to unforeseen complications. These statistics can be used to determine if changes need to be made to the project timeline, or if additional resources need to be allocated to ensure the project is completed on time.

Why Are Accurate Time Estimates Crucial to Project Success?

The success of a project depends on accurate time estimates since they provide the project manager with a better understanding of the amount of time, effort, and resources required to complete the project. Additionally, they may be used by project managers to coordinate with key players and provide realistic goals for their team members. Accurate time estimates can help with resource allocation, on-time task completion, and project budget stability. Knowing how long the project will take to complete can help project managers make better decisions about how to distribute resources and manage deadlines. Accurate time estimates also serve to lessen the risk of project failure by providing a foundation for assessing progress and detecting possible concerns before they become serious difficulties.

Cost overruns

Cost overruns happen when a project goes above its allocated budget. Numerous factors, including inadequate project management, underestimating the number of resources required, and unanticipated scope changes, can lead to them. Cost overruns can be problematic for the project team since they can cause delays or even the complete cancellation of the project. To minimize cost overruns, project managers should always develop precise time and cost estimates for their projects, as well as monitor progress and make modifications as necessary. They should also ensure that their personnel are well-educated and prepared for any scope adjustments or unexpected issues. Finally, project managers should strive to create a culture of accountability and communication, so that any issues are identified and addressed before they become major problems.

Tips for Estimating the Time needed to Implement a Project

Define your goals and objectives: Consider spending some time defining your aims and objectives before beginning the project estimating process. You may determine what has to be done and how long it should take using this information.

Break it down: Break your project down into smaller, more achievable activities that may be estimated independently after you have determined the general objectives.

Gather data: After you have divided your project up into smaller jobs, collect information for each one. This might consist of time estimates, cost estimates, and any other pertinent data that will enable you to estimate the project properly.

Assess risks: Consider the hazards associated with each activity and build contingency into your estimates where the risk is high.

Compare estimates: Compare your estimates to industry norms and the estimates of other experts. This will assist you in ensuring the accuracy of your estimations.

Keep track of changes: Keep note of any scope or time frame modifications as the project develops and modify your estimations as necessary.

Get feedback: Ask for feedback from stakeholders and team members throughout the estimation process. This will help ensure that everyone is on the same page and that the estimates are realistic. 
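The steps above can be sketched in a few lines: break the project into tasks, estimate each one, and apply a per-task risk buffer before summing. The task names, hours, and risk multipliers are hypothetical:

```javascript
// Combine task estimates with a simple risk multiplier per task.
// All figures below are illustrative assumptions.
const projectTasks = [
  { name: 'gather requirements', hours: 8,  risk: 1.1 },
  { name: 'build prototype',     hours: 24, risk: 1.5 },
  { name: 'review with client',  hours: 4,  risk: 1.2 },
];

const bufferedTotal = projectTasks.reduce(
  (sum, t) => sum + t.hours * t.risk,
  0
);
console.log(bufferedTotal.toFixed(1)); // 49.6 hours
```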

Summing Up

Time estimation is a crucial component of project management and may be the difference between a project being successful and staying within budget and timeline constraints. Project managers can predict the time required for any project with accuracy if they have the correct plan and methodology. Project managers may develop a time estimating plan that will help them manage their projects more effectively and assure successful project completion by using the advice provided in this article.

The post 5+ Secrets Time Estimation Hints in Project Management appeared first on Flatlogic Blog.

JavaScript sans build systems?

#​626 — February 17, 2023

Read on the Web

JavaScript Weekly

Writing JavaScript Without a Build System — Using a variety of build tools for things like bundling and transpiling is reasonably standard in modern JavaScript development, but what if you want to keep things simple? For simple things, it’s not necessary, says Julia. This led to a lot of discussion on Hacker News.

Julia Evans

Ryan Dahl, Node.js Creator, Wants to Rebuild the Runtime of the Web — A neat bit of journalism about the alternative JavaScript runtime Deno and what Ryan Dahl is trying to achieve with it and how Ryan handled the stress of being known as the creator of Node.js.

Harry Spitzer / Sequoia

Broadcasting a Live Stream With Nothing but JavaScript — Live streams typically use third-party software to broadcast, but with Amazon Interactive Video Service, you can build a powerful, interactive broadcasting interface with the Web Broadcast SDK and JavaScript. Click here to learn more.

Amazon Web Services (AWS) sponsor

core-js’s Maintainer Complains Open Source Is ‘Broken’ — core-js is a popular universal polyfill for JavaScript features and its author has run into his fair share of bad luck which has culminated in this lengthy post on the state of the project, his issues in securing an income and, well, the downsides to living in Russia. The Register has tried to balance out the story.

The Register


The just released Firefox 110 for Android now supports Tampermonkey, an extension for running JavaScript ‘userscripts’ on sites you visit.

The Angular project is taking steps to revamp its reactivity model to enable fine-grained change detection via signals.

The latest beta of iOS and iPadOS 16.4 supports the Web Push API for home screen webapps.

A fun Twitter thread where Qwik’s Miško Hevery attempted to demonstrate why a = 0-x is about 3-10x faster than a = -x before being told about a flaw in his benchmark. There is still a performance difference, though.

▶️ The React.js documentary we mentioned last week has now been released and it’s a heck of a watch – you’ll need 78 minutes of your time though.


Node.js 19.6.1, 18.14.1, 16.19.1 and 14.21.3.

JavaScript Obfuscator 4.0 – Code scrambler.

Shoelace 2.1
↳ Framework agnostic Web components.

Mermaid 9.4
↳ Text to diagram generator. Now with timeline diagram support.

Cypress 12.6

Articles & Tutorials

Use a MutationObserver to Handle DOM Nodes that Don’t Exist Yet — Comparing the effectiveness of the MutationObserver API with the conventional method of constantly checking for the creation of nodes.

Alex MacArthur

Well-Known Symbols in JavaScript — Hemanth, a TC39 delegate, shows off 14 symbols and where they can come in useful.

Hemanth HM

Monitor and Optimize Website Speed to Rank Higher in Google — Monitor Google’s Core Web Vitals and optimize performance using in-depth reports built for developers. Improve SEO & UX.

DebugBear sponsor

Why to Use Maps More and Objects Less — A journey down a performance rabbit hole.

Steve Sewell
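As a small illustration of the trade-off the article explores (this example is not taken from the article itself), counting keys with a Map gives constant-time size, keys of any type, and no prototype collisions:

```javascript
// Word counting with a Map instead of a plain object.
const counts = new Map();
for (const word of ['to', 'be', 'or', 'not', 'to', 'be']) {
  counts.set(word, (counts.get(word) ?? 0) + 1);
}

// Map tracks its own size; no Object.keys(...).length needed.
console.log(counts.get('to'), counts.size); // 2 4
```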

Adopting React in the Early Days — A personal history lesson providing context around React’s evolution. While React might be an obvious, even safe, choice now, that wasn’t always true.

Sébastien Lorber

An Animated Flythrough with Theatre.js and React Three Fiber — How to fly through a 3D scene using the Theatre.js JavaScript animation library and the React Three Fiber 3D renderer. This is the sort of thing that used to be Very Difficult™ but is now relatively trivial.

Andrew Prifer (Codrops)

How to Change the Tab Bar Color Dynamically with JavaScript

Amit Merchant

Is Deno Ready for Primetime? One Dev’s Opinion

Max Countryman

Using Playwright to Monitor Third-Party Resources That Could Impact User Experience

Stefan Judis

Code & Tools

Dependency Cruiser: Validate and Visualize JavaScript Dependencies — If you want a look at the output, there’s a whole page of graphs for popular, real world projects including Chalk, Yarn, and React.

Sander Verweij

Devalue: Like JSON.stringify, But… — “Gets the job done when JSON.stringify can’t.” Namely, it can handle cyclical and repeated references, regular expressions, Map and Set, custom types, and more.

Rich Harris
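To see one of the cases devalue is said to handle, here is the JSON.stringify side of the problem only (this sketch does not use devalue’s own API):

```javascript
// JSON.stringify cannot serialize cyclic structures: it throws a
// TypeError. Libraries like devalue exist to handle this case.
const node = { name: 'root' };
node.self = node; // create a cycle

let threw = false;
try {
  JSON.stringify(node);
} catch (err) {
  threw = true;
}
console.log(threw); // true
```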

JavaScript Scratchpad for VS Code (2m+ Downloads) — Get Quokka.js ‘Community’ for free: #1 tool for exploring/testing JavaScript with edit-continue experience to see realtime execution and runtime values.

Wallaby.js sponsor

NodeGUI: Build Native Cross-Platform Desktop Apps with Node.js — Unlike Electron which leans upon webviews and HTML, NodeGui uses a Qt based approach. This week’s 0.58.0 release is the first stable release based on Qt 6 and offering high DPI support.


DOMPurify 3.0: Fast, Tolerant XSS Sanitizer for HTML and SVG — A project that’s nine years old today but still actively developed. Supports all modern browsers (IE support was only just dropped) and is heavily tested. There’s a live demo here.


Pythagora: Generate Express Integration Tests by Recording Activity — This is a neat idea still in its early stages. Add a line of code after setting up an Express.js app and this will capture app usage and generate integration tests based on the interactions. (▶️ Screencast demo.)

zvone187 and LeonOstrez

Try Stream’s Free Trial of SDKs for In-App Chat

Stream sponsor

Search Code Across a Half Million GitHub Repos — A code search engine that lets you use regexes or syntax in your search. Considering what it is, it’s pretty fast and has an extensive index (over half a million public repos from GitHub, allegedly).

tsParticles: Particles, Confetti and Fireworks for Your Pages — Create customizable particle related effects for use on the Web. Uses the regular 2D canvas for broad support.

Matteo Bruni

Jobs

Software Engineer — Join our happy team. Stimulus is a social platform started by Sticker Mule to show what’s possible if your mission is to increase human happiness.


Find JavaScript Jobs with Hired — Hired makes job hunting easy: instead of chasing recruiters, companies approach you with salary details up front. Create a free profile now.



Minimatch 6.2
↳ Glob matcher library, as used in npm.
    minimatch("", "*.foo")

React Accordion 1.2
↳ Unstyled WAI-ARIA-compliant accordion library.

ScrollTrigger 1.0.6
↳ Have your page react to scroll changes.

VeeValidate 4.7.4
↳ Popular Vue.js form library.

Express Admin 2.0
↳ Admin interface for data in MySQL/Postgres/SQLite.

Execa 7.0
↳ Improved process execution from Node.js.

React Tooltip 5.8