S3 URI Parsing is now available in AWS SDK for Java 2.x

The AWS SDK for Java team is pleased to announce the general availability of Amazon Simple Storage Service (Amazon S3) URI parsing in the AWS SDK for Java 2.x. You can now parse path-style and virtual-hosted-style S3 URIs to easily retrieve the bucket, key, region, style, and query parameters. The new parseUri() API and S3Uri class provide the highly requested parsing features that many customers miss from the AWS SDK for Java 1.x. Please note that URI parsing for Amazon S3 Access Points and Amazon S3 on Outposts is not supported.

Motivation

Users often need to extract important components like bucket and key from stored S3 URIs to use in S3Client operations. The new parsing APIs allow users to conveniently do so, bypassing the need for manual parsing or storing the components separately.

Getting Started

To begin, first add the dependency for S3 to your project.

<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3</artifactId>
    <version>${s3.version}</version>
</dependency>

Next, instantiate S3Client and S3Utilities objects.

S3Client s3Client = S3Client.create();
S3Utilities s3Utilities = s3Client.utilities();

Parsing an S3 URI

To parse your S3 URI, call parseUri() from S3Utilities, passing in the URI. This returns a parsed S3Uri object. If you have the URI as a String, you’ll need to convert it into a URI object first.

String url = "https://s3.us-west-1.amazonaws.com/myBucket/resources/doc.txt?versionId=abc123&partNumber=77&partNumber=88";
URI uri = URI.create(url);
S3Uri s3Uri = s3Utilities.parseUri(uri);

With the S3Uri, you can call the appropriate getter methods to retrieve the bucket, key, region, style, and query parameters. If the bucket, key, or region is not specified in the URI, an empty Optional will be returned. If query parameters are not specified in the URI, an empty map will be returned. If the field is encoded in the URI, it will be returned decoded.

Region region = s3Uri.region().orElse(null); // Region.US_WEST_1
String bucket = s3Uri.bucket().orElse(null); // "myBucket"
String key = s3Uri.key().orElse(null); // "resources/doc.txt"
boolean isPathStyle = s3Uri.isPathStyle(); // true

Retrieving query parameters

There are several APIs for retrieving the query parameters. You can return a Map<String, List<String>> of all the query parameters. Alternatively, you can specify a query parameter and return either the first value or the list of all values for that parameter.

Map<String, List<String>> queryParams = s3Uri.rawQueryParameters(); // {versionId=["abc123"], partNumber=["77", "88"]}
String versionId = s3Uri.firstMatchingRawQueryParameter("versionId").orElse(null); // "abc123"
String partNumber = s3Uri.firstMatchingRawQueryParameter("partNumber").orElse(null); // "77"
List<String> partNumbers = s3Uri.firstMatchingRawQueryParameters("partNumber"); // ["77", "88"]
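
Once parsed, the components can be passed straight to S3Client operations, which is the use case described under Motivation. The following is a minimal sketch, assuming the URI actually contains a bucket and key (the destination path is illustrative):

import java.nio.file.Paths;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

// Minimal sketch: feed the parsed components into an S3Client call.
GetObjectRequest request = GetObjectRequest.builder()
        .bucket(s3Uri.bucket().orElseThrow())
        .key(s3Uri.key().orElseThrow())
        .versionId(s3Uri.firstMatchingRawQueryParameter("versionId").orElse(null))
        .build();
s3Client.getObject(request, Paths.get("doc.txt"));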

Caveats

Special Characters

If you work with object keys or query parameters that contain reserved or unsafe characters, they must be URL-encoded, e.g., replace whitespace " " with "%20".

Valid:
"https://s3.us-west-1.amazonaws.com/myBucket/object%20key?query=%5Bbrackets%5D"

Invalid:
"https://s3.us-west-1.amazonaws.com/myBucket/object key?query=[brackets]"
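
If you hold the raw key as a String, one way to percent-encode it before building the URI is sketched below (assuming Java 10+ for the Charset overload). Note that URLEncoder produces form encoding, so spaces become "+" and need a follow-up replacement:

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch: percent-encode a raw object key for use in an S3 URI.
String rawKey = "object key";
String encodedKey = URLEncoder.encode(rawKey, StandardCharsets.UTF_8)
        .replace("+", "%20"); // "object%20key"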

Virtual-hosted-style URIs

If you work with virtual-hosted-style URIs with bucket names that contain a dot, i.e., ".", the dot must not be URL-encoded.

Valid:
"https://my.Bucket.s3.us-west-1.amazonaws.com/key"

Invalid:
"https://my%2EBucket.s3.us-west-1.amazonaws.com/key"

Conclusion

In this post, I discussed parsing S3 URIs in the AWS SDK for Java 2.x and provided code examples for retrieving the bucket, key, region, style, and query parameters. To learn more about how to set up and begin using the feature, visit our Developer Guide. If you are curious about how it is implemented, check out the source code on GitHub. As always, the AWS SDK for Java team welcomes bug reports, feature requests, and pull requests on the aws-sdk-java-v2 GitHub repository.

Create a CI/CD pipeline for .NET Lambda functions with AWS CDK Pipelines

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to define cloud infrastructure in familiar programming languages and provision it through AWS CloudFormation.

In this blog post, we will explore the process of creating a Continuous Integration/Continuous Deployment (CI/CD) pipeline for a .NET AWS Lambda function using the CDK Pipelines. We will cover all the necessary steps to automate the deployment of the .NET Lambda function, including setting up the development environment, creating the pipeline with AWS CDK, configuring the pipeline stages, and publishing the test reports. Additionally, we will show how to promote the deployment from a lower environment to a higher environment with manual approval.

Background

AWS CDK makes it easy to deploy a stack that provisions your infrastructure to AWS from your workstation by simply running cdk deploy. This is useful when you are doing initial development and testing. However, in most real-world scenarios, there are multiple environments, such as development, testing, staging, and production. It may not be the best approach to deploy your CDK application in all these environments using cdk deploy. Deployment to these environments should happen through more reliable, automated pipelines. CDK Pipelines makes it easy to set up a continuous deployment pipeline for your CDK applications, powered by AWS CodePipeline.

The AWS CDK Developer Guide’s Continuous integration and delivery (CI/CD) using CDK Pipelines page shows you how you can use CDK Pipelines to deploy a Node.js based Lambda function. However, .NET based Lambda functions are different from Node.js or Python based Lambda functions in that .NET code first needs to be compiled to create a deployment package. As a result, we decided to write this blog as a step-by-step guide to assist our .NET customers with deploying their Lambda functions utilizing CDK Pipelines.

In this post, we dive deeper into creating a real-world pipeline that runs build and unit tests, and deploys a .NET Lambda function to one or multiple environments.

Architecture

CDK Pipelines is a construct library that allows you to provision a CodePipeline pipeline. The pipeline created by CDK pipelines is self-mutating. This means, you need to run cdk deploy one time to get the pipeline started. After that, the pipeline automatically updates itself if you add new application stages or stacks in the source code.

The following diagram captures the architecture of the CI/CD pipeline created with CDK Pipelines. Let’s explore this architecture at a high level before diving deeper into the details.

Figure 1: Reference architecture diagram

The solution creates a CodePipeline pipeline with an AWS CodeCommit repository as the source (CodePipeline Source Stage). When code is checked into CodeCommit, the pipeline is automatically triggered and retrieves the code from the CodeCommit repository branch to proceed to the Build stage.

The Build stage compiles the CDK application code and generates the cloud assembly.

The Update Pipeline stage updates the pipeline (if necessary).

The Publish Assets stage uploads the CDK assets to Amazon S3.

After Publish Assets is complete, the pipeline deploys the Lambda function to both the development and production environments. For added control, the architecture includes a manual approval step for releases that target the production environment.

Prerequisites

For this tutorial, you should have:

An AWS account
Visual Studio 2022
AWS Toolkit for Visual Studio
Node.js 18.x or later
AWS CDK v2 (2.67.0 or later required)
Git

Bootstrapping

Before you use AWS CDK to deploy CDK Pipelines, you must bootstrap the AWS environments where you want to deploy the Lambda function. An environment is the target AWS account and Region into which the stack is intended to be deployed.

In this post, you deploy the Lambda function into a development environment and, optionally, a production environment. This requires bootstrapping both environments. However, deployment to a production environment is optional; you can skip bootstrapping that environment for the time being, as we will cover that later.

This is a one-time activity for each environment to which you want to deploy CDK applications. To bootstrap the development environment, run the command below, substituting in the AWS account ID for your dev account, the region you will use for your dev environment, and the locally-configured AWS CLI profile you wish to use for that account. See the documentation for additional details.

cdk bootstrap aws://<DEV-ACCOUNT-ID>/<DEV-REGION> \
    --profile <DEV-PROFILE> \
    --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess

--profile specifies the AWS CLI credential profile that will be used to bootstrap the environment. If not specified, the default profile will be used. The profile should have sufficient permissions to provision the resources for the AWS CDK during the bootstrap process.

--cloudformation-execution-policies specifies the ARNs of managed policies that should be attached to the deployment role assumed by AWS CloudFormation during deployment of your stacks.

Note: By default, stacks are deployed with full administrator permissions using the AdministratorAccess policy. For real-world usage, you should define a more restrictive IAM policy and use that; refer to Customizing bootstrapping in the AWS CDK documentation and Secure CDK deployments with IAM permission boundaries to see how to do that.

Create a Git repository in AWS CodeCommit

For this post, you will use CodeCommit to store your source code. First, create a git repository named dotnet-lambda-cdk-pipeline in CodeCommit by following these steps in the CodeCommit documentation.

After you have created the repository, generate git credentials to access the repository from your local machine if you don’t already have them. Follow the steps below to generate git credentials.

Sign in to the AWS Management Console and open the IAM console.
Create an IAM user (for example, git-user).
Once the user is created, attach the AWSCodeCommitPowerUser policy to the user.
Next, open the user details page, choose the Security Credentials tab, and in HTTPS Git credentials for AWS CodeCommit, choose Generate.

Choose Download credentials to save this information as a .CSV file.

Clone the newly created repository to your workstation, then cd into the dotnet-lambda-cdk-pipeline directory.

git clone <CODECOMMIT-CLONE-URL>
cd dotnet-lambda-cdk-pipeline

Alternatively, you can use git-remote-codecommit to clone the repository with the git clone codecommit::<REGION>://<PROFILE>@<REPOSITORY-NAME> command, replacing the placeholders with your values. Using git-remote-codecommit does not require you to create additional IAM users to manage Git credentials. To learn more, refer to the AWS CodeCommit with git-remote-codecommit documentation page.

Initialize the CDK project

From the command prompt, inside the dotnet-lambda-cdk-pipeline directory, initialize an AWS CDK project by running the following command.

cdk init app --language csharp

Open the generated C# solution in Visual Studio, right-click the DotnetLambdaCdkPipeline project and select Properties. Set the Target framework to .NET 6.
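
For reference, that setting corresponds to the following fragment in the DotnetLambdaCdkPipeline .csproj file (a sketch of just the relevant property, not the full project file):

<PropertyGroup>
  <!-- Target framework for the CDK app; "net6.0" is the .NET 6 TFM -->
  <TargetFramework>net6.0</TargetFramework>
</PropertyGroup>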

Create a CDK stack to provision the CodePipeline

Your CDK Pipelines application includes at least two stacks: one that represents the pipeline itself, and one or more stacks that represent the application(s) deployed via the pipeline. In this step, you create the first stack that deploys a CodePipeline pipeline in your AWS account.

From Visual Studio, open the solution by opening the .sln solution file (in the src/ folder). Once the solution has loaded, open the DotnetLambdaCdkPipelineStack.cs file, and replace its contents with the following code. Note that the filename, namespace and class name all assume you named your Git repository as shown earlier.

Note: be sure to replace "<CODECOMMIT-REPOSITORY-NAME>" in the code below with the name of your CodeCommit repository (in this blog post, we have used dotnet-lambda-cdk-pipeline).

using Amazon.CDK;
using Amazon.CDK.AWS.CodeBuild;
using Amazon.CDK.AWS.CodeCommit;
using Amazon.CDK.AWS.IAM;
using Amazon.CDK.Pipelines;
using Constructs;
using System.Collections.Generic;

namespace DotnetLambdaCdkPipeline
{
    public class DotnetLambdaCdkPipelineStack : Stack
    {
        internal DotnetLambdaCdkPipelineStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props)
        {
            var repository = Repository.FromRepositoryName(this, "repository", "<CODECOMMIT-REPOSITORY-NAME>");

            // This construct creates a pipeline with 3 stages: Source, Build, and UpdatePipeline
            var pipeline = new CodePipeline(this, "pipeline", new CodePipelineProps
            {
                PipelineName = "LambdaPipeline",
                SelfMutation = true,

                // Synth represents a build step that produces the CDK Cloud Assembly.
                // The primary output of this step needs to be the cdk.out directory generated by the cdk synth command.
                Synth = new CodeBuildStep("Synth", new CodeBuildStepProps
                {
                    // The files downloaded from the repository will be placed in the working directory when the script is executed
                    Input = CodePipelineSource.CodeCommit(repository, "master"),

                    // Commands to run to generate CDK Cloud Assembly
                    Commands = new string[] { "npm install -g aws-cdk", "cdk synth" },

                    // Build environment configuration
                    BuildEnvironment = new BuildEnvironment
                    {
                        BuildImage = LinuxBuildImage.AMAZON_LINUX_2_4,
                        ComputeType = ComputeType.MEDIUM,

                        // Specify true to get a privileged container inside the build environment image
                        Privileged = true
                    }
                })
            });
        }
    }
}

In the preceding code, you use CodeBuildStep instead of ShellStep, since ShellStep doesn’t provide a property to specify a BuildEnvironment. We need to specify the build environment in order to set privileged mode, which allows access to the Docker daemon so that container images can be built in the build environment. This is necessary to use the CDK’s bundling feature, which is explained later in this blog post.
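
For contrast, a ShellStep-based Synth would look like the following sketch; it runs the same commands but exposes no BuildEnvironment property, which is exactly the limitation described above:

// For contrast only: ShellStep runs the same commands but offers no
// control over the build image, compute type, or privileged mode.
Synth = new ShellStep("Synth", new ShellStepProps
{
    Input = CodePipelineSource.CodeCommit(repository, "master"),
    Commands = new string[] { "npm install -g aws-cdk", "cdk synth" }
})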

Open the file src/DotnetLambdaCdkPipeline/Program.cs, and edit its contents to reflect the code below. Be sure to replace the placeholders with the AWS account ID and region for your dev environment.

using Amazon.CDK;

namespace DotnetLambdaCdkPipeline
{
    sealed class Program
    {
        public static void Main(string[] args)
        {
            var app = new App();
            new DotnetLambdaCdkPipelineStack(app, "DotnetLambdaCdkPipelineStack", new StackProps
            {
                Env = new Amazon.CDK.Environment
                {
                    Account = "<DEV-ACCOUNT-ID>",
                    Region = "<DEV-REGION>"
                }
            });
            app.Synth();
        }
    }
}

Note: Instead of committing the account ID and region to source control, you can set environment variables on the CodeBuild agent and use them; see Environments in the AWS CDK documentation for more information. Because the CodeBuild agent is also configured in your CDK code, you can use the BuildEnvironmentVariableType property to store environment variables in AWS Systems Manager Parameter Store or AWS Secrets Manager.
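
As a sketch of that variant, Program.cs can resolve the environment at synth time instead of hard-coding it. CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION are set by the CDK CLI; on the CodeBuild agent you would read your own exported variable names instead:

// Sketch: read the target environment instead of hard-coding it.
// CDK_DEFAULT_ACCOUNT/CDK_DEFAULT_REGION are provided by the CDK CLI;
// on CodeBuild, substitute the variable names you configured yourself.
Env = new Amazon.CDK.Environment
{
    Account = System.Environment.GetEnvironmentVariable("CDK_DEFAULT_ACCOUNT"),
    Region = System.Environment.GetEnvironmentVariable("CDK_DEFAULT_REGION")
}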

After you make the code changes, build the solution to ensure there are no build issues. Next, commit and push all the changes you just made. Run the following commands (or alternatively use Visual Studio’s built-in Git functionality to commit and push your changes):

git add --all .
git commit -m 'Initial commit'
git push

Then navigate to the root directory of the repository where your cdk.json file is present, and run the cdk deploy command to deploy the initial version of the CodePipeline. Note that the deployment can take several minutes.

The pipeline created by CDK Pipelines is self-mutating. This means you only need to run cdk deploy one time to get the pipeline started. After that, the pipeline automatically updates itself if you add new CDK applications or stages in the source code.

After the deployment has finished, a CodePipeline is created and automatically runs. The pipeline includes three stages as shown below.

Source – It fetches the source of your AWS CDK app from your CodeCommit repository and triggers the pipeline every time you push new commits to it.

Build – This stage compiles your code (if necessary) and performs a cdk synth. The output of that step is a cloud assembly.

UpdatePipeline – This stage runs the cdk deploy command on the cloud assembly generated in the previous stage. It modifies the pipeline if necessary. For example, if you update your code to add a new deployment stage for your application, the pipeline is automatically updated to reflect the changes you made.

Figure 2: Initial CDK pipeline stages

Define a CodePipeline stage to deploy .NET Lambda function

In this step, you create a stack containing a simple Lambda function and place that stack in a stage. Then you add the stage to the pipeline so it can be deployed.

To create a Lambda project, do the following:

In Visual Studio, right-click on the solution, choose Add, then choose New Project.
In the New Project dialog box, choose the AWS Lambda Project (.NET Core – C#) template, and then choose OK or Next.
For Project Name, enter SampleLambda, and then choose Create.
From the Select Blueprint dialog, choose Empty Function, then choose Finish.

Next, create a new file in the CDK project at src/DotnetLambdaCdkPipeline/SampleLambdaStack.cs to define your application stack containing a Lambda function. Update the file with the following contents (adjust the namespace as necessary):

using Amazon.CDK;
using Amazon.CDK.AWS.Lambda;
using Constructs;
using AssetOptions = Amazon.CDK.AWS.S3.Assets.AssetOptions;

namespace DotnetLambdaCdkPipeline
{
    class SampleLambdaStack : Stack
    {
        public SampleLambdaStack(Construct scope, string id, StackProps props = null) : base(scope, id, props)
        {
            // Commands executed in an AWS CDK pipeline to build, package, and extract a .NET function.
            var buildCommands = new[]
            {
                "cd /asset-input",
                "export DOTNET_CLI_HOME=\"/tmp/DOTNET_CLI_HOME\"",
                "export PATH=\"$PATH:/tmp/DOTNET_CLI_HOME/.dotnet/tools\"",
                "dotnet build",
                "dotnet tool install -g Amazon.Lambda.Tools",
                "dotnet lambda package -o output.zip",
                "unzip -o -d /asset-output output.zip"
            };

            new Function(this, "LambdaFunction", new FunctionProps
            {
                Runtime = Runtime.DOTNET_6,
                Handler = "SampleLambda::SampleLambda.Function::FunctionHandler",

                // Asset path should point to the folder where the .csproj file is present.
                // Also, this path should be relative to the cdk.json file.
                Code = Code.FromAsset("./src/SampleLambda", new AssetOptions
                {
                    Bundling = new BundlingOptions
                    {
                        Image = Runtime.DOTNET_6.BundlingImage,
                        Command = new[]
                        {
                            "bash", "-c", string.Join(" && ", buildCommands)
                        }
                    }
                })
            });
        }
    }
}

Building inside a Docker container

The preceding code uses the CDK’s bundling feature to build the Lambda function inside a Docker container. Bundling starts a new Docker container, copies the Lambda source code into the /asset-input directory of the container, and runs the specified commands, which write the package files to the /asset-output directory. The files in /asset-output are copied as assets to the stack’s cloud assembly directory. In a later stage, these files are zipped and uploaded to S3 as the CDK asset.

Building Lambda functions inside Docker containers is preferable to building them locally because it reduces the build’s dependence on the host machine, resulting in greater consistency and reliability in your build process.

Bundling requires the creation of a Docker container on your build machine. For this purpose, the build environment has already been configured with Privileged = true earlier in this post.

Adding development stage

Create a new file in the CDK project at src/DotnetLambdaCdkPipeline/DotnetLambdaCdkPipelineStage.cs to hold your stage. This class will create the development stage for your pipeline.

using Amazon.CDK;
using Constructs;

namespace DotnetLambdaCdkPipeline
{
    public class DotnetLambdaCdkPipelineStage : Stage
    {
        internal DotnetLambdaCdkPipelineStage(Construct scope, string id, IStageProps props = null) : base(scope, id, props)
        {
            Stack lambdaStack = new SampleLambdaStack(this, "LambdaStack");
        }
    }
}

Edit src/DotnetLambdaCdkPipeline/DotnetLambdaCdkPipelineStack.cs to add the stage to your pipeline. Add the devStage line shown at the end of the constructor in the code below to your file.

using Amazon.CDK;
using Amazon.CDK.Pipelines;

namespace DotnetLambdaCdkPipeline
{
    public class DotnetLambdaCdkPipelineStack : Stack
    {
        internal DotnetLambdaCdkPipelineStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props)
        {
            var repository = Repository.FromRepositoryName(this, "repository", "dotnet-lambda-cdk-pipeline");

            // This construct creates a pipeline with 3 stages: Source, Build, and UpdatePipeline
            var pipeline = new CodePipeline(this, "pipeline", new CodePipelineProps
            {
                PipelineName = "LambdaPipeline",
                // ...
            });

            var devStage = pipeline.AddStage(new DotnetLambdaCdkPipelineStage(this, "Development"));
        }
    }
}

Next, build the solution, then commit and push the changes to the CodeCommit repo. This will trigger the CodePipeline to start.

When the pipeline runs, the UpdatePipeline stage detects the changes and updates the pipeline based on the code it finds there. After the UpdatePipeline stage completes, the pipeline is updated with additional stages.

Let’s observe the changes:

An Assets stage has been added. This stage uploads all the assets you are using in your app to Amazon S3 (the S3 bucket created during bootstrapping) so that they can be used by other deployment stages later in the pipeline. For example, the CloudFormation template used by the development stage includes references to these assets, which is why the assets are first moved to S3 and then referenced in later stages.

A Development stage with two actions has been added. The first action is to create the change set, and the second is to execute it.

Figure 3: CDK pipeline with development stage to deploy .NET Lambda function

After the Deploy stage has completed, you can find the newly-deployed Lambda function by visiting the Lambda console, selecting “Functions” from the left menu, and filtering the functions list with “LambdaStack”. Note the runtime is .NET.

Running Unit Test cases in the CodePipeline

Next, you will add unit test cases to your Lambda function, and run them through the pipeline to generate a test report in CodeBuild.

To create a Unit Test project, do the following:

Right-click on the solution, choose Add, then choose New Project.
In the New Project dialog box, choose the xUnit Test Project template, and then choose OK or Next.
For Project Name, enter SampleLambda.Tests, and then choose Create or Next.
Depending on your version of Visual Studio, you may be prompted to select the version of .NET to use. Choose .NET 6.0 (Long Term Support), then choose Create.
Right-click on the SampleLambda.Tests project, choose Add, then choose Project Reference. Select the SampleLambda project, and then choose OK.

Next, edit the src/SampleLambda.Tests/UnitTest1.cs file to add a unit test. You can use the code below, which verifies that the Lambda function returns the input string as upper case.

using Xunit;

namespace SampleLambda.Tests
{
    public class UnitTest1
    {
        [Fact]
        public void TestSuccess()
        {
            var lambda = new SampleLambda.Function();

            var result = lambda.FunctionHandler("test string", context: null);

            Assert.Equal("TEST STRING", result);
        }
    }
}

You can add pre-deployment or post-deployment actions to the stage by calling its AddPre() or AddPost() method. To execute the above test cases, we will use a pre-deployment action.

To add a pre-deployment action, we will edit the src/DotnetLambdaCdkPipeline/DotnetLambdaCdkPipelineStack.cs file in the CDK project, after first adding the code that generates test reports.

To run the unit test(s) and publish the test report in CodeBuild, we will construct a BuildSpec for our CodeBuild project. We also provide IAM policy statements to be attached to the CodeBuild service role granting it permissions to run the tests and create reports. Update the file by adding the new code (starting with “// Add this code for test reports”) below the devStage declaration you added earlier:

using Amazon.CDK;
using Amazon.CDK.AWS.CodeBuild;
using Amazon.CDK.AWS.IAM;
using Amazon.CDK.Pipelines;
using Constructs;
using System.Collections.Generic;

namespace DotnetLambdaCdkPipeline
{
    public class DotnetLambdaCdkPipelineStack : Stack
    {
        internal DotnetLambdaCdkPipelineStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props)
        {
            // ...

            var devStage = pipeline.AddStage(new DotnetLambdaCdkPipelineStage(this, "Development"));

            // Add this code for test reports
            var reportGroup = new ReportGroup(this, "TestReports", new ReportGroupProps
            {
                ReportGroupName = "TestReports"
            });

            // Policy statements for CodeBuild Project Role
            var policyProps = new PolicyStatementProps()
            {
                Actions = new string[] {
                    "codebuild:CreateReportGroup",
                    "codebuild:CreateReport",
                    "codebuild:UpdateReport",
                    "codebuild:BatchPutTestCases"
                },
                Effect = Effect.ALLOW,
                Resources = new string[] { reportGroup.ReportGroupArn }
            };

            // PartialBuildSpec in AWS CDK for C# can be created using a Dictionary
            var reports = new Dictionary<string, object>()
            {
                {
                    "reports", new Dictionary<string, object>()
                    {
                        {
                            reportGroup.ReportGroupArn, new Dictionary<string, object>()
                            {
                                { "file-format", "VisualStudioTrx" },
                                { "files", "**/*" },
                                { "base-directory", "./testresults" }
                            }
                        }
                    }
                }
            };
            // End of new code block
        }
    }
}

Finally, add the CodeBuildStep as a pre-deployment action to the development stage with necessary CodeBuildStepProps to set up reports. Add this after the new code you added above.

devStage.AddPre(new Step[]
{
    new CodeBuildStep("Unit Test", new CodeBuildStepProps
    {
        Commands = new string[]
        {
            "dotnet test -c Release ./src/SampleLambda.Tests/SampleLambda.Tests.csproj --logger trx --results-directory ./testresults",
        },
        PrimaryOutputDirectory = "./testresults",
        PartialBuildSpec = BuildSpec.FromObject(reports),
        RolePolicyStatements = new PolicyStatement[] { new PolicyStatement(policyProps) },
        BuildEnvironment = new BuildEnvironment
        {
            BuildImage = LinuxBuildImage.AMAZON_LINUX_2_4,
            ComputeType = ComputeType.MEDIUM
        }
    })
});
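
If you also want a check after deployment, AddPost() works the same way. The following is a hypothetical sketch, not part of this walkthrough: the function name placeholder is yours to fill in, and the CodeBuild role would additionally need lambda:InvokeFunction permission.

// Hypothetical post-deployment smoke test. Replace the placeholder with
// the name of the function deployed by SampleLambdaStack; the CodeBuild
// role also needs permission to invoke it.
devStage.AddPost(new Step[]
{
    new CodeBuildStep("Smoke Test", new CodeBuildStepProps
    {
        Commands = new string[]
        {
            "aws lambda invoke --function-name <DEPLOYED-FUNCTION-NAME> out.json",
            "cat out.json"
        }
    })
});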

Build the solution, then commit and push the changes to the repository. Pushing the changes triggers the pipeline, runs the test cases, and publishes the report to the CodeBuild console. To view the report, after the pipeline has completed, navigate to TestReports in CodeBuild’s Report Groups as shown below.

Figure 4: Test report in CodeBuild report group

Deploying to production environment with manual approval

CDK Pipelines makes it very easy to deploy additional stages with different accounts. You have to bootstrap the accounts and Regions you want to deploy to, and they must have a trust relationship added to the pipeline account.

To bootstrap an additional production environment into which AWS CDK applications will be deployed by the pipeline, run the command below, substituting in the AWS account ID for your production account, the region you will use for your production environment, the AWS CLI profile to use with the prod account, and the AWS account ID where the pipeline is already deployed (the account you bootstrapped at the start of this blog).

cdk bootstrap aws://<PROD-ACCOUNT-ID>/<PROD-REGION> \
    --profile <PROD-PROFILE> \
    --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
    --trust <PIPELINE-ACCOUNT-ID>

The --trust option indicates which other account should have permissions to deploy AWS CDK applications into this environment. For this option, specify the pipeline’s AWS account ID.

Use the code below to add a new stage for production deployment with manual approval. Add this code below the devStage.AddPre(…) code block you added in the previous section, and remember to replace the placeholders with the AWS account ID and region for your prod environment.

var prodStage = pipeline.AddStage(new DotnetLambdaCdkPipelineStage(this, "Production", new StageProps
{
    Env = new Environment
    {
        Account = "<PROD-ACCOUNT-ID>",
        Region = "<PROD-REGION>"
    }
}), new AddStageOpts
{
    Pre = new[] { new ManualApprovalStep("PromoteToProd") }
});

To support deploying CDK applications to another account, the artifact buckets must be encrypted. Add a CrossAccountKeys property to the CodePipeline near the top of the pipeline stack file and set its value to true (see the CrossAccountKeys line in the code snippet below). This creates a KMS key for the artifact bucket, allowing cross-account deployments.

var pipeline = new CodePipeline(this, "pipeline", new CodePipelineProps
{
    PipelineName = "LambdaPipeline",
    SelfMutation = true,
    CrossAccountKeys = true,
    EnableKeyRotation = true, // Enable KMS key rotation for the generated KMS keys

    // ...
});

After you commit and push the changes to the repository, a new manual approval step called PromoteToProd is added to the Production stage of the pipeline. The pipeline pauses at this step and awaits manual approval as shown in the screenshot below.

Figure 5: Pipeline waiting for manual review

When you click the Review button, you are presented with the following dialog. From here, you can choose to approve or reject and add comments if needed.

Figure 6: Manual review approval dialog

Once you approve, the pipeline resumes, executes the remaining steps, and completes the deployment to the production environment.

Figure 7: Successful deployment to production environment

Clean up

To avoid incurring future charges, log in to the AWS console of each account you used, go to the AWS CloudFormation console of the Region(s) where you chose to deploy, and select and delete the stacks created for this activity. Alternatively, you can delete the CloudFormation stack(s) using the cdk destroy command. This will not delete the CDKToolkit stack that the bootstrap command created; if you want to delete that as well, you can do it from the AWS console.

Conclusion

In this post, you learned how to use CDK Pipelines for automating the deployment process of .NET Lambda functions. An intuitive and flexible architecture makes it easy to set up a CI/CD pipeline that covers the entire application lifecycle, from build and test to deployment. With CDK Pipelines, you can streamline your development workflow, reduce errors, and ensure consistent and reliable deployments.
For more information on CDK Pipelines and all the ways it can be used, see the CDK Pipelines reference documentation.

About the authors:

Ankush Jain

Ankush Jain is a Cloud Consultant at AWS Professional Services based out of Pune, India. He currently focuses on helping customers migrate their .NET applications to AWS. He is passionate about cloud, with a keen interest in serverless technologies.

Sanjay Chaudhari

Sanjay Chaudhari is a Cloud Consultant with AWS Professional Services. He works with customers to migrate and modernize their Microsoft workloads to the AWS Cloud.

JavaScript sans build systems?

#​626 — February 17, 2023

Read on the Web

JavaScript Weekly

Writing JavaScript Without a Build System — Using a variety of build tools for things like bundling and transpiling is reasonably standard in modern JavaScript development, but what if you want to keep things simple? For simple things, it’s not necessary, says Julia. This led to a lot of discussion on Hacker News.

Julia Evans

Ryan Dahl, Node.js Creator, Wants to Rebuild the Runtime of the Web — A neat bit of journalism about the alternative JavaScript runtime Deno and what Ryan Dahl is trying to achieve with it and how Ryan handled the stress of being known as the creator of Node.js.

Harry Spitzer / Sequoia

Broadcasting a Live Stream With Nothing but JavaScript — Live streams typically use third-party software to broadcast, but with Amazon Interactive Video Service, you can build a powerful, interactive broadcasting interface with the Web Broadcast SDK and JavaScript. Click here to learn more.

Amazon Web Services (AWS) sponsor

core-js’s Maintainer Complains Open Source Is ‘Broken’ — core-js is a popular universal polyfill for JavaScript features and its author has run into his fair share of bad luck which has culminated in this lengthy post on the state of the project, his issues in securing an income and, well, the downsides to living in Russia. The Register has tried to balance out the story.

The Register

IN BRIEF:

The just released Firefox 110 for Android now supports Tampermonkey, an extension for running JavaScript ‘userscripts’ on sites you visit.

The Angular project is taking steps to revamp its reactivity model to enable fine-grained change detection via signals.

The latest beta of iOS and iPadOS 16.4 supports the Web Push API for home screen webapps.

A fun Twitter thread where Qwik’s Miško Hevery attempted to demonstrate why a = 0-x is about 3-10x faster than a = -x before being told about a flaw in his benchmark. There is still a performance difference, though.

▶️ The React.js documentary we mentioned last week has now been released and it’s a heck of a watch – you’ll need 78 minutes of your time though.

RELEASES:

Node.js 19.6.1, 18.14.1, 16.19.1 and 14.21.3.

JavaScript Obfuscator 4.0 – Code scrambler.

Shoelace 2.1
↳ Framework agnostic Web components.

Mermaid 9.4
↳ Text to diagram generator. Now with timeline diagram support.

Cypress 12.6

Articles & Tutorials

Use a MutationObserver to Handle DOM Nodes that Don’t Exist Yet — Comparing the effectiveness of the MutationObserver API with the conventional method of constantly checking for the creation of nodes.

Alex MacArthur

Well-Known Symbols in JavaScript — Hemanth, a TC39 delegate, shows off 14 symbols and where they can come in useful.

Hemanth HM

Monitor and Optimize Website Speed to Rank Higher in Google — Monitor Google’s Core Web Vitals and optimize performance using in-depth reports built for developers. Improve SEO & UX.

DebugBear sponsor

Why to Use Maps More and Objects Less — A journey down a performance rabbit hole.

Steve Sewell

Adopting React in the Early Days — A personal history lesson providing context around React’s evolution. While React might be an obvious, even safe, choice now, that wasn’t always true.

Sébastien Lorber

An Animated Flythrough with Theatre.js and React Three Fiber — How to fly through a 3D scene using the Theatre.js JavaScript animation library and the React Three Fiber 3D renderer. This is the sort of thing that used to be Very Difficult™ but is now relatively trivial.

Andrew Prifer (Codrops)

How to Change the Tab Bar Color Dynamically with JavaScript

Amit Merchant

Is Deno Ready for Primetime? One Dev’s Opinion

Max Countryman

Using Playwright to Monitor Third-Party Resources That Could Impact User Experience

Stefan Judis

Code & Tools

Dependency Cruiser: Validate and Visualize JavaScript Dependencies — If you want a look at the output, there’s a whole page of graphs for popular, real world projects including Chalk, Yarn, and React.

Sander Verweij

Devalue: Like JSON.stringify, But… — “Gets the job done when JSON.stringify can’t.” Namely, it can handle cyclical and repeated references, regular expressions, Map and Set, custom types, and more.

Rich Harris

JavaScript Scratchpad for VS Code (2m+ Downloads) — Get Quokka.js ‘Community’ for free: #1 tool for exploring/testing JavaScript with edit-continue experience to see realtime execution and runtime values.

Wallaby.js sponsor

NodeGUI: Build Native Cross-Platform Desktop Apps with Node.js — Unlike Electron which leans upon webviews and HTML, NodeGui uses a Qt based approach. This week’s 0.58.0 release is the first stable release based on Qt 6 and offering high DPI support.

NodeGui

DOMPurify 3.0: Fast, Tolerant XSS Sanitizer for HTML and SVG — A project that’s nine years old today but still actively developed. Supports all modern browsers (IE support was only just dropped) and is heavily tested. There’s a live demo here.

Cure53

Pythagora: Generate Express Integration Tests by Recording Activity — This is a neat idea still in its early stages. Add a line of code after setting up an Express.js app and this will capture app usage and generate integration tests based on the interactions. (▶️ Screencast demo.)

zvone187 and LeonOstrez

Try Stream’s Free Trial of SDKs for In-App Chat

Stream sponsor

grep.app: Search Code Across a Half Million GitHub Repos — A code search engine that lets you use regexes or syntax in your search. Considering what it is, it’s pretty fast and has an extensive index (over half a million public repos from GitHub, allegedly).

grep.app

tsParticles: Particles, Confetti and Fireworks for Your Pages — Create customizable particle related effects for use on the Web. Uses the regular 2D canvas for broad support.

Matteo Bruni

Jobs

Software Engineer — Join our happy team. Stimulus is a social platform started by Sticker Mule to show what’s possible if your mission is to increase human happiness.

Stimulus

Find JavaScript Jobs with Hired — Hired makes job hunting easy: instead of chasing recruiters, companies approach you with salary details up front. Create a free profile now.

Hired

QUICK RELEASES:

Minimatch 6.2
↳ Glob matcher library, as used in npm.
    minimatch("bar.foo", "*.foo")

React Accordion 1.2
↳ Unstyled WAI-ARIA-compliant accordion library.

ScrollTrigger 1.0.6
↳ Have your page react to scroll changes.

VeeValidate 4.7.4
↳ Popular Vue.js form library.

Express Admin 2.0
↳ Admin interface for data in MySQL/Postgres/SQLite.

Execa 7.0
↳ Improved process execution from Node.js.

React Tooltip 5.8


Introducing Finch: An Open Source Client for Container Development

Today we are happy to announce a new open source project, Finch. Finch is a new command line client for building, running, and publishing Linux containers. It provides for simple installation of a native macOS client, along with a curated set of de facto standard open source components including Lima, nerdctl, containerd, and BuildKit. With Finch, you can create and run containers locally, and build and publish Open Container Initiative (OCI) container images.

At launch, Finch is a new project in its early days with basic functionality, initially only supporting macOS (on all Mac CPU architectures). Rather than iterating in private and releasing a finished project, we feel open source is most successful when diverse voices come to the party. We have plans for features and innovations, but opening the project this early will lead to a more robust and useful solution for all. We are happy to address issues, and are ready to accept pull requests. We’re also hopeful that with our adoption of these open source components from which Finch is composed, we’ll increase focus and attention on these components, and add more hands to the important work of open source maintenance and stewardship. In particular, Justin Cormack, CTO of Docker shared that “we’re bullish about Finch’s adoption of containerd and BuildKit, and we look forward to AWS working with us on upstream contributions.”

We are excited to build Finch in the open with interested collaborators. We want to expand Finch from its current basic starting point to cover Windows and Linux platforms and additional functionality that we’ve put on our roadmap, but would love your ideas as well. Please open issues or file pull requests and start discussing your ideas with us in the Finch Slack channel. Finch is licensed under the Apache 2.0 license and anyone can freely use it.

Why build Finch?

For building and running Linux containers on non-Linux hosts, there are existing commercial products as well as an array of purpose-built open source projects. While companies may be able to assemble a simple command line tool from existing open source components, most organizations want their developers to focus on building their applications, not on building tools.

At AWS, we began looking at the available open source components for container tooling and were immediately impressed with the progress of Lima, recently included in the Cloud Native Computing Foundation (CNCF) as a sandbox project. The goal of Lima is to promote containerd and nerdctl to Mac users, and this aligns very well with our existing investment in both using and contributing to the CNCF graduated project, containerd. Rather than introducing another tool and fragmenting open source efforts, the team decided to integrate with Lima and is making contributions to the project. Akihiro Suda, creator of nerdctl and Lima and a longtime maintainer of containerd, BuildKit, and runc, added “I’m excited to see AWS contributing to nerdctl and Lima and very happy to see the community growing around these projects. I look forward to collaborating with AWS contributors to improve Lima and nerdctl alongside Finch.”

Finch is our response to the complexity of curating and assembling an open source container development tool for macOS initially, followed by Windows and Linux in the future. We are curating the components, depending directly on Lima and nerdctl, and packaging them together with their dependencies into a simple installer for macOS. Finch, via its macOS-native client, acts as a passthrough to nerdctl which is running in a Lima-managed virtual machine. All of the moving parts are abstracted away behind the simple and easy-to-use Finch client. Finch manages and installs all required open source components and their dependencies, removing any need for you to manage dependency updates and fixes.

The core Finch client will always be a curated distribution composed entirely of open source, vendor-neutral projects. We also want Finch to be customizable for downstream consumers to create their own extensions and value-added features for specific use cases. We know that AWS customers will want extensions that make it easier for local containers to integrate with AWS cloud services. However, these will be opt-in extensions that don’t impact or fragment the open source core or upstream dependencies that Finch depends on. Extensions will be maintained as separate projects with their own release cycles. We feel this model strikes a perfect balance for providing specific features while still collaborating in the open with Finch and its upstream dependencies. Since the project is open source, Finch provides a great starting point for anyone looking to build their own custom-purpose container client.

In summary, with Finch we’ve curated a common stack of open source components that are built and tested to work together, and married it with a simple, native tool. Finch is a project with a lot of collective container knowledge behind it. Our goal is to provide a minimal and simple build/run/push/pull experience, focused on the core workflow commands. As the project evolves, we will be working on making the virtualization component more transparent for developers with a smaller footprint and faster boot times, as well as pursuing an extensibility framework so you can customize Finch however you’d like.

Over time, we hope that Finch will become a proving ground for new ideas as well as a way to support our existing customers who asked us for an open source container development tool. While an AWS account is not required to use Finch, if you’re an AWS customer we will support you under your current AWS Support plans when using Finch along with AWS services.

What can you do with Finch?

Since Finch is integrated directly with nerdctl, all of the typical commands and options that you’ve become fluent with will work the same as if you were running natively on Linux. You can pull images from registries, run containers locally, and build images using your existing Dockerfiles. Finch also enables you to build and run images for either amd64 or arm64 architectures using emulation, which means you can build images for either (or both) architectures from your M1 Apple Silicon or Intel-based Mac. With the initial launch, support for volumes and networks is in place, and Compose is supported to run and test multiple container applications.

Once you have installed Finch from the project repository, you can get started building and running containers. As mentioned previously, for our initial launch only macOS is supported.

To install Finch on macOS, download the latest release package. Opening the package file will walk you through the standard experience of a macOS application installation.

Finch has no GUI at this time and offers a simple command line client without additional integrations for cluster management or other container orchestration tools. Over time, we are interested in adding extensibility to Finch with optional features that you can choose to enable.

After install, you must initialize and start Finch’s virtual environment. Run the following command to start the VM:
finch vm init

To start Finch’s virtual environment (for example, after reboots) run:
finch vm start

Now, let’s run a simple container. The run command will pull an image if not already present, then create and start the container instance. The --rm flag will delete the container once the container command exits.

finch run --rm public.ecr.aws/finch/hello-finch
public.ecr.aws/finch/hello-finch:latest: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:a71e474da9ffd6ec3f8236dbf4ef807dd54531d6f05047edaeefa758f1b1bb7e: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:705cac764e12bd6c5b0c35ee1c9208c6c5998b442587964b1e71c6f5ed3bbe46: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:6cc2bf972f32c6d16519d8916a3dbb3cdb6da97cc1b49565bbeeae9e2591cc60: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 0.9 s total: 0.0 B (0.0 B/s)

@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@ @@@@@@@@@@@
@@@@@@@ @@@@@@@
@@@@@@ @@@@@@
@@@@@@ @@@@@
@@@@@ @@@# @@@@@@@@@
@@@@@ @@ @@@ @@@@@@@@@@
@@@@% @ @@ @@@@@@@@@@@
@@@@ @@@@@@@@
@@@@ @@@@@@@@@@@&
@@@@@ &@@@@@@@@@@@
@@@@@ @@@@@@@@
@@@@@ @@@@@(
@@@@@@ @@@@@@
@@@@@@@ @@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@

Hello from Finch!

Visit us @ github.com/runfinch

Lima supports userspace emulation in the underlying virtual machine. While all the images we create and use in the following example are Linux images, the Lima VM is emulating the CPU architecture of your host system, which might be 64-bit Intel or Apple Silicon. No matter which CPU architecture your Mac uses, you can author, publish, and use images for either CPU family. To demonstrate, we will build an x86_64-architecture image on an Apple Silicon laptop, push it to ECR, and then run it on an Intel-based Mac laptop.

To verify that we are running our commands on an Apple Silicon-based Mac, we can run uname and see the architecture listed as arm64:

uname -sm
Darwin arm64

Let’s create and run an amd64 container using the --platform option to specify the non-native architecture:

finch run --rm --platform=linux/amd64 public.ecr.aws/amazonlinux/amazonlinux uname -sm
Linux x86_64

The --platform option can be used for builds as well. Let’s create a simple Dockerfile with two lines:

FROM public.ecr.aws/amazonlinux/amazonlinux:latest
LABEL maintainer="Chris Short"

By default, Finch would build for the host’s CPU architecture platform, which we showed is arm64 above. Instead, let’s build and push an amd64 container to ECR. To build an amd64 image we add the --platform flag to our command:

finch build --platform linux/amd64 -t public.ecr.aws/cbshort/finch-multiarch .
[+] Building 6.5s (6/6) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 142B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for public.ecr.aws/amazonlinux/amazonlinux:latest 1.2s
=> [auth] aws:: amazonlinux/amazonlinux:pull token for public.ecr.aws 0.0s
=> [1/1] FROM public.ecr.aws/amazonlinux/amazonlinux:latest@sha256:d0cc2f24c888613be336379e7104a216c9aa881c74d6df15e30286f67 3.9s
=> => resolve public.ecr.aws/amazonlinux/amazonlinux:latest@sha256:d0cc2f24c888613be336379e7104a216c9aa881c74d6df15e30286f67 0.0s
=> => sha256:e3cfe889ce0a44ace07ec174bd2a7e9022e493956fba0069812a53f81a6040e2 62.31MB / 62.31MB 5.1s
=> exporting to oci image format 5.2s
=> => exporting layers 0.0s
=> => exporting manifest sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652 0.0s
=> => exporting config sha256:474c401eafe6b05f5a4b5b4128d7b0023f93c705e0328243501e5d6c7d1016a8 0.0s
=> => sending tarball 1.3s
unpacking public.ecr.aws/cbshort/finch-multiarch:latest (sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652)…
Loaded image: public.ecr.aws/cbshort/finch-multiarch:latest%

finch push public.ecr.aws/cbshort/finch-multiarch
INFO[0000] pushing as a reduced-platform image (application/vnd.docker.distribution.manifest.v2+json, sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652)
manifest-sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:474c401eafe6b05f5a4b5b4128d7b0023f93c705e0328243501e5d6c7d1016a8: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 27.9s total: 1.6 Ki (60.0 B/s)

At this point we’ve created an image on an Apple Silicon-based Mac which can be used on any Intel/AMD CPU architecture Linux host with an OCI-compliant container runtime. This could be an Intel or AMD CPU EC2 instance, an on-premises Intel NUC, or, as we show next, an Intel CPU-based Mac. To show this capability, we’ll run our newly created image on an Intel-based Mac where we have Finch already installed. Note that we have run uname here to show that the architecture of this Mac is x86_64, which the Go programming language (and hence much container tooling) refers to as amd64.

uname -a
Darwin wile.local 21.6.0 Darwin Kernel Version 21.6.0: Thu Sep 29 20:12:57 PDT 2022; root:xnu-8020.240.7~1/RELEASE_X86_64 x86_64

finch run --rm --platform linux/amd64 public.ecr.aws/cbshort/finch-multiarch:latest uname -a
public.ecr.aws/cbshort/finch-multiarch:latest: resolved |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:474c401eafe6b05f5a4b5b4128d7b0023f93c705e0328243501e5d6c7d1016a8: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:e3cfe889ce0a44ace07ec174bd2a7e9022e493956fba0069812a53f81a6040e2: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 9.2 s total: 59.4 M (6.5 MiB/s)
Linux 73bead2f506b 5.17.5-300.fc36.x86_64 #1 SMP PREEMPT Thu Apr 28 15:51:30 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

You can see the commands and options are familiar. As Finch is passing through our commands to the nerdctl client, all of the command syntax and options are what you’d expect, and new users can refer to nerdctl’s docs.

Another use case is multi-container application testing. Let’s use yelb as an example app that we want to run locally. What is yelb? It’s a simple web application with a cache, database, app server, and UI. These are all run as containers on a network that we’ll create. We will run yelb locally to demonstrate Finch’s compose features for microservices:

finch vm init
INFO[0000] Initializing and starting finch virtual machine…
INFO[0079] Finch virtual machine started successfully

finch compose up -d
INFO[0000] Creating network localtest_default
INFO[0000] Ensuring image redis:4.0.2
docker.io/library/redis:4.0.2: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:cd277716dbff2c0211c8366687d275d2b53112fecbf9d6c86e9853edb0900956: done |++++++++++++++++++++++++++++++++++++++|

[ snip ]

layer-sha256:afb6ec6fdc1c3ba04f7a56db32c5ff5ff38962dc4cd0ffdef5beaa0ce2eb77e2: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 11.4s total: 30.1 M (2.6 MiB/s)
INFO[0049] Creating container localtest_yelb-appserver_1
INFO[0049] Creating container localtest_redis-server_1
INFO[0049] Creating container localtest_yelb-db_1
INFO[0049] Creating container localtest_yelb-ui_1

The output indicates a network was created, many images were pulled, started, and are now all running in our local test environment.

In this test case, we’re using Yelb to figure out where a small team should grab lunch. We share the URL with our team, folks vote, and we see the output via the UI.

What’s next for Finch?

The project is just getting started. The team will work on adding features iteratively, and is excited to hear from you. We have ideas on making the virtualization more minimal, with faster boot times to make it more transparent for users. We are also interested in making Finch extensible, allowing for optional add-on functionality. As the project evolves, the team will direct contributions into the upstream dependencies where appropriate. We are excited to support and contribute to the success of our core dependencies: nerdctl, containerd, BuildKit, and Lima. As mentioned previously, one of the exciting things about Finch is shining a light on the projects it depends upon.

Please join us! Start a discussion, open an issue with new ideas, or report any bugs you find, and we are definitely interested in your pull requests. We plan to evolve Finch in public, by building out milestones and a roadmap with input from our users and contributors. We’d also love feedback from you about your experiences building and using containers daily and how Finch might be able to help!


Starting a Web App in 2022 [Research Results]

We are finally happy to share with you the results of the world’s first study on how developers start a web application in 2022. For this research, we wanted to do a deep dive into how engineers around the globe start web apps, how popular low-code platforms are, and what tools are decisive in creating web applications.

To achieve this, we conducted a survey with 191 software engineers of all experience levels around the globe. We asked questions about the technology they use to start web applications.

Highlights of the key findings:

The usage of particular technologies in the creation of web apps is closely related to engineers’ experience. New technologies, such as no-code/low-code solutions, GraphQL, and non-relational databases, appeal to developers with less expertise;

Engineers with less experience are more likely to learn from online sources, whereas developers with more expertise in software development prefer to learn from more conventional sources such as books;

Retool and Bubble are the most popular no-code/low-code platforms;

React, Node.js, PostgreSQL, Amazon AWS, and Bootstrap are the most popular web application development stacks.

To read the full report, including additional insights and the full research methodology, visit this page.

With Flatlogic you can create full-stack web applications literally in minutes. If you’re interested in trying Flatlogic solutions, sign up for free

The post Starting a Web App in 2022 [Research Results] appeared first on Flatlogic Blog.

Generate Full-Stack Web Apps Based on Database Schema

Hello! Flatlogic has added one more important feature: ready-made web application schema integration. Check out our release notes to learn more about our latest product enhancements.

Previously, the Flatlogic web app generator allowed you to model a database schema from scratch. We received a lot of requests from users who had already modelled DB schemas and found it inconvenient to manually remodel the schema in the Flatlogic schema editor.

To eliminate this obstacle and increase our users’ satisfaction, we are introducing a new platform feature: creating the web app schema from a SQL dump. The feature supports both Postgres and MySQL dialects and identifies tables and column types, and it also tries to discover relations based on index, reference, and constraint information. A “users” table is automatically added to the DB model to preserve web app user management functionality. If the “users” table already exists in the imported DB, the column lists of the two tables are merged. SQL import is invoked by simply uploading the SQL dump file, literally in one click.

We recommend that you choose “Structure only, no data” mode when dumping the SQL from your database; otherwise the dump will include data that adds unnecessary upload volume, which can be huge and even lead to upload failure.

After the successful file upload and schema import, the schema editor allows you to review and edit the schema, and to correct names and column types in case those were not identified correctly by the SQL dump parser. You can also add one-to-many (relation one) and many-to-many (relation many) relation types if they were not discovered automatically.

Flatlogic schema editor also allows you to extend your existing DB schema while keeping all the data intact by applying our generated migrations feature.

Enjoy this new feature and model Admin web apps for your existing database literally in one click!

Click here to learn more about upcoming features; you’ll also find some info about Flatlogic plans and points from the roadmap.


4 Best Angular.JS Themes and Templates for 2022

The task of creating an admin panel for creating and editing content arises often in web development. The task is trivial in principle, but making a genuinely convenient admin panel is not so easy.

A Couple Of Words About AngularJS

AngularJS is an MVC framework. It includes its own high-level AJAX implementation and built-in unit and e2e testing (Jasmine for unit testing; a dedicated testing server is launched for end-to-end tests).

AngularJS works according to the MVC (Model-View-Controller) pattern: it divides the application into three separate parts that can be changed independently of each other.

Model
Provides information (and responds to controller commands).

View
Responsible for displaying model data and monitoring changes.

Controller
Reacts to user input and notifies the model to update.
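
To make the split concrete, here is a minimal AngularJS 1.x sketch (the module, controller, and property names are ours, for illustration only): the controller puts the model on $scope, and the view binds to it declaratively.

<!DOCTYPE html>
<html ng-app="demoApp">
<head>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.8.2/angular.min.js"></script>
  <script>
    // Controller: reacts to user input and updates the model on $scope
    angular.module('demoApp', []).controller('GreetingController', function($scope) {
      $scope.name = 'World'; // Model: plain data on the scope
    });
  </script>
</head>
<body ng-controller="GreetingController">
  <!-- View: watches the model and re-renders when it changes -->
  <input ng-model="name">
  <p>Hello, {{name}}!</p>
</body>
</html>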

“AngularJS is not completely MVC; the controller will never have a direct reference to the view. This is great because it keeps the controller independent of the view, and also allows us to easily test the controller without needing to instantiate a DOM.”

Shyam Seshadri, AngularJS: Up and Running: Enhanced Productivity with Structured Web Apps

Most developers would choose the newest version of Angular to work with, a choice that no one in the modern web development world will argue with. If you are new to Angular, please note that AngularJS refers to the first version of the framework. The framework was then completely rethought: a great breakthrough, but one that caused a lot of pain, since migrating from one version to the other took lots of time and effort. In rare cases, however, you still need to use the old version, AngularJS, and this article is useful in exactly those cases.

If you need admin templates built with the newest version of Angular, you can check out the Angular templates made by Flatlogic. We also have several articles on Angular dashboards on our blog:

Top 10 Angular Material Admin Dashboard Templates

Top Angular Admin Templates

Despite all the pain the new version caused, front-end developers liked those changes. The rewrite made the framework much more flexible and scalable, and drastically improved its speed and weight. And of course, as with every software update, it drastically reduced the number of bugs. And added some new ones :)

Check out Flatlogic Angular Templates!

Theme Support
E-Commerce Section
Fully Documented Codebase

This opened up huge prospects for enterprise-grade software. To this day, when we compare React and Angular, for example, we always point out this indisputable plus: Angular is perfect for gigantic enterprise apps.

To be fair, we need to point out some minuses. This framework needs additional optimization to make an application load and run faster. Of course, it depends on the technology you compare Angular with, but in general you will spend more time chasing the speed of your dreams, though there are many ways to do so.

Do you like this article?
You might want to read:

Top 10+ Simple Admin Themes and Templates

Another pretty obvious minus is the overly complicated directive API. I don’t mean that it is inconvenient; I’m saying that it is very flexible, so there is always the possibility that you will modify it in a way you really shouldn’t. In simple terms: too much freedom. The easiest way to avoid mistakes with the API is to study the documentation carefully.

An Opinionated List of AngularJS Admin Dashboard Templates

So, let’s proceed to the review of admin panels made with AngularJS.

Sing App AngularJS

Image source: https://flatlogic.com/templates/sing-app-angularjs/demo

DEMO
MORE INFO
DOCUMENTATION

This admin template was created by the Flatlogic team using AngularJS and Bootstrap. Sing App AngularJS is beautiful and stylish: the background color of the buttons changes after clicking them, and design connoisseurs will also notice the light glow from the navbar and the mesmerizing effect of smoothly opening tabs.

This dashboard template is multifunctional. It has charts that let you easily work with your data. The dashboard gives you statistics, for instance the number of visits, shown both on a map and on progress bars. In addition, you can find other useful information, such as user base growth, traffic values, market stats, and more.

Developers have included forms, grid options, and a bunch of UI elements. For easy navigation, there’s a search field on the navbar and a left sidebar that you can pin to the page or leave collapsible. Sing App AngularJS is a multipage, multifunctional, responsive template that takes its rightful place in this list.

Slant Admin

Image source: http://jaybabani.com/slant/site/

DEMO
MORE INFO

Slant Admin is positioned on the market as multiple admin themes in one. Let’s see why. But first, let’s look at the technical underpinnings. The template is based on AngularJS and Bootstrap, and to build the project, the developers wired up Grunt and Bower. If you have never heard of Bower, it is a package manager that helps automate the work: for example, you don’t have to download framework or library files manually.

Slant Admin is built to be developer-friendly. The template contains a variety of ready-made blanks. Among its sections, we can find Dashboard, APPS, User Interface, Forms, Charts, etc. For UI content, there are many elements, starting with Typography, Progress Bars, and Icons, and ending with Image Cropper, Section Panels, and Counter Tiles. There are also many pages, such as the Login page, Forgot Password page, and Error pages.

This is a responsive, good-looking template. You can select the layout type for the sidebar. A nice touch: when you select a category on the left, the category name is duplicated on the navbar in the form of a tab.

This product contains 9 admin themes: General Admin, Hospital Admin, University Admin, Music Admin, CRM Admin, Freelance Admin, Blog Admin, E-commerce Admin, and Social Media Admin. This means Slant Admin can meet the needs of many different spheres of life, which is why the product is called “multiple admin themes in one”.

Angular Material Dashboard


Image source: https://flatlogic.com/templates/angular-material-dashboard/demo

DEMO
MORE INFO

Another sample of coding with the AngularJS framework is Angular Material Dashboard. Created with Angular Material design, the template’s interface looks self-contained and stylish.

One of the main advantages that catches the eye is an informative, multi-section dashboard. From the dashboard, you will find out about the following:

the number of site visitors;
the number of warnings;
memory load;
Server Control Panel management;
usage statistics;
enabling or disabling of Autocomplete Input;
database performance;
TODO list. 

It’s very convenient to work with data thanks to the integrated data visualization graphics. In addition to the dashboard, there’s a table section and a profile page with the necessary fields. If you look through the demo, you’ll also find a sharing option via connections to Twitter and GitHub.

If you are considering creating an application such as a CMS, a SaaS product, or a project management tool, and you are looking for a smart template free of charge, then Angular Material Dashboard is the one that will help you achieve your goal.

CoreUI for AngularJS


Image source: https://coreui.io/v1/demo/AngularJS_Demo/#!/dashboard

DEMO
MORE INFO

CoreUI is an admin template created with AngularJS and Bootstrap. It is a free template, and free updates are provided. It’s an open-source project, and everyone can donate to support it.

The template doesn’t look like raw material; it looks like a well-designed, ready-to-use dashboard. At the top of the screen there’s a navbar, on the left a sidebar, and the main content sits in the center. What’s interesting is that the main content has its own navbar with Home, Dashboard, and Settings buttons.

This project boasts an impressive list of varied and numerous UI elements: buttons (social buttons as a bonus), forms, tables, switches, icons (Font Awesome, flags), 4 types of charts, and login, registration, and error pages. CoreUI contains SCSS source files and 5 plugins: Angular Animate, Angular Breadcrumb, Angular Chart.js, Angular Loading Bar, and Chart.js.

The color scheme of this admin dashboard is pleasant to the eye thanks to its dark sidebar theme and the light theme of the main content. However, only one color theme is provided, so you can’t switch to another scheme.

The team anticipated the expectations of more demanding users and released an extended version, CoreUI PRO for AngularJS. This template contains UI kits for emailing and invoicing. Here you can apply more plugins (the standard-version plugins plus Angular DateRangePicker, Angular Translate, Angular UI Calendar, Angular UI Mask, Full Calendar, and Toastr). What is more, in terms of the visual part, the PRO version can be switched to dark mode.

A Couple of Words About The Migration Possibilities

Some developers nowadays seek to migrate from AngularJS to the newest version of Angular. This journey can be neither easy nor fast, mainly because you will keep asking yourself: what if Google decides to change everything once again without backward compatibility? But in 2022, so far, the transition to the new version is relatively easy.

Going back to the migration from AngularJS, we must say that the sooner you start, the cheaper it will be. Every year the Angular team makes lots of changes, so it is not always easy to keep up with them.
But the Angular team has developed two tools aimed at helping you with that.

The first: ngMigration Assistant
This command-line tool recommends the best migration option by analyzing your app.

The second: ngMigrationForum
The most useful part of the forum, in my opinion, is the Migration Paths Overview. It suggests several options (the same ones as ngMigration Assistant):

Incremental migration,
ngUpgrade,
Angular elements,
Routing,
Code conversion,
Rewriting from scratch.

You might also like these articles:

Top 7+ Node.js React Templates and Themes for Your Admin Panel

Top 7 Awesome Vue Material Admin Dashboard Templates Worth Your Attention

Top 10 Angular Material Admin Dashboard Templates


12 JavaScript Image Manipulation Libraries for Your Next Web App

Today we would like to talk to you about a most interesting topic: JavaScript image manipulation libraries. And, to be more precise, those JavaScript image manipulation libraries that definitely deserve your particular attention when you develop your next spectacular web app. But let’s not get ahead of ourselves, and first answer the question “What is an image manipulation library?”

What is an Image Manipulation Library?

Let’s start with a simple description. An image manipulation library, or IML, is a tool whose main goal is to help you systemize, organize, and manipulate the graphic elements of your app in different ways. Different image manipulation libraries typically serve different purposes and can perform such tasks as cropping images, resizing them, converting them from one format to another, improving their quality, and many, many more. All in all, a tool to use when making a web app. Unless, of course, you want to make a dull, monochrome app that contains no images whatsoever, which is an unrealistic scenario in a modern world where design can make or break an app just as easily as its functionality or usability.

What do JavaScript Image Manipulation Libraries bring to the table?

The next question to discuss is the reasoning for choosing a JavaScript-based image manipulation library for your next web app instead of, for example, a C++-based one. The answer is simple: even though at first glance JavaScript IMLs are metaphorically heavier, they are reliable and can produce some simply astounding results.

The practical usage of some of the entries you are about to see in this article is a thing of beauty and will do nothing less than improve the development of your next web app by easing your work with images. So, without any further delay, let’s get down to the list.

JavaScript Image Manipulation Libraries

Pica

Pica is a prime tool for in-browser image resizing, most useful when you want to reduce an exceedingly large image to a suitable size in order to save upload time. It avoids pixelation and works at a suitably fast pace. It saves a great amount of server resources on image processing and can turn your images into thumbnails right in the browser. What’s also great about Pica is that it auto-selects technologies such as web workers, WebAssembly, createImageBitmap, and pure JS, so there is no need for you to do it yourself.
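
A minimal sketch of the typical flow (assuming an <img id="source"> and a <canvas id="dest"> already exist on the page; the element IDs are ours, and resize() returns a promise):

import pica from 'pica';

const resizer = pica();

// Downscale the source into the destination canvas, then export a JPEG blob
resizer.resize(document.getElementById('source'), document.getElementById('dest'))
  .then(result => resizer.toBlob(result, 'image/jpeg', 0.9))
  .then(blob => console.log('Resized blob:', blob.size, 'bytes'));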

MORE INFO

Lena.js

Lena.js can be described as a very simple yet nice and neat image editor and processor. It has a number of image filters (22, to be precise) that you can play around with to improve your image. Lena.js is very small in size and has a killer feature that lets you add your own filters, as its code is open to anyone on GitHub.
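
As far as we can tell from the project’s README, applying a filter is a single call (with the LenaJS global loaded via a script tag) that renders the filtered image onto a canvas; the element IDs below are our own:

// Apply one of Lena.js’s built-in filters (here: red) onto a canvas
const image = document.getElementById('photo');
const canvas = document.getElementById('photo-canvas');

LenaJS.filterImage(canvas, LenaJS.red, image);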

MORE INFO

Jimp

Jimp stands for JavaScript Image Manipulation Program, and it does what it says on the tin in flawless fashion. Written for Node, this entirely JavaScript image processing library has zero native dependencies. It has no external dependencies either, which makes it quite universal. Jimp can help you with tasks such as blitting, blurring, colouring, and containing images, among many others. Another advantage is its Node.js-style API, which will prove easy to use for people whose primary prior experience is with Python or C++.
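
A short sketch of the classic promise-based Jimp flow (file names are placeholders):

import Jimp from 'jimp';

// Read an image, resize it, lower the JPEG quality, convert to greyscale, save
Jimp.read('input.png')
  .then(image => image
    .resize(256, 256)   // width, height in pixels
    .quality(60)        // JPEG quality, 0-100
    .greyscale()
    .writeAsync('output.jpg'))
  .catch(err => console.error(err));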

MORE INFO

Grade

Grade (not a big surprise) is another JS library on our list. Its main selling point is producing complementary gradients, generated automatically from the two colours determined to be predominant in a given image. The effect makes your site or app feel more seamless. Grade is an easy-to-use plugin that adds an aura of visually pleasing aesthetics to your finished product, which is always nice for both you and the end user.
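
Per the project’s README, applying it is a one-liner over the wrapper elements that contain your images (with the Grade global loaded via a script tag; the class name is the library’s documented convention):

// Generate a complementary-gradient background for each wrapper element
window.addEventListener('load', () => {
  Grade(document.querySelectorAll('.gradient-wrap'));
});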

MORE INFO

MarvinJ

Now let’s get to a more intrinsically complex JavaScript image manipulation library. MarvinJ is a powerful derivative of the Marvin Framework that offers quite a number of algorithms for manipulating the colour and appearance of images. It makes working with image processing fundamentals such as corners and shapes easier, since MarvinJ can detect these features automatically. This simplifies cropping elements out of an image and even makes it more or less automated. And isn’t that, after all, the dream: to leave tedious, boring work like cropping elements out to the machines, while you concentrate on more time-, imagination- and expertise-consuming tasks?

MORE INFO

Compressor.js

And now back to the simpler stuff. Compressor.js’s whole schtick is in the name: it handles image compression and does it well, thanks to the canvas.toBlob API, which lets you set the compression output quality of the image in a range from 0 to 1.
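
A minimal sketch of the documented usage, compressing a file picked through a file input before uploading it:

import Compressor from 'compressorjs';

const input = document.querySelector('input[type="file"]');

input.addEventListener('change', () => {
  new Compressor(input.files[0], {
    quality: 0.6, // output quality on the 0-to-1 scale mentioned above
    success(result) {
      console.log('Compressed size:', result.size); // result is a Blob, ready to upload
    },
    error(err) {
      console.error(err.message);
    },
  });
});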

MORE INFO

Fabric.js

Does your next web app need such simple, yet (if used correctly) powerful shapes as rectangles, circles, triangles, and other polygons? Or perhaps it requires more complex shapes? If the answer to either question is “yes”, then Fabric.js is your guy: it will not only create all these shapes for you but also allow you to manipulate every aspect of them, such as the sizes, positions, and rotations of the objects. And there is more: you can control all the attributes of the objects mentioned above, including colours, level of transparency, depth position, and so on.

You might have noticed that we haven’t said a thing about images. That meal is also on the menu, as Fabric.js can convert SVG images into JavaScript data and insert it into the canvas of the project. That kills two birds with one stone: cool shapes and SVG images in your app’s code.
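
A small sketch of the basics (assuming the fabric global is loaded and a <canvas id="c"> element exists on the page):

// Wrap the canvas, then add a rectangle whose attributes are all adjustable
const canvas = new fabric.Canvas('c');

canvas.add(new fabric.Rect({
  left: 100,
  top: 50,
  width: 60,
  height: 40,
  fill: 'red',
  angle: 15,    // rotation in degrees
  opacity: 0.8, // level of transparency
}));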

MORE INFO

CamanJS

And, once again, on to the more complex JavaScript image manipulation libraries. CamanJS is a combination of fantastic, sometimes quite advanced techniques and an intuitive interface. You can use presets and filters or tinker with the settings yourself. The cherry on top is the ability to add your own filters and plugins, as well as constant updates that bring new features and functions.
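
With the library loaded (the Caman global), the documented entry point takes a selector or canvas plus a callback in which you chain adjustments and then render:

// Brighten and add contrast to an image, then draw the result in place
Caman('#photo', function () {
  this.brightness(10)
      .contrast(20)
      .render();
});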

MORE INFO

Cropper.js

We duly hope you are not tired of the “simple-complex” swings of our list, because here comes another simpler JavaScript image manipulation library. It lets you crop images, as well as scale, rotate, and zoom around them. The nicest thing about this JS IML, though, is the ability to set an aspect ratio for the picture and crop accordingly.
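
A minimal sketch (assuming an <img id="image"> element; the locked aspect ratio is the feature called out above):

import Cropper from 'cropperjs';
import 'cropperjs/dist/cropper.css'; // the widget also needs its stylesheet

// Attach a cropper with a fixed 16:9 aspect ratio
const cropper = new Cropper(document.getElementById('image'), {
  aspectRatio: 16 / 9,
  crop(event) {
    // Fires as the crop box changes; detail holds x, y, width, height
    console.log(event.detail.x, event.detail.y);
  },
});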

MORE INFO

Merge Images

A unique entry on this list, as Merge Images doesn’t crop, skew, or rotate images. We hope you’ve already guessed what this one does: it merges the given images onto one canvas, ridding you of the need to convert them into code and work on a canvas (pun intended).
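
The documented usage takes a list of image sources (optionally with per-image positions) and resolves to a base64 data URI of the combined canvas; the file names and element ID here are placeholders:

import mergeImages from 'merge-images';

// Stack two images on one canvas, offsetting the second one
mergeImages([
  { src: 'background.png', x: 0, y: 0 },
  { src: 'sticker.png', x: 120, y: 40 },
]).then(b64 => {
  document.getElementById('result').src = b64; // show it in an <img>
});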

MORE INFO

Blurify

This JavaScript image manipulation library is tiny, weighing less than 2 KB. But its effectiveness doesn’t let us leave it off the list: it downgrades (blurs) the pictures you provide it with, and does so gracefully.

MORE INFO

Doka

Doka is a JS IML that provides a wide variety of image editing features. It has a rich UI that warms the soul when needed, and its support for React, Vue, Svelte, Angular, and jQuery is a nice and necessary touch when working with images. You will get around and understand this library quite quickly.

MORE INFO

Conclusions to have

And that’s the list. The conclusions are quite simple: your next project will benefit from using these JavaScript image manipulation libraries, as they will free you from performing mundane tasks, and you will find yourself in love with them in no time.

Start with one if you feel cautious, and add more if you feel adventurous, as it may take some tinkering to get them to work exactly the way you want.

That’s it for today. Thanks for reading this article and stay tuned for our new ones!

Bonus! Creating Applications with Flatlogic Platform

We’ve covered JavaScript libraries for image manipulation. Now let us take a look at a quicker and easier way to create Applications. Sometimes an application must have specific features or peculiarities we have to craft by hand. Still, the bulk of all Apps work similarly and can be built from the same blocks. We created the Flatlogic Platform to do just that. It requires no special expertise, only a few choices. Let’s see what they are!

#1: Name the Project

This step is rather straightforward. Name your project so it’s easier to find and recognize.

#2: Define Tech Stack

Next, we’ll choose the technologies our App’s components will run on. In the example above we’ve chosen React, NodeJS, and PostgreSQL for front-end, backend, and database, respectively.

#3: Choose the Design

Choose the design type that appeals to you the most. You might spend a lot of time with this interface, so choose carefully!

#4: Create Database Schema

The integrated schema editor is rather easy to master. It lets you define fields, columns, and data types for the database. If you’re short on time, just pick one of the ready schemas.

#5: Finish the App

It’s time to review your choices and click “Finish”. Check the “Connect GIT repository” box if you want to. After a brief compilation, your App will be yours to deploy and use. Great job, sir or madam!

You might also like these articles

React Table Guide And Best React Table Examples

Javascript Tabs: Save Space! Examples of Tabbed Widgets

Top 30 Open Source and Paid React Charts + Examples
