Java-Script Jarre

#621 — January 13, 2023

Read on the Web

JavaScript Weekly

The State of JS 2022 — The State of JS is one of the JavaScript ecosystem’s most popular surveys, and this time 39,471 folks took part, giving us a snapshot of the tools, technologies, and language features people are using (or not using!). There’s a lot to go through, but here are some key points:

top-level await is the most widely adopted of the newer language features.
The JavaScript / TypeScript balance shows a majority of developers using TypeScript over JS.
Express remains by far the most popular backend framework with Nest, Fastify, Strapi, and Koa following somewhat behind.
Other interesting results can be found in JS pain points, what is currently missing from JS, and the ‘Awards’ for stand out items (complete with snazzy visual effects).

Devographics

Retire your Legacy CMS with ButterCMS — ButterCMS is your new content backend. We’re SaaS so we host, maintain, and scale the CMS. Enable your marketing team to update website + app content without needing you. Try the #1 rated SaaS Headless CMS for your JS app today. Free for 30 days.

ButterCMS sponsor

Is TypeScript Worth It? — Time saver or waste of time? The relationship between TypeScript and JavaScript remains a complex one. An extensive discussion took place on Hacker News this week and, notably, TypeScript PM Daniel Rosenwasser popped up to respond to some of the concerns.

Hacker News

IN BRIEF:

You’ll be aware of JavaScript’s strict mode, but one developer thinks we need a stricter mode to fix several other syntax issues.

Publint is an online tool for ‘linting’ live npm packages to see if they are packaged correctly, as a way to ensure maximum compatibility across environments.

RELEASES:

Node v19.4.0 and v18.13.0 (LTS)

Commander.js 9.5
↳ Node.js command-line interface toolkit.

Angular 15.1

Pixi.js 7.1
↳ Fast 2D WebGL renderer.

Articles & Tutorials

The Gotcha of Unhandled Promise Rejections — A rough edge with promises that can sneak up on you. Jake looks at a ‘gotcha’ around unhandled promise rejections and how to work around it.

Jake Archibald
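The shape of this gotcha can be sketched in a few lines of plain Node (a generic illustration, not the article’s own example): a rejection whose handler is attached in a later task is reported as unhandled, even though it does eventually get caught.

```javascript
const order = [];
process.on("unhandledRejection", () => order.push("unhandledRejection fired"));

const p = Promise.reject(new Error("boom"));

// The handler is attached on a later task — after Node has already
// checked for (and reported) the unhandled rejection.
setTimeout(() => {
  p.catch((err) => order.push(`caught: ${err.message}`));
}, 0);

setTimeout(() => console.log(order), 50);
// → [ 'unhandledRejection fired', 'caught: boom' ]
```

A common workaround is to attach a no-op `.catch(() => {})` in the same task that creates the promise, which marks the rejection as handled while still letting later consumers observe it.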

HTML with Superpowers: The Guidebook — A free resource introducing Web Components, what they are, and what problems they’re trying to solve. You can see the Guidebook directly here.

Dave Rupert

With Retool You Ship Apps Fast with 100+ Perfectly Crafted UI Components — The fast way for devs to build and share internal tools. Teams at companies like Amazon, DoorDash & NBC collaborate around custom-built Retool apps to solve internal workflows.

Retool sponsor

Everything About React’s ‘Concurrent Mode’ Features — An in-depth, example-led exploration of Concurrent Mode (now more a set of features integrated into React 18 than a distinct ‘mode’).

Henrique Yuji

Using GitHub Copilot for Unit Testing? — Even if you find the idea of an AI tool like Copilot writing production code distasteful, it may have a place in speeding up writing tests.

Ianis Triandafilov

How to Destructure Props in Vue (Composition API) — How to correctly destructure the props object in a Vue component while maintaining reactivity.

Dmitri Pavlutin

Using Inline JavaScript Modules to Prevent CSS Blockage

Stoyan Stefanov

How to Build a GraphQL Server with Deno

Andy Jiang

Code & Tools

Gluon: Framework for Creating Desktop Apps from Sites — A new approach for building desktop apps on Windows and Linux from Web sites using Node (or Deno) and already installed browsers (Chromium or Firefox). Initial macOS support has just been added too.

Gluon

Structura.js: Lightweight Library for Immutable State Management — “It is based on the idea of structural sharing. The library is very similar to Immer.js, but it has some advantages over it.”

Giuseppe Raso
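The structural-sharing idea that Structura.js and Immer build on can be shown in plain JavaScript (a hand-rolled sketch, not Structura’s API): an immutable update copies only the objects along the changed path and reuses untouched branches by reference.

```javascript
const state = {
  user: { name: "Ada" },
  todos: [{ text: "ship", done: false }],
};

// Update only `user.name`: copy the objects on the changed path…
const next = { ...state, user: { ...state.user, name: "Grace" } };

console.log(next.user === state.user);   // false — a fresh copy
console.log(next.todos === state.todos); // true — shared, not cloned
console.log(state.user.name);            // "Ada" — the original is untouched
```

Because unchanged branches keep their identity, change detection can be done with cheap `===` checks instead of deep comparison.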

Tuple, a Lightning-Fast Pairing Tool Built for Remote Developers — High-resolution, crystal-clear screen sharing, low-latency remote control, and less CPU usage than you’d think possible.

Tuple sponsor

Bay.js: A Lightweight Library for Web Components — Makes it easy to create web components that can be reused across projects. It also boasts performant state changes and secure event binding.

Ian Dunkerley

Twify: Scaffold a Tailwind CSS Project with a Single Command — You can use your preferred package manager and it supports creating projects with Next.js, Nuxt 2/3, SvelteKit, Remix, Angular, and more.

Kazi Ahmed

Lazy Brush 2.0: A Library for Smooth Pointer Drawing — Allow your users to draw smooth curves and straight lines with a mouse, finger, or any pointing device. This long-standing library has just migrated to TypeScript and gained a new ‘friction’ option to customize the feel. GitHub repo.

Jan Hug

Mafs: React Components for Interactive Math — Build interactive, animated visualizations using declarative code with illustrative demos like Bézier curves. The documentation is fantastic – check out how easy it is to make plots. Or just head to the GitHub repo.

Steven Petryk

Are You Looking for a New Observability Tool?

TelemetryHub by Scout sponsor

Hyphenopoly 5.0: A Polyfill for Client-Side Hyphenation — An interesting use of WebAssembly here.

Mathias Nater

visx 3.0
↳ D3-powered visualization React components.

Atrament 3.0
↳ Library for drawing and handwriting on a canvas element.

HLS.js 1.3
↳ Library to play HLS (HTTP Live Streaming) in browsers, with MSE support.

Jobs

Developer Relations Manager — Join the CKEditor team to build community around an Open Source project used by millions of users around the world.

CKEditor

Backend Engineer, TypeScript (Berlin / Remote) — Thousands of people love our product (see Trustpilot for yourself). Join the team behind it and help us scale.

Feather

Find JavaScript Jobs with Hired — Create a profile on Hired to connect with hiring managers at growing startups and Fortune 500 companies. It’s free for job-seekers.

Hired

Listen to the music...

Oxygene Pt 4, as Performed by JavaScript — This is fun. Dittytoy is a simple, JavaScript-powered online generative music tool and someone has put together a surprisingly faithful rendition of perhaps one of the best known instrumental synth songs ever, all the way from 1976.

Dittytoy


Introducing Finch: An Open Source Client for Container Development

Today we are happy to announce a new open source project, Finch. Finch is a new command line client for building, running, and publishing Linux containers. It provides for simple installation of a native macOS client, along with a curated set of de facto standard open source components including Lima, nerdctl, containerd, and BuildKit. With Finch, you can create and run containers locally, and build and publish Open Container Initiative (OCI) container images.

At launch, Finch is a new project in its early days with basic functionality, initially only supporting macOS (on all Mac CPU architectures). Rather than iterating in private and releasing a finished project, we feel open source is most successful when diverse voices come to the party. We have plans for features and innovations, but opening the project this early will lead to a more robust and useful solution for all. We are happy to address issues, and are ready to accept pull requests. We’re also hopeful that with our adoption of these open source components from which Finch is composed, we’ll increase focus and attention on these components, and add more hands to the important work of open source maintenance and stewardship. In particular, Justin Cormack, CTO of Docker shared that “we’re bullish about Finch’s adoption of containerd and BuildKit, and we look forward to AWS working with us on upstream contributions.”

We are excited to build Finch in the open with interested collaborators. We want to expand Finch from its current basic starting point to cover Windows and Linux platforms and additional functionality that we’ve put on our roadmap, but would love your ideas as well. Please open issues or file pull requests and start discussing your ideas with us in the Finch Slack channel. Finch is licensed under the Apache 2.0 license and anyone can freely use it.

Why build Finch?

For building and running Linux containers on non-Linux hosts, there are existing commercial products as well as an array of purpose-built open source projects. While companies may be able to assemble a simple command line tool from existing open source components, most organizations want their developers to focus on building their applications, not on building tools.

At AWS, we began looking at the available open source components for container tooling and were immediately impressed with the progress of Lima, recently included in the Cloud Native Computing Foundation (CNCF) as a sandbox project. The goal of Lima is to promote containerd and nerdctl to Mac users, and this aligns very well with our existing investment in both using and contributing to the CNCF graduated project, containerd. Rather than introducing another tool and fragmenting open source efforts, the team decided to integrate with Lima and is making contributions to the project. Akihiro Suda, creator of nerdctl and Lima and a longtime maintainer of containerd, BuildKit, and runc, added “I’m excited to see AWS contributing to nerdctl and Lima and very happy to see the community growing around these projects. I look forward to collaborating with AWS contributors to improve Lima and nerdctl alongside Finch.”

Finch is our response to the complexity of curating and assembling an open source container development tool for macOS initially, followed by Windows and Linux in the future. We are curating the components, depending directly on Lima and nerdctl, and packaging them together with their dependencies into a simple installer for macOS. Finch, via its macOS-native client, acts as a passthrough to nerdctl which is running in a Lima-managed virtual machine. All of the moving parts are abstracted away behind the simple and easy-to-use Finch client. Finch manages and installs all required open source components and their dependencies, removing any need for you to manage dependency updates and fixes.

The core Finch client will always be a curated distribution composed entirely of open source, vendor-neutral projects. We also want Finch to be customizable for downstream consumers to create their own extensions and value-added features for specific use cases. We know that AWS customers will want extensions that make it easier for local containers to integrate with AWS cloud services. However, these will be opt-in extensions that don’t impact or fragment the open source core or upstream dependencies that Finch depends on. Extensions will be maintained as separate projects with their own release cycles. We feel this model strikes a perfect balance for providing specific features while still collaborating in the open with Finch and its upstream dependencies. Since the project is open source, Finch provides a great starting point for anyone looking to build their own custom-purpose container client.

In summary, with Finch we’ve curated a common stack of open source components that are built and tested to work together, and married it with a simple, native tool. Finch is a project with a lot of collective container knowledge behind it. Our goal is to provide a minimal and simple build/run/push/pull experience, focused on the core workflow commands. As the project evolves, we will be working on making the virtualization component more transparent for developers with a smaller footprint and faster boot times, as well as pursuing an extensibility framework so you can customize Finch however you’d like.

Over time, we hope that Finch will become a proving ground for new ideas as well as a way to support our existing customers who asked us for an open source container development tool. While an AWS account is not required to use Finch, if you’re an AWS customer we will support you under your current AWS Support plans when using Finch along with AWS services.

What can you do with Finch?

Since Finch is integrated directly with nerdctl, all of the typical commands and options that you’ve become fluent with will work the same as if you were running natively on Linux. You can pull images from registries, run containers locally, and build images using your existing Dockerfiles. Finch also enables you to build and run images for either amd64 or arm64 architectures using emulation, which means you can build images for either (or both) architectures from your M1 Apple Silicon or Intel-based Mac. With the initial launch, support for volumes and networks is in place, and Compose is supported to run and test multiple container applications.

Once you have installed Finch from the project repository, you can get started building and running containers. As mentioned previously, for our initial launch only macOS is supported.

To install Finch on macOS, download the latest release package. Opening the package file will walk you through the standard experience of a macOS application installation.

Finch has no GUI at this time and offers a simple command line client without additional integrations for cluster management or other container orchestration tools. Over time, we are interested in adding extensibility to Finch with optional features that you can choose to enable.

After install, you must initialize and start Finch’s virtual environment. Run the following command to start the VM:
finch vm init

To start Finch’s virtual environment (for example, after reboots) run:
finch vm start

Now, let’s run a simple container. The run command will pull an image if not already present, then create and start the container instance. The --rm flag will delete the container once the container command exits.

finch run --rm public.ecr.aws/finch/hello-finch
public.ecr.aws/finch/hello-finch:latest: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:a71e474da9ffd6ec3f8236dbf4ef807dd54531d6f05047edaeefa758f1b1bb7e: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:705cac764e12bd6c5b0c35ee1c9208c6c5998b442587964b1e71c6f5ed3bbe46: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:6cc2bf972f32c6d16519d8916a3dbb3cdb6da97cc1b49565bbeeae9e2591cc60: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 0.9 s total: 0.0 B (0.0 B/s)

(ASCII-art Finch logo)

Hello from Finch!

Visit us @ github.com/runfinch

Lima supports userspace emulation in the underlying virtual machine. While all the images we create and use in the following example are Linux images, the Lima VM natively runs the CPU architecture of your host system – which might be 64-bit Intel or Apple Silicon – and userspace emulation handles images built for the other architecture. In the following examples we show that no matter which CPU architecture your Mac uses, you can author, publish, and use images for either CPU family: we will build an x86_64 image on an Apple Silicon laptop, push it to ECR, and then run it on an Intel-based Mac.

To verify that we are running our commands on an Apple Silicon-based Mac, we can run uname and see the architecture listed as arm64:

uname -sm
Darwin arm64

Let’s create and run an amd64 container using the --platform option to specify the non-native architecture:

finch run --rm --platform=linux/amd64 public.ecr.aws/amazonlinux/amazonlinux uname -sm
Linux x86_64

The --platform option can be used for builds as well. Let’s create a simple Dockerfile with two lines:

FROM public.ecr.aws/amazonlinux/amazonlinux:latest
LABEL maintainer="Chris Short"

By default, Finch would build for the host’s CPU architecture platform, which we showed is arm64 above. Instead, let’s build and push an amd64 container to ECR. To build an amd64 image we add the --platform flag to our command:

finch build --platform linux/amd64 -t public.ecr.aws/cbshort/finch-multiarch .
[+] Building 6.5s (6/6) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 142B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for public.ecr.aws/amazonlinux/amazonlinux:latest 1.2s
=> [auth] aws:: amazonlinux/amazonlinux:pull token for public.ecr.aws 0.0s
=> [1/1] FROM public.ecr.aws/amazonlinux/amazonlinux:latest@sha256:d0cc2f24c888613be336379e7104a216c9aa881c74d6df15e30286f67 3.9s
=> => resolve public.ecr.aws/amazonlinux/amazonlinux:latest@sha256:d0cc2f24c888613be336379e7104a216c9aa881c74d6df15e30286f67 0.0s
=> => sha256:e3cfe889ce0a44ace07ec174bd2a7e9022e493956fba0069812a53f81a6040e2 62.31MB / 62.31MB 5.1s
=> exporting to oci image format 5.2s
=> => exporting layers 0.0s
=> => exporting manifest sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652 0.0s
=> => exporting config sha256:474c401eafe6b05f5a4b5b4128d7b0023f93c705e0328243501e5d6c7d1016a8 0.0s
=> => sending tarball 1.3s
unpacking public.ecr.aws/cbshort/finch-multiarch:latest (sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652)…
Loaded image: public.ecr.aws/cbshort/finch-multiarch:latest%

finch push public.ecr.aws/cbshort/finch-multiarch
INFO[0000] pushing as a reduced-platform image (application/vnd.docker.distribution.manifest.v2+json, sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652)
manifest-sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:474c401eafe6b05f5a4b5b4128d7b0023f93c705e0328243501e5d6c7d1016a8: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 27.9s total: 1.6 Ki (60.0 B/s)

At this point we’ve created an image on an Apple Silicon-based Mac which can be used on any Intel/AMD CPU architecture Linux host with an OCI-compliant container runtime. This could be an Intel or AMD CPU EC2 instance, an on-premises Intel NUC, or, as we show next, an Intel CPU-based Mac. To show this capability, we’ll run our newly created image on an Intel-based Mac where we have Finch already installed. Note that we have run uname here to show the architecture of this Mac is x86_64, which is analogous to what the Go programming language references 64-bit Intel/AMD CPUs as: amd64.

uname -a
Darwin wile.local 21.6.0 Darwin Kernel Version 21.6.0: Thu Sep 29 20:12:57 PDT 2022; root:xnu-8020.240.7~1/RELEASE_X86_64 x86_64

finch run --rm --platform linux/amd64 public.ecr.aws/cbshort/finch-multiarch:latest uname -a
public.ecr.aws/cbshort/finch-multiarch:latest: resolved |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:af61210145ded93bf2234d63ac03baa24fe50e7187735f0849d8383bd5073652: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:474c401eafe6b05f5a4b5b4128d7b0023f93c705e0328243501e5d6c7d1016a8: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:e3cfe889ce0a44ace07ec174bd2a7e9022e493956fba0069812a53f81a6040e2: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 9.2 s total: 59.4 M (6.5 MiB/s)
Linux 73bead2f506b 5.17.5-300.fc36.x86_64 #1 SMP PREEMPT Thu Apr 28 15:51:30 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

You can see the commands and options are familiar. As Finch is passing through our commands to the nerdctl client, all of the command syntax and options are what you’d expect, and new users can refer to nerdctl’s docs.

Another use case is multi-container application testing. Let’s use yelb as an example app that we want to run locally. What is yelb? It’s a simple web application with a cache, database, app server, and UI. These are all run as containers on a network that we’ll create. We will run yelb locally to demonstrate Finch’s compose features for microservices:

finch vm init
INFO[0000] Initializing and starting finch virtual machine…
INFO[0079] Finch virtual machine started successfully

finch compose up -d
INFO[0000] Creating network localtest_default
INFO[0000] Ensuring image redis:4.0.2
docker.io/library/redis:4.0.2: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:cd277716dbff2c0211c8366687d275d2b53112fecbf9d6c86e9853edb0900956: done |++++++++++++++++++++++++++++++++++++++|

[ snip ]

layer-sha256:afb6ec6fdc1c3ba04f7a56db32c5ff5ff38962dc4cd0ffdef5beaa0ce2eb77e2: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 11.4s total: 30.1 M (2.6 MiB/s)
INFO[0049] Creating container localtest_yelb-appserver_1
INFO[0049] Creating container localtest_redis-server_1
INFO[0049] Creating container localtest_yelb-db_1
INFO[0049] Creating container localtest_yelb-ui_1

The output indicates that a network was created, images were pulled, and four containers were created, started, and are now all running in our local test environment.

In this test case, we’re using Yelb to figure out where a small team should grab lunch. We share the URL with our team, folks vote, and we see the results via the UI.

What’s next for Finch?

The project is just getting started. The team will work on adding features iteratively, and is excited to hear from you. We have ideas on making the virtualization more minimal, with faster boot times to make it more transparent for users. We are also interested in making Finch extensible, allowing for optional add-on functionality. As the project evolves, the team will direct contributions into the upstream dependencies where appropriate. We are excited to support and contribute to the success of our core dependencies: nerdctl, containerd, BuildKit, and Lima. As mentioned previously, one of the exciting things about Finch is shining a light on the projects it depends upon.

Please join us! Start a discussion, open an issue with new ideas, or report any bugs you find, and we are definitely interested in your pull requests. We plan to evolve Finch in public, by building out milestones and a roadmap with input from our users and contributors. We’d also love feedback from you about your experiences building and using containers daily and how Finch might be able to help!


12+ Best Node.js Frameworks for Web App Development in 2022

Node.js is getting increasingly popular among developers, to the point where some developers call Node.js their primary choice for backend development. In this article, we review the 12 best Node.js web frameworks that we rate according to their popularity and unique toolkits for time and cost-efficiency.

Is Node.js a web framework?

So is Node.js a web framework? It’s most commonly referred to as one, but strictly speaking Node.js is a JavaScript execution environment – a server-side platform for running JavaScript code. Web frameworks, by contrast, focus on providing ready-made features for building applications. Developers have built many web frameworks on top of Node.js – such as Nest.js and Express.js – each providing a unique experience for software developers.

What are Node.js web frameworks?

Every web application technology offers different types of frameworks, all supporting a specific use case in the development lifecycle. Node.js web frameworks come in three types – Full-Stack Model-View-Controller (MVC), MVC, and REST API web frameworks.

Node.js web framework features

Node.js APIs are asynchronous: the server can move on after issuing a data request instead of blocking until the API returns the information.
Node.js code execution is fast compared to most other backend platforms.
Node.js runs on a single-threaded, event-loop model.
Developers rarely face buffering issues with Node.js web frameworks, because data is transferred in chunks.
Node.js is built on Google’s V8 JavaScript engine.

Through these features, it is easy to see why developers so often choose Node.js for backend development. Let’s take a closer look at each Node.js web framework.

NestJS

Github repo: https://github.com/nestjs/nest
License: MIT
Github stars: 47400

NestJS combines object-oriented and functional-reactive programming (FRP) and is widely used for developing enterprise-level dynamic and scalable web solutions, being well featured with extensive libraries.

NestJS uses TypeScript as its core programming language, but it is also fully compatible with plain JavaScript, and it integrates easily with other frameworks such as ExpressJS through a command-line interface.

Why use NestJS:

Modern CLI
Functional-reactive programming
Multiple easy-to-use external libraries
Straightforward Angular compatibility

NestJS has a clean and modular architecture pattern that helps developers build scalable and maintainable applications with ease.

Pros of NestJS:

Powerful but super friendly to work with
Fast development
Easy to understand documentation
Angular style syntax for the backend

NodeJS ecosystem
TypeScript
It’s easy to understand since it follows Angular syntax
Good architecture
Integrates with Narwhal Extensions
TypeScript makes it well integrated in VS Code
Easy GraphQL support
Agnosticism
Easily integrates with external extensions

ExpressJS

Github repo: https://github.com/expressjs/express
License: MIT
Github stars: 57200

ExpressJS is minimalistic, asynchronous, fast, and powerful and was launched in 2010. It’s beginner-friendly thanks to a low learning curve that requires only a basic understanding of the Node.js environment and programming skills. ExpressJS handles client-to-server requests and observed user interactions quickly via its API, and it also helps you manage high-speed I/O operations.

Why use ExpressJS:

Enhanced content coordination
MVC architecture pattern
HTTP helpers
Asynchronous programming to support multiple independent operations

ExpressJS offers templating, robust routing, security and error handling, making it suitable for building enterprise or browser-based applications.
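The middleware pipeline at the heart of ExpressJS can be sketched without the framework itself (a toy, dependency-free illustration of the pattern, not Express’s actual implementation): each middleware receives the request, the response, and a next() callback that hands control to the next function in the stack.

```javascript
// Toy sketch of Express-style middleware chaining.
function createApp() {
  const stack = [];
  return {
    use(fn) { stack.push(fn); },
    handle(req, res) {
      let i = 0;
      const next = () => {
        const fn = stack[i++];
        if (fn) fn(req, res, next); // each middleware decides whether to continue
      };
      next();
    },
  };
}

const app = createApp();
app.use((req, res, next) => { req.user = "ada"; next(); }); // e.g. auth middleware
app.use((req, res) => { res.body = `hello ${req.user}`; }); // final handler

const req = { url: "/" };
const res = {};
app.handle(req, res);
console.log(res.body); // "hello ada"
```

Real Express adds routing, error handling, and HTTP plumbing on top, but the control flow is essentially this chain of `(req, res, next)` functions.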

Pros of ExpressJS:

Simple
NodeJS
Javascript
High performance
Robust routing
Middlewares
Open source
Great community
Hybrid web applications
Well documented
Lightweight

Meteor

Github repo: https://github.com/meteor/meteor
License: MIT
Github stars: 42900

Meteor is a full-featured, open-source Node.js web framework, launched in 2012, that works best for teams who want to develop in a single language. Meteor is ideal for modern real-time applications, as it facilitates instant data transfer between server and client.

Why use Meteor:

Cross-platform web framework
Rapid prototyping using the CLI
Extensive community support and open-source code
End-to-end solution
Seamless integration with other frameworks

Meteor is an excellent option for those who know and prefer JavaScript, and it’s a great fit for both web and mobile app development. Meteor shines for applications that need to push out lots of updates, even in a live environment.

Pros of Meteor:

Real-time
Full stack, one language
Best app dev platform available today
Data synchronization
Javascript
Focus on your product not the plumbing
Hot code pushes
Open source
Live page updates
Latency compensation
Ultra-simple development environment
Great for beginners
Smart Packages

KoaJS

Github repo: https://github.com/koajs/koa
License: MIT
Github stars: 32700

Koa has been called the next-generation Node.js web framework, and it’s one of the best of the bunch. Koa uses a stack-based approach to handling HTTP middleware, which makes it a great option for easy API development. Koa is similar to ExpressJS, so it’s fairly easy to switch between the two. While offering similar features and flexibility, Koa reduces the complexity of writing code even further.

Why use Koa:

Multi-level customisation
Considered a lightweight version of ExpressJS
Supplied with cascading middleware (user experience personalisation)
Node mismatch normalization
Cleans caches and supports content and proxy negotiation

Use Koa when performance is the main focus of your web application. Koa is ahead of ExpressJS in some scenarios, so you can use it for large-scale projects. 
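Koa’s cascading middleware can likewise be sketched in dependency-free JavaScript (a toy model, not Koa’s source): because each middleware awaits the rest of the stack, it can run code both before and after the downstream handlers.

```javascript
// Toy sketch of Koa's "onion" middleware model.
async function compose(middleware, ctx) {
  let i = 0;
  const dispatch = async () => {
    const fn = middleware[i++];
    if (fn) await fn(ctx, dispatch); // await the rest of the stack
  };
  await dispatch();
}

const ctx = { trace: [] };
compose([
  async (c, next) => {
    c.trace.push("in:logger");
    await next();               // run downstream middleware…
    c.trace.push("out:logger"); // …then resume on the way back out
  },
  async (c) => {
    c.trace.push("handler");
  },
], ctx).then(() => console.log(ctx.trace));
// → [ 'in:logger', 'handler', 'out:logger' ]
```

This “before and after” shape is what makes things like response-time logging a three-line middleware in Koa.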

Pros of Koa:

Async/Await
JavaScript
REST API

socket.io

Github repo: https://github.com/socketio/socket.io
License: MIT
Github stars: 55900

Socket.IO is a JavaScript library that works most effectively for real-time web applications. It’s used when communication between web clients and servers needs to be efficient and bidirectional.

Why use socket.io:

Binary support
Multiplexing support
Reliability
Auto-reconnection support
Auto-correction and error detection 

Socket.IO is a great choice when building real-time applications like video conferencing, chat rooms, and multiplayer games, where servers need to push data to clients before it’s requested from the client side.

Pros of Socket.IO:

Real-time
Event-based communication
NodeJS
WebSockets
Open source
Binary streaming
No internet dependency
Large community

TotalJS

Github repo: https://github.com/totaljs/
License: MIT
Github stars: n/a

TotalJS is a web framework that offers a CMS-like user experience and almost all the functionality you need in a Node.js environment. It’s a fully open-source framework that gives developers great flexibility. Various options are available for the framework, e.g. CMS and HelpDesk, which give your application more integration possibilities with REST services and help you build hyper-fast, low-maintenance, stable applications.

TotalJS is most well-known for its real-time, high-precision tracking in modern applications. 

Pros of TotalJS:

Tracking in real-time
API Testing
Automatic project discovery
Compatibility with multiple databases
Flexibility to work with different frontend frameworks
Fast development and low cost of maintenance

SailsJS

Github repo: https://github.com/balderdashy/sails
License: MIT
Github stars: 22247

SailsJS follows the MVC architecture pattern of web frameworks such as Ruby on Rails and supports modernised, data-centric development. It’s compatible with all databases and flexibly integrates with JavaScript frameworks, which makes SailsJS well suited to building high-quality custom applications. Its code-generation approach helps reduce the amount of code you need to write, and it lets you integrate npm modules while remaining flexible and open source.

Pros of SailsJS:

REST API auto-generation
Multiple security policies
Frontend agnosticism
Object Relational Mapping for framework databases compatibility
Supports ExpressJS integration for HTTP requests and socket.io for WebSockets 

FeathersJS

Github repo: https://github.com/feathersjs/feathers
License: MIT
Github stars: 14000

FeathersJS is gaining popularity among website and application developers because it provides flexibility in development with React Native as well as Node.js. It suits microservice architectures, since it works with more than one database and provides real-time functionality. FeathersJS makes it easier for web developers to write concise, understandable code.

Pros of FeathersJS:

Reusable services
Modern CLI
Automated RESTful APIs
Authentication and authorization plugins by default
Lightweight

FeathersJS natively supports all frontend technologies, and, being database-agnostic, it performs best in a Node.js environment thanks to its support for both JavaScript and TypeScript. It lets you create production-ready applications, real-time applications, and REST APIs in just a few days.

hapi.dev

Github repo: https://github.com/hapijs/hapi
License: MIT
Github stars: 13900

Hapi is an open-source framework for web applications. It’s well known for proxy server development as well as for REST APIs and other desktop applications, since the framework is robust and security-rich. It offers a wealth of built-in plugins, so you don’t have to worry about relying on unofficial middleware.

Pros of Hapi:

Extensive and scalable applications
Low overhead
Secure default settings
Rich ecosystem
Quick and easy bug fixes
Compatible with multiple databases
Compatible with REST APIs and HTTPS proxy applications
Caching, authentication and input validation by default

AdonisJS

Github repo: https://github.com/adonisjs/core
License: MIT
Github stars: 12600

AdonisJS is a Model-View-Controller Node.js web framework whose structure is modeled on Laravel. The framework reduces development time by taking care of core details such as out-of-the-box WebSocket support, development speed and performance, lifecycle dependency management, and built-in modules for data validation, mailing, and authentication. Its command-based coding structure and interface are easy for developers to understand. The framework uses dependency injection via an IoC (inversion of control) container, which gives developers an organized way to access all the components of the framework.
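The IoC container mentioned above can be pictured as a registry that maps names to factory functions; the sketch below is a hypothetical, minimal container meant to show the idea, not Adonis's actual API:

```typescript
// A minimal IoC container: bindings are registered under a name and
// resolved lazily, so consumers depend on names rather than concrete classes.
type Factory<T> = () => T;

class Container {
  private bindings = new Map<string, Factory<unknown>>();

  // Register a factory under a name.
  bind<T>(name: string, factory: Factory<T>): void {
    this.bindings.set(name, factory);
  }

  // Resolve a binding, throwing if it was never registered.
  resolve<T>(name: string): T {
    const factory = this.bindings.get(name);
    if (!factory) throw new Error(`No binding for "${name}"`);
    return factory() as T;
  }
}

// Usage: a mailer service is registered once and resolved by name,
// so callers never construct it directly.
const container = new Container();
container.bind('mailer', () => ({ send: (to: string) => `sent to ${to}` }));
const mailer = container.resolve<{ send: (to: string) => string }>('mailer');
```

The payoff is that swapping implementations (say, a fake mailer in tests) only requires re-binding the name, not touching the consumers.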

Pros of AdonisJS:

Organised template with folder structure
Easy user input validation
Ability to write custom functional testing scripts
Support for Lucid object-relational mapping
Threat protection such as cross-site request forgery protection

Loopback

Github repo: https://github.com/loopbackio/loopback-next
License: MIT
Github stars: 4200

Loopback connects well with any Node.js web framework and can be integrated with multiple API services. It is best used to build REST APIs with minimal lead time. Loopback offers outstanding flexibility, interfacing with a broad range of browsers, devices, databases, and services. The framework's structured code helps support application modules and development speed. Loopback's excellent documentation makes it approachable even for beginners.

Pros of Loopback:

Comprehensive support for networked applications
The built-in client API explorer
High extensibility
Multiple database support
Clean and modular code
Full-stack development
Data storage, third-party access, and user management

Loopback is designed solely for creating powerful end-to-end APIs and handling requests for them. 

DerbyJS

Github repo: https://github.com/derbyjs/derby
License: MIT
Github stars: 4622

DerbyJS is a full-stack web application development platform powered by Node.js. It uses the Model-View-Controller architecture with an easy-to-write nomenclature for coding. The framework is great for building real-time web applications since it allows essentially the same code to run on Node.js and in the browser, so you don't have to write separate code for the view layer. DerbyJS reduces delays in content delivery by rendering the client-side view on the server, which makes the application SEO-friendly and improves the user experience.

Pros of DerbyJS:

Support for Racer Engine
Real-time conversion for data synchronization
Offline use and conflict resolution support
Version control
Client-side and server-side code sharing
Rendering client-side views on the server-side

Conclusion

Node.js web frameworks make application development easier and open up enormous possibilities for web and mobile development. With technologies evolving so quickly, a thorough investigation of project requirements and available resources is the key to choosing a web framework that will deliver the best results.

The post 12+ Best Node.js Frameworks for Web App Development in 2022 appeared first on Flatlogic Blog.

Caching NextJS Apps with Serverless Redis using Upstash

The applications we build today are sophisticated. Every time a user loads a webpage, their browser has to download a large amount of data to display that page; a website may hold millions of records and serve hundreds of API calls. There are many strategies for moving that data between server and client smoothly, with minimal delay. As developers we want our app to deliver the best user experience possible, and there is a variety of techniques we can employ to achieve this.

There are a number of ways to address this. The best optimization is to apply techniques that reduce the latency of read/write operations against the database, and one of the most popular ways to optimize API calls is to implement a caching mechanism.

What is Caching?

Caching is the process of storing copies of files in a cache, or temporary storage location so that they can be accessed more quickly. Technically, a cache is any temporary storage location for copies of files or data, but the term is often used in reference to Internet technologies.

Cloudflare.com

The most common example of caching is the browser cache, which stores frequently accessed website resources locally so they don't have to be retrieved over the network each time they are needed. Caching can relieve the performance bottlenecks of our web applications, and when dealing with heavy network traffic and large API calls it can be one of the best options for performance optimization.
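The core idea fits in a few lines: a cache sits in front of an expensive lookup so that repeat requests skip the slow path. The sketch below is purely illustrative, using an in-memory Map with the expensive call stubbed out:

```typescript
// A tiny read-through cache: results of an expensive lookup are stored
// in a Map so repeated calls for the same key return instantly.
const cache = new Map<string, string>();
let misses = 0; // counts how often we had to do the slow work

function expensiveLookup(key: string): string {
  misses += 1; // stands in for a network call or database query
  return `value-for-${key}`;
}

function cachedLookup(key: string): string {
  const hit = cache.get(key);
  if (hit !== undefined) return hit; // served from the cache
  const value = expensiveLookup(key); // slow path, taken once per key
  cache.set(key, value);
  return value;
}

cachedLookup('pikachu'); // miss: does the expensive work
cachedLookup('pikachu'); // hit: served straight from the Map
```

Calling `cachedLookup` twice with the same key performs the expensive work only once; every subsequent call is a cache hit.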

Redis: Caching in Server-side

When we talk about caching on the server, one of the pioneers among cache-capable databases is Redis. Redis (REmote DIctionary Server) is an open-source NoSQL in-memory key-value data store. One of the best things about Redis is that data can be persisted in the database, remaining there until we delete or flush it manually. Because it is an in-memory database, its data-access operations are faster than those of any disk-based database, which makes Redis an excellent choice for caching.

Redis can also be used as a primary database if needed. With Redis, cached data can be accessed and re-accessed as many times as needed without running the database query again. Depending on the setup, the cache can stay in memory for a few minutes, a few hours, or longer, and we can even set an expiration time for our cache, which we will do in our demo application.

Redis is able to handle huge amounts of data in real time, using its in-memory data storage to support highly responsive database constructs. Caching with Redis allows for fewer database accesses, which helps reduce the amount of traffic and the number of instances required, even achieving sub-millisecond latency.

We will implement Redis in our Next application and see the performance gain we can achieve.

Let’s dive into it.

Initializing our Project

Before we begin, I assume you have Node installed on your machine so that you can follow along. We will use Next for our project because it lets us write front-end and back-end logic with no configuration needed. Create a starter project with the following command:

$ npx create-next-app@latest --typescript

After the command, give the project the desired name. After everything is done and the project is made for us we can add the dependencies we need to work on in this demo application.

$ npm i ioredis @chakra-ui/core @emotion/core @emotion/styled emotion-theming
$ npm i --save-dev @types/node @types/ioredis

The commands above install all the dependencies we will use in this project. We will use ioredis to communicate with our Redis database and ChakraUI to style things up.

Since we are using TypeScript, we also need the type definitions for Node and ioredis, which we installed in the second command as local dev dependencies.

Setting up Redis with Upstash

We now need to connect our application to Redis. You can run Redis locally and connect to it from your application, or use a Redis cloud instance. For this demo we will use Upstash Redis.

Upstash is a serverless database service for Redis. With servers/instances you pay per hour or a fixed price; with serverless you pay per request, which means we are not charged when the database is not in use. Upstash configures and manages the database for you.

Head to the official Upstash website and start with the free plan; for our demo we don't need to pay. After creating your account, open the Upstash console and create a new serverless Redis database.

You can find an example ioredis connection string in the Upstash dashboard. Copy the URL shown under the blue overlay; we will use this connection string to connect to the serverless Redis instance provided in Upstash's free tier.

import Redis from "ioredis";
export const redisConnect = new Redis(process.env.REDIS_URL);

In the snippet above we connect our app to the database, with the REDIS_URL environment variable holding the connection string we copied. We can now use the Redis server instance provided by Upstash inside our app.

Populating static data

The application we are building may not be an exact production use case, but it lets us see the caching performance Redis can bring to an application and how it's done.

Here we are building a Pokemon application where users can select a Pokemon from a list and view its details. We will cache each visited Pokemon; in other words, if a user visits the same Pokemon twice, they will receive the cached result.

Let’s populate some data inside of our Pokemon options.

export const getStaticProps: GetStaticProps = async () => {
  const res = await fetch(
    'https://pokeapi.co/api/v2/pokemon?limit=200&offset=200'
  );
  const { results }: GetPokemonResults = await res.json();

  return {
    props: {
      pokemons: results,
    },
  };
};

We call the endpoint to fetch the names of all the Pokemon. getStaticProps lets us fetch data at build time: the getStaticProps() function supplies the props the Home component needs to render pages that are generated at build time, not at runtime, and are therefore static.

const Home: NextPage<{ pokemons: Pokemons[] }> = ({ pokemons }) => {
  const [selectedPokemon, setSelectedPokemon] = useState<string>('');
  const toast = useToast();
  const router = useRouter();

  const handleSelect = (e: any) => {
    setSelectedPokemon(e.target.value);
  };

  const searchPokemon = () => {
    if (selectedPokemon === '')
      return toast({
        title: 'No pokemon selected',
        description: 'You need to select a pokemon to search.',
        status: 'error',
        duration: 3000,
        isClosable: true,
      });
    router.push(`/details/${selectedPokemon}`);
  };

  return (
    <div className={styles.container}>
      <main className={styles.main}>
        <Box my="10">
          <FormControl>
            <Select
              id="country"
              placeholder={
                selectedPokemon ? selectedPokemon : 'Select a pokemon'
              }
              onChange={handleSelect}
            >
              {pokemons.map((pokemon, index) => {
                return <option key={index}>{pokemon.name}</option>;
              })}
            </Select>
            <Button
              colorScheme="teal"
              size="md"
              ml="3"
              onClick={searchPokemon}
            >
              Search
            </Button>
          </FormControl>
        </Box>
      </main>
    </div>
  );
};

We have successfully populated our dropdown with static data for selecting a Pokemon. Next, let's make the search button redirect to a dynamic route for the selected Pokemon name.

Adding dynamic page

Creating a dynamic page in Next is simple thanks to its file-based routing, which we can leverage to add dynamic routes. Let's create a details page for our Pokemon.

const PokemonDetail: NextPage<{ info: PokemonDetailResults }> = ({ info }) => {
  return (
    <div>
      {/* map our data here */}
    </div>
  );
};

export const getServerSideProps: GetServerSideProps = async (context) => {
  const { id } = context.query;
  const name = id as string;
  const data = await fetch(`https://pokeapi.co/api/v2/pokemon/${name}`);
  const response: PokemonDetailResults = await data.json();

  return {
    props: {
      info: response,
    },
  };
};

By using getServerSideProps we rely on the server-side rendering Next provides, pre-rendering the page on each request with the data getServerSideProps returns. This comes in handy when we want to fetch data that changes often and keep the page updated with the most current data. After receiving the data, we map over it to display it on the screen.

So far we have not actually implemented any caching in our project. Each time a user visits the page we hit the API endpoint and send back the data they requested. Let's move ahead and add caching to the application.

Caching data

To implement caching, we first want to read from our Redis database. As discussed, Redis stores its data as key-value pairs. We will check whether the requested key is stored in Redis and feed the client the corresponding data. To achieve this we will create a function that reads Redis for the key the client is requesting.

export const fetchCache = async <T>(key: string, fetchData: () => Promise<T>) => {
  const cachedData = await getKey(key);
  if (cachedData) return cachedData;
  return setValue(key, fetchData);
};

When the client requests data they have not visited yet, we serve them a copy of the data from the server and, behind the scenes, store a copy in our Redis database, so that we can serve the data quickly via Redis on the next request.

We will write a function that takes a key and, if that key exists in the database, returns the parsed value to the client.

const getKey = async <T>(key: string): Promise<T | null> => {
  const result = await redisConnect.get(key);
  if (result) return JSON.parse(result);
  return null;
};

We also need a function that takes a key and stores a new value under it in the database, but only when we don't already have that key in Redis.

const setValue = async <T>(key: string, fetchData: () => Promise<T>): Promise<T> => {
  const data = await fetchData();
  await redisConnect.set(key, JSON.stringify(data));
  return data;
};
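Together, getKey, setValue, and fetchCache implement the classic cache-aside pattern. The sketch below reproduces the same flow against a plain Map standing in for the Redis client, so the logic can be followed without a database; the Map stub and fakeApi are illustrative, not ioredis or the real PokeAPI call:

```typescript
// Cache-aside with an in-memory store standing in for Redis.
const store = new Map<string, string>(); // stand-in for redisConnect
let originCalls = 0; // how many times we hit the "real" API

const getKey = async <T>(key: string): Promise<T | null> => {
  const result = store.get(key);
  return result ? (JSON.parse(result) as T) : null;
};

const setValue = async <T>(key: string, fetchData: () => Promise<T>): Promise<T> => {
  const data = await fetchData();
  store.set(key, JSON.stringify(data));
  return data;
};

const fetchCache = async <T>(key: string, fetchData: () => Promise<T>): Promise<T> => {
  const cached = await getKey<T>(key);
  if (cached) return cached; // cache hit: skip the origin entirely
  return setValue(key, fetchData); // cache miss: fetch, store, return
};

const fakeApi = async () => {
  originCalls += 1; // stands in for the network request
  return { name: 'pikachu' };
};
```

Calling `fetchCache('pikachu', fakeApi)` twice invokes fakeApi only once; the second call is served entirely from the store.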

By now we have written everything we need to implement caching; all that's left is to invoke the function from our dynamic pages. Inside [id].tsx we make a minor tweak so that the API call happens only when the requested key is not in Redis.

For this to happen, we will pass a function as an argument to our fetchCache function.

export const getServerSideProps: GetServerSideProps = async (context) => {
  const { id } = context.query;
  const name = id as string;

  const fetchData = async () => {
    const data = await fetch(`https://pokeapi.co/api/v2/pokemon/${name}`);
    const response: PokemonDetailResults = await data.json();
    return response;
  };

  const cachedData = await fetchCache(name, fetchData);

  return {
    props: {
      info: cachedData,
    },
  };
};

We tweaked the code we wrote before, importing and using the fetchCache function inside the dynamic page. The function takes our fetcher as an argument and performs the key check for us.

Adding expiry

The expiration policy employed by a cache is another factor that determines how long a cached item is retained. The policy is usually assigned to the object when it is added to the cache, and it can be customized according to the type of object being cached. A common strategy assigns an absolute expiration time to each object when it is added to the cache; once that time passes, the item is removed from the cache.
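The policy described above can be modeled in a few lines: store an absolute deadline next to each value and treat anything past it as a miss. This is an illustrative in-memory sketch; with Redis, the server performs this eviction for us:

```typescript
// Each cache entry carries an absolute expiry time in milliseconds.
type Entry<T> = { value: T; expiresAt: number };

const ttlCache = new Map<string, Entry<unknown>>();

// Store a value with a time-to-live; `now` is injectable for testing.
function setWithTtl<T>(key: string, value: T, ttlMs: number, now = Date.now()): void {
  ttlCache.set(key, { value, expiresAt: now + ttlMs });
}

// Return the value only while it is still fresh; evict it otherwise.
function getFresh<T>(key: string, now = Date.now()): T | null {
  const entry = ttlCache.get(key);
  if (!entry) return null;
  if (now >= entry.expiresAt) {
    ttlCache.delete(key); // expired: evict and treat as a miss
    return null;
  }
  return entry.value as T;
}

// A one-day TTL, expressed in milliseconds.
setWithTtl('pokemon', { name: 'mew' }, 24 * 60 * 60 * 1000);
```

An entry written with a one-second TTL is readable just before the deadline and gone just after it.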

Let’s also use the caching expiration feature of Redis in our Application. To implement this we just need to add a parameter to our fetchCache function.

const cachedData = await fetchCache(name, fetchData, 60 * 60 * 24);

return {
  props: {
    info: cachedData,
  },
};

export const fetchCache = async (key: string, fetchData: () => Promise<unknown>, expiresIn: number) => {
  const cachedData = await getKey(key);
  if (cachedData) return cachedData;
  return setValue(key, fetchData, expiresIn);
};

const setValue = async <T>(key: string, fetchData: () => Promise<T>, expiresIn: number): Promise<T> => {
  const data = await fetchData();
  await redisConnect.set(key, JSON.stringify(data), "EX", expiresIn);
  return data;
};

For each key stored in our Redis database we have added an expiry time of one day. When that time elapses, Redis automatically removes the object from the cache so that it can be refreshed by calling the API again. This really helps when we want to serve clients reasonably fresh data without calling the origin on every request.

Performance testing

All of this effort was in service of app performance and optimization, so let's take a look at how our application performs.

This may not be a meaningful performance test for a small application, but an app serving thousands of API calls over a large data set can see a big advantage.

I will use the perf_hooks module to measure how long our Next lambda takes to complete an invocation. It is not provided by Next itself; it is imported from Node. With these APIs you can measure the time it takes individual dependencies to load, how long your app takes to start, and even how long individual web service API calls take, letting you make more informed decisions about the efficiency of specific code blocks or algorithms.

import { performance } from "perf_hooks";

const startPerfTimer = (): number => {
  return performance.now();
};

const endPerfTimer = (): number => {
  return performance.now();
};

const calculatePerformance = (startTime: number, endTime: number): void => {
  console.log(`Response took ${endTime - startTime} milliseconds`);
};

Creating a function for a single line of code may be overkill, but it lets us reuse these helpers throughout the application when needed. We add these function calls to our application and look at the resulting latency in milliseconds (ms), which can affect overall app performance.
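As a concrete example of how such helpers are used, the sketch below wraps a piece of work and reports the elapsed time. The measured function here is an arbitrary stand-in, not the app's real fetch, and the numbers will vary by machine:

```typescript
import { performance } from "perf_hooks";

// Measure how long a synchronous piece of work takes and report it.
function measure<T>(label: string, work: () => T): { result: T; elapsedMs: number } {
  const start = performance.now();
  const result = work();
  const elapsedMs = performance.now() - start;
  console.log(`${label} took ${elapsedMs.toFixed(2)} ms`);
  return { result, elapsedMs };
}

// Example: timing a JSON round-trip, standing in for a cached vs uncached fetch.
const { elapsedMs } = measure('json round-trip', () =>
  JSON.parse(JSON.stringify({ name: 'pikachu' }))
);
```

Wrapping both the cached and the uncached code path in `measure` is what produces the before/after comparison discussed below.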

In the screenshot above we can see the millisecond-level improvement in fetching the response. This is a small improvement in the small application we have built, but it can be a huge time and performance boost when working with large data sets.

Conclusion

Data-heavy applications need caching to improve response times and even reduce data-volume and bandwidth costs. With the help of Redis we can cut out expensive database operations, third-party API calls, and server-to-server requests by keeping a copy of previous responses in our Redis instance.

In some cases we may need to delegate caching to another application, a microservice, or any key-value storage system that lets us store data and use it when needed. We chose Redis since it is open source and very popular in the industry. Redis's other notable features include data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, HyperLogLogs, and many more.

I highly recommend you visit the Redis documentation here to gain an in-depth understanding of the other features provided out of the box. Now we can go forth and use Redis to cache frequently queried data in our applications and gain a considerable performance boost.

Please find the code repository here.

Happy coding!

The post Caching NextJS Apps with Serverless Redis using Upstash appeared first on Flatlogic Blog.

What is Node.js?

Node.js is a backend JavaScript runtime environment (RTE), created in 2009 by Ryan Dahl, that is used to build server-side applications like websites and internal API services. Node.js is also cross-platform: applications can run on operating systems such as macOS, Microsoft Windows, and Linux.

Node.js is powered by Google Chrome's V8 JavaScript engine, and web applications built on it are event-driven and asynchronous. Node.js also uses the world's largest ecosystem of open-source libraries, npm (the Node Package Manager).

The idea behind npm modules is a publicly available set of reusable components, installable via an online repository, with both version and dependency management.
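Version ranges are central to that dependency management. As a rough illustration, here is a deliberately simplified sketch of npm's caret (^) range matching; this is not npm's real semver implementation, which also handles 0.x versions, prereleases, and more:

```typescript
// Split "major.minor.patch" into numbers.
function parse(v: string): [number, number, number] {
  const [major, minor, patch] = v.split('.').map(Number);
  return [major, minor, patch];
}

// Simplified caret semantics: ^1.2.3 accepts any version >= 1.2.3
// that keeps the same major version (so 1.4.0 matches, 2.0.0 does not).
function satisfiesCaret(version: string, range: string): boolean {
  const [rMaj, rMin, rPat] = parse(range.slice(1)); // drop the leading '^'
  const [vMaj, vMin, vPat] = parse(version);
  if (vMaj !== rMaj) return false; // caret pins the major version
  if (vMin !== rMin) return vMin > rMin;
  return vPat >= rPat;
}
```

This is why a `^1.2.3` entry in package.json can pull in 1.4.0 after an `npm update`, but never 2.0.0.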

How the Node.js architecture works

Node.js has a limited set of thread pools of requests processing.
Node.js queues requests as they come in.
Then comes the Single-Threaded Event Loop – the core component that waits indefinitely for requests.
The loop picks up requests from the queue as they arrive and checks whether each one requires a blocking I/O operation.
If the request doesn't involve blocking I/O, the loop processes it and sends a response.
If the request does involve a blocking operation, the loop assigns a thread from the internal thread pool to handle it.
As soon as the blocking task is handled, the event loop continues monitoring and queueing requests. This is Node's non-blocking nature.
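The steps above can be sketched as a simple dispatch model. This is a conceptual illustration only; the real event loop lives inside libuv and is far more involved:

```typescript
// A toy model of the dispatch described above: the single-threaded loop
// answers cheap requests itself and delegates blocking ones to a pool.
type Job = { id: number; blocksOnIo: boolean };

function dispatch(queue: Job[]): { handledByLoop: number[]; sentToPool: number[] } {
  const handledByLoop: number[] = [];
  const sentToPool: number[] = [];
  for (const job of queue) {
    if (job.blocksOnIo) {
      sentToPool.push(job.id); // a pool thread handles the blocking I/O
    } else {
      handledByLoop.push(job.id); // processed and answered immediately
    }
  }
  return { handledByLoop, sentToPool };
}

const result = dispatch([
  { id: 1, blocksOnIo: false },
  { id: 2, blocksOnIo: true },
  { id: 3, blocksOnIo: false },
]);
```

The key property this models is that request 2's blocking work never holds up requests 1 and 3: the loop stays free to answer them.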

Why use Node.js

Single-Threaded Event Loop Model. Node.js uses a ‘Single-Threaded Event Loop Model’ architecture that manages multiple requests submitted via clients. While the main loop of events is performed by a single thread, the I/O work in the background is performed by separate threads because the I/O operations in the Node API are asynchronous (non-blocking design) to fit into the event loop. 

Performance. Built on Google Chrome's V8 JavaScript engine, Node.js allows us to run code faster and more easily.

High scalability. Applications in Node.js are very scalable because they work asynchronously: Node.js operates in a single thread, so when one request is submitted, processing begins and the loop is immediately ready to take the next request; when processing finishes, the response is sent back to the client.

NPM packages. The npm registry provides hundreds of thousands of free, reusable packages, so common functionality rarely has to be built from scratch.

Global community. Node.js has an enormous global community that actively communicates on GitHub, Reddit, and StackOverflow. Community members also share free tools, modules, packages, and frameworks with each other.

Extended hosting options. Node.js deployments are available through PaaS providers such as AWS and Heroku. Node.js can also minimize the number of servers required to host an application, ultimately reducing page load times by 50%.

Who uses Node.js

Node.js enables you to build business solutions that give you an edge over competitors, e.g.:

IoT apps;
SPA;
Chatbots;
Data Streaming, etc.

Node.js is quite popular: it is used for development by both global companies and startups. Below are some of the most notable examples:

Uber
Slack
Reddit
Figma
AliExpress
NASA
LinkedIn
eBay
Netflix
PayPal
Mozilla
Yandex

How to create your application with a Node.js backend using the Flatlogic Platform

Step 1. Choosing the Tech Stack

In this step, you’re setting the name of your application and choosing the stack: Frontend, Backend, and Database.

Step 2. Choosing the Starter Template

Then you’re choosing the design of the web app.

Step 3. Schema Editor

In this part you need to know which kind of application you want to build (for example, a CRM or an e-commerce app), and you define the database schema, i.e. the tables and the relationships between them.

If you are not familiar with database design and find it difficult to work out what the tables should be, we have prepared several ready-made example schemas of real-world apps that you can modify and build your app upon:

E-commerce app;
Time tracking app;
Books store;
Chat (messaging) app;
Blog.

The Flatlogic Platform offers you the opportunity to create a CRUD application with a Node.js backend in literally a few minutes. As a result, you get database models, a Node.js CRUD admin panel, and an API.

The post What is Node.js? appeared first on Flatlogic Blog.

Angular Server Side Rendering on Azure Static Web Apps

This post is about implementing Angular server-side rendering apps on Azure Static Web Apps. What is server-side rendering? A normal Angular application executes in the browser, rendering pages in the DOM in response to user actions. Angular Universal executes on the server, generating static application pages that later get bootstrapped on the client. This means the application usually renders more quickly, giving users a chance to view the application layout before it becomes fully interactive.

To get started, add the @nguniversal/express-engine package to your Angular application by running the ng add @nguniversal/express-engine command. Executing this command will modify a few files in your Angular application.

Now you can check the application by running npm run build:ssr and then npm run serve:ssr. This builds the app and serves it at localhost:4000. You won't see any difference when you browse the application. You can find more details about Angular server-side rendering, with its pros and cons, on the Angular website.

Next, let's push this app to GitHub so we can create a Static Web App for it. Before creating the project, I created a GitHub repo. After checking that everything works as expected, commit and push the changes to the GitHub repository. Then create a static web app in the Azure Portal: under deployment details, configure your GitHub repo and main branch, and in the build details choose Angular as the build preset. Change the output location from dist to dist/AngularSSRSWA/browser.

Click the Review and Create button to review the configuration and create the static web app. Once it is done, open your GitHub repository and find the workflows directory in the root (it will be under the .github directory). Edit the yml file and add an app_build_command entry with the value npm run prerender.
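After the edit, the deploy step of the workflow ends up looking roughly like this. This is an illustrative excerpt: the job layout, the token secret name, and the paths vary per repository, and AngularSSRSWA matches the project name used above:

```yaml
# Excerpt from .github/workflows/azure-static-web-apps-<name>.yml
- uses: Azure/static-web-apps-deploy@v1
  with:
    azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
    repo_token: ${{ secrets.GITHUB_TOKEN }}
    action: "upload"
    app_location: "/"
    api_location: ""
    output_location: "dist/AngularSSRSWA/browser"
    app_build_command: "npm run prerender"  # the line we add by hand
```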

You can find more details about app_build_command here: Build configuration for Azure Static Web Apps.

Commit the changes – Azure will build the application and deploy it.

This way you can deploy Angular server-side rendering / Angular Universal apps to Azure Static Web Apps. Here are some resources to help you learn more about Static Web Apps.

Quickstart: Building your first static site with Azure Static Web Apps
Configure front-end frameworks and libraries with Azure Static Web Apps

Happy Programming 🙂