What is Git and Why Use It

Introduction and Definition
Repositories and Commits
Branches and Merging
Pull Requests
Commands
Why use Git?
Articles You Might Like

What is Git

Git is a version control system for keeping track of changes to files. Using Git, you always have a record of every change and can return to specific versions when needed. It is easy to learn, takes up minimal space, and performs well. The feature that sets it apart from nearly every other SCM out there is its branching model. Git also makes it extremely simple to merge changes from several people into a single source. You can use GitHub or other online hosts, which also store backups of your files and their revision history.

Repositories and Commits

Repositories (or repos) are collections of code. To keep track of the development process, a repository stores the commits made to the project, or a collection of references to those commits.

A commit is a snapshot of your repository at a particular point in time. Commits capture a specific change, or series of changes, that you have made to files in the repository. Successive commits make up the history of the repository.

Branches and Merging

Generally, a branch is a set of code changes with a unique name. A repository can contain one or more branches. The main branch, into which all changes eventually merge, is typically called the master branch.

Merging is Git's method for combining forked histories. A merge integrates two (or more) sequences of commits into a single history; most commonly, it is used to join two branches that diverged from a common ancestor.
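As a quick sketch of this flow inside an existing repository (the branch and file names here are hypothetical, not from the article):

```shell
git checkout -b feature/login   # create and switch to a feature branch
echo "login page" > login.txt   # a hypothetical change
git add login.txt
git commit -m "Add login page"
git checkout -                  # switch back to the previous (main) branch
git merge feature/login         # merge the feature branch's history in
```

Because the feature branch is directly ahead of the main branch here, Git performs a fast-forward merge; diverged branches instead get a merge commit.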

Git Pull Requests

A pull request is a method for discussing changes before they are merged into your codebase. A pull request is not just a notification; it is a dedicated discussion forum for the suggested feature. If there are any problems with the changes, team members can provide feedback in the pull request and even refine the feature by pushing subsequent commits. All of this activity is tracked directly in the pull request.

Git commands

To use Git, developers run specific commands to copy, create, modify, and merge code. You can execute these commands directly from the command line or through an application. Here are some of those commands:

You will find more Git commands here.

Why use Git?

Concurrent development. Everyone has their own local copy of the code, and everyone can work on their own branches at the same time. Git also works offline, because almost all operations run locally.

Increased team speed and productivity. Git makes it easy for your team to keep track of changes to the code, so you can focus on writing code instead of wasting time tracking and merging different versions by hand. Git also stores the repository history locally, which makes most operations fast.

Open Source. Open source allows developers from all over the world to contribute to the software and make it more and more powerful through features and extra plugins. This has led to the Linux kernel consisting of 15 million lines of code.

Security. Git identifies every version, file, and directory with a SHA-1 hash, which protects the history of your work against accidental or malicious corruption.

Git Is an Industry Standard. It is highly popular, and major IDEs support it.

Articles you might like:

What is REST API
Docker and Why Use It?
What is API and How It Works

The post What is Git and Why Use It appeared first on Flatlogic Blog.

Commit your code as if it could be accidentally deployed

The one simple trick to do a better job as a programmer is to git commit as if your commit could be accidentally deployed (and it wouldn’t break the production environment…)

Why would this improve your work? Because it pushes for improvements in several areas.

If a commit needs to leave the project in a working state, you need to:

Be able to prove that after making your changes, the project is in a working state. This forces you to put automated tests in place.
Be able to make changes that don’t break existing code, or that only add new features. This forces you to think about first improving the existing code design to allow for your change to happen.
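As a small sketch of this discipline at the command line (the run-tests.sh script name is hypothetical), you can chain the test run and the commit so the commit only happens when the suite is green:

```shell
# '&&' aborts the chain if the test script exits non-zero,
# so a red test suite means nothing gets committed.
./run-tests.sh && git commit -m "Extract validation into its own module"
```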

To practice with the latter, I recommend learning about the Mikado method. It makes it easier to recognize prerequisites for the change you want to make, and then forces you to enable the real goals by implementing the prerequisites first.

This process is also known as “making the change easy, then making the easy change”. An amazing result of applying this process is that after some practice you’ll be able to make many, much smaller commits during the day. Each of those commits will leave the project in a working state. To me this always feels great because:

I’m not worried that my change breaks something.
I only make small jumps between stepping stones and I’m safe on each stone.
I can switch tasks when I need to.

Consultancy secrets

As a consultant I apply this “trick” to trigger improvement of the development process. I like to work with development teams on one of their real programming tasks, as an ensemble. We use a pomodoro timer, and we establish a goal for the “pomodoro” (25 minutes). Reflecting at the end of the pomodoro we often conclude two things:

We didn’t reach the goal, it was too ambitious
We won’t be able to commit anything until several pomodoros later

During all this time we don’t feel safe at all. We don’t have a sense of accomplishment either. Realizing this turns out to be a great starting point for improving the development process.

Implement a PWA using Blazor with BFF security and Azure B2C

The article shows how to implement a progressive web application (PWA) using Blazor which is secured using the backend for frontend architecture and Azure B2C as the identity provider.

Code https://github.com/damienbod/PwaBlazorBffAzureB2C

Setup and challenges with PWAs

The application is set up to implement all security in the trusted backend and reduce the security risks of the overall software. We use Azure B2C as the identity provider. When implementing the BFF security architecture, cookies are used to secure the Blazor WASM UI and its backend. Microsoft.Identity.Web is used to implement the authentication, as recommended by Microsoft for server rendered applications. Anti-forgery tokens, as well as all the other cookie protections, can be used to reduce the risk of CSRF attacks. This requires that the WASM application is hosted in an ASP.NET Core Razor page so that the dynamic data can be added; with PWA applications, this is not possible. To work around this, CORS preflight and custom headers can be used for this protection, together with SameSite cookies. The anti-forgery cookies need to be removed to support PWAs. Using CORS preflight has some disadvantages compared to anti-forgery cookies, but it works well.

Setup Blazor BFF with Azure B2C for PWA

The application is set up using the Blazor.BFF.AzureB2C.Template NuGet package. The template uses anti-forgery cookies, so all of the anti-forgery protection must be completely removed. The Azure App registrations and the Azure B2C user flows need to be set up, after which the application should work (without PWA support).

To set up the PWA support, you need to add an index.html file to the wwwroot of the Blazor client and a service worker JS script to implement the PWA. The index.html file adds what is required, and the serviceWorkerRegistration.js script is linked.

<!DOCTYPE html>
<html>
<!-- PWA / Offline Version -->
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" />
<base href="/" />
<title>PWA Blazor Azure B2C Cookie</title>
<link rel="stylesheet" href="css/bootstrap/bootstrap.min.css" />
<link href="css/app.css" rel="stylesheet" />
<link href="BlazorHosted.Client.styles.css" rel="stylesheet" />
<link href="manifest.json" rel="manifest" />
<link rel="apple-touch-icon" sizes="512x512" href="icon-512.png" />
</head>

<body>
<div id="app">
<div class="spinner d-flex align-items-center justify-content-center spinner">
<div class="spinner-border text-success" role="status">
<span class="sr-only">Loading...</span>
</div>
</div>
</div>

<div id="blazor-error-ui">
An unhandled error has occurred.
<a href="" class="reload">Reload</a>
<a class="dismiss">🗙</a>
</div>

<script src="_framework/blazor.webassembly.js"></script>
<script src="serviceWorkerRegistration.js"></script>
</body>

</html>

The serviceWorker.published.js script is fairly standard, except that the OpenID Connect redirect and signout URLs need to be excluded from the PWA cache and always served from the trusted backend. The registration script references the service worker so that no inline JavaScript remains in the HTML; we do not allow unsafe inline scripts anywhere in an application if we can avoid it.

navigator.serviceWorker.register('service-worker.js');

The service worker excludes all the required authentication URLs and any other required server URLs. The published script registers the PWA.

Note: if you would like to test the PWA locally without deploying the application, you can reference the published script directly and it will run locally. Be careful when testing: the script and the cache need to be emptied before each test run.

// Caution! Be sure you understand the caveats before publishing an application with
// offline support. See https://aka.ms/blazor-offline-considerations

self.importScripts('./service-worker-assets.js');
self.addEventListener('install', event => event.waitUntil(onInstall(event)));
self.addEventListener('activate', event => event.waitUntil(onActivate(event)));
self.addEventListener('fetch', event => event.respondWith(onFetch(event)));

const cacheNamePrefix = 'offline-cache-';
const cacheName = `${cacheNamePrefix}${self.assetsManifest.version}`;
const offlineAssetsInclude = [/\.dll$/, /\.pdb$/, /\.wasm/, /\.html/, /\.js$/, /\.json$/, /\.css$/, /\.woff$/, /\.png$/, /\.jpe?g$/, /\.gif$/, /\.ico$/, /\.blat$/, /\.dat$/];
const offlineAssetsExclude = [/^service-worker\.js$/];

async function onInstall(event) {
    console.info('Service worker: Install');

    // Fetch and cache all matching items from the assets manifest
    const assetsRequests = self.assetsManifest.assets
        .filter(asset => offlineAssetsInclude.some(pattern => pattern.test(asset.url)))
        .filter(asset => !offlineAssetsExclude.some(pattern => pattern.test(asset.url)))
        .map(asset => new Request(asset.url, { integrity: asset.hash, cache: 'no-cache' }));

    await caches.open(cacheName).then(cache => cache.addAll(assetsRequests));
}

async function onActivate(event) {
    console.info('Service worker: Activate');

    // Delete unused caches
    const cacheKeys = await caches.keys();
    await Promise.all(cacheKeys
        .filter(key => key.startsWith(cacheNamePrefix) && key !== cacheName)
        .map(key => caches.delete(key)));
}

async function onFetch(event) {
    let cachedResponse = null;
    if (event.request.method === 'GET') {
        // For all navigation requests, try to serve index.html from cache
        // If you need some URLs to be server-rendered, edit the following check to exclude those URLs
        const shouldServeIndexHtml = event.request.mode === 'navigate'
            && !event.request.url.includes('/signin-oidc')
            && !event.request.url.includes('/signout-callback-oidc')
            && !event.request.url.includes('/api/Account/Login')
            && !event.request.url.includes('/api/Account/Logout')
            && !event.request.url.includes('/HostAuthentication/');

        const request = shouldServeIndexHtml ? 'index.html' : event.request;
        const cache = await caches.open(cacheName);
        cachedResponse = await cache.match(request);
    }

    return cachedResponse || fetch(event.request, { credentials: 'include' });
}

The ServiceWorkerAssetsManifest definition needs to be added to the client project.

<ServiceWorkerAssetsManifest>service-worker-assets.js</ServiceWorkerAssetsManifest>

Now the PWA should work. The next step is to add the extra CSRF protection.

Setup CSRF protection using CORS preflight

CORS preflight can be used to protect against CSRF, in addition to SameSite cookies. All API calls should include a custom HTTP header, and the APIs must verify that this header is present.

This can be implemented in the Blazor WASM client by using a CSRF protection message handler that adds the custom header to every outgoing API request.

In the Blazor client, the message handler can be added to all of the HttpClient instances used in the Blazor WASM app.

builder.Services.AddHttpClient("default", client =>
{
    client.BaseAddress = new Uri(builder.HostEnvironment.BaseAddress);
    client.DefaultRequestHeaders
        .Accept
        .Add(new MediaTypeWithQualityHeaderValue("application/json"));

}).AddHttpMessageHandler<CsrfProtectionMessageHandler>();

builder.Services.AddHttpClient("authorizedClient", client =>
{
    client.BaseAddress = new Uri(builder.HostEnvironment.BaseAddress);
    client.DefaultRequestHeaders
        .Accept
        .Add(new MediaTypeWithQualityHeaderValue("application/json"));

}).AddHttpMessageHandler<AuthorizedHandler>()
  .AddHttpMessageHandler<CsrfProtectionMessageHandler>();

The CSRF CORS preflight header can be validated using an ActionFilter in the ASP.NET Core backend application. This is not the only way of doing this. The CsrfProtectionCorsPreflightAttribute extends ActionFilterAttribute, so only the OnActionExecuting method needs to be implemented. The custom header is validated, and if the check fails, an unauthorized result is returned. Whether you state the reason in the response is up to you; you may prefer to omit it to obfuscate this a bit.

public class CsrfProtectionCorsPreflightAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        var header = context.HttpContext
            .Request
            .Headers
            .Any(p => p.Key.ToLower() == "x-force-cors-preflight");

        if (!header)
        {
            // "X-FORCE-CORS-PREFLIGHT header is missing"
            context.Result = new UnauthorizedObjectResult("X-FORCE-CORS-PREFLIGHT header is missing");
            return;
        }
    }
}

The attribute can then be applied anywhere this protection is required. All secured routes where cookies are used should enforce this.

[CsrfProtectionCorsPreflight]
[Authorize(AuthenticationSchemes = CookieAuthenticationDefaults.AuthenticationScheme)]
[ApiController]
[Route("api/[controller]")]
public class DirectApiController : ControllerBase
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new List<string> { "some data", "more data", "loads of data" };
    }
}

Now the PWA works as a server-hosted application and is protected using BFF, with all security in the trusted backend.

Problems with this solution and Blazor

The custom header cannot be added when sending direct links, redirects, or forms that don’t use JavaScript. Anywhere a form requires the CORS preflight protection, an HttpClient that adds the header needs to be used instead.

This is a problem with the Azure B2C signin and signout. The signin redirects the whole application, but this is not much of a problem, because at signin time the identity has no cookie with sensitive data, or should have none. The signout only works correctly with Azure B2C as a form request from the whole application, not as an HttpClient API call from JavaScript. The CORS preflight header cannot be applied to an Azure B2C signout request if you require the session to be ended on Azure B2C. If you only require a local logout, then the HttpClient can be used.

Note: SameSite cookie protection also exists in modern browsers, so this second CSRF defense is not really critical, provided SameSite is implemented correctly and the browser enforces it.

Links

https://docs.microsoft.com/en-us/aspnet/core/blazor/security/webassembly/graph-api

Managing Azure B2C users with Microsoft Graph API

https://docs.microsoft.com/en-us/graph/sdks/choose-authentication-providers?tabs=CS#client-credentials-provider

https://github.com/search?q=Microsoft.Identity.Web

https://github.com/damienbod/Blazor.BFF.AzureB2C.Template

Rust extension traits, greppability and IDEs

Traits are a central feature of Rust, critical for its implementation of
polymorphism; traits are used for both static (by serving as bounds for generic
parameters) and dynamic (by having trait objects to serve as interfaces)
polymorphism.

This post assumes some familiarity with traits and discusses only a specific
aspect of them – how extension traits affect code readability. To learn the
basics of traits in Rust, the official book is a good starting point.

Extension traits

This Rust RFC
provides a good, short definition of extension traits:

Extension traits are a programming pattern that makes it possible to add
methods to an existing type outside of the crate defining that type.

For example, here’s a trait with a single method:

trait Magic {
fn magic_num(&self) -> usize;
}

We can now implement the Magic trait for our types:

struct Foobar {
name: String,
}

impl Magic for Foobar {
fn magic_num(&self) -> usize {
return if self.name.len() == 0 { 2 } else { 33 };
}
}

Now a Foobar can be passed wherever a Magic is expected. Foobar is a
custom type, but what’s really interesting is that we can also implement
Magic for any other type, including types that we did not define. Let’s
implement it for bool:

impl Magic for bool {
fn magic_num(&self) -> usize {
return if *self { 3 } else { 54 };
}
}

We can now write code like true.magic_num() and it will work! We’ve added
a method to a built-in Rust type. Obviously, we can also implement this trait
for types in the standard library; e.g.:

impl<T> Magic for Vec<T> {
fn magic_num(&self) -> usize {
return if self.len() == 0 { 10 } else { 5 };
}
}

Extension traits in the wild

Extension traits aren’t just a fringe feature; they are widely used in the Rust
ecosystem.

One example is the popular serde crate, which includes code that serializes
and deserializes data structures in multiple formats. One of the traits
serde provides is serde::Serialize; once we import this trait and one of
the concrete serializers serde provides, we can do stuff like [1]:

let mut serializer = serde_json::Serializer::new(std::io::stdout());
185.serialize(&mut serializer).unwrap();

Importing serde::Serialize is critical for this code to work, even though we
don’t refer to Serialize anywhere in our code explicitly. Rust requires
traits to be explicitly imported to imbue their methods onto existing types;
otherwise it’s hard to avoid naming collisions in case multiple traits from
different crates provide the same methods.

Another example is the byteorder crate, which helps encode numbers into
buffers with explicit length and endianness. To write some numbers into a vector
byte-by-byte, we have to import the relevant trait and enum first, and then
we can call the newly-added methods directly on a vector:

use byteorder::{LittleEndian, WriteBytesExt};

// …

let mut wv = vec![];
wv.write_u16::<LittleEndian>(259).unwrap();
wv.write_u16::<LittleEndian>(517).unwrap();

The write_u16 method is part of the WriteBytesExt trait, and it’s
implemented on a Vec by the byteorder crate. To be more precise, it’s
automatically implemented on any type that implements the Write trait.

Finally, let’s look at rayon – a library for simplified data-parallelism. It
provides magical iterators that have the same functionality as iter but
compute their results in parallel, leveraging multiple CPU cores. The rayon
documentation recommends importing the traits the crate injects as follows:

It is recommended that you import all of these traits at once by adding
use rayon::prelude::* at the top of each module that uses Rayon methods.

Having imported it thus, we can proceed to use Rayon as follows:

let exps = vec![2, 4, 6, 12, 24];
let pows_of_two: Vec<_> = exps.par_iter().map(|n| 2_u64.pow(*n)).collect();

Note the par_iter, which replaces a regular iter. It’s been magically
implemented on a vector, as well as a bunch of other types that support
iteration.

On greppability and code readability

All these uses of extension traits are pretty cool and useful, no doubt. But
that’s not the main point of my post. What I really want to discuss is how the
general approach relates to code readability, which is in my mind one of the
most important aspects of programming we should all be thinking about.

This Rust technique fails the greppability test; it’s not a word I made up –
google it! If it’s not immediately apparent, greppability means the ability to
explore a code base using textual search tools like grep, git grep,
ripgrep, pss or what have you.

Suppose you encounter this piece of code in a project you’re exploring:

let mut wv = vec![];
wv.write_u16::<LittleEndian>(259).unwrap();

“Interesting”, you think, “I didn’t know that Vec has a write_u16
method”. You quickly check the documentation – indeed, it doesn’t! So where is
it coming from? You grep the project… nothing. It’s nowhere in the
imports. You examine the imports one by one, and notice this line:

use byteorder::{LittleEndian, WriteBytesExt};

“Aha!”, you say, “this imports LittleEndian, so maybe this has to do with
the byteorder crate”. You check the documentation of that crate and indeed,
you find the write_u16 method there; phew.

With par_iter you’re less lucky. Nothing in imports will catch your eye,
unless you’re already familiar with the rayon crate. If you’re not, then
use rayon::prelude::* won’t ring much of a bell in relation to par_iter.

Of course, you can just google this symbol like this and you’ll find it. Or maybe
you don’t even understand what the problem is, because your IDE is perfectly
familiar with these symbols and will gladly pop up their documentation when you
hover over them.

IDEs and language servers

These days we have free, powerful and fast IDEs that make all of this a
non-issue (looking at Visual Studio Code, of course). Coupled with smart
language servers, these IDEs are as familiar with your code as the compiler;
the language servers typically run a full front-end sequence on the code, ending
up with type-checked ASTs cross-referenced with symbol tables that let them
understand where each symbol is coming from, its type and so on. For Rust the
language server is RLS, for Go it’s gopls; all popular languages have them these
days [2].

It’s entirely possible that using a language like Rust without a sophisticated
IDE is madness, and I’m somewhat stuck in the past. But I have to say, I do
lament the loss of greppability. There’s something very universal about being
able to understand a project using only grep and the official documentation.

In fact, for some languages it’s likely that this has been the case for a long
while already. Who in their right mind has the courage to tackle a Java project
without an IDE? It’s just that this wasn’t always the case for systems
programming languages, and Rust going this way makes me slightly sad. Or maybe
I’m just too indoctrinated in Go at this point, where all symbol access happens
as package.Symbol, packages are imported explicitly and there is no magic
name injection anywhere (almost certainly by design).

I can’t exactly put my finger on why this is bothering me; perhaps I’m just
yelling at clouds
here. While I’m at it, I should finally write that post about printf-based
debugging…

[1]
Note that it could be simpler to use serde’s to_json function
here, but I opted for the explicit serializer because I wanted to show
how we invoke a new method on an integer literal.

[2]
Apparently, not all tooling has access to sophisticated language servers;
for example, as far as I can tell GitHub source analysis won’t be able to
find where write_u16 is coming from, and the same is true of
Sourcegraph.

Better Exception Handling With EntityFrameworkCore Exceptions

I cannot tell you how many times I’ve had the following conversation

“Hey I’m getting an error”

“What’s the error?”

“DBUpdateException”

“OK, what’s the message though, that could be anything”

“ahhh.. I didn’t see…..”

Frustratingly, when doing almost anything with Entity Framework, including updates, deletes, and inserts, if something goes wrong you’ll be left with the generic exception of:

Microsoft.EntityFrameworkCore.DbUpdateException: 'An error occurred while saving the entity changes. See the inner exception for details.'

It can be extremely annoying if you’re wanting to catch a particular database exception (e.g. it’s to be expected that duplicates might be inserted) and handle it differently from something like being unable to connect to the database at all. Let’s work up a quick example to illustrate what I mean.

Let’s assume I have a simple database model like so :

class BlogPost
{
    public int Id { get; set; }
    public string PostName { get; set; }
}

And I have configured my entity to have a unique constraint, meaning that every BlogPost must have a unique name:

modelBuilder.Entity<BlogPost>()
    .HasIndex(x => x.PostName)
    .IsUnique();

If I do something as simple as :

context.Add(new BlogPost
{
    PostName = "Post 1"
});

context.Add(new BlogPost
{
    PostName = "Post 1"
});

context.SaveChanges();

The *full* exception would be along the lines of:

Microsoft.EntityFrameworkCore.DbUpdateException: 'An error occurred while saving the entity changes. See the inner exception for details.'
Inner Exception
SqlException: Cannot insert duplicate key row in object 'dbo.BlogPosts' with unique index 'IX_BlogPosts_PostName'. The duplicate key value is (Post 1).

Let’s say that we want to handle this exception in a very specific way. To do this, we would have to write a bit of a messy try/catch statement:

try
{
    context.SaveChanges();
}
catch (DbUpdateException exception) when (exception?.InnerException?.Message.Contains("Cannot insert duplicate key row in object") ?? false)
{
    // We know that the actual exception was a duplicate key row
}

Very ugly, and there isn’t much reusability here. If we want to catch a similar exception elsewhere in our code, we’re going to be copying and pasting this long catch statement everywhere.

And that’s where I came across the EntityFrameworkCore.Exceptions library!

Using EntityFrameworkCore.Exceptions

The EntityFrameworkCore.Exceptions library is extremely easy to use, and I’m actually somewhat surprised that it hasn’t made its way into the core EntityFramework libraries already.

To use it, all we have to do is run the following in our Package Manager Console:

Install-Package EntityFrameworkCore.Exceptions.SqlServer

And note that there are packages for things like Postgres and MySQL if that’s your thing!

Then, with a single line in our DbContext, we can set up better error handling:

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseExceptionProcessor();
}

If we run our example code from above, instead of our generic DbUpdateException we get:

EntityFramework.Exceptions.Common.UniqueConstraintException: 'Unique constraint violation'

Meaning we can change our Try/Catch to be :

try
{
    context.SaveChanges();
}
catch (UniqueConstraintException ex)
{
    // We know that the actual exception was a duplicate key row
}

Much cleaner, much tidier, and far more reusable!

The post Better Exception Handling With EntityFrameworkCore Exceptions appeared first on .NET Core Tutorials.

What is Docker and Why Use it?

Introduction
How does it work
Why use it
Conclusion

Introduction

Docker is an open, container-based platform that enables you to build, control, and deploy applications. You can decouple applications from the infrastructure, which lets you deliver software quickly. Docker helps you reduce the time between writing code and getting it into production by leveraging its methodology for rapid code delivery, testing, and deployment.

What is a Docker Container?

Docker Containers are lightweight, portable units that package an application together with the libraries, system tools, code, and runtime it needs. One container can run anything from a small microservice to a massive application.

In Containers, applications can be abstracted from environments. This separation enables easy and consistent deployment of container-based applications, whether the medium is a private data center or a public cloud.

From an operational perspective, in addition to portability, containers also provide more control over resources, increasing infrastructure efficiency, which leads to better usage of computing resources.

How does Docker work?

Docker uses a client-server architecture, which includes:

The Server, which runs the daemon used to generate and control Containers, Images, Networks, and Data Volumes. 
The REST API, which specifies how apps can communicate with the Server and instruct it to do its job. 
The Client, which interacts with the daemon through code and commands. 

Containers work on an image-based deployment model that simplifies using an application across multiple environments. Images are a fundamental element of the Docker ecosystem because they enable collaboration between developers in a manner that was not possible before.
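To make the image-based model concrete, here is a minimal, hypothetical Dockerfile; the base image and file names are illustrative and not taken from the article:

```dockerfile
# Each instruction adds a layer to the image.
# Start from a small base OS image.
FROM alpine:3.16
# Set the working directory inside the image.
WORKDIR /app
# Copy our (hypothetical) application script into the image.
COPY app.sh .
# The command a container runs when it starts.
CMD ["sh", "app.sh"]
```

Running docker build -t myapp . has the client send these instructions to the daemon, which assembles the image layer by layer; docker run myapp then starts a container from that image.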

Why use Docker?

The Docker platform allows you to create virtual workloads quickly and lets distributed applications span server boundaries. Here are some reasons to use it:

Consistency. You always launch from the same starting point. Docker enables a coordinated environment for your application from development to production.

Speed. You can rapidly start a new process on the server. Because the image is pre-configured and installed with the process you want to run, it removes all the complexity associated with starting a process.

Isolation. Every launched Docker container is isolated from the file system, the network, and other running processes. As a result, applications can contain different versions of the same support software.

Scalability. You can add several of the same containers to create multiple instances of the same application.

Conclusion

Docker allows you to easily build applications with containers and run multiple applications on the same hardware, making applications easier to manage and deploy. On the Flatlogic platform, after generating an application, we provide the user with two ways to run it locally, one of which is using Docker.

The post What is Docker and Why Use it? appeared first on Flatlogic Blog.

Top 7+ Bootstrap Admin Dashboard Templates in 2022

Introduction

There is a huge number of admin dashboard templates on the internet, and a significant number of free ones to download. They usually include such things as graph/chart libraries, dashboard pages, alert box buttons, navigation schemes, icons, and tables. We will try to find the most suitable UI (user interface) toolkit for your project.

All of the admin dashboard templates listed here include at least one pre-built dashboard page, which can be customized for your project. If you decide to create your own custom dashboard, the template packs give you different options, such as combinations of components, UI elements, and cards.

So how can you choose the best admin template for a project? One of the most important things for an admin template is to put data and content first in the visual design hierarchy: the content, not the design, should draw your attention first. However, we have analyzed not only the design but also the available versions, how many unique pages each project has, how often it is updated, and of course the price.

Here is some basic information about the most popular admin templates available on the market. It contains such columns as versions, unique pages, price, Bootstrap version, and of course a rating based on the mentioned characteristics. Please note that it is just our opinion; nevertheless, it is based on solid facts. For example, using Bootstrap 4 is definitely a plus: the 4th version is far ahead of its predecessor. It is better to use it in your projects, because it is essential to keep up to date and use only the latest, high-quality, relevant tools when developing your product; that also reduces the chance of running into security issues.

Sing App

Sing App is a Bootstrap 4 based admin template with a fresh, clean design. It has static HTML, React, AngularJS, and Angular 5 versions and contains more than 30 pages. Sing is stunning yet advanced: with this admin template you will develop your web application much more effectively, which will definitely make your life simpler. The layout is 100% responsive. Sing is a perfect choice both for a small startup and an established enterprise. It is compatible with Bootstrap 4 and updated regularly, and a reasonable price is an additional plus for this project.

MORE INFO
DEMO
DOCUMENTATION

Skydash

Skydash is a brand new admin dashboard from BootstrapDash, designed and developed in close collaboration with one of the best UI/UX agencies in India, so you can be assured you are getting a premium template. It offers a clean user interface that can easily be modified to fit your needs. Skydash gives you 4 different dashboard layouts, including ones with vertical and horizontal navbars, and you can use either the default light theme or the dark theme. Check out the website to learn more about Skydash.

MORE INFO
DEMO
DOCUMENTATION

SmartAdmin

SmartAdmin is a Bootstrap 3 based admin template with great features and characteristics. It has static HTML, AJAX, PHP, AngularJS 4.0, Ruby on Rails, ReactJS, and ASP.NET versions, and it contains more than 30 pages, including an impressive landing page. The layout is 100% responsive (tablets, desktops, mobile devices), with a modern, clean design. Unfortunately, it is compatible only with Bootstrap 3, and updates are generally infrequent.

MORE INFO
DEMO

Inspinia

Designed and developed professionally, Inspinia is based on Bootstrap 3. Out of the box you get four versions: AJAX, HTML, Angular, and Angular 5. This admin template, with its pixel-perfect design and fully responsive layout, has more than 70 pages. Thanks to its versatility you can use Inspinia for many web projects. However, the last update was nearly a year ago, which could be a HUGE minus for this project.

MORE INFO
DEMO

Metronic

Metronic is an admin template with 70+ pages, based on Bootstrap 4, with HTML, Angular, and Angular 5 versions. This template's innovative design can absolutely improve your web application; the flat design assures quality and a modern look. The experience is friendly to mobile users as well, given that the template is entirely responsive. Metronic is generally updated about once a month.

MORE INFO
DEMO

Light Blue

This dashboard is based on Bootstrap 4 and contains more than 30 pages. It is fully responsive, with a modern, simple yet smooth look: a multi-functional admin template whose flexible design adapts smoothly to any device. It is updated roughly every one to two weeks. This is definitely a simple, non-intrusive template. It uses server-side rendering for SEO and a Node.js backend to speed up your development process.

MORE INFO
DEMO
DOCUMENTATION

Color Admin

Color Admin comes with plenty of elements that you can tailor to your needs and requests. It is a Bootstrap 4 based admin template with AJAX, HTML, Angular, and Angular 5 versions, and its 100 unique pages are a big plus. No matter how complex you would like your admin to be, the Color Admin template is here to take care of your needs, with a strong focus on user interfaces and web applications. It is updated about once every two months.

MORE INFO
DEMO

Flatlogic One Bootstrap Template

Flatlogic One is built with Bootstrap 4.5 and contains a lot of customizable components. The template is beautifully designed, with four color themes to choose from; the development team has even created unique Flatlogic Typography and Flatlogic Icons. The dashboard is fully responsive and works well on any device. The developers have filled this template with many useful things: various charts (Amcharts, Echarts, Apexcharts), Google Maps, analytics and visits dashboards, chat and email features, and much more. Flatlogic One is documented and fully supported, and it is compatible with Chrome, Firefox, Opera, Edge, IE 10, and IE 11.

MORE INFO
DEMO
DOCUMENTATION

Unify

Unify is a template with a huge 250+ shortcode pages and over 1750 reusable UI components. Based on Bootstrap 4, it has AJAX, HTML, Angular, and Angular 5 versions. The speed of customization and the combination of available elements let you create exactly the layouts you need. Unify has a modern web design and comes in handy for many different web projects. It is updated regularly, which is a great plus for this template.

MORE INFO
DEMO

Conclusion

To summarize, admin dashboard templates are great tools for keeping an eye on your progress, statistics, productivity, and staff. A professionally crafted admin dashboard with lots of pages and elements can take your company to a higher level, so it is extremely important to choose one carefully. A good template should always reflect reliability and professionalism.

If you liked this post you may also want to read:

Top 7 React Admin Dashboard Templates
Why we decided to move away from a marketplace and created our own platform

The post Top 7+ Bootstrap Admin Dashboard Templates in 2022 appeared first on Flatlogic Blog.

Performance improvements in ASP.NET Core 6

Inspired by Stephen Toub's blog posts about performance in .NET, we are writing a similar post to highlight the performance improvements made to ASP.NET Core in 6.0.

Benchmarking Setup

We will be using BenchmarkDotNet for the majority of the examples throughout. A repo at https://github.com/BrennanConroy/BlogPost60Bench is provided that includes the majority of the benchmarks used in this post.

Most of the benchmark results in this post were generated with the following command line:

dotnet run -c Release -f net48 --runtimes net48 netcoreapp3.1 net5.0 net6.0

Then selecting a specific benchmark to run from the list.

This tells BenchmarkDotNet:

Build everything in a release configuration.
Build it targeting the .NET Framework 4.8 surface area.
Run each benchmark on each of .NET Framework 4.8, .NET Core 3.1, .NET 5, and .NET 6.

For some benchmarks, they were only run on .NET 6 (e.g. if comparing two ways of coding something on the same version):

dotnet run -c Release -f net6.0 --runtimes net6.0

and for others only a subset of the versions were run, e.g.

dotnet run -c Release -f net5.0 --runtimes net5.0 net6.0

I’ll include the command used to run each of the benchmarks as they come up.

Most of the results in the post were generated by running the above benchmarks on Windows, primarily so that .NET Framework 4.8 could be included in the result set. However, unless otherwise called out, in general all of these benchmarks show comparable improvements when run on Linux or on macOS. Simply ensure that you have installed each runtime you want to measure. The benchmarks were run with a nightly build of .NET 6 RC1, along with the latest released downloads of .NET 5 and .NET Core 3.1.

Span<T>

In every release since the addition of Span<T> in .NET Core 2.1, we have converted more code to use spans, both internally and as part of the public API, to improve performance. This release is no exception.

PR dotnet/aspnetcore#28855 removed a temporary string allocation in PathString coming from string.Substring when adding two PathString instances, using a Span<char> for the temporary string instead. In the benchmark below we use a short string and a longer string to show the performance difference from avoiding the temporary string.

dotnet run -c Release -f net48 --runtimes net48 net5.0 net6.0 --filter *PathStringBenchmark*

private PathString _first = new PathString("/first/");
private PathString _second = new PathString("/second/");
private PathString _long = new PathString("/longerpathstringtoshowsubstring/");

[Benchmark]
public PathString AddShortString()
{
    return _first.Add(_second);
}

[Benchmark]
public PathString AddLongString()
{
    return _first.Add(_long);
}

| Method         | Runtime            | Toolchain | Mean     | Ratio | Allocated |
|----------------|--------------------|-----------|----------|-------|-----------|
| AddShortString | .NET Framework 4.8 | net48     | 23.51 ns | 1.00  | 96 B      |
| AddShortString | .NET 5.0           | net5.0    | 22.73 ns | 0.97  | 96 B      |
| AddShortString | .NET 6.0           | net6.0    | 14.92 ns | 0.64  | 56 B      |
| AddLongString  | .NET Framework 4.8 | net48     | 30.89 ns | 1.00  | 201 B     |
| AddLongString  | .NET 5.0           | net5.0    | 25.18 ns | 0.82  | 192 B     |
| AddLongString  | .NET 6.0           | net6.0    | 15.69 ns | 0.51  | 104 B     |

dotnet/aspnetcore#34001 introduced a new Span-based API for enumerating a query string that is allocation-free in the common case of no encoded characters, and allocates less when the query string contains encoded characters.

dotnet run -c Release -f net6.0 --runtimes net6.0 --filter *QueryEnumerableBenchmark*

#if NET6_0_OR_GREATER
public enum QueryEnum
{
    Simple = 1,
    Encoded,
}

[ParamsAllValues]
public QueryEnum QueryParam { get; set; }

private string SimpleQueryString = "?key1=value1&key2=value2";
private string QueryStringWithEncoding = "?key1=valu%20&key2=value%20";

[Benchmark(Baseline = true)]
public void QueryHelper()
{
    var queryString = QueryParam == QueryEnum.Simple ? SimpleQueryString : QueryStringWithEncoding;
    foreach (var queryParam in QueryHelpers.ParseQuery(queryString))
    {
        _ = queryParam.Key;
        _ = queryParam.Value;
    }
}

[Benchmark]
public void QueryEnumerable()
{
    var queryString = QueryParam == QueryEnum.Simple ? SimpleQueryString : QueryStringWithEncoding;
    foreach (var queryParam in new QueryStringEnumerable(queryString))
    {
        _ = queryParam.DecodeName();
        _ = queryParam.DecodeValue();
    }
}
#endif

| Method          | QueryParam | Mean      | Ratio | Allocated |
|-----------------|------------|-----------|-------|-----------|
| QueryHelper     | Simple     | 243.13 ns | 1.00  | 360 B     |
| QueryEnumerable | Simple     | 91.43 ns  | 0.38  |           |
| QueryHelper     | Encoded    | 351.25 ns | 1.00  | 432 B     |
| QueryEnumerable | Encoded    | 197.59 ns | 0.56  | 152 B     |

It’s important to note that there is no free lunch. With the new QueryStringEnumerable API, if you plan to enumerate the query string values multiple times, it can actually be more expensive than using QueryHelpers.ParseQuery and storing the dictionary of parsed query string values.

dotnet/aspnetcore#29448 from @paulomorgado uses the string.Create method, which allows initializing a string after it’s created if you know its final size. This was used to remove some temporary string allocations in UriHelper.BuildAbsolute.

dotnet run -c Release -f netcoreapp3.1 --runtimes netcoreapp3.1 net6.0 --filter *UriHelperBenchmark*

#if NETCOREAPP
[Benchmark]
public void BuildAbsolute()
{
    _ = UriHelper.BuildAbsolute("https", new HostString("localhost"));
}
#endif

| Method        | Runtime       | Toolchain     | Mean     | Ratio | Allocated |
|---------------|---------------|---------------|----------|-------|-----------|
| BuildAbsolute | .NET Core 3.1 | netcoreapp3.1 | 92.87 ns | 1.00  | 176 B     |
| BuildAbsolute | .NET 6.0      | net6.0        | 52.88 ns | 0.57  | 64 B      |

PR dotnet/aspnetcore#31267 converted some parsing logic in ContentDispositionHeaderValue to use Span<T> based APIs to avoid temporary strings and a temporary byte[] in common cases.

dotnet run -c Release -f net48 --runtimes net48 netcoreapp3.1 net5.0 net6.0 --filter *ContentDispositionBenchmark*

[Benchmark]
public void ParseContentDispositionHeader()
{
    var contentDisposition = new ContentDispositionHeaderValue("inline");
    contentDisposition.FileName = "FileÃName.bat";
}

| Method                   | Runtime            | Toolchain     | Mean     | Ratio | Allocated |
|--------------------------|--------------------|---------------|----------|-------|-----------|
| ContentDispositionHeader | .NET Framework 4.8 | net48         | 654.9 ns | 1.00  | 570 B     |
| ContentDispositionHeader | .NET Core 3.1      | netcoreapp3.1 | 581.5 ns | 0.89  | 536 B     |
| ContentDispositionHeader | .NET 5.0           | net5.0        | 519.2 ns | 0.79  | 536 B     |
| ContentDispositionHeader | .NET 6.0           | net6.0        | 295.4 ns | 0.45  | 312 B     |

Idle Connections

One of the major components of ASP.NET Core is hosting a server, which brings with it a host of different problems to optimize for. We’ll focus on the improvements to idle connections in 6.0, where we made many changes to reduce the amount of memory used while a connection is waiting for data.

We made three distinct types of changes. The first was to reduce the size of the objects used by connections, including System.IO.Pipelines, SocketConnections, and SocketSenders. The second was to pool commonly accessed objects so we can reuse old instances and save on allocations. The third was to take advantage of something called “zero-byte reads”: we try to read from the connection with a zero-byte buffer; if data is available, the read returns with no data, but we then know data is available and can provide a buffer to read it immediately. This avoids allocating a buffer up front for a read that may complete at some future time, so we avoid a large allocation until we know data is available.

dotnet/runtime#49270 reduced the size of System.IO.Pipelines from ~560 bytes to ~368 bytes, a 34% size reduction; since there are at least 2 pipes per connection, this was a great win. dotnet/aspnetcore#31308 refactored the socket layer of Kestrel to avoid a few async state machines and reduce the size of the remaining ones, for a ~33% allocation savings per connection.

dotnet/aspnetcore#30769 removed a per-connection PipeOptions allocation and moved the allocation to the connection factory, so we allocate only one for the entire lifetime of the server and reuse the same options for every connection. dotnet/aspnetcore#31311 from @benaadams replaced well-known header values in WebSocket requests with interned strings, which allowed the strings allocated during header parsing to be garbage collected, reducing the memory usage of long-lived WebSocket connections. dotnet/aspnetcore#30771 refactored the sockets layer in Kestrel to avoid allocating both a SocketReceiver object and a SocketAwaitableEventArgs by combining them into a single object; that saved a few bytes and resulted in fewer unique objects allocated per connection. That PR also pooled the SocketSender class, so instead of creating one per connection you now have, on average, as many SocketSenders as there are cores. In the benchmark below, with 10,000 connections, only 16 are allocated on my machine instead of 10,000, a savings of ~46 MB!

Another similarly sized change is dotnet/runtime#49123, which adds support for zero-byte reads in SslStream, so our 10,000 idle connections go from ~46 MB to ~2.3 MB of SslStream allocations. dotnet/runtime#49117 added support for zero-byte reads on StreamPipeReader, which Kestrel then used in dotnet/aspnetcore#30863 to start making zero-byte reads into SslStream.

The culmination of all these changes is a massive reduction in memory usage for idle connections.

The following numbers are not from a BenchmarkDotNet app, since we are measuring idle connections, which was easier to set up with separate client and server applications.

Console and WebApplication code are pasted in the following gist:
https://gist.github.com/BrennanConroy/02e8459d63305b4acaa0a021686f54c7

Below is the amount of memory 10,000 idle secure WebSocket connections (WSS) take on the server on different frameworks.

| Framework | Memory   |
|-----------|----------|
| net48     | 665.4 MB |
| net5.0    | 603.1 MB |
| net6.0    | 160.8 MB |

That’s almost a 4x memory reduction from net5.0 to net6.0!

Entity Framework Core

EF Core made some massive improvements in 6.0: it is 31% faster at executing queries, and the TechEmpower Fortunes benchmark improved by 70% with runtime updates, optimized benchmarks, and the EF improvements.

These improvements came from improving object pooling, intelligently checking if telemetry is enabled, and adding an option to opt out of thread safety checks when you know your app uses DbContext safely.

See the Announcing Entity Framework Core 6.0 Preview 4: Performance Edition blog post which highlights many of the improvements in detail.

Blazor

Native byte[] Interop

Blazor now has efficient support for byte arrays when performing JavaScript interop. Previously, byte arrays sent to and from JavaScript were Base64 encoded so they could be serialized as JSON, which increased the transfer size and the CPU load. The Base64 encoding has now been optimized away in .NET 6, allowing users to transparently work with byte[] in .NET and Uint8Array in JavaScript. Documentation is available on using this feature for JavaScript-to-.NET and .NET-to-JavaScript interop.
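To get a feel for why the Base64 round trip was costly, here is a small Node.js sketch (not from the original post; the 1 KiB buffer size is an arbitrary example) showing how Base64 inflates a binary payload by roughly a third before it is even wrapped in JSON:

```javascript
// Base64 encodes every 3 bytes as 4 ASCII characters, so the encoded
// form of a payload is about 33% larger than the raw bytes.
const raw = Buffer.alloc(1024, 0xab);        // 1 KiB of binary data
const encoded = raw.toString('base64');

console.log(raw.length);     // 1024 raw bytes
console.log(encoded.length); // 1368 characters (~33.6% larger)
```

On top of the size increase, both sides must spend CPU time encoding and decoding, which is exactly the overhead the .NET 6 change removes.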

Let’s take a look at a quick benchmark to see the difference between byte[] interop in .NET 5 and .NET 6. The following Razor code creates a 22 kB byte[] and sends it to a JavaScript receiveAndReturnBytes function, which immediately returns it. This round trip is repeated 10,000 times and the timing data is printed to the screen. The code is the same for .NET 5 and .NET 6.

<button @onclick="@RoundtripData">Roundtrip Data</button>

<hr />

@Message

@code {
    public string Message { get; set; } = "Press button to benchmark";

    private async Task RoundtripData()
    {
        var bytes = new byte[1024*22];
        List<double> timeForInterop = new List<double>();
        var testTime = DateTime.Now;

        for (var i = 0; i < 10_000; i++)
        {
            var interopTime = DateTime.Now;

            var result = await JSRuntime.InvokeAsync<byte[]>("receiveAndReturnBytes", bytes);

            timeForInterop.Add(DateTime.Now.Subtract(interopTime).TotalMilliseconds);
        }

        Message = $"Round-tripped: {bytes.Length / 1024d} kB 10,000 times and it took on average {timeForInterop.Average():F3}ms, and in total {DateTime.Now.Subtract(testTime).TotalMilliseconds:F1}ms";
    }
}

Next, we take a look at the receiveAndReturnBytes JavaScript function. In .NET 5, we must first decode the Base64-encoded byte array into a Uint8Array so it can be used in application code, then re-encode it into Base64 before returning the data to the server.

function receiveAndReturnBytes(bytesReceivedBase64Encoded) {
    const bytesReceived = base64ToArrayBuffer(bytesReceivedBase64Encoded);

    // Use Uint8Array data in application

    const bytesToSendBase64Encoded = base64EncodeByteArray(bytesReceived);

    if (bytesReceivedBase64Encoded != bytesToSendBase64Encoded) {
        throw new Error("Expected input/output to match.");
    }

    return bytesToSendBase64Encoded;
}

// https://stackoverflow.com/a/21797381
function base64ToArrayBuffer(base64) {
    const binaryString = atob(base64);
    const length = binaryString.length;
    const result = new Uint8Array(length);
    for (let i = 0; i < length; i++) {
        result[i] = binaryString.charCodeAt(i);
    }
    return result;
}

function base64EncodeByteArray(data) {
    const charBytes = new Array(data.length);
    for (var i = 0; i < data.length; i++) {
        charBytes[i] = String.fromCharCode(data[i]);
    }
    const dataBase64Encoded = btoa(charBytes.join(''));
    return dataBase64Encoded;
}

The encoding/decoding adds significant overhead on both the client and server, and requires extensive boilerplate code as well. So how would this be done in .NET 6? Well, it’s quite a bit simpler:

function receiveAndReturnBytes(bytesReceived) {
    // bytesReceived comes as a Uint8Array ready for use
    // and can be used by the application or immediately returned.
    return bytesReceived;
}

So it’s definitely easier to write, but how does it perform? Running these snippets in a blazorserver template in .NET 5 and .NET 6 respectively, under the Release configuration, we see that .NET 6 offers a 78% performance improvement in byte[] interop!

|            | .NET 6 (ms) | .NET 5 (ms) | Improvement |
|------------|-------------|-------------|-------------|
| Total Time | 5273        | 24463       | 78%         |

Additionally, this byte array interop support is leveraged within the framework to enable bidirectional streaming interop between JavaScript and .NET. Users are now able to transport arbitrary binary data. Documentation on streaming from .NET to JavaScript is available here, and the JavaScript to .NET documentation is here.

Input File

Using the Blazor streaming interop mentioned above, we now support uploading large files via the InputFile component (previously, uploads were limited to ~2 GB). This component also features significant speed improvements thanks to native byte[] streaming, as opposed to going through Base64 encoding. For instance, a 100 MB file uploads 77% faster compared to .NET 5.

| .NET 6 (ms) | .NET 5 (ms) | Percentage |
|-------------|-------------|------------|
| 2591        | 10504       | 75%        |
| 2607        | 11764       | 78%        |
| 2632        | 11821       | 78%        |
| Average:    |             | 77%        |

Note that the streaming interop support also enables efficient downloads of (large) files; for more details, please see the documentation.

The InputFile component was upgraded to utilize streaming via dotnet/aspnetcore#33900.

Hodgepodge

dotnet/aspnetcore#30320 from @benaadams modernized our TypeScript libraries and optimized them so websites load faster. The signalr.min.js file went from 36.8 kB compressed (132 kB uncompressed) to 16.1 kB compressed (42.2 kB uncompressed), and the blazor.server.js file went from 86.7 kB compressed (276 kB uncompressed) to 43.9 kB compressed (130 kB uncompressed).

dotnet/aspnetcore#31322, also from @benaadams, removes some unnecessary casts when getting common features from the connection’s feature collection. This gives a ~50% improvement when accessing common features from the collection. Showing the improvement in a standalone benchmark isn’t really possible, unfortunately, because it requires a bunch of internal types, so I’ll include the numbers from the PR here; if you’re interested in running them, the PR includes benchmarks that run against the internal code.

| Method                         | Mean      | Op/s          | Diff   |
|--------------------------------|-----------|---------------|--------|
| Get<IHttpRequestFeature>*      | 8.507 ns  | 117,554,189.6 | +50.0% |
| Get<IHttpResponseFeature>*     | 9.034 ns  | 110,689,963.7 |        |
| Get<IHttpResponseBodyFeature>* | 9.466 ns  | 105,636,431.7 | +58.7% |
| Get<IRouteValuesFeature>*      | 10.007 ns | 99,927,927.4  | +50.0% |
| Get<IEndpointFeature>*         | 10.564 ns | 94,656,794.2  | +44.7% |

dotnet/aspnetcore#31519, also from @benaadams, adds default interface methods to the IHeaderDictionary type for accessing common headers via properties named after the header. No more mistyping common header names when accessing the header dictionary! More interestingly for this blog post, this change allows server implementations to return a custom header dictionary that implements these new interface methods more optimally. For example, instead of querying an internal dictionary for a header value, which requires hashing the key and looking up an entry, the server might store the header value directly in a field and return the field directly. This change resulted in up to 480% improvements in some cases when getting or setting header values. Once again, properly benchmarking this change requires internal types for the setup, so I will include the numbers from the PR; for those interested in trying it out, the PR contains benchmarks that run on the internal code.

| Method     | Branch | Type      | Mean         | Op/s         | Delta   |
|------------|--------|-----------|--------------|--------------|---------|
| GetHeaders | before | Plaintext | 25.793 ns    | 38,770,569.6 |         |
| GetHeaders | after  | Plaintext | 12.775 ns    | 78,279,480.0 | +101.9% |
| GetHeaders | before | Common    | 121.355 ns   | 8,240,299.3  |         |
| GetHeaders | after  | Common    | 37.598 ns    | 26,597,474.6 | +222.8% |
| GetHeaders | before | Unknown   | 366.456 ns   | 2,728,840.7  |         |
| GetHeaders | after  | Unknown   | 223.472 ns   | 4,474,824.0  | +64.0%  |
| SetHeaders | before | Plaintext | 49.324 ns    | 20,273,931.8 |         |
| SetHeaders | after  | Plaintext | 34.996 ns    | 28,574,778.8 | +40.9%  |
| SetHeaders | before | Common    | 635.060 ns   | 1,574,654.3  |         |
| SetHeaders | after  | Common    | 108.041 ns   | 9,255,723.7  | +487.7% |
| SetHeaders | before | Unknown   | 1,439.945 ns | 694,470.8    |         |
| SetHeaders | after  | Unknown   | 517.067 ns   | 1,933,985.7  | +178.4% |

 

dotnet/aspnetcore#31466 used the new CancellationTokenSource.TryReset() method introduced in .NET 6 to reuse CancellationTokenSources when a connection closes without being canceled. The numbers below were collected by running bombardier against Kestrel with 125 connections for ~100,000 requests.

| Branch | Type                    | Allocations | Bytes     |
|--------|-------------------------|-------------|-----------|
| Before | CancellationTokenSource | 98,314      | 4,719,072 |
| After  | CancellationTokenSource | 125         | 6,000     |

dotnet/aspnetcore#31528 and dotnet/aspnetcore#34075 made similar changes to reuse CancellationTokenSources for HTTPS handshakes and HTTP/3 streams, respectively.

dotnet/aspnetcore#31660 improved the performance of server-to-client streaming in SignalR by reusing a single allocated StreamItem object for the whole stream instead of allocating one per stream item. And dotnet/aspnetcore#31661 stores the HubCallerClients object on the SignalR connection instead of allocating it per hub method call.

dotnet/aspnetcore#31506 from @ShreyasJejurkar refactored the internals of the WebSocket handshake to avoid a temporary List<T> allocation. dotnet/aspnetcore#32829 from @gfoidl refactored QueryCollection to reduce allocations and vectorize some of the code. dotnet/aspnetcore#32234 from @benaadams removed an unused field in the HttpRequestHeaders enumeration, improving performance by no longer assigning the field for every header enumerated.

dotnet/aspnetcore#31333 from @martincostello converted Http.Sys to use LoggerMessage.Define, the high-performance logging API. This avoids unnecessary boxing of value types, parsing of the logging format string, and, in some cases, allocations of strings or objects when the log level isn’t enabled.

dotnet/aspnetcore#31784 adds a new IApplicationBuilder.Use overload for registering middleware that avoids some unnecessary per-request allocations when running the middleware.
The old code looks like:

app.Use(async (context, next) =>
{
    await next();
});

New code looks like:

app.Use(async (context, next) =>
{
    await next(context);
});

The benchmark below simulates the middleware pipeline without setting up a server to showcase the improvement. An int is used in place of HttpContext for the request, and the middleware returns a completed task.

dotnet run -c Release -f net6.0 --runtimes net6.0 --filter *UseMiddlewareBenchmark*

static private Func<Func<int, Task>, Func<int, Task>> UseOld(Func<int, Func<Task>, Task> middleware)
{
    return next =>
    {
        return context =>
        {
            Func<Task> simpleNext = () => next(context);
            return middleware(context, simpleNext);
        };
    };
}

static private Func<Func<int, Task>, Func<int, Task>> UseNew(Func<int, Func<int, Task>, Task> middleware)
{
    return next => context => middleware(context, next);
}

Func<int, Task> Middleware = UseOld((c, n) => n())(i => Task.CompletedTask);
Func<int, Task> NewMiddleware = UseNew((c, n) => n(c))(i => Task.CompletedTask);

[Benchmark(Baseline = true)]
public Task Use()
{
    return Middleware(10);
}

[Benchmark]
public Task UseNew()
{
    return NewMiddleware(10);
}

| Method | Mean      | Ratio | Allocated |
|--------|-----------|-------|-----------|
| Use    | 15.832 ns | 1.00  | 96 B      |
| UseNew | 2.592 ns  | 0.16  |           |

Summary

I hope you enjoyed reading about some of the improvements made in ASP.NET Core 6.0! And I encourage you to take a look at the performance improvements in .NET 6 blog post that goes over performance in the Runtime.

The post Performance improvements in ASP.NET Core 6 appeared first on .NET Blog.

352: With Aysenur Turk

Aysenur Turk made a number of appearances on this year’s Top Hearted of 2021, including #1! In this podcast, I get to catch up with her and find out where she gets her ideas and inspiration, how much time it takes to build one of her amazing layouts, and what her favorites are.

Time Jumps

01:05 Guest introduction

02:05 Is your pen your fav as well?

03:35 What draws you to make a full interface?

06:14 Sponsor: Retool

08:03 How long did these take you?

09:23 What order do you build in?

10:34 Do you have a favorite trend to code?

12:20 What are you looking forward to in 2022?

14:54 What are your sources of inspiration?

16:57 What is your job?

19:16 Have you thought about making money off the work?

20:32 Is coding fun?

25:37 Any advice for fellow CodePen users?

Sponsor: Retool

Custom dashboards, admin panels, CRUD apps—build any internal tool faster in Retool. Visually design apps that interface with any database or API. Switch to code nearly anywhere to customize how your apps look and work. With Retool, you ship more apps and move your business forward—all in less time.

Thousands of teams at companies like Amazon, DoorDash, Peloton, and Brex collaborate around custom-built Retool apps to solve internal workflows. To learn more, visit retool.com.

The post 352: With Aysenur Turk appeared first on CodePen Blog.

8 Essential Bootstrap 4 Components for Your Web App

Let’s talk about Bootstrap 4 components. Bootstrap is an open-source framework for web app development that has gained great popularity since its first release in 2011. Since then, Bootstrap has expanded, evolved, become more and more popular, and gained the support of a large community of developers.

The latest Bootstrap version is 4.5.2, and we expect to see version 5 soon. As Bootstrap improves, it can offer more and more components with comprehensive documentation. 

You can find alerts, forms, input groups, dropdowns, and much more on the official website.

The source: https://getbootstrap.com/docs/4.5/components/alerts/

These components are free to use, ship with the Bootstrap toolkit, are fully responsive (some come with JS files), and are completely reusable without any coding necessary.

However, if the base Bootstrap 4 components don’t fit your design, or your app requires specific components that the base toolkit doesn’t contain, you face the need to modify the base components or develop them from scratch. That can be hard and time-consuming, so we are here to help. At Flatlogic, we create responsive admin templates and often use Bootstrap 4 components. You can look through some examples of a bootstrap dashboard.

In this article, we consider the most essential Bootstrap components as customized by other developers for different purposes. This is not a complete list of customized components; describing them all would take a long series of articles, since the same components vary across templates, UI toolkits, and starter kits. Instead, we offer well-coded, Bootstrap-based samples of the most-used components that we believe are noteworthy.

Enjoy reading!   

Basic Bootstrap 4 components

First of all, let’s examine the list of essential components itself and what they look like in the Bootstrap toolkit (once again, the link to the documentation of the latest Bootstrap is here).

Buttons. Have you ever seen an app without buttons? This is a fundamental UI element unless your app is a single static page. Of course, you can use clickable icons, swipes on mobile, or even trendy voice control, but it’s hard to imagine an app with no buttons.

Alerts. Another crucial component, used to provide contextual feedback to users. When a user performs an action, the app is supposed to notify them about what they have done, and that’s where alerts come in.

Navbar. If you want users to navigate through your app, you probably need a navbar. The navigation bar should be clear, simple, and legible. It’s another very significant UI element.

Forms and input groups. You will use these whenever users need to register, fill in a feedback form, leave a review or a comment, provide personal information for orders, tick a checkbox, and so on. In general, every time the user is supposed to provide any kind of information, forms and input groups come into play.

Jumbotron. A component for calling extra attention to a certain piece of information. People’s attention is limited, and they use apps for specific purposes, while sometimes we need to share information that can be useful to users whether they asked for it or not. We want to be sure users will see it, and the jumbotron helps here. But don’t misuse this instrument for advertising: if it’s unwanted and intrusive, you risk losing users.

Tabs. A useful component for managing the content and space of your app. Add some animation to show and hide elements, make it smooth, and your users will be grateful that they don’t need to scroll through the whole page to get a new piece of information.

Carousel. A component for cycling through a series of images or text, ideally auto-rotating.

Social buttons. A questionable component, but we still decided to include it in the list of essential elements for apps. The reason is simple: social media are extremely popular today and are integrated into many apps, with functionality such as social login, sharing via social media, or getting in touch with someone through a chosen social messenger. You may consider this component non-essential for application development, but it’s definitely one of the most used.

Once again, you can find descriptions and code examples for every component in the official Bootstrap documentation. Now that we have looked through the list of the most essential components that the official Bootstrap toolkit offers for every web app, it’s time to see how these components can be customized.

Customized essential Bootstrap 4 components

1) Buttons

Buttons from UI Kit “Material design for bootstrap 4”

The source: https://react.mdbootstrap.com/components/buttons

Here you can find fancy buttons based on Material Design principles. This component is part of a quite popular UI kit that is available in jQuery, Angular, React, and Vue versions. The kit is free to use, but there is also a premium version that offers more button styles, including gradient colors.

You can see the component here.

2) Alerts

Alerts from Sing Html5

The source: https://flatlogic.com/templates/sing-app-html5/demo

Provide users with bright alert messages from the Sing admin dashboard template. The alerts include additional buttons that you can customize to your needs. The template offers transparent and rounded alerts, as well as alerts that contain additional HTML elements such as dividers.

You can download the component with the template here.

3) Navbar

Navbar from Material Kit

The source: https://demos.creative-tim.com/material-kit/index.html#navigation

Simple and beautiful navbars painted in vibrant, vivid colors. The navbar is part of a UI kit that offers a lot of other components. You can see the component here.

4) Forms and input groups

Bootstrap select

The source: https://github.com/snapappointments/bootstrap-select

A nice-looking jQuery-based plugin that combines every selection feature you might need: multi-selection, live search, and search by keywords. The plugin also offers several built-in classes for customizing input fields.
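A minimal sketch of how the plugin is used, per its documentation: you add the `selectpicker` class to a regular `<select>` and the plugin enhances it (the option values here are placeholders):

```html
<!-- bootstrap-select turns a plain <select> into a searchable picker;
     data-live-search enables the built-in search box. -->
<select class="selectpicker" multiple data-live-search="true"
        title="Choose frameworks">
  <option>Bootstrap</option>
  <option>Vue</option>
  <option>React</option>
</select>
<script>
  // The plugin auto-initializes on .selectpicker by default,
  // but it can also be initialized explicitly:
  $('.selectpicker').selectpicker();
</script>
```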

You can download it here.

Bootstrap Fileinput

The source: https://plugins.krajee.com/file-advanced-usage-demo

From our point of view, this is the most multifunctional and feature-rich file-input component we have found on the Internet. It supports previews of numerous file types such as text, HTML, and video. You can delete files, change their positions in the initial preview, set a maximum upload size, and much more. Since it offers comprehensive documentation with examples for every function, customizing the component doesn’t take much time.
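A minimal usage sketch based on the plugin’s documented options (the element id and the particular option values are placeholders; `maxFileSize` is specified in kilobytes):

```html
<input id="upload" name="upload" type="file" multiple>
<script>
  // Initialize bootstrap-fileinput with a few common options.
  $('#upload').fileinput({
    showPreview: true,                          // render file previews
    maxFileSize: 2048,                          // limit uploads to 2 MB
    allowedFileExtensions: ['jpg', 'png', 'txt']
  });
</script>
```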

You can download the component here.

Input groups from Light blue

The source: https://flatlogic.com/templates/light-blue-html5/demo

Light Blue is a premium template that offers awesome, stylish form elements, letting you prepend and append text or buttons to input fields.

You can download it here.

5) Jumbotron

Jumbotron from Anchor UI Kit

The source: https://wowthemesnet.github.io/Anchor-Bootstrap-UI-Kit/docs.html#jumbotron

You can find a nice-looking jumbotron among the components of the Anchor UI Kit. You can use either a standard simple jumbotron or one with a background image. You can download the UI kit here.

6) Nav tabs and pills

Navigation tabs from Miri UI

The source: https://www.bootstrapdash.com/demo/miri-ui-kit-pro/demo/index.html

To download use the link.

7) Carousel

Carousel from Bootstrap Vue

The source: https://bootstrap-vue.org/docs/components/carousel

BootstrapVue contains plugins, custom components, and icons built on top of Bootstrap and Vue.js. One of the most fascinating UI elements there is the carousel. Along with sizing, setting the interval between slides, controls, and indicators, the component offers additional tools such as crossfade animation, touch-swipe support, and placing content inside the slides.
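In template form, the features above map directly onto the component’s documented props; a minimal sketch (image paths and captions are placeholders):

```html
<!-- BootstrapVue carousel: `interval` sets the slide delay in ms,
     `fade` enables crossfade, `controls`/`indicators` add navigation. -->
<b-carousel :interval="4000" controls indicators fade>
  <b-carousel-slide caption="First slide"
                    img-src="slide1.jpg"></b-carousel-slide>
  <b-carousel-slide caption="Second slide"
                    img-src="slide2.jpg"></b-carousel-slide>
</b-carousel>
```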

You can download it here.

8) Social buttons

Social Buttons for Bootstrap

The source: https://lipis.github.io/bootstrap-social/

With Social Sign-In Buttons you get strict and minimalistic buttons without excessive animation or unnecessary hover effects. 
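Usage follows the library’s documented class naming, one modifier class per network; the link targets are placeholders, and Font Awesome must be loaded for the icons:

```html
<!-- bootstrap-social buttons: btn-social plus a per-network class -->
<a class="btn btn-social btn-twitter" href="#">
  <span class="fa fa-twitter"></span> Sign in with Twitter
</a>
<a class="btn btn-social btn-github" href="#">
  <span class="fa fa-github"></span> Sign in with GitHub
</a>
```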

To download the component go here.

Fancy Flat Social Button Animation by Colorlib

The source: https://codepen.io/colorlib/full/GOzroL

This component fully lives up to its name. The animation on hover, when the icon turns from a square into a circle, looks fascinating.

To use this free component go here.

Bonus. How to effectively learn all Bootstrap 4 components, add-ons, and plugins

Practice is the key to success. Create several applications while following these simple tips for complete beginners.

Plugins are the secret of making great web applications

Including plugins in your app is a great development technique. There are plenty of plugin libraries online, some of them unofficial, but in any case, using plugins for forms, menus, navigation, tables, buttons, and notifications can not only speed up development but also significantly improve the visual side of your app.

Take a component-oriented approach

When developing a web application with Bootstrap, it is better to take a component-oriented approach rather than a page-oriented one. This helps you build reusable components that can be used across multiple pages. Instead of hand-crafting the HTML and CSS of each individual page, you assemble pages from components, so the development process goes much faster. It also gives you stylistic uniformity, which is always a sign of good design.

Spend time on the mobile version of the app

To be more precise, it’s better to start with the mobile version. This is the key to success in developing responsive websites and apps. First, you won’t overload the design with unnecessary elements that simply cannot fit in the mobile version. Second, it will help you save time. Let us emphasize again: the mobile design must be perfect. The share of users browsing from smartphones grows every year, and this trend shows no sign of declining.

You need something more than just Bootstrap 4 components

Bootstrap isn’t the answer to all questions. The best and most popular applications combine a fairly wide technological stack. It makes sense to use the most appropriate tool for each task.

In addition, Bootstrap itself can be customized to give your site a unique look and feel. The official Bootstrap site offers all the information you need about customization and supported options. This is perhaps the most important advice of all of the above. Don’t make the site look like everyone else. Create your own style.css file that will override the default Bootstrap styles.
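The key point is load order: your stylesheet must come after Bootstrap so your rules win the cascade. A minimal sketch (file paths and the example colors and font are placeholders):

```html
<!-- Load your own stylesheet AFTER Bootstrap so your rules take precedence. -->
<link rel="stylesheet" href="css/bootstrap.min.css">
<link rel="stylesheet" href="css/style.css">

<!-- style.css might then contain overrides such as:
     .btn-primary { background-color: #5b3cc4; border-color: #5b3cc4; }
     body         { font-family: "Inter", sans-serif; }
-->
```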

Building Apps efficiently with Flatlogic

To be good at Bootstrap development, you need to thoroughly understand many sides of it. The essential components we have listed are just eight pieces out of hundreds or even thousands. That’s a lot to learn. The good news is that Bootstrap is easier to master once you start putting it into practice. As they say, practice doesn’t make perfect – perfect practice does. We admire a person’s ability to develop complex things and are always looking for competent and enthusiastic developers. However, many people who need Bootstrap Apps would do well to look for a quicker way. We’ll show you one such way.

We designed the Flatlogic platform to help you create Apps without professional help. It requires brief research into the subject rather than specialized training in web development. Frameworks and libraries help us develop software by offering ready-made solutions that we can use as parts of our software. We followed a similar line of thought and stripped web App development down to several variables. Let’s see what it takes to develop an App with Flatlogic!

#1: Name the project

The first step is perhaps the simplest. Give your project a name that will make it easier to find and recognize.

#2: Choose tech stack

An App’s stack is the combination of technologies it uses. Define the technologies for the front end, back end, and database. In the example in the screenshot, we’re picking React, Node.js, and PostgreSQL, respectively.

#3: Choose the design

Flatlogic offers you several design patterns you can choose from. This is a matter of personal taste but you might spend a lot of time looking at the admin panel, so choose wisely!

#4: Define the schema

We’ve chosen the database’s underlying technology. Now it’s time to define its structure. Fields, titles, data types, parameters, and how all of them relate to each other. If you’re still learning the ropes, you might want to pick one of the pre-built schemas and move on to the next step.

#5: Review and launch

Check that all variables are as intended. Tick the checkbox to connect a Git repository if you want to. Hit “Finish” when you’re ready.

#6: Finishing the App

Once the compilation is complete, hit “Deploy”. After that, the App will be at your disposal. Push it to GitHub or host it locally.

We’ve covered how Flatlogic lets you create an App of your own in just six (more or less) simple steps. Create your App, host it, connect it to your API’s data endpoints, and enjoy using it!

That’s all.

Thanks for reading.

You might also like these articles:

Top JavaScript Maps API and Libraries

12 Best Bootstrap Progress Bar Widgets

13 Bootstrap Date Pickers Examples

The post 8 Essential Bootstrap 4 Components for Your Web App appeared first on Flatlogic Blog.