Automate code metrics and class diagrams with GitHub Actions

Hi friends, in my previous post — .NET GitHub Actions, you were introduced to the GitHub Actions platform and the workflow composition syntax. You learned how common .NET CLI commands and actions can be used as building blocks for creating fully automated CI/CD pipelines, directly from your GitHub repositories.

This is the second post in the series dedicated to the GitHub Actions platform and the relevance it has for .NET developers. In this post, I’ll summarize an existing GitHub Action written in .NET that can be used to maintain a code metrics markdown file. I’ll then explain how you can create your own GitHub Actions with .NET. I’ll show you how to define metadata that’s used to identify a GitHub repo as an action. Finally, I’ll bring it all together with a really cool example: updating an action in the .NET samples repository to include class diagram support using GitHub’s brand new Mermaid diagram support, so it will build updated class diagrams on every commit.

Writing a custom GitHub Action with .NET

Actions support several variations of app development:

Metadata specifier: runs.using: 'docker'
Action type: Any app that can run as a Docker container.

Metadata specifier: runs.using: 'javascript'
Action type: Any Node.js app (includes the benefit of using actions/toolkit).

Metadata specifier: runs.using: 'composite'
Action type: Composes multiple run commands and uses actions.

.NET is capable of running in Docker containers, and when you want a fully functioning .NET app to run as your GitHub Action — you’ll have to containerize your app. For more information on .NET and Docker, see .NET Docs: Containerize a .NET app.

If you’re curious about creating JavaScript GitHub Actions, I wrote about that too, see Localize .NET applications with machine-translation. In that blog post, I cover a TypeScript action that relies on Azure Cognitive Services to automatically generate pull requests for target translations.

In addition to containerizing a .NET app, you could alternatively create a .NET global tool that could be installed and called upon using the run syntax instead of uses. This alternative approach is useful for creating a .NET CLI app that can be used as a global tool, but it’s out of scope for this post. For more information on .NET global tools, see .NET Docs: Global tools.
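To sketch what that alternative looks like, a workflow step could install and invoke a global tool with the run syntax. This is a minimal, hypothetical example; the tool id `dotnet-example-analyzer` is illustrative and not an existing package:

```yaml
steps:
  # Hypothetical example: install a .NET global tool and invoke it
  # directly, using `run` instead of `uses`.
  - name: Run analyzer tool
    run: |
      dotnet tool install --global dotnet-example-analyzer
      dotnet-example-analyzer --project ./src
```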

Intent of the tutorial

Over on the .NET Docs, there’s a tutorial on creating a GitHub Action with .NET. It covers exactly how to containerize a .NET app, how to author the action.yml which represents the action’s metadata, as well as how to consume it from a workflow. Rather than repeating the entire tutorial, I’ll summarize the intent of the action, and then we’ll look at how it was updated.

The app in the tutorial performs code metric analysis by:

Scanning and discovering *.csproj and *.vbproj project files in a target repository.
Analyzing the discovered source code within these projects for:

Cyclomatic complexity
Maintainability index
Depth of inheritance
Class coupling
Number of lines of source code
Approximated lines of executable code

Creating (or updating) a code metrics markdown file.

As part of the consuming workflow composition, a pull request is conditionally (and automatically) created when the file changes. In other words, as you push changes to your GitHub repository, the workflow runs and uses the .NET code metrics action — which updates the markdown representation of the code metrics. The file itself is navigable with automatic links, and collapsible sections. It uses emoji to highlight code metrics at a glance; for example, when a class has high cyclomatic complexity it bubbles an emoji up to the project-level heading of the markdown. From there, you can drill down into the class and see the metrics for each method.

The Microsoft.CodeAnalysis.CodeMetrics namespace contains the CodeAnalysisMetricData type, which exposes the CyclomaticComplexity property. This property is a measurement of the structural complexity of the code. It is created by calculating the number of different code paths in the flow of the program. A program that has a complex control flow requires more tests to achieve good code coverage and is less maintainable. When the code is analyzed and the GitHub Action updates the file, it writes an emoji in the header using the following code:

internal static string ToCyclomaticComplexityEmoji(
    this CodeAnalysisMetricData metric) =>
    metric.CyclomaticComplexity switch
    {
        >= 0 and <= 7 => ":heavy_check_mark:",  // ✔
        8 or 9 => ":warning:",                  // ⚠
        10 or 11 => ":radioactive:",            // ☢
        >= 12 and <= 14 => ":x:",               // ❌
        _ => ":exploding_head:"                 // 🤯
    };
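To ground the metric itself, here is a hedged side illustration (written in JavaScript for brevity; it is not part of the action's C# source): cyclomatic complexity is roughly the number of independent decision points in a function, plus one.

```javascript
// Illustrative only: counting decision points by hand.
// classify() has three decision points (the loop and two branches),
// giving a cyclomatic complexity of 4 -- comfortably inside the
// ":heavy_check_mark:" range of the action's thresholds.
function classify(values) {
  const labels = [];
  for (const v of values) {      // decision point 1
    if (v < 0) {                 // decision point 2
      labels.push('negative');
    } else if (v === 0) {        // decision point 3
      labels.push('zero');
    } else {
      labels.push('positive');
    }
  }
  return labels;
}

console.log(classify([-2, 0, 5])); // → [ 'negative', 'zero', 'positive' ]
```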

All of the code for this action is provided in the .NET samples repository and is also part of the .NET docs code samples browser experience. As an example of usage, the action is self-consuming (or dogfooding). As the code updates, the action runs and maintains a code metrics file via automated pull requests. Consider the following screen captures showing some of the major parts of the markdown file:

The code metrics markdown file represents the code it analyzes by providing the following hierarchy:

Project
Namespace
Named Type
Members table

When you drill down into a named type, there is a link at the bottom of the table for the auto-generated class diagram. This is discussed in the Adding new functionality section. To navigate the example file yourself, see the .NET samples.

Action metadata

For a GitHub repository to be recognized as a GitHub Action, it must define metadata in an action.yml file.

# Name, description, and branding. All of which are used for
# displaying the action in the GitHub Action marketplace.
name: '.NET code metric analyzer'
description: 'A GitHub action that maintains a code metrics file, reporting cyclomatic complexity, maintainability index, etc.'
branding:
  icon: sliders
  color: purple

# Specify inputs, some are required and some are not.
inputs:
  owner:
    description: 'The owner of the repo. Assign from github.repository_owner. Example, "dotnet".'
    required: true
  name:
    description: 'The repository name. Example, "samples".'
    required: true
  branch:
    description: 'The branch name. Assign from github.ref. Example, "refs/heads/main".'
    required: true
  dir:
    description: 'The root directory to work from. Example, "path/to/code".'
    required: true
  workspace:
    description: 'The workspace directory.'
    required: false
    default: '/github/workspace'

# The action outputs the following values.
outputs:
  summary-title:
    description: 'The title of the code metrics action.'
  summary-details:
    description: 'A detailed summary of all the projects that were flagged.'
  updated-metrics:
    description: 'A boolean value, indicating whether or not the metrics file was updated as a result of running this action.'

# The action runs using docker and accepts the following arguments.
runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
    - '-o'
    - ${{ inputs.owner }}
    - '-n'
    - ${{ inputs.name }}
    - '-b'
    - ${{ inputs.branch }}
    - '-d'
    - ${{ inputs.dir }}
    - '-w'
    - ${{ inputs.workspace }}

The metadata for the .NET samples code metrics action is nested within a subdirectory; as such, it is not recognized as an action that can be displayed in the GitHub Action marketplace. However, it can still be used as an action.

For more information on metadata, see GitHub Docs: Metadata syntax for GitHub Actions.

Consuming workflow

To consume the .NET code metrics action, a workflow file must exist in the .github/workflows directory from the root of the GitHub repository. Consider the following workflow file:

name: '.NET code metrics'

on:
  push:
    branches: [ main ]
    # Ignore markdown files
    paths-ignore:
      - '**.md'

jobs:
  dotnet-code-metrics:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write

    steps:
      - uses: actions/checkout@v3

      # Analyze repositories source metrics:
      # Create (or update) the code metrics file.
      - name: .NET code metrics
        id: dotnet-code-metrics
        uses: dotnet/samples/github-actions/DotNet.GitHubAction@main
        with:
          owner: ${{ github.repository_owner }}
          name: ${{ github.repository }}
          branch: ${{ github.ref }}
          dir: ${{ './github-actions/DotNet.GitHubAction' }}

      # Create a pull request if there are changes.
      - name: Create pull request
        uses: peter-evans/create-pull-request@v4
        if: ${{ steps.dotnet-code-metrics.outputs.updated-metrics == 'true' }}
        with:
          title: '${{ steps.dotnet-code-metrics.outputs.summary-title }}'
          body: '${{ steps.dotnet-code-metrics.outputs.summary-details }}'
          commit-message: '.NET code metrics, automated pull request.'

This workflow makes use of jobs.<job_id>.permissions, setting contents and pull-requests to write. This is required for the action to update contents in the repo and create a pull request from those changes. For more information on permissions, see GitHub Docs: Workflow syntax for GitHub Actions – permissions.

To help visualize how this workflow functions, see the following sequence diagram:

The preceding sequence diagram shows the workflow for the .NET code metrics action:

A developer pushes code to the GitHub repository.

The workflow is triggered and starts to run.

The source code is checked out into the $GITHUB_WORKSPACE.

The .NET code metrics action is invoked.

The source code is analyzed, and the code metrics file is updated.

If the .NET code metrics step (dotnet-code-metrics) outputs that metrics were updated, the create-pull-request action is invoked.

The code metrics file is checked into the repository.
An automated pull request is created. For example pull requests created by the app/github-actions bot, see .NET samples / pull requests.

Adding new functionality

GitHub recently announced diagram support for Markdown powered by Mermaid. Since our custom action is capable of analyzing C# as part of its execution, it has a semantic understanding of the classes it’s analyzing. This is used to automatically create Mermaid class diagrams in the code metrics file.
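For context, a Mermaid class diagram is just a fenced mermaid code block that GitHub renders inline. Here is a minimal, hand-written sketch; the ExampleService type is illustrative and not taken from the samples repository:

```mermaid
classDiagram
  class ExampleService {
    +int Count
    +Increment() void
  }
```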

The .NET code metrics GitHub Action sample code was updated to include Mermaid support.

static void AppendMermaidClassDiagrams(
    MarkdownDocument document,
    List<(string Id, string Class, string MermaidCode)> diagrams)
{
    document.AppendHeader("Mermaid class diagrams", 2);

    foreach (var (id, className, code) in diagrams)
    {
        document.AppendParagraph($"<div id=\"{id}\"></div>");
        document.AppendHeader($"`{className}` class diagram", 5);
        document.AppendCode("mermaid", code);
    }
}
If you’re interested in seeing the code that generates the diagrams argument, see the ToMermaidClassDiagram extension method.
As an example of how this renders within the Markdown file, see the following diagram:


In this post, you learned about the different types of GitHub Actions with an emphasis on Docker and .NET. I explained how the .NET code metrics GitHub Action was updated to include Mermaid class diagram support. You also saw an example action.yml file that serves as the metadata for a GitHub Action. You then saw a visualization of the consuming workflow, and how the code metrics action is consumed. I also covered how to add new functionality to the code metrics action.

What will you build, and how will it help others? I encourage you to create and share your own .NET GitHub Actions. For more information on .NET and custom GitHub Actions, see the following resources:

GitHub Docs: Creating custom actions
.NET Docs: GitHub Actions and .NET

The post Automate code metrics and class diagrams with GitHub Actions appeared first on .NET Blog.

.NET March 2022 Updates – .NET 6.0.3, .NET 5.0.15, and .NET 3.1.23

Today, we are releasing the .NET March 2022 Updates. These updates contain reliability and security improvements. See the individual release notes for details on updated packages.

You can download the 6.0.3, 5.0.15, and 3.1.23 versions for Windows, macOS, and Linux, for x86, x64, Arm32, and Arm64.

Installers and binaries: 6.0.3 | 5.0.15 | 3.1.23

Release notes: 6.0.3 | 5.0.15 | 3.1.23
Container images
Linux packages: 6.0.3 | 5.0.15 | 3.1.23
Release feedback/issue
Known issues: 6.0 | 5.0 | 3.1


ASP.NET Core: 6.0.3
EF Core: 6.0.3

Runtime: 6.0.3

Winforms: 6.0.3
WPF: 6.0.3
WPF: 5.0.15


CVE-2020-8927: .NET Remote Code Execution Vulnerability

Microsoft is releasing this security advisory to provide information about a vulnerability in .NET 5.0 and .NET Core 3.1. This advisory also provides guidance on what developers can do to update their applications to remove this vulnerability.

A vulnerability exists in .NET 5.0 and .NET Core 3.1 where a buffer overflow exists in the Brotli library versions prior to 1.0.8.

CVE-2022-24464: .NET Denial of Service Vulnerability

Microsoft is releasing this security advisory to provide information about a vulnerability in .NET 6.0, .NET 5.0, and .NET Core 3.1. This advisory also provides guidance on what developers can do to update their applications to remove this vulnerability.

Microsoft is aware of a Denial of Service vulnerability, which exists in .NET 6.0, .NET 5.0, and .NET Core 3.1 when parsing certain types of HTTP form requests.

CVE-2022-24512: .NET Remote Code Execution Vulnerability

Microsoft is releasing this security advisory to provide information about a vulnerability in .NET 6.0, .NET 5.0, and .NET Core 3.1. This advisory also provides guidance on what developers can do to update their applications to remove this vulnerability.

A Remote Code Execution vulnerability exists in .NET 6.0, .NET 5.0, and .NET Core 3.1 where a stack buffer overrun occurs in the .NET Double parse routine.

Visual Studio

See the release notes for Visual Studio compatibility for .NET 6.0, .NET 5.0, and .NET Core 3.1.

The post .NET March 2022 Updates – .NET 6.0.3, .NET 5.0.15, and .NET 3.1.23 appeared first on .NET Blog.

How to Upgrade to the React 18 Release Candidate

Our next major version, React 18, is available today as a Release Candidate (RC). As we shared at React Conf, React 18 introduces features powered by our new concurrent renderer, with a gradual adoption strategy for existing applications. In this post, we will guide you through the steps for upgrading to React 18.

If you’d like to help us test React 18, follow the steps in this upgrade guide and report any issues you encounter so we can fix them before the stable release.

Note for React Native users: React 18 will ship in React Native with the New React Native Architecture. For more information, see the React Conf keynote here.


To install the latest React 18 RC, use the @rc tag:

npm install react@rc react-dom@rc

Or if you’re using yarn:

yarn add react@rc react-dom@rc

Updates to Client Rendering APIs

When you first install React 18, you will see a warning in the console:

ReactDOM.render is no longer supported in React 18. Use createRoot instead. Until you switch to the new API, your app will behave as if it’s running React 17. Learn more:

React 18 introduces a new root API which provides better ergonomics for managing roots. The new root API also enables the new concurrent renderer, which allows you to opt into concurrent features.

// Before
import { render } from 'react-dom';
const container = document.getElementById('app');
render(<App tab="home" />, container);

// After
import { createRoot } from 'react-dom/client';
const container = document.getElementById('app');
const root = createRoot(container);
root.render(<App tab="home" />);

We’ve also changed unmountComponentAtNode to root.unmount:

// Before
unmountComponentAtNode(container);

// After
root.unmount();
We’ve also removed the callback from render, since it usually does not have the expected result when using Suspense:

// Before
const container = document.getElementById('app');
ReactDOM.render(<App tab="home" />, container, () => {
  console.log('rendered');
});

// After
function AppWithCallbackAfterRender() {
  useEffect(() => {
    console.log('rendered');
  });

  return <App tab="home" />;
}

const container = document.getElementById('app');
const root = ReactDOM.createRoot(container);
root.render(<AppWithCallbackAfterRender />);

Note: There is no one-to-one replacement for the old render callback API — it depends on your use case. See the working group post for Replacing render with createRoot for more information.

Finally, if your app uses server-side rendering with hydration, upgrade hydrate to hydrateRoot:

// Before
import { hydrate } from 'react-dom';
const container = document.getElementById('app');
hydrate(<App tab="home" />, container);

// After
import { hydrateRoot } from 'react-dom/client';
const container = document.getElementById('app');
const root = hydrateRoot(container, <App tab="home" />);
// Unlike with createRoot, you don't need a separate root.render() call here.

For more information, see the working group discussion here.

Updates to Server Rendering APIs

In this release, we’re revamping our react-dom/server APIs to fully support Suspense on the server and Streaming SSR. As part of these changes, we’re deprecating the old Node streaming API, which does not support incremental Suspense streaming on the server.

Using this API will now warn:

renderToNodeStream: Deprecated ⛔️️

Instead, for streaming in Node environments, use:

renderToPipeableStream: New ✨

We’re also introducing a new API to support streaming SSR with Suspense for modern edge runtime environments, such as Deno and Cloudflare workers:

renderToReadableStream: New ✨

The following APIs will continue working, but with limited support for Suspense:

renderToString: Limited ⚠️

renderToStaticMarkup: Limited ⚠️

Finally, this API will continue to work for rendering e-mails:

renderToStaticNodeStream
For more information on the changes to server rendering APIs, see the working group post on Upgrading to React 18 on the server, a deep dive on the new Suspense SSR Architecture, and Shaundai Person’s talk on Streaming Server Rendering with Suspense at React Conf 2021.

Automatic Batching

React 18 adds out-of-the-box performance improvements by doing more batching by default. Batching is when React groups multiple state updates into a single re-render for better performance. Before React 18, we only batched updates inside React event handlers. Updates inside of promises, setTimeout, native event handlers, or any other event were not batched in React by default:

// Before React 18, only React events were batched.

function handleClick() {
  setCount(c => c + 1);
  setFlag(f => !f);
  // React will only re-render once at the end (that's batching!)
}

setTimeout(() => {
  setCount(c => c + 1);
  setFlag(f => !f);
  // React will render twice, once for each state update (no batching)
}, 1000);

Starting in React 18 with createRoot, all updates will be automatically batched, no matter where they originate from. This means that updates inside of timeouts, promises, native event handlers or any other event will batch the same way as updates inside of React events:

// After React 18, updates inside of timeouts, promises,
// native event handlers or any other event are batched.

function handleClick() {
  setCount(c => c + 1);
  setFlag(f => !f);
  // React will only re-render once at the end (that's batching!)
}

setTimeout(() => {
  setCount(c => c + 1);
  setFlag(f => !f);
  // React will only re-render once at the end (that's batching!)
}, 1000);

This is a breaking change, but we expect this to result in less work rendering, and therefore better performance in your applications. To opt-out of automatic batching, you can use flushSync:

import { flushSync } from 'react-dom';

function handleClick() {
  flushSync(() => {
    setCounter(c => c + 1);
  });
  // React has updated the DOM by now
  flushSync(() => {
    setFlag(f => !f);
  });
  // React has updated the DOM by now
}
For more information, see the Automatic batching deep dive.
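Outside of React, the core idea can be sketched in a few lines of plain JavaScript. This is a conceptual model only, not React's implementation: queue state updates as they arrive, then apply them all in a single "render" pass.

```javascript
// Conceptual sketch of batching -- not React internals.
let renderCount = 0;
const pending = [];

function setState(update) {
  pending.push(update);          // queue the update; don't render yet
}

function flush(state) {
  const next = pending.reduce((s, u) => ({ ...s, ...u }), state);
  pending.length = 0;
  renderCount++;                 // the whole batch costs one render
  return next;
}

let state = { count: 0, flag: false };
setState({ count: 1 });
setState({ flag: true });
state = flush(state);            // two updates, one render
console.log(renderCount, state); // → 1 { count: 1, flag: true }
```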

New APIs for Libraries

In the React 18 Working Group we worked with library maintainers to create new APIs needed to support concurrent rendering for use cases specific to areas like styles, external stores, and accessibility. To support React 18, some libraries may need to switch to one of the following APIs:

useId is a new hook for generating unique IDs on both the client and server, while avoiding hydration mismatches. This solves an issue that already exists in React 17 and below, but it’s even more important in React 18 because of how our streaming server renderer delivers HTML out-of-order. For more information see the useId post in the working group.

useSyncExternalStore is a new hook that allows external stores to support concurrent reads by forcing updates to the store to be synchronous. This new API is recommended for any library that integrates with state external to React. For more information, see the useSyncExternalStore overview post and useSyncExternalStore API details.

useInsertionEffect is a new hook that allows CSS-in-JS libraries to address performance issues of injecting styles in render. Unless you’ve already built a CSS-in-JS library we don’t expect you to ever use this. This hook will run after the DOM is mutated, but before layout effects read the new layout. This solves an issue that already exists in React 17 and below, but is even more important in React 18 because React yields to the browser during concurrent rendering, giving it a chance to recalculate layout. For more information, see the Library Upgrade Guide for <style>.

React 18 also introduces new APIs for concurrent rendering such as startTransition and useDeferredValue, which we will share more about in the upcoming stable release post.

Updates to Strict Mode

In the future, we’d like to add a feature that allows React to add and remove sections of the UI while preserving state. For example, when a user tabs away from a screen and back, React should be able to immediately show the previous screen. To do this, React would unmount and remount trees using the same component state as before.

This feature will give React better performance out-of-the-box, but requires components to be resilient to effects being mounted and destroyed multiple times. Most effects will work without any changes, but some effects assume they are only mounted or destroyed once.

To help surface these issues, React 18 introduces a new development-only check to Strict Mode. This new check will automatically unmount and remount every component, whenever a component mounts for the first time, restoring the previous state on the second mount.

Before this change, React would mount the component and create the effects:

* React mounts the component.
* Layout effects are created.
* Effects are created.

With Strict Mode in React 18, React will simulate unmounting and remounting the component in development mode:

* React mounts the component.
* Layout effects are created.
* Effects are created.
* React simulates unmounting the component.
* Layout effects are destroyed.
* Effects are destroyed.
* React simulates mounting the component with the previous state.
* Layout effect setup code runs.
* Effect setup code runs.

For more information, see the Working Group posts for Adding Strict Effects to Strict Mode and How to Support Strict Effects.

Configuring Your Testing Environment

When you first update your tests to use createRoot, you may see this warning in your test console:

The current testing environment is not configured to support act(…)

To fix this, set global.IS_REACT_ACT_ENVIRONMENT to true before running your test:

// In your test setup file
global.IS_REACT_ACT_ENVIRONMENT = true;
The purpose of the flag is to tell React that it’s running in a unit test-like environment. React will log helpful warnings if you forget to wrap an update with act.

You can also set the flag to false to tell React that act isn’t needed. This can be useful for end-to-end tests that simulate a full browser environment.

Eventually, we expect testing libraries will configure this for you automatically. For example, the next version of React Testing Library has built-in support for React 18 without any additional configuration.

More background on the act testing API and related changes is available in the working group.

Dropping Support for Internet Explorer

In this release, React is dropping support for Internet Explorer, which is going out of support on June 15, 2022. We’re making this change now because new features introduced in React 18 are built using modern browser features such as microtasks which cannot be adequately polyfilled in IE.
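For context, a microtask runs after the currently executing synchronous code completes but before timers fire; polyfills in IE can only approximate this scheduling. A minimal sketch of the ordering guarantee:

```javascript
// Microtask ordering: queued work never interleaves with the
// currently running synchronous code.
const order = [];
order.push('sync-start');
queueMicrotask(() => order.push('microtask'));
order.push('sync-end');
// At this point the microtask has not yet run:
console.log(order); // → [ 'sync-start', 'sync-end' ]
```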

If you need to support Internet Explorer we recommend you stay with React 17.

Other Changes

Update to remove the “setState on unmounted component” warning
Suspense no longer requires a fallback prop to capture
Components can now render undefined
Deprecated renderSubtreeIntoContainer
StrictMode updated to not silence double logging by default

Working with Feature Flags in ASP.NET Core MVC application

This post is about adding feature flags to an ASP.NET Core app. In this blog post we will discuss various extension points of the Feature Management package. In the last post we implemented feature management in controller code. But that might not always be the scenario – we might like to control feature visibility in views or in middleware. We can use an IFeatureManager instance, but the FeatureManagement package offers more extension points out of the box, such as views, filters, and middleware.

These are the various ways we can consume Feature management.

Using the IsEnabledAsync method of an IFeatureManager instance – we can check the availability of a feature using the IsEnabledAsync method of an IFeatureManager instance, which is injected by the ASP.NET Core runtime into controllers or other services.

if (await _featureManager.IsEnabledAsync("Feature"))

Using the FeatureGate attribute – If we want to control the execution of a controller or action method based on a feature, we can use the FeatureGate attribute. We can add this attribute to controllers or action methods, and if the feature is not enabled, it will show a 404 page.

[FeatureGate("WelcomeMessage")]
public class HomeController : Controller
{
    private readonly ILogger<HomeController> _logger;
    //Controller implementation
}

Using Feature tag helper in Views – We can use the Feature tag helper and conditionally render elements, like this.

<feature name="WelcomeMessage">
    <!-- Implementation -->
</feature>

We need to modify the _ViewImports.cshtml and add the following line to start using this tag helper.

@addTagHelper *, Microsoft.FeatureManagement.AspNetCore

Conditionally execute Action Filters – we can use the Feature Management package to conditionally execute action filters, like this.

builder.Services.AddControllersWithViews(options =>
    options.Filters.AddForFeature<CustomActionFilter>("WelcomeMessage"));
The CustomActionFilter class should implement the IAsyncActionFilter interface.

Conditionally execute middleware – we can use the feature management package to conditionally execute middleware as well (here CustomMiddleware stands in for your own middleware class).

var app = builder.Build();
app.UseMiddlewareForFeature<CustomMiddleware>("WelcomeMessage");

In the case of controllers and views, if the feature is disabled it will redirect to a 404 page, which is not a nice user experience. We can customize this behavior by implementing the IDisabledFeaturesHandler interface. Here is one simple implementation.

public class DisabledFeaturesHandler : IDisabledFeaturesHandler
{
    public Task HandleDisabledFeatures(IEnumerable<string> features, ActionExecutingContext context)
    {
        var featureText = string.Join(",", features);
        context.Result = new ContentResult()
        {
            Content = $"<p>The following feature(s) are not available for you.</p>" +
                $"<p>{featureText}</p><p>Please contact support for more information.</p>",
            ContentType = "text/html"
        };
        return Task.CompletedTask;
    }
}

And we need to map the DisabledFeaturesHandler to the Feature Management service like this.

builder.Services.AddFeatureManagement()
    .UseDisabledFeaturesHandler(new DisabledFeaturesHandler());

These are the various out-of-the-box feature management options to control the execution and rendering of ASP.NET Core MVC application code and views. You can implement your own filters as well, by implementing the IFeatureFilter interface.

Happy Programming 🙂

Improving the signal bar

Signals are one of Seq’s most important and useful features. Activating and combining signals can very quickly limit a search or query down to a narrow stream of relevant events.

The signal bar is the right-hand pane of the Seq UI where signals can be activated and deactivated. With just a few signals, everything is neatly organized there, but when the total number of signals is large, a few problems emerge:

The activated signals can be scrolled off-screen, making it difficult to quickly determine which filters are being applied to the current search or query,
The query list, beneath the signal list, can end up scrolled off-screen, making saved queries an oft-forgotten feature,
The editor pane that appears when editing a signal or query can end up scrolled off-screen, making it awkward to start editing a signal far down the list, and
There isn’t much space there to add new functionality without more scrolling or clutter.

Seq 2022.1 will have an updated signal bar design that addresses these issues. We’re really pleased with how it is coming together, so along with this sneak peek blog post, you can now download or docker pull a preview version of Seq 2022.1 and try it out!

Tool windows and the tab drawer

The first thing you’ll notice about the Seq 2022 UI is the addition of a tab drawer along the right-hand edge of the screen. Signals, queries, and other tool windows show up there when they’ve been collapsed:

Clicking on one of the tool windows in the tab drawer will restore it into the signal bar:

Each tool window has its own content area and scrolls independently: you can keep the queries tool window on screen even when the signals window contains a large number of signals.

This new layout also neatly and consistently organizes features like the improved history window and the new variables window.

The editor window

When a signal or query is being edited, the editor appears as an independent tool window that you can keep on screen, or collapse to the tab drawer.

This makes working with complex signals (with long lists of filters) much more pleasant.

The editor has also been streamlined: you no longer need to click “ok” to apply changes to the signal or query title, or to any of the filters within a signal. We’re planning some further changes to the editor to better take advantage of the increased screen real estate available for it.

Activated signals

When a signal is activated, it now appears in a list beneath the search box:

This keeps all of the information about the scope of the current query – the date range and selected signals – in the one place. It’s much easier to tell which signals are applied at a glance, and clicking on a signal here will deactivate it, saving the need to scroll through a long signal list.

When can I get it?

A preview build of Seq 2022.1 is available for Windows via the Seq downloads page, and you can grab a preview container for Docker/Linux by pulling the datalust/seq:preview tag.

We’re finishing up the release now, with a few small items yet to add and some polish to apply. If all goes to plan you’ll have an RTM build in your hands in late March 2022.

We’d very much appreciate your feedback on the changes, either here, on the discussion forum, or via our support email address. Thanks for taking a look!

Adding feature flags to an ASP.NET Core app

This post is about adding feature flags to an ASP.NET Core app. Feature flags (also known as feature toggles or feature switches) are a software development technique that turns certain functionality on and off during runtime, without deploying new code. In this post we will discuss configuring flags using the appsettings.json file. I am using an ASP.NET Core MVC project, but you can do the same for any .NET Core project, like Razor web apps or Web APIs.

First we need to add a reference to the Microsoft.FeatureManagement.AspNetCore NuGet package. This package, created by Microsoft, supports the creation of simple on/off feature flags as well as complex conditional flags. Once this package is added, we need to add the following code to inject the Feature Manager instance into the HTTP pipeline.

using Microsoft.FeatureManagement;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddFeatureManagement();

var app = builder.Build();

Next we need to create a FeatureManagement section in the appsettings.json with a feature name and a boolean value, like this.

"FeatureManagement": {
    "WelcomeMessage": false
}

Now we are ready with the feature toggle; let us write code to manage it from the controller. In the controller, the ASP.NET Core runtime will inject an instance of IFeatureManager. Using this interface, we can check whether a feature is enabled with the IsEnabledAsync method. So for our feature we can do it like this.

public async Task<IActionResult> IndexAsync()
{
    if (await _featureManager.IsEnabledAsync("WelcomeMessage"))
    {
        ViewData["WelcomeMessage"] = "Welcome to the Feature Demo app.";
    }
    return View();
}
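The _featureManager field used in the action above comes from constructor injection. A minimal sketch, assuming the default HomeController from the MVC template:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.FeatureManagement;

public class HomeController : Controller
{
    private readonly IFeatureManager _featureManager;

    // The ASP.NET Core runtime resolves IFeatureManager from the
    // service container registered by AddFeatureManagement().
    public HomeController(IFeatureManager featureManager)
    {
        _featureManager = featureManager;
    }
}
```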

And in the View we can write the following code.

@if (ViewData["WelcomeMessage"] != null)
{
    <div class="alert alert-primary" role="alert">@ViewData["WelcomeMessage"]</div>
}

Run the application and the alert will not be displayed. Change the WelcomeMessage value to true and refresh the page – it will display the Bootstrap alert.

This is how you can start introducing feature flags (or feature toggles) in an ASP.NET Core MVC app. As you may have noticed, the Feature Management library is built on top of the .NET Core configuration system, so any configuration source can serve as a source of feature flags. Microsoft Azure also provides the Azure App Configuration service, which helps implement feature flags for cloud-native apps.

Happy Programming 🙂

What is Node.js?

Node.js is a backend JavaScript runtime environment (RTE) created in 2009 by Ryan Dahl that is used to build server-side applications like websites and internal API services. Node.js is also cross-platform, meaning that applications can run on operating systems such as macOS, Microsoft Windows, and Linux.

Node.js is powered by V8, Google’s Chrome JavaScript engine, and handles web applications in an event-driven, asynchronous way. Node.js also uses the world’s largest ecosystem of open source libraries – npm (the Node Package Manager).

The idea behind npm modules is a publicly available set of reusable components, installable via an online repository, with both version and dependency management.

How the Node.js architecture works

Node.js maintains a limited thread pool for processing requests.
Node.js queues requests as they come in.
The Single-Threaded Event Loop – the core component – waits indefinitely for requests.
The loop picks up each request from the queue as it arrives and checks whether it requires a blocking I/O operation.
If the request doesn’t require a blocking I/O operation, the loop processes it and sends a response.
If the request does require a blocking operation, the loop assigns a thread from the internal thread pool to handle it.
As soon as the blocking task is handled, the event loop continues monitoring and queueing requests. This is what is called Node.js’s non-blocking nature.

Why use Node.js

Single-Threaded Event Loop Model. Node.js uses a ‘Single-Threaded Event Loop Model’ architecture to manage multiple client requests. While the main event loop runs on a single thread, I/O work happens in the background on separate threads, because the I/O operations in the Node API are asynchronous (non-blocking by design) to fit into the event loop.

Performance. Built on Google Chrome’s V8 JavaScript engine, Node.js allows us to run code faster and more easily.

High scalability. Applications in Node.js are very scalable because they work asynchronously: Node.js operates on a single thread, so when one request is submitted, processing begins and the thread is immediately ready for the next request; when a result is ready, it is sent back to the client.

npm packages. The npm registry gives every Node.js application access to an enormous catalogue of reusable open source packages.

Global community. Node.js has an enormous global community that actively communicates on GitHub, Reddit, and Stack Overflow. Community members also share completely free tools, modules, packages, and frameworks with each other.

Extended hosting options. Node.js deployments are available via PaaS providers such as AWS and Heroku. Node.js can thus minimize the number of servers required to host an application, which can substantially reduce page load times.

Who uses Node.js

Node.js enables you to build business solutions that give you an edge over competitors, e.g.:

IoT apps;
Data Streaming, etc.

Node.js is quite popular: it is used for development by both global companies and startups.


How to create your application on Node.js backend using Flatlogic Platform

Step 1. Choosing the Tech Stack

In this step, you’re setting the name of your application and choosing the stack: Frontend, Backend, and Database.

Step 2. Choosing the Starter Template

Then you’re choosing the design of the web app.

Step 3. Schema Editor

In this part you will need to know which kind of application you want to build (a CRM or an e-commerce app, for example), and you build the database schema, i.e. the tables and the relationships between them.

If you are not familiar with database design and find it difficult to work out what the tables should be, we have prepared several ready-made example schemas of real-world apps that you can modify and build your app upon:

E-commerce app;
Time tracking app;
Books store;
Chat (messaging) app;

Flatlogic Platform offers you the opportunity to create a CRUD application with a Node.js backend in literally a few minutes. As a result, you will get database models, a Node.js CRUD admin panel, and an API.

The post What is Node.js? appeared first on Flatlogic Blog.

Building a gRPC Server in .NET


In this article, we will look at how to build a simple web service with gRPC in .NET. We will keep our changes to a minimum and leverage the same Protocol Buffer IDL we used in my previous post. We will also go through some common problems that you might face when building a gRPC server in .NET.


For this article we will again be using the Online Bookshop example and leveraging the same Protobufs as we saw before. If you aren’t familiar with this series or missed the earlier posts, you can find them here.

Introduction to gRPC
Building a gRPC server with Go

Building a gRPC server with .NET (You are here)
Building a gRPC client with Go
Building a gRPC client with .NET

We will be covering steps 1 and 2 in the following diagram.


So this is what we are trying to achieve.

Generate the .proto IDL stubs.
Write the business logic for our service methods.
Spin up a gRPC server on a given port.

In a nutshell, we will be covering the following items on our initial diagram.

💡 As always, all the code samples and documentation can be found at:


Prerequisites

Visual Studio Code or IDE of your choice
gRPC compiler

Please note that I’m using some of the commands that are macOS specific. Please follow this link to set it up if you are on a different OS.

To install Protobuf compiler:

brew install protobuf

Project Structure

We can use .NET’s tooling to generate a sample gRPC project. Run the following command at the root of your workspace.

dotnet new grpc -o BookshopServer

Once you run the above command, you will see the following structure.

We also need to configure the SSL trust:

dotnet dev-certs https --trust

As you might have guessed, this is a default template, and it already has a lot of things wired up for us, like the Protos folder.

Generating the server stubs

Usually, we would have to invoke the protocol buffer compiler to generate the code for the target language (as we saw in my previous article). However, for .NET they have streamlined the code generation process. They use the Grpc.Tools NuGet package with MSBuild to provide automatic code generation, which is pretty neat! 👏

If you open up the BookshopServer.csproj file you will find the following lines:

<Protobuf Include="Protos\greet.proto" GrpcServices="Server" />

We are going to replace greet.proto with our bookshop.proto file.

We will also update our csproj file like so:

<Protobuf Include="../proto/bookshop.proto" GrpcServices="Server" />

Implementing the Server

The implementation part is easy! Let’s clean up the GreeterService that comes by default and add a new file called InventoryService.cs

rm BookshopServer/Services/GreeterService.cs
code BookshopServer/Services/InventoryService.cs

This is what our service is going to look like.


Let’s go through the code step by step.

Inventory.InventoryBase is an abstract class that was auto-generated (in your obj/Debug folder) from our protobuf file.

The GetBookList method’s stub is already generated for us in the InventoryBase class, which is why we override it. Again, this is the RPC call we defined in our protobuf definition. The method takes a GetBookListRequest, which defines what the request looks like, and a ServerCallContext param, which contains the headers, auth context, etc.
The rest of the code is pretty easy – we prepare the response and return it to the caller/client. It’s worth noting that we never defined the GetBookListRequest and GetBookListResponse types ourselves. The gRPC tooling for .NET has already created these for us under the Bookshop namespace.

Make sure to update the Program.cs to reflect the new service as well.

// …
// …
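The elided parts of Program.cs amount to registering gRPC and mapping the new service; a minimal sketch:

```csharp
using BookshopServer.Services;

var builder = WebApplication.CreateBuilder(args);

// Register the gRPC services with the DI container.
builder.Services.AddGrpc();

var app = builder.Build();

// Map InventoryService so Kestrel can route gRPC calls to it.
app.MapGrpcService<InventoryService>();

app.Run();
```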

And then we can run the server with the following command.

dotnet run --project BookshopServer/BookshopServer.csproj

We are almost there! Remember that we can’t access the service through the browser, since browsers don’t understand binary protocols. In the next step, we will test our service 🎉

Common Errors

A common error you’d find on macOS systems with .NET is the HTTP/2 over TLS issue shown below.

The gRPC template uses TLS by default, and Kestrel doesn’t support HTTP/2 with TLS on macOS. We need to turn off TLS (ouch!) in order for our demo to work.

💡 Please don’t do this in production! This is intended for local development purposes only.

On local development

// Turn off TLS
builder.WebHost.ConfigureKestrel(options =>
{
    // Set up an HTTP/2 endpoint without TLS.
    options.ListenLocalhost(5000, o => o.Protocols = HttpProtocols.Http2);
});

Testing the service

Usually, when interacting with an HTTP/1.1 server, we can use cURL to make requests and inspect the responses. With gRPC we can’t do that (you can make requests to HTTP/2 services, but the responses won’t be readable). We will be using gRPCurl instead.

Once you have it up and running, you can now interact with the server we just built.

grpcurl -plaintext localhost:5000 Inventory/GetBookList

💡 Note: gRPC defaults to TLS for transport. However, to keep things simple, I will be using the `-plaintext` flag with `grpcurl` so that we can see a human-readable response.

How do we figure out the endpoints of the service? There are two ways to do this. One is by providing a path to the proto files, while the other option enables reflection through the code.

Using proto files

If you don’t want to enable reflection, we can use the Protobuf files to let gRPCurl know which methods are available. Normally, when a team builds a gRPC service, they will make the protobuf files available to anyone integrating with them. So, without having to ask them or resort to trial and error, you can use these proto files to introspect what endpoints are available for consumption.

grpcurl -import-path Proto -proto bookshop.proto -plaintext localhost:5000 Inventory/GetBookList

Now, let’s say we didn’t have reflection enabled and tried to call a method on the server without providing the proto files.

grpcurl -plaintext localhost:5000 Inventory/GetBookList

We can expect it to error out. Cool!

Enabling reflection

While in the BookshopServer folder run the following command to install the reflection package.

dotnet add package Grpc.AspNetCore.Server.Reflection

Add the following to the Program.cs file. Note that we are using the new Minimal API approach to configure these services.

// Register services that enable reflection
builder.Services.AddGrpcReflection();

// Enable reflection in Development mode.
if (app.Environment.IsDevelopment())
{
    app.MapGrpcReflectionService();
}

As we have seen, similar to the Go implementation, we can use the same Protocol Buffer files to generate the server implementation in .NET. In my opinion, .NET’s new tooling makes it easy to regenerate the server stubs when your Protobufs change. However, setting up the local developer environment can be a bit challenging, especially on macOS.

Feel free to let me know if you have any questions or feedback. Until next time! 👋


Understanding the .NET Language Integrated Query (LINQ)


The Language Integrated Query (LINQ), which is pronounced as “link”, was introduced in the .NET Framework 3.5 to provide query capabilities by defining standardized query syntax in the .NET programming languages (such as C# and VB.NET). LINQ is provided via the System.Linq namespace.

A query is an expression to retrieve data from a data source. Usually, queries are expressed as simple strings (e.g., SQL for relational databases) without type checking at compile time or IntelliSense support. Traditionally, developers had to learn a new query language for each data source type (e.g., SQL, XML, ADO.NET Datasets, etc.).

LINQ provides unified query syntax to query different data sources by working with objects. For example, we could retrieve and save data in different databases (MS SQL, My SQL, Oracle, etc.) with the same code. Using the same basic coding patterns, we can query and transform data in any source where a LINQ provider is available. In addition, we can perform many operations, such as filtering, ordering, and grouping.

In this article, we will learn about the LINQ architecture and technologies, query syntaxes, execution types, and query operations. In addition, we will see some code examples to become familiar with LINQ concepts.

LINQ and Generic Types (C#)

We can design classes and methods that can provide functionalities for a general type (T) by using Generics. The generic type parameter will be defined when the class or method is declared and instantiated. In this way, we can use the generic class or method for different types without the cost of boxing operations and the risk of runtime casts.

A generic type is declared by specifying a type parameter in angle brackets after the class or method name, e.g. MyClassName<T>, where T is a type parameter. The MyClassName class will provide generalized solutions for any T. The most common use of generics is to create collection classes.

LINQ queries are based on generic types. So, when creating an instance of a generic collection class, such as List<T>, Dictionary<TKey, TValue>, etc., we should replace the T parameter with the type of our objects. For example, we could keep a list of string values (List<string>), a list of custom User objects (List<User>), a dictionary of integer keys with string values (Dictionary<int, string>), etc.

If you have already used LINQ, you probably have seen the IEnumerable<T> interface. The IEnumerable<T> interface enables the generic collection classes to be enumerated using the foreach statement. A generic collection is a collection with a general type (T). The non-generic collection classes such as ArrayList support the IEnumerable interface to be enumerated.
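As a quick illustration, a List<T> with T replaced by string can be enumerated with foreach because it implements IEnumerable<string>:

```csharp
using System;
using System.Collections.Generic;

// List<string> implements IEnumerable<string>, so foreach works directly.
List<string> cities = new List<string> { "Athens", "Rome", "Berlin" };

foreach (string city in cities)
{
    Console.WriteLine(city);
}
```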

LINQ Architecture and Technologies

As we have already seen, we can write LINQ queries against any source for which a LINQ provider is available. These sources implement the IEnumerable interface, and include in-memory data structures, XML documents, SQL databases, and DataSet objects. In this way, we always view the data as an IEnumerable collection, whether we are querying, updating, etc.

In the following figure, we can see the LINQ architecture and the available LINQ technologies. As we can see, the LINQ technologies are the following:

LINQ to Objects: Using LINQ queries with any IEnumerable or IEnumerable<T> collection directly, without using an intermediate LINQ provider or API such as LINQ to SQL, LINQ to XML, etc. Practically, we query any enumerable collections such as List<T>, Array, or Dictionary<TKey, TValue>.

LINQ to XML: LINQ to XML provides an in-memory XML programming interface that leverages the LINQ framework to perform queries more easily, similarly to SQL.

ADO.NET LINQ Technologies: ADO.NET provides consistent access to data sources (such as SQL Server, data sources exposed through OLE DB and ODBC, etc.) to separate the data access from data manipulation.

LINQ to DataSet: To perform queries over data cached in a DataSet object. In this scenario, the retrieved data are stored in a DataSet object.

LINQ to SQL: Use the LINQ programming model directly over the existing database schema and auto-generate the .NET model classes representing data. LINQ to SQL is used when we do not require mapping to conceptual models (i.e., when one-to-one mapping of the data to model classes is accepted).

LINQ to Entities: We can use the LINQ to Entities to support conceptual models (i.e., models that are not the same as the logical models of the database). The conceptual data models (mapped database models) are used to model the data and interact as objects. In this way, we can formulate queries in the database in the same programming language we are building the business logic.

Figure 1. – The LINQ architecture and the available LINQ technologies (Source).

LINQ Syntax

LINQ provides two ways to write queries, the Query Syntax and the Method Syntax. In the following sections, we will see the syntax of both ways.

Query Syntax

The LINQ Query Syntax has some similarities with the SQL query syntax, as we see in the following syntax statement. The result of a query expression is a query object (not the actual results), which is usually a collection of type IEnumerable<T>.

// LINQ Query Syntax

from <range variable> in <sourcecollection>
<Query Operator> conditional expression
<select or groupBy operator> <result formation>

In Figure 2, we can see a simple LINQ query syntax example. The from clause specifies the data source (numbers) and the num range variable (i.e., the value in each iteration). The where clause applies the filter (e.g., when the num is an even number), and the select clause specifies the type of the returned elements (e.g. all even numbers).

Figure 2. – LINQ query syntax example.
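The query described in Figure 2 can be written out as follows:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

int[] numbers = { 0, 1, 2, 3, 4, 5, 6 };

// from: data source and range variable; where: filter; select: result shape.
IEnumerable<int> evenNumsQuery =
    from num in numbers
    where num % 2 == 0
    select num;

Console.WriteLine(string.Join(" ", evenNumsQuery)); // 0 2 4 6
```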

In general, the query specifies what information to retrieve from the data source or sources. Optionally, a query also determines how that information should be sorted, grouped, and shaped before it is returned.

Note: The Query syntax does not support all LINQ query operators compared to the Method syntax.

Method Syntax

Query syntax and Method syntax are semantically identical. However, many people find query syntax simpler and easier to read since it doesn’t use lambda expressions. In Figure 3, we can see the semantically equivalent LINQ Query syntax example written in Method syntax.

The query syntax is translated into method calls (method syntax) for the .NET common language runtime (CLR) at compile time. Thus, in terms of runtime performance, both LINQ syntaxes are the same.

Figure 3. – LINQ Method syntax example.
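The equivalent Method syntax from Figure 3, using the same numbers source, looks like this:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

int[] numbers = { 0, 1, 2, 3, 4, 5, 6 };

// Where takes a lambda expression that plays the role of the where clause.
IEnumerable<int> evenNumsMethod = numbers.Where(num => num % 2 == 0);

Console.WriteLine(string.Join(" ", evenNumsMethod)); // 0 2 4 6
```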

Note: In terms of runtime performance, both LINQ syntaxes are the same.

Query Execution

In the previous sections, we saw how to use Query and Method syntax to create our query object. It is essential to notice that the query object doesn’t contain the results (i.e., the query result data). Instead, it includes the information required to produce the results when the query is executed. As we can understand, we can execute the query multiple times.

There are two ways to execute a LINQ query object, the deferred execution and the forced execution:

Deferred Execution is performed when we use the query object in a foreach statement, executing it and iterating the results.

Forced execution is performed when we execute the query to retrieve its results in a single collection object using the ToList() or ToArray() methods. Another way to force the query execution is when we perform functions that need to iterate the results, such as Count(), Max(), Average(), etc.

Let’s assume we have the Customer[] customers array from a related service. We have created the following query object to retrieve the customers who live in Athens.

// Data source

Customer[] customers = CustomerService.GetAllCustomers();

// Create the Query object (via Query Syntax)

IEnumerable<Customer> customerQuery =
    from customer in customers
    where customer.City == "Athens"
    select customer;

In the following example, we can see how to execute the query object using the two execution methods (Deferred and Forced).

// Deferred: Query execution using the foreach statement

foreach (Customer customer in customerQuery)
    Console.WriteLine($"{customer.Lastname}, {customer.Firstname}");

// Forced: Query execution using the ToList method

List<Customer> customerResults = customerQuery.ToList();
foreach (Customer customer in customerResults)
    Console.WriteLine($"{customer.Lastname}, {customer.Firstname}");

Basic LINQ Query Operations

In the following table, we can see the majority of the LINQ Query Operations grouped in categories. For information regarding each query operator’s result type and execution type (Deferred or Forced), click here.

LINQ Operator Category
LINQ Query Operators

Filtering Data
Where, OfType

Sorting Data
OrderBy, OrderByDescending, ThenBy, ThenByDescending, Reverse

Projection Operations
Select, SelectMany

Quantifier Operations
All, Any, Contains

Element Operations
ElementAt, ElementAtOrDefault, First, FirstOrDefault, Last, LastOrDefault, Single, SingleOrDefault

Partitioning Data
Skip, SkipWhile, Take, TakeWhile

Join Operations
Join, GroupJoin

Grouping Data
GroupBy, ToLookup

Aggregation Operations
Aggregate, Average, Count, LongCount, Max or MaxBy, Min or MinBy, Sum

Generation Operations
DefaultIfEmpty, Empty, Range, Repeat
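A few of the operator categories above in action, combined in Method syntax (the data is illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

int[] scores = { 97, 92, 81, 60 };

var topScores = scores
    .Where(s => s > 80)          // Filtering Data
    .OrderByDescending(s => s)   // Sorting Data
    .Select(s => $"Score: {s}"); // Projection Operations

Console.WriteLine(string.Join(", ", topScores));
// Score: 97, Score: 92, Score: 81
```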


The Language Integrated Query (LINQ) provides unified query syntax to query different data sources (e.g., SQL, XML, ADO.NET Datasets, Objects, etc.). In addition, it supports various query operations, such as filtering, ordering, grouping, etc.

LINQ queries are based on generic types, so in generic collections such as List<T>, we should replace the T parameter with our type object. The LINQ sources implement the IEnumerable interface to be enumerated. The available LINQ technologies include LINQ to Objects, XML, DataSet, SQL, and Entities.


The advantages of LINQ include:

Provides a unified query syntax for different data sources.
Type checking at compile time and IntelliSense support.
Queries can easily be reused.
Easier debugging through the .NET debugger.
Supports various query operations, such as filtering, ordering, grouping, etc.


On the other hand, LINQ has some disadvantages:

The project must be recompiled and redeployed for every change in the queries.
LINQ is less suitable for very complex SQL queries.
We cannot take advantage of the execution-plan caching provided by SQL stored procedures.

LINQ provides powerful query capabilities that any .NET developer should know.

357: Ryan Mulligan

This week I get to talk to Ryan Mulligan! Ryan put together a Collection of some of his personal picks for favorite Pens and we get a chance to talk through a lot of them. There are some classic moments here I really feel, like when something you consider pretty basic gets way more popular than you ever thought it would. Ryan has a knack for feeling out really cool new technologies and then quickly using them to build great demos that play up what those technologies were born to do.

Time Jumps

00:28 Guest introduction

01:20 The story behind the username

01:58 NFTs and CodePen

03:46 Card Hover Interactions

07:05 Working at Netlify

12:27 Sponsor: Automattic

13:34 Heart Pen

16:47 Flip animation

18:39 Cart animation Pen

23:56 Animated Verbs Pen

26:33 Burger Boxer Pen

28:40 Using React

31:44 Password input Pen

Sponsor: Automattic

Automattic are the makers of, the fastest and easiest place to spin up a WordPress site, without sacrificing the power of self-hosted options. If you sell stuff on, the built-in help to do that is powered by WooCommerce, the premier eCommerce solution for WordPress. It’s the same WooCommerce whether you are on or not. If you are self-hosted, you can almost certainly take advantage of Jetpack, Automattic’s WordPress plugin that adds enormous functionality to WordPress, like a vastly improved site search, real-time backups, security features, and tons more.

The post 357: Ryan Mulligan appeared first on CodePen Blog.