Building .NET 7 Applications with AWS CodeBuild

AWS CodeBuild is a fully managed continuous integration service for building and testing your applications. As a fully managed service, there is no infrastructure to manage, and you pay only for the resources that you use when you are building your applications. CodeBuild provides a default build image that contains the current Long Term Support (LTS) version of the .NET SDK.

Microsoft released the latest version of .NET, .NET 7, in November. This release includes performance improvements and new functionality, such as native ahead-of-time compilation (Native AOT). .NET 7 is a Standard Term Support (STS) release of the .NET SDK. At this point, CodeBuild’s default image does not support .NET 7. For customers who want to start using .NET 7 right away in their applications, CodeBuild provides two means of customizing your build environment so that you can take advantage of .NET 7.

The first option for customizing your build environment is to provide CodeBuild with a container image you create and maintain. With this method, customers can define the build environment exactly as they need by including any SDKs, runtimes, and tools in the container image. However, this approach requires customers to maintain the build environment themselves, including patching and updating the tools. This approach will not be covered in this blog post.

A second means of customizing your build environment is by using the install phase of the buildspec file. This method uses the default CodeBuild image, and adds additional functionality at the point that a build starts. This has the advantage that customers do not have the overhead of patching and maintaining the build image.

Complete documentation on the syntax of the buildspec file can be found here:

https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html

Your application’s buildspec.yml file contains all of the commands necessary to build your application and prepare it for deployment. For a typical .NET application, the buildspec file will look like this:

```
version: 0.2
phases:
  build:
    commands:
      - dotnet restore Net7TestApp.sln
      - dotnet build Net7TestApp.sln
```

Note: This buildspec file contains only the commands to build the application; commands for packaging and storing build artifacts have been omitted for brevity.

To add the .NET 7 SDK to CodeBuild so that you can build your .NET 7 applications, we will leverage the install phase of the buildspec file. The install phase allows you to install any third-party libraries or SDKs prior to beginning your actual build.

```
install:
  commands:
    - curl -sSL https://dot.net/v1/dotnet-install.sh | bash /dev/stdin --channel STS
```

The above command downloads Microsoft’s install script for .NET and uses it to download and install the latest version of the .NET SDK from the Standard Term Support channel. The script downloads files and sets environment variables within the containerized build environment. You can use this same command to automatically pull the latest Long Term Support version of the .NET SDK by changing the command argument STS to LTS.
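For example, the same install command pointed at the LTS channel looks like this:

```
install:
  commands:
    - curl -sSL https://dot.net/v1/dotnet-install.sh | bash /dev/stdin --channel LTS
```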

Your updated buildspec file will look like this:

```
version: 0.2
phases:
  install:
    commands:
      - curl -sSL https://dot.net/v1/dotnet-install.sh | bash /dev/stdin --channel STS
  build:
    commands:
      - dotnet restore Net7TestApp/Net7TestApp.sln
      - dotnet build Net7TestApp/Net7TestApp.sln
```

Once you check in your buildspec file, you can start a build via the CodeBuild console, and your .NET application will be built using the .NET 7 SDK.

As your build runs you will see output similar to this:

```
Welcome to .NET 7.0!
---------------------
SDK Version: 7.0.100
Telemetry
---------
The .NET tools collect usage data in order to help us improve your experience. It is collected by Microsoft and shared with the community. You can opt-out of telemetry by setting the DOTNET_CLI_TELEMETRY_OPTOUT environment variable to '1' or 'true' using your favorite shell.

Read more about .NET CLI Tools telemetry: https://aka.ms/dotnet-cli-telemetry
----------------
Installed an ASP.NET Core HTTPS development certificate.
To trust the certificate run 'dotnet dev-certs https --trust' (Windows and macOS only).
Learn about HTTPS: https://aka.ms/dotnet-https
----------------
Write your first app: https://aka.ms/dotnet-hello-world
Find out what's new: https://aka.ms/dotnet-whats-new
Explore documentation: https://aka.ms/dotnet-docs
Report issues and find source on GitHub: https://github.com/dotnet/core
Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli
--------------------------------------------------------------------------------------
Determining projects to restore...
Restored /codebuild/output/src095190443/src/git-codecommit.us-east-2.amazonaws.com/v1/repos/net7test/Net7TestApp/Net7TestApp/Net7TestApp.csproj (in 586 ms).
[Container] 2022/11/18 14:55:08 Running command dotnet build Net7TestApp/Net7TestApp.sln
MSBuild version 17.4.0+18d5aef85 for .NET
Determining projects to restore...
All projects are up-to-date for restore.
Net7TestApp -> /codebuild/output/src095190443/src/git-codecommit.us-east-2.amazonaws.com/v1/repos/net7test/Net7TestApp/Net7TestApp/bin/Debug/net7.0/Net7TestApp.dll
Build succeeded.
0 Warning(s)
0 Error(s)
Time Elapsed 00:00:04.63
[Container] 2022/11/18 14:55:13 Phase complete: BUILD State: SUCCEEDED
[Container] 2022/11/18 14:55:13 Phase context status code: Message:
[Container] 2022/11/18 14:55:13 Entering phase POST_BUILD
[Container] 2022/11/18 14:55:13 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2022/11/18 14:55:13 Phase context status code: Message:
```

Conclusion

Adding .NET 7 support to AWS CodeBuild is easily accomplished by adding a single line to your application’s buildspec.yml file, stored alongside your application source code. This change allows you to keep up to date with the latest versions of .NET while still taking advantage of the managed build environment provided by the CodeBuild service.

About the author:

Tom Moore

Tom Moore is a Sr. Specialist Solutions Architect at AWS, and specializes in helping customers migrate and modernize Microsoft .NET and Windows workloads into their AWS environment.

A pattern for dealing with #legacy code in C#

static string legacy_code(int input)
{
    // some magic process
    const int magicNumber = 7;

    var intermediaryValue = input + magicNumber;

    return "The answer is " + intermediaryValue;
}

When dealing with a project more than a few years old, the issue of legacy code crops up time and time again. In this case, you have a function that’s called from lots of different client applications, so you can’t change it without breaking the client apps.

I’m using the code example above to keep the illustration simple, but you have to imagine that this function “legacy_code(int)”, in reality, could be hundreds of lines long, with lots of quirks and complexities. So you really don’t want to duplicate it.

Now imagine that, as an output, I want to have just the intermediary value, not the string “The answer is …”. My client could parse the number out of the string, but that’s a horrible extra step to put on the client.

Otherwise you could create legacy_code_internal() that returns the int, and legacy_code() calls legacy_code_internal() and adds the string. This is the most common approach, but can end up with a rat’s nest of _internal() functions. A sketch of that approach is below.
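For illustration, a minimal sketch of that _internal() split, using the same toy example from above:

static int legacy_code_internal(int input)
{
    // some magic process, now returning the raw value
    const int magicNumber = 7;
    return input + magicNumber;
}

static string legacy_code(int input)
{
    // Existing clients keep the same signature and behaviour.
    return "The answer is " + legacy_code_internal(input);
}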

Here’s another approach – you can tell me what you think:

static string legacy_code(int input, Action<int> intermediary = null)
{
    // some magic process
    const int magicNumber = 7;

    var intermediaryValue = input + magicNumber;

    if (intermediary != null) intermediary(intermediaryValue);

    return "The answer is " + intermediaryValue;
}

Here, we can pass an optional function into the legacy_code function, which, if present, will be called with the intermediaryValue as an int, without interfering with how the code is called by existing clients.

A new client looking to use the new functionality could call:

int intermediaryValue = 0;
var answer = legacy_code(4, i => intermediaryValue = i);
Console.WriteLine(answer);
Console.WriteLine(intermediaryValue);

This approach could return more than one object, but this could get very messy.

Building a gRPC Client in .NET

Introduction

In this article, we will take a look at how to create a simple gRPC client with .NET and communicate with a server. This is the final post of the blog series where we talk about building gRPC services.

Motivation

This is part of an article series on gRPC. If you want to jump ahead, please feel free to do so. The links are down below.

Introduction to gRPC
Building a gRPC server with Go
Building a gRPC server with .NET
Building a gRPC client with Go

Building a gRPC client with .NET (You are here)

Please note that this is intended for anyone who’s interested in getting started with gRPC. If you’re not, please feel free to skip this article.

Plan

The plan for this article is as follows.

Scaffold a .NET console project.
Implement the gRPC client.
Communicate with the server.

In a nutshell, we will be generating the client for the server we built in our previous post.


As always, all the code samples and documentation can be found at: https://github.com/sahansera/dotnet-grpc

Prerequisites

.NET 6 SDK
Visual Studio Code or IDE of your choice
gRPC compiler

Please note that I’m using some of the commands that are macOS specific. Please follow this link to set it up if you are on a different OS.

To install the Protobuf compiler:

brew install protobuf

Project Structure

We can use .NET’s tooling to generate a sample gRPC project. Run the following command at the root of your workspace. Remember how we used the dotnet new grpc command to scaffold the server project? For this one, though, it can simply be a console app.

dotnet new console -o BookshopClient

Your project structure should look like this.


You must be wondering: if this is a console app, how does it know how to generate the client stubs? Well, it doesn’t. You have to add the following packages to the project first.

dotnet add BookshopClient.csproj package Grpc.Net.Client
dotnet add BookshopClient.csproj package Google.Protobuf
dotnet add BookshopClient.csproj package Grpc.Tools

Once everything’s installed, we can proceed with the rest of the steps.

Generating the client stubs

We will be using the same Protobuf files that we generated in our previous step. If you haven’t seen that already, head over to my previous post.

Open up the BookshopClient.csproj file and add the following lines:


<ItemGroup>
  <Protobuf Include="../proto/bookshop.proto" GrpcServices="Client" />
</ItemGroup>

As you can see, we will be reusing our Bookshop.proto file in this example too. One thing to note here is that we have updated the GrpcServices attribute to be Client.

Implementing the gRPC client

Let’s update the Program.cs file to connect to and get the response from the server.

using System.Threading.Tasks;
using Grpc.Net.Client;
using Bookshop;

// The port number must match the port of the gRPC server.
using var channel = GrpcChannel.ForAddress("http://localhost:5000");
var client = new Inventory.InventoryClient(channel);
var reply = await client.GetBookListAsync(new GetBookListRequest { });

Console.WriteLine("Greeting: " + reply.Books);
Console.WriteLine("Press any key to exit...");
Console.ReadKey();

This is based on the example given on the Microsoft docs site, by the way. What I really like about the above code is how easy it is to read. So here’s what happens.


We first create a gRPC channel with GrpcChannel.ForAddress to the server by giving its URI and port. A client can reuse the same channel object to communicate with a gRPC server; creating a channel is an expensive operation compared to invoking a gRPC method on the server. You can also pass in a GrpcChannelOptions object as the second parameter to define client options (see the sketch after this list).
Then we use the auto-generated client class Inventory.InventoryClient, leveraging the channel we created above. One thing to note here is that, if your server has multiple services, you can still use the same channel object for all of those.
We call the GetBookListAsync on our server. By the way, this is a Unary call, we will go through other client-server communication mechanisms in a separate post.
Our GetBookList method gets called on the server and returns the list of books.
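As a rough sketch of passing channel options as mentioned in the first item above (the option and value shown are illustrative, not from the original post):

using var channel = GrpcChannel.ForAddress("http://localhost:5000", new GrpcChannelOptions
{
    // Example: raise the maximum message size the client will accept (the default is 4 MB).
    MaxReceiveMessageSize = 16 * 1024 * 1024
});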

Now that we know how the requests work, let’s see this in action.

Communicating with the server

Let’s spin up the server that we built in my previous post first. This will be up and running at port 5000.

dotnet run --project BookshopServer/BookshopServer.csproj


For the client-side, we invoke a similar command.

dotnet run --project BookshopClient/BookshopClient.csproj

And in the terminal, we will get the following outputs.


Nice! As you can see, it’s not that hard to get everything working. One thing to note is that we left out the details about TLS and the different ways to communicate with the server (i.e. unary, streaming, etc.). I will cover such topics in-depth in the future.

Conclusion

In this article, we looked at how to reuse our Protobuf files to create a client to interact with the server we created in the previous post.

I hope this article series cleared up a lot of the confusion you had about gRPC. Please feel free to share your questions, thoughts, or feedback in the comments section below. Until next time!

References

https://docs.microsoft.com/en-us/aspnet/core/tutorials/grpc/grpc-start?view=aspnetcore-6.0&tabs=visual-studio-code

ASP.NET Core updates in .NET 7 Preview 2

.NET 7 Preview 2 is now available and includes many great new improvements to ASP.NET Core.

Here’s a summary of what’s new in this preview release:

Infer API controller action parameters that come from services
Dependency injection for SignalR hub methods
Provide endpoint descriptions and summaries for minimal APIs
Binding arrays and StringValues from headers and query strings in minimal APIs
Customize the cookie consent value

For more details on the ASP.NET Core work planned for .NET 7 see the full ASP.NET Core roadmap for .NET 7 on GitHub.

Get started

To get started with ASP.NET Core in .NET 7 Preview 2, install the .NET 7 SDK.

If you’re on Windows using Visual Studio, we recommend installing the latest Visual Studio 2022 preview. Visual Studio for Mac support for .NET 7 previews isn’t available yet but is coming soon.

To install the latest .NET WebAssembly build tools, run the following command from an elevated command prompt:

dotnet workload install wasm-tools

Upgrade an existing project

To upgrade an existing ASP.NET Core app from .NET 7 Preview 1 to .NET 7 Preview 2:

Update all Microsoft.AspNetCore.* package references to 7.0.0-preview.2.*.
Update all Microsoft.Extensions.* package references to 7.0.0-preview.2.*.

See also the full list of breaking changes in ASP.NET Core for .NET 7.

Infer API controller action parameters that come from services

Parameter binding for API controller actions now binds parameters through dependency injection when the type is configured as a service. This means it’s no longer required to explicitly apply the [FromServices] attribute to a parameter.

Services.AddScoped<SomeCustomType>();

[Route("[controller]")]
[ApiController]
public class MyController : ControllerBase
{
    // Both actions will bind SomeCustomType from the DI container
    public ActionResult GetWithAttribute([FromServices] SomeCustomType service) => Ok();
    public ActionResult Get(SomeCustomType service) => Ok();
}

You can disable the feature by setting DisableImplicitFromServicesParameters:

Services.Configure<ApiBehaviorOptions>(options =>
{
    options.DisableImplicitFromServicesParameters = true;
});

Dependency injection for SignalR hub methods

SignalR hub methods now support injecting services through dependency injection (DI).

Services.AddScoped<SomeCustomType>();

public class MyHub : Hub
{
    // SomeCustomType comes from DI by default now
    public Task Method(string text, SomeCustomType type) => Task.CompletedTask;
}

You can disable the feature by setting DisableImplicitFromServicesParameters:

services.AddSignalR(options =>
{
    options.DisableImplicitFromServicesParameters = true;
});

To explicitly mark a parameter to be bound from configured services, use the [FromServices] attribute:

public class MyHub : Hub
{
    public Task Method(string arguments, [FromServices] SomeCustomType type) => Task.CompletedTask;
}

Provide endpoint descriptions and summaries for minimal APIs

Minimal APIs now support annotating operations with descriptions and summaries used for OpenAPI spec generation. You can set these descriptions and summaries for route handlers in your minimal API apps using extension methods:

app.MapGet("/hello", () => ...)
    .WithDescription("Sends a request to the backend HelloService to process a greeting request.");

Or set the description or summary via attributes on the route handler delegate:

app.MapGet("/hello", [EndpointSummary("Sends a Hello request to the backend")] () => ...);

Binding arrays and StringValues from headers and query strings in minimal APIs

With this release, you can now bind values from HTTP headers and query strings to arrays of primitive types, string arrays, or StringValues:

// Bind query string values to a primitive type array
// GET /tags?q=1&q=2&q=3
app.MapGet("/tags", (int[] q) => $"tag1: {q[0]}, tag2: {q[1]}, tag3: {q[2]}");

// Bind to a string array
// GET /tags?names=john&names=jack&names=jane
app.MapGet("/tags", (string[] names) => $"tag1: {names[0]}, tag2: {names[1]}, tag3: {names[2]}");

// Bind to StringValues
// GET /tags?names=john&names=jack&names=jane
app.MapGet("/tags", (StringValues names) => $"tag1: {names[0]}, tag2: {names[1]}, tag3: {names[2]}");

You can also bind query strings or header values to an array of a complex type, as long as the type has a TryParse implementation, as demonstrated in the example below.

// Bind to an array of a complex type
// GET /tags?tags=trendy&tags=hot&tags=spicy
app.MapGet("/tags", (Tag[] tags) =>
{
    return Results.Ok(tags);
});

class Tag
{
    public string? TagName { get; init; }

    public static bool TryParse(string? tagName, out Tag tag)
    {
        if (tagName is null)
        {
            tag = default;
            return false;
        }

        tag = new Tag { TagName = tagName };
        return true;
    }
}

Customize the cookie consent value

You can now specify the value used to track if the user consented to the cookie use policy using the new CookiePolicyOptions.ConsentCookieValue property.
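A minimal sketch of configuring it (the consent value shown here is illustrative):

builder.Services.Configure<CookiePolicyOptions>(options =>
{
    // The value written to the consent cookie when the user accepts the policy.
    options.ConsentCookieValue = "true";
});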

Thank you @daviddesmet for contributing this improvement!

Request for feedback on shadow copying for IIS

In .NET 6 we added experimental support for shadow copying app assemblies to the ASP.NET Core Module (ANCM) for IIS. When an ASP.NET Core app is running on Windows, the binaries are locked so that they cannot be modified or replaced. You can stop the app by deploying an app offline file, but sometimes doing so is inconvenient or impossible. Shadow copying enables the app assemblies to be updated while the app is running by making a copy of the assemblies.

You can enable shadow copying by customizing the ANCM handler settings in web.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <remove name="aspNetCore"/>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified"/>
    </handlers>
    <aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout">
      <handlerSettings>
        <handlerSetting name="experimentalEnableShadowCopy" value="true" />
        <handlerSetting name="shadowCopyDirectory" value="../ShadowCopyDirectory/" />
      </handlerSettings>
    </aspNetCore>
  </system.webServer>
</configuration>

We’re investigating making shadow copying in IIS a feature of ASP.NET Core in .NET 7, and we’re seeking additional feedback on whether the feature satisfies user requirements. If you deploy ASP.NET Core to IIS, please give shadow copying a try and share with us your feedback on GitHub.

Give feedback

We hope you enjoy this preview release of ASP.NET Core in .NET 7. Let us know what you think about these new improvements by filing issues on GitHub.

Thanks for trying out ASP.NET Core!

The post ASP.NET Core updates in .NET 7 Preview 2 appeared first on .NET Blog.

Auto Updating Created, Updated and Deleted Timestamps In Entity Framework

In any database schema, it’s extremely common to have the fields “DateCreated, DateUpdated and DateDeleted” on almost every entity. At the very least, they provide helpful debugging information, but further, the DateDeleted affords a way to “soft delete” entities without actually deleting them.

That being said, over the years I’ve seen some pretty interesting ways in which these have been implemented. The worst, in my view, is writing C# code that specifically updates the timestamp when an entity is created or updated. While simple, it takes only one clumsy developer before you aren’t recording any timestamps at all, because it relies on everyone remembering to set them. Other times, I’ve seen database triggers used, which… works… but then you have another problem in that you’re using database triggers!

There’s a fairly simple method I’ve been using for years and it involves utilizing the ability to override the save behaviour of Entity Framework.

Auditable Base Model

The first thing we want to do is actually define a “base model” that all entities can inherit from. In my case, I use a base class called Auditable that looks like so:

public abstract class Auditable
{
public DateTimeOffset DateCreated { get; set; }
public DateTimeOffset? DateUpdated { get; set; }
public DateTimeOffset? DateDeleted { get; set; }
}

And a couple of notes here:

It’s an abstract class because it should only ever be inherited from
We use DateTimeOffset because we will then store the timezone along with the timestamp. This is a personal preference but it just removes all ambiguity around “Is this UTC?”
DateCreated is not null (Since anything created will have a timestamp), but the other two dates are! Note that if this is an existing database, you will need to allow nullables (And work out a migration strategy) as your existing records will not have a DateCreated.

To use the class, we just need to inherit from it with any Entity Framework model. For example, let’s say we have a Customer object:

public class Customer : Auditable
{
public int Id { get; set; }
public string Name { get; set; }
}

So all the base class has done is mean we don’t have to copy and paste the same 3 date fields everywhere, and that their presence is enforced. Nice and simple!

Overriding Context SaveChanges

The next thing is maybe controversial, and I know there are a few different ways to do this. Essentially we are looking for a way to say to Entity Framework, “Hey, if you insert a new record, can you set the DateCreated please?”. There are things like Entity Framework hooks and a few NuGet packages that do similar things, but I’ve found the absolute easiest way is to simply override the save method of your database context.

The full code looks something like :

public class MyContext : DbContext
{
    public override Task<int> SaveChangesAsync(CancellationToken cancellationToken = default)
    {
        var insertedEntries = this.ChangeTracker.Entries()
            .Where(x => x.State == EntityState.Added)
            .Select(x => x.Entity);

        foreach (var insertedEntry in insertedEntries)
        {
            var auditableEntity = insertedEntry as Auditable;
            // If the inserted object is an Auditable, set its DateCreated.
            if (auditableEntity != null)
            {
                auditableEntity.DateCreated = DateTimeOffset.UtcNow;
            }
        }

        var modifiedEntries = this.ChangeTracker.Entries()
            .Where(x => x.State == EntityState.Modified)
            .Select(x => x.Entity);

        foreach (var modifiedEntry in modifiedEntries)
        {
            // If the updated object is an Auditable, set its DateUpdated.
            var auditableEntity = modifiedEntry as Auditable;
            if (auditableEntity != null)
            {
                auditableEntity.DateUpdated = DateTimeOffset.UtcNow;
            }
        }

        return base.SaveChangesAsync(cancellationToken);
    }
}

Now, your context may have additional code, but this is the bare minimum to get things working. What this does is:

Gets all entities that are being inserted, checks if they inherit from Auditable, and if so sets the Date Created.
Gets all entities that are being updated, checks if they inherit from Auditable, and if so sets the Date Updated.
Finally, calls the base SaveChanges method that actually does the saving.

Using this, we are essentially intercepting when Entity Framework would normally save all changes, and updating all timestamps at once with whatever is in the batch.
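As a quick usage sketch (the DbSet declaration is omitted here; Customer and MyContext are the types from this post):

using (var context = new MyContext())
{
    context.Add(new Customer { Name = "Jane" });
    await context.SaveChangesAsync(); // DateCreated is stamped automatically
}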

Handling Soft Deletes

Deletes are a special case for one big reason. If we actually try and call delete on an entity in Entity Framework, it gets added to the ChangeTracker as… well… a delete. And to unwind this at the point of saving and change it to an update would be complex.

What I tend to do instead is, on my BaseRepository (because… you’re using one of those, right?), check if an entity is Auditable and if so, do an update instead. The copy and paste from my BaseRepository looks like so:

public async Task<T> Delete(T entity)
{
    // If the type we are trying to delete is auditable, then we don't actually delete it,
    // but instead set it to be updated with a delete date.
    if (typeof(Auditable).IsAssignableFrom(typeof(T)))
    {
        (entity as Auditable).DateDeleted = DateTimeOffset.UtcNow;
        _dbSet.Attach(entity);
        _context.Entry(entity).State = EntityState.Modified;
    }
    else
    {
        _dbSet.Remove(entity);
    }

    return entity;
}

Now your mileage may vary, especially if you are not using the Repository Pattern (Which you should be!). But in short, you must handle soft deletes as updates *instead* of simply calling Remove on the DbSet.

Taking This Further

What’s not shown here is that we can use this same methodology to update many other “automated” fields. We use this same system to track the last user to Create, Update and Delete entities. Once this is up and running, it’s often just a couple more lines to instantly gain traceability across every entity in your database!
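As a hedged sketch of that extension (the user-tracking property names, and how you resolve the current user, are assumptions rather than code from this post):

public abstract class Auditable
{
    public DateTimeOffset DateCreated { get; set; }
    public DateTimeOffset? DateUpdated { get; set; }
    public DateTimeOffset? DateDeleted { get; set; }

    // Hypothetical user-tracking fields, set in the same SaveChangesAsync override.
    public string CreatedBy { get; set; }
    public string UpdatedBy { get; set; }
}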

The post Auto Updating Created, Updated and Deleted Timestamps In Entity Framework appeared first on .NET Core Tutorials.

Adding feature flags to an ASP.NET Core app

This post is about adding feature flags to an ASP.NET Core app. Feature flags (also known as feature toggles or feature switches) are a software development technique that turns certain functionality on and off during runtime, without deploying new code. In this post we will discuss flags stored in the appsettings.json file. I am using an ASP.NET Core MVC project, but you can do this for any .NET Core project, such as Razor web apps or Web APIs.

First we need to add a reference to the Microsoft.FeatureManagement.AspNetCore NuGet package. This package, created by Microsoft, supports the creation of simple on/off feature flags as well as complex conditional flags. Once this package is added, we need to add the following code to inject the Feature Manager instance into the HTTP pipeline.

using Microsoft.FeatureManagement;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllersWithViews();

builder.Services.AddFeatureManagement();

var app = builder.Build();

Next we need to create a FeatureManagement section in the appsettings.json with the feature name and a boolean value, like this.

"FeatureManagement": {
  "WelcomeMessage": false
}

Now we are ready with the feature toggle; let us write code to manage it from the controller. In the controller, the ASP.NET Core runtime will inject an instance of IFeatureManager. Using this interface we can check whether a feature is enabled or not with the IsEnabledAsync method. So for our feature we can do it like this.

public async Task<IActionResult> IndexAsync()
{
    if (await _featureManager.IsEnabledAsync("WelcomeMessage"))
    {
        ViewData["WelcomeMessage"] = "Welcome to the Feature Demo app.";
    }
    return View();
}

And in the View we can write the following code.

@if (ViewData["WelcomeMessage"] != null)
{
    <div class="alert alert-primary" role="alert">
        @ViewData["WelcomeMessage"]
    </div>
}

Run the application, the alert will not be displayed. You can change the WelcomeMessage to true and refresh the page – it will display the bootstrap alert.
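The package also ships a FeatureGate attribute for gating whole controllers or actions declaratively; a quick sketch (the action shown is hypothetical) looks like this:

using Microsoft.FeatureManagement.Mvc;

public class HomeController : Controller
{
    // Returns a 404 when the WelcomeMessage flag is disabled.
    [FeatureGate("WelcomeMessage")]
    public IActionResult Promo() => View();
}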

This way you can start introducing feature flags or feature toggles in an ASP.NET Core MVC app. As you may have already noticed, the Feature Management library is built on top of the .NET Core configuration system, so it will support any configuration source as a feature flag source. Microsoft Azure provides the Azure App Configuration service, which helps to implement feature flags for cloud-native apps.

Happy Programming 🙂

Building a gRPC Server in .NET

Introduction

In this article, we will look at how to build a simple web service with gRPC in .NET. We will keep our changes to a minimum and leverage the same Protocol Buffer IDL we used in my previous post. We will also go through some common problems that you might face when building a gRPC server in .NET.

Motivation

For this article we will again be using the Online Bookshop example and leveraging the same Protobufs as we saw before. For those who aren’t familiar with this series or missed earlier posts, you can find them here.

Introduction to gRPC
Building a gRPC server with Go

Building a gRPC server with .NET (You are here)
Building a gRPC client with Go
Building a gRPC client with .NET

We will be covering steps 1 and 2 in the following diagram.


Plan

So this is what we are trying to achieve.

Generate the .proto IDL stubs.
Write the business logic for our service methods.
Spin up a gRPC server on a given port.

In a nutshell, we will be covering the following items on our initial diagram.

As always, all the code samples and documentation can be found at: https://github.com/sahansera/dotnet-grpc

Prerequisites

.NET 6 SDK
Visual Studio Code or IDE of your choice
gRPC compiler

Please note that I’m using some of the commands that are macOS specific. Please follow this link to set it up if you are on a different OS.

To install the Protobuf compiler:

brew install protobuf

Project Structure

We can use .NET’s tooling to generate a sample gRPC project. Run the following command at the root of your workspace.

dotnet new grpc -o BookshopServer

Once you run the above command, you will see the following structure.


We also need to configure the SSL trust:

dotnet dev-certs https --trust

As you might have guessed, this is like a default template and it already has a lot of things wired up for us like the Protos folder.

Generating the server stubs

Usually, we would have to invoke the protocol buffer compiler to generate the code for the target language (as we saw in my previous article). For .NET, however, they have streamlined the code generation process: the Grpc.Tools NuGet package with MSBuild provides automatic code generation, which is pretty neat!

If you open up the BookshopServer.csproj file you will find the following lines:


<ItemGroup>
  <Protobuf Include="Protos\greet.proto" GrpcServices="Server" />
</ItemGroup>

We are going to replace greet.proto with our Bookshop.proto file.


We will also update our csproj file like so:

<ItemGroup>
  <Protobuf Include="../proto/bookshop.proto" GrpcServices="Server" />
</ItemGroup>

Implementing the Server

The implementation part is easy! Let’s clean up the GreeterService that comes with the default template and add a new file called InventoryService.cs.

rm BookshopServer/Services/GreeterService.cs
code BookshopServer/Services/InventoryService.cs

This is what our service is going to look like.

InventoryService.cs
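The original post shows this file as a screenshot; a minimal sketch consistent with the description below (the Book fields are assumptions) might look like:

using Grpc.Core;
using Bookshop;

namespace BookshopServer.Services;

public class InventoryService : Inventory.InventoryBase
{
    // Overrides the RPC stub generated from bookshop.proto.
    public override Task<GetBookListResponse> GetBookList(
        GetBookListRequest request, ServerCallContext context)
    {
        var response = new GetBookListResponse();
        // Hypothetical entry; the real fields come from the proto definition.
        response.Books.Add(new Book { Title = "The Pragmatic Programmer" });
        return Task.FromResult(response);
    }
}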


Let’s go through the code step by step.

Inventory.InventoryBase is an abstract class that got auto-generated (in your obj/debug folder) from our protobuf file.

The GetBookList method’s stub is already generated for us in the InventoryBase class, and that’s why we are overriding it. Again, this is the RPC call we defined in our protobuf definition. This method takes in a GetBookListRequest, which defines what the request looks like, and a ServerCallContext param, which contains the headers, auth context, etc.
The rest of the code is pretty easy – we prepare the response and return it back to the caller/client. It’s worth noting that we never defined the GetBookListRequest and GetBookListResponse types ourselves, manually. The gRPC tooling for .NET has already created these for us under the Bookshop namespace.

Make sure to update the Program.cs to reflect the new service as well.

// …
app.MapGrpcService<InventoryService>();
// …

And then we can run the server with the following command.

dotnet run --project BookshopServer/BookshopServer.csproj


We are almost there! Remember, we can’t access the service yet through the browser, since browsers don’t understand binary protocols. In the next step, we will test our service.

Common Errors

A common error you’d find on macOS systems with .NET is the HTTP/2 over TLS issue shown below.


The gRPC template uses TLS by default, and Kestrel doesn’t support HTTP/2 with TLS on macOS systems. We need to turn off TLS (ouch!) in order for our demo to work.

Please don’t do this in production! This is intended for local development purposes only.

On local development

// Turn off TLS
builder.WebHost.ConfigureKestrel(options =>
{
    // Set up an HTTP/2 endpoint without TLS.
    options.ListenLocalhost(5000, o => o.Protocols = HttpProtocols.Http2);
});

Testing the service

Usually, when interacting with an HTTP/1.1 server, we can use cURL to make requests and inspect the responses. However, with gRPC we can’t do that (you can make requests to HTTP/2 services, but the responses won’t be readable). We will be using gRPCurl instead.

Once you have it up and running, you can now interact with the server we just built.

grpcurl -plaintext localhost:5000 Inventory/GetBookList

Note: gRPC defaults to TLS for transport. However, to keep things simple, I will be using the `-plaintext` flag with `grpcurl` so that we can see a human-readable response.

How do we figure out the endpoints of the service? There are two ways to do this. One is by providing a path to the proto files, while the other option enables reflection through the code.

Using proto files

If you don’t want to enable reflection, we can use the Protobuf files to let gRPCurl know which methods are available. Normally, when a team builds a gRPC service, they will make the protobuf files available if you are integrating with them. So, without having to ask them or resort to trial-and-error, you can use these proto files to introspect what kind of endpoints are available for consumption.

grpcurl -import-path Proto -proto bookshop.proto -plaintext localhost:5000 Inventory/GetBookList


Now, let’s say we didn’t have reflection enabled and try to call a method on the server.

grpcurl -plaintext localhost:5000 Inventory/GetBookList

We can expect that it will error out. Cool!


Enabling reflection

While in the BookshopServer folder run the following command to install the reflection package.

dotnet add package Grpc.AspNetCore.Server.Reflection

Add the following to the Program.cs file. Note that we are using the new minimal API approach to configure these services.

// Register services that enable reflection
builder.Services.AddGrpcReflection();

// Enable reflection in Debug mode.
if (app.Environment.IsDevelopment())
{
    app.MapGrpcReflectionService();
}


Conclusion

As we have seen, similar to the Go implementation, we can use the same Protocol Buffer files to generate the server implementation in .NET. In my opinion, .NET’s new tooling makes it easier to regenerate the server stubs when a change happens in your Protobufs. However, setting up the local developer environment can be a bit challenging, especially on macOS.

Feel free to let me know if you have any questions or feedback. Until next time!

References

https://docs.microsoft.com/en-us/aspnet/core/tutorials/grpc/grpc-start?view=aspnetcore-6.0&tabs=visual-studio-code
https://grpc.io/docs/languages/csharp/quickstart/
https://docs.microsoft.com/en-us/aspnet/core/grpc/troubleshoot?view=aspnetcore-6.0#unable-to-start-aspnet-core-grpc-app-on-macos
https://docs.microsoft.com/en-us/aspnet/core/migration/50-to-60-samples?view=aspnetcore-6.0

Understanding the .NET Language Integrated Query (LINQ)

Introduction

The Language Integrated Query (LINQ), which is pronounced as “link”, was introduced in the .NET Framework 3.5 to provide query capabilities by defining standardized query syntax in the .NET programming languages (such as C# and VB.NET). LINQ is provided via the System.Linq namespace.

A query is an expression to retrieve data from a data source. Usually, queries are expressed as simple strings (e.g., SQL for relational databases) without type checking at compile time or IntelliSense support. Traditionally, developers had to learn a new query language for each data source type (e.g., SQL, XML, ADO.NET Datasets, etc.).

LINQ provides unified query syntax to query different data sources by working with objects. For example, we could retrieve and save data in different databases (MS SQL, MySQL, Oracle, etc.) with the same code. Using the same basic coding patterns, we can query and transform data in any source where a LINQ provider is available. In addition, we can perform many operations, such as filtering, ordering, and grouping.

In this article, we will learn about the LINQ architecture and technologies, query syntaxes, execution types, and query operations. In addition, we will see some code examples to be familiarized with LINQ concepts.

LINQ and Generic Types (C#)

We can design classes and methods that can provide functionalities for a general type (T) by using Generics. The generic type parameter will be defined when the class or method is declared and instantiated. In this way, we can use the generic class or method for different types without the cost of boxing operations and the risk of runtime casts.

A generic type is declared by specifying a type parameter in angle brackets after the class or method name, e.g. MyClassName<T>, where T is a type parameter. The MyClassName class will provide generalized solutions for any T. The most common use of generics is to create collection classes.

LINQ queries are based on generic types. So, when creating an instance of a generic collection class, such as List<T>, Dictionary<TKey, TValue>, etc., we should replace the T parameter with the type of our objects. For example, we could keep a list of string values (List<string>), a list of custom User objects (List<User>), a dictionary of integer keys with string values (Dictionary<int, string>), etc.

If you have already used LINQ, you probably have seen the IEnumerable<T> interface. The IEnumerable<T> interface enables the generic collection classes to be enumerated using the foreach statement. A generic collection is a collection with a general type (T). The non-generic collection classes such as ArrayList support the IEnumerable interface to be enumerated.

LINQ Architecture and Technologies

As we have already seen, we can write LINQ queries in any source in which a LINQ provider is available. These sources implement the IEnumerable interface, such as in-memory data structures, XML documents, SQL databases, and DataSet objects. In this way, we always view the data as an IEnumerable collection, either when we query, update, etc.

In the following figure, we can see the LINQ architecture and the available LINQ technologies. As we can see, the LINQ technologies are the following:

LINQ to Objects: Using LINQ queries with any IEnumerable or IEnumerable<T> collection directly, without using an intermediate LINQ provider or API such as LINQ to SQL, LINQ to XML, etc. Practically, we query any enumerable collections such as List<T>, Array, or Dictionary<TKey, TValue>.

LINQ to XML: LINQ to XML provides an in-memory XML programming interface that leverages the LINQ Framework to perform queries more easily, similarly to SQL.

ADO.NET LINQ Technologies: ADO.NET provides consistent access to data sources (such as SQL Server, data sources exposed through OLE DB and ODBC, etc.) to separate the data access from data manipulation.

LINQ to DataSet: To perform queries over data cached in a DataSet object. In this scenario, the retrieved data are stored in a DataSet object.

LINQ to SQL: Use the LINQ programming model directly over the existing database schema and auto-generate the .NET model classes representing data. LINQ to SQL is used when we do not require mapping to conceptual models (i.e., when one-to-one mapping of the data to model classes is accepted).

LINQ to Entities: We can use the LINQ to Entities to support conceptual models (i.e., models that are not the same as the logical models of the database). The conceptual data models (mapped database models) are used to model the data and interact as objects. In this way, we can formulate queries in the database in the same programming language we are building the business logic.

Figure 1. – The LINQ architecture and the available LINQ technologies (Source).

LINQ Syntax

LINQ provides two ways to write queries, the Query Syntax and the Method Syntax. In the following sections, we will see the syntax of both ways.

Query Syntax

The LINQ Query Syntax has some similarities with the SQL query syntax, as we see in the following syntax statement. The result of a query expression is a query object (not the actual results), which is usually a collection of type IEnumerable<T>.

// LINQ Query Syntax

from <range variable> in <sourcecollection>
<Query Operator> conditional expression
<select or groupBy operator> <result formation>

In Figure 2, we can see a simple LINQ query syntax example. The from clause specifies the data source (numbers) and the num range variable (i.e., the value in each iteration). The where clause applies the filter (e.g., when the num is an even number), and the select clause specifies the type of the returned elements (e.g. all even numbers).

Figure 2. – LINQ query syntax example.
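Since the figure is an image in the original, here is a reconstruction of the kind of example it describes (an integer array filtered down to its even numbers):

int[] numbers = { 0, 1, 2, 3, 4, 5, 6 };

// Query syntax: numbers is the data source, num is the range variable.
IEnumerable<int> evenNumsQuery =
    from num in numbers
    where (num % 2) == 0
    select num;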

In general, the query specifies what information to retrieve from the data source or sources. Optionally, a query also determines how that information should be sorted, grouped, and shaped before it is returned.

Note: The Query syntax does not support all LINQ query operators compared to the Method syntax.

Method Syntax

Query syntax and Method syntax are semantically identical. However, many people find query syntax simpler and easier to read since it doesn’t use lambda expressions. In Figure 3, we can see the semantically equivalent LINQ Query syntax example written in Method syntax.

The query syntax is translated into method calls (method syntax) for the .NET common language runtime (CLR) at compile time. Thus, in terms of runtime performance, both LINQ syntaxes are the same.

Figure 3. – LINQ Method syntax example.
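Reconstructing the figure's example in Method syntax (semantically equivalent to the query above):

// Method syntax: the same filter expressed as a lambda passed to Where.
IEnumerable<int> evenNumsQuery = numbers.Where(num => (num % 2) == 0);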

Note: In terms of runtime performance, both LINQ syntaxes are the same.

Query Execution

In the previous sections, we saw how to use Query and Method syntax to create our query object. It is essential to notice that the query object doesn’t contain the results (i.e., the query result data). Instead, it includes the information required to produce the results when the query is executed. As we can understand, we can execute the query multiple times.

There are two ways to execute a LINQ query object, the deferred execution and the forced execution:

Deferred Execution is performed when we use the query object in a foreach statement, executing it and iterating the results.

Forced execution is performed when we execute the query to retrieve its results in a single collection object using the ToList() or ToArray() methods. Another way to force the query execution is when we perform functions that need to iterate the results, such as Count(), Max(), Average(), etc.

Let’s assume we have the Customer[] customers array from a related service. We have created the following query object to retrieve the customers who live in Athens.

// Data source
Customer[] customers = CustomerService.GetAllCustomers();

// Create the Query object (via Query Syntax)
IEnumerable<Customer> customerQuery =
    from customer in customers
    where customer.City == "Athens"
    select customer;

In the following example, we can see how to execute the query object using the two execution methods (Deferred and Forced).

// Deferred: Query execution using the foreach statement
foreach (Customer customer in customerQuery)
{
    Console.WriteLine($"{customer.Lastname}, {customer.Firstname}");
}

// Forced: Query execution using the ToList method
List<Customer> customerResults = customerQuery.ToList();
foreach (Customer customer in customerResults)
{
    Console.WriteLine($"{customer.Lastname}, {customer.Firstname}");
}

Basic LINQ Query Operations

In the following table, we can see the majority of the LINQ Query Operations grouped in categories. For information regarding each query operator’s result type and execution type (Deferred or Forced), click here.

Filtering Data: Where, OfType
Sorting Data: OrderBy, OrderByDescending, ThenBy, ThenByDescending, Reverse
Projection Operations: Select, SelectMany
Quantifier Operations: All, Any, Contains
Element Operations: ElementAt, ElementAtOrDefault, First, FirstOrDefault, Last, LastOrDefault, Single, SingleOrDefault
Partitioning Data: Skip, SkipWhile, Take, TakeWhile
Join Operations: Join, GroupJoin
Grouping Data: GroupBy, ToLookup
Aggregation Operations: Aggregate, Average, Count, LongCount, Max or MaxBy, Min or MinBy, Sum
Generation Operations: DefaultIfEmpty, Empty, Range, Repeat
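To make a few of these operators concrete, here is a short sketch combining filtering, sorting, and grouping over the customers array from the earlier example:

// Filter, order, and group customers by city.
var customersByCity =
    from customer in customers
    where customer.City != null
    orderby customer.Lastname
    group customer by customer.City;

foreach (var cityGroup in customersByCity)
{
    Console.WriteLine($"{cityGroup.Key}: {cityGroup.Count()} customer(s)");
}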

Summary

The Language Integrated Query (LINQ) provides unified query syntax to query different data sources (e.g., SQL, XML, ADO.NET Datasets, Objects, etc.). In addition, it supports various query operations, such as filtering, ordering, grouping, etc.

LINQ queries are based on generic types, so in generic collections such as List<T>, we should replace the T parameter with our type object. The LINQ sources implement the IEnumerable interface to be enumerated. The available LINQ technologies include LINQ to Objects, XML, DataSet, SQL, and Entities.

Advantages

Provide unified query syntax of queries for different data sources.
Type checking at compile-time and IntelliSense support.
We can reuse the queries quickly.
Easier debugging through the .NET debugger.
Supports various query operations, such as filtering, ordering, grouping, etc.

Disadvantages

The project should be recompiled and redeployed for every change in the queries.
Complex SQL queries can be difficult to express in LINQ.
We cannot take advantage of the execution plan caching provided by SQL stored procedures.

LINQ provides powerful query capabilities that any .NET developer should know.


.NET 💜 GitHub Actions

Hi friends, I put together two posts where I’m going to teach you the basics of the GitHub Actions platform. In this first post, you’ll learn how GitHub Actions can improve your .NET development experience and team productivity. I’ll show you how to use them to automate common .NET app dev scenarios with workflow composition. In the next post, I’ll show you how to create a custom GitHub Action written in .NET.

An introduction to GitHub Actions

Developers that use GitHub for managing their git repositories have a powerful continuous integration (CI) and continuous delivery (CD) feature with the help of GitHub Actions. A common developer scenario is when developers propose changes to the default branch (typically main) of a GitHub repository. These changes, while often scrutinized by reviewers, can have automated checks to ensure that the code compiles and tests pass.

GitHub Actions allow you to build, test, and deploy your code right from your source code repository on https://github.com. GitHub Actions are consumed by GitHub workflows. A GitHub workflow is a YAML (either *.yml or *.yaml) file within your GitHub repository. These workflow files reside in the .github/workflows/ directory from the root of the repository. A workflow references one or more GitHub Action(s) together as a series of instructions, where each instruction executes a specific task.

The GitHub Action terminology

To avoid mistakenly using some of these terms inaccurately, let’s define them:

GitHub Actions: GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline.

workflow: A workflow is a configurable automated process that will run one or more jobs.

event: An event is a specific activity in a repository that triggers a workflow run.

job: A job is a set of steps in a workflow that execute on the same runner.

action: An action is a custom application for the GitHub Actions platform that performs a complex but frequently repeated task.

runner: A runner is a server that runs your workflows when they’re triggered.

For more information, see GitHub Docs: Understanding GitHub Actions

Inside the GitHub workflow file

A workflow file defines a sequence of jobs and their corresponding steps to follow. Each workflow has a name and a set of triggers, or events to act on. You have to specify at least one trigger for your workflow to run unless it’s a reusable workflow. A common .NET GitHub workflow would be to build and test your C# code when changes are either pushed or when there’s a pull request targeting the default branch. Consider the following workflow file:

name: build and test
on:
  push:
  pull_request:
    branches: [ main ]
    paths-ignore:
      - 'README.md'
env:
  DOTNET_VERSION: '6.0.x'
jobs:
  build-and-test:
    name: build-and-test-${{matrix.os}}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macOS-latest]
    steps:
      - uses: actions/checkout@v3
      - name: Setup .NET
        uses: actions/setup-dotnet@v3
        with:
          dotnet-version: ${{ env.DOTNET_VERSION }}
      - name: Install dependencies
        run: dotnet restore
      - name: Build
        run: dotnet build --configuration Release --no-restore
      - name: Test
        run: dotnet test --no-restore --verbosity normal

I’m not going to assume that you have a deep understanding of this workflow, and while it’s less than thirty lines — there is still a lot to unpack. I put together a sequence diagram (powered by Mermaid) that shows how a developer might visualize this workflow.

Here’s the same workflow file, but this time it is expanded with inline comments to add context (if you’re already familiar with the workflow syntax, feel free to skip past this):

# The name of the workflow.
# This is the name that's displayed for status
# badges (commonly embedded in README.md files).
name: build and test

# Trigger this workflow on a push, or pull request to
# the main branch, when either C# or project files changed
on:
  push:
  pull_request:
    branches: [ main ]
    paths-ignore:
      - 'README.md'

# Create an environment variable named DOTNET_VERSION
# and set it as "6.0.x"
env:
  DOTNET_VERSION: '6.0.x' # The .NET SDK version to use

# Defines a single job named "build-and-test"
jobs:
  build-and-test:

    # When the workflow runs, this is the name that is logged
    # This job will run three times, once for each "os" defined
    name: build-and-test-${{matrix.os}}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macOS-latest]

    # Each job run contains these five steps
    steps:

      # 1) Check out the source code so that the workflow can access it.
      - uses: actions/checkout@v3

      # 2) Set up the .NET CLI environment for the workflow to use.
      #    The .NET version is specified by the environment variable.
      - name: Setup .NET
        uses: actions/setup-dotnet@v3
        with:
          dotnet-version: ${{ env.DOTNET_VERSION }}

      # 3) Restore the dependencies and tools of a project or solution.
      - name: Install dependencies
        run: dotnet restore

      # 4) Build a project or solution and all of its dependencies.
      - name: Build
        run: dotnet build --configuration Release --no-restore

      # 5) Test a project or solution.
      - name: Test
        run: dotnet test --no-restore --verbosity normal

The preceding workflow file contains many comments to help detail each area of the workflow. You might have noticed that the steps define various usages of GitHub Actions or simple run commands. The relationship between a GitHub Action and a consuming GitHub workflow is that workflows consume actions. A GitHub Action is only as powerful as the consuming workflow. Workflows can define anything from simple tasks to elaborate compositions and everything in between. For more information on creating GitHub workflows for .NET apps, see the following .NET docs resources:

Create a build validation workflow
Create a test validation workflow
Create a deploy workflow
Create a CodeQL security vulnerability scanning CRON job workflow

I hope that you’re asking yourself, “why is this important?” Sure, we can create GitHub Actions, and we can compose workflows that consume them — but why is that important?! The answer is GitHub status checks.

GitHub status checks

One of the primary benefits of using workflows is to define conditional status checks that can deterministically fail a build. A workflow can be configured as a status check for a pull request (PR), and if the workflow fails, for example the source code in the pull request doesn’t compile — the PR can be blocked from being merged. Consider the following screen capture, which shows that two checks have failed, thus blocking the PR from being merged.

As the developer who is responsible for reviewing a PR, you’d immediately see that the pull request has failing status checks. You’d work with the developer who proposed the PR to get all of the status checks to pass. The following is a screen capture showing a “green build”, a build that has all of its status checks as passing.

For more information, see GitHub Docs: GitHub status checks.

GitHub Actions that .NET developers should know

As a .NET developer, you’re likely familiar with the .NET CLI. The .NET CLI is included with the .NET SDK. If you don’t already have the .NET SDK, you can download the .NET 6 SDK.

Using the previous workflow file as a point of reference, there are five steps — each step includes either the run or uses syntax:

uses: actions/checkout@v3
This action checks out your repository under $GITHUB_WORKSPACE, so your workflow can access it. For more information, see actions/checkout.

uses: actions/setup-dotnet@v3
This action sets up a .NET CLI environment for use in actions. For more information, see actions/setup-dotnet.

run: dotnet restore
Restores the dependencies and tools of a project or solution. For more information, see dotnet restore.

run: dotnet build
Builds the project or solution. For more information, see dotnet build.

run: dotnet test
Runs the tests for the project or solution. For more information, see dotnet test.

Some steps rely on GitHub Actions and reference them with the uses syntax, while others run commands. For more information on the differences, see Workflow syntax for GitHub Actions: uses and run.

.NET applications rely on NuGet packages. You can optimize your workflows by caching various dependencies that change infrequently, such as NuGet packages. As an example, you can use the actions/cache to cache NuGet packages:

steps:
  - uses: actions/checkout@v3
  - name: Setup dotnet
    uses: actions/setup-dotnet@v3
    with:
      dotnet-version: '6.0.x'
  - uses: actions/cache@v3
    with:
      path: ~/.nuget/packages
      # Look to see if there is a cache hit for the corresponding requirements file
      key: ${{ runner.os }}-nuget-${{ hashFiles('**/packages.lock.json') }}
      restore-keys: |
        ${{ runner.os }}-nuget
  - name: Install dependencies
    run: dotnet add package Newtonsoft.Json --version 12.0.1

For more information, see GitHub Docs: Building and testing .NET – Caching dependencies.

In addition to using the standard GitHub Actions or invoking .NET CLI commands using the run syntax, you might be interested in learning about some additional GitHub Actions.

Additional GitHub Actions

Several .NET GitHub Actions are hosted on the dotnet GitHub organization:

dotnet/versionsweeper
This action sweeps .NET repos for out-of-support target versions of .NET. The .NET docs team uses the .NET version sweeper GitHub Action to automate issue creation. The action runs as a cron job (or on a schedule). When it detects that .NET projects target out-of-support versions, it creates issues to report its findings. The output is configurable and helpful for tracking .NET version support concerns.

dotnet/code-analysis
This action runs the code analysis rules that are included in the .NET SDK as part of continuous integration (CI). The action runs both code-quality (CAXXXX) rules and code-style (IDEXXXX) rules.

.NET developer community spotlight

The .NET developer community is building GitHub Actions that might be useful in your organizations. As an example, check out zyborg/dotnet-tests-report, which is a GitHub Action to run .NET tests and generate reports and badges. If you use this GitHub Action, be sure to give their repo a star.

There are many .NET GitHub Actions that can be consumed from workflows, see the GitHub Marketplace: .NET.

A word on .NET workloads

.NET runs anywhere, and you can use it to build anything. There are optional workloads that may need to be installed when building from a GitHub workflow. There are many workloads available; see the output of the dotnet workload search command as an example:

dotnet workload search

Workload ID        Description
-----------------------------------------------------------------------------------------
android            .NET SDK Workload for building Android applications.
android-aot        .NET SDK Workload for building Android applications with AOT support.
ios                .NET SDK Workload for building iOS applications.
maccatalyst        .NET SDK Workload for building macOS applications with MacCatalyst.
macos              .NET SDK Workload for building macOS applications.
maui               .NET MAUI SDK for all platforms
maui-android       .NET MAUI SDK for Android
maui-desktop       .NET MAUI SDK for Desktop
maui-ios           .NET MAUI SDK for iOS
maui-maccatalyst   .NET MAUI SDK for Mac Catalyst
maui-mobile        .NET MAUI SDK for Mobile
maui-windows       .NET MAUI SDK for Windows
tvos               .NET SDK Workload for building tvOS applications.
wasm-tools         .NET WebAssembly build tools

If you’re writing a workflow for a Blazor WebAssembly app or .NET MAUI, as an example, you’ll likely run the dotnet workload install command as one of your steps. For example, an individual run step to install the WebAssembly build tools would look like:

run: dotnet workload install wasm-tools

Summary

In this post, I explained the key differences between GitHub Actions and GitHub workflows. I explained and scrutinized each line in an example workflow file. I then showed you how a developer might visualize the execution of a GitHub workflow as a sequence diagram. I shared a few additional resources you may not have known about. For more information, see .NET Docs: GitHub Actions and .NET.

In the next post, I’ll show how to create GitHub Actions using .NET. I’ll walk you through upgrading an existing .NET GitHub Action that is used to automatically maintain a _CODEMETRICS.md file within the root of the repository. The code metrics analyze the C# source code of the target repository to determine things such as cyclomatic complexity and the maintainability index. In addition to these metrics, we’ll add the ability to generate Mermaid class diagrams, which is now natively supported by GitHub flavored markdown.

The post .NET 💜 GitHub Actions appeared first on .NET Blog.

Determine the country code from country name in C#

If you are trying to determine the country code (“IE”) from a string like “Dublin, Ireland”, then generally the best approach is to use a geolocation API, such as Google Geocode, HERE Maps, or one of the plethora of others. However, if speed is more important than accuracy, or the volume of data would be too costly to run through a paid API, then here is a simple script in C# to determine the country code from a string:

https://github.com/infiniteloopltd/CountryISOFromString/

The code reads from an embedded resource, which is a CSV of country names. Some of the countries are repeated to allow for variations in spelling, such as “USA” and “United States”. The list is in English only, and feel free to submit a PR, if you have more variations to add to this.
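The repository holds the authoritative implementation; as a rough sketch of how such an embedded-resource lookup might work (the resource name, CSV layout, and matching strategy here are assumptions):

using System;
using System.IO;
using System.Linq;
using System.Reflection;

public class Country
{
    public string name;
    public string code;

    public static Country FromString(string location)
    {
        // Sketch only: read the embedded "name,code" CSV (cache this in real use).
        var assembly = Assembly.GetExecutingAssembly();
        using var stream = assembly.GetManifestResourceStream("CountryISOFromString.countries.csv");
        using var reader = new StreamReader(stream);
        var rows = reader.ReadToEnd()
            .Split('\n')
            .Select(line => line.Trim().Split(','))
            .Where(parts => parts.Length == 2)
            .ToList();

        // Match the longest country name that appears in the input string.
        var match = rows
            .Where(parts => location.Contains(parts[0], StringComparison.OrdinalIgnoreCase))
            .OrderByDescending(parts => parts[0].Length)
            .FirstOrDefault();

        return match == null ? null : new Country { name = match[0], code = match[1] };
    }
}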

It’s called quite simply as follows:

var country = Country.FromString("Tampere, Pirkanmaa, Finland");
Console.WriteLine(country.code);