Announcing .NET 6 Release Candidate 1

We are happy to release .NET 6 Release Candidate 1. It is the first of two “go live” release candidate releases that are supported in production. For the last month or so, the team has been focused exclusively on quality improvements that resolve functional or performance issues in new features or regressions in existing ones.

You can download .NET 6 Release Candidate 1 for Linux, macOS, and Windows.

Installers and binaries
Container images
Linux packages
Release notes
API diff
Known issues
GitHub issue tracker

See the .NET MAUI and ASP.NET Core posts for more detail on what’s new for client and web application scenarios.

We’re at that fun part of the cycle where we support the new release in production. We genuinely encourage it. In the last post, I suggested that folks email us at [email protected] to ask for guidance on how to approach that. A bunch of businesses reached out wanting to explore what they should do. The offer is still open. We’d love to work with two or three dozen early adopters and are happy to help you through the process. It’s pretty straightforward.

.NET 6 RC1 has been tested and is supported with Visual Studio 2022 Preview 4. Visual Studio 2022 enables you to leverage the Visual Studio tools developed for .NET 6, such as .NET MAUI development, Hot Reload for C# apps, the new Web Live Preview for WebForms, and other performance improvements in your IDE experience.

Support for .NET 6 RC1 is coming soon in Visual Studio 2022 for Mac Preview 1, which is currently available as a private preview.

Check out the new conversations posts for in-depth engineer-to-engineer discussions on the latest .NET features.

The rest of the post is dedicated to foundational features in .NET 6. In each release, we take on a few projects that take multiple years to complete and that (by definition) do not deliver their full value for some time. Given that these features have not come to their full fruition, you’ll notice a bias in this post to what we’re likely to do with these features in .NET 7 and beyond.

Source build

Source build is a scenario and also infrastructure that we’ve been working on in collaboration with Red Hat since before shipping .NET Core 1.0. Several years later, we’re very close to delivering a fully automated version of it. For Red Hat Enterprise Linux (RHEL) .NET users, this capability is a big deal. Red Hat tells us that .NET has grown to become an important developer platform for their ecosystem. Nice!

Clearly, .NET source code can be built into binaries. Developers do that every day after cloning a repo from the dotnet org. That’s not really what this is about.

The gold standard for Linux distros is to build open source code using compilers and toolchains that are part of the distro archive. That works for the .NET runtime (written in C++), but not for any of the code written in C#. For C# code, we use a two-pass build mechanism to satisfy distro requirements. It’s a bit complicated, but it’s important to understand the flow.

Red Hat builds .NET SDK source using the Microsoft build of the .NET SDK (#1) to produce a pure open source build of the SDK (#2). After that, the same SDK source code is built again using this fresh build of the SDK (#2) to produce a provably open source SDK (#3). This final SDK (#3) is then made available to RHEL users. After that, Red Hat can use this same SDK (#3) to build new .NET versions and no longer needs to use the Microsoft SDK to build monthly updates.

That process may be surprising and confusing. Open source distros need to be built by open source tools. This pattern ensures that the Microsoft build of the SDK isn’t required, either by intention or accident. There is a higher bar, as a developer platform, to being included in a distro than just using a compatible license. The source build project has enabled .NET to meet that bar.

The deliverable for source build is a source tarball. The source tarball contains all the source for the SDK (for a given release). From there, Red Hat (or another organization) can build their own version of the SDK. Red Hat policy requires using a built-from-source toolchain to produce a binary tarball, which is why they use the two-pass methodology. But this two-pass method is not required for source build itself.

It is common in the Linux ecosystem to have both source and binary packages or tarballs available for a given component. We already had binary tarballs available and now have source tarballs as well. That makes .NET match the standard component pattern.

The big improvement in .NET 6 is that the source tarball is now a product of our build. It used to require significant manual effort to produce, which also resulted in significant latency delivering the source tarball to Red Hat. Neither party was happy about that.

We’ve been working closely with Red Hat on this project for five+ years. It has succeeded, in no small part, due to the efforts of the excellent Red Hat engineers we’ve had the pleasure of working with. Other distros and organizations will benefit from their efforts.

As a side note, source build is a big step towards reproducible builds, which we also strongly believe in. The .NET SDK and C# compiler have significant reproducible build capabilities. There are some specific technical issues that still need to be resolved for full reproducibility. Surprisingly, a major remaining issue is using stable compression algorithms for compressed content in assemblies.

Profile-guided optimization (PGO)

Profile Guided Optimization (PGO) is an important capability of most developer platforms. It is based on the assumption that the code executed as part of startup is often uniform and that higher-level performance can be delivered by exploiting that.

There are lots of things you can do with PGO, such as:

Compile startup code at higher-quality.
Reduce binary size by compiling low-use code at lower-quality (or not at all).
Re-arrange application binaries such that code used at startup is co-located near the start of the file.

.NET has used PGO in various forms for twenty years. The system we initially developed was both proprietary and (very) difficult to use. It was so difficult to use that very few other teams at Microsoft used it even though it could have provided significant benefit. With .NET 6, we decided to rebuild the PGO system from scratch. This was motivated in large part by crossgen2 as the new enabling technology.

There are several aspects to enabling a world-class PGO system (at least in our view):

Easy-to-use training tools that collect PGO data from applications, on the developer desktop and/or in production.
Straightforward integration of PGO data in the application and library build flow.
Tools that process PGO data in various ways (differencing and transforming).
Human- and source-control-friendly text format for PGO data.
Static PGO data can be used by a dynamic PGO system to establish initial insight.

In .NET 6, we focused on building the foundation that can enable those and other experiences. In this release, we just got back to what we had before. The runtime libraries are compiled to ready-to-run format optimized with (the new form of) PGO data. This is all enabled with crossgen2. At present, we haven’t enabled anyone else to use PGO to optimize apps. That’s what will be coming next with .NET 7.

Dynamic PGO

Dynamic PGO is the mirror image of the static PGO system I just described. Where static PGO is integrated with crossgen2, dynamic PGO is integrated with RyuJIT. Where static PGO requires a separate training activity and using special tools, dynamic PGO is automatic and uses the running application to collect relevant data. Where static PGO data is persisted, dynamic PGO data is lost after every application run. Dynamic PGO is similar to a tracing JIT.

Dynamic PGO is currently opt-in, enabled by setting the following environment variables:

DOTNET_TieredPGO=1
DOTNET_TC_QuickJitForLoops=1
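
For example, on Linux or macOS you could set them just for a single run like so (a sketch; on Windows you would set the same variables in the environment before launching the app):

DOTNET_TieredPGO=1 DOTNET_TC_QuickJitForLoops=1 dotnet run -c Release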

The Performance Improvements in .NET 6 post does a great job of demonstrating how dynamic PGO improves performance.

Tiered compilation (TC) has similar characteristics to dynamic PGO. In fact, dynamic PGO can be thought of as tiered compilation v2. TC provides a lot of benefit, but is unsophisticated in multiple dimensions and can be greatly improved. It’s brains for a scarecrow.

Perhaps the most interesting capability of dynamic PGO is devirtualization. The cost of method calls can be described like this: interface > non-interface virtual > non-virtual. If we can transform an interface method call into a non-virtual call, then that’s a significant performance improvement. That’s super hard in the general case, since it is very difficult to know statically which classes implement a given interface. If it is done wrong, the program will (hopefully) crash. Dynamic PGO can do this correctly and efficiently.

RyuJIT can now generate code using the “guarded devirtualization” compiler pattern. I’ll explain how that works. Dynamic PGO collects data on the actual classes that satisfy an interface in some part of a method signature at runtime. If there is a strong bias to one class, it can tell RyuJIT to generate code that prefers that class and uses direct method calls for that specific class. As covered above, direct calls are much faster. If, in the unexpected case, the object is of a different class, then execution jumps to slower code that uses interface dispatch. This pattern preserves correctness, isn’t much slower in the unexpected case, and is much faster in the expected typical case. This dual-mode system is called guarded since the faster devirtualized code is only executed after a successful type check.
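
To make that concrete, here is a hand-written C# sketch of the shape of the code the JIT produces (the IShape and Circle types are hypothetical, and the real transformation happens in generated machine code rather than in your source):

using System;

interface IShape { double Area(); }

sealed class Circle : IShape
{
    public double Radius;
    public double Area() => Math.PI * Radius * Radius;
}

static class Geometry
{
    static double ComputeArea(IShape shape)
    {
        // PGO data showed that "shape" is almost always a Circle at this call site.
        if (shape is Circle circle)   // the guard: a cheap type check
        {
            return circle.Area();     // direct call to a sealed type; can be inlined
        }

        return shape.Area();          // rare path: ordinary interface dispatch
    }
}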

There are other capabilities that we can imagine implementing. For example, some combination of Crossgen2 and Dynamic PGO can learn how to sparsely compile methods based on usage (don’t initially compile rarely taken if/else blocks). Another idea is that Crossgen2 can communicate (via some weighting) which methods are most likely to benefit from higher tiers of compilation at runtime.

Crossgen2

I’ve discussed crossgen2 multiple times now, both in this post and previously. Crossgen2 is a major step forward for ahead-of-time or pre-compilation for the platform. There are several aspects to this that lay the foundation for future investments and capabilities. It’s non-obvious but crossgen2 may be the most promising foundational feature of the release. I’ll try to explain why I’m so excited about it.

The most important aspect is that crossgen2 has a design goal of being a standalone compiler. Crossgen1 was a separate build of the runtime with just the components required to enable code generation. That approach was a giant hack and problematic for a dozen different reasons. It got the job done, but that’s it.

As a result of being a standalone compiler, it could be written in any language. Naturally, we chose C#, but it could equally have been written in Rust or JavaScript. It just needs to be able to load a given build of RyuJIT as a plug-in and communicate with it using the prescribed protocol.

Similarly, the standalone nature enables it to be cross-targeting. It can, for example, target ARM64 from x64, Linux from Windows, or .NET 6 from .NET 7.

I’ll cover a few of the scenarios that we’re immediately interested in enabling, after .NET 6. From here on, I’ll just use “crossgen”, but I mean “crossgen2”.

By default, ready-to-run (R2R) code has the same version-ability as IL. That’s objectively the right default. If you are not sure what the implications of that are, that’s demonstrating we chose the right default. In .NET 6, we added a new mode that extends the version boundary from a single assembly to a group of assemblies. We call that a “version bubble”. There are two primary capabilities that version bubbles enable: inlining of methods and cross-assembly generic instantiations (like List<string> if List<T> and string were in different assemblies). The former enables us to generate higher-quality code and the latter enables actually generating R2R code where we otherwise have to rely on the JIT. This feature delivers double-digit startup benefits in our tests. The only downside is that version bubbles typically generate more code as a result. That’s where the next capability can help.
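
Version bubbles are opted into at publish time. As a sketch, assuming the .NET 6 publish properties (composite compilation is the mode that compiles a set of assemblies as a single version bubble):

<PropertyGroup>
    <PublishReadyToRun>true</PublishReadyToRun>
    <PublishReadyToRunComposite>true</PublishReadyToRunComposite>
</PropertyGroup>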

Today, crossgen generates R2R code for all methods in an assembly, including the runtime and SDK. That’s very wasteful since probably at least half of them would be best left for jitting at runtime (if they are needed at all). Crossgen has had the capability for partial compilation for a long time, and we’ve even used it. In .NET Core 3.0, we used it to remove about 10MB from the runtime distribution on Linux. Sadly, that configuration got lost at some point and we’re now carrying that extra 10MB around. For .NET 7, we’re going to take another crack at this, and will hopefully identify a lot more than 10MB of R2R code to no longer generate (which naturally has benefits beyond just size reduction).

Vector or SIMD instructions are significantly exploited in .NET libraries and are critical to delivering high performance. By default, crossgen uses an old version (SSE2) of these instructions and relies on tiered compilation to generate the best SIMD instructions for a given machine. That works but isn’t optimal for modern hardware (like in the cloud) and is particularly problematic for short-running serverless applications. Crossgen enables specifying a modern and better SIMD instruction set like AVX2 (for Intel and AMD). We plan to switch to producing ready-to-run images for AVX2 with this new capability for .NET 7. This capability isn’t currently relevant for Arm64 hardware, where NEON is universally available and the best instruction set on offer. When SVE and SVE2 become commonplace, we’ll need to adopt a similar model for Arm64.

Whatever is the most optimal crossgen configuration, that’s how we’ll deliver container images. We see containers as our most legacy-free distribution type and want to better exploit that. We see lots of opportunity for “fully optimized by default” for containers.

Security mitigations

We published a security roadmap earlier this year to provide more insight on how we are approaching industry standard security techniques and hardware features. The roadmap is also intended to be a conversation, particularly if you’ve got a viewpoint you want to share on these topics.

We added preview support for two key security mitigations this release: CET and W^X. We intend to enable them by default in .NET 7.

CET

Intel’s Control-flow Enforcement Technology (CET) is a security feature available in some newer Intel and AMD processors. It adds capabilities to the hardware that protect against some common types of attacks involving control-flow hijacking. With CET shadow stacks, the processor and operating system can track the control flow of calls and returns in a thread in the shadow stack in addition to the data stack, and detect unintended changes to the control flow. The shadow stack is protected from application code memory accesses and helps to defend against attacks involving return-oriented programming (ROP).

See .NET 6 compatibility with Intel CET shadow stacks (early preview on Windows) for more details and instructions on enabling CET.

W^X

W^X is one of the most fundamental mitigations. It blocks the simplest attack path by disallowing memory pages to be writeable and executable at the same time. The lack of this mitigation has resulted in us not considering more advanced mitigations, since they could be bypassed by the lack of this capability. With W^X in place, we will be adding other complementary mitigations, like CET.

Apple has made W^X mandatory for future versions of the macOS desktop operating system as part of the Apple Silicon transition. That motivated us to schedule implementation of this mitigation for .NET 6, on all supported operating systems. Our principle is to treat all supported operating systems equally with respect to security, where possible. W^X is available on all operating systems with .NET 6 but is only enabled by default on Apple Silicon. It will be enabled on all operating systems in .NET 7.
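
On the other operating systems, it can be opted into today with a runtime environment variable. This is a sketch from memory of the .NET 6 knob, so verify the name against the release notes before relying on it:

DOTNET_EnableWriteXorExecute=1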

HTTP/3

HTTP/3 is a new HTTP version. It is in preview with .NET 6. HTTP/3 solves existing functional and performance challenges with past HTTP versions by using a new underlying connection protocol called QUIC. QUIC uses UDP and has TLS built in, so it’s faster to establish connections as the TLS handshake occurs as part of the connection. Each frame of data is independently encrypted so the protocol no longer has the head-of-line blocking challenge in the case of packet loss. Unlike TCP, a QUIC connection is independent of the IP address, so mobile clients can roam between Wi-Fi and cellular networks, keeping the same logical connection and continuing long downloads.

At the current time, the RFC for HTTP/3 is not yet finalized, and so can still change. We have included HTTP/3 in .NET 6 so that you can start experimenting with it. It is a preview feature, and so is unsupported. There may be rough edges, and there needs to be broader testing with other servers & clients to ensure compatibility.

.NET 6 does not include support for HTTP/3 on macOS, primarily because of the lack of a QUIC-compatible TLS API. .NET uses SecureTransport on macOS for its TLS implementation, which does not yet include TLS APIs to support the QUIC handshake.

A deep-dive blog post on HTTP/3 in .NET 6 will soon be published.

SDK workloads

SDK workloads are a new feature that lets us add major new capabilities to .NET without growing the SDK. That’s what we’ve done for .NET MAUI, Android, iOS, and WebAssembly. We haven’t measured all of the new workloads together, but it’s easy to guess that they would sum to at least the size of the SDK as-is. Without workloads, you’d probably be unhappy about the size of the SDK.

In future releases, we intend to remove more components and make them optional, including ASP.NET and the Windows Desktop. In the end, one can imagine the SDK containing only MSBuild, NuGet, the language compilers and workload acquisition functionality. We very much want to cater to a broad .NET ecosystem and to deliver just the software you need to get your particular job done. You can see how this model would be much better for CI scenarios, enabling dotnet tools to acquire a bespoke set of components for the specific code being built.
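
As a sketch of how acquisition works, workloads are managed with the dotnet workload commands. The maui workload ID below is an assumption; dotnet workload search shows what your SDK actually offers:

dotnet workload search
dotnet workload install maui
dotnet workload list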

Contributor showcase

We’re coming to the end of the release. We thought we’d take a few moments to showcase some community contributors who have made significant contributions. We covered two contributors in the .NET 6 Preview 7 post and want to highlight another in this post.

The text is written in the contributor’s own words.

Theodore Tsirpanis (@teo-tsirpanis)

My name is Theodore Tsirpanis, and I am from Thessaloniki, Greece. In less than a month, I will begin my senior (fourth and final) year as an undergraduate student at the Department of Applied Informatics of the University of Macedonia. Besides maintaining some projects of my own (mostly developer-facing tools and libraries), I have been contributing to various open-source projects on GitHub for quite some time. What I like the most about open-source is that the soonest you find a bug or a performance improvement opportunity, you can very quickly act upon it yourself.

My journey with .NET also started quite some time ago. My first programming language was Pascal but it didn’t take too long to discover C# and later F#, and marvel at the sheer amount of technological artisanship that permeates the .NET ecosystem. I am always eager to read a new blog post and spend more time than I remember randomly strolling around the libraries’ source code using ILSpy, improving my coding skills in the process. My enjoyment of writing highly performant code, and the impact of contributing to such a large project is what motivated me to contribute to the .NET libraries. The team members are very responsive and have the same passion for code quality and performance as I do. I am very glad to have played a part in making .NET a great piece of software, and I look forward to contributing even more in the future.

Closing

.NET 6 has a lot of new features and capabilities that are for the here-and-now, most of which have been explored in all the previews and also in the upcoming .NET 6 posts. At the same time, it’s inspiring to see the new features in .NET 6 that will lay the foundation for what’s coming next. These are big-bet features that will push the platform forward in both obvious and non-obvious ways.

For the first several releases, the team needed to focus on establishing .NET Core as a functional and holistic open source and cross-platform development system. Next, we focused on unifying the platform with Xamarin and Mono. You can see that we’re departing from that style of project to more forward-looking ones. It’s great to see the platform again expand in terms of fundamental runtime capabilities, and there’s much more to come along those lines.

Thanks for being a .NET developer.

The post Announcing .NET 6 Release Candidate 1 appeared first on .NET Blog.

HTTP/3 support in .NET 6

.NET 6 includes preview support for HTTP/3:

In Kestrel, HTTP.Sys & IIS for ASP.NET server scenarios
In HttpClient to make outbound requests
For gRPC

What is HTTP/3 and why is support important?

HTTP through version 1.1 was a relatively simple protocol: open a TCP connection, send a set of headers in clear text, and then receive the response. Requests can be pipelined over the same connection, but each has to be handled in order. TLS adds some additional complications, and a couple of round-trips to initiate a connection, but once established, HTTP is used the same way over the secure channel.

As many sites moved to require TLS encryption, and HTTP/1.1 can only serve one request at a time per connection, the performance of web pages, which typically require downloading multiple resources (scripts, images, CSS, fonts, etc.), was limited: multiple connections are needed, and each has high setup costs.

HTTP/2 solved that problem by becoming a binary protocol that uses a framing concept to enable multiple requests to be handled at the same time over the same connection. The setup costs for TLS can be paid once, and then all the requests are interleaved over that single connection.

That’s all great, except we have all gone mobile and much of the access is now from phones and tablets using Wi-Fi and cellular connections, which can be unreliable. Although HTTP/2 enables multiple streams, they all go over one TLS-encrypted connection, so if a TCP packet is lost, all of the streams are blocked until the data can be recovered. This is known as the head-of-line blocking problem.

HTTP/3 solves these problems by using a new underlying connection protocol called QUIC. QUIC uses UDP and has TLS built in, so it’s faster to establish connections as the TLS handshake occurs as part of the connection. Each frame of data is independently encrypted, so it no longer has head-of-line blocking in the case of packet loss. Unlike TCP, a QUIC connection is independent of the IP address, so mobile clients can roam between Wi-Fi and cellular networks, keeping the same logical connection and continuing long downloads.

Amongst the metrics that sites and services use, tracking the latency of the worst connections using P90, P95 or P99 is common. HTTP/3 is already proving to have a positive impact on these numbers at companies such as Facebook, Snapchat and Google Cloud, improving the experience for users with the worst connections.

QUIC Support in .NET

QUIC is designed as a base layer for HTTP/3, but it can also be used by other protocols. It is designed to work well for mobile, with the ability to handle network changes and to recover well if packet loss occurs.

.NET uses the MSQuic library for its QUIC implementation. This is an open-source, cross-platform library from the Windows networking team. For packaging reasons, it is included with .NET 6 on Windows and shipped as a separate package for Linux.

A key difference with QUIC is that TLS encryption is built in, so connection establishment includes the TLS handshake. This means that the TLS library used needs to provide APIs to enable this type of handshake. For Windows, the APIs are included in SChannel/Bcrypt.dll. For Linux it’s a bit more complicated: OpenSSL, which is used by .NET and most other software on Linux, does not yet include these APIs. The OpenSSL team has been heads down working on OpenSSL 3.0, which had a hard deadline to be submitted for FIPS 140-2 certification, so they were unable to add that support directly in 3.0.

This created a problem for us, and many others working on QUIC and HTTP/3. To provide a stopgap solution until OpenSSL can include API support for QUIC handshakes, Microsoft has partnered with Akamai to create a fork of OpenSSL – QuicTLS – that provides the APIs to enable the QUIC handshake. Unlike other forks which have diverged over time from OpenSSL, QuicTLS provides a minimal delta over mainline OpenSSL, and is kept in sync with the upstream.

The MSQuic package for Linux is statically linked with QuicTLS, so it does not need a separate download or require managing multiple variants of the OpenSSL library. This also means that when mainline OpenSSL includes QUIC APIs, the package will be updated to use those instead.

In .NET 6, we are not exposing the .NET QUIC APIs; the goal is to make them public in .NET 7. QUIC can be used like a TCP socket and is not specific to HTTP/3, so we expect other protocols to be built on QUIC over time, such as SMB over QUIC.

HTTP/3 Support in .NET 6

At the time of publishing this post, the RFC for HTTP/3 is not yet finalized, and so can still change. We have included HTTP/3 in .NET 6 so that customers can start experimenting with it, but it is a preview feature for .NET 6 – this is because it does not meet the quality standards of the rest of .NET 6. There may be rough edges, and there needs to be broader testing with other servers & clients to ensure compatibility, especially in the edge cases.

Prerequisites

To use HTTP/3, the prerequisite versions of MSQuic and its TLS dependencies need to be installed.

Windows

MsQuic is installed as part of .NET 6, but it needs an updated version of the Schannel SSP, which provides the TLS API. This is supplied with recent releases of the OS:

Windows 11 Build 22000 or later, or Server 2022 RTM

Windows 11 builds are currently only available to Windows Insiders.

Linux

On Linux, libmsquic is published via the official Microsoft Linux package repository, packages.microsoft.com. This repository must be added manually before it can be consumed; see Linux Software Repository for Microsoft Products. After configuring the package feed, the library can be installed via the package manager for your distro, for example, on Ubuntu:

sudo apt install libmsquic

Kestrel Server

Server support is included in Kestrel. Preview features need to be enabled with the following project property:

<PropertyGroup>
    <EnablePreviewFeatures>True</EnablePreviewFeatures>
</PropertyGroup>

HTTP/3 then needs to be set in the listener options, for example:

public static async Task Main(string[] args)
{
    var builder = WebApplication.CreateBuilder(args);
    builder.WebHost.ConfigureKestrel((context, options) =>
    {
        options.Listen(IPAddress.Any, 5001, listenOptions =>
        {
            // Use HTTP/3
            listenOptions.Protocols = HttpProtocols.Http1AndHttp2AndHttp3;
            listenOptions.UseHttps();
        });
    });

    // Build and run the app so the listener actually starts.
    var app = builder.Build();
    await app.RunAsync();
}

For more details, see Use HTTP/3 with the ASP.NET Core Kestrel web server

HTTP/3 Client

HttpClient has been updated to include support for HTTP/3, but it needs to be enabled with a runtime flag. Include the following in the project file to enable HTTP/3 with HttpClient:

<ItemGroup>
    <RuntimeHostConfigurationOption Include="System.Net.SocketsHttpHandler.Http3Support" Value="true" />
</ItemGroup>

HTTP/3 needs to be specified as the version for the request:

// See https://aka.ms/new-console-template for more information
using System.Net;

var client = new HttpClient();
client.DefaultRequestVersion = HttpVersion.Version30;
client.DefaultVersionPolicy = HttpVersionPolicy.RequestVersionExact;

var resp = await client.GetAsync("https://localhost:5001/");
var body = await resp.Content.ReadAsStringAsync();

Console.WriteLine($"status: {resp.StatusCode}, version: {resp.Version}, body: {body.Substring(0, Math.Min(100, body.Length))}");

HTTP/3 via HTTP.sys & IIS

On Windows Server 2022, HTTP.sys supports HTTP/3 when it’s enabled with a registry key, and TLS 1.3 must be enabled (the default). This is independent of ASP.NET’s support for HTTP/3; the HTTP protocol is handled by HTTP.sys in this configuration, so it applies not just to ASP.NET but to any content or services served by HTTP.sys. For more details see this blog post from the Windows networking team.
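
For reference, the registry switch looks roughly like the following. This is quoted from memory rather than from the linked post, so treat the key and value names as assumptions and confirm them against that post:

reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters" /v EnableHttp3 /t REG_DWORD /d 1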

gRPC with HTTP/3

gRPC is an RPC mechanism using the protobuf serialization format. gRPC typically uses HTTP/2 as its transport. HTTP/3 has the same semantics, so little change is required to make it work. gRPC over HTTP/3 is not yet a standard; it has been proposed by the .NET team.

The following code is based on the greeter sample, with the hello world proto.

The client and server projects require the same preview-feature enablement in their project files as the samples further above.

ASP.NET Server

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddGrpc();
builder.WebHost.ConfigureKestrel((context, options) =>
{
    options.Listen(IPAddress.Any, 5001, listenOptions =>
    {
        listenOptions.Protocols = HttpProtocols.Http3;
        listenOptions.UseHttps();
    });
});
var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}

app.MapGrpcService<GreeterService>();
app.MapGet("/", () => "Communication with gRPC endpoints must be made through a gRPC client. To learn how to create a client, visit: https://go.microsoft.com/fwlink/?linkid=2086909");

app.Run();

Client

using Grpc.Net.Client;
using GrpcService1;
using System.Net;

var httpClient = new HttpClient();
httpClient.DefaultRequestVersion = HttpVersion.Version30;
httpClient.DefaultVersionPolicy = HttpVersionPolicy.RequestVersionExact;

var channel = GrpcChannel.ForAddress("https://localhost:5001", new GrpcChannelOptions() { HttpClient = httpClient });
var client = new Greeter.GreeterClient(channel);

var response = await client.SayHelloAsync(
    new HelloRequest { Name = "World" });

Console.WriteLine(response.Message);

macOS support

.NET 6 does not include support for HTTP/3 on macOS, primarily because of the lack of a QUIC-compatible TLS API. .NET uses SecureTransport on macOS for its TLS implementation, which does not yet include TLS APIs to support the QUIC handshake. While we could use OpenSSL, we felt it better not to introduce an additional dependency that is not integrated with the certificate management of the OS.

Going Forward

We will be investing further in QUIC and HTTP/3 in .NET 7, so expect to see updated functionality in the previews.

The post HTTP/3 support in .NET 6 appeared first on .NET Blog.

File Scoped Namespaces In C# 10

This post is part of a series on .NET 6 and C# 10 features. Use the following links to navigate to other articles in the series and build up your .NET 6/C# 10 knowledge! While the articles are separated into .NET 6 and C# 10 changes, these days the lines are very blurred so don’t read too much into it.

.NET 6

Minimal API Framework
DateOnly and TimeOnly Types
LINQ OrDefault Enhancements
Implicit Using Statements
IEnumerable Chunk
SOCKS Proxy Support
Priority Queue
MaxBy/MinBy

C# 10

Global Using Statements
File Scoped Namespaces

This is probably going to be my final post on new features in C# 10 (well, before I do a roundup of everything C# 10 and .NET 6 related). But it doesn’t mean this post is any less useful. In fact, this one hits a very special place in my heart.

For a little bit of a story. Back in 2006-ish, I wanted to learn a new programming language. I was a teenager all hyped up on computers and making various utilities, mostly revolving around MSN Messenger auto replies and the like. I had mastered Pascal to a certain degree, and had moved on to Delphi. There was this new thing called “.NET” and a language called C# – and since anything starting with a C was clearly amazing in the programming world, I went down that rabbit hole.

I convinced a family member to buy me a C# tutorial book, *I think* from Microsoft, but I can’t exactly remember. I do remember it having a “tool” on the front, so I can only presume it was this one or another in the series : https://www.amazon.com/Microsoft%C2%AE-Visual-2005-Step-Developer/dp/0735621292. Eagerly, I opened the book and inserted the CD-ROM that came with it. And I can still remember my heart sinking.

Your operating system is not compatible

For reasons unknown to me at the time, I was using Windows ME. Quite possibly the worst operating system known to man. I mean, we didn’t have a lot of money. It was a 1GHz, 256MB RAM machine; Windows ME was the best we could do at the time. And so… I was stuck. The CD-ROM wouldn’t work, so I couldn’t install Visual Studio (these were days before broadband/ADSL for me), and so I did what any kid would do. I just read the book instead and took notes that someday I hoped I could use when writing C# code. Literally, I couldn’t even write C# code on my PC, and instead I just wrote it on paper and “pretended” it would work first time and I was learning. Ugh.

However, the actual point of the story is this. The first chapter of the blimmin book had the driest introduction to namespaces you could imagine. I thought maybe we could ease into “integers vs strings” or a nice “if statement”, but nope, let’s talk about how namespaces work. I just remember it being sooo off-putting. And 15 years later, if a new programmer asked me to teach them C#, I would probably not even mention namespaces in the first month.

So with that story done, let’s look at the actual feature….

Introducing File Scoped Namespaces

We can take a namespace scoped class like so :

namespace MyNamespace.Services
{
    class MyClass
    {

    }
}

But in C# 10, we can now remove the braces and the extra level of indentation, and have the code look like so :

namespace MyNamespace.Services;

class MyClass
{

}

And that’s… kinda it. It’s done for no other reason than it removes an additional level of indenting that really isn’t needed in this day and age. It just presumes that whatever is inside that file (hence file scoped) is all within the same namespace. I can’t think of a time literally in 15 years that I have ever had more than 1 namespace in the same file. So this addition to C# really does make sense.

Visual Studio 2019 vs 2022

I just want to put a huge caveat on using this feature. For a couple of months now, I have been trying this feature out in every .NET 6 preview SDK release. And each time I couldn’t get it to work, but I kept seeing people talk about it.

As it turns out, for whatever reason, I could not get this feature to work in Visual Studio 2019 (And actually, Minimal APIs in .NET 6 had similar issues), but it worked first try in Visual Studio 2022. So if you are getting errors such as :

{ expected

Then you probably need to try it inside Visual Studio 2022.

The post File Scoped Namespaces In C# 10 appeared first on .NET Core Tutorials.

Creating Microsoft Teams meetings in ASP.NET Core using Microsoft Graph

This article shows how to create Microsoft Teams online meetings in ASP.NET Core using Microsoft Graph. Azure AD is used to implement the authentication using Microsoft.Identity.Web and the authenticated user can create teams meetings and send emails to all participants or attendees of the meeting.

Code: https://github.com/damienbod/TeamsAdminUI

Setup Azure App registration

An Azure App registration is set up to authenticate against Azure AD. The ASP.NET Core application will use delegated permissions for Microsoft Graph. The permissions listed underneath are required to create the Teams meetings and to send emails to the attendees. The account used to login needs access to Office and should be able to send emails.

User.Read
Mail.Send
Mail.ReadWrite
OnlineMeetings.ReadWrite

This is the list of permissions I have activated for this demo.

The Azure App registration requires a user secret or a certificate to authenticate the ASP.NET Core Razor page application. Microsoft.Identity.Web uses this to authenticate the application. You should always authenticate the application if possible.

Setup ASP.NET Core application

The Microsoft.Identity.Web NuGet packages, together with the MicrosoftGraphBeta package, are used to implement the Azure AD client. We want to implement the OpenID Connect code flow with PKCE and a secret to authenticate the identity, and the Microsoft packages implement this client for us.

<ItemGroup>
    <PackageReference
        Include="Microsoft.Identity.Web"
        Version="1.16.1" />
    <PackageReference
        Include="Microsoft.Identity.Web.UI"
        Version="1.16.1" />
    <PackageReference
        Include="Microsoft.Identity.Web.MicrosoftGraphBeta"
        Version="1.16.1" />
</ItemGroup>

The ConfigureServices method is used to add the required services for the Azure AD client authentication and the Microsoft Graph client for the API calls. AddMicrosoftGraph is used to initialize the required permissions.

public void ConfigureServices(IServiceCollection services)
{
    // more services …

    var scopes = "User.read Mail.Send Mail.ReadWrite OnlineMeetings.ReadWrite";

    services.AddMicrosoftIdentityWebAppAuthentication(Configuration)
        .EnableTokenAcquisitionToCallDownstreamApi()
        .AddMicrosoftGraph("https://graph.microsoft.com/beta", scopes)
        .AddInMemoryTokenCaches();

    services.AddRazorPages().AddMvcOptions(options =>
    {
        var policy = new AuthorizationPolicyBuilder()
            .RequireAuthenticatedUser()
            .Build();
        options.Filters.Add(new AuthorizeFilter(policy));
    }).AddMicrosoftIdentityUI();
}

The AzureAd configuration is read from the appsettings.json file. The secrets are read from the user secrets in local development.

"AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "Domain": "damienbodsharepoint.onmicrosoft.com",
    "TenantId": "5698af84-5720-4ff0-bdc3-9d9195314244",
    "ClientId": "a611a690-9f96-424f-9ea5-4ba99a642c01",
    "CallbackPath": "/signin-oidc",
    "SignedOutCallbackPath": "/signout-callback-oidc"
    // "ClientSecret": "add secret to the user secrets"
},

Creating a Teams meeting using Microsoft Graph

The OnlineMeeting class from Microsoft.Graph is used to create the Teams meeting. In this demo, we add a begin and an end DateTime in UTC and the name (Subject) of the meeting. We want all invited attendees to be able to bypass the lobby and enter directly into the meeting. This is implemented with the LobbyBypassSettings property. The attendees are added to the meeting using the Upn property, setting this to the email of each attendee. The organizer is automatically set to the signed-in identity.

public OnlineMeeting CreateTeamsMeeting(
    string meeting, DateTimeOffset begin, DateTimeOffset end)
{
    var onlineMeeting = new OnlineMeeting
    {
        StartDateTime = begin,
        EndDateTime = end,
        Subject = meeting,
        LobbyBypassSettings = new LobbyBypassSettings
        {
            Scope = LobbyBypassScope.Everyone
        }
    };

    return onlineMeeting;
}

public OnlineMeeting AddMeetingParticipants(
    OnlineMeeting onlineMeeting, List<string> attendees)
{
    var meetingAttendees = new List<MeetingParticipantInfo>();
    foreach (var attendee in attendees)
    {
        if (!string.IsNullOrEmpty(attendee))
        {
            meetingAttendees.Add(new MeetingParticipantInfo
            {
                Upn = attendee.Trim()
            });
        }
    }

    if (onlineMeeting.Participants == null)
    {
        onlineMeeting.Participants = new MeetingParticipants();
    }

    onlineMeeting.Participants.Attendees = meetingAttendees;

    return onlineMeeting;
}

A simple service is used to create the GraphServiceClient instance, which is used to send the Microsoft Graph requests. This uses Microsoft Graph as described in the docs.

public async Task<OnlineMeeting> CreateOnlineMeeting(
    OnlineMeeting onlineMeeting)
{
    return await _graphServiceClient.Me
        .OnlineMeetings
        .Request()
        .AddAsync(onlineMeeting);
}

public async Task<OnlineMeeting> UpdateOnlineMeeting(
    OnlineMeeting onlineMeeting)
{
    return await _graphServiceClient.Me
        .OnlineMeetings[onlineMeeting.Id]
        .Request()
        .UpdateAsync(onlineMeeting);
}

public async Task<OnlineMeeting> GetOnlineMeeting(
    string onlineMeetingId)
{
    return await _graphServiceClient.Me
        .OnlineMeetings[onlineMeetingId]
        .Request()
        .GetAsync();
}

A Razor page is used to create a new Microsoft Teams online meeting. The two services are added to the class, and an HTTP POST method implements the form request from the Razor page. This method creates the Microsoft Teams meeting using the services and redirects to the Created Razor page with the ID of the meeting.

[AuthorizeForScopes(Scopes = new string[] { "User.read", "Mail.Send", "Mail.ReadWrite", "OnlineMeetings.ReadWrite" })]
public class CreateTeamsMeetingModel : PageModel
{
    private readonly AadGraphApiDelegatedClient _aadGraphApiDelegatedClient;
    private readonly TeamsService _teamsService;

    public string JoinUrl { get; set; }

    [BindProperty]
    public DateTimeOffset Begin { get; set; }
    [BindProperty]
    public DateTimeOffset End { get; set; }
    [BindProperty]
    public string AttendeeEmail { get; set; }
    [BindProperty]
    public string MeetingName { get; set; }

    public CreateTeamsMeetingModel(AadGraphApiDelegatedClient aadGraphApiDelegatedClient,
        TeamsService teamsService)
    {
        _aadGraphApiDelegatedClient = aadGraphApiDelegatedClient;
        _teamsService = teamsService;
    }

    public async Task<IActionResult> OnPostAsync()
    {
        if (!ModelState.IsValid)
        {
            return Page();
        }

        var meeting = _teamsService.CreateTeamsMeeting(MeetingName, Begin, End);

        var attendees = AttendeeEmail.Split(';');
        List<string> items = new();
        items.AddRange(attendees);
        var updatedMeeting = _teamsService.AddMeetingParticipants(
            meeting, items);

        var createdMeeting = await _aadGraphApiDelegatedClient.CreateOnlineMeeting(updatedMeeting);

        JoinUrl = createdMeeting.JoinUrl;

        return RedirectToPage("./CreatedTeamsMeeting", "Get", new { meetingId = createdMeeting.Id });
    }

    public void OnGet()
    {
        Begin = DateTimeOffset.UtcNow;
        End = DateTimeOffset.UtcNow.AddMinutes(60);
    }
}

Sending Emails to attendees using Microsoft Graph

The Created Razor page displays the meeting JoinUrl and some details of the Teams meeting. The page implements a form which can send emails to all the attendees using Microsoft Graph. The EmailService class implements the email logic to send plain-text or HTML emails using Microsoft Graph.

using Microsoft.Graph;
using System;
using System.Collections.Generic;
using System.IO;

namespace TeamsAdminUI.GraphServices
{
    public class EmailService
    {
        MessageAttachmentsCollectionPage MessageAttachmentsCollectionPage = new();

        public Message CreateStandardEmail(string recipient, string header, string body)
        {
            var message = new Message
            {
                Subject = header,
                Body = new ItemBody
                {
                    ContentType = BodyType.Text,
                    Content = body
                },
                ToRecipients = new List<Recipient>()
                {
                    new Recipient
                    {
                        EmailAddress = new EmailAddress
                        {
                            Address = recipient
                        }
                    }
                },
                Attachments = MessageAttachmentsCollectionPage
            };

            return message;
        }

        public Message CreateHtmlEmail(string recipient, string header, string body)
        {
            var message = new Message
            {
                Subject = header,
                Body = new ItemBody
                {
                    ContentType = BodyType.Html,
                    Content = body
                },
                ToRecipients = new List<Recipient>()
                {
                    new Recipient
                    {
                        EmailAddress = new EmailAddress
                        {
                            Address = recipient
                        }
                    }
                },
                Attachments = MessageAttachmentsCollectionPage
            };

            return message;
        }

        public void AddAttachment(byte[] rawData, string filePath)
        {
            MessageAttachmentsCollectionPage.Add(new FileAttachment
            {
                Name = Path.GetFileName(filePath),
                ContentBytes = EncodeTobase64Bytes(rawData)
            });
        }

        public void ClearAttachments()
        {
            MessageAttachmentsCollectionPage.Clear();
        }

        public static byte[] EncodeTobase64Bytes(byte[] rawData)
        {
            string base64String = System.Convert.ToBase64String(rawData);
            var returnValue = Convert.FromBase64String(base64String);
            return returnValue;
        }
    }
}

The CreatedTeamsMeetingModel class implements the Razor page logic to display some meeting details and send emails using a form POST request. OnGetAsync uses the meetingId to request the Teams meeting using Microsoft Graph and displays the data in the UI. The OnPostAsync method sends emails to all attendees.

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;
using System.Threading.Tasks;
using TeamsAdminUI.GraphServices;
using Microsoft.Graph;

namespace TeamsAdminUI.Pages
{
    public class CreatedTeamsMeetingModel : PageModel
    {
        private readonly AadGraphApiDelegatedClient _aadGraphApiDelegatedClient;
        private readonly EmailService _emailService;

        public CreatedTeamsMeetingModel(
            AadGraphApiDelegatedClient aadGraphApiDelegatedClient,
            EmailService emailService)
        {
            _aadGraphApiDelegatedClient = aadGraphApiDelegatedClient;
            _emailService = emailService;
        }

        [BindProperty]
        public OnlineMeeting Meeting { get; set; }

        [BindProperty]
        public string EmailSent { get; set; }

        public async Task<ActionResult> OnGetAsync(string meetingId)
        {
            Meeting = await _aadGraphApiDelegatedClient.GetOnlineMeeting(meetingId);
            return Page();
        }

        public async Task<IActionResult> OnPostAsync(string meetingId)
        {
            Meeting = await _aadGraphApiDelegatedClient.GetOnlineMeeting(meetingId);
            foreach (var attendee in Meeting.Participants.Attendees)
            {
                var recipient = attendee.Upn.Trim();
                var message = _emailService.CreateStandardEmail(recipient, Meeting.Subject, Meeting.JoinUrl);
                await _aadGraphApiDelegatedClient.SendEmailAsync(message);
            }

            EmailSent = "Emails sent to all attendees, please check your mailbox";
            return Page();
        }
    }
}

The created Razor page implements the HTML display logic and adds a form to send the emails. The JoinUrl is displayed, as this is what you need to open the meeting in a Microsoft Teams application.

@page "{handler?}"
@model TeamsAdminUI.Pages.CreatedTeamsMeetingModel
@{
}

<h4>Teams Meeting Created: @Model.Meeting.Subject</h4>
<hr />

<h4>Meeting Id:</h4>

<p>@Model.Meeting.Id</p>

<h4>JoinUrl</h4>

<p>@Model.Meeting.JoinUrl</p>

<h4>Participants</h4>

@foreach (var attendee in Model.Meeting.Participants.Attendees)
{
    <p>@attendee.Upn</p>
}

<form method="post">
    <div class="form-group">
        <input type="hidden" value="@Model.Meeting.Id" />
        <button type="submit" class="btn btn-primary"><i class="fas fa-save"></i> Send Mail to attendees</button>
    </div>
</form>

<p>@Model.EmailSent</p>

Testing

When the application is started, you can create a new Teams meeting with the required details. The logged-in user must have an account with access to Office and be on the same tenant as the Azure App registration set up for the Microsoft Graph permissions. The Teams meeting is organized using the identity that signed in, because we used delegated permissions.

Once the meeting is created, the Created Razor page is opened with the details. You can send an email to all attendees or use the JoinUrl directly to open up the Teams meeting.

Creating Teams meetings and sending emails in ASP.NET Core is really useful, and I will do a few follow-up posts to this, as there is so much more you can do here once this is integrated.

Links:

https://docs.microsoft.com/en-us/graph/api/application-post-onlinemeetings

https://github.com/AzureAD/microsoft-identity-web

Send Emails using Microsoft Graph API and a desktop client

https://www.office.com/?auth=2

https://aad.portal.azure.com/

https://admin.microsoft.com/Adminportal/Home

Reading And Writing YAML In C# .NET

Let me just start off by saying that YAML itself is not that popular in either C# or .NET. For a long time, under .NET Framework, XML seemed to reign supreme, with things like csproj files, solution files and even msbuild configurations all being XML driven. That slowly changed to be more JSON friendly, and I think we could all agree that things like Newtonsoft.Json/JSON.NET had a huge impact on pretty much every .NET developer using JSON these days.

That being said, there are times when you need to parse YAML. Recently, on a project that involved other languages such as Go, Python and PHP, YAML was chosen as a shared configuration type between all languages. Don’t get me started on why this was the case…. But it happened. And so if you are stuck working out how to parse YAML files in C#, then this guide is for you.

Introducing YamlDotNet

In .NET, there is no support for reading or writing YAML files out of the box. Unlike the JSON and XML serializers, you aren’t able to rely on Microsoft for this one. Luckily, there is a NuGet package that is more or less the absolute standard when it comes to working with YAML in C#: YamlDotNet.

To install it, from our package manager console we just have to run :

Install-Package YamlDotNet

And we are ready to go!

Deserializing YAML To A POCO

Deserializing YAML directly to a POCO is actually simple!

Let’s say we have a YAML file that looks like so :

databaseConnectionString: Server=.;Database=myDataBase;
uploadFolder: /uploads/
approvedFileTypes : [.png, .jpeg, .jpg]

And we then have a plain C# class that is set up like the following :

class Configuration
{
    public string DatabaseConnectionString { get; set; }
    public string UploadFolder { get; set; }
    public List<string> ApprovedFileTypes { get; set; }
}

The code to deserialize this is just a few lines long :

using System.IO;
using YamlDotNet.Serialization;
using YamlDotNet.Serialization.NamingConventions;

var deserializer = new DeserializerBuilder()
    .WithNamingConvention(CamelCaseNamingConvention.Instance)
    .Build();

var myConfig = deserializer.Deserialize<Configuration>(File.ReadAllText("config.yaml"));

Easy right! But I do want to point out one big caveat to all of this. YamlDotNet by default is *not* case insensitive. In fact, it’s actually somewhat frustrating that you must match the casing perfectly. Maybe that’s just me being spoiled with JSON.NET’s excellent case insensitivity, but it is annoying here.

You must use one of the following :

CamelCase Naming
Hyphenated Naming
LowerCase Naming
PascalCase Naming
Underscored Naming
Or simply have the YAML match the casing of your properties exactly

But you can’t mix up casing that easily unfortunately.

YamlDotNet does have a “YamlMember” attribute that works much the same as JsonProperty in JSON.NET. However, you must also set the ApplyNamingConventions property to false for it to really work properly. e.g. If in my YAML I have “Database_ConnectionString”, I need to apply an alias *as well as* remove the camel case naming convention, otherwise it will look for “database_ConnectionString”.

[YamlMember(Alias = "Database_ConnectionString", ApplyNamingConventions = false)]
public string DatabaseConnectionString { get; set; }

Deserializing YAML To A Dynamic Object

If you check my guide on parsing JSON, you’ll notice I talk about things like JObject, JsonPath, dynamic JTokens etc. Basically, ways to read a JSON File, without having the structured class to deserialize into.

In my brief time working with YamlDotNet, it doesn’t seem to have the same functionality. It looks to be either you serialize into a class, or nothing at all. There are some workarounds however; you can, for example, deserialize into a dynamic object.

dynamic myConfig = deserializer.Deserialize<ExpandoObject>(File.ReadAllText("config.yaml"));

But, it isn’t quite the same as the ability to use things like JsonPath to find deeply nested nodes. What I will say is that this probably has more to do with where and how JSON is used vs YAML. It’s generally going to be pretty rare to have a YAML file be hundreds or thousands of lines long (although not unheard of), so the need for things like JsonPath is maybe in edge-case territory.
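
That said, if you do need to reach into a document without a full POCO, one workaround is to deserialize into generic dictionaries and walk them by hand. A minimal sketch, assuming a config.yaml with a hypothetical nested “database” mapping:

var deserializer = new DeserializerBuilder().Build();
var root = deserializer.Deserialize<Dictionary<string, object>>(File.ReadAllText("config.yaml"));

// Nested mappings deserialize as Dictionary<object, object>, so digging into
// a "database: { host: ... }" node looks like this:
var database = (Dictionary<object, object>)root["database"];
Console.WriteLine(database["host"]);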

Serializing A C# Object To YAML

Writing a C# object into YAML is actually pretty straightforward. If we take our simple C# configuration class we had before :

class Configuration
{
    public string DatabaseConnectionString { get; set; }
    public string UploadFolder { get; set; }
    public List<string> ApprovedFileTypes { get; set; }
}

We can do everything in just 4 lines :

var config = new Configuration();

var serializer = new SerializerBuilder()
    .WithNamingConvention(CamelCaseNamingConvention.Instance)
    .Build();

var stringResult = serializer.Serialize(config);

I’ll note a couple of things about the serializing/writing process :

If a property is null, it will still be serialized (with an empty value), but you can override this if you want (a sketch follows below)
Naming convention is uber important here obviously, with the default being whatever casing your properties use in your C# code

But outside of those points, it’s really straightforward and just works a treat.
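
On that first point about nulls, the behaviour can be overridden on the builder. A sketch using YamlDotNet’s DefaultValuesHandling option to omit null properties from the output entirely:

var serializer = new SerializerBuilder()
    .WithNamingConvention(CamelCaseNamingConvention.Instance)
    .ConfigureDefaultValuesHandling(DefaultValuesHandling.OmitNull)
    .Build();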

The post Reading And Writing YAML In C# .NET appeared first on .NET Core Tutorials.

.NET Foundation Board of Directors Election 2021: Results!

The results are in!

We are pleased to announce the winners of the 2021 Board Election, but before we do, we have a bit of news to announce.

Rodney Littles has decided to resign from the Foundation. Rodney has been a board member and the chair of the Technical Steering Group for the past year. We wish him all the best as he refocuses on his personal life. The bylaws of the Foundation state that the existing board should appoint a replacement for his seat. We felt that, with the election taking place as he sent his resignation, it would be fitting for our community of members to have a say in that 4th seat.

The newly elected board seats will be filled by:

Mattias Karlsson

Frank Odoom

Rob Prouse

Javier Lozano (re-elected)

We understand there has been concern and conversations around the lack of gender and cultural diversity in the slate of candidates. There is work that needs to be done within ourselves and the .NET Community to be more inclusive without excluding the work of those that are trying to make a difference for everyone. That work will begin this week as the new board will meet with the existing board to start the hand-off process.

Thanks to everyone who was nominated, ran, and contributed to the success of this election!

.NET 6 / C# 10 Top New Features Recap

Over the past few months, I’ve been publishing posts around new features inside .NET 6 and C# 10. I put those as two separate feature lanes, but in reality they somewhat blur together now, as a new release of .NET generally means a new release of C#. And features built inside .NET are typically built on the back of new C# 10 features.

That being said, I thought it might be worthwhile doing a recap of the features I’m most excited about. This is not an exhaustive list of every single feature we should expect come release time in November, but instead, a nice little retrospective on what’s coming, and what it means going forward as a C#/.NET Developer.

Minimal API Framework

The new Minimal API framework is in full swing and allows you to build an API without the huge ceremony of startup files. If you liked the approach in NodeJS of “open the main.js file and go”, then you’ll like the new Minimal API framework. I highly suggest that everyone take a look at this feature, because I suspect it’s going to become very, very popular given the modern-day love for microservices architectures.

https://dotnetcoretutorials.com/2021/07/16/building-minimal-apis-in-net-6/
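
To give a taste of how little ceremony is involved, a complete .NET 6 minimal API can be sketched in three lines:

var app = WebApplication.Create(args);
app.MapGet("/", () => "Hello World!");
app.Run();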

DateOnly and TimeOnly Types

This is a biggie in my opinion. The ability to now specify types as being *only* a date or *only* a time is huge. No more rinky-dink coding using a DateTime with no time portion, for example.

https://dotnetcoretutorials.com/2021/09/07/dateonly-and-timeonly-types-in-net-6/
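
A quick sketch of the new types:

var date = DateOnly.FromDateTime(DateTime.Now); // a date with no time portion at all
var time = new TimeOnly(17, 30);                // 5:30 PM, with no date attached
Console.WriteLine($"{date} at {time}");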

LINQ OrDefault Enhancements

Not as great as it sounds on the tin, but being able to specify exactly what the “OrDefault” will return as a default can be handy in some cases.

https://dotnetcoretutorials.com/2021/09/02/linq-ordefault-enhancements-in-net-6/

Implicit Using Statements

Different project types can now implicitly import using statements globally so you don’t have to. e.g. No more writing “using System;” at the top of every single file. However, this particular feature has been slightly walked back and is not turned on by default. Still interesting nonetheless.

https://dotnetcoretutorials.com/2021/08/31/implicit-using-statements-in-net-6/

IEnumerable Chunk

Much handier than it sounds at first glance. More sugar than anything, but the ability for the framework to handle “chunking” a collection for you will see a lot of use in the future.

https://dotnetcoretutorials.com/2021/08/12/ienumerable-chunk-in-net-6/
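
A small sketch of what the framework now handles for you (assuming .NET 6; the chunk size of 3 is arbitrary):

var numbers = Enumerable.Range(1, 10);

// Splits the sequence into arrays of at most 3 elements.
foreach (var chunk in numbers.Chunk(3))
{
    Console.WriteLine(string.Join(",", chunk)); // 1,2,3 then 4,5,6 then 7,8,9 then 10
}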

SOCKS Proxy Support

Somewhat surprisingly, .NET has never supported SOCKS proxies until now. I can’t say I’ve ever run into this issue myself, but I can definitely see it being a right pain to get halfway through a project build and realize that you can’t use SOCKS. But it’s here now, at least!

https://dotnetcoretutorials.com/2021/07/11/socks-proxy-support-in-net/
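
Wiring up a SOCKS proxy is just a matter of the proxy address scheme. A sketch, with socks5://127.0.0.1:1080 as a placeholder for your own proxy address:

using System.Net;

var handler = new HttpClientHandler
{
    // socks4://, socks4a:// and socks5:// schemes are now understood.
    Proxy = new WebProxy("socks5://127.0.0.1:1080")
};

var client = new HttpClient(handler);
var response = await client.GetAsync("https://example.com/");
Console.WriteLine(response.StatusCode);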

Priority Queue

Another feature where it’s surprising it was never here until now. The ability to give queue items a priority will be a huge help to many. This is likely to see a whole heap of use in the coming years.

https://dotnetcoretutorials.com/2021/03/17/priorityqueue-in-net/
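
A quick sketch of the new PriorityQueue<TElement, TPriority> (the items and priorities here are made up; lower priority values dequeue first):

var queue = new PriorityQueue<string, int>();

queue.Enqueue("send newsletter", 5);
queue.Enqueue("fix production bug", 1);
queue.Enqueue("refactor tests", 3);

// Dequeues in priority order: 1, then 3, then 5.
while (queue.TryDequeue(out var item, out var priority))
{
    Console.WriteLine($"{priority}: {item}");
}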

MaxBy/MinBy

How have we lived without this until now? The ability to find the “max” of a property on a complex object, but then return the complete object. It replaces the cost of doing a full OrderBy and then picking the first item. Very handy!

https://dotnetcoretutorials.com/2021/09/09/maxby-minby-in-net-6/
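
To illustrate (a minimal sketch; the Person record and values are made up):

var people = new List<Person>
{
    new("Alice", 35),
    new("Bob", 28),
};

// Returns the whole Person, not just the Age value,
// and without sorting the entire list first.
var oldest = people.MaxBy(p => p.Age);   // Alice, 35
var youngest = people.MinBy(p => p.Age); // Bob, 28

record Person(string Name, int Age);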

Global Using Statements

The feature that makes Implicit Using Statements possible. Essentially, it is the ability to declare a using statement once in your project and not have to clutter the top of every single file importing the exact same things over and over again. This will see use from day one.

https://dotnetcoretutorials.com/2021/08/19/global-using-statements-in-c10/
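
By convention these tend to live together in a single file; something like this (a sketch, with GlobalUsings.cs as a hypothetical file name):

// GlobalUsings.cs: these apply to every file in the project.
global using System.Text;
global using System.Text.Json;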

File Scoped Namespaces

More eye candy than anything. Being able to declare a namespace without braces serves to save you one tab to the right.

https://dotnetcoretutorials.com/2021/09/20/file-scoped-namespaces-in-c-10/
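
The before-and-after is exactly what you would expect (a sketch; the namespace and class names are made up):

// C# 10 file-scoped namespace: one line, no braces,
// and the class below no longer needs an extra level of indentation.
namespace MyApp.Services;

public class GreetingService
{
    public string Greet(string name) => $"Hello, {name}!";
}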

What’s Got You Excited?

For me, I’m super pumped about the Minimal API framework. The low ceremony is just awesome for quick APIs that needed to be shipped yesterday. Besides that, I think DateOnly and TimeOnly will see a tonne of use from day one, and I imagine that new .NET developers won’t even think twice about the fact that we went 20-odd years with only DateTime.

How about you? What are you excited about?


Car Registration #API now available via #NuGET

NuGet is the de-facto package manager for .NET, and, in what was perhaps a major oversight, the Car Registration API was never available as a NuGet package.

We’ve put this live today, here: https://www.nuget.org/packages/LicensePlateAPI/ and here are the steps to use it;

Install the following three NuGet packages;

Install-Package LicensePlateAPI
Install-Package System.ServiceModel.Primitives
Install-Package System.ServiceModel.Http

Then, assuming you’ve already opened an account, here is some sample code;

var client = LicensePlateAPI.API.GetClient();
// Note: .Result blocks the calling thread; prefer await in async code.
var car = client.CheckAsync("{LICENSE PLATE}", "{USERNAME}").Result;
Console.WriteLine(car.vehicleJson);

Where evidently {LICENSE PLATE} and {USERNAME} are placeholders. “CheckAsync” checks for UK license plates, but you can change this to any country by using CheckUSAAsync or Check<Country>Async.

Enjoy!

Intercept #AJAX “open” statements in #JavaScript

If you want to change the default behaviour of AJAX across your website, perhaps to make sure that every AJAX call is logged before executing, or that it is somehow audited for security before being made, you can use interceptor scripts in JavaScript. These override the default functionality of the XMLHttpRequest object that is behind every AJAX call, even if a library like jQuery is used on top of it.

So, for instance, if you wanted to catch the body of all POST requests sent via AJAX, you could do this;

(function(send) {
    XMLHttpRequest.prototype.send = function(body) {
        // Show the request body before passing it on to the original send.
        var info = "send data\r\n" + body;
        alert(info);
        send.call(this, body);
    };
})(XMLHttpRequest.prototype.send);

Or, if you wanted to change the destination of all AJAX requests such that all communications are sent via a logging service first, then you could do this;

(function(open) {
    XMLHttpRequest.prototype.open = function(verb, url, async, user, password) {
        // Redirect every request to the logging service, preserving the
        // original destination in a custom header.
        open.call(this, verb, "https://somewhere.com/log", async, user, password);
        this.setRequestHeader("X-Original-URL", url);
    };
})(XMLHttpRequest.prototype.open);

Where somewhere.com/log is obviously fictitious.

Hope this is useful to somebody!

Deploying Angular with ASP.NET MVC 5 on IIS

This blog post is about deploying Angular with ASP.NET MVC 5 on IIS. Recently I saw a discussion in K-MUG and was consulted on an issue about deploying Angular with ASP.NET MVC on IIS, so I thought of writing a blog post around it. In this blog post I am using Angular 12 and ASP.NET MVC 5. First I created an ASP.NET MVC project using Visual Studio, and then in the root folder I created an Angular project using the ng new Frontend --minimal command. Once that was done, I added Bootstrap to the Angular project using the npm install bootstrap command in the Frontend folder. Next I modified the angular.json file to use the Bootstrap style and script, and I also modified the output path property to Scripts/Dist. Here is the code inside the angular.json file.

"options": {
    "outputPath": "../Scripts/Dist",
    "index": "src/index.html",
    "main": "src/main.ts",
    "polyfills": "src/polyfills.ts",
    "tsConfig": "tsconfig.app.json",
    "assets": [
        "src/favicon.ico",
        "src/assets"
    ],
    "styles": [
        "src/styles.css",
        "node_modules/bootstrap/dist/css/bootstrap.min.css"
    ],
    "scripts": [
        "node_modules/bootstrap/dist/js/bootstrap.min.js"
    ]
}

Please make sure the Dist folder is included in the project; otherwise the Dist folder won’t be deployed to the publish folder. Next I modified the BundleConfig.cs file to bundle the scripts and styles generated by the Angular CLI and to use them as script references.

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.Add(new Bundle("~/bundles/angular")
            .Include(new[] {
                "~/Scripts/Dist/runtime.*",
                "~/Scripts/Dist/polyfills.*",
                "~/Scripts/Dist/scripts.*",
                "~/Scripts/Dist/vendor.*",
                "~/Scripts/Dist/main.*"
            }));

        bundles.Add(new StyleBundle("~/bundles/angular-css")
            .Include(new[] {
                "~/Scripts/Dist/styles.*"
            }));
    }
}

I am using the Bundle class instead of ScriptBundle; if you use ScriptBundle, you might get some errors related to minification. Next I modified the Index.cshtml file like this.

@{
    Layout = null;
}

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    @Styles.Render("~/bundles/angular-css")
</head>
<body>
    <div class="container">
        <main role="main" class="pb-3">
            <app-root></app-root>
        </main>
    </div>
    @Scripts.Render("~/bundles/angular")
</body>
</html>

Please note, I removed the _Layout.cshtml reference and moved all the HTML code to the app.component.ts file.

import { Component } from '@angular/core';

@Component({
    selector: 'app-root',
    template: `
        <div class="jumbotron">
            <h1>ASP.NET</h1>
            <p class="lead">ASP.NET is a free web framework for building great Web sites and Web applications using HTML, CSS and JavaScript.</p>
            <p><a href="https://asp.net" class="btn btn-primary btn-lg">Learn more &raquo;</a></p>
        </div>
    `,
    styles: []
})
export class AppComponent {
    title = 'Frontend';
}

Next I modified the package.json file to include a command that builds Angular production builds.

"scripts": {
    "ng": "ng",
    "start": "ng serve",
    "build": "ng build",
    "watch": "ng build --watch --configuration development",
    "prod": "ng build --configuration production --vendor-chunk=true"
}

For development builds I am using the build script, and for production builds I am using the prod script. Next, let us modify the project properties and include a pre-build event command line like this.

if "$(ConfigurationName)" == "Debug" (
    npm run build --prefix $(ProjectDir)Frontend
) ELSE (
    npm run prod --prefix $(ProjectDir)Frontend
)

You can right-click on the project, select the Properties menu, and then select Build Events.

When you build the app, you will see the npm build commands running in the build log.

I included the if/else logic because I don’t want a separate Angular terminal running all the time, although it can make each build take some time since the Angular build runs every time. As an alternative, you can keep a terminal window open running the npm run watch command, so that the Angular app is recompiled whenever you modify a TypeScript file, and change the pre-build event command line like this.

if "$(ConfigurationName)" == "Release" (
    npm run prod --prefix $(ProjectDir)Frontend
)

That way, it will be executed only when you build or publish the app in Release mode.

Now we are ready to publish the app to IIS. I am using the folder publish method: right-click on the project and select the Publish menu, then choose the Folder publish option. I am continuing with the default options, then clicking on the Publish button. Once you click the Publish button, you will see the publish output in the build log.

If you look at the log, you will be able to see that the Angular release configuration script is invoked. Next, you can create an app in IIS and point it to the published directory.

Now let’s browse the application. It will display the page, and if you view the source you will be able to see the bundled script and style references.

This is how you can prepare, develop, and deploy an ASP.NET MVC application with Angular to IIS.

Happy Programming 🙂