Working with model validation in Minimal APIs

This post is about implementing model validation in ASP.NET Core Minimal APIs. Minimal APIs do not come with any built-in support for validation. In this post we will explore how to build a small validator ourselves, and then look at some other libraries that can be used to implement validation.

You can implement a minimal validation library compatible with the existing validation attributes, like this.

public interface IMinimalValidator
{
    ValidationResult Validate<T>(T model);
}

public class MinimalValidator : IMinimalValidator
{
    public ValidationResult Validate<T>(T model)
    {
        var result = new ValidationResult()
        {
            IsValid = true
        };
        var properties = typeof(T).GetProperties();
        foreach (var property in properties)
        {
            var customAttributes = property.GetCustomAttributes(typeof(ValidationAttribute), true);
            foreach (var attribute in customAttributes)
            {
                var validationAttribute = attribute as ValidationAttribute;
                if (validationAttribute != null)
                {
                    var propertyValue = property.CanRead ? property.GetValue(model) : null;
                    var isValid = validationAttribute.IsValid(propertyValue);

                    if (!isValid)
                    {
                        if (result.Errors.ContainsKey(property.Name))
                        {
                            var errors = result.Errors[property.Name].ToList();
                            errors.Add(validationAttribute.FormatErrorMessage(property.Name));
                            result.Errors[property.Name] = errors.ToArray();
                        }
                        else
                        {
                            result.Errors.Add(property.Name, new string[] { validationAttribute.FormatErrorMessage(property.Name) });
                        }

                        result.IsValid = false;
                    }
                }
            }
        }

        return result;
    }
}

public class ValidationResult
{
    public bool IsValid { get; set; }
    public Dictionary<string, string[]> Errors { get; set; } = new Dictionary<string, string[]>();
}

And you can inject this as a service in your pipeline and use it like this.

builder.Services.AddScoped<IMinimalValidator, MinimalValidator>();
var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();
app.MapPost("/bookmarks", async (BookmarkDbContext bookmarkDbContext, Link link, IMinimalValidator minimalValidator) =>
{
    var validationResult = minimalValidator.Validate(link);
    if (validationResult.IsValid)
    {
        await bookmarkDbContext.Links.AddAsync(link);
        await bookmarkDbContext.SaveChangesAsync();
        return Results.Created($"/{link.Id}", link);
    }
    return Results.ValidationProblem(validationResult.Errors);
}).WithName("AddBookmark").ProducesValidationProblem(400).Produces(201);

In the MinimalValidator class, we use reflection to find the ValidationAttribute instances on each property and invoke their IsValid method; if a value is not valid, we call FormatErrorMessage and add the error message to the result's Errors dictionary. It is a very minimal implementation, and I didn't test it with all the validation attributes and custom validator implementations. It also will not work if the model object is a collection.

Damian Edwards, a PM Architect on the .NET team at Microsoft, has already created a library, https://github.com/DamianEdwards/MiniValidation, which helps you do the same, and it is available as a NuGet package. Here is an example using the MiniValidation NuGet package.

app.MapPost("/bookmarks", async (BookmarkDbContext bookmarkDbContext, Link link) =>
{
    if (MiniValidator.TryValidate(link, out var errors))
    {
        await bookmarkDbContext.AddAsync(link);
        await bookmarkDbContext.SaveChangesAsync();
        return Results.Created($"/{link.Id}", link);
    }
    return Results.ValidationProblem(errors);
}).WithName("AddBookmark").ProducesValidationProblem(400).Produces(201);

This will show the errors like this in the Open API page.

This package offers validation support for collection type models. As it is a static class, there is no need to inject it into the pipeline. I think that becomes a challenge for unit testing, but you can wrap it in a service and test against that.
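A minimal sketch of such a wrapper (the IMiniValidationService interface and its implementation are my own illustration, not part of the MiniValidation package):

using MiniValidation;

public interface IMiniValidationService
{
    bool TryValidate<T>(T model, out IDictionary<string, string[]> errors);
}

public class MiniValidationService : IMiniValidationService
{
    // Delegates to the static MiniValidator so endpoints can depend on an
    // interface that is easy to mock in unit tests.
    public bool TryValidate<T>(T model, out IDictionary<string, string[]> errors)
        => MiniValidator.TryValidate(model, out errors);
}

Register it with builder.Services.AddScoped<IMiniValidationService, MiniValidationService>(); and inject it into the endpoint just like the IMinimalValidator shown earlier.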

Next we can use FluentValidation, a popular validation library. It can't be used with the existing validation attributes; you need to write the validation code explicitly. To use it, first install the FluentValidation.AspNetCore package, then register the validation services in the pipeline and create validator classes by inheriting from the AbstractValidator class.
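The registration call looks like this:

builder.Services.AddFluentValidation(v =>
    v.RegisterValidatorsFromAssemblyContaining<Program>());

Here is an example validator for the Link model.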

public class LinkValidator : AbstractValidator<Link>
{
    public LinkValidator()
    {
        RuleFor(x => x.Url)
            .NotNull().WithMessage("Url is required")
            .Must(uri => Uri.TryCreate(uri, UriKind.Absolute, out _)).WithMessage("Url must be valid");
    }
}

And in the HTTP Post, you can use it like this.

app.MapPost("/bookmarks", async (BookmarkDbContext bookmarkDbContext, Link link, IValidator<Link> validator) =>
{
    var validationResult = validator.Validate(link);
    if (validationResult.IsValid)
    {
        await bookmarkDbContext.Links.AddAsync(link);
        await bookmarkDbContext.SaveChangesAsync();
        return Results.Created($"/{link.Id}", link);
    }
    return Results.ValidationProblem(validationResult.ToDictionary());
}).WithName("AddBookmark").ProducesValidationProblem(400).Produces(201);

I created an extension method which converts FluentValidation.Results.ValidationResult to a dictionary, like this. Otherwise we can't return Results.ValidationProblem from the API endpoint.

public static class FluentValidationExtensions
{
    public static IDictionary<string, string[]> ToDictionary(this ValidationResult validationResult)
    {
        return validationResult.Errors
            .GroupBy(x => x.PropertyName)
            .ToDictionary(
                g => g.Key,
                g => g.Select(x => x.ErrorMessage).ToArray()
            );
    }
}

It will give you exactly the same results as we saw in the screenshot. Each method we used has its own pros and cons. Choose a validation library based on your requirements until the ASP.NET Core team offers one out of the box for Minimal APIs.

Happy Programming 🙂

What’s new in Windows Forms in .NET 6.0

We continue to support and innovate in the Windows Forms runtime. Let's recap what we've done in .NET 6.0.

Accessibility improvements and fixes

Making Windows Forms applications more accessible to more users is one of the big goals for the team. Building on the momentum we gained in the .NET 5.0 timeframe, in this release we delivered further improvements, including but not limited to the following:

Improved support for assistive technology when using Windows Forms apps. UIA providers enable tools like Narrator and others to interact with the elements of an application. UIA is also often used to create test automation to drive apps.
We have now added UIA provider support for the following controls:

CheckedListBox
LinkLabel
Panel
ScrollBar
TabControl
TrackBar

Improved Narrator announcements in DataGridView, ErrorProvider and ListView column header controls.
Keyboard tooltips for the TabControl’s TabPage and the TreeView’s TreeNode controls.

ScrollItem Control Pattern support for ComboBoxItemAccessibleObject.
Corrected control types for better support of Text Control Patterns.

ExpandCollapse Control Pattern support for the DateTimePicker control.

Invoke Control Pattern support for the UpDownButtons component in DomainUpDown and NumericUpDown controls.
Improved color contrast in the following controls:

CheckedListBox
DataGridView
Label
PropertyGridView
ToolStripButton


Application bootstrap

In .NET Core 3.0 we started to modernize and rejuvenate Windows Forms. As part of that initiative we changed the default font to Segoe UI, 9f (dotnet/winforms#656), and quickly learned that a great number of things depended on the old default font's metrics. For example, the designer was no longer true WYSIWYG, because the Visual Studio process runs on .NET Framework 4.7.2 and uses the old default font (Microsoft Sans Serif, 8.25f), while the .NET application at runtime uses the new font. This change also made it harder for some customers to migrate their large applications with pixel-perfect layouts. Whilst we had provided migration strategies, applying those across hundreds of forms and controls could be a significant undertaking.

To make it easier to migrate those pixel-perfect apps we introduced a new API (for more details refer to the Application-wide default font post):

void Application.SetDefaultFont(Font font)

However, this API wasn’t sufficient to address the designer’s ability to render forms and controls with the same new font. At the same time, with our sister teams heavily pushing for little code/low ceremony application templates, our Program.cs and its Main() method started looking very dated, and we decided to follow the general .NET trend and trim the boilerplate. Please welcome the new Windows Forms application bootstrap:

class Program
{
    [STAThread]
    static void Main()
    {
        ApplicationConfiguration.Initialize();
        Application.Run(new Form1());
    }
}

ApplicationConfiguration.Initialize() is a source generated API that behind the scenes emits the following calls:

Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.SetDefaultFont(new Font(…));
Application.SetHighDpiMode(HighDpiMode.SystemAware);

The parameters of these calls are configurable via MSBuild properties in csproj or props files.
The Windows Forms designer in Visual Studio 2022 is also aware of these properties (for now it only reads the default font), and can show you your application (C#, .NET 6.0 and above) as it would look at runtime:

(We know, the form in the designer still has that Windows XP look, We’re working on it…)

Please note that Visual Basic handles these application-wide default values differently. In .NET 6.0 Visual Basic introduces a new application event ApplyApplicationDefaults which allows you to define application-wide settings (e.g., HighDpiMode or the default font) in the typical Visual Basic way. The designer support for the default font configured via MSBuild properties is also coming in the near future. For more details head over to the dedicated Visual Basic blog post discussing what’s new in Visual Basic.


Template updates

As mentioned above, we have updated our C# templates in line with related changes in .NET workloads. Windows Forms templates for C# have been updated to support global using directives, file-scoped namespaces, and nullable reference types. Because a typical Windows Forms app requires an STAThread attribute and consists of multiple types split across multiple files (e.g., Form1.cs and Form1.Designer.cs), top-level statements are notably absent from the Windows Forms templates. However, the updated templates do include the application bootstrap code.
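For example, a newly generated Form1.cs looks roughly like the following sketch (the namespace name is just a placeholder; the implicit global usings bring System.Windows.Forms into scope):

namespace MyWinFormsApp;   // file-scoped namespace

public partial class Form1 : Form
{
    public Form1()
    {
        // InitializeComponent lives in the companion designer file, Form1.Designer.cs.
        InitializeComponent();
    }
}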


More runtime designers

We have completed porting missing designers and designer-related infrastructure that enable building a general-purpose designer (e.g., a report designer). For more details refer to our earlier announcement.

If you think we missed a designer that your application depends on, please let us know at our GitHub repository.


High DPI and scaling fixes

We’ve been working through the high DPI space with the aim to get Windows Forms applications to correctly support PerMonitorV2 mode out of the box. It is a challenging undertaking, and sadly we couldn’t achieve as much as we’d hoped. Still in this release we made some progress, and we now can:

Create controls with the same DPI awareness as the application
Correctly scale ContainerControls and MDI child windows in PerMonitorV2 mode in most scenarios. There are still a few specific scenarios (e.g., anchoring) and controls (e.g., MonthCalendar) where the experience remains subpar.


Other notable changes

New overloads for the Control.Invoke() and Control.BeginInvoke() methods take Action and Func<T> and allow writing more modern and concise code (a short sketch follows this list).
The new Control.IsAncestorSiteInDesignMode API is complementary to Component.DesignMode, and indicates whether one of the ancestors of this control is sited and that site is in design mode. A dedicated blog post exploring this API is coming later, so stay tuned.
The Windows 11 style default tooltip behavior keeps the tooltip open while the mouse hovers over it, instead of dismissing it automatically. The tooltip can be dismissed with the CONTROL or ESCAPE keys.
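A minimal sketch of the new Invoke/BeginInvoke overloads (the control names here are made up for illustration):

// Marshal work onto the UI thread without delegate casts or object[] arguments.
statusLabel.Invoke(() => statusLabel.Text = "Done");

// The Func<T> overload returns a value computed on the UI thread.
int width = statusLabel.Invoke(() => statusLabel.Width);

// Fire-and-forget variant.
statusLabel.BeginInvoke(() => progressBar.Value = 100);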


Community contributions

We’d like to call out a few community contributions:

@paul1956 updated the NotifyIcon.Text character limit to 127 characters (dotnet/winforms#4363).

@weltkante enhanced FolderBrowserDialog with InitialDirectory and ClientGuid properties in dotnet/winforms#4645.

@weltkante added link span to LinkClickedEventArgs (dotnet/winforms#4708) making it easier to migrate RichTextBox functionality targeting RichEdit v3.0 or below that relied on hidden text to render hyperlinks.

@AraHaan updated the good old MessageBox with two new buttons Try Again and Continue, and made it possible to show four buttons at the same time (dotnet/winforms#4746):

@kant2002 has been helping us make the Windows Forms runtime more ILLink/NativeAOT-friendly by adding ComWrappers and removing redundant RCWs (dotnet/winforms#5174 and dotnet/winforms#4971).

@kirsan31 provided the ability to anchor minimized MDI children to TopLeft to match Windows MFC behavior in dotnet/winforms#5221.


Reporting bugs and suggesting features

If you have any comments or suggestions, or you have faced some issues, please let us know! Submit Visual Studio and Designer related issues via Visual Studio Feedback (look for a button in the top right corner in Visual Studio), and Windows Forms runtime related issues at our GitHub repository.

Happy coding!


GraphQL multiple requests and EF Core DbContext

GraphQL supports multiple operations in a single query, so you can query multiple objects in a single request. Here is an example.

query {
a:links {
title
url
description
imageUrl
}
b:links {
title
url
description
imageUrl
}
c:links {
title
url
description
imageUrl
}
}

In this query we are requesting the same information in parallel – it could be any query operation; for demo purposes we are using the same one. If you execute this query, you will see a result like this.

It shows a concurrency exception, because the DbContext is not thread safe. To fix this issue we can use AddDbContextFactory – an extension method introduced in .NET 5.0 – which registers a factory instead of registering the context type directly and allows for easy creation of new DbContext instances. In our code, we then need to manage the DbContext object ourselves.
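For illustration, this is roughly what managing the context yourself looks like once the factory is registered (the LinkService class below is hypothetical and not part of this post's code; the HotChocolate attributes shown later do the equivalent work for GraphQL resolvers):

using Microsoft.EntityFrameworkCore;

public class LinkService
{
    private readonly IDbContextFactory<BookmarkDbContext> _contextFactory;

    public LinkService(IDbContextFactory<BookmarkDbContext> contextFactory)
        => _contextFactory = contextFactory;

    public async Task<List<Link>> GetLinksAsync()
    {
        // Each call creates its own short-lived DbContext, so parallel callers
        // never share a single context instance.
        await using var bookmarkDbContext = _contextFactory.CreateDbContext();
        return await bookmarkDbContext.Links.ToListAsync();
    }
}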

Let’s update the code to use AddDbContextFactory() method.

using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDbContextFactory<BookmarkDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("BookmarkDbConnection")));
builder.Services.AddGraphQLServer().AddQueryType<Query>().AddProjections().AddFiltering().AddSorting();
var app = builder.Build();

app.MapGet("/", () => "Hello World!");
app.UseRouting();
app.UseEndpoints(endpoints =>
{
    endpoints.MapGraphQL();
});
app.Run();

And we need to modify the query class as well.

public class Query
{
    [UseDbContext(typeof(BookmarkDbContext))]
    [UseProjection]
    [UseFiltering]
    [UseSorting]
    public IQueryable<Link> Links([ScopedService] BookmarkDbContext bookmarkDbContext)
        => bookmarkDbContext.Links;
}

Now you can run the app again and you will be able to fetch the results without any issue.

You can find the source code on GitHub.

Happy Programming 🙂

GraphQL in ASP.NET Core with EF Core

This post is about GraphQL in ASP.NET Core with EF Core. In the earlier post I discussed integrating GraphQL into ASP.NET Core with HotChocolate. In this post I will discuss how to use GraphQL on top of EF Core.

First I will add the NuGet packages required to work with EF Core – Microsoft.EntityFrameworkCore.SqlServer and Microsoft.EntityFrameworkCore.Design – the latter is optional, but since I am running migrations it is required here. Next I am modifying the code – adding a DbContext and wiring the DbContext into the application. Here is the DbContext code and the updated Query class.

public class Link
{
    public int Id { get; set; }
    public string Url { get; set; }
    public string Title { get; set; }
    public string Description { get; set; }
    public string ImageUrl { get; set; }
    public DateTime CreatedOn { get; set; }
    public ICollection<Tag> Tags { get; set; } = new List<Tag>();
}

public class Tag
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int LinkId { get; set; }
    public Link Link { get; set; }
}

public class BookmarkDbContext : DbContext
{
    public BookmarkDbContext(DbContextOptions options) : base(options)
    {
    }
    public DbSet<Link> Links { get; set; }
    public DbSet<Tag> Tags { get; set; }
}

I wrote the OnModelCreating method to seed the database. And I modified the code in Program.cs and added the DbContext class.

using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDbContext<BookmarkDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("BookmarkDbConnection")));
builder.Services.AddGraphQLServer().AddQueryType<Query>();
var app = builder.Build();

app.MapGet("/", () => "Hello World!");
app.UseRouting();
app.UseEndpoints(endpoints =>
{
    endpoints.MapGraphQL();
});
app.Run();

And the Query class is modified like this.

public class Query
{
    public IQueryable<Link> Links([Service] BookmarkDbContext bookmarkDbContext)
        => bookmarkDbContext.Links;
}

In this code the Service attribute helps inject the DbContext into the method. Next let's run the application and execute a query.

query {
links{
id
url
title
imageUrl
description
createdOn
}
}

We will be able to see a result like this.

Next let us remove some fields from the query and run it again.

query {
links{
title
imageUrl
}
}

We can see the result like this.

And when we look at the EF Core log, we will be able to see the EF Core SQL log like this.

In the log, even though we are querying only two fields, EF Core is selecting all the columns. We can fix this issue by adding a new NuGet package, HotChocolate.Data.EntityFramework, and modifying the code like this.

using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDbContext<BookmarkDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("BookmarkDbConnection")));
builder.Services.AddGraphQLServer().AddQueryType<Query>().AddProjections().AddFiltering().AddSorting();
var app = builder.Build();

app.MapGet("/", () => "Hello World!");
app.UseRouting();
app.UseEndpoints(endpoints =>
{
    endpoints.MapGraphQL();
});
app.Run();

And modify the Query class as well, decorating it with the HotChocolate attributes for projections, filtering and sorting.

public class Query
{
    [UseProjection]
    [UseFiltering]
    [UseSorting]
    public IQueryable<Link> Links([Service] BookmarkDbContext bookmarkDbContext)
        => bookmarkDbContext.Links;
}

Now let's run the query again and check the logs.

We can see that only the required fields are queried, not every field in the table.

This way you can configure GraphQL in ASP.NET Core with EF Core. This code will fail if you try to execute a GraphQL query with aliases; we can use a DbContext factory to fix that issue. We will look into it in the next blog post.

Happy Programming 🙂

Copying signals and dashboards using seqcli templates

Templates make it easy to copy entities like signals and dashboards from one Seq server to one or more others:

Between isolated dev, test, and production environments,
Through blog posts and support documentation, and
Between developers on a team.

(We’re really excited about the last one, because keeping a stash of relevant signals and dashboards will be a great way to get new developers up and running with a useful local Seq instance when joining a project.)

This post covers the fundamentals, but templates are really self-explanatory, and you should be able to achieve just about anything you need to using the information here, in conjunction with the seqcli help template export and seqcli help template import command-line help.

An example signal and dashboard

Our example Seq instance has a selection of signals that identify log events raised when the Seq Cafe roastery receives orders and ships them.

Source server with signals for order lifecycle events.

We also have a nice order status dashboard that shows orders flowing through the roastery:

Orders dashboard showing totals for various order lifecycle events.

We’ll move both the signals and related dashboard to a fresh Seq instance using templates.

The important thing that templates account for is the relationship between the two: the dashboard, when it's imported into another Seq instance, needs a reference to the imported copies of the signals.

Exporting the template

The first step when exporting entities from a Seq server is to make a directory for the template files:

mkdir cafe
cd cafe

Then, pointing seqcli at the source server (and using an API key, if required), run:

seqcli template export -s https://source.example.com

The default output location is ., so if you run ls you should see a list of files along these lines:

dashboard-Orders.template
signal-Order Abandoned.template
signal-Order Created.template
signal-Order Placed.template
signal-Order Shipped.template

I’ve deleted everything except for the dashboard and signals we intend to export.

If the entities you’re expecting aren’t there, you may need to share them, or identify them explicitly on the template export command-line using -i <id>. By default, seqcli will only export shared entities.

Peeking inside the template files

The template files are plain text. Here are the first dozen or so lines of dashboard-Orders.template:

{
  "$entity": "dashboard",
  "OwnerId": null,
  "Title": "Orders",
  "IsProtected": false,
  "SignalExpression": null,
  "Charts": [
    {
      "Title": "Order Lifecycle",
      "SignalExpression": null,
      "Queries": [
        {
          "Measurements": [
            {
              "Value": "count(@EventType = 0x8CC54029)",
              "Label": "shipped"
            },

If you’ve spent time using Seq’s HTTP API, the entity structure here will be familiar.

Template files are JSON with placeholders. A little farther down in the dashboard template, where the dashboard makes reference to one of the associated signals, you’ll see a placeholder:

{
  "Title": "Created",
  "SignalExpression": {
    "SignalId": ref("signal-Order Created.template"),
    "Kind": "Signal"
  },
  "Queries": [

The function-call-like ref("signal-Order Created.template") looks up the id of the signal that was imported from the signal-Order Created.template file on the target server.

Importing again

Importing the template into the target server is as easy as:

seqcli template import -s https://dest.example.com --state ./dest.state

You’ll notice that the import command accepts the path of a state file: this file tracks the mapping between templates and the entity ids assigned to them on the target Seq server. If the same state file is used in later imports, entities that already exist will be updated instead of duplicated.

Here’s our target server, with the dashboard and signals imported 😎.

The orders dashboard imported into the target server.

Getting seqcli

The Seq installer for Windows includes a copy of seqcli, so you’ll find it’s already on the PATH if you have Seq installed on that OS.

For macOS and Linux, binaries can be downloaded from GitHub, or, you can docker run the datalust/seqcli container.

Have fun!

Azure Active Directory’s gateway is on .NET 6.0!

Azure Active Directory’s gateway service is a reverse proxy that fronts hundreds
of services that make up Azure Active Directory (Azure AD). If you’ve used
services such as office.com, outlook.com, portal.azure.com or xbox.live.com,
then you’ve used Azure AD’s gateway. The gateway provides features such as TLS
termination, automatic failovers/retries, geo-proximity routing, throttling, and
tarpitting to services in Azure AD. The gateway is present in 54 Azure
datacenters worldwide and serves ~185 Billion requests each day. Up until
recently, Azure AD’s gateway was running on .NET 5.0. As of September 2021, it’s
running on .NET 6.0.

Efficiency gains by moving to .NET 6.0

The below image shows that application CPU utilization dropped by 33% for
the same traffic volume after moving to .NET 6.0 on our production fleet.

The above meant that our application efficiency went up by 50% (serving the same requests per second with roughly two-thirds of the CPU works out to about a 1.5x improvement). Application efficiency is one of the key metrics we use to measure performance and is defined as

Application efficiency = (Requests per second) / (CPU utilization of application)

Changes made in .NET 6.0 upgrade

Along with the .NET 6.0 upgrade, we made two major changes:

Migrated from IIS to HTTP.sys server. This was made possible by new features in .NET 6.0.
Enabled dynamic PGO (profile-guided optimization). This is a new feature of .NET 6.0.

The following sections will describe each of those changes in more detail.

Migrating from IIS to HTTP.sys server

There are 3 server options to pick from in ASP.NET Core:

Kestrel
HTTP.sys server
IIS

A previous blog post describes why Azure AD gateway chose IIS as the server to run on during our .NET Framework 4.6.2 to .NET Core 3.1 migration. During the .NET 6.0 upgrade, we migrated from IIS to HTTP.sys server. Kestrel was not chosen due to the lack of certain TLS features our service depends on (support is expected by June 2022 in Windows Server 2022).

By migrating from IIS to HTTP.sys server, Azure AD gateway saw the following
benefits:

A 27% increase in application efficiency.

Deterministic queuing model: HTTP.sys server runs on a single-queue system, whereas IIS has an internal queue on top of the HTTP.sys queue. The double-queue system in IIS results in unique performance problems (especially in high concurrency situations, although issues in IIS can potentially be offset by tweaking Windows registry keys such as HKLM:\SYSTEM\CurrentControlSet\Services\W3SVC\Performance\ReceiveRequestPending). By removing IIS and moving to a single-queue system on HTTP.sys, queuing issues that arose due to rate mismatches in the double-queue system disappeared as we moved to a deterministic model.

Improved deployment and autoscale experience: The move away from IIS
simplifies deployment since we no longer need to install/configure IIS and
ANCM
before starting the website. Additionally, TLS configuration is easier and
more resilient as it needs to be specified at just one layer (HTTP.sys)
instead of two as it had been with IIS.

The following showcase some of the changes that were made while moving from IIS
to HTTP.sys server:

TLS renegotiation: Renegotiation provides the ability to do optional client certificate negotiation
based on HTTP constructs such as request path.

Example: On IIS, during the initial TLS handshake with the client, the server
can be configured to not request a client certificate. However, if the path
of the request contains, say “foo”, IIS triggers a TLS renegotiation and
requests a client certificate.

The following web.config configuration in IIS is how path based TLS
renegotiation is enabled on IIS:

<location path="foo">
  <system.webServer>
    <security>
      <access sslFlags="Ssl, SslNegotiateCert, SslRequireCert"/>
    </security>
  </system.webServer>
</location>

In HTTP.sys server hosting (.NET 6.0 and up), the above configuration is
expressed in code by calling
GetClientCertificateAsync()
as below.

// default renegotiate timeout in http.sys is 120 seconds.
const int RenegotiateTimeOutInMilliseconds = 120000;
X509Certificate2 cert = null;
if (httpContext.Request.Path.StartsWithSegments("/foo"))
{
    if (httpContext.Connection.ClientCertificate == null)
    {
        using (var ct = new CancellationTokenSource(RenegotiateTimeOutInMilliseconds))
        {
            cert = await httpContext.Connection.GetClientCertificateAsync(ct.Token);
        }
    }
}

In order for GetClientCertificateAsync() to trigger a renegotiation, the
following setting should be set in
HttpSysOptions

options.ClientCertificateMethod = ClientCertificateMethod.AllowRenegotation;

Mapping IIS Server variables:

On IIS, TLS information such as CRYPT_PROTOCOL, CRYPT_CIPHER_ALG_ID, CRYPT_KEYEXCHANGE_ALG_ID and CRYPT_HASH_ALG_ID is obtained by IIS Server variables and can be leveraged as shown here. On HTTP.sys server, equivalent information is exposed via ITlsHandshakeFeature's Protocol, CipherAlgorithm, KeyExchangeAlgorithm and HashAlgorithm respectively.
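For example, a minimal sketch of reading those values from the connection features on HTTP.sys server (the httpContext variable here is whatever HttpContext is at hand):

using Microsoft.AspNetCore.Connections.Features;

var tlsFeature = httpContext.Features.Get<ITlsHandshakeFeature>();
if (tlsFeature != null)
{
    var protocol = tlsFeature.Protocol;               // e.g. Tls12
    var cipher = tlsFeature.CipherAlgorithm;          // e.g. Aes256
    var keyExchange = tlsFeature.KeyExchangeAlgorithm;
    var hash = tlsFeature.HashAlgorithm;
}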

Ability to interpret non-ASCII headers:

The gateway receives millions of headers each day with non-ASCII characters in them and the ability to interpret non-ASCII headers is important. Kestrel and IIS already have this ability, and in .NET 6.0, Latin1 request header encoding was added for HTTP.sys as well. It can be enabled using HttpSysOptions as shown below.

options.UseLatin1RequestHeaders = true;

Observability:

In addition to .NET telemetry, the health of a service can be monitored by plugging into a wealth of telemetry exposed by HTTP.sys such as:

Http Service Request Queues\ArrivalRate
Http Service Request Queues\RejectedRequests
Http Service Request Queues\CurrentQueueSize
Http Service Request Queues\MaxQueueItemAge
Http Service Url Groups\ConnectionAttempts
Http Service Url Groups\CurrentConnections

Enabling Dynamic PGO (profile-guided optimization)

Dynamic PGO is one of the most exciting features of .NET 6.0! PGO can benefit .NET 6.0 applications by maximizing steady-state performance.

Dynamic PGO is an opt-in feature in .NET 6.0. There are 3 environment variables
you need to set to enable dynamic PGO:

set DOTNET_TieredPGO=1. This setting leverages the initial Tier0 compilation of
methods to observe method behavior. When methods are rejitted at Tier1, the
information gathered from the Tier0 executions is used to optimize the Tier1
code. Enabling this switch increased our application efficiency by 8.18%
compared to plain .NET 6.0.

set DOTNET_TC_QuickJitForLoops=1. This setting enables tiering for methods
that contain loops. Enabling this switch (in conjunction with above switch)
increased our application efficiency by 10.2% compared to plain .NET 6.0.

set DOTNET_ReadyToRun=0. The core libraries that ship with .NET come with
ReadyToRun enabled by default. ReadyToRun allows for faster startup because
there is less to JIT compile, but this also means code in ReadyToRun images
doesn’t go through the Tier0 profiling process which enables dynamic PGO. By
disabling ReadyToRun, the .NET libraries also participate in the dynamic PGO
process. Setting this switch (in conjunction with the two above) increased
our application efficiency by 13.23% compared to plain .NET 6.0.

Learnings

There were a few SocketsHttpHandler changes in .NET 6.0 that surfaced as
issues in our service. We worked with the .NET team to identify workarounds
and improvements.

New connection attempts that fail can impact multiple HTTP requests in .NET 6.0, whereas a failed connection attempt would only impact a single HTTP request in .NET 5.0.

Workaround: Setting a ConnectTimeout slightly lower than the HTTP request timeout ensures the .NET 5.0 behavior is maintained. Alternatively, disposing the underlying handler on a failure also ensures only a single request is impacted by a connect timeout (although this can be expensive depending on the size of the connection pool, so please be sure to measure for your scenario).

Requests that fail due to RST packets are no longer automatically retried in .NET 6.0, and this results in an elevated rate of "An existing connection was forcibly closed by the remote host" exceptions bubbling up to the application from HttpClient.

Workaround: The application can add retries on top of HttpClient for idempotent requests. Additionally, if RST packets are due to idle timeouts, setting PooledConnectionIdleTimeout to lower than the idle timeout of the server will help eliminate RST packets due to idle connections.
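To make the two workarounds above concrete, a minimal sketch (the timeout values are illustrative only, not the values used by the gateway):

var handler = new SocketsHttpHandler
{
    // Keep the connect timeout below the overall request timeout so a failed
    // connection attempt cannot consume the whole request budget.
    ConnectTimeout = TimeSpan.FromSeconds(5),

    // Recycle idle connections before the server's idle timeout closes them with an RST.
    PooledConnectionIdleTimeout = TimeSpan.FromSeconds(50)
};

var client = new HttpClient(handler)
{
    Timeout = TimeSpan.FromSeconds(10)
};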

HttpContext.RequestAborted.IsCancellationRequested had inconsistent behavior on HTTP.sys compared to other servers and has been fixed in .NET 6.0.
Client side disconnects were noisy on HTTP.sys server, and there was a race condition that was triggered while trying to set StatusCode on a disconnected request. Both have been fixed in .NET 6.0.

Summary

Every new release of .NET has tremendous performance improvements, and there is a huge upside to migrating to the latest version of .NET. For Azure AD gateway, we look forward to trying out newer APIs specific to .NET 6.0 for even bigger wins, and to further enhancements in .NET 7.0.


#OpenSource list of disposable temporary email providers.

We’ve just compiled a list of 35,000 temporary email domains, with their associated MX-Records (Mail Exchange servers) and IP addresses associated with the MX-Records. This should allow users to not only block known temporary email address domains, but to discover future domains. It’s easier to register a new domain than to get a new IP address and Mail Exchanger.

You can download this file from GitHub here: https://github.com/infiniteloopltd/TempEmailDomainMXRecords

TempEmailDomainMXRecords

A CSV of temporary email domains with their associated MX Records, in the format

Domain                   | MX Record                | IP
tempemail.biz            | mx001.tempemail.biz      | 78.46.205.76
tempemail.co.za          | park-mx.above.com        | 103.224.212.34
tempmail.de              | tempmail.de              | 85.25.13.241
temp-mail.de             | tempmail.de              | 85.25.13.241
temp-mail.org            | mx.yandex.net            | 77.88.21.249
temp-mail.ru             | mx.yandex.net            | 77.88.21.249
tempmaildemo.com         | mxlb.ispgateway.de       | 80.67.18.126
tempmailer.com           | tempmailer.com           | 91.250.86.53
tempmailer.de            | tempmailer.de            | 91.250.86.53
temporarymailaddress.com | temporarymailaddress.com | 37.97.167.105

Where Domain is a domain name used for temporary email addresses, MX Record is a mail-exchange server associated with that domain, and IP is the IP address of that mail-exchange server.

Blocking user registrations if temporary email addresses are used can be risky; you can end up blocking a legitimate user.

If you block emails using the domain listed in the domain column, then it is very likely the email is temporary, but fresh “disposable” domains will not be discovered.

If you block emails from any domain that uses the same mail exchanger as the MX record listed in the MX column, then this is highly risky, since many legitimate Russian users use "mx.yandex.net" (Yandex being the Russian equivalent of Google). However, patterns of disposable emails can be discovered and blocked on a case-by-case basis.
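As a rough illustration of the lower-risk, domain-only check (the file name and column layout are assumed from the CSV description above):

using System;
using System.IO;
using System.Linq;

// Load the first column (the disposable domain) of each CSV row into a set.
// Skip a header row here if the file has one.
var disposableDomains = File.ReadLines("TempEmailDomainMXRecords.csv")
    .Select(line => line.Split(',')[0].Trim())
    .ToHashSet(StringComparer.OrdinalIgnoreCase);

bool IsDisposable(string email) =>
    disposableDomains.Contains(email.Split('@').Last().Trim());

Console.WriteLine(IsDisposable("someone@tempemail.biz")); // True if the domain is listed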

This is an open-source list and we do invite users to contribute by raising pull requests. Please give credit to our work, if you use it. https://www.infiniteloop.ie/

Add extra claims to an Azure B2C user flow using API connectors and ASP.NET Core

This post shows how to implement an ASP.NET Core Razor Page application which authenticates using Azure B2C and uses custom claims implemented using the Azure B2C API connector. The claims provider is implemented using an ASP.NET Core API application and the Azure API connector requests the data from this API. The Azure API connector adds the claims after an Azure B2C sign in flow or whatever settings you configured in the Azure B2C user flow.

Code: https://github.com/damienbod/AspNetCoreB2cExtraClaims

Setup the Azure B2C App Registration

An Azure App registration is setup for the ASP.NET Core Razor page application. A client secret is used to authenticate the client. The redirect URI is added for the app. This is a standard implementation.

Setup the API connector

The API connector is set up to add the extra claims after a sign in. This defines the API endpoint and the authentication method. Only Basic or certificate authentication is possible for this API service, and neither of these is ideal for implementing and using this service to add extra claims to the identity. I started ngrok from the command line and used the URL from it to configure the Azure B2C API connector. Maybe two separate connectors could be set up for a solution: one like this for development, and a second one with the Azure App Service host address and certificate authentication.

Azure B2C user attribute

The custom claims are added to the Azure B2C user attributes. The custom claims can be added as required.

Setup the Azure B2C user flow

The Azure B2C user flow is configured to use the API connector. This flow adds the application claims to the token, which it receives from the API call used in the API connector.

The custom claims are then added using the application claims blade. This is required if the custom claims are to be added to the token.

I also added the custom claims to the Azure B2C user flow user attributes.

Azure B2C is now set up to use the custom claims, and the data for these claims will be set using the API connector service.

ASP.NET Core Razor Page

The ASP.NET Core Razor Page uses Microsoft.Identity.Web to authenticate using Azure B2C. This is a standard setup for a B2C user flow.

builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAdB2C"));

builder.Services.AddAuthorization(options =>
{
    options.FallbackPolicy = options.DefaultPolicy;
});
builder.Services.AddRazorPages()
    .AddMicrosoftIdentityUI();

var app = builder.Build();

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

The main difference between an Azure B2C user flow and an Azure AD authentication is the configuration. The SignUpSignInPolicyId is set to match the configured Azure B2C user flow and the Instance uses the b2clogin from the domain unlike the AAD configuration definition.

"AzureAdB2C": {
    "Instance": "https://b2cdamienbod.b2clogin.com",
    "ClientId": "ab393e93-e762-4108-a3f5-326cf8e3874b",
    "Domain": "b2cdamienbod.onmicrosoft.com",
    "SignUpSignInPolicyId": "B2C_1_ExtraClaims",
    "TenantId": "f611d805-cf72-446f-9a7f-68f2746e4724",
    "CallbackPath": "/signin-oidc",
    "SignedOutCallbackPath": "/signout-callback-oidc"
    //"ClientSecret": "--in-user-settings--"
},

The index Razor page returns the claims and displays the values in the UI.

public class IndexModel : PageModel
{
    [BindProperty]
    public IEnumerable<Claim> Claims { get; set; } = Enumerable.Empty<Claim>();

    public void OnGet()
    {
        Claims = User.Claims;
    }
}

This is all the end user application requires, there is no special setup here.

ASP.NET Core API connector implementation

The API implemented for the Azure API connector uses an HTTP POST. Basic authentication is used to validate the request, as well as the client ID, which needs to match the configured App registration. This is weak authentication and should not be used in production, especially since the API provides sensitive PII data. If the request provides the correct credentials and the correct client ID, the data is returned for the email. In this demo, the email is returned in the custom claim; normally the data would be returned from some data store.

[HttpPost]
public async Task<IActionResult> PostAsync()
{
    // Check HTTP basic authorization
    if (!IsAuthorized(Request))
    {
        _logger.LogWarning("HTTP basic authentication validation failed.");
        return Unauthorized();
    }

    string content = await new System.IO.StreamReader(Request.Body).ReadToEndAsync();
    var requestConnector = JsonSerializer.Deserialize<RequestConnector>(content);

    // If input data is null, show block page
    if (requestConnector == null)
    {
        return BadRequest(new ResponseContent("ShowBlockPage", "There was a problem with your request."));
    }

    string clientId = _configuration["AzureAdB2C:ClientId"];
    if (!clientId.Equals(requestConnector.ClientId))
    {
        _logger.LogWarning("HTTP clientId is not authorized.");
        return Unauthorized();
    }

    // If email claim not found, show block page. Email is required and sent by default.
    if (requestConnector.Email == null || requestConnector.Email == "" || requestConnector.Email.Contains("@") == false)
    {
        return BadRequest(new ResponseContent("ShowBlockPage", "Email name is mandatory."));
    }

    var result = new ResponseContent
    {
        // use the objectId of the email to get the user specific claims
        MyCustomClaim = $"everything awesome {requestConnector.Email}"
    };

    return Ok(result);
}

private bool IsAuthorized(HttpRequest req)
{
    string username = _configuration["BasicAuthUsername"];
    string password = _configuration["BasicAuthPassword"];

    // Check if the HTTP Authorization header exists
    if (!req.Headers.ContainsKey("Authorization"))
    {
        _logger.LogWarning("Missing HTTP basic authentication header.");
        return false;
    }

    // Read the authorization header
    var auth = req.Headers["Authorization"].ToString();

    // Ensure the type of the authorization header is `Basic`
    if (!auth.StartsWith("Basic "))
    {
        _logger.LogWarning("HTTP basic authentication header must start with 'Basic '.");
        return false;
    }

    // Get the HTTP basic authorization credentials
    var cred = System.Text.Encoding.UTF8.GetString(Convert.FromBase64String(auth.Substring(6))).Split(':');

    // Evaluate the credentials and return the result
    return (cred[0] == username && cred[1] == password);
}

The ResponseContent class is used to return the data for the identity. All custom claims must be prefixed with extension_. The data is then added to the profile data.

public class ResponseContent
{
    public const string ApiVersion = "1.0.0";

    public ResponseContent()
    {
        Version = ApiVersion;
        Action = "Continue";
    }

    public ResponseContent(string action, string userMessage)
    {
        Version = ApiVersion;
        Action = action;
        UserMessage = userMessage;
        if (action == "ValidationError")
        {
            Status = "400";
        }
    }

    [JsonPropertyName("version")]
    public string Version { get; }

    [JsonPropertyName("action")]
    public string Action { get; set; }

    [JsonPropertyName("userMessage")]
    public string? UserMessage { get; set; }

    [JsonPropertyName("status")]
    public string? Status { get; set; }

    [JsonPropertyName("extension_MyCustomClaim")]
    public string MyCustomClaim { get; set; } = string.Empty;
}

With this, custom claims can be added to Azure B2C identities. This can be really useful, for example when implementing verifiable credentials using id_tokens. It is much more complicated to implement compared to other IDPs, but at least it is possible and can be solved. The technical solution to secure the API has room for improvement.

Testing

The applications can be started and the API connector needs to be mapped to a public IP. After starting the apps, start ngrok with a matching configuration for the HTTP address of the API connector API.

ngrok http https://localhost:5002

The URL configured in the API connector on Azure needs to match this ngrok URL. If all is good, the applications will run and the custom claim will be displayed in the UI.

Notes

The profile data in this API is very sensitive and you should use the strongest security protections possible. Using Basic authentication alone for this type of API is not a good idea. It would be great to see managed identities supported, or something like this. I used Basic authentication so that I could use ngrok to demo the feature; we need a public endpoint for testing. I would not use this in a production deployment. I would use certificate authentication with an Azure App Service deployment, with the certificate created and deployed using Azure Key Vault. Certificate rotation would have to be set up. I am not sure how well API connector infrastructure automation can be implemented; I have not tried this yet. A separate security solution would need to be implemented for local development. This is all a bit messy, as these extra steps end up in costs, or in developers taking shortcuts and deploying with less security.

Links:

https://docs.microsoft.com/en-us/azure/active-directory-b2c/api-connectors-overview?pivots=b2c-user-flow

https://github.com/Azure-Samples/active-directory-dotnet-external-identities-api-connector-azure-function-validate/

https://docs.microsoft.com/en-us/dotnet/standard/serialization/system-text-json-customize-properties?pivots=dotnet-6-0

https://github.com/AzureAD/microsoft-identity-web/wiki

https://ngrok.com/

Getting started with GraphQL in ASP.NET Core

This post is about GraphQL in ASP.NET Core. GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. GraphQL is implemented here using the HotChocolate package. To get started, create an empty web project using the dotnet new web command and then add a reference to the HotChocolate.AspNetCore package using the dotnet add package HotChocolate.AspNetCore command. Once that is done, you can modify the Program.cs file like the following. I am using .NET 6.0 for this, so there is no Startup.cs with Configure() and ConfigureServices() methods.

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddGraphQLServer();
var app = builder.Build();

app.MapGet("/", () => "Hello World!");
app.UseRouting();
app.UseEndpoints(endpoints =>
{
    endpoints.MapGraphQL();
});
app.Run();

Now you have configured the GraphQL endpoint. You can run the application and verify that you're able to see the /graphql endpoint. It will display an empty screen like this, since we haven't configured anything.

Unlike a REST API, GraphQL always provides only one endpoint, and all the operations are executed against this endpoint. For reading data we use the Query operation; for creating, updating and deleting we use the Mutation operation; and for real-time notifications we use the Subscription operation. As we haven't configured any of these, it will throw an error. Next we will create a Query operation. For this demo I am not using EF Core, so I created two model classes and implemented a Query class.

public class Link
{
    public int Id { get; set; }
    public string Url { get; set; }
    public string Title { get; set; }
    public string Description { get; set; }
    public string ImageUrl { get; set; }
    public DateTime CreatedOn { get; set; }
    public ICollection<Tag> Tags { get; set; } = new List<Tag>();
}

public class Tag
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int LinkId { get; set; }
    public Link Link { get; set; }
}

public class Query
{
    public IQueryable<Link> Links => new List<Link>
    {
        new Link
        {
            Id = 1,
            Url = "https://example.com",
            Title = "Example",
            Description = "This is an example link",
            ImageUrl = "https://example.com/image.png",
            Tags = new List<Tag> { new Tag(){ Name = "Example" } },
            CreatedOn = DateTime.Now
        },
        new Link
        {
            Id = 2,
            Url = "https://dotnetthoughts.net",
            Title = "DotnetThoughts",
            Description = "DotnetThoughts is a blog about .NET",
            ImageUrl = "https://dotnetthoughts.net/image.png",
            Tags = new List<Tag>
            {
                new Tag(){ Name = "Programming" },
                new Tag(){ Name = "Blog" },
                new Tag(){ Name = "dotnet" }
            },
            CreatedOn = DateTime.Now
        },
    }.AsQueryable();
}

And add the query type to the Http Pipeline like this.

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddGraphQLServer().AddQueryType<Query>();
var app = builder.Build();

app.MapGet("/", () => "Hello World!");
app.UseRouting();
app.UseEndpoints(endpoints =>
{
    endpoints.MapGraphQL();
});
app.Run();

Now you can run the app again and check the /graphql endpoint. You will see an empty screen again. Then choose the Schema Reference option from the Operations dropdown.

There you will be able to see the GraphQL schema of the Query. Next let us execute a Query and fetch some data. Select the Operations tab, and write the following query.

query {
links{
id
url
title
imageUrl
description
createdOn
}
}

Which will execute the Query and display result like this.

One major advantage of GraphQL over REST is that the client can decide which fields it requires. With REST, an endpoint that returns links will always return all the fields the API developer configured, even if the consuming application is not using them. With GraphQL, the consuming client can decide which fields it needs and query only those. For example, if the app requires only the Title and Image URL, it can send a query like this.

query {
links{
title
imageUrl
}
}

It will only return those fields.

GraphQL has a lot of advantages like this which can be used to improve your application and API performance. In upcoming blog posts we will discuss using EF Core along with GraphQL, and Mutations and Subscriptions in GraphQL.

I have implemented all these operations on top of Minimal APIs in .NET 6.0. You can find the source code on GitHub.

Happy Programming 🙂

Testing multiple implementations of a trait in Rust

I’ve been hacking on a small practice project in Rust where I implement the same
data structure in several different ways. When testing this project, I want to
run exactly the same set of tests on several types that implement the same
trait.

As a demonstrative example, let’s take the following trait:

pub trait Calculator {
    fn new() -> Self;
    fn add(&self, a: u32, b: u32) -> u32;
}

A straightforward implementation could be Foo:

pub struct Foo {}

impl Calculator for Foo {
    fn new() -> Self {
        Self {}
    }

    fn add(&self, a: u32, b: u32) -> u32 {
        a + b
    }
}

Or, if you enjoy the Peano axioms, a somewhat more involved
implementation could be Bar:

pub struct Bar {}

impl Calculator for Bar {
    fn new() -> Self {
        Self {}
    }

    fn add(&self, a: u32, b: u32) -> u32 {
        if b == 0 {
            a
        } else {
            self.add(a, b - 1) + 1
        }
    }
}

Our task is to write the same set of tests once, and invoke it on both
Foo and Bar with as little boilerplate as possible. Let’s examine
several approaches for doing this [1].

Straightforward trait-based testing

The most basic approach to testing our types would be something like:

#[cfg(test)]
mod tests {
    use crate::calculator::{Bar, Calculator, Foo};

    fn trait_tester<C: Calculator>() {
        let c = C::new();
        assert_eq!(c.add(2, 3), 5);
        assert_eq!(c.add(10, 43), 53);
    }

    #[test]
    fn test_foo() {
        trait_tester::<Foo>();
    }

    #[test]
    fn test_bar() {
        trait_tester::<Bar>();
    }
}

The trait_tester function can be invoked on any type that implements the
Calculator trait and can host a collection of tests. “Concrete” test
functions like test_foo then call trait_tester; the concrete test
functions are what the Rust testing framework sees because they’re marked with
the #[test] attribute.

On the surface, this approach seems workable; looking deeper, however, there
is a serious issue.

Suppose we want to write multiple test functions that test different
features and usages of our Calculator. We could add
trait_tester_feature1, trait_tester_feature2, etc. Then, the concrete
test functions would look something like:

#[test]
fn test_foo() {
    trait_tester::<Foo>();
    trait_tester_feature1::<Foo>();
    trait_tester_feature2::<Foo>();
}

#[test]
fn test_bar() {
    trait_tester::<Bar>();
    trait_tester_feature1::<Bar>();
    trait_tester_feature2::<Bar>();
}

Taken to the limit, there’s quite a bit of repetition here. In a realistic
project the number of tests can easily run into the dozens.

The problem doesn’t end here, though; in Rust, the unit of testing is
test_foo, not the trait_tester* functions. This means that only
test_foo will show up in the testing report, there’s no easy way to select
to run only trait_tester_feature1, etc. Moreover, test parallelization can
only happen between #[test] functions.

The fundamental issue here is: what we really want is to mark each of
the trait_tester* functions with #[test], but this isn’t trivial because
#[test] is a compile-time feature, and the compiler is supposed to know what
concrete types partake in each #[test] function definition.

Thankfully, Rust has just the tool for generating code at compile time.

First attempt with macros

Macros can help us generate functions tagged with #[test] at compile time.
Let’s try this:

macro_rules! calculator_tests {
    ($($name:ident: $type:ty,)*) => {
        $(
            #[test]
            fn $name() {
                let c = <$type>::new();
                assert_eq!(c.add(2, 3), 5);
                assert_eq!(c.add(10, 43), 53);
            }
        )*
    }
}

#[cfg(test)]
mod tests {
    use crate::calculator::{Bar, Calculator, Foo};

    calculator_tests! {
        foo: Foo,
        bar: Bar,
    }
}

The calculator_tests macro generates multiple #[test]-tagged functions,
one per type. If we run cargo test, we’ll see that the Rust testing
framework recognizes and runs them:

[…]
test typetest::tests::bar … ok
test typetest::tests::foo … ok
[…]

However, there's an issue; how do we add more testing functions per type, as
discussed previously? If only we could do something like fn ${name}_feature1
to name a function. But alas, we cannot! Due to macro hygiene rules, Rust won’t let us
generate identifiers like that. It might be possible somehow, but I didn’t find
a straightforward way to do it. Luckily, there’s a better solution.

Second attempt with macros

Instead of encoding the type variant in the function name, we can use a Rust
sub-module:

macro_rules! calculator_tests {
    ($($name:ident: $type:ty,)*) => {
        $(
            mod $name {
                use super::*;

                #[test]
                fn test() {
                    let c = <$type>::new();
                    assert_eq!(c.add(2, 3), 5);
                    assert_eq!(c.add(10, 43), 53);
                }
            }
        )*
    }
}

#[cfg(test)]
mod tests {
    use crate::calculator::{Bar, Calculator, Foo};

    calculator_tests! {
        foo: Foo,
        bar: Bar,
    }
}

Now all functions are named test, but they’re namespaced inside a module
with a configurable name. And yes, now we can easily add more testing functions:

macro_rules! calculator_tests {
    ($($name:ident: $type:ty,)*) => {
        $(
            mod $name {
                use super::*;

                #[test]
                fn test() {
                    let c = <$type>::new();
                    assert_eq!(c.add(2, 3), 5);
                    assert_eq!(c.add(10, 43), 53);
                }

                #[test]
                fn test_feature1() {
                    let c = <$type>::new();
                    assert_eq!(c.add(6, 9), 15);
                }
            }
        )*
    }
}

If we run cargo test, it works as expected:

test typetestmod::tests::bar::test … ok
test typetestmod::tests::bar::test_feature1 … ok
test typetestmod::tests::foo::test_feature1 … ok
test typetestmod::tests::foo::test … ok

Each test has its own full path, and is invoked separately. We can select which
tests to run from the command line – running only the tests for Bar, say, or
run all the feature1 tests for all types. Also notice that the test names
are reported “out of order”; this is because they are all run concurrently!

To conclude, with some macro hackery the goal is achieved. We can now write any
number of tests in a generic way, and invoke all these tests on multiple types
with minimal duplication – just one extra line per type [2].

It’s not all perfect, though. Macros add a layer of indirection and it leaks
in the error messages. If one of the assert_eq! invocations fails, the
reported line is at the point of macro instantiation, which is the same line
for all tests for any given type. This is quite inconvenient and makes debugging
failures more challenging. It could be that I’m missing something obvious, or
maybe this is a limitation of the Rust compiler. If you know how to fix this,
please drop me a line!

[1]
The full source code for this post can be found
on GitHub.

[2]
Sharp-eyed readers will note that using this approach the common trait
isn’t actually needed at all! Macros work by textual substitution (AST
substitution, to be precise), so the generated code creates a concrete
type and invokes its methods. The macro-based tests would work even if
Foo and Bar didn’t declare themselves as implementing the
Calculator trait.