Building a gRPC Client in .NET

Introduction

In this article, we will take a look at how to create a simple gRPC client with .NET and communicate with a server. This post is part of the blog series where we talk about building gRPC services.

Motivation

This is part of an article series on gRPC. If you want to jump ahead, please feel free to do so. The links are down below.

Introduction to gRPC
Building a gRPC server with Go
Building a gRPC client with .NET (You are here)
Building a gRPC client with Go

Please note that this is intended for anyone who’s interested in getting started with gRPC. If you’re not, please feel free to skip this article.

Plan

The plan for this article is as follows.

Scaffold a .NET console project.
Implement the gRPC client.
Communicate with the server.

In a nutshell, we will be generating the client for the server we built in our previous post.


As always, all the code samples and documentation can be found at: https://github.com/sahansera/dotnet-grpc

Prerequisites

.NET 6 SDK
Visual Studio Code or IDE of your choice
gRPC compiler

Please note that some of the commands I use are macOS specific. Please follow this link to set things up if you are on a different OS.

To install Protobuf compiler:

brew install protobuf

Project Structure

We can use .NET's tooling to generate a sample gRPC project. Remember how we used the dotnet new grpc command to scaffold the server project? This one, though, can simply be a console app. Run the following command at the root of your workspace.

dotnet new console -o BookshopClient

Your project structure should look like this.


You might be wondering: if this is a console app, how does it know how to generate the client stubs? Well, it doesn't. You have to add the following packages to the project first.

dotnet add BookshopClient.csproj package Grpc.Net.Client
dotnet add BookshopClient.csproj package Google.Protobuf
dotnet add BookshopClient.csproj package Grpc.Tools

Once everything’s installed, we can proceed with the rest of the steps.

Generating the client stubs

We will be using the same Protobuf file that we created in the previous post. If you haven't seen that already, head over to my previous post.

Open up the BookshopClient.csproj file and add the following lines:


<ItemGroup>
  <Protobuf Include="../proto/bookshop.proto" GrpcServices="Client" />
</ItemGroup>

As you can see, we will be reusing our bookshop.proto file in this example too. One thing to note here is that we have set the GrpcServices attribute to Client, since we only need the client stubs on this side.
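For reference, here is a minimal sketch of what the bookshop.proto file could look like, based on the calls made by the client code below. The Inventory service, GetBookListRequest and the Books field come from this article's code; the individual book fields are illustrative assumptions only — see the previous post for the actual file.

syntax = "proto3";

option csharp_namespace = "Bookshop";

package bookshop;

// The inventory service exposed by the Bookshop server.
service Inventory {
  // Unary call returning the list of books.
  rpc GetBookList (GetBookListRequest) returns (GetBookListResponse);
}

message GetBookListRequest {}

// Illustrative fields only — the real message lives in the server post.
message Book {
  int32 id = 1;
  string title = 2;
  string author = 3;
}

message GetBookListResponse {
  // Surfaces as reply.Books on the generated C# client.
  repeated Book books = 1;
}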

Implementing the gRPC client

Let’s update the Program.cs file to connect to and get the response from the server.

using System.Threading.Tasks;
using Grpc.Net.Client;
using Bookshop;

// The port number must match the port of the gRPC server.
using var channel = GrpcChannel.ForAddress("http://localhost:5000");
var client = new Inventory.InventoryClient(channel);
var reply = await client.GetBookListAsync(new GetBookListRequest { });

Console.WriteLine("Books: " + reply.Books);
Console.WriteLine("Press any key to exit...");
Console.ReadKey();

This is based on the example given on the Microsoft docs site. What I really like about the above code is how easy it is to read. So here's what happens.


We first create a gRPC channel with GrpcChannel.ForAddress by giving the server's URI and port. A client can reuse the same channel object to communicate with a gRPC server; creating a channel is an expensive operation compared to invoking a gRPC method, so reuse it wherever possible. You can also pass in a GrpcChannelOptions object as the second parameter to define client options (see the sketch after this list).
Then we instantiate the auto-generated client Inventory.InventoryClient using the channel we created above. One thing to note here: if your server has multiple services, you can still use the same channel object for all of them.
We call GetBookListAsync on our server. By the way, this is a unary call; we will go through the other client-server communication mechanisms in a separate post.
Our GetBookList method gets called on the server and returns the list of books.
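As mentioned in the first point, the channel can be configured via GrpcChannelOptions. A minimal sketch follows; the option values are illustrative assumptions, not recommendations.

// Configure the channel through the optional second parameter.
using var channel = GrpcChannel.ForAddress("http://localhost:5000", new GrpcChannelOptions
{
    // Limit incoming/outgoing message sizes (null means unlimited).
    MaxReceiveMessageSize = 5 * 1024 * 1024, // 5 MB
    MaxSendMessageSize = 2 * 1024 * 1024 // 2 MB
});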

Now that we know how the requests work, let’s see this in action.

Communicating with the server

Let’s spin up the server that we built in my previous post first. This will be up and running at port 5000.

dotnet run --project BookshopServer/BookshopServer.csproj


For the client-side, we invoke a similar command.

dotnet run --project BookshopClient/BookshopClient.csproj

And in the terminal, we will get the following outputs.


Nice! As you can see, it's not that hard to get everything working. One thing to note is that we left out the details about TLS and the different ways to communicate with the server (i.e. unary, streaming etc.). I will cover such topics in depth in the future.

Conclusion

In this article, we looked at how to reuse our Protobuf files to create a client to interact with the server we created in the previous post.

I hope this article series cleared up a lot of the confusion you had about gRPC. Please feel free to share your questions, thoughts, or feedback in the comments section below. Until next time!

References

https://docs.microsoft.com/en-us/aspnet/core/tutorials/grpc/grpc-start?view=aspnetcore-6.0&tabs=visual-studio-code


Transforming identity claims in ASP.NET Core and Cache

The article shows how to add extra identity claims to an ASP.NET Core application which authenticates using the Microsoft.Identity.Web client library and Azure AD B2C or Azure AD as the identity provider (IDP). This could easily be switched to OpenID Connect and use any IDP which supports OpenID Connect. The extra claims are added after a Microsoft Graph HTTP request, and it is important that this is only called once per user session.

Code https://github.com/damienbod/azureb2c-fed-azuread

Normally I use the IClaimsTransformation interface to add extra claims to an ASP.NET Core session. This interface gets called multiple times and has no caching solution. If you use this interface to add extra claims to your application, you must implement a cache solution for the extra claims and prevent extra API calls or database requests with every request.

Instead of implementing a cache and using the IClaimsTransformation interface, you could just use the OnTokenValidated event with the OpenIdConnectDefaults.AuthenticationScheme scheme. This gets called after a successful authentication against your identity provider. If Microsoft.Identity.Web is used as the OIDC client, which is specific to Azure AD and Azure B2C, you must add the configuration to the MicrosoftIdentityOptions, otherwise downstream APIs will not work. If using OpenID Connect directly with a different IDP, then use the OpenIdConnectOptions configuration. This can be added to the services of the ASP.NET Core application.

services.Configure<MicrosoftIdentityOptions>(
    OpenIdConnectDefaults.AuthenticationScheme, options =>
    {
        options.Events.OnTokenValidated = async context =>
        {
            if (ApplicationServices != null && context.Principal != null)
            {
                using var scope = ApplicationServices.CreateScope();
                context.Principal = await scope.ServiceProvider
                    .GetRequiredService<MsGraphClaimsTransformation>()
                    .TransformAsync(context.Principal);
            }
        };
    });

Note

If using default OpenID Connect and not the Microsoft.Identity.Web client to authenticate, use the OpenIdConnectOptions and not the MicrosoftIdentityOptions.

Here’s an example of an OIDC setup.

builder.Services.Configure<OpenIdConnectOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Events.OnTokenValidated = async context =>
    {
        if (ApplicationServices != null && context.Principal != null)
        {
            using var scope = ApplicationServices.CreateScope();
            context.Principal = await scope.ServiceProvider
                .GetRequiredService<MyClaimsTransformation>()
                .TransformAsync(context.Principal);
        }
    };
});

The IServiceProvider ApplicationServices are used to add the scoped MsGraphClaimsTransformation service which is used to add the extra calls using Microsoft Graph. This needs to be added to the configuration in the startup or the program file.

protected IServiceProvider ApplicationServices { get; set; } = null;

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    ApplicationServices = app.ApplicationServices;

The Microsoft Graph services are added to the IoC.

services.AddScoped<MsGraphService>();
services.AddScoped<MsGraphClaimsTransformation>();

The MsGraphClaimsTransformation uses the Microsoft Graph client to get the groups of a user, create a new ClaimsIdentity, add the extra claims to this identity and add the ClaimsIdentity to the ClaimsPrincipal.

using AzureB2CUI.Services;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;

namespace AzureB2CUI;

public class MsGraphClaimsTransformation
{
    private readonly MsGraphService _msGraphService;

    public MsGraphClaimsTransformation(MsGraphService msGraphService)
    {
        _msGraphService = msGraphService;
    }

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        ClaimsIdentity claimsIdentity = new();
        var groupClaimType = "group";
        if (!principal.HasClaim(claim => claim.Type == groupClaimType))
        {
            var objectidentifierClaimType = "http://schemas.microsoft.com/identity/claims/objectidentifier";
            var objectIdentifier = principal.Claims.FirstOrDefault(t => t.Type == objectidentifierClaimType);

            var groupIds = await _msGraphService.GetGraphApiUserMemberGroups(objectIdentifier.Value);

            foreach (var groupId in groupIds.ToList())
            {
                claimsIdentity.AddClaim(new Claim(groupClaimType, groupId));
            }
        }

        principal.AddIdentity(claimsIdentity);
        return principal;
    }
}

The MsGraphService service implements the different HTTP requests to Microsoft Graph. Azure AD B2C is used in this example, so an application client is used to access Azure AD with the ClientSecretCredential. The implementation is set up to use secrets from Azure Key Vault directly in any deployments, or from user secrets for development.

using Azure.Identity;
using Microsoft.Extensions.Configuration;
using Microsoft.Graph;
using System.Threading.Tasks;

namespace AzureB2CUI.Services;

public class MsGraphService
{
    private readonly GraphServiceClient _graphServiceClient;

    public MsGraphService(IConfiguration configuration)
    {
        string[] scopes = configuration.GetValue<string>("GraphApi:Scopes")?.Split(' ');
        var tenantId = configuration.GetValue<string>("GraphApi:TenantId");

        // Values from app registration
        var clientId = configuration.GetValue<string>("GraphApi:ClientId");
        var clientSecret = configuration.GetValue<string>("GraphApi:ClientSecret");

        var options = new TokenCredentialOptions
        {
            AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
        };

        // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
        var clientSecretCredential = new ClientSecretCredential(
            tenantId, clientId, clientSecret, options);

        _graphServiceClient = new GraphServiceClient(clientSecretCredential, scopes);
    }

    public async Task<User> GetGraphApiUser(string userId)
    {
        return await _graphServiceClient.Users[userId]
            .Request()
            .GetAsync();
    }

    public async Task<IUserAppRoleAssignmentsCollectionPage> GetGraphApiUserAppRoles(string userId)
    {
        return await _graphServiceClient.Users[userId]
            .AppRoleAssignments
            .Request()
            .GetAsync();
    }

    public async Task<IDirectoryObjectGetMemberGroupsCollectionPage> GetGraphApiUserMemberGroups(string userId)
    {
        var securityEnabledOnly = true;

        return await _graphServiceClient.Users[userId]
            .GetMemberGroups(securityEnabledOnly)
            .Request().PostAsync();
    }
}

When the application is run, the two ClaimsIdentity instances exist with every request and are available for use in the ASP.NET Core application.

Notes

This works really well but you should not add too many claims to the identity in this way. If you have many identity descriptions or a lot of user data, then you should use the IClaimsTransformation interface with a good cache solution.
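As a hedged sketch of that cached IClaimsTransformation alternative (the class name, cache key and 30-minute expiry are my assumptions, not from the post), the Graph lookup could be memoized per user with IMemoryCache:

using System;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.Extensions.Caching.Memory;

public class CachedMsGraphClaimsTransformation : IClaimsTransformation
{
    private readonly MsGraphService _msGraphService;
    private readonly IMemoryCache _cache;

    public CachedMsGraphClaimsTransformation(MsGraphService msGraphService, IMemoryCache cache)
    {
        _msGraphService = msGraphService;
        _cache = cache;
    }

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        var groupClaimType = "group";
        if (principal.HasClaim(c => c.Type == groupClaimType))
        {
            return principal;
        }

        var oid = principal.Claims.FirstOrDefault(t =>
            t.Type == "http://schemas.microsoft.com/identity/claims/objectidentifier")?.Value;
        if (oid == null)
        {
            return principal;
        }

        // Cache the group IDs per user so Graph is only called once per cache window,
        // even though TransformAsync runs on every request.
        var groupIds = await _cache.GetOrCreateAsync($"groups_{oid}", async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(30);
            var ids = await _msGraphService.GetGraphApiUserMemberGroups(oid);
            return ids.ToList();
        });

        var identity = new ClaimsIdentity();
        foreach (var groupId in groupIds)
        {
            identity.AddClaim(new Claim(groupClaimType, groupId));
        }

        principal.AddIdentity(identity);
        return principal;
    }
}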

Links

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/claims

https://andrewlock.net/exploring-dotnet-6-part-10-new-dependency-injection-features-in-dotnet-6/


Create Azure B2C users with Microsoft Graph and ASP.NET Core

This article shows how to create different types of Azure B2C users using Microsoft Graph and ASP.NET Core. The users are created using application permissions in an Azure App registration.

Code https://github.com/damienbod/azureb2c-fed-azuread

The Microsoft.Identity.Web NuGet package is used to authenticate the administrator user that can create new Azure B2C users. An ASP.NET Core Razor Pages application is used to implement the Azure B2C user management and also to hold the sensitive data.

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<MsGraphService>();
    services.AddTransient<IClaimsTransformation, MsGraphClaimsTransformation>();
    services.AddHttpClient();

    services.AddOptions();

    services.AddMicrosoftIdentityWebAppAuthentication(Configuration, "AzureAdB2C")
        .EnableTokenAcquisitionToCallDownstreamApi()
        .AddInMemoryTokenCaches();

The AzureAdB2C app settings configure the B2C client. An Azure B2C user flow is implemented for authentication. In this example, a signin or signup flow is implemented, although if you create the users yourself, maybe only a signin is required. The GraphApi configuration is used for the Microsoft Graph application client which uses the client credentials flow. A secret was created for the Azure App registration; this is stored in user secrets for development and in Azure Key Vault for any deployments. You could use certificates as well, but this offers no extra security unless they are used directly from a client host.

"AzureAdB2C": {
  "Instance": "https://b2cdamienbod.b2clogin.com",
  "ClientId": "8cbb1bd3-c190-42d7-b44e-42b20499a8a1",
  "Domain": "b2cdamienbod.onmicrosoft.com",
  "SignUpSignInPolicyId": "B2C_1_signup_signin",
  "TenantId": "f611d805-cf72-446f-9a7f-68f2746e4724",
  "CallbackPath": "/signin-oidc",
  "SignedOutCallbackPath": "/signout-callback-oidc"
},
"GraphApi": {
  "TenantId": "f611d805-cf72-446f-9a7f-68f2746e4724",
  "ClientId": "1d171c13-236d-4c2b-ac10-0325be2cbc74",
  "Scopes": ".default"
  //"ClientSecret": "--in-user-settings--"
},
"AadIssuerDomain": "damienbodhotmail.onmicrosoft.com",

The application User.ReadWrite.All permission is used to create the users. See the permissions in the Microsoft Graph docs.

The MsGraphService service implements the Microsoft Graph client to create Azure tenant users. Application permissions are used because we use Azure B2C. If authenticating using Azure AD, you could use delegated permissions. The ClientSecretCredential is used to get the Graph access token and client with the required permissions.

public MsGraphService(IConfiguration configuration)
{
    string[] scopes = configuration.GetValue<string>("GraphApi:Scopes")?.Split(' ');
    var tenantId = configuration.GetValue<string>("GraphApi:TenantId");

    // Values from app registration
    var clientId = configuration.GetValue<string>("GraphApi:ClientId");
    var clientSecret = configuration.GetValue<string>("GraphApi:ClientSecret");

    _aadIssuerDomain = configuration.GetValue<string>("AadIssuerDomain");
    _aadB2CIssuerDomain = configuration.GetValue<string>("AzureAdB2C:Domain");

    var options = new TokenCredentialOptions
    {
        AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
    };

    // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
    var clientSecretCredential = new ClientSecretCredential(
        tenantId, clientId, clientSecret, options);

    _graphServiceClient = new GraphServiceClient(clientSecretCredential, scopes);
}

The CreateAzureB2CSameDomainUserAsync method creates a same-domain Azure B2C user and also creates an initial password which needs to be updated after a first signin. The user's UserPrincipalName email must match the Azure B2C domain, and the user can only signin with this password. MFA should be set up. This works really well, but it is not a good idea to handle your users' passwords if this can be avoided. You also need to share the password with the user in a secure way.

public async Task<(string Upn, string Password, string Id)>
    CreateAzureB2CSameDomainUserAsync(UserModelB2CTenant userModel)
{
    if (!userModel.UserPrincipalName.ToLower().EndsWith(_aadB2CIssuerDomain.ToLower()))
    {
        throw new ArgumentException("incorrect Email domain");
    }

    var password = GetEncodedRandomString();
    var user = new User
    {
        AccountEnabled = true,
        UserPrincipalName = userModel.UserPrincipalName,
        DisplayName = userModel.DisplayName,
        Surname = userModel.Surname,
        GivenName = userModel.GivenName,
        PreferredLanguage = userModel.PreferredLanguage,
        MailNickname = userModel.DisplayName,
        PasswordProfile = new PasswordProfile
        {
            ForceChangePasswordNextSignIn = true,
            Password = password
        }
    };

    await _graphServiceClient.Users
        .Request()
        .AddAsync(user);

    return (user.UserPrincipalName, user.PasswordProfile.Password, user.Id);
}
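The GetEncodedRandomString helper used above isn't shown in this excerpt; a minimal sketch of what such a method could look like (the length and encoding are assumptions):

private string GetEncodedRandomString()
{
    // 32 cryptographically random bytes, Base64 encoded, as a strong initial password.
    var bytes = System.Security.Cryptography.RandomNumberGenerator.GetBytes(32);
    return Convert.ToBase64String(bytes);
}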

The CreateFederatedUserWithPasswordAsync method creates an Azure B2C user with any email address. This uses the SignInType federated, but uses a password, and the user signs in directly to the Azure B2C. This password is not updated after a first signin. Again, this is a bad idea, because you need to share the password with the user somehow, and you as an admin should not know the user's password. I would avoid creating users in this way and use a custom invitation flow if you need this type of Azure B2C user.

public async Task<(string Upn, string Password, string Id)>
    CreateFederatedUserWithPasswordAsync(UserModelB2CIdentity userModel)
{
    // new user create, email does not matter unless you require to send mails
    var password = GetEncodedRandomString();
    var user = new User
    {
        DisplayName = userModel.DisplayName,
        PreferredLanguage = userModel.PreferredLanguage,
        Surname = userModel.Surname,
        GivenName = userModel.GivenName,
        OtherMails = new List<string> { userModel.Email },
        Identities = new List<ObjectIdentity>()
        {
            new ObjectIdentity
            {
                SignInType = "federated",
                Issuer = _aadB2CIssuerDomain,
                IssuerAssignedId = userModel.Email
            },
        },
        PasswordProfile = new PasswordProfile
        {
            Password = password,
            ForceChangePasswordNextSignIn = false
        },
        PasswordPolicies = "DisablePasswordExpiration"
    };

    var createdUser = await _graphServiceClient.Users
        .Request()
        .AddAsync(user);

    return (createdUser.UserPrincipalName, user.PasswordProfile.Password, createdUser.Id);
}

The CreateFederatedNoPasswordAsync method creates an Azure B2C federated user for an identity which already exists in a specific Azure AD domain, with no password. The user can only signin using a federated signin to this tenant. No passwords are shared. This is a really good way to onboard existing AAD users to an Azure B2C tenant. One disadvantage is that the email is not verified, unlike an invitation flow implemented directly in the Azure AD tenant.

public async Task<string>
    CreateFederatedNoPasswordAsync(UserModelB2CIdentity userModel)
{
    // User must already exist in AAD
    var user = new User
    {
        DisplayName = userModel.DisplayName,
        PreferredLanguage = userModel.PreferredLanguage,
        Surname = userModel.Surname,
        GivenName = userModel.GivenName,
        OtherMails = new List<string> { userModel.Email },
        Identities = new List<ObjectIdentity>()
        {
            new ObjectIdentity
            {
                SignInType = "federated",
                Issuer = _aadIssuerDomain,
                IssuerAssignedId = userModel.Email
            },
        }
    };

    var createdUser = await _graphServiceClient.Users
        .Request()
        .AddAsync(user);

    return createdUser.UserPrincipalName;
}

When the application is started, you can signin as an IT admin and create new users as required. The Birthday can only be added if you have an SPO license. If the user exists in the AAD tenant, the user can signin using the federated identity provider. This could be improved by adding a search of the users in the target tenant and only allowing existing users.

Notes:

It is really easy to create users using Microsoft Graph, but this is not always the best or most secure way of onboarding new users in an Azure B2C tenant. If local data is required, this can be really useful. Sharing passwords between an IT admin and a new user should be avoided if possible. The Microsoft Graph invite APIs do not work for Azure AD B2C, only Azure AD.

Links

https://docs.microsoft.com/en-us/aspnet/core/introduction-to-aspnet-core


Implementing an API Gateway in ASP.NET Core with Ocelot

This post is about what an API Gateway is and how to build one in ASP.NET Core with Ocelot. An API gateway is a service that sits between an endpoint and backend APIs, transmitting client requests to an appropriate service of an application. It's an architectural pattern, which was initially created to support microservices. In this post I am building an API Gateway using Ocelot. Ocelot is aimed at people using .NET running a microservices / service-oriented architecture that need a unified point of entry into their system.

Let’s start the implementation.

First we will create two web API applications – both of these services return some hard-coded string values. Here is the first web API – CustomersController – which returns a list of customers.

using Microsoft.AspNetCore.Mvc;

namespace ServiceA.Controllers;

[ApiController]
[Route("[controller]")]
public class CustomersController : ControllerBase
{
    private readonly ILogger<CustomersController> _logger;

    public CustomersController(ILogger<CustomersController> logger)
    {
        _logger = logger;
    }

    [HttpGet(Name = "GetCustomers")]
    public IActionResult Get()
    {
        return Ok(new[] { "Customer1", "Customer2", "Customer3" });
    }
}

And here is the second web API – ProductsController.

using Microsoft.AspNetCore.Mvc;

namespace ServiceB.Controllers;

[ApiController]
[Route("[controller]")]
public class ProductsController : ControllerBase
{
    private readonly ILogger<ProductsController> _logger;

    public ProductsController(ILogger<ProductsController> logger)
    {
        _logger = logger;
    }

    [HttpGet(Name = "GetProducts")]
    public IActionResult Get()
    {
        return Ok(new[] { "Product1", "Product2",
            "Product3", "Product4", "Product5" });
    }
}

Next we will create the API Gateway. To do this, create an empty ASP.NET Core web application using the command dotnet new web -o ApiGateway. Once we create the gateway application, we need to add a reference to the Ocelot NuGet package – we can do this using dotnet add package Ocelot. Now we can modify the Program.cs file like this.

using Ocelot.DependencyInjection;
using Ocelot.Middleware;

var builder = WebApplication.CreateBuilder(args);

builder.Configuration.SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("configuration.json", false, true).AddEnvironmentVariables();

builder.Services.AddOcelot(builder.Configuration);
var app = builder.Build();

await app.UseOcelot();
app.Run();

Next you need to configure your API routes using configuration.json. Here is the basic configuration which helps route requests from the gateway endpoint to the web API endpoints.

{
  "Routes": [
    {
      "DownstreamPathTemplate": "/customers",
      "DownstreamScheme": "https",
      "DownstreamHostAndPorts": [
        {
          "Host": "localhost",
          "Port": 7155
        }
      ],
      "UpstreamPathTemplate": "/api/customers",
      "UpstreamHttpMethod": [ "Get" ]
    },
    {
      "DownstreamPathTemplate": "/products",
      "DownstreamScheme": "https",
      "DownstreamHostAndPorts": [
        {
          "Host": "localhost",
          "Port": 7295
        }
      ],
      "UpstreamPathTemplate": "/api/products",
      "UpstreamHttpMethod": [ "Get" ]
    }
  ],
  "GlobalConfiguration": {
    "BaseUrl": "https://localhost:7043"
  }
}

Now run all three applications and browse the endpoint https://localhost:7043/api/products – this invokes the GET action method of the ProductsController class. Browsing https://localhost:7043/api/customers invokes the CustomersController GET action method. In the configuration, the UpstreamPathTemplate is the API Gateway endpoint, and the API Gateway transfers the request to the DownstreamPathTemplate endpoint.
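To see the routing in action, a quick sketch using curl (the ports and response values come from the configuration and controllers above):

curl https://localhost:7043/api/customers
# Ocelot forwards the request downstream to https://localhost:7155/customers
# Response: ["Customer1","Customer2","Customer3"]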

For some strange reason it was not working properly for me at first; after configuring it again, it started working. This is an introductory post – I will blog about some common use cases where an API Gateway helps, and how to deploy it in Azure, in the future.

Happy Programming 🙂


Implementing authorization in Blazor ASP.NET Core applications using Azure AD security groups

This article shows how to implement authorization in an ASP.NET Core Blazor application using Azure AD security groups as the data source for the authorization definitions. Policies and claims are used in the application, which decouples the descriptions from the Azure AD security groups and the application-specific authorization requirements. With this setup, it is easy to support any complex authorization requirement, and IT admins can manage the accounts independently in Azure. This solution will work for Azure AD B2C or can easily be adapted to use data from your database instead of Azure AD security groups if required.

Code: https://github.com/damienbod/AzureADAuthRazorUiServiceApiCertificate/tree/main/BlazorBff

Setup the AAD security groups

Before we start using the Azure AD security groups, the groups need to be created. I use PowerShell to create the security groups. This is really simple using the PowerShell Az module with AD. For this demo, just two groups are created, one for users and one for admins. The script can be run from your PowerShell console. You are required to authenticate before running the script, and the groups are added if you have the rights. In DevOps, you could use a managed identity and the client credentials flow.

# https://theitbros.com/install-azure-powershell/
#
# https://docs.microsoft.com/en-us/powershell/module/az.accounts/connect-azaccount?view=azps-7.1.0
#
# Connect-AzAccount -Tenant "--tenantId--"
# az login --tenant "--tenantId--"

$tenantId = "--tenantId--"
$gpAdmins = "demo-admins"
$gpUsers = "demo-users"

function testParams {

    if (!$tenantId)
    {
        Write-Host "tenantId is null"
        exit 1
    }
}

testParams

function CreateGroup([string]$name) {
    Write-Host " - Create new group"
    $group = az ad group create --display-name $name --mail-nickname $name

    $gpObjectId = ($group | ConvertFrom-Json).objectId
    Write-Host " $gpObjectId $name"
}

Write-Host "Creating groups"

##################################
### Create groups
##################################

CreateGroup $gpAdmins
CreateGroup $gpUsers

#az ad group list --display-name $groupName

return

Once created, the new security groups should be visible in the Azure portal. You need to add group members or user members to the groups.

That’s all the configuration required to setup the security groups. Now the groups can be used in the applications.

Define the authorization policies

We do not use the security groups directly in the applications, because these can change a lot, or maybe the application is deployed to different host environments. The security groups are really just descriptions about the identity. How you use this is application specific and depends on the solution business requirements, which tend to change a lot.

In the applications, shared authorization policies are defined and used in both the Blazor WASM and the Blazor Server parts. The definitions have nothing to do with the security groups; the groups get mapped to application claims. A Policies class was created for all the policies in the shared Blazor project, because this is defined once but used in the server project and the client project. The code was built based on the excellent blog from Chris Sainty. The claims definitions for the authorization checks have nothing to do with the Azure security groups; this logic is application specific, and different applications inside the same solution sometimes need to apply different authorization logic for how the security groups are used.

using Microsoft.AspNetCore.Authorization;

namespace BlazorAzureADWithApis.Shared.Authorization
{
    public static class Policies
    {
        public const string DemoAdminsIdentifier = "demo-admins";
        public const string DemoAdminsValue = "1";

        public const string DemoUsersIdentifier = "demo-users";
        public const string DemoUsersValue = "1";

        public static AuthorizationPolicy DemoAdminsPolicy()
        {
            return new AuthorizationPolicyBuilder()
                .RequireAuthenticatedUser()
                .RequireClaim(DemoAdminsIdentifier, DemoAdminsValue)
                .Build();
        }

        public static AuthorizationPolicy DemoUsersPolicy()
        {
            return new AuthorizationPolicyBuilder()
                .RequireAuthenticatedUser()
                .RequireClaim(DemoUsersIdentifier, DemoUsersValue)
                .Build();
        }
    }
}

Add the authorization to the WASM and the server project

The policy definitions can now be added to the Blazor Server project and the Blazor WASM project. The AddAuthorization extension method is used to add the authorization to the Blazor server. The policy names can be anything you want.

services.AddAuthorization(options =>
{
    // By default, all incoming requests will be authorized according to the default policy
    options.FallbackPolicy = options.DefaultPolicy;
    options.AddPolicy("DemoAdmins", Policies.DemoAdminsPolicy());
    options.AddPolicy("DemoUsers", Policies.DemoUsersPolicy());
});

The AddAuthorizationCore method is used to add the authorization policies to the Blazor WASM client project.

var builder = WebAssemblyHostBuilder.CreateDefault(args);
builder.Services.AddOptions();
builder.Services.AddAuthorizationCore(options =>
{
    options.AddPolicy("DemoAdmins", Policies.DemoAdminsPolicy());
    options.AddPolicy("DemoUsers", Policies.DemoUsersPolicy());
});

Now the application policies and claims are defined. The next job is to connect the Azure security definitions to the application authorization claims used by the authorization policies.

Link the security groups from Azure to the app authorization

This can be done using the IClaimsTransformation interface which gets called after a successful authentication. An application Microsoft Graph client is used to request the Azure AD security groups. The IDs of the Azure security groups are mapped to the application claims. Any logic can be added here which is application specific. If a hierarchical authorization system is required, this could be mapped here.

public class GraphApiClaimsTransformation : IClaimsTransformation
{
    private readonly MsGraphApplicationService _msGraphApplicationService;

    public GraphApiClaimsTransformation(MsGraphApplicationService msGraphApplicationService)
    {
        _msGraphApplicationService = msGraphApplicationService;
    }

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        ClaimsIdentity claimsIdentity = new();
        var groupClaimType = "group";
        if (!principal.HasClaim(claim => claim.Type == groupClaimType))
        {
            var objectidentifierClaimType = "http://schemas.microsoft.com/identity/claims/objectidentifier";
            var objectIdentifier = principal
                .Claims.FirstOrDefault(t => t.Type == objectidentifierClaimType);

            var groupIds = await _msGraphApplicationService
                .GetGraphUserMemberGroups(objectIdentifier.Value);

            foreach (var groupId in groupIds.ToList())
            {
                var claim = GetGroupClaim(groupId);
                if (claim != null) claimsIdentity.AddClaim(claim);
            }
        }

        principal.AddIdentity(claimsIdentity);
        return principal;
    }

    private Claim GetGroupClaim(string groupId)
    {
        Dictionary<string, Claim> mappings = new Dictionary<string, Claim>() {
            { "1d9fba7e-b98a-45ec-b576-7ee77366cf10",
                new Claim(Policies.DemoUsersIdentifier, Policies.DemoUsersValue)},

            { "be30f1dd-39c9-457b-ab22-55f5b67fb566",
                new Claim(Policies.DemoAdminsIdentifier, Policies.DemoAdminsValue)},
        };

        if (mappings.ContainsKey(groupId))
        {
            return mappings[groupId];
        }

        return null;
    }
}

The MsGraphApplicationService class is used to implement the Microsoft Graph requests. This uses application permissions with a ClientSecretCredential. I use secrets which are read from Azure Key Vault. You need to implement rotation for these, or make them last forever and update the secrets in the DevOps builds every time you deploy. My secrets are only defined in Azure and used from the Azure Key Vault. You could use certificates, but this adds no extra security unless you need to use the secret/certificate outside of Azure or in app settings somewhere. The GetMemberGroups method is used to get the groups for the authenticated user using the object identifier.

public class MsGraphApplicationService
{
    private readonly IConfiguration _configuration;

    public MsGraphApplicationService(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public async Task<IUserAppRoleAssignmentsCollectionPage>
        GetGraphUserAppRoles(string objectIdentifier)
    {
        var graphServiceClient = GetGraphClient();

        return await graphServiceClient.Users[objectIdentifier]
            .AppRoleAssignments
            .Request()
            .GetAsync();
    }

    public async Task<IDirectoryObjectGetMemberGroupsCollectionPage>
        GetGraphUserMemberGroups(string objectIdentifier)
    {
        var securityEnabledOnly = true;

        var graphServiceClient = GetGraphClient();

        return await graphServiceClient.Users[objectIdentifier]
            .GetMemberGroups(securityEnabledOnly)
            .Request().PostAsync();
    }

    private GraphServiceClient GetGraphClient()
    {
        string[] scopes = new[] { "https://graph.microsoft.com/.default" };
        var tenantId = _configuration["AzureAd:TenantId"];

        // Values from app registration
        var clientId = _configuration.GetValue<string>("AzureAd:ClientId");
        var clientSecret = _configuration.GetValue<string>("AzureAd:ClientSecret");

        var options = new TokenCredentialOptions
        {
            AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
        };

        // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
        var clientSecretCredential = new ClientSecretCredential(
            tenantId, clientId, clientSecret, options);

        return new GraphServiceClient(clientSecretCredential, scopes);
    }
}

The security groups are mapped to the application claims and policies. The policies can be applied in the application.

Use the Policies in the Server

The Blazor Server application implements secure APIs for the Blazor WASM. The Authorize attribute is used with the policy definition. Now the user must be authorized using our definition to get data from this API. We also use cookies, because the Blazor application is secured using the BFF architecture, which has improved security compared to using tokens in the untrusted SPA.

[ValidateAntiForgeryToken]
[Authorize(Policy = "DemoAdmins",
    AuthenticationSchemes = CookieAuthenticationDefaults.AuthenticationScheme)]
[ApiController]
[Route("api/[controller]")]
public class DemoAdminController : ControllerBase
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new List<string>
        {
            "admin data",
            "secret admin record",
            "loads of admin data"
        };
    }
}

Use the policies in the WASM

The Blazor WASM application can also use the authorization policies. This is not really authorization, only usability, because you cannot implement authorization in an untrusted application which you have no control over once it's running. We would like to hide the components and menus which cannot be used if you are not authorized. I use an AuthorizeView with a policy definition for this.

<div class="@NavMenuCssClass" @onclick="ToggleNavMenu">
    <ul class="nav flex-column">
        <AuthorizeView Policy="DemoAdmins">
            <Authorized>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="demoadmin">
                        <span class="oi oi-list-rich" aria-hidden="true"></span> DemoAdmin
                    </NavLink>
                </li>
            </Authorized>
        </AuthorizeView>

        <AuthorizeView Policy="DemoUsers">
            <Authorized>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="demouser">
                        <span class="oi oi-list-rich" aria-hidden="true"></span> DemoUser
                    </NavLink>
                </li>
            </Authorized>
        </AuthorizeView>

        <AuthorizeView>
            <Authorized>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="graphprofile">
                        <span class="oi oi-list-rich" aria-hidden="true"></span> Graph Profile
                    </NavLink>
                </li>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="" Match="NavLinkMatch.All">
                        <span class="oi oi-home" aria-hidden="true"></span> Home
                    </NavLink>
                </li>
            </Authorized>
            <NotAuthorized>
                <li class="nav-item px-3">
                    <p style="color:white">Please sign in</p>
                </li>
            </NotAuthorized>
        </AuthorizeView>

    </ul>
</div>

The Blazor UI pages should also use an Authorize attribute. This prevents an unhandled exception. You could add logic which forces a login with the required permissions, or just display an error page. This depends on the UI strategy.

@page "/demoadmin"
@using Microsoft.AspNetCore.Authorization
@inject IHttpClientFactory HttpClientFactory
@inject IJSRuntime JSRuntime
@attribute [Authorize(Policy = "DemoAdmins")]

<h1>Demo Admin</h1>

When the application is started, you will only see what you are allowed to see and, more importantly, only be able to get the data you are authorized for.

If you open a page where you have no access rights:

Notes:

This solution is very flexible and can work with any source of identity definitions, not just Azure security groups. I could very easily switch to a database. One problem: with a lot of authorization definitions, the size of the cookie might get too big, and you would need to switch from using claims in the policy definitions to using a cache database or something similar. This would also be easy to adapt, because the claims are only mapped in the policies and the IClaimsTransformation implementation. Only the policies are used in the application logic.

Links

https://chrissainty.com/securing-your-blazor-apps-configuring-policy-based-authorization-with-blazor/

https://docs.microsoft.com/en-us/aspnet/core/blazor/security


Implementing Basic Authentication in ASP.NET Core Minimal API

This post is about how to implement basic authentication in an ASP.NET Core Minimal API. A few days back I got a question / comment on the blog post about Minimal APIs – about implementing basic authentication in Minimal APIs. Since action filter support is not available in Minimal APIs, I had to find an alternative approach for the implementation. I already wrote two blog posts, Basic authentication middleware for ASP.NET 5 and Basic HTTP authentication in ASP.Net Web API, on implementing basic authentication. In this post I am implementing an AuthenticationHandler and using it for basic authentication. As I already explained the concepts in those posts, I am not discussing them again here.

Here is the implementation of the BasicAuthenticationHandler which implements the abstract class AuthenticationHandler.

public class BasicAuthenticationHandler : AuthenticationHandler<AuthenticationSchemeOptions>
{
    public BasicAuthenticationHandler(
        IOptionsMonitor<AuthenticationSchemeOptions> options,
        ILoggerFactory logger,
        UrlEncoder encoder,
        ISystemClock clock
        ) : base(options, logger, encoder, clock)
    {
    }

    protected override Task<AuthenticateResult> HandleAuthenticateAsync()
    {
        var authHeader = Request.Headers["Authorization"].ToString();
        if (authHeader != null && authHeader.StartsWith("basic", StringComparison.OrdinalIgnoreCase))
        {
            var token = authHeader.Substring("Basic ".Length).Trim();
            System.Console.WriteLine(token);
            var credentialstring = Encoding.UTF8.GetString(Convert.FromBase64String(token));
            var credentials = credentialstring.Split(':');
            if (credentials[0] == "admin" && credentials[1] == "admin")
            {
                var claims = new[] { new Claim("name", credentials[0]), new Claim(ClaimTypes.Role, "Admin") };
                var identity = new ClaimsIdentity(claims, "Basic");
                var claimsPrincipal = new ClaimsPrincipal(identity);
                return Task.FromResult(AuthenticateResult.Success(new AuthenticationTicket(claimsPrincipal, Scheme.Name)));
            }

            Response.StatusCode = 401;
            Response.Headers.Add("WWW-Authenticate", "Basic realm=\"dotnetthoughts.net\"");
            return Task.FromResult(AuthenticateResult.Fail("Invalid Authorization Header"));
        }
        else
        {
            Response.StatusCode = 401;
            Response.Headers.Add("WWW-Authenticate", "Basic realm=\"dotnetthoughts.net\"");
            return Task.FromResult(AuthenticateResult.Fail("Invalid Authorization Header"));
        }
    }
}
}

Next modify the Program.cs like this.

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddAuthentication("BasicAuthentication")
    .AddScheme<AuthenticationSchemeOptions, BasicAuthenticationHandler>
    ("BasicAuthentication", null);
builder.Services.AddAuthorization();

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}
app.UseAuthentication();
app.UseAuthorization();

app.UseHttpsRedirection();

Now it is done. You can block anonymous access by adding the Authorize attribute to the endpoint like this.

app.MapGet("/weatherforecast", [Authorize] () =>
{
    var forecast = Enumerable.Range(1, 5).Select(index =>
        new WeatherForecast
        (
            DateTime.Now.AddDays(index),
            Random.Shared.Next(-20, 55),
            summaries[Random.Shared.Next(summaries.Length)]
        ))
        .ToArray();
    return forecast;
}).WithName("GetWeatherForecast");

Now if you browse the Weather forecast endpoint – https://localhost:5001/weatherforecast, it will prompt for user name and password. Here is the screenshot of the app running on my machine.
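You can also exercise the endpoint from the command line with curl – a quick sketch (the admin/admin credentials are the hard-coded values from the handler above; -k skips dev-certificate validation):

# 401 without credentials
curl -k https://localhost:5001/weatherforecast

# 200 with the hard-coded admin/admin credentials
curl -k -u admin:admin https://localhost:5001/weatherforecast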

Happy Programming 🙂


Implement a PWA using Blazor with BFF security and Azure B2C

The article shows how to implement a progressive web application (PWA) using Blazor which is secured using the backend for frontend architecture and Azure B2C as the identity provider.

Code https://github.com/damienbod/PwaBlazorBffAzureB2C

Setup and challenges with PWAs

The application implements all security in the trusted backend to reduce the security risks of the overall software. We use Azure B2C as the identity provider. When implementing and using the BFF security architecture, cookies are used to secure the Blazor WASM UI and its backend. Microsoft.Identity.Web is used to implement the authentication, as recommended by Microsoft for server rendered applications. Anti-forgery tokens, as well as all the other cookie protections, can be used to reduce the risk of CSRF attacks. This requires that the WASM application is hosted in an ASP.NET Core Razor page so that the dynamic data can be added. With PWA applications, this is not possible. To work around this, CORS preflight and custom headers can be used to protect against CSRF, together with the same-site cookie protection. The anti-forgery cookies need to be removed to support PWAs. Using CORS preflight has some disadvantages compared to anti-forgery cookies, but works well.

Setup Blazor BFF with Azure B2C for PWA

The application is set up using the Blazor.BFF.AzureB2C.Template NuGet package. This uses anti-forgery cookies, and all of the anti-forgery protection can be completely removed. The Azure App registrations and the Azure B2C user flows need to be set up, and then the application should work (without PWA support).

To set up the PWA support, you need to add an index.html file to the wwwroot of the Blazor client and a service worker JS script to implement the PWA. The index.html file adds what is required, and the serviceWorkerRegistration.js script is linked.

<!DOCTYPE html>
<html>
<!-- PWA / Offline Version -->
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" />
    <base href="/" />
    <title>PWA Blazor Azure B2C Cookie</title>
    <link rel="stylesheet" href="css/bootstrap/bootstrap.min.css" />
    <link href="css/app.css" rel="stylesheet" />
    <link href="BlazorHosted.Client.styles.css" rel="stylesheet" />
    <link href="manifest.json" rel="manifest" />
    <link rel="apple-touch-icon" sizes="512x512" href="icon-512.png" />
</head>

<body>
    <div id="app">
        <div class="spinner d-flex align-items-center justify-content-center spinner">
            <div class="spinner-border text-success" role="status">
                <span class="sr-only">Loading...</span>
            </div>
        </div>
    </div>

    <div id="blazor-error-ui">
        An unhandled error has occurred.
        <a href="" class="reload">Reload</a>
        <a class="dismiss">🗙</a>
    </div>

    <script src="_framework/blazor.webassembly.js"></script>
    <script src="serviceWorkerRegistration.js"></script>
</body>

</html>

The serviceWorker.published.js script is pretty standard, except that the OpenID Connect redirect and signout URLs need to be excluded from the PWA and always rendered from the trusted backend. The registration script references the service worker so that inline JavaScript is removed from the HTML, because we do not allow unsafe inline scripts anywhere in an application if possible.

navigator.serviceWorker.register('service-worker.js');

The service worker excludes all the required authentication URLs and any other required server URLs. The published script registers the PWA.

Note: if you would like to test the PWA locally without deploying the application, you can reference the published script directly and it will run locally. You need to test carefully, as the script and the cache need to be emptied before each test.

// Caution! Be sure you understand the caveats before publishing an application with
// offline support. See https://aka.ms/blazor-offline-considerations

self.importScripts('./service-worker-assets.js');
self.addEventListener('install', event => event.waitUntil(onInstall(event)));
self.addEventListener('activate', event => event.waitUntil(onActivate(event)));
self.addEventListener('fetch', event => event.respondWith(onFetch(event)));

const cacheNamePrefix = 'offline-cache-';
const cacheName = `${cacheNamePrefix}${self.assetsManifest.version}`;
const offlineAssetsInclude = [/\.dll$/, /\.pdb$/, /\.wasm/, /\.html/, /\.js$/, /\.json$/, /\.css$/, /\.woff$/, /\.png$/, /\.jpe?g$/, /\.gif$/, /\.ico$/, /\.blat$/, /\.dat$/];
const offlineAssetsExclude = [/^service-worker\.js$/];

async function onInstall(event) {
    console.info('Service worker: Install');

    // Fetch and cache all matching items from the assets manifest
    const assetsRequests = self.assetsManifest.assets
        .filter(asset => offlineAssetsInclude.some(pattern => pattern.test(asset.url)))
        .filter(asset => !offlineAssetsExclude.some(pattern => pattern.test(asset.url)))
        .map(asset => new Request(asset.url, { integrity: asset.hash, cache: 'no-cache' }));

    await caches.open(cacheName).then(cache => cache.addAll(assetsRequests));
}

async function onActivate(event) {
    console.info('Service worker: Activate');

    // Delete unused caches
    const cacheKeys = await caches.keys();
    await Promise.all(cacheKeys
        .filter(key => key.startsWith(cacheNamePrefix) && key !== cacheName)
        .map(key => caches.delete(key)));
}

async function onFetch(event) {
    let cachedResponse = null;
    if (event.request.method === 'GET') {
        // For all navigation requests, try to serve index.html from cache
        // If you need some URLs to be server-rendered, edit the following check to exclude those URLs
        const shouldServeIndexHtml = event.request.mode === 'navigate'
            && !event.request.url.includes('/signin-oidc')
            && !event.request.url.includes('/signout-callback-oidc')
            && !event.request.url.includes('/api/Account/Login')
            && !event.request.url.includes('/api/Account/Logout')
            && !event.request.url.includes('/HostAuthentication/');

        const request = shouldServeIndexHtml ? 'index.html' : event.request;
        const cache = await caches.open(cacheName);
        cachedResponse = await cache.match(request);
    }

    return cachedResponse || fetch(event.request, { credentials: 'include' });
}

The ServiceWorkerAssetsManifest definition needs to be added to the client project.

<ServiceWorkerAssetsManifest>service-worker-assets.js</ServiceWorkerAssetsManifest>
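In the standard Blazor WASM PWA template, this property lives in a PropertyGroup, and a ServiceWorker item maps the dev-time script to the published one. A sketch of the relevant csproj section, assuming the standard template layout:

<PropertyGroup>
  <ServiceWorkerAssetsManifest>service-worker-assets.js</ServiceWorkerAssetsManifest>
</PropertyGroup>

<ItemGroup>
  <ServiceWorker Include="wwwroot\service-worker.js" PublishedContent="wwwroot\service-worker.published.js" />
</ItemGroup>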

Now the PWA should work. The next step is to add the extra CSRF protection.

Setup CSRF protection using CORS preflight

CORS preflight can be used to protect against CSRF, together with same-site cookies. All API calls should include a custom HTTP header, and the APIs need to verify that the header exists.

This can be implemented in the Blazor WASM client by using a message handler which adds the custom header to every HTTP request.
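The CsrfProtectionMessageHandler registered below is a DelegatingHandler which adds this header; a minimal sketch of such a handler, assuming the x-force-cors-preflight header name checked by the server-side filter:

using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class CsrfProtectionMessageHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // A custom header forces a CORS preflight for cross-origin requests,
        // which the browser only completes for permitted origins.
        request.Headers.Add("x-force-cors-preflight", "true");
        return base.SendAsync(request, cancellationToken);
    }
}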

In the Blazor client, the handler can be added to all HttpClient instances used in the Blazor WASM.

builder.Services.AddHttpClient("default", client =>
{
    client.BaseAddress = new Uri(builder.HostEnvironment.BaseAddress);
    client.DefaultRequestHeaders
        .Accept
        .Add(new MediaTypeWithQualityHeaderValue("application/json"));

}).AddHttpMessageHandler<CsrfProtectionMessageHandler>();

builder.Services.AddHttpClient("authorizedClient", client =>
{
    client.BaseAddress = new Uri(builder.HostEnvironment.BaseAddress);
    client.DefaultRequestHeaders
        .Accept
        .Add(new MediaTypeWithQualityHeaderValue("application/json"));

}).AddHttpMessageHandler<AuthorizedHandler>()
    .AddHttpMessageHandler<CsrfProtectionMessageHandler>();

The CSRF CORS preflight header can be validated using an ActionFilter in the ASP.NET Core backend application. This is not the only way of doing this. The CsrfProtectionCorsPreflightAttribute extends ActionFilterAttribute, so only the OnActionExecuting method needs to be implemented. The custom header is validated, and if the validation fails, an unauthorized result is returned. It does not matter if you give the reason why, unless you want to obfuscate this a bit.

public class CsrfProtectionCorsPreflightAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        var header = context.HttpContext
            .Request
            .Headers
            .Any(p => p.Key.ToLower() == "x-force-cors-preflight");

        if (!header)
        {
            // "X-FORCE-CORS-PREFLIGHT header is missing"
            context.Result = new UnauthorizedObjectResult("X-FORCE-CORS-PREFLIGHT header is missing");
            return;
        }
    }
}

The CSRF protection can then be applied wherever it is required. All secured routes where cookies are used should enforce this.

[CsrfProtectionCorsPreflight]
[Authorize(AuthenticationSchemes = CookieAuthenticationDefaults.AuthenticationScheme)]
[ApiController]
[Route("api/[controller]")]
public class DirectApiController : ControllerBase
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new List<string> { "some data", "more data", "loads of data" };
    }
}

Now the PWA works using the server rendered application and is protected using BFF, with all security in the trusted backend.

Problems with this solution and Blazor

The custom header cannot be added when sending direct links, redirects or forms which don't use JavaScript. Anywhere a form is implemented and the CORS preflight protection is required, an HttpClient which adds the header needs to be used.

This is a problem with the Azure B2C signin and signout. The signin redirects the whole application, but this is not much of a problem because, when signing in, the identity has no cookie with sensitive data, or should have none. The signout only works correctly with Azure B2C as a form request from the whole application, not as an HttpClient API call using JavaScript. The CORS preflight header cannot be applied to an Azure B2C identity provider signout request if you require the session to be ended on Azure B2C. If you only require a local logout, then the HttpClient can be used.
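As a sketch of that full-page signout (hedged: the /api/Account/Logout route is taken from the service worker exclusions above, and the exact markup may differ in the repository), the signout is posted as a plain form so the whole application redirects:

<form method="post" action="api/Account/Logout">
    <!-- Anti-forgery is removed for the PWA, so this form relies on
         same-site cookie protection rather than the custom header. -->
    <button type="submit" class="btn btn-link">Sign out</button>
</form>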

Note: same-site protection also exists in modern browsers, so this double CSRF fallback is not really critical if same-site is implemented correctly and a browser which enforces it is used.

Links

https://docs.microsoft.com/en-us/aspnet/core/blazor/security/webassembly/graph-api

Managing Azure B2C users with Microsoft Graph API

https://docs.microsoft.com/en-us/graph/sdks/choose-authentication-providers?tabs=CS#client-credentials-provider

https://github.com/search?q=Microsoft.Identity.Web

https://github.com/damienbod/Blazor.BFF.AzureB2C.Template

Comparing the backend for frontend (BFF) security architecture with an SPA UI using a public API

This article compares the security architecture of an application implemented as a public SPA UI with a trusted API backend against the same solution implemented using the backend for frontend (BFF) security architecture. The main difference is that the first solution is separated into two applications, implemented and deployed as two, whereas the second is a single deployment and secured as a single application. The BFF has fewer risks and is a better security architecture, but as always, no solution is perfect.

Setup BFF

The BFF solution is implemented and deployed as a single trusted application. All security is implemented in the trusted backend. The UI part of the application can only use the same-domain APIs and cannot use APIs from separate domains. This architecture is the same as a standard ASP.NET Core Razor Pages UI confidential client. All APIs can be implemented in the same server part of the application; there is no requirement for downstream APIs. Due to this architecture, no sensitive data needs to be saved in the browser. This is effectively a trusted server rendered application. As with any server rendered application, it is protected using cookies with the required cookie protections and normally authenticates against an OIDC server using a trusted, confidential client with code flow and PKCE protection.

Because the application is trusted, further protections can be added as required, for example MTLS, further OIDC FAPI requirements and so on. If downstream APIs are required, these APIs do not need to be exposed in the public zone (internet) and can be implemented using a trusted client with token binding between the client and the server. The XSS protection can be improved using a better CSP, and all front-channel cross-domain calls can be completely blocked. Dynamic data (i.e. nonces) can be used to produce the CSP. The UI can be hosted using a server rendered page, and dynamic meta data and settings can easily be added for the UI without further architecture or DevOps flows. I always host the UI part in a BFF using a server rendered file.

A big win with the BFF architecture is that the access tokens and the refresh tokens are not stored publicly in the browser. When using SignalR, the secure same-site HTTP-only cookie can be used and no token for authorization is required in the URL. This is an improvement as long as CSRF protection is applied. Extra CSRF protection is required for all server requests because cookies are used (as well as same-site). This can be implemented using anti-forgery tokens or by forcing CORS preflight using a custom header. This must be enforced on the backend for all HTTP requests where required. Because only a single application needs to be deployed, DevOps is simpler and complexity is reduced; this is my experience after using this in production with Blazor. Reduced complexity is reduced costs.

Setup SPA with public API

An SPA solution is deployed as a separate UI application and a separate public API application. Two security flows are used in this setup, and these are two completely separate applications, even though the API and UI are tightly coupled in business terms. The best and most productive solutions with this setup are where the backend APIs are made specifically for, and optimized for, the UI. The API must be public if the SPA is public. The SPA has no backend which can be used for security; tokens and sensitive data are stored in the browser and need to be accessed using JavaScript. As XSS is very hard to protect against, this will always have security risks.

When using SPAs, as the access tokens are shared around the browser or added to URLs for web sockets, it is really important to revoke the tokens on a logout. The refresh token requires specific protections for usage in SPAs. Access tokens cannot be revoked, so reference tokens with introspection used in the API are the preferred security solution. A logout is possible with introspection and reference tokens using the revocation endpoint. It is very hard to implement an SSO logout when using an SPA; this is because only the front-channel logout is possible in an SPA, and not a back-channel logout as with a server rendered application.

This setup has performance advantages compared to the BFF architecture when using downstream APIs. The APIs from different domains can be used directly. Implementing UIs with PWA requirements is easier compared to the BFF architecture. CSRF attacks are easier to secure against using tokens, but there is more risk with an XSS attack due to sensitive data in the public client.

Advantages using BFF

Single trusted application instead of two apps, public untrusted UI + public trusted API (reduced attack surface)
Trusted client protected with a secret or certificate
No access/reference tokens in the browser
No refresh token in the browser
Web sockets security improved (SignalR), no access/reference token in the URL
Backchannel logout, SSO logout possible
Improved CSP and security headers (can use dynamic data and block all other domains) => possible for better protection against XSS (depends on UI tech stack)
Can use MTLS, OIDC FAPI and client binding for all downstream API calls from the trusted UI app, so much improved security is possible for the downstream API calls.
No architecture requirement for public APIs outside same domain, downstream APIs can be deployed in a private trusted zone.
Easier to build and deploy (my experience so far); easier for me means reduced costs.
Reduced maintenance due to reduced complexity (again, my experience so far).

Disadvantages using BFF

Downstream APIs require a redirect or a second API call (YARP, OBO, OAuth2 Resource Owner Credentials Flow, certificate auth)
PWA support not out of the box
Performance is worse if downstream APIs are required (i.e. an API call not on the same domain)
All UI API POST, DELETE, PATCH and PUT HTTP requests must use an anti-forgery token or force a CORS preflight, in addition to same-site protection.
Cookies are hard to invalidate and require extra logic (is this required for a secure, HTTP-only, same-site cookie? low risk); see the sketch after this list.
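A minimal sketch of the extra invalidation logic (ISessionRevocationStore and the sid claim are hypothetical names of my own): the cookie principal is validated on every request, so a server-side revocation takes effect before the cookie itself expires:

services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie(options =>
    {
        options.Events.OnValidatePrincipal = async context =>
        {
            // ISessionRevocationStore is a hypothetical application service.
            var store = context.HttpContext.RequestServices
                .GetRequiredService<ISessionRevocationStore>();

            var sessionId = context.Principal?.FindFirst("sid")?.Value;
            if (sessionId == null || await store.IsRevokedAsync(sessionId))
            {
                // Reject the cookie principal and sign the user out.
                context.RejectPrincipal();
                await context.HttpContext.SignOutAsync(
                    CookieAuthenticationDefaults.AuthenticationScheme);
            }
        };
    });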

Discussions

I have had some excellent discussions on this topic, with very valid points and arguments against some of the points above. I would recommend reading these (link below) to get a bigger picture. Thanks kevin_chalet for the great feedback and comments.

https://github.com/openiddict/openiddict-samples/issues/180

Notes

A lot of opinions exist on this setup and I am sure lots of people see this in a different way, with very valid points. Others follow software tech religions, which prevents them from accessing and evaluating different solution architectures. Nothing is ever black or white. No one solution is best for everything, all solutions have problems, and future problems will always appear with any setup. I believe that using the BFF architecture, I can increase the security of the solutions with less effort and reduce the security risks and costs, thus creating more value for my clients. I still use SPAs with APIs and see this as a valuable and good security solution for some systems. The entry level for the BFF architecture with some tech stacks is still very high.

Links

https://github.com/damienbod/Blazor.BFF.AzureAD.Template

https://github.com/damienbod/Blazor.BFF.AzureB2C.Template

https://github.com/damienbod/Blazor.BFF.OpenIDConnect.Template

https://github.com/DuendeSoftware/BFF

https://github.com/manfredsteyer/yarp-auth-proxy

https://docs.microsoft.com/en-us/aspnet/core/blazor/

https://docs.duendesoftware.com/identityserver/v5/bff/overview/

https://github.com/berhir/BlazorWebAssemblyCookieAuth

OIDC FAPI

https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/

https://csp-evaluator.withgoogle.com/

https://docs.microsoft.com/en-us/aspnet/signalr/overview/getting-started/introduction-to-signalr

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-on-behalf-of-flow

https://microsoft.github.io/reverse-proxy/index.html

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/certauth

https://github.com/openiddict/openiddict-samples/issues/180

https://www.w3.org/TR/CSP3/

https://content-security-policy.com/

https://csp.withgoogle.com/docs/strict-csp.html

https://github.com/manfredsteyer/angular-oauth2-oidc

https://github.com/damienbod/angular-auth-oidc-client

https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-overview

https://github.com/AzureAD/microsoft-identity-web

https://github.com/damienbod/AspNetCoreOpeniddict

https://github.com/openiddict/openiddict-samples

Secure a Blazor WASM ASP.NET Core hosted APP using BFF and OpenIddict

This article shows how to implement authentication and secure a Blazor WASM application hosted in ASP.NET Core using the backend for frontend (BFF) security architecture to authenticate. All security is implemented in the backend and the Blazor WASM is a view of the ASP.NET Core application, no security is implemented in the public client. The application is a trusted client and a secret is used to authenticate the application as well as the identity. The Blazor WASM UI can only use the hosted APIs on the same domain.

Code https://github.com/damienbod/AspNetCoreOpeniddict

Setup

The Blazor WASM and the ASP.NET Core host applications are implemented as a single application and deployed as one. The server part implements the authentication using OpenID Connect. OpenIddict is used to implement the OpenID Connect server application. The code flow with PKCE and a client secret is used for authentication.

Open ID Connect Server setup

The OpenID Connect server is implemented using OpenIddict. This is a standard implementation, following the documentation. The worker class implements the IHostedService interface and is used to add the code flow client used by the Blazor ASP.NET Core application. PKCE is added, as well as a client secret.

static async Task RegisterApplicationsAsync(IServiceProvider provider)
{
    var manager = provider.GetRequiredService<IOpenIddictApplicationManager>();

    // Blazor Hosted
    if (await manager.FindByClientIdAsync("blazorcodeflowpkceclient") is null)
    {
        await manager.CreateAsync(new OpenIddictApplicationDescriptor
        {
            ClientId = "blazorcodeflowpkceclient",
            ConsentType = ConsentTypes.Explicit,
            DisplayName = "Blazor code PKCE",
            DisplayNames =
            {
                [CultureInfo.GetCultureInfo("fr-FR")] = "Application cliente MVC"
            },
            PostLogoutRedirectUris =
            {
                new Uri("https://localhost:44348/signout-callback-oidc"),
                new Uri("https://localhost:5001/signout-callback-oidc")
            },
            RedirectUris =
            {
                new Uri("https://localhost:44348/signin-oidc"),
                new Uri("https://localhost:5001/signin-oidc")
            },
            ClientSecret = "codeflow_pkce_client_secret",
            Permissions =
            {
                Permissions.Endpoints.Authorization,
                Permissions.Endpoints.Logout,
                Permissions.Endpoints.Token,
                Permissions.Endpoints.Revocation,
                Permissions.GrantTypes.AuthorizationCode,
                Permissions.GrantTypes.RefreshToken,
                Permissions.ResponseTypes.Code,
                Permissions.Scopes.Email,
                Permissions.Scopes.Profile,
                Permissions.Scopes.Roles,
                Permissions.Prefixes.Scope + "dataEventRecords"
            },
            Requirements =
            {
                Requirements.Features.ProofKeyForCodeExchange
            }
        });
    }
}

Blazor client Application

The client application was created using the Blazor.BFF.OpenIDConnect.Template NuGet template package. The configuration is read from the app settings using the OpenIDConnectSettings section. You could add more configuration if required. This is otherwise a standard OpenID Connect client and will work with any OIDC compatible server. PKCE is required, as well as a secret to validate the application. The AddAntiforgery method is used so that API calls can be forced to validate an anti-forgery token, protecting against CSRF in addition to the same-site cookie protection.

public void ConfigureServices(IServiceCollection services)
{
    services.AddAntiforgery(options =>
    {
        options.HeaderName = "X-XSRF-TOKEN";
        options.Cookie.Name = "__Host-X-XSRF-TOKEN";
        options.Cookie.SameSite = Microsoft.AspNetCore.Http.SameSiteMode.Strict;
        options.Cookie.SecurePolicy = Microsoft.AspNetCore.Http.CookieSecurePolicy.Always;
    });

    services.AddHttpClient();
    services.AddOptions();

    var openIDConnectSettings = Configuration.GetSection("OpenIDConnectSettings");

    services.AddAuthentication(options =>
    {
        options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddCookie()
    .AddOpenIdConnect(options =>
    {
        options.SignInScheme = "Cookies";
        options.Authority = openIDConnectSettings["Authority"];
        options.ClientId = openIDConnectSettings["ClientId"];
        options.ClientSecret = openIDConnectSettings["ClientSecret"];
        options.RequireHttpsMetadata = true;
        options.ResponseType = "code";
        options.UsePkce = true;
        options.Scope.Add("profile");
        options.Scope.Add("offline_access");
        options.SaveTokens = true;
        options.GetClaimsFromUserInfoEndpoint = true;
        //options.ClaimActions.MapUniqueJsonKey("preferred_username", "preferred_username");
    });

    services.AddControllersWithViews(options =>
        options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute()));

    services.AddRazorPages().AddMvcOptions(options =>
    {
        //var policy = new AuthorizationPolicyBuilder()
        //    .RequireAuthenticatedUser()
        //    .Build();
        //options.Filters.Add(new AuthorizeFilter(policy));
    });
}

The OIDC configuration settings are read from the OpenIDConnectSettings section. This can be extended if further specific settings are required.

"OpenIDConnectSettings": {
    "Authority": "https://localhost:44395",
    "ClientId": "blazorcodeflowpkceclient",
    "ClientSecret": "codeflow_pkce_client_secret"
},

The NetEscapades.AspNetCore.SecurityHeaders NuGet package is used to add security headers to the application to protect the session. The configuration is set up for Blazor.

public static HeaderPolicyCollection GetHeaderPolicyCollection(bool isDev, string idpHost)
{
    var policy = new HeaderPolicyCollection()
        .AddFrameOptionsDeny()
        .AddXssProtectionBlock()
        .AddContentTypeOptionsNoSniff()
        .AddReferrerPolicyStrictOriginWhenCrossOrigin()
        .AddCrossOriginOpenerPolicy(builder =>
        {
            builder.SameOrigin();
        })
        .AddCrossOriginResourcePolicy(builder =>
        {
            builder.SameOrigin();
        })
        .AddCrossOriginEmbedderPolicy(builder => // remove for dev if using hot reload
        {
            builder.RequireCorp();
        })
        .AddContentSecurityPolicy(builder =>
        {
            builder.AddObjectSrc().None();
            builder.AddBlockAllMixedContent();
            builder.AddImgSrc().Self().From("data:");
            builder.AddFormAction().Self().From(idpHost);
            builder.AddFontSrc().Self();
            builder.AddStyleSrc().Self();
            builder.AddBaseUri().Self();
            builder.AddFrameAncestors().None();

            // due to Blazor
            builder.AddScriptSrc()
                .Self()
                .WithHash256("v8v3RKRPmN4odZ1CWM5gw80QKPCCWMcpNeOmimNL2AA=")
                .UnsafeEval();

            // Blazor hot reload requires you to disable script and style CSP protection;
            // if using hot reload, DO NOT deploy with an insecure CSP
        })
        .RemoveServerHeader()
        .AddPermissionsPolicy(builder =>
        {
            builder.AddAccelerometer().None();
            builder.AddAutoplay().None();
            builder.AddCamera().None();
            builder.AddEncryptedMedia().None();
            builder.AddFullscreen().All();
            builder.AddGeolocation().None();
            builder.AddGyroscope().None();
            builder.AddMagnetometer().None();
            builder.AddMicrophone().None();
            builder.AddMidi().None();
            builder.AddPayment().None();
            builder.AddPictureInPicture().None();
            builder.AddSyncXHR().None();
            builder.AddUsb().None();
        });

    if (!isDev)
    {
        // max-age = one year in seconds
        policy.AddStrictTransportSecurityMaxAgeIncludeSubDomains(maxAgeInSeconds: 60 * 60 * 24 * 365);
    }

    return policy;
}

The APIs used by the Blazor UI are protected by the ValidateAntiForgeryToken and Authorize attributes. You could add authorization policies as well if required. Cookies are used for this API with same-site protection.

[ValidateAntiForgeryToken]
[Authorize(AuthenticationSchemes = CookieAuthenticationDefaults.AuthenticationScheme)]
[ApiController]
[Route("api/[controller]")]
public class DirectApiController : ControllerBase
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new List<string> { "some data", "more data", "loads of data" };
    }
}
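The Blazor client must send the anti-forgery token in the X-XSRF-TOKEN header with these API calls. A minimal sketch of a delegating handler for this follows; the GetAntiforgeryToken helper is hypothetical, since the way the request token reaches the WASM client (for example, rendered into the host page) varies by template:

public class AntiforgeryHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Attach the request token so AutoValidateAntiforgeryTokenAttribute
        // on the server accepts the state-changing request.
        request.Headers.Add("X-XSRF-TOKEN", GetAntiforgeryToken());
        return base.SendAsync(request, cancellationToken);
    }

    // Hypothetical helper: the BFF templates deliver the request token
    // to the client in their own way, e.g. via the server rendered host page.
    private static string GetAntiforgeryToken() => "token-from-host-page";
}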

When the application is started, the user can sign in and authenticate using OpenIddict.

The setup keeps all the security implementation in the trusted backend. This setup can work against any OpenID Connect conform server. By having a trusted application, it is now possible to implement access to downstream APIs in a number of ways and to add further protections as required. The downstream API does not need to be public either. You should only use a downstream API if required. If a software architecture forces you to use APIs from separate domains, then a YARP reverse proxy can be used to access the API, or a service to service API call (i.e. a trusted client with a trusted server), or an on-behalf-of (OBO) flow can be used. A minimal YARP sketch is shown below.
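As a minimal sketch (assuming the Yarp.ReverseProxy package; the route, cluster and destination address are placeholders of my own), the BFF registers the proxy from configuration and forwards same-domain UI calls to the downstream API, so the browser never leaves the BFF domain:

services.AddReverseProxy()
    .LoadFromConfig(Configuration.GetSection("ReverseProxy"));

// appsettings.json (placeholder values)
"ReverseProxy": {
  "Routes": {
    "downstreamApi": {
      "ClusterId": "downstreamApiCluster",
      "Match": { "Path": "/api/{**catch-all}" }
    }
  },
  "Clusters": {
    "downstreamApiCluster": {
      "Destinations": {
        "destination1": { "Address": "https://downstream-api.example.com/" }
      }
    }
  }
}

The proxy is then mapped in the endpoint routing with app.MapReverseProxy().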

Links

https://documentation.openiddict.com/

https://github.com/damienbod/Blazor.BFF.OpenIDConnect.Template

https://github.com/andrewlock/NetEscapades.AspNetCore.SecurityHeaders

Implement Compound Proof BBS+ verifiable credentials using ASP.NET Core and MATTR

This article shows how Zero Knowledge Proof (ZKP) BBS+ verifiable credentials can be used to verify credential subject data from two separate verifiable credentials, implemented in ASP.NET Core and MATTR. The ZKP BBS+ verifiable credentials are issued and stored on a digital wallet using a Self-Issued Identity Provider (SIOP) and OpenID Connect. A compound proof presentation template is created to verify the user data in a single verification.

Code: https://github.com/swiss-ssi-group/MattrAspNetCoreCompoundProofBBS

Blogs in the series

Getting started with Self Sovereign Identity SSI
Create an OIDC credential Issuer with MATTR and ASP.NET Core
Present and Verify Verifiable Credentials in ASP.NET Core using Decentralized Identities and MATTR
Verify vaccination data using Zero Knowledge Proofs with ASP.NET Core and MATTR
Challenges to Self Sovereign Identity
Implement Compound Proof BBS+ verifiable credentials using ASP.NET Core and MATTR

What are ZKP BBS+ verifiable credentials

BBS+ verifiable credentials are built using JSON-LD and make it possible to support selective disclosure of subject claims from a verifiable credential, compound proofs of different VCs, zero knowledge proofs where the subject claims do not need to be exposed to verify something, private holder binding, and prevention of tracking. The specification and implementations are still a work in progress.

Setup

The solution is set up to issue and verify the BBS+ verifiable credentials. The credential issuers are implemented in ASP.NET Core, as is the verifiable credential verifier. One credential issuer implements a BBS+ JSON-LD E-ID verifiable credential using SIOP together with Auth0 as the identity provider and the MATTR API, which implements the access to the ledger as well as the logic for creating and verifying the verifiable credential according to the SSI specifications. The second credential issuer implements a county of residence BBS+ verifiable credential issuer like the first one. The ASP.NET Core verifier project uses a BBS+ verify presentation to verify that a user has the correct E-ID credentials and the county residence verifiable credentials in one request. This is presented as a compound proof using credential subject data from both verifiable credentials. The credentials are presented from the MATTR wallet to the ASP.NET Core verifier application.

The BBS+ compound proof is made up of the two verifiable credentials stored on the wallet. The holder of the wallet owns the credentials and can be trusted to a fairly high level, because SIOP was used to add the credentials to the MATTR wallet, which requires a user authentication on the wallet using OpenID Connect. If the host system has strong authentication, the user of the wallet is probably the same person for whom the credentials were intended and issued. We can only prove that the verifiable credentials are valid; we cannot prove that the person sending the credentials is also the subject of the credentials or has the authorization to act on behalf of the credential subject. With SIOP, we know that the credentials were issued in a way which allows for strong authentication.

Implementing the Credential Issuers

The credentials are created using a credential issuer and can be added to the user's wallet using SIOP. An ASP.NET Core application is used to implement the MATTR API client for creating and issuing the credentials. Auth0 is used for the OIDC server, and the profiles used in the verifiable credentials are added there. The Auth0 server is part of the credential issuer service business. The application has two separate flows: one for users, or holders of the credentials, and one for credential issuer administrators.

An administrator can sign in to the credential issuer ASP.NET Core application using OIDC and can create new OIDC credential issuers using BBS+. Once created, the callback URL for the credential issuer needs to be added to the Auth0 client application as a redirect URL.

A user can log in to the ASP.NET Core application and request the verifiable credentials, only for themselves. This is not authenticated on the ASP.NET Core application, but on the wallet application using the SIOP flow. The application presents a QR Code which starts the flow. Once authenticated, the credentials are added to the digital wallet. Both the E-ID and the county of residence credentials are added and stored on the wallet.

Auth0 Auth pipeline rules

The credential subject claims added to the verifiable credential use the profile data from the Auth0 identity provider. This data can be added using an Auth0 auth pipeline rule. Once defined, if the user has the profile data, the verifiable credentials can be created from it.

function (user, context, callback) {
    const namespace = 'https://damianbod-sandbox.vii.mattr.global/';
    context.idToken[namespace + 'name'] = user.user_metadata.name;
    context.idToken[namespace + 'first_name'] = user.user_metadata.first_name;
    context.idToken[namespace + 'date_of_birth'] = user.user_metadata.date_of_birth;

    context.idToken[namespace + 'family_name'] = user.user_metadata.family_name;
    context.idToken[namespace + 'given_name'] = user.user_metadata.given_name;

    context.idToken[namespace + 'birth_place'] = user.user_metadata.birth_place;
    context.idToken[namespace + 'gender'] = user.user_metadata.gender;
    context.idToken[namespace + 'height'] = user.user_metadata.height;
    context.idToken[namespace + 'nationality'] = user.user_metadata.nationality;

    context.idToken[namespace + 'address_country'] = user.user_metadata.address_country;
    context.idToken[namespace + 'address_locality'] = user.user_metadata.address_locality;
    context.idToken[namespace + 'address_region'] = user.user_metadata.address_region;
    context.idToken[namespace + 'street_address'] = user.user_metadata.street_address;
    context.idToken[namespace + 'postal_code'] = user.user_metadata.postal_code;

    callback(null, user, context);
}

Once issued, the verifiable credential is saved to the digital wallet like this:

{
  "type": [
    "VerifiableCredential",
    "VerifiableCredentialExtension"
  ],
  "issuer": {
    "id": "did:key:zUC7GiWMGY2pynrFG7TcstDiZeNKfpMPY8YT5z4xgd58wE927UxaJfaqFuXb9giCS1diTwLi8G18hRgZ928b4qd8nkPRdZCEaBGChGSjUzfFDm6Tyio1GN2npT9o7K5uu8mDs2g",
    "name": "damianbod-sandbox.vii.mattr.global"
  },
  "name": "EID",
  "issuanceDate": "2021-12-04T11:47:41.319Z",
  "credentialSubject": {
    "id": "did:key:z6MkmGHPWdKjLqiTydLHvRRdHPNDdUDKDudjiF87RNFjM2fb",
    "family_name": "Bob",
    "given_name": "Lammy",
    "date_of_birth": "1953-07-21",
    "birth_place": "Seattle",
    "height": "176cm",
    "nationality": "USA",
    "gender": "Male"
  },
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://w3id.org/security/bbs/v1",
    {
      "@vocab": "https://w3id.org/security/undefinedTerm#"
    },
    "https://mattr.global/contexts/vc-extensions/v1",
    "https://schema.org",
    "https://w3id.org/vc-revocation-list-2020/v1"
  ],
  "credentialStatus": {
    "id": "https://damianbod-sandbox.vii.mattr.global/core/v1/revocation-lists/dd507c44-044c-433b-98ab-6fa9934d6b01#0",
    "type": "RevocationList2020Status",
    "revocationListIndex": "0",
    "revocationListCredential": "https://damianbod-sandbox.vii.mattr.global/core/v1/revocation-lists/dd507c44-044c-433b-98ab-6fa9934d6b01"
  },
  "proof": {
    "type": "BbsBlsSignature2020",
    "created": "2021-12-04T11:47:42Z",
    "proofPurpose": "assertionMethod",
    "proofValue": "qquknHC7zaklJd0/IbceP0qC9sGYfkwszlujrNQn+RFg1/lUbjCe85Qnwed7QBQkIGnYRHydZiD+8wJG8/R5i8YPJhWuneWNE151GbPTaMhGNZtM763yi2A11xYLmB86x0d1JLdHaO30NleacpTs9g==",
    "verificationMethod": "did:key:zUC7GiWMGY2pynrFG7TcstDiZeNKfpMPY8YT5z4xgd58wE927UxaJfaqFuXb9giCS1diTwLi8G18hRgZ928b4qd8nkPRdZCEaBGChGSjUzfFDm6Tyio1GN2npT9o7K5uu8mDs2g#zUC7GiWMGY2pynrFG7TcstDiZeNKfpMPY8YT5z4xgd58wE927UxaJfaqFuXb9giCS1diTwLi8G18hRgZ928b4qd8nkPRdZCEaBGChGSjUzfFDm6Tyio1GN2npT9o7K5uu8mDs2g"
  }
}

For more information on adding BBS+ verifiable credentials using MATTR, see the documentation, or a previous blog in this series.

Verifying the compound proof BBS+ verifiable credential

The verifier application needs to use both the E-ID and the county of residence verifiable credentials. This is done using a presentation template which is specific to the MATTR platform. Once created, a verify request is created using this template and presented to the user in the UI as a QR code. The holder of the wallet can scan this code and the verification begins. The wallet uses the verification request and tries to find the credentials on the wallet which match what was requested. If the wallet has the data from the correct issuers and the holder of the wallet consents, the data is sent to the verifier application in a new presentation verifiable credential using the credential subject data from both of the existing verifiable credentials stored on the wallet. A webhook or an API on the verifier application handles this and validates the request. If all is good, the data is persisted and the UI is updated using SignalR messaging.

Creating a verifier presentation template

Before verifier presentations can be sent to the digital wallet, a template needs to be created in the MATTR platform. The CreatePresentationTemplate Razor page is used to create a new template. The template requires the two DIDs used for issuing the credentials from the credential issuer applications.

public class CreatePresentationTemplateModel : PageModel
{
    private readonly MattrPresentationTemplateService _mattrVerifyService;

    public bool CreatingPresentationTemplate { get; set; } = true;
    public string TemplateId { get; set; }

    [BindProperty]
    public PresentationTemplate PresentationTemplate { get; set; }

    public CreatePresentationTemplateModel(MattrPresentationTemplateService mattrVerifyService)
    {
        _mattrVerifyService = mattrVerifyService;
    }

    public void OnGet()
    {
        PresentationTemplate = new PresentationTemplate();
    }

    public async Task<IActionResult> OnPostAsync()
    {
        if (!ModelState.IsValid)
        {
            return Page();
        }

        TemplateId = await _mattrVerifyService.CreatePresentationTemplateId(
            PresentationTemplate.DidEid, PresentationTemplate.DidCountyResidence);

        CreatingPresentationTemplate = false;
        return Page();
    }
}

public class PresentationTemplate
{
    [Required]
    public string DidEid { get; set; }

    [Required]
    public string DidCountyResidence { get; set; }
}

The MattrPresentationTemplateService class implements the logic required to create a new presentation template. The service gets a new access token for your MATTR tenant and creates a new template using the credential subjects required and the correct contexts. BBS+ and frames require specific contexts. The CredentialQuery2 has two separate Frame items, one for each verifiable credential created and stored on the digital wallet.

public class MattrPresentationTemplateService
{
    private readonly IHttpClientFactory _clientFactory;
    private readonly MattrTokenApiService _mattrTokenApiService;
    private readonly VerifyEidCountyResidenceDbService _verifyEidAndCountyResidenceDbService;
    private readonly MattrConfiguration _mattrConfiguration;

    public MattrPresentationTemplateService(IHttpClientFactory clientFactory,
        IOptions<MattrConfiguration> mattrConfiguration,
        MattrTokenApiService mattrTokenApiService,
        VerifyEidCountyResidenceDbService verifyEidAndCountyResidenceDbService)
    {
        _clientFactory = clientFactory;
        _mattrTokenApiService = mattrTokenApiService;
        _verifyEidAndCountyResidenceDbService = verifyEidAndCountyResidenceDbService;
        _mattrConfiguration = mattrConfiguration.Value;
    }

    public async Task<string> CreatePresentationTemplateId(string didEid, string didCountyResidence)
    {
        // create a new template
        var v1PresentationTemplateResponse = await CreateMattrPresentationTemplate(didEid, didCountyResidence);

        // save to db
        var template = new EidCountyResidenceDataPresentationTemplate
        {
            DidEid = didEid,
            DidCountyResidence = didCountyResidence,
            TemplateId = v1PresentationTemplateResponse.Id,
            MattrPresentationTemplateReponse = JsonConvert.SerializeObject(v1PresentationTemplateResponse)
        };
        await _verifyEidAndCountyResidenceDbService.CreateEidAndCountyResidenceDataTemplate(template);

        return v1PresentationTemplateResponse.Id;
    }

    private async Task<V1_PresentationTemplateResponse> CreateMattrPresentationTemplate(string didId, string didCountyResidence)
    {
        HttpClient client = _clientFactory.CreateClient();
        var accessToken = await _mattrTokenApiService.GetApiToken(client, "mattrAccessToken");

        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        client.DefaultRequestHeaders.TryAddWithoutValidation("Content-Type", "application/json");

        var v1PresentationTemplateResponse = await CreateMattrPresentationTemplate(client, didId, didCountyResidence);
        return v1PresentationTemplateResponse;
    }

    private async Task<V1_PresentationTemplateResponse> CreateMattrPresentationTemplate(
        HttpClient client, string didEid, string didCountyResidence)
    {
        // create presentation, post to presentations templates api
        // https://learn.mattr.global/tutorials/verify/presentation-request-template
        // https://learn.mattr.global/tutorials/verify/presentation-request-template#create-a-privacy-preserving-presentation-request-template-for-zkp-enabled-credentials

        var createPresentationsTemplatesUrl = $"https://{_mattrConfiguration.TenantSubdomain}/v1/presentations/templates";

        var eidAdditionalPropertiesCredentialSubject = new Dictionary<string, object>();
        eidAdditionalPropertiesCredentialSubject.Add("credentialSubject", new EidDataCredentialSubject
        {
            Explicit = true
        });

        var countyResidenceAdditionalPropertiesCredentialSubject = new Dictionary<string, object>();
        countyResidenceAdditionalPropertiesCredentialSubject.Add("credentialSubject", new CountyResidenceDataCredentialSubject
        {
            Explicit = true
        });

        var additionalPropertiesCredentialQuery = new Dictionary<string, object>();
        additionalPropertiesCredentialQuery.Add("required", true);

        var additionalPropertiesQuery = new Dictionary<string, object>();
        additionalPropertiesQuery.Add("type", "QueryByFrame");
        additionalPropertiesQuery.Add("credentialQuery", new List<CredentialQuery2> {
            new CredentialQuery2
            {
                Reason = "Please provide your E-ID",
                TrustedIssuer = new List<TrustedIssuer>{
                    new TrustedIssuer
                    {
                        Required = true,
                        Issuer = didEid // DID used to create the OIDC credential issuer
                    }
                },
                Frame = new Frame
                {
                    Context = new List<object>{
                        "https://www.w3.org/2018/credentials/v1",
                        "https://w3id.org/security/bbs/v1",
                        "https://mattr.global/contexts/vc-extensions/v1",
                        "https://schema.org",
                        "https://w3id.org/vc-revocation-list-2020/v1"
                    },
                    Type = "VerifiableCredential",
                    AdditionalProperties = eidAdditionalPropertiesCredentialSubject
                },
                AdditionalProperties = additionalPropertiesCredentialQuery
            },
            new CredentialQuery2
            {
                Reason = "Please provide your Residence data",
                TrustedIssuer = new List<TrustedIssuer>{
                    new TrustedIssuer
                    {
                        Required = true,
                        Issuer = didCountyResidence // DID used to create the OIDC credential issuer
                    }
                },
                Frame = new Frame
                {
                    Context = new List<object>{
                        "https://www.w3.org/2018/credentials/v1",
                        "https://w3id.org/security/bbs/v1",
                        "https://mattr.global/contexts/vc-extensions/v1",
                        "https://schema.org",
                        "https://w3id.org/vc-revocation-list-2020/v1"
                    },
                    Type = "VerifiableCredential",
                    AdditionalProperties = countyResidenceAdditionalPropertiesCredentialSubject
                },
                AdditionalProperties = additionalPropertiesCredentialQuery
            }
        });

        var payload = new MattrOpenApiClient.V1_CreatePresentationTemplate
        {
            Domain = _mattrConfiguration.TenantSubdomain,
            Name = "zkp-eid-county-residence-compound",
            Query = new List<Query>
            {
                new Query
                {
                    AdditionalProperties = additionalPropertiesQuery
                }
            }
        };

        var payloadJson = JsonConvert.SerializeObject(payload);

        var uri = new Uri(createPresentationsTemplatesUrl);

        using (var content = new StringContentWithoutCharset(payloadJson, "application/json"))
        {
            var presentationTemplateResponse = await client.PostAsync(uri, content);

            if (presentationTemplateResponse.StatusCode == System.Net.HttpStatusCode.Created)
            {
                var v1PresentationTemplateResponse = JsonConvert
                    .DeserializeObject<MattrOpenApiClient.V1_PresentationTemplateResponse>(
                        await presentationTemplateResponse.Content.ReadAsStringAsync());

                return v1PresentationTemplateResponse;
            }

            var error = await presentationTemplateResponse.Content.ReadAsStringAsync();
        }

        throw new Exception("whoops something went wrong");
    }
}

public class EidDataCredentialSubject
{
    [Newtonsoft.Json.JsonProperty("@explicit", Required = Newtonsoft.Json.Required.Always)]
    public bool Explicit { get; set; }

    [Newtonsoft.Json.JsonProperty("family_name", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object FamilyName { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("given_name", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object GivenName { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("date_of_birth", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object DateOfBirth { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("birth_place", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object BirthPlace { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("height", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object Height { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("nationality", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object Nationality { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("gender", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object Gender { get; set; } = new object();
}

public class CountyResidenceDataCredentialSubject
{
    [Newtonsoft.Json.JsonProperty("@explicit", Required = Newtonsoft.Json.Required.Always)]
    public bool Explicit { get; set; }

    [Newtonsoft.Json.JsonProperty("family_name", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object FamilyName { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("given_name", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object GivenName { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("date_of_birth", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object DateOfBirth { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("address_country", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object AddressCountry { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("address_locality", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object AddressLocality { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("address_region", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object AddressRegion { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("street_address", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object StreetAddress { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("postal_code", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object PostalCode { get; set; } = new object();
}

When the presentation template is created, the following JSON payload is returned. This is what is used to create verifier presentation requests. The context must contain the context values of the credentials on the wallet. You can also verify that the trusted issuer matches and that the two Frame objects are created correctly with the required values.

{
  "id": "f188df35-e76f-4794-8e64-eedbe0af2b19",
  "domain": "damianbod-sandbox.vii.mattr.global",
  "name": "zkp-eid-county-residence-compound",
  "query": [
    {
      "type": "QueryByFrame",
      "credentialQuery": [
        {
          "reason": "Please provide your E-ID",
          "frame": {
            "@context": [
              "https://www.w3.org/2018/credentials/v1",
              "https://w3id.org/security/bbs/v1",
              "https://mattr.global/contexts/vc-extensions/v1",
              "https://schema.org",
              "https://w3id.org/vc-revocation-list-2020/v1"
            ],
            "type": "VerifiableCredential",
            "credentialSubject": {
              "@explicit": true,
              "family_name": {},
              "given_name": {},
              "date_of_birth": {},
              "birth_place": {},
              "height": {},
              "nationality": {},
              "gender": {}
            }
          },
          "trustedIssuer": [
            {
              "required": true,
              "issuer": "did:key:zUC7GiWMGY2pynrFG7TcstDiZeNKfpMPY8YT5z4xgd58wE927UxaJfaqFuXb9giCS1diTwLi8G18hRgZ928b4qd8nkPRdZCEaBGChGSjUzfFDm6Tyio1GN2npT9o7K5uu8mDs2g"
            }
          ],
          "required": true
        },
        {
          "reason": "Please provide your Residence data",
          "frame": {
            "@context": [
              "https://www.w3.org/2018/credentials/v1",
              "https://w3id.org/security/bbs/v1",
              "https://mattr.global/contexts/vc-extensions/v1",
              "https://schema.org",
              "https://w3id.org/vc-revocation-list-2020/v1"
            ],
            "type": "VerifiableCredential",
            "credentialSubject": {
              "@explicit": true,
              "family_name": {},
              "given_name": {},
              "date_of_birth": {},
              "address_country": {},
              "address_locality": {},
              "address_region": {},
              "street_address": {},
              "postal_code": {}
            }
          },
          "trustedIssuer": [
            {
              "required": true,
              "issuer": "did:key:zUC7G95fmyuYXNP2oqhhWkysmMPafU4dUWtqzXSsijsLCVauFDhAB7Dqbk2LCeo488j9iWGLXCL59ocYzhTmS3U7WNdukoJ2A8Z8AVCzeS5TySDJcYCjzuaPm7voPGPqtYa6eLV"
            }
          ],
          "required": true
        }
      ]
    }
  ]
}

The presentation template is ready and can be used now. This is just a definition used by the MATTR platform and is not saved to the ledger.

Creating a verifier request and present QR Code

Now that we have a presentation template, we initialize a verifier presentation request and present this as a QR Code for the holder of the digital wallet to scan. The CreateVerifyCallback method creates the verification and returns a signed token which is added to the QR Code. The challengeId is encoded in base64 because we use this in the URL to handle the webhook callback.

public class CreateVerifierDisplayQrCodeModel : PageModel
{
    private readonly MattrCredentialVerifyCallbackService _mattrCredentialVerifyCallbackService;

    public bool CreatingVerifier { get; set; } = true;
    public string QrCodeUrl { get; set; }

    [BindProperty]
    public string ChallengeId { get; set; }

    [BindProperty]
    public string Base64ChallengeId { get; set; }

    [BindProperty]
    public CreateVerifierDisplayQrCodeCallbackUrl CallbackUrlDto { get; set; }

    public CreateVerifierDisplayQrCodeModel(MattrCredentialVerifyCallbackService mattrCredentialVerifyCallbackService)
    {
        _mattrCredentialVerifyCallbackService = mattrCredentialVerifyCallbackService;
    }

    public void OnGet()
    {
        CallbackUrlDto = new CreateVerifierDisplayQrCodeCallbackUrl();
        CallbackUrlDto.CallbackUrl = $"https://{HttpContext.Request.Host.Value}";
    }

    public async Task<IActionResult> OnPostAsync()
    {
        if (!ModelState.IsValid)
        {
            return Page();
        }

        var result = await _mattrCredentialVerifyCallbackService
            .CreateVerifyCallback(CallbackUrlDto.CallbackUrl);

        CreatingVerifier = false;

        var walletUrl = result.WalletUrl.Trim();
        ChallengeId = result.ChallengeId;
        var valueBytes = Encoding.UTF8.GetBytes(ChallengeId);
        Base64ChallengeId = Convert.ToBase64String(valueBytes);

        VerificationRedirectController.WalletUrls.Add(Base64ChallengeId, walletUrl);

        // https://learn.mattr.global/tutorials/verify/using-callback/callback-e-to-e#redirect-urls
        //var qrCodeUrl = $"didcomm://{walletUrl}";

        QrCodeUrl = $"didcomm://https://{HttpContext.Request.Host.Value}/VerificationRedirect/{Base64ChallengeId}";
        return Page();
    }
}

public class CreateVerifierDisplayQrCodeCallbackUrl
{
    [Required]
    public string CallbackUrl { get; set; }
}

The CreateVerifyCallback method uses the host as the base URL for the callback definition which is included in the verification. An access token is requested for the MATTR API; this is used for all the requests. The last issued template is used in the verification. A new DID is created, or the existing DID for this verifier is used, to attach the verify presentation on the ledger. The InvokePresentationRequest method is used to initialize the verification presentation. This request uses the templateId, the callback URL and the DID. Part of the body payload of the response is signed and this is returned to the Razor page to be displayed as part of the QR code. This signed token is long, so a didcomm redirect is used in the QR Code rather than placing the value directly in the Razor page.

/// <summary>
/// https://learn.mattr.global/tutorials/verify/using-callback/callback-e-to-e
/// </summary>
/// <param name="callbackBaseUrl"></param>
/// <returns></returns>
public async Task<(string WalletUrl, string ChallengeId)> CreateVerifyCallback(string callbackBaseUrl)
{
    callbackBaseUrl = callbackBaseUrl.Trim();
    if (!callbackBaseUrl.EndsWith('/'))
    {
        callbackBaseUrl = $"{callbackBaseUrl}/";
    }

    var callbackUrlFull = $"{callbackBaseUrl}{MATTR_CALLBACK_VERIFY_PATH}";
    var challenge = GetEncodedRandomString();

    HttpClient client = _clientFactory.CreateClient();
    var accessToken = await _mattrTokenApiService.GetApiToken(client, "mattrAccessToken");

    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", accessToken);
    client.DefaultRequestHeaders.TryAddWithoutValidation("Content-Type", "application/json");

    var template = await _VerifyEidAndCountyResidenceDbService.GetLastPresentationTemplate();

    var didToVerify = await _mattrCreateDidService.GetDidOrCreate("did_for_verify");
    // Request DID from ledger
    V1_GetDidResponse did = await RequestDID(didToVerify.Did, client);

    // Invoke the Presentation Request
    var invokePresentationResponse = await InvokePresentationRequest(
        client,
        didToVerify.Did,
        template.TemplateId,
        challenge,
        callbackUrlFull);

    // Sign and Encode the Presentation Request body
    var signAndEncodePresentationRequestBodyResponse = await SignAndEncodePresentationRequestBody(
        client, did, invokePresentationResponse);

    // fix strange DTO: strip the surrounding quotes from the response
    var jws = signAndEncodePresentationRequestBodyResponse.Replace("\"", "");

    // save to db
    var vaccinationDataPresentationVerify = new EidCountyResidenceDataPresentationVerify
    {
        DidEid = template.DidEid,
        DidCountyResidence = template.DidCountyResidence,
        TemplateId = template.TemplateId,
        CallbackUrl = callbackUrlFull,
        Challenge = challenge,
        InvokePresentationResponse = JsonConvert.SerializeObject(invokePresentationResponse),
        Did = JsonConvert.SerializeObject(did),
        SignAndEncodePresentationRequestBody = jws
    };
    await _VerifyEidAndCountyResidenceDbService.CreateEidAndCountyResidenceDataPresentationVerify(vaccinationDataPresentationVerify);

    var walletUrl = $"https://{_mattrConfiguration.TenantSubdomain}/?request={jws}";

    return (walletUrl, challenge);
}

The QR Code is displayed in the UI.

Once the QR Code is created and scanned, the SignalR client starts listening for messages returned for the challengeId.

@section scripts {
<script src="~/js/qrcode.min.js"></script>
<script type="text/javascript">
    new QRCode(document.getElementById("qrCode"),
    {
        text: "@Html.Raw(Model.QrCodeUrl)",
        width: 300,
        height: 300,
        correctLevel: QRCode.CorrectLevel.L
    });

    var connection = new signalR.HubConnectionBuilder().withUrl("/mattrVerifiedSuccessHub").build();

    connection.on("MattrCallbackSuccess", function (base64ChallengeId) {
        console.log("received verification: " + base64ChallengeId);
        window.location.href = "/VerifiedUser?base64ChallengeId=" + base64ChallengeId;
    });

    connection.start().then(function () {
        console.log(connection.connectionId);
        const base64ChallengeId = $("#Base64ChallengeId").val();
        console.warn("base64ChallengeId: " + base64ChallengeId);

        if (base64ChallengeId) {
            console.log(base64ChallengeId);
            // join message
            connection.invoke("AddChallenge", base64ChallengeId, connection.connectionId).catch(function (err) {
                return console.error(err.toString());
            });
        }
    }).catch(function (err) {
        return console.error(err.toString());
    });
</script>
}
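For context, a minimal sketch of what the MattrVerifiedSuccessHub could look like on the server (my own simplified version, matching how the callback controller below uses it): the hub maps the base64 challengeId to the SignalR connection id so the callback can notify the correct browser client:

public class MattrVerifiedSuccessHub : Hub
{
    // In-memory mapping from base64 challengeId to SignalR connection id.
    // A distributed store would be needed when scaling out to multiple instances.
    public static readonly ConcurrentDictionary<string, string> Challenges = new();

    public Task AddChallenge(string base64ChallengeId, string connectionId)
    {
        Challenges.TryAdd(base64ChallengeId, connectionId);
        return Task.CompletedTask;
    }
}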

Validating the verification callback

After the holder of the digital wallet has given consent, the wallet sends the verifiable credential data back to the verifier application in an HTTP request. This is sent to a webhook or an API in the verifier application and needs to be verified correctly. In this demo, only the challengeId is used to match the request; the payload itself is not validated, which it should be in a production deployment. The callback handler stores the data to the database and sends a SignalR message to inform the waiting client that the verification has completed successfully.

private readonly VerifyEidCountyResidenceDbService _verifyEidAndCountyResidenceDbService;
private readonly IHubContext<MattrVerifiedSuccessHub> _hubContext;

public VerificationController(VerifyEidCountyResidenceDbService verifyEidAndCountyResidenceDbService,
    IHubContext<MattrVerifiedSuccessHub> hubContext)
{
    _hubContext = hubContext;
    _verifyEidAndCountyResidenceDbService = verifyEidAndCountyResidenceDbService;
}

/// <summary>
/// Example callback payload:
/// {
///   "presentationType": "QueryByFrame",
///   "challengeId": "nGu/E6eQ8AraHzWyB/kluudUhraB8GybC3PNHyZI",
///   "claims": {
///     "id": "did:key:z6MkmGHPWdKjLqiTydLHvRRdHPNDdUDKDudjiF87RNFjM2fb",
///     "http://schema.org/birth_place": "Seattle",
///     "http://schema.org/date_of_birth": "1953-07-21",
///     "http://schema.org/family_name": "Bob",
///     "http://schema.org/gender": "Male",
///     "http://schema.org/given_name": "Lammy",
///     "http://schema.org/height": "176cm",
///     "http://schema.org/nationality": "USA",
///     "http://schema.org/address_country": "Schweiz",
///     "http://schema.org/address_locality": "Thun",
///     "http://schema.org/address_region": "Bern",
///     "http://schema.org/postal_code": "3000",
///     "http://schema.org/street_address": "Thunerstrasse 14"
///   },
///   "verified": true,
///   "holder": "did:key:z6MkmGHPWdKjLqiTydLHvRRdHPNDdUDKDudjiF87RNFjM2fb"
/// }
/// </summary>
/// <returns></returns>
[HttpPost]
[Route("[action]")]
public async Task<IActionResult> VerificationDataCallback()
{
    string content = await new System.IO.StreamReader(Request.Body).ReadToEndAsync();
    var body = JsonSerializer.Deserialize<VerifiedEidCountyResidenceData>(content);

    var valueBytes = Encoding.UTF8.GetBytes(body.ChallengeId);
    var base64ChallengeId = Convert.ToBase64String(valueBytes);

    string connectionId;
    var found = MattrVerifiedSuccessHub.Challenges
        .TryGetValue(base64ChallengeId, out connectionId);

    //test SignalR
    //await _hubContext.Clients.Client(connectionId).SendAsync("MattrCallbackSuccess", $"{base64ChallengeId}");
    //return Ok();

    var exists = await _verifyEidAndCountyResidenceDbService.ChallengeExists(body.ChallengeId);

    if (exists)
    {
        await _verifyEidAndCountyResidenceDbService.PersistVerification(body);

        if (found)
        {
            //$"/VerifiedUser?base64ChallengeId={base64ChallengeId}"
            await _hubContext.Clients
                .Client(connectionId)
                .SendAsync("MattrCallbackSuccess", $"{base64ChallengeId}");
        }

        return Ok();
    }

    return BadRequest("unknown verify request");
}

The VerifiedUser ASP.NET Core Razor page displays the data after a successful verification. This uses the challengeId to get the data from the database and display this in the UI for the next steps.

public class VerifiedUserModel : PageModel
{
    private readonly VerifyEidCountyResidenceDbService _verifyEidCountyResidenceDbService;

    public VerifiedUserModel(VerifyEidCountyResidenceDbService verifyEidCountyResidenceDbService)
    {
        _verifyEidCountyResidenceDbService = verifyEidCountyResidenceDbService;
    }

    public string Base64ChallengeId { get; set; }
    public EidCountyResidenceVerifiedClaimsDto VerifiedEidCountyResidenceDataClaims { get; private set; }

    public async Task OnGetAsync(string base64ChallengeId)
    {
        // use the query param to get the challenge id and display the data
        if (base64ChallengeId != null)
        {
            var valueBytes = Convert.FromBase64String(base64ChallengeId);
            var challengeId = Encoding.UTF8.GetString(valueBytes);

            var verifiedDataUser = await _verifyEidCountyResidenceDbService.GetVerifiedUser(challengeId);
            VerifiedEidCountyResidenceDataClaims = new EidCountyResidenceVerifiedClaimsDto
            {
                // Common
                DateOfBirth = verifiedDataUser.DateOfBirth,
                FamilyName = verifiedDataUser.FamilyName,
                GivenName = verifiedDataUser.GivenName,

                // E-ID
                BirthPlace = verifiedDataUser.BirthPlace,
                Height = verifiedDataUser.Height,
                Nationality = verifiedDataUser.Nationality,
                Gender = verifiedDataUser.Gender,

                // County Residence
                AddressCountry = verifiedDataUser.AddressCountry,
                AddressLocality = verifiedDataUser.AddressLocality,
                AddressRegion = verifiedDataUser.AddressRegion,
                StreetAddress = verifiedDataUser.StreetAddress,
                PostalCode = verifiedDataUser.PostalCode
            };
        }
    }
}

The demo UI displays the data after a successful verification. The next steps of the verifier process can be implemented using these values. This would typically include creating an account and setting up an authentication method which is resistant to phishing for high security, or at least one with a second factor.

Notes

The MATTR BBS+ verifiable credentials look really good and support selective disclosure and compound proofs. The implementation is still a WIP; MATTR are investing in this at present and will hopefully complete and improve all the BBS+ features. Until BBS+ is implemented by the majority of SSI platform providers and the specs are completed, I do not see how SSI can be adopted, unless of course all converge on some other standard. This would help solve some of the interop problems between the vendors.

Links

https://mattr.global/

https://learn.mattr.global/tutorials/verify/using-callback/callback-e-to-e

https://mattr.global/get-started/

https://learn.mattr.global/

https://keybase.io/

Generating a ZKP-enabled BBS+ credential using the MATTR Platform

https://learn.mattr.global/tutorials/dids/did-key

https://gunnarpeipman.com/httpclient-remove-charset/

https://auth0.com/

Where to begin with OIDC and SIOP

https://anonyome.com/2020/06/decentralized-identity-key-concepts-explained/

Verifiable-Credentials-Flavors-Explained

https://learn.mattr.global/api-reference/

https://w3c-ccg.github.io/ld-proofs/

Verifiable Credentials Data Model v1.1 (w3.org)