Using Open Source Cedar to Write and Enforce Custom Authorization Policies

Cedar is an open source language and software development kit (SDK) for writing and enforcing authorization policies for your applications. You can use Cedar to control access to resources such as photos in a photo-sharing app, compute nodes in a micro-services cluster, or components in a workflow automation system. You specify fine-grained permissions as Cedar policies, and your application authorizes access requests by calling the Cedar SDK’s authorization engine. Cedar has a simple and expressive syntax that supports common authorization paradigms, including both role-based access control (RBAC) and attribute-based access control (ABAC). Because Cedar policies are separate from application code, they can be independently authored, analyzed, and audited, and even shared among multiple applications.

In this blog post, we introduce Cedar and the SDK using an example application, TinyTodo, whose users and teams can organize, track, and share their todo lists. We present examples of TinyTodo permissions as Cedar policies and how TinyTodo uses the Cedar authorization engine to ensure that only intended users are granted access. A more detailed version of this post is included with the TinyTodo code.

TinyTodo

TinyTodo allows individuals, called Users, and groups, called Teams, to organize, track, and share their todo lists. Users create Lists which they can populate with tasks. As tasks are completed, they can be checked off the list.

TinyTodo Permissions

We don't want to allow TinyTodo users to see or make changes to just any task list. TinyTodo uses Cedar to control who has access to what. A List's creator, called its owner, can share the list with other Users or Teams. Owners can share lists in two different modes: reader and editor. A reader can get details of a List and the tasks inside it. An editor can do those things as well, but can also add new tasks and edit, (un)check, and remove existing tasks.

We specify and enforce these access permissions using Cedar. Here is one of TinyTodo’s Cedar policies.

// policy 1: A User can perform any action on a List they own
permit(principal, action, resource)
when {
    resource has owner && resource.owner == principal
};

This policy states that any principal (a TinyTodo User) can perform any action on any resource (a TinyTodoList) as long as the resource has an owner attribute that matches the requesting principal.

Here’s another TinyTodo Cedar policy.

// policy 2: A User can see a List if they are either a reader or editor
permit (
    principal,
    action == Action::"GetList",
    resource
)
when {
    principal in resource.readers || principal in resource.editors
};

This policy states that any principal can read the contents of a task list (Action::"GetList") so long as they are in either the list's readers group or its editors group.

Cedar’s authorizer enforces default deny: A request is authorized only if a specific permit policy grants it.
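
Cedar also supports explicit forbid policies, which override any permit when both apply. TinyTodo doesn't need one, but as a hypothetical sketch (the blocked attribute here is invented for illustration, not part of TinyTodo's data model):

// Hypothetical: deny anyone on a resource's "blocked" list,
// even if another policy would permit the access.
forbid (principal, action, resource)
when { resource has blocked && principal in resource.blocked };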

The full set of policies can be found in the TinyTodo file policies.cedar (discussed below). To learn more about Cedar's syntax and capabilities, check out the Cedar online tutorial at https://www.cedarpolicy.com/.

Building TinyTodo

To build TinyTodo you need to install Rust and Python3, and the Python3 requests module. Download and build the TinyTodo code by doing the following:

> git clone https://github.com/cedar-policy/tinytodo
…downloading messages here
> cd tinytodo
> cargo build
…build messages here

The cargo build command will automatically download and build the Cedar Rust packages cedar-policy-core, cedar-policy-validator, and others, from Rust’s standard package registry, crates.io, and build the TinyTodo server, tiny-todo-server. The TinyTodo CLI is a Python script, tinytodo.py, which interacts with the server. The basic architecture is shown in Figure 1.

Figure 1: TinyTodo application architecture

Running TinyTodo

Let’s run TinyTodo. To begin, we start the server, assume the identity of user andrew, create a new todo list called Cedar blog post, add two tasks to that list, and then complete one of the tasks.

> python -i tinytodo.py
>>> start_server()
TinyTodo server started on port 8080
>>> set_user(andrew)
User is now andrew
>>> get_lists()
No lists for andrew
>>> create_list("Cedar blog post")
Created list ID 0
>>> get_list(0)
=== Cedar blog post ===
List ID: 0
Owner: User::"andrew"
Tasks:
>>> create_task(0,”Draft the post”)
Created task on list ID 0
>>> create_task(0,”Revise and polish”)
Created task on list ID 0
>>> get_list(0)
=== Cedar blog post ===
List ID: 0
Owner: User::"andrew"
Tasks:
1. [ ] Draft the post
2. [ ] Revise and polish
>>> toggle_task(0,1)
Toggled task on list ID 0
>>> get_list(0)
=== Cedar blog post ===
List ID: 0
Owner: User::"andrew"
Tasks:
1. [X] Draft the post
2. [ ] Revise and polish

Figure 2: Users and Teams in TinyTodo

The get_list, create_task, and toggle_task commands are all authorized by the Cedar Policy 1 we saw above: since andrew is the owner of List ID 0, he is allowed to carry out any action on it.

Now, continuing as user andrew, we share the list with team interns as a reader. TinyTodo is configured so that the relationship between users and teams is as shown in Figure 2. We switch the user identity to aaron, list the tasks, and attempt to complete another task, but the attempt is denied because aaron is only allowed to view the list (since he is a member of interns), not edit it. Finally, we switch to user kesha and attempt to view the list, but the attempt is not allowed (interns is a member of temp, but not the reverse).

>>> share_list(0,interns,read_only=True)
Shared list ID 0 with interns as reader
>>> set_user(aaron)
User is now aaron
>>> get_list(0)
=== Cedar blog post ===
List ID: 0
Owner: User::"andrew"
Tasks:
1. [X] Draft the post
2. [ ] Revise and polish
>>> toggle_task(0,2)
Access denied. User aaron is not authorized to Toggle Task on [0, 2]
>>> set_user(kesha)
User is now kesha
>>> get_list(0)
Access denied. User kesha is not authorized to Get List on [0]
>>> stop_server()
TinyTodo server stopped on port 8080

Here, aaron's get_list command is authorized by the Cedar Policy 2 we saw above, since aaron is a member of the Team interns, which andrew made a reader of List 0. aaron's toggle_task and kesha's get_list commands are both denied because no specific policy exists that authorizes them.

Extending TinyTodo’s Policies with Administrator Privileges

We can change the policies with no updates to the application code because they are defined and maintained independently. To see this, add the following policy to the end of the policies.cedar file:

permit(
    principal in Team::"admin",
    action,
    resource in Application::"TinyTodo"
);

This policy states that any user who is a member of Team::"admin" is able to carry out any action on any List (all of which are part of the Application::"TinyTodo" group). Since user emina is defined to be a member of Team::"admin" (see Figure 2), if we restart TinyTodo to use this new policy, we can see that emina is able to view and edit any list:

> python -i tinytodo.py
>>> start_server()
=== TinyTodo started on port 8080
>>> set_user(andrew)
User is now andrew
>>> create_list("Cedar blog post")
Created list ID 0
>>> set_user(emina)
User is now emina
>>> get_list(0)
=== Cedar blog post ===
List ID: 0
Owner: User::"andrew"
Tasks:
>>> delete_list(0)
List Deleted
>>> stop_server()
TinyTodo server stopped on port 8080

Enforcing access requests

When the TinyTodo server receives a command from the client, such as get_list or toggle_task, it checks to see if that command is allowed by invoking the Cedar authorization engine. To do so, it translates the command information into a Cedar request and passes it with relevant data to the Cedar authorization engine, which either allows or denies the request.

Here’s what that looks like in the server code, written in Rust. Each command has a corresponding handler, and that handler first calls the function self.is_authorized to authorize the request before continuing with the command logic. Here’s what that function looks like:

pub fn is_authorized(
    &self,
    principal: impl AsRef<EntityUid>,
    action: impl AsRef<EntityUid>,
    resource: impl AsRef<EntityUid>,
) -> Result<()> {
    let es = self.entities.as_entities();
    let q = Request::new(
        Some(principal.as_ref().clone().into()),
        Some(action.as_ref().clone().into()),
        Some(resource.as_ref().clone().into()),
        Context::empty(),
    );
    info!("is_authorized request: …");
    let resp = self.authorizer.is_authorized(&q, &self.policies, &es);
    info!("Auth response: {:?}", resp);
    match resp.decision() {
        Decision::Allow => Ok(()),
        Decision::Deny => Err(Error::AuthDenied(resp.diagnostics().clone())),
    }
}

The Cedar authorization engine is stored in the variable self.authorizer and is invoked via the call self.authorizer.is_authorized(&q, &self.policies, &es). The first argument is the access request &q: can the principal perform action on resource with an empty context? An example from our sample run above is whether User::"kesha" can perform action Action::"GetList" on resource List::"0". (The notation Type::"id" used here is that of a Cedar entity UID, which has Rust type cedar_policy::EntityUid in the code.) The second argument is the set of Cedar policies &self.policies the engine will consult when deciding the request; these were read in by the server when it started up. The last argument &es is the set of entities the engine will consider when consulting the policies. These are data objects that represent TinyTodo's Users, Teams, and Lists, to which the policies may refer. The Cedar authorizer returns a decision: if Decision::Allow, then the TinyTodo command can proceed; if Decision::Deny, then the server returns that access is denied. The request and its outcome are logged by the calls to info!(…).
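
As a rough sketch of how a handler might construct those arguments, assuming the cedar-policy crate's textual parsing for entity UIDs (EntityUid implements FromStr):

use cedar_policy::EntityUid;

// Parse entity UIDs from their textual form, e.g. User::"kesha".
let principal: EntityUid = r#"User::"kesha""#.parse().expect("valid entity UID");
let action: EntityUid = r#"Action::"GetList""#.parse().expect("valid entity UID");
let resource: EntityUid = r#"List::"0""#.parse().expect("valid entity UID");

// A TinyTodo handler would then call:
// self.is_authorized(&principal, &action, &resource)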

Learn More

We are just getting started with TinyTodo, and we have only seen some of what the Cedar SDK can do. You can find a full tutorial in TUTORIAL.md in the tinytodo source code directory which explains (1) the full set of TinyTodo Cedar policies; (2) information about TinyTodo’s Cedar data model, i.e., how TinyTodo stores information about users, teams, lists and tasks as Cedar entities; (3) how we specify the expected data model and structure of TinyTodo access requests as a Cedar schema, and use the Cedar SDK’s validator to ensure that policies conform to the schema; and (4) challenge problems for extending TinyTodo to be even more full featured.

Cedar and Open Source

Cedar is the authorization policy language used by customers of the Amazon Verified Permissions and AWS Verified Access managed services. With the release of the Cedar SDK on GitHub, we provide transparency into Cedar’s development, invite community contributions, and hope to build trust in Cedar’s security.

All of Cedar’s code is available at https://github.com/cedar-policy/. Check out the roadmap and issues list on the site to see where it is going and how you could contribute. We welcome submissions of issues and feature requests via GitHub issues. We built the core Cedar SDK components (for example, the authorizer) using a new process called verification-guided development in order to provide extra assurance that they are safe and secure. To contribute to these components, you can submit a “request for comments” and engage with the core team to get your change approved.

To learn more, feel free to submit questions, comments, and suggestions via the public Cedar Slack workspace, https://cedar-policy.slack.com. You can also complete the online Cedar tutorial and play with it via the language playground at https://www.cedarpolicy.com/.


Setting up a secure CI/CD pipeline in a private Amazon Virtual Private Cloud with no public internet access

With the rise of the cloud and increased security awareness, the use of private Amazon VPCs with no public internet access has also expanded rapidly. This setup is recommended to ensure proper security through isolation. The isolation requirement also applies to code pipelines, in which developers deploy their application modules, software packages, and other dependencies and bundles throughout the development lifecycle, without having to push large bundles from the developer space to the staging space or the target environment. Furthermore, AWS CodeArtifact is used as an artifact management service that helps organizations of any size to securely store, publish, and share the software packages used in their software development process.

We'll walk through the steps required to build a secure, private continuous integration/continuous delivery (CI/CD) pipeline with no public internet access while maintaining log retention in Amazon CloudWatch. We'll use AWS CodeCommit for source control, CodeArtifact for modules and software packages, and Amazon Simple Storage Service (Amazon S3) for artifact storage.

Prerequisites

The prerequisites for following along with this post include:

An AWS Account
A Virtual Private Cloud (Amazon VPC)
A CI/CD pipeline – this can be CodePipeline, Jenkins, or any CI/CD tool you want to integrate CodeArtifact with; we will use CodePipeline in this walkthrough.

Solution walkthrough

The main service we’ll focus on is CodeArtifact, a fully managed artifact repository service that makes it easy for organizations of any size to securely store, publish, and share software packages used in their software development process. CodeArtifact works with commonly used package managers and build tools, such as Maven and Gradle (Java), npm and yarn (JavaScript), pip and twine (Python), or NuGet (.NET).

Users push code to CodeCommit, and CodePipeline detects the change and starts the pipeline. In the build stage, CodeBuild uses the private endpoints to download the required software packages without going over the internet.

The preceding diagram shows how the requests remain private within the VPC and don't go through the internet gateway: traffic flows from CodeBuild over the private endpoint to the CodeArtifact service, all within the private subnet.

The requests will use the following VPC endpoints to connect to these AWS services:

CloudWatch Logs endpoint (for CodeBuild to put logs in CloudWatch)
CodeArtifact endpoints

AWS Security Token Service (AWS STS) endpoint
Amazon Simple Storage Service (Amazon S3) endpoint

Walkthrough

Create a CodeCommit Repository:

Navigate to your CodeCommit Console then click on Create repository

Figure 2. Screenshot: Create repository button.

Type in a name for the repository, then click Create

Figure 3. Screenshot: Repository setting with name shown as “Private” and empty Description.

Scroll down and click Create file

Figure 4. Create file button.

Copy the example buildspec.yml file below and paste it into the editor

Example buildspec.yml file:

version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 16
    commands:
      - export AWS_STS_REGIONAL_ENDPOINTS=regional
      - ACCT=`aws sts get-caller-identity --region ${AWS_REGION} --query Account --output text`
      - aws codeartifact login --tool npm --repository Private --domain private --domain-owner ${ACCT}
      - npm install
  build:
    commands:
      - node index.js

Name the file buildspec.yml, type in your name and your email address, then click Commit changes

Figure 5. Screenshot: Create file page.

Create CodeArtifact

Navigate to your CodeArtifact Console then click on Create repository
Give it a name and select npm-store as the public upstream repository

Figure 6. Screenshot: Create repository page with Repository name “Private”.

For the Domain, select This AWS account and enter a domain name

Figure 7. Screenshot: Select domain page.

Click Next then Create repository

Figure 8. Screenshot: Create repository review page.

Create a CI/CD using CodePipeline

Navigate to your CodePipeline Console then click on Create pipeline

Figure 9. Screenshot: Create pipeline button.

Type a name, leave the Service role as "New service role", and click Next

Figure 10. Screenshot: Choose pipeline setting page with pipeline name “Private”.

Select AWS CodeCommit as your Source provider
Then choose the CodeCommit repository you created earlier, select main for the branch, and click Next

Figure 11. Screenshot: Create pipeline add source stage.

For the Build stage, choose AWS CodeBuild as the build provider, then click Create project

Figure 12. Screenshot: Create pipeline add build stage.

This will open a new window to create the new project. Give this project a name.

Figure 13. Screenshot: Create pipeline create build project window.

Scroll down to the Environment section and select Managed image.
For Operating system, select "Amazon Linux 2".
For Runtime, select "Standard".
For Image, select aws/codebuild/amazonlinux2-x86_64-standard:4.0.
For Image version, select "Always use the latest image for this runtime version".
For Environment type, select Linux.
Leave the Privileged option unchecked and set Service role to "New service role".

Figure 14. Screenshot: Create pipeline create build project, setting up environment window.

Expand Additional configurations and scroll down to the VPC section. Select the desired VPC, your subnets (we recommend selecting multiple Availability Zones to ensure high availability), and a security group. The security group rules must allow resources that will use the VPC endpoint to communicate with the endpoint network interface; the default VPC security group is used here as an example.

Figure 15. Screenshot: Create pipeline create build project networking window.

Scroll down to Buildspec, select "Use a buildspec file", and type "buildspec.yml" for the Buildspec name

Figure 16. Screenshot: Create pipeline create build project buildspec window.

Select the CloudWatch logs option. You can leave the group name and stream name empty; this will let the service use the default values. Then click Continue to CodePipeline.

Figure 17. Screenshot: Create pipeline create build project logs window.

This will create the new CodeBuild project and update the CodePipeline page. Now you can click Next.

Figure 18. Screenshot: Create pipeline add build stage window.

Since we are not deploying this to any environment, you can skip the deploy stage by clicking "Skip deploy stage"

Figure 19. Screenshot: Create pipeline add deploy stage.

Figure 20. Screenshot: Create pipeline skip deployment stage confirmation.

After you get the popup, click Skip again. You'll see the review page; scroll all the way down and click Create pipeline.

Create a VPC endpoint for Amazon CloudWatch Logs. This will enable CodeBuild to send execution logs to CloudWatch:

Navigate to your VPC console, and from the navigation menu on the left select “Endpoints”.

Figure 21. Screenshot: VPC endpoint.

Click the Create endpoint button.

Figure 22. Screenshot: Create endpoint.

For Service category, select "AWS services". You can set a name for the new endpoint; make sure to use something descriptive.

Figure 23. Screenshot: Create endpoint page.

From the list of services, search for the endpoint by typing logs in the search bar and select the one named com.amazonaws.us-west-2.logs.
This walkthrough can be done in any region that supports these services. I am going to use us-west-2; please select the appropriate region for your workload.

Figure 24. Screenshot: create endpoint select services with com.amazonaws.us-west-2.logs selected.

Select the VPC that you want the endpoint to be associated with, and make sure that the Enable DNS name option is checked under additional settings.

Figure 25. Screenshot: create endpoint VPC setting shows VPC selected.

Select the subnets where you want the endpoint to be associated. You can leave the security group as default and the policy empty.

Figure 26. Screenshot: create endpoint subnet setting shows 2 subnet selected and default security group selected.

Select Create Endpoint.

Figure 27. Screenshot: create endpoint button.

Create a VPC endpoint for CodeArtifact. At the time of writing this article, CodeArtifact has two endpoints: one for API operations, such as service-level operations and authentication, and one for using the service, such as fetching modules for our code. We'll need both endpoints to automate working with CodeArtifact, so we'll create both endpoints with DNS enabled.
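
If you prefer to script this instead of using the console, an interface endpoint can be created with a CLI call along these lines, repeated for the repositories endpoint (a sketch; the VPC, subnet, and security group IDs are placeholders):

aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Interface \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-west-2.codeartifact.api \
  --subnet-ids subnet-aaaa1111 subnet-bbbb2222 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled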

In addition, we'll need the AWS Security Token Service (AWS STS) endpoint for the get-caller-identity API call:

Follow steps a-c from the creation of the Logs endpoint above.

a. From the list of services, you can search for the endpoint by typing codeartifact in the search bar and selecting the one with com.amazonaws.us-west-2.codeartifact.api.

Figure 28. Screenshot: create endpoint select services with com.amazonaws.us-west-2.codeartifact.api selected.

Follow steps e-g from Part 4.

Then, repeat the same for the com.amazonaws.us-west-2.codeartifact.repositories service.

Figure 29. Screenshot: create endpoint select services with com.amazonaws.us-west-2.codeartifact.repositories selected.

Enable a VPC endpoint for AWS STS:

Follow steps a-c from Part 4

a. From the list of services you can search for the endpoint by typing sts in the search bar and selecting the one with com.amazonaws.us-west-2.sts.

Figure 30. Screenshot: create endpoint select services with com.amazonaws.us-west-2.sts selected.

Then follow steps e-g from Part 4.

Create a VPC endpoint for S3:

Follow steps a-c from Part 4

a. From the list of services, search for the endpoint by typing s3 in the search bar and select the one named com.amazonaws.us-west-2.s3 with type Gateway.

Then select your VPC and the route tables for your subnets; this will automatically update the route tables with the new S3 endpoint.

Figure 31. Screenshot: create endpoint select services with com.amazonaws.us-west-2.s3 selected.

Now we have all of the endpoints set. The last step is to update your pipeline to point at the CodeArtifact repository when pulling your code dependencies. I'll use the CodeBuild buildspec.yml as an example here.

Make sure that your CodeBuild AWS Identity and Access Management (IAM) role has the permissions to perform STS and CodeArtifact actions.

Navigate to the IAM console and click Roles in the left navigation menu, then search for your IAM role name. In our case, since we selected the "New service role" option in step 2.k, the role was created with the name "codebuild-Private-service-role" (codebuild-<BUILD PROJECT NAME>-service-role)

Figure 32. Screenshot: IAM roles with codebuild-Private-service-role role shown in search.

From the Add permissions menu, click on Create inline policy

Search for STS in the services then select STS

Figure 34. Screenshot: IAM visual editor with sts shown in search.

Search for “GetCallerIdentity” and select the action

Figure 35. Screenshot: IAM visual editor with GetCallerIdentity in search and action selected.

Repeat the same with “GetServiceBearerToken”

Figure 36. Screenshot: IAM visual editor with GetServiceBearerToken in search and action selected.

Click on Review, add a name then click on Create policy

Figure 37. Screenshot: Review page and Create policy button.

You should see the new inline policy added to the list

Figure 38. Screenshot: shows the new in-line policy in the list.

For the CodeArtifact actions, we will do the same on that role: click on Create inline policy

Figure 39. Screenshot: attach policies.

Search for CodeArtifact in the services then select CodeArtifact

Figure 40. Screenshot: select service with CodeArtifact in search.

Search for “GetAuthorizationToken” in actions and select that action in the check box

Figure 41. CodeArtifact: with GetAuthorizationToken in search.

Repeat for “GetRepositoryEndpoint” and “ReadFromRepository”

Click on Resources to fix the two warnings, then click Add ARN under the first one, "Specify domain resource ARN for the GetAuthorizationToken action."

Figure 42. Screenshot: with all selected filed and 2 warnings.

You'll get a popup with fields for Region, Account, and Domain name. Enter your region, your account number, and the domain name; we used "private" when we created our domain earlier.

Figure 43. Screenshot: Add ARN page.

Then click Add

Repeat the same process for "Specify repository resource ARN for the ReadFromRepository and 1 more". This time we will provide the Region, Account ID, Domain name, and Repository name; we used "Private" for the repository we created earlier and "private" for the domain.

Figure 44. Screenshot: add ARN page.

Note: it is best practice to specify the resource we are targeting. We could use the "Any" checkbox, but we want to narrow the scope of our IAM role as much as we can.
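
With both inline policies in place, the role's added permissions should look roughly like the following policy document (a sketch; the account ID 111122223333 is a placeholder, and the STS actions keep a * resource because they do not support resource-level scoping):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sts:GetCallerIdentity", "sts:GetServiceBearerToken"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "codeartifact:GetAuthorizationToken",
      "Resource": "arn:aws:codeartifact:us-west-2:111122223333:domain/private"
    },
    {
      "Effect": "Allow",
      "Action": ["codeartifact:GetRepositoryEndpoint", "codeartifact:ReadFromRepository"],
      "Resource": "arn:aws:codeartifact:us-west-2:111122223333:repository/private/Private"
    }
  ]
}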

Navigate to CodeCommit, then click on the repo you created earlier in step 1

Figure 45. Screenshot: CodeCommit repo.

Click on the Add file dropdown, then the Create file button

Paste the following in the editor space:

{
  "dependencies": {
    "mathjs": "^11.2.0"
  }
}

Name the file “package.json”

Add your name and email, and an optional commit message

Repeat this process for “index.js” and paste the following in the editor space:

const { sqrt } = require('mathjs')
console.log(sqrt(49).toString())

Figure 46. Screenshot: CodeCommit Commit changes button.

This will force the pipeline to kick off and start building the application

Figure 47. Screenshot: CodePipeline.

This is a very simple application that gets the square root of 49 and logs it to the screen. If you click on the Details link in the pipeline build stage, you'll see the output of running the Node.js application. The logs are stored in CloudWatch, and you can navigate there by clicking the View entire log link ("Showing the last xx lines of the build log. View entire log").

Figure 48. Screenshot: Showing the last 54 lines of the build log. View entire log.

We used an npm example in the buildspec.yml above; a similar setup can be used for pip and twine, as sketched below.
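
For example, a pip-based build could replace the npm login and install lines in the buildspec with something like the following (domain, repository, and ${ACCT} as defined earlier; the requirements.txt name is illustrative):

      - aws codeartifact login --tool pip --repository Private --domain private --domain-owner ${ACCT}
      - pip install -r requirements.txt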

For Maven, Gradle, and NuGet, you must set Environment variables and change your settings.xml and build.gradle, as well as install the plugin for your IDE. For more information, see here.

Cleanup

Navigate to VPC endpoint from the AWS console and delete the endpoints that you created.

Navigate to CodePipeline and delete the Pipeline you created.

Navigate to CodeBuild and delete the Build Project created.

Navigate to CodeCommit and delete the Repository you created.

Navigate to CodeArtifact and delete the Repository and the domain you created.

Navigate to IAM and delete the Roles created:

For CodeBuild: codebuild-<Build Project Name>-service-role

For CodePipeline: AWSCodePipelineServiceRole-<Region>-<Project Name>

Conclusion

In this post, we deployed a full CI/CD pipeline with CodePipeline orchestrating CodeBuild to build and test a small Node.js application, using CodeArtifact to download the application code dependencies, all without going over the public internet and while keeping the logs in CloudWatch.

About the author:

MJ Kubba

MJ Kubba is a Solutions Architect who enjoys working with public sector customers to build solutions that meet their business needs. MJ has over 15 years of experience designing and implementing software solutions. He has a keen passion for DevOps and cultural transformation.

Building a gRPC Client in .NET

Introduction

In this article, we will take a look at how to create a simple gRPC client with .NET and communicate with a server. This is the final post of the blog series in which we talk about building gRPC services.

Motivation

This article is part of a series on gRPC. If you want to jump ahead, please feel free to do so. The links are down below.

Introduction to gRPC
Building a gRPC server with Go
Building a gRPC server with .NET
Building a gRPC client with Go

Building a gRPC client with .NET (You are here)

Please note that this is intended for anyone who’s interested in getting started with gRPC. If you’re not, please feel free to skip this article.

Plan

The plan for this article is as follows.

Scaffold a .NET console project.
Implement the gRPC client.
Communicate with the server.

In a nutshell, we will be generating the client for the server we built in our previous post.


As always, all the code samples and documentation can be found at: https://github.com/sahansera/dotnet-grpc

Prerequisites

.NET 6 SDK
Visual Studio Code or IDE of your choice
gRPC compiler

Please note that some of the commands I'm using are macOS specific. Please follow this link to set things up if you are on a different OS.

To install the Protobuf compiler:

brew install protobuf

Project Structure

We can use .NET's tooling to generate a sample gRPC project. Run the following command at the root of your workspace. Remember how we used the dotnet new grpc command to scaffold the server project? This one, though, can simply be a console app.

dotnet new console -o BookshopClient

Your project structure should look like this.


You must be wondering: if this is a console app, how does it know how to generate the client stubs? Well, it doesn't. You have to add the following packages to the project first.

dotnet add BookshopClient.csproj package Grpc.Net.Client
dotnet add BookshopClient.csproj package Google.Protobuf
dotnet add BookshopClient.csproj package Grpc.Tools

Once everything’s installed, we can proceed with the rest of the steps.

Generating the client stubs

We will be using the same Protobuf files that we generated in our previous step. If you haven't seen that already, head over to my previous post.

Open up the BookshopClient.csproj file and add the following lines:


<ItemGroup>
  <Protobuf Include="../proto/bookshop.proto" GrpcServices="Client" />
</ItemGroup>

As you can see, we are reusing our bookshop.proto file in this example too. One thing to note here is that we have updated the GrpcServices attribute to be Client.
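
If you haven't seen the server post, here is a minimal sketch of what that bookshop.proto might contain, inferred from the client code below; the actual file lives in the linked repository:

syntax = "proto3";

option csharp_namespace = "Bookshop";

package bookshop;

// The inventory service the client stubs are generated from.
service Inventory {
  rpc GetBookList (GetBookListRequest) returns (GetBookListResponse);
}

message GetBookListRequest {}

message Book {
  int32 id = 1;
  string title = 2;
  string author = 3;
}

message GetBookListResponse {
  repeated Book books = 1;
}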

Implementing the gRPC client

Let’s update the Program.cs file to connect to and get the response from the server.

using System.Threading.Tasks;
using Grpc.Net.Client;
using Bookshop;

// The port number must match the port of the gRPC server.
using var channel = GrpcChannel.ForAddress("http://localhost:5000");
var client = new Inventory.InventoryClient(channel);
var reply = await client.GetBookListAsync(new GetBookListRequest { });

Console.WriteLine("Greeting: " + reply.Books);
Console.WriteLine("Press any key to exit...");
Console.ReadKey();

This is based on the example given on the Microsoft docs site, by the way. What I really like about the above code is how easy it is to read. So here's what happens.


We first create a gRPC channel with GrpcChannel.ForAddress, giving the server's URI and port. Creating a channel is an expensive operation compared to invoking a gRPC method, so a client should reuse the same channel object to communicate with a gRPC server. You can also pass in a GrpcChannelOptions object as the second parameter to define client options. Here's a list for that.
Then we instantiate the auto-generated client Inventory.InventoryClient using the channel we created above. One thing to note here is that if your server has multiple services, you can still use the same channel object for all of them.
We call GetBookListAsync on our server. By the way, this is a unary call; we will go through other client-server communication mechanisms in a separate post.
Our GetBookList method gets called on the server and returns the list of books.

Now that we know how the requests work, let’s see this in action.

Communicating with the server

Let’s spin up the server that we built in my previous post first. This will be up and running at port 5000.

dotnet run --project BookshopServer/BookshopServer.csproj


For the client-side, we invoke a similar command.

dotnet run --project BookshopClient/BookshopClient.csproj

And in the terminal, we will get the following outputs.


Nice! As you can see, it's not that hard to get everything working. One thing to note is that we left out the details about TLS and the different ways to communicate with the server (i.e., unary, streaming, etc.). I will cover such topics in depth in the future.

Conclusion

In this article, we looked at how to reuse our Protobuf files to create a client to interact with the server we created in the previous post.

I hope this article series cleared up a lot of the confusion you had about gRPC. Please feel free to share your questions, thoughts, or feedback in the comments section below. Until next time!

References

https://docs.microsoft.com/en-us/aspnet/core/tutorials/grpc/grpc-start?view=aspnetcore-6.0&tabs=visual-studio-code

Transforming identity claims in ASP.NET Core and Cache

The article shows how to add extra identity claims to an ASP.NET Core application which authenticates using the Microsoft.Identity.Web client library and Azure AD B2C or Azure AD as the identity provider (IDP). This could easily be switched to OpenID Connect and use any IDP which supports OpenID Connect. The extra claims are added after an Azure Microsoft Graph HTTP request and it is important that this is only called once for a user session.

Code https://github.com/damienbod/azureb2c-fed-azuread

Normally I use the IClaimsTransformation interface to add extra claims to an ASP.NET Core session. This interface gets called multiple times and has no caching solution. If you use this interface to add extra claims to your application, you must implement a cache solution for the extra claims and prevent extra API calls or database requests with every request. Instead of implementing a cache and using the IClaimsTransformation interface, you could alternatively use the OnTokenValidated event with the OpenIdConnectDefaults.AuthenticationScheme scheme. This gets called after a successful authentication against your identity provider. If Microsoft.Identity.Web is used as the OIDC client, which is specific to Azure AD and Azure B2C, you must add the configuration to the MicrosoftIdentityOptions, otherwise downstream APIs will not work. If using OpenID Connect directly with a different IDP, use the OpenIdConnectOptions configuration instead. This can be added to the services of the ASP.NET Core application.

services.Configure<MicrosoftIdentityOptions>(
    OpenIdConnectDefaults.AuthenticationScheme, options =>
    {
        options.Events.OnTokenValidated = async context =>
        {
            if (ApplicationServices != null && context.Principal != null)
            {
                using var scope = ApplicationServices.CreateScope();
                context.Principal = await scope.ServiceProvider
                    .GetRequiredService<MsGraphClaimsTransformation>()
                    .TransformAsync(context.Principal);
            }
        };
    });

Note

If using default OpenID Connect and not the Microsoft.Identity.Web client to authenticate, use the OpenIdConnectOptions and not the MicrosoftIdentityOptions.

Here’s an example of an OIDC setup.

builder.Services.Configure<OpenIdConnectOptions>(
    OpenIdConnectDefaults.AuthenticationScheme, options =>
    {
        options.Events.OnTokenValidated = async context =>
        {
            if (ApplicationServices != null && context.Principal != null)
            {
                using var scope = ApplicationServices.CreateScope();
                context.Principal = await scope.ServiceProvider
                    .GetRequiredService<MyClaimsTransformation>()
                    .TransformAsync(context.Principal);
            }
        };
    });

The IServiceProvider ApplicationServices is used to resolve the scoped MsGraphClaimsTransformation service, which adds the extra claims using Microsoft Graph. This needs to be added to the configuration in the startup or the program file.

protected IServiceProvider ApplicationServices { get; set; } = null;

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    ApplicationServices = app.ApplicationServices;

The Microsoft Graph services are added to the IoC.

services.AddScoped<MsGraphService>();
services.AddScoped<MsGraphClaimsTransformation>();

The MsGraphClaimsTransformation uses the Microsoft Graph client to get the groups of a user, create a new ClaimsIdentity, add the extra claims to this identity, and add the ClaimsIdentity to the ClaimsPrincipal.

using AzureB2CUI.Services;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;

namespace AzureB2CUI;

public class MsGraphClaimsTransformation
{
    private readonly MsGraphService _msGraphService;

    public MsGraphClaimsTransformation(MsGraphService msGraphService)
    {
        _msGraphService = msGraphService;
    }

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        ClaimsIdentity claimsIdentity = new();
        var groupClaimType = "group";
        if (!principal.HasClaim(claim => claim.Type == groupClaimType))
        {
            var objectidentifierClaimType = "http://schemas.microsoft.com/identity/claims/objectidentifier";
            var objectIdentifier = principal.Claims.FirstOrDefault(t => t.Type == objectidentifierClaimType);

            var groupIds = await _msGraphService.GetGraphApiUserMemberGroups(objectIdentifier.Value);

            foreach (var groupId in groupIds.ToList())
            {
                claimsIdentity.AddClaim(new Claim(groupClaimType, groupId));
            }
        }

        principal.AddIdentity(claimsIdentity);
        return principal;
    }
}

The MsGraphService service implements the different HTTP requests to Microsoft Graph. Azure AD B2C is used in this example, so an application client is used to access the Azure AD with the ClientSecretCredential. The implementation is set up to use secrets from Azure Key Vault directly in any deployments, or from user secrets for development.

using Azure.Identity;
using Microsoft.Extensions.Configuration;
using Microsoft.Graph;
using System.Threading.Tasks;

namespace AzureB2CUI.Services;

public class MsGraphService
{
    private readonly GraphServiceClient _graphServiceClient;

    public MsGraphService(IConfiguration configuration)
    {
        string[] scopes = configuration.GetValue<string>("GraphApi:Scopes")?.Split(' ');
        var tenantId = configuration.GetValue<string>("GraphApi:TenantId");

        // Values from app registration
        var clientId = configuration.GetValue<string>("GraphApi:ClientId");
        var clientSecret = configuration.GetValue<string>("GraphApi:ClientSecret");

        var options = new TokenCredentialOptions
        {
            AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
        };

        // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
        var clientSecretCredential = new ClientSecretCredential(
            tenantId, clientId, clientSecret, options);

        _graphServiceClient = new GraphServiceClient(clientSecretCredential, scopes);
    }

    public async Task<User> GetGraphApiUser(string userId)
    {
        return await _graphServiceClient.Users[userId]
            .Request()
            .GetAsync();
    }

    public async Task<IUserAppRoleAssignmentsCollectionPage> GetGraphApiUserAppRoles(string userId)
    {
        return await _graphServiceClient.Users[userId]
            .AppRoleAssignments
            .Request()
            .GetAsync();
    }

    public async Task<IDirectoryObjectGetMemberGroupsCollectionPage> GetGraphApiUserMemberGroups(string userId)
    {
        var securityEnabledOnly = true;

        return await _graphServiceClient.Users[userId]
            .GetMemberGroups(securityEnabledOnly)
            .Request().PostAsync();
    }
}

When the application is run, the two ClaimsIdentity instances exist with every request and are available for use in the ASP.NET Core application.

Notes

This works really well, but you should not add too many claims to the identity in this way. If you have many identity claims or a lot of user data, then you should use the IClaimsTransformation interface with a good cache solution, along the lines of the sketch below.
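
For completeness, here is a minimal sketch of that alternative, assuming IMemoryCache is registered (services.AddMemoryCache()); the cache key prefix and the 30-minute lifetime are illustrative choices, not from the original code:

using System;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.Extensions.Caching.Memory;

public class CachedClaimsTransformation : IClaimsTransformation
{
    private readonly IMemoryCache _cache;
    private readonly MsGraphService _msGraphService;

    public CachedClaimsTransformation(IMemoryCache cache, MsGraphService msGraphService)
    {
        _cache = cache;
        _msGraphService = msGraphService;
    }

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        var groupClaimType = "group";
        if (principal.HasClaim(claim => claim.Type == groupClaimType))
        {
            return principal;
        }

        var oid = principal.FindFirst(
            "http://schemas.microsoft.com/identity/claims/objectidentifier")?.Value;
        if (oid == null)
        {
            return principal;
        }

        // IClaimsTransformation runs on every request, so resolve the
        // groups once per user and serve them from the cache afterwards.
        var groupIds = await _cache.GetOrCreateAsync($"groups_{oid}", async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(30);
            var groups = await _msGraphService.GetGraphApiUserMemberGroups(oid);
            return groups.ToList();
        });

        var claimsIdentity = new ClaimsIdentity();
        foreach (var groupId in groupIds)
        {
            claimsIdentity.AddClaim(new Claim(groupClaimType, groupId));
        }

        principal.AddIdentity(claimsIdentity);
        return principal;
    }
}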

Links

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/claims

https://andrewlock.net/exploring-dotnet-6-part-10-new-dependency-injection-features-in-dotnet-6/

Create Azure B2C users with Microsoft Graph and ASP.NET Core

This article shows how to create different types of Azure B2C users using Microsoft Graph and ASP.NET Core. The users are created using application permissions in an Azure App registration.

Code https://github.com/damienbod/azureb2c-fed-azuread

The Microsoft.Identity.Web Nuget package is used to authenticate the administrator user that can create new Azure B2C users. An ASP.NET Core Razor page application is used to implement the Azure B2C user management and also to hold the sensitive data.

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<MsGraphService>();
    services.AddTransient<IClaimsTransformation, MsGraphClaimsTransformation>();
    services.AddHttpClient();

    services.AddOptions();

    services.AddMicrosoftIdentityWebAppAuthentication(Configuration, "AzureAdB2C")
        .EnableTokenAcquisitionToCallDownstreamApi()
        .AddInMemoryTokenCaches();

The AzureAdB2C app settings configure the B2C client. An Azure B2C user flow is implemented for authentication. In this example, a sign-in or sign-up flow is implemented, although if creating your own user, maybe only a sign-in is required. The GraphApi configuration is used for the Microsoft Graph application client, which uses the client credentials flow. A user secret was created to access the Azure App registration. This secret is stored in the user secrets for development and stored in Azure Key Vault for any deployments. You could use certificates as well, but this offers no extra security unless used directly from a client host.

"AzureAdB2C": {
  "Instance": "https://b2cdamienbod.b2clogin.com",
  "ClientId": "8cbb1bd3-c190-42d7-b44e-42b20499a8a1",
  "Domain": "b2cdamienbod.onmicrosoft.com",
  "SignUpSignInPolicyId": "B2C_1_signup_signin",
  "TenantId": "f611d805-cf72-446f-9a7f-68f2746e4724",
  "CallbackPath": "/signin-oidc",
  "SignedOutCallbackPath": "/signout-callback-oidc"
},
"GraphApi": {
  "TenantId": "f611d805-cf72-446f-9a7f-68f2746e4724",
  "ClientId": "1d171c13-236d-4c2b-ac10-0325be2cbc74",
  "Scopes": ".default"
  //"ClientSecret": "--in-user-settings--"
},
"AadIssuerDomain": "damienbodhotmail.onmicrosoft.com",

The application User.ReadWrite.All permission is used to create the users. See the permissions in the Microsoft Graph docs.

The MsGraphService service implements the Microsoft Graph client to create Azure tenant users. Application permissions are used because we use Azure B2C. If authenticating using Azure AD, you could use delegated permissions. The ClientSecretCredential is used to get the Graph access token and client with the required permissions.

public MsGraphService(IConfiguration configuration)
{
    string[] scopes = configuration.GetValue<string>("GraphApi:Scopes")?.Split(' ');
    var tenantId = configuration.GetValue<string>("GraphApi:TenantId");

    // Values from app registration
    var clientId = configuration.GetValue<string>("GraphApi:ClientId");
    var clientSecret = configuration.GetValue<string>("GraphApi:ClientSecret");

    _aadIssuerDomain = configuration.GetValue<string>("AadIssuerDomain");
    _aadB2CIssuerDomain = configuration.GetValue<string>("AzureAdB2C:Domain");

    var options = new TokenCredentialOptions
    {
        AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
    };

    // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
    var clientSecretCredential = new ClientSecretCredential(
        tenantId, clientId, clientSecret, options);

    _graphServiceClient = new GraphServiceClient(clientSecretCredential, scopes);
}

The CreateAzureB2CSameDomainUserAsync method creates a same-domain Azure B2C user and also creates an initial password which needs to be updated after a first sign-in. The user's UserPrincipalName email must match the Azure B2C domain, and the user can only sign in with the password. MFA should be set up. This works really well, but it is not a good idea to handle passwords for your users if this can be avoided. You need to share this with the user in a secure way.

public async Task<(string Upn, string Password, string Id)>
    CreateAzureB2CSameDomainUserAsync(UserModelB2CTenant userModel)
{
    if (!userModel.UserPrincipalName.ToLower().EndsWith(_aadB2CIssuerDomain.ToLower()))
    {
        throw new ArgumentException("incorrect Email domain");
    }

    var password = GetEncodedRandomString();
    var user = new User
    {
        AccountEnabled = true,
        UserPrincipalName = userModel.UserPrincipalName,
        DisplayName = userModel.DisplayName,
        Surname = userModel.Surname,
        GivenName = userModel.GivenName,
        PreferredLanguage = userModel.PreferredLanguage,
        MailNickname = userModel.DisplayName,
        PasswordProfile = new PasswordProfile
        {
            ForceChangePasswordNextSignIn = true,
            Password = password
        }
    };

    await _graphServiceClient.Users
        .Request()
        .AddAsync(user);

    return (user.UserPrincipalName, user.PasswordProfile.Password, user.Id);
}
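
The GetEncodedRandomString helper is not shown in the post; a minimal sketch, assuming System.Security.Cryptography's RandomNumberGenerator from .NET 6, could look like this:

private static string GetEncodedRandomString()
{
    // 32 cryptographically random bytes, base64-encoded,
    // make a reasonable one-time initial password.
    var bytes = RandomNumberGenerator.GetBytes(32);
    return Convert.ToBase64String(bytes);
}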

The CreateFederatedUserWithPasswordAsync method creates an Azure B2C user with any email address. This uses the SignInType federated, but uses a password, and the user signs in directly to the Azure B2C. This password is not updated after a first sign-in. Again, this is a bad idea because you need to share the password with the user somehow, and you as an admin should not know the user's password. I would avoid creating users in this way and use a custom invitation flow if you need this type of Azure B2C user.

public async Task<(string Upn, string Password, string Id)>
    CreateFederatedUserWithPasswordAsync(UserModelB2CIdentity userModel)
{
    // new user create, email does not matter unless you require to send mails
    var password = GetEncodedRandomString();
    var user = new User
    {
        DisplayName = userModel.DisplayName,
        PreferredLanguage = userModel.PreferredLanguage,
        Surname = userModel.Surname,
        GivenName = userModel.GivenName,
        OtherMails = new List<string> { userModel.Email },
        Identities = new List<ObjectIdentity>()
        {
            new ObjectIdentity
            {
                SignInType = "federated",
                Issuer = _aadB2CIssuerDomain,
                IssuerAssignedId = userModel.Email
            },
        },
        PasswordProfile = new PasswordProfile
        {
            Password = password,
            ForceChangePasswordNextSignIn = false
        },
        PasswordPolicies = "DisablePasswordExpiration"
    };

    var createdUser = await _graphServiceClient.Users
        .Request()
        .AddAsync(user);

    return (createdUser.UserPrincipalName, user.PasswordProfile.Password, createdUser.Id);
}

The CreateFederatedNoPasswordAsync method creates an Azure B2C federated user from a specific Azure AD domain which already exists, with no password. The user can only sign in using a federated sign-in to this tenant. No passwords are shared. This is a really good way to onboard existing AAD users to an Azure B2C tenant. One disadvantage is that the email is not verified, unlike implementing this using an invitation flow directly in the Azure AD tenant.

public async Task<string>
    CreateFederatedNoPasswordAsync(UserModelB2CIdentity userModel)
{
    // User must already exist in AAD
    var user = new User
    {
        DisplayName = userModel.DisplayName,
        PreferredLanguage = userModel.PreferredLanguage,
        Surname = userModel.Surname,
        GivenName = userModel.GivenName,
        OtherMails = new List<string> { userModel.Email },
        Identities = new List<ObjectIdentity>()
        {
            new ObjectIdentity
            {
                SignInType = "federated",
                Issuer = _aadIssuerDomain,
                IssuerAssignedId = userModel.Email
            },
        }
    };

    var createdUser = await _graphServiceClient.Users
        .Request()
        .AddAsync(user);

    return createdUser.UserPrincipalName;
}

When the application is started, you can sign in as an IT admin and create new users as required. The Birthday can only be added if you have an SPO license. If the user exists in the AAD tenant, the user can sign in using the federated identity provider. This could be improved by adding a search of the users in the target tenant and only allowing existing users.

Notes:

It is really easy to create users using Microsoft Graph, but this is not always the best or most secure way of onboarding new users in an Azure B2C tenant. If local data is required, this can be really useful. Sharing passwords between an IT admin and a new user should be avoided if possible. The Microsoft Graph invite APIs do not work for Azure AD B2C, only Azure AD.

Links

https://docs.microsoft.com/en-us/aspnet/core/introduction-to-aspnet-core


Implementing an API Gateway in ASP.NET Core with Ocelot

This post is about what an API Gateway is and how to build one in ASP.NET Core with Ocelot. An API gateway is a service that sits between an endpoint and backend APIs, transmitting client requests to an appropriate service of an application. It's an architectural pattern, which was initially created to support microservices. In this post I am building an API Gateway using Ocelot. Ocelot is aimed at people using .NET running a microservices / service-oriented architecture that need a unified point of entry into their system.

Let’s start the implementation.

First we will create two web API applications; both of these services return some hard-coded string values. Here is the first web API, CustomersController, which returns a list of customers.

using Microsoft.AspNetCore.Mvc;

namespace ServiceA.Controllers;

[ApiController]
[Route("[controller]")]
public class CustomersController : ControllerBase
{
    private readonly ILogger<CustomersController> _logger;

    public CustomersController(ILogger<CustomersController> logger)
    {
        _logger = logger;
    }

    [HttpGet(Name = "GetCustomers")]
    public IActionResult Get()
    {
        return Ok(new[] { "Customer1", "Customer2", "Customer3" });
    }
}

And here is the second web API, ProductsController.

using Microsoft.AspNetCore.Mvc;

namespace ServiceB.Controllers;

[ApiController]
[Route("[controller]")]
public class ProductsController : ControllerBase
{
    private readonly ILogger<ProductsController> _logger;

    public ProductsController(ILogger<ProductsController> logger)
    {
        _logger = logger;
    }

    [HttpGet(Name = "GetProducts")]
    public IActionResult Get()
    {
        return Ok(new[] { "Product1", "Product2",
            "Product3", "Product4", "Product5" });
    }
}

Next we will create the API Gateway. To do this, create an empty ASP.NET Core web application using the command dotnet new web -o ApiGateway. Once we create the gateway application, we need to add a reference to the Ocelot NuGet package; we can do this using dotnet add package Ocelot. Now we can modify the Program.cs file like this.

using Ocelot.DependencyInjection;
using Ocelot.Middleware;

var builder = WebApplication.CreateBuilder(args);

builder.Configuration.SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("configuration.json", false, true)
    .AddEnvironmentVariables();

builder.Services.AddOcelot(builder.Configuration);
var app = builder.Build();

await app.UseOcelot();
app.Run();

Next you need to configure your API routes using configuration.json. Here is the basic configuration which help to send requests from one endpoint to the web api endpoints.

{
  "Routes": [
    {
      "DownstreamPathTemplate": "/customers",
      "DownstreamScheme": "https",
      "DownstreamHostAndPorts": [
        {
          "Host": "localhost",
          "Port": 7155
        }
      ],
      "UpstreamPathTemplate": "/api/customers",
      "UpstreamHttpMethod": [ "Get" ]
    },
    {
      "DownstreamPathTemplate": "/products",
      "DownstreamScheme": "https",
      "DownstreamHostAndPorts": [
        {
          "Host": "localhost",
          "Port": 7295
        }
      ],
      "UpstreamPathTemplate": "/api/products",
      "UpstreamHttpMethod": [ "Get" ]
    }
  ],
  "GlobalConfiguration": {
    "BaseUrl": "https://localhost:7043"
  }
}

Now run all three applications and browse the endpoint https://localhost:7043/api/products, which invokes the ProductsController class GET action method; browsing https://localhost:7043/api/customers invokes the CustomersController GET action method. In the configuration, the UpstreamPathTemplate is the API Gateway endpoint, and the API Gateway transfers the request to the DownstreamPathTemplate endpoint.
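
As a quick smoke test from the terminal (assuming default JSON serialization and a local dev certificate, hence the -k flag), the gateway responses might look like this:

> curl -k https://localhost:7043/api/customers
["Customer1","Customer2","Customer3"]
> curl -k https://localhost:7043/api/products
["Product1","Product2","Product3","Product4","Product5"]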

For some strange reason it was not working properly for me at first; today I configured it again and it started working. This is an introductory post. I will blog about some common use cases where an API Gateway helps, and how to deploy it in Azure, in the future.

Happy Programming 🙂

Implementing authorization in Blazor ASP.NET Core applications using Azure AD security groups

This article shows how to implement authorization in an ASP.NET Core Blazor application using Azure AD security groups as the data source for the authorization definitions. Policies and claims are used in the application, which decouples the descriptions from the Azure AD security groups and the application-specific authorization requirements. With this setup, it is easy to support any complex authorization requirement, and IT admins can manage the accounts independently in Azure. This solution will work for Azure AD B2C or can easily be adapted to use data from your database instead of Azure AD security groups if required.

Code: https://github.com/damienbod/AzureADAuthRazorUiServiceApiCertificate/tree/main/BlazorBff

Setup the AAD security groups

Before we start using the Azure AD security groups, the groups need to be created. I use PowerShell to create the security groups. This is really simple using the PowerShell AZ module with AD. For this demo, just two groups are created: one for users and one for admins. The script can be run from your PowerShell console. You are required to authenticate before running the script, and the groups are added if you have the rights. In DevOps, you could use a managed identity and the client credentials flow.

# https://theitbros.com/install-azure-powershell/
#
# https://docs.microsoft.com/en-us/powershell/module/az.accounts/connect-azaccount?view=azps-7.1.0
#
# Connect-AzAccount -Tenant "--tenantId--"
# az login --tenant "--tenantId--"

$tenantId = "--tenantId--"
$gpAdmins = "demo-admins"
$gpUsers = "demo-users"

function testParams {

    if (!$tenantId)
    {
        Write-Host "tenantId is null"
        exit 1
    }
}

testParams

function CreateGroup([string]$name) {
    Write-Host " - Create new group"
    $group = az ad group create --display-name $name --mail-nickname $name

    $gpObjectId = ($group | ConvertFrom-Json).objectId
    Write-Host " $gpObjectId $name"
}

Write-Host "Creating groups"

##################################
### Create groups
##################################

CreateGroup $gpAdmins
CreateGroup $gpUsers

#az ad group list --display-name $groupName

return

Once created, the new security groups should be visible in the Azure portal. You need to add group members or user members to the groups.

That’s all the configuration required to setup the security groups. Now the groups can be used in the applications.

Define the authorization policies

We do not use the security groups directly in the applications because this can change a lot, or maybe the application is deployed to different host environments. The security groups are really just descriptions about the identity. How you use this is application specific and depends on the solution's business requirements, which tend to change a lot. In the applications, shared authorization policies are defined and used in both the Blazor WASM and the Blazor Server parts. The definitions have nothing to do with the security groups; the groups get mapped to application claims. A Policies class definition was created for all the policies in the shared Blazor project because this is defined once, but used in the server project and the client project. The code was built based on the excellent blog from Chris Sainty. The claims used in the authorization checks have nothing to do with the Azure security groups; this logic is application specific, and sometimes different applications inside the same solution need to apply different authorization logic for how the security groups are used.

using Microsoft.AspNetCore.Authorization;

namespace BlazorAzureADWithApis.Shared.Authorization
{
    public static class Policies
    {
        public const string DemoAdminsIdentifier = "demo-admins";
        public const string DemoAdminsValue = "1";

        public const string DemoUsersIdentifier = "demo-users";
        public const string DemoUsersValue = "1";

        public static AuthorizationPolicy DemoAdminsPolicy()
        {
            return new AuthorizationPolicyBuilder()
                .RequireAuthenticatedUser()
                .RequireClaim(DemoAdminsIdentifier, DemoAdminsValue)
                .Build();
        }

        public static AuthorizationPolicy DemoUsersPolicy()
        {
            return new AuthorizationPolicyBuilder()
                .RequireAuthenticatedUser()
                .RequireClaim(DemoUsersIdentifier, DemoUsersValue)
                .Build();
        }
    }
}

Add the authorization to the WASM and the server project

The policy definitions can now be added to the Blazor Server project and the Blazor WASM project. The AddAuthorization extension method is used to add the authorization to the Blazor server. The policy names can be anything you want.

services.AddAuthorization(options =>
{
    // By default, all incoming requests will be authorized according to the default policy
    options.FallbackPolicy = options.DefaultPolicy;
    options.AddPolicy("DemoAdmins", Policies.DemoAdminsPolicy());
    options.AddPolicy("DemoUsers", Policies.DemoUsersPolicy());
});

The AddAuthorizationCore method is used to add the authorization policies to the Blazor WASM client project.

var builder = WebAssemblyHostBuilder.CreateDefault(args);
builder.Services.AddOptions();
builder.Services.AddAuthorizationCore(options =>
{
    options.AddPolicy("DemoAdmins", Policies.DemoAdminsPolicy());
    options.AddPolicy("DemoUsers", Policies.DemoUsersPolicy());
});

Now the application policies and claims are defined. The next job is to connect the Azure security group definitions to the application claims used by the authorization policies.

Link the security groups from Azure to the app authorization

This can be done using the IClaimsTransformation interface, which gets called after a successful authentication. An application Microsoft Graph client is used to request the Azure AD security groups. The IDs of the Azure security groups are mapped to the application claims. Any application-specific logic can be added here; if a hierarchical authorization system is required, it could be mapped here as well.

public class GraphApiClaimsTransformation : IClaimsTransformation
{
    private readonly MsGraphApplicationService _msGraphApplicationService;

    public GraphApiClaimsTransformation(MsGraphApplicationService msGraphApplicationService)
    {
        _msGraphApplicationService = msGraphApplicationService;
    }

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        ClaimsIdentity claimsIdentity = new();
        var groupClaimType = "group";
        if (!principal.HasClaim(claim => claim.Type == groupClaimType))
        {
            var objectidentifierClaimType = "http://schemas.microsoft.com/identity/claims/objectidentifier";
            var objectIdentifier = principal
                .Claims.FirstOrDefault(t => t.Type == objectidentifierClaimType);

            var groupIds = await _msGraphApplicationService
                .GetGraphUserMemberGroups(objectIdentifier.Value);

            foreach (var groupId in groupIds.ToList())
            {
                var claim = GetGroupClaim(groupId);
                if (claim != null) claimsIdentity.AddClaim(claim);
            }
        }

        principal.AddIdentity(claimsIdentity);
        return principal;
    }

    private Claim GetGroupClaim(string groupId)
    {
        Dictionary<string, Claim> mappings = new Dictionary<string, Claim>() {
            { "1d9fba7e-b98a-45ec-b576-7ee77366cf10",
                new Claim(Policies.DemoUsersIdentifier, Policies.DemoUsersValue)},

            { "be30f1dd-39c9-457b-ab22-55f5b67fb566",
                new Claim(Policies.DemoAdminsIdentifier, Policies.DemoAdminsValue)},
        };

        if (mappings.ContainsKey(groupId))
        {
            return mappings[groupId];
        }

        return null;
    }
}

The MsGraphApplicationService class implements the Microsoft Graph requests. It uses application permissions with a ClientSecretCredential. I use secrets which are read from an Azure Key Vault. You need to implement rotation for the secret, or make it last forever and update it in the DevOps builds every time you deploy. My secrets are only defined in Azure and used from the Azure Key Vault. You could use certificates instead, but this adds no extra security unless you need to use the secret/certificate outside of Azure or in app settings somewhere. The GetMemberGroups method is used to get the groups for the authenticated user using the object identifier.

public class MsGraphApplicationService
{
    private readonly IConfiguration _configuration;

    public MsGraphApplicationService(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public async Task<IUserAppRoleAssignmentsCollectionPage>
        GetGraphUserAppRoles(string objectIdentifier)
    {
        var graphServiceClient = GetGraphClient();

        return await graphServiceClient.Users[objectIdentifier]
            .AppRoleAssignments
            .Request()
            .GetAsync();
    }

    public async Task<IDirectoryObjectGetMemberGroupsCollectionPage>
        GetGraphUserMemberGroups(string objectIdentifier)
    {
        var securityEnabledOnly = true;

        var graphServiceClient = GetGraphClient();

        return await graphServiceClient.Users[objectIdentifier]
            .GetMemberGroups(securityEnabledOnly)
            .Request().PostAsync();
    }

    private GraphServiceClient GetGraphClient()
    {
        string[] scopes = new[] { "https://graph.microsoft.com/.default" };
        var tenantId = _configuration["AzureAd:TenantId"];

        // Values from app registration
        var clientId = _configuration.GetValue<string>("AzureAd:ClientId");
        var clientSecret = _configuration.GetValue<string>("AzureAd:ClientSecret");

        var options = new TokenCredentialOptions
        {
            AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
        };

        // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
        var clientSecretCredential = new ClientSecretCredential(
            tenantId, clientId, clientSecret, options);

        return new GraphServiceClient(clientSecretCredential, scopes);
    }
}

The security groups are now mapped to the application claims used by the policies, and the policies can be applied in the application.
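For this to work, both services have to be registered with dependency injection in the server project. A minimal sketch of the registration, assuming scoped lifetimes, could look like this:

// Assumed registration sketch: the claims transformation runs after each
// successful authentication and depends on the Graph service.
services.AddScoped<MsGraphApplicationService>();
services.AddScoped<IClaimsTransformation, GraphApiClaimsTransformation>();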

Use the Policies in the Server

The Blazor Server application implements secure APIs for the Blazor WASM client. The Authorize attribute is used with the policy definition; the user must now be authorized using our policy to get data from this API. We also use cookies, because the Blazor application is secured using the backend-for-frontend (BFF) architecture, which has improved security compared to using tokens in the untrusted SPA.

[ValidateAntiForgeryToken]
[Authorize(Policy = "DemoAdmins",
    AuthenticationSchemes = CookieAuthenticationDefaults.AuthenticationScheme)]
[ApiController]
[Route("api/[controller]")]
public class DemoAdminController : ControllerBase
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new List<string>
        {
            "admin data",
            "secret admin record",
            "loads of admin data"
        };
    }
}

Use the policies in the WASM

The Blazor WASM application can also use the authorization policies. This is not really authorization but only usability, because you cannot implement authorization in an untrusted application which you have no control over once it is running. We would like to hide the components and menus which cannot be used if the user is not authorized. I use an AuthorizeView with a policy definition for this.

<div class="@NavMenuCssClass" @onclick="ToggleNavMenu">
    <ul class="nav flex-column">
        <AuthorizeView Policy="DemoAdmins">
            <Authorized>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="demoadmin">
                        <span class="oi oi-list-rich" aria-hidden="true"></span> DemoAdmin
                    </NavLink>
                </li>
            </Authorized>
        </AuthorizeView>

        <AuthorizeView Policy="DemoUsers">
            <Authorized>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="demouser">
                        <span class="oi oi-list-rich" aria-hidden="true"></span> DemoUser
                    </NavLink>
                </li>
            </Authorized>
        </AuthorizeView>

        <AuthorizeView>
            <Authorized>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="graphprofile">
                        <span class="oi oi-list-rich" aria-hidden="true"></span> Graph Profile
                    </NavLink>
                </li>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="" Match="NavLinkMatch.All">
                        <span class="oi oi-home" aria-hidden="true"></span> Home
                    </NavLink>
                </li>
            </Authorized>
            <NotAuthorized>
                <li class="nav-item px-3">
                    <p style="color:white">Please sign in</p>
                </li>
            </NotAuthorized>
        </AuthorizeView>

    </ul>
</div>

The Blazor UI pages should also use an Authorize attribute. This prevents an unhandled exception when a user navigates directly to a page they are not allowed to view. You could add logic which forces a login with the required permissions, or just display an error page; this depends on the UI strategy.

@page "/demoadmin"
@using Microsoft.AspNetCore.Authorization
@inject IHttpClientFactory HttpClientFactory
@inject IJSRuntime JSRuntime
@attribute [Authorize(Policy = "DemoAdmins")]

<h1>Demo Admin</h1>

When the application is started, you will only see what you are allowed to see and, more importantly, only be able to get data you are authorized to access.

If you open a page where you have no access rights, the not-authorized fallback is displayed instead of the page content.

Notes:

This solution is very flexible and can work with any source of identity definitions, not just Azure security groups; I could very easily switch to a database. One problem is that with a lot of authorization definitions, the size of the cookie might get too big, and you would then need to switch from using claims in the policy definitions to using a cache database or something similar. This would also be easy to adapt, because the claims are only mapped in the policies and in the IClaimsTransformation implementation; only the policies are used in the application logic.

Links

https://chrissainty.com/securing-your-blazor-apps-configuring-policy-based-authorization-with-blazor/

https://docs.microsoft.com/en-us/aspnet/core/blazor/security

Implementing Basic Authentication in ASP.NET Core Minimal API

This post is about how to implement basic authentication in ASP.NET Core Minimal API. A few days back I got a question / comment on the blog post about Minimal APIs – about implementing Basic authentication in Minimal APIs. Since Action Filters support is not available in Minimal APIs, I had to find an alternative approach for the implementation. I already wrote two blog posts – Basic authentication middleware for ASP.NET 5 and Basic HTTP authentication in ASP.Net Web API – on implementing Basic authentication. In this post I am implementing an AuthenticationHandler and using it for basic authentication. As I already explained the concepts in those posts, I am not discussing them again here.

Here is the implementation of the BasicAuthenticationHandler which implements the abstract class AuthenticationHandler.

public class BasicAuthenticationHandler : AuthenticationHandler<AuthenticationSchemeOptions>
{
    public BasicAuthenticationHandler(
        IOptionsMonitor<AuthenticationSchemeOptions> options,
        ILoggerFactory logger,
        UrlEncoder encoder,
        ISystemClock clock
    ) : base(options, logger, encoder, clock)
    {
    }

    protected override Task<AuthenticateResult> HandleAuthenticateAsync()
    {
        var authHeader = Request.Headers["Authorization"].ToString();
        if (authHeader != null && authHeader.StartsWith("basic", StringComparison.OrdinalIgnoreCase))
        {
            // Decode the "Basic <base64>" header value into "username:password"
            var token = authHeader.Substring("Basic ".Length).Trim();
            var credentialstring = Encoding.UTF8.GetString(Convert.FromBase64String(token));
            var credentials = credentialstring.Split(':');
            if (credentials[0] == "admin" && credentials[1] == "admin")
            {
                var claims = new[] { new Claim("name", credentials[0]), new Claim(ClaimTypes.Role, "Admin") };
                var identity = new ClaimsIdentity(claims, "Basic");
                var claimsPrincipal = new ClaimsPrincipal(identity);
                return Task.FromResult(AuthenticateResult.Success(new AuthenticationTicket(claimsPrincipal, Scheme.Name)));
            }

            Response.StatusCode = 401;
            Response.Headers.Add("WWW-Authenticate", "Basic realm=\"dotnetthoughts.net\"");
            return Task.FromResult(AuthenticateResult.Fail("Invalid Authorization Header"));
        }
        else
        {
            Response.StatusCode = 401;
            Response.Headers.Add("WWW-Authenticate", "Basic realm=\"dotnetthoughts.net\"");
            return Task.FromResult(AuthenticateResult.Fail("Invalid Authorization Header"));
        }
    }
}

Next modify the Program.cs like this.

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddAuthentication("BasicAuthentication")
    .AddScheme<AuthenticationSchemeOptions, BasicAuthenticationHandler>
        ("BasicAuthentication", null);
builder.Services.AddAuthorization();

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}
app.UseAuthentication();
app.UseAuthorization();

app.UseHttpsRedirection();

Now it is done. You can block anonymous access by adding the Authorize attribute to the endpoint delegate like this.

app.MapGet("/weatherforecast", [Authorize] () =>
{
    var forecast = Enumerable.Range(1, 5).Select(index =>
        new WeatherForecast
        (
            DateTime.Now.AddDays(index),
            Random.Shared.Next(-20, 55),
            summaries[Random.Shared.Next(summaries.Length)]
        ))
        .ToArray();
    return forecast;
}).WithName("GetWeatherForecast");

Now if you browse the weather forecast endpoint – https://localhost:5001/weatherforecast – the browser will prompt for a user name and password.
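You can also call the endpoint from code. This is a minimal sketch assuming the API is running on https://localhost:5001 and the hard-coded admin/admin credentials from the handler above; it sends the Basic Authorization header directly instead of relying on the browser prompt.

using System.Net.Http.Headers;
using System.Text;

using var client = new HttpClient();

// Base64-encode "username:password" as required by the Basic scheme
var token = Convert.ToBase64String(Encoding.UTF8.GetBytes("admin:admin"));
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

// Expect 200 with the forecast when the credentials match, 401 otherwise
var response = await client.GetAsync("https://localhost:5001/weatherforecast");
Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");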

Happy Programming 🙂