Send email from #SQL server using a #CLR function

Although you can, and usually should, use sp_send_dbmail to send email from SQL Server, it’s often not as flexible as you need it to be. So here is a .NET CLR function that you can install in your MSSQL server in order to send an email using whatever additional configuration you need.

You need to run these commands before installing the assembly:

EXEC sp_changedbowner 'sa'
ALTER DATABASE <your-database> SET TRUSTWORTHY ON

CREATE ASSEMBLY [SendEmailCLR]
AUTHORIZATION [dbo]
FROM 0x4D5A90000300000004000000FFFF000…..
WITH PERMISSION_SET = UNSAFE;

CREATE PROCEDURE [dbo].[SendEmail]
@smtpServer NVARCHAR (MAX) NULL,
@smtpUsername NVARCHAR (MAX) NULL,
@smtpPassword NVARCHAR (MAX) NULL,
@from NVARCHAR (MAX) NULL,
@to NVARCHAR (MAX) NULL,
@subject NVARCHAR (MAX) NULL,
@body NVARCHAR (MAX) NULL
AS EXTERNAL NAME [SendEmailCLR].[SendEmailCLR].[SendEmail]

The full binary string is redacted here to save space, but you can get this from https://github.com/infiniteloopltd/SendEmailSQLCLR/blob/master/bin/Debug/SendEmailCLR_4.publish.sql
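Once the assembly and procedure are installed, the procedure can be called like any other stored procedure. Here is a sketch of a call matching the parameter list above – the SMTP host, credentials and addresses are placeholders, not values from the post:

```sql
EXEC [dbo].[SendEmail]
    @smtpServer   = 'smtp.example.com',
    @smtpUsername = 'mailuser',
    @smtpPassword = 'mailpassword',
    @from         = 'alerts@example.com',
    @to           = 'admin@example.com',
    @subject      = 'Test from SQL CLR',
    @body         = 'Hello from the SendEmailCLR assembly.';
```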

Implementing an API Gateway in ASP.NET Core with Ocelot

This post is about what an API Gateway is and how to build one in ASP.NET Core with Ocelot. An API gateway is a service that sits between clients and backend APIs, routing each client request to the appropriate service of an application. It’s an architectural pattern that was initially created to support microservices. In this post I am building an API Gateway using Ocelot. Ocelot is aimed at people using .NET who are running a microservices / service-oriented architecture and need a unified point of entry into their system.

Let’s start the implementation.

First we will create two web api applications – both of these services return some hard-coded string values. Here is the first web api – CustomersController – which returns a list of customers.

using Microsoft.AspNetCore.Mvc;

namespace ServiceA.Controllers;

[ApiController]
[Route("[controller]")]
public class CustomersController : ControllerBase
{
    private readonly ILogger<CustomersController> _logger;

    public CustomersController(ILogger<CustomersController> logger)
    {
        _logger = logger;
    }

    [HttpGet(Name = "GetCustomers")]
    public IActionResult Get()
    {
        return Ok(new[] { "Customer1", "Customer2", "Customer3" });
    }
}

And here is the second web api – ProductsController.

using Microsoft.AspNetCore.Mvc;

namespace ServiceB.Controllers;

[ApiController]
[Route("[controller]")]
public class ProductsController : ControllerBase
{
    private readonly ILogger<ProductsController> _logger;

    public ProductsController(ILogger<ProductsController> logger)
    {
        _logger = logger;
    }

    [HttpGet(Name = "GetProducts")]
    public IActionResult Get()
    {
        return Ok(new[] { "Product1", "Product2",
            "Product3", "Product4", "Product5" });
    }
}

Next we will create the API Gateway. To do this, create an empty ASP.NET Core web application using the command – dotnet new web -o ApiGateway. Once we create the gateway application, we need to add a reference to the Ocelot NuGet package – we can do this using dotnet add package Ocelot. Now we can modify the Program.cs file like this.

using Ocelot.DependencyInjection;
using Ocelot.Middleware;

var builder = WebApplication.CreateBuilder(args);

builder.Configuration.SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("configuration.json", false, true)
    .AddEnvironmentVariables();

builder.Services.AddOcelot(builder.Configuration);
var app = builder.Build();

await app.UseOcelot();
app.Run();

Next you need to configure your API routes using configuration.json. Here is a basic configuration which routes requests from the gateway endpoints to the web api endpoints.

{
  "Routes": [
    {
      "DownstreamPathTemplate": "/customers",
      "DownstreamScheme": "https",
      "DownstreamHostAndPorts": [
        {
          "Host": "localhost",
          "Port": 7155
        }
      ],
      "UpstreamPathTemplate": "/api/customers",
      "UpstreamHttpMethod": [ "Get" ]
    },
    {
      "DownstreamPathTemplate": "/products",
      "DownstreamScheme": "https",
      "DownstreamHostAndPorts": [
        {
          "Host": "localhost",
          "Port": 7295
        }
      ],
      "UpstreamPathTemplate": "/api/products",
      "UpstreamHttpMethod": [ "Get" ]
    }
  ],
  "GlobalConfiguration": {
    "BaseUrl": "https://localhost:7043"
  }
}

Now run all three applications and browse the endpoint – https://localhost:7043/api/products – which invokes the ProductsController GET action method. Similarly, browsing the endpoint – https://localhost:7043/api/customers – invokes the CustomersController GET action method. In the configuration, the UpstreamPathTemplate is the API Gateway endpoint, and the API Gateway forwards the request to the DownstreamPathTemplate endpoint.
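The routing can also be exercised from the command line. This is a sketch – the ports come from the configuration above, and -k skips TLS validation for the local dev certificate:

```shell
# Via the gateway (upstream path)
curl -k https://localhost:7043/api/products

# Directly against the downstream service
curl -k https://localhost:7295/products
```

Both requests should return the same hard-coded product list.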

For some strange reason it was not working properly for me at first; today I configured it again and it started working. This is an introductory post – in the future I will blog about some common use cases where an API Gateway helps, and how to deploy it in Azure.

Happy Programming 🙂

Early peek at C# 11 features

Visual Studio 17.1 (Visual Studio 2022 Update 1) and .NET SDK 6.0.200 include preview features for C# 11! You can update Visual Studio or download the latest .NET SDK to get these features.

Check out the post Visual Studio 2022 17.1 is now available! to find out what’s new in Visual Studio and the post Announcing .NET 7 Preview 1 to learn about more .NET 7 preview features.

Designing C# 11

We love designing and developing in the open! You can find proposals for future C# features and notes from language design meetings in the CSharpLang repo. The main page explains our design process and you can listen to Mads Torgersen on the .NET Community Runtime and Languages Standup where he talks about the design process.

Once work for a feature is planned, work and tracking shifts to the Roslyn repo. You can find the status of upcoming features on the Feature Status page. You can see what we are working on and what’s merged into each preview. You can also look back at previous versions to check out features you may have overlooked.

For this post I’ve distilled these sometimes complex and technical discussions to what each feature means in your code.

We hope you will try out these new preview features and let us know what you think. To try out the C# 11 preview features, create a C# project and set the LangVersion to Preview. Your .csproj file might look like:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <LangVersion>preview</LangVersion>
  </PropertyGroup>
</Project>

C# 11 Preview: Allow newlines in the “holes” of interpolated strings

Read more about this change in the proposal Remove restriction that interpolations within a non-verbatim interpolated string cannot contain new-lines. #4935

C# supports two styles of interpolated strings: verbatim and non-verbatim interpolated strings ($@"" and $"" respectively). A key difference between these is that a non-verbatim interpolated string cannot contain newlines in its text segments, and must instead use escapes (like \r\n). A verbatim interpolated string can contain newlines in its text segments, and doesn’t escape newlines or other characters (except for "" to escape a quote itself). All of this behavior remains the same.

Previously, these restrictions extended to the holes of non-verbatim interpolated strings. “Holes” is a shorthand way of saying interpolation expressions – the portions inside the curly braces that supply runtime values. The holes themselves are not text, and shouldn’t be held to the escaping/newline rules of the interpolated string’s text segments.
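As a reminder of that pre-existing distinction, here is a minimal sketch of the two string styles (the variable names are illustrative only):

```csharp
var name = "World";

// Non-verbatim: newlines in text segments must be escaped
var a = $"Hello,\r\n{name}";

// Verbatim: text segments may contain real newlines
var b = $@"Hello,
{name}";
```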

For example, the following would have resulted in a compiler error in C# 10 and is legal in this C# 11 preview:

var v = $"Count is: { this.Is.Really.Something()
                        .That.I.Should(
                            be + able)[
                                to.Wrap()] }.";

C# 11 Preview: List patterns

Read more about this change in the proposal List patterns.

The new list pattern allows you to match against lists and arrays. You can match elements and optionally include a slice pattern that matches zero or more elements. Using slice patterns you can discard or capture zero or more elements.

The syntax for a list pattern is values surrounded by square brackets; for the slice pattern it is two dots. The slice pattern can be followed by another pattern, such as the var pattern, to capture the contents of the slice.

The pattern [1, 2, .., 10] matches all of the following:

int[] arr1 = { 1, 2, 10 };
int[] arr2 = { 1, 2, 5, 10 };
int[] arr3 = { 1, 2, 5, 6, 7, 8, 9, 10 };

To explore list patterns consider:

public static int CheckSwitch(int[] values)
    => values switch
    {
        [1, 2, .., 10] => 1,
        [1, 2] => 2,
        [1, _] => 3,
        [1, ..] => 4,
        [..] => 50
    };

When it is passed the following arrays, the results are as indicated:

WriteLine(CheckSwitch(new[] { 1, 2, 10 })); // prints 1
WriteLine(CheckSwitch(new[] { 1, 2, 7, 3, 3, 10 })); // prints 1
WriteLine(CheckSwitch(new[] { 1, 2 })); // prints 2
WriteLine(CheckSwitch(new[] { 1, 3 })); // prints 3
WriteLine(CheckSwitch(new[] { 1, 3, 5 })); // prints 4
WriteLine(CheckSwitch(new[] { 2, 5, 6, 7 })); // prints 50

You can also capture the results of a slice pattern:

public static string CaptureSlice(int[] values)
    => values switch
    {
        [1, .. var middle, _] => $"Middle {String.Join(", ", middle)}",
        [.. var all] => $"All {String.Join(", ", all)}"
    };

List patterns work with any type that is countable and indexable — which means it has an accessible Length or Count property and an indexer with an int or System.Index parameter. Slice patterns work with any type that is countable and sliceable — which means it has an accessible indexer that takes a Range as an argument, or has an accessible Slice method with two int parameters.

We’re considering adding support for list patterns on IEnumerable types. If you have a chance to play with this feature, let us know your thoughts on it.

C# 11 Preview: Parameter null-checking

Read more about this change in the proposal Parameter null checking.

We are putting this feature into this early preview to ensure we have time to get feedback. There have been discussions on a very succinct syntax vs. a more verbose one. We want to get feedback from customers and from users who have had a chance to experiment with this feature.

It is quite common to validate whether method arguments are null with variations of boilerplate code like:

public static void M(string s)
{
    if (s is null)
    {
        throw new ArgumentNullException(nameof(s));
    }
    // Body of the method
}

With Parameter null checking, you can abbreviate your intent by adding !! to the parameter name:

public static void M(string s!!)
{
    // Body of the method
}

Code will be generated to perform the null check. The generated null check will execute before any of the code within the method. For constructors, the null check occurs before field initialization, calls to base constructors, and calls to this constructors.

This feature is independent of Nullable Reference Types (NRT), although they work well together. NRT helps you know at design time whether a null is possible. Parameter null-checking makes it easier to check at runtime whether nulls have been passed to your code. This is particularly important when your code is interacting with external code that might not have NRT enabled.

The check is equivalent to if (param is null) throw new ArgumentNullException(…). When multiple parameters contain the !! operator, the checks will occur in the same order as the parameters are declared.

There are a few guidelines limiting where !! can be used:

Null-checks can only be applied to parameters when there is an implementation. For example, an abstract method parameter cannot use !!. Other cases where it cannot be used include:

extern method parameters.
Delegate parameters.
Interface method parameters when the method is not a Default Interface Method (DIM).

Null checking can only be applied to parameters that can be checked.

An example of scenarios that are excluded based on the second rule are discards and out parameters. Null-checking can be done on ref and in parameters.

Null-checking is allowed on indexer parameters, and the check is added to the get and set accessor. For example:

public string this[string key!!] { get { … } set { … } }

Null-checks can be used on lambda parameters, whether or not they are surrounded by parentheses:

// An identity lambda which throws on a null input
Func<string, string> s = x!! => x;

async methods can have null-checked parameters. The null check occurs when the method is invoked.

The syntax is also valid on parameters to iterator methods. The null-check will occur when the iterator method is invoked, not when the underlying enumerator is walked. This is true for traditional or async iterators:

class Iterators {
    IEnumerable<char> GetCharacters(string s!!) {
        foreach (var c in s) {
            yield return c;
        }
    }

    void Use() {
        // The invocation of GetCharacters will throw
        IEnumerable<char> e = GetCharacters(null);
    }
}

Interaction with Nullable Reference Types

Any parameter which has a !! operator applied to its name will start with the nullable state being not-null. This is true even if the type of the parameter itself is potentially null. That can occur with an explicitly nullable type, such as say string?, or with an unconstrained type parameter.

When !! syntax on parameters is combined with an explicitly nullable type on the parameter, the compiler will issue a warning:

void WarnCase<T>(
    string? name!!, // CS8995 Nullable type 'string?' is null-checked and will throw if null.
    T value1!! // Okay
)

Constructors

There is a small, but observable change when you change from explicit null-checks in your code to null-checks using the null validation syntax (!!). Your explicit validation occurs after field initializers, base class constructors, and constructors called using this. Null-checks performed with the parameter null-check syntax will occur before any of these execute. Early testers found this order to be helpful and we think it will be very rare that this difference will adversely affect code. But check that it will not impact your program before shifting from explicit null-checks to the new syntax.

Notes on design

You can hear Jared Parsons in the .NET Languages and Runtime Community Standup on Feb. 9th, 2022. This clip starts about 45 minutes into the stream when Jared joins us to talk more about the decisions made to get this feature into preview, and responds to some of the common feedback.

Some folks learned about this feature when they saw PRs using this feature in the .NET Runtime. Other teams at Microsoft provide important dogfooding feedback on C#. It was exciting to learn that the .NET Runtime removed nearly 20,000 lines of code using this new null-check syntax.

The syntax is !! on the parameter name. It is on the name, not the type, because this is a feature of how that specific parameter will be treated in your code. We decided against attributes because of how it would impact code readability and because attributes very rarely impact how your program executes in the way this feature does.

We considered and rejected making a global setting that there would be null-checks on all nullable parameters. Parameter null checking forces a design choice about how null will be handled. There are many methods where a null argument is a valid value. Doing this everywhere a type is not null would be excessive and have a performance impact. It would be extremely difficult to limit only to methods that were vulnerable to nulls (such as public interfaces). We also know from the .NET Runtime work that there are many places the check is not appropriate, so a per parameter opt-out mechanism would be needed. We do not currently think that a global approach to runtime null checks is likely to be appropriate, and if we ever consider a global approach, it would be a different feature.

Summary

Visual Studio 17.1 and .NET SDK 6.0.200 offer an early peek into C# 11. You can play with parameter null-checking, list patterns, and new lines within curly braces (the holes) of interpolated strings.

We hope you’ll check out the C# 11 Preview features by updating Visual Studio or downloading the latest .NET SDK, and then setting the LangVersion to preview.

We look forward to hearing what you think, here or via discussions in the CSharpLang repo on GitHub!

The post Early peek at C# 11 features appeared first on .NET Blog.

Verify an Email address without sending an Email via an #API for free

One of these two email addresses is valid: [email protected] or [email protected] – how can you tell which one? Regexes will say both are valid; even a DNS MX lookup will say that @gmail.com is valid.

Here’s the trick: https://avatarapi.com/avatar.asmx?op=VerifyEmail

It’s a free API that does not require registration or authentication, and it does not store the email addresses supplied to it. It does not send an email, but just checks the mailbox.

Here is a result for [email protected]

<EmailVerificationResponse xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://avatarapi.com/">
  <Verification>FAIL</Verification>
  <MailExchange>alt4.gmail-smtp-in.l.google.com.</MailExchange>
  <SmtpResponse>550-5.1.1 The email account that you tried to reach does not exist. Please try</SmtpResponse>
</EmailVerificationResponse>

And here is the result for [email protected]

<EmailVerificationResponse xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://avatarapi.com/">
  <Verification>SUCCESS</Verification>
  <MailExchange>alt4.gmail-smtp-in.l.google.com.</MailExchange>
  <SmtpResponse>250 2.1.5 OK hf21-20020a17090aff9500b001bc3052777csi2002522pjb.42 – gsmtp</SmtpResponse>
</EmailVerificationResponse>

It also works with every email host, not just Gmail. However, some mail exchangers do not give information on their mailboxes, in which case the result can be inconclusive.

Decoding binary #WebSockets data using C#

On some websites, you may notice data being exchanged between server and client with no evident Ajax calls being made. In that case there may be activity on the WebSockets (WS) channel, and if, in this channel, you are greeted by a jumble of binary data, you may feel like giving up – but it can be easier to decode than you think.

The first clue I noticed was that there was a request header called

Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits

Here, deflate is a compression mechanism similar to GZip, and it can be decoded easily in C#. The first step, though, is to view the binary data as base64 so that you can copy & paste it, and then decode it using this function:

// Requires: using System.IO; using System.IO.Compression;
public static byte[] Decompress(byte[] data)
{
    MemoryStream input = new MemoryStream(data);
    MemoryStream output = new MemoryStream();
    using (DeflateStream dstream = new DeflateStream(input, CompressionMode.Decompress))
    {
        dstream.CopyTo(output);
    }
    return output.ToArray();
}

Which is called as follows:

var binInput = Convert.FromBase64String(b64Input);
var bDeflate = Decompress(binInput);
var output = Encoding.UTF8.GetString(bDeflate);

And from there, you see much more familiar JSON text.
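To check the Decompress helper without a live WebSockets feed, a matching Compress method can be used for a round trip. This is a sketch, not from the original post:

```csharp
// Requires: using System.IO; using System.IO.Compression; using System.Text;
public static byte[] Compress(byte[] data)
{
    MemoryStream output = new MemoryStream();
    using (DeflateStream dstream = new DeflateStream(output, CompressionMode.Compress))
    {
        dstream.Write(data, 0, data.Length);
    }
    return output.ToArray();
}

// var original = Encoding.UTF8.GetBytes("{\"hello\":\"world\"}");
// var roundTrip = Decompress(Compress(original)); // byte-for-byte equal to original
```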

Implementing authorization in Blazor ASP.NET Core applications using Azure AD security groups

This article shows how to implement authorization in an ASP.NET Core Blazor application using Azure AD security groups as the data source for the authorization definitions. Policies and claims are used in the application, which decouples the definitions from the Azure AD security groups and the application-specific authorization requirements. With this setup it is easy to support any complex authorization requirement, and IT admins can manage the accounts independently in Azure. This solution will work for Azure AD B2C, or can easily be adapted to use data from your database instead of Azure AD security groups if required.

Code: https://github.com/damienbod/AzureADAuthRazorUiServiceApiCertificate/tree/main/BlazorBff

Set up the AAD security groups

Before we start using the Azure AD security groups, the groups need to be created. I use PowerShell to create the security groups. This is really simple using the PowerShell Az module. For this demo, just two groups are created, one for users and one for admins. The script can be run from your PowerShell console; you are required to authenticate before running the script, and the groups are added if you have the rights. In DevOps, you could use a managed identity and the client credentials flow.

# https://theitbros.com/install-azure-powershell/
#
# https://docs.microsoft.com/en-us/powershell/module/az.accounts/connect-azaccount?view=azps-7.1.0
#
# Connect-AzAccount -Tenant "--tenantId--"
# az login --tenant "--tenantId--"

$tenantId = "--tenantId--"
$gpAdmins = "demo-admins"
$gpUsers = "demo-users"

function testParams {

    if (!$tenantId)
    {
        Write-Host "tenantId is null"
        exit 1
    }
}

testParams

function CreateGroup([string]$name) {
    Write-Host " - Create new group"
    $group = az ad group create --display-name $name --mail-nickname $name

    $gpObjectId = ($group | ConvertFrom-Json).objectId
    Write-Host " $gpObjectId $name"
}

Write-Host "Creating groups"

##################################
### Create groups
##################################

CreateGroup $gpAdmins
CreateGroup $gpUsers

#az ad group list --display-name $groupName

return

Once created, the new security groups should be visible in the Azure portal. You need to add group members or user members to the groups.
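Members can be added from a script as well, for example with the az CLI (a sketch – the object id is a placeholder for a real user object id from your tenant):

```shell
az ad group member add --group demo-admins --member-id "--userObjectId--"
```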

That’s all the configuration required to setup the security groups. Now the groups can be used in the applications.

Define the authorization policies

We do not use the security groups directly in the applications, because these can change a lot, or the application may be deployed to different host environments. The security groups are really just descriptions about the identity. How you use them is application specific and depends on the solution’s business requirements, which tend to change a lot. Shared authorization policies are defined and used in both the Blazor WASM and the Blazor Server parts. The definitions have nothing to do with the security groups; the groups get mapped to application claims. A Policies class was created for all the policies in the shared Blazor project, because this is defined once but used in both the server project and the client project. The code was built based on the excellent blog from Chris Sainty. The claims used for the authorization checks have nothing to do with the Azure security groups; this logic is application specific, and sometimes different applications inside the same solution need to apply different authorization logic for how the security groups are used.

using Microsoft.AspNetCore.Authorization;

namespace BlazorAzureADWithApis.Shared.Authorization
{
    public static class Policies
    {
        public const string DemoAdminsIdentifier = "demo-admins";
        public const string DemoAdminsValue = "1";

        public const string DemoUsersIdentifier = "demo-users";
        public const string DemoUsersValue = "1";

        public static AuthorizationPolicy DemoAdminsPolicy()
        {
            return new AuthorizationPolicyBuilder()
                .RequireAuthenticatedUser()
                .RequireClaim(DemoAdminsIdentifier, DemoAdminsValue)
                .Build();
        }

        public static AuthorizationPolicy DemoUsersPolicy()
        {
            return new AuthorizationPolicyBuilder()
                .RequireAuthenticatedUser()
                .RequireClaim(DemoUsersIdentifier, DemoUsersValue)
                .Build();
        }
    }
}

Add the authorization to the WASM and the server project

The policy definitions can now be added to the Blazor Server project and the Blazor WASM project. The AddAuthorization extension method is used to add the authorization to the Blazor server. The policy names can be anything you want.

services.AddAuthorization(options =>
{
    // By default, all incoming requests will be authorized according to the default policy
    options.FallbackPolicy = options.DefaultPolicy;
    options.AddPolicy("DemoAdmins", Policies.DemoAdminsPolicy());
    options.AddPolicy("DemoUsers", Policies.DemoUsersPolicy());
});

The AddAuthorizationCore method is used to add the authorization policies to the Blazor WASM client project.

var builder = WebAssemblyHostBuilder.CreateDefault(args);
builder.Services.AddOptions();
builder.Services.AddAuthorizationCore(options =>
{
    options.AddPolicy("DemoAdmins", Policies.DemoAdminsPolicy());
    options.AddPolicy("DemoUsers", Policies.DemoUsersPolicy());
});

Now the application policies and claims are defined. The next job is to connect the Azure security group definitions to the application authorization claims used by the authorization policies.

Link the security groups from Azure to the app authorization

This can be done using the IClaimsTransformation interface which gets called after a successful authentication. An application Microsoft Graph client is used to request the Azure AD security groups. The IDs of the Azure security groups are mapped to the application claims. Any logic can be added here which is application specific. If a hierarchical authorization system is required, this could be mapped here.

public class GraphApiClaimsTransformation : IClaimsTransformation
{
    private readonly MsGraphApplicationService _msGraphApplicationService;

    public GraphApiClaimsTransformation(MsGraphApplicationService msGraphApplicationService)
    {
        _msGraphApplicationService = msGraphApplicationService;
    }

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        ClaimsIdentity claimsIdentity = new();
        var groupClaimType = "group";
        if (!principal.HasClaim(claim => claim.Type == groupClaimType))
        {
            var objectidentifierClaimType = "http://schemas.microsoft.com/identity/claims/objectidentifier";
            var objectIdentifier = principal
                .Claims.FirstOrDefault(t => t.Type == objectidentifierClaimType);

            var groupIds = await _msGraphApplicationService
                .GetGraphUserMemberGroups(objectIdentifier.Value);

            foreach (var groupId in groupIds.ToList())
            {
                var claim = GetGroupClaim(groupId);
                if (claim != null) claimsIdentity.AddClaim(claim);
            }
        }

        principal.AddIdentity(claimsIdentity);
        return principal;
    }

    private Claim GetGroupClaim(string groupId)
    {
        Dictionary<string, Claim> mappings = new Dictionary<string, Claim>() {
            { "1d9fba7e-b98a-45ec-b576-7ee77366cf10",
                new Claim(Policies.DemoUsersIdentifier, Policies.DemoUsersValue)},

            { "be30f1dd-39c9-457b-ab22-55f5b67fb566",
                new Claim(Policies.DemoAdminsIdentifier, Policies.DemoAdminsValue)},
        };

        if (mappings.ContainsKey(groupId))
        {
            return mappings[groupId];
        }

        return null;
    }
}

The MsGraphApplicationService class is used to implement the Microsoft Graph requests. This uses application permissions with a ClientSecretCredential. I use secrets which are read from an Azure Key Vault. You need to implement rotation for these, or make the secret last forever and update it in the DevOps builds every time you deploy. My secrets are only defined in Azure and used from the Azure Key Vault. You could use certificates instead, but this adds no extra security unless you need to use the secret/certificate outside of Azure or in app settings somewhere. The GetMemberGroups method is used to get the groups for the authenticated user using the object identifier.

public class MsGraphApplicationService
{
    private readonly IConfiguration _configuration;

    public MsGraphApplicationService(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public async Task<IUserAppRoleAssignmentsCollectionPage>
        GetGraphUserAppRoles(string objectIdentifier)
    {
        var graphServiceClient = GetGraphClient();

        return await graphServiceClient.Users[objectIdentifier]
            .AppRoleAssignments
            .Request()
            .GetAsync();
    }

    public async Task<IDirectoryObjectGetMemberGroupsCollectionPage>
        GetGraphUserMemberGroups(string objectIdentifier)
    {
        var securityEnabledOnly = true;

        var graphServiceClient = GetGraphClient();

        return await graphServiceClient.Users[objectIdentifier]
            .GetMemberGroups(securityEnabledOnly)
            .Request().PostAsync();
    }

    private GraphServiceClient GetGraphClient()
    {
        string[] scopes = new[] { "https://graph.microsoft.com/.default" };
        var tenantId = _configuration["AzureAd:TenantId"];

        // Values from app registration
        var clientId = _configuration.GetValue<string>("AzureAd:ClientId");
        var clientSecret = _configuration.GetValue<string>("AzureAd:ClientSecret");

        var options = new TokenCredentialOptions
        {
            AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
        };

        // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
        var clientSecretCredential = new ClientSecretCredential(
            tenantId, clientId, clientSecret, options);

        return new GraphServiceClient(clientSecretCredential, scopes);
    }
}

The security groups are mapped to the application claims and policies. The policies can be applied in the application.

Use the Policies in the Server

The Blazor Server application implements secure APIs for the Blazor WASM client. The Authorize attribute is used with the policy definition; now the user must be authorized according to our definition to get data from this API. We also use cookies, because the Blazor application is secured using the BFF architecture, which has improved security compared to using tokens in the untrusted SPA.

[ValidateAntiForgeryToken]
[Authorize(Policy = "DemoAdmins",
    AuthenticationSchemes = CookieAuthenticationDefaults.AuthenticationScheme)]
[ApiController]
[Route("api/[controller]")]
public class DemoAdminController : ControllerBase
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new List<string>
        {
            "admin data",
            "secret admin record",
            "loads of admin data"
        };
    }
}

Use the policies in the WASM

The Blazor WASM application can also use the authorization policies. This is not really authorization but only usability, because you cannot implement authorization in an untrusted application over which you have no control once it’s running. We would like to hide the components and menus which cannot be used if you are not authorized. I use an AuthorizeView with a policy definition for this.

<div class="@NavMenuCssClass" @onclick="ToggleNavMenu">
    <ul class="nav flex-column">
        <AuthorizeView Policy="DemoAdmins">
            <Authorized>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="demoadmin">
                        <span class="oi oi-list-rich" aria-hidden="true"></span> DemoAdmin
                    </NavLink>
                </li>
            </Authorized>
        </AuthorizeView>

        <AuthorizeView Policy="DemoUsers">
            <Authorized>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="demouser">
                        <span class="oi oi-list-rich" aria-hidden="true"></span> DemoUser
                    </NavLink>
                </li>
            </Authorized>
        </AuthorizeView>

        <AuthorizeView>
            <Authorized>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="graphprofile">
                        <span class="oi oi-list-rich" aria-hidden="true"></span> Graph Profile
                    </NavLink>
                </li>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="" Match="NavLinkMatch.All">
                        <span class="oi oi-home" aria-hidden="true"></span> Home
                    </NavLink>
                </li>
            </Authorized>
            <NotAuthorized>
                <li class="nav-item px-3">
                    <p style="color:white">Please sign in</p>
                </li>
            </NotAuthorized>
        </AuthorizeView>
    </ul>
</div>

The Blazor UI pages should also use an Authorize attribute. This prevents an unhandled exception. You could add logic which forces you to login then with the permissions required or just display an error page. This depends on the UI strategy.

@page "/demoadmin"
@using Microsoft.AspNetCore.Authorization
@inject IHttpClientFactory HttpClientFactory
@inject IJSRuntime JSRuntime
@attribute [Authorize(Policy = "DemoAdmins")]

<h1>Demo Admin</h1>

When the application is started, you will only see what you are allowed to see and, more importantly, only be able to get data for what you are authorized.

If you open a page where you have no access rights, a not-authorized message is displayed instead of the page content.

Notes:

This solution is very flexible and can work with any source of identity definitions, not just Azure security groups; I could very easily switch to a database. One problem is that with a lot of authorization definitions, the size of the cookie might get too big, and you would need to switch from using claims in the policy definitions to using a cache database or something similar. This would also be easy to adapt, because the claims are only mapped in the policies and the IClaimsTransformation implementation. Only the policies are used in the application logic.

Links

https://chrissainty.com/securing-your-blazor-apps-configuring-policy-based-authorization-with-blazor/

https://docs.microsoft.com/en-us/aspnet/core/blazor/security

Some jQuery Event Methods.


jQuery events are actions on a page that your web application can detect; they are used to create dynamic web pages. An event represents the exact moment when something happens. This section contains a comprehensive list of event methods belonging to the latest jQuery library, grouped into categories.

Below are some examples of jQuery events:

A mouse click, hover, etc.
Selecting a radio button.
Submitting an HTML form.
Clicking on an element.
Scrolling the web page.
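As a quick illustration, binding a few of the handlers listed below might look like the following sketch. It assumes a browser page with jQuery loaded; the `#btn` and `p` selectors are hypothetical:

```javascript
// Run once the DOM is fully loaded
$(document).ready(function () {
  // Fire when the (hypothetical) #btn element is clicked
  $("#btn").click(function () {
    alert("Button clicked");
  });

  // Toggle a class when the pointer enters/leaves any paragraph
  $("p").hover(
    function () { $(this).addClass("highlight"); },
    function () { $(this).removeClass("highlight"); }
  );
});
```

The same handlers can also be attached with the generic `on()` method, e.g. `$("#btn").on("click", handler)`.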

List of jQuery Event Methods.

Mouse Events.

click() – Bind an event handler to be fired when the element is clicked, or trigger that handler on an element.
dblclick() – Bind an event handler to be fired when the element is double-clicked, or trigger that event on an element.
hover() – Bind one or two handlers to the selected elements, to be executed when the mouse pointer enters and leaves the elements.
mousedown() – Bind an event handler to be fired when the mouse button is pressed within the element, or trigger that event on an element.
mouseenter() – Bind an event handler to be fired when the mouse enters an element, or trigger that handler on an element.
mouseleave() – Bind an event handler to be fired when the mouse leaves an element, or trigger that handler on an element.
mouseout() – Bind an event handler to be fired when the mouse pointer leaves the element, or trigger that event on an element.
mouseup() – Bind an event handler to be fired when the mouse button is released within the element, or trigger that event on an element.

Keyboard Events.

keydown() – Bind an event handler to be fired when a key is pressed and the element has keyboard focus, or trigger that event on an element.
keypress() – Bind an event handler to be fired when a keystroke occurs and the element has keyboard focus, or trigger that event on an element.
keyup() – Bind an event handler to be fired when a key is released and the element has keyboard focus, or trigger that event on an element.

Form Events.

blur() – Bind an event handler to be fired when the element loses keyboard focus, or trigger that event on an element.
change() – Bind an event handler to be fired when the element's value changes, or trigger that event on an element.
focus() – Bind an event handler to be fired when the element gains keyboard focus, or trigger that event on an element.
focusin() – Bind an event handler to be fired when the element, or a descendant, gains keyboard focus.
focusout() – Bind an event handler to be fired when the element, or a descendant, loses keyboard focus.
select() – Bind an event handler to be fired when text in the element is selected, or trigger that event on an element.
submit() – Bind an event handler to be fired when the form element is submitted, or trigger that event on an element.

Document/Browser Events.

load() – Bind an event handler to be fired when the element finishes loading. Deprecated in favor of the Ajax load() method.
ready() – Bind an event handler to be fired when the DOM is fully loaded.
resize() – Bind an event handler to be fired when the element is resized, or trigger that event on an element.
scroll() – Bind an event handler to be fired when the window's or element's scroll position changes, or trigger that event on an element.


The post Some jQuery Event Methods. appeared first on PHPFOREVER.

What is React?

Introduction

Why use React?
Virtual Document Object Model (VDOM)
JSX
React Native

Main Components
Function Components
Class Components

Benefits
Who uses React

How to build your first application on React
How to create your app on Reactjs from the terminal of your IDE

How to create your app with Flatlogic Platform
Creating a CRUD application with Flatlogic
Creating a one-page application with Flatlogic

Introduction: What is React

React.js was created by Jordan Walke, a software engineer at Facebook, and was first deployed in 2011 before being open-sourced in 2013. React is a JavaScript library focused on creating declarative user interfaces (UIs) using a component-based concept. It's used for handling the view layer and can be used for web and mobile apps. React's main goal is to be extensible, fast, declarative, flexible, and simple.

React is not a framework, it is specifically a library. The explanation for this is that React only deals with rendering the UI and leaves many other decisions to individual projects. The standard set of tools for creating an application using ReactJS is frequently called the stack.

Why use React?

Let's take a more detailed look at what sets the React library apart from other frameworks and libraries and makes it so powerful and popular for application development.

Virtual Document Object Model (VDOM)

The Document Object Model (DOM) is an API for valid HTML and well-formed XML documents.

A virtual DOM is a representation of a real DOM that is built/manipulated by browsers. Advanced libraries, such as React, generate a tree of elements in memory equivalent to the real DOM, which forms the virtual DOM in a declarative way. The virtual DOM is one of the features that make the framework so fast and reliable.

Image source: https://miro.medium.com/max/1400/1*HyoU7X-SMyT8xQD1PjrRGw.png

JSX 

React uses a syntax extension to JavaScript called JSX. We use it to create ‘elements’.

JSX relies on the Babel preprocessor to convert the HTML-like markup in JavaScript files into plain JavaScript that the browser can parse.
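For instance, here is a small JSX expression and, as a comment, roughly the JavaScript that Babel produces for it:

```jsx
// JSX written by the developer:
const element = <h1 className="greeting">Hello, world!</h1>;

// ...which Babel compiles to roughly:
// React.createElement("h1", { className: "greeting" }, "Hello, world!");
```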

React doesn’t require the use of JSX, but most developers find that it makes for a more user-friendly experience within the JavaScript code.

We use JSX to create React components, so this is why it is an important part of ReactJS.

React Native

React Native is an open-source JavaScript framework for building apps on different platforms, such as iOS, Android, and UWP. It is React-based and brings all of React's strengths to mobile app development.

React Native uses JavaScript to build the UI of an application but also uses OS-native representations. It allows code to be implemented in OS-native languages (Swift and Objective-C for iOS and Java and Kotlin for Android) for more sophisticated functions.

Main components 

ReactJS is a component-based library where components make our code reusable and split our UI into different pieces. Components are divided into two types, Class components and Function components. All React components follow the separation of concerns design principle, meaning that we should separate our application into different sections to address separate concerns.

Function components.

React components work similarly to JavaScript functions. A component takes arbitrary inputs, which we call props, and must always return a React element that defines what is intended to be displayed to the user.

The simplest way to define a React component is to write a JavaScript function that returns a React element. A React component must always return a React element, or it will throw an error.

We've defined a ReactJS component called HelloWorld that takes one prop (short for properties) and returns a React element, in this case a simple h1 element.
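Since the original code sample is not preserved here, a minimal sketch of such a HelloWorld function component might look like this (the `name` prop is an assumption for illustration):

```jsx
// A function component: takes props, returns a React element
function HelloWorld(props) {
  return <h1>Hello, {props.name}!</h1>;
}

// Usage: <HelloWorld name="World" />
```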

Class components.

The Class component must include the extends `React.Component` statement. This statement sets up a `React.Component` subclass that gives your component access to `React.Component` functions.

The component must also have a `render()` method, which returns the markup (JSX) to display.
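A hedged sketch of the equivalent class component (again, the `name` prop is an assumption for illustration):

```jsx
// A class component: extends React.Component and implements render()
class HelloWorld extends React.Component {
  render() {
    return <h1>Hello, {this.props.name}!</h1>;
  }
}
```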

Benefits 

So the main question is why you should choose ReactJS as a frontend development stack when there are a lot of others. Here are some reasons:

Speed. React allows developers to use individual parts of their application on both the client and server sides, and any changes they make will not affect the application's logic. This makes the development process extremely fast.

Components support. The use of HTML tags and JS code makes it easy to work with a huge dataset in the DOM. React acts as an intermediary that represents the DOM and helps you decide which component requires changes to get accurate results.

Easy to use and learn. ReactJS is incredibly user-friendly and makes any UI interactive. It also allows you to quickly and efficiently build applications, which is time-saving for clients and developers alike.

SEO Friendly. A common complaint among web developers is that traditional JavaScript frameworks often have problems with SEO. ReactJS addresses this because the application can run on the server, where the virtual DOM is rendered and returned to the browser as a regular web page that search engines can crawl.

One-way Data Binding. One-way data binding means that anyone can trace all the changes made to a segment of the data. This is also one of the reasons React is so easy to work with.

Who uses React?

Here is the list of popular ReactJS websites:

Facebook
Atlassian
Uber Eats
Netflix
Airbnb
Trello
Grammarly
Outlook.com
Codecademy
Dropbox

How to build your first application on React

Creating your app on React.js from the terminal of your IDE

First, you should install the framework package using `npx create-react-app`

`npx create-react-app my-app`, where `my-app` is the name of your application.

The next step is navigating into your new application.

`cd my-app`

And the last step is to start your application.

`npm start` 

In the end, you will have only a frontend application, without any database or backend, so it still takes a lot of work to get a full-fledged application.

How to create your app with Flatlogic Platform

There are two ways to build your application on the Flatlogic Platform: you can create a simple and clear frontend application, generated by the framework CLI, or the CRUD application with frontend+backend+database.

Creating a CRUD application with Flatlogic

Step 1. Choosing the Tech Stack

In this step, you’re setting the name of your application and choosing the stack: Frontend, Backend, and Database.

Step 2. Choosing the Starter Template

In this step, you’re choosing the design of the web app.

Step 3. Schema Editor

In this step you decide which kind of application you want to build (for example a CRM or an e-commerce app), and you design the database schema, i.e. the tables and the relationships between them.

If you are not familiar with database design and it is difficult for you to understand what tables are, we have prepared several ready-made example schemas of real-world apps that you can modify and build your app upon:

E-commerce app;
Time tracking app;
Books store;
Chat (messaging) app;
Blog.

Finally, you can deploy your application, and in a few minutes you will get a fully functional CMS for it.

Creating a one-page application with Flatlogic 

You can create a frontend-only app with the Flatlogic Platform. This assumes you host the back-end somewhere else or do not need it at all. To generate a one-page application you don't need to enter anything in the terminal of your IDE; you just need to go to the application-creation page on the Flatlogic website and complete only two steps:

Step 1. Choosing the Tech Stack

In this step, you set the name of your application and choose the stack: React as Frontend, No-Backend as Backend.

Step 2. Choosing the Starter Template

In this step, you choose the design of the web app. Since this is a standard one-page application created using the CLI framework, it will have the design of a standard one-page ReactJS CLI application.

Finally, you can deploy your app, and in a few minutes you will get a one-page React application, which you can further modify as you like.

The post What is React? appeared first on Flatlogic Blog.

Introduction to gRPC

Intro

If you have built RESTful or other OpenAPI-like APIs for some time and are wondering what's next for you, then you have come to the right place. This article series discusses leveraging gRPC to build your next API, even multiple services. We will initially look at the main concepts from a high-level view and then move on to the implementation aspects.

Motivation

There are many getting-started tutorials out there. But the main issue I faced was that they either made me more confused, losing a lot of context along the way and pulling in way too many third-party libraries, or they explained a bunch of steps without emphasizing how the different pieces work together. Therefore, I thought to create a guide for anyone who's interested in getting started with gRPC from a hands-on perspective.

This article is the first part of a series on gRPC. The links are down below. If you want to jump ahead, please feel free to do so.

Introduction to gRPC (You are here)
Building a gRPC server with Go
Building a gRPC server with .NET
Building a gRPC client with Go
Building a gRPC client with .NET

Background

One of the popular choices for building APIs nowadays is creating a RESTful service. However, before even coming to REST APIs, we need to look back at other forms we used in the past.

SOAP – Popular back in the late 90s for building service-oriented architecture (SOA) systems, known for exchanging bloated XML. Its rigid schema proved detrimental for distributed applications.

REST – Promoted the resource-oriented architecture (ROA) style of distributed applications. Often bulky with JSON, and everything that the service provides is represented as resources (e.g. /api/v1/users or /api/v1/books/1234). Sometimes this could result in exposing too little or too much data, at the cost of making multiple HTTP calls.

GraphQL – GraphQL takes a step further and exposes a single endpoint that you can use to query or mutate the data through HTTP verbs. It’s still a request-response model and based on the text-based transport protocol, HTTP 1.x.

What if I want to have some bi-directional communication? None of the above solved that. Then we got technologies like WebSockets and Server-Sent Events.

WebSockets – Built to support bi-directional communication over a single TCP connection. Known to be a very chatty protocol often sending packets back and forth. If you want to know about WebSockets, here is an article that I wrote.

Server-Sent Events – Another paradigm where the server pushes messages to the client once the initial connection has been set up by the client.

There's a recurring theme in the above technologies: issues with the messages themselves (bulky, not strongly typed, etc.) and inefficient, text-based protocols (such as HTTP 1.x).

Hello gRPC

gRPC was born to address some of the challenges we face in the above approaches. In 2015, Google released gRPC to the open-source world. The idea behind gRPC is to have a single client library per language, maintained by the gRPC maintainers themselves, and to abstract away the HTTP/2 details (which work in a binary format) from you. Developers only have to define their service contracts through requests, responses and RPC calls, and the gRPC framework handles the rest.

💡 Even before we start learning about gRPC, you must have wondered what the “g” in gRPC means. Some say it stands for “Good”; others say it stands for “Google”. You know what? It doesn't matter, as it doesn't provide more context. You can find all the different variations here, which is, by the way, hilarious! 😂


Source: https://grpc.io/docs/what-is-grpc/introduction/

RPC has been around for some time; however, gRPC approaches RPC in a much cleaner way.

Another term that you'd come across when learning gRPC is inter-process communication, or IPC. gRPC is mainly built to cater to inter-process communication, letting you connect to, invoke, and operate remote services, which makes it an excellent choice for microservice-like applications. In the distributed computing realm, inter-process communication refers to passing messages (synchronously or asynchronously), where any application or node can act as both a client and a server.

Does this mean I should replace my current APIs, which are customer-facing? Absolutely not. If you have customer-facing APIs, they are usually better kept as RESTful (or similar) services; gRPC is primarily aimed at internal service-to-service communication.

Protocol Buffers

Protocol Buffers are a language-agnostic way to define what your service does. Such definitions are commonly known as IDLs, or Interface Definition Languages.

So the steps are,

You write the messages. These messages have statically typed fields
You write your services by defining what comes in, what goes out
Compile the proto files and generate the client libraries for your application

It’s also worth mentioning that Protocol Buffers are not the only way to define our IDLs. There are other formats like FlatBuffers, Bond etc.

This is what a protocol buffer looks like:

syntax = "proto3";

message Book {
string title = 1;
string author = 2;
int32 page_count = 3;
optional string language = 4;
}

message GetBookListRequest {}
message GetBookListResponse { repeated Book books = 1; }

service Inventory {
rpc GetBookList(GetBookListRequest) returns (GetBookListResponse) {}
}

The above is an example that we will use throughout this blog series.


The first line specifies the Protobuf version we will be using. It will be set to proto2 if you don’t specify it.
The Book is a message definition with some statically typed fields such as title, author etc.

GetBookListRequest and GetBookListResponse are also messages composed of the Book type we defined above.
Inventory is a service that says what methods we expose to the remotely invoked clients.

There are many advantages of using Protobufs compared to something like JSON. Once you write the definitions for your service, you can share them with other teams and use them to generate stubs/code that can interact with your service.

Another advantage is that Protobufs are binary encoded. The payload is smaller than JSON, which means it would be efficient to send. This also means that it will use fewer CPU cycles to serialize/deserialize the messages.

gRPC Server & Client

Now that we have the Protobuf definitions for our service, we could generate the Server side and Client side implementations using the Protoc compiler.


We first create the definition of the service with a .proto file
We then generate the server-side code in our preferred language (Go, C#, Java etc.). This includes the boilerplate code to serialize and deserialize messages, along with functions for receiving and responding to them.
We then generate the client-side code in our preferred language (it doesn't have to be the same language we chose for the server). This includes methods that we can invoke on the server, with additional code to serialize and deserialize messages.
Depending on which gRPC mode we choose, the client-server communication happens over an HTTP/2 connection. We will discuss more on these modes in the next section.
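As a sketch, generating Go stubs from a proto file like the one above might look like the following, assuming protoc and the Go plugins (protoc-gen-go, protoc-gen-go-grpc) are installed; exact flags vary by language and plugin version:

```sh
# Generate Go message types and gRPC service stubs from inventory.proto
protoc --go_out=. --go-grpc_out=. inventory.proto
```

For .NET, the Grpc.Tools NuGet package typically runs protoc for you as part of the build instead of invoking it by hand.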

gRPC Modes

There are 4 modes of gRPC communication styles. Following are their brief introductions. Feel free to go more in-depth by reading through the official docs.

Unary RPC – More like our traditional APIs where we send a request and receive a single response.
Server Streaming RPC – Client sends a request and reads until the server stops sending messages via a stream.
Client Streaming – Reverse of the above, the client sends messages through the stream and waits for the server to read and return a response.
Bidirectional streaming RPC – Pretty much both (2) & (3) combined: both client and server stream messages both ways.
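In a .proto file, the four modes differ only in where the stream keyword appears. A hypothetical extension of the Inventory service might look like the following sketch (the extra RPC names and message shapes are assumptions for illustration):

```proto
service Inventory {
  // 1. Unary: single request, single response
  rpc GetBookList(GetBookListRequest) returns (GetBookListResponse) {}

  // 2. Server streaming: single request, stream of responses
  rpc WatchNewBooks(GetBookListRequest) returns (stream Book) {}

  // 3. Client streaming: stream of requests, single response
  rpc AddBooks(stream Book) returns (GetBookListResponse) {}

  // 4. Bidirectional streaming: both sides stream
  rpc SyncBooks(stream Book) returns (stream Book) {}
}
```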

Pros and Cons

Any technology comes with a set of advantages and disadvantages. Whether you choose to use gRPC might depend on some of these factors.

Pros

Efficient for inter-process communication with all the good stuff that comes with HTTP/2.
Well-defined interfaces for communication, while supporting polyglot development.
Code-generation of client and server stubs with strong types.

Cons

It may not be suitable for external-facing services since most web browsers’ support is limited.
Changing the service definitions might require rework and regeneration of code.
A steeper learning curve compared to RESTful or GraphQL-like architectural styles.

Conclusion

In this article, we looked at gRPC from a high level. In the next article, we will look at how we can put these into action and generate a gRPC service with Go. Feel free to let me know any feedback or questions. Thanks for reading ✌️

References

https://developers.google.com/protocol-buffers/docs/proto3
https://www.oreilly.com/library/view/grpc-up-and/9781492058328/

https://github.com/grpc-ecosystem/awesome-grpc

C#11 Parameter Null Checking

Such is life on Twitter, I’ve been watching from afar .NET developers argue about a particular upcoming C# 11 feature, Parameter Null Checks. It’s actually just a bit of syntactic sugar to make it easier to throw argument null exceptions, but it’s caused a bit of a stir for two main reasons.

People don’t like the syntax full stop. Which I understand, but other features such as some of the switch statement pattern matching and tuples look far worse! So in for a penny in for a pound!
It somewhat clashes with another recent C# feature of Nullable Reference Types (We’ll talk more about this later).

The Problem

First let’s look at the problem this is trying to solve.

I may have a very simple method that takes a list of strings (As an example, but it could be any nullable type). I may want to ensure that whatever the method is given is not null. So we would typically write something like :

void MyMethod(List<string> input)
{
if(input == null)
{
throw new ArgumentNullException(nameof(input));
}
}

Nothing too amazing here. If the list is null, throw an ArgumentNullException!

In .NET 6 (specifically .NET 6, not a particular version of C#), a shorthand was added to save a few lines. So we can now do:

void MyMethod(List<string> input)
{
ArgumentNullException.ThrowIfNull(input);
}

There is no magic here. It’s just doing the same thing we did before with the null check, but wrapping it all up into a nice helper.

So what’s the problem? Well.. There isn’t one really. The only real issue is that should you have a method with many parameters, and all of them nullable, and yet you want to throw a ArgumentNullException, you might have an additional few lines at the start of your method. I guess that’s a problem to be solved, but it isn’t too much of a biggie.

Parameter Null Checking In C# 11

I put C# 11 here, but you can actually turn on this feature in C# 10 by adding the following to your csproj file:

<EnablePreviewFeatures>True</EnablePreviewFeatures>

Now we get a bit of sugar around null checks by doing the following:

void MyMethod(List<string> input!!)
{
}

Adding the “!!” operator to a parameter name immediately adds an argument null check to it, skipping the need for the first few lines of boilerplate null checks.

Just my personal opinion, it’s not… that bad. I think people see the use of symbols, such as ? or ! and they immediately get turned off. When using a symbol like this, especially one that isn’t universal across different languages (such as a ternary ?), it’s not immediately clear what it does. I’ve even seen some suggest just adding another keyword such as :

void MyMethod(notnull List<string> input)
{
}

I don’t think this is really any better to be honest.

Overall, it’s likely to see a little bit of use. But the interesting context of some of the arguments against this is….

Nullable Reference Types

C#8 introduced the concept of Nullable Reference Types. Before this, all reference types were nullable by default, and so the above checks were essentially required. C#8 came along and gave a flag to say, if I want something to be nullable, I’ll let you know, otherwise treat everything as non nullable. You can read more about the feature here : https://dotnetcoretutorials.com/2018/12/19/nullable-reference-types-in-c-8/

The interesting point here is that if I switch this flag on (And from .NET 6, it’s switched on by default in new projects), then there is no need for ArgumentNullExceptions because either the parameter is not null by default, or I specify that it can be null (And therefore won’t need the check).

Just as an example, with Nullables switched on using code :

#nullable enable
void MyMethod(List<string> input)
{
//Input cannot be null anyway. So no need for the check.
}

void MyMethod2(List<string>? input)
{
//Using ? I've specified it can be null, and if I'm saying it can be null...
//I won't be throwing exceptions when it is null, right?
}

There are arguments that nullable reference types are a compile-time check whereas throwing an exception is a runtime check. But the reality is they solve the same problem in different ways, and if there is a push to do things one way (nullable reference types), then there's no need for the other.

With all of that being said. Honestly, it’s a nice feature and I’m really not that fussed over it. The extent of my thinking is that it’s a handy little helper. That’s all.

The post C#11 Parameter Null Checking appeared first on .NET Core Tutorials.