What is PostgreSQL?


Postgres (or PostgreSQL) is a powerful open-source relational database that supports both SQL (relational) and JSON (non-relational) querying. It was created by computer scientists at the University of California, Berkeley. It is a very stable object-relational database management system, and the PostgreSQL community has developed it for more than 20 years, contributing to its high stability, consistency, and correctness.

PostgreSQL began life at Berkeley as POSTGRES, a successor to the earlier Ingres database. The creators later introduced further improvements, expanded its functionality, and changed the name to Postgres95 and finally to PostgreSQL.

Postgres became the first choice for corporations performing complex, high-volume data operations because of its powerful core technology, featuring MVCC (Multi-Version Concurrency Control), which lets multiple readers and writers work on the system simultaneously. Postgres has an extraordinary ability to handle many tasks concurrently and efficiently. That is why business giants like Yahoo!, Apple, Meta, major telecommunication companies, and financial and government institutions keep using PostgreSQL.

Postgres has client libraries and interfaces for multiple programming languages and protocols, such as Ruby, Python, .NET, C/C++, Go, Java, and ODBC.

Why use Postgres

Atomicity, Consistency, Isolation, and Durability (ACID) support. Postgres is fully ACID compliant: it can verify and maintain data integrity regardless of errors or network failures. Postgres's ACID compliance makes it a valid option for corporate, e-commerce, and other applications requiring resiliency.
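As a quick illustration of atomicity, a funds transfer between two rows of a hypothetical accounts table either commits as a whole or not at all:

```sql
-- Both updates succeed together or fail together.
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
-- If anything fails before COMMIT, a ROLLBACK leaves both rows untouched.
```
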

MVCC. Multi-Version Concurrency Control is a distinctive feature of Postgres that allows users to read and write data simultaneously without blocking each other. Other SQL databases can support similar concurrency, but often only with additional technology.

Queries. Postgres gives you room to be creative with custom queries. If your model is complex, you can extend the queries to the database with custom functionality. This allows you to easily query the data in ways that fit your application's model.

Community support. Postgres has pretty strong support and extensive documentation. If you have any questions or problems, you can always reach out to the Postgres community.

Extensive support for data types. Postgres is object-relational and therefore offers read and write capabilities for many kinds of data structures. Custom, structured, and non-relational data types are supported, such as JSON (and its binary form, JSONB), primitive, and geometric types. PostgreSQL also scales well as data grows.

Security. Postgres offers a variety of security mechanisms, including user authentication and secure TCP/IP connections, all of which protect data in a high-performance way.

Who uses Postgres

Postgres is widely used in a variety of industries, such as the financial sector, Big Data for R&D, web applications, and logistics.

Because the database system is so capable, many of the largest and most important companies choose it.


Using the Flatlogic Platform you can also generate an application with a PostgreSQL database.

How to create your app with Flatlogic Platform

Step 1. Choosing the Tech Stack

In this step, you’re setting the name of your application and choosing the stack: Frontend, Backend, and Database.

Step 2. Choosing the Starter Template

In this step, you’re choosing the design of the web app.

Step 3. Schema Editor

In this part, you need to know what kind of application you want to build (for example, a CRM or an e-commerce store). Here you also design the database schema, i.e. the tables and the relationships between them.

If you are not familiar with database design and it's difficult for you to understand what tables are, we have prepared several ready-made example schemas of real-world apps that you can modify and build your app upon:

E-commerce app;
Time tracking app;
Book store;
Chat (messaging) app;

As in any PostgreSQL database, tables can be related to one another; the schema editor models this with relation types such as relation_one and relation_many. You can enforce the relationships by defining the appropriate foreign key constraints on the columns.

Relation (one) – a one-sided relation that stores a single other entity, for example, Employee: [{ name: 'John' }].
Relation (many) – a two-sided relation that can store any number of other entities, for example, Employee: [{ name: 'John' }, { name: 'Joe' }].
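Under the hood, a relation_many field typically maps to a foreign key on the "many" side. A minimal sketch with hypothetical department and employee tables:

```sql
-- One department can be referenced by many employees (relation_many);
-- each employee points at a single department (relation_one).
CREATE TABLE department (
    id   serial PRIMARY KEY,
    name text   NOT NULL
);

CREATE TABLE employee (
    id            serial  PRIMARY KEY,
    name          text    NOT NULL,
    department_id integer REFERENCES department (id)
);
```
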

Afterwards, you can deploy your application and in a few minutes, you will get a fully functional CMS application with PostgreSQL.


The post What is PostgreSQL? appeared first on Flatlogic Blog.

Announcing SQL Server to Snowflake Migration Solutions

It's Spring (or at least it will be soon), and while nature may take the Winter off from growing its product, Mobilize.Net did not. As Snowflake continues to grow, SnowConvert continues to grow as well. Last month, Mobilize.Net announced SnowConvert for Oracle, the first follow-up to the immensely popular SnowConvert for Teradata. This month? It's time for SnowConvert for SQL Server.

SQL Server has been Microsoft's database of choice since the early days of Windows. It has provided a lightweight option for thousands of applications' back ends and has evolved into a comprehensive database platform for thousands of organizations. As an on-prem solution, SQL Server carried many developers and organizations through the 90s and early 2000s. But like other on-prem solutions, the cloud has come for it. Even Microsoft has taken its databases to the cloud through Azure and Synapse. Snowflake has taken the lead as the Data Cloud, and SnowConvert is the best and most experienced way to help you get there.

If you have SQL Server, I would hope the SQL you have written for it is not quite as old as the first version of Windows. But even if it is, and the architects of that original SQL are nowhere to be found anymore, SnowConvert has you covered. SnowConvert automates the conversion of any DDL and DML that you may have to an equivalent in Snowflake. But that's the easy part. The hard problem in a database code migration is the procedural code. For SQL Server, that means Transact-SQL (T-SQL). And with T-SQL, SnowConvert again has you covered.

Procedures Transformed

SnowConvert can take your T-SQL to functionally equivalent JavaScript or Snowflake Scripting. Both our product page and documentation have more information on the type of transformation performed, so why not show you what that looks like on this page? Let's take a look at a really basic procedure from the Microsoft AdventureWorks database and convert it into functionally equivalent JavaScript. This is a procedure that updates a table:

CREATE PROCEDURE [HumanResources].[uspUpdateEmployeePersonalInfo]
    @BusinessEntityID [int],
    @NationalIDNumber [nvarchar](15),
    @BirthDate [datetime],
    @MaritalStatus [nchar](1),
    @Gender [nchar](1)
AS
BEGIN
    BEGIN TRY
        UPDATE [HumanResources].[Employee]
        SET [NationalIDNumber] = @NationalIDNumber
            ,[BirthDate] = @BirthDate
            ,[MaritalStatus] = @MaritalStatus
            ,[Gender] = @Gender
        WHERE [BusinessEntityID] = @BusinessEntityID;
    END TRY
    BEGIN CATCH
        EXECUTE [dbo].[uspLogError];
    END CATCH;
END;

Pretty straightforward in SQL Server. But how do you replicate this functionality in JavaScript automatically? Of course, by using SnowConvert. Here’s the output transformation:

// REGION SnowConvert Helpers Code
// This section would be populated by SnowConvert for SQL Server’s JavaScript Helper Classes. If you’d like to see more of the helper classes, fill out the form on the SnowConvert for SQL Server Getting Started Page.

try {
    EXEC(`UPDATE HumanResources.Employee
        SET NationalIDNumber = ?
        , BirthDate = ?
        , MaritalStatus = ?
        , Gender = ?
        WHERE BusinessEntityID = ?`);
} catch(error) {
    EXEC(`CALL dbo.uspLogError(/*** MSC-WARNING - MSCEWI4010 - Default value added ***/ 0)`);
}

SnowConvert creates multiple helper class functions (including the EXEC helper called in the output procedure) to recreate the functionality that is present in the source code. SnowConvert also has finely tuned error messages to give you more information about any issues that may be present. You can actually click on both of the codes in the output procedure above to see the documentation page for that error code.

Want to see the same procedure above in Snowflake Scripting? Interested in getting an inventory of code that you'd like to take to the cloud? Let us know. We can help you get started and understand the codebase you're working with. If you're already familiar with SnowConvert in general, SnowConvert for SQL Server has all the capabilities that you've come to expect. From the ability to generate granular assessment data to functionally equivalent transformations built upon a semantic model of the source code, SnowConvert for SQL Server is ready to see what you can throw at it. Get started today!

Using EF Core Global Query Filters To Ignore Soft Deleted Entities

In a previous post, we talked about how we could soft delete entities by setting up a DateDeleted column (read that post here: https://dotnetcoretutorials.com/2022/03/16/auto-updating-created-updated-and-deleted-timestamps-in-entity-framework/). But if you've ever done this (or used a simple "IsDeleted" flag), you'll know that it becomes a bit of a burden to always have the first line of your query go something like this:

dbSet.Where(x => x.DateDeleted == null);

Essentially, you need to remember to always be filtering out rows which have a DateDeleted. Annoying!

Microsoft has a great way to solve this with what's called "Global Query Filters". And the documentation even provides an example of how to ignore soft deletes in your code: https://docs.microsoft.com/en-us/ef/core/querying/filters

The problem with this is that it only gives examples on how to do this for each entity, one at a time. If your database has 30 tables, all with a DateDeleted flag, you’re going to have to remember to add the configuration each and every time.
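For reference, the documented per-entity approach looks roughly like this (the Blog entity is illustrative, reusing the DateDeleted column from the previous post):

```csharp
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // One entity at a time: fine for a couple of tables, a burden for 30.
    modelBuilder.Entity<Blog>().HasQueryFilter(b => b.DateDeleted == null);
}
```
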

In previous versions of Entity Framework, we could get around this by using “Conventions”. Conventions were a way to apply configuration to a broad set of Entities based on.. well.. conventions. So for example, you could say “If you see an IsDeleted boolean field on an entity, we always want to add a filter for that”. Unfortunately, EF Core does not have conventions (But it may land in EF Core 7). So instead, we have to do things a bit of a rinky dink way.

To do so, we just need to override the OnModelCreating to handle a bit of extra code (Of course we can extract this out to helper methods, but for simplicity I’m showing where it goes in our DBContext).

public class MyContext : DbContext
{
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        foreach (var entityType in modelBuilder.Model.GetEntityTypes())
        {
            //If the actual entity is an auditable type.
            if (!typeof(Auditable).IsAssignableFrom(entityType.ClrType))
                continue;

            //This adds (in a reflection type way) a Global Query Filter
            //that always excludes deleted items. You can opt out by using dbSet.IgnoreQueryFilters()
            var parameter = Expression.Parameter(entityType.ClrType, "p");
            var deletedCheck = Expression.Lambda(
                Expression.Equal(
                    Expression.Property(parameter, "DateDeleted"),
                    Expression.Constant(null, typeof(DateTime?))),
                parameter);
            modelBuilder.Entity(entityType.ClrType).HasQueryFilter(deletedCheck);
        }
    }
}


What does this do?

Loop through every type that is in our DbContext model
If the type is inheriting from Auditable class (See previous post here : https://dotnetcoretutorials.com/2022/03/16/auto-updating-created-updated-and-deleted-timestamps-in-entity-framework/)
Add a global query filter that ensures that DateDeleted is null

Of course, we can use this same loop to add other "Conventions" too. Things like adding an index to the DateDeleted field are possible via the OnModelCreating override.

Now, whenever we query the database, Entity Framework will automatically filter our soft deleted entities for us!
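And when you genuinely want the deleted rows back (an admin view, say), you can opt out per query; the Employees DbSet here is hypothetical:

```csharp
// IgnoreQueryFilters() bypasses the global DateDeleted filter for this query only.
var includingDeleted = await context.Employees
    .IgnoreQueryFilters()
    .ToListAsync();
```
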

The post Using EF Core Global Query Filters To Ignore Soft Deleted Entities appeared first on .NET Core Tutorials.

Announcing .NET MAUI Preview 14

Preview 14 of .NET Multi-platform App UI (MAUI) is now available in Visual Studio 2022 17.2 Preview 2. This release includes a hefty volume of issue resolutions and completed features, and one new feature that will be a welcome addition for desktop developers: the MenuBar. While desktop app navigation and menus are often designed into the content window of many modern applications (think Teams left sidebar or Maps top tabs), there’s still a strong need for a traditional menu that resides at the top of the app window on Windows, and in the title bar on macOS.

Menus may be expressed in XAML or in C# for any ContentPage currently hosted in Shell or a NavigationPage. Begin by adding a MenuBarItem to the page’s MenuBarItems collection, and add MenuFlyoutItem for direct children, or MenuFlyoutSubItem for containers of other MenuFlyoutItem.

<ContentPage.MenuBarItems>
    <MenuBarItem Text="File">
        <MenuFlyoutItem Text="Quit" Command="{Binding QuitCommand}"/>
    </MenuBarItem>
    <MenuBarItem Text="Locations">
        <MenuFlyoutSubItem Text="Change Location">
            <MenuFlyoutItem Text="Boston, MA"/>
            <MenuFlyoutItem Text="Redmond, WA"/>
            <MenuFlyoutItem Text="St. Louis, MO"/>
        </MenuFlyoutSubItem>
        <MenuFlyoutItem Text="Add a Location" Command="{Binding AddLocationCommand}"/>
    </MenuBarItem>
    <MenuBarItem Text="View">
        <MenuFlyoutItem Text="Refresh" Command="{Binding RefreshCommand}"/>
        <MenuFlyoutItem Text="Toggle Light/Dark Mode" Command="{Binding ToggleModeCommand}"/>
    </MenuBarItem>
</ContentPage.MenuBarItems>

Additional Preview 14 highlights include:

Device and Essentials reconciliation, plus interfaces for Essentials APIs
Shell WinUI (#4501)
Image caching (#4515)
Native -> Platform renaming (#4599)
Shapes (#4472)
Use string for StrokeShape (#3256)
WebView cookies (#4419)
MenuBar (#4839)
RTL Windows (#4936)

Find more details in our release notes.

While combing through your feedback in previous .NET MAUI releases, we have noticed a theme of questions such as "how do I add a FilePicker", "how do I check the connectivity of my app", and other such "essential" application tasks that aren't specifically UI.

Beyond UI: Accessing Platform APIs

Within .NET MAUI is a set of APIs located in the Microsoft.Maui.Essentials namespace that unlock common features, bringing the same efficiency to non-UI demands as to creating beautiful UI quickly. Originally a library in the Xamarin ecosystem, Essentials is now baked into .NET MAUI and is hosted in the very same dotnet/maui repository (in case you're wondering where to log your valuable feedback). With it you can access features such as:

App Actions
App Information
App Theme
Color Converters
Detect Shake
Display Info
Device Info
File Picker
File System Helpers
Haptic Feedback
Media Picker
Open Browser
Orientation Sensor
Phone Dialer
Platform Extensions
Secure Storage
Unit Converters
Version Tracking
Web Authenticator
That's a lot! Each API uses a common pattern, so let's focus on a few by way of introduction.

File Picker

Desktop platforms often have a UI control named FilePicker or similar, but mobile platforms do not. Even so, it's still possible to start the file picking action from any UI element that raises an event, such as a simple Button.

<Button Text="Select a File" Clicked="OnClicked" />

Now we can use the Maui.Essentials API to start the file picking process and handle the callback.

async void OnClicked(object sender, EventArgs args)
{
    var result = await PickAndShow(PickOptions.Default);
}

async Task<FileResult> PickAndShow(PickOptions options)
{
    try
    {
        var result = await FilePicker.PickAsync(options);
        if (result != null)
        {
            Text = $"File Name: {result.FileName}";
            if (result.FileName.EndsWith("jpg", StringComparison.OrdinalIgnoreCase) ||
                result.FileName.EndsWith("png", StringComparison.OrdinalIgnoreCase))
            {
                var stream = await result.OpenReadAsync();
                Image = ImageSource.FromStream(() => stream);
            }
        }
        return result;
    }
    catch (Exception ex)
    {
        // The user canceled or something went wrong
    }

    return null;
}

The PickOptions conveniently provides options for configuring your file selection criteria such as file types with FilePickerFileType:
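For instance, here is a sketch of restricting the picker to images with per-platform type identifiers (the specific UTType, MIME type, and extension values are illustrative):

```csharp
// FilePickerFileType maps each platform to its native file-type identifiers.
var imageFileType = new FilePickerFileType(
    new Dictionary<DevicePlatform, IEnumerable<string>>
    {
        { DevicePlatform.iOS, new[] { "public.image" } },    // UTType
        { DevicePlatform.Android, new[] { "image/*" } },     // MIME type
        { DevicePlatform.WinUI, new[] { ".jpg", ".png" } },  // extensions
    });

var options = new PickOptions
{
    PickerTitle = "Please select an image",
    FileTypes = imageFileType,
};
```
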



Connectivity

Checking network connectivity is an important feature for mobile, but equally useful for desktop in order to handle both offline and online scenarios. In fact, if you have ever attempted to publish an app to the Apple App Store, you may have encountered a common rejection for not detecting connectivity status prior to attempting a network call.

var current = Connectivity.NetworkAccess;

if (current == NetworkAccess.Internet)
{
    // able to connect, do API call
}
else
{
    // unable to connect, alert user
}

Some services require a bit of configuration per platform. In this case iOS, macOS, and Windows don’t require anything, but Android needs a simple permission added to the “AndroidManifest.xml” which you can find in the Platforms/Android path of your .NET MAUI solution.

<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

Read the docs for additional information.

Get Started Today

.NET MAUI Preview 14 is bundled with Visual Studio 17.2 Preview 2 which is also available today with the latest productivity improvements for .NET MAUI development. If you are using Visual Studio 2022 17.1 Preview 2 or newer, you can upgrade to 17.2 Preview 2.

If you are upgrading from .NET MAUI Preview 10 or earlier, or have been using maui-check, we recommend starting from a clean slate by uninstalling all .NET 6 previews and Visual Studio 2022 previews.

Starting from scratch? Install this Visual Studio 2022 Preview (17.2 Preview 2) and confirm .NET MAUI (preview) is checked under the "Mobile Development with .NET" workload.

Ready? Open Visual Studio 2022 and create a new project. Search for and select .NET MAUI.

Preview 14 release notes are on GitHub. For additional information about getting started with .NET MAUI, refer to our documentation, and the migration tip sheet for a list of changes to adopt when upgrading projects.

For a look at what is coming in future .NET 6 releases, visit our product roadmap, and for a status of feature completeness visit our status wiki.

Feedback Welcome

We’d love to hear from you! Please let us know about your experiences using .NET MAUI by completing a short survey.

The post Announcing .NET MAUI Preview 14 appeared first on .NET Blog.

ASP.NET Core updates in .NET 7 Preview 2

.NET 7 Preview 2 is now available and includes many great new improvements to ASP.NET Core.

Here’s a summary of what’s new in this preview release:

Infer API controller action parameters that come from services
Dependency injection for SignalR hub methods
Provide endpoint descriptions and summaries for minimal APIs
Binding arrays and StringValues from headers and query strings in minimal APIs
Customize the cookie consent value

For more details on the ASP.NET Core work planned for .NET 7 see the full ASP.NET Core roadmap for .NET 7 on GitHub.

Get started

To get started with ASP.NET Core in .NET 7 Preview 2, install the .NET 7 SDK.

If you’re on Windows using Visual Studio, we recommend installing the latest Visual Studio 2022 preview. Visual Studio for Mac support for .NET 7 previews isn’t available yet but is coming soon.

To install the latest .NET WebAssembly build tools, run the following command from an elevated command prompt:

dotnet workload install wasm-tools

Upgrade an existing project

To upgrade an existing ASP.NET Core app from .NET 7 Preview 1 to .NET 7 Preview 2:

Update all Microsoft.AspNetCore.* package references to 7.0.0-preview.2.*.
Update all Microsoft.Extensions.* package references to 7.0.0-preview.2.*.

See also the full list of breaking changes in ASP.NET Core for .NET 7.

Infer API controller action parameters that come from services

Parameter binding for API controller actions now binds parameters through dependency injection when the type is configured as a service. This means it’s no longer required to explicitly apply the [FromServices] attribute to a parameter.


public class MyController : ControllerBase
{
    // Both actions will bind SomeCustomType from the DI container
    public ActionResult GetWithAttribute([FromServices] SomeCustomType service) => Ok();

    public ActionResult Get(SomeCustomType service) => Ok();
}

You can disable the feature by setting DisableImplicitFromServicesParameters:

Services.Configure<ApiBehaviorOptions>(options =>
{
    options.DisableImplicitFromServicesParameters = true;
});

Dependency injection for SignalR hub methods

SignalR hub methods now support injecting services through dependency injection (DI).


public class MyHub : Hub
{
    // SomeCustomType comes from DI by default now
    public Task Method(string text, SomeCustomType type) => Task.CompletedTask;
}

You can disable the feature by setting DisableImplicitFromServicesParameters:

services.AddSignalR(options =>
{
    options.DisableImplicitFromServicesParameters = true;
});

To explicitly mark a parameter to be bound from configured services, use the [FromServices] attribute:

public class MyHub : Hub
{
    public Task Method(string arguments, [FromServices] SomeCustomType type) => Task.CompletedTask;
}

Provide endpoint descriptions and summaries for minimal APIs

Minimal APIs now support annotating operations with descriptions and summaries used for OpenAPI spec generation. You can set these descriptions and summaries for route handlers in your minimal API apps using extension methods:

app.MapGet("/hello", () => ...)
    .WithDescription("Sends a request to the backend HelloService to process a greeting request.");

Or set the description or summary via attributes on the route handler delegate:

app.MapGet("/hello", [EndpointSummary("Sends a Hello request to the backend")] () => ...);

Binding arrays and StringValues from headers and query strings in minimal APIs

With this release, you can now bind values from HTTP headers and query strings to arrays of primitive types, string arrays, or StringValues:

// Bind query string values to a primitive type array
// GET /tags?q=1&q=2&q=3
app.MapGet("/tags", (int[] q) => $"tag1: {q[0]}, tag2: {q[1]}, tag3: {q[2]}");

// Bind to a string array
// GET /tags?names=john&names=jack&names=jane
app.MapGet("/tags", (string[] names) => $"tag1: {names[0]}, tag2: {names[1]}, tag3: {names[2]}");

// Bind to StringValues
// GET /tags?names=john&names=jack&names=jane
app.MapGet("/tags", (StringValues names) => $"tag1: {names[0]}, tag2: {names[1]}, tag3: {names[2]}");

You can also bind query strings or header values to an array of a complex type, as long as the type has a TryParse implementation, as demonstrated in the example below.

// Bind to an array of a complex type
// GET /tags?tags=trendy&tags=hot&tags=spicy
app.MapGet("/tags", (Tag[] tags) =>
{
    return Results.Ok(tags);
});

class Tag
{
    public string? TagName { get; init; }

    public static bool TryParse(string? tagName, out Tag tag)
    {
        if (tagName is null)
        {
            tag = default;
            return false;
        }

        tag = new Tag { TagName = tagName };
        return true;
    }
}

Customize the cookie consent value

You can now specify the value used to track if the user consented to the cookie use policy using the new CookiePolicyOptions.ConsentCookieValue property.
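A minimal sketch of setting it at startup (the value string here is just an example):

```csharp
builder.Services.Configure<CookiePolicyOptions>(options =>
{
    // The value written to the consent cookie when the user accepts the policy.
    options.ConsentCookieValue = "true";
});
```
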

Thank you @daviddesmet for contributing this improvement!

Request for feedback on shadow copying for IIS

In .NET 6 we added experimental support for shadow copying app assemblies to the ASP.NET Core Module (ANCM) for IIS. When an ASP.NET Core app is running on Windows, the binaries are locked so that they cannot be modified or replaced. You can stop the app by deploying an app offline file (app_offline.htm), but sometimes doing so is inconvenient or impossible. Shadow copying enables the app assemblies to be updated while the app is running by making a copy of the assemblies.

You can enable shadow copying by customizing the ANCM handler settings in web.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <remove name="aspNetCore"/>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified"/>
    </handlers>
    <aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout">
      <handlerSettings>
        <handlerSetting name="experimentalEnableShadowCopy" value="true" />
        <handlerSetting name="shadowCopyDirectory" value="../ShadowCopyDirectory/" />
      </handlerSettings>
    </aspNetCore>
  </system.webServer>
</configuration>

We’re investigating making shadow copying in IIS a feature of ASP.NET Core in .NET 7, and we’re seeking additional feedback on whether the feature satisfies user requirements. If you deploy ASP.NET Core to IIS, please give shadow copying a try and share with us your feedback on GitHub.

Give feedback

We hope you enjoy this preview release of ASP.NET Core in .NET 7. Let us know what you think about these new improvements by filing issues on GitHub.

Thanks for trying out ASP.NET Core!

The post ASP.NET Core updates in .NET 7 Preview 2 appeared first on .NET Blog.

Announcing .NET 7 Preview 2 – The New, ‘New’ Experience

Today, we are glad to release .NET 7 Preview 2. The second preview of .NET 7 includes enhancements to RegEx source generators, progress moving NativeAOT from experimental status into the runtime, and a major set of improvements to the “dotnet new” CLI experience. The bits are available for you to grab right now and start experimenting with new features like:

Build a specialized RegEx pattern matching engine using source generators at compile-time rather than slower methods at runtime.
Take advantage of SDK improvements that provide an entirely new, streamlined tab completion experience to explore templates and parameters when running dotnet new.
Don’t trim your excitement, just your apps in preparation to try out NativeAOT with your own innovative solutions.

EF7 preview 2 was also released and is available on NuGet. You can also read what’s new in ASP.NET Core Preview 2.

You can download .NET 7 Preview 2, for Windows, macOS, and Linux.

Installers and binaries
Container images
Linux packages
Release notes
Known issues
GitHub issue tracker

We recommend you use the preview channel builds if you want to try .NET 7 with Visual Studio family products. Visual Studio for Mac support for .NET 7 previews isn’t available yet but is coming soon.

Preview 2

The following features are now available in the Preview 2 release.

Introducing the new Regex Source Generator


Have you ever wished you had all of the great benefits that come from having a specialized Regex engine that is optimized for your particular pattern, without the overhead of building this engine at runtime?

We are excited to announce the new Regex Source Generator which was included in Preview 1. It brings all of the performance benefits from our compiled engine without the startup cost, and it has additional benefits, like providing a great debugging experience as well as being trimming-friendly. If your pattern is known at compile-time, then the new regex source generator is the way to go.

In order to start using it, you only need to turn the containing type into a partial one, and declare a new partial method with the RegexGenerator attribute that will return the optimized Regex object, and that’s it! The source generator will fill the implementation of that method for you, and will get updated automatically as you make changes to your pattern or to the additional options that you pass in. Here is an example:


public class Foo
{
    public Regex regex = new Regex(@"abc|def", RegexOptions.IgnoreCase);

    public bool Bar(string input)
    {
        bool isMatch = regex.IsMatch(input);
        // ..
        return isMatch;
    }
}


public partial class Foo // <-- Make the class a partial class
{
    [RegexGenerator(@"abc|def", RegexOptions.IgnoreCase)] // <-- Add the RegexGenerator attribute and pass in your pattern and options
    public static partial Regex MyRegex(); // <-- Declare the partial method, which will be implemented by the source generator

    public bool Bar(string input)
    {
        bool isMatch = MyRegex().IsMatch(input); // <-- Use the generated engine by invoking the partial method.
        // ..
        return isMatch;
    }
}

And that’s it. Please try it out and let us know if you have any feedback.


Community PRs (Many thanks to JIT community contributors!!)

From @sandreenko

[Jit] Delete Statement::m_compilerAdded runtime#64506

From @SingleAccretion

Delete GT_DYN_BLK runtime#63026
Address-expose locals under complex local addresses in block morphing runtime#63100
Refactor optimizing morph for commutative operations runtime#63251
Preserve OBJ/BLK on the RHS of ASG runtime#63268
Handle embedded assignments in copy propagation runtime#63447
Do not set GTF_NO_CSE for sources of block copies runtime#63462
Exception sets: debug checker & fixes runtime#63539
Stop using CLS_VAR for “boxed statics” runtime#63845
Tune floating-point CSEs live across a call better runtime#63903
Reverse ASG(CLS_VAR, …) runtime#63957
Fix invalid threading of nodes in rationalization runtime#64012
Add the exception set for ObjGetType runtime#64106
Improve fgValueNumberBlockAssignment runtime#64110
Commutative morph optimizations runtime#64122
Fix unique VNs for ADDRs runtime#64230
Copy propagation tweaking runtime#64378
Introduce GenTreeDebugOperKind runtime#64498
Add support for TYP_BYREF LCL_FLDs to VN runtime#64501
Do not add NRE sets for non-null addresses runtime#64607
Take into account zero-offset field sequences when propagating locals runtime#64701
Another size estimate fix for movs runtime#64826
Mark promoted SIMD locals used by HWIs as DNER runtime#64855
Propagate exception sets for assignments runtime#64882
Account for HWI stores in LIR side effects code runtime#65079

From @Wraith2

Recognize BLSR in "x & (x-1)": Add blsr runtime#63545
Add xarch andn runtime#64350

Dynamic PGO

JIT: simple forward substitution pass runtime#63720


Arm64

Local heap optimizations on Arm64 runtime#64481
Couple optimization to MultiRegStoreLoc runtime#64857
Implement LoadPairVector64 and LoadPairVector128 runtime#64864
'cmeq' and 'fcmeq' Vector64.Zero/Vector128.Zero ARM64 containment optimizations runtime#62933
Increase arm32/arm64 maximum instruction group size runtime#65153
Prefer “mov reg, wzr” over “mov reg, #0” runtime#64740
Biggen GC Gen0 for Apple M1: Fix LLC cache issue on Apple M1 runtime#64576
Optimize full memory barriers around volatile reads/writes: ARM64: Avoid LEA for volatile IND runtime#64354
Optimize Math.Round(x, MidpointRounding.AwayFromZero/ToEven) runtime#64016

General Optimizations

Security: Add JIT support for control-flow guard on x64 and arm64 runtime#63763
Testing: Spmi replay asmdiffs mac os arm64 runtime#64119


Support Metrics UpDownCounter instrument runtime#63648

static Meter s_meter = new Meter("MyLibrary.Queues", "1.0.0");
static UpDownCounter<int> s_queueSize = s_meter.CreateUpDownCounter<int>("Queue-Size");
static int s_pullQueueSize;
static ObservableUpDownCounter<int> s_pullQueueSizeCounter = s_meter.CreateObservableUpDownCounter<int>("Pull-Queue-Size", () => s_pullQueueSize);


Logging source generator improvements

Logging source generator should gracefully fail when special parameters incorrectly get passed as template parameter runtime#64310
Logging Source Generator fails to compile when in parameter modifier is present runtime#62644
Logging Source Generator fails to compile using keyword parameters with @ prefixes runtime#60968
Logging Source Generator fails ungracefully with overloaded methods runtime#61814
Logging Source Generator fails to compile due to CS0246 and CS0265 errors if type for generic constraint is in a different namespace runtime#58550

SDK Improvements

[Epic] New CLI parser + tab completion #2191

For 7.0.100-preview2, the dotnet new command has been given a more consistent and intuitive interface for many of the subcommands that users already use. In addition, support for tab completion of template options and arguments has been massively updated, now giving rapid feedback on valid arguments and options as the user types.

Here’s the new help output as an example:

❯ dotnet new --help
Template Instantiation Commands for .NET CLI.

dotnet new [<template-short-name> [<template-args>…]] [options]
dotnet new [command] [options]

<template-short-name> A short name of the template to create.
<template-args> Template specific options to use.

-?, -h, --help Show command line help.

install <package> Installs a template package.
uninstall <package> Uninstalls a template package.
update Checks the currently installed template packages for updates, and installs them.
search <template-name> Searches for the templates on NuGet.org.
list <template-name> Lists templates containing the specified template name. If no name is specified, lists all templates.

New Command Names

Specifically, all of the commands in this help output no longer have the -- prefix that they do today. This is more in line with what users expect from subcommands in a CLI application. The old versions (--install, etc.) are still available to prevent breaking user scripts, but we hope to add obsoletion warnings to those commands in the future to encourage migration.

Tab Completion

The dotnet CLI has supported tab completion for quite a while on popular shells like PowerShell, bash, zsh, and fish (for instructions on how to enable it, see How to enable TAB completion for the .NET CLI). It's up to individual dotnet commands to implement meaningful completions, however. For .NET 7, the dotnet new command learned how to provide tab completion for:

Available template names (in dotnet new <template-short-name>)

❯ dotnet new angular
angular grpc razor viewstart worker -h
blazorserver mstest razorclasslib web wpf /?
blazorwasm mvc razorcomponent webapi wpfcustomcontrollib /h
classlib nugetconfig react webapp wpflib install
console nunit reactredux webconfig wpfusercontrollib list
editorconfig nunit-test sln winforms xunit search
gitignore page tool-manifest winformscontrollib --help uninstall
globaljson proto viewimports winformslib -? update

Template options (the list of template options in the web template)

❯ dotnet new web --dry-run
--dry-run --language --output -lang
--exclude-launch-settings --name --type -n
--force --no-https -? -o
--framework --no-restore -f /?
--help --no-update-check -h /h

Allowed values for those template options (choice values on a choice template argument)

❯ dotnet new blazorserver --auth Individual
Individual IndividualB2C MultiOrg None SingleOrg Windows

There are a few known gaps in completion; for example, --language doesn't suggest installed language values.
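Completions only kick in once they have been enabled for your shell. As a sketch, the bash registration from Microsoft's "How to enable TAB completion for the .NET CLI" instructions looks roughly like this; it wires the dotnet complete command up to bash's programmable completion:

```shell
# bash parameter completion for the dotnet CLI (sketch, per the
# "How to enable TAB completion for the .NET CLI" instructions)
function _dotnet_bash_complete()
{
  local cur="${COMP_WORDS[COMP_CWORD]}" IFS=$'\n'
  local candidates

  # Ask the dotnet CLI itself for candidates at the current cursor position
  read -d '' -ra candidates < <(dotnet complete --position "${COMP_POINT}" "${COMP_LINE}" 2>/dev/null)

  read -d '' -ra COMPREPLY < <(compgen -W "${candidates[*]:-}" -- "$cur")
}

# Register the completion function for the dotnet command
complete -f -F _dotnet_bash_complete dotnet
```

You would add this to ~/.bashrc; the same doc has equivalents for PowerShell, zsh, and fish.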

Future work

In future previews, we plan to continue filling gaps left by this transition, as well as make enabling completions either automatic or as simple as a single command that the user can execute. We hope this will help tab completion across the entire dotnet CLI see broader use by the community!

What’s next

dotnet new users: go enable tab completion and try it for your templating use! Template authors: try out tab completion for the options on your templates and make sure you're delivering the experiences you want your users to have. Everyone: raise any issues you find on the dotnet/templating repo and help us make .NET 7 the best release for dotnet new ever!

NativeAOT Update

We previously announced that we’re moving the NativeAOT project out of experimental status and into mainline development in .NET 7. Over the past few months, we’ve been heads down doing the coding to move NativeAOT out of the experimental dotnet/runtimelab repo and into the dotnet/runtime repo.

That work has now been completed, but we have yet to add first-class support in the dotnet SDK for publishing projects with NativeAOT. We hope to have that work done shortly, so you can try out NativeAOT with your apps. In the meantime, please try trimming your app and ensure there are no trim warnings; trimming is a requirement of NativeAOT. If you own any libraries, there are also instructions for preparing libraries for trimming.

Targeting .NET 7

To target .NET 7, you need to use a .NET 7 Target Framework Moniker (TFM) in your project file. For example:
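As a minimal sketch, a project file targeting .NET 7 looks like this (using the base net7.0 TFM):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <!-- Setting the TFM to net7.0 targets .NET 7 -->
    <TargetFramework>net7.0</TargetFramework>
  </PropertyGroup>

</Project>
```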


The full set of .NET 7 TFMs, including operating-system-specific ones, is: net7.0, net7.0-android, net7.0-ios, net7.0-maccatalyst, net7.0-macos, net7.0-tvos, and net7.0-windows.

We expect that upgrading from .NET 6 to .NET 7 should be straightforward. Please report any breaking changes that you discover in the process of testing existing apps with .NET 7.


.NET 7 is a Current release, meaning it will receive free support and patches for 18 months from the release date. It’s important to note that the quality of all releases is the same. The only difference is the length of support. For more about .NET support policies, see the .NET and .NET Core official support policy.

Breaking changes

You can find the most recent list of breaking changes in .NET 7 by reading the Breaking changes in .NET 7 document. It lists breaking changes by area and release with links to detailed explanations.

To see what breaking changes are proposed but still under review, follow the Proposed .NET Breaking Changes GitHub issue.


Releases of .NET include products, libraries, runtime, and tooling, and represent a collaboration across multiple teams inside and outside Microsoft. You can learn more about these areas by reading the product roadmaps:

ASP.NET Core 7 and Blazor Roadmap
EF 7 Roadmap


We appreciate and thank you for all your support and contributions to .NET. Please give .NET 7 Preview 2 a try and tell us what you think!

The post Announcing .NET 7 Preview 2 – The New, ‘New’ Experience appeared first on .NET Blog.

359: Tiffany Choong

I had tons of fun talking to Tiffany Choong this week! I loved learning about her process for creating countless code art Pokémon characters. Just look at it and wing it! Wild. While I'm not nearly as creative as Tiffany, I feel some kinship looking through her Pens. Like how there are all these amazingly creative ones that clearly took tons of effort, that don't have nearly the hearts they deserve (c'mon dino loader!), and then relatively simple practical Pens (like a menu) that go nuts with popularity and it's hard to know why.

Time Jumps

01:05 Guest introduction

02:05 Recreating Pokemon

03:15 Rage animation

05:20 What’s your process for drawing shapes?

06:34 Let’s snuggle Pen

07:39 Does your job allow you to use this creativity?

08:37 Using Vue

10:39 Untitled dinosaur Pen

11:19 Education background

15:45 Your favorite pens

16:51 SVG as a medium

21:32 Reaching for CSS instead

24:05 Supporting IE 11

27:01 #CodePenChallenge Pens

28:21 Magical mobile menu

The post 359: Tiffany Choong appeared first on CodePen Blog.

What is Material UI


Material-UI (MUI) is a CSS framework that provides React components out-of-the-box and follows Google's Material Design, launched in 2014. MUI makes it possible to use different components to create a UI for a company's web and mobile apps. Google uses Material Design to guarantee that no matter how users interact with the products they use, they will have a consistent experience. Material Design includes guidelines for typography, grid, space, scale, color, images, and more. It also allows designers to build deliberate designs with hierarchy, meaning, and a focus on results.

The MUI library for React has over 76k stars on GitHub and is one of the most actively developed UI libraries. You can build an incredibly stylish application in a short amount of time with pre-styled components, as well as tune and extend these components to your needs. Styling is handled with a CSS-in-JS solution (JSS in v4, Emotion by default in v5) rather than a stylesheet preprocessor.

You can install MUI into your application using yarn:

yarn add @material-ui/core

or npm:

npm i @material-ui/core

(Note: since MUI v5, the core package has been renamed to @mui/material.)

Why use Material UI?

Here are the reasons why developers prefer to integrate MUI into their applications:

Pre-designed UI components. MUI supplies multiple pre-designed components that you can easily drop into your application, enabling an attractive, easy-to-use, and visually engaging design with rapid implementation.

Material Design. Material Design is a well-thought-out design system that describes the value and use cases of components. With Material Design, for example, you can use the Material Icons, making it easy to choose icons that match your design system.

Adjustable theme. MUI themes are simple to install and adjust, and can be applied globally to every component available to you, making for a highly functional and dynamic experience. You can customize a component's theme color, palette information, surface properties, and other styles, keeping all of your components consistent.
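As a rough sketch of what such a global theme customization looks like: with @material-ui/core you would pass an object of this shape to createMuiTheme() (createTheme() in v5) and supply the result to a ThemeProvider wrapping your app. The snippet below shows only the plain override object; the concrete colors and font are illustrative examples, not MUI defaults.

```javascript
// Sketch only: the shape of a Material UI theme override object.
// In a real app: const theme = createMuiTheme(themeOverrides);
// then <ThemeProvider theme={theme}>...</ThemeProvider>.
const themeOverrides = {
  palette: {
    primary: { main: '#556cd6' },   // brand color used by Buttons, AppBar, ...
    secondary: { main: '#19857b' },
  },
  typography: {
    fontFamily: 'Roboto, Helvetica, Arial, sans-serif',
  },
};

console.log(themeOverrides.palette.primary.main); // '#556cd6'
```

Because the theme is defined once and applied globally, every component that reads palette.primary picks up the same color, which is what keeps the UI consistent.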

Well-structured documentation. MUI has clearly understandable and well-structured documentation, which includes guides with code examples to practice with.

Widespread support. MUI has great support for fixing bugs and issues because the library is constantly updated: the team ships fixes for reported issues in frequent minor releases. As a user, you can also help shape what gets added to the library in the future. The team sends a feedback survey to library users every year to identify issues and make Material UI more usable, and you can tweet feedback to the official account: @MaterialUI.

Community. Here you can find basic links with support and examples of using the MUI.

Who uses Material UI and its integrations?

Node.js, React, Next.js, and Emotion are some of the most popular tools integrated with Material-UI. About 214 companies report using Material UI in their technology stacks; here are some of them:

SkyQuest Tech Stack

How to create your Material UI React application using the Flatlogic Platform

There are two ways to build your application on the Flatlogic Platform:

Create a simple and clear frontend application generated by the CLI framework, or
Build a CRUD application with frontend + backend + database.

Creating your CRUD application with Flatlogic

Step 1. Choosing the Tech Stack

In this step, you’re setting the name of your application and choosing the stack: Frontend, Backend, and Database.

Step 2. Choosing the Starter Template

In this step, you’re choosing the design of the web app. Here you can find the Material Template for your application.

Step 3. Schema Editor

In this part you need to know what kind of application you want to build (for example, a CRM or an e-commerce app), because this is where you design the database schema, i.e. the tables and the relationships between them.

If you are not familiar with database design and find it difficult to understand what tables are, we have prepared several ready-made example schemas of real-world apps that you can modify and build your app upon:

E-commerce app;
Time tracking app;
Books store;
Chat (messaging) app;

Afterwards, you can deploy your application and in a few minutes, you will get a fully functional CMS for your React Application with Material Design.

Suggested Articles:

What is Angular – Flatlogic Tech Glossary
What is Webpack – Flatlogic Tech Glossary
How to Create a Vue Application [Learn the Ropes!]

The post What is Material UI appeared first on Flatlogic Blog.

Too much magic?

Years ago my co-worker Maurits introduced me to the term “magic” in programming. He also provided the valuable dichotomy of convention and configuration (or in fact, he’d choose configuration over convention…). I think this distinction could be very helpful in psychological research, figuring out why some people prefer framework X over framework Y. One requires the developer to spell out everything they want in elaborate configuration files, the other relies on convention: placing certain files with certain names and certain methods in certain places will make everything work “magically”.

And there we are: magic. Often used in code reviews and discussions: “there’s too much magic here”. Yesterday the word popped up in a Twitter thread as well:

“symfony has too much magic, to its own detriment…” @bazinder

This was answered with:

“I’d say that everything is magic until you start to understand it :D” @iosifch

It made me wonder, what should we consider to be “magic” in programming? Is magic in code okay, or should it be avoided at all cost?

As an example of magic, the fact that you can define a controller like this is already magical:

/**
 * @Route("/task")
 */
final class TaskController
{
    /**
     * @Route("/new")
     */
    public function new(Request $request): Response
    {
        // ...
    }
}

Who invokes it? Why, and when? You can’t figure that out by clicking “Find usages…” in PhpStorm!

This innocent example shows how quick we are to accept magic from a framework. Just do things “the framework way”, put this file there, add these annotations, and it’ll work. As an alternative, we could set up an HTTP kernel that doesn’t need any magic. For instance, we could write the dispatching logic ourselves:

$routes = [
    '#^/task/new$#' => function (Request $request): Response {
        $controller = new TaskController();

        return $controller->new($request);
    },
    // ...
];

foreach ($routes as $path => $dispatch) {
    if (preg_match($path, $request->getPathInfo()) === 1) {
        $response = $dispatch($request);
        // Render $response to the client
        return;
    }
}

// Show 404 page
Of course we wouldn’t or shouldn’t do this, but if we did, at least we’d be able to pinpoint the place where the controller is invoked, and we’d be able to inject the right dependencies as constructor arguments, or pass additional method arguments. Of course, a framework saves us from writing all these lines. It takes over the instantiation logic for the controller, and analyzes annotations to build up something similar to that $routes array. It allows other services to do work before the controller is invoked, or to post-process the Response object before it’s rendered to the client.

The more a framework is going to do before or after the controller is invoked, the more magical the framework will be. That’s because of all the dynamic programming that’s involved when you make things generic. E.g. you can add your own event subscribers that modify the Request or Response, or even by-pass the controller completely. It’s unclear if and when such an event subscriber will be invoked, because it happens in a dynamic way, by looping over a list of event subscriber services. If you have ever step-debugged your way from index.php to the controller, you know that you’ll encounter a lot of abstract code, that is hard to relate to. It’s even hard to figure what exactly happens there.

I’m afraid there’s no way around magic. If you want to use a framework, then you import magic into your project. Circling back to Iosif’s comment (“everything is magic until you start to understand it”), I agree that the way to deal with your framework’s magic is to understand it, know how everything works under the hood. It doesn’t make the magic go away, but at least you know how the trick works. Personally I don’t think this justifies relying on all the magic a framework has to offer. I think developers should need as little information as possible to go ahead and change any piece of code. If they want to learn more about it,

They should be able to “click” on method calls, to zoom in on what happens behind the call.
They should be able to click on “Find usages” to zoom out and figure out how and where a method is used.

When you get to the magical part of your code base, usually the part that integrates with the framework or the ORM, then none of this is possible. You can’t click on anything, you just have to “know” how things work. I think this is a maintainability risk. If you don’t know how a piece of code works, you’re more likely to make mistakes, and it becomes less and less likely that you’ll even dare to touch it. Which is why I prefer more explicit, less magical code, that is safer to change because every aspect is in plain sight. When it comes to framework integration code, we can never make everything explicit, or we should rather dump the framework entirely. So how can we find some middle ground; how can we find a good balance between framework magic and explicit, easy to understand and change code? There are three options:

When frameworks offer an explicit and a magical option for some feature, use the more explicit alternative.
Replace magical features with your own, hand-written, and more explicit alternative.
Keep using the magical feature, but document it.

As an example of 1: I don’t want models/entities to be passed as controller arguments.

/**
 * @Route("/edit/{id}")
 */
public function edit(Task $task, Request $request): Response
{
    // ...
}

Instead, I want to see in the code where this object comes from, and based on what part of the request:

final class TaskController
{
    public function __construct(
        private TaskRepository $taskRepository
    ) {
    }

    public function edit(Request $request): Response
    {
        // Explicitly fetch the Task based on the id route parameter
        // (getById() stands in for whatever lookup method the repository offers)
        $task = $this->taskRepository
            ->getById($request->attributes->get('id'));

        // ...
    }
}

Another example of 1: if I can choose between accessing a service in a global, static way (e.g. using a façade) or as a constructor-injected dependency, I choose the latter, which is the less magical one.

As an example of 2: instead of letting Doctrine save/flush my entity to the database, including any other entity it has loaded, I often explicitly map the data of one entity, so I can do an UPDATE or INSERT query myself (see my article about ORMless).

As an example of 3: when defining routes, or column mappings, I do it close to where the developer is already looking. I use a @Route annotation (or attribute) instead of defining it in a .yml file that lives in a completely different place. If I still like to let Doctrine map my entities, I make sure to have the @Column annotations next to the entity’s properties, instead of in a separate file. If a developer needs to change something, it will be easier to understand what’s going on and what else needs to be changed. The annotations serve as a reminder of the magic that’s going on.

By using these tactics I think you can get rid of a lot of code that relies on some of the framework’s most over-the-top magic. If we manage to do that, we can spend less time learning about the inner workings of framework X. We’ll have fewer questions on StackOverflow that are like “How can I do … with framework Y?” If we apply these tactics, there will be fewer differences between code written for framework X or framework Y. It means the choice for a particular framework becomes less relevant. All of the framework-specific knowledge doesn’t have to be preserved; new team members don’t have to be “framework” developers. If they can accept a request and return a response, they should be fine. Job ads no longer have to mention any framework. Developers don’t have to do certification exams, watch video courses, read books, and so on. They can just start coding from day 1.

Transforming identity claims in ASP.NET Core and Cache

The article shows how to add extra identity claims to an ASP.NET Core application which authenticates using the Microsoft.Identity.Web client library and Azure AD B2C or Azure AD as the identity provider (IDP). This could easily be switched to OpenID Connect and use any IDP which supports OpenID Connect. The extra claims are added after an Azure Microsoft Graph HTTP request and it is important that this is only called once for a user session.

Code https://github.com/damienbod/azureb2c-fed-azuread

Normally I use the IClaimsTransformation interface to add extra claims to an ASP.NET Core session. This interface gets called multiple times and has no caching solution. If you use this interface to add extra claims to your application, you must implement a cache for the extra claims and prevent extra API calls or database requests on every request. Instead of implementing a cache and using the IClaimsTransformation interface, you could alternatively use the OnTokenValidated event with the OpenIdConnectDefaults.AuthenticationScheme scheme. This gets called after successful authentication against your identity provider. If Microsoft.Identity.Web is used as the OIDC client, which is specific to Azure AD and Azure AD B2C, you must add the configuration to the MicrosoftIdentityOptions, otherwise downstream APIs will not work. If using OpenID Connect directly with a different IDP, then use the OpenIdConnectOptions configuration. This can be added to the services of the ASP.NET Core application.

services.Configure<MicrosoftIdentityOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Events.OnTokenValidated = async context =>
    {
        if (ApplicationServices != null && context.Principal != null)
        {
            using var scope = ApplicationServices.CreateScope();
            context.Principal = await scope.ServiceProvider
                .GetRequiredService<MsGraphClaimsTransformation>()
                .TransformAsync(context.Principal);
        }
    };
});


If using default OpenID Connect and not the Microsoft.Identity.Web client to authenticate, use the OpenIdConnectOptions and not the MicrosoftIdentityOptions.

Here’s an example of an OIDC setup.

builder.Services.Configure<OpenIdConnectOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Events.OnTokenValidated = async context =>
    {
        if (ApplicationServices != null && context.Principal != null)
        {
            using var scope = ApplicationServices.CreateScope();
            context.Principal = await scope.ServiceProvider
                .GetRequiredService<MsGraphClaimsTransformation>()
                .TransformAsync(context.Principal);
        }
    };
});

The IServiceProvider ApplicationServices are used to add the scoped MsGraphClaimsTransformation service which is used to add the extra calls using Microsoft Graph. This needs to be added to the configuration in the startup or the program file.

protected IServiceProvider ApplicationServices { get; set; } = null;

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    ApplicationServices = app.ApplicationServices;

    // ...
}

The Microsoft Graph services are added to the IoC container:

services.AddScoped<MsGraphClaimsTransformation>();
services.AddScoped<MsGraphService>();

The MsGraphClaimsTransformation uses the Microsoft Graph client to get groups of a user, create a new ClaimsIdentity, add the extra claims to this group and add the ClaimsIdentity to the ClaimsPrincipal.

using AzureB2CUI.Services;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;

namespace AzureB2CUI;

public class MsGraphClaimsTransformation
{
    private readonly MsGraphService _msGraphService;

    public MsGraphClaimsTransformation(MsGraphService msGraphService)
    {
        _msGraphService = msGraphService;
    }

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        ClaimsIdentity claimsIdentity = new();
        var groupClaimType = "group";
        if (!principal.HasClaim(claim => claim.Type == groupClaimType))
        {
            var objectidentifierClaimType = "http://schemas.microsoft.com/identity/claims/objectidentifier";
            var objectIdentifier = principal.Claims.FirstOrDefault(t => t.Type == objectidentifierClaimType);

            var groupIds = await _msGraphService.GetGraphApiUserMemberGroups(objectIdentifier.Value);

            foreach (var groupId in groupIds.ToList())
            {
                claimsIdentity.AddClaim(new Claim(groupClaimType, groupId));
            }
        }

        // Without this, the group claims would be lost
        principal.AddIdentity(claimsIdentity);

        return principal;
    }
}

The MsGraphService service implements the different HTTP requests to Microsoft Graph. Azure AD B2C is used in this example, so an application client is used to access Azure AD with the ClientSecretCredential. The implementation is set up to read secrets from Azure Key Vault directly in any deployments, or from user secrets during development.

using Azure.Identity;
using Microsoft.Extensions.Configuration;
using Microsoft.Graph;
using System.Threading.Tasks;

namespace AzureB2CUI.Services;

public class MsGraphService
{
    private readonly GraphServiceClient _graphServiceClient;

    public MsGraphService(IConfiguration configuration)
    {
        string[] scopes = configuration.GetValue<string>("GraphApi:Scopes")?.Split(' ');
        var tenantId = configuration.GetValue<string>("GraphApi:TenantId");

        // Values from app registration
        var clientId = configuration.GetValue<string>("GraphApi:ClientId");
        var clientSecret = configuration.GetValue<string>("GraphApi:ClientSecret");

        var options = new TokenCredentialOptions
        {
            AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
        };

        // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
        var clientSecretCredential = new ClientSecretCredential(
            tenantId, clientId, clientSecret, options);

        _graphServiceClient = new GraphServiceClient(clientSecretCredential, scopes);
    }

    public async Task<User> GetGraphApiUser(string userId)
    {
        return await _graphServiceClient.Users[userId]
            .Request()
            .GetAsync();
    }

    public async Task<IUserAppRoleAssignmentsCollectionPage> GetGraphApiUserAppRoles(string userId)
    {
        return await _graphServiceClient.Users[userId]
            .AppRoleAssignments
            .Request()
            .GetAsync();
    }

    public async Task<IDirectoryObjectGetMemberGroupsCollectionPage> GetGraphApiUserMemberGroups(string userId)
    {
        var securityEnabledOnly = true;

        return await _graphServiceClient.Users[userId]
            .GetMemberGroups(securityEnabledOnly)
            .Request()
            .PostAsync();
    }
}

When the application is run, the two ClaimsIdentity instances exist with every request and are available for use in the ASP.NET Core application.


This works really well, but you should not add too many claims to the identity in this way. If you have a lot of claims or a lot of user data, then you should use the IClaimsTransformation interface with a good cache solution.