Use FIDO2 passwordless authentication with Azure AD

This article shows how to implement FIDO2 passwordless authentication with Azure AD for users in an Azure tenant. FIDO2 provides one of the best user authentication methods and is more secure than other account authentication implementations such as authenticator apps, SMS, email, password alone or SSI authentication. FIDO2 authentication protects against phishing.

To roll out FIDO2 authentication in Azure AD and set up an account, I used the Feitian FIDO2 BioPass K26 and K43 security keys. By using biometric security keys, you get an extra factor which is more secure than a PIN. The biometric data never leaves the key. You should never share biometric data anywhere or store it on any shared server.

I used the Feitian BioPass FIDO2 Manager to set up my security keys using my fingerprint. This is really easy to use and very user friendly.

Setting up the Azure AD tenant

The FIDO2 security key authentication method should be activated on the Azure AD tenant. Disable all other authentication methods unless required. Also disable SMS authentication; this should not be used anymore. All users should be required to use MFA.

Now that the Azure AD tenant can use FIDO2 authentication, an account can be set up for this. If you implement this for a company's tenant, you would roll this out using scripts and automate the process (a rough sketch of one possible approach follows below). You can sign in to your account at Microsoft's myaccount website and use the Security info menu to configure the Feitian FIDO2 keys.

https://myaccount.microsoft.com/
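
If you are rolling this out for a whole company tenant, enabling the FIDO2 authentication method can be scripted. The following is only a rough sketch in C# against Microsoft Graph; the endpoint, payload and required permission (Policy.ReadWrite.AuthenticationMethod) are assumptions that should be verified against the current Microsoft Graph documentation, and acquiring the access token (for example with MSAL) is out of scope here.

// Sketch only: enable the FIDO2 security key authentication method for the tenant via Microsoft Graph.
// Assumes an access token with the Policy.ReadWrite.AuthenticationMethod permission is already available.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class EnableFido2
{
    static async Task Main()
    {
        var accessToken = Environment.GetEnvironmentVariable("GRAPH_TOKEN"); // hypothetical token source

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        // PATCH the FIDO2 authentication method configuration to "enabled".
        var payload = "{ \"@odata.type\": \"#microsoft.graph.fido2AuthenticationMethodConfiguration\", \"state\": \"enabled\" }";
        var request = new HttpRequestMessage(HttpMethod.Patch,
            "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/fido2")
        {
            Content = new StringContent(payload, Encoding.UTF8, "application/json")
        };

        var response = await client.SendAsync(request);
        Console.WriteLine(response.StatusCode); // success indicates the method configuration was updated
    }
}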

I added my two security keys using USB. I use at least two security keys for each account; you should use a FIDO2 key as a fallback for the first key. Do not use SMS fallback or some type of email or password recovery. Worst case, your IT admin can reset the account for you and issue new FIDO2 keys.

https://mysignins.microsoft.com/security-info

Using FIDO2 keys with Azure AD is really easy to set up and works great. I use FIDO2 everywhere that supports it and avoid other authentication methods. Some of the account popups for Azure AD are annoying when trying to authenticate using password, SMS or email; I would prefer FIDO2 first for a better user experience. The Feitian BioPass FIDO2 security keys are excellent and I would recommend them.

Links:

https://www.microsoft.com/en-us/p/biopass-fido2-manager/9p2zjpwk3pxw

https://myaccount.microsoft.com/

https://mysignins.microsoft.com/security-info

https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-authentication-methods

https://www.ftsafe.com/article/619.html

Configure a FEITIAN FIDO2 BioPass security key

https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-authentication-passwordless-security-key

https://www.w3.org/TR/webauthn/

Protobuf In C# .NET – Part 3 – Using Length Prefixes

This is a 4 part series on working with Protobuf in C# .NET. While you can start anywhere in the series, it’s always best to start at the beginning!

Part 1 – Getting Started
Part 2 – Serializing/Deserializing
Part 3 – Using Length Prefixes
Part 4 – Performance Comparisons (Coming Soon)

In the last post in this series, we looked at how we can serialize and deserialize a single piece of data to and from Protobuf. For the most part, this is going to be your bread and butter way of working with Protocol Buffers. But there’s actually a slightly “improved” way of serializing data that might come in handy in certain situations, and that’s using “Length Prefixes”.

What Are Length Prefixes In Protobuf?

Length Prefixes sound a bit scary but really it’s super simple. Let’s first start with a scenario of “why” we would want to use length prefixes in the first place.

Imagine that I have multiple objects that I want to push into a single Protobuf stream. Let’s say using our example from previous posts, I have multiple “Person” objects that I want to push across the wire to another application.

Because we are sending multiple objects at once, and they are all encoded as bytes, we need to know when one person ends, and another begins. There are really two ways to solve this :

Have a unique byte code that won’t appear in your data, but can be used as a “delimiter” between items
Use a "Length Prefix" whereby the first byte (or bytes) in a stream says how long the first object is; you know that after that many bytes, you can read the next prefix to figure out how long the next item is.

I've actually seen *both* options used with Protobuf, but the more common one these days is the latter. Mostly because it's pretty fail safe (you don't have to pick some special delimiter character), but also because you can know ahead of time how large the upcoming object is (you don't have to just keep reading blindly until you reach a special byte character).

I’m not much of a photoshop guy, so here’s how the stream of data might look in MS Paint :

When reading this data, it might work like so :

Read the first 4 bytes to understand how long Message 1 will be
Read exactly that many bytes and store as Message 1
We can now read the next 4 bytes to understand exactly how long Message 2 will be
Read exactly that many bytes and store as Message 2

And so on, and we could actually do this forever if the stream was a constant pump of data. As long as we read the first set of bytes to know how long the next message is, we don’t need any other breaking up of the messages. And again, it’s a boon to us to use this method as we never have to pre-fetch data to know what we are getting.
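
To make the idea concrete, here is a rough conceptual sketch of that read loop in C#. It is not how protobuf-net implements it internally (and the exact prefix encoding can vary); it simply illustrates the framing idea, assuming for illustration a 4-byte little-endian length prefix:

// Conceptual sketch of length-prefixed framing (not protobuf-net's exact wire handling).
using System;
using System.IO;

static class Framing
{
    // Reads one frame: a 4-byte little-endian length followed by that many payload bytes.
    // Returns null when the stream has no more messages.
    public static byte[] ReadFrame(Stream stream)
    {
        var lengthBytes = new byte[4];
        if (stream.Read(lengthBytes, 0, 4) == 0) return null;

        int length = BitConverter.ToInt32(lengthBytes, 0);
        var payload = new byte[length];
        int offset = 0;
        while (offset < length)
        {
            // Keep reading until the whole message has arrived.
            offset += stream.Read(payload, offset, length - offset);
        }
        return payload; // these bytes can then be handed to the Protobuf deserializer
    }
}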

In all honesty, Length Prefixing is not Protobuf specific. After all, the data following the prefix could be in *any* format, but Protobuf is probably one of the few data formats that has it really baked in. So much so that our Protobuf.NET library from the earlier posts has out of the box functionality to handle it! So let's jump into that now.

Using Protobuf Length Prefixes In C# .NET

As always, if you’re only just jumping into this post without reading the previous ones in the series, you’ll need to install the Protobuf.NET library by using the following command on your package manager console.

Install-Package protobuf-net

Then the code to serialize multiple items to the same data stream might look like so :

var person1 = new Person
{
    FirstName = "Wade",
    LastName = "G"
};

var person2 = new Person
{
    FirstName = "John",
    LastName = "Smith"
};

using (var fileStream = File.Create("persons.buf"))
{
    Serializer.SerializeWithLengthPrefix(fileStream, person1, PrefixStyle.Fixed32);
    Serializer.SerializeWithLengthPrefix(fileStream, person2, PrefixStyle.Fixed32);
}

This is a fairly verbose example to write to a file, but obviously you could be writing to any data stream, looping through a list of people etc. The important thing is that our Serialize call changes to “SerializeWithLengthPrefix”.
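
For instance, a small sketch of writing a whole list of people to the same file (assuming a people variable holding a List of Person objects) might look like this:

using (var fileStream = File.Create("persons.buf"))
{
    foreach (var person in people)
    {
        // Each call writes a length prefix followed by that person's Protobuf payload.
        Serializer.SerializeWithLengthPrefix(fileStream, person, PrefixStyle.Fixed32);
    }
}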

Nice and easy!

And then to deserialize, there are some tricky things to look out for. But our basic code might look like so :

using (var fileStream = File.OpenRead("persons.buf"))
{
    Person person = null;
    do
    {
        person = Serializer.DeserializeWithLengthPrefix<Person>(fileStream, PrefixStyle.Fixed32);
    } while (person != null);
}

Notice how we actually *loop* the DeserializeWithLengthPrefix. This is because if there are multiple items within the stream, calling this method will return *one* item each time it’s called (And also move the stream to the start of the next item). If we reach the end of the stream and call this again, we will instead return a null object.

Alternatively, you can call DeserializeItems to instead return an IEnumerable of items. This is actually very similar to serializing one at a time because the IEnumerable is lazy loaded.

using (var fileStream = File.OpenRead("persons.buf"))
{
    var persons = Serializer.DeserializeItems<Person>(fileStream, PrefixStyle.Fixed32, -1);
}
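
For example, you could iterate the lazily loaded enumerable while the stream is still open, something like this small sketch:

using (var fileStream = File.OpenRead("persons.buf"))
{
    // DeserializeItems is lazy, so each Person is only read from the stream as we iterate.
    foreach (var person in Serializer.DeserializeItems<Person>(fileStream, PrefixStyle.Fixed32, -1))
    {
        Console.WriteLine($"{person.FirstName} {person.LastName}");
    }
}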

Because the Protobuf.NET library is so easy to use, I don’t want to really dive into every little overloaded method. But the important thing to understand is that when using Length Prefixes, we can push multiple pieces of data to the same stream without any additional legwork required. It’s really great!

Of course, all of this isn’t really worth it unless there is some sort of performance gains right? And that’s what we’ll be looking at in the next part of this series. Just how does ProtoBuf compare to something like JSON?

The post Protobuf In C# .NET – Part 3 – Using Length Prefixes appeared first on .NET Core Tutorials.

State of the Windows Forms Designer for .NET Applications

For the last several Visual Studio release cycles, the Windows Forms (WinForms) Team has been
working hard to bring the WinForms designer for .NET applications to parity with
the .NET Framework designer. As you may be aware, a new WinForms
designer was needed to support .NET Core 3.1 applications, and later .NET 5+
applications. The work required a near-complete rearchitecting of the designer,
as we responded to the differences between .NET and the .NET Framework based
WinForms designer everyone knows and loves. The goal of this blog post is to
give you some insight into the new architecture and what sorts of changes we
have made. And of course, how those changes may impact you as you create custom
controls and .NET WinForms applications.

After reading this blog post you will be familiar with the underlying problems
the new WinForms designer is meant to solve and have a high-level understanding
of the primary components in this new approach. Enjoy this look into the
designer architecture and stay tuned for future blogs!

A bit of history

WinForms was introduced with the first version of .NET and Visual Studio in
2001. WinForms itself can be thought of as a wrapper around the complex Win32
API. It was built so that enterprise developers didn’t need to be ace C++
developers to create data driven line-of-business applications. WinForms was
immediately a hit because of its WYSIWYG designer where even novice developers
could throw together an app in minutes for their business needs.

Until we added support for .NET Core applications there was only a single
process, devenv.exe, in which both the Visual Studio environment and the application
being designed ran. But .NET Framework and .NET Core can't both run
together within devenv.exe, and as a result we had to take the designer out of
process – hence the name of the new designer: the WinForms Out of Process Designer (or
OOP designer for short).

Where are we today?

While we aimed for complete parity between the OOP designer and the .NET
Framework designer for the release of Visual Studio 2022,
there are still a few issues on our backlog. That said, the OOP designer in its current iteration
already has most of the significant improvements in place at all the important levels:

Performance: Starting with Visual Studio 2019 v16.10, the performance of
the OOP designer has been improved considerably. We’ve worked on reducing
project load times and improved the experience of interacting with controls
on the design surface, like selecting and moving controls.

Databinding Support: WinForms in Visual Studio 2022 brings a
streamlined approach for managing Data Sources in the OOP designer with the
primary focus on Object Data Sources. This new approach is unique to the
OOP designer and .NET based applications.

WinForms Designer Extensibility SDK: Due to the conceptual differences
between the OOP designer and the .NET Framework designer, providers of 3rd
party controls for .NET will need to use a dedicated WinForms Designer SDK
to develop custom Control Designers which run in the context of the OOP
designer. We published a pre-release version of the SDK last month as a
NuGet package, and you can download it
here.
We will be updating this package to provide IntelliSense in the first
quarter of 2022. There will also be a dedicated blog post about the SDK in
the coming weeks.

A look under the hood of the WinForms designer

Designing Forms and UserControls with the WinForms designer holds a couple of
surprises for people who look under the hood of the designer for the first time:

The designer doesn't "save" (serialize) the layout in some sort of XML or
JSON. It serializes the Forms/UserControl definition directly to code – in
the new OOP designer that is either C# or Visual Basic .NET. When the user
places a Button on a Form, the code for creating this Button and assigning
its properties is generated into a method of the Form called
`InitializeComponent` (a minimal sketch of such generated code follows after
this list). When the Form is opened in the designer, the
`InitializeComponent` method is parsed and a shadow .NET assembly is
created on the fly from that code. This assembly contains an
executable version of `InitializeComponent` which is loaded in the context
of the designer. The `InitializeComponent` method is then executed, and the
designer is now able to display the resulting Form with all its control
definitions and assigned properties. We call this kind of serialization
Code Document Object Model serialization, or CodeDOM serialization for
short. This is the reason you shouldn't edit `InitializeComponent`
directly: the next time you visually edit something on the Form and save it,
the method gets overwritten, and your edits will be lost.
All WinForms controls have two code layers to them. First there is the code
for a control that runs at runtime, and then there is a control
designer, which controls the behavior at design time. The control designer
functionality for each control is not implemented in the designer
itself. Rather, a dedicated control designer interacts with Visual Studio
services and features. Let's look at `SplitContainer` as an example:
(animated example: SplitPanelUIDemo.gif)

The design-time behavior of the SplitContainer is implemented in an
associated designer, in this case the `SplitContainerDesigner`. This class
provides the key functionality for the design-time experience of the
`SplitContainer` control:

The way the outer Panel and the inner Panels get selected on mouse
click.
The ability of the splitter bar to be moved to adjust the sizes of the
inner panels.
To provide the Designer Action Glyph, which allows a developer using the
control to manage the Designer Actions through the respective short cut
menu.
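
For reference, here is a minimal sketch of the kind of code the designer generates into `InitializeComponent`; the control names and values are purely illustrative:

private System.Windows.Forms.Button button1;

private void InitializeComponent()
{
    this.button1 = new System.Windows.Forms.Button();
    this.SuspendLayout();
    //
    // button1
    //
    this.button1.Location = new System.Drawing.Point(20, 20);
    this.button1.Name = "button1";
    this.button1.Size = new System.Drawing.Size(100, 30);
    this.button1.Text = "Click me";
    //
    // Form1
    //
    this.ClientSize = new System.Drawing.Size(300, 200);
    this.Controls.Add(this.button1);
    this.Name = "Form1";
    this.ResumeLayout(false);
}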

When we decided to support apps built on .NET Core 3.1 and .NET 5+ in
the original designer, we faced a major challenge. Visual Studio is
built on .NET Framework but needs to round-trip the designer code by serializing
and deserializing this code for projects which target a different runtime. While, with some
limitations, you can run .NET Framework based types in a .NET Core/.NET 5+
application, the reverse is not true. This problem is known as the "type
resolution problem". A great example of this can be seen in the TextBox
control: in .NET Core 3.1 we added a new property called `PlaceholderText`. In
.NET Framework that property does not exist on `TextBox`. So, if the .NET
Framework based CodeDom serializer (running in Visual Studio) encountered the
`PlaceholderText` property it would fail.

In addition, a Form with all its controls and components renders itself in the designer
at design time. Therefore, the code that instantiates the form and shows it in the
Designer window must also be executed in .NET and not in .NET Framework, so that
newer properties available only in .NET also reflect the actual appearance and
behavior of the controls, components, and ultimately the entire Form or UserControl.

Because we plan to continue innovating and adding new features in the future,
the problem only grows over time. So we had to design a mechanism that supported
such cross-framework interactions between the WinForms designer and Visual Studio.

Enter the DesignToolsServer

Developers need to see their Forms in the designer looking precisely the way it
will at runtime (WYSIWYG). Whether it is `PlaceholderText` property from the
earlier example, or the layout of a form with the desired default font – the
CodeDom serializer must run in the context of the version of .NET the project is
targeting. And we naturally can’t do that, if the CodeDom serialization is
running in the same process as Visual Studio. To solve this, we run the designer
out-of-process (hence the moniker Out of Process Designer) in a new
.NET (Core) process called DesignToolsServer. The DesignToolsServer process
runs the same version of .NET and the same bitness (x86 or x64) as your
application.

Now, when you double-click on a Form or a UserControl in Solution Explorer,
Visual Studio’s designer loader service determines the targeted .NET version and
launches a DesignToolsServer process. Then the designer loader passes the code
from the `InitializeComponent` method to the DesignToolsServer process where
it can now execute under the desired .NET runtime and is now able to deal with
every type and property this runtime provides.

While going out of process solves the type resolution problem, it introduces a
few other challenges around user interaction inside Visual Studio. Take, for
example, the Property Browser, which is part of Visual Studio (and therefore
also .NET Framework based). It is supposed to show properties of .NET types, but it can't
do this for the same reasons the CodeDom serializer cannot (de)serialize .NET
types.

Custom Property Descriptors and Control Proxies

To facilitate interaction with Visual Studio, the DesignToolsServer introduces
proxy classes for the components and controls on a form: while the real
components and controls live on the form in the DesignToolsServer.exe process,
an object proxy is created in the Visual Studio process for each one of them.
If you now select an actual .NET WinForms control on the form, from Visual Studio's
perspective an object proxy is what gets selected. That object proxy doesn't
have the same properties as its counterpart control on the server side. It
rather maps the control's properties 1:1 with custom proxy property
descriptors through which Visual Studio can talk to the server process.

So, clicking on a button control on the form leads to the following
(somewhat simplified) chain of events to get the properties to show in the
Property Browser:

The mouse click happens on a special window in the Visual Studio process,
called the Input Shield. It acts like a sneeze guard, if you will, and is
there purely to intercept the mouse messages, which it sends on to the
DesignToolsServer process.
The DesignToolsServer receives the mouse click and passes it to the
Behavior Service. The Behavior Service finds the control and passes it to
the Selection Service that takes the necessary steps to select that
control.
In that process, the Behavior Service has also located the correlating
Control Designer, and initiates the necessary steps to let that Control
Designer render whatever adorners and glyphs it needs to render for that
control. Think of the Designer Action Glyphs or the special selection
markers from the earlier SplitPanel example.
The Selection Service reports the control selection back to Visual Studio’s
Selection Service.
Visual Studio now knows which object proxy maps to the selected control in
the DesignToolsServer. Visual Studio's selection service selects that
object proxy. This in turn triggers an update of the values of the selected
control (object proxy) in the Property Browser.
The Property Browser in turn now queries the Property Descriptors of the
selected object proxy which are mapped to the proxy descriptors of the
actual control in the DesignToolsServer’s process. So, for each property the
Property Browser needs to update, the Property Browser calls GetValue on
the respective proxy Property Descriptor, which leads to a cross-process
call to the server to retrieve the actual value of that control’s
property, which is eventually displayed in the Property Browser.

Compatibility of Custom Controls with the DesignToolsServer

With the knowledge of these new concepts, it is obvious that adjustments to
existing custom control designers targeting .NET will be required. The extent to
which adjustments are necessary depends purely on how extensively the custom
control utilizes the typical custom Control Designer functionality.

Here's a simplified guide on how to decide whether a control would
likely require adjustments for the OOP designer for typical Designer
functionality:

Whenever a control brings special UI functionality (like custom adorners,
snap lines, glyphs, mouse interactions, etc.), the control will need to be
adjusted for .NET and at least recompiled against the new WinForms
Designer SDK. The reason for this is that the OOP Designer re-implements a
lot of the original functionality, and that functionality is organized in
different namespaces. Without recompiling, the new OOP designer wouldn’t
know how to deal with the control designer and would not recognize the
control designer types as such.
If the control brings its own Type Editor, then the required adjustments
are more considerable. This is the same process the team underwent
with the library of the standard controls: While the modal dialogs
of a control’s designer can only work in the context of the Visual Studio
process, the rest of the control’s designer runs in the context of the
DesignToolServer’s process. That means a control with a custom type editor,
which is shown in a modal dialog, always needs a Client/Server
Control Designer combination. It needs to communicate between the modal UI
in the Visual Studio process and the actual instance of the control in the
DesignToolsServer process.
Since the control and most of its designers now live in the
DesignToolsServer (instead of Visual Studio) process, reacting to a
developer's UI interaction by handling it in WndProc code won't work
anymore. As already mentioned, we will be publishing a blog post that will
cover the authoring of custom controls for .NET and dive into the .NET
Windows Forms SDK in more detail.

If a control's property only implements a custom converter, however, then
no change is needed, unless the converter needs custom painting in the
property grid. Properties which use custom enums or provide a
list of standard settings through a custom converter at design time
run just fine.

Features yet to come and phased out Features

While we have almost reached parity with the .NET Framework Designer, there are still
a few areas where the OOP Designer needs work:

The Tab Order interaction has been implemented and is currently being tested.
This feature will be available in Visual Studio 17.1 Preview 3.
Apart from the Tab Order functionality you already know from the .NET
Framework Designer, we plan to extend the Tab Order interaction
to make it easier to reorder controls, especially in large forms or in parts of a
large form.
The Component Designer has not been finalized yet, and we're actively
working on that. The usage of Components, however, is fully supported, and
the Component Tray has parity with the .NET Framework Designer. Note, though,
that not all components which were available by default in the Toolbox in
.NET Framework are supported in the OOP Designer. We have decided not to
support components in the OOP Designer which are only available
through .NET Platform Extensions (see the Windows Compatibility Pack).
You can, of course, use those components directly in code in .NET, should
you still need them.
The Typed DataSet Designer is not part of the OOP Designer. The same is
true for type editors which lead directly to the SQL Query Editor in .NET
Framework (like the DataSet component editor). Typed DataSets need the
so-called Data Source Provider Service, which does not belong to WinForms.
While we have modernized the support for Object Data Sources and encourage
Developers to use this along with more modern ORMs like EFCore, the OOP
Designer can handle typed DataSets on existing forms, which have been ported
from .NET Framework projects, in a limited scope.

Summary and key takeaways

So, while most of the basic Designer functionality is at parity with the .NET Framework Designer,
there are key differences:

We have taken the .NET WinForms Designer out of proc. While Visual Studio 2022 is 64-Bit .NET Framework only,
the new Designer’s server process runs in the respective bitness of the project and as a .NET process.
That, however, comes with a couple of breaking changes, mostly around the authoring of Control Designers.
Databinding is focused around Object Data Sources. While maintaining
Typed DataSet-based data layers is still supported in a limited way for legacy scenarios, for .NET we recommend
using modern ORMs like EntityFramework or, even better, EFCore. Use the DesignBindingPicker
and the new Databinding Dialog to set up Object Data Sources.
Control library authors, who need more Design Time Support for their controls than custom type editors,
need the WinForms Designer Extensibility SDK.
Framework control designers no longer work without adjusting them for the
new OOP architecture of the .NET WinForms Designer.

Let us know what topics you would like to hear from us about around the WinForms Designer –
the new Object Data Source functionality in the OOP Designer and the WinForms Designer SDK
are already in the making and at the top of our list.

Please also note that the WinForms .NET runtime is open source, and you can contribute!
If you have ideas, have encountered bugs, or even want to take on PRs around the WinForms runtime,
have a look at the WinForms GitHub repo.
If you have suggestions around the WinForms Designer,
feel free to file new issues there as well.

Happy coding!

The post State of the Windows Forms Designer for .NET Applications appeared first on .NET Blog.

Protobuf In C# .NET – Part 1 – Getting Started

This is a 4 part series on working with Protobuf in C# .NET. While you can start anywhere in the series, it’s always best to start at the beginning!

Part 1 – Getting Started
Part 2 – Serializing/Deserializing
Part 3 – Using Length Prefixes (Coming Soon)
Part 4 – Performance Comparisons (Coming Soon)

I had just started programming in C# when JSON started gaining steam as the “XML Killer”. Being new to software development, I didn’t really have a horse in the race, but I was fascinated by the almost tribal level of care people put into such a simple thing as competing data formats.

Surprisingly, Google actually released Protobuf (Or Protocol Buffers) in 2008, but I think it’s only just started to pick up steam (Or maybe that’s just in the .NET world). I recently worked on a project that used it, and while not to the level of JSON vs XML, I still saw some similarities in how Protobuf was talked about. Mainly that it was almost made out to be some voodoo world changing piece of technology. All I could think was “But.. It’s just a data serialization format right?”.

The Protobuf docs (just in my view) are not exactly clear in spelling out just what Protobuf is and how it works. Mainly I think that’s because much of the documentation out there takes a language neutral approach to describing how it works. But imagine if you were just learning XML, and you learnt all of the intricacies of XML namespaces, declarations, or entities before actually doing the obvious and serializing a simple piece of data down, looking at it, then deserializing it back up.

That’s what I want to do with this article series. Take Protobuf and give you a dead simple overview with C# in mind and show you just how obvious and easy it really is.

Defining Proto Message Contracts

The first thing we need to understand is the Proto Message Contract. These look scary and maybe even a bit confusing as to what they are used for, but they are actually dead simple. A proto message definition would look like this (In proto format) :

syntax = "proto3";

message Person {
    string firstName = 1;
    string lastName = 2;
    repeated string emails = 3;
}

Just look at this like any other class definition in any language :

We have our message name (Person)
We have our fields and their types (For example firstName is a string)
We can have “repeated” fields (Basically arrays/lists in C#)
We have an integer identifier for each field. This integer is used *instead* of the field name when we serialize. For example, if we serialized someone with the first name Bob, the serialized content would not have “firstName=’bob’”, it would have “1=’bob’”.

The last point there may be tricky at first but just think about it like this. Using numerical identifiers for each field means you can save a lot of space when dealing with big data because you aren’t subject to storing the entire field name when you serialize.

These contracts are nothing more than a universal way to describe what a message looks like when we serialize it. In my view, it’s no different than an XML or JSON schema. Put simply, we can take this contract and give it to anyone and they will know what the data will look like when we send it to them.

If we take this proto message, and paste it into a converter like the one by Marc Gravell here : https://protogen.marcgravell.com/ We can get what a generated C# representation of this data model will look like (And a bit more on this later!).

The fact is, if you are talking between two systems with Protobuf, you may not even need to worry about ever writing or seeing contracts in this format. It's really no different than someone flicking you an email with something like :

Hey about that protobuf message, it’s going to be in this format :

Firstname will be 1. It’s a string.
LastName will be 2. It’s also a string.
Emails will be 3, and it’s going to be an array of strings

It’s that simple.

Proto Message Contracts In C#

When it comes to working with JSON in C# .NET, you have JSON.NET, so it only makes sense when you are working with Protobuf in C# .NET you have… Protobuf.NET (Again by the amazing Marc Gravell)! Let’s spin up a dead simple console application and add the following package via the package manager console :

Install-Package protobuf-net

Now I will say there are actually a few Protobuf C# libraries floating around, including one by Google. But what I typically find is that these are converted Java libraries, and as such they don’t really conform to how C# is typically written. Protobuf.NET on the other hand is very much a C# library from the bottom up, which makes it super easy and intuitive to work with.

Let’s then take our person class, and use a couple of special attributes given to us by the Protobuf.NET library :

[ProtoContract]
class Person
{
    [ProtoMember(1)]
    public string FirstName { get; set; }

    [ProtoMember(2)]
    public string LastName { get; set; }

    [ProtoMember(3)]
    public List<string> Emails { get; set; }
}

If we compare this to our proto contract from earlier, it's a little less scary right? It's just a plain old C# class, but with a couple of attributes to ensure that we are serializing to the correct identifiers.

I’ll also point something else out here, because we are using integer identifiers, the casing of our properties no longer matters at all. Coming from the C# world where we love PascalCase, this is enormously easy on the eyes. But even more so, when we take a look at performance a bit later on in this series, it will become even clearer what a good decision this is because we no longer have to fiddle around parsing strings, including whether the casing is right or not.

I'll say it again: if you have an existing proto message contract given to you (for example, someone else is building an application in Java and they have given you the contract only), you can simply run it through Marc Gravell's Protogen tool here : https://protogen.marcgravell.com/

It does generate a bit of a verbose output :

[global::ProtoBuf.ProtoContract()]
public partial class Person : global::ProtoBuf.IExtensible
{
    private global::ProtoBuf.IExtension __pbn__extensionData;
    global::ProtoBuf.IExtension global::ProtoBuf.IExtensible.GetExtensionObject(bool createIfMissing)
        => global::ProtoBuf.Extensible.GetExtensionObject(ref __pbn__extensionData, createIfMissing);

    [global::ProtoBuf.ProtoMember(1)]
    [global::System.ComponentModel.DefaultValue("")]
    public string firstName { get; set; } = "";

    [global::ProtoBuf.ProtoMember(2)]
    [global::System.ComponentModel.DefaultValue("")]
    public string lastName { get; set; } = "";

    [global::ProtoBuf.ProtoMember(3, Name = @"emails")]
    public global::System.Collections.Generic.List<string> Emails { get; } = new global::System.Collections.Generic.List<string>();
}

But for larger contracts it may just work well as a scaffolding tool for you!

So defining contracts is all well and good, how do we go about Serializing the data? Let’s check that out in Part 2! https://dotnetcoretutorials.com/2022/01/13/protobuf-in-c-net-part-2-serializing-deserializing/

The post Protobuf In C# .NET – Part 1 – Getting Started appeared first on .NET Core Tutorials.

Protobuf In C# .NET – Part 2 – Serializing/Deserializing

This is a 4 part series on working with Protobuf in C# .NET. While you can start anywhere in the series, it’s always best to start at the beginning!

Part 1 – Getting Started
Part 2 – Serializing/Deserializing
Part 3 – Using Length Prefixes (Coming Soon)
Part 4 – Performance Comparisons (Coming Soon)

In our last post, we spent much of the time talking about how proto contracts work. But obviously that’s all for nothing if we don’t start serializing some data. Thankfully for us, the Protobuf.NET library takes almost all of the leg work out of it, and we more or less follow the same paradigms that we did when working with XML or JSON in C#.

Of course, if you haven’t already, install Protobuf.NET into your application using the following package manager console command :

Install-Package protobuf-net

I’m going to be using the same C# contract we used in the last post. But for reference, here it is again.

[ProtoContract]
class Person
{
    [ProtoMember(1)]
    public string FirstName { get; set; }

    [ProtoMember(2)]
    public string LastName { get; set; }

    [ProtoMember(3)]
    public List<string> Emails { get; set; }
}

And away we go!

Serializing Data

To serialize or write our data in protobuf format, we simply need to take our object and push it into a stream. An in memory example (For example if you needed a byte array to send somewhere else), would look like this :

var person = new Person
{
    FirstName = "Wade",
    LastName = "Smith",
    Emails = new List<string>
    {
        "[email protected]",
        "[email protected]"
    }
};

using (var memoryStream = new MemoryStream())
{
    Serializer.Serialize(memoryStream, person);
    var byteArray = memoryStream.ToArray();
}

So ignoring our set up code there for the Person object, we’ve basically serialized in 1 or 5 lines of code depending on if you want to count the setup of the memory stream. Pretty trivial and it makes all that talk about Protobuf being some sort of voodoo really just melt away.

If we wanted to, we could instead serialize directly to a file like so :

using (var fileStream = File.Create("person.buf"))
{
    Serializer.Serialize(fileStream, person);
}

This leaves us with a person.buf file locally. Of course, if we open this file in a text editor it’s unreadable (Protobuf is not human readable when serialized), but we can use a tool such as https://protogen.marcgravell.com/decode to open the file and tell us what’s inside of it.

Doing that, we get :

Field #1: 0A String Length = 4, Hex = 04, UTF8 = “Wade”
Field #2: 12 String Length = 5, Hex = 05, UTF8 = “Smith”
Field #3: 1A String Length = 20, Hex = 14, UTF8 = “[email protected] …” (total 20 chars)
Field #3: 1A String Length = 18, Hex = 12, UTF8 = “[email protected] …” (total 18 chars)

Notice that the fields within our protobuf file are identified by their integer identifier, *not* by their string property name. Again, this is important to understand because we need the same proto contract identifiers on both ends to know that Field 1 is actually a person's first name.

Well that’s serialization done, how about deserializing?

Deserializing Data

Of course if serializing data can be done in 1 line of code, deserializing or reading back the data is going to be just as easy.

using (var fileStream = File.OpenRead("person.buf"))
{
    var myPerson = Serializer.Deserialize<Person>(fileStream);
    Console.WriteLine(myPerson.FirstName);
}

This is us simply reading a file and deserializing it into our myPerson object. It's somewhat trivial and really straightforward if I'm being honest, and there actually isn't too much to deep dive into.
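
If you had serialized to a byte array instead, as in the in-memory example above, reading it back is just as trivial. A small sketch, assuming you still have that byteArray in scope:

using (var memoryStream = new MemoryStream(byteArray))
{
    // Deserialize the Person straight back out of the bytes we produced earlier.
    var myPerson = Serializer.Deserialize<Person>(memoryStream);
    Console.WriteLine(myPerson.FirstName);
}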

That is.. until we start talking about length prefixes. Length prefixes are protobuf's way of serializing several pieces of data into the same data stream. So imagine we have 5 people: how can we store 5 people in the same file or data stream and know when one person's data ends and another begins? In the next part of this series we'll be taking a look at just how that works with Protobuf.NET!

The post Protobuf In C# .NET – Part 2 – Serializing/Deserializing appeared first on .NET Core Tutorials.

.NET Framework January 2022 Security and Quality Rollup Updates

Yesterday, we released the January 2022 Security and Quality Rollup Updates for .NET Framework.

Security

CVE-2022-21911 – .NET Framework Denial of Service

This security update addresses an issue where an unauthenticated attacker could cause a denial of service on an affected system.

CVE-2022-21911

Quality and Reliability

This release contains the following quality and reliability improvements.

SQL Connectivity

Under certain error cases, caused by a NullReferenceException thrown while populating SqlParameter values using customer-provided delegates, the SqlClient driver may not clean up the connection state. A connection in this bad state can make its way into the connection pool and may be picked up for reuse, causing unexpected failures on the connection. If such a condition is recognized, an AppContext switch, "Switch.System.Data.SqlClient.CleanupParserOnAllFailures", may be enabled to clean up connections on any kind of failure, even when running into errors with delegates.

WCF1

Addresses a failure to correctly timeout a failed request when making an asynchronous WCF call over HTTP. If the service has sent a partial response message and fails to send the remainder of the response, the client may not fail the call after the configured timeout.

WPF2

Addresses an issue where WPF does not respond to touch if the WPF window was activated by a touch manipulation (e.g. swiping a listbox).
Adds a mitigation for an issue involving tearing, flickering, or incorrect composition of visual content under high GPU-load conditions.
Addresses an issue where the extra information associated with a WM_KEYDOWN message is discarded before the handlers for the PreviewKeyDown or KeyDown events can retrieve it via GetMessageExtraInfo.
Addresses an issue where AutomationElement.FindFirst or FindAll do not search the subtree of an hwnd whose UIA_WindowVisibilityOverridden property is set to 1.
Addresses an issue where a binding on TextBox.Text with UpdateSourceTrigger=PropertyChanged produces incorrect results when the Microsoft Quick IME is used.

1 Windows Communication Foundation (WCF)
2 Windows Presentation Foundation (WPF)


Getting the Update

The Security and Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog. The Security Only Update is available via Windows Server Update Services and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework 4.8 updates are available via Windows Update, Windows Server Update Services, and the Microsoft Update Catalog. Updates for other versions of .NET Framework are part of the Windows 10 Monthly Cumulative Update.

**Note**: Customers that rely on Windows Update and Windows Server Update Services will automatically receive the .NET Framework version-specific updates. Advanced system administrators can also make use of the direct Microsoft Update Catalog download links below for .NET Framework-specific updates. Before applying these updates, please carefully review the .NET Framework version applicability to ensure that you only install updates on systems where they apply.

The following table is for Windows 10 and Windows Server 2016+ versions. Each product version lists the applicable .NET Framework versions and the Cumulative Update catalog number.

Windows 11
.NET Framework 3.5, 4.8 – Catalog 5008880

Microsoft server operating systems version 21H2
.NET Framework 3.5, 4.8 – Catalog 5008882

Windows 10 21H2
.NET Framework 3.5, 4.8 – Catalog 5008876

Windows 10 21H1
.NET Framework 3.5, 4.8 – Catalog 5008876

Windows 10, version 20H2 and Windows Server, version 20H2
.NET Framework 3.5, 4.8 – Catalog 5008876

Windows 10 1909
.NET Framework 3.5, 4.8 – Catalog 5008879

Windows 10 1809 (October 2018 Update) and Windows Server 2019 – Cumulative Update 5009718
.NET Framework 3.5, 4.7.2 – Catalog 5008873
.NET Framework 3.5, 4.8 – Catalog 5008878

Windows 10 1607 (Anniversary Update) and Windows Server 2016
.NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2 – Catalog 5009546
.NET Framework 4.8 – Catalog 5008877

Windows 10 1507
.NET Framework 3.5, 4.6, 4.6.1, 4.6.2 – Catalog 5009585

The following table is for earlier Windows and Windows Server versions. Each product lists the Security and Quality Rollup catalog number followed by the Security Only Update catalog number.

Windows 8.1, Windows RT 8.1 and Windows Server 2012 R2 – Rollup 5009721, Security Only 5009713
.NET Framework 3.5 – Rollup: Catalog 5008868 | Security Only: Catalog 5008891
.NET Framework 4.5.2 – Rollup: Catalog 5008870 | Security Only: Catalog 5008893
.NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 – Rollup: Catalog 5008875 | Security Only: Catalog 5008895
.NET Framework 4.8 – Rollup: Catalog 5008883 | Security Only: Catalog 5008897

Windows Server 2012 – Rollup 5009720, Security Only 5009712
.NET Framework 3.5 – Rollup: Catalog 5008865 | Security Only: Catalog 5008888
.NET Framework 4.5.2 – Rollup: Catalog 5008869 | Security Only: Catalog 5008892
.NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 – Rollup: Catalog 5008874 | Security Only: Catalog 5008894
.NET Framework 4.8 – Rollup: Catalog 5008881 | Security Only: Catalog 5008896

Windows 7 SP1 and Windows Server 2008 R2 SP1 – Rollup 5009719, Security Only 5009711
.NET Framework 3.5.1 – Rollup: Catalog 5008867 | Security Only: Catalog 5008890
.NET Framework 4.5.2 – Rollup: Catalog 5008860 | Security Only: Catalog 5008887
.NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 – Rollup: Catalog 5008859 | Security Only: Catalog 5008886
.NET Framework 4.8 – Rollup: Catalog 5008858 | Security Only: Catalog 5008885

Windows Server 2008 – Rollup 5009722, Security Only 5009714
.NET Framework 2.0, 3.0 – Rollup: Catalog 5008866 | Security Only: Catalog 5008889
.NET Framework 4.5.2 – Rollup: Catalog 5008860 | Security Only: Catalog 5008887
.NET Framework 4.6 – Rollup: Catalog 5008859 | Security Only: Catalog 5008886

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

.NET Framework November 2021 Cumulative Update
.NET Framework October 2021 Security and Quality Rollup
.NET Framework August 2021 Security and Quality Rollup
.NET Framework July 2021 Cumulative Update Preview

The post .NET Framework January 2022 Security and Quality Rollup Updates appeared first on .NET Blog.

350: 2021’s Most Hearted

It’s back! We counted up all the hearts given to every Pen created in 2021 and created a list of the top 100. Marie and Chris chat about this year’s list. Who’s on it, what’s on it, and digging into the numbers where we can.

Remember that people can heart Pens up to 3 times each, so if it looks like a Pen lower down the list has more hearts than one higher up the list, it's because of the density of hearts. The number you see on the card only reflects the number of people that have hearted it, not the true number of hearts.

Lots of folks hitting multiple times. George Francis hit 6(!) times (7, 28, 59, 75, 80, 82), an impressive feat for a member who only joined in late 2020. Four people with four placements: Aysenur Turk (3, 11, 14… and 1), Yoav Kadosh (17, 33, 72, 95), Dilum Sanjaya (22, 24, 64, 65), and Aybüke Ceylan (38, 46, 63, 91), and a couple of 2-position people. Woot!

“Full page” layouts were quite a trend on the Top 100 this year. That is, Pens that look like complete websites with widgets and cards and navigation and sidebars and the whole nine yards. That’s opposed to some past years where more minimal small-yet-surprising Pens were more dominant in the Top 100.

Advice for those shooting for the top? Talk about your work. Almost nobody on the list creates work and then never shares it. Share it on social media, blog about it, make a video, re-promote it multiple times. Be part of the community by liking other people’s work. Remember, your hearts come from other CodePen members. Also, feel free to update and revise your Pens. Many of the Top 100 are updated and improved even after their initial wave of popularity.

Time Jumps

01:43 Multiple hearting a Pen

03:26 Where to find the list for 2021

04:11 #1 on the list

07:31 George Francis on the list 6 times

10:13 Sponsor: Netlify

11:55 Getting pens in front of the community

14:52 It’s just gotta have something special

15:15 Doing the CodePen Challenges

15:32 Posting in January vs December

17:56 High fives to Yoav Kadosh

18:34 Carousel carousels

20:18 New faces and old faces on the list

21:47 Personal favorite pens

26:33 Full page UI

28:29 Bigger than smaller for 2021

31:05 If you’re new to CodePen…

Sponsor: Netlify

Did you know Netlify offers auth? They call it Netlify Identity. Why would you need auth on a static site? Well, a static site can also be quite dynamic; that's the nature of Jamstack. Say you're building a TODO app. No problem! You can have users sign up and log in. You can store their data in a cloud database. You can pull the data for that user from the database based on information about the logged-in user because of Netlify Identity.

The post 350: 2021’s Most Hearted appeared first on CodePen Blog.

Hot Reload In C# .NET 6 / Visual Studio 2022

Now that the flames have simmered down on the Hot Reload Debacle, maybe it’s time again to revisit this feature!

I legitimately feel this is actually one of the best things to be released with .NET in a while. The amount of frustrating times I’ve had to restart my entire application because of one small typo… whereas now it’s Hot Reload to the rescue!

It’s actually a really simple feature so this isn’t going to be too long. You’ll just have to give it a crack and try it out yourself. In short, it looks like this when used :

In case it’s too small, you can click to make it bigger. But in short, I have a console application that is inside a never ending loop. I can change the Console.WriteLine text, and immediately see the results of my change *without* restarting my application. That’s the power of Hot Reload!
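
If you want to try the same thing yourself, the demo is roughly this kind of program; this is a trivial sketch, not the exact code from the recording:

using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // A never ending loop – edit the WriteLine text while the app runs and hit Hot Reload.
        while (true)
        {
            Console.WriteLine("Hello from Hot Reload!");
            Thread.Sleep(1000);
        }
    }
}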

And it isn't just limited to Console Applications. It (should) work with Web Apps, Blazor, WPF applications, really anything you can think of. Obviously there are some limitations. Notably, if you edit your application startup (or other run-once type code), your application will hot reload, but it doesn't re-run any code blocks, meaning you'll need to restart your application to get that startup code to run again. I've also at times had Hot Reload fail with various errors, usually meaning I just restart and we are away again.

Honestly, one of the biggest things to get used to is the mentality of Hot Reload actually doing something. It’s very hard to “feel” like your changes have been applied. If I’m fixing a bug, and I do a reload and the bug still exists…. It’s hard for me to not stop the application completely and restart just to be sure!

Hot Reload In Visual Studio 2022

Visual Studio 2019 *does* have hot reload functionality, but it's less featured (at least for me). Thus I'm going to show off Visual Studio 2022 instead!

All we need to do is edit our application while it’s running, then look to our nice little task bar in Visual Studio for the following icon :

That little icon with two fishes swimming after each other (or.. at least that's what it looks like to me) is Hot Reload. Press it, and you are done!

If that’s a little too labour intensive for you, there is even an option to Hot Reload on file save.

If you're coming from a front end development background you'll be used to file watchers recompiling your applications on save. On larger projects I've found this to maybe be a little bit more pesky (if Hot Reload is having issues, having popups firing off on every save is a bit annoying), but on smaller projects I've basically run this without a hitch every time.

Hot Reload From Terminal

Hot Reload from a terminal or command line is just as easy. Simply run the following from your project directory :

dotnet watch

Note *without* typing run after (just in case you used to use "dotnet watch run"). And that's it!

Your application will now run with Hot Reload on file save switched on! Doing this you’ll see output looking something like

watch : Files changed: F:\Projects\Core Examples\HotReload\Program.cs~RF1f7ccc54.TMP, F:\Projects\Core Examples\HotReload\Program.cs, F:\Projects\Core Examples\HotReload\qiprgg31.zfd~
watch : Hot reload of changes succeeded.

And then you're away laughing again!

The post Hot Reload In C# .NET 6 / Visual Studio 2022 appeared first on .NET Core Tutorials.

Audio Captcha – Use an API to solve audio based #captchas.

Speech to Text : Specifically for captchas

This is no ordinary speech to text API, it is specifically designed to crack audio captchas

If you use AWS Transcribe, or Google Cloud Speech to Text on captcha audio, then you will have poor results, because those are general-purpose speech to text APIs, designed to transcribe video, narrated text, and phone calls. This API is different: it is designed to quickly and accurately solve the short, distorted, random letter and number assortment found in captcha audio.

Audio can be provided as a URL or Base64 encoded data
Standard alphabet and NATO alphabet supported (Alpha, Bravo, Charlie …)
Returns on average in 5 seconds.

You can always call the API multiple times, most websites don’t count failed attempts.

Read more about this new API at https://www.audiocaptcha.com/

Create an account on Rapid API, and get an API key for this API. Once done, you can try out the API for free, by setting a HTTP header “x-rapidapi-key” to your API Key, then posting to the following URL:

https://audio-captcha.p.rapidapi.com/AudioCaptchaLambda

{
    "url": "https://pepite.s3.eu-west-1.amazonaws.com/65UHC.wav",
    "base64": "",
    "useNato": false
}

Post the above JSON as the request body. Obviously, the WAV file URL here is a demo, but it was extracted from a real captcha.

Otherwise, you can provide the audio in base 64 format in the base64 field and omit the URL element.

If the audio is in the NATO alphabet (Alpha, Bravo, Charlie …) then change useNato to true; otherwise, the audio is assumed to use the standard alphabet (A, B, C, D …).
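
For completeness, here is a minimal C# sketch of calling the endpoint described above with HttpClient. The URL, header name, and request body come from the description above; the response format is not shown here, so the sketch simply prints the raw body.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class AudioCaptchaDemo
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("x-rapidapi-key", "<your RapidAPI key>");

        // Request body as described above: either a URL or base64 audio, plus the NATO flag.
        var body = "{ \"url\": \"https://pepite.s3.eu-west-1.amazonaws.com/65UHC.wav\", \"base64\": \"\", \"useNato\": false }";

        var response = await client.PostAsync(
            "https://audio-captcha.p.rapidapi.com/AudioCaptchaLambda",
            new StringContent(body, Encoding.UTF8, "application/json"));

        // Print the raw response; check the API's documentation for the exact response shape.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}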

Top 40 Nerd Jokes for Programmers to Liven Up Your Day | [Golden Collection]

Overview

Top Best Nerd Memes for Developers
Junior level memes
Middle level programmers’ memes
Senior level memes

About Flatlogic Platform [This is not a joke, this is for real]

Overview

Rev up your mood with a list of the best nerd memes for developers! Programming jokes are all over the internet and we are super delighted to show you a portion of our favourite ones.

Previously we've published a list of popular JavaScript memes, and this is a sort of sequel, just to mix our JavaScript tutorials, guides, and listings with some frivolous content. Though we can't compete with Reddit or Pikabu in the number of nerd jokes for web developers, we still want to make our own list of memes for both beginners and advanced coders.

Probably one of the most important things that you should learn when starting programming: don't touch something once it works. The point is, one false step and you will have to debug and support it.

Programming jokes are endless, because of the errors to which human nature is prone. Just like any company we have a channel in Slack where our developers share the funniest and most ridiculous situations they face each day.

There is a common opinion that the majority of web developers are very private people, or so-called introverts, to some extent. Many of them are no strangers to a difficult fate, looking like hermits in their worn sweaters and horn-rimmed eyeglasses. But as it often happens, they are underrated geniuses and prodigies of their age with borderless imagination and an exquisite sense of humor.

Some life situations are reflected in some of these memes as well. Perhaps you will recognize in these pictures some of your colleagues or friends. TBH, sometimes we also don’t understand each other, and computer nerd jokes are a separate chapter in the world of comedy.

However, humor is a very subjective thing; something that seems funny to programmers or any tech crowd may be absolutely lame or just unfunny to everyone else. For instance, I promise, here you will not see jokes like 'Why did the database admin break up with his wife? She had one-to-many relationships.'

If you feel like that is not the most hilarious meme, or you simply have something better, send us your favourite gags! We are always open to exchanging our vibe, feel free to send us some funny programming memes on Twitter or Facebook.

Top Best Coding Jokes for Developers

Junior Level Memes

Memes for Middle Developers

Senior Level Meme Jokes

About Flatlogic Platform

Recently we've launched a platform for building web applications without coding. It is a flexible, easy-to-use tool which helps you create a custom application with no effort. You just choose the stack of technologies and the design, and add the business logic by adding entities and fields and by defining relationships between them.

Your opinion counts! Make your own application in a few clicks now and share your thoughts with us! We want to get better and you can help us improve our web app builder. It won’t cost you anything!

The post Top 40 Nerd Jokes for Programmers to Liven Up Your Day | [Golden Collection] appeared first on Flatlogic Blog.