339: Apollo at CodePen

Rachel and Chris chat all things Apollo GraphQL. Apollo is in this weird category of software where by far most websites do not need it or anything like it. But for CodePen, we consider it nearly essential. The typical sales pitch for GraphQL applies to us for sure (e.g. only ask for the data you need!), but you can add to that the fact that it is empowering for front-end developers, which we have plenty of here on the CodePen staff. But beyond GraphQL, we need ergonomic ways to write those queries and trust the technology to do the right things. For example, 15 components on any given page might need to know the PRO status of a user, and thus be part of that component’s query, but the network very much does not need to make 15 requests. Apollo does stuff like figuring out what the page needs in aggregate, requesting and disseminating that information as efficiently as possible, and caching it. Plus we leverage Apollo for client-only state too, meaning we have a unified system for state management that plays very nicely in our React-based world.

Time Jumps

00:34 Working on Apollo stuff at CodePen

02:48 How do you think of GraphQL and Apollo?

06:52 Dealing with pagination

09:04 Building out the server for GraphQL

13:48 Sponsor: Jetpack Backup

15:28 Apollo pricing

17:41 Apollo Studio and schema

21:18 Why we did this work now

26:34 Manipulating the cache

Sponsor: Jetpack Backup Stand-Alone Plugin

If the only feature of Jetpack you need is the backups, now you can install that as a stand-alone plugin and have a paid plan for that feature alone. Built and hosted on WordPress.com’s secure infrastructure, Jetpack Backup provides peace of mind — you can rest easy knowing that what you’ve built will always be there and can be easily recovered in an emergency.

Building React Admin Step By Step

Introduction
What is React Admin
How to build React Admin
How to build React Admin Easier with Flatlogic’s Full Stack Web App Generator
Conclusions

Introduction

Every web project has two sides: the one seen by users and the admin page that its manager uses to control every aspect of every page of said project. To draw a parallel, the user side of the site is our usual beautiful world, and an Admin Page or Admin Console is like the Upside Down from “Stranger Things”: dark and scary, but where all the important stuff happens. Or, to draw another analogy, a React-based Admin Page is like the Enterprise’s engine rooms: all the main characters like Captain Kirk and Spock are on the beautiful, well-lit main deck, which would be rendered useless if all the ship’s engineers abandoned the above-mentioned engine rooms. So, the importance of a well-made Admin Page cannot be overstated if you need your whole project to run smoothly and correctly. First off, let’s understand fully what a React Admin Page is and how it operates.

What is React Admin

A React Admin Page, or React Admin Console, is, to put it simply, a framework that contains all the information about the site and its content, including information about your products, users, etc. A React Admin Page also gives you the ability to control everything about this content on your website or app.

In other words, it is the control tool that you use to manage and improve your web project. Thus, it is a tool of great importance, able to make or break your business, especially if you work in e-commerce. Don’t get us wrong, we are not telling you this to scare you, but merely to emphasize the significance of creating a React Admin Page worthy of your business.

And, before we give you a quick rundown on how to create your own basic React Admin Page, there is only one little question left standing: why use React as the basis for your Admin Page in the first place? React is, no doubt, one of the best bases for an Admin Page. It is easy to create, improve, use and, most importantly, easy to maintain. That fact pretty much makes the decision for you on what to use as the basis for not only your Admin Page or Admin Console, but your whole web project.

That being said, let’s have a look at how to create your own CRUD React Admin Page in two ways:

1.    By actually sitting down and writing the code, spending precious time and effort;

2.    Or by seamlessly and effortlessly creating it with the help of Flatlogic’s Full Stack Web App Generator.

More on seamlessness and effortlessness of option number two later, as now we take a look at path number one.

How to Build React Admin

In order to create your own React Admin Page you will need some preliminary preparations, which mainly consist of installing npx (at a version newer than 8) and create-react-app.

With the preliminaries out of the way, your next step is to create a new folder that will contain and store your React Admin Page’s codebase. When that is done, use your preinstalled create-react-app with the following command:

npx create-react-app my-react-admin-page

This command creates a blank React application that will serve as your React Admin Page after we fill it with all the needed innards. Now it is time to install the react-admin package, as well as the data provider that will help us connect to a fake API:

cd my-react-admin-page

npm install react-admin ra-data-json-server

npm start

And now it is time to start working on the above-mentioned React Admin Page innards. Bear in mind that we won’t fill our frontend with any real data; instead, we are going to use an API for testing and prototyping. This lets us forget about creating custom data providers for now. The first step is to replace the contents of src/App.js with the following code to set up your React Admin’s default page:

import { Admin } from 'react-admin';
import jsonServerProvider from 'ra-data-json-server';

const dataProvider = jsonServerProvider('https://jsonplaceholder.typicode.com');

function App() {
  return (
    <Admin dataProvider={dataProvider} />
  );
}

export default App;

The next step is setting up the Resource component, which lets you tell react-admin to fetch and display a user resource. The process is quite simple: your data provider processes the fetch and displays the requested users with the help of the ListGuesser, which takes the data the resource was provided and does its best to guess the format of the initial data grid. This, in turn, allows us to use that initial data grid to generate our initial list code. To set the Resource component up you will need the following code:

import { Admin, Resource, ListGuesser } from 'react-admin';
import jsonServerProvider from 'ra-data-json-server';

const dataProvider = jsonServerProvider('https://jsonplaceholder.typicode.com');

function App() {
  return (
    <Admin dataProvider={dataProvider}>
      <Resource name="users" list={ListGuesser}/>
    </Admin>
  );
}

export default App;

Now, in order to customize the set of columns, open your browser’s developer console (inspect element → console): ListGuesser logs the guessed list component code there. Copy it so you can systemize and customize it. The result will look something like this:

export const UserList = props => (
  <List {...props}>
    <Datagrid rowClick="edit">
      <TextField source="id" />
      <TextField source="name" />
      <TextField source="username" />
      <EmailField source="email" />
      <TextField source="address.street" />
      <TextField source="phone" />
      <TextField source="website" />
      <TextField source="company.name" />
    </Datagrid>
  </List>
);

To keep everything nice and tidy, create a components folder in your src and paste the copied code into a users.js file there. What you get as a result should look as follows:

import { List, Datagrid, TextField, EmailField } from 'react-admin';

export const UserList = props => (
  <List {...props}>
    <Datagrid rowClick="edit">
      <TextField source="id" />
      <TextField source="name" />
      <TextField source="username" />
      <EmailField source="email" />
      <TextField source="address.street" />
      <TextField source="phone" />
      <TextField source="website" />
      <TextField source="company.name" />
    </Datagrid>
  </List>
);

Now you can get rid of the unnecessary information. For this example, let’s get rid of the id and username columns, disable sorting on the phone column, and change the street address and company name field labels with the label prop. Now, this part should look like this:

import { List, Datagrid, TextField, EmailField } from 'react-admin';

export const UserList = props => (
  <List {...props}>
    <Datagrid rowClick="edit">
      <TextField source="name" />
      <EmailField source="email" />
      <TextField source="address.street" label="Street Address"/>
      <TextField source="phone" sortable={false}/>
      <TextField source="website" />
      <TextField source="company.name" label="Company Name"/>
    </Datagrid>
  </List>
);

At this point, it is time to replace the ListGuesser in the Resource component with the list above. To do that, go back to App.js and add the following lines:

import {UserList} from "./components/users";

<Resource name="users" list={UserList} />

And this part of the process is finished. Now you will need to repeat the process to set up your posts. But keep in mind that each post should be connected to its userId to create a reference between a post and the user who created it.

So, let’s take a closer look at this aspect, as the preceding steps of the post setup are similar to the user setup. In order to ensure the correlation between a post and its creator, add the following lines:

import { Admin, Resource, ListGuesser } from 'react-admin';
import jsonServerProvider from 'ra-data-json-server';
import {UserList} from "./components/users";

const dataProvider = jsonServerProvider('https://jsonplaceholder.typicode.com');

function App() {
  return (
    <Admin dataProvider={dataProvider}>
      <Resource name="users" list={UserList}/>
      <Resource name="posts" list={ListGuesser}/>
    </Admin>
  );
}

export default App;

To create relationships between the post and the user you will need to use the ReferenceField component, setting up the foreign key with the source="userId" prop. After that, you will need to change the list prop for the new posts resource to reference PostList in App.js. To do that, replace the ListGuesser in the posts resource’s list prop with PostList.

The next step in creating your React Admin is to add an edit button in order to allow content modifications. And the first thing you will need to do here is to add the EditButton component into your Datagrid. The code for this operation will look like this:

import { List, Datagrid, ReferenceField, TextField, EmailField, EditButton } from 'react-admin';

export const PostList = props => (
  <List {...props}>
    <Datagrid rowClick="edit">
      <ReferenceField source="userId" reference="users">
        <TextField source="name" />
      </ReferenceField>
      <TextField source="id" />
      <TextField source="title" />
      <TextField source="body" />
      <EditButton/>
    </Datagrid>
  </List>
);

The second thing you will need to do here is to pass an edit prop to your resource. To do that, use the EditGuesser component and pass it to the posts resource in src/App.js. What you need to get is as follows:

import { Admin, Resource, ListGuesser, EditGuesser } from 'react-admin';
import jsonServerProvider from 'ra-data-json-server';
import {UserList} from "./components/users";
import {PostList} from "./components/posts";

const dataProvider = jsonServerProvider('https://jsonplaceholder.typicode.com');

function App() {
  return (
    <Admin dataProvider={dataProvider}>
      <Resource name="users" list={UserList}/>
      <Resource name="posts" list={PostList} edit={EditGuesser}/>
    </Admin>
  );
}

export default App;

At this point, the EditGuesser component will generate a guessed edit component and log it to the browser console. Copy it into src/components/posts.js (adding the necessary react-admin imports). The whole thing will look like this:

import { Edit, SimpleForm, ReferenceInput, SelectInput, TextInput } from 'react-admin';

export const PostEdit = props => (
  <Edit {...props}>
    <SimpleForm>
      <ReferenceInput source="userId" reference="users">
        <SelectInput optionText="id"/>
      </ReferenceInput>
      <TextInput source="id"/>
      <TextInput source="title"/>
      <TextInput source="body"/>
    </SimpleForm>
  </Edit>
);
If everything is fine and dandy with this, copy and paste the edit code, after which it is time to create the PostCreate component. This component is quite similar to the previous ones, with the exception of using a different wrapper component: here, you will need the Create component.
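
As a rough sketch (this component isn’t spelled out step by step above, but it mirrors the PostEdit component, minus the id input, since the API assigns ids on creation), PostCreate might look something like this:

import { Create, SimpleForm, ReferenceInput, SelectInput, TextInput } from 'react-admin';

export const PostCreate = props => (
  <Create {...props}>
    <SimpleForm>
      <ReferenceInput source="userId" reference="users">
        <SelectInput optionText="id"/>
      </ReferenceInput>
      <TextInput source="title"/>
      <TextInput source="body"/>
    </SimpleForm>
  </Create>
);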

But that’s not the end of this whole ordeal, as you will need to supply the create prop in the React Admin resource as well. In order to do that, add the PostEdit and PostCreate components to the import. After that, add them to the posts resource:

<Resource name="posts" list={PostList} edit={PostEdit} create={PostCreate}/>

“That surely must be it. My React Admin is ready!” – you might think. But unfortunately, as we told you at the beginning of this article, writing your React Admin from scratch is an extremely long and winding road. After all, it surely needs authentication, so your API will not be accessible to the general public. To add it, create a new directory and a new file: src/providers/authProvider.js. Your code for this part should look somewhat like this:

export default {
  login: ({ username }) => {
    localStorage.setItem('username', username);
    return Promise.resolve();
  },
  logout: () => {
    localStorage.removeItem('username');
    return Promise.resolve();
  },
  checkError: ({ status }) => {
    if (status === 401 || status === 403) {
      localStorage.removeItem('username');
      return Promise.reject();
    }
    return Promise.resolve();
  },
  checkAuth: () => {
    return localStorage.getItem('username')
      ? Promise.resolve()
      : Promise.reject();
  },
  getPermissions: () => Promise.resolve(),
};

After that, import the authProvider in src/App.js and pass it to the Admin component via the authProvider={authProvider} prop:

import authProvider from "./providers/authProvider";
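
With that in place, the Admin component from earlier would look something like this:

<Admin dataProvider={dataProvider} authProvider={authProvider}>
  <Resource name="users" list={UserList}/>
  <Resource name="posts" list={PostList} edit={PostEdit} create={PostCreate}/>
</Admin>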

And only now do you have a very crude and extremely basic React Admin that will still require services and backend tinkering and wiring up, but we digress. The main takeaway from this part of the article should be that this process, while not particularly difficult and somewhat simplified here, is best described as time-consuming.

But what if we told you that you can create a fully functional and stunningly beautiful React Admin Page in under five minutes? Let us introduce you to your new best friend, as we get to the next part of the article! 

How to build React Admin easier with Flatlogic’s Full Stack Web App Generator

When we said that creating a React Admin in under five minutes is possible, we weren’t joking around. It is more than possible with the help of Flatlogic’s Full Stack Web App Generator, which allows you to create ready-made React Admin Pages in just five easy steps. So, take out your stopwatch, and let’s undertake this pleasant little journey together!

Step №1. Choosing a name for your React Admin Page

The process of creating a React Admin Page with the help of Flatlogic’s Full Stack Web App Generator is already a thousand times easier than doing it by hand, as the first step involves no writing or pre-installing anything, just the simple task of choosing a name for your API. After you do it, it’s already time for the second step.

Step №2. Choosing your React Admin Page’s Stack

This step is important, but also easy. Just pick the basis for your backend, frontend, and database. For the purposes of this article, we will, of course, choose React as the frontend option. The rest is all up to you.

Step №3. Choosing design for your React Admin Page

This step is visually pleasing, as you get to choose from a number of stunningly beautiful ready-to-use designs. For our example, we’ve decided to pick the marvelous “Flatlogic” design.

Step №4. Creating your React Admin Page’s Database Schema

This step is quite important, as it is the basis for your React Admin Page. But fear not, as it is highly editable and customizable to your project’s needs. For the purpose of this example, we decided that our imaginary project is an E-commerce one and, quite luckily, Flatlogic’s Full Stack Web App Generator has a ready-made database schema just for this purpose. Bear in mind that even though it is ready-made, it is still customizable and ready to be tailored to your project’s specifics.

Step №5. Reviewing and generating your React Admin Page

Now we are already at the finish line. All we have to do is just ensure that we’ve chosen everything we wanted and press the majestic “Create Project” button.

After that, just sit back and let Flatlogic’s Full Stack Web App Generator do what it does best. And after a laughably short time, you will have a done-and-dusted React Admin Page on your hands.

Conclusion

To summarize, the main goal of this article was simple: to show you how easy and effortless the process of creating such a pivotal part of a web project as an Admin Page/App/Console can be with the help of Flatlogic’s Full Stack Web App Generator. And we are absolutely sure that this goal has been achieved without any hitches. Now you don’t have to spend your own and your colleagues’ valuable and precious time on this important task; instead, you can do it in a jiffy. Finally, thank you for spending your time on this article; we hope you have found it really helpful. Have a nice day and, as always, feel free to read up on more of the articles in our blog!

Web Summit Tickets Give Away! 🧨

Win the Tickets to Web Summit 2022

Hello people! How do you read, over?

Do you want to take part in Web Summit 2022, in Lisbon? We’ll provide you with this online opportunity! Web Summit is an annual event that brings together the leading companies, professionals, and amateurs of the global tech industry from all over the world! Forbes says that it is one of the greatest tech conferences on the planet!

We’re announcing an exciting contest, and we want you just to leave feedback on our brand new tool Web Application Generator on our forum! Test our new tool, try to build your own project, and write a comment in the thread on our forum with your honest feedback on the web app builder.

We have 6 tickets to Web Summit in Portugal to give away, and here is how to win yours.

How to Enter Web Summit Contest 2022

Write your detailed feedback on our web app generator in the thread on our forum. Tell us what you think of our idea, and which features of the web app builder you like most.
Entry period: October 27th until October 30th.
Be an active member of our Flatlogic Community and stay tuned for the Web Summit tickets contest final results on October 30th!
6 lucky winners will be chosen on Friday night!

The contest will run until October 30th, 2021, 12 AM.

How Will the Web Summit Winners Be Chosen?

We will announce 6 lucky winners on our socials (Twitter and Facebook), on Friday night, at 01:00 EST. Our super jury will choose the best of all feedback examples.

Jury Members

Philip DAINEKA – CEO of Flatlogic

Philip is a multitasking tech genius and the inspiration of Flatlogic. His deep expertise lies in engineering and marketing with a focus on creativity. He has worked with major brands like Cisco, Samsung, Walmart, and many others. He also believes that the driving force of any business lies in relationships with clients, with customers of the product. 

Eugene STEPNOV – 2nd Boss of Flatlogic, Product owner

Eugene is an inspirational team-lead for our Flatlogic sales and marketing processes. He is a passionate writer,  self-taught coder, and a person that is continually trying to enhance his knowledge in the world of web development. He can’t imagine a day without learning more about the latest tech news, startups, apps, and much more.

Stay tuned!

We encourage everyone to participate in our giveaway! We wish you good luck and look forward to your posts with feedback on the Flatlogic Web App Generator on our forum.

.NET Conf, Packt Publishing, and My Book – Learn WinUI 3

.NET Conf 2021 is coming up on November 9-11. It’s going to be an exciting week with both Visual Studio 2022 and .NET 6 being released. There has been no date announced yet for the Windows App SDK 1.0 and WinUI 3 release, but I expect them to launch very soon as well. We do know that the third release candidate for Windows App SDK 1.0 will be coming this week!

Packt Publishing is one of the sponsors of .NET Conf this year. They will also be taking part in the secret decoder challenge for conference attendees. The contest will give the audience the chance to win a $500 prize from Packt. Make sure you watch .NET Conf to find out how you can participate and win great prizes from many of the sponsors, including Packt.

As part of the challenge, Packt has launched a .NET Conf storefront on Amazon featuring books about .NET, ASP.NET Core, Visual Studio, Blazor, WinUI, and more. Check out the books to get a jump on the latest from Microsoft and .NET.

The full .NET Conf 2021 agenda is now available. Save the date, learn about .NET, and win some great prizes! And don’t forget to check out my book, Learn WinUI 3 from Packt!

A comprehensive guide to go generate

Developers have strong tendencies to automate repetitive tasks, and this applies
to writing code as well. Therefore, the topic of metaprogramming is a hot
area of development and research, and hails back to Lisp in the 1960s. One
specific aspect of metaprogramming that has been particularly useful is
code-generation, or writing programs that emit other programs, or parts of
themselves. Languages that support macros have this capability built-in; other
languages extend existing features to support this (e.g. C++ template
metaprogramming).

While Go does not have macros or other forms of metaprogramming, it’s a
pragmatic language and it embraces code generation with support in the official
toolchain.

The go generate command was introduced all the way back in Go 1.4, and since then has been widely used in the Go
ecosystem. The Go project itself relies on go generate in dozens of places;
I’ll do a quick overview of these use cases later on in the post.

The basics

Let’s start with some terminology. The way go generate works is an
orchestration between three major players:

Generator: is a program or a script that is invoked by go generate. In
any given project multiple generators may be invoked, a single generator can
be invoked multiple times, etc.

Magic comments: are comments formatted in a special way in .go files
that specify which generator to invoke and how. Any comment that starts at
the very beginning of the line with the text //go:generate qualifies.

go generate: is the Go tool that reads Go source files,
finds and parses magic comments and runs the generators, as specified.

It’s very important to emphasize that the above is the whole extent of
automation Go provides for code generation. For anything else, the developer is
free to use whatever workflow works for them. For example, go generate
should always be run manually by the developer; it’s never invoked automatically
(say as part of go build). Moreover, since with Go we typically ship
binaries to users or execution environments, it is well understood that go
generate is only run during development (likely just before running go
build); users of Go programs shouldn’t know whether parts of the code are
generated and how.

This applies to shipping modules as well; go generate won’t run the
generators of imported packages. Therefore, when a project is published,
whatever generated code is part of it should be checked in and distributed
along with the rest of the code.

A simple example

Learning is best done by doing; to this end, I created a couple of simple Go
projects that will help me illustrate the topics explained by this post. The
first is samplegentool,
a basic Go tool that’s intended to simulate a generator. Here’s its entire
source code:

package main

import (
    "fmt"
    "os"
)

func main() {
    fmt.Printf("Running %s go on %s\n", os.Args[0], os.Getenv("GOFILE"))

    cwd, err := os.Getwd()
    if err != nil {
        panic(err)
    }
    fmt.Printf("  cwd = %s\n", cwd)
    fmt.Printf("  os.Args = %#v\n", os.Args)

    for _, ev := range []string{"GOARCH", "GOOS", "GOFILE", "GOLINE", "GOPACKAGE", "DOLLAR"} {
        fmt.Println(" ", ev, "=", os.Getenv(ev))
    }
}

This tool doesn’t read any code and doesn’t write any code; all it does is
carefully report how it’s invoked. We’ll get to the details shortly. Let’s first
examine another project – mymod.
This is a sample Go module with 3 files split into two packages:

$ tree
.
├── anotherfile.go
├── go.mod
├── mymod.go
└── mypack
└── mypack.go

The contents of these files are fillers; what matters is the go:generate
magic comments in them. Let’s take the one in mypack/mypack.go for example:

//go:generate samplegentool arg1 "multiword arg"

We see that it invokes samplegentool with some arguments. To make this
invocation work, samplegentool should be found somewhere in PATH. This
can be accomplished by running go build in the samplegentool project
to build the binary and then setting PATH accordingly [1]. Now, if we
run go generate ./... in the root of the mymod project, we’ll see
something like:

$ go generate ./...
Running samplegentool go on anotherfile.go
  cwd = /tmp/mymod
  os.Args = []string{"samplegentool", "arg1", "arg2", "arg3", "arg4"}
  GOARCH = amd64
  GOOS = linux
  GOFILE = anotherfile.go
  GOLINE = 1
  GOPACKAGE = mymod
  DOLLAR = $
Running samplegentool go on mymod.go
  cwd = /tmp/mymod
  os.Args = []string{"samplegentool", "arg1", "arg2", "-flag"}
  GOARCH = amd64
  GOOS = linux
  GOFILE = mymod.go
  GOLINE = 3
  GOPACKAGE = mymod
  DOLLAR = $
Running samplegentool go on mypack.go
  cwd = /tmp/mymod/mypack
  os.Args = []string{"samplegentool", "arg1", "multiword arg"}
  GOARCH = amd64
  GOOS = linux
  GOFILE = mypack.go
  GOLINE = 3
  GOPACKAGE = mypack
  DOLLAR = $

First, note that samplegentool is invoked on each file in which it appears
in a magic comment; this includes sub-directories, because we ran go generate
with the ./... pattern. This is really convenient for large projects that
have many generators in various places.

There’s a lot of interesting stuff in the output; let’s dissect it line by line:

cwd reports the working directory where samplegentool is invoked.
This is always the directory where the file with the magic comment
was found; this is guaranteed by go generate, and lets the generator
know where it’s located in the directory tree.

os.Args reports the command-line arguments passed to the generator.
As the output above demonstrates, this includes flags as well as multi-word
arguments surrounded by quotes.

The env vars passed to the generator are then printed out; see the
official documentation for a
full explanation of these. The most interesting env vars here are GOFILE,
which specifies the file name in which the magic comment was found (this path is
relative to the working directory), and GOPACKAGE, which tells the generator
which package this file belongs to.

What can generators do?

Now that we have a good understanding of how generators are invoked by
go generate, what can they do? Well, in fact they can do anything we’d like.
Really. Generators are just computer programs, after all. As mentioned earlier,
generated files are typically checked into the source code as well, so
generators may only need to run rarely. In many projects, developers won’t run
go generate ./... from the root as I did in the example above; rather,
they’ll just run specific generators in specific directories as needed.

In the next section I will provide a deep dive of a very popular generator – the
stringer tool. In the meantime, here are some tasks the Go project
itself uses generators for (this is not a full list; all uses can be found
by grepping go:generate in the Go source tree):

The gob package uses generators to emit repetitive helper functions for
encoding/decoding data.
The math/bits package uses a generator to emit fast lookup tables for
some of the bitwise operations it provides.
Several crypto packages use generators to emit hash function shuffle
patterns and repetitive assembly code for certain operations.
Some crypto packages also use generators to grab certificates from
specific HTTP URLs. Obviously, these aren’t designed to be run very
frequently…

net/http uses a generator to emit various HTTP constants.
There are several interesting uses of generators in the Go runtime’s source
code, such as generating assembly code for various tasks, lookup tables
for mathematical operations, etc.
The Go compiler implementation uses a generator to emit repetitive types and
methods for IR nodes.

In addition, there are at least two places in the standard library that use
generators for generics-like functionality, where almost duplicate code is
generated from existing code with different types. One place that does this is
the sort package [2], and the other is the suffixarray package.

Generator deep-dive: stringer

One of the most commonly used generators in Go projects is stringer – a tool that automates
the creation of String() methods for types so they implement the
fmt.Stringer interface. It is most frequently used to generate textual
representations for enumerations.

Let’s take an example from the standard library (math/big package);
specifically, the RoundingMode
type, which is defined as follows:

type RoundingMode byte

const (
ToNearestEven RoundingMode = iota
ToNearestAway
ToZero
AwayFromZero
ToNegativeInf
ToPositiveInf
)

At least up to Go 1.18, this is an idiomatic Go enumeration; to make the names
of these enum values printable, we’ll need to implement a String()
method for this type that would be a kind of switch statement enumerating
each value with its string representation. This is very repetitive work, which
is why the stringer tool is used.

I’ve replicated the RoundingMode type and its values in a small example
module
so we can experiment with the generator more easily. Let’s add the appropriate
magic comment to the file:

//go:generate stringer -type=RoundingMode

We’ll discuss the flags stringer accepts shortly. Let’s be sure to install
it first:

$ go install golang.org/x/tools/cmd/stringer@latest

Now we can run go generate; since in the sample project the file with the
magic comment lives in a sub-package, I’ll just run this from the module root:

$ go generate ./...

If everything is set up properly, this command will complete successfully
without any standard output. Checking the contents of the project, you’ll find
that a file named roundingmode_string.go has been generated, with these
contents:

// Code generated by "stringer -type=RoundingMode"; DO NOT EDIT.

package float

import "strconv"

func _() {
    // An "invalid array index" compiler error signifies that the constant values have changed.
    // Re-run the stringer command to generate them again.
    var x [1]struct{}
    _ = x[ToNearestEven-0]
    _ = x[ToNearestAway-1]
    _ = x[ToZero-2]
    _ = x[AwayFromZero-3]
    _ = x[ToNegativeInf-4]
    _ = x[ToPositiveInf-5]
}

const _RoundingMode_name = "ToNearestEvenToNearestAwayToZeroAwayFromZeroToNegativeInfToPositiveInf"

var _RoundingMode_index = [...]uint8{0, 13, 26, 32, 44, 57, 70}

func (i RoundingMode) String() string {
    if i >= RoundingMode(len(_RoundingMode_index)-1) {
        return "RoundingMode(" + strconv.FormatInt(int64(i), 10) + ")"
    }
    return _RoundingMode_name[_RoundingMode_index[i]:_RoundingMode_index[i+1]]
}

The stringer tool has multiple codegen strategies, depending on the nature
of the enumeration values it’s invoked on. Our case is the simplest one with a
“single consecutive run” of values. stringer will generate somewhat
different code if the values form multiple consecutive runs, and yet another
version if the values form no run at all. For fun and education, study the
source of stringer for the details; here let’s focus on the currently used
strategy.

First, the _RoundingMode_name constant is used to efficiently hold all
string representations in a single consecutive string. _RoundingMode_index
serves as a lookup table into this string; for example let’s take ToZero,
which has the value 2. _RoundingMode_index[2] is 26, so the code will index
into _RoundingMode_name at index 26, which leads us to the ToZero part
(the end is the next index, 32 in this case).
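
To make that lookup concrete, here is a small standalone sketch (with the generated tables copied in by hand, purely for illustration) that performs the same slicing the generated String() method does:

package main

import "fmt"

// Copies of the generated tables above, for illustration only.
const _RoundingMode_name = "ToNearestEvenToNearestAwayToZeroAwayFromZeroToNegativeInfToPositiveInf"

var _RoundingMode_index = [...]uint8{0, 13, 26, 32, 44, 57, 70}

func main() {
    i := 2 // the numeric value of ToZero
    // Slice between index i and i+1: [26:32], which yields "ToZero".
    fmt.Println(_RoundingMode_name[_RoundingMode_index[i]:_RoundingMode_index[i+1]])
}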

The code in String() also has a fallback in case more enum values were added
but the stringer tool was not rerun. In this case the value produced will
be RoundingMode(N) where N is the numeric value.

This fallback is useful because nothing in the Go toolchain guarantees that
generated code will remain in sync with the source; as mentioned before, running
the generators is entirely the developer’s responsibility.

What about the odd code in func _() though? First, notice that it literally
compiles to nothing: the function doesn’t return anything, has no side effects
and isn’t invoked. The goal of this function is to serve as a
compilation guard; it’s an extra bit of safety in case the original enum
changes in a way that’s fundamentally incompatible with the generated code, and
the developer forgets to rerun go generate. Specifically, it will protect
against existing enum values being modified. In this event, unless go
generate was rerun, the String() method may succeed but produce completely
wrong values. The compilation guard tries to catch this case with code that
will fail to compile due to an out-of-bounds array index.

Now let’s talk a bit about how stringer works; first, it’s instructional
to read its -help:

$ stringer -help
Usage of stringer:
    stringer [flags] -type T [directory]
    stringer [flags] -type T files... # Must be a single package
For more information, see:
    https://pkg.go.dev/golang.org/x/tools/cmd/stringer
Flags:
  -linecomment
        use line comment text as printed text when present
  -output string
        output file name; default srcdir/<type>_string.go
  -tags string
        comma-separated list of build tags to apply
  -trimprefix prefix
        trim the prefix from the generated constant names
  -type string
        comma-separated list of type names; must be set

We’ve used the -type parameter to tell stringer which type(s) to
generate the String() method for. In a realistic code base one
may want to invoke the tool on a package that has a number of types defined
within it; in this case, we likely want stringer to produce String()
methods for only specific types.
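
For instance, since -type accepts a comma-separated list, a single magic comment could cover two hypothetical enum types Color and State defined in the same package:

//go:generate stringer -type=Color,State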

We haven’t specified the -output flag, so the default is used; in this case,
the generated file is named roundingmode_string.go.

Sharp-eyed readers will notice that when we invoked stringer, we didn’t
specify what file it’s supposed to use as input. A quick scan of the tool’s
source code shows that it doesn’t use the GOFILE env var either. So how does
it know which files to analyze? It turns out that stringer uses
golang.org/x/tools/go/packages to load the whole package from its
current working directory (which, as you recall, is the directory the file
containing the magic comment is in). This means that no matter what file the
magic comment is in, stringer will analyze the whole package by default.
If you think about it for a moment, it makes sense because who said that the
constants have to be in the same file as the type declaration, for instance? In
Go, a file is just a convenient container for code; a package is the real unit
of input the tooling cares about.

In-source generators and build tags

So far we’ve assumed that the generator is somewhere in PATH while go
generate is running, but this may not always be the case.

Consider a very common scenario where your module has its own generator that’s
only useful for this specific module. When someone is hacking on the module,
you’d like them to be able to clone the code, run go generate and go
build, etc. However, if magic comments assume that generators are always in
PATH this won’t work unless the generators are built and properly pointed to
before running go generate.

The solution in Go is simple because of go run, which is a perfect match
for running generators that are just .go files somewhere in the module’s
tree. A simple example is available here.
This is a package file with a magic comment:

package mypack

//go:generate go run gen.go arg1 arg2

func PackFunc() string {
    return "insourcegenerator/mypack.PackFunc"
}

Notice how the generator is invoked here: with go run gen.go. This means
that go generate will expect to find gen.go in the same directory as the
file containing the magic comment. The contents of gen.go are:

//go:build ignore

package main

import (
    "fmt"
    "os"
)

func main() {
    // ... same main() as the simple example at the top of the post
}

It’s just a small Go program (in package main). The only thing of note is
the //go:build constraint that tells the Go toolchain to ignore this file
when building the project [3]. Indeed, gen.go is not a part of the package;
it’s in package main itself and is intended to be run with go
generate instead of being compiled into the package.

The standard library has many examples of small programs intended to be
invoked with go run that serve as generators.

The typical pattern is that 3 files are involved in code generation, and these
all coexist in the same directory/package:

The source file contains some of the package’s code, along with a magic
comment to invoke a generator with go run.
The generator, which is a single .go file with package main;
this generator is invoked by the go run in a magic comment from the
source file to produce the generated file. The generator .go file will
typically have a //go:build ignore constraint to exclude it from the build
of the package itself.
The generated file is emitted by the generator; in some conventions it would
have the same name as the source file, but followed by _gen (like
pack.go -> pack_gen.go); alternatively it could be some sort of
prefix (like gen). The code in the generated file is in the same package
as the code in the source file. In many cases, the generated file contains
some implementation details as unexported symbols; the source file can
refer to this in its code because the two files are in the same package.
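
Using hypothetical file names, the layout of such a package might look like this:

mypack/
├── pack.go       # source file; contains //go:generate go run gen.go
├── gen.go        # the generator; excluded from the build via //go:build ignore
└── pack_gen.go   # emitted by the generator and checked in with the rest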

Of course, none of this is mandated by the tooling – it just describes a common
convention; specific projects can be set up in a different way (where a single
generator emits code for multiple packages, for example).

Advanced features

This section discusses some of the advanced or lesser used features of go
generate.

The -command flag

This flag lets us define aliases for go:generate lines; this could be
useful if some generator was a multi-word command that we wanted to shorten for
multiple invocations.

The original motivation was likely to shorten go tool yacc to just yacc
with:

//go:generate -command yacc go tool yacc

After which yacc could be invoked multiple times with just this 4-letter
name instead of three words.
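
A subsequent magic comment using the alias might then have looked something like this (the file names and flags here are illustrative, not taken from a real project):

//go:generate yacc -o gram.go gram.y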

Interestingly, go tool yacc was removed from the core Go toolchain in Go 1.8, and I haven’t found any usage of
-command in either the main Go repository (outside of testing go
generate itself) or the x/tools modules.

The -run flag

This flag is for the go generate command itself, used to select which
generators to run. Recall our simple example where we had 3 invocations of
samplegentool in the same project. We can select only one of them to run
with the -run flag:

$ go generate -run multi ./...
Running samplegentool go on mypack.go
  cwd = /tmp/mymod/mypack
  os.Args = []string{"samplegentool", "arg1", "multiword arg"}
  GOARCH = amd64
  GOOS = linux
  GOFILE = mypack.go
  GOLINE = 3
  GOPACKAGE = mypack
  DOLLAR = $

The utility of this should be obvious for debugging: in a large project with
multiple generators, we often want to run only a subset for debugging / quick
edit-run loop purposes.

DOLLAR

Of the env vars auto-magically passed into generators, one stands out –
DOLLAR. What is it for? Why dedicate an env var to a character? There is
no use of this env var in the Go source tree.

The origin of DOLLAR can be traced to this commit by Rob Pike. As the change description
says, the motivation here is passing the $ char into a generator without
complicated shell escaping.
This is useful if go generate invokes a shell script or something that takes
a regexp as an argument.

The effect of DOLLAR can be observed with our samplegentool generator.
If we change one of the magic comments to:

//go:generate samplegentool arg1 $somevar

The generator reports its arguments to be

os.Args = []string{"samplegentool", "arg1", ""}

This is because $somevar is interpreted by the shell as referencing the
somevar variable, which doesn’t exist so its default value is empty. Instead
we can use DOLLAR as follows:

//go:generate samplegentool arg1 ${DOLLAR}somevar

And then the generator reports

os.Args = []string{"samplegentool", "arg1", "$somevar"}

[1]
An alternative approach is to place a link to samplegentool in the
GOBIN directory. If your GOBIN is not set, it defaults to
GOPATH/bin. If GOPATH is not set either (which is what I do,
since I’m all in on modules), it should be in $HOME/go/bin.

[2]
The sort package is a very interesting example, because it actually
parses the code of the package using go/parser and go/ast and
uses that as a basis for generating an alternative version. This approach
may change with CL 353069, though.

[3]
Looking at real-world examples of build constraints you’ll likely see
// +build ignore instead. This is the older syntax which was
superseded by the go:build syntax in Go 1.17.
For the time being, most generators will likely include both kinds of
constraints, to support building with Go 1.16 and earlier, as well as
future-proofing for upcoming releases.

New Alerts, DR with zero-downtime upgrades: Seq 2021.3 has shipped! 🎉

Today we’re very pleased to announce the release of Seq 2021.3. You can download the Windows installer from datalust.co, or pull the latest datalust/seq image from Docker Hub.

Seq 2021.3 includes improvements across the whole product.

New, completely rewritten Alerts — We’ve redesigned Seq Alerts as a full-fledged, top-level feature. Alerts get their own status-oriented dashboard, a much better editing experience, and rich, multi-channel notifications.

Disaster recovery instances — Reliably and securely replicate all Seq data to two nodes, preventing data loss even in the face of a total machine failure.

Zero-downtime upgrades — Fail over to a second Seq node to upgrade or perform maintenance on the first, all the while seamlessly ingesting live data and serving user queries.

Improvements under Docker — Seq 2021.3 plays much more nicely with the Linux kernel’s virtual memory manager, improving performance and stability. The datalust/seq Docker container now natively supports TLS, strengthening security and in some cases avoiding the need for a reverse proxy. Managing Seq on Docker is made easier with init script support.

PostgreSQL metadata storage — 2021.3 adds PostgreSQL (alongside MSSQL and the embedded metadata store) as a robust option for storing Seq’s internal configuration.

And there’s a lot more: bug fixes, query language improvements, a clearer search bar layout, icon updates, better secret storage, a dedicated /health endpoint, new seqcli features, millisecond precision in the date range picker… Read on for a summary of the major features, or check out the 2021.3 issue tracker milestone for all of the details.

What is Seq?

Seq is a centralized search and analysis server for structured application logs. It combines a flexible JSON data model and familiar query language to drive real-time log exploration, dashboarding, and alerting.

We build Seq to help teams easily identify and diagnose problems in complex applications and microservices.

Alerts in 2021.3

Structured log data is perfect for alerting. For any event or query result you can find with search, you can set an alert to notify you when that condition next occurs.

Is an app throwing exceptions unexpectedly? Have response times spiked? Are connections to a back-end web API timing out more than usual? Seeing a high rate of login failures? With the data of interest in the Events screen, press the (rather magical) bell 🔔 icon, and in a few clicks you’ll have a matching alert!

This not only works for simple signals and searches, but also for more complex SQL-style queries.

The new Alerts dashboard, pictured at the top of this post, provides at-a-glance status information for all the alerts on a Seq server. The notification history for an alert is tracked, so you can find out when and how often an alert has been triggered, and click through to the underlying data.

Alert notifications can be sent to email, Slack, Teams, and many other integrations.

If you’re familiar with Seq’s earlier alerting implementation, you’ll also be pleased to find that notifications now include a sample of contributing events 😎.

Read more in the new Alerts documentation.

Disaster recovery and zero-downtime upgrades

2021.3 is the first Seq release to support multi-node deployment, and a huge milestone on Seq’s clustering roadmap.

The DR configuration in Seq 2021.3 serves two important purposes:

All event data is precisely and securely replicated to a second Seq node, so that if the first Seq node is completely lost, historical data can be recovered
By switching between nodes, Seq itself, along with the hardware and operating system it runs on, can be upgraded and maintained during business hours without any interruption of log ingestion or access to the Seq UI

We’ve paid special attention to making DR instances easy to configure and maintain. If you’d like a walk-through of the process, or need some help deploying DR in your own environment, we’d love to help: please get in touch via [email protected]

The DR configuration in 2021.3 provides redundancy, but does not implement high availability (HA) or scale-out. These are a major part of our aims for 2022, and we’ll be able to talk more about our plans early in the new year.

Read more about setting up a DR instance in the Seq documentation.

Improvements under Docker

In the three years since Seq added Docker support, we’ve seen a massive shift towards Docker deployment. We’ve been continually learning, and 2021.3 has the benefit of a lot more experience running Seq under Docker.

This release addresses a common cause of Seq being OOM-killed by the Docker runtime: improvements in Seq’s storage engine release disk pages faster, leaving more container memory for query execution and to absorb allocation spikes. The result is a smoother, more stable Docker hosting experience with fewer restarts.

Also in 2021.3, administration is made easier through init script support. Init scripts are regular shell scripts placed in either a mounted /seqinit directory, or under /data/Init, that perform configuration tasks and interact with the Seq command line interface. When the datalust/seq container starts, it will detect new init scripts and run them before starting the Seq server process.

Native support for TLS/SSL termination means that the Seq container can now be deployed in production without a reverse proxy.

Finally, secret key providers make it possible to secure Seq’s internal encryption key using an external key management service, avoiding the use of environment variables or plain-text configuration for this on Docker/Linux.

What else is new?

A lot! You’ll immediately notice the new search bar button layout and icon set. While a tiny amount of muscle memory will need to be reprogrammed, we’ve been living with this design for a few months now and feel like it’s a worthwhile improvement.

You’ll notice that the JSON and CSV export buttons have moved to a much more discoverable position above the result set that they act on.

Also of note –

The Events screen date range picker gains support for millisecond precision
The Seq query language gains let bindings and lambda syntax for collection search operations

PostgreSQL is added alongside SQL Server/Azure SQL Database for robust external metadata storage
A dedicated /health endpoint and complementary seqcli node health command make monitoring Seq itself easier
Seq can now be installed and run under a Group Managed Service Account (gMSA) on Windows Server
Signals, dashboards, queries, retention policies, and workspaces can now be exported, imported, and synchronized between servers with seqcli template export and seqcli template import

Upgrading

Seq 2021.3 is a highly-compatible in-place update. All recent Seq versions can be upgraded by running the Windows MSI or pulling the datalust/seq:latest tag from Docker Hub.

Since Seq 2021.2, the Alerts API has changed significantly. If you’re integrating with Seq alerts programmatically and need help to move your code across from the dashboard-based alerting implementation, please reach out and we’ll be happy to assist.

Check out the upgrade guide for version-specific instructions, or if you’re upgrading from Seq 4.2 and earlier.

We hope you enjoy Seq 2021.3!

— The Seq Team

User Registration Form.com – A #JavaScript library to handle user registration and login.

Every website nowadays has a login and registration page, and as a developer, this boilerplate code takes up time and gets little credit. Using this JavaScript library, you can manage users and associate data with them that will be persisted remotely, all without any server-side coding. It’s secure, fast, and simple to use – and above all it’s free.

User Registration Form.com is a JavaScript library that you can drop into a page to perform basic user login/registration and store data associated with that user. It also has password reset functionality, allowing your users to reset their password if they forget it.

Quick Start

Include a reference to our JS library in the head of your page:

<script src="https://userregistrationform.com/user.js"></script>

To handle a new user registration, use this code:

user.apiKey = "1A6F4B67DB0937D50386054DE40AA767";
user.email = "[email protected]";
user.password = "a-strong-password";
user.register().then(() => {
  // This happens after a successful registration
  location.href = "Dashboard.html#" + user.id;
}).catch(() => {
  // This happens if the registration fails.
  alert("register failed");
});

To handle a user login, use this code:

user.apiKey = "1A6F4B67DB0937D50386054DE40AA767";
user.email = "[email protected]";
user.password = "a-strong-password";
user.login().then(() => {
  // This happens after a successful login
  location.href = "Dashboard.html#" + user.id;
}).catch(() => {
  // This happens if the login fails
  alert("login failed");
});

You can also perform a user login using the user id alone, as follows:

user.apiKey = "1A6F4B67DB0937D50386054DE40AA767";
user.id = "{user id here}";
user.login().then(() => {
  // user is now logged in
}).catch(() => {
  alert("Login failed");
});

Once logged in you can store data pertaining to the user by setting the “data” property as follows:

user.data = "Anything";

Although persisting data for a user is fast, it’s not instant, so you shouldn’t navigate away from the page until the data is saved. You can check for this with the following code:

user.data = "Anything";
user.onSetData = () => {
  // Safe to leave the page now.
};

Please be aware that if you intend to store personally identifiable information in the data field, you must comply with GDPR. You must let your users know that their data is being stored with Infinite Loop Development Ltd, and your users must give informed consent for this. Your data will be deleted if you do not follow GDPR guidelines.

FAQs:

Add Email verification

If you need your users to verify their email address before having full access to your system, then you can do this in conjunction with an email library such as SMTPJS.COM

The flow would be as follows (a sketch of the verification page follows the list):

User registers.
After registration, an email is sent with a link containing the user.id.
On visiting this URL, your page sets user.data to “verified” and then redirects to the dashboard.
User logs in.
If user.data is not set to “verified”, access is denied; otherwise the user is sent to the dashboard.
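
Here is a minimal sketch of what that verification page could run, assuming the link carries the user id after the # in the URL (the Dashboard.html redirect target is just an example):

user.apiKey = "1A6F4B67DB0937D50386054DE40AA767";
user.id = location.hash.substring(1); // user id taken from the link
user.login().then(() => {
  // Mark the account as verified, then move on once the data is saved.
  user.data = "verified";
  user.onSetData = () => {
    location.href = "Dashboard.html#" + user.id;
  };
}).catch(() => {
  alert("verification failed");
});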

Add a password reset

There is no way to recover a lost password, but you can ask a user to reset his/her own password. You will need a means to send email, and we recommend SMTPJS.COM for this.

The flow would be as follows:

User requests a password reset
An email is sent to the email address of the user with a link to a password reset page.
On the password reset page, the following code is executed:

async function reset() {
  user.apiKey = "1A6F4B67DB0937D50386054DE40AA767";
  user.email = "[email protected]";
  user.password = "new-password";
  try {
    await user.resetPassword();
    alert("password reset ok");
  } catch (e) {
    alert("password reset failed");
  }
}

List all users

On an admin page of your website, you can list all the users under your account by calling the following code:

async function list() {
  user.apiKey = "1A6F4B67DB0937D50386054DE40AA767";
  user.password = "**root password here**";
  try {
    var list = await user.list();
    alert(JSON.stringify(list));
  } catch (e) {
    alert("list failed");
  }
}

Since your root password is visible on this page, you should make sure that your admin page is not accessible to unauthorized visitors. The user list returned will contain all user ids, user email, and user data. If needed, you can assume the identity of one of your users by using the user id provided in the return data.

Connect a user with Shopping cart, CMS system, product X

This system is designed to be flexible enough that you can store any data you need regarding a user (up to 8Kb of data). Instead of storing plain text, like the examples above, you can also store JSON, so that you can represent a shopping basket of products, or a rich CMS user profile with name, address, and contact details. The user ID is always represented by a GUID (Globally Unique Identifier), so it is statistically impossible for there to be an overlap between the ID returned by User.js and any other system. We’re unlikely to be able to offer free advice on how to connect this system to your Product X, as that’s beyond the remit of our support for this free software. If you would like to sponsor the development of a feature, please contact us.
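
For example, a hypothetical shopping-basket profile (the field names here are made up) could be stored by serializing it to JSON:

user.data = JSON.stringify({
  name: "Jane Doe",
  basket: [
    { sku: "A100", qty: 2 },
    { sku: "B205", qty: 1 }
  ]
});
// Later, after login:
var profile = JSON.parse(user.data);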

.NET Hot Reload Support via CLI

Last week, our blog post and the removal of the Hot Reload capability from the .NET SDK repo led to a lot of feedback from the community.

First and foremost, we want to apologize. We made a mistake in executing on our decision and took longer than expected to respond back to the community. We have approved the pull request to re-enable this code path and it will be in the GA build of the .NET 6 SDK.

As a team, we are committed to .NET being an open platform and doing our development in the open. The very fact that we decided to adopt an open posture by default from the start for developing the Hot Reload feature is a testament to that. That said, like any development team, from time to time we have to look at quality, time, resources to make tradeoffs while continuing to make forward progress. The vast majority of the .NET developers are using Visual Studio, and we want to make sure VS delivers the best experience for .NET 6.

With the runway getting short for the .NET 6 release and Visual Studio 2022, we chose to focus on bringing Hot Reload to VS2022 first. We made a mistake in executing on this plan in the way it was carried out. In our effort to scope, we inadvertently ended up deleting the source code instead of just not invoking that code path. We underestimated the number of developers that are dependent upon this capability in their environments across scenarios, and how the CLI was being used alongside Visual Studio to drive inner loop productivity by many.

We are always listening to our customers’ feedback to deliver on their needs. Thank you for making your feedback heard. We are sorry that we made so many community members upset via this change across many parameters including timing and execution.

Our desire is to create an open and vibrant ecosystem for .NET. As is true with many companies, we are learning to balance the needs of the OSS community with being a corporate sponsor for .NET. Sometimes we don’t get it right. When we don’t, the best we can do is learn from our mistakes and be better moving forward.

Thank you for all of your feedback and your contributions over the years. We are committed to developing .NET in the open and look forward to continuing to work closely with the community.

Thank you!

The post .NET Hot Reload Support via CLI appeared first on .NET Blog.

Women IC engineer mentoring ring

During this fiscal year I ran a women IC mentoring ring in the Developer Division at Microsoft. It was part of the women’s mentoring ring program in our division. I’ve always felt a little sad when I looked around and saw very few women ICs at very senior levels. Most women who advanced to those levels became managers. This was what prompted me to suggest such a mentoring ring to the organizers of the women’s mentoring ring program. I’m happy to report that the ring remains one of the most requested so it will keep going for next fiscal year (I will however be leading a different mentoring ring just because we tend to change up the mentors in each ring from year to year).

As we are discussing next fiscal year’s mentoring program, I came across the notes from the last one and wanted to share some of the discussions we had (those that can be shared publicly), as I think these are generally applicable and could help other women (or men) too. This is a collective set of wisdom from everyone in my mentoring ring; many of these points were suggested by mentees, not me.

Time management

“There are so many PRs on my team, I want to review a lot of them so I can learn from them, but I don’t have enough time!”

This was a pretty common question, especially from more junior engineers and engineers who work on teams with a very diverse set of technologies, e.g., an API/SDK team. We talked about the following –

Spend your time wisely

You have to choose what you spend your time on. It’s always good to be curious and to want to learn more, but you have to decide what’s relevant enough for you to spend time on. Know who the experts are for areas relevant to you and ask them for their wisdom! On a healthy team, experienced folks are absolutely supposed to help newcomers – it’s part of their job.

Spend your time efficiently

If you believe something should reasonably be mentioned in a PR’s description but isn’t, it’s perfectly reasonable to ask for that info to be added. For example, if a PR introduces a new pattern that’s widely used in that PR, it’s reasonable to ask the author to describe that new pattern in the PR. Especially for APIs, asking for tests (or sometimes, if you can, enforcing tests) to go with the implementation is a great way to help you understand how to use the API. This can help you tremendously to understand the PR instead of having to figure out everything on your own.

“How to achieve better work life balance, especially for women who are mothers?”

Being clear about what other folks can expect from you is very helpful, e.g., “I don’t work on weekends or at night”, “I don’t answer emails outside business hours unless it’s absolutely urgent, in which case text me”, or “the baby is here and may start crying and I’ll need to attend to him/her during the meeting”.
Give a clear description of what work will be accomplished during a timeframe, e.g., “it takes X hours/weeks to get to functional, no stress, no perf, no doc, and this is with no distraction/interruption”.
Be strategic about your time management. For example, with kids you might need to scatter your work hours out around the kids’ activities.
When discussing work items with your manager, be clear about what to cut if you need to add something.
Adapt to changes, e.g., if you WFH while your coworkers work from the office, be more responsive to their issues so you are not the bottleneck, and request that they keep important conversations accessible to you (e.g., in Teams channels instead of only chatting amongst themselves).
At the end of the day, it’s about your choices; do what is comfortable for you. More importantly, stick to your choices/priority list once you make them. For example, if you decide having kids is a priority, it means being OK with not putting in as much time for work when the kids are little. It doesn’t make much sense to compare how many hours you are putting in with coworkers who don’t have kids and are willing to put in more hours, since spending time on your kids is also a fulfilling part of your life.

Communication skills

“How to voice my opinions in a conflicting situation instead of remaining quiet”

We’ve all been in conflicting situations before. Women tend to have more trouble voicing opinions in such situations. This was actually brought up in the context of conflicting situations when reviewing PRs. The following was suggested –

Determine if it’s something worth spending time on and/or how much time you should spend on it

Is it a trivial issue? Is it subjective? Does the other person have stronger opinions on it than you? If the answer to any of those is yes, that would be a factor that makes it less worthwhile for you to spend a lot of time on it.
Recognize that it’s perfectly valid to have emotions; it’s also important to recognize whether these emotions have any long-lasting effect. If they do, that makes it a problem more worth solving.
If you need to interact with the person you are talking to long term (eg, your teammate) it also makes it more worthwhile to sort it out.
Recognize you don’t have to accept all comments on a PR.

When it does get emotional

It’s much easier to talk in person (or on Teams), if you have such options, than to continue the discussion in email/GH. If we are talking about your teammates, you certainly have the option to talk to them face to face.
Take some time to cool down before chatting with the person.

Make the other person take some responsibility!

In the case of fairly subjective matters, ask the other person to take responsibility for proving their way is more desirable. For example, if the person claims something should be the new pattern, ask them to make it a pattern and convince others on the team it should be this way, especially if they are more senior than you.

“How to explain something clearly (eg, what you do; your idea; your suggestion) to others that may not be intimately familiar with it already?”

When applicable, use code snippets to help explain.
Do a demo for folks

Use pictures! Get a tablet. I use this one, which works very well with the whiteboard app and is super easy to set up

Send out info before the meeting. Some people would read it and others may not. If the doc isn’t very long you could even dedicate the first few minutes of the meeting to make sure everyone has read it (briefly).

A good meme/video/analogy with an everyday item could make your explanation much easier to understand. I have an example of explaining GC tuning with food courts here.

Share your explanation with one person to see what they think of it before sharing with more people
Ask the person if the thing you just explained made sense – this doesn’t work 100% of the time, as some people will say it does when it doesn’t, but there are definitely people who will say “no”, and asking this question gives them a perfect opportunity to ask questions.

“I’m junior on my team, how can I possibly change the culture of the team?”

One trick is to find a more senior person who’s willing to mentor and advocate for you – when you ask nicely, usually folks will say yes. Asking someone for 15 mins of their time for a meeting will almost always yield a “yes”. Be sure to be respectful of their time – one key thing I always make sure of is to have topics prepared before the mentoring session so I don’t waste the other person’s time.

“English is not my first language and I don’t feel confident when speaking it. And when my teammates who share the same first language with me always talk to me in that language, what should/can I do?”

(This question obviously assumes English is the common language at your workplace; replace it with whatever language applies at yours.)

If you are not talking about work, it’s totally okay to talk in whatever language you are comfortable with, but if you are talking about work, really force yourself to get into the habit of speaking English. This is not just to practice your English; you also appear a little rude to your coworkers who don’t understand that language (it’s pretty easy for them to figure out whether you are talking about work – they hear English words here and there). Indeed it can be awkward to force yourself, and the coworkers who want to talk to you about work in that other language, to speak English, but it creates a much larger long-term benefit.

“Do I need to appear confident all the time?”

Not at all! It’s totally fine to not be confident in topics you aren’t knowledgeable about. Be okay with asking naive questions – if you are really worried, you could prefix your questions with “I’m not familiar with this so my questions are probably gonna be naive”.

Productivity tools

Use SharpLab to see codegen for C# code

Use godbolt to see codegen for C++ code

Use ILSpy to see source for a .NET assembly

Use MermaidJS if you want to draw professional-looking charts with simple lines of text. An example is below, done with a simple MermaidJS script and a MermaidJS extension in VS Code:

graph TD

A(obj0/1/2 in gen0) -->|gen0 GC| B(obj0 dies<br>obj1/2 in gen1)
B -->|more allocation| C(obj3/4 in gen0<br>obj1/2 in gen1)
C -->|gen0 GC| D(obj3 dies<br>obj1/2/4 in gen1)
C -->|gen1 GC| E(obj1/3 dies<br>obj4 in gen1<br>obj2 in gen2)
C -->|gen2 GC| F(obj1/3 dies<br>obj4 in gen1<br>obj2 in gen2)
classDef nodeStyle fill:#eee,stroke:#555;
class A,B,C,D,E,F nodeStyle;
linkStyle 0,2,3,4 color:blue;
linkStyle 1 color:teal;

Use the Docs Authoring Pack extension in VS code to help with doc work

Use the Code spell checker extension in VS code to check for spelling

Use the OpenAPI editor extension in VS code to make looking at swagger files (REST API description) easy


The post Women IC engineer mentoring ring appeared first on .NET Blog.

Create and issue verifiable credentials in ASP.NET Core using Azure AD

This article shows how Azure AD verifiable credentials can be issued and used in an ASP.NET Core application. An ASP.NET Core Razor Pages application is used to implement the credential issuer. To issue credentials, the application must manage the credential subject data and authenticate the users who would like to add verifiable credentials to their digital wallet. The Microsoft Authenticator mobile application is used as the digital wallet.

Code: https://github.com/swiss-ssi-group/AzureADVerifiableCredentialsAspNetCore

Blogs in this series

Getting started with Self Sovereign Identity SSI
Challenges to Self Sovereign Identity

Setup

Two ASP.NET Core applications are implemented, one to issue and one to verify the verifiable credentials. The credential issuer must administer and authenticate its identities to issue verifiable credentials; a verifiable credential issuer should never issue credentials to unauthenticated subjects of the credential. As the verifier normally only authorizes the credential, it is important to know that the credentials were at least issued correctly. As a verifier, we do not know who, or mostly what, sends the verifiable credentials, but at least we know that the credentials are valid if we trust the issuer. It is possible to use private holder binding for the holder of a wallet, which would increase the trust between the verifier and the issued credentials.

The credential issuer in this demo issues credentials for driving licenses using Azure AD verifiable credentials. The ASP.NET Core application uses Microsoft.Identity.Web to authenticate all identities. In a real application, 2FA would be required for all users; Azure AD supports this well. The administrators would also require admin rights, which could be implemented using Azure security groups or Azure roles that are added to the application as claims after the OIDC authentication flow, as sketched below.
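As a minimal sketch of that idea (the “DrivingLicenseAdmin” policy name, the “/Admin” folder, and the “admin” role value are assumptions for illustration, not part of the sample):

// Startup.ConfigureServices: require an admin role claim (hypothetical value)
// before the administration Razor pages can be used.
services.AddAuthorization(options =>
{
    options.AddPolicy("DrivingLicenseAdmin", policy =>
        policy.RequireRole("admin"));
});

services.AddRazorPages(options =>
{
    // Apply the policy to every Razor page under /Admin.
    options.Conventions.AuthorizeFolder("/Admin", "DrivingLicenseAdmin");
});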

Any authenticated identity can request credentials (a driving license in this demo) for themselves and no one else. The administrators can create the data which is used as the subject, but cannot issue credentials for others.

Azure AD verifiable credential setup

Azure AD verifiable credentials are set up using the Azure docs for the REST API and the Azure verifiable credentials ASP.NET Core sample application.

Following the documentation, a display file and a rules file were uploaded for the verifiable credentials created for this issuer. In this demo, two credential subjects are defined to hold the data when issuing or verifying the credentials.

{
  "default": {
    "locale": "en-US",
    "card": {
      "title": "National Driving License VC",
      "issuedBy": "Damienbod",
      "backgroundColor": "#003333",
      "textColor": "#ffffff",
      "logo": {
        "uri": "https://raw.githubusercontent.com/swiss-ssi-group/TrinsicAspNetCore/main/src/NationalDrivingLicense/wwwroot/ndl_car_01.png",
        "description": "National Driving License Logo"
      },
      "description": "Use your verified credential to prove to anyone that you can drive."
    },
    "consent": {
      "title": "Do you want to get your Verified Credential?",
      "instructions": "Sign in with your account to get your card."
    },
    "claims": {
      "vc.credentialSubject.name": {
        "type": "String",
        "label": "Name"
      },
      "vc.credentialSubject.details": {
        "type": "String",
        "label": "Details"
      }
    }
  }
}

The rules file defines the attestations for the credentials. Two standard claims are used to hold the data: given_name and family_name. These claims are mapped to our name and details subject claims and hold all the data. Adding custom claims to Azure AD or Azure B2C is not so easy, so I decided that for the demo it would be easier to use standard claims, which work without custom configuration. The data sent from the issuer to the holder of the claims can be set in the application. It should be possible to add credential subject properties without requiring standard AD id_token claims, but I was not able to set this up in the current preview version.

{
  "attestations": {
    "idTokens": [
      {
        "id": "https://self-issued.me",
        "mapping": {
          "name": { "claim": "$.given_name" },
          "details": { "claim": "$.family_name" }
        },
        "configuration": "https://self-issued.me",
        "client_id": "",
        "redirect_uri": ""
      }
    ]
  },
  "validityInterval": 2592001,
  "vc": {
    "type": [ "MyDrivingLicense" ]
  }
}

The rest of the Azure AD credentials setup follows the documentation exactly.

Administration of the Driving licenses

The verifiable credential issuer application is an ASP.NET Core Razor Pages application that accesses an Azure SQL database using Entity Framework Core. The administrator of the credentials can assign driving licenses to any user. The DrivingLicenseDbContext class defines the DbSet for driver licenses, and is registered with the DI container as sketched after the class.

public class DrivingLicenseDbContext : DbContext
{
    public DbSet<DriverLicense> DriverLicenses { get; set; }

    public DrivingLicenseDbContext(DbContextOptions<DrivingLicenseDbContext> options)
        : base(options)
    {
    }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        builder.Entity<DriverLicense>().HasKey(m => m.Id);

        base.OnModelCreating(builder);
    }
}
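The context is registered with the DI container in the usual EF Core way; a minimal sketch, assuming a “DefaultConnection” connection string in appsettings.json pointing at the Azure SQL database:

// Startup.ConfigureServices: register the EF Core context against SQL Server.
// Requires the Microsoft.EntityFrameworkCore namespace.
services.AddDbContext<DrivingLicenseDbContext>(options =>
    options.UseSqlServer(
        Configuration.GetConnectionString("DefaultConnection")));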

A DriverLicense entity contains the information we use to create verifiable credentials.

public class DriverLicense
{
    [Key]
    public Guid Id { get; set; }
    public string UserName { get; set; } = string.Empty;
    public DateTimeOffset IssuedAt { get; set; }
    public string Name { get; set; } = string.Empty;
    public string FirstName { get; set; } = string.Empty;
    public DateTimeOffset DateOfBirth { get; set; }
    public string Issuedby { get; set; } = string.Empty;
    public bool Valid { get; set; }
    public string DriverLicenseCredentials { get; set; } = string.Empty;
    public string LicenseType { get; set; } = string.Empty;
}

Issuing credentials to authenticated identities

When issuing verifiable credentials using the Azure AD REST API, an IssuanceRequestPayload payload is used to request the credentials which are to be issued to the digital wallet. Verifiable credentials are issued to a digital wallet, for the holder of that wallet. The payload classes are the same for all API implementations apart from the CredentialsClaims class, which contains the subject claims that match the rules file of your definition.

public class IssuanceRequestPayload
{
    [JsonPropertyName("includeQRCode")]
    public bool IncludeQRCode { get; set; }

    [JsonPropertyName("callback")]
    public Callback Callback { get; set; } = new Callback();

    [JsonPropertyName("authority")]
    public string Authority { get; set; } = string.Empty;

    [JsonPropertyName("registration")]
    public Registration Registration { get; set; } = new Registration();

    [JsonPropertyName("issuance")]
    public Issuance Issuance { get; set; } = new Issuance();
}

public class Callback
{
    [JsonPropertyName("url")]
    public string Url { get; set; } = string.Empty;

    [JsonPropertyName("state")]
    public string State { get; set; } = string.Empty;

    [JsonPropertyName("headers")]
    public Headers Headers { get; set; } = new Headers();
}

public class Headers
{
    [JsonPropertyName("api-key")]
    public string ApiKey { get; set; } = string.Empty;
}

public class Registration
{
    [JsonPropertyName("clientName")]
    public string ClientName { get; set; } = string.Empty;
}

public class Issuance
{
    [JsonPropertyName("type")]
    public string CredentialsType { get; set; } = string.Empty;

    [JsonPropertyName("manifest")]
    public string Manifest { get; set; } = string.Empty;

    [JsonPropertyName("pin")]
    public Pin Pin { get; set; } = new Pin();

    [JsonPropertyName("claims")]
    public CredentialsClaims Claims { get; set; } = new CredentialsClaims();
}

public class Pin
{
    [JsonPropertyName("value")]
    public string Value { get; set; } = string.Empty;

    [JsonPropertyName("length")]
    public int Length { get; set; } = 4;
}

/// Application specific claims used in the payload of the issue request.
/// When using the id_token for the subject claims, the IDP needs to add the values to the id_token!
/// The claims can be mapped to anything then.
public class CredentialsClaims
{
    /// <summary>
    /// Attribute names need to match a claim from the id_token.
    /// </summary>
    [JsonPropertyName("given_name")]
    public string Name { get; set; } = string.Empty;

    [JsonPropertyName("family_name")]
    public string Details { get; set; } = string.Empty;
}

The GetIssuanceRequestPayloadAsync method sets the data for each identity that requests the credentials. Only a signed-in user can request the credentials, and only for themselves. The context.User.Identity is used, and the data is selected from the database for the signed-in user. It is important that credentials are only issued to authenticated users; users and the application must be authenticated correctly, using 2FA and so on. By default, the credentials are only authorized on the verifier, which is probably not enough for most security flows.

public async Task<IssuanceRequestPayload> GetIssuanceRequestPayloadAsync(HttpRequest request, HttpContext context)
{
    var payload = new IssuanceRequestPayload();
    var length = 4;
    var pinMaxValue = (int)Math.Pow(10, length) - 1;
    var randomNumber = RandomNumberGenerator.GetInt32(1, pinMaxValue);
    var newpin = string.Format("{0:D" + length.ToString() + "}", randomNumber);

    payload.Issuance.Pin.Length = 4;
    payload.Issuance.Pin.Value = newpin;
    payload.Issuance.CredentialsType = "MyDrivingLicense";
    payload.Issuance.Manifest = _credentialSettings.CredentialManifest;

    var host = GetRequestHostName(request);
    payload.Callback.State = Guid.NewGuid().ToString();
    payload.Callback.Url = $"{host}:/api/issuer/issuanceCallback";
    payload.Callback.Headers.ApiKey = _credentialSettings.VcApiCallbackApiKey;

    payload.Registration.ClientName = "Verifiable Credential NDL Sample";
    payload.Authority = _credentialSettings.IssuerAuthority;

    var driverLicense = await _driverLicenseService.GetDriverLicense(context.User.Identity.Name);

    payload.Issuance.Claims.Name = $"{driverLicense.FirstName} {driverLicense.Name} {driverLicense.UserName}";
    payload.Issuance.Claims.Details = $"Type: {driverLicense.LicenseType} IssuedAt: {driverLicense.IssuedAt:yyyy-MM-dd}";

    return payload;
}
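The GetRequestHostName helper is not shown in the post; a sketch of what it presumably returns, judging by how the callback URL is built (an assumption, not the sample's code):

private static string GetRequestHostName(HttpRequest request)
{
    // Scheme and host of the incoming request, e.g. https://localhost:5001.
    return $"{request.Scheme}://{request.Host}";
}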

The IssuanceRequestAsync method builds the payload, requests the credentials from the Azure AD verifiable credentials REST API, and returns the result, which can be rendered as a QR code in the Razor page. The request returns fast. Depending on how the flow continues, a web hook in the application will update the status in a cache. This cache is persisted and polled from the UI. This could be improved by using SignalR.

[HttpGet("/api/issuer/issuance-request")]
public async Task<ActionResult> IssuanceRequestAsync()
{
    try
    {
        var payload = await _issuerService.GetIssuanceRequestPayloadAsync(Request, HttpContext);
        try
        {
            var (Token, Error, ErrorDescription) = await _issuerService.GetAccessToken();
            if (string.IsNullOrEmpty(Token))
            {
                _log.LogError($"failed to acquire access token: {Error} : {ErrorDescription}");
                return BadRequest(new { error = Error, error_description = ErrorDescription });
            }

            var defaultRequestHeaders = _httpClient.DefaultRequestHeaders;
            defaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", Token);

            HttpResponseMessage res = await _httpClient.PostAsJsonAsync(
                _credentialSettings.ApiEndpoint, payload);

            var response = await res.Content.ReadFromJsonAsync<IssuanceResponse>();

            if (response == null)
            {
                return BadRequest(new { error = "400", error_description = "no response from VC API" });
            }

            if (res.StatusCode == HttpStatusCode.Created)
            {
                _log.LogTrace("successfully called Request API");

                if (payload.Issuance.Pin.Value != null)
                {
                    response.Pin = payload.Issuance.Pin.Value;
                }

                response.Id = payload.Callback.State;

                var cacheData = new CacheData
                {
                    Status = IssuanceConst.NotScanned,
                    Message = "Request ready, please scan with Authenticator",
                    Expiry = response.Expiry.ToString()
                };
                _cache.Set(payload.Callback.State, JsonSerializer.Serialize(cacheData));

                return Ok(response);
            }
            else
            {
                _log.LogError("Unsuccessfully called Request API");
                return BadRequest(new { error = "400", error_description = "Something went wrong calling the API: " + response });
            }
        }
        catch (Exception ex)
        {
            return BadRequest(new { error = "400", error_description = "Something went wrong calling the API: " + ex.Message });
        }
    }
    catch (Exception ex)
    {
        return BadRequest(new { error = "400", error_description = ex.Message });
    }
}
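The CacheData and IssuanceConst types used above are not shown in the post. A minimal sketch consistent with how they are used here (the constant values are assumptions, matching the REST API callback codes as I understand them):

public class CacheData
{
    public string Status { get; set; } = string.Empty;
    public string Message { get; set; } = string.Empty;
    public string Payload { get; set; } = string.Empty;
    public string Expiry { get; set; } = string.Empty;
}

public static class IssuanceConst
{
    // App-side status for a request that has not been scanned yet.
    public const string NotScanned = "notscanned";
    // Codes posted by the verifiable credentials service to the callback.
    public const string RequestRetrieved = "request_retrieved";
    public const string IssuanceSuccessful = "issuance_successful";
    public const string IssuanceError = "issuance_error";
}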

The IssuanceResponse is returned to the UI.

public class IssuanceResponse
{
    [JsonPropertyName("requestId")]
    public string RequestId { get; set; } = string.Empty;

    [JsonPropertyName("url")]
    public string Url { get; set; } = string.Empty;

    [JsonPropertyName("expiry")]
    public int Expiry { get; set; }

    [JsonPropertyName("pin")]
    public string Pin { get; set; } = string.Empty;

    [JsonPropertyName("id")]
    public string Id { get; set; } = string.Empty;
}

The IssuanceCallback action is used as a web hook by Azure AD verifiable credentials. When developing or deploying, this web hook needs a publicly reachable URL; I use ngrok to test this. Because the issuer authenticates the identities using an Azure App registration, every time the ngrok URL changes, the redirect URL needs to be updated. Each callback request updates the cache. This API must also allow anonymous requests if the rest of the application requires authentication using OIDC; the AllowAnonymous attribute is required in an otherwise authenticated ASP.NET Core application.

[AllowAnonymous]
[HttpPost("/api/issuer/issuanceCallback")]
public async Task<ActionResult> IssuanceCallback()
{
    string content = await new System.IO.StreamReader(Request.Body).ReadToEndAsync();
    var issuanceResponse = JsonSerializer.Deserialize<IssuanceCallbackResponse>(content);

    try
    {
        // There are 2 different callbacks: one when the QR code is scanned (or the deep link has been followed).
        // Scanning the QR code makes Authenticator download the specific request from the server,
        // and the request is deleted from the server immediately.
        // That's why it is so important to capture this callback and relay it to the UI, so the UI can hide
        // the QR code and prevent the user from scanning it twice (which would fail since the request is already deleted).
        if (issuanceResponse.Code == IssuanceConst.RequestRetrieved)
        {
            var cacheData = new CacheData
            {
                Status = IssuanceConst.RequestRetrieved,
                Message = "QR Code is scanned. Waiting for issuance...",
            };
            _cache.Set(issuanceResponse.State, JsonSerializer.Serialize(cacheData));
        }

        if (issuanceResponse.Code == IssuanceConst.IssuanceSuccessful)
        {
            var cacheData = new CacheData
            {
                Status = IssuanceConst.IssuanceSuccessful,
                Message = "Credential successfully issued",
            };
            _cache.Set(issuanceResponse.State, JsonSerializer.Serialize(cacheData));
        }

        if (issuanceResponse.Code == IssuanceConst.IssuanceError)
        {
            var cacheData = new CacheData
            {
                Status = IssuanceConst.IssuanceError,
                Payload = issuanceResponse.Error?.Code,
                // At the moment there isn't a specific error code for incorrect entry of a pin code,
                // so assume this error happens when the user entered an incorrect pin code and ask them to try again.
                Message = issuanceResponse.Error?.Message
            };
            _cache.Set(issuanceResponse.State, JsonSerializer.Serialize(cacheData));
        }

        return Ok();
    }
    catch (Exception ex)
    {
        return BadRequest(new { error = "400", error_description = ex.Message });
    }
}

The IssuanceCallbackResponse maps the JSON payload that the verifiable credentials service posts to the callback endpoint.

public class IssuanceCallbackResponse
{
    [JsonPropertyName("code")]
    public string Code { get; set; } = string.Empty;

    [JsonPropertyName("requestId")]
    public string RequestId { get; set; } = string.Empty;

    [JsonPropertyName("state")]
    public string State { get; set; } = string.Empty;

    [JsonPropertyName("error")]
    public CallbackError? Error { get; set; }
}
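The referenced CallbackError type is likewise not shown; a minimal sketch consistent with its usage (the JSON property names are assumptions):

public class CallbackError
{
    [JsonPropertyName("code")]
    public string Code { get; set; } = string.Empty;

    [JsonPropertyName("message")]
    public string Message { get; set; } = string.Empty;
}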

The IssuanceResponse method is polled from a Javascript client in the Razor page UI. The method returns the current status from the cache so the UI can be updated.

[HttpGet("/api/issuer/issuance-response")]
public ActionResult IssuanceResponse()
{
    try
    {
        // The id is the state value initially created when the issuance request was requested from the request API.
        // The in-memory cache uses this as the key to get and store the state of the process so the UI can be updated.
        string state = this.Request.Query["id"];
        if (string.IsNullOrEmpty(state))
        {
            return BadRequest(new { error = "400", error_description = "Missing argument 'id'" });
        }

        CacheData value = null;
        if (_cache.TryGetValue(state, out string buf))
        {
            value = JsonSerializer.Deserialize<CacheData>(buf);

            Debug.WriteLine("check if there was a response yet: " + value);
            return new ContentResult { ContentType = "application/json", Content = JsonSerializer.Serialize(value) };
        }

        return Ok();
    }
    catch (Exception ex)
    {
        return BadRequest(new { error = "400", error_description = ex.Message });
    }
}

The DriverLicenseCredentialsModel class is used for the credential issuing for the signed-in user. The HTML part of the Razor page contains the Javascript client code, which was implemented using the code from the Microsoft Azure sample.

public class DriverLicenseCredentialsModel : PageModel
{
    private readonly DriverLicenseService _driverLicenseService;

    public string DriverLicenseMessage { get; set; } = "Loading credentials";
    public bool HasDriverLicense { get; set; } = false;
    public DriverLicense DriverLicense { get; set; }

    public DriverLicenseCredentialsModel(DriverLicenseService driverLicenseService)
    {
        _driverLicenseService = driverLicenseService;
    }

    public async Task OnGetAsync()
    {
        DriverLicense = await _driverLicenseService.GetDriverLicense(HttpContext.User.Identity.Name);

        if (DriverLicense != null)
        {
            DriverLicenseMessage = "Add your driver license credentials to your wallet";
            HasDriverLicense = true;
        }
        else
        {
            DriverLicenseMessage = "You have no valid driver license";
        }
    }
}

Testing and running the applications

Ngrok is used to provide a public callback URL for the Azure AD verifiable credentials web hook. When the application is started, you first need to create a driving license; this is done in the administration Razor page. Once a driving license exists, the View driver license Razor page can be used to issue a verifiable credential to the logged-in user. A QR code is displayed which can be scanned to begin the issuance flow.
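As a sketch, starting the tunnel could look like this (the port is an assumption; use the one from your launch profile):

# Expose the locally running issuer so the VC service can reach the callback.
ngrok http https://localhost:5001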

Using the Microsoft Authenticator, you can scan the QR code and add the verifiable credentials to your digital wallet. The credentials can now be used in any verifier which supports the Microsoft Authenticator wallet. The verifier ASP.NET Core application can be used to verify and use the issued verifiable credential from the wallet.

Links:

https://docs.microsoft.com/en-us/azure/active-directory/verifiable-credentials/

https://github.com/Azure-Samples/active-directory-verifiable-credentials-dotnet

https://www.microsoft.com/de-ch/security/business/identity-access-management/decentralized-identity-blockchain

https://didproject.azurewebsites.net/docs/issuer-setup.html

https://didproject.azurewebsites.net/docs/credential-design.html

https://github.com/Azure-Samples/active-directory-verifiable-credentials

https://identity.foundation/

https://www.w3.org/TR/vc-data-model/

https://daniel-krzyczkowski.github.io/Azure-AD-Verifiable-Credentials-Intro/

https://dotnetthoughts.net/using-node-services-in-aspnet-core/

https://identity.foundation/ion/explorer

https://www.npmjs.com/package/ngrok

https://github.com/microsoft/VerifiableCredentials-Verification-SDK-Typescript
