Summary of reading: July – September 2021

“Trick or Treatment: The Undeniable Facts about Alternative Medicine” by
Edzard Ernst and Simon Singh – a scientifically-inclined exposé about
alternative medicine, focusing on the most common types like chiropractic,
acupuncture and herbs, but also with notes on the more obscure approaches
like Reiki. Lots of interesting historical context, as well as damning
evidence against the practitioners of various alternative medicine regimes.
“The Immortal Life of Henrietta Lacks” by Rebecca Skloot – a fascinating
account of a cell line (called HeLa) that’s been instrumental in medical
research from the 1950s to this day. This book combines a great human
story (of Henrietta herself and her progeny) with a scientific story. This is
a story that had to be told, and the author worked long and hard to bring it
to fruition. I have some minor criticism too – for example, it seems to me
that the author was trying really hard to dig for a political angle on the
story, but without real success.
“Demian” by Hermann Hesse – I really enjoyed the author’s “Siddhartha” and
was hoping to like this book as well, but it didn’t turn out this way.
Although it starts well enough as a raw, honest and realistic coming-of-age
story, it gets progressively weirder as time goes by. The last third or so
is so metaphorically entwined that I’m not sure I even understand it.
“The Last Hunger Season” by Roger Thurow – the story of four smallholder
farmer families in Kenya on the brink of a significant change brought by
improved farmer technologies. Sobering account of the hardships and hunger
experienced by millions of people in Africa during an age of plenty in the
21st century.
“The Great Alone” by Kristin Hannah – I was lured to read this book by the
promise of Alaska, and indeed the descriptions of Alaska (focusing on the
Kenai peninsula) in it are pretty great. Otherwise I found it a generally
unremarkable romantic novel with too many suboptimal decisions leading its
protagonists into bad situations.
“Fermat’s Last Theorem” by Simon Singh – technically it’s a re-read, but the
last (and also the first) time I read this book was more than 20 years ago.
In fact, I will forever fondly remember it as the book that kick-started my
adult life nonfiction reading streak, which is going strong to this day.
A great book, for sure. Obviously, I’d be happy if it went a little bit more
into the details of the proof, but I also realize that would probably turn
away more readers than it would attract.
“Numerical Methods in Physics with Python” by Alex Gezerlis – I received a
free copy of this book for review. I was expecting a more code-oriented book,
but in fact there’s very little code here. Each chapter develops the math
for some aspect of numerical analysis and includes 2-3 short Python samples
implementing some computations. These code samples use Numpy and are fairly
straightforward once the math is understood. The math is very high level
(early graduate level, I’d guess), and thus it’s a really hard book to read
cover-to-cover. Instead, it could serve as a good reference in some cases,
so full judgement has to be deferred until the book has indeed served in
this role.
“The Power Broker” by Robert A. Caro – a monumental (1300+ pages) biography
of Robert Moses – the powerful NYC park commissioner in the 1920s-1960s who
built many of the city’s parks, roads, bridges and neighborhoods in those
years. The book focuses on the interplay of power and politics in NYC in that
era, and serves as a sobering reminder of just how corrupt things can get
under the surface. The book casts a critical light on Moses’s accomplishments,
questioning the benefit of his influence on the city vs. the downsides of
his approach.
“Humble Pi – A comedy of maths errors” by Matt Parker – a fun book about some
well and less well-known mathematical and engineering errors in history.
The book is informative and full of humor, but it feels like towards the
end the author was scraping the bottom of the barrel to find more relevant
examples, and mostly discussed programming bugs of different sorts.
“Project Hail Mary” by Andy Weir – with this book, Weir attempts to recreate
the magic of The Martian, but this time on an interstellar scale. Engrossing
read full of fun scientific and engineering geekery, just like The Martian.
Quite a bit less realistic, of course – but that’s what you get by expanding
the scope so much.
“The Machine that Changed the World” by J. Womack et al – describes the
lean production method of the Japanese car industry, and how it’s different
from (and superior to) traditional mass production. Very interesting book
overall, though somewhat outdated (written in 1990). Some terms popularized
by this book – like Kaizen, Kanban, Just In Time – have become iconic in
many modern industries.
“Noise: A Flaw in Human Judgement” by Daniel Kahneman et al – I really liked
Kahneman’s previous work so I had high hopes for this book. Unfortunately,
it was disappointing. The central thesis is interesting and important, but
could be covered in an extended article. 400+ pages for so little actual
material is a drag, and I couldn’t wait for the book to end.
“The Working Poor: Invisible in America” by David K. Shipler – describes the
lives of several poor families in the USA around the turn of the 21st century.
Interesting book that manages to be only slightly preachy and stay mostly on
topic. Covers the income vs. expenses conundrum of folks employed at minimum
wage, and a wide assortment of related topics like welfare, health, child
development, parenting and job training for drug addicts. Recommended!
“Programming Rust, 2nd edition” by Blandy, Orendorff and Tindall – a thorough
overview and reference for Rust 1.50; this book is the “Stroustrup” for Rust,
the closest document to a formal spec the language currently has. As such,
it’s great as a reference and only “pretty good” as an introduction to the
language to be read cover to cover. In the latter role the book covers all the
important topics thoroughly, but suffers from a lack of realistic examples,
projects and exercises. The 2nd chapter, “A Tour of Rust” is excellent in this
regard, working through a couple of interesting small projects; it’s a shame
the rest of the book does not follow through, and most examples are very
artificial. Folks new to programming are unlikely to benefit from the book,
since it assumes a fairly high level of familiarity with low-level
programming, particularly memory management. Overall, for an experienced
developer (preferably with C++ background) who knows how to use alternative
sources for projects, this book is a good and thorough overview of Rust.
“A Walk Across America” by Peter Jenkins – in which the author describes
his early 1970s walk from NY to New Orleans – the first part of his multi-year
quest to walk coast-to-coast. It’s one of those books that was probably much
more impressive when it was originally published (1979), as it apparently
kicked off a travel writing trend. The book is entertaining, but I’m not sure
I like it overall. It’s not clear what the author achieved here, other than
collecting a bunch of anecdotes written in flowery and over-excited language.
I’ve certainly read much better travelogues written after this book, but then
again, it was one of the first in the genre.


“Tortilla Flat” by John Steinbeck
“The Other Wes Moore: One Name, Two Fates” by Wes Moore
“In Defense of Food: An Eater’s Manifesto” by Michael Pollan
“California: A History” by Kevin Starr

#Ajax #Proxy: Add HTTP proxy support to Ajax requests with AjaxProxy.js

This is a new tool that developers can add to their toolbelt. It transparently permits the use of HTTP proxies from JavaScript Ajax requests. It also bypasses CORS restrictions, helping with access to services not designed for direct consumption from client-side JavaScript.

What is it

If you need to use an HTTP proxy with Ajax, this polyfill library allows you to specify one to be used with your Ajax requests. This can be useful to ensure that your Ajax requests come from a fixed IP address.


Step 1:

Add the library AjaxProxy.js from the following URL:

<script src=""></script>

Step 2:

Before any Ajax requests are made, call 


Step 3:

Define your proxy server as follows:

ajaxProxy.proxy.url = "http://<your proxy>";

ajaxProxy.proxy.credentials.username = "<proxy username>";

ajaxProxy.proxy.credentials.password = "<proxy password>";

Step 4:

If you are using jQuery, then modify your $.ajax call to add

headers: ajaxProxy.proxyHeaders()

For example:

$.ajax({
    type: "GET",
    url: "https://icanhazip.com",
    headers: ajaxProxy.proxyHeaders(),
    dataType: "text"
}).done(function (data) {
    // data contains the response text
});



If you are using plain XHR requests, then add xhr.addProxyHeaders();

For example:

var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function() {
    if (this.readyState === 4 && this.status === 200) {
        // this.responseText contains the response
    }
};"GET", "https://icanhazip.com", true);
xhr.addProxyHeaders();
xhr.send();



Step 5:

If you run your code, then the request should be proxied through your proxy server.

Under the hood

What is happening? In effect, this goes through two levels of proxies: first, your request is sent to an AWS Lambda function, which checks for the following HTTP request headers:

X-target: The destination URL

X-proxy: The proxy HTTP address

X-proxy-username: The proxy username

X-proxy-password: The proxy password

Please note that if your proxy is limited by IP address, then this technique will not work, since the egress IP for the AWS Lambda function is dynamic. You will need a proxy that is either open (not advised), or restricted by username and password.

The AWS Lambda function will then make a connection to your proxy server, and supply it with the original destination URL, and will pass through other common headers such as the Content-Type and Authorization headers. 

Additional security

Using the technique above, your proxy username and password will be visible to anyone who can view the source of your website. If you have an intranet and trust your users, then this may be fine; however, we do recommend taking the following security step:

You can encrypt your proxy username and password by calling:


This will return an encrypted string, such as the following:


You can pass this string in place of the username and/or password and the script will do the decryption under the hood. 

There is no public method to reverse this encryption, so it is not possible for an attacker to reverse engineer your password without stealing our private keys, which we keep secret. 

Additions / Requests

This library has been offered for free; it was developed for internal use, but we are offering it to the public out of goodwill and public service. Please do not abuse this service; we will throttle excessive usage.

If you would like an addition to this software, or need help with your specific application, then we may be able to help. However, nothing in this life is free, so we do invite you to sponsor this project if you would like an addition or change made to it.

We can supply source code if required, under NDA, please contact us for more information. 

Rewriting Go source code with AST tooling

Go is well-known for having great tooling for analyzing code written in the
language, right in the standard library with the go/* packages
(go/parser, go/ast, go/types etc.); in addition, the golang.org/x/tools
module contains several supplemental packages that are even more powerful.
I’ve used one of these packages to describe how to write multi-package
analysis tools in a post from last year.

Here I want to write about a slightly different task: rewriting Go source code
using AST-based tooling. I will begin by providing a quick introduction to
how existing capabilities of the stdlib go/ast package can be used to find
points of interest in an AST. Then, I’ll show how some simple rewrites can be
done with the go/ast package without requiring additional tooling. Finally,
I’ll discuss the limitations of this approach and the
golang.org/x/tools/go/ast/astutil package, which provides much more powerful
AST editing capabilities.

This post assumes some basic level of familiarity with ASTs (Abstract Syntax
Trees) in general, and ASTs for Go in particular.

Finding points of interest in a Go AST

Throughout this post, we’re going to be using the following simple Go snippet
as our lab rat:

package p

func pred() bool {
    return true
}

func pp(x int) int {
    if x > 2 && pred() {
        return 5
    }

    var b = pred()
    if b {
        return 6
    }
    return 0
}

Let’s start by finding all calls to the pred function in this code. The
go/ast package provides two approaches for finding points of interest in
the code. First, we’ll discuss ast.Walk. The full code sample for this part
is on GitHub.
We begin by parsing the source code (which we’ll be piping into standard input):

fset := token.NewFileSet()
file, err := parser.ParseFile(fset, "src.go", os.Stdin, 0)
if err != nil {
    log.Fatal(err)
}
Now we create a new value implementing the ast.Visitor interface and call ast.Walk with it:

visitor := &Visitor{fset: fset}
ast.Walk(visitor, file)

Finally, the interesting part of the code is the Visitor type:

type Visitor struct {
    fset *token.FileSet
}

func (v *Visitor) Visit(n ast.Node) ast.Visitor {
    if n == nil {
        return nil
    }

    switch x := n.(type) {
    case *ast.CallExpr:
        id, ok := x.Fun.(*ast.Ident)
        if ok {
            if id.Name == "pred" {
                fmt.Printf("Visit found call to pred() at %s\n", v.fset.Position(n.Pos()))
            }
        }
    }
    return v
}
Our visitor is only interested in AST nodes of type CallExpr. Once it sees
such a node, it checks the name of the called function, and reports matches.
Note the type assertion on x.Fun; we only want to report calls when the
function is referred to by an ast.Ident. In Go, we could call functions in
other ways, like invoking anonymous functions directly – e.g. func(){}().

We have a FileSet stored in the visitor; this is only used here
to report positions in the parsed code properly. To save space, the AST stores
all position information in a single int (aliased as the token.Pos
type), and the FileSet is required to translate these numbers into actual
positions of the expected <filename>:line:column form.

Visualizing the Go AST

At this point it’s worth mentioning some useful tools that help with writing
analyzers for Go ASTs. First and foremost, the go/ast package has a
Print function that will emit an AST in a textual format. Here’s how the
full if statement in our code snippet would look if printed this way:

. . 1: *ast.IfStmt {
. . . If: 9:2
. . . Cond: *ast.BinaryExpr {
. . . . X: *ast.BinaryExpr {
. . . . . X: *ast.Ident {
. . . . . . NamePos: 9:5
. . . . . . Name: “x”
. . . . . . Obj: *(obj @ 72)
. . . . . }
. . . . . OpPos: 9:7
. . . . . Op: >
. . . . . Y: *ast.BasicLit {
. . . . . . ValuePos: 9:9
. . . . . . Kind: INT
. . . . . . Value: “2”
. . . . . }
. . . . }
. . . . OpPos: 9:11
. . . . Op: &&
. . . . Y: *ast.CallExpr {
. . . . . Fun: *ast.Ident {
. . . . . . NamePos: 9:14
. . . . . . Name: “pred”
. . . . . . Obj: *(obj @ 11)
. . . . . }
. . . . . Lparen: 9:18
. . . . . Ellipsis: –
. . . . . Rparen: 9:19
. . . . }
. . . }
. . . Body: *ast.BlockStmt {
. . . . Lbrace: 9:21
. . . . List: []ast.Stmt (len = 1) {
. . . . . 0: *ast.ReturnStmt {
. . . . . . Return: 10:3
. . . . . . Results: []ast.Expr (len = 1) {
. . . . . . . 0: *ast.BasicLit {
. . . . . . . . ValuePos: 10:10
. . . . . . . . Kind: INT
. . . . . . . . Value: “5”
. . . . . . . }
. . . . . . }
. . . . . }
. . . . }
. . . . Rbrace: 11:2
. . . }

A somewhat more interactive way to explore an AST dump is using one of the
online Go AST viewer web pages, where you can paste your source and get a
dump with expandable and collapsible sections. This helps us focus only on
the parts we’re interested in.

Using the ast.Inspect API

Using ast.Walk for finding interesting nodes is pretty straightforward, but
it requires scaffolding that feels a bit heavy for simple needs – defining a
custom type that implements the ast.Visitor interface, and so on. Luckily,
the go/ast package provides a lighter-weight API – Inspect; it only
needs to be provided a closure. Here’s our program to find calls to pred()
rewritten with ast.Inspect:

func main() {
    fset := token.NewFileSet()
    file, err := parser.ParseFile(fset, "src.go", os.Stdin, 0)
    if err != nil {
        log.Fatal(err)
    }

    ast.Inspect(file, func(n ast.Node) bool {
        switch x := n.(type) {
        case *ast.CallExpr:
            id, ok := x.Fun.(*ast.Ident)
            if ok {
                if id.Name == "pred" {
                    fmt.Printf("Inspect found call to pred() at %s\n", fset.Position(n.Pos()))
                }
            }
        }
        return true
    })
}

The actual AST node matching logic is the same, but the surrounding code is
somewhat simpler. Unless there’s a strong need to use ast.Walk specifically,
ast.Inspect is the approach I recommend, and it’s the one we’ll be using in
the next section to actually rewrite the AST.

Simple AST rewrites

To begin, it’s important to highlight that the AST returned by the parser is a
mutable object. It’s a collection of node values interconnected via pointers to
each other. We can change this set of nodes in any way we wish – or even create
a wholly new set of nodes – and then use the go/printer package to emit Go
source code back from the AST. The following program will simply emit back the
Go program it’s provided (though it will drop the comments, since the default
parser mode used here doesn’t retain them):

func main() {
    fset := token.NewFileSet()
    file, err := parser.ParseFile(fset, "src.go", os.Stdin, 0)
    if err != nil {
        log.Fatal(err)
    }

    printer.Fprint(os.Stdout, fset, file)
}

Now, back to rewriting that AST. Let’s make a couple of changes:

We’ll rename the function pred to pred2, and rename all the call
sites to call the new function name.
We’ll inject a printout into the beginning of each function body – emulating
some sort of instrumentation we could add this way.

Given the original code snippet, the output will look like this:

package p

func pred2() bool {
    fmt.Println("instrumentation")
    return true
}

func pp(x int) int {
    fmt.Println("instrumentation")
    if x > 2 && pred2() {
        return 5
    }

    var b = pred2()
    if b {
        return 6
    }
    return 0
}

The full code of our rewriting program is available here.
It’s using ast.Inspect to find the nodes it wants to operate on. Here’s the
renaming of the call sites:

ast.Inspect(file, func(n ast.Node) bool {
    switch x := n.(type) {
    case *ast.CallExpr:
        id, ok := x.Fun.(*ast.Ident)
        if ok {
            if id.Name == "pred" {
                id.Name += "2"
            }
        }
        // ...

If the function is called by an identifier, the code just appends “2” to the
name. Again, we’re not operating on some copy of the AST – this is the
real, living AST we’re editing here.

Now let’s move on to the next case, where we’re handling function
declarations:
case *ast.FuncDecl:
    if x.Name.Name == "pred" {
        x.Name.Name += "2"
    }

    newCallStmt := &ast.ExprStmt{
        X: &ast.CallExpr{
            Fun: &ast.SelectorExpr{
                X: &ast.Ident{
                    Name: "fmt",
                },
                Sel: &ast.Ident{
                    Name: "Println",
                },
            },
            Args: []ast.Expr{
                &ast.BasicLit{
                    Kind:  token.STRING,
                    Value: `"instrumentation"`,
                },
            },
        },
    }

    x.Body.List = append([]ast.Stmt{newCallStmt}, x.Body.List...)

The first three lines in this case do the same as we did for the call sites
– just rename the pred function to pred2. The rest of the code is
adding the printout to the start of a function body.

That task is fairly easy to accomplish since each FuncDecl has a Body,
which is an *ast.BlockStmt that holds a slice of ast.Stmt in its List
attribute. Our program prepends a new statement to this slice, in effect
adding it to the very beginning of the function body. The statement is a
hand-crafted AST node. You may be wondering: how did I know how to build
this node?

It’s really not a big deal once you get the hang of it. Parsing small snippets
of code and dumping their ASTs helps, as well as the detailed documentation of
the go/ast package. I also found the go2ast tool very useful; it takes a piece of
code and emits exactly the Go code needed to build its AST.

Finally, at the end of the program we emit back the modified AST:

fmt.Println(“Modified AST:”)
printer.Fprint(os.Stdout, fset, file)

And this gets us the modified snippet shown at the beginning of this section.

Limitations of AST editing with Walk and Inspect

So far we’ve managed to rewrite the AST in a couple of interesting ways using
ast.Inspect for finding the nodes. Can we do any kind of rewrite this way?

It turns out the answer to this question is no, or at least not easily. As a
motivating example, consider the following task: we’d like to rewrite each call
to pred() so that it’s logically negated, i.e. turned into !pred(). How do
we do that?

It’s worth spending a few minutes thinking about this question before reading
on.
The issue is that when ast.Inspect (or ast.Walk) hands us an
ast.Node, we can change the node’s contents and its children, but we cannot
replace the node itself. To replace the node itself, we’d need access to its
parent, but ast.Inspect does not give us any way to access its parent.
A different, slightly more technical way to think about it is: we get handed
a node pointer by value, meaning that we can tweak the node it points to,
but can’t set the pointer to point to a different node. To achieve the latter,
ast.Inspect would have to hand us a pointer to a pointer to the node.

This limitation was discussed several years ago, and finally in 2017 a new
package appeared in the “extended stdlib” golang.org/x/tools module:
golang.org/x/tools/go/ast/astutil.
More powerful rewriting with astutil

The APIs astutil provides let us not only find nodes of interest in the AST,
but also a way to replace the node itself, not just its contents. In fact, the
package provides several useful helpers to delete, replace and insert new nodes
through the Cursor type. A full walkthrough of the capabilities of
astutil is outside the scope of this post, but I will show how to use it in
order to implement our task of turning each pred() into !pred(). Here’s the
complete program:
func main() {
    fset := token.NewFileSet()
    file, err := parser.ParseFile(fset, "src.go", os.Stdin, 0)
    if err != nil {
        log.Fatal(err)
    }

    astutil.Apply(file, nil, func(c *astutil.Cursor) bool {
        n := c.Node()
        switch x := n.(type) {
        case *ast.CallExpr:
            id, ok := x.Fun.(*ast.Ident)
            if ok {
                if id.Name == "pred" {
                    c.Replace(&ast.UnaryExpr{
                        Op: token.NOT,
                        X:  x,
                    })
                }
            }
        }
        return true
    })

    fmt.Println("Modified AST:")
    printer.Fprint(os.Stdout, fset, file)
}

Instead of calling ast.Inspect, we call astutil.Apply, which also walks
the AST recursively and gives our closure access to the node. Apply lets us
register a callback for the node both before and after it was visited; in
this case we only provide the after case.

Our closure identifies the call to pred in a way that should be familiar by
now. It then uses the Cursor type to replace this node by a new one which
is just the same node wrapped in a unary NOT expression. Hidden in its
implementation, the Cursor type does have access to the parent of each node,
making it possible to replace the actual node with something else.

REST Web API in Practice: Naming Endpoints, Filtering, Sorting, and Pagination


The REST architectural style emphasizes a uniform interface between components, in which the information is transferred in a standardized form (.NET Nakama, 2021 September 4). One of the architectural constraints (identification of resources) to achieve that is to uniquely identify the location of each resource through a single URL.

In practice, when we are trying to design the URL structure, various questions and possible conflicts between team members may arise. In this article, we will see some practical suggestions for adopting consistent naming conventions in our URLs (API endpoints), and for the URL representation of the filtering, sorting, and pagination operations.

Naming the Endpoints

Let’s remember some basic concepts regarding the REST resources and collections (.NET Nakama, 2021 September 4):

A resource is any information that can be named, an entity (e.g., person, ticket, product, etc.), document, file, etc.
A group of resources is called a collection (e.g., persons, tickets, products, etc.).

Our priority when naming our endpoints is to identify the resources that our API will expose to the consumers. Therefore, we have to be very careful to avoid creating APIs that mirror the internal structure of our database(s) (Price E., 2018).

Tip: Avoid creating APIs that mirror the backend implementation or internal structure of our database(s).

A resource can represent multiple internal items. For example, we can expose a product resource to the consumer, which might be implemented internally as several database tables (e.g., service products, physical products, description translations, metadata etc.).

The identification of the resources should be made based on the consumer needs and business logic.

Nouns or Verbs

A URL in REST should refer to a resource (represented as a noun) and not to an action (verb) (Au-Yeung J., & Donovan R., 2020) (Price E., 2018). The actions in REST

URL examples using nouns: /products, /products/123

URL examples using verbs (to avoid): /getProducts, /getProduct/123

Singular or Plural

It’s decided! We will use nouns to refer to our resources, but now we should decide if we will represent a resource in the singular form (e.g., product) or plural (e.g., products). Let’s see some examples to have a better understanding of the dilemma.


Plural form

The /products/123 path refers to a specific product resource (with id=123) within the products collection.

The /products path refers to the collection of all products.

Singular form

The /product/123 path makes sense, as we refer to a specific product.

However, the /product path is unclear: does it refer to one or to many products?

Singular and plural form

By combining singular and plural forms, we do not have consistent URLs and may increase our code complexity.

It is generally recommended to use plural nouns to reference a collection of resources in the URLs (Price E., 2018) (Au-Yeung J., & Donovan R., 2020). In this

Resource Relationships (Nesting, Hierarchy, Sub-collections)

The different types of resources can have relationships among them to show which resource type contains the other. Several terms are used in the literature to refer to the relationships of resources, such as Nesting, Hierarchy, and Sub-collections.

To represent the relationships among resources in a URL, we use the slash “/” character to create a “path.” For example, the GET /users/5/orders endpoint represents the logical one-to-many relationship of the users with orders, meaning that it will return all orders (many) of the user with id=5 (one user).

There is no limit to the depth of the relationship path. However, I recommend
using a path that is not more complex than: /collection/item/collection/item.
Tip: I recommend using a path that is not more complex than: /collection/item/collection/item.

Flat Endpoints

Some developers do not accept using resource relationships in the URL path because it can be confusing and complex (Florimond Manca, 2018). Using flat endpoints is not bad when our resources have simple relationships, and it may be preferable depending on the consumer’s needs.

The following examples show a simple resource relationship of users and orders, and their representation with a flat endpoint compared to a resource relationship endpoint.

Resource URL example: GET /users/5/orders returns the orders of the user with id=5.

Flat endpoint example: a flat endpoint such as GET /orders?user=5 (the parameter name is illustrative) gets the same user’s orders by using a query string filter. We will see more details about filtering in a following section.

Derived Resources

Let’s assume that we are implementing some endpoints to GET the available products and purchased products of a specific order, as follows:

1. The available products (e.g., https://api.mydomain.tld/products):

[
  {
    "productId": 2,
    "Title": ".NET T-Shirt",
    "Price": 10,
    "Sizes": ["M", "L"]
  },
  {
    "productId": 3,
    "Title": "C# T-Shirt",
    "Price": 8,
    "Sizes": ["S", "M", "L", "XL"]
  }
]

2. The purchased products of a specific order:

[
  {
    "id": 1,
    "productId": 2,
    "Title": ".NET T-Shirt",
    "Price": 10,
    "Size": "M",
    "Quantity": 2,
    "Discount": 5,
    "TotalPrice": 15
  }
]

Conceptually, we will return a list of products in both cases, but their data would be different. In the first case, we will get the details of each product (base resource). However, in the second case, we will get additional information about the purchase of each product item (derived resource), e.g., the quantity, discount, total price, etc.

The following table shows how we could represent the URL of the cases above with flat endpoints compared to relationships endpoints. For this scenario, I prefer the URL endpoints that clearly illustrate the relationships of the resources and not the flat endpoints.

Resource URL with Relationships vs. Flat Resource URL

The purchased products of the order with id=123 (e.g., /orders/123/products as a relationship URL).

The first purchased product (id=1) of the order with id=123 (e.g., /orders/123/products/1 as a relationship URL).

Non-Resource Endpoints for Processes and Executable Functions

In REST, we should represent our resources as plural nouns (e.g., products, users, etc.). However, all rules have their exceptions. There are non-resource scenarios in which we cannot represent the API operations with nouns (Price E., 2018). For example, for processes and executable functions such as restarting a server, a verb is preferable.
In the non-resource scenarios where the actions don’t have parameters, we could treat action results as a resource property (i.e., see them as resources). For example, when we restart a server (e.g., using the endpoint /server/321/restart), the server’s status is modified (e.g., from Running to Restarting, Stopped, etc.). In such a case, we could avoid using a verb and instead use the PATCH method to update the specific resource property (e.g., set the status to Restarting).

Method: PATCH
URL: /server/321
Body:

{
  "status": "Restarting"
}

Trailing Forward Slashes

A common question when naming endpoints is whether we should use a trailing forward slash (“/”) or not. Conventionally, using a trailing slash indicates a directory (hierarchy), and not using a trailing slash denotes a file (GNU, 2021).

URL Example

/products/ is a URL with a trailing slash, indicating a directory.

/products is a URL without a trailing slash, denoting a file.

Google states that they treat each URL separately (and equally) regardless of whether it’s a file or a directory, or it contains a trailing slash, or it doesn’t contain a trailing slash (Maile Ohye, 2010).

The important thing is to choose one way or the other and be consistent. That means that we should redirect (301 Moved Permanently) the non-preferable form to the preferred one. By performing this redirection, we avoid serving the same content at duplicate URLs (i.e., with and without a trailing slash).

When implementing APIs, we do not care about SEO. So, if our API works both ways, we can let the consumers choose how to use it 🙂.

To perform these redirections in .NET Core, we could use the URL Rewriting Middleware with one of the following regular expressions (regex):

Regex goal: strip the trailing slash
Regex: (.*)/$ (matches, e.g., /path/)
Replacement: $1 (producing /path)

Regex goal: enforce a trailing slash
Regex: (.*[^/])$ (matches, e.g., /path)
Replacement: $1/ (producing /path/)

Data Filtering, Sorting, and Pagination

In a previous section, we saw that we could use an endpoint such as https://api.mydomain.tld/products to get all available products. However, the response may be huge depending on the number of available products in our database. So, it would be preferable to request a subset of the available products depending on the user’s preferences. For that purpose, we could provide in our APIs the filtering, sorting, and pagination functionalities.

For the following examples, let’s assume that we have the following products:

[
  {
    "productId": 2,
    "Title": ".NET T-Shirt",
    "Price": 10,
    "Sizes": ["M", "L"]
  },
  {
    "productId": 3,
    "Title": "C# T-Shirt",
    "Price": 8,
    "Sizes": ["S", "M", "L", "XL"]
  }
]

Tip: The filtering, sorting, and pagination criteria can be combined!


Filtering

In our case, applying filtering means limiting the results by specific criteria. To represent that in a URL, we should use query string parameters. The query string is the part of the URL which starts after the question mark (“?”) character. Each query parameter has the field=value format, and multiple query parameters are separated by the ampersand (“&”) symbol. For example, the following table presents some possible query strings to filter the previous products JSON example.

Get all products with a price equal to 10:
/products?price=10

Get all products with a minimum price of 50 and a maximum price of 100:
/products?minPrice=50&maxPrice=100

Get all products with a size of medium or large:
/products?sizes=M,L
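One way to accept such filters in ASP.NET Core is to bind them as nullable query parameters, so that any filter the consumer omits is simply ignored. This is an illustrative sketch; the action, the Product type, and the _products field are assumptions, not from the article:

```csharp
// Hypothetical controller action: each filter is optional, and an
// unsupplied parameter binds to null and is skipped.
[HttpGet("products")]
public IActionResult GetProducts(
    [FromQuery] decimal? price,
    [FromQuery] decimal? minPrice,
    [FromQuery] decimal? maxPrice)
{
    IEnumerable<Product> result = _products;
    if (price is not null)    result = result.Where(p => p.Price == price);
    if (minPrice is not null) result = result.Where(p => p.Price >= minPrice);
    if (maxPrice is not null) result = result.Where(p => p.Price <= maxPrice);
    return Ok(result);
}
```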


To represent the sorting criteria in the URL, we need to define a special query string field name (e.g., sort) that we will use only for sorting. In this field, we can specify the property names (e.g., price, title, etc.) used to sort the data. In addition, the plus "+" and minus "-" characters can be used as a prefix on the property names to define whether the sorting should be ascending or descending, respectively.

Sort the products list by ascending prices (smallest to largest):
/products?sort=+price

Sort the products list by descending prices and then by ascending titles (A to Z):
/products?sort=-price,+title
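Parsing such a sort value on the server side is mostly string splitting. A minimal sketch (the helper name is an assumption):

```csharp
// Split a sort value such as "-price,+title" into (property, descending)
// pairs; a "+" prefix (or no prefix) means ascending, "-" means descending.
static IEnumerable<(string Property, bool Descending)> ParseSort(string sort) =>
    sort.Split(',', StringSplitOptions.RemoveEmptyEntries)
        .Select(field => field.StartsWith("-")
            ? (field.Substring(1), true)
            : (field.TrimStart('+'), false));
```

Note that a literal "+" must be URL-encoded as %2B in a query string, since an unencoded "+" decodes to a space; treating "no prefix" as ascending, as above, sidesteps this for well-behaved clients.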


Filtering also limits the response data, but there the limitation is determined by the user's criteria. In pagination, by contrast, the user gets a specific number of items (a first subset of the data) and can navigate to the next and previous pages (subsets).

In our APIs, we can limit the amount of data returned in a single request to have fast responses and limit the use of network bandwidth.

To represent the pagination criteria, we can define two special query string fields, limit and offset, to set the maximum number of returned items and the number of them that should be skipped (offset) (Price E., 2018). The following table presents some query string examples to paginate the products data by five items.

Get the first five products (when the offset parameter is not set, we can assume it has a zero value):
/products?limit=5

Get the following five products (limit=5) by skipping the first 5 (offset=5):
/products?limit=5&offset=5

Get the following five products (limit=5) by skipping the first 10:
/products?limit=5&offset=10
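Applying limit/offset in a query translates directly to Skip and Take in LINQ. A sketch, assuming the variable names above:

```csharp
// Apply limit/offset paging to a query. A stable ordering (here by
// productId) is required, otherwise pages may overlap or skip items.
var page = products
    .OrderBy(p => p.ProductId)
    .Skip(offset)   // e.g., 10
    .Take(limit)    // e.g., 5
    .ToList();
```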

When implementing pagination, the main question is, “How would we know when the data ends?”. There are two solutions to this problem (which can also be combined).

– Get the total number of items (in our case, products) in the response data or in a custom HTTP header (e.g., with the name X-Total-Count). In this way, we can generate the next, final, etc., URLs and navigation links (HTML <a> tag) on the client side.
– Get the navigation links from the server in the Link HTTP header (Nottingham M. & IETF, 2010). The Link header is a standard way to provide a list of links to the consumer. The basic format of a Link value is <URL>; rel="TheRelationType"; multiple link values are separated by a comma (",") character, as we can see in the following example. Thus, by using the Link header values (URL and rel), we can generate the navigation links (HTML <a> tag) on the client side.

Link: <https://api.mydomain.tld/products?limit=5>; rel="first",
<https://api.mydomain.tld/products?limit=5>; rel="prev",
<https://api.mydomain.tld/products?limit=5&offset=5>; rel="next",
<https://api.mydomain.tld/products?limit=5&offset=10>; rel="last"
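Emitting these headers from a controller action is a one-liner per header. A sketch, assuming totalCount, baseUrl, limit, and offset are already computed (they are not named in the article):

```csharp
// Expose the total count and the paging links so that clients can
// build their navigation without guessing URL conventions.
Response.Headers["X-Total-Count"] = totalCount.ToString();
Response.Headers["Link"] = string.Join(", ",
    $"<{baseUrl}?limit={limit}>; rel=\"first\"",
    $"<{baseUrl}?limit={limit}&offset={offset + limit}>; rel=\"next\"");
```

A fuller implementation would also emit prev and last links, and omit next on the final page.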


Naming our endpoints based on the REST architectural constraint of identification of resources can be complicated. This article showed some practical suggestions for adopting consistent naming conventions in our URLs (API endpoints) and how we could perform filtering, sorting, and pagination.

The following table summarizes these practical suggestions and tips for naming our URLs and performing filtering, sorting, and pagination.

Practical Suggestion

The identification of the resources should be made based on the consumer needs and business logic.

Avoid creating APIs that mirror the backend implementation or internal structure of our database(s).

Use plural nouns for referring to resources.
– /products
– /products/123
– /orders

Non-resource scenarios, such as processes and executable functions, can use verbs (as an exception to the previous suggestion).
– /cart/checkout
– /server/321/restart
– /search

Decide to either represent the resource relationships in the URL or use flat endpoints depending on the complexity of the resource relationships.

Simple Resource Relationships
– /users/5/orders
– /orders?user=5

More Complex Resource Relationships
– /orders/123/products/1
– /order-products?order=123&id=1

It is recommended to use a path that is not more complex than: /collection/item/collection/item.

Using a trailing slash in API URLs is just a personal preference. For APIs, if both ways are supported, we are all set. On the contrary, we should redirect (301 – Moved Permanently) the non-preferable way to the other.
– /products/
– /products

On GET methods, use query strings to represent the filter criteria.
– /products?price=10
– /products?minPrice=50&maxPrice=100

On GET methods, use the sort query string to represent the sorting criteria.
– /products?sort=+price
– /products?sort=-price,+title

On GET methods, use the limit and offset query strings to represent the pagination criteria.
– /products?limit=5
– /products?limit=5&offset=10

For pagination, read about the X-Total-Count and Link HTTP headers to provide needed information to create the paging navigation URLs.

If you don’t like these suggestions or have already selected different approaches, please share them with us in the comments. The important thing is to be consistent with your chosen URL and query string naming conventions to help your API consumers.


.NET Nakama (2021, September 4). Designing a RESTful Web API.

Au-Yeung J., & Donovan R. (2020, March 2). Best practices for REST API design.

Florimond Manca (2018, August 26). RESTful API Design: 13 Best Practices to Make Your Users Happy.

GNU (2021, September 24). GNU coreutils: 2.9 Trailing slashes.

Maile Ohye (2010, April 21). To slash or not to slash.

Nottingham M. & IETF (2010, October). RFC 5988: Web Linking.

Price E. (2018, December 1). RESTful web API design.

(2021, October 1). REST Resource Naming Guide.

Implement a secure API and a Blazor app in the same ASP.NET Core project with Azure AD authentication

The article shows how an ASP.NET Core API and a Blazor BFF application can be implemented in the same project and secured using Azure AD with Microsoft.Identity.Web. The Blazor application is secured using the BFF pattern, with its backend APIs protected by same-site cookies with anti-forgery protection. The API is protected using JWT Bearer tokens and is used by a separate client from a different domain, not by the Blazor application. When securing Blazor WASM hosted in an ASP.NET Core application, the BFF architecture should be used for the Blazor application rather than JWT tokens, especially in Azure, where it is not possible to log out correctly.



The Blazor application consists of three projects. The Server project implements the OpenID Connect user interaction flow and authenticates both the client and the user. The APIs created for the Blazor WASM app are protected using cookies. A second API is implemented for separate clients, and this API is protected using JWT tokens. Two separate Azure App registrations are set up, one for the UI client and one for the API. A client using the API, for example an ASP.NET Core Razor Page app or a Power App, would use a third Azure App registration.


The API is implemented and protected with the MyJwtApiScheme scheme, which is set up later in the Startup class. The API uses Swagger configurations for Open API 3, and a simple HTTP GET is implemented to validate the API security.

[Authorize(AuthenticationSchemes = "MyJwtApiScheme")]
[SwaggerTag("Using to provide a public api for different clients")]
public class MyApiJwtProtectedController : ControllerBase
{
    [ProducesResponseType(StatusCodes.Status200OK, Type = typeof(string))]
    [SwaggerOperation(OperationId = "MyApiJwtProtected-Get",
        Summary = "Returns string with details")]
    public IActionResult Get()
    {
        return Ok("yes my public api protected with Azure AD and JWT works");
    }
}

Blazor BFF

The Blazor applications are implemented using the backend for frontend (BFF) security architecture. All security is implemented in the backend, and the client requires a secret or a certificate to authenticate. The security data is stored in an encrypted cookie with same-site protection. This is easier to secure than storing tokens in the browser storage, especially since Blazor does not support strong CSPs due to the generated JavaScript, and since AAD does not support a proper logout for access and refresh tokens stored in the browser. The following blog post explains this in more detail.

Securing Blazor Web assembly using cookies


The Microsoft.Identity.Web NuGet package is used to implement the Azure AD clients. This setup differs from the documentation: the default schemes need to be set correctly when using cookie (app) authentication and API authentication together. The AddMicrosoftIdentityWebApp method sets up the Blazor authentication for one Azure App registration, using configuration from the AzureAd settings. The AddMicrosoftIdentityWebApi method implements the second Azure App registration for the JWT Bearer token authentication, using the AzureAdMyApi settings and the MyJwtApiScheme scheme.

services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    options.DefaultSignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultAuthenticateScheme = CookieAuthenticationDefaults.AuthenticationScheme;
})
.AddMicrosoftIdentityWebApp(Configuration, "AzureAd")
.AddMicrosoftIdentityWebApi(Configuration, "AzureAdMyApi", "MyJwtApiScheme");


The ASP.NET Core project uses app settings, and user secrets in development, to configure the Azure AD clients. The values for the two Azure App registrations are added here.

"AzureAd": {
  "Instance": "",
  "Domain": "",
  "TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1",
  "ClientId": "46d2f651-813a-4b5c-8a43-63abcb4f692c",
  "CallbackPath": "/signin-oidc",
  "SignedOutCallbackPath": "/signout-callback-oidc"
  // "ClientSecret": "add secret to the user secrets"
},
"AzureAdMyApi": {
  "Instance": "",
  "Domain": "",
  "TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1",
  "ClientId": "b2a09168-54e2-4bc4-af92-a710a64ef1fa"
}


Swagger is added to make it easier to view and test the API. A simple UI is created so that you can paste your access token into the UI and test the APIs manually if required. You could also implement a user flow directly in the Swagger UI but then you would have to open up the security headers protection to allow this.

services.AddSwaggerGen(c =>
{
    // add JWT Authentication
    var securityScheme = new OpenApiSecurityScheme
    {
        Name = "JWT Authentication",
        Description = "Enter JWT Bearer token **_only_**",
        In = ParameterLocation.Header,
        Type = SecuritySchemeType.Http,
        Scheme = "bearer", // must be lower case
        BearerFormat = "JWT",
        Reference = new OpenApiReference
        {
            Id = JwtBearerDefaults.AuthenticationScheme,
            Type = ReferenceType.SecurityScheme
        }
    };
    c.AddSecurityDefinition(securityScheme.Reference.Id, securityScheme);
    c.AddSecurityRequirement(new OpenApiSecurityRequirement
    {
        { securityScheme, Array.Empty<string>() }
    });
    c.SwaggerDoc("v1", new OpenApiInfo
    {
        Title = "My API",
        Version = "v1",
        Description = "My API"
    });
});


The Swagger middleware is added after the security headers middleware. Some people add this only in development and not in production deployments.

app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("/swagger/v1/swagger.json", "MyApi v1");
});


The UITestClientForApiTest Razor Page application can be used to log in and get an access token to test the API. Before starting this application, the Azure AD configuration in the settings needs to be updated to match your Azure App registration and your tenant. The access token can be used directly in the Swagger UI. The API only accepts delegated access tokens, not client credentials (CC) tokens. The configuration in the Blazor server application also needs to match the Azure App registrations in your tenant.
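A separate client calling the JWT-protected API then just attaches the acquired token as a Bearer header. A minimal sketch; the host URL, route, and the accessToken variable are placeholder assumptions for your deployment:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;

// Call the JWT-protected API from a separate client using a delegated
// access token acquired for the API's Azure App registration.
using var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", accessToken);
var response = await client.GetAsync(
    "https://localhost:5001/api/MyApiJwtProtected");
```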

This setup is good for simple projects where you would like to avoid creating a second deployment, or where you want to re-use a small amount of business logic from the Blazor server. At some stage, it would probably make sense to split the API and the Blazor UI into two separate projects, which would make this security setup simpler again but result in more infrastructure.

