Starting a Web App in 2022 [Research Results]

We are finally happy to share the results of the world’s first study on how developers start a web application in 2022. For this research, we wanted to take a deep dive into how engineers around the globe start web apps, how popular low-code platforms are, and which tools are decisive in creating web applications.

To achieve this, we surveyed 191 software engineers of all experience levels around the globe, asking about the technology they use to start web applications.

Highlights of the key findings:

The usage of particular technologies in the creation of web apps is closely related to engineers’ experience. New technologies, such as no-code/low-code solutions, GraphQL, and non-relational databases, appeal to developers with less expertise;

Engineers with less experience are more likely to learn from online sources, whereas developers with more expertise in software development prefer to learn from more conventional sources such as books;

Retool and Bubble are the most popular no-code/low-code platforms;

React, Node.js, PostgreSQL, Amazon AWS, and Bootstrap are the most popular web application development stacks.

To read the full report, including additional insights and the full research methodology, visit this page.

With Flatlogic you can create full-stack web applications literally in minutes. If you’re interested in trying Flatlogic solutions, sign up for free

The post Starting a Web App in 2022 [Research Results] appeared first on Flatlogic Blog.


10 KPI Templates and Dashboards for Tracking KPIs

What Instruments Do We Need to Build an Effective Dashboard for KPIs?

The Top Dashboards for Tracking KPIs
Sing App Admin Dashboard
Retail Dashboard from Simple KPI
Light Blue React Node.js
Limitless Dashboard
Cork Admin Dashboard
Paper Admin Template
Pick Admin Dashboard Template
Able Pro Admin Dashboard
Architect UI Admin Template
Flatlogic One Admin Dashboard Template



KPIs, or Key Performance Indicators, are a modern instrument for making a system (a business, for example) work effectively. KPIs show how successful a business is, or how professional an employee is. They work through measurable values intended to show how successfully you are achieving your strategic goals. KPIs are measurable indicators that you should track, calculate, analyze, and represent.

If you are reading this article, it means you want to find or build an app to help you with all of the operations above. But before we list the top KPI dashboard templates, it’s essential to understand how exactly to choose a set of indicators that boosts the growth of a business. For KPIs to be useful, they should be relevant to the business. That is crucial not only for entrepreneurs trying to improve their businesses but also for developers of software for tracking KPIs. Why?

Developers need to be aware of which instruments they should include in the app so that users can work with KPIs easily and effectively. Since there are far more than a handful of articles and approaches on how to find the right performance indicators, which KPIs to choose, and how to track them, developing a quality web application can be complicated.

However, from our point of view, the most challenging part of such an app is building a dashboard that displays all the necessary KPIs on a single screen. We have explored the Internet, analyzed different types of tools for representing KPIs, found great dashboards, and made two lists: one consists of the charts and instruments you should definitely include in your future app; the other is the top dashboards we found that contain elements from the first list. Each KPI template on the list is a potent tool that can boost your metrics considerably. Let’s start with the first list.

Enjoy reading! 

What Instruments Do We Need to Build an Effective Dashboard for KPIs?

Absolute numerical values and percentages

Percentages make a KPI more informative by adding a comparison with previous periods.
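As an illustration, here is how such a period-over-period comparison might be computed. This is a minimal TypeScript sketch; the function name and shape are our own, not taken from any particular charting library:

```typescript
// Hypothetical helper: compare a KPI against the previous period and
// return both the absolute change and the percentage change.
function kpiChange(current: number, previous: number) {
  const absolute = current - previous;
  // Guard against a zero baseline: the percentage is undefined in that case.
  const percent = previous === 0 ? null : (absolute / previous) * 100;
  return { absolute, percent };
}

// Example: revenue grew from 120,000 to 150,000 since the last period.
const change = kpiChange(150_000, 120_000);
console.log(change.absolute); // 30000
console.log(change.percent);  // 25
```

A dashboard widget can then render the absolute value alongside the percentage, e.g. “150,000 (+25% vs. last quarter)”.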


Non-linear chart

One of the core charts.


Bar chart

Another core element to display KPIs.


Stacked Bar Graphs

It’s a more complex instrument, but correspondingly more informative.

Progress bars

It can be confused with a horizontal bar chart. The main difference: a horizontal bar chart is used to compare values across several categories, while a progress bar shows progress in a single category.


Pie charts


Donut chart

You can replace pie charts with a donut chart; the meaning is the same.


Gauge chart

This chart helps users to track their progress towards achieving goals. It’s interchangeable with a progress bar. 



Pictogram chart

Instead of using an axis with numbers, it uses pictures to represent a relative or an absolute number of items.


Process behavior chart

Especially valuable for financial KPIs. The main line shows the measurement over time or across categories, while the two red lines are control limits that shouldn’t be surpassed.


Combined bar and line graph


Some additional tools:

These tools are also essential for building a KPI-tracking dashboard: a calendar, dropdowns, checkboxes, and input fields. An option to create and download a report will also be helpful.

The Top Dashboards for Tracking KPIs

Sing App Admin Dashboard


If you have looked through a huge number of KPI templates and haven’t found the one you need, take a look at Sing App. Sing is a premium admin dashboard template that offers everything necessary to turn data into easy-to-understand graphs and charts. Besides all the charts and functions listed above, Sing gives you options such as downloading graphs in SVG and PNG format, an animated and interactive pointer that highlights the point where the cursor is placed, and changing the period for value calculation inside the frame with the graph!


Retail Dashboard from Simple KPI


This dashboard is focused on the retail trade sphere. It already contains relevant KPIs and metrics for that sector, so you just need to download it and use it. Since it’s an opinionated dashboard, you won’t get great customization options. If you are a retailer or trader, you should try this dashboard to track your performance when selling goods or services.


Light Blue React Node.js


It is a React admin dashboard template with a Node.js backend. The template is best suited for KPIs that reflect goals in web app traffic analysis, revenue and current balance tracking, and sales management. Light Blue contains a lot of ready-to-use working components and charts to build the dashboard you need. It’s very easy to customize and implement; both React beginners and professional developers can benefit from this template and keep track of KPIs, metrics, and business data.


Limitless Dashboard


Limitless is a powerful admin template and a best-seller on ThemeForest. It goes with a modern business KPI dashboard that simplifies the processes of monitoring, analyzing, and generating insights. With the help of that dashboard, you can easily monitor the progress of growing sales or traffic and adjust the sales strategy according to customer behavior. Furthermore, the dashboard contains a live update function to keep you abreast of the latest changes.


Cork Admin Dashboard


This is an awesome Bootstrap-based dashboard template that follows the best design and programming principles. The template provides more than 10 layout options and, something rarely seen, a Laravel version of the dashboard. Several pages with charts and two dashboards with different metrics ensure you have the basic elements to build a great dashboard for tracking KPIs.


Paper Admin Template


This template fits you if you are looking for a concrete solution, since Paper comes with eleven dashboards in the package! They are all unnamed, so it will take time to look through them, but less time than building your own dashboard. Every dashboard provides a simple single-screen view of data and allows you to share it with your colleagues.


Pick Admin Dashboard Template


Pick is a modern and stylish solution for the IT industry. It’s a multipurpose dashboard that helps you gain full control over performance.


Able Pro Admin Dashboard


If you believe that the highest-rated products are the most qualified, take a look at Able Pro. Able Pro is the best-rated Bootstrap admin template on ThemeForest. The human eye captures information within a graph blazingly fast! With that dashboard, you can go much deeper into your understanding of KPIs and make the decision-making process much easier.


Architect UI Admin Template


Those who download Architect UI make the right choice. This KPI template was created with hundreds of built-in elements and components, and three blocks of charts. The modular frontend architecture makes dashboard customization fast and easy, while animated graphs provide insights into KPIs.


Flatlogic One Admin Dashboard Template


Flatlogic One is a one-size-fits-all solution for any type of dashboard. It is a premium Bootstrap admin dashboard template released in July 2020. It goes with two ready-made dashboards that serve well as KPI templates, analytics and visits, and it also offers four additional pages with smoothly animated charts for any taste and need. The dashboard is flexible and highly customizable, so you can easily benefit from this template.


Thanks for reading.

You might also like these articles:

14+ Best Node.js Open Source Projects

8 Essential Bootstrap Components for Your Web App

Best 14+ Bootstrap Open-Source Projects



Announcing the Plan for EF7

Today we are excited to share with you the plan for Entity Framework Core 7. This plan brings together input from many stakeholders and outlines where and how we intend to invest in Entity Framework Core 7 (EF Core 7). For brevity, EF Core 7.0 is also referred to as just EF7.

The plan is being tracked through GitHub dotnet/efcore repo issue #26994 and any updates will be posted there.

IMPORTANT This plan is not a commitment; it will evolve as we continue to learn throughout the release. Some things not currently planned for EF7 may get pulled in. Some things currently planned for EF7 may get punted out.

To review the plans for other products, areas, and .NET 7 overall, see ThemesOf.Net.

General information

EF Core 7 is the next release after EF Core 6 and is currently scheduled for release in November 2022 at the same time as .NET 7. There are no plans for an EF Core 6.1 release.

EF7 will align with the .NET support policy and will therefore not be a long-term support (LTS) release.

EF7 currently targets .NET 6. This may be updated to .NET 7 as we near the release. EF7 does not target any .NET Standard version; for more information see the future of .NET Standard. EF7 will not run on .NET Framework.


The large investments in EF7 will fall mainly under the following themes.

Highly requested features

As always, a major input into the planning process comes from the votes for features on GitHub.

JSON columns: Save and query into JSON-based documents stored in relational database columns.

Bulk updates: Efficient, predicate-based updates for many database rows without loading data into memory.

Lifecycle hooks: Allow applications to react when interesting things happen in EF code.

Table-per-concrete-type (TPC) mapping: Map entities in a hierarchy to separate tables without taking the performance hit of TPT mapping.

Map CUD operations to stored procedures: Use stored procedures to manage data modifications.

Value objects: Applications can use DDD-style value objects in EF models.

Support value generation when using value converters: DDD-style encapsulated key types can make full use of automatically generated key values.

Raw SQL queries for unmapped types: Applications can execute more types of raw SQL query without dropping down to ADO.NET or using third-party libraries.

Database scaffolding templates: The code generated by dotnet ef dbcontext scaffold can be fully customized.

.NET platforms and ecosystem

Much of the work planned for EF7 involves improving the data access experience for .NET across different platforms and domains. This involves work in EF Core where needed, but also work in other areas to ensure a great experience across .NET technologies.

Distributed transactions: .NET Framework applications using distributed transactions can be ported to .NET 7 on Windows.

EF Core tooling: Ensure dotnet ef commands are easy to use and work with modern platforms and technologies.

EF Core and graphical user interfaces: Make it easy to build data-bound graphical applications with EF Core.

SqlServer.Core (Woodstar): Fast, fully managed access to SQL Server and Azure SQL for modern .NET applications.

Azure Cosmos DB provider: Continue to make EF Core the easiest and most productive way to work with Azure Cosmos DB.

Migrations experience: Make it easy to get started with migrations and later use them effectively in CI/CD pipelines.

Trimming: Smaller applications that can be efficiently AOT compiled.

Evolve System.Linq.Expression: Use modern C# language features in LINQ queries.

Translate new LINQ operators: Use new LINQ operators when translating LINQ queries to SQL.

Open telemetry for ADO.NET providers: Cross-platform, industry-standard telemetry that can be monitored in your tool of choice.

Enhancements to System.Data: Better low-level data access to benefit all higher-level code.

Research data access for cloud-native: Future evolution of .NET data access that supports modern approaches such as microservices and cloud native.

Clear path forward from EF6

EF Core has always supported many scenarios not covered by the legacy EF6 stack, as well as being generally much higher performing. However, EF6 has likewise supported scenarios not covered by EF Core. EF7 will add support for many of these scenarios, allowing more applications to port from legacy EF6 to EF7. At the same time, we are planning a comprehensive porting guide for applications moving from legacy EF6 to EF Core.


Performance

Great performance is a fundamental tenet of EF Core, lower-level data access, and indeed all of .NET. Every release includes significant work on improving performance.

Performance of database inserts and updates: High performance database inserts and updates from EF Core

TechEmpower composite score: High performing low-level data updates for all .NET applications.

Find out more and give feedback

This post is a brief summary of the full EF7 plan. Please see the full plan for more information.

Your feedback on planning is important. The best way to indicate the importance of an issue is to vote for that issue on GitHub. This data will then feed into the planning process for the next release.

In addition, please comment on the plan issue (#26994) if you believe we are missing something that is critical for EF7, or are focusing on the wrong areas.

The post Announcing the Plan for EF7 appeared first on .NET Blog.

A journey towards SpeakerTravel – Building a service from scratch

For close to two years now, I’ve had SpeakerTravel up & running. It’s a tool that helps conference organizers to book flights for speakers. You invite speakers, they pick their flight of choice (within a budget the organizer can specify), and the organizer can then approve and book the flight with a single click.

Why I started building a travel booking tool

How flight tickets work…
Global Distribution System (GDS)
Flight search affiliate programs
Online Travel Agencies (OTA)
A travel agent from Sweden

The business side…
Legal requirements

Building SpeakerTravel
Attempt at a single-page application…
…replaced with boring technology
The domain model
Ready for take-off!
COVID-19 💊 and working on the backlog

What’s next?
What’s next on the technical side?

What’s next on the business side?
Why not pivot?

Conclusion and Takeaways

In this post, I want to go a bit into the process of building this tool. Why I started it in the first place, how it works, a look at it from the business side, and maybe a follow-up post that covers any questions you may have after reading.

There’s also a table of contents, so brace yourself for a long read!

Why I started building a travel booking tool

Before COVID threw a wrench in offline activities, our user group was organizing CloudBrew, a 2-day conference with speakers from across the world (mostly Europe).

Every year, I was complaining on Twitter around the time the travel for those speakers needed to be booked. Booking flights for a speaker would mean several e-mails back and forth about the ideal schedule, checking travel budgets, and then sending the travel confirmation. And because our user group is a legal entity, we’d need invoices for our accountant, which meant contacting the travel agency and more e-mails.

When we started, we did all of this for 5 speakers, which was doable. Then we grew, and in the end needed to do this for 19 speakers. Madness!

That got me thinking, and almost pleading for someone to come up with a solution:

Startup idea: “Give travel site a bunch of e-mail addresses and budgets. Site lets those people select flights within that budget. I say yes/no and get billed.” – Would love this for conference organizing!

— Maarten Balliauw (@maartenballiauw) July 4, 2018

Alas, by the time we had that 19-speaker booking coming up, no such solution had appeared, and we were once again doing the manual process.

How flight tickets work…

In the back of my mind, the idea stuck. Would it be possible to build a solution to this problem, and make booking travel for speakers at our conference an easier task?

Of course, building the app itself would be possible. It’s what we all do for a living! But what about the heart of this solution… You know, actually booking a flight ticket in an automated way?

After researching and reading a lot, it seems that booking a flight ticket always consists of 4 steps:

You search an inventory of available seats for a flight combination;
For that flight combination, a price is requested;
For that flight combination, a booking is created;
For that booking, tickets are issued.

Book flights via any website, and you’ll go through these steps. There’s a reason for this:

The flight inventory is really a big database with all seats on all (or at least, many) airlines. As far as I could find, airlines populate this database a couple of times a year. It does not contain prices, just seats and the conditions to book them.
Pricing checks a given seat with the airline (or other party in between). Requesting a price means the airline can give an actual price for a seat. They can also track interest in a specific seat/group of seats, and price accordingly.
Booking reserves the seat, and removes that seat from the big flight inventory database. Ideally, booking has to happen soon after pricing. If no tickets have been issued after a couple of hours, the seat is made available again.
Issuing tickets confirms the seat, and gives you the actual ticket that can be used to board a plane. Having these two steps separate means that in between, a booking website can ask you for payment, and only when that is confirmed, issue tickets.

So in short, I needed something that could perform all of these steps somehow. More research!
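Sketched in code, those four steps could look like this. Note that FlightApi and every method name below are illustrative placeholders, not a real vendor SDK:

```typescript
// Hypothetical flight API covering the four booking steps described above.
interface FlightApi {
  search(from: string, to: string, date: string): Promise<string[]>; // combination ids
  price(combinationId: string): Promise<number>;                     // actual fare
  book(combinationId: string, passenger: string): Promise<string>;   // booking id
  issueTickets(bookingId: string): Promise<string[]>;                // ticket numbers
}

async function bookFlight(
  api: FlightApi,
  from: string,
  to: string,
  date: string,
  passenger: string,
) {
  const combinations = await api.search(from, to, date); // 1. search inventory
  const chosen = combinations[0];                        //    (pick a flight)
  const fare = await api.price(chosen);                  // 2. request a price
  const bookingId = await api.book(chosen, passenger);   // 3. reserve the seat
  const tickets = await api.issueTickets(bookingId);     // 4. issue tickets
  return { fare, bookingId, tickets };
}
```

In a real flow, payment collection would typically sit between steps 3 and 4, which is exactly why booking and ticket issuing are separate steps.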

Global Distribution System (GDS)

One of the first services that popped up were the various Global Distribution Systems (GDS) for air travel. The world has many of them. You may have heard of Amadeus, Sabre, or Travelport, but there are others.

These GDS are an interoperability layer between inventories from airlines, travel agents, and more. They have software in place to handle interactions between all parties involved (airlines, travel agents, hotels, …), and until a few years ago, were always involved in booking flights. Nowadays, airlines often sell their inventory directly, without these middle-men involved.

I explored various GDSs, and quickly found that this was not the way to go. First, they expect certain sales volumes. I contacted one of them, and essentially got laughed at when I said I wanted to book around 20 flights a year. Second, from a technical point of view, a lot of their documentation talked about XML-over-SOAP, WS-* standards, and all that. Been there, done that, but I prefer the more lightweight integrations of recent years.

Flight search affiliate programs

There are a number of affiliate programs out there that provide an API that you can use to search flights (including an approximate price), and give you a link to the booking site. Examples are Travelpayouts and SkyScanner.

The conditions for using these APIs were somewhat restrictive for my use case, but e-mailing one of them confirmed this use case was something that could fit.

Let the speaker search and request a flight, and then the organizer would click through and make the booking. This would still mean entering credit card details and invoice address a number of times, but it could work.

Online Travel Agencies (OTA)

Somewhere in between GDS and affiliate programs, there are the Online Travel Agencies (OTA) and the like. These companies are travel agents, and have contracts with zero, one, or several GDSs, airlines, and more.

Searching this space, I found a couple of them that had APIs available for the above 4 steps – which seemed promising as it would give full control over the booking process (including automation of sending the correct invoice details when purchasing a ticket):


After contacting them all, some responded only after a couple of weeks, others had requirements in terms of number of tickets sold (volume), and this got me disillusioned.

A travel agent from Sweden

Having talked with a couple of folks about this idea and about finding an API, a friend suggested I contact a travel agent they knew well, as they might be able to help.

We had a long call about the idea, and they were very helpful in providing some additional insights into the world of flight booking. They were using the TravelPort GDS themselves, and were building their own API on top of that to power their own websites. Unfortunately, they weren’t sure it would ever get completed, so this wasn’t a viable solution.

Nevertheless, lesson learned: it never hurts to talk, even if it’s just for sharing insights and learnings.


Some weeks after my disillusion with OTAs, I was searching the Internet once more and found another service:

I decided to get in touch with some questions about my use case and low volumes. With zero expectations: I considered this my last attempt before shelving the entire idea.

Responding in 3 days would have been a record, but these folks responded in 30 seconds (!). A good 10 minutes later I was on Skype with their founder. We chatted about the service I wanted to build, and he even gave some thoughts on how to implement certain parts and workflows.

On their website, a 30-day trial of their staging environment was promoted, and their founder confirmed this was flexible if needed. So I decided to go with this and experiment with the API to see what was possible and what was not, and maybe finally start building this application!

The business side…

With the AllMyles API docs in hand, I set out to write some code and experiment with their staging environment. All seemed to work well for my use case.

There was one thing in the way still… To get production access, a one-time certification fee of 3000 EUR would have to be paid. Definitely better than the volume requirements of other solutions, but still quite steep for booking 20 flights a year.

What if this tool would be something that can be used by any conference out there, and I charge a small fee per passenger to cover this certification fee and other costs?

Time to put on the business hat.


A couple of years ago, a friend recommended reading The Millionaire Fastlane by MJ DeMarco. It’s a good book with ideas on getting out of the rat race that controls many of us, and very opinionated. You may or may not like this book. There’s one idea from the book that stuck in my head though: CENTS.

CENTS is an acronym for the five aspects on which any idea can be vetted for viability as a business. It’s not a startup canvas or anything, just a simple way of checking if there is some viability:

Control – Do you control as many elements of the business as possible, or would something like a price or policy change with a vendor mess with your business?

Entry – How hard is entering this market? Can anyone do it in 10 minutes, or would they need a lot of time, money, and other resources?

Need – Does anyone actually need this thing you are thinking about?

Time – Will you be converting time into money, or can you decouple the two and also earn while you’re asleep?

Scale – Can you see this scaling up? Are there pivots that would work?

Before diving into the deep and coughing up that certification fee (and building the tool), I wanted to check these…

For flight booking, Control is never going to be the case. Someone is flying the airplane, someone handles booking. There are parties in between you and that flight, and there’s no way around that. From my research, I knew if really needed I could find another OTA or GDS, and go with that, so I felt there was just enough control to give this aspect a green checkmark.

Entry was steep enough: that certification fee, research, building the app. Something everyone could overcome, but definitely not something everyone would do. As an added bonus, I had to figure out some tricks to find the same flight twice: once for the speaker making the search, once for the conference organizer confirming the booking. Pricing and booking have to be close together (as in, 20-30 minutes), but for SpeakerTravel there could even be a few days between both parties doing this. In any case, it requires some proper magic to get this right and find the same (or a very comparable) seat. So Entry? Check!

The Need aspect was easy. There are lots of conferences out there that are probably going through the same pain with booking flights. Check!

Same with Time. This would be a software-as-a-service, that would allow folks to do self-service booking and payments, even when I’m not around. Check!

Finally, Scale. This solution could work for IT conferences, medical conferences, pretty much anything where a third party would pay for someone else’s flights. Business travel could be a pivot, where employees could book and employers would pay. Another pivot could be handling travel for music festivals, etc. So definitely not a hurdle in the long run!

In short: it made sense CENTS!

Legal requirements

Building a tool for our own conference is one thing, building it for third-party use is another. Could I sell flight tickets from my Belgian company?

Instead of trying to figure this out myself, I asked a lawyer for advice. The response came in (together with an invoice for their research time), and for my Belgian company there were a few things to know about:

Flights-only is fine. You’re never selling flights, you are facilitating a transaction between the traveler and the airline.
If you combine flights and hotels, flights and rental cars, etc., you’re selling travel packages. Travel packages have stricter requirements.

Great! So I could go ahead with flights (and only flights), and start building the app!


While building the app (more on that later), I was also thinking about how to handle flight ticket payments… I’d have a fixed fee per traveler, plus the flight fare itself (variable, and one I’d have to pay directly to AllMyles).

The two-step ticket issuing seemed like a perfect place to shove in a payment gateway, for example Stripe, and collect payment before making the actual booking through the API.

Unfortunately, none of the payment gateways I found let you do “risky business”. All of them have different lists of business types that are not allowed, and travel is always on those lists. One payment gateway from The Netherlands confirmed they could support my scenario, but after requesting written confirmation that stance changed. In other words: credit cards were not an option.

For now, I decided to go with an upfront deposit, to ensure flight fares can be paid when someone confirms their booking.

Building SpeakerTravel

With a good idea in mind, and a blank canvas in front of me, it was time for the excitement of creating a new project in the IDE!

The most important question: Which project template to start with?

Attempt at a single-page application…

Since I’d already built some API integration with AllMyles in C#, at least part of the application would probably be ASP.NET Core. With close to no experience with single page applications at the time, I thought this would be a good learning experience!

So I went with an ASP.NET Core backend, IdentityServer, and React.

About an hour of cursing on a simple “Hello, World” later, React was replaced with Vue.js which seemed easier to get started with. I did have to replicate the ASP.NET Core SPA development experience (blog post) to support Vue.js, but that was fun to do and write about.

What wasn’t fun, though, was how slow things were going. Being new to Vue.js, a lot of things went very slowly while building. After two weeks of spending evenings on just a login that worked smoothly, I started wondering…

“Am I doing this to solve a problem, or to learn new tech?”

Building this thing over the weekend and in the evening hours, I reconsidered the tech stack and started anew.

…replaced with boring technology

This time, I started with an ASP.NET Core MVC project: individual user accounts using ASP.NET Core Identity, Entity Framework, and SQL Server. A familiar stack for me, and a stack in which I was immediately productive.

A few hours into development, I had the login/register/manage account pages customized. The layout page was converted to load a Bootswatch UI theme (on top of Bootstrap), and I started building the flows of inviting speakers, searching flights (with 100% made-up data), approving and rejecting flights, and all that. This was finished in six weeks or so, and then another few weeks went into properly integrating with AllMyles’ staging environment.

While developing the app, a lot of new ideas and improvements popped up. I tried to be ruthless in asking myself “do I really need this for version 1?”, and log anything else in the issue tracker and pick it up in the future. This definitely helped with productivity.

Some fun was had implementing tag helpers to show/hide HTML elements (blog post), which really works well for making certain parts of the UI available based on user permissions and roles.

The first version was ready near the end of August 2019, including a basic product website that is powered by a very simple Markdown-to-HTML script that seems to work well.
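For the curious, a Markdown-to-HTML script in that spirit can indeed be tiny. The sketch below is our own minimal illustration (headings and paragraphs only), not the actual script powering the product website:

```typescript
// Minimal Markdown-to-HTML conversion: blocks separated by blank lines,
// "#"-style headings become <h1>..<h6>, everything else becomes <p>.
function markdownToHtml(markdown: string): string {
  return markdown
    .split(/\n{2,}/)                       // blocks are separated by blank lines
    .map((block) => block.trim())
    .filter((block) => block.length > 0)
    .map((block) => {
      const heading = block.match(/^(#{1,6})\s+(.*)$/);
      if (heading) {
        const level = heading[1].length;   // number of '#' marks the level
        return `<h${level}>${heading[2]}</h${level}>`;
      }
      return `<p>${block}</p>`;
    })
    .join("\n");
}

console.log(markdownToHtml("# SpeakerTravel\n\nBook flights for speakers."));
// <h1>SpeakerTravel</h1>
// <p>Book flights for speakers.</p>
```

A real site would add links, emphasis, and HTML escaping, but for a handful of static product pages this level of simplicity is the whole point.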

The application itself was built and deployed on the following stack:

ASP.NET Core MVC + Razor pages for the scaffolded identity area, on .NET Core 2.1
Bootstrap and Bootswatch for UI
A sprinkle of jQuery

Hangfire for background jobs (the actual booking, sending e-mails, anything that’s async/retryable)
SQL Server LocalDb for development, Azure SQL Database for production
Azure Web Apps for the app and product website
Private GitHub repository
Azure DevOps to build and deploy
SendGrid for sending e-mails

Overall, this was and is a very familiar stack for me, and as a result a stack in which I was immediately productive. Server-side rendering is fine 😉 And .NET is truly great!

When you have an idea you want to build out, I can highly recommend going with what you know – unless of course the goal is exploring another tech.

The domain model

When I asked folks on Twitter for what they wanted to see in this post, Cezary Piatek wanted to know about the domain model.

From a high-level, the domain model of this application is simple. There’s an Event that has Travelers, and at some point, they will have a Booking.

For every traveler, the system keeps a TravelerStatus history, which represents state transitions. From invited, to accepted, to bookingrequested, to confirmed/rejected/canceled, to ticketsissued, and potentially back to the start where a rejected traveler goes to accepted again so they can make a new search.

The TravelerStatus history is evaluated for every traveler, and the system takes these into account. In fact, they are somewhat visible in the application UI as well (though some of these state transitions are combined for UX purposes).

When a Traveler requests a booking, some PII is stored. Passenger name, birth date, and whatever the airline requires to book a given seat. This data is stored as a JSON blob – the fields are dynamic and may differ depending on the airline. This data is always destroyed after tickets are issued, after the booking request was rejected, or when the booking was still waiting for approval but the event concluded more than 10 days ago.

For flight search and booking, the domain model is a 1:1 copy of what AllMyles has in their API. Looking at other APIs, it’s roughly the standard model in the world of flights. A Search returns one or more SearchResults. Each of those has one or more Combinations, typically flights that have the same conditions and price, but different times. E.g. a shuttle flight from Brussels to Frankfurt may return 3 combinations here – same price and conditions, just 3 different times during the day. A Combination can also have upgrade and baggage options. The booking itself is essentially making a call that passes a given Combination identifier (and whatever options are selected on top).

Ready for take-off!

The app was deployed (targeting AllMyles staging), and I requested certification (coughing up the initial fee – no turning back now!). This process took a couple of days, but at some point I was given production access and SpeakerTravel was live!

This was right on time for our CloudBrew conference in 2019, and it was really exciting to see folks request flights, book them via the API, and receive actual flight tickets sent out by airlines. Not to mention, it was much easier in terms of workload and back-and-forth compared to the manual process that triggered this entire endeavour! And speakers themselves also enjoyed this workflow:

Massive props to @CloudBrewConf – their travel booking system for the speakers has really raised the bar!

— Paul Stack (@stack72) August 14, 2019

Thanks, Paul 🤗

Very quickly, a couple of organizer-friends jumped aboard as well. And for a conference I was attending myself, I used it to book a flight in my own name. Pretty cool!

First time taking a flight booked through my own @SpeakerTravel_ – pretty cool to fly on a ticket you issued yourself 😎

— Maarten Balliauw (@maartenballiauw) January 15, 2020

A couple of conferences later, some bugs were ironed out, some feature requests were handled, and the certification fee was covered. Business-wise, and conveniently brushing aside time spent building this thing, SpeakerTravel was break-even!

COVID-19 💊 and working on the backlog

And then, half a year after release, a pandemic hit the world. Conferences all went online, travel virtually halted, and no new conferences onboarded SpeakerTravel for a long time.

This was a bummer, but a good time to work on that backlog of features I wanted to add. Some technical debt got fixed, and thanks to fast release cadences in both the front-end and .NET world, I’ve been upgrading a lot of things, many times.

Today’s tech stack:

ASP.NET Core MVC + Razor pages for the scaffolded identity area, on .NET 6.0 RC2
Bootstrap and Bootswatch
A sprinkle of jQuery (that I want to replace with HTMX)

Hangfire for background jobs (the actual booking, sending e-mails, anything that’s async/retryable)
SQL Server LocalDb for development, Azure SQL Database for production
Product website and application are Docker images now, deployed to Azure Web Apps for Containers

JetBrains Space for Git, CI/CD, and container registry

Mailjet for sending out e-mails. Smaller company, better support.

Note: If you’re interested in seeing CI/CD with Space, check this Twitter thread.

What’s next?

Good question! I think this question can be split as well…

What’s next on the technical side?

Let’s start with this one. As in the past months, working on some items from the backlog and just keeping things up to date. Very high on my wishlist is ripping out jQuery and replacing the few bits that require client-side interactivity with HTMX.

One of the things I do want to try at some point is seeing if I can run the entire stack on Kubernetes, but that’s purely out of personal interest.

Any other nerd snipes are welcome in the comments!

What’s next on the business side?

What’s immediately next is definitely uncertain. We’re still in a pandemic, and while parts of the world seem to be evolving in the right direction for SpeakerTravel, it’s unsure when in-person conferences will pick up again.

Apart from infrastructure, there’s no real cost to running the application, so I can be patient on that side, keep pitching it to anyone I meet, and provide good support for those who do sign up in the meantime.

Speaking of which, I’m super happy that since September 2021, a few conferences have been using the product for in-person travel!

Why not pivot?

A question I got recently was: Why not pivot to business travel? – great idea!

Earlier in this post, I described the model where employees could search and pick travel options, and the company can approve and pay. This would indeed be a great pivot, but there are a couple of things holding me back on this:

It’s a very crowded market (with some big players like American Express). This is not a big issue though (it validates that there is a market), but it would take quite some effort to get traction.
I’d have to expand from flights into flights + hotels + cars. While possible in terms of APIs, it does require fulfilling some extra regulations.

Both of these would mean going bigger than what I currently want to handle.

Conclusion and Takeaways

Sometimes, you have a story in you that you just want to write down. This was one of those.

Instead of sharing the event of having SpeakerTravel online, I wanted to share the story about the process that brought it about. Maybe we all focus on the event too much, and not enough on the process towards the event.

Social media consists of short bits, while blogs, articles and tutorials about the process have so much value. Leave breadcrumbs for those who will be on a similar path in the future.

Speaking of that: if there’s anything in this blog post you would like to see a follow-up on with more details, let me know via the comments.

Take care!

Technical Steering Group

With all the exciting changes in the .NET Ecosystem and the opening up of the platform to individuals and companies outside Microsoft, the .NET Foundation has recognized that it’s important that we help open up how technical decisions are made in the .NET platform as well as keep everyone on the same page as to the direction of the combined projects that make up the core components of the .NET platform. Therefore, today we are creating a new working group in the .NET Foundation to fulfil this role – the Technical Steering Group.

I am pleased to announce that Red Hat, JetBrains and Unity have agreed to join Microsoft on the .NET Foundation Technical Steering Group. This marks an important milestone in opening the technical decision making processes of the core .NET components and also demonstrates the commitment of these partners in helping to make sure .NET continues to be an open, innovative and exciting development platform.

When I talked with developers in the early days of .NET there was a common misconception that there was a single all-powerful .NET team, somewhere hidden in a building on the Microsoft campus, working away on all the various .NET APIs that they used. However, that’s never been the case. While there is a core team of engineers working on the compiler, languages, core libraries, web frameworks etc., there have also always been other teams in Microsoft working on other parts of .NET. These teams are spread across many groups and also spread across the globe. To ensure there was consensus in how the platform should move forward, and co-ordination when it came to changes and major updates, these teams would keep in touch via email, conference calls and occasional meetings.

In many ways, the Technical Steering Group formalizes the existing processes for co-ordination and strong technical review that happened between core .NET project teams in the past, and opens them up so that the leaders of all core .NET components in the .NET Foundation are part of the Technical Steering Group, along with other companies and organizations who are basing their developer tools and products on a deep integration with the .NET platform.

The Technical Steering Group does not replace the efforts that individual projects make to ensure open community involvement in verifying their plans (such as the API review process from the CoreFX project or the C# Language Design process), but exists to ensure that all the core components are aligned with each other.

As discussed earlier, as well as the leaders of the core .NET components in the foundation, the following companies have also announced today that they are joining Microsoft in the Technical Steering Group:

Red Hat are leading the charge when it comes to helping companies host .NET workloads on Linux with Red Hat Enterprise Linux (RHEL). Microsoft have a good, close partnership with Red Hat and they have already been involved in discussions around bringing features in .NET to Linux but the Technical Steering Group increases the strength of this relationship and opens it up to all the teams working on core .NET technologies – not just between individual teams in Red Hat and Microsoft.

JetBrains have built tools that .NET developers love for many years, including the hugely productive Visual Studio add-in ReSharper. At the start of the year, JetBrains also announced Project Rider, a cross-platform C# IDE, based on the IntelliJ Platform and using ReSharper technology. While ReSharper is hosted inside Visual Studio, Project Rider is a full, standalone IDE that runs on Mac OS X and Linux as well as Windows. Project Rider has deep integrations across the .NET stack to allow it to make programming and debugging so productive, relying heavily on Mono and .NET Core.

Unity is far and away the world’s favourite game engine for creating mobile games on iOS, Android and Windows Phone. They are also leading the VR revolution with Native Oculus Rift, Gear VR, and Playstation VR support already available and Microsoft HoloLens + Steam VR/Vive on the way. Unity power many of the console and desktop games loved by gamers worldwide. The C# scripting engine at the heart of Unity is used by games developers across the world and continuing to keep this on the cutting edge of .NET development is critical to maintaining the performance and productivity for developers building on Unity. It also helps ensure all the blazing fast speed improvements being made flow both ways.

The excitement and innovation around .NET keeps growing and growing. I’m looking forward to seeing what this increased openness, co-ordination and collaboration will bring. Please join me in welcoming Red Hat, JetBrains and Unity to the Technical Steering Group.

Martin Woodward
Executive Director | .NET Foundation

Do you have an exit strategy?

It’s an extremely common problem in legacy code bases: a new way of doing things was introduced before the team decided on a way to get the old thing out.

Famous examples are:

Introducing Doctrine ORM next to Propel
Introducing Symfony FrameworkBundle while still using Zend controllers
Introducing Twig for the new templates, while using Smarty for the old ones
Introducing a Makefile while the rest of the project still uses Phing

And so on… I’m sure you also have plenty of examples to add here!

Introducing a new tool while keeping the old one

For a moment we are so happy that we can start using that new tool, but every time we need to change something in this area we have to roll out the same solution twice, for each tool we introduced. Something changes about the layout of the site? We have to update both Twig and Smarty templates. Something changes about the authentication logic? We have to change a Symfony request listener and the Zend application bootstrap file too. There will be lots of copy/pasting, and head scratching. Finally, we have to keep both dependencies up-to-date for a long time.

Okay, everybody knows that this is bad, and that you shouldn’t do it. Still, every day we tend to make problematic decisions like this. We try to bridge some kind of gap, but that leaves us with one extra thing to maintain. And software is already so hard (and expensive) to maintain…

Multiple versions in the project

The same goes for decisions at a larger scale. How many projects have a V2 and a V3 directory in their code base? One day the developers wanted to escape the mess by creating this green spot next to the big brown spot. Then some time later the same happened again, and maybe even again.

The problem with these decisions: there is usually no exit strategy. A new thing is created next to an old thing. The old thing will be forever there. Often developers defend such a decision by saying that the old things will be migrated one by one to the new thing. But this simply can’t be true, unless:

A very serious effort is made to do so (but this will be incredibly expensive)
A long-term commitment is made to keep doing this continuously (alongside other important work)
There isn’t much to migrate anyway (but that usually isn’t the case)

On an even larger scale, teams may want to rewrite entire products. A rewrite suffers from all the above-mentioned problems. And we already know that they usually aren’t successful either. To be honest, I’ve been part of several successful rewrite projects, but they have been very expensive, and they were extensively redesigned. They didn’t go for feature parity, which may have contributed largely to their success.

Class and method deprecations

It’s not always about new tools, new libraries, new project versions, or rewrites. Even at a much smaller scale developers make decisions that complicate maintenance in the long run. For instance, developers introduce new classes and new methods. They mark the old ones as @deprecated, yet they don’t upgrade existing clients, so the old classes and methods can never be deleted and will be dragged along forever.

We want the new thing, but we don’t want to clean up the old mess. For a moment we can escape the legacy mess and be happy in the green field, but the next day we see the mess around us and realize that we have to maintain even more code today than we did yesterday.

Design heuristics

So at different scales we make these design decisions that actually increase the already unbearable maintenance burden. How can we stop this?

We have to make better decisions, essentially using better heuristics for making them. When introducing a new thing that is supposed to replace an old thing we have to keep asking ourselves:

Do we have a realistic exit strategy for the old thing?
Will we actually get the old thing out?

If not, I think you owe it to the team to consider fixing or improving the old thing instead.

Quick Testing Tips: Self-Contained Tests

Whenever I read a test method I want to understand it without having to jump around in the test class (or worse, in dependencies). If I want to know more, I should be able to “click” on one of the method calls and find out more.

I’ll explain later why I want this, but first I’ll show you how to get to this point.

As an example, here is a test I encountered recently:

public function testGetUsernameById(): void
{
    $userRepository = $this->createUserRepository();

    $username = $userRepository->getUsernameById(1);

    self::assertSame('alice', $username);
}

The way I read this:

* Ah, we’re testing the UserRepository, so we instantiate it.
* The factory method probably injects a connection to the test
* database or something:
$userRepository = $this->createUserRepository();

* Now we fetch a username by its ID. The ID is 1. That’s the
* first time I see it in this test. This probably means that
* there is no user with this ID and the method will throw
* an exception or return a default name or something:
$username = $userRepository->getUsernameById(1);

* Wait, the username is supposed to be “alice”?
* Where did that come from?
self::assertSame('alice', $username);

So while trying to understand this test that last line surprised me. Where does Alice come from?

As it turns out, there is a setupTables() method which is called during the setup phase. It populates the database with some user data that is used in various ways by the test methods in the class.

private function setupTables(): void
{
    $this->insert('users', [
        ['user_id' => 1, 'username' => 'alice', 'password' => 'alicepassword'],
        ['user_id' => 2, 'username' => 'bob', 'password' => 'bobpassword'],
        ['user_id' => 3, 'username' => 'john', 'password' => 'johnpassword'],
        ['user_id' => 4, 'username' => 'peter', 'password' => 'peterpassword'],
        // …
    ]);
}

There are some problems with this approach:

It’s not clear which tests rely on which database records (a common issue with shared database fixtures). So it’s hard to change or remove tests, or the test data, when needed. As an example, if we remove one test, maybe some test data could also be removed but we don’t really know. If we change some test data, one of the tests may break.
It’s not clear which of the values is actually relevant. For example, we’re interested in user 1, ‘alice’, but is the password relevant? Most likely not.

The first thing we need to do is ensure that each test only creates the database records that it really needs, e.g.

public function testGetUsernameById(): void
{
    $this->insert('users', [
        'user_id' => 1,
        'username' => 'alice',
        'password' => 'alicepassword'
    ]);

    $userRepository = $this->createUserRepository();

    $username = $userRepository->getUsernameById(1);

    self::assertSame('alice', $username);
}

At this point the test is already much easier to understand on its own. You can clearly see where the number 1 and the string ‘alice’ come from. There’s only that ‘alicepassword’ string that is irrelevant for this test. Leaving it out gives us an SQL constraint error. But we can still get rid of it here by extracting a method for creating a user record, moving the insert() out of sight:

public function testGetUsernameById(): void
{
    $this->createUser(1, 'alice');

    $userRepository = $this->createUserRepository();

    $username = $userRepository->getUsernameById(1);

    self::assertSame('alice', $username);
}

private function createUser(int $id, string $username): void
{
    $this->insert('users', [
        'user_id' => $id,
        'username' => $username,
        'password' => 'a-password'
    ]);
}

Going back to the beginning of this post:

When I read a test method I want to understand it without having to jump around in the test class (or worse, in dependencies).
If I want to know more, I should be able to “click” on one of the method calls and find out more.

With just a few simple refactoring steps we’ve been able to achieve these things. As a consequence we achieve the greater goal, the reason why I stick to these rules: each test method is now self-contained, meaning we can delete or change any of them without influencing the other test methods.

At the point where a test is self-contained like this, I try to go the extra mile by rephrasing it using the Given/When/Then syntax:

Given the user with ID 1 has username “alice”
When getting the username of the user with ID 1
Then the username is “alice”

In my opinion this doesn’t add much insight and only shows that the repository can do a SELECT query for something that was just INSERTed. The big question here is: why do we even have to find out what the username is? Once we know the answer, we should codify it in a test. So instead of testing that single repository method, I’d rather see it being used in its bigger context and read the test for that. E.g.

Given user 1 has username “alice”
When we send a mail to this user
Then the footer of the mail shows “To find out more, log in with your username: alice”

This is actually much better, since this test keeps a much safer distance from the subject under test; it leaves the design decision to use a repository method for finding the username as an implementation detail.

This post has been inspired by some development coaching work I’m doing for PinkWeb at the time of writing. Check out their vacancies if you’d like to join the team as well!

Tips and Tricks to Ace the Certified Kubernetes Application Developer

I recently passed the Certified Kubernetes Application Developer exam and thought to share some tips and tricks that might come in handy if you are also planning to take the exam in the future.

📔 Background

About a month ago, I decided to learn more about Kubernetes as it would be really useful for the stuff I’m working on at GitHub daily. Prior to that, I was always fascinated by Kubernetes but never got the chance to work on an actual system that used it. I knew how it worked from a 10,000-foot view, but didn’t know the core components or basic constructs, and couldn’t really do anything with it.

Having taken the exam, I’m quite comfortable navigating through Kubernetes and now it makes sense when I’m doing something with it, rather than merely following some commands.

CKAD is a hands-on exam and managing your time is absolutely crucial. I hope you find the following tips useful✌️

🗒️ Summary of the exam

To summarize the key facts about the CKAD exam:

Passing score is 66%
2 hours duration, comprised of 19 questions
Questions will have varying weights (from 2% – 13%)
You can open only one extra browser tab, for browsing the Kubernetes documentation
Remotely proctored

💻 Aliases and bash tricks

This is a really important first tip that I can’t recommend enough. I was typing the full kubectl command during the study phase, but later set up an alias and used just k while practising, simply to cut down the time spent typing commands.

alias k=kubectl

Initially, it will take a few seconds to type this out but it will pay dividends throughout the exam. Here are a few more if you are interested. You don’t need to use everything in here though. In fact, I only used the above alias.

Feel free to mix and match the commands you are comfortable with 👍

alias kd='kubectl describe'
alias kr='kubectl run'
alias kc='kubectl create'
alias ke='kubectl explain'
alias kgp='kubectl get pods'
alias kgs='kubectl get svc'

You don’t need to be a Linux guru to take the exam, but remember that you will do it in a Linux environment (probably Ubuntu). So it helps to know a few basic Bash commands if you are coming from Windows.

cp – Copy files

mv – Move/Rename files

mkdir – Create new folder

ls – List files

rm – Remove/Delete files

grep – Search through text. Useful when you want to filter a list of pods. Eg: kubectl get pods | grep -i status:

Ctrl+R – To do a reverse search to find a command you have previously run
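Since grep behaves the same on any text, you can practise the filter on made-up output before the exam. A quick sketch (the pod names and statuses below are invented for illustration, not from a live cluster):

```shell
# Sample `kubectl get pods`-style output piped through grep;
# only the lines containing "Running" survive the case-insensitive filter.
printf 'web-1   1/1   Running\nbatch-1 0/1   Completed\nweb-2   1/1   Running\n' \
  | grep -i running
```

Against a real cluster you would simply replace the printf with `kubectl get pods`.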

Extra tip: Use short names of resources whenever possible.

Not sure what the short names are? You can check them with the kubectl api-resources command.

⌨️ Get a good grasp of VIM

I found having previous experience in VIM came in handy. However, you don’t need to be a master at it. Using nano would be fine too if you are comfortable with it.

Take the time to add the following to your VIM profile before attempting any questions.

vi ~/.vimrc

Add the following lines and save it.

set expandtab
set tabstop=2
set shiftwidth=2

These settings will save you from indentation issues and weird syntax issues while working with YAML files during the exam.

Here are some other commands that may be of help if you are not familiar with VIM.

/ – Search through text. Also, use n to go to the next result.

dd – Delete a line

u – Undo

Shift+A – Go to the end of the line and enter the INSERT mode

gg – Go to the beginning of the file

G – Go to the end of the file

o – Go to the next line and enter INSERT mode

v – Enter VISUAL mode. You can select a block of lines with the arrow keys or the j and k keys. You can copy with y and paste with p. Also, you can indent a block with Shift + > to the right and Shift + < to the left

And finally, while you are in NORMAL mode you can type ZZ to quickly save and go back to the terminal without having to type :wq. How cool is that? ⚡

☄️ Mastering the imperative commands

You would come across many questions where you would have to create pods, deployments, services etc. In such cases, don’t bother writing up YAML definitions from scratch – or even finding the relevant reference in the k8s docs.

You can save a lot of time by using imperative commands. For instance, if you are tasked to create a pod with nginx as the image, tier=frontend as the label, and port 80 exposed:

kubectl run tmp --image=nginx --labels tier=frontend --port 80

Say you are asked to expose a deployment nginx with a NodePort service called nginx-svc,

kubectl expose deploy nginx --name=nginx-svc --port=80 --target-port=80 --type=NodePort

But what if you can’t get everything included in a single command? You can use --dry-run=client -o yaml > tmp.yaml to export it to a file before creating the resource.
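Putting that together, a minimal sketch of the dry-run workflow (the resource name tmp here is made up for illustration):

```shell
# Generate a starting manifest without creating anything on the cluster
kubectl run tmp --image=nginx --dry-run=client -o yaml > tmp.yaml
# Edit whatever can't be expressed imperatively (volumes, probes, etc.)
vi tmp.yaml
# Then create the resource from the edited file
kubectl apply -f tmp.yaml
```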

Oh btw, if you need to delete a pod quickly you can use the --grace-period=0 --force flags to delete it immediately without waiting.

kubectl delete po <pod name> --grace-period=0 --force

🤔 When in trouble

Pay attention to the weight of each question and form a rough idea of how long it will take you to solve it. I remember looking at a question that was quite long and had a fair bit of configuration to be done. But its weight was only 2% 😆 I noted it down on the provided notepad and skipped it (you can also flag a question). The next question was worth 4% and was really, really easy! I hope you get the point.

💡 Don’t be afraid to skip and revisit questions.

If you forget how something is placed in a resource definition, you can use kubectl explain <resource name> --recursive | less to find what you are looking for.

Another useful tip I can give you is the kubectl <command> -h flag. You can use it like so.

k run -h

☝️ A note on clusters & namespaces

This is also a very important point you should pay attention to. At the top of each question, you will be given a command to set the current context. Make sure to run it for each question, as different questions will be in different clusters.

Another point: pay attention to any namespaces in the given question text. Sometimes it will be worded within the question; sometimes it will be at the bottom of the question as a separate note!

In questions where you have to SSH into servers, please make sure to remember (or note down) which cluster and server you are in. And remember to exit out of it before going to the next question.

📄 Leverage the docs

In certain cases, it’s better to visit the docs than to spend time figuring out what needs to be done. For instance, if there’s a question on setting up a Persistent Volume, the question will also have a section to create a Persistent Volume Claim and a Pod that uses it.

Go to the docs, type pv in the search bar and click on the link that says “Configure a Pod to Use a PersistentVolume for Storage”. And yes, you need to know where things are within the K8S docs!

👟 Practice, practice, practice

Speed is key to the exam. Although you get 2 hours, it will just fly! 🦅

When you pay for the exam you will get 2 free mock exam sessions before sitting the real exam.

As Jeremy Clarkson would say, “SPEEEEEEEEED!!!!” 😂

Here are some more exercises I used. [Free] [Free] [Paid] [Free]

👋 Conclusion

Do you know what the hardest thing to do after the exam is? Waiting for the results! 🤣 It might take up to 24 – 36 hours to get your result. Here’s my certificate if you are interested.

I hope you found these tips helpful. Feel free to comment below if you have got any tips and tricks too! Good luck with your exam!!! 🎉