Securely validate business application resilience with AWS FIS and IAM

To avoid the high costs of downtime, mission-critical applications in the cloud need to be resilient against degradation of cloud provider APIs and services.

In 2021, AWS launched AWS Fault Injection Simulator (FIS), a fully managed service for running fault injection experiments on AWS workloads to improve their reliability and resilience. At the time of writing, FIS lets you simulate degradation of Amazon Elastic Compute Cloud (EC2) APIs using API fault injection actions, and thus explore the resilience of workflows where EC2 APIs act as a fault boundary.

In this post we show you how to explore additional fault boundaries in your applications by selectively denying access to any AWS API. This technique is particularly useful for fully managed, "black box" services like Amazon Simple Storage Service (S3) or Amazon Simple Queue Service (SQS), where a failure of read or write operations is sufficient to simulate problems in the service. This technique is also useful for injecting failures into serverless applications without needing to modify code. While similar results could be achieved with network disruption or by modifying code with feature flags, this approach provides fine-grained degradation of an AWS API without the need to re-deploy and re-validate code.

Overview

We will explore a common application pattern: a user uploads a file, S3 triggers an AWS Lambda function, and the Lambda function transforms the file, writes it to a new location, and deletes the original:

Figure 1. S3 upload and transform logical workflow: User uploads file to S3, upload triggers AWS Lambda execution, Lambda writes transformed file to a new bucket and deletes original. Workflow can be disrupted at file deletion.

We will simulate the user upload with an Amazon EventBridge rate expression triggering an AWS Lambda function which creates a file in S3:

Figure 2. S3 upload and transform implemented demo workflow: Amazon EventBridge triggers a creator Lambda function, Lambda function creates a file in S3, file creation triggers AWS Lambda execution on transformer function, Lambda writes transformed file to a new bucket and deletes original. Workflow can be disrupted at file deletion.

Using this architecture we can explore the effect of S3 API degradation during file creation and deletion. As shown, the API call to delete a file from S3 is an application fault boundary. The failure could occur, with identical effect, because of S3 degradation or because the AWS IAM role of the Lambda function denies access to the API.

To inject failures we use AWS Systems Manager (AWS SSM) automation documents to attach and detach IAM policies at the API fault boundary and FIS to orchestrate the workflow.

Each Lambda function has an IAM execution role that grants the S3 write and delete permissions it needs. If the transformer Lambda function fails, the S3 file will remain in the bucket, indicating a failure. Similarly, if the IAM execution role for the transformer function is denied the ability to delete a file after processing, that file will remain in the S3 bucket.

Prerequisites

Following this blog post will incur some costs for AWS services. To explore the test application you will need an AWS account. We will also assume that you are using AWS CloudShell or have the AWS CLI installed, and that you have configured a profile with administrator permissions. With that in place, you can create the demo application in your AWS account by downloading this template and deploying an AWS CloudFormation stack:

git clone https://github.com/aws-samples/fis-api-failure-injection-using-iam.git
cd fis-api-failure-injection-using-iam
aws cloudformation deploy --stack-name test-fis-api-faults --template-file template.yaml --capabilities CAPABILITY_NAMED_IAM

Fault injection using IAM

Once the stack has been created, navigate to the Amazon CloudWatch Logs console and filter for /aws/lambda/test-fis-api-faults. In the EventBridgeTimerHandler log group you should find log events once a minute, showing a timestamped file being written to an S3 bucket named fis-api-failure-ACCOUNT_ID. In the S3TriggerHandler log group you should find matching deletion events for those files.

Once you have confirmed object creation and deletion, let's take away the S3 trigger handler Lambda function's permission to delete files. To do this you will attach the FISAPI-DenyS3DeleteObject policy that was created with the template:

ROLE_NAME=FISAPI-TARGET-S3TriggerHandlerRole
ROLE_ARN=$( aws iam list-roles --query "Roles[?RoleName=='${ROLE_NAME}'].Arn" --output text )
echo Target Role ARN: $ROLE_ARN

POLICY_NAME=FISAPI-DenyS3DeleteObject
POLICY_ARN=$( aws iam list-policies --query "Policies[?PolicyName=='${POLICY_NAME}'].Arn" --output text )
echo Impact Policy ARN: $POLICY_ARN

aws iam attach-role-policy \
  --role-name ${ROLE_NAME} \
  --policy-arn ${POLICY_ARN}
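
The deny policy itself is created by the CloudFormation template. As a rough sketch, the managed policy resource AwsFisApiPolicyDenyS3DeleteObject referenced later in this post might look something like this, although the actual template may scope the resource more tightly:

# Rough sketch only - see the sample template for the actual policy definition
AwsFisApiPolicyDenyS3DeleteObject:
  Type: 'AWS::IAM::ManagedPolicy'
  Properties:
    ManagedPolicyName: FISAPI-DenyS3DeleteObject
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Sid: DenyS3DeleteObject
          Effect: Deny
          Action:
            - 's3:DeleteObject'
          Resource: '*'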

With the deny policy in place you should now see object deletion fail, and objects should start accumulating in the S3 bucket. Navigate to the S3 console and find the bucket whose name starts with fis-api-failure. You should see a new object appear in this bucket once a minute:

Figure 3. S3 bucket listing showing files not being deleted because IAM permissions DENY file deletion during FIS experiment.

If you would like to graph the results you can navigate to AWS CloudWatch, select "Logs Insights", select the log group starting with /aws/lambda/test-fis-api-faults-S3CountObjectsHandler, and run this query:

fields @timestamp, @message
| filter NumObjects >= 0
| sort @timestamp desc
| stats max(NumObjects) by bin(1m)
| limit 20

This will show the number of files in the S3 bucket over time:

Figure 4. AWS CloudWatch Logs Insights graph showing the increase in the number of retained files in S3 bucket over time, demonstrating the effect of the introduced failure.
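
The same query can also be run from the command line with CloudWatch Logs Insights. A rough sketch, assuming the stack appends a suffix to the function name (so we resolve the log group by prefix) and using a one-hour window:

# Resolve the full log group name by prefix (assumption: the stack appends a suffix)
LOG_GROUP=$( aws logs describe-log-groups \
  --log-group-name-prefix /aws/lambda/test-fis-api-faults-S3CountObjectsHandler \
  --query 'logGroups[0].logGroupName' --output text )

# Run the Logs Insights query over the last hour
QUERY_ID=$( aws logs start-query \
  --log-group-name ${LOG_GROUP} \
  --start-time $(( $(date +%s) - 3600 )) \
  --end-time $(date +%s) \
  --query-string 'filter NumObjects >= 0 | stats max(NumObjects) by bin(1m) | limit 20' \
  --query 'queryId' --output text )

# Give the query a few seconds to run, then fetch the results
sleep 10
aws logs get-query-results --query-id ${QUERY_ID}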

You can now detach the policy:

ROLE_NAME=FISAPI-TARGET-S3TriggerHandlerRole
ROLE_ARN=$( aws iam list-roles --query "Roles[?RoleName=='${ROLE_NAME}'].Arn" --output text )
echo Target Role ARN: $ROLE_ARN

POLICY_NAME=FISAPI-DenyS3DeleteObject
POLICY_ARN=$( aws iam list-policies --query "Policies[?PolicyName=='${POLICY_NAME}'].Arn" --output text )
echo Impact Policy ARN: $POLICY_ARN

aws iam detach-role-policy \
  --role-name ${ROLE_NAME} \
  --policy-arn ${POLICY_ARN}

We see that newly written files are once again deleted, but the unprocessed files remain in the S3 bucket. From the fault injection we learned that our system does not tolerate request failures when deleting files from S3. To address this, we should add a dead-letter queue or some other retry mechanism.

Note: if the Lambda function does not return a success state on invocation, EventBridge will retry. Our Lambda functions are cost-conscious and explicitly capture failure states to avoid excessive retries.

Fault injection using SSM

To use this approach from FIS, and to always remove the policy at the end of the experiment, we first create an SSM automation document that attaches a policy to a role and detaches it again after a configurable duration. To inspect this document, open the SSM console, navigate to the "Documents" section, find the FISAPI-IamAttachDetach document under "Owned by me", and examine the "Content" tab (make sure to select the correct Region). The document takes the name of the role you want to impact and the policy you want to attach as parameters. It also requires an IAM execution role that grants it permission to list, attach, and detach specific policies on specific roles.
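
As a rough sketch of what such an automation document might look like (the parameter names are the ones used in this post; the step names and details are assumptions, so inspect the real FISAPI-IamAttachDetach content in the console):

# Rough sketch of an attach/wait/detach automation document (assumption only)
schemaVersion: '0.3'
assumeRole: '{{ AutomationAssumeRole }}'
parameters:
  AutomationAssumeRole:
    type: String
  Duration:
    type: String
    default: PT2M
  TargetApplicationRoleName:
    type: String
  TargetResourceDenyPolicyArn:
    type: String
mainSteps:
  - name: AttachDenyPolicy
    action: 'aws:executeAwsApi'
    inputs:
      Service: iam
      Api: AttachRolePolicy
      RoleName: '{{ TargetApplicationRoleName }}'
      PolicyArn: '{{ TargetResourceDenyPolicyArn }}'
  - name: ExperimentDuration
    action: 'aws:sleep'
    inputs:
      Duration: '{{ Duration }}'
  - name: DetachDenyPolicy
    action: 'aws:executeAwsApi'
    inputs:
      Service: iam
      Api: DetachRolePolicy
      RoleName: '{{ TargetApplicationRoleName }}'
      PolicyArn: '{{ TargetResourceDenyPolicyArn }}'

Note that the real document also needs to ensure the policy is detached even when the automation is stopped early, so the policy never stays attached beyond the experiment.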

Let's run the SSM automation document from the console by selecting "Execute Automation". Determine the ARN of the FISAPI-SSM-Automation-Role and the ARN of the deny policy from CloudFormation or by running:

ASSUME_ROLE_NAME=FISAPI-SSM-Automation-Role
ASSUME_ROLE_ARN=$( aws iam list-roles --query "Roles[?RoleName=='${ASSUME_ROLE_NAME}'].Arn" --output text )
echo Assume Role ARN: $ASSUME_ROLE_ARN

POLICY_NAME=FISAPI-DenyS3DeleteObject
POLICY_ARN=$( aws iam list-policies --query "Policies[?PolicyName=='${POLICY_NAME}'].Arn" --output text )
echo Impact Policy ARN: $POLICY_ARN

Use FISAPI-SSM-Automation-Role, a duration of 2 minutes expressed in ISO8601 format as PT2M, the ARN of the deny policy, and the name of the target role FISAPI-TARGET-S3TriggerHandlerRole:

Figure 5. Image of parameter input field reflecting the instructions in blog text.

Alternatively execute this from a shell:

ASSUME_ROLE_NAME=FISAPI-SSM-Automation-Role
ASSUME_ROLE_ARN=$( aws iam list-roles --query "Roles[?RoleName=='${ASSUME_ROLE_NAME}'].Arn" --output text )
echo Assume Role ARN: $ASSUME_ROLE_ARN

ROLE_NAME=FISAPI-TARGET-S3TriggerHandlerRole
ROLE_ARN=$( aws iam list-roles --query "Roles[?RoleName=='${ROLE_NAME}'].Arn" --output text )
echo Target Role ARN: $ROLE_ARN

POLICY_NAME=FISAPI-DenyS3DeleteObject
POLICY_ARN=$( aws iam list-policies --query "Policies[?PolicyName=='${POLICY_NAME}'].Arn" --output text )
echo Impact Policy ARN: $POLICY_ARN

aws ssm start-automation-execution \
  --document-name FISAPI-IamAttachDetach \
  --parameters "{
      \"AutomationAssumeRole\": [ \"${ASSUME_ROLE_ARN}\" ],
      \"Duration\": [ \"PT2M\" ],
      \"TargetResourceDenyPolicyArn\": [ \"${POLICY_ARN}\" ],
      \"TargetApplicationRoleName\": [ \"${ROLE_NAME}\" ]
    }"

Wait two minutes and then examine the content of the S3 bucket starting with fis-api-failure again. You should now see two additional files in the bucket, showing that the policy was attached for 2 minutes during which files could not be deleted, and confirming that our application is not resilient to S3 API degradation.
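
If you prefer the CLI over the S3 console, you can list the bucket content directly (the bucket name follows the naming convention mentioned earlier):

# List the retained objects from the CLI
ACCOUNT_ID=$( aws sts get-caller-identity --query Account --output text )
aws s3 ls s3://fis-api-failure-${ACCOUNT_ID}/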

Permissions for injecting failures with SSM

Fault injection with SSM is controlled by IAM, which is why you had to specify the FISAPI-SSM-Automation-Role:

Figure 6. Visual representation of IAM permission used for fault injections with SSM.

This role needs to contain an assume role policy statement that allows SSM to assume the role:

AssumeRolePolicyDocument:
  Statement:
    - Action:
        - 'sts:AssumeRole'
      Effect: Allow
      Principal:
        Service:
          - 'ssm.amazonaws.com'

The role also needs to contain permissions to describe roles and their attached policies with an optional constraint on which roles and policies are visible:

- Sid: GetRoleAndPolicyDetails
  Effect: Allow
  Action:
    - 'iam:GetRole'
    - 'iam:GetPolicy'
    - 'iam:ListAttachedRolePolicies'
  Resource:
    # Roles
    - !GetAtt EventBridgeTimerHandlerRole.Arn
    - !GetAtt S3TriggerHandlerRole.Arn
    # Policies
    - !Ref AwsFisApiPolicyDenyS3DeleteObject

Finally, the SSM role needs to be allowed to attach and detach policies. This requires:

an ALLOW statement
a constraint on the policies that can be attached
a constraint on the roles that can be attached to

In the role we collapse the first two requirements into an ALLOW statement with a condition constraint for the policy ARN. We then express the third requirement in a DENY statement that limits the '*' resource to only the explicit role ARNs we want to modify:

- Sid: AllowOnlyTargetResourcePolicies
  Effect: Allow
  Action:
    - 'iam:DetachRolePolicy'
    - 'iam:AttachRolePolicy'
  Resource: '*'
  Condition:
    ArnEquals:
      'iam:PolicyARN':
        # Policies that can be attached
        - !Ref AwsFisApiPolicyDenyS3DeleteObject
- Sid: DenyAttachDetachAllRolesExceptApplicationRole
  Effect: Deny
  Action:
    - 'iam:DetachRolePolicy'
    - 'iam:AttachRolePolicy'
  NotResource:
    # Roles that can be attached to
    - !GetAtt EventBridgeTimerHandlerRole.Arn
    - !GetAtt S3TriggerHandlerRole.Arn

We will discuss security considerations in more detail at the end of this post.

Fault injection using FIS

With the SSM document in place you can now create an FIS template that calls the SSM document. Navigate to the FIS console and filter for FISAPI-DENY-S3PutObject. You should see that the experiment template passes the same parameters that you previously used with SSM:

Figure 7. Image of FIS experiment template action summary. This shows the SSM document ARN to be used for fault injection and the JSON parameters passed to the SSM document specifying the IAM Role to modify and the IAM Policy to use.

You can now run the FIS experiment and, after a couple of minutes, once again see new files appearing in the S3 bucket.
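
You can also start the experiment from the CLI. A sketch, assuming the experiment template carries a Name tag matching the console filter above:

# Look up the experiment template by its Name tag (assumption) and start it
TEMPLATE_ID=$( aws fis list-experiment-templates \
  --query "experimentTemplates[?tags.Name=='FISAPI-DENY-S3PutObject'].id" --output text )
aws fis start-experiment --experiment-template-id ${TEMPLATE_ID}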

Permissions for injecting failures with FIS and SSM

Fault injection with FIS is controlled by IAM, which is why you had to specify the FISAPI-FIS-Injection-ExperimentRole:

Figure 8. Visual representation of IAM permission used for fault injections with FIS and SSM. It shows the SSM execution role permitting access to use SSM automation documents as well as modify IAM roles and policies via the SSM document. It also shows the FIS execution role permitting access to use FIS templates, as well as the pass-role permission to grant the SSM execution role to the SSM service. Finally it shows the FIS user needing to have a pass-role permission to grant the FIS execution role to the FIS service.

This role needs to contain an assume role policy statement that allows FIS to assume the role:

AssumeRolePolicyDocument:
  Statement:
    - Action:
        - 'sts:AssumeRole'
      Effect: Allow
      Principal:
        Service:
          - 'fis.amazonaws.com'

The role also needs permissions to list and execute SSM documents:

- Sid: RequiredReadActionsforAWSFIS
  Effect: Allow
  Action:
    - 'cloudwatch:DescribeAlarms'
    - 'ssm:GetAutomationExecution'
    - 'ssm:ListCommands'
    - 'iam:ListRoles'
  Resource: '*'
- Sid: RequiredSSMStopActionforAWSFIS
  Effect: Allow
  Action:
    - 'ssm:CancelCommand'
  Resource: '*'
- Sid: RequiredSSMWriteActionsforAWSFIS
  Effect: Allow
  Action:
    - 'ssm:StartAutomationExecution'
    - 'ssm:StopAutomationExecution'
  Resource:
    - !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:automation-definition/${SsmAutomationIamAttachDetachDocument}:$DEFAULT'

Finally, remember that the SSM document uses a role of its own to execute the fault injection actions. Because that role is different from the role under which we started the FIS experiment, we need to explicitly allow the FIS experiment role to pass the automation role to SSM with a PassRole statement, which here expands to FISAPI-SSM-Automation-Role:

- Sid: RequiredIAMPassRoleforSSMADocuments
  Effect: Allow
  Action: 'iam:PassRole'
  Resource: !Sub 'arn:aws:iam::${AWS::AccountId}:role/${SsmAutomationRole}'

Secure and flexible permissions

So far, we have used explicit ARNs for our guardrails. To expand flexibility, we can use wildcards in our resource matching. For example, we might change the Policy matching from:

Condition:
  ArnEquals:
    'iam:PolicyARN':
      # Explicitly listed policies - secure but inflexible
      - !Ref AwsFisApiPolicyDenyS3DeleteObject

or the equivalent:

Condition:
  ArnEquals:
    'iam:PolicyARN':
      # Explicitly listed policies - secure but inflexible
      - !Sub 'arn:${AWS::Partition}:iam::${AWS::AccountId}:policy/${FullPolicyName}'

to a wildcard notation like this:

Condition:
  ArnEquals:
    'iam:PolicyARN':
      # Wildcard policies - secure and flexible
      - !Sub 'arn:${AWS::Partition}:iam::${AWS::AccountId}:policy/${PolicyNamePrefix}*'

If we set PolicyNamePrefix to FISAPI-DenyS3 this would now allow invoking FISAPI-DenyS3PutObject and FISAPI-DenyS3DeleteObject but would not allow using a policy named FISAPI-DenyEc2DescribeInstances.

Similarly, we could change the Resource matching from:

NotResource:
  # Explicitly listed roles - secure but inflexible
  - !GetAtt EventBridgeTimerHandlerRole.Arn
  - !GetAtt S3TriggerHandlerRole.Arn

to a wildcard equivalent like this:

NotResource:
  # Wildcard roles - secure and flexible
  - !Sub 'arn:${AWS::Partition}:iam::${AWS::AccountId}:role/${RoleNamePrefixEventBridge}*'
  - !Sub 'arn:${AWS::Partition}:iam::${AWS::AccountId}:role/${RoleNamePrefixS3}*'

and setting RoleNamePrefixEventBridge to FISAPI-TARGET-EventBridge and RoleNamePrefixS3 to FISAPI-TARGET-S3.

Finally, we would also change the FIS experiment role to allow SSM documents based on a name prefix by changing the constraint on automation execution from:

- Sid: RequiredSSMWriteActionsforAWSFIS
  Effect: Allow
  Action:
    - 'ssm:StartAutomationExecution'
    - 'ssm:StopAutomationExecution'
  Resource:
    # Explicitly listed resource - secure but inflexible
    # Note: the $DEFAULT at the end could also be an explicit version number
    # Note: the 'automation-definition' is automatically created from 'document' on invocation
    - !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:automation-definition/${SsmAutomationIamAttachDetachDocument}:$DEFAULT'

to

- Sid: RequiredSSMWriteActionsforAWSFIS
  Effect: Allow
  Action:
    - 'ssm:StartAutomationExecution'
    - 'ssm:StopAutomationExecution'
  Resource:
    # Wildcard resources - secure and flexible
    # Note: the 'automation-definition' is automatically created from 'document' on invocation
    - !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:automation-definition/${SsmAutomationDocumentPrefix}*'

and setting SsmAutomationDocumentPrefix to FISAPI-. Test this by updating the CloudFormation stack with a modified template:

aws cloudformation deploy --stack-name test-fis-api-faults --template-file template2.yaml --capabilities CAPABILITY_NAMED_IAM

Permissions governing users

In production you should not be using administrator access to run FIS experiments. Instead, we create two roles, FISAPI-AssumableRoleWithCreation and FISAPI-AssumableRoleWithoutCreation, for you (see this template). These roles require all FIS and SSM resources to have a Name tag that starts with FISAPI-. Try assuming the role without creation privileges and running an experiment. You will notice that you can only start an experiment if you add a Name tag, e.g. FISAPI-secure-1, and that you will only be able to get details of experiments and templates that have proper Name tags.
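
As a sketch of that workflow from the CLI (the session name is arbitrary, and TEMPLATE_ID is the experiment template ID determined earlier):

# Assume the less-privileged role created by the template
ACCOUNT_ID=$( aws sts get-caller-identity --query Account --output text )
CREDS=$( aws sts assume-role \
  --role-arn arn:aws:iam::${ACCOUNT_ID}:role/FISAPI-AssumableRoleWithoutCreation \
  --role-session-name fis-experiment \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text )
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$CREDS"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# Start an experiment with a compliant Name tag
aws fis start-experiment \
  --experiment-template-id ${TEMPLATE_ID} \
  --tags Name=FISAPI-secure-1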

If you are working with AWS Organizations, you can add further guardrails by defining SCPs that control the use of the FISAPI-* tags, similar to this blog post.

Caveats

For this solution we chose to attach policies instead of using permissions boundaries. The benefit is that you can attach multiple independent policies and thus simulate multi-step service degradation. However, this also means that it is possible to increase the permission level of a role. While there are situations where this might be of interest, e.g. to simulate security breaches, please perform a thorough security review of any fault injection IAM policies you create. Note that modifying IAM roles may trigger events in your security monitoring tools.

The AttachRolePolicy and DetachRolePolicy calls from AWS IAM are eventually consistent, meaning that in some cases permission propagation when starting and stopping fault injection may take up to 5 minutes each.

Cleanup

To avoid additional cost, delete the content of the S3 bucket and delete the CloudFormation stack:

# Clean up policy attachments just in case
CLEANUP_ROLES=$( aws iam list-roles --query "Roles[?starts_with(RoleName,'FISAPI-')].RoleName" --output text )
for role in $CLEANUP_ROLES; do
  CLEANUP_POLICIES=$( aws iam list-attached-role-policies --role-name $role --query "AttachedPolicies[?starts_with(PolicyName,'FISAPI-')].PolicyArn" --output text )
  for policy in $CLEANUP_POLICIES; do
    echo Detaching policy $policy from role $role
    aws iam detach-role-policy --role-name $role --policy-arn $policy
  done
done
# Delete S3 bucket content
ACCOUNT_ID=$( aws sts get-caller-identity --query Account --output text )
S3_BUCKET_NAME=fis-api-failure-${ACCOUNT_ID}
aws s3 rm --recursive s3://${S3_BUCKET_NAME}
aws s3 rb s3://${S3_BUCKET_NAME}
# Delete CloudFormation stack
aws cloudformation delete-stack --stack-name test-fis-api-faults
aws cloudformation wait stack-delete-complete --stack-name test-fis-api-faults

Conclusion 

AWS Fault Injection Simulator provides the ability to simulate various external impacts on your application to validate and improve resilience. We've shown how combining FIS with IAM to selectively deny access to AWS APIs provides a generic path to explore fault boundaries across all AWS services, and how this can be used to identify and address a resilience problem in a common S3 upload workflow. To learn about more ways to use FIS, see this workshop.

About the authors:

Dr. Rudolf Potucek

Dr. Rudolf Potucek is Startup Solutions Architect at Amazon Web Services. Over the past 30 years he gained a PhD and worked in different roles including leading teams in academia and industry, as well as consulting. He brings experience from working with academia, startups, and large enterprises to his current role of guiding startup customers to succeed in the cloud.

Rudolph Wagner

Rudolph Wagner is a Premium Support Engineer at Amazon Web Services who holds the CISSP and OSCP security certifications, in addition to being a certified AWS Solutions Architect Professional. He assists internal and external Customers with multiple AWS services by using his diverse background in SAP, IT, and construction.

Starting a Web App in 2022 [Research Results]

We are finally happy to share with you the results of the world's first study on how developers start a web application in 2022. For this research, we wanted to do a deep dive into how engineers around the globe start web apps, how popular the use of low-code platforms is, and which tools are decisive in creating web applications.

To achieve this, we conducted a survey of 191 software engineers of all experience levels around the globe. We asked questions about the technology they use to start web applications.

Highlights of the key findings:

The usage of particular technologies in the creation of web apps is closely related to engineers’ experience. New technologies, such as no-code/low-code solutions, GraphQL, and non-relational databases, appeal to developers with less expertise;

Engineers with less experience are more likely to learn from online sources, whereas developers with more expertise in software development prefer to learn from more conventional sources such as books;

Retool and Bubble are the most popular no-code/low-code platforms;

React, Node.js, PostgreSQL, Amazon AWS, and Bootstrap are the most popular web application development stacks.

To read the full report, including additional insights and the full research methodology, visit this page.

With Flatlogic you can create full-stack web applications literally in minutes. If you're interested in trying Flatlogic solutions, sign up for free.

The post Starting a Web App in 2022 [Research Results] appeared first on Flatlogic Blog.

10 KPI Templates and Dashboards for Tracking KPIs

Introduction
What Instruments Do We Need to Build an Effective Dashboard for KPIs?

The Top Dashboards for Tracking KPIs
Sing App Admin Dashboard
Retail Dashboard from Simple KPI
Light Blue React Node.js
Limitless Dashboard
Cork Admin Dashboard
Paper Admin Template
Pick Admin Dashboard Template
Able Pro Admin Dashboard
Architect UI Admin Template
Flatlogic One Admin Dashboard Template

You might also like these articles

Introduction

KPIs, or Key Performance Indicators, are a modern instrument for making a system (a business, for example) work effectively. KPIs show how successful the business is, or how professional the employee is. They work with the help of measurable values that are intended to show the success of achieving your strategic goals. KPIs are measurable indicators that you should track, calculate, analyze, and represent. If you are reading this article, it means you want to find or build an app that helps you with all the operations above. But before we list the top KPI dashboard templates, it's essential to understand how exactly to choose a set of indicators that boost the growth of a business. For KPIs to be useful, they should be relevant to the business. That is crucial not only for entrepreneurs who try to improve their businesses but also for the developers of software for tracking KPIs. Why?

Developers need to be aware of which instruments they should include in the app so that users can work with KPIs easily and effectively. Since there are far more than a handful of articles and approaches on how to find the right performance indicators, which KPIs to choose, and how to track them, developing a quality web application can be complicated.

However, from our point of view, the most challenging part of such an app is building a dashboard that displays all necessary KPIs on a single screen. We have explored the Internet, analyzed different types of tools for representing KPIs, found great dashboards, and made two lists: one consists of the charts and instruments you should definitely include in your future app, the other is the top dashboards we found that contain elements from the first list. Each KPI template on the list is a potent tool that will boost your metrics considerably. Let's start with the first list.

Enjoy reading! 

What Instruments Do We Need to Build an Effective Dashboard for KPIs?

Absolute numerical values and percentages

Expressing a KPI as a percentage alongside its absolute value makes it more informative, for example by comparing the KPI with previous periods.

The source: https://vuestic.epicmax.co/admin/dashboard

Non-linear chart

One of the core charts.

The source: https://visme.co/blog/types-of-graphs/

Bar chart

Another core element to display KPIs.

The source: http://ableproadmin.com/angular/default/charts/apex

Stacked Bar Graphs

It's a more complex instrument, but correspondingly more informative.

Progress bars

Can be confused with a horizontal bar chart. The main difference: a horizontal bar chart is used to compare values across several categories, while a progress bar shows progress within a single category.

The source: https://vinteedois.com.br/progress-bars/

Pie charts

The source: https://www.cleanpng.com/png-pie-chart-finance-accounting-financial-statement-3867064/preview.html

Donut chart

You can replace pie charts with a donut chart, the meaning is the same.

The source: http://webapplayers.com/luna_admin-v1.4/chartJs.html

Gauge chart

This chart helps users to track their progress towards achieving goals. It’s interchangeable with a progress bar. 

The source: https://www.datapine.com/kpi-examples-and-templates/finance

Pictograms

Instead of using an axis with numbers, it uses pictures to represent a relative or an absolute number of items.

The source: https://www.bootstrapdash.com/demo/corona/jquery/template/modern-vertical/pages/charts/justGage.html

Process behavior chart

Especially valuable for financial KPIs. The main line shows the measurement over time or across categories, while the two red lines are control limits that shouldn't be crossed.

The source: https://www.leanblog.org/2018/12/using-process-behavior-charts-to-compare-red-bead-game-willing-workers-and-baseball-teams/

Combined bar and line graph

The source: https://www.pinterest.co.uk/pin/254031235216555663/

Some additional tools:

These tools are also essential for building a dashboard for tracking KPIs: a calendar, dropdowns, checkboxes, and input fields. The option to create and download a report will also be helpful.

The Top Dashboards for Tracking KPIs

Sing App Admin Dashboard

The source: https://demo.flatlogic.com/sing-app-vue/#/app/main/visits

If you have looked through a huge number of KPI templates and haven't found the one you need, take a look at Sing App. Sing is a premium admin dashboard template that offers everything necessary to turn data into easy-to-understand graphs and charts. Besides all the charts and functions listed above, with Sing you get options such as downloading graphs in SVG and PNG format, an animated and interactive pointer that highlights the point where the cursor is placed, and the ability to change the period used for value calculations right inside the frame with the graph!


Retail Dashboard from Simple KPI

The source: https://dashboards.simplekpi.com/dashboards/shared/NT0B93_AnEG1AD7YKn60zg

This dashboard is focused on the retail trade sphere. It already contains relevant KPIs and metrics for that sector, so you just need to download it and use it. Since it's an opinionated dashboard, you won't get great customization options. If you are a retailer or trader, you should try this dashboard to track performance when selling goods or services.


Light Blue React Node.js

The source: https://flatlogic.com/templates/light-blue-react-node-js/demo

This is a React admin dashboard template with a Node.js backend. The template is best suited for KPIs that reflect goals in web app traffic analysis, revenue and current balance tracking, and sales management. However, Light Blue contains a lot of ready-to-use working components and charts to build any dashboard you need. It's very easy to customize and implement; both beginners in React and professional developers can benefit from this template and keep track of KPIs, metrics, and business data.


Limitless Dashboard

The source: http://demo.interface.club/limitless/demo/Template/layout_1/LTR/default/full/index.html

Limitless is a powerful admin template and a best-seller on ThemeForest. It comes with a modern business KPI dashboard that simplifies monitoring, analyzing, and generating insights. With the help of this dashboard, you can easily monitor the progress of growing sales or traffic and adjust your sales strategy according to customer behavior. Furthermore, the dashboard contains a live update function to keep you abreast of the latest changes.


Cork Admin Dashboard

The source: https://designreset.com/cork/ltr/demo4/index2.html

This is an awesome Bootstrap-based dashboard template that follows the best design and programming principles. The template provides more than 10 layout options and a Laravel version, which is rare among dashboards. Several pages with charts and two dashboards with different metrics ensure you have the basic elements to build a great dashboard for tracking KPIs.


Paper Admin Template

The source: https://xvelopers.com/demos/html/paper-panel/index.html

This template fits if you are looking for a ready-made solution, since Paper ships with eleven dashboards in the package! They are all unnamed, so it will take time to look through them, but less time than building your own dashboard. Every dashboard provides a simple single-screen view of data and allows you to share it with your colleagues.


Pick Admin Dashboard Template

The source: http://html.designstream.co.in/pick/html/index-analytic.html

Pick is a modern and stylish solution for the IT industry. It's a multipurpose dashboard that helps you gain full control over performance.


Able Pro Admin Dashboard

The source: http://ableproadmin.com/angular/default/dashboard/analytics

If you believe that the highest-rated products are the most qualified ones, take a look at Able Pro. Able Pro is one of the best-rated Bootstrap admin templates on ThemeForest. The human eye captures information from a graph blazingly fast! With this dashboard, you can go much deeper into understanding your KPIs and make the decision-making process much easier.


Architect UI Admin Template

The source: https://demo.dashboardpack.com/architectui-html-pro/index.html

Those who download Architect UI make the right choice. This KPI template is built with hundreds of built-in elements and components, and three blocks of charts. The modular frontend architecture makes dashboard customization fast and easy, while animated graphs provide insights about KPIs.


Flatlogic One Admin Dashboard Template

The source: https://templates-flatlogic.herokuapp.com/flatlogic-one/html5/dashboard/visits.html

Flatlogic One is a one-size-fits-all solution for any type of dashboard. It is a premium Bootstrap admin dashboard template released in July 2020. It comes with two developed dashboards that serve well as KPI templates: analytics and visits. It also offers four additional pages with smoothly animated charts for any taste and need. The dashboard is flexible and highly customizable, so you can easily benefit from this template.


Thanks for reading.

You might also like these articles:

14+ Best Node.js Open Source Projects

8 Essential Bootstrap Components for Your Web App

Best 14+ Bootstrap Open-Source Projects

The post 10 KPI Templates and Dashboards for Tracking KPIs appeared first on Flatlogic Blog.

Announcing Oracle to Snowflake Migration Solutions

Thinking about going from Oracle to Snowflake? Now is the time to get excited. Mobilize.Net, makers of the heavily used SnowConvert for Teradata, is taking its talents to Oracle. This shouldn't be too surprising to those who know us well, as we processed over 100 million lines of Oracle code through our automated tooling in 2021. Given that number, we tend to get the same question all the time: when will we make our automation tool publicly available? Well… today. Today is that day [link to press release]. Welcome to SnowConvert 2: The Redux.

SnowConvert for Oracle takes in your Oracle SQL and PL/SQL and converts it to functionally equivalent Snowflake SQL and procedural code embedded in JavaScript. (Interested in Snowflake Scripting? Just as Snowflake is working on its scripting, we’re working with them on the translation. Stay tuned.)

SnowConvert for Oracle accelerates any migration from Oracle to Snowflake by providing high levels of automation not just across tables and views, but stored procedures, functions, and packages. Here’s a list of Oracle object types SnowConvert will be able to convert to functionally equivalent Snowflake SQL:  

Tables – The cornerstone of any data platform, tables are converted at well over 99%.

Views – The next step past tables. View conversion can get slightly more complex than tables, but SnowConvert routinely sees conversion rates similar to tables.

Procedures – That’s right. SnowConvert for Oracle automates PL/SQL to JavaScript embedded in Snowflake SQL. Procedures are also converted at a very high level (95%+).  

Functions – While there are some functional gaps between what you can do with a function in Oracle versus the same in Snowflake, SnowConvert accounts for that. We’ve seen enough code to understand the patterns that will cause problems. 

Packages – They said it wasn’t possible, and yet… here we are. With the ability to customize how the objects contained in a package are organized, SnowConvert can take any package regardless of complexity and convert it to a functionally equivalent set of procedures and/or functions in Snowflake SQL.

Synonyms – Using the power of the Abstract Syntax Tree (AST), any object referencing a synonym is referred directly back to the original object.

Sequences – Basic, but effective. All of your sequences are automatically reproduced in Snowflake.

Of course, it's not just the DDL for each of these objects; all DML and any queries are also converted at well over 99 percent.

Assessment and Conversion

SnowConvert is all about accelerating your migration, and we understand that starts with assessment. Before you decide to migrate, SnowConvert for Oracle can provide you with assessment data on the kinds of objects you have, the expected conversion percentage, and some next steps for finishing the migration using the reports that SnowConvert provides. The tool can generate the following reports:

Assessment Report – This report gives you a summary of the code that you have, and gives you an estimated level of conversion, not just of the entire workload, but of each object type listed above (tables, views, procedures, etc.)

Issues Inventory – This is your roadmap to complete a migration. SnowConvert automates on average over 95% of the code present in a typical Oracle migration, but what do you do with the last 5%? That’s just as important as the other 95%. Your issues inventory will give you a list of the warnings, issues, and errors that are present in the migration. 

Object Inventory – You'll get a complete inventory of every object that was found, listing each object's name, schema, lines of code, and other associated metrics.

Interested in what these reports look like? Here’s a walkthrough video highlighting the output of SnowConvert for Oracle. This video will walk you through the ins and outs of how to use and interpret the assessment capabilities of the tool. You can also find more information on how to interpret the output of the tool on our main Oracle conversion page. 

How do I get started?

Easy. Fill out the form on our Get Started page linked below. We’ll give you a walkthrough of how to use the tool, and let you experiment with the assessment version. (Note that SnowConvert is a local install that you can use on any Windows or Mac machine, but if you’re more interested in a web-based version, let us know. Such a thing may already be in the works.) If you need more information on getting started, visit our documentation page. You’ll find a complete guide on getting started with SnowConvert, and more information on how to evaluate the output from the tool. You can also learn more about the types of conversion done by SnowConvert for Oracle. 

Whether you’re just considering a possible change or are so deep in PL/SQL that you can’t see the light of day anymore, get started on the road to a migration from Oracle to Snowflake today with Mobilize.Net SnowConvert for Oracle.

Flatlogic Platform Updates: November 2021

The Flatlogic Platform, also known as the Web App Builder, is gathering pace! Yay! We already have 2,570 applications generated, and that's not the limit!

TypeScript added

Migrations added

GitHub integration included

Free 7-day trial added

We keep updating our Flatlogic Platform, a powerful tool for building fully working CRUD web applications with front-end, back-end, and database.

November was a productive month with four new releases. Check out the highlights:

Quick Overview


1. TypeScript Added

TypeScript is a well-known programming language that makes app development easier to maintain and helps you run into fewer bugs while coding. Thanks to TypeScript you can add static typing to JavaScript to enhance developer productivity. TypeScript lends structure and safety to your app and simplifies code writing. It infers and adds data types to variables and functions.

2. Migrations

When developing an application, a situation often arises in which not only the program code changes but also the database. To change databases in a consistent way across all development environments, the migration mechanism is used.

Now you don't need to delete your app data to change the database. This is a crucial step toward providing hosting for your application and an integral part of the app production process.

3. GitHub Integration

Can we do without GitHub? No, no and no again.

Track all the changes made to the source code of your application. GitHub version control assists you with this by keeping and synchronizing your code edits in the GitHub repository. Set it up in the settings of the created project: connect your GitHub account and save.

4. 7-Day Trial Period Added

We are happy to share the option to test the Flatlogic Platform for free. Get a free 7-day trial and test our service in full. Feel free to share your observations and expectations if you wish.

If you need more information on how our platform works, please read the Flatlogic Platform documentation. Every day we add more and more options to the Flatlogic Platform, and we strive to make it the best one by interviewing customers and making it a place where all of its functions are tested and fine-tuned. Click here to learn more about upcoming features; our boss shares a list of some features from the roadmap.

That’s it for today!

Stay tuned for more updates from Flatlogic and subscribe to our socials! 

The post Flatlogic Platform Updates: November 2021 appeared first on Flatlogic Blog.

Sending Alerts to Slack

Seq can monitor the event stream and trigger alerts when configured conditions occur. For example, a system might produce an alert when an item runs out of stock.

Alerts are useful because they can generate notifications. If I receive a notification that a product is out of stock I can take action, such as ordering more stock. There are many ways to receive alert notifications. One popular notification option is to configure Seq to write a message to a Slack channel when an alert is triggered.

Getting Started

The Seq sample data includes an event type representing when a product runs out of stock in a fictional coffee retailer.

Out of stock event

Before creating an alert it is necessary to have a way to send a notification when the alert triggers.

Configure Slack to receive notifications

For Slack to receive notifications from Seq, we first create a new Slack app with the 'Incoming Webhooks' feature. Then add a new Webhook and give the new Slack app permission to post to one of your Slack channels. I configured a Webhook to write to a '#stock-alerts' Slack channel. When your Webhook is configured, take a copy of the 'Webhook URL' for later.

Configure Slack to receive notifications

Setting up Slack integration in Seq

To integrate Seq and Slack you need to install the ‘Slack Notifier’ Seq app (developed and maintained by David Pfeffer and the Seq community). Go to Settings > Apps > Install from NuGet and install the ‘Seq.App.Slack’ app. Add an instance of the app (mine is called ‘Alert notifications’), setting the ‘Webhook URL’ to the value you copied from the Slack app. Now Seq has a way to write to a Slack channel.

Creating the Alert

Seq alerts are triggered by a query that produces a result. This query will produce a result for each minute in which there is at least one ‘out of stock’ event.

Counting out of stock events, grouped by minute

The chart at the bottom of the window shows that ‘out of stock’ events occur regularly. The inventory controller needs to know!

To make this query into an alert I click the bell button. I’ve named the new alert ‘Out of stock’ and selected the Slack Notifier Seq app instance as the ‘Output app instance’. When the alert triggers it will send the notification to the ‘#stock-alerts’ channel.

Creating an alert that sends events to Slack

Within a few minutes an 'out of stock' event has triggered the 'out of stock' alert, and a message has been sent to the '#stock-alerts' Slack channel.

Seq alert has arrived in Slack

When a notification appears in Slack I can follow the links back to the alert that generated the notification or to the query the alert is based on. For these links to work you will need to have set the api.canonicalUri Seq server setting.

There is a lot more that can be done with alerts and the Slack Notifier Seq app. Refer to the Seq documentation for more detail.

There are also notification apps for email and Microsoft Teams.

A journey towards SpeakerTravel – Building a service from scratch

For close to two years now, I’ve had SpeakerTravel up & running. It’s a tool that helps conference organizers to book flights for speakers. You invite speakers, they pick their flight of choice (within a budget the organizer can specify), and the organizer can then approve and book the flight with a single click.

Why I started building a travel booking tool

How flight tickets work…
Global Distribution Service (GDS)
Flight search affiliate programs
Online Travel Agencies (OTA)
A travel agent from Sweden
AllMyles

The business side…
CENTS
Legal requirements
Payments

Building SpeakerTravel
Attempt to a single-page application…
…replaced with boring technology
The domain model
Ready for take-off!
COVID-19 💊 and working on the backlog

What’s next?
What’s next on the technical side?

What’s next on the business side?
Why not pivot?

Conclusion and Takeaways

In this post, I want to go a bit into the process of building this tool. Why I started it in the first place, how it works, a look at it from the business side, and maybe a follow-up post that covers any questions you may have after reading.

There's also a table of contents, so brace yourself for a long read!

Why I started building a travel booking tool

Before COVID threw a wrench in offline activities, our user group was organizing CloudBrew, a 2-day conference with speakers from across the world (mostly Europe).

Every year, I was complaining on Twitter around the time travel for those speakers needed to be booked. Booking flights for a speaker would mean several e-mails back-and-forth about the ideal schedule, checking travel budgets, and then sending the travel confirmation. And because our user group is a legal entity, we’d need invoices for our accountant, which meant contacting the travel agency and more e-mails.

When we started, we did all of this for 5 speakers, which was doable. Then we grew, and in the end needed to do this for 19 speakers. Madness!

That got me thinking, and almost pleading for someone to come up with a solution:

Startup idea: “Give travel site a bunch of e-mail addresses and budgets. Site lets those people select flights within that budget. I say yes/no and get billed.” – Would love this for conference organizing!

— Maarten Balliauw (@maartenballiauw) July 4, 2018

Alas, by the time we had that 19 speaker booking coming up, no such solution came about, and we were once again doing the manual process.

How flight tickets work…

In the back of my mind, the idea stuck. Would it be possible to build a solution to this problem, and make booking travel for speakers at our conference an easier task?

Of course, building the app itself would be possible. It’s what we all do for a living! But what about the heart of this solution… You know, actually booking a flight ticket in an automated way?

After researching and reading a lot, it seems that booking a flight ticket always consists of 4 steps:

You search an inventory of available seats for a flight combination;
For that flight combination, a price is requested;
For that flight combination, a booking is created;
For that booking, tickets are issued.

Book flights via any website, and you’ll go through these steps. There’s a reason for this:

The flight inventory is really a big database with all seats on all (or at least, many) airlines. As far as I could find, airlines populate this database a couple of times a year. It does not contain prices, just seats and conditions to book seats.
Pricing checks a given seat with the airline (or other party in between). Requesting a price means the airline can give an actual price for a seat. They can also track interest in a specific seat/group of seats, and price accordingly.
Booking reserves the seat, and removes that seat from the big flight inventory database. Ideally, booking has to happen soon after pricing. If no tickets have been issued after a couple of hours, the seat is made available again.
Issuing tickets confirms the seat, and gives you the actual ticket that can be used to board a plane. Having these two steps separate means that in between, a booking website can ask you for payment, and only when that is confirmed, issue tickets.

So in short, I needed something that could perform all of these steps somehow. More research!

Global Distribution Service (GDS)

One of the first services that popped up were different Global Distribution Services (GDS) for air travel. The world has many of them. You may have heard of Amadeus, Sabre or Travelport, but there are others.

These GDS are an interoperability layer between inventories from airlines, travel agents, and more. They have software in place to handle interactions between all parties involved (airlines, travel agents, hotels, …), and until a few years ago, were always involved in booking flights. Nowadays, airlines often sell their inventory directly, without these middle-men involved.

I explored various GDSs, and quickly found that this was not the way to go. First, they expect certain volumes of sales. I contacted one of them, and essentially got laughed at when I said I wanted to book around 20 flights a year. Second, from a technical point of view, a lot of them had documentation available that talked about XML-over-SOAP, WS-* standards, and all that. Been there, done that, but I prefer the more lightweight integrations of recent years.

Flight search affiliate programs

There are a number of affiliate programs out there that provide an API that you can use to search flights (including an approximate price), and give you a link to the booking site. Examples are Travelpayouts and SkyScanner.

The conditions for using these APIs were somewhat restrictive for my use case, but e-mailing one of them confirmed this use case was something that could fit.

Let the speaker search and request a flight, and then the organizer would click through and make the booking. This would still mean entering credit card details and invoice address a number of times, but it could work.

Online Travel Agencies (OTA)

Somewhere in-between GDS and affiliate programs, there are the Online Travel Agencies (OTA) and the likes. These companies are travel agents, and have their contracts with zero, one or several GDS, airlines, and more.

Searching this space, I found a couple of them that had APIs available for the above 4 steps – which seemed promising as it would give full control over the booking process (including automation of sending the correct invoice details when purchasing a ticket):

AirHob
Kiwi.com
Travelopro
Travelomatic

After contacting them all, some responded only after a couple of weeks, others had requirements in terms of number of tickets sold (volume), and this got me disillusioned.

A travel agent from Sweden

Having talked with a couple of folks about this idea and finding an API, a friend suggested I contact a travel agent they knew well as they could be able to help.

We had a long call about the idea, and they were very helpful in providing some additional insights into the world of flight booking. They were using the TravelPort GDS themselves, and were building their own API on top of that to power their own websites. Unfortunately, they weren’t sure it would ever get completed, so this wasn’t a viable solution.

Nevertheless, lesson learned: it never hurts to talk, even if it’s just for sharing insights and learnings.

AllMyles

Some weeks after my disillusion with OTAs, I was searching the Internet once more and found another service: AllMyles.com.

I decided to get in touch with some questions about my use case and low volumes. With zero expectations: I considered this my last attempt before shelving the entire idea.

Responding in 3 days would have been a record, but these folks responded in 30 seconds (!). A good 10 minutes later I was on Skype with their founder. We chatted about the service I wanted to build, and he even gave some thoughts on how to implement certain parts and workflows.

On their website, a 30-day trial of their staging environment was promoted, and their founder confirmed this was flexible if needed. So I decided to go with this and experiment with the API to see what was possible and what was not, and maybe finally start building this application!

The business side…

With the AllMyles API docs in hand, I set out to writing some code and experimenting with their staging environment. All seemed to work well for my use case.

There was one thing in the way still… To get production access, a one-time certification fee of 3000 EUR would have to be paid. Definitely better than the volume requirements of other solutions, but still quite steep for booking 20 flights a year.

What if this tool would be something that can be used by any conference out there, and I charge a small fee per passenger to cover this certification fee and other costs?

Time to put on the business hat.

CENTS

A couple of years ago, a friend recommended reading The Millionaire Fastlane by MJ DeMarco. It’s a good book with ideas on getting out of the rat race that controls many of us, and very opinionated. You may or may not like this book. There’s one idea from the book that stuck in my head though: CENTS.

CENTS is an acronym for the five aspects on which any idea can be vetted for viability as a business. It’s not a startup canvas or anything, just a simple way of checking if there is some viability:

Control – Do you control as many elements of the business as possible, or would something like a price or policy change with a vendor mess with your business?

Entry – How hard is entering this market? Can anyone do it in 10 minutes, or would they need a lot of time, money, and other resources?

Need – Does anyone actually need this thing you are thinking about?

Time – Will you be converting time into money, or can you decouple the two and also earn while you’re asleep?

Scale – Can you see this scale, are there pivots that would work, …

Before diving into the deep and coughing up that certification fee (and building the tool), I wanted to check these…

For flight booking, Control is never going to be the case. Someone is flying the airplane, someone handles booking. There are parties in between you and that flight, and there’s no way around that. From my research, I knew if really needed I could find another OTA or GDS, and go with that, so I felt there was just enough control to give this aspect a green checkmark.

Entry was steep enough: that certification fee, research, building the app. Something everyone could overcome, but definitely not something everyone would do. As an added bonus, I had to figure out some tricks to find the same flight twice: once by the speaker making the search, once by the conference organizer to confirm booking. Pricing and booking have to be close together (as in, 20-30 minutes), but for SpeakerTravel there could even be a few days between both parties doing this. In any case, it requires some proper magic to get this right and find the same (or a very comparable) seat. So Entry? Check!

The Need aspect was easy. There are lots of conferences out there that are probably going through the same pain with booking flights. Check!

Same with Time. This would be a software-as-a-service, that would allow folks to do self-service booking and payments, even when I’m not around. Check!

Finally, Scale. This solution could work for IT conferences, medical conferences, pretty much anything where a third party would pay for someone else’s flights. Business travel could be a pivot, where employees could book and employers would pay. Another pivot could be handling travel for music festivals, etc. So definitely not a hurdle in the long run!

In short: it made sense CENTS!

Legal requirements

Building a tool for our own conference is one thing, building it for third-party use is another. Could I sell flight tickets from my Belgian company?

Instead of trying to figure this out myself, I asked a lawyer for advice. The response came in (together with an invoice for their time researching), and for my Belgian company there were a few things to know about:

Flights-only is fine. You’re never selling flights, you are facilitating a transaction between the traveler and the airline.
If you combine flights and hotels, flights and rental cars, etc., you’re selling travel packages. Travel packages have stricter requirements.

Great! So I could go ahead with flights (and only flights), and start building the app!

Payments

While building the app (more on that later), I also was thinking about how to handle flight ticket payments… I’d have a fee per traveler (fixed), and the flight fare itself (variable, and one I’d have to pay directly to AllMyles).

The two-step ticket issuing seemed like a perfect place to shove in a payment gateway, for example Stripe, and collect payment before making the actual booking through the API.

Unfortunately, none of the payment gateways I found let you do “risky business”. All of them have different lists of business types that are not allowed, and travel is always on those lists. One payment gateway from The Netherlands confirmed they could support my scenario, but after requesting written confirmation that stance changed. In other words: credit cards were not an option.

For now, I decided to go with an upfront deposit, to ensure flight fares can be paid when someone confirms their booking.

Building SpeakerTravel

With a good idea in mind, and a blank canvas in front of me, it was time for the excitement of creating a new project in the IDE!

The most important question: Which project template to start with?

Attempt to a single-page application…

Since I’d already built some API integration with AllMyles in C#, at least part of the application would probably be ASP.NET Core. With close to no experience with single page applications at the time, I thought this would be a good learning experience!

So I went with an ASP.NET Core backend, IdentityServer, and React.

About an hour of cursing on a simple “Hello, World” later, React was replaced with Vue.js which seemed easier to get started with. I did have to replicate the ASP.NET Core SPA development experience (blog post) to support Vue.js, but that was fun to do and write about.

What wasn’t fun, though, was how slow things were going. Being new to Vue.js, a lot of things went very slowly while building. After 2 weeks of spending evenings on just getting a login that worked smoothly, I started wondering…

“Am I doing this to solve a problem, or to learn new tech?”

Building this thing over the weekend and in the evening hours, I reconsidered the tech stack and started anew.

…replaced with boring technology

This time, I started with an ASP.NET Core MVC project. Individual user accounts using ASP.NET Core Identity, Entity Framework, and SQL Server. A familiar stack for me, and a stack in which I was immediately productive.

A few hours into development, I had the login/register/manage account pages customized. The layout page was converted to load a Bootswatch UI theme (on top of Bootstrap), and I started building the flows of inviting speakers, searching flights (with 100% made-up data), approving and rejecting flights, and all that. This was finished in six weeks or so, and then it took another few weeks to properly integrate with AllMyles’ staging environment.

While developing the app, a lot of new ideas and improvements popped up. I tried to be ruthless in asking myself “do I really need this for version 1?”, logging anything else in the issue tracker to pick up in the future. This definitely helped with productivity.

Some fun was had implementing tag helpers to show/hide HTML elements (blog post), which really works well for making certain parts of the UI available based on user permissions and roles.
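The linked blog post describes the actual implementation; as a rough idea only, a minimal version of such a tag helper (with a hypothetical require-role attribute, not SpeakerTravel’s real code) could look like this:

using Microsoft.AspNetCore.Mvc.Rendering;
using Microsoft.AspNetCore.Mvc.ViewFeatures;
using Microsoft.AspNetCore.Razor.TagHelpers;

[HtmlTargetElement(Attributes = "require-role")]
public class RequireRoleTagHelper : TagHelper
{
    [HtmlAttributeName("require-role")]
    public string Role { get; set; }

    // Gives access to the current HttpContext, and thus the current user.
    [ViewContext, HtmlAttributeNotBound]
    public ViewContext ViewContext { get; set; }

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        // Don't render the helper attribute itself in the final HTML.
        output.Attributes.RemoveAll("require-role");

        // Suppress the element entirely when the user lacks the required role.
        if (!ViewContext.HttpContext.User.IsInRole(Role))
        {
            output.SuppressOutput();
        }
    }
}

In a Razor view, an element can then be decorated with require-role="Organizer" and it will only be rendered for users in that role.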

The first version was ready near the end of August 2019, including a basic product website that is powered by a very simple Markdown-to-HTML script that seems to work well.
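The post doesn’t go into how that script works; purely as an illustration, a minimal Markdown-to-HTML script could be a few lines around the Markdig NuGet package (an assumption, not necessarily what the product website actually uses):

using System.IO;
using Markdig;

class Program
{
    static void Main()
    {
        // Illustrative layout: convert every Markdown page in "pages" to an HTML file.
        foreach (var markdownFile in Directory.GetFiles("pages", "*.md"))
        {
            var html = Markdown.ToHtml(File.ReadAllText(markdownFile));
            File.WriteAllText(Path.ChangeExtension(markdownFile, ".html"), html);
        }
    }
}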

The application itself was built and deployed on the following stack:

ASP.NET Core MVC + Razor pages for the scaffolded identity area, on .NET Core 2.1
Bootstrap and Bootswatch for UI
A sprinkle of jQuery

Hangfire for background jobs (the actual booking, sending e-mails, anything that’s async/retryable – see the sketch after this list)
SQL Server LocalDb for development, Azure SQL Database for production
Azure Web Apps for the app and product website
Private GitHub repository
Azure DevOps to build and deploy
SendGrid for sending e-mails
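As a minimal sketch of how such background jobs might be wired up with Hangfire (class and method names here are illustrative, not SpeakerTravel’s actual code):

using Hangfire;

public class TicketIssuer
{
    // Hangfire retries failed jobs automatically; the attribute tunes how often.
    [AutomaticRetry(Attempts = 5)]
    public void IssueTickets(int bookingId)
    {
        // Call the booking API, send confirmation e-mails, etc.
    }
}

public class BookingApprovalHandler
{
    public void Approve(int bookingId)
    {
        // Queue the work; a Hangfire server executes it outside the web request
        // and persists it (e.g. in SQL Server) so it survives restarts.
        BackgroundJob.Enqueue<TicketIssuer>(x => x.IssueTickets(bookingId));
    }
}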

Overall, this was and is a very familiar stack for me, and as a result a stack in which I was immediately productive. Server-side rendering is fine 😉 And .NET is truly great!

When you have an idea you want to build out, I can highly recommend going with what you know – unless of course the goal is exploring another tech.

The domain model

When I asked folks on Twitter for what they wanted to see in this post, Cezary Piatek wanted to know about the domain model.

From a high-level, the domain model of this application is simple. There’s an Event that has Travelers, and at some point, they will have a Booking.

For every traveler, the system keeps a TravelerStatus history, which represents state transitions. From invited, to accepted, to bookingrequested, to confirmed/rejected/canceled, to ticketsissued, and potentially back to the start where a rejected traveler goes to accepted again so they can make a new search.

The TravelerStatus history is evaluated for every traveler, and the system takes these transitions into account when deciding what a traveler can do next. In fact, they are somewhat visible in the application UI as well (though some of these state transitions are combined for UX purposes).

When a Traveler requests a booking, some PII is stored: passenger name, birth date, and whatever the airline requires to book a given seat. This data is stored as a JSON blob – the fields are dynamic and may differ depending on the airline. This data is always destroyed after tickets are issued, after the booking request is rejected, or when a booking is still waiting for approval 10 days after the event has concluded.

For flight search and booking, the domain model is a 1:1 copy of what AllMyles has in their API. Looking at other APIs, it’s roughly the standard model in the world of flights. A Search returns one or more SearchResults. Each of those has one or more Combinations, typically flights that have the same conditions and price, but different times. E.g. a shuttle flight from Brussels to Frankfurt may return 3 combinations here – same price and conditions, just 3 different times during the day. A Combination can also have upgrade and baggage options. The booking itself is essentially making a call that passes a given Combination identifier (and whatever options are selected on top).
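Translated into code, a rough sketch of that model might look like the following. Class and property names are illustrative; the real model, and especially the search/booking part, mirrors AllMyles’ API rather than this simplification:

using System;
using System.Collections.Generic;

public enum TravelerStatus
{
    Invited,
    Accepted,
    BookingRequested,
    Confirmed,
    Rejected,
    Canceled,
    TicketsIssued
}

public class Event
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public List<Traveler> Travelers { get; set; } = new List<Traveler>();
}

public class Traveler
{
    public Guid Id { get; set; }
    public string Email { get; set; }

    // Full status history, so state transitions can be evaluated per traveler.
    public List<(TravelerStatus Status, DateTime At)> StatusHistory { get; set; }
        = new List<(TravelerStatus, DateTime)>();

    // PII needed for booking, stored as a JSON blob because the required
    // fields differ per airline; destroyed once it is no longer needed.
    public string PassengerDetailsJson { get; set; }

    public Booking Booking { get; set; }
}

public class Booking
{
    // Identifier of the selected Combination, plus any selected options,
    // as returned by the flight search.
    public string CombinationId { get; set; }
    public decimal Fare { get; set; }
}

// Flight search, roughly mirroring the structure described above:
// a Search returns SearchResults, each with one or more Combinations.
public class SearchResult
{
    public List<Combination> Combinations { get; set; } = new List<Combination>();
}

public class Combination
{
    public string Id { get; set; }
    public decimal Price { get; set; }
    public DateTime DepartureTime { get; set; }
}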

Ready for take-off!

The app was deployed (targeting AllMyles staging), and I requested certification (coughing up the initial fee – no turning back now!). This process took a couple of days, but at some point I was given production access and SpeakerTravel was live!

This was right on time for our CloudBrew conference in 2019, and it was really exciting to see folks request flights, book them via the API, and receive actual flight tickets sent out by airlines. Not to mention, much easier in terms of workload and back-and-forth compared to the manual process that triggered this entire endeavour! And speakers themselves also enjoyed this workflow:

Massive props to @CloudBrewConf – their travel booking system for the speakers has really raised the bar!

— Paul Stack (@stack72) August 14, 2019

Thanks, Paul 🤗

Very quickly, a couple of organizer-friends jumped aboard as well. And for a conference I was attending myself, I used it to book a flight in my own name. Pretty cool!

First time taking a flight booked through my own @SpeakerTravel_ – pretty cool to fly on a ticket you issued yourself 😎

— Maarten Balliauw (@maartenballiauw) January 15, 2020

A couple of conferences later, some bugs were ironed out, some feature requests were handled, and the certification fee was covered. Business-wise, and conveniently brushing aside time spent building this thing, SpeakerTravel was break-even!

COVID-19 💊 and working on the backlog

And then, half a year after release, a pandemic hit the world. Conferences all went online, travel virtually halted, and no new conferences onboarded SpeakerTravel for a long time.

This was a bummer, but a good time to work on that backlog of features I wanted to add. Some technical debt got fixed, and thanks to fast release cadences in both the front-end and .NET world, I’ve been upgrading a lot of things, many times.

Today’s tech stack:

ASP.NET Core MVC + Razor pages for the scaffolded identity area, on .NET 6.0 RC2
Bootstrap and Bootswatch
A sprinkle of jQuery (that I want to replace with HTMX)

Hangfire for background jobs (the actual booking, sending e-mails, anything that’s async/retryable)
SQL Server LocalDb for development, Azure SQL Database for production
Product website and application are Docker images now, deployed to Azure Web Apps for Containers

JetBrains Space for Git, CI/CD, and container registry

Mailjet for sending out e-mails. Smaller company, better support.

Note: If you’re interested in seeing CI/CD with Space, check this Twitter thread.

What’s next?

Good question! I think this question can be split as well…

What’s next on the technical side?

Let’s start with this one. As in the past months, working on some items from the backlog and just keeping things up to date. Very high on my wishlist is ripping out jQuery and replacing the few bits that require client-side interactivity with HTMX.

One of the things I do want to try at some point is seeing if I can run the entire stack on Kubernetes, but that’s purely out of personal interest.

Any other nerd snipes are welcome in the comments!

What’s next on the business side?

What’s immediately next is definitely uncertainty. We’re still in a pandemic, and while parts of the world seem to be evolving in the right direction for SpeakerTravel, it’s unclear when in-person conferences will pick up again.

Apart from infrastructure, there’s no real cost to running the application, so I can be patient on that side, keep pitching it to anyone I meet, and provide good support for those who do sign up in the meantime.

Speaking of which, I’m super happy that since September 2021, a few conferences have been using the product for in-person travel!

Why not pivot?

A question I got recently was: Why not pivot to business travel? – great idea!

Earlier in this post, I described the model where employees could search and pick travel options, and the company can approve and pay. This would indeed be a great pivot, but there are a couple of things holding me back on this:

It’s a very crowded market (with some big players like American Express). This is not a big issue in itself; it validates that there is a market, but it would take quite some effort to get traction.
I’d have to expand from flights into flights + hotels + cars. While possible in terms of APIs, it does require fulfilling some extra regulations.

Both of these would mean going bigger than what I currently want to handle.

Conclusion and Takeaways

Sometimes, you have a story in you that you just want to write down. This was one of those.

Instead of sharing the event of having SpeakerTravel online, I wanted to share the story about the process that brought it about. Maybe we all focus on the event too much, and not enough on the process towards the event.

Social media consists of short bits, while blogs, articles and tutorials about the process have so much value. Leave breadcrumbs for those who will be on a similar path in the future.

Speaking of that: if there’s anything in this blog post you would like to see a follow-up on with more details, let me know via the comments.

Take care!

Do you have an exit strategy?

It’s an extremely common problem in legacy code bases: a new way of doing things was introduced before the team decided on a way to get the old thing out.

Famous examples are:

Introducing Doctrine ORM next to Propel
Introducing Symfony FrameworkBundle while still using Zend controllers
Introducing Twig for the new templates, while using Smarty for the old ones
Introducing a Makefile while the rest of the project still uses Phing

And so on… I’m sure you have plenty of examples to add here!

Introducing a new tool while keeping the old one

For a moment we are so happy that we can start using that new tool, but every time we need to change something in this area we have to roll out the same solution twice, for each tool we introduced. Something changes about the layout of the site? We have to update both Twig and Smarty templates. Something changes about the authentication logic? We have to change a Symfony request listener and the Zend application bootstrap file too. There will be lots of copy/pasting, and head scratching. Finally, we have to keep both dependencies up-to-date for a long time.

Okay, everybody knows that this is bad, and that you shouldn’t do it. Still, every day we tend to make problematic decisions like this. We try to bridge some kind of gap, but that leaves us with one extra thing to maintain. And software is already so hard (and expensive) to maintain…

Multiple versions in the project

The same goes for decisions at a larger scale. How many projects have a V2 and a V3 directory in their code base? One day the developers wanted to escape the mess by creating this green spot next to the big brown spot. Then some time later the same happened again, and maybe even again.

The problem with these decisions: there is usually no exit strategy. A new thing is created next to an old thing, and the old thing will be there forever. Often developers defend such a decision by saying that the old things will be migrated one by one to the new thing. But this simply can’t be true, unless:

A very serious effort is made to do so (but this will be incredibly expensive)
A long-term commitment is made to keep doing this continuously (alongside other important work)
There isn’t much to migrate anyway (but that usually isn’t the case)

On an even larger scale, teams may want to rewrite entire products. A rewrite suffers from all the above-mentioned problems, and we already know that rewrites usually aren’t successful either. To be honest, I’ve been part of several successful rewrite projects, but they have been very expensive, and they were extensively redesigned. They didn’t go for feature parity, which may have contributed largely to their success.

Class and method deprecations

It’s not always about new tools, new libraries, new project versions, or rewrites. Even at a much smaller scale developers make decisions that complicate maintenance in the long run. For instance, developers introduce new classes and new methods. They mark the old ones as @deprecated, yet they don’t upgrade existing clients, so the old classes and methods can never be deleted and will be dragged along forever.
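To make that concrete, here is a small C#-flavoured illustration (the examples above come from the PHP world, but the pattern is the same):

using System;

public class InvoiceCalculator
{
    // The new method that was supposed to replace the old one.
    public decimal CalculateTotal(decimal subtotal, decimal taxRate) =>
        subtotal * (1 + taxRate);

    // The old method: marked as deprecated, but its existing callers were
    // never migrated, so it has to be maintained (and kept correct) forever.
    [Obsolete("Use CalculateTotal(subtotal, taxRate) instead.")]
    public decimal CalculateTotalAmount(decimal subtotal) =>
        subtotal * 1.21m;
}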

We want the new thing, but we don’t want to clean up the old mess. For a moment we can escape the legacy mess and be happy in the green field, but the next day we see the mess around us and realize that we have to maintain even more code today than we did yesterday.

Design heuristics

So at different scales we make these design decisions that actually increase the already unbearable maintenance burden. How can we stop this?

We have to make better decisions, essentially using better heuristics for making them. When introducing a new thing that is supposed to replace an old thing we have to keep asking ourselves:

Do we have a realistic exit strategy for the old thing?
Will we actually get the old thing out?

If not, I think you owe it to the team to consider fixing or improving the old thing instead.

Send Emails using Microsoft Graph API and a desktop client

This article shows how to use Microsoft Graph API to send emails from a .NET Core desktop WPF application. Microsoft.Identity.Client is used to authenticate using an Azure App registration with the required delegated scopes for the Graph API. The emails can be sent with text or HTML bodies, and with any file attachments uploaded in the WPF application.

Code: https://github.com/damienbod/EmailCalandarsClient

To send emails using Microsoft Graph API, the Azure Active Directory user which sends the email needs an Office license (i.e. an Exchange Online mailbox).

You can sign-in here to check this:

https://www.office.com

Setup the Azure App Registration

Before we can send emails using Microsoft Graph API, we need to create an Azure App registration with the correct delegated scopes. In our example, the URI http://localhost:65419 is used for the AAD redirect to the browser opened by the WPF application, and this is added to the authentication configuration. Once created, the client ID of the Azure App registration is used in the application’s app settings, together with the tenant ID and the scopes.

You need to add the required scopes for the Graph API to send emails. These are delegated permissions, which can be accessed using the Add a permission menu.

The Mail.Send and the Mail.ReadWrite delegated scopes from the Microsoft Graph API are added to the Azure App registration.

To add these, open the Add a permission menu, choose the Microsoft Graph delegated permissions, scroll down through the items, and check the checkboxes for Mail.Send and Mail.ReadWrite.

Desktop Application

The Microsoft.Identity.Client and the Microsoft.Identity.Web.MicrosoftGraphBeta NuGet packages are used to authenticate and use the Graph API. You could probably use the Graph API NuGet packages directly instead of Microsoft.Identity.Web.MicrosoftGraphBeta; I used this one since I normally do web development and it has everything required.

<ItemGroup>
<PackageReference Include="Microsoft.Identity.Client" Version="4.35.1" />
<PackageReference Include="Microsoft.Identity.Web.MicrosoftGraphBeta" Version="1.15.2" />
<PackageReference Include="Newtonsoft.Json" Version="13.0.1" />
</ItemGroup>

The PublicClientApplicationBuilder class is used to define the redirect URL which matches the URL from the Azure App registration. The TokenCacheHelper class is the same as from the Microsoft examples.

public void InitClient()
{
_app = PublicClientApplicationBuilder.Create(ClientId)
.WithAuthority(Authority)
.WithRedirectUri("http://localhost:65419")
.Build();

TokenCacheHelper.EnableSerialization(_app.UserTokenCache);
}
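The TokenCacheHelper referenced above persists the MSAL token cache so that silent sign-in keeps working across application restarts. A minimal sketch could look like the following (the cache file path is illustrative, and the Microsoft sample additionally protects the bytes with DPAPI):

using System.IO;
using Microsoft.Identity.Client;

public static class TokenCacheHelper
{
    // Illustrative cache location; the Microsoft sample stores the file
    // next to the executable and encrypts it before writing.
    private static readonly string CacheFilePath =
        Path.Combine(Path.GetTempPath(), "msal_token_cache.bin");

    private static readonly object FileLock = new object();

    public static void EnableSerialization(ITokenCache tokenCache)
    {
        tokenCache.SetBeforeAccess(args =>
        {
            lock (FileLock)
            {
                // Load the persisted cache (if any) before MSAL reads it.
                args.TokenCache.DeserializeMsalV3(
                    File.Exists(CacheFilePath) ? File.ReadAllBytes(CacheFilePath) : null);
            }
        });

        tokenCache.SetAfterAccess(args =>
        {
            // Persist the cache only when MSAL has changed it.
            if (args.HasStateChanged)
            {
                lock (FileLock)
                {
                    File.WriteAllBytes(CacheFilePath, args.TokenCache.SerializeMsalV3());
                }
            }
        });
    }
}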

The identity can authenticate using the SignIn method. If a cached session exists, a token is acquired silently; otherwise an interactive flow is used.

public async Task<IAccount> SignIn()
{
try
{
var result = await AcquireTokenSilent();
return result.Account;
}
catch (MsalUiRequiredException)
{
return await AcquireTokenInteractive().ConfigureAwait(false);
}
}

private async Task<IAccount> AcquireTokenInteractive()
{
var accounts = (await _app.GetAccountsAsync()).ToList();

var builder = _app.AcquireTokenInteractive(Scopes)
.WithAccount(accounts.FirstOrDefault())
.WithUseEmbeddedWebView(false)
.WithPrompt(Microsoft.Identity.Client.Prompt.SelectAccount);

var result = await builder.ExecuteAsync().ConfigureAwait(false);

return result.Account;
}

public async Task<AuthenticationResult> AcquireTokenSilent()
{
var accounts = await GetAccountsAsync();
var result = await _app.AcquireTokenSilent(Scopes, accounts.FirstOrDefault())
.ExecuteAsync()
.ConfigureAwait(false);

return result;
}

The SendEmailAsync method uses a message object and the Graph API to send the emails. If the identity is authenticated and has the required permissions and license, an email will be sent using the definitions from the Message class.

public async Task SendEmailAsync(Message message)
{
var result = await AcquireTokenSilent();

_httpClient.DefaultRequestHeaders.Authorization
= new AuthenticationHeaderValue("Bearer", result.AccessToken);
_httpClient.DefaultRequestHeaders.Accept.Add(
new MediaTypeWithQualityHeaderValue("application/json"));

GraphServiceClient graphClient = new GraphServiceClient(_httpClient)
{
AuthenticationProvider = new DelegateAuthenticationProvider(async (requestMessage) =>
{
requestMessage.Headers.Authorization
= new AuthenticationHeaderValue("Bearer", result.AccessToken);
})
};

var saveToSentItems = true;

await graphClient.Me
.SendMail(message, saveToSentItems)
.Request()
.PostAsync();
}

The EmailService class is used to add the recipient, the header (subject) and the body to the message which represents the email. The attachments are added separately using the MessageAttachmentsCollectionPage class. The AddAttachment method is used to add as many attachments to the email as required, which are uploaded as a base64 byte array. The service can send HTML bodies or text bodies.

public class EmailService
{
MessageAttachmentsCollectionPage MessageAttachmentsCollectionPage
= new MessageAttachmentsCollectionPage();

public Message CreateStandardEmail(string recipient, string header, string body)
{
var message = new Message
{
Subject = header,
Body = new ItemBody
{
ContentType = BodyType.Text,
Content = body
},
ToRecipients = new List<Recipient>()
{
new Recipient
{
EmailAddress = new EmailAddress
{
Address = recipient
}
}
},
Attachments = MessageAttachmentsCollectionPage
};

return message;
}

public Message CreateHtmlEmail(string recipient, string header, string body)
{
var message = new Message
{
Subject = header,
Body = new ItemBody
{
ContentType = BodyType.Html,
Content = body
},
ToRecipients = new List<Recipient>()
{
new Recipient
{
EmailAddress = new EmailAddress
{
Address = recipient
}
}
},
Attachments = MessageAttachmentsCollectionPage
};

return message;
}

public void AddAttachment(byte[] rawData, string filePath)
{
MessageAttachmentsCollectionPage.Add(new FileAttachment
{
Name = Path.GetFileName(filePath),
ContentBytes = EncodeTobase64Bytes(rawData)
});
}

public void ClearAttachments()
{
MessageAttachmentsCollectionPage.Clear();
}

static public byte[] EncodeTobase64Bytes(byte[] rawData)
{
string base64String = System.Convert.ToBase64String(rawData);
var returnValue = Convert.FromBase64String(base64String);
return returnValue;
}
}

Azure App Registration settings

The app settings specific to your Azure Active Directory tenant and the Azure App registration values need to be added to the app settings in the .NET Core application. The Scope configuration is set to the scopes required to send emails.

<appSettings>
<add key="AADInstance" value="https://login.microsoftonline.com/{0}/v2.0"/>
<add key="Tenant" value="5698af84-5720-4ff0-bdc3-9d9195314244"/>
<add key="ClientId" value="ae1fd165-d152-492d-b4f5-74209f8f724a"/>
<add key="Scope" value="User.read Mail.Send Mail.ReadWrite"/>
</appSettings>

WPF UI

The WPF application provides an Azure AD login for the identity. The user of the WPF application can sign in using a browser which redirects to the AAD authentication page. Once authenticated, the user can send an HTML email or a text email. The AddAttachment method uses the OpenFileDialog to upload a file in the WPF application, gets the raw bytes and adds these to the attachments which are sent with the next email message. Once the email is sent, the attachments are removed.

public partial class MainWindow : Window
{
AadGraphApiDelegatedClient _aadGraphApiDelegatedClient = new AadGraphApiDelegatedClient();
EmailService _emailService = new EmailService();

const string SignInString = "Sign In";
const string ClearCacheString = "Clear Cache";

public MainWindow()
{
InitializeComponent();
_aadGraphApiDelegatedClient.InitClient();
}

private async void SignIn(object sender = null, RoutedEventArgs args = null)
{
var accounts = await _aadGraphApiDelegatedClient.GetAccountsAsync();

if (SignInButton.Content.ToString() == ClearCacheString)
{
await _aadGraphApiDelegatedClient.RemoveAccountsAsync();

SignInButton.Content = SignInString;
UserName.Content = "Not signed in";
return;
}

try
{
var account = await _aadGraphApiDelegatedClient.SignIn();

Dispatcher.Invoke(() =>
{
SignInButton.Content = ClearCacheString;
SetUserName(account);
});
}
catch (MsalException ex)
{
if (ex.ErrorCode == "access_denied")
{
// The user canceled sign in, take no action.
}
else
{
// An unexpected error occurred.
string message = ex.Message;
if (ex.InnerException != null)
{
message += " Error Code: " + ex.ErrorCode + " Inner Exception: " + ex.InnerException.Message;
}

MessageBox.Show(message);
}

Dispatcher.Invoke(() =>
{
UserName.Content = "Not signed in";
});
}
}

private async void SendEmail(object sender, RoutedEventArgs e)
{
var message = _emailService.CreateStandardEmail(EmailRecipientText.Text,
EmailHeader.Text, EmailBody.Text);

await _aadGraphApiDelegatedClient.SendEmailAsync(message);
_emailService.ClearAttachments();
}

private async void SendHtmlEmail(object sender, RoutedEventArgs e)
{
var messageHtml = _emailService.CreateHtmlEmail(EmailRecipientText.Text,
EmailHeader.Text, EmailBody.Text);

await _aadGraphApiDelegatedClient.SendEmailAsync(messageHtml);
_emailService.ClearAttachments();
}

private void AddAttachment(object sender, RoutedEventArgs e)
{
var dlg = new OpenFileDialog();
if (dlg.ShowDialog() == true)
{
byte[] data = File.ReadAllBytes(dlg.FileName);
_emailService.AddAttachment(data, dlg.FileName);
}
}

private void SetUserName(IAccount userInfo)
{
string userName = null;

if (userInfo != null)
{
userName = userInfo.Username;
}

if (userName == null)
{
userName = "Not identified";
}

UserName.Content = userName;
}
}

Running the application

When the application is started, the user can sign in using the Sign in button.

The standard Azure AD login is used in a popup browser. Once the authentication is completed, the browser redirect sends the tokens back to the application.

If a file attachment needs to be sent, the Add Attachment button can be used. This opens up a dialog and any single file can be selected.

When the email is sent successfully, the email and the file attachment can be viewed in the recipient's inbox. The emails are also saved to the sender's Sent Items folder; this can be disabled if required, as shown below.
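Saving to Sent Items is controlled by the saveToSentItems flag passed to SendMail in the SendEmailAsync method above; setting it to false skips the copy:

// Send the mail without saving a copy to the sender's Sent Items folder.
await graphClient.Me
    .SendMail(message, false)
    .Request()
    .PostAsync();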

Links

https://docs.microsoft.com/en-us/graph/outlook-send-mail-from-other-user

https://stackoverflow.com/questions/43795846/graph-api-daemon-app-with-user-consent

https://winsmarts.com/managed-identity-as-a-daemon-accessing-microsoft-graph-8d1bf87582b1

https://cmatskas.com/create-a-net-core-deamon-app-that-calls-msgraph-with-a-certificate/

https://docs.microsoft.com/en-us/answers/questions/43724/sending-emails-from-daemon-app-using-graph-api-on.html

https://stackoverflow.com/questions/56110910/sending-email-with-microsoft-graph-api-work-account

https://docs.microsoft.com/en-us/graph/sdks/choose-authentication-providers?tabs=CS#InteractiveProvider

https://converter.telerik.com/