Maintaining Code Quality with Amazon CodeCatalyst Reports

Amazon CodeCatalyst reports contain details about tests that occur during a workflow run. You can create tests such as unit tests, integration tests, configuration tests, and functional tests. You can use a test report to help troubleshoot a problem during a workflow.

Introduction

In prior posts in this series, I discussed reading The Unicorn Project, by Gene Kim, and how the main character, Maxine, struggles with a complicated Software Development Lifecycle (SDLC) after joining a new team. One of the challenges she encounters is the difficulty of shipping secure, functioning code without an automated testing mechanism. To quote Gene Kim, “Without automated testing, the more code we write, the more money it takes for us to test.”

Software Developers know that shipping vulnerable or non-functioning code to a production environment is to be avoided at all costs; the monetary impact is high and the toll it takes on team morale can be even greater. During the SDLC, developers need a way to easily identify and troubleshoot errors in their code.

In this post, I will focus on how developers can seamlessly run tests as a part of workflow actions as well as configure unit test and code coverage reports with Amazon CodeCatalyst. I will also outline how developers can access these reports to gain insights into their code quality.

Prerequisites

If you would like to follow along with this walkthrough, you will need to:

Have an AWS Builder ID for signing in to CodeCatalyst.
Belong to a CodeCatalyst space and have the Space administrator role assigned to you in that space. For more information, see Creating a space in CodeCatalyst, Managing members of your space, and Space administrator role.
Have an AWS account associated with your space and have the IAM role in that account. For more information about the role and role policy, see Creating a CodeCatalyst service role.

Walkthrough

As with the previous posts in the CodeCatalyst series, I am going to use the Modern Three-tier Web Application blueprint. Blueprints provide sample code and CI/CD workflows to help you get started easily across different combinations of programming languages and architectures. To follow along, you can re-use a project you created previously, or you can refer to a previous post that walks through creating a project using the Three-tier blueprint.

Once the project is deployed, CodeCatalyst opens the project overview. This view shows the content of the README file from the project’s source repository, workflow runs, pull requests, etc. The source repository and workflow are created for me by the project blueprint. To view the source code, I select Code → Source Repositories from the left-hand navigation bar. Then, I select the repository name link from the list of source repositories.

Figure 1. List of source repositories including Mythical Mysfits source code.

From here I can view details such as the number of branches, workflows, commits, pull requests and source code of this repo. In this walkthrough, I’m focused on the testing capabilities of CodeCatalyst. The project already includes unit tests that were created by the blueprint so I will start there.

From the Files list, navigate to web → src → components → __tests__ → TheGrid.spec.js. This file contains the front-end unit tests which simply check if the strings “Good”, “Neutral”, “Evil” and “Lawful”, “Neutral”, “Chaotic” have rendered on the web page. Take a moment to examine the code. I will use these tests throughout the walkthrough.

Figure 2. Unit test for the front-end that tests that strings have been rendered properly.

Next, I navigate to the workflow that executes the unit tests. From the left-hand navigation bar, select CI/CD → Workflows. Then, find ApplicationDeploymentPipeline, expand Recent runs and select Run-xxxxx. The Visual tab shows a graphical representation of the underlying YAML file that makes up this workflow. It also provides details on what started the workflow run, when it started, how long it took to complete, the source repository and whether it succeeded.

Figure 3. The Deployment workflow open in the visual designer.

Workflows are composed of a source and one or more actions. I examined test reports for the back-end in a prior post. Therefore, I will focus on the front-end tests here. Select the build_and_test_frontend action to view logs on what the action ran, its configuration details, and the reports it generated. I’m specifically interested in the Unit Test and Code Coverage reports under the Reports tab:

Figure 4. Reports tab showing line and branch coverage.

Select the report unitTests.xml (you may need to scroll). Here, you can see an overview of this specific report with metrics like pass rate, duration, test suites, and the test cases for those suites:

Figure 5. Detailed report for the front-end tests.

This report has passed all checks. To make this report more interesting, I’ll intentionally edit the unit test to make it fail. First, navigate back to the source repository and open web → src → components → __tests__ → TheGrid.spec.js. This test case is looking for the string “Good”, so change it to say “Best” instead and commit the changes.

Figure 6. Front-End Unit Test Code Change.

This will automatically start a new workflow run. Navigating back to CI/CD → Workflows, you can see a new workflow run is in progress (it takes ~7 minutes to complete).

Once complete, you can see that the build_and_test_frontend action failed. Opening the unitTests.xml report again, you can see that the report status is in a Failed state. Notice that the minimum pass rate for this test is 100%, meaning that if any test case in this unit test ever fails, the build fails completely.

There are ways to configure these minimums which will be explored when looking at Code Coverage reports. To see more details on the error message in this report, select the failed test case.

Figure 7. Failed Test Case Error Message.

As expected, this indicates that the test was looking for the string “Good” but instead, it found the string “Best”. Before continuing, I return to the TheGrid.spec.js file and change the string back to “Good”.

CodeCatalyst also allows me to specify code and branch coverage criteria. Coverage is a metric that can help you understand how much of your source was tested. This ensures source code is properly tested before shipping to a production environment. Coverage is not configured for the front-end, so I will examine the coverage of the back-end.

I select Reports on the left-hand navigation bar, and open the report called backend-coverage.xml. You can see details such as line coverage, number of lines covered, specific files that were scanned, etc.

Figure 8. Code Coverage Report Succeeded.

The Line coverage minimum is set to 70% but the current coverage is 80%, so it succeeds. I want to push the team to continue improving, so I will edit the workflow to raise the minimum threshold to 90%. Navigating back to CI/CD → Workflows → ApplicationDeploymentPipeline, select the Edit button. On the Visual tab, select build_backend. On the Outputs tab, scroll down to Success Criteria and change Line Coverage to 90%.

Figure 9. Configuring Code Coverage Success Criteria.
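
Behind the visual editor, this setting maps to the workflow’s YAML definition. The snippet below is only an illustrative sketch of how report success criteria are expressed there; the real ApplicationDeploymentPipeline file generated by the blueprint will differ in its surrounding details:

build_backend:
  Outputs:
    AutoDiscoverReports:
      Enabled: true
      SuccessCriteria:
        PassRate: 100
        LineCoverage: 90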

On the top-right, select Commit. This will push the changes to the repository and start a new workflow run. Once the run has finished, navigate back to the Code Coverage report. This time, you can see it reporting a failure to meet the minimum threshold for Line coverage.

Figure 10. Code Coverage Report Failed.

There are other success criteria options available to experiment with. To learn more about success criteria, see Configuring success criteria for tests.

Cleanup

If you have been following along with this workflow, you should delete the resources you deployed so you do not continue to incur charges. First, delete the two stacks that CDK deployed using the AWS CloudFormation console in the AWS account you associated when you launched the blueprint. These stacks will have names like mysfitsXXXXXWebStack and mysfitsXXXXXAppStack. Second, delete the project from CodeCatalyst by navigating to Project settings and choosing Delete project.

Summary

In this post, I demonstrated how Amazon CodeCatalyst can help developers quickly configure test cases, run unit/code coverage tests, and generate reports using CodeCatalyst’s workflow actions. You can use these reports to adhere to your code testing strategy as a software development team. I also outlined how you can use success criteria to influence the outcome of a build in your workflow. In the next post, I will demonstrate how to configure CodeCatalyst workflows and integrate Software Composition Analysis (SCA) reports. Stay tuned!

About the authors:

Imtranur Rahman

Imtranur Rahman is an experienced Sr. Solutions Architect on the WWPS team with 14+ years of experience. Imtranur works with large AWS Global SI partners and helps them build their cloud strategy and broad adoption of Amazon’s cloud computing platform. Imtranur specializes in Containers, Dev/SecOps, GitOps, microservices-based applications, hybrid application solutions and application modernization, and loves innovating on behalf of his customers. He is highly customer obsessed and takes pride in providing the best solutions through his extensive expertise.

Wasay Mabood

Wasay is a Partner Solutions Architect based out of New York. He works primarily with AWS Partners on migration, training, and compliance efforts but also dabbles in web development. When he’s not working with customers, he enjoys window-shopping, lounging around at home, and experimenting with new ideas.

Javascript Clean Code Principles

One of the books that has most influenced my life is The Elements of Style by Strunk and White. I took a technical writing class in college where we closely studied its recommendations. The book is short and contains over 100 side-by-side comparisons of less effective and more effective writing.

Reading it made me realize that I learn well by example and comparison. I’ve long wanted to write an article that shows less effective and more effective programming approaches by comparison for those who also learn well by comparison.

Today I’m going to lay out what I’ve found to be the most important principles for writing clean code. In the first section, the examples are written in JavaScript, but they apply to almost every language. In the second section, the examples are specific to React.

Before we start the side-by-side comparisons, I want to make a recommendation that needs no side-by-side view.

Use prettier

If you have not heard of it, prettier is an automated code formatting tool. The idea is that you add a prettier config file to your project, and request all your teammates or contributors to enable an IDE plugin that re-formats code on save.

Never again will my team have an argument about tabs vs spaces or 80-column wrap vs 120-column wrap. It will also settle disputes about what types of quotes to use, whether to use semicolons or what spacing to use around brackets.

Prettier was created for JavaScript, JSX, and JSON, but it has plugins for HTML, CSS, Markdown, XML, YAML, TOML, PHP, Python, Ruby, Java, shell scripts, and many more.

My favorite thing is that I concentrate on code and not on formatting. I can quickly add code without proper newlines or spacing and then watch prettier magically format the new code.

Let’s start

Each recommendation below has a very short description and a code example so you can compare more effective vs. less-effective approaches.

Exit early when possible

When writing a function, consider the negative outcomes that would allow you to exit early from the function. You’ll find your code has fewer indentations and is easier to read.
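
For instance (sendEmail and mailer are illustrative names, not code from the article):

// Less effective: the happy path is buried in nested conditions
function sendEmail(user, mailer) {
  if (user) {
    if (user.email) {
      mailer.send(user.email);
    }
  }
}

// More effective: exit early on the negative outcomes
function sendEmail(user, mailer) {
  if (!user) return;
  if (!user.email) return;
  mailer.send(user.email);
}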

Be expressive, not clever

Of the two functions below, which would you rather come across in a project? Maybe the first one is clever and concise, but how much time does it take you to tweak the functionality?
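
The original pair of functions isn’t reproduced here; a stand-in comparison in the same spirit:

// Clever and concise, but it takes a moment to decode the bitwise trick
const isEvenClever = (n) => !(n & 1);

// Expressive: says exactly what it means and is easy to tweak later
function isEven(n) {
  return n % 2 === 0;
}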

Make variable names descriptive

When you write code, you may have only one thing on your mind. But when you come back later to look at code, descriptive variable names are very helpful.
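
A small illustration (the data is made up):

// Less effective: what do d, u and c stand for?
const u = { c: 1684108800000 };
const d = Date.now() - u.c;

// More effective: the names carry the meaning
const user = { createdAt: 1684108800000 };
const millisecondsSinceSignup = Date.now() - user.createdAt;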

Prefer for-of loops

for-of loops have some advantages over for-i, forEach and for-in loops (compare the sketch after this list):

Fewer characters
Ability to continue, return or break from the loop
Easier to read and follow
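
For example:

const names = ['Ada', 'Grace', 'Linus'];

// for-i: extra index bookkeeping
for (let i = 0; i < names.length; i++) {
  console.log(names[i]);
}

// forEach: you cannot break out of the loop
names.forEach((name) => console.log(name));

// for-of: fewer characters, and continue/break/return work
for (const name of names) {
  if (name === 'Linus') break;
  console.log(name);
}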

Prefix booleans with verbs such as “is” and “has”

Verbs help set a boolean apart from other types.
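
For example:

// Could be a status object, a date, anything
let active = true;
let permission = false;

// Reads as a yes/no question
let isActive = true;
let hasPermission = false;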

Avoid double negatives.

Sometimes they’re subtle and lead to cheeky bugs.
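
A sketch of the kind of trap meant (the flag and helper names are illustrative):

// Double negative: easy to misread under pressure
if (!user.isNotVerified) {
  grantAccess(user);
}

// Positive form: nothing to untangle
if (user.isVerified) {
  grantAccess(user);
}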

Avoid using “!” with “else”

Instead, use the positive form in the if condition.
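
For instance (isValid, save and showError are illustrative):

// Less effective: negated condition, then an else
if (!isValid) {
  showError();
} else {
  save();
}

// More effective: the positive case comes first
if (isValid) {
  save();
} else {
  showError();
}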

Prefer string interpolation over concatenation

It’s almost always more readable to interpolate.
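
For example:

const user = { firstName: 'Ada', lastName: 'Lovelace' };

// Concatenation
const greetingConcat = 'Hello, ' + user.firstName + ' ' + user.lastName + '!';

// Interpolation
const greetingTemplate = `Hello, ${user.firstName} ${user.lastName}!`;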

Avoid using the ternary operator to return a boolean value

In return statements, ternary operators are redundant.
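
For example:

// Redundant: the comparison already produces a boolean
function isAdult(age) {
  return age >= 18 ? true : false;
}

// Just return the comparison
function isAdult(age) {
  return age >= 18;
}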

Use try-catch with await

async/await makes code more readable than a tree of .then() calls. But don’t forget that you need to catch rejections that await-ed values might throw.
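
A minimal sketch (the /api/users endpoint is made up):

async function loadProfile(userId) {
  try {
    const response = await fetch(`/api/users/${userId}`);
    return await response.json();
  } catch (error) {
    // A rejected promise lands here instead of becoming an unhandled rejection
    console.error('Failed to load profile', error);
    return null;
  }
}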

Avoid using “magic” numbers

Any number or string that has a non-obvious meaning should be declared as a separate, descriptively named variable.
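
For instance (user and remindUser are illustrative):

// What does 86400000 mean?
if (Date.now() - user.lastLoginAt > 86400000) {
  remindUser(user);
}

// Named for its meaning
const ONE_DAY_IN_MS = 24 * 60 * 60 * 1000;
if (Date.now() - user.lastLoginAt > ONE_DAY_IN_MS) {
  remindUser(user);
}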

Avoid declaring functions with more than 2 arguments

Arguments should have a logical order. When you have 3 or more arguments, the order is often not obvious. Yes, we have intellisense in our IDE, but save some thought cycles by accepting “named” arguments if appropriate.
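
For example (createUser is an illustrative function):

// Which argument is which?
createUser('Ada', 'Lovelace', true, false, 'en');

// "Named" arguments via a single options object
createUser({
  firstName: 'Ada',
  lastName: 'Lovelace',
  isAdmin: true,
  sendWelcomeEmail: false,
  locale: 'en',
});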

Prefer objects to boolean arguments

Code that calls the function will be cleaner and more obvious.
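
For example (renderGrid and data are illustrative):

// What does true mean at the call site?
renderGrid(data, true);

// Obvious at a glance
renderGrid(data, { sortable: true });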

A Section on React

JSX and React have their own challenges that deserve some extra attention.

Declare DOM only once per function

You can take one of three approaches to avoid declaring the same DOM more than once (see the sketch after this list):

Break components into smaller units
Use && as a stand-in for if blocks
Use the ternary operator for if-else blocks
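
A sketch of the third approach, using an illustrative Status component:

// Less effective: the wrapper markup is declared twice
function Status({ error }) {
  if (error) {
    return <div className="status">{error.message}</div>;
  }
  return <div className="status">All good</div>;
}

// More effective: one DOM declaration, a ternary for the if-else
function Status({ error }) {
  return <div className="status">{error ? error.message : 'All good'}</div>;
}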

Make your own wrappers on top of UI libraries

Your project might rely on MUI or another UI library for all its components. But keeping your UI consistent can be challenging if you have to remember sizes, colors and variants. In the example below, the project wants to always use medium outlined buttons in MUI.
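
A sketch of such a wrapper (AppButton and handleSave are illustrative names; variant and size are standard MUI Button props):

import Button from '@mui/material/Button';

// One place to encode "this project always uses medium outlined buttons"
export function AppButton(props) {
  return <Button variant="outlined" size="medium" {...props} />;
}

// Usage elsewhere in the app:
// <AppButton onClick={handleSave}>Save</AppButton>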

Mind the guard operator

In JavaScript, && is the guard operator, not a boolean operator; it evaluates to its first operand when that operand is falsy, and to its second operand otherwise.
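
For example, in JSX a numeric left operand can leak into the output (CartBadge is an illustrative component):

// When count is 0, React renders a stray "0", because && evaluates to its falsy left operand
function CartBadge({ count }) {
  return <div>{count && <span>{count} items</span>}</div>;
}

// Convert to a real boolean so nothing is rendered when count is 0
function CartBadge({ count }) {
  return <div>{count > 0 && <span>{count} items</span>}</div>;
}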

A final word

Writing clean code takes practice. Be a code craftsman and take the time to learn good principles and make good habits.

The post Javascript Clean Code Principles appeared first on Flatlogic Blog.

A pattern for dealing with #legacy code in c#

static string legacy_code(int input)
{
// some magic process
const int magicNumber = 7;

var intermediaryValue = input + magicNumber;

return "The answer is " + intermediaryValue;
}

When dealing with a project more than a few years old, the issue of legacy code crops up time and time again. In this case, you have a function that’s called from lots of different client applications, so you can’t change it without breaking the client apps.

I’m using the code example above to keep the illustration simple, but you have to imagine that this function “legacy_code(int)”, in reality, could be hundreds of lines long, with lots of quirks and complexities. So you really don’t want to duplicate it.

Now, imagine, that as an output, I want to have just the intermediary value, not the string “The answer is …”. My client could parse the number out of the string, but that’s a horrible extra step to put on the client.

Otherwise you could create “legacy_code_internal()” that returns the int, and legacy_code() calls legacy_code_internal() and adds the string. This is the most common approach, but can end up with a rat’s nest of _internal() functions.

Here’s another approach – you can tell me what you think:

static string legacy_code(int input, Action<int> intermediary = null)
{
// some magic process
const int magicNumber = 7;

var intermediaryValue = input + magicNumber;

if (intermediary != null) intermediary(intermediaryValue);

return "The answer is " + intermediaryValue;
}

Here, we can pass an optional function into the legacy_code function that, if present, will be called with the intermediaryValue as an int, without interfering with how the code is called by existing clients.

A new client looking to use the new functionality could call:

int intermediaryValue = 0;
var answer = legacy_code(4, i => intermediaryValue = i);
Console.WriteLine(answer);
Console.WriteLine(intermediaryValue);

This approach could return more than one object, but this could get very messy.

Code Analysis to the Rescue!

Introduction

Do you remember when you introduced a new project that was documented, had unit tests, had a clean architecture and was fully decoupled, and that you were so proud of? … And after some time, poof! The project is a mess: vulnerabilities, spaghetti code, tight coupling, no style consistency. …And there is more, another tight deadline is here! This is not a science-fiction scenario. I believe that a lot of us have lived such days as developers.

Figure 1. – My new cool project after a while is a mess 😭 (Source).

There is high pressure on developers to meet tight deadlines, while not compromising the quality of the software, which should be clean, readable, consistent, reusable, maintainable, testable, efficient, secure, etc. Even in smaller projects or teams, it is a struggle to properly sustain the code’s quality and architecture.

It is a difficult task, and that’s why constructive, quality feedback from code peer reviews and manual testing provides software quality assurance. But is it enough? To improve code peer review efficiency, reduce the time it requires and automate testing processes, Static Code Analysis and Dynamic Code Analysis can be used.

In this article, we will learn about static code analysis, dynamic code analysis, how they can help us, their limitations, and how to choose the right tools depending on our needs. So, if we are ready… ikuzo (let’s go).

Static Code Analysis

Static Code Analysis (also known as Static Program Analysis, Source Code Analysis or Static Analysis) is the examination of the source code that is performed without running the program (just by “reading” the code) to identify:

Code Quality issues,
Vulnerabilities (security weaknesses),
Violations of coding standards, etc.

The main advantage of static code analysis is to detect and eliminate issues early in the software development process, resulting in lower fixing cost.

In the majority of software development teams, this analysis is probably already performed through manual code peer reviews. The downsides of manual code reviews are that they require a lot of time (i.e. they’re expensive) and may not always be effective and in-depth. For these reasons, several tools have been implemented to automate this process.

Static Code Analysis Tools

Static code analysis tools review the source code automatically, based on multiple coding rules. The best-known static code analysis tool for .NET developers may be the .NET Compiler Platform (Roslyn) analyzers, which inspect code for style, quality, maintainability, design, and other issues.
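
The severity of individual analyzer rules can be tuned per project, for example from an .editorconfig file (the rule ID below is just one example):

[*.cs]
# CA1822: Mark members as static
dotnet_diagnostic.CA1822.severity = warning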

It is important to note that static code analysis tools have limitations. They cannot identify whether the business requirements, the developer’s intent and the agreed implementation logic have been fulfilled in the code. Thus, code peer reviews are still an important part of the development process (a program cannot replace a peer review). Also, static code analysis tools may:

Identify false positive issues (i.e. issues that don’t require any fix-action) or
Not identify some actual issues (false negatives).

Choosing a Static Code Analysis Tool

Each static code analysis tool has its own features and integrations and supports different programming languages. So, it’s important to choose a static code analysis tool based on your needs, for example:

Programming Language: Supports your programming language(s).

Integrations: Integrates with your Integrated Development Environment (IDE) and your Continuous Integration (CI) system.

Suppression Feature: Provides the ability to dismiss false positive issues. Ideally, these suppressions should live in a separate file rather than getting the code dirty with suppression attributes.

Rules Extendibility Feature: Provides the ability to add new rules that would suit your team’s and your business’s requirements.

Summary Metrics: Provides a summary of the metrics under investigation.

Collaborative/Reporting Features: Provides a way to share the project’s metrics with developers and management.

Fast Analysis Results: The static code analysis will be executed multiple times, so it’s important that it doesn’t slow developers down. If it does, the developers will avoid using it.

Dynamic Code Analysis

Dynamic code analysis (also known as Dynamic Testing or Dynamic Program Analysis) is the opposite of static code analysis. In dynamic code analysis, the examination of the code is performed while the code is running. The main idea is to interact with the running application by providing it with different inputs (test data) and examining the results.

The test data can include cases that examine different business scenarios but also malicious inputs such as extreme inputs (long strings, negative and large positive numbers), unexpected inputs, SQL injections etc.

As we can understand, the efficiency of such an analysis depends on the quality and quantity of the input test data. The code coverage measure can be used to describe the degree to which the source code is executed for the selected input test data.

Dynamic code analysis can be used to identify critical cases, such as:

Runtime Vulnerabilities (e.g. security threats).
Program Reliability (e.g. program errors, memory leaks, race conditions, etc.).
Response Time (e.g. delays on specific requests or scenarios).
Consumed Resources (e.g. CPU usage, memory usage, number of third-party requests, etc.).

To perform such tests, significant computational resources are required. In addition, an isolated (testing) environment with all the necessary dependencies on third-party resources (e.g. databases, APIs, etc.) is required so production systems aren’t affected.

Performing Dynamic Code Analysis

Dynamic code analysis can be performed by applying both white-box and black-box testing. In white-box testing, we are using the information of the internal structure of the code to design the test cases. For example, white-box testing for dynamic code analysis can be performed by unit and integration tests.

On the contrary, in black-box testing, we do not need the information about the internal structure of the code to examine its functionalities. For example, black-box testing for dynamic code analysis can be performed by integration tests and with third-party utilities.

These third-party utilities can support the identification of several pre-defined cases (e.g. vulnerabilities) or they can “record” the performed actions as the program is being executed, to be re-executed easily afterwards. The selection of these utilities is based on the critical cases that we would like to identify.

Summary

Developers are under high pressure to meet tight deadlines and at the same time not to compromise the quality of the software. The software quality can be described by several attributes (e.g. Maintainability, Security, Efficiency, etc.) which require a high effort to be accomplished.

There are some manual processes (e.g. code peer reviews and manual testing) that can help to maintain and improve software quality, but they’re expensive (because they require a lot of time) and may not always be effective. Static code analysis tools and Dynamic code analysis (in white-box and black-box) should be used along with the existing manual processes to boost the software quality.

Static Code Analysis tools provide diagnostics (about code quality, coding standards, etc.) early in the software development process, resulting in a lower fixing cost. They cannot identify whether the business requirements have been fulfilled in the code.

Dynamic Code Analysis (for example as unit tests, integration tests and third-party utilities) can identify vulnerabilities, memory leaks, race conditions, etc.

The selection of these tools and methods should be based on our needs (programming language, integrations, etc.) and goals (e.g. find threats quickly, keep low response times, improve memory usage, etc.).

To achieve the highest quality in our software, we have to use various tools and methods from both Static Code Analysis and Dynamic Code Analysis. The use of these tools will educate us about the rules that we should follow and their impact, thus our skills will keep improving.

In future articles, I will share with you my experiences using some of these tools in .NET projects, so stay tuned!

What’s New for Visual Basic in Visual Studio 2022

Visual Studio 2022 and .NET 6.0 have some great new features for Visual Basic developers. Some of these features can affect the way you write code every day. Many of the productivity features covered here are available to you whether you program for .NET Framework or for the latest version of .NET.

Overall

Visual Studio 2022 has a new look, with the new Cascadia font and updated icons. If you have customized your font, you may need to explicitly set your font to Cascadia. You have several Cascadia choices with different weights and two styles: Mono and Code. Cascadia Mono is the default. Choose Cascadia Code if you would like ligatures. Ligatures express two or more characters as a single unit. For Visual Basic, this mainly affects the appearance of >=, <= and <>. Cascadia Code with ligatures looks like:

And with Cascadia Mono, you get the more traditional look:

You can change your font in Tools > Options > Environment > Fonts and Colors.

For folks that like a dark theme, Visual Studio 2022 updates its dark theme with new colors that reduce eyestrain, improve accessibility, and provide consistency with the Windows dark theme. You can change your theme in Tools > Options > Environment > General. You can set your theme to match your Windows theme by selecting “Use system settings.”

Visual Studio now runs as a 64-bit process, with better performance for many operations and better handling of very large solutions. Since it’s now a 64-bit process, you may need to update extensions or MSBuild tasks.

Debugging

Visual Studio has several great variations of breakpoints. In Visual Studio 2022, as you move your cursor within the left margin, a dot will show up. Click it to find available types of breakpoints:

When you select a breakpoint type that needs more information, the breakpoint settings dialog will appear in-line in your code:

I set up several breakpoint variations to show the glyphs for different kinds of breakpoints:

The breakpoint showing the clock symbol is a temporary breakpoint. As soon as it is hit, it will disappear. This is great if you want your application to run until you get to a certain spot, and then you have other exploration to do. The solid circle is a normal breakpoint, and the plus inside a circle indicates a conditional breakpoint.

The last glyph with the arrow is the most powerful. It represents a new kind of breakpoint – the dependent breakpoint. This breakpoint is hit if, and only if, another breakpoint is hit. I set a condition for the breakpoint on line 7 to x = 5. Since that breakpoint is not hit, the breakpoint on line 8 is not hit. I plan to use this when I want to stop at the first iteration of a loop, but only if an outer condition is met. Previously, I needed to enable one breakpoint until the outer condition was met, and only after that was hit enable a breakpoint inside the loop.

A tracepoint is like a breakpoint, but it continues execution. Use this to display a message in the output window. When you select a tracepoint, the breakpoint settings dialog opens and hovering over the information icon has great help:

Tracepoints have been around for a while, and Visual Studio 2022 makes them easier to set up.

You can combine many types of breakpoints, like the very powerful conditional tracepoints. Note that some combinations, like dependent tracepoints, aren’t legal, and you’ll see an error at the top of the dialog.

If you realize a breakpoint or a tracepoint is not on the correct line after you set it up, in Visual Studio 2022 you can just move it by dragging and dropping. Any conditions or breakpoint dependencies are maintained. Of course, if you move it outside the context of the condition, you will get errors notifying you of this. You can also use Ctrl-Z to reset a breakpoint you accidentally delete.

Sometimes you have breakpoints set up, but for now you want to skip all of them to get to your current location in code. You can do that in Visual Studio 2022 by right clicking the line you want to stop at and selecting “Force run to cursor”. This will act like “Run to cursor”, but bypass any breakpoints encountered along the way.

Editor

A number of new features in the editor make your everyday coding smoother and more efficient.

Subword navigation

You probably use Ctrl-Left and Ctrl-Right to move one word to the left and right. In Visual Studio 2022, you can use Ctrl-Alt-Left and Ctrl-Alt-Right to move left and right by parts of words:

This supports the Pascal style of symbol naming.

Inheritance Margin

The inheritance margin adds icons to the left margin representing where code is derived from other code, and where other code derives from this code. The icon is marked here with an arrow:

If you click on one of these icons, a popup will display the base or derived classes or methods:

You can click on the popup entries to quickly navigate to the base or derived class or method.

You can enable or disable this feature in Tools > Options > Text Editor > Basic > Advanced and deselect Enable Inheritance Margin.

Underline reassigned

The new underline reassigned feature is for folks that want to know whether variables are reassigned, or remain set to their initial value. Any variables that are reassigned will look like:

This is turned off by default. If you want hints to where data changes are happening, turn it on in Tools > Options > Text Editor > Basic > Advanced.

IntelliSense for preprocessor symbols

Preprocessor symbols now have IntelliSense. The available symbols will appear after you type #If:

Inline parameter name hints

Another optional feature is parameter name hints. When turned on it displays a small indicator for the parameter name:

You can enable or disable this feature in Tools > Options > Text Editor > Basic > Advanced. This is disabled by default.

Add missing Imports on paste

Visual Studio 2022 will add required namespace imports when you paste a code block which contains a code element that requires a namespace import:

You can enable or disable this feature in Tools > Options > Text Editor > Basic > Advanced.

Inline diagnostics

A very cool experimental feature is inline diagnostics. You can turn this on near the top of Tools > Options > Text Editor > Basic > Advanced as “Display diagnostics inline (experimental).” With this on, you’ll see errors on the line where they occur, in addition to seeing them in the Error List and as dots in the right scrollbar.

Red squiggles help us find code that has an error, but they can sometimes be hard to see. Inline diagnostics make the errors obvious, and you don’t have to hover to see the error text. Turning this on in code with few or no errors makes any new errors immediately obvious.

Check it out and let us know what you think using the Send Feedback icon in the upper right corner of Visual Studio.

Refactoring

Several new refactorings help you write more correct and efficient code.

Simplify LINQ expression

Passing a predicate to a LINQ method like Any is more efficient than first calling Where and then calling Any. You can position the cursor in a LINQ expression and hit “Ctrl .” to determine if the expression can be simplified. For example:
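
The screenshot with the original example isn’t reproduced here; a sketch of the kind of rewrite this refactoring performs (people and Age are illustrative):

' Before: filter first, then ask whether anything is left
Dim hasAdults = people.Where(Function(p) p.Age >= 18).Any()

' After simplification: the predicate goes straight to Any
hasAdults = people.Any(Function(p) p.Age >= 18)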

Change methods to Shared

Shared methods do not rely on data within the instance of the class. You can identify the methods in your classes that could be Shared, and update them with the “Make static” refactoring. The use of the C# keyword static instead of Shared will be updated in an upcoming version of Visual Studio.

Generate Overrides dialog supports search

The Generate Overrides dialog appears when you type “Ctrl .”, you are in a class that has overridable methods, and you select “Generate Overrides.” With a large number of potential overrides, it can be troublesome to pick out the item you want to override. The Generate Overrides dialog now offers a text box so you can filter to the overridable methods you’re interested in:

WinForms startup

The Visual Basic Application Model provides the familiar Visual Basic experience for application startup, particularly for WinForms and WPF applications. This model has been updated in .NET 6.0 to support a new event: ApplyApplicationDefaults. This event lets you set application wide values that must be set before any forms or controls are created. These values are:

Font: The global font for the application. The default font changed for .NET to provide an updated experience. This font is often desirable, but can cause issues where pixel perfect layout was done with the traditional .NET Framework font. (See the section Known issues)

MinimumSplashScreenDisplayTime: The minimum time for the splash screen to be displayed.

HighDpiMode: WinForms applications can respond to the High DPI characteristics of the monitor where the application is being run. This happens via the default value HighDpiMode.SystemAware. If you need the .NET Framework behavior, use HighDpiMode.DpiUnaware.

To set these values, first open the file ApplicationEvents.vb:

Then select “(MyApplication events)” in the middle combobox at the top of the edit window (marked with an arrow below). When this is selected, you’ll be able to select “ApplyApplicationDefaults” from the right combobox:

This will create the method, and the values you set on the ApplyApplicationDefaultsEventArgs parameter will be applied to your application before the first forms are created.
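
A sketch of what the generated handler can look like once you fill it in (the font and timing values here are placeholders, not recommendations):

Private Sub MyApplication_ApplyApplicationDefaults(sender As Object, e As ApplyApplicationDefaultsEventArgs) Handles Me.ApplyApplicationDefaults
    ' These values apply to the whole application before any forms are created
    e.Font = New Font("Segoe UI", 9)
    e.MinimumSplashScreenDisplayTime = 3000
    e.HighDpiMode = HighDpiMode.SystemAware
End Sub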

The common defaults will work for most applications and you will not need to specify anything for those applications.

Support for C# init properties

C# introduced a feature called init properties which indicate that a property is immutable after the constructor completes. In Visual Basic 16.9, we added support for the following scenario:

A C# project/assembly has a class that includes an init property
A Visual Basic project has a class that inherits from this class
The Visual Basic class sets the init property in its constructor

In Visual Basic 16 and below, this causes a compiler error. Starting in Visual Basic 16.9, you may set init properties in inherited Visual Basic constructors. If you need this feature, set the language version to latest or 16.9.

<PropertyGroup>
<OutputType>Exe</OutputType>
<RootNamespace>ConsoleApp1</RootNamespace>
<TargetFramework>net6.0</TargetFramework>
<LangVersion>latest</LangVersion>
</PropertyGroup>
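
A minimal sketch of the three-step scenario above (Widget, WidgetBase and Name are illustrative; the base class lives in a C# assembly and declares Name as an init property):

' C# base class (other assembly):  public class WidgetBase { public string Name { get; init; } }
Public Class Widget
    Inherits WidgetBase

    Public Sub New()
        ' Allowed from Visual Basic 16.9 onwards with LangVersion latest or 16.9
        Name = "Default widget"
    End Sub
End Class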

Roslyn Source Generators

Roslyn Source Generators let you generate code at design time. We added this feature to Visual Basic earlier this year. You can find out more about Roslyn Source Generators in this article.

Hot Reload

Hot Reload builds on Edit and Continue technology. Instead of stopping at a breakpoint and then changing your code, just change your code and hit the Hot Reload button. For example, if you want to change the logic that runs on a button click – change your code and hit Hot Reload. The next time you click the button, your new logic will run – no breakpoints needed.

One of the cool things about Hot Reload is that it works wherever Edit and Continue work, both .NET and .NET Framework.

Hot Reload is a core technology that covers many scenarios; however, there are some edits that are not supported at this time. For example, if you move a WinForms control or change something in Properties, that change won’t currently appear in Hot Reload for C# or Visual Basic. Also, if you add a control or other object that uses WithEvents to a Visual Basic application, you will receive a message that you’ve made an unsupported change.

Hot Reload will cover more scenarios as it evolves in upcoming releases. Try it out with your applications to simplify the way you interact with Edit and Continue.

Upgrade your Visual Basic apps to .NET 6.0

If you are happy using .NET Framework, you can continue to use it with confidence. .NET Framework 4.8 is the latest version of .NET Framework and will continue to be distributed with future releases of Windows.

Many of the great features in this post apply to both .NET Framework and .NET.

If you want the new features we are introducing in the common libraries and supported workloads, you can move to .NET 6.0.

To explore upgrading, check out the Upgrade Assistant which now supports Visual Basic and is out of preview and generally available.

As part of your transition, you can also target .NET Standard 2.0 to share code between .NET Framework and .NET, as well as between C# and Visual Basic.

Known issues

“Remove unused references” is present in the Solution Explorer right-click menu. While it can be used to remove project references, in at least some cases unused package references are not recognized.

The new .editorconfig dialog contains features for all languages, and many of the settings are specific to C#. This is because .editorconfig is often used at the root of a solution that contains multiple languages. We are working to improve how to include the language where settings are language specific.

Regarding WinForms fonts: The Visual Basic project (.vbproj) IntelliSense includes <ApplicationDefaultFont>. This is intended to determine the design time font for the WinForms designer. As mentioned in the WinForms startup section above, the font used when your application runs is the Font property you assign to the ApplyApplicationDefaults event argument in MyApplication (in ApplicationEvents.vb). Setting these both to the same font is intended to set the same font for design time and application runtime. However, the <ApplicationDefaultFont> in .vbproj is currently ignored. The WinForms team is working on this issue.

In closing

As you can see here, the experience of Visual Basic just keeps getting better. We’ve enabled many new features in Visual Studio 2022 and we’re excited for you to download the new release and give it a try on your Visual Basic solutions.

The post What’s New for Visual Basic in Visual Studio 2022 appeared first on .NET Blog.

On using PSR abstractions

Several years ago, when the PHP-FIG (PHP Framework Interop Group) created its first PSRs (PHP Standard Recommendations) they started some big changes in the PHP ecosystem. The standard for class auto-loading was created to go hand-in-hand with the then new package manager Composer. PSRs for coding standards were defined, which I’m sure helped a lot of teams to leave coding standard discussions behind. The old tabs versus spaces debate was forever settled and the jokes about it now feel quite outdated.

Next up were the PSRs that aimed for the big goal: framework interoperability. It started with an easy one: PSR-3 for logging, but it took quite some time before the bigger ones were tackled, e.g. request/response interfaces, HTTP client and server middleware interfaces, service container interfaces, and several others. The idea, if I remember correctly, was that frameworks could provide implementation packages for the proposed interfaces. So you could eventually use the Symfony router, the Zend container, a Laravel security component, and so on.

I remember there were some troubles though. Some PSRs were abandoned, some may not have been developed to their full potential, and some may have been over-developed. I think it’s really hard to find common ground between framework implementations for all these abstractions, and to define abstractions in such a way that users and implementers can both be happy (see for example an interesting discussion by Anthony Ferrara about the HTTP middleware proposal and an older discussion about caching).

One of the concerns I personally had about PSR abstractions is that once you have a good abstraction, you don’t need multiple implementation packages. So why even bother creating a separate abstraction for others to use? Why not just create a single package that has both the implementation and the abstraction? It turns out, that doesn’t work. Why? Because package maintainers sometimes just abandon a package. And if that happens, the abstraction becomes useless too because it is inside that abandoned package. So developers do like to have a separate abstraction package that isn’t even tied to their favorite vendor.

(By the way, I think it’s strange for frameworks to have their own Interfaces or Contracts package for their abstractions. I bet there are 0 cases where someone using Laravel or Symfony keeps using its abstractions, but not its implementations. Anyway… If you have a different experience, or want to share your story about these packages, please submit a comment below!)

Is it safe to depend on PSR abstraction packages?

Back in 2013, Igor Wiedler made a lasting impression with their article about dependency responsibility. By now we all know that by installing a vendor package you can import bugs and security issues into your project. Another common concern is the stability of the package: is it going to be maintained for a long time? Are the maintainers going to change it often?

Yes, these concerns should be addressed, and I think in general they are not considered well enough. But we need to distinguish between different kinds of packages. Packages have a certain level of stability, which is in part related to their abstractness and the number of dependencies they have (if you’re interested in this topic, check out my book “Principles of Package Design”).

The abstractness of a package is based on the number of interfaces versus the number of classes. Since abstract things are supposed to change less often than concrete things, and in fewer ways, an abstract package will be a stable package and it will be more reliable than less abstract, i.e. concrete packages (I think this is why frameworks provide those Interface or Contract packages: as an indication of their intended stability).

Another reason for a package to become stable is when it is used by many people. This is more of a social principle: the maintainers won’t change the package in drastic ways if that makes the users of the package angry. Of course, we have semantic versioning and backward compatibility promises for that, but abstract packages are less likely to change anyway, so it should be safe for many projects to rely on the abstractions and swap out the implementation packages if it’s ever needed.

Applying these considerations to PSR abstraction packages: they are abstract packages, and they are likely going to have many users (this depends on package and framework adoption actually), so they are also not likely to change and become a maintenance liability.

Should a project have its own wrappers for PSR abstractions?

Since decoupling from vendor code has become a common strategy for developers, developers may now wonder: should we also decouple from PSR abstractions? E.g. create our own interfaces, wrapper classes, and so on? This is often inspired by some team rule that says you can never depend directly on vendor code. This may sometimes even be inspired by something I personally advocate: to decouple domain code from infrastructure code. However, keep in mind that:

Not all code in vendor is infrastructure code, and
Decoupling is only required in domain code

As an example, if you want to propagate a domain event to a remote web service, don’t do it in the Domain layer. Do it in the Infrastructure layer, where in terms of dependency directions it’s totally okay to use the PSR-18 HTTP client interface. No need to wrap that thing or create your own interface for it. It is a great abstraction already: it does what you need, nothing more, nothing less. After all, what you need is to send an HTTP request and do something with the returned HTTP response:

namespace Psr\Http\Client;

use Psr\Http\Message\RequestInterface;
use Psr\Http\Message\ResponseInterface;

interface ClientInterface
{
    /**
     * @throws \Psr\Http\Client\ClientExceptionInterface
     */
    public function sendRequest(
        RequestInterface $request
    ): ResponseInterface;
}

The only problem with this interface is maybe: how can you create a RequestInterface instance? PSR-18 relies on PSR-7 for the request and response interfaces, but you need an implementation package to actually create these objects. Every time I need an HTTP client I struggle with this again: what packages to install, and how to get a hold of these objects? Once it’s done, I may feel the need to never have to figure it out again and introduce my own interface for HTTP clients, e.g.

interface HttpClient
{
    public function get(string $uri, array $headers, array $query): string;

    public function post(string $uri, array $headers, string $body): string;
}

Then I should add an implementation of this interface that handles the PSR-18 and PSR-7 abstractions for me. Unfortunately, by creating my own abstraction I lose the benefits of using an established abstraction, being:

You don’t have to design a good abstraction yourself.
You can use the interface and rely on an implementation package to provide a good implementation for it. If you find another package does a better job, it will be a very easy switch.

If you wrap PSR interfaces with your own classes you lose these benefits. You may end up creating an abstraction that just isn’t right, or one that requires a heavy implementation that can’t be easily replaced. In the example above, the interface inspires all kinds of questions for its user:

What is the structure of the $headers array: header name as key, header value as value? All strings?
Same for $query; but does it support array-like query parameters? Shouldn’t the query be part of the URI?
Should $uri contain the server hostname as well?
What if we want to use other request methods than get or post?
What if we want to make a POST request without a body?
What if we want to add query parameters to a POST request?
How can we deal with failure? Do we always get a string? What kind of exceptions do these methods throw?

These questions have already been answered by PSR-18 and PSR-7, but by making our own abstraction we reintroduce the vagueness. Of course, we can remove the vagueness by improving the design, adding type hints, etc., but then we spend valuable time on something that was already done for us, and by more minds, with more knowledge about the HTTP protocol and more experience maintaining HTTP clients than us. So we should really think twice before wrapping these abstractions.

What about PSR abstractions that end up being outdated?

Without meaning to discredit the effort that went into it, nor anyone involved, there will always be standards that end up being outdated, like in my opinion PSR-11: Container interface. This PSR describes an interface that several container implementations support:

<?php
namespace Psr\Container;

/**
* Describes the interface of a container that exposes methods to read its entries.
*
*/
interface ContainerInterface
{
/**
* Finds an entry of the container by its identifier and returns it.
*
* @param string $id Identifier of the entry to look for.
*
* @throws NotFoundExceptionInterface No entry was found for **this** identifier.
* @throws ContainerExceptionInterface Error while retrieving the entry.
*
* @return mixed Entry.
*/
public function get($id);

/**
* Returns true if the container can return an entry for the given identifier.
* Returns false otherwise.
*
* `has($id)` returning true does not mean that `get($id)` will not throw an exception.
* It does however mean that `get($id)` will not throw a `NotFoundExceptionInterface`.
*
* @param string $id Identifier of the entry to look for.
*
* @return bool
*/
public function has($id);
}

PHP developers are relying more and more on types and this interface doesn’t provide much help in that area. Even if you use class or interface names as “entry IDs”, you still have to add type hints after using get(), before the IDE and other static analysers can understand what’s going on:

/** @var Router $router */
$router = $container->get(Router::class);

That’s because the declared return type of get() is mixed.

Another issue with the PSR-11 container is that it supports a pattern called service locator, which is generally considered an anti-pattern, except near the composition root. This means that the primary use for a container like this is when the request path will be matched with a controller that needs to be invoked. So the “request dispatcher” will load the controller from the container, e.g.

$controllerId = $router->match($request);

/** @var Controller $controller */
$controller = $container->get($controllerId);

$response = $controller->handle($request);

Maybe $controllerId is an undefined entry, in which case you get a NotFoundExceptionInterface. If you want to deal with this error in your own way you could catch the exception, or call has() first:

if (!$container->has($controllerId)) {
    // throw a custom exception
}

However, why would a controller not be defined? This will always be a developer mistake. So has() is completely unnecessary. The container should just throw an exception.

Considering the return type problem again, we’d be much better off if a container would return only services of a specific type. For controllers, we’d have a ControllerContainer or ControllerFactory (after all, a container is some kind of generic factory). You could only get Controllers from it:

interface ControllerFactory
{
/**
* @throws CouldNotCreateController
*/
public function createController(string $controllerId): Controller;
}

The only other thing we’d need is a container for the application’s entry point, but again, this doesn’t need to be a generic container either:

final class ApplicationFactory
{
public function createApplication(): Application
{
// …
}
}

// in the front controller (e.g. index.php):

(new ApplicationFactory())->createApplication()->run();

This is only a simplified example to show the problem and provide possible solutions. In practice you’ll need a bit more; see also my article about Hand-written service containers. Still, we should conclude that ContainerInterface is somewhat outdated (mainly because of the lack of type support), and that it comes with design issues or may cause design issues in your own project.

It still doesn’t mean we should wrap ContainerInterface or other PSR interfaces. It means we may skip it entirely, and just not use it in our own code. A PSR abstraction isn’t good because it’s an abstraction, or because it has been designed by smart people. And even if it’s great today, it may not be good forever. So, as always, we should be prepared to modernize and upgrade our code base when needed. At the same time, we should also use PSR abstractions whenever it makes sense, since they will save us a lot of design work and will make our code less sensitive to changes in vendor packages.

Quick Testing Tips: Write Unit Tests Like Scenarios

I’m a big fan of the BDD Books by Gáspár Nagy and Seb Rose, and I’ve read a lot about writing and improving scenarios, like Specification by Example by Gojko Adzic and Writing Great Specifications by Kamil Nicieja. I can recommend reading anything from Liz Keogh as well. Trying to apply their suggestions in my development work, I realized: specifications benefit from good writing. Writing benefits from good thinking. And so does design. Better writing, thinking, designing: this will make us do a better job at programming. Any effort put into these activities has a positive impact on the other areas, even on the code itself.

Unit tests vs automated scenarios

For instance, when you write a test in your favorite test runner (like PHPUnit), you’ll write code. You’ll focus on technology, and on implementation details (methods, classes, argument types, etc.):

$config = Mockery::mock(Config::class);
$config->shouldReceive('get')
    ->with('reroute_sms_to_email')
    ->andReturn('[email protected]');

$fallbackMailer = Mockery::mock(Mailer::class);
$fallbackMailer->shouldReceive('send')
    ->andReturnUsing(function (Mail $mail) {
        self::assertEquals('The message', $mail->plainTextBody());
        self::assertEquals('SMS for 0612345678', $mail->subject());
    });

$smsSender = new SmsSender($config, $fallbackMailer);
$smsSender->send('0612345678', 'The message');

It takes a lot of reading and interpreting before you even understand what’s going on here. When you write a scenario first, you can shift your focus to a higher abstraction level. It’ll be easier to introduce words from the business domain as well:

Given the system has been configured to reroute all SMS messages to the email address [email protected]
When the system sends an SMS
Then the SMS message will be sent as an email to [email protected] instead

When automating the scenario steps it will be natural to copy the words from the scenario into the code, establishing the holy grail of Domain-Driven Design – a Ubiquitous Language; without too much effort. And it’s definitely easier to understand, because you’re describing in simple words what you’re doing or are planning to do.

Most of the projects I’ve seen don’t use scenarios like this. They either write technology-focused scenarios, like this (or the equivalent using Browserkit, WebTestCase, etc.):

Given I am on "/welcome"
When I click "Submit"
Then I should see "Hi"

Or they don’t specify anything, but just test everything using PHPUnit.

Writing scenario-style unit tests

Although it may seem like having any kind of test is already better than having no tests at all, if you’re making an effort to test your code, I think your test deserves to be of a high quality. When aiming high, it’ll be smart to take advantage of the vast knowledge base from the scenario-writing community. As an example, I’ve been trying to import a number of style rules for scenarios into PHPUnit tests. The result is that those tests now become more useful for the (future) reader. They describe what’s going on, instead of just showing which methods will be called, what data will be passed, and what the result of that is. You can use simple tricks like:

Givens should be in the past tense

Whens should be in the present tense

Thens should be in the future tense (often using “should” or “will”)

But what if you don’t want to use Behat or another tool that supports Gherkin (the formalized language for these scenarios)? The cool thing is, you can use “scenario language” in any test, also in unit tests. The trick is to just use comments. This is the unit test above rewritten with this approach:

// Given the system has been configured to reroute all SMS messages to the email address [email protected]
$config = Mockery::mock(Config::class);
$config->shouldReceive('get')
    ->with('reroute_sms_to_email')
    ->andReturn('[email protected]');

// When the system sends an SMS
$fallbackMailer = Mockery::spy(Mailer::class);
$smsSender = new SmsSender($config, $fallbackMailer);
$smsSender->send('0612345678', 'The message');

// Then the SMS message will be sent as an email to [email protected] instead
$fallbackMailer->shouldHaveReceived('send')
    ->with(function (Mail $mail) {
        self::assertEquals('The message', $mail->plainTextBody());
        self::assertEquals('SMS for 0612345678', $mail->subject());

        return true;
    });

Note that to match the Given/When/Then order we use a spy instead of a mock to verify that the right call was made to the fallback mailer. The resulting code is a lot easier to read than the original because you could read only the comments and skip the code, unless you want to zoom in on the details. This mimics the way it works with scenarios that have the scenario steps and their automated step definitions in different files. The difference is that you don’t have to switch between the .feature files that contain the scenario steps, and the Context classes that contain the implementations for each step. It also saves you from installing another testing tool in your project and having to teach everyone how to use it.

The Friends convention for test method names

Another thing we can learn from the scenario-writing community is a practice that can help us write good test method names (we’re always struggling with that, right?). For test method names we can adopt the “Friends” naming convention, completing the sentence: “The one where …”. If you, like me, don’t want to constantly be reminded of Friends, or want more direction, you can use the naming convention: “Here we specifically want to talk about what happens when …”. For example:

// Here we specifically want to talk about what happens when …
public function the_system_has_been_configured_to_reroute_sms_messages(): void
{
}

// Here we specifically want to talk about what happens when …
public function the_user_does_not_have_a_phone_number(): void
{
}

// etc.

Testing at higher abstraction levels

I think this approach is very useful. You can stick to your existing unit-testing practices and at the same time improve your scenario writing skills. What’s still lacking is a description of “why”. I’d want to describe the SMS sending feature as part of the bigger scenario, e.g. why do we send SMS messages in the first place? Where in the “user journey” does this happen?

I still prefer Behat for specifying the system on a level that’s closer to the end user. But PHPUnit itself doesn’t pose any limits on the abstraction level of your test. So it’s certainly a viable option to write tests at all kinds of abstraction levels using just PHPUnit. When you treat your tests as scenarios, and keep focusing on your writing skills for them, you’ll be writing truly valuable tests that document behavior and the reason for that behavior, and that allow the reader of your test to zoom in on the implementation details whenever they feel like it.

This post has been inspired by some development coaching work I’m doing for PinkWeb at the time of writing. Check out their vacancies if you’d like to join the team as well!