Training a ML.NET Model with Azure ML

Model Builder makes it easy to get started with Machine Learning and create your first model. As you gather more data over time, you may want to continuously refine or retrain your model. Using a combination of CLI and Azure tooling, you can train a new ML.NET model and integrate the training into a pipeline. This blog post shows an example of a training pipeline that can be easily rerun using Azure.

We’re going to use Azure Machine Learning Datasets to track data and an Azure ML Pipeline to train a new model. This retraining pipeline can then be triggered by Azure DevOps.
In this post, we will cover:

Creating an Azure Machine Learning Dataset
Training an ML.NET model via the Azure Machine Learning CLI (v2)
Creating a pipeline in Azure DevOps for re-training

Prerequisites

Azure Machine Learning Workspace

Compute Cluster in the workspace

Creating an Azure Machine Learning Dataset

Open the workspace in the Microsoft Azure Machine Learning Studio.

We need to create a file dataset. Navigate to Datasets. Click + Create Dataset.

Choose the datasource. We will upload a copy of this song popularity dataset available from Kaggle. It’s a fairly large dataset that I don’t want to maintain locally.

Give the dataset a unique name and make sure to choose “File” as the Dataset type.

Upload from a local file to the default workspaceblobstore. Take note of the file name.

When the data upload finishes, create the dataset.

Click on the completed dataset to view it. Confirm the preview available in the Explore tab looks correct.

Make note of the dataset name, file name, and if you uploaded multiple versions, the version number. We will use these values in the next step.

Training an ML.NET model via Azure Machine Learning

Now that we have a dataset uploaded to Azure ML, we can create an Azure ML training pipeline and use the Azure CLI (v2) to run it. The pipeline below will create a Docker container with an ML.NET CLI instance that will conduct the training.

Create the Dockerfile and save it in a new folder for this experiment. If you're not familiar with Dockerfiles: they have no file extension. The file should be named "Dockerfile" and contain the following:

FROM mcr.microsoft.com/dotnet/sdk:6.0
RUN dotnet tool install -g microsoft.mlnet-linux-x64
ENV PATH="$PATH:/root/.dotnet/tools"
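
If you'd like to sanity-check the image locally before handing it to Azure ML, a plain Docker build works; the tag name here is illustrative:

docker build -t mlnet-training .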

We will need to figure out our ML.NET CLI command to train our model. If needed, see installation instructions for the ML.NET CLI.

We’re doing regression and will specify a dataset and label column. Text classification and recommendation are also supported for tabular files. Check the command information or ML.NET CLI docs for more details on other training scenarios.

Make sure to include the option --verbosity q, as some of the CLI features can cause problems in the Linux environment.

mlnet regression --dataset <YOUR_DATA_FILE_NAME> --label-col <YOUR_LABEL> --output outputs --log-file-path outputs/logs --verbosity q

Create the AzureTrain.yml file in the same folder as the Dockerfile. This is what will be passed to the Azure CLI. By using input data in the pipeline, Azure ML will download the file dataset to our compute. The training file can then be referenced directly. We just need to specify the path in the command to the ML.NET CLI. Do the following:

Replace <DATASET_NAME> with the unique dataset name and <VERSION> with the version number (likely 1). Both values are visible in the Datasets tab. In this example the value is dataset: azureml:song_popularity:1.
Replace the command with your local ML.NET CLI command. Instead of the local file path, we'll use {inputs.data} to tell the pipeline to use the download path on the Azure compute, followed by the data file name. In this example it is --dataset {inputs.data}/song_data.csv.
Replace the compute value with your compute name. The available compute clusters in the workspace are visible under Compute -> Compute clusters.

For more information see command job YAML schema documentation.

inputs:
  data:
    dataset: azureml:<DATASET_NAME>:<VERSION>
    mode: download
experiment_name: mldotnet-training
code:
  local_path: .
command: mlnet regression --dataset {inputs.data}/<YOUR_DATA_FILE_NAME> --label-col <YOUR_LABEL_COLUMN> --output outputs --log-file-path outputs/logs --verbosity q
compute: azureml:<YOUR-COMPUTE-NAME>
environment:
  build:
    local_path: .
    dockerfile_path: Dockerfile

Run manually

To kick off training from a local machine, or just to test the functionality of the run, we can install and set up the Azure CLI (v2) with the ML extension. In these instructions I'm running ml extension version 2.0.7.

Machine learning subcommands require the –workspace/-w and –resource-group/-g parameters. Configure the defaults for the group and workspace of the dataset.
az configure --defaults group=<YOUR_RESOURCE_GROUP> workspace=<YOUR_WORKSPACE>

Run the retraining pipeline created in the previous step.
az ml job create --file AzureTrain.yml

Check the results of the run online in the Azure Machine Learning Studio under Experiments -> mldotnet-training.

Automate training with Azure DevOps Services pipelines

We can run the Azure ML training via Azure DevOps Pipelines. This allows the use of any trigger, including time based or file changes.
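
For example, a time-based trigger can be defined with a schedules block in the pipeline YAML; the cron expression and branch below are illustrative placeholders:

schedules:
- cron: "0 6 * * 1"
  displayName: Weekly retraining run
  branches:
    include:
    - main
  always: true # run even when there are no code changes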

Below are the steps to get the Azure ML pipeline running. For more details, see step-by-step instructions for setting up Azure DevOps and Azure ML.

Check the Dockerfile and AzureTrain.yml into source control. It is best to create a new subfolder to put these files into. Azure CLI will upload the whole containing folder when running the experiment.
Create a service connection between Azure ML and Azure DevOps. In Azure DevOps:

Go to Project settings. Select Pipelines -> Service connections

Create a new connection of type Azure Resource Manager
Select Service principal (automatic) and Scope Level Machine Learning Workspace. Configure it to the Resource Group of your Machine Learning workspace. Name it aml-ws.

In Azure DevOps create a new pipeline, using the following file as a template. Replace the variables and trigger (if applicable). The ml-ws-connection is the connection created in step 2. Depending on where the file is checked in, add the AzureTrain.yml file path to the ‘Create training job’ step.

variables:
  ml-ws-connection: 'aml-ws' # Workspace Service Connection name
  ml-ws: '<YOUR_VALUE>' # AML Workspace name
  ml-rg: '<YOUR_VALUE>' # AML Resource Group name

trigger:
  <YOUR_TRIGGER>

pool:
  vmImage: ubuntu-latest

steps:

- task: AzureCLI@2
  displayName: 'Set config functionality'
  inputs:
    azureSubscription: $(ml-ws-connection)
    scriptLocation: inlineScript
    scriptType: 'bash'
    inlineScript: 'az config set extension.use_dynamic_install=yes_without_prompt'

- task: AzureCLI@2
  displayName: 'Install AML CLI (azureml-v2-preview)'
  inputs:
    azureSubscription: $(ml-ws-connection)
    scriptLocation: inlineScript
    scriptType: 'bash'
    inlineScript: 'az extension add -n ml'

- task: AzureCLI@2
  displayName: 'Setup default config values'
  inputs:
    azureSubscription: $(ml-ws-connection)
    scriptLocation: inlineScript
    scriptType: 'bash'
    inlineScript: 'az configure --defaults group=$(ml-rg) workspace=$(ml-ws)'

- task: AzureCLI@2
  displayName: 'Create training job'
  inputs:
    azureSubscription: $(ml-ws-connection)
    scriptLocation: inlineScript
    scriptType: 'bash'
    inlineScript: 'az ml job create --file <YOUR_PATH>/AzureTrain.yml'

Running the Azure CLI job either locally or from Azure DevOps will create an output model in Azure. To see the model, go to Microsoft Azure Machine Learning Studio and navigate to your ML workspace. Click on Experiments -> mldotnet-training. Toggle "View only my runs" to see runs started by the Azure Pipelines Service Principal. The completed training run should be visible. The trained model and example code are generated in the Outputs + Logs section, in the outputs folder.

In this post, we’ve created a flexible way to track our data and model via Azure ML. The Azure ML Dataset can be added to and updated while maintaining historical data. This Azure ML retraining pipeline can be run manually or automatically in Azure DevOps. Once your model is trained, you can deploy it using Azure ML custom containers.

Try setting up your own ML.NET retraining pipeline with Azure Machine Learning Datasets and Azure DevOps. Let us know of any issues, feature requests, or general feedback by filing an issue in the ML.NET Tooling (Model Builder & ML.NET CLI) GitHub repo.

The post Training a ML.NET Model with Azure ML appeared first on .NET Blog.

.NET 7 Preview 1 Has Been Released

This past week, .NET 7 Preview 1 was released! By extension, this also means that Entity Framework 7 and ASP.NET Core 7 preview versions shipped at the same time.

So what’s new? In all honesty, not a heck of a lot that will blow your mind! As with most Preview 1 releases, it’s mostly about getting that first version bump out of the way and any major blockers from the previous release sorted. So with that in mind, skimming the release notes I can see:

Progress continues on MAUI (the multi-platform UI framework for .NET), but we are still not at an RC (although an RC should ship with .NET 7)
Entity Framework changes are almost entirely bugs from the previous release
There is a slight push (and I’ve also seen this on Twitter) to merge in concepts from Orleans, or more broadly, having .NET 7 focus on quality-of-life improvements that lend themselves to microservices or independent distributed applications (expect to hear more about this as we get closer to the .NET 7 release)
Further support for nullable reference types in various .NET libraries
Further support for file uploads and streams when building APIs using the Minimal API framework
Support for nullable reference types in MVC Views/Razor Pages
Performance improvements for header parsing in web applications

So nothing too groundbreaking here. Importantly, .NET 7 is labelled as a “Current” release, which means it only receives 18 months of support. This is normal, as Microsoft tends to alternate releases between Long Term Support and Current.

You can download .NET 7 Preview 1 here: https://dotnet.microsoft.com/en-us/download/dotnet/7.0

And you will require Visual Studio 2022 *Preview*!

The post .NET 7 Preview 1 Has Been Released appeared first on .NET Core Tutorials.

.NET 💜 GitHub Actions

Hi friends, I put together two posts where I’m going to teach you the basics of the GitHub Actions platform. In this first post, you’ll learn how GitHub Actions can improve your .NET development experience and team productivity. I’ll show you how to use them to automate common .NET app dev scenarios with workflow composition. In the next post, I’ll show you how to create a custom GitHub Action written in .NET.

An introduction to GitHub Actions

Developers that use GitHub for managing their git repositories have a powerful continuous integration (CI) and continuous delivery (CD) feature with the help of GitHub Actions. A common developer scenario is when developers propose changes to the default branch (typically main) of a GitHub repository. These changes, while often scrutinized by reviewers, can have automated checks to ensure that the code compiles and tests pass.

GitHub Actions allow you to build, test, and deploy your code right from your source code repository on https://github.com. GitHub Actions are consumed by GitHub workflows. A GitHub workflow is a YAML (either *.yml or *.yaml) file within your GitHub repository. These workflow files reside in the .github/workflows/ directory from the root of the repository. A workflow references one or more GitHub Action(s) together as a series of instructions, where each instruction executes a specific task.

The GitHub Action terminology

To avoid using these terms inaccurately, let’s define them:

GitHub Actions: GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline.

workflow: A workflow is a configurable automated process that will run one or more jobs.

event: An event is a specific activity in a repository that triggers a workflow run.

job: A job is a set of steps in a workflow that execute on the same runner.

action: An action is a custom application for the GitHub Actions platform that performs a complex but frequently repeated task.

runner: A runner is a server that runs your workflows when they’re triggered.

For more information, see GitHub Docs: Understanding GitHub Actions

Inside the GitHub workflow file

A workflow file defines a sequence of jobs and their corresponding steps to follow. Each workflow has a name and a set of triggers, or events to act on. You have to specify at least one trigger for your workflow to run unless it’s a reusable workflow. A common .NET GitHub workflow would be to build and test your C# code when changes are either pushed or when there’s a pull request targeting the default branch. Consider the following workflow file:

name: build and test
on:
  push:
  pull_request:
    branches: [ main ]
    paths-ignore:
    - 'README.md'
env:
  DOTNET_VERSION: '6.0.x'
jobs:
  build-and-test:
    name: build-and-test-${{matrix.os}}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macOS-latest]
    steps:
    - uses: actions/checkout@v2
    - name: Setup .NET
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: ${{ env.DOTNET_VERSION }}
    - name: Install dependencies
      run: dotnet restore
    - name: Build
      run: dotnet build --configuration Release --no-restore
    - name: Test
      run: dotnet test --no-restore --verbosity normal

I’m not going to assume that you have a deep understanding of this workflow, and while it’s less than thirty lines — there is still a lot to unpack. I put together a sequence diagram (powered by Mermaid), that shows how a developer might visualize this workflow.

Here’s the same workflow file, but this time it is expanded with inline comments to add context (if you’re already familiar with the workflow syntax, feel free to skip past this):

# The name of the workflow.
# This is the name that’s displayed for status
# badges (commonly embedded in README.md files).
name: build and test

# Trigger this workflow on a push, or pull request to
# the main branch, when either C# or project files changed
on:
  push:
  pull_request:
    branches: [ main ]
    paths-ignore:
    - 'README.md'

# Create an environment variable named DOTNET_VERSION
# and set it as "6.0.x"
env:
  DOTNET_VERSION: '6.0.x' # The .NET SDK version to use

# Defines a single job named "build-and-test"
jobs:
  build-and-test:

    # When the workflow runs, this is the name that is logged
    # This job will run three times, once for each "os" defined
    name: build-and-test-${{matrix.os}}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macOS-latest]

    # Each job run contains these five steps
    steps:

    # 1) Check out the source code so that the workflow can access it.
    - uses: actions/checkout@v2

    # 2) Set up the .NET CLI environment for the workflow to use.
    #    The .NET version is specified by the environment variable.
    - name: Setup .NET
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: ${{ env.DOTNET_VERSION }}

    # 3) Restore the dependencies and tools of a project or solution.
    - name: Install dependencies
      run: dotnet restore

    # 4) Build a project or solution and all of its dependencies.
    - name: Build
      run: dotnet build --configuration Release --no-restore

    # 5) Test a project or solution.
    - name: Test
      run: dotnet test --no-restore --verbosity normal

The preceding workflow file contains many comments to help detail each area of the workflow. You might have noticed that the steps define various usages of GitHub Actions or simple run commands. The relationship between a GitHub Action and a consuming GitHub workflow is that workflows consume actions. A GitHub Action is only as powerful as the consuming workflow. Workflows can define anything from simple tasks to elaborate compositions and everything in between. For more information on creating GitHub workflows for .NET apps, see the following .NET docs resources:

Create a build validation workflow
Create a test validation workflow
Create a deploy workflow
Create a CodeQL security vulnerability scanning CRON job workflow

I hope that you’re asking yourself, “why is this important?” Sure, we can create GitHub Actions, and we can compose workflows that consume them — but why is that important?! That answer is GitHub status checks.

GitHub status checks

One of the primary benefits of using workflows is to define conditional status checks that can deterministically fail a build. A workflow can be configured as a status check for a pull request (PR), and if the workflow fails, for example the source code in the pull request doesn’t compile — the PR can be blocked from being merged. Consider the following screen capture, which shows that two checks have failed, thus blocking the PR from being merged.

As the developer who is responsible for reviewing a PR, you’d immediately see that the pull request has failing status checks. You’d work with the developer who proposed the PR to get all of the status checks to pass. The following is a screen capture showing a “green build”, a build that has all of its status checks as passing.

For more information, see GitHub Docs: GitHub status checks.

GitHub Actions that .NET developers should know

As a .NET developer, you’re likely familiar with the .NET CLI. The .NET CLI is included with the .NET SDK. If you don’t already have the .NET SDK, you can download the .NET 6 SDK.

Using the previous workflow file as a point of reference, there are five steps — each step includes either the run or uses syntax:

Action or command
Description

uses: actions/checkout@v2
This action checks out your repository under $GITHUB_WORKSPACE, so your workflow can access it. For more information, see actions/checkout

uses: actions/setup-dotnet@v1
This action sets up a .NET CLI environment for use in actions. For more information, see actions/setup-dotnet

run: dotnet restore
Restores the dependencies and tools of a project or solution. For more information, see dotnet restore

run: dotnet build
Builds the project or solution. For more information, see dotnet build

run: dotnet test
Runs the tests for the project or solution. For more information, see dotnet test

Some steps rely on GitHub Actions and reference them with the uses syntax, while others run commands. For more information on the differences, see Workflow syntax for GitHub Actions: uses and run.

.NET applications rely on NuGet packages. You can optimize your workflows by caching various dependencies that change infrequently, such as NuGet packages. As an example, you can use the actions/cache action to cache NuGet packages:

steps:
- uses: actions/checkout@v2
- name: Setup dotnet
  uses: actions/setup-dotnet@v1
  with:
    dotnet-version: '6.0.x'
- uses: actions/cache@v2
  with:
    path: ~/.nuget/packages
    # Look to see if there is a cache hit for the corresponding requirements file
    key: ${{ runner.os }}-nuget-${{ hashFiles('**/packages.lock.json') }}
    restore-keys: |
      ${{ runner.os }}-nuget
- name: Install dependencies
  run: dotnet add package Newtonsoft.Json --version 12.0.1

For more information, see GitHub Docs: Building and testing .NET – Caching dependencies.

In addition to using the standard GitHub Actions or invoking .NET CLI commands using the run syntax, you might be interested in learning about some additional GitHub Actions.

Additional GitHub Actions

Several .NET GitHub Actions are hosted on the dotnet GitHub organization:

.NET GitHub Action
Description

dotnet/versionsweeper
This action sweeps .NET repos for out-of-support target versions of .NET. The .NET docs team uses the .NET version sweeper GitHub Action to automate issue creation. The action runs as a cron job (or on a schedule). When it detects that .NET projects target out-of-support versions, it creates issues to report its findings. The output is configurable and helpful for tracking .NET version support concerns.

dotnet/code-analysis
This action runs the code analysis rules that are included in the .NET SDK as part of continuous integration (CI). The action runs both code-quality (CAXXXX) rules and code-style (IDEXXXX) rules.

.NET developer community spotlight

The .NET developer community is building GitHub Actions that might be useful in your organizations. As an example, check out the zyborg/dotnet-tests-report which is a GitHub Action to run .NET tests and generate reports and badges. If you use this GitHub Action, be sure to give their repo a star.

There are many .NET GitHub Actions that can be consumed from workflows, see the GitHub Marketplace: .NET.

A word on .NET workloads

.NET runs anywhere, and you can use it to build anything. There are optional workloads that may need to be installed when building from a GitHub workflow. There are many workloads available; see the output of the dotnet workload search command as an example:

dotnet workload search

Workload ID       Description
---------------------------------------------------------------------------
android           .NET SDK Workload for building Android applications.
android-aot       .NET SDK Workload for building Android applications with AOT support.
ios               .NET SDK Workload for building iOS applications.
maccatalyst       .NET SDK Workload for building macOS applications with MacCatalyst.
macos             .NET SDK Workload for building macOS applications.
maui              .NET MAUI SDK for all platforms
maui-android      .NET MAUI SDK for Android
maui-desktop      .NET MAUI SDK for Desktop
maui-ios          .NET MAUI SDK for iOS
maui-maccatalyst  .NET MAUI SDK for Mac Catalyst
maui-mobile       .NET MAUI SDK for Mobile
maui-windows      .NET MAUI SDK for Windows
tvos              .NET SDK Workload for building tvOS applications.
wasm-tools        .NET WebAssembly build tools

If you’re writing a workflow for a Blazor WebAssembly app or .NET MAUI, for example, you’ll likely run the dotnet workload install command as one of your steps. An individual run step to install the WebAssembly build tools would look like:

run: dotnet workload install wasm-tools
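
In a workflow file, that command sits inside an ordinary step; a sketch (the step name is illustrative):

- name: Install WebAssembly build tools
  run: dotnet workload install wasm-tools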

Summary

In this post, I explained the key differences between GitHub Actions and GitHub workflows. I explained and scrutinized each line in an example workflow file. I then showed you how a developer might visualize the execution of a GitHub workflow as a sequence diagram. I shared a few additional resources you may not have known about. For more information, see .NET Docs: GitHub Actions and .NET.

In the next post, I’ll show how to create GitHub Actions using .NET. I’ll walk you through upgrading an existing .NET GitHub Action that is used to automatically maintain a _CODEMETRICS.md file within the root of the repository. The code metrics analyze the C# source code of the target repository to determine things such as cyclomatic complexity and the maintainability index. In addition to these metrics, we’ll add the ability to generate Mermaid class diagrams, which is now natively supported by GitHub flavored markdown.

The post .NET 💜 GitHub Actions appeared first on .NET Blog.

Building a simple Tweet Bot using Azure Logic Apps

This post is about building a simple tweet bot using Azure Logic Apps. Azure Logic Apps is a low-code/no-code serverless service from Microsoft Azure. In this post I will be building a tweet bot that looks for the #dotNETLovesMe hashtag and retweets matching tweets.

To get started, first you need to create a Logic App. In the portal, search for Azure Logic App and click Create. On this screen you need to choose the type (I chose Consumption based), a name for the Logic App (TweetBot in our case), and finally a region (I chose Southeast Asia).

Once it is created, you can open the Logic App Designer, click Add first step, choose an operation, and search for Twitter. In the triggers list, choose the operation When a new Tweet is posted.

If you have used Logic Apps with Twitter before, there might already be a connection you can use; otherwise you can create a new one. When creating a new connection, you can use the shared application or bring your own. If you choose your own application, you need to create an app in Twitter (you can do this at https://developer.twitter.com) and use its Consumer Key and Consumer Secret; I am using this approach.

You need to configure the redirect URL as well (https://global.consent.azure-apim.net/redirect) and grant the Read and Write permissions.

Once you have configured the consumer key and secret, you need to authenticate the Logic App with your Twitter credentials. If you haven't configured the callback URL, the connection will fail. Once the connection is configured, we can add the hashtag, words, or username from which our bot needs to get tweets. I am adding the #dotNETLovesMe hashtag and keeping the rest of the configuration at its defaults. Next, click the plus button and choose the Add Action option. Again search for Twitter, and in the actions list choose the Retweet action. In its configuration, select the Tweet Id from the previous step.

That’s it. We have now created a Twitter bot using Azure Logic Apps. Here is what the completed Logic App looks like.

Now, if you tweet with the hashtag #dotNETLovesMe, you will see that your bot is working and retweets the tweet. Here is the run history.

Happy Programming 🙂

Determine the country code from country name in C#

If you are trying to determine the country code (“IE”) from a string like “Dublin, Ireland”, then generally the best approach is to use a geolocation API, such as Google Geocode, Here maps, or one of the plethora of others. However, if speed is more important than accuracy, or the volume of data would be too costly to run through a paid API, then here is a simple script in C# to determine the country code from a string.

https://github.com/infiniteloopltd/CountryISOFromString/

The code reads from an embedded resource, which is a CSV of country names. Some of the countries are repeated to allow for variations in spelling, such as “USA” and “United States”. The list is in English only, and feel free to submit a PR if you have more variations to add.

It’s called quite simply as follows;

var country = Country.FromString("Tampere, Pirkanmaa, Finland");
Console.WriteLine(country.code);
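
Under the hood, the approach boils down to substring matching against a name-to-code lookup. Here is a simplified, hypothetical sketch of that idea (the class, method, and sample entries below are illustrative; the real library loads a much larger CSV of name variants from its embedded resource):

using System;
using System.Collections.Generic;
using System.Linq;

static class CountryLookupSketch
{
    // Illustrative sample entries; several spellings can map to one ISO code.
    static readonly Dictionary<string, string> Names = new()
    {
        ["ireland"] = "IE",
        ["finland"] = "FI",
        ["united states"] = "US",
        ["usa"] = "US",
    };

    public static string CodeFromString(string text)
    {
        var lower = text.ToLowerInvariant();
        // Try longer names first so "united states" wins over shorter aliases like "usa".
        foreach (var entry in Names.OrderByDescending(kv => kv.Key.Length))
        {
            if (lower.Contains(entry.Key))
                return entry.Value;
        }
        return null; // no known country name found in the input
    }
}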

Get Geolocation from IP Address Using PHP

Geolocation provides data about the geographic location of a user. Specifically, the IP address is used by the geolocation service to determine the location. To track a visitor's location, the first thing required is an IP address. Based on the IP address, we can collect the visitor's geolocation data. The PHP $_SERVER variable is the most straightforward way to get the client's IP address. Based on the visitor's IP address, you can identify the location, with latitude and longitude, using PHP. In this tutorial, we will show you how to get the location from an IP address using PHP.

A geolocation API is a quick way to find the location of a user by IP address. You can use a free geolocation API in PHP to fetch location data from an IP address. This example script will use the IP Geolocation API to get the location, country, region, city, latitude, and longitude from an IP address using PHP.

Get IP Address of User with PHP.

Use the REMOTE_ADDR of $_SERVER to get the current client’s IP address in PHP.

$userIP = $_SERVER['REMOTE_ADDR'];

Get Location from IP Address using PHP.

Use the IP Geolocation API to get the user’s location from IP using PHP.

Call the API via an HTTP GET request using cURL in PHP.
Convert the API's JSON response to an array using the json_decode() function.
Retrieve the IP data from the API response.

Various information about the geolocation is available in the API response. Some of the most useful location details are:

Country Name
Country Code
Region Code
Region Name
City
Zip Code
Latitude
Longitude
Time Zone

<?php

$clientIP = $_SERVER['REMOTE_ADDR'];
$apiURL = 'https://freegeoip.app/json/'.$clientIP;
$curl = curl_init($apiURL);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($curl);
curl_close($curl);
$ipDetails = json_decode($response, true);
if(!empty($ipDetails)){
    $countryCode = $ipDetails['country_code'];
    $countryName = $ipDetails['country_name'];
    $regionCode = $ipDetails['region_code'];
    $regionName = $ipDetails['region_name'];
    $city = $ipDetails['city'];
    $zipCode = $ipDetails['zip_code'];
    $latitude = $ipDetails['latitude'];
    $longitude = $ipDetails['longitude'];
    $timeZone = $ipDetails['time_zone'];

    echo 'Country Name: '.$countryName.'<br/>';
    echo 'Country Code: '.$countryCode.'<br/>';
    echo 'Region Code: '.$regionCode.'<br/>';
    echo 'Region Name: '.$regionName.'<br/>';
    echo 'City: '.$city.'<br/>';
    echo 'Zipcode: '.$zipCode.'<br/>';
    echo 'Latitude: '.$latitude.'<br/>';
    echo 'Longitude: '.$longitude.'<br/>';
    echo 'Time Zone: '.$timeZone;
}else{
    echo 'IP data is not found!';
}

?>

 


The post Get Geolocation from IP Address Using PHP appeared first on PHPFOREVER.

How to Create a Vue Application [Learn the Ropes!]

Introduction

About Vue
Vue’s Pros
Vue’s Cons

Comparison with other Frameworks
Vue vs. React
Vue vs. Angular.js
Vue vs. Angular (Angular 2)
Summing Up

Using Vue Templates

Writing an App From Scratch
What a Vue app looks like

Creating Vue App With Flatlogic Platform
Wrapping Up
Suggested Articles

How to Create a Vue App: Introduction

Today we’re talking about how to create a Vue app. There’s a myriad of ratings of the most popular programming languages and frameworks. Those are subjective and depend on many factors. How do we decide what’s more important, the total number of active users or the combined length of code on the web? Or should we measure the total number of visitors of the websites built on a given language or framework? Those aren’t even all the possible metrics, and each one is complicated enough. So, when we see yet another list of the most popular languages or frameworks of the year, we don’t rush to take it at face value. However, some entries keep making it to the tops of many lists.

Vue.js, often referred to as simply Vue, is one such example. When a framework is featured on every relevant rating you can find and usually makes it to the top 3, you know the thing is in demand. There are three main ways to create a Vue app. Let’s see what they are.

About Vue

Vue.js is a Model-View-ViewModel (MVVM) type JavaScript framework. It means the View, or the front-end, is largely independent of the business logic and back-end operation. MVVM allows for updating different components of a web app separately and independently. For example, a website redesign doesn’t have to affect the inner gears in any way. As the name Vue might suggest, this framework is aimed at the front-end, or the View, of the software. It’s frequently used both in complex applications and on single-page sites.

Vue’s Pros

Vue.js is popular for many reasons. Keep reading to know the main ones.

1. Reactivity

Two-way data binding means that any changes to the data in the model immediately propagate the same changes in the matching view or graphical user interface. Likewise, any data changes on the client-side will be reflected in the database. In most cases, Apps with two-way binding work faster and smoother. Two-way data binding has drawbacks like selective compatibility, so do your research before deciding if it’s a good thing for your project.
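
As a quick illustration, Vue's v-model directive wires a form input to a piece of component state so the two stay in sync. A minimal sketch (the component and property names are illustrative):

<template>
  <div>
    <input v-model="message" placeholder="Type here" />
    <p>You typed: {{ message }}</p>
  </div>
</template>

<script>
export default {
  name: "TwoWayDemo",
  data() {
    // Editing the input updates message, and changing message updates the input.
    return { message: "" };
  },
};
</script>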

2. Flexibility

Compared to most front-end frameworks, Vue offers plenty of space for maneuvering. It has few restrictions in terms of App architecture. If your project has uncommon features that don’t fit well with conventional App structure, or if you’re keen on experimenting, that’s an extra plus of Vue for you. That’s not always an advantage, though. More on that later.

3. Tools and ecosystem

Vue’s ecosystem has grown exponentially since its release. Vue has its own official CLI, a Webpack loader, a router, and a rich collection of development tools. Those form the backbone of the Vue ecosystem, and individual developers have kept building around it to offer us the infrastructure we can access today.

4. Readability

Vue’s syntax is simple, especially for those fluent in JavaScript and HTML. It doesn’t require a JavaScript developer to un-learn or re-learn anything, only to learn additional skills and knowledge. The components are easy to deploy, and you can make the whole Vue App run with a single line of code.

5. Virtual DOM rendering

DOM, or Document Object Model, is a cross-platform interface for managing the site’s architecture. It represents the website’s elements and dependencies as a tree structure. DOM makes the website’s inner mechanism intuitive and easy to grasp. When a user interacts with an element, it can change its state. That triggers the whole DOM to re-render, which costs time and computing power. Vue uses virtual DOM that replicates the primary one. Virtual DOM lets the framework keep track of all dependencies and figure out the exact elements that need to change. This selective re-rendering is then easier on the server and the end-user, who doesn’t have to wait for too long for the pages to load.

6. Documentation

Vue comes with plentiful documentation. Text instructions, video tutorials, you name it. Furthermore, the Community around Vue is vibrant and full of people ready to help. So, even in the unlikely scenario where the official documentation is insufficient, there’re always people out there ready to help a fellow developer out.

7. Reusability

Vue’s components can work in different functions simultaneously. A length of code can perform several functions and doesn’t have to be replicated. That plays into the next advantage of Vue.

8. Storage efficiency

Heavy weight is the bane of many front-end frameworks. Using ready sets of components means there will be some elements that you could have done without. Vue is one of the pleasant exceptions. It gives us some control over the components we want to include or exclude. That lets us save storage space with each component we don’t use.

Vue’s Cons

1. Flexibility

Yes, this is the second Pro of Vue.js. As we mentioned before, Vue’s flexibility doesn’t necessarily work in everyone’s favor. Vue’s freedom can spoil you for choice. More ways to do the same thing mean more things we can do wrong and more sub-optimal ways to construct an App. We love the freedom of Vue.js. But watch your step. Vue requires a strong understanding of software architecture, and its beginner-friendliness might give the false impression that it doesn’t.

2. Reactivity Nuances

We’ve mentioned two-way data binding and virtual DOM rendering. These features offer their benefits, and thank goodness for them! But like most solutions, they have their costs. In Vue, their combination lets the application re-render only the parts that were affected by an action. However, this process is imperfect. Sometimes it leaves altered components behind, which may require data flattening. This is a well-known issue, though, and the Vue documentation offers ways to counter it.

3. Cross-cultural barriers

Vue’s contributors come from all over the world. This is a wonderful thing but comes with a side effect: lots of content cannot be found in English. This is becoming less of an issue. New resources keep emerging and publishing new tutorials and guides (and we hope we’ll speed that process up a bit:)).

4. Small-scale focus

Vue established itself as a great tool for individual developers, small teams, and those just learning the craft of web development. Maybe that contributed to the culture focused on basic, small-scale projects. This doesn’t mean Vue lacks the means, only established practices. If you like your Apps compound, big, and multi-layered, you might want to look at React or Angular.

Comparison with other frameworks

The Front-end framework market is crowded and Vue has quite a bit of competition. Judging by Vue’s popularity you could guess the framework stacks up decently against competitors. Still, let us take a closer look in case we find some details you’ll find useful when choosing a framework.

Vue vs. React

React and Vue are similarly fast and simple, so those factors will hardly tip the scale. When a React component’s state changes, this causes the whole sub-tree based on that component to re-render. Vue tracks each component’s dependencies when rendering. It slows the rendering down a notch but greatly speeds up further adjustments and optimization. You could manually add tags like PureComponent or shouldComponentUpdate in React but that would create a lot of additional work. 

Vue vs. Angular.js

The old, or the original Angular is still in wide use nowadays. Vue was largely inspired by Angular.JS. That explains the similarities: the parts that Angular.js got right moved to Vue. Vue outperforms Angular.js in almost any way we could think of. It is simpler and requires fewer steps to complete an App. Furthermore, Angular.js’s support is over and the framework is growing obsolete. Unless you have an existing application running on Angular.js, we would recommend against bothering with it.

Vue vs. Angular (Angular 2)

The framework once known as Angular 2 took off in 2016. It’s a comprehensive rewrite of the original Angular.js. Like Vue, it’s written in TypeScript, and this is a way more interesting matchup. These frameworks are similar in more ways than one, so let’s focus on what sets them apart.

Vue is more flexible than Angular, although cases where a feature is impossible because of Angular’s restrictions are rare.

Earlier versions of Angular were notorious for their size and processing power requirements. Later on, Angular implemented “tree-shaking” and Ahead-of-time compilation, and the weight stopped being a problem. Vue is still the lighter framework of the two but that difference is tolerable now.

All the differences we discussed are marginal and will hardly tip the scale. Perhaps the main difference is the learning curve. Angular’s interface is way larger, so you’ll have much more to learn if you plan to make good use of it. Developers who plan to build complex, extensive Apps and are ready to invest time into learning the ropes should consider Angular. Its interface is complicated at early stages but is a huge asset in building massive, compound platforms. The controls that seem excessive at a lower scale help manage the architecture of big projects which will be harder with Vue.

Summing Up the Comparison

Vue is a great choice for beginners and developers who plan to specialize in small-scale products. Creating basic Vue applications usually takes mere hours to learn, and requires little beyond knowing basic HTML, CSS, and JS. It incorporates features that optimize its weight and performance.

Using Vue templates

Assembly lines, lathes, molds, and other inventions have changed many industries. Production uniformity is an important thing that we’ve grown to take for granted. Web templates are to web development what assembly lines are to heavy industry. They help us skip many parts of manual development and only install, connect and adjust the product. At Flatlogic, we offer a great selection of Vue templates. Some are front-end dashboards, while others contain back-end and database and can serve as complete Web Apps. Visit us and browse the templates!

How to Create a Vue App By Hand

This path is the longest one. Even though Vue speeds up front-end development tremendously, the time and effort it takes to program the whole thing is significant. Read on to know what steps we have to take to create a Vue app with our bare hands.

Install Vue.js with NPM or Yarn

First off, we install the Vue framework if we haven’t already. To do that, we go to the Command-Line or a similar tool of your OS of choice. Depending on the package manager you’re using, the commands will be the following:

npm install -g @vue/cli

If you’re using npm, and

yarn global add @vue/cli

If you’re using Yarn.

At this point, you’ve got Vue installed on your PC. You can check its version with

vue --version

If the version is outdated, update it with

npm update -g @vue/cli

or

yarn global upgrade --latest @vue/cli

Voila! Vue is installed and ready!

Create a Vue App

Like many frameworks, Vue can create all the necessary files and folders for you to start developing your application. The process starts with one command that reads:

vue create my-own-app

This will create a project named “my-own-app” after you make a few more adjustments. Those are necessary to define which features and components are necessary and which will be excessive and redundant. It helps keep the application lighter and faster. Let’s see what those settings are!

“Default” or “Manually select features”

The Default setting includes only the Babel transcompiler and ESLint. A transcompiler or source-to-source compiler translates the code to a programming language of the same abstraction level. That sets it apart from traditional compilers that convert the code to a lower abstraction language. ESLint is a type of linter or a tool for static code analysis. “Static” means it checks the code without running it.

Manually select features

The features we’re about to select are:

Babel
Router
Vuex
CSS pre-processors
Linter/Formatter

Choose the features you need. Some of them will need further definition which we’ll deal with next.

Make choices for the selected features

Router: Use history mode?

History mode isn’t exclusive to Vue.js. If you’re an experienced web developer, you probably know about it. With history mode, page navigation can happen without page reload.
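
For reference, here is a minimal sketch of what enabling history mode looks like in a Vue 2 / Vue Router 3 project (the route entries are placeholders); answering yes at this prompt makes the CLI generate something similar for you:

import Vue from "vue";
import VueRouter from "vue-router";

Vue.use(VueRouter);

const router = new VueRouter({
  // "history" drops the default # fragment from URLs; the web server
  // must then fall back to index.html for unknown paths.
  mode: "history",
  routes: [
    // { path: "/", component: Home },
  ],
});

export default router;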

Pre-processor

We choose between SASS/CSS, Less, and Stylus

Linter/Formatter config

ESlint with error prevention only
ESlint + AirBnb config
ESlint + standard config
ESlint + Prettier

Where to place config

Babel, PostCSS, ESLint, and other components need their configuration stored somewhere. At this step we’ll choose destinations for the configuration files. The typical options are dedicated config files and the integral package.json file.

Lint on save?

This may remind you of the “save before closing?” question in many programs. If you choose this option, the application will run a static check every time you save your progress. Once all the options above have been defined, it’s time to hit the launch button. After a brief compilation (it takes a couple of minutes on most devices), the files and folders that form the backbone of your App will be ready.

You can run it on localhost and technically it will be operational. But only technically. If you launch it, you’ll get a generic logo filler, not something of practical value. This is just the structure you can base your App on.

What a Vue App looks like

Let’s see what an almost basic Vue app looks like. We’re calling it “almost basic” because we could create an elementary Vue app by embedding Vue code in an HTML file, but that would limit our options. Instead, we’ll create one based on the file structure we’ve generated through the CLI.

There are plenty of files and folders in the App’s directory but for now, we’re interested in just a few of them.

First off, let’s deal with a file called main.js in the src folder. It will contain the following:

import Vue from "vue";
import App from "./App.vue";

new Vue({
  render: h => h(App)
}).$mount("#app");

The first line indicates the application will work with Vue. The second one indicates that the main action will take place in the ‘App.vue’ file. Lastly, the remaining three lines define the tag “app”. That’ll come in handy later.

BasicApp

Next up, let’s create the BasicApp.vue file in the ‘components’ folder. The file’s contents are the following:

<template>
  <h1>{{ msg }}</h1>
</template>

<script>
export default {
  name: "BasicApp",
  props: {
    msg: String,
  },
};
</script>

In this file, we’re defining the parameters of the message we plan to show. We’ll keep things short and only define the ‘msg’ template, and its data type as String.

App.vue

We defined in the main.js file that App.vue will be the bulk of the application. Let’s head there. This is the code we’ll create in the App.vue:

<template>
  <div class="app">
    <BasicApp msg="This is how we make a Vue app work" />
  </div>
</template>

<script>
import BasicApp from "./components/BasicApp";

export default {
  name: "App",
  components: {
    BasicApp,
  },
};
</script>

The second line contains the tag ‘app’ that we mentioned in the main.js file. In the BasicApp.vue file we defined some properties of the ‘msg’ variable. Now we’re attributing it with the value “This is how we make a Vue app work”. Also, we’re importing and exporting all the necessary objects so all dependencies work properly. This is a basic HelloWorld-type Vue application that will say “This is how we make a Vue app work”.

How to create a Vue App with Flatlogic platform

Web apps are different in many ways but the mechanics they work on are largely the same. The acronym CRUD stands for Create, Read, Update, and Delete. These actions are the most basic ones an App can perform. If we sit down and take a close look at how an App works, we’ll see it’s almost always Creating new entries, Reading existing ones, changing or Updating said existing entries, or Deleting the data that’s already there.

Frameworks and libraries work on an idea similar to the one that all mass production is based on. If different solutions use the same parts, those parts don’t have to be invented from the ground up. We followed through with that idea to create the Flatlogic platform. The Flatlogic platform is a constructor-style tool that lets you create Apps by choosing a combination of technologies. That includes front-end technologies and yes, Vue is an option you can choose.

#1: Name your project

The Platform greets us with “Let’s build something cool!”. Yeah, let’s! The first thing to do is to pick a name for your project. This is not a test, the first step is that simple. Pick a name so you can find your project with ease.

#2: Define tech stack

An App’s stack is the combination of technologies it runs on. Pick them for the front-end, the back-end, and the database. There are no wrong answers, any combination you can pick will work. However, depending on the functionality you want for your App, some variants may offer additional benefits. For example, Vue is often credited with storage (and traffic) efficiency making it a great choice for basic, smaller Apps.

#3: Choose the design

The Flatlogic platform offers several design patterns to choose from. You’ll probably spend a lot of time looking at the admin panel’s interface so choose wisely.

#4: Define the schema

We’ve chosen the technology of the database. Next up, it’s time to construct its schema or inner structure. To make the data in your database meaningful, we need to sort it into different fields, define the types of data, and how the fields relate to each other. This sounds complicated but usually gets easier with a good understanding of how you want your web App to work. If you still think it’s complicated or want to save time, just pick one of our pre-built schemas. We tailored them to popular demands like e-Commerce and Social Media. One of them is bound to fit.

#5: Check and Finish

The heavy intellectual lifting is over. It’s time to review your choices, connect Git repository if you want to, and hit “Finish” if everything’s correct.

#6: Deploying the App

The Platform will compile the App for a couple of minutes and show you what you see in the screenshot. At this point, you can connect your App to Git if you want to, and hit “Deploy”. Voila! Your App’s structure is ready.

Wrapping Up

We’ve discussed three routes to creating your own Vue Apps. It’s a lot of information so let’s recap!

Using Vue Templates

Templates are solutions largely ready for deployment. There are pure front-end Vue templates and complete ones with backend and database. Who this approach fits the most:

Business owners and managers who want to spend less time on website development and more time on other aspects of the business
Those who need a web App ASAP and know the exact metrics a template has to meet
Anyone else who needs a web App but doesn’t have the time or the will to learn web development (no judgment there)

Creating Vue Apps by Hand

This method may be the easiest if you’re already an experienced Vue developer. But for everyone else, for those who only know the basics of web development or still learning the ropes, this path will be the longest. Not the worst, not useless, and not obsolete by any means, just the longest. Choose this path if you:

Want to learn by doing. Once you’ve learned the basics of Vue development, it’s time to put the knowledge to practice. Creating Apps of your own gives weight and context to your knowledge. For every How there’ll be a new Why, and vice versa. It’ll give you a better feeling of where to head next. If you’re an aspiring Vue developer, creating Apps is the natural thing to do.

Need an App with specific features you cannot find in ready solutions. If you want to do something well, do it yourself. Such logic has faults of its own but it’s hard to deny: when you need something and it doesn’t exist, you might have to do it yourself.

Have the time, the energy, and the self-esteem to try to do better. If you believe you can make a better solution than the ones out there, there’s one way to find out if you can. Try it, and then try some more. And don’t be afraid of thinking outside the box. If everyone stuck to existing practices, Henry Ford would’ve had to make faster horses, not cars.

Creating a Vue App with Flatlogic Platform

We have spent lots of time and energy perfecting the Platform, so we may be a little biased. Still, we believe there are solid reasons why it’s worth your time. Here are some of the people who would benefit greatly from Flatlogic Platform:

Those who need an App for their business but don’t have the time or the expertise to develop one from scratch

Web dev beginners who don’t have the experience to create their Apps yet, but want a better understanding of how an end product works. The free version is especially helpful here.
Business owners and executives who need separate web platforms for several products. Flatlogic will help them create applications with equally reliable and functional inner mechanics.
And everyone else who could use a web App of their own.

Thanks for reading! As always, feedback is welcome, and feel free to read more on our blog!

Suggested Articles:

Top 16+ Vue Open Source Projects
10+ Noteworthy Bootstrap Admin Themes Made With the Latest Version of Vue – Flatlogic Blog
Vue vs React: What is Easier? What is Trending? [Detailed Guide with +/-]

The post How to Create a Vue Application [Learn the Ropes!] appeared first on Flatlogic Blog.

Building a gRPC Server in Go

Intro

In this article, we will create a simple web service with gRPC in Go. We won’t use any third-party tools and only create a single endpoint to interact with the service to keep things simple.

Motivation

This is the second part of an article series on gRPC. If you want to jump ahead, please feel free to do so. The links are down below.

Introduction to gRPC
Building a gRPC server with Go (You are here)
Building a gRPC server with .NET
Building a gRPC client with Go
Building a gRPC client with .NET

The plan

So this is what we are trying to achieve.

Generate the .proto IDL stubs.
Write the business logic for our service methods.
Spin up a gRPC server on a given port.

In a nutshell, we will be covering the following items on our initial diagram.


💡 As always, all the code samples and documentation can be found at https://github.com/sahansera/go-grpc

Prerequisites

This guide targets Go and assumes you have the necessary Go tools installed locally. Other than that, we will cover gRPC-specific tooling below. Please note that some of the commands I’m using are macOS specific; please follow this link to set things up if you are on a different OS.

To install Protobuf compiler:

brew install protobuf

To install Go specific gRPC dependencies:

go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest

Project structure

There is no universally agreed-upon project structure per se. We will use Go modules and start by initializing a new project. Our business problem is this: we have a bookstore and we want to expose its inventory via an RPC function.

Since this post talks about the creation of the server, we will call it bookshop/server.

We can create a new folder called server and initialize the module like so:

go mod init bookshop/server

In a later post, we will also work on the client-side of this app and call it bookshop/client

This is what it’s going to look like at the end of this post.


Creating the service definitions with .proto files

In my previous post, we discussed what Protobufs are and how to write one. We will be using the same example, which is shown below.

A common pattern to note here is to keep your .proto files in their own separate folder, so that you can use them to generate the server and client stubs.

bookshop.proto

syntax = "proto3";

option go_package = "bookshop/pb";

message Book {
  string title = 1;
  string author = 2;
  int32 page_count = 3;
  optional string language = 4;
}

message GetBookListRequest {}
message GetBookListResponse { repeated Book books = 1; }

service Inventory {
  rpc GetBookList(GetBookListRequest) returns (GetBookListResponse) {}
}

Note how we have used the option keyword here (https://developers.google.com/protocol-buffers/docs/proto#options). We are essentially telling the Protobuf compiler where we want to put the generated stubs. You can have multiple option statements depending on which languages you are generating stubs for.

💡 You can find a full list of allowed values in google/protobuf/descriptor.proto: https://github.com/protocolbuffers/protobuf/blob/2f91da585e96a7efe43505f714f03c7716a94ecb/src/google/protobuf/descriptor.proto#L44

Other than that, we have 3 messages representing a Book entity, a request, and a response, respectively. Finally, we have a service called Inventory, which has an RPC named GetBookList that can be called by clients.

If you need to understand how this is structured, please refer to my previous post 🙏

Generating the stubs

Now that we have the IDL created, we can generate the Go stubs for our server. It is good practice to put the command behind a make gen target so that we can easily regenerate the stubs with a single command in the future.

protoc --proto_path=proto proto/*.proto --go_out=. --go-grpc_out=.
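
To put that behind the make gen command mentioned above, a minimal Makefile might look like this (remember that Makefile recipes are indented with a tab):

gen:
	protoc --proto_path=proto proto/*.proto --go_out=. --go-grpc_out=.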

Once this is done, you will see the generated files under the server/pb folder.


Awesome! 🎉 Now we can use these stubs in our server to respond to incoming requests.

Creating the gRPC server

Now, we will create the main.go file to create the server.

main.go

package main

import (
	"context"
	"log"
	"net"

	// Import path for the generated stubs; adjust to match your module layout.
	pb "bookshop/server/pb"

	"google.golang.org/grpc"
)

type server struct {
	pb.UnimplementedInventoryServer
}

func (s *server) GetBookList(ctx context.Context, in *pb.GetBookListRequest) (*pb.GetBookListResponse, error) {
	return &pb.GetBookListResponse{
		Books: getSampleBooks(), // helper (defined elsewhere) returning hard-coded sample data
	}, nil
}

func main() {
	listener, err := net.Listen("tcp", ":8080")
	if err != nil {
		panic(err)
	}

	s := grpc.NewServer()
	pb.RegisterInventoryServer(s, &server{})
	if err := s.Serve(listener); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}

We first define a struct to represent our server. The reason why we need to embed pb.UnimplementedInventoryServer is to maintain future compatibility when generating gRPC bindings for Go. You can read more on this in this initial proposal and on README.
As we discussed, GetBookList can be called by the clients, and this is where that request will be handled. We have access to context (such as auth tokens, headers etc.) and the request object we defined.
In the main method, we create a listener on TCP port 8080 with the net.Listen method, initialize a new gRPC server instance, register our Inventory service, and then start responding to incoming requests.

Interacting with the Server

Usually, when interacting with an HTTP/1.1-like server, we can use cURL to make requests and inspect the responses. However, with gRPC, we can’t do that (you can make requests to HTTP/2 services, but the responses won’t be readable). We will be using gRPCurl instead.
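
gRPCurl can be installed in several ways; on macOS, a Homebrew install (matching the brew usage earlier in this post) is one option:

brew install grpcurl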

Once you have it up and running, you can now interact with the server we just built.

grpcurl -plaintext localhost:8080 Inventory/GetBookList

💡 Note: gRPC defaults to TLS for transport. However, to keep things simple, I will be using the `-plaintext` flag with `grpcurl` so that we can see a human-readable response.


How do we figure out the endpoints of the service? There are two ways: provide gRPCurl a path to the proto files, or enable reflection in the server code.

Enabling reflection

This is a pretty cool feature that gives you introspection capabilities into your API. Enabling it is a single call on the server:

reflection.Register(s)
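In context, that is one extra import and one extra line in main.go; a sketch using the reflection package that ships with grpc-go:

import "google.golang.org/grpc/reflection"

// In main, after creating the server and registering services:
s := grpc.NewServer()
pb.RegisterInventoryServer(s, &server{})
reflection.Register(s) // exposes service descriptors to clients like gRPCurl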

With reflection enabled, you can introspect the API directly from gRPCurl.


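For example, you can list every service the server exposes:

grpcurl -plaintext localhost:8080 list

You can also describe individual types; adding -msg-template prints a JSON skeleton of the message: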
grpcurl -plaintext -msg-template localhost:8080 describe .GetBookListResponse

Using proto files

If you don't want to enable reflection in code, you can use the Protobuf files to let gRPCurl know which methods are available. Normally, when a team builds a gRPC service, they will make the proto files available to anyone integrating with them. So, without having to ask them or resort to trial and error, you can use these proto files to discover which endpoints are available for consumption.

grpcurl -import-path proto -proto bookshop.proto list

gRPCurl is great for debugging your RPC calls when you don't have a client built yet.
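And once you do build a client, the Go version is short. Here is a minimal sketch (not from the original post; it assumes the same generated stubs under bookshop/pb and a plaintext connection to match the server above):

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "bookshop/pb"
)

func main() {
	// Dial the server without TLS, matching the -plaintext flag used with gRPCurl.
	conn, err := grpc.Dial("localhost:8080", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("failed to dial: %v", err)
	}
	defer conn.Close()

	// Call the Inventory service and print each book in the response.
	client := pb.NewInventoryClient(conn)
	res, err := client.GetBookList(context.Background(), &pb.GetBookListRequest{})
	if err != nil {
		log.Fatalf("GetBookList failed: %v", err)
	}
	for _, b := range res.Books {
		log.Printf("%s by %s", b.Title, b.Author)
	}
}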

Conclusion

In this article, we looked at how to create a simple gRPC server with Go. In the next one, we will do the same with .NET.

Feel free to let me know any feedback or questions. Thanks for reading ✌️


Getting OpenSwoole and the AWS SDK to Play Nice

I have some content that I store in S3-compatible object storage, and wanted to be able to (a) push to that storage, and (b) serve items from that storage.

Easy-peasy: use the Flysystem AWS S3 adapter, point it at my storage, and be done!

Except for one monkey wrench: I’m using OpenSwoole.

The Problem

What’s the issue, exactly?

By default, the AWS adapter uses the AWS PHP SDK, which in turn uses Guzzle.
Guzzle has a pluggable adapter system for HTTP handlers, but by default uses its CurlMultiHandler when the cURL extension is present and has support for multi-exec.
This is a sane choice, and gives optimal performance in most scenarios.

Internally, when the handler prepares to make some requests, it calls curl_multi_init(), and then memoizes the handle returned by that function.
This allows the handler to run many requests in parallel and wait for them each to complete, giving async capabilities even when not running in an async environment.

When using OpenSwoole, this state becomes an issue, particularly with services, which might be instantiated once and re-used across many requests until the server is shut down.
More specifically, it becomes an issue when coroutine support is enabled in OpenSwoole.

OpenSwoole has provided coroutine support for cURL for some time now.
However, when it comes to cURL’s multi-exec support, it only allows one multi-exec handle at a time.
This was specifically where my problem originated: I’d have multiple requests come in at once, each requiring access to S3, and each resulting in an attempt to initialize a new multi-exec handle.
The end result was a locking issue, which led to exceptions, and thus error responses.

(And boy, was it difficult to debug and get to the root cause of these problems!)

The solution

Thankfully, Guzzle allows you to specify your own handlers, and the vanilla CurlHandler sidesteps multi-exec entirely:

use GuzzleHttp\Client;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Handler\CurlHandler;

$client = new Client([
    'handler' => HandlerStack::create(new CurlHandler()),
]);

The next hurdle is getting the AWS S3 SDK to use this handler.
Fortunately, the S3 client constructor has an http_handler option that allows you to pass an HTTP client handler instance.
I can re-use the existing GuzzleHandler the SDK provides, passing it my client instance:

use Aws\Handler\GuzzleV6\GuzzleHandler;
use Aws\S3\S3Client;

$storage = new S3Client([
    // ... connection options such as endpoint, region, and credentials
    'http_handler' => new GuzzleHandler($client),
]);

While the namespace is GuzzleV6, the GuzzleHandler in that namespace also works for Guzzle v7.

I can then pass that to Flysystem, and I’m ready to go.
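That wiring might look like this (a sketch assuming Flysystem's AWS S3 v3 adapter; the bucket name is a placeholder):

use League\Flysystem\AwsS3V3\AwsS3V3Adapter;
use League\Flysystem\Filesystem;

// Reuse the S3 client configured above, with the vanilla CurlHandler underneath.
$filesystem = new Filesystem(new AwsS3V3Adapter($storage, 'my-bucket'));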

But what about those async capabilities?

But doesn’t switching to the vanilla CurlHandler mean I lose out on async capabilities?

The great part about the OpenSwoole coroutine support is that when the cURL hooks are available, you essentially get the parallelization benefits of multi-exec with the vanilla cURL functionality.
As such, the approach I outline both fixes runtime errors I encountered and increases performance.
I like easy wins like this!
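For reference, coroutine hooks are something you opt into at server startup. A sketch of what that can look like (treat the exact class and constant names as assumptions to verify against the OpenSwoole docs for your version):

// Enable coroutine hooks for blocking APIs (including cURL) before the
// server starts handling requests. Assumed API: OpenSwoole\Runtime.
OpenSwoole\Runtime::enableCoroutine(OpenSwoole\Runtime::HOOK_ALL);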

Bonus round: PSR-7 integration

Unrelated to the OpenSwoole + AWS SDK issue, I had another problem I wanted to solve.
While I love Flysystem, there’s one place where using the AWS SDK for S3 directly is a really nice win: directly serving files.

When using Flysystem, I was using its mimeType() and fileSize() APIs to get file metadata for the response, and then copying the file to an in-memory (i.e. php://temp) PSR-7 StreamInterface.
The repeated calls meant I was querying the API multiple times for the same file, degrading performance.
And buffering to an in-memory stream had the potential for out-of-memory errors.
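In code, the earlier approach looked something like this (a reconstruction, with $filesystem and $path as placeholders):

/** @var \League\Flysystem\FilesystemOperator $filesystem */
// Two metadata lookups, each potentially a separate trip to the S3 API:
$mimeType = $filesystem->mimeType($path);
$fileSize = $filesystem->fileSize($path);

// Buffer the entire object into an in-memory stream for the PSR-7 body:
$stream = fopen('php://temp', 'w+b');
stream_copy_to_stream($filesystem->readStream($path), $stream);
rewind($stream);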

One alternative I tried was copying the file from storage to the local filesystem; this would allow me to use a standard filesystem stream with PSR-7, which is quite performant and doesn’t require a lot of memory.
However, one point of having object storage was so that I could reduce the amount of local filesystem storage I was using.

As a result, for this specific use case, I switched to using the AWS S3 SDK directly and invoking its getObject() method.
The method returns an array/object mishmash that provides object metadata, including the MIME type and content length, and also includes a PSR-7 StreamInterface for the body.
Combined, you can then stream this directly back in a response:

$result = $s3Client->getObject([
    'Bucket' => $bucket,
    'Key'    => $filename,
]);

/** @var \Psr\Http\Message\ResponseFactoryInterface $responseFactory */
return $responseFactory->createResponse(200)
    ->withHeader('Content-Type', $result['ContentType'])
    ->withHeader('Content-Length', (string) $result['ContentLength'])
    ->withBody($result['Body']);

This new approach cut response times by 66% (files of ~400k now return in ~200ms), and reduced memory usage to the standard buffer size used by cURL.
Again, an easy win!

Getting OpenSwoole and the AWS SDK to Play Nice was originally published 23 February 2022 on https://mwop.net by Matthew Weier O’Phinney.

356: Amit Sheen

I got to talk with Amit Sheen this week about his journey into creative coding. Even his early work is incredibly interesting, and his recent work is downright stunning. Now he's entering a phase of sharing what he knows with workshops like Pushing CSS to the Limit. Here's a list of Pens we talk about in the podcast (mostly):

Bubbling – https://codepen.io/amit_sheen/pen/BxQqxz

Turning pages – https://codepen.io/amit_sheen/pen/WNweryv

Bouncing off the walls – https://codepen.io/amit_sheen/pen/abBgWvJ

House of CSS cards – https://codepen.io/amit_sheen/pen/QWGjRKR

FlipBoxes – https://codepen.io/amit_sheen/pen/YzQoMxR

RadioPoles – https://codepen.io/amit_sheen/pen/RwZwGVQ

3D Wobbly Disco – https://codepen.io/amit_sheen/pen/LYLQQpW4D

4D SimplexNoise – https://codepen.io/amit_sheen/pen/XWgVKxO

Typing effect – https://codepen.io/amit_sheen/pen/YzZYoMV

Text morphing – https://codepen.io/amit_sheen/pen/xxqYzvm

csStickman – https://codepen.io/amit_sheen/pen/abLPdoQ

The Lonely Claw – https://codepen.io/amit_sheen/pen/yLzWVYo

Newton’s CSS cradle – https://codepen.io/amit_sheen/pen/XWMXwvJ

Table tenniCSS – https://codepen.io/amit_sheen/pen/PobQjMX

Time Jumps

00:22 Guest introduction

01:17 2018 Pens

03:14 Smashing Magazine workshop

04:21 Bubbling

07:01 Turning pages

11:29 Sponsor: Retool

12:53 csStickman

17:24 The Lonely Claw

23:29 Bouncing off the walls

26:22 Cheat codes for successful

31:34 Text morphing

34:26 Table tenniCSS

Sponsor: Retool

Custom dashboards, admin panels, CRUD apps—build any internal tool faster in Retool. Visually design apps that interface with any database or API. Switch to code nearly anywhere to customize how your apps look and work. With Retool, you ship more apps and move your business forward—all in less time.

Thousands of teams at companies like Amazon, DoorDash, Peloton, and Brex collaborate around custom-built Retool apps to solve internal workflows. To learn more, visit retool.com.
