Azure Stack Hub Foundation Core – updates

This article is contributed. See the original author and article here.

We have updated the https://github.com/Azure-Samples/Azure-Stack-Hub-Foundation-Core repo with a new framework and set of tools intended to help Azure Stack Hub Operators accelerate their ramp-up and provide starting points they can use for various activities.

 

It now includes 3 main types of resources:

  • Learning Materials: slides, links to videos, and workshop materials
  • Tools: starting points for tools used by Azure Stack Hub Operators in day-to-day activities
  • SlideShare: slides from webcasts/presentations

We will update these materials as we progress, and we welcome any feedback on how to make them more relevant for Azure Stack Hub Operators.

 

Learning Materials

The Azure Stack Hub Foundation Core is a set of materials (PowerPoint presentations, workshops, and links to videos) that aims to give Azure Stack Hub Operators the foundation required to ramp up and understand the basics of operating Azure Stack Hub. These build on the Azure Stack Hub Foundation Core – video series – Microsoft Tech Community.

 

Tools

Azure Stack Hub Operators use a wide range of tooling to manage their infrastructure. The Tools folder provides scripts and snippets as starting points for automating operator tasks – from PowerShell scripts to API calls, ARM templates, Azure integration, and other kinds of automation. This repo is intended to capture some of these tools and provide them as examples for others building their own tooling. Most of these scripts are small snippets that can, and should, be included in your own automation. As most of them are generalized scripts, you will need to configure them for your own environment.

 

SlideShare

As we run webcasts and presentations (like the “work from home” sessions held in April 2020), the SlideShare folder will be used to share the slides and other information from these sessions.

 

ASR Failback Script

The ASR-failback-script tool helps automate the failback process when using Azure Site Recovery (ASR) to protect Azure Stack Hub VMs. The process is described in the Azure Site Recovery failback tool document.

Azure Sentinel To-Go (Part 2): Integrating a Basic Windows Lab via ARM Templates



 

Most of the time when we think about the basics of a detection research lab, it is an environment with Windows endpoints, audit policies configured, a log shipper, a server to centralize security event logs and an interface to query, correlate and visualize the data collected.

 

Recently, I started working with Azure Sentinel, and even though there are various sources of data and platforms one could integrate it with, I wanted to learn and document how I could deploy an Azure Sentinel instance with a Windows lab environment in Azure for research purposes.

 

In this post, I show how to integrate an ARM template created in the previous post to deploy an Azure Sentinel solution with other templates to deploy a basic Windows network lab. The goal is to expedite the time it takes to get everything set up and ready-to-go before simulating a few adversary techniques. 

 

This post is part of a four-part series where I show some of the use cases I am documenting through the open source project Azure Sentinel To-Go! The other three parts can be found in the following links:

 

 

Azure Sentinel To-Go

 


 

In a previous post (part 1), I introduced the project Azure Sentinel To-Go to start documenting some of the use cases that one could use an Azure Sentinel solution for in a lab environment, and how it could all be deployed via Azure Resource Manager (ARM) templates to make it practical and modular enough for others in the community to use.

 

If you go to the project’s current deployment options, you can see some of the scenarios you can play with. For this post, I am going to use one of those scenarios and explain how I created it.

 


 

First of all, I highly recommend reading these two blog posts to get familiar with the process of deploying Azure Sentinel via an ARM template:

 

 

A basic template to deploy an Azure Sentinel solution would look similar to the one available in the Blacksmith project:

 

https://github.com/OTRF/Blacksmith/blob/master/templates/azure/Log-Analytics-Workspace-Sentinel/azuredeploy.json

 

Extending The Basic Azure Sentinel Template

 

In order to integrate an Azure Windows lab environment with the basic Azure Sentinel ARM template, we need to enable and configure the following features in our Azure Sentinel workspace:

 

  1. Enable the Azure Sentinel Security Events data connector to stream security events (from the Microsoft-Windows-Security-Auditing event provider) to the Azure Sentinel workspace.
  2. Enable and stream additional Windows event providers (e.g., Microsoft-Windows-Sysmon/Operational or Microsoft-Windows-WMI-Activity/Operational) to increase visibility from a data perspective.

 

Of course, we also need to download and install the Log Analytics agent (also known as the Microsoft Monitoring Agent or MMA) on the machines from which we want to stream security events into Azure Sentinel. We will take care of that after this section.

 

1) Azure Sentinel + Security Events Data Connector

 

If you have an Azure Sentinel instance running, all you would have to do is go to Azure Portal > Azure Sentinel workspaces > Data connectors > Security Events > Open connector page.

 


 

Then, you will have to select the event set you want to stream (All events, Common, Minimal, or None).

 


 

If you want to know more about each event set, you can read about the events behind each one in the following document:

 

https://docs.microsoft.com/en-us/azure/sentinel/connect-windows-security-events

 

Once you select an event set and click on Apply Changes, the status of the data connector will show as Connected, with a message indicating the change was applied successfully.

 


 

If you go back to your data connectors view, you will see the Security Events one with a green bar next to it and again with the Connected status.

 


 

Azure Resource Manager (ARM) Translation

 

We can take all those manual steps and express them as code as shown in the template below:

 

https://github.com/OTRF/Azure-Sentinel2Go/blob/master/azure-sentinel/linkedtemplates/data-connectors/securityEvents.json

 

The main part of the template is the following resource, of type Microsoft.OperationalInsights/workspaces/dataSources and kind SecurityInsightsSecurityEventCollectionConfiguration. For more information about the additional parameters and allowed values, I recommend reading this document.

 

{
  "type": "Microsoft.OperationalInsights/workspaces/dataSources",
  "apiVersion": "2020-03-01-preview",
  "location": "[parameters('location')]",
  "name": "<workspacename>/<datasource-name>",
  "kind": "SecurityInsightsSecurityEventCollectionConfiguration",
  "properties": {
    "tier": "<None,Minimal,Recommended,All>",
    "tierSetMethod": "Custom"
  }
}

 

2) Azure Sentinel + Additional Win Event Providers

 

It is great to collect Windows Security Auditing events in a lab environment, but what about other event providers? What if I want to install Sysmon and stream telemetry from Microsoft-Windows-Sysmon/Operational? Or maybe Microsoft-Windows-WMI-Activity/Operational?

 

There is no option to do this via the Azure Sentinel data connectors view, but you can do it through the Azure Sentinel workspace advanced settings (Azure Portal > Azure Sentinel workspaces > {WorkspaceName} > Advanced Settings):

 


 

We can add providers one by one by typing their names and clicking on the plus sign.

 


 

Azure Resource Manager (ARM) Translation

 

We can take all those manual steps and express them as code as shown in the template below:

 

https://github.com/OTRF/Azure-Sentinel2Go/blob/master/azure-sentinel/linkedtemplates/log-analytics/winDataSources.json

 

The main part of the template is the following resource, of type Microsoft.OperationalInsights/workspaces/dataSources and kind WindowsEvent. For more information about the additional parameters and allowed values, I recommend reading this document.

 

{
  "type": "Microsoft.OperationalInsights/workspaces/dataSources",
  "apiVersion": "2020-03-01-preview",
  "location": "[parameters('location')]",
  "name": "<workspacename>/<datasource-name>",
  "kind": "WindowsEvent",
  "properties": {
    "eventLogName": "",
    "eventTypes": [
      { "eventType": "Error" },
      { "eventType": "Warning" },
      { "eventType": "Information" }
    ]
  }
}

 

In the template above, I use an ARM method called Resource Iteration to create multiple data sources covering all the event providers I want to stream additional telemetry from. By default, these are the event providers I enable:

 

"System",
"Microsoft-Windows-Sysmon/Operational",
"Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational",
"Microsoft-Windows-Bits-Client/Operational",
"Microsoft-Windows-TerminalServices-LocalSessionManager/Operational",
"Directory Service",
"Microsoft-Windows-DNS-Client/Operational",
"Microsoft-Windows-Windows Firewall With Advanced Security/Firewall",
"Windows PowerShell",
"Microsoft-Windows-PowerShell/Operational",
"Microsoft-Windows-WMI-Activity/Operational",
"Microsoft-Windows-TaskScheduler/Operational"
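A provider list like the one above can be wired into an ARM copy loop. Here is a minimal sketch of the Resource Iteration pattern; the parameter name winEventProviders and the data source naming scheme are illustrative, not necessarily those used in the linked template:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "workspaceName": { "type": "string" },
    "winEventProviders": {
      "type": "array",
      "defaultValue": [ "System", "Microsoft-Windows-Sysmon/Operational" ]
    }
  },
  "resources": [
    {
      "type": "Microsoft.OperationalInsights/workspaces/dataSources",
      "apiVersion": "2020-03-01-preview",
      "name": "[concat(parameters('workspaceName'), '/winEvent', copyIndex())]",
      "kind": "WindowsEvent",
      "copy": {
        "name": "winEventProvidersCopy",
        "count": "[length(parameters('winEventProviders'))]"
      },
      "properties": {
        "eventLogName": "[parameters('winEventProviders')[copyIndex()]]",
        "eventTypes": [
          { "eventType": "Error" },
          { "eventType": "Warning" },
          { "eventType": "Information" }
        ]
      }
    }
  ]
}
```

The copy element creates one WindowsEvent data source per array entry, which is why adding a new provider is just one more line in the parameter.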

 

Executing The Extended Azure Sentinel Template

 

We need to merge or link the previous two templates with the initial template. You might be asking yourself:

 

“Why are the two previous templates on their own and not just embedded within one main template?”

 

That’s a great question. I initially did it that way, but when I started adding Linux and other platform integrations, the master template was getting too big and a little too complex to manage. Therefore, I decided to break the template into related templates and deploy them together through a new master template. This approach also helps me create a few template combinations and cover more scenarios without a long list of parameters in a single master template. I use the Linked Templates concept, which you can read more about here.
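As a rough sketch of the linked-templates approach, a master template pulls in a nested template through a Microsoft.Resources/deployments resource; the resource name, URI placeholder, and parameter below are illustrative:

```json
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2020-06-01",
  "name": "deploySecurityEventsConnector",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "<URL-to-linked-template>/securityEvents.json",
      "contentVersion": "1.0.0.0"
    },
    "parameters": {
      "workspaceName": { "value": "[parameters('workspaceName')]" }
    }
  }
}
```

Each linked template stays small and testable on its own, and the master template only wires parameters between them.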

 

These are the steps to execute the template:

 

1) Download current demo template

 

https://github.com/OTRF/Blacksmith/blob/master/templates/azure/Log-Analytics-Workspace-Sentinel/demos/LA-Sentinel-Windows-Settings.json

 

2) Create Resource Group (Azure CLI)

 

You do not have to create a resource group, but to keep a lab environment isolated from other resources, I run the following command:

 

az group create -n AzSentinelDemo -l eastus

 

  • az group create: Create a resource group
  • -n: Name of the new resource group
  • -l: Location/region

 

3) Deploy ARM Template (Azure CLI)

 

az deployment group create -f ./LA-Sentinel-Windows-Settings.json -g AzSentinelDemo

 

  • az deployment group create: Start a deployment
  • -f: The template I put together for this deployment
  • -g: Name of the Azure resource group
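If you prefer not to pass values inline, az deployment group create also accepts a parameters file via --parameters @file.json. A minimal sketch of such a file; the parameter names here are illustrative and must match whatever the template you deploy actually declares:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "workspaceName": { "value": "AzSentinelDemoWorkspace" },
    "dataConnectorTier": { "value": "All" }
  }
}
```

Keeping parameters in a file makes repeated lab deployments reproducible and keeps secrets out of your shell history.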

 

Monitor Deployment

 

After executing the master template for this demo, multiple linked deployments are triggered.

 


 

Check Azure Sentinel Automatic Settings (Data Connector)

 


 

Check Azure Sentinel Automatic Settings (Win Event Providers)

 


 

Everything got deployed as expected, and in less than 30 seconds! Now we are ready to integrate it with a Windows machine (e.g., an Azure Windows 10 VM).

 

Re-Using a Windows 10 ARM Template

 

Building a Windows 10 virtual machine from scratch via ARM templates is a little out of scope for this blog post (I am preparing a separate series for it), but I will highlight the main sections that allowed me to connect it to my Azure Sentinel lab instance.

 

A Win 10 ARM Template 101 Recipe

 


 

I created a basic template to deploy a Win10 VM environment in Azure. It does not install anything on the endpoint, and it uses the same ARM Resource Iteration method mentioned before to create multiple Windows 10 VMs in the same virtual network.

 

https://github.com/OTRF/Blacksmith/blob/master/templates/azure/Win10/demos/Win10-101.json

 

Main Components/Resources:

One part of the virtual machine resource object that is important to get familiar with is the imageReference properties section.

 

A Marketplace image in Azure has the following attributes:

  • Publisher: The organization that created the image. Examples: MicrosoftWindowsDesktop, MicrosoftWindowsServer
  • Offer: The name of a group of related images created by a publisher. Examples: Windows-10, WindowsServer
  • SKU: An instance of an offer, such as a major release of a distribution. Examples: 19h2-pro, 2019-Datacenter
  • Version: The version number of an image SKU.
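Put together, these attributes map directly onto the imageReference block of the virtual machine resource. A minimal sketch; the values shown are examples, not necessarily those used in the template:

```json
"storageProfile": {
  "imageReference": {
    "publisher": "MicrosoftWindowsDesktop",
    "offer": "Windows-10",
    "sku": "19h2-pro",
    "version": "latest"
  }
}
```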

 

How do we get some of those values? Once again, you can use the Azure Command-Line Interface (CLI). For example, you can list all the offer values available for the MicrosoftWindowsDesktop publisher in your subscription with the following command:

 

> az vm image list-offers -p MicrosoftWindowsDesktop -o table
Location    Name
----------  --------------------------------------------
eastus      corevmtestoffer04
eastus      office-365
eastus      Test-offer-legacy-id
eastus      test_sj_win_client
eastus      Windows-10
eastus      windows-10-1607-vhd-client-prod-stage
eastus      windows-10-1803-vhd-client-prod-stage
eastus      windows-10-1809-vhd-client-office-prod-stage
eastus      windows-10-1809-vhd-client-prod-stage
eastus      windows-10-1903-vhd-client-office-prod-stage
eastus      windows-10-1903-vhd-client-prod-stage
eastus      windows-10-1909-vhd-client-office-prod-stage
eastus      windows-10-1909-vhd-client-prod-stage
eastus      windows-10-2004-vhd-client-office-prod-stage
eastus      windows-10-2004-vhd-client-prod-stage
eastus      windows-10-ppe
eastus      windows-7

 

Then, you can use a specific offer and get a list of SKU values:

 

> az vm image list-skus -l eastus -f Windows-10 -p MicrosoftWindowsDesktop -o table
Location    Name
----------  ---------------------------
eastus      19h1-ent
eastus      19h1-ent-gensecond
eastus      19h1-entn
eastus      19h1-entn-gensecond
eastus      19h1-evd
eastus      19h1-pro
eastus      19h1-pro-gensecond
eastus      19h1-pro-zh-cn
eastus      19h1-pro-zh-cn-gensecond
eastus      19h1-pron
eastus      19h1-pron-gensecond

 

Execute the Win 10 ARM Template 101 Recipe (Optional)

 

Once again, you can run the template via the Azure CLI as shown below:

 

az deployment group create -f ./Win10-101.json -g AzSentinelDemo --parameters adminUsername='wardog' adminPassword='<PASSWORD>' allowedIPAddresses=<YOUR-PUBLIC-IP>

 

One very important thing to point out is the allowedIPAddresses parameter, which restricts access to your network environment to your public IP address only. I highly recommend using it. You do not want to expose your VM to the world.

This automates the creation of all the resources needed to have a Windows 10 VM in Azure. Usually one would need to create each resource one at a time; I love automating all of that with an ARM template.
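Under the hood, a parameter like allowedIPAddresses is typically fed into a network security group rule. A rough sketch of such a rule follows; the resource name, rule name, and priority are illustrative, not taken from the actual template:

```json
{
  "type": "Microsoft.Network/networkSecurityGroups",
  "apiVersion": "2020-05-01",
  "name": "nsg-win10-lab",
  "location": "[parameters('location')]",
  "properties": {
    "securityRules": [
      {
        "name": "allow-rdp-from-my-ip",
        "properties": {
          "priority": 100,
          "direction": "Inbound",
          "access": "Allow",
          "protocol": "Tcp",
          "sourceAddressPrefix": "[parameters('allowedIPAddresses')]",
          "sourcePortRange": "*",
          "destinationAddressPrefix": "*",
          "destinationPortRange": "3389"
        }
      }
    ]
  }
}
```

With sourceAddressPrefix bound to your public IP, RDP (TCP 3389) is reachable only from your own network rather than the entire internet.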

 


 

Once the deployment finishes, you can simply RDP to the VM via its public IP address. You will land at the privacy settings setup step, since this is a basic deployment. Later, I will provide a template that takes care of all that (disables those settings and prepares the box automatically).

 


 

You can now delete all the resources via the Azure portal to get ready for another deployment and continue with the next examples.

 

Extending the Basic Windows 10 ARM Template

 

In order to integrate the previous Win10 ARM template with the extended Azure Sentinel ARM template, developed earlier, we need to do the following while deploying our Windows 10 VM:

 

  • Download and install the Log Analytics agent (also known as the Microsoft Monitoring Agent or MMA) on the machines from which we want to stream security events into Azure Sentinel.

 

Win 10 ARM Template + Log Analytics Agent

 

I put together the following template to let a user explicitly enable the monitoring agent and pass workspaceId and workspaceKey values as input, so security events are shipped to a specific Azure Sentinel workspace.

 

https://github.com/OTRF/Blacksmith/blob/master/templates/azure/Win10/demos/Win10-Azure-Sentinel.json

 

The main change in the template is the following resource of type Microsoft.Compute/virtualMachines/extensions. Inside the resource properties, I define the publisher as Microsoft.EnterpriseCloud.Monitoring and the type as MicrosoftMonitoringAgent. Finally, I map the workspace settings to their respective input parameters as shown below:

 

{
  "name": "<VM-NAME/EXTENSION-NAME>",
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2019-12-01",
  "location": "[parameters('location')]",
  "properties": {
    "publisher": "Microsoft.EnterpriseCloud.Monitoring",
    "type": "MicrosoftMonitoringAgent",
    "typeHandlerVersion": "1.0",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "workspaceId": "[parameters('workspaceId')]"
    },
    "protectedSettings": {
      "workspaceKey": "[parameters('workspaceKey')]"
    }
  }
}

 

Putting it All Together!

 


 

To recap, the final template now does the following:

 

  • Deploy an Azure Sentinel solution.
  • Enable the Azure Sentinel Security Events data connector.
  • Enable additional Windows event providers to collect more telemetry.
  • Deploy a Windows 10 virtual machine and its own virtual network.
  • Install the Log Analytics agent (Microsoft Monitoring Agent) on the Windows 10 VM.

 

Executing the ARM Template (Azure CLI)

 

az deployment group create -n Win10Demo -f ./Win10-Azure-Sentinel-Basic.json -g Win10AzSentinel --parameters adminUsername='wardog' adminPassword='<PASSWORD>' allowedIPAddresses=<PUBLIC-IP-ADDRESS>

 


 

Once the deployment finishes (~10 minutes), you can go to your Azure Sentinel dashboard, wait a few minutes, and you will start seeing security events flowing:

 


 

At this point, we have events in both the SecurityEvent and Event tables, which we can explore through the Logs option.

 

SecurityEvent

 

You can run the following query to validate and explore events flowing to the SecurityEvent table:

 

SecurityEvent
| limit 1

 


 

Event

 

The following basic query validates the consumption of more Windows event providers through the Event table:

 

Event
| summarize count() by EventLog, Source

 


 

That’s it! Everything is easy to deploy and takes only a few minutes.

 

Improving the Final Template! What? Why?

 

I wanted to automate the configuration and installation of a few more things:

 

 

This final official template is provided by the Azure Sentinel To-Go project and can be deployed by clicking the “Deploy to Azure” button in the repository.

 

https://github.com/OTRF/Azure-Sentinel2Go

 


 

The Final Results!

 


 

Azure Sentinel

 

An Azure Sentinel workspace with security events from several Windows event providers flowing in from a Win10 VM.

 


 

Windows 10 VM

 

A pre-configured Win10 VM ready-to-go with Sysmon installed and a wallpaper courtesy of the Open Threat Research community.

 


 

[Optional] Ubuntu — Empire Option Set

 

An Ubuntu 18 VM with Empire dockerized and ready to go. This is optional, but it helps me run a few simulations right away.

 

ssh wardog@<UBUNTU-PUBLIC-IP>
sudo docker exec -ti empire ./empire

 


 

Having a lab environment that I can deploy right from GitHub in a few minutes, with one click and a few parameters, is a game changer.

 

What you do next is up to you and depends on your creativity. With the Sysmon function/parser automatically imported to the Azure Sentinel workspace, you can easily explore the Sysmon event provider and use the telemetry for additional context besides Windows Security auditing.

 

Sysmon
| summarize count() by EventID

 


 

FAQ

How much does it cost to host the last example in Azure?

 

Running Azure Sentinel (receiving logs), the Win10 VM (shipping logs), and the Ubuntu VM for 24 hours cost ~$3–$4. I usually deploy the environment, run my tests, play a little with the data, create some queries, and destroy it. Therefore, it usually costs less than a dollar every time I use it.

 

What about Windows Event Filtering? I want more flexibility

 

Great question! That is actually a feature in preview at the moment. You can read more about the Azure Monitor agent and the data collection rules public preview here. A data collection rule lets you specify particular events and event providers. I wrote a basic one for testing, as shown below:

 

"dataSources": {
  "windowsEventLogs": [
    {
      "name": "AuthenticationLog",
      "streams": [
        "Microsoft-WindowsEvent"
      ],
      "scheduledTransferPeriod": "PT1M",
      "xPathQueries": [
        "Security!*[System[(EventID=4624)]]"
      ]
    }
  ]
}
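As a hypothetical extension of the sample above, the xPathQueries array can carry multiple queries against different channels using the standard Event Log XPath syntax; for instance, collecting Sysmon process-creation events (Event ID 1) alongside successful logons might look like this:

```json
"xPathQueries": [
  "Security!*[System[(EventID=4624)]]",
  "Microsoft-Windows-Sysmon/Operational!*[System[(EventID=1)]]"
]
```

Each entry is of the form Channel!XPathFilter, which is what gives data collection rules their per-event filtering flexibility.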

 

That will be covered in another blog post once the feature is more mature and generally available. xPathQueries are powerful!

 

I hope you liked this tutorial. As you can see from the last part of this post, you can now deploy everything through the Azure portal with one click and a few parameters. That is what the Azure Sentinel To-Go project is about: documenting and creating templates for a few lab scenarios and sharing them with the InfoSec community to expedite the deployment of Azure Sentinel and a few resources for research purposes.

 

Next time, I will go over a Linux environment deployment, so stay tuned!

 

References

 

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/cli-ps-findimage

https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-syntax

https://azure.microsoft.com/en-us/pricing/details/virtual-machines/windows/

https://docs.microsoft.com/en-us/windows/win32/secauthz/access-control-lists

https://docs.microsoft.com/en-us/azure/sentinel/connect-windows-security-events

https://docs.microsoft.com/en-us/sysinternals/downloads/sysmon

https://github.com/OTRF/Blacksmith/tree/master/templates/azure/Win10

https://github.com/OTRF/Azure-Sentinel2Go

https://github.com/OTRF/Azure-Sentinel2Go/tree/master/grocery-list/win10

 

Azure Marketplace new offers – Volume 88


We continue to expand the Azure Marketplace ecosystem. For this volume, 97 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Applications


ABILITY.Customer Portal: ABILITY.Customer Portal is a browser-based solution that makes it easy to communicate and collaborate with your customers, partners, and service providers anytime, anywhere. This app is available only in German.


ABILITY.InvoiceManager: ABILITY.InvoiceManager merges invoices from different systems in one comprehensive tool, enabling you to see invoice status, approaching deadlines, and more. This app is available only in German.


Active Cypher Data Guard: Data Guard from Active Cypher helps protect your organization from ransomware by leveraging artificial intelligence to recognize cyberthreats and take action before malicious actors can attack you and gain access to your files.


AcuitySpark – Modern Data Platform: AcuitySpark is a data platform that delivers operational and customer intelligence to multichannel retailers so they can establish an elastic, consumer-centric value chain. It processes data from a variety of retail sources with very low latency to rapidly generate actionable intelligence.


Administrative Communication System (Robox): Administrative Communication System (Robox) is an end-to-end solution for managing imports and exports. It helps organizations save time and boost productivity while improving the information security of all transaction data. This app is available in Arabic and English.


Agreement Manager: Circle T Industries’ Agreement Manager on Azure is a secure, ready-to-deploy solution for managing all aspects of contracts and agreements, from creation and negotiation to execution and renewal.


Big Brain Chatbot: Powered by Microsoft Azure Cognitive Services, Big Brain is an AI-enabled, industry-agnostic chatbot solution that empowers businesses to automate answers to frequently asked questions while delivering an exceptional user experience.


Billing for Azure Stack: Cloud Assert’s Billing for Azure Stack tracks resource consumption across the Microsoft Azure stack and custom services. Compare usage versus quota, configure pricing for Azure resources based on usage meters, and generate invoices automatically based on use.


CEM Applicant Tracking System – ATS: CEM Business Solutions’ Applicant Tracking System (ATS) is a comprehensive, mobile-friendly recruitment tool. Featuring AI-based résumé parsing, ATS lets you publish job postings in internal and external portals to help you attract the best candidates for your business needs.


Chord: Chord is a live chat solution that empowers marketing teams to easily put the right message and content in front of every customer visiting your website. Turn website traffic into increased foot traffic with Chord on Microsoft Azure.


classroom.cloud: Developed with teachers for teachers, NetSupport’s classroom.cloud enables teachers to lead learning from anywhere while maintaining student engagement. Ensure students are on target for success, whether they’re in the classroom or learning remotely.


Cloud Service for Azure Optimisation: Fujitsu Sweden’s Cloud Service for Azure Optimization provides ongoing management and continual optimization services for workloads on Microsoft Azure. Available only in Sweden, this solution facilitates the turnkey adoption of Azure across the enterprise.


cloud.config Virtual Desktop: FIXER’s cloud.config Virtual Desktop is a Windows Virtual Desktop (WVD) managed build service that provides a stable telework environment with cost-effective WVD deployments on Azure. This offering is available only in Japanese.


Cloudneeti Continuous Cloud Assurance – Financial: Cloudneeti is a Gartner-recognized, Center for Internet Security (CIS)-certified security and compliance assurance solution that accelerates cloud adoption by proactively identifying and eliminating cloud risks. Improve cloud security visibility and enforce standards with Cloudneeti.


CodeTwo Email Signatures for Office 365: CodeTwo’s Email Signatures for Office 365 is a cloud email signature solution that lets you create signature designs and add them to internal and external emails sent from any mail client by users in your Microsoft 365 tenant.


Comieru Live: Designed to help enforce social distancing during COVID-19, Comieru Live uses edge AI to remove personally identifiable information from video feeds, then provides occupancy status updates to help prevent densely packed areas. This app is available only in Japanese.


Contour Container Image: Contour is an open-source Kubernetes ingress controller that works by deploying the Envoy proxy as a reverse proxy and load balancer. Bitnami container images follow industry standards and are continuously monitored for vulnerabilities and application updates.


Datamyne: With a comprehensive database of accurate, up-to-date import-export information, Descartes Datamyne delivers actionable intelligence for market research, sales insight, supply chain management, enhanced security, and competitive strategy.


EDRMS as a Service: iCognition’s Electronic Document and Records Management as a Service (EDRMS) secures your content to help you meet compliance and regulatory requirements, improve operational efficiency, and lower your total cost of ownership.


EJBCA Container Image: EJBCA is enterprise-class PKI certificate authority software, built using Java (JEE) technology. Bitnami container images follow industry standards and are continuously monitored for vulnerabilities and application updates.


EJBCA Helm Chart: EJBCA is enterprise-class PKI certificate authority software, built using Java (JEE) technology. Deploying Bitnami applications as Helm charts is the easiest way to get started with its applications on Kubernetes.


Email Absence Manager: BulPros Consulting’s iQ.Suite Clerk is a comprehensive absence management solution for Microsoft Exchange Server and Office 365. It ensures important emails are processed in a timely manner and that the sender is informed of the recipient’s absence with a tailored notification.


Enforce: Valimail Enforce automates the configuration and ongoing management of all critical domain authentication standards (DMARC, DKIM, and SPF). It lets users authorize all third-party sending services allowed to send email on their organization’s behalf, providing exceptional visibility.


Envoy Container Image: Envoy is a distributed, high-performance proxy for cloud-native applications that features a small memory footprint, is compatible with universal application languages, and supports HTTP/2 and gRPC. Bitnami container images are continuously monitored for vulnerabilities and updates.


EPICA SaaS offer: EPICA is a SaaS data prediction platform that collects and analyzes real-time data from your website. Build clusters based on data patterns, generate predictions, and deliver product recommendations to improve e-commerce sales performance and conversions with EPICA. 


Eventador Streaming Platform with SQLStreamBuilder: Eventador Labs’ Streaming Platform with SQLStreamBuilder provides a robust, high-performance platform for processing vast amounts of real-time and streaming data into data APIs using simple SQL with Apache Flink.


HashiCorp Consul Service on Azure: HashiCorp Consul Service (HCS) enables your team to provision HashiCorp-managed Consul clusters via the Microsoft Azure Marketplace. HCS lowers the barrier to entry for securely connecting services and routing traffic across a mix of Kubernetes/Azure Kubernetes Service and virtual machine environments.


HautAI Skin SaaS: HautAI Skin SaaS for skin health and care collects standardized image data, extracts skin and face metrics, builds advanced analytics and recommendation engines, and simulates the effects of cosmetics products and treatments.

Healthy Habits: Healthper’s Healthy Habits program motivates members to build positive well-being habits by using digital tools or by engaging with an experienced coach. The program includes a self-service administrative portal, unlimited coaching, reports, and more.

Healthy Stride: Healthy Stride is a miles-tracking challenge designed to improve participants’ physical well-being through friendly competition. Healthper’s program includes digital posters to promote the challenge, text/push notifications to engage participants, wholesome cooking tips, and more.

Healthy Trim: Healthper’s Healthy Trim is a four-week weight loss program that helps participants change their lifestyle and lose weight. The program defines portion sizes and ingredients for home-cooked meals and includes nutritional supplements with instructions to support the weight loss journey.

Human Resource Management System HRMS: Strom Human Resource Management System (HRMS) tracks all employee lifecycle activities, simplifies human resources work, and delivers deep insights. Features include a secure employee portal, document management, incident management, and in-depth reporting.

Industrial Predictive Operations Center: The Industrial Predictive Operations Center is a self-provisioning SaaS solution that uses Azure IoT, advanced analytics, and AI building blocks to help domain experts solve operations and maintenance issues across thousands of assets and hundreds of production lines and plants.

Informatica Data Quality 10.4.1: Informatica Data Quality ensures your initiatives and processes are fueled with relevant, timely, and trustworthy data. It combines collaborative capabilities for business users such as data stewards and analysts with the depth and enterprise scalability that technical roles need.

Informatica PowerCenter 10.4.1: Informatica PowerCenter is a hybrid data integration tool that transforms fragmented, raw data from any source, at any latency, into actionable information. Use PowerCenter to move datacenters from on-premises to Microsoft Azure, perform cloud data integration tasks, and more.

ISO Quality Management System for Microsoft 365: Built for Microsoft 365, Konsolute’s Quality Management System (QMS) helps you define policy procedures and standard operating procedures such as quality records, regulatory requirements, ISO requirements, and industry specifications.

kc_rhel: From data ingestion to data analysis, Kyligence Cloud simplifies the complexity of big data analytics in the cloud, enabling cluster deployment, data access, and data analysis. This app is available in Chinese and English.

Konsolute – Auto Classifier: Konsolute’s Auto Classifier (AC) uses machine learning to extract meaningful keywords (metadata) that accurately describe the content. AC streamlines content publishing and reduces the risk of incorrectly classifying or applying metadata.

Konsolute Kolumbus – AI Driven Data Discovery Tool: Konsolute’s Kolumbus uses language processors to understand the semantic context of your data, and its hyper-intuitive cognitive capabilities intelligently extract keywords that accurately describe the content, enabling you to understand the data being created in your organization.

Konsolute Onboard – New Hire Onboarding Platform: Onboard is a new-hire onboarding platform built for organizations consuming Microsoft 365. Onboard’s intuitive, highly configurable interface allows new hires to go through introductory videos, policies and procedures, links, and recommended communities.

LTI PrivateEye: LTI’s PrivateEye is an automated data discovery tool used in the data privacy assessment process to identify sensitive fields across an enterprise. PrivateEye can scan structured, semi-structured, and unstructured data sources and classify their impact on business processes.

MMPredict: MMPredict is a SaaS application that uses artificial intelligence to predict device failure. Reduce equipment downtime, improve equipment operating rates, and reduce the load on maintenance workers with MMPredict on Azure. This app is available in Japanese.

MoonDesk: Hosted on Microsoft Azure and co-developed with Microsoft and Adobe, MoonDesk delivers an integrated solution for the creation, management, and review of graphical designs — all the way to print and delivery. Simplify the design and review processes and prevent printing errors with MoonDesk.

nestjs: Linnovate Technologies’ Nest (NestJS) is a framework for building efficient, scalable Node.js server-side applications. The open-source platform includes a complete development kit.

NetApp Global File Cache Core: Featuring end-to-end security and data encryption, NetApp’s intelligent Global File Cache consolidates your unstructured data into the cloud to enable real-time global file sharing for your distributed workforce. 

No-code robotics: Wandelbots’ no-code robotics solution enables users to quickly train, reprogram, and provide data for any robot. Available in German and English, the application lets users control individual robot joints, define safety areas, and teach complex motions.

NoSpamProxy: NoSpamProxy is a gateway solution for comprehensive protection against spam and malware and for encrypting emails. Reduce administrative overhead and boost email security with NoSpamProxy’s automated functions. This app is available in German and English.

OneView Cloud Expense Management: OneView Cloud Expense Management provides integrated baseline reporting, a detailed dashboard, and full tagging and cost recovery functionality. Leverage cloud benefits without financial risk, gain insights into cloud spending, and identify optimizations and cost efficiencies.

Return To Work: Return to Work is a comprehensive program to keep employees and visitors healthy and safe during COVID-19. It features an AI-powered IoT device with facial recognition for fast, accurate temperature screening. Centralize your COVID-19 response management with Healthper.

Route Planner: Descartes Route Planner helps improve operational efficiency by generating vehicle route-planning schedules in real time, enabling organizations to reallocate mobile fleet resources to optimize operating efficiencies and maintain overall customer service objectives.

Sarafan Tech objects’ recognition in streaming: Sarafan Technology adds AI-based features to enrich video content and educate, entertain, and inspire viewers. It recognizes objects and actions in video, matches them with information from authorized third-party sources, and returns timestamped metadata and matched results.

Sarafan: safety compliance monitoring: Sarafan Technology’s monitoring solution on Microsoft Azure uses machine vision technology to automatically monitor compliance with occupational safety regulations. The system recognizes employee identity, personal protective equipment, machinery, restricted areas, and more.

Security Removable Media Manager (secRMM): Security Removable Media Manager (secRMM) is a Windows solution that provides data loss prevention for mobile devices and removable storage devices. Its detailed auditing returns user, device, file, and application information for deep insights into your IT environment.

SideKick 365 CRM – SharePoint & Power Apps Sales: Skylite Systems’ SideKick 365 CRM is a customer relationship management app for SharePoint, Microsoft Power Apps, Microsoft Power BI, and Office 365 users. Manage accounts, opportunities, leads, contacts, tasks, and more with SideKick 365 CRM.

Simplifai Documentbot: Simplifai’s Documentbot classifies unstructured, free-text documents and extracts relevant information. The solution can be configured to call any external API to perform actions and to trigger actions in back-end systems according to business rules.

sonarcube: Empower your developers to write cleaner and safer code with SonarQube on Microsoft Azure. SonarQube’s continuous code inspection includes thousands of automated static code analysis rules to help protect your application and guide your team. 

SpendAi: SpendAi categorizes more than 95 percent of spending data while giving users control of categorization through a simple drag-and-drop interface. Transform spending data into actionable insights to manage risk.

SSP SYSTEM: Greeneye Technology’s selective spraying solution for weed control leverages artificial intelligence to turn every sprayer into a smart machine with seamless integration. The system detects weeds and sprays them precisely, reducing up to 90 percent of herbicide usage.

SymbioSys Auto Underwriting-as-a-Service: Improve sales conversions with SymbioSys Auto Underwriting-as-a-Service, which allows your field sales force to underwrite a case at the point of sale. The service is extensively used by life and health insurers across different countries.

SymbioSys Inforce Illustration-as-a-Service: With SymbioSys Inforce Illustration-as-a-Service, insurers can generate illustrations for life, annuity, and pension products. The solution has helped businesses configure more than 500 plans, more than 1,000 riders, and more than 5 million illustrations.

SymbioSys Persistency Management-as-a-Service: SymbioSys Persistency-as-a-Service enables insurers to compute persistency based on the business objective. This allows them to incentivize distributors who contribute to retaining regular policy renewals, thus maintaining higher persistency.

SymbioSys Product Configurator-as-a-Service: SymbioSys Product Configurator-as-a-Service enhances sales velocity for insurers through its numerous features, which include real-time API interfaces and a modeler to configure complex actuarial calculations and illustrations.

SymbioSys Sales Illustration-as-a-Service: Designed for global insurance distribution needs, SymbioSys Sales Illustration-as-a-Service enables sales forces to generate complex and interactive sales illustrations for their prospects, even in an offline mode using mobile devices.

Teradata Data Stream Controller: Teradata Data Stream Controller provides administrative functions and metadata storage for the Teradata Data Stream Utility. It’s a key component of the backup and restore functions of Teradata systems.

TickStream.KeyID: TickStream.KeyID uses keystroke analytics to confirm identities, protecting users against credential theft or misuse. Fortify passwords with TickStream.KeyID’s frictionless multifactor authentication.

TickStream.PI – Password Integrity: TickStream.PI is a free analytics reporting tool that identifies password misuse and hacking incidents. Data is presented both as a report and as a load-ready CEF file for many popular security and forensic tools.

TickStream.WFH-inspiring confidence in remote work: Remote work creates new challenges, with employees accessing resources from beyond the confines of a company’s internal network. TickStream.WFH enables companies to meet security, privacy, and compliance requirements when employees work remotely.

VConnect RP for Azure Stack: Cloud Assert’s VConnect is a hybrid cloud solution for provisioning and managing virtual machines across Microsoft Azure and other cloud providers. Extend Azure Stack portals, manage backup jobs, and sync subscription resources with source systems.

Visitor Registration Management (VRM): Visitor Registration Management tracks the entry and exit of visitors and their vehicles. Employees from different departments can provide a preset list of visitors, such as vendors, consultants, or interns, for efficient tracking and parking allocations.

VPN Biometrics: VPN Biometrics adds a layer of security to VPN environments through the use of facial biometrics. Employees can gain access without the hassle of passwords, and corporate network administrators can easily confirm identities. This solution is available in English and Portuguese.

Xtractor – OCR Platform: Xtractor extracts and classifies documents, enabling businesses to automate manual processes. It also handles face recognition and signature detection. Use Xtractor for customer onboarding, invoices, sales, bills, medical reports, investigations, and more.

Consulting services

.NET Modernization in a Day: 1-Day Workshop: This workshop from FyrSoft will get you started modernizing .NET apps using Microsoft Azure. Participants will gain a better understanding of .NET apps and how Microsoft technologies can increase .NET resiliency and enable easy scaling.

AI-100 Azure AI Solutions: 3-Days Workshop: Qualitia Energy’s workshop, which will cover bots and Microsoft Azure Cognitive Services, is intended for cloud solution architects, Azure AI designers, and AI developers. Participants must have an understanding of C#, Azure fundamentals, and Azure storage technologies.

Analytics in A Day 1-Day Workshop: In this workshop, FyrSoft will cover cloud analytics capabilities and teach you how to create a pipeline that goes from data ingestion to insights with Microsoft Power BI.

Application Migration Audit: 5-Days Assessment: Are you ready to start your application transformation? Accedia’s seasoned consultants will help you efficiently migrate so you can start reaping the benefits of Microsoft Azure.

Art of the Possible on Azure: 2-Hr Briefing: Dimension Data’s briefing will address four key components for maximizing your Microsoft Azure investment: digital business, technology innovation, a Lean-Agile framework, and rapid prototyping.

AZ-104 Microsoft Azure Admin.: 4-Day Workshop: In Qualitia Energy’s workshop, Microsoft Azure administrators will provision, size, monitor, and adjust resources. Participants must have an understanding of on-premises virtualization technologies, network configuration, and Active Directory concepts.

AZ-120 Plan & Admin Azure for SAP: 4-Day Workshop: This workshop from Qualitia Energy will teach IT professionals who have experience with SAP solutions how to utilize Microsoft Azure resources, including virtual machines, virtual networks, storage, and Azure Active Directory.

AZ-900 Microsoft Azure Fundamentals: 2-Day Workshop: Qualitia Energy’s workshop will help IT personnel new to Microsoft Azure prepare for the AZ-900: Microsoft Azure Fundamentals exam. Participants will primarily be using the Azure portal to create services. Scripting skills are not required.

Azure Cost Optimization: 2-Weeks Implementation: Are you looking to optimize your Microsoft Azure costs without compromising efficiency and performance? Accedia’s analysis and action plan will help you save money, configure policies, and implement guidelines for future improvements.

Azure Disaster Recovery – 2 Hour Briefing: Incremental Group’s briefing will help you determine your disaster recovery options and learn about using Microsoft Azure Site Recovery to protect your servers from a datacenter outage.

Azure Hybrid Services: 2 day workshop: This workshop from Move AS will involve setting up a site-to-site VPN to connect your on-premises environment to Microsoft Azure, updating Windows servers, and protecting your infrastructure using Azure Backup and Azure Site Recovery.

Azure IaaS Migration: 3-Weeks Proof of Concept: Realize the benefits of Microsoft Azure through a proof of concept implemented by Accedia consultants and engineers. Accedia’s team will build and configure a test environment in Azure, then hand over the steps required to make it production-ready.

Azure Migration: 1-Week Assessment: Eastbay Cloud Services will conduct a workshop and infrastructure assessment to set your organization up for a successful migration to Microsoft Azure. Eastbay Cloud Services will also address security policies and disaster recovery plans.

Business Automation: 2-Wk Proof of Concept: TechFabric’s proof of concept will focus on automating a business process with Microsoft Azure. This offer includes an assessment of your technology stack and up to 50 hours of development time.

Business Insights on Azure: 1-wk Assessment: This assessment from Dimension Data will help you understand the information in your Microsoft Azure or hybrid environment so that you can act upon it, allowing for transformation and innovation.

CAF – Adopt – Migrate 1-Day Workshop: FyrSoft will deliver the Microsoft Cloud Adoption Framework for Azure, review migration prerequisites, identify a pilot project, and use Azure Migrate and deployment tools to migrate your organization’s first workload to the cloud.

CAF – Manage Governance 1 day Workshop: In this workshop, FyrSoft will assess your current environment against governance benchmarks, identify risks and compliance requirements, implement governance tools, and add governance controls to address risks.

Cloud Adoption: 21-Day Azure Implementation: Asseco Data Systems S.A.’s knowledge and expertise, combined with cloud tools, can support your project and migration needs. Asseco Data Systems S.A. will determine your organizational requirements, then implement a solution using Microsoft Azure.

Cloud Migration: 10 Week Implementation: Working with your internal IT teams, Kainos will design and build a secure cloud landing zone aligned to your immediate and longer-term needs, complete a pilot migration for one of your applications, and prepare a full migration plan.

Data Modernization in a Day – 1 Day Workshop: This workshop from FyrSoft will dive into migration strategies and provide hands-on experiences with data migration tools. The workshop is intended for customers who have moved infrastructure to Microsoft Azure but not apps or data.

Data Protection and DR with Azure – 4h Workshop: PROGEL SpA’s workshop will examine disaster recovery plans and how Microsoft Azure Site Recovery is able to protect, orchestrate, and test plans even in the presence of non-homogeneous environments. This workshop is available in Italian.

Fast Track Azure AI 4-week Proof of Concept: Using a mix of Microsoft Azure Machine Learning, Microsoft Power BI, and Microsoft Power Apps, this proof of concept from Algospark will deliver a solid foundation for AI solutions.

Product Design Sprint – 5 day workshop: IJYI’s design sprint will begin by bringing together insights from client stakeholders and the IJYI team. After establishing goals, the IJYI developer team will create a working prototype and put it through user testing.

SAP on Azure: 2-day Workshop: In this workshop, Dimension Data, an NTT company, will provide you with an overview of your SAP setup, delivering a business case for moving to the cloud or S/4 HANA. You’ll receive a clear roadmap for your SAP transition.

SAP on Azure Briefing: 2-hours: This briefing from Dimension Data will explain the process for migrating your organization’s core SAP systems to Microsoft Azure. Dimension Data will cover potential pitfalls and how to reduce downtime during migration.

XContent Azure Managed Services | XCOPS: Let XContent manage your Microsoft Azure environment using best practices and its around-the-clock support team of security professionals and Azure-certified engineers. XContent can implement system patches, provide escalation support, and lighten the load on your internal IT staff.

Real-time Inference on NVIDIA GPUs in Azure Machine Learning (Preview)


AI today is about scale: models with billions of parameters used by millions of people. Azure Machine Learning is built to support your delivery of AI-powered experiences at scale. With our notebook-based authoring experience, our low-code and no-code training platform, our responsible AI integrations, and our industry-leading ML Ops capabilities, we give you the ability to develop large machine learning models easily, responsibly, and reliably.

 

One key component of employing AI in your business is model serving. Once you have trained a model and assessed it per responsible machine learning principles, you need to quickly process requests for predictions, for many users at a time. While serving models on general-purpose CPUs can work well for less complex models serving fewer users, those of you with a significant reliance on real-time AI predictions have been asking us how you can leverage GPUs to scale more effectively.

 

That is why today, we are partnering with NVIDIA to announce the availability of the Triton Inference Server in Azure Machine Learning to deliver cost-effective, turnkey GPU inferencing.

 

There are three components to serving an AI model at scale: server, runtime, and hardware. This new Triton server, together with ONNX Runtime and NVIDIA GPUs on Azure, complements Azure Machine Learning’s support for developing AI models at scale by giving you the ability to serve AI models to many users cheaply and with low latency. Below, we go into detail about each of the three components to serving AI models at scale.

 

Server

 

Triton Inference Server in Azure Machine Learning can, through server-side mini batching, achieve significantly higher throughput than can a general-purpose Python server like Flask.
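To see why server-side mini batching helps, consider that each model invocation carries fixed overhead (framework dispatch, kernel launches), which batching amortizes across many requests. The sketch below is a deliberately simplified illustration of the idea, not Triton’s actual scheduler; `serve_batched` and its parameters are invented for this example:

```python
from typing import Callable, List, Sequence, Tuple

def serve_batched(requests: Sequence[float],
                  model: Callable[[List[float]], List[float]],
                  max_batch: int = 4) -> Tuple[List[float], int]:
    """Group pending requests into mini batches and invoke the model once
    per batch, instead of once per request as a naive Flask handler would."""
    outputs: List[float] = []
    calls = 0
    for start in range(0, len(requests), max_batch):
        batch = list(requests[start:start + max_batch])
        outputs.extend(model(batch))  # one invocation amortized over the whole batch
        calls += 1
    return outputs, calls
```

With `max_batch=4`, ten queued requests cost three model invocations instead of ten, which is where the throughput gain over per-request serving comes from.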

 

[Chart: throughput of Triton Inference Server compared with a general-purpose Flask server]

 

Triton can support models in ONNX, PyTorch, TensorFlow, and Caffe2, giving your data scientists the freedom to explore any framework of interest to them during training time.

 

Runtime

 

For even better performance, serve your models in ONNX Runtime, a high-performance runtime for both training (in preview) and inferencing.

 

[Chart: ONNX Runtime inference performance]

Numbers Courtesy of NVIDIA

ONNX Runtime is used by default when serving ONNX models in Triton, and you can convert PyTorch, TensorFlow, and Scikit-learn models to ONNX.

 

Hardware

 

NVIDIA Tesla T4 GPUs in Azure provide a hardware-accelerated foundation for a wide variety of models and inferencing performance demands. The NC T4 v3 series is a new, lightweight GPU-accelerated VM that offers a cost-effective option for real-time or small-batch inferencing workloads that don’t need the throughput of larger GPU sizes, such as the V100-powered ND v2- and NC v3-series VMs, and that benefit from a wider regional deployment footprint.

 


 

The new NCasT4_v3 VMs are currently available for preview in the West US 2 region, with 1 to 4 NVIDIA Tesla T4 GPUs per VM, and will soon expand in availability with over a dozen planned regions across North America, Europe and Asia.

To learn more about NCasT4_v3-series virtual machines, visit the NCasT4_v3-series documentation.

 

Easy to Use

 

Using Triton Inference Server with ONNX Runtime in Azure Machine Learning is simple. Assuming you have a Triton Model Repository with a parent directory triton and an Azure Machine Learning deploymentconfig.json, run the commands below to register your model and deploy a webservice.
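Triton discovers models from a versioned directory layout inside that repository. The model folder name and files below are illustrative assumptions, not part of the sample; a minimal repository might look like:

```
triton/
└── triton_model/          # one directory per model (name is illustrative)
    ├── config.pbtxt       # optional for ONNX models; declares inputs/outputs
    └── 1/                 # numeric version directory
        └── model.onnx
```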

 

 

 

az ml model register -n triton_model -p triton --model-framework=Multi
az ml model deploy -n triton-webservice -m triton_model:1 --dc deploymentconfig.json --compute-target aks-gpu
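Once deployed, the webservice speaks Triton’s HTTP/REST inference protocol (the KFServing v2 API). The sketch below assembles a request body; the tensor name, shape, and the placeholder scoring URL in the comment are assumptions that must match your model’s configuration:

```python
import json

def build_inference_request(input_name: str, data: list, shape: list,
                            datatype: str = "FP32") -> str:
    """Assemble a Triton v2-protocol inference request body.
    Tensor name, shape, and datatype must match the served model."""
    return json.dumps({
        "inputs": [{
            "name": input_name,
            "shape": shape,
            "datatype": datatype,
            "data": data,
        }]
    })

# POST this body to the service's scoring URI, e.g. (placeholder):
#   http://<endpoint>/v2/models/triton_model/infer
body = build_inference_request("input", [0.1, 0.2, 0.3, 0.4], [1, 4])
```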

 

 

Next Steps

 

In this blog, you have seen how Azure Machine Learning can enable your business to serve large AI models to many users simultaneously. By bringing together a high-performance inference server, a high-performance runtime, and high-performance hardware, we give you the ability to serve many requests per second at millisecond latencies while saving money.

 

To try this new offering yourself:

  1. Sign up for an Azure Machine Learning trial
  2. Clone our samples repository on GitHub
  3. Read our documentation
  4. Be sure to let us know what you think

You can also request access to the new NCasT4_v3 VM series (In Preview) by applying here.

What is Flexible Server in Azure Database for PostgreSQL?


At Ignite, we announced the preview of a new deployment option for Azure Database for PostgreSQL: Flexible Server. Flexible Server is the result of a multi-year Azure engineering effort to deliver a reimagined database service to those of you who run Postgres in the cloud. Over the past several years, our Postgres engineering team has had the opportunity to learn from many of you about your challenges and expectations around the Single Server deployment option in Azure Database for PostgreSQL. Your feedback and our learnings have informed the creation of Flexible Server.

 

If you are looking for a technical overview of Flexible Server in Azure Database for PostgreSQL and its key capabilities, let’s dive in.

 


 

Flexible Server is architected to meet the requirements of modern apps

Our Flexible Server deployment option for Postgres is hosted on the same platform as Azure Database for PostgreSQL – Hyperscale (Citus), our deployment option that scales out Postgres horizontally (by leveraging the Citus open source extension to Postgres).

 

Flexible Server is hosted in a single-tenant Virtual Machine (VM) on Azure, on a Linux based operating system that aligns naturally with the Postgres engine architecture. Your Postgres applications and clients can connect directly to Flexible Server, eliminating the need for redirection through a gateway. The direct connection also eliminates the need for an @ sign in your username on Flexible Server. Additionally, you can now place Flexible Server’s compute and storage—as well as your application—in the same Azure Availability Zone, resulting in lower latency to run your workloads. For storage, our Flexible Server option for Postgres uses Azure Premium Managed Disk. In the future, we will provide an option to use Azure Ultra SSD Managed Disk. The database and WAL archive (WAL stands for write ahead log) are stored in zone redundant storage.

 

Flexible Server Architecture showing PostgreSQL engine hosted in a VM with zone redundant storage for data/log backups and client, database compute and storage in the same Availability Zone

 

There are numerous benefits of using a managed Postgres service, and many of you are already using Azure Database for PostgreSQL to simplify or eliminate operational complexities. With Flexible Server, we’re improving the developer experience even further, as well as providing options for scenarios where you want more control of your database.

 

A developer-friendly managed Postgres service

For many of you, your primary focus is your application (and your application’s customers). If your application needs a database backend, the experience to provision and connect to the database should be intuitive and cost-effective. We have simplified the developer experience with Flexible Server on Azure Database for PostgreSQL in a few key ways.

  • Intuitive and simplified provisioning experience. To provision Flexible Server, some of the fields are automatically filled based on your profile. For example, the admin username and password are pre-filled with defaults, but you can always override them.

  • Simplified CLI experience. For example, it’s now possible to provision Flexible Server inside a virtual network in one command, and the number of keystrokes for the command can be reduced by using local context. For more details, see Flexible server CLI reference.

 

CLI command to provision the Flexible Server

 

  • Connection string requirement. The requirement to include the @servername suffix in the username has been removed. This allows you to connect to Flexible Server just like you would to any other PostgreSQL engine running on-premises or on a virtual machine.

  • Connection management: PgBouncer is now natively integrated to simplify PostgreSQL connection pooling.

  • Burstable compute: You can optimize cost with lower-cost, burstable compute SKUs that let you pay for performance only when you need it.

  • Stop/start: Reduce costs with the ability to stop and start Flexible Server when needed. This is ideal for development or test scenarios where it’s not necessary to run your database 24×7. When Flexible Server is stopped, you pay only for storage, and you can easily start it back up with just a click in the Azure portal.
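The first two connection improvements are easy to see in a connection string: with no gateway redirection there is no @servername suffix, and the built-in PgBouncer pooler (when enabled) listens on its own port. The helper below is an illustrative sketch using only the standard library; the server name and credentials are placeholders, and the pooler port 6432 reflects the documented default for the integrated PgBouncer:

```python
def flexible_server_dsn(server: str, user: str, password: str,
                        database: str = "postgres", pooled: bool = False) -> str:
    """Build a libpq-style connection string for a Flexible Server instance.
    Note the plain user name: no '@servername' suffix is required."""
    # Assumption: the integrated PgBouncer pooler (if enabled) listens on 6432,
    # while 5432 connects directly to the Postgres engine.
    port = 6432 if pooled else 5432
    return (f"host={server}.postgres.database.azure.com port={port} "
            f"dbname={database} user={user} password={password} sslmode=require")
```

Any standard PostgreSQL client (psql, psycopg2, JDBC) can consume a string built this way.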

 

Screenshot from the Azure Portal showing how to stop compute in your Azure Database for PostgreSQL flexible server when you don’t need it to be operational.

 

Screenshot from the Azure Portal depicting how to start compute for your Azure Database for PostgreSQL flexible server, when you’re ready to restart work.

 

Maximum database control

Flexible Server brings more flexibility and control to your managed Postgres database, with key capabilities to help you meet the needs of your application.

 

  • Scheduled maintenance: Enterprise applications must be available all the time, and any interruptions during peak business hours can be disruptive. Similarly, if you’re a DBA who is running a long transaction—such as a large data load or index create/rebuild operations—any disruption will abort your transaction prematurely. Some of you have asked for the ability to control Azure maintenance windows to meet your business SLAs. Flexible Server will schedule one maintenance window every 30 days at the time of your choosing. For many customers, the system-managed schedule is fine, but the option to control is helpful for some mission-critical workloads.

 

Screenshot from the maintenance settings for Azure Database for PostgreSQL flexible server in the Azure Portal, showing where you can select the day of week and start time for your maintenance schedule.
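The maintenance window can also be set from the CLI. This is a sketch with placeholder names, and the flag syntax may change before GA, so check `az postgres flexible-server update --help` for the current form:

```shell
# Pin the maintenance window to a custom schedule.
# The value is "Day:Hour:Minute", e.g. "Mon:1:30" for Monday at 01:30.
az postgres flexible-server update \
  --resource-group myresourcegroup --name mydemoserver \
  --maintenance-window "Mon:1:30"
```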

 

  • Configuration parameters: Postgres offers a wide range of server parameters to fine-tune database engine performance, and some of you want similar control in a managed service as well, for example to mimic the configuration you had on-premises or in a VM. Flexible Server has enabled control over additional server parameters, such as max_connections, and we will add even more by Flexible Server GA.
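Server parameters can be changed from the CLI as well as the portal. A hedged sketch with placeholder names (requires an Azure subscription):

```shell
# Set a server parameter on the Flexible Server.
az postgres flexible-server parameter set \
  --resource-group myresourcegroup --server-name mydemoserver \
  --name max_connections --value 250

# Read the current value back to confirm the change.
az postgres flexible-server parameter show \
  --resource-group myresourcegroup --server-name mydemoserver \
  --name max_connections
```

Note that some parameters are static and only take effect after a server restart.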

  • Lower Latency: To provide low latency for applications, some of you have asked for the ability to co-locate Azure Database for PostgreSQL and your application in physical proximity (i.e., the same Availability Zone). Flexible Server lets you co-locate the client, database, and storage, and based on our internal testing and customer testimonials, we are seeing much better out-of-the-box performance.

  • Network Isolation: Some of you need the ability to provision servers in your own VNet or subnet to ensure complete lockdown from any outside access. With Flexible Server private endpoints, you can completely isolate the network by preventing any public endpoint from existing for the database workload. All connections to the server, on public or private endpoints, are secured and encrypted by default with SSL/TLS 1.2.

 

Zone-redundant high availability

With the new Flexible Server option for Azure Database for PostgreSQL, you can choose to turn on zone redundant high availability (HA). If you do, our managed Postgres service will spin up a hot standby with the exact same configuration, for both compute and storage, in a different Availability Zone. This allows you to achieve fast failover and application availability should the Availability Zone of the primary server become unavailable.
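Zone-redundant HA can be requested at create time. This sketch uses placeholder names, and the HA flag names were still evolving during preview, so verify against `az postgres flexible-server create --help` before use:

```shell
# Create a server with a hot standby in a different Availability Zone.
az postgres flexible-server create \
  --resource-group myresourcegroup --name mydemoserver \
  --high-availability Enabled \
  --zone 1
```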

 

Any failure on the primary server is automatically detected, and it will fail over to the standby which becomes the new primary. Your application can connect to this new primary with no changes to the connection string.
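Because the connection string does not change across failover, the only client-side work is to reconnect when an in-flight connection drops. A minimal, driver-agnostic retry sketch (the `connect` callable stands in for something like `psycopg2.connect(dsn)`; the flaky connector below merely simulates a failover):

```python
# Sketch: reconnect with the *same* connection string after a failover.
import time

def connect_with_retry(connect, attempts: int = 5, delay: float = 0.0):
    """Call `connect` until it succeeds or the attempts are exhausted."""
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as exc:  # real code should catch the driver's OperationalError
            last_error = exc
            time.sleep(delay)  # real code would use exponential backoff
    raise last_error

# Simulate a failover: the first two attempts fail, the third succeeds.
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("server closed the connection unexpectedly")
    return "connected"

result = connect_with_retry(flaky_connect)
print(result)  # → connected
```

In production, use a non-zero backoff delay so the retries span the short failover window rather than exhausting immediately.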

 

Zone redundancy can help with business continuity during planned or unplanned downtime events, protecting your mission-critical databases. Given that the zone redundant configuration provides a full standby replica server, there are cost implications, and zone redundancy can be enabled or disabled at any time.

 

Screenshot from the Azure Portal depicting an Azure Database for PostgreSQL flexible server in a zone-redundant HA configuration, with the primary server in Availability Zone 1 and the standby server in Availability Zone 2.

 

Get started with Flexible Server today!

We can’t wait to see how you will use our new Flexible Server deployment option that is now in preview in Azure Database for PostgreSQL. If you’re ready to try things out, here are some quickstarts to get you started:

 

 

Azure Database for PostgreSQL Single Server remains the enterprise-ready database platform of choice for your mission-critical workloads until Flexible Server reaches GA. For those of you who want to move to Flexible Server, we are also working to provide a simplified migration experience from Single Server to Flexible Server with minimal downtime.

 

If you want to dive deeper, the new Flexible Server docs are a great place to roll up your sleeves, and you can visit our website to learn more about our Azure Database for PostgreSQL managed service. We are always eager to hear your feedback, so please reach out via email using Ask Azure DB for PostgreSQL.

 

Sunil Agarwal

Twitter: @s_u_n_e_e_l

Experiencing Data Latency Issue in Azure portal for Log Analytics – 10/02 – Resolved

This article is contributed. See the original author and article here.

Final Update: Friday, 02 October 2020 19:45 UTC

We’ve confirmed that all systems are back to normal with no customer impact as of 10/02, 19:05 UTC. Our logs show the incident started on 10/02, 17:00 UTC, and that during the 2 hours and 5 minutes it took to resolve the issue, 10% of customers experienced periods of increased latency when ingesting Log Analytics data in East US.


  • Root Cause: A back-end web role experienced a period of high CPU utilization due to an ongoing upgrade, causing some calls from the front-end ingestion service to time out and leading to a drop in front-end availability.
  • Incident Timeline: 2 Hours & 05 minutes – 10/02, 17:00 UTC through 10/02, 19:05 UTC

We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Vincent