IPv4/IPv6 dual stack webservices in Azure

This article is contributed. See the original author and article here.

IPv6 is the most recent version of the Internet Protocol (IP), and the Internet Engineering Task Force (IETF) standard is already more than 20 years old, but for most of us it is still not something we have to deal with on a day-to-day basis. This is slowly changing as some areas of the world are running out of IPv4 addresses and local governments start to make IPv6 support mandatory for some verticals. This results in urgent requests by some of our customers to provide their webservices via IPv4 and IPv6 (so-called dual stack) to fulfill their regulatory requirements.

 

The Internet Society published an article back in June 2013 (see “Making Content Available Over IPv6”) on what potential implementations could look like. The article covers native IPv6, proxy servers, and Network Address Translation (NAT). Even though the article is a few years old, the different options are still valid.

 

So how does this apply to Azure? Azure has supported IPv6 for quite some time now across most of its foundational network and compute services. This enables native implementations using IPv4 and IPv6 by leveraging Azure’s dual stack capabilities.

 

For example, in scenarios where applications are mainly based on Virtual Machines (VMs) and other Infrastructure-as-a-Service (IaaS) services with full IPv6 support, we can use native IPv4 and IPv6 end-to-end. This avoids any complications caused by translation and provides the most information to the server and the application. But it also means that every device along the path between the client and the server must handle IPv6 traffic.

 

The following diagram depicts a simple dual stack (IPv4/IPv6) deployment in Azure:

 

[Image: ipv6_image_1.png – a simple dual stack (IPv4/IPv6) deployment in Azure]
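
For illustration, here is a minimal sketch of how such a dual stack Virtual Network could be created with the Azure SDK for Python (the azure-identity and azure-mgmt-network packages). The resource group, names, region, and address prefixes are assumptions for this example:

    # Minimal sketch: create a Virtual Network with an IPv4 and an IPv6
    # address space plus one dual stack subnet. All names and prefixes
    # below are illustrative assumptions.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    subscription_id = "<your-subscription-id>"
    network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

    vnet = network_client.virtual_networks.begin_create_or_update(
        "my-resource-group",   # assumed resource group
        "dualstack-vnet",      # assumed VNet name
        {
            "location": "westeurope",
            "address_space": {
                # One prefix per address family (documentation prefixes).
                "address_prefixes": ["10.0.0.0/16", "2001:db8:1234::/48"]
            },
            "subnets": [
                {
                    "name": "front-end-subnet",
                    # A dual stack subnet carries one prefix per family as well.
                    "address_prefixes": ["10.0.0.0/24", "2001:db8:1234::/64"],
                }
            ],
        },
    ).result()
    print(vnet.name, vnet.address_space.address_prefixes)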

 

The native end-to-end implementation is most interesting for scenarios and use cases where direct server-to-server or client-to-server communication is required. This is not the case for most webservices and applications, as they are typically already exposed through, or shielded behind, a firewall, a Web Application Firewall (WAF), or a reverse proxy.

 

Other, more complex deployments and applications often contain a set of 3rd-party applications or Platform-as-a-Service (PaaS) services like Azure Application Gateway (AppGw), Azure Kubernetes Service (AKS), Azure SQL Database, and others that do not necessarily support native IPv6 today. There might also be other backend services and solutions, in some cases hosted on IaaS VMs, that are not capable of speaking IPv6. This is where NAT/NAT64 or an IPv6 proxy solution comes into play. These solutions allow us to translate from IPv6 to IPv4 and vice versa.

 

Besides the technical need to translate between IPv6 and IPv4, there are other considerations, especially around education, the cost of major application changes and modernization, and the complexity of the application architecture, that lead customers to consider a gateway for offering services via IPv4/IPv6 while still leveraging IPv4-only infrastructure in the background.

A very typical deployment today, for example using a WAF, looks like this:

 

[Image: ipv6_image_2.png – a typical deployment using a WAF]

 

The difference between the first and the second drawing in this post is that the latter has a front-end subnet containing, for example, a 3rd-party Network Virtual Appliance (NVA) that accepts IPv4 and IPv6 traffic and translates it into IPv4-only traffic towards the backend.

 

This allows customers to expose existing applications via IPv4 and IPv6, making them natively accessible to end-users via both IP versions without the need to re-architect their application workloads, and to overcome limitations in some Azure and 3rd-party services that do not support IPv6 today.

 

Here is a closer look at what a typical architecture could look like:

 

[Image: ipv6_image_3.png – a typical NVA-based dual stack architecture in Azure]

 

The Azure Virtual Network is enabled for IPv6 and has address prefixes for IPv4 and IPv6 (these prefixes are examples only). The “Front-end subnet” is enabled for IPv6 as well and contains a pair of NVAs. These NVAs are deployed into an Azure Availability Set to increase their availability and are exposed through an Azure Load Balancer (ALB). The first ALB (on the left) has a public IPv4 and a public IPv6 address.
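
As a sketch, the two public frontend addresses for such an ALB could be created with the Azure SDK for Python as shown below; the names, region, and Standard SKU choice are assumptions:

    # Minimal sketch: one Standard SKU public IP per address family for
    # the load balancer frontend. Names and region are assumptions.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    subscription_id = "<your-subscription-id>"
    network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

    for name, version in [("alb-pip-v4", "IPv4"), ("alb-pip-v6", "IPv6")]:
        pip = network_client.public_ip_addresses.begin_create_or_update(
            "my-resource-group",   # assumed resource group
            name,
            {
                "location": "westeurope",
                "sku": {"name": "Standard"},
                "public_ip_allocation_method": "Static",
                "public_ip_address_version": version,  # "IPv4" or "IPv6"
            },
        ).result()
        print(name, pip.ip_address)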

 

The NVAs (for example a WAF) in the Front-end subnet accept IPv4 and IPv6 traffic, can offer, depending on the ISV solution and SKU, a broad set of functionality, and translate the inbound traffic into IPv4 to access the application running in our “Application subnet”. The application in turn is exposed internally (no public IP addresses) via IPv4 only, through an internal ALB, and is only accessible from the NVAs in the front-end subnet.
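
To make the translation idea more tangible, here is a toy sketch, using only the Python standard library, of what such a proxy conceptually does: accept connections on a dual stack socket and forward them to an IPv4-only backend. A real NVA or WAF naturally does far more (TLS termination, inspection, filtering), and the backend address below is an assumption:

    # Toy dual stack-to-IPv4 TCP forwarder; for illustration only.
    import socket
    import threading

    BACKEND = ("10.0.1.10", 80)  # assumed IPv4-only internal load balancer

    def pipe(src, dst):
        # Copy bytes in one direction until the peer closes.
        try:
            while data := src.recv(4096):
                dst.sendall(data)
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

    def handle(client_sock):
        backend_sock = socket.create_connection(BACKEND)
        threading.Thread(target=pipe, args=(client_sock, backend_sock)).start()
        pipe(backend_sock, client_sock)

    # An AF_INET6 listener with IPV6_V6ONLY disabled accepts IPv4 and
    # IPv6 clients on the same socket.
    listener = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    listener.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    listener.bind(("::", 8080))
    listener.listen()
    while True:
        conn, addr = listener.accept()
        print("client:", addr)  # IPv4 clients show up as ::ffff:a.b.c.d
        threading.Thread(target=handle, args=(conn,)).start()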

 

This results in an architecture that provides services via IPv4 and IPv6 to end-users while the backend application still uses IPv4 only. The benefit of this approach is, on one hand, reduced complexity, as the application teams do not need to know about or take care of IPv6, and, on the other hand, a reduced attack surface, as only the well-known and fully supported IPv4 protocol is used between the front-end and the application subnet.

 

Instead of using a 3rd-party NVA, Azure offers similar, managed capabilities through the Azure Front Door (AFD) service. AFD is a global Azure service that “enables you to define, manage and monitor global routing for web traffic by optimizing for best performance and quick global failover for high availability” (see “What is Azure Front Door?”). AFD works at Layer 7 of the OSI model (HTTP/HTTPS), uses anycast, and allows you to route client requests to the fastest and most available application backend. An application backend can be any Internet-facing service hosted inside or outside of Azure.

 

AFD’s capabilities include proxying IPv6 client requests and traffic to an IPv4-only backend as shown below:

 

[Image: ipv6_image_4.png – AFD proxying IPv6 client requests to an IPv4-only backend]
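
A simple way to check that an endpoint is actually published over both address families is to resolve its A and AAAA records, for example (the hostname below is a placeholder assumption):

    # Resolve the A (IPv4) and AAAA (IPv6) records of an endpoint.
    import socket

    host = "myapp.azurefd.net"  # placeholder Front Door endpoint hostname
    for family, label in [(socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")]:
        try:
            addresses = {info[4][0] for info in socket.getaddrinfo(host, 443, family)}
            print(label, sorted(addresses))
        except socket.gaierror:
            print(label, "no address published")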

 

The main architectural difference between the NVA-based approach described at the beginning of this post and the AFD service is that the NVAs are customer-managed, work at Layer 4 of the OSI model, and can be deployed into the same Azure Virtual Network as the application, with the NVA having a private and a public interface. The application is then only accessible through the NVAs, which allows filtering of all ingress (and egress) traffic. AFD, by contrast, is a global PaaS service in Azure that lives outside of a specific Azure region and works at Layer 7 (HTTP/HTTPS). Its application backend is an Internet-facing service hosted inside or outside of Azure and can be locked down to accept traffic only from your specific Front Door.

 

While this post presents these two solutions as different options to make IPv4-only applications and services hosted in Azure available to users via IPv4 and IPv6, the choice, in general and especially in more complex environments, is not either a WAF/NVA solution or Azure Front Door. It can be a combination of both, where NVAs are used within a regional deployment while AFD routes the traffic to one or more regional deployments in different Azure regions (or other Internet-facing locations).

 

What’s next? Take a closer look at the Azure Front Door service and its documentation to learn more about its capabilities, and decide whether native end-to-end IPv4/IPv6 dual stack, a 3rd-party NVA-based solution, or the Azure Front Door service fulfills your needs to support IPv6 with your web application.

 

 

Azure Sphere TLS certificate update

This article is contributed. See the original author and article here.

On September 15, we will publish an update to the TLS certificates used by Azure Sphere devices to establish a connection to the update service. When a device takes the certificate update, it will reboot once to apply it. There is no impact to device connectivity or operations other than this additional reboot as the device downloads the new certificates.

 

Device updates can be temporarily delayed for up to 24 hours. For more information, see Defer device updates in the customer documentation.

 

If you encounter problems

For self-help technical inquiries, please visit Microsoft Q&A or Stack Overflow. If you require technical support and have a support plan, please submit a support ticket in Microsoft Azure Support or work with your Microsoft Technical Account Manager/Technical Specialist. If you would like to purchase a support plan, please explore the Azure support plans.

 

 

 

Monitoring queries being executed in your Azure Log Analytics Workspaces

This article is contributed. See the original author and article here.

One of the most requested features in Azure Monitor Logs is the ability to track the queries being executed in the system. Recently, we released to public preview a capability that meets all of these needs: the Query Audit Logs for Azure Log Analytics!

 

A rich dataset to monitor your Workspace

The feature was designed to answer questions around the areas of compliance, security, and performance of queries in the system.

 

The dataset that you will see once you enable the collection of the Query Audit Logs includes full information about each query executed. This covers who ran the query, what application was used to run it, and, for successful queries, a full set of performance counters. The rich dataset lets you answer a wide variety of questions: from detecting malicious attempts to access sensitive data, to identifying queries that are particularly inefficient, to detecting broken automation through consistently failing queries.

 

Collecting query audit logs is simple

Full details about how to enable and use the feature are available on our documentation page here.

 

Enabling the collection of the query logs is simple – just open the workspace that you want to start tracking logs for, go to the diagnostic settings, and enable the collection of the query logs into any combination of a Storage Blob, Event Hub, and/or (of course!) Azure Monitor Logs.

 

[Image: 1.png – enabling query log collection in the workspace diagnostic settings]

 

If you prefer a programmatic approach, be it through an ARM template or Azure Policy, we provide full support for that as well. You’ll find an example of an ARM template you can use here.
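
The Azure SDK for Python offers yet another route. The following is a sketch only, assuming the azure-mgmt-monitor package and the “Audit” log category used by the query audit logs; double-check both against the documentation linked above:

    # Sketch: enable the "Audit" log category on a Log Analytics workspace
    # and route it back into a workspace. Resource IDs are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.monitor import MonitorManagementClient

    subscription_id = "<your-subscription-id>"
    monitor_client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

    workspace_id = (
        "/subscriptions/<your-subscription-id>/resourceGroups/my-rg"
        "/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
    )

    monitor_client.diagnostic_settings.create_or_update(
        resource_uri=workspace_id,   # workspace whose queries are audited
        name="query-audit-logs",
        parameters={
            # Destination; a storage account or event hub works the same way.
            "workspace_id": workspace_id,
            "logs": [{"category": "Audit", "enabled": True}],
        },
    )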

 

Answering a wide array of questions

So what can you do with these query logs once you enable them? Below are just a few examples.

 

You can see the number of queries each user in the system ran:

[Image: 2.png – number of queries run by each user]

 

The response codes for these queries, useful for detecting failed logins (403s) or broken automation (409s):

[Image: 3.png – response codes for the queries]

 

 

And a list of the users most advanced in their knowledge of KQL, judged by the length of the queries they write:

[Image: 4.png – users ranked by the length of their queries]
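
For reference, the following sketch runs the kinds of KQL queries behind the screenshots above through the azure-monitor-query package. The LAQueryLogs column names (AADEmail, ResponseCode, QueryText) are assumed from the query audit log schema; verify them against the documentation:

    # Sketch: run three audit-log queries against a workspace.
    from datetime import timedelta
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    logs_client = LogsQueryClient(DefaultAzureCredential())
    workspace_id = "<your-workspace-id>"

    queries = {
        "queries per user": "LAQueryLogs | summarize Count = count() by AADEmail",
        "response codes": "LAQueryLogs | summarize Count = count() by ResponseCode",
        "longest queries": (
            "LAQueryLogs"
            " | extend QueryLength = strlen(QueryText)"
            " | summarize MaxLength = max(QueryLength) by AADEmail"
            " | top 10 by MaxLength"
        ),
    }

    for title, kql in queries.items():
        result = logs_client.query_workspace(workspace_id, kql, timespan=timedelta(days=7))
        print("---", title, "---")
        for row in result.tables[0].rows:
            print(list(row))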

 

While these are just a few examples to showcase the capabilities of these logs, many more questions can be answered – give it a go and see what insights you can come up with!

 

Next Steps

Enable the collection of the Query Audit Logs in Azure Monitor Logs today, and start getting visibility into how your Workspace is being used. Please do let us know of any questions or feedback you have around the feature – we’re excited to see the creative ways in which these get used!

Azure Media Service scalable video streaming on Azure

This article is contributed. See the original author and article here.

Azure Media Services is highly scalable for streaming videos to mobile or web applications. It enables a customer to use high-definition video encoding and streaming services to reach audiences on the devices they use, enhancing content discoverability and performance with AI, all while helping to protect content with digital rights management (DRM).

 

Azure Media Services also enables a customer to live stream. With the power of Azure Media Services, there is no requirement for special hardware and no extra infrastructure cost. To stream your live events with Media Services, you need the following:

  • A camera that is used to capture the live event.
    For setup ideas, check out Simple and portable event video gear setup.

    If you do not have access to a camera, tools such as Telestream Wirecast can be used to generate a live feed from a video file.

  • A live video encoder that converts signals from a camera (or another device, like a laptop) into a contribution feed that is sent to Media Services. The contribution feed can include signals related to advertising, such as SCTE-35 markers.
    For a list of recommended live streaming encoders, see live streaming encoders. Also, check out this blog: Live streaming production with OBS.

  • Components in Media Services, which enable you to ingest, preview, package, record, encrypt, and broadcast the live event to your customers, or to a CDN for further distribution.

For customers looking to deliver content to large internet audiences, we recommend that you enable CDN on the streaming endpoint.
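
As a sketch of what the programmatic side can look like, a live event could be created with the Azure SDK for Python (azure-mgmt-media). The account, resource group, and event names are assumptions; see the linked overview for the full workflow:

    # Sketch: create an RTMP live event and print its ingest endpoints.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.media import AzureMediaServices

    subscription_id = "<your-subscription-id>"
    media_client = AzureMediaServices(DefaultAzureCredential(), subscription_id)

    live_event = media_client.live_events.begin_create(
        resource_group_name="my-rg",         # assumed resource group
        account_name="mymediaaccount",       # assumed Media Services account
        live_event_name="my-live-event",
        parameters={
            "location": "westeurope",
            # The contribution feed from the live encoder is pushed over
            # RTMP to the ingest URL of this input.
            "input": {"streaming_protocol": "RTMP"},
        },
    ).result()

    # The live encoder (for example OBS or Wirecast) pushes to these URLs.
    for endpoint in live_event.input.endpoints:
        print(endpoint.protocol, endpoint.url)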

This article gives an overview of, and guidance on, live streaming with Media Services, and links to other relevant articles.

https://docs.microsoft.com/en-us/azure/media-services/latest/live-streaming-overview

 

@arsalan_ali  

Experiencing Data Access Issue in Azure portal for Log Analytics – 09/14 – Resolved

This article is contributed. See the original author and article here.

Final Update: Monday, 14 September 2020 20:28 UTC

We’ve confirmed that all systems are back to normal with no customer impact as of 9/14, 19:59 UTC. Our logs show the incident started on 9/14, 9:30 UTC and that during the 10 hours and 30 minutes that it took to resolve, customers in the China region may have experienced data latency or data gaps that could have caused false or missed alerts.

  • Root Cause: The failure was due to a service change that resulted in data being misrouted.
  • Incident Timeline: 10 Hours & 30 minutes – 9/14, 09:30 UTC through 9/14, 19:59 UTC

We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Ian