Azure Unblogged – GitHub

This article is contributed. See the original author and article here.

Today, I am pleased to share with you a new episode of Azure Unblogged. I chat with Martin Woodward, Director of Developer Relations at GitHub. Martin and I discuss why IT Pros and System Administrators should look at learning GitHub, the new GitHub Actions and GitHub Codespaces features and how they integrate with Azure, as well as the forthcoming GitHub Universe.


 


You can watch the full video here or on Microsoft Channel 9.


 

I hope you enjoyed the video. If you have any questions, feel free to leave a comment, and if you want to check out some of the resources Martin mentioned, please check out the links below:


Azure Sphere OS version 20.12 is now available for evaluation


The Azure Sphere OS version 20.12 is now available for evaluation in the Retail Eval feed. The retail evaluation period provides 14 days for backwards compatibility testing. During this time, please verify that your applications and devices operate properly with this release before it is deployed broadly via the Retail feed. The Retail feed will continue to deliver OS version 20.10 until we publish 20.12 in two weeks. For more information on retail evaluation, see our blog post, The most important testing you’ll do: Azure Sphere Retail Evaluation.


 


Azure Sphere OS version 20.12


The 20.12 release includes the following bug fixes and enhancements in the Azure Sphere OS. It does not include an updated SDK. 



  • Reduced the maximum transmission unit (MTU) from 1500 bytes to 1420 bytes.

  • Improved device update in congested networks.

  • Fixed an issue wherein the Wi-Fi module stops scanning but does not respond with a completion event if a background scan is running and the active Wi-Fi network is deleted.

  • Fixed a bug wherein I2CMaster_Write() returns EBUSY when re-sideloading the app interrupts an in-progress operation.


 


Azure Sphere SDK version 20.11


On Nov 30, we released version 20.11 of the Azure Sphere SDK. The 20.11 SDK introduces the first Beta release of the azsphere command line interface (CLI) v2. The CLI v2 Beta is installed alongside the existing CLI on both Windows and Linux, and it works with both the 20.10 and 20.12 versions of the OS. For the purpose of retail evaluation, continue to use the CLI v1. For more information on the v2 CLI and a complete list of additional features, see Azure Sphere CLI v2 Beta.


 


For more information on Azure Sphere OS feeds and setting up an evaluation device group, see Azure Sphere OS feeds. 


 


For self-help technical inquiries, please visit Microsoft Q&A or Stack Overflow. If you require technical support and have a support plan, please submit a support ticket in Microsoft Azure Support or work with your Microsoft Technical Account Manager. If you would like to purchase a support plan, please explore the Azure support plans.

Azure Service Fabric 7.2 Fourth Refresh Release


The Azure Service Fabric 7.2 fourth refresh release includes stability fixes for standalone and Azure environments and has started rolling out to the various Azure regions. The updates for the .NET SDK, Java SDK, and Service Fabric runtime will be available through the Web Platform Installer, NuGet packages, and Maven repositories in all regions within 7-10 days.


 


You will be able to update to the 7.2 fourth refresh release through a manual upgrade on the Azure Portal or via an Azure Resource Manager deployment. Due to customer feedback on releases around the holiday period, we will not begin automatically updating clusters set to receive automatic upgrades.
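For the Resource Manager route, the upgrade essentially amounts to setting the target runtime version on the cluster resource. The fragment below is only a sketch: the apiVersion and parameter name are my assumptions about the Microsoft.ServiceFabric/clusters schema, and the code version is the Windows runtime version listed in this post.

```json
{
  "type": "Microsoft.ServiceFabric/clusters",
  "apiVersion": "2020-03-01",
  "name": "[parameters('clusterName')]",
  "properties": {
    "upgradeMode": "Manual",
    "clusterCodeVersion": "7.2.445.9590"
  }
}
```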


 



  • Service Fabric Runtime


    • Windows – 7.2.445.9590

    • Service Fabric for Windows Server Service Fabric Standalone Installer Package – 7.2.445.9590




  • .NET SDK


    • Windows .NET SDK –  4.2.445

    • Microsoft.ServiceFabric –  7.2.445

    • Reliable Services and Reliable Actors –  4.2.445

    • ASP.NET Core Service Fabric integration –  4.2.432


  • Java SDK –  1.0.6


 


Key Announcements



  • .NET 5 apps for Windows on Service Fabric are now supported as a preview. Look out for the GA announcement of .NET 5 apps for Windows on Service Fabric in the coming weeks.

  • .NET 5 apps for Linux on Service Fabric will be added in the Service Fabric 8.0 release (Spring 2021).

  • Windows Server 20H2 is now supported as of the 7.2 CU4 release.


For more details, please read the release notes.  

Deploying a LoRaWAN network server on Azure



 







There is something oddly fascinating about radio waves, radio communications, and the sheer amount of innovations they’ve enabled since the end of the 19th century.


What I find even more fascinating is that it is now very easy for anyone to get hands-on experience with radio technologies such as LPWAN (Low-Power Wide Area Network, a technology that allows connecting pieces of equipment over a low-power, long-range, secure radio network) in the context of building connected products.






 




It’s of no use whatsoever […] this is just an experiment that proves Maestro Maxwell was right—we just have these mysterious electromagnetic waves that we cannot see with the naked eye. But they are there.


— Heinrich Hertz, about the practical importance of his radio wave experiments

Nowadays, not only is there a wide variety of hardware developer kits, gateways, and radio modules to help you with the hardware/radio aspect of LPWAN radio communications, but there is also open-source software that allows you to build and operate your very own network. Read on as I will be giving you some insights into what it takes to set up a full-blown LoRaWAN network server in the cloud!

 


A quick refresher on LoRaWAN


 


LoRaWAN is a low-power wide-area network (LPWAN) technology that uses the LoRa radio protocol to allow long-range transmissions between IoT devices and the Internet. LoRa itself uses a form of chirp spread spectrum modulation which, combined with error correction techniques, allows for very high link budgets; in other words, the ability to cover very long ranges!
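To make the “link budget” idea concrete, here is a back-of-the-envelope calculation. The figures are assumptions on my part, typical of LoRa radios (+14 dBm transmit power, roughly -137 dBm receiver sensitivity at the slowest data rate), and vary with hardware, region, and antennas:

```python
# Back-of-the-envelope link budget calculation with typical (assumed) LoRa
# figures. The link budget is the maximum path loss the link can tolerate
# before packets stop getting through.
def link_budget_db(tx_power_dbm, rx_sensitivity_dbm, antenna_gains_db=0):
    """Transmit power minus receiver sensitivity, plus any antenna gains."""
    return tx_power_dbm - rx_sensitivity_dbm + antenna_gains_db

print(link_budget_db(14, -137))  # ~151 dB, enough for kilometres of range
```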


Data sent by LoRaWAN end devices gets picked up by gateways nearby and is then routed to a so-called network server. The network server de-duplicates packets (several gateways may have “seen” and forwarded the same radio packet), performs security checks, and eventually routes the information to its actual destination, i.e. the application the devices are sending data to.
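The de-duplication step can be sketched in a few lines of Python. This is a toy illustration, not how The Things Stack actually implements it: identical payloads are assumed to be copies of the same radio packet, and the copy heard with the best signal (RSSI) is kept:

```python
# Toy sketch of network-server de-duplication: several gateways may forward
# the same radio packet, and we keep the copy with the strongest signal.
def deduplicate(uplinks):
    """Group uplinks by raw payload; keep the best-RSSI copy of each packet."""
    best = {}
    for up in uplinks:
        key = up["payload"]  # same radio packet => identical payload bytes
        if key not in best or up["rssi"] > best[key]["rssi"]:
            best[key] = up
    return list(best.values())

# The same packet heard by three gateways, plus one distinct packet:
uplinks = [
    {"payload": b"\x40\x01\x02", "gateway": "gw-a", "rssi": -115},
    {"payload": b"\x40\x01\x02", "gateway": "gw-b", "rssi": -98},
    {"payload": b"\x40\x01\x02", "gateway": "gw-c", "rssi": -120},
    {"payload": b"\x40\x09\x08", "gateway": "gw-a", "rssi": -101},
]
unique = deduplicate(uplinks)
# Two unique packets survive; the first is the copy heard by gw-b.
```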




 




LoRaWAN end nodes are usually pretty “dumb”, battery-powered devices (e.g., a soil moisture sensor, a parking occupancy sensor, …) that have very limited knowledge of their radio environment. For example, a node may be in close proximity to a gateway, and yet transmit radio packets with much more transmission power than necessary, wasting precious battery energy in the process. Therefore, one of the duties of a LoRaWAN network server is to consolidate various metrics collected from the field gateways to optimize the network. If a gateway is telling the network server it is getting a really strong signal from a sensor, it might make sense to send a downlink packet to that device so that it can try using slightly less power for future transmissions.
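In real networks this power-adjustment logic is part of LoRaWAN’s Adaptive Data Rate (ADR) mechanism, which also tunes the spreading factor and tracks an SNR margin. The sketch below is a deliberately simplified, hypothetical version of just the transmit-power decision; the threshold and step values are illustrative:

```python
# Simplified, illustrative transmit-power decision: if the strongest gateway
# hears the device well above a target level, suggest backing off one step.
# Real ADR is more involved (spreading factor, SNR margin, etc.).
def suggest_tx_power(best_rssi_dbm, current_power_dbm,
                     target_rssi_dbm=-100, step_db=2, min_power_dbm=2):
    """Return the transmit power the network server would ask the device to use."""
    if best_rssi_dbm > target_rssi_dbm and current_power_dbm > min_power_dbm:
        return max(min_power_dbm, current_power_dbm - step_db)
    return current_power_dbm  # leave the device alone otherwise

print(suggest_tx_power(best_rssi_dbm=-80, current_power_dbm=14))   # strong signal -> 12
print(suggest_tx_power(best_rssi_dbm=-118, current_power_dbm=14))  # weak signal  -> 14
```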


As LoRa uses unlicensed spectrum, anyone can freely connect LoRa devices, or even operate their own network, provided they follow their local radio regulations.


 


My private LoRaWAN server, why?


 


The LoRaWAN specification puts a really strong focus on security, and by no means do I want to make you think that rolling out your own networking infrastructure is mandatory to make your LoRaWAN solution secure. In fact, LoRaWAN has a pretty elegant way of securing communications, while keeping the protocol lightweight. There is a lot of literature on the topic that I encourage you to read but, in a nutshell, the protocol makes it almost impossible for malicious actors to impersonate your devices (messages are signed and protected against replay attacks) or access your data (your application data is seen by the network server as an opaque, ciphered, payload).
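As a toy illustration of two of those protections, here is a sketch of a message integrity code (MIC) check combined with a frame-counter check against replays. Note that the real protocol computes the 4-byte MIC with AES-CMAC under the network session key; HMAC-SHA256 merely stands in for it in this sketch:

```python
# Toy sketch of LoRaWAN-style uplink checks. Real LoRaWAN computes the MIC
# with AES-CMAC and the network session key; HMAC-SHA256 stands in for it here.
import hmac, hashlib

def accept_uplink(nwk_skey, payload, mic, fcnt, last_fcnt):
    """Accept an uplink only if its MIC verifies (authenticity) and its frame
    counter is strictly increasing (replay protection)."""
    expected = hmac.new(nwk_skey, payload + fcnt.to_bytes(4, "little"),
                        hashlib.sha256).digest()[:4]  # LoRaWAN MICs are 4 bytes
    return hmac.compare_digest(mic, expected) and fcnt > last_fcnt

key = b"\x01" * 16
msg = b"sensor-data"
good_mic = hmac.new(key, msg + (7).to_bytes(4, "little"),
                    hashlib.sha256).digest()[:4]

print(accept_uplink(key, msg, good_mic, fcnt=7, last_fcnt=6))  # True
print(accept_uplink(key, msg, good_mic, fcnt=7, last_fcnt=7))  # False: replayed
```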


So why should you bother rolling out your own LoRaWAN network server anyway?


Coverage where you need it


 


In most cases, relying on a public network operator means being dependent on their coverage. While some operators might allow a hybrid model where you can attach your own gateways to their network, and hence extend the coverage right where you need it, you often don’t get to decide how well a particular geographical area will be covered by a given operator.


When rolling out your own network server, you end up managing your own fleet of gateways, bringing you more flexibility in terms of coverage, network redundancy, etc.


 


Data ownership


 


While operating your own server will not necessarily add a lot in terms of pure security (after all, your LoRaWAN packets are hanging in the open air a good chunk of their lifetime anyway!), being your own operator definitely brings you more flexibility to know and control what happens to your data once it’s reached the Internet.


 


What about the downsides?


 


It goes without saying that operating your network is no small feat, and you should obviously do your due diligence with regards to the potential challenges, risks, and costs associated with keeping your network up and running.


Anyway, it is now high time I tell you how you’d go about rolling out your own LoRaWAN network, right?


 


The Things Stack on Azure


 


The Things Stack is an open-source LoRaWAN network server that supports all versions of the LoRaWAN specification and operation modes. It is actively being maintained by The Things Industries and is the underlying core of their commercial offerings.


A typical/minimal deployment of The Things Stack network server relies on three pillars:



  • A Redis in-memory data store for supporting the operation of the network;

  • An SQL database (PostgreSQL or CockroachDB are supported) for storing information regarding the gateways, devices, and users of the network;

  • The actual stack, running the different services that power the web console, the network server itself, etc.


The deployment model recommended for someone interested in quickly testing out The Things Stack is to use their Docker Compose configuration. It fires up all the services mentioned above as Docker containers on the same machine. Pretty cool for testing, but not so much for a production environment: who is going to keep those Redis and PostgreSQL services available 24/7, properly backed up, etc.?
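For reference, a Docker-Compose-style sketch of those three pillars might look like the following. Service names, image tags, and port numbers here are illustrative assumptions; refer to The Things Stack documentation for the real, supported file:

```yaml
# Hedged sketch only: images, tags, and ports are illustrative.
version: "3.7"
services:
  redis:                                   # in-memory data store
    image: redis:6
  postgres:                                # SQL database (gateways, devices, users)
    image: postgres:12
    environment:
      POSTGRES_DB: ttn_lorawan
  stack:                                   # the network server services themselves
    image: thethingsnetwork/lorawan-stack:latest
    command: start
    depends_on: [redis, postgres]
    ports:
      - "1700:1700/udp"                    # UDP packet forwarder traffic
      - "8885:8885"                        # web console over HTTPS (illustrative)
```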


I have put together a set of instructions and a deployment template that aim at showing what a LoRaWAN server based on The Things Stack and running in Azure could look like.


 



 


The instructions in the GitHub repository linked below should be all you need to get your very own server up and running!


In fact, you only have a handful of parameters to tweak (what fancy nickname to give your server, credentials for the admin user, …) and the deployment template will do the rest!



OK, I deployed my network server in Azure, now what?


 


Here are some of the things that having your own network server, running in your own Azure subscription, will enable. Some will sound oddly specific if you don’t have a lot of experience with LoRaWAN yet, but they are important nevertheless. You can:



  • benefit from managed Redis and PostgreSQL services, and not have to worry about rolling out potential security fixes, performing regular backups, etc.;

  • control which LoRaWAN gateways can connect to your network server, as you can tweak your Network Security Group to only allow specific IPs to reach the UDP packet forwarder endpoint of your network server;

  • completely isolate the internals of your network server from the public Internet (including the Application Server, if you wish), putting you in a better position to control and secure your business data;

  • scale your infrastructure up or down as the size and complexity of the fleet that you are managing evolves;

  • … and there is probably so much more. I’m actually curious to hear in the comments below about other benefits (or downsides, for that matter) you’d see.
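The gateway-filtering point above (restricting which IPs can reach the UDP packet forwarder endpoint, conventionally port 1700) could, for instance, be expressed with the Azure CLI. The resource names and the source IP below are placeholders:

```shell
# Illustrative only: allow a single known gateway to reach the
# Semtech UDP packet forwarder port on the network server.
az network nsg rule create \
  --resource-group my-lorawan-rg \
  --nsg-name my-lorawan-nsg \
  --name AllowGatewayUdp \
  --priority 100 \
  --direction Inbound --access Allow --protocol Udp \
  --destination-port-ranges 1700 \
  --source-address-prefixes 203.0.113.42
```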


I started to put together an FAQ in the GitHub repository so, hopefully, your most obvious questions are already answered there. However, there is one that I thought was worth calling out in this post: How big of a fleet can I connect?


It turns out that even a reasonably small VM like the one used in the deployment template—2 vCPUs, 4GB of RAM—can already handle thousands of nodes, and hundreds of gateways. You may find this LoRaWAN traffic simulation tool that I wrote helpful in case you’d want to conduct your own stress testing experiments.


 


What’s next?


 


You should definitely expect more from me when it comes to other LoRaWAN related articles in the future. From leveraging DTDL for simplifying end application development and interoperability with other solutions, to integrating with Azure IoT services, there’s definitely a lot more to cover. Stay tuned, and please let me know in the comments of other related topics you’d like to see covered!






Deploy an End-to-End Azure Synapse Analytics and Power BI Solution using CMS Medicare Data


For many people, hands-on experience is often the best way to learn and evaluate data tools. I’ve been working with a colleague from our Azure team, Kunal Jain, to put together an end-to-end Azure Synapse and Power BI solution using 120+ million rows of real CMS Medicare Part D Data that is available for use in the public domain. If you’re not highly technical, and you’ve never used Azure or Power BI before, you can still deploy this solution with a few simple steps using an Azure ARM template. We also provide a video to walk you through a paint-by-numbers tutorial. The Azure ARM template will automatically:



  • Create Azure Data Lake, Azure Data Factory, and Azure Synapse

  • Pull the raw data from CMS into a Data Lake using Azure Data Factory

  • Shape the data and create a dimensional model for Azure Synapse

  • Deploy the solution to Azure Synapse, including performance tuning settings


The entire process takes about an hour and a half to run, with most of that time spent waiting for the scripts from the ARM template to finish. Once deployed, another video walks you through the steps of connecting it to a pre-built Power BI report template. The whole process should take about 1-2 hours with no code required, and at the end you can review and evaluate an end-to-end Azure and Power BI solution using real CMS Medicare Part D Data:
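For readers who prefer the command line, an ARM template like this one can typically also be deployed with the Azure CLI. The resource group name, location, and template file name below are placeholders; the actual template is in the GitHub repository linked in this post:

```shell
# Illustrative only: create a resource group and deploy the ARM template to it.
az group create --name synapse-demo-rg --location eastus
az deployment group create \
  --resource-group synapse-demo-rg \
  --template-file azuredeploy.json
```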



Here is a link to the GitHub site: https://github.com/kunal333/E2ESynapseDemo 


 


While the source CMS data is real public Healthcare data, the intent of this project is to provide you with a simple end-to-end solution for the purposes of learning, demos, and tool evaluation. We intend to enhance and build upon this solution in the future, but it is not a supported solution intended to be used for production purposes. 


 


Below is a tutorial video for deploying the solution. Note that this is designed to be low code with only a few things to cut and paste. All you need is an Azure account, and you can pause the Synapse instance or delete the entire Resource Group at any time. There are also simple instructions on the GitHub page:


 


Here is another tutorial video describing the process by which to connect the Power BI Template file containing the business logic:


 


The following diagram summarizes the steps of the whole process:


Source to Target.png


Azure Data Factory, along with Power BI, creates the following logical model that enables highly performant end user queries for complicated questions about the data. Notice that a CSV file is also added to the Power BI layer to demonstrate that custom criteria from a business user can be used to query the Synapse data:


Logical Model.png


 Calculations have been added to the Power BI Semantic Layer to enable complex analytics such as Pareto Analysis: 


Calculations.png


 Below is a screenshot of the pre-built Power BI report:


Dashboard Image.png


 


More information about the solution is available at the GitHub site: https://github.com/kunal333/E2ESynapseDemo 


 


If you deploy this solution, we’d appreciate it if you could take the time to provide some feedback. What was your experience with the ARM template? How do you plan to use this solution? What types of similar solutions can we provide in the future that would be valuable?