#DevDecember Week 3 Recap: Inspiration


Blog-images_week3.png

 

This week in #DevDecember, we had the hard job of selecting just a handful of sources of inspiration from a big grab bag of options.

 

Slide_20_1.png

You’ll find out what we went for below, but there were so many other exciting things we could have featured, so we hope you will add the things you prized most this year. What projects impressed you? What tools did you like using? Use our fill-in-the-blank to share what motivated and propelled you in 2020, tagged #DevDecember. Thanks in advance for the inspiration.

 

Now, let’s review what we featured in week 3: 

IoT and Azure help with family chores  

With everyone at home, dishes pile up faster, garbage accumulates, and chores must be done more frequently. To keep track in real time of which chores need to be completed and by when, Scott Hanselman built an IoT solution using sensors, a web-based heatmap, and notifications.

Check out how to build Chores IoT 

 

The Developer Activity Book  

Taking your mind off what you’re working on is sometimes the best way to get inspiration. The Developer Activity Book features family-friendly fun, including seven coloring pages, a crossword puzzle, a word search, and a logic puzzle.  

Get the Developer Activity Book 

 

2020 Imagine Cup World Championship 

The Imagine Cup World Championship encourages students across the globe to innovate using Microsoft Azure. Finalist teams created technological solutions to tackle pressing global issues. Get inspired by their passion and the incredible projects they created. 

Watch the recap (3 minutes) 

 

Plastic Origins Project 

Most of the plastic that ends up in the oceans comes from inland sources. In this episode of CodeStories, Seth Juarez shows how developers can use AI to address this problem by monitoring microplastics and tracking ocean-bound plastic waste.

Watch the video (13 minutes) 

 

Next up is week 4 of #DevDecember. We’ll cover one of our favorite topics: the projects and technologies we’re most looking forward to in 2021.

 

Meanwhile, we’re hoping you’ll upload a #DevWithABev selfie if you haven’t already. What’s this about? Simply put, devs take a picture of themselves with a bev. Hot or cold, commonplace or original, we like it all. Share and tag your pic #DevWithABev. 

 

Missed some or all of #DevDecember? Catch up on our #DevDecember homepage for more info! 

Aggregated Views of Service Health Across Partner-managed Tenants


Admins who manage multiple Microsoft 365 tenants have told us that one of their top challenges is understanding the impact of service incidents and advisories across all their managed tenants. Signing into tenants individually and navigating through Service Health to understand the issues requires additional time and ultimately degrades the quality of end-user support by making it reactive instead of proactive.


 


To address this challenge, we created an experience for these admins that includes aggregated views of service health incidents and advisories across multiple tenants. This view is designed to help admins who support multiple tenants proactively support their customers. The multi-tenant Service health view combines all incidents and advisories affecting your managed tenants and lets you monitor the health of Microsoft 365 services across them. You can see how many, and which, tenants are affected by a specific incident or advisory, which helps you ascertain the impact across tenants and proactively communicate with the affected users.


 


The landing page in the new experience is a “start your day” view that provides an aggregated view of service health. It shows you the total number of incidents, the total number of advisories affecting any of the managed tenants, and the number of services with active incidents. You can also see how many of your tenants are affected by incidents and advisories.


 


Frame 1.png


 


When you select an incident or advisory, a flyout provides important details about its cause and current status.


 


Frame 2.png


 


The Tenants affected tab lists the tenants that are currently affected by the selected incident. This enables you to take actions, such as informing your customers, sharing workarounds, and the like. The list of affected tenants can be exported to CSV format so that admins can share it with support teams.


 


Frame 3.png


 


This new experience also includes filter options that can be used to view issues by issue type or by service.


 


Frame 4.png


 


We’re excited to release this feature to everyone, but initial support will be limited to partner admins with delegated administration privilege (DAP) relationships for managing multiple tenants. See How to request a reseller relationship from a customer in Partner Center for information about delegated administration privileges for partners. If you’re a partner admin, we’d love to hear your thoughts on this new experience. Feel free to let us know using the built-in feedback in the Microsoft 365 admin center.


 


–The Release and Delivery Experiences Team

Cozy up in one of three Microsoft Lists sweater backgrounds for Microsoft Teams


It might get cold as you make your lists, and check ‘em twice. So, we suggest wearing one of three Microsoft Lists-themed winter holiday sweaters to back you up and keep you warm – in Microsoft Teams, or to use as wallpaper on your desktop.


 


Make your information tracking festive and nice! Grab the PNGs below and follow the how-to link further down.


 


Background-Sweater-Lists-abstract.png


Download Microsoft Lists holiday sweater background – abstract.


 


Background-Sweater-Lists-Hannukah.png


Download Microsoft Lists holiday sweater background – Hanukkah.


 


Background-Sweater-Lists-Christmas.png


Download Microsoft Lists holiday sweater background – Christmas.


 


Learn how to change your background for a Microsoft Teams meeting.


 


Special thanks to our design team for creating the fun. The backgrounds pair well with the Lists app in Teams.



Get more from the custom backgrounds gallery for Microsoft Teams and the Microsoft backgrounds creators hub.



Stay safe and happy holidays,
Ben Truelove

Analyze and explore data with T-SQL in Azure Synapse Analytics


Azure Synapse Analytics brings the worlds of data integration, big data, and enterprise data warehousing together into a single service for end-to-end analytics—at cloud scale. In this blog post we will focus on using T-SQL to explore and analyze data.


 


Every Synapse workspace includes a built-in serverless SQL pool designed to enable quick exploration of data stored in the data lake with pay-per-query pricing. You can also provision one or more dedicated SQL pools in your workspace, which let you run high-performance analytics against data stored in tables with columnar storage.


 


IgorStanko_0-1607979639476.png


 


Both serverless and dedicated SQL pools can be used within the same Synapse workspace, providing the flexibility to choose one or both options to cost-effectively manage your SQL analytics workloads. With Azure Synapse, you can use T-SQL to directly query data within a data lake for rapid data exploration and take advantage of the full capabilities of a data warehouse for more predictable and mission-critical workloads. With both query options available, you can choose the most cost-effective option for each of your use cases, resulting in cost savings across your business.


 


This post explores the two consumption choices available for running analytics with Synapse SQL (serverless and dedicated SQL pools) and examines the power and flexibility Azure Synapse provides when both are used to execute T-SQL workloads. In addition, we will explore options to control cost when using both models.


 


Consumption choices


 


1. Serverless SQL pool


To enable easy exploration and transformation of data in your data lake, you can use a serverless SQL pool. You can directly query external files stored in Azure Storage without copying or loading data into a specialized store, all using the familiar T-SQL dialect. A default, built-in endpoint for this service is provided within every Synapse workspace, and you can start querying data as soon as your workspace is created.


 


As the name implies, the serverless SQL pool is serverless, which means there is no infrastructure to manage, no upfront costs, and no resource reservations required to begin using the service. The serverless SQL pool follows a pay-per-use model, so you are only charged for the data processed by the queries you run.


 



  • Import sample dataset from Knowledge Center


To demonstrate how easy it is to start with querying data in the data lake using a serverless SQL pool, let’s import some sample data from the Knowledge Center gallery and run some queries to explore the dataset. First, select the Data Hub from the left-hand menu in Synapse Studio, select the + (Add) icon on the Data Hub, and then select Browse gallery.


 


IgorStanko_1-1607979639499.png


The Browse gallery link on the Data Hub’s add button menu is highlighted.


 


On the Gallery page, select the Datasets tab, choose the Bing COVID-19 Data sample, and then select Continue.


 


IgorStanko_2-1607979639508.png


Within the Knowledge Center, the Datasets tab is selected, and the Bing COVID-19 Data sample dataset is highlighted.


 


On the Bing COVID-19 Data page, you have the opportunity to preview the sample data. Select Add dataset to initiate a background process that imports the Open Dataset and creates a Linked Service for it within Synapse Studio.


 


IgorStanko_3-1607979639524.png


The Bing COVID-19 Data sample dataset is displayed in the Knowledge Center preview page.


 


You can find the Linked Service for the downloaded sample dataset on the Linked tab within the Data Hub. It is named bing-covid-19-data and can be found under Azure Blob Storage, within the Sample datasets container.


 


IgorStanko_4-1607979639535.png


On the Linked Services tab of the Data Hub, the bing-covid-19-data folder is highlighted under the Azure Blob Storage Sample Datasets container.


 



  • Query files in your data lake using a serverless SQL pool


Next, we want to open the dataset using a T-SQL query. After locating the Linked Service for the Bing COVID-19 sample dataset, select the Actions ellipsis next to it to display the actions menu.


 


IgorStanko_5-1607979639541.png


The Actions menu ellipsis is highlighted next to the bing-covid-19-data sample dataset.


 


In the Actions menu, select New SQL script and then Select TOP 100 rows.


 


IgorStanko_6-1607979639549.png


On the Actions menu for the sample dataset, New SQL script and Select TOP 100 rows are highlighted.


 


The previous action results in a new SQL script tab opening within Synapse Studio. This script contains the auto-generated T-SQL code for reading the top 100 rows of data from the bing_covid-19_data.parquet file located in Azure Storage. The query uses the OPENROWSET(BULK…) function to access files within the data lake.


 


Azure Synapse uses the OPENROWSET(BULK…) function to enable accessing files in Azure Storage. The OPENROWSET function reads the contents of an external data source and returns its content as a set of rows. The OPENROWSET function is referenced in the FROM clause of your query, allowing it to act as if it were a table named OPENROWSET. A built-in BULK provider enables data from files to be read and returned as a rowset.
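
For reference, the auto-generated query is typically a simple SELECT over OPENROWSET, similar to the sketch below (the BULK path here comes from the public sample dataset; the path generated in your workspace may differ):

-- Reads the first 100 rows directly from the Parquet file in the data lake.
SELECT
    TOP 100 *
FROM
    OPENROWSET(
        BULK 'https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.parquet',
        FORMAT = 'parquet'
    ) AS [result];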


 


To query data in the data lake using a serverless SQL pool, ensure that your serverless SQL pool, named Built-in, is selected in the Connect to box on the SQL script tab’s toolbar.


 


IgorStanko_7-1607979639551.png


The auto-generated T-SQL script is displayed, and the Connect to textbox containing Built-in is highlighted.


 


Select Run on the SQL script tab’s toolbar to execute the query using the serverless SQL pool resource.


 


IgorStanko_8-1607979639560.png


The Run button is highlighted on the SQL script toolbar, and the results of the query execution are displayed in the Results pane.


 



  • Execute GROUP BY query with serverless SQL pool


The previous query demonstrates how quickly you can query and explore data contained within files stored in Azure Storage. Now, let’s execute a query to perform some basic transformations on the sample dataset and calculate the average number of new daily confirmed cases and deaths by region.


 


Modify the code in the SQL script tab by making the following changes:


– Replace TOP 100 * with Country_region AS Region, AVG(Confirmed_change) AS AverageDailyConfirmedCases, AVG(Deaths_change) AS AverageDailyDeaths.


– Append GROUP BY Country_region to the end of the query, before the ; after [result].


 


Your final query should look similar to the following:


SELECT
    Country_region AS Region, AVG(Confirmed_change) AS AverageDailyConfirmedCases, AVG(Deaths_change) AS AverageDailyDeaths
FROM
    OPENROWSET(
        BULK 'https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.parquet',
        FORMAT = 'parquet'
    ) AS [result] GROUP BY Country_region;


 


As you did previously, execute the updated query by selecting Run on the toolbar and observing the Results pane.


 


IgorStanko_9-1607979639567.png


The Run button is highlighted on the SQL script toolbar, and the updated SELECT and added GROUP BY statements are highlighted within the query. The results of the transformation query are displayed in the Results pane.


 


The serverless SQL pool supports the whole range of GROUP BY options, giving you the ability to aggregate data stored in external files into a summary set of rows.
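
As a quick illustration (a sketch, not part of the generated script), the same file can be aggregated with GROUP BY ROLLUP to add an overall average alongside the per-region averages:

SELECT
    Country_region AS Region,
    AVG(Confirmed_change) AS AverageDailyConfirmedCases
FROM
    OPENROWSET(
        BULK 'https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.parquet',
        FORMAT = 'parquet'
    ) AS [result]
GROUP BY ROLLUP (Country_region); -- the row where Region is NULL holds the overall average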


 



  • Create an external table


We will end our overview of the serverless SQL pool by creating a serverless SQL pool database and defining an external table. The serverless SQL pool is an interactive query service over external data in Azure Storage. As such, you cannot create regular tables in serverless SQL pool databases. You can, however, define external tables using the CREATE EXTERNAL TABLE T-SQL command. External tables point to data located in Hadoop, Azure Blob Storage, or Azure Data Lake Storage. For the serverless SQL pool, you can use external tables to query data in Azure Storage using T-SQL statements. You can also use external tables to write serverless SQL pool query results to files in Azure Storage.
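
Writing results to storage uses the CREATE EXTERNAL TABLE AS SELECT (CETAS) form of the command. The sketch below is illustrative only: it assumes you run it in a serverless SQL pool database (not master) and that you have already created a Parquet file format named SynapseParquetFormat and a data source named MyWritableDataSource pointing to a container your workspace identity can write to.

-- Sketch: materialize aggregated results as Parquet files in your own storage account.
CREATE EXTERNAL TABLE dbo.CovidRegionalSummaryExport
WITH (
    LOCATION = 'output/covid-summary/',   -- folder created under the data source
    DATA_SOURCE = MyWritableDataSource,   -- assumed: writable external data source
    FILE_FORMAT = SynapseParquetFormat    -- assumed: Parquet external file format
)
AS
SELECT
    Country_region AS Region,
    AVG(Confirmed_change) AS AverageDailyConfirmedCases
FROM
    OPENROWSET(
        BULK 'https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.parquet',
        FORMAT = 'parquet'
    ) AS [result]
GROUP BY Country_region;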


 


As you did earlier, select the Actions menu ellipsis for the bing-covid-19-data Linked Service, but this time select New SQL script and Create external table from the menu.


 


IgorStanko_10-1607979639575.png


On the Actions menu for the sample dataset, New SQL script and Create external table are highlighted.


 


On the Create external table dialog, ensure Built-in is selected in the SQL pool dropdown, then select Select a database and select + New.


 


IgorStanko_11-1607979639593.png


On the Create external table dialog, New is highlighted under Select a database.


 


On the Create SQL database dialog, enter ServerlessSql in the Name field and select Create.


 


IgorStanko_12-1607979639599.png


ServerlessSql is entered into the Name box and highlighted on the Create SQL database dialog.


 


Back on the Create external table dialog, enter BingCovidData into the External table name field and select Create.


 


IgorStanko_13-1607979639617.png


BingCovidData is entered into the External table name field, and the Create button is highlighted.


 


A new T-SQL script tab will open in Synapse Studio with the code to create an external table from the data lake file. This auto-generated script contains the three steps necessary to create an external table using the serverless SQL pool (a representative sketch follows the list):


– Create an external file format to specify the actual layout of the data referenced by the external table.


– Create an external data source, which is used to connect to your storage account.


– Create an external table that enables Azure Synapse to access data stored in Azure Storage.
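
A representative sketch of the generated script is shown below; the data source name, file format name, and column list are illustrative, and the script generated in your workspace will use its own names and the file’s full schema.

-- 1. External file format describing the Parquet layout.
CREATE EXTERNAL FILE FORMAT [SynapseParquetFormat]
    WITH (FORMAT_TYPE = PARQUET);

-- 2. External data source pointing at the storage container that holds the file.
CREATE EXTERNAL DATA SOURCE [pandemicdatalake_public]
    WITH (LOCATION = 'https://pandemicdatalake.blob.core.windows.net/public');

-- 3. External table over the Parquet file (columns shown are a representative subset).
CREATE EXTERNAL TABLE dbo.BingCovidData (
    [updated] DATE,
    [country_region] NVARCHAR(200),
    [confirmed_change] INT,
    [deaths_change] INT
)
WITH (
    LOCATION = 'curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.parquet',
    DATA_SOURCE = [pandemicdatalake_public],
    FILE_FORMAT = [SynapseParquetFormat]
);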


 


IgorStanko_14-1607979639631.png


The three steps, (1) create an external file, (2) create an external data source, and (3) create an external table, are highlighted within the auto-generated T-SQL code.


 


The script also contains a SELECT TOP 100 statement to query the newly created external table. Before running the query, verify that the Connect to box is set to Built-in and note the value in the Use database dropdown. It is set to the new ServerlessSql database you created while generating the external table script.


 


Select Run from the toolbar to execute the script, which creates and then queries the BingCovidData external table.


 


IgorStanko_15-1607979639646.png


Built-in is highlighted in the Connect to box, and the new ServerlessSql database is selected in the Use database dropdown. The Run button is highlighted on the toolbar.


 


The newly defined database and the external table will now be accessible on the Data Hub’s Workspace tab, under Databases. From here, you can query the table just like you would any table in a SQL database.


 


IgorStanko_16-1607979639653.png


On the Data Hub’s Workspace tab, the ServerlessSql database and External tables are expanded, and the dbo.BingCovidData table is highlighted.


 


You may need to select the Refresh button in the top right corner of Synapse Studio to see the database.


 


With each of these queries, as with all queries executed using a serverless SQL pool, you are only charged for the data processed by queries you run. This pay-per-use model helps provide a cost-effective way to explore data and execute ad-hoc queries against external data.


 


2. Dedicated SQL pool


To use the enterprise data warehousing features available in Azure Synapse, provision and use dedicated SQL pools. Data is stored in tables with columnar storage, significantly reducing data storage costs and improving query performance. Once the data is stored, you can run analytics at massive scale. Compared to traditional database systems, queries complete in seconds instead of minutes, or in hours instead of days.


 


After you create a dedicated SQL pool in your Synapse workspace, you can import big data with the simple-to-use COPY statement and then use the power of the distributed query engine to run high-performance analytics.
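
As a hedged illustration of the COPY statement (not part of this walkthrough), the sketch below assumes a staging table named dbo.StagingBingCovidData with columns matching the Parquet file already exists in the dedicated SQL pool and that the source container is publicly readable; otherwise, add a CREDENTIAL clause.

-- Load the Parquet file from the public container into an existing staging table.
COPY INTO dbo.StagingBingCovidData
FROM 'https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.parquet'
WITH (
    FILE_TYPE = 'PARQUET'
);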


 



  • Add a dedicated SQL pool


There are two paths you can use to add a dedicated SQL pool to your Synapse workspace. You can create one via the Azure portal or from within Synapse Studio.


From the Azure portal, select the New dedicated SQL pool link from your Synapse workspace blade.


 


IgorStanko_17-1607979639658.png


The new dedicated SQL pool button is highlighted on the toolbar of a Synapse workspace blade in the Azure portal.


 


To add a dedicated SQL pool to your Synapse workspace from within Synapse Studio, navigate to the Manage Hub, select SQL pools, and then select the New button.


 


IgorStanko_18-1607979639663.png


In Synapse Studio, the Manage Hub and SQL pools are selected, and the New button is highlighted.


 


The Manage Hub in Synapse Studio allows you to view and manage your SQL pools.


From that point, the process for creating the new dedicated SQL pool is the same for both paths. On the Create dedicated SQL pool Basics tab, you specify a name for your dedicated SQL pool and choose your desired performance level.


 


IgorStanko_19-1607979639667.png


The Create dedicated SQL pool dialog is displayed, with DedicatedSql entered into the SQL pool name field.


 


The performance level can range from DW100c to DW30000c. To learn more about resource consumption, check out the Synapse SQL resource consumption article in the Azure Synapse Analytics documentation.


 


You can then select Review + create, which accepts the default values on the Additional settings tab. Then, select Create to provision your dedicated SQL pool.


 


IgorStanko_20-1607979639673.png


A summary of the configure settings for the dedicated SQL pool is displayed, and the Create button is highlighted.


 



  • Create an external table


As with the serverless SQL pool, you can create external tables in a dedicated SQL pool to provide pointers to external data. As you did previously for the serverless SQL pool queries, navigate to the Linked tab of the Data Hub, select the Actions menu ellipsis for the bing-covid-19-data Linked Service, then select New SQL script and Create external table from the menu.


 


IgorStanko_21-1607979639681.png


On the Actions menu for the sample dataset, New SQL script and Create external table are highlighted.


 


On the Create external table dialog, select your dedicated SQL pool, DedicatedSql, in the Select SQL pool dropdown, and then enter BingCovidData as the external table name, and select Create.


 


IgorStanko_22-1607979639697.png


The create external table dialog is displayed with the dedicated SQL pool selected and BingCovidData entered into the external table name field.


 


Select Run on the toolbar to execute the script and create the external table. When the script execution completes, select the Workspace tab within the Data Hub and expand DedicatedSql and External Tables under databases to view the new external table.


 


IgorStanko_23-1607979639705.png


The BingCovidData external table is highlighted in the DedicatedSql database on the Data Hub.


 


Select the Actions menu ellipsis next to BingCovidData and select New SQL script and Select TOP 100 rows.


 


IgorStanko_24-1607979639715.png


Select top 100 rows is highlighted in the Actions menu.


 


Execute the query by selecting Run from the toolbar, as you have done previously, to view the table’s data.


 



  • Create table in the data warehouse


The simplest and fastest way to create a table in the data warehouse is to use the CREATE TABLE AS SELECT (CTAS) statement. CTAS is one of the most important T-SQL features available in dedicated SQL pool. It creates a new table based on the output of a SELECT statement.


 


To demonstrate how to use the CTAS statement, let’s use it to copy aggregated data from the BingCovidData external table into a new table in the data warehouse. Select the Actions menu ellipsis next to the DedicatedSql (SQL) database, and then select New SQL script and Empty script.


 


In the empty SQL script pane, paste the following code:


CREATE TABLE [dbo].[CovidRegionalDailySummary]
WITH
(
    DISTRIBUTION = ROUND_ROBIN,
    CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT
    Country_region AS Region,
    AVG(Confirmed_change) AS AverageDailyConfirmedCases,
    AVG(Deaths_change) AS AverageDailyDeaths
FROM [dbo].[BingCovidData]
GROUP BY Country_region


 


With CTAS, we specify the new table name with the CREATE TABLE statement and can also specify both the distribution of the table data and the table structure type. We complete the statement with the AS SELECT statement and specify the source table in the FROM clause.
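
For example (an illustrative variant, not part of the walkthrough), if downstream joins or aggregations key on Region, hash-distributing the table on that column may reduce data movement:

CREATE TABLE [dbo].[CovidRegionalDailySummary_ByRegion]
WITH
(
    DISTRIBUTION = HASH(Region),      -- co-locate rows with the same Region on one distribution
    CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT
    Country_region AS Region,
    AVG(Confirmed_change) AS AverageDailyConfirmedCases,
    AVG(Deaths_change) AS AverageDailyDeaths
FROM [dbo].[BingCovidData]
GROUP BY Country_region;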


 


Execute the pasted script to create the CovidRegionalDailySummary table in your data warehouse. You can then find the new table under Tables in the DedicatedSql (SQL) database on the Data Hub’s Workspace tab. Select the table’s Actions menu ellipsis and create a new Select TOP 100 rows query.


 


IgorStanko_25-1607979639721.png


The CovidRegionalDailySummary table is highlighted under Tables in the dedicated SQL pool.


 


Execute the query and observe the results.


 


IgorStanko_26-1607979639725.png


The results of the new aggregated data table are displayed.


 


Cost control


The ability to use serverless and dedicated SQL pools grants you flexibility in controlling costs, and the separation of storage and compute allows you to maintain better cost control for your resources. There are also additional cost control options available from the Manage Hub in Synapse Studio.


 


1. Serverless SQL pool


Serverless SQL pool is a pay-per-query service that automatically adjusts based on your requirements. The service frees you from managing infrastructure and from needing to pick the right size for your solution.


 


The cost control feature for a serverless SQL pool enables you to set one or more budgets for the amount of data (in TB) that can be processed per day, week, or month. To configure cost control for the serverless SQL pool in Synapse Studio, navigate to the Manage Hub and select SQL pools under Analytics pools. As you hover over the serverless SQL pool, the cost control icon becomes visible. Select this icon to open the Cost Control dialog.


 


IgorStanko_27-1607979639728.png


The Cost Control icon is highlighted for the serverless SQL pool.


 


On the Cost Control dialog, you can select the workspace budget limit, depending on your requirements.


 


IgorStanko_28-1607979639739.png


The Cost Control dialog and its various options are displayed.
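
The same budgets can also be set with T-SQL. The sketch below assumes the sp_set_data_processed_limit procedure is available for your serverless SQL pool (run it in the master database); treat the procedure name, parameters, and values as an assumption and check the current documentation.

-- Illustrative limits: cap serverless processing at 1 TB/day, 7 TB/week, and 30 TB/month.
EXEC sp_set_data_processed_limit @type = N'daily',   @limit_in_tb = 1;
EXEC sp_set_data_processed_limit @type = N'weekly',  @limit_in_tb = 7;
EXEC sp_set_data_processed_limit @type = N'monthly', @limit_in_tb = 30;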


 


2. Dedicated SQL pool


You control costs for your dedicated SQL pool by pausing it when it is not in use or by scaling it down.


 


To pause your dedicated SQL pool in Synapse Studio, navigate to the Manage Hub and select SQL pools under Analytics pools. As you hover over the dedicated SQL pool, the pause icon becomes visible. Select this icon to pause the dedicated SQL pool.


 


IgorStanko_29-1607979639742.png


The pause icon is highlighted for the dedicated SQL pool.


 


To scale the dedicated SQL pool in Synapse Studio, navigate to the Manage Hub and select SQL pools under Analytics pools. As you hover over the dedicated SQL pool, the scale icon becomes visible. Select this icon to open the Scale dialog.


 


IgorStanko_30-1607979639745.png


The scale icon is highlighted for the dedicated SQL pool.


 


On the Scale dialog, select the desired performance level and select Apply.


 


IgorStanko_31-1607979639751.png


The Scale dialog for the dedicated SQL pool is displayed.
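
If you prefer to script the operation, a dedicated SQL pool can also be scaled with T-SQL. The sketch below assumes a pool named DedicatedSql and is run against the master database of the workspace SQL endpoint; DW200c is an illustrative target level.

-- Scale the dedicated SQL pool to a new performance level.
ALTER DATABASE [DedicatedSql]
MODIFY (SERVICE_OBJECTIVE = 'DW200c');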


 


Conclusion


This post discussed using both serverless and dedicated SQL pools to perform queries and analysis on your data. The ability to use dedicated and serverless SQL pools within a single Synapse workspace empowers you to choose the most cost-effective option for each of your analytics workloads.


 


Try out this tutorial today by creating an Azure Synapse workspace with a dedicated SQL pool.



IgorStanko_32-1607979639752.jpeg


 

Understanding High-Performant, Hybrid Data Integration Using Azure Data Factory | Data Exposed


Learn how you can load data into Azure from more than 90 source connectors in Azure Data Factory. Azure Data Factory is a serverless data integration service that supports hybrid (on-premises/multi-cloud) data integration in a code-free approach. In this episode, Abhishek Narain demonstrates how you can easily get started with loading on-premises data into Azure SQL Database in just a few clicks.


 


Want more ADF? Don’t miss our Data Exposed Special, Around the Clock with Azure SQL and Azure Data Factory on February 3, 2021 at 9:00 AM PT. Register today: https://aka.ms/aroundtheclock


 


Watch on Data Exposed


 


View/share our latest episodes on Channel 9 and YouTube!