by Scott Muniz | Aug 21, 2020 | Uncategorized
This article is contributed. See the original author and article here.
To enable powerful meeting experiences across the hybrid workplace, we’ve been working to deliver new features and experiences across our shared space devices. Updates for these device experiences are available now on the Windows store – Microsoft Teams Rooms app version 4.5.37.0, collaboration bar app version 1.0.94.2020062501, and Surface Hub version 0.2020.84.1970.
Included below are additional details for new experiences enabled through these app releases:
Coordinated Meetings for Microsoft Teams Rooms and Surface Hub
With Coordinated Meetings, people can drive inclusive, collaborative meeting experiences between remote and in-person attendees like never before. Now, users can leverage the power of both Microsoft Teams Rooms and Surface Hub devices in the same meeting, through a coordinated experience. Using proximity or one-touch join, both devices join the meeting simultaneously with Teams Rooms running audio and video, while Surface Hub is automatically muted to avoid any distracting feedback. During the meeting, users can maximize screen real estate by using the front of room display to show attendees in the meeting gallery, while the Surface Hub is used to show content or to conduct a collaborative whiteboarding session. With the whiteboard experience on Surface Hub and Microsoft Whiteboard in Teams, people can draw and ink together on the same savable canvas, no matter their location. For more information on how to enable this coordinated meeting experience, read our guidance.
Coordinated meetings with Surface Hub and Microsoft Teams Rooms
Pin attendees on Surface Hub
Control the view in-room attendees see on the Surface Hub with the pin feature. Like the Teams meeting experience on the PC, users can pin a participant’s video to the gallery view, holding the view in place regardless of who is talking. This feature will begin rolling out later this month.
Microsoft Teams video meeting on a Surface Hub featuring a pinned video
Proximity-based meeting join, now available for Surface Hub
The simplicity of proximity-based meeting join for Microsoft Teams Rooms is now available on Surface Hub. Using proximity join, a person’s laptop or mobile device can be used to identify a nearby, available Surface Hub and add it to the meeting from the pre-join screen. The personal device will automatically join the meeting in content-only mode with mics and speakers muted, allowing the Surface Hub to provide audio and video for the meeting, without echo or feedback. By default, an invite to the meeting will appear on the device. However, customers can change the settings on the Surface Hub to enable auto accept.
Microsoft Teams pre-join screen with Surface Hub available as another join option
Live captions are now available for collaboration bars
Live captions make meetings more inclusive by providing an alternative way for attendees to follow along and engage in the conversation. For people in noisy environments or with different levels of language proficiency, live captions can help them follow along more easily. To turn on live captions, click on the option located in the navigation bar.
Microsoft Teams video meeting featuring live captioning
Switch to video gallery when content is present
Microsoft Teams Rooms and collaboration bars can now switch to the video gallery layout, even when content is being shared by a remote participant or locally using HDMI ingest. This functionality is available to all Teams Rooms devices configured with a single display (TV or projector) in the room. This layout is useful for moments when customers want to focus on participants in the meeting or for room configurations with always-on external devices connected via HDMI Ingest. We look forward to delivering additional layouts in the future.
Change layout selection window on the Microsoft Teams console
Raise hand support for Microsoft Teams Rooms, collaboration bar, and Surface Hub
We announced the Raise Hand feature for Teams desktop in June and are excited to announce that it is now available on Teams Rooms, collaboration bar, and Surface Hub. During a meeting, participants can raise a virtual hand to let people know they want to contribute without interrupting the conversation. To engage this feature, just tap Raise Hand in the meeting controls. Meeting participants will see a notification on the display and the console that a hand is raised. Once the participant is done speaking, they can tap on the control again to lower their hand.
Raise Hand view on the Microsoft Teams Rooms console (left) and on the Surface Hub (right)
Adjust default volume for in-room speakers
Microsoft Teams Rooms support a wide variety of audio peripherals that can be used across all space sizes. This ability to customize based on spatial needs requires better control over default volume levels for audio devices. Device administrators can now set the default volume level for conferencing and default speaker from the Teams Rooms settings.
Settings view for adjusting the default volume for Teams Rooms devices
Manage Teams Devices from Teams Admin Center
Microsoft Teams recently rolled out expanded device management capabilities within the Teams Admin Center. Customers will now be able to manage their Teams Rooms in the same place as their collaboration bars and Teams phones. Additionally, we added new features enabling IT admins to simplify, customize, and automate Teams device management. Admins can now perform all the device management tasks, including enrollment, inventory management, software updates, and inspecting device health from a single location. For customers who rely on partners for device management, they can now delegate administration directly through the Partner Center, while ensuring secure authorized device access. Read through our recent blog post for more details on these announcements, and for information on how to begin managing devices through Teams Admin Center, click here.
Teams Admin Center console featuring Teams Rooms devices
Manage Microsoft Teams settings on Surface Hub via XML file
Customers can now manage Microsoft Teams settings on a Surface Hub using Windows Configuration Designer or Microsoft Intune in Microsoft Endpoint Manager, so Surface Hub customers can find and manage their Teams settings in one place.
Until now, Update Staging Lab could detect regressions based solely on changes in the crash/hang behavior of your apps for upcoming security updates. This month’s update to the Update Staging Lab service includes changes to our algorithm to help you more clearly notice significant memory usage increases in the test VMs running your apps. Performance metrics, such as memory usage, can be indicative of overall application health and we believe this addition will greatly help keep your apps performing optimally.
Read on for more details or watch this video for a quick walkthrough of the latest improvements. For more information on Update Staging Lab’s ability to help with regression analysis, see Regression results based on process reliability.
Looking closer at memory regressions
The Update Staging Lab dashboard shows the memory consumed by your application on a new pre-released Windows update and compares it with the memory used on the last released Windows update. With this month’s enhancements, memory regression analysis is now performed on your favorited processes. Applications can contain multiple processes, and you can manually select your favorite processes through the Reliability tab. Our service will then identify memory regressions in these favorited processes while comparing test runs across different Windows update releases. If a regression is detected, details about the regression are easily available.
Now let’s look at this feature in detail—and discuss how you can troubleshoot memory regressions using Windows Performance Analyzer.
The failure signal caused by a memory regression is shown in the Update Staging Lab dashboard on the Test results page under Memory Utilization:
Memory utilization displays Fail because of high memory consumption
Failure for the application due to higher memory consumption, will also be displayed as Fail on the Test Summary page:
The Test Summary page displays failure because of memory regression
By providing these failure signals upfront, our goal is to clearly flag potential issues that can disrupt and impact the end user experience for your application. You can then download the log files and use the Windows Performance Analyzer, or your preferred toolkit, to investigate further. You can also work jointly with the Update Staging Lab team on remediating the issue and help prevent issues impacting end users.
Memory signals are captured in the Memory Utilization tab in the Update Staging Lab service for all test runs. The example below shows a recent test run with the onboarded application “Smoke Test Memory Stress” against the pre-release August 2020 security update. (This application was written by our team to illustrate memory regressions.)
Memory regression comparison for all processes and favorited processes
In this example, the favorited process “USLTestMemoryStress.exe” consumed an average of approximately 100 MB more on the pre-release August update than on the released July update, so Update Staging Lab identified a regression. The other processes, shown here as “USLTestMemoryStress_Aux1.exe” and “USLTestMemoryStress_Aux2.exe”, also belong to the same application, but consumed approximately the same amount of memory on the two releases, so they “passed” and were considered healthy.
The regression on the main process was determined to be statistically significant, so the service highlighted this difference to the user. If the comparison were not statistically significant, it would not be highlighted. Because memory utilization can be noisy, we use statistical models to distinguish meaningful differences from inconsequential ones across builds and releases. A comparison may rarely be flagged when there is no true difference (a false positive), but this is a necessary tradeoff to improve the likelihood of correctly identifying regressions (true positives).
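Microsoft has not published the actual model the service uses, but as a rough illustration of the kind of comparison involved, a simple standard-error check on memory samples from two test runs might look like this (thresholds and sample values are made up):

```python
import math
from statistics import mean, stdev

def flag_regression(baseline_mb, target_mb, threshold=2.0):
    """Flag a memory regression when the mean increase between two runs
    exceeds `threshold` standard errors (a rough significance check)."""
    diff = mean(target_mb) - mean(baseline_mb)
    # Standard error of the difference between the two sample means
    se = math.sqrt(stdev(baseline_mb) ** 2 / len(baseline_mb)
                   + stdev(target_mb) ** 2 / len(target_mb))
    return diff > threshold * se, diff

# Illustrative working-set samples (MB) from the July (baseline)
# and pre-release August (target) test runs
baseline = [101, 99, 102, 100, 98, 101]
target = [201, 198, 203, 199, 202, 200]

regressed, diff = flag_regression(baseline, target)
print(regressed, round(diff))  # the ~100 MB increase is flagged
```

A run whose target samples overlap the baseline’s noise band would not be flagged, which mirrors the false-positive/true-positive tradeoff described above.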
The next step is to understand what caused the memory regression. You can download the zip files for both executions from the Download log files option, as shown below. These zip files contain the results of your test run, including script results and memory and CPU performance data which is included in the ETL file.
Where to locate test result log files for download
Download and unzip the logs for the two test runs, then locate the ETL file within each folder and rename them as target.etl (for the test run on the pre-release update) and baseline.etl (for the test run on last released update) to simplify exploration and navigation.
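The unzip-and-rename steps above can be scripted; the following sketch automates them with the Python standard library. The archive and folder names are illustrative, not the actual names the service produces:

```python
import zipfile
from pathlib import Path

def extract_and_rename(zip_path, out_dir, new_name):
    """Unzip a test-run log archive and rename its first .etl trace so the
    two runs are easy to tell apart in Windows Performance Analyzer."""
    out_dir = Path(out_dir)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out_dir)
    etl_files = sorted(out_dir.rglob("*.etl"))
    if not etl_files:
        raise FileNotFoundError(f"no .etl trace found in {zip_path}")
    renamed = etl_files[0].with_name(new_name)
    etl_files[0].rename(renamed)
    return renamed

# Illustrative usage with the two archives downloaded from the dashboard:
# extract_and_rename("july_run_logs.zip", "baseline", "baseline.etl")
# extract_and_rename("august_run_logs.zip", "target", "target.etl")
```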
Troubleshooting memory regressions with Windows Performance Analyzer
If you don’t currently have it installed, you’ll need to install and launch Windows Performance Analyzer from the Microsoft Store. Then, open the baseline.etl trace (analyzed in blue tabs), then open the target.etl trace (analyzed in purple tabs), as shown below:
Opening target.etl and baseline.etl in Windows Performance Analyzer
In the example above, both tabs were dragged into the “Virtual Memory Snapshots” view. We then applied a filter on processes containing “usl” to limit the analysis to the favorited processes, as shown below:
Limiting the analysis to favorited processes
You can open multiple instances of this “Virtual Memory Snapshots” view and restrict the analysis to a single process by applying similar filters under Advanced Settings in Windows Performance Analyzer.
To compare multiple traces in a single graph, you’ll first need to open a comparative view for the two traces via the Window menu, then select New Comparative Analysis View:
Creating a comparative view between baseline and target
If you aggregate the memory at the Process Name level instead of the Process level, you can see a difference in the Memory Working Set of approximately 100 MB for the “USLTestMemoryStress.exe” process, while the other processes have the same Memory Working Set:
Comparing memory utilization at the process level
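The process-name-level aggregation described above amounts to summing working-set samples across process instances and diffing by name; a minimal sketch follows, with sample values made up to mirror the ~100 MB difference in the example:

```python
from collections import defaultdict

def working_set_by_name(samples):
    """Aggregate working-set samples (MB) at the process-name level,
    summing across individual process instances (PIDs)."""
    totals = defaultdict(int)
    for name, pid, mb in samples:
        totals[name] += mb
    return dict(totals)

# Illustrative snapshots from the baseline (July) and target (August) traces
baseline = [("USLTestMemoryStress.exe", 4242, 100),
            ("USLTestMemoryStress_Aux1.exe", 4243, 50),
            ("USLTestMemoryStress_Aux2.exe", 4244, 50)]
target = [("USLTestMemoryStress.exe", 5150, 200),
          ("USLTestMemoryStress_Aux1.exe", 5151, 50),
          ("USLTestMemoryStress_Aux2.exe", 5152, 50)]

base, tgt = working_set_by_name(baseline), working_set_by_name(target)
delta = {name: tgt[name] - base.get(name, 0) for name in tgt}
print(delta)  # the main process grows by 100 MB; the aux processes are flat
```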
You can load other WPA Memory views like Total Commit or VirtualAlloc Commit Lifetime to understand more about the memory difference between the two test runs.
Learn more
Memory regressions for key processes can lead to issues for the end user, but they can now be caught and isolated easily with Update Staging Lab. We hope these new features will help you more easily identify and troubleshoot memory regressions in your applications and enable you to remediate as needed. We are working on additional UI and service-level improvements and are excited to share details on those soon.
In the example above, we illustrated some of the insights you can obtain by investigating the memory consumption patterns that Update Staging Lab presents after a test run. Obviously, every app is different, and you understand your app performance profile better than anyone, so we encourage you to follow the links below to learn more about the Windows Performance Analyzer and provide us with feedback.
Finally, if you aren’t already using Update Staging Lab to validate your applications, please join the private preview and try these latest features so we can continue to make improvements that matter to you. Also, we would love your feedback so feel free to leave comments below or email us at uslpreview@microsoft.com.
Thanks for being part of this Data Architecture Blog and Community. In this post I want to discuss the importance of building a Data Community in your organization and how Azure Data Services can help nurture that Data Community and drive solution adoption and success.
How do you convince someone in an organization to be engaged in a Data Community and contribute to the many assets required to meet an organization’s vision for Data and Analytics Solutions? I love the term Tom Sawyering. Not the deceitful part of getting someone to do my work (I believe in doing my own ironing, painting, data wrangling, upskilling, etc.), but the recognition that to paint a fence at scale you need more than one painter. I know this firsthand, as I have been recruiting my wife to help paint our house this summer.
Wiktionary definition of Tom Sawyering someone: https://en.wiktionary.org/wiki/Tom_Sawyer
“To convince someone to volunteer, especially to do something which one should do oneself.”
To compound this scaling issue, organizations have many projects and initiatives, so there is more than one fence to paint. To get the job done you are going to need resources from outside your immediate team or department. So how do you attract people to author content for Data and Analytics Solutions? More importantly, how do you foster contributors who don’t end up disillusioned, feeling like they have been Tom Sawyered? Azure Data Services can help with faster time to value, and the innovation and agility focus of these solutions is going to attract a Data Community. How do you upskill the Data Community? The Microsoft global skilling initiative can help with that. Members of the Data Community also need good business problems to solve, to provide the necessary focus and opportunity to build proficiency and upskill to meet the challenge. In this time of Covid-19, necessity truly is the mother of invention, so you need to be innovative in your approach to building your Data Community: upskill the organization with both students entering the workforce and by reskilling the existing workforce.
To get started, whether with a green field initiative (not super common) or, more commonly, by extending, expanding, or modernizing an existing Data and Analytics Solution, you need resources: people (labor) and infrastructure (buildings, electricity, tools, hardware, and software, or cloud platforms like Azure). To be successful with people resources, you need customers/users who are both authors and consumers of data and content. To be successful with infrastructure resources, you need technology, tools, hardware, and software that are flexible, agile, and elastic, because business demand is volatile and changing. Azure can help achieve this economic benefit by providing elastic infrastructure and reducing sunk costs in buildings, power, and servers. Azure can also help achieve a financial benefit by improving cash flow and creating a variable rather than fixed cost structure. In my opinion, the most important is the agility benefit that comes with IT and business transformation and faster time to value. This innovation approach, more than the cost savings of the economic and financial benefits, is what brings extended resources to the Data and Analytics Solution. The business opportunity and innovation attract people to the solution, but the cost savings can also be reinvested to help staff the roles the solution requires.
If you are moving an existing data solution from on premises to Azure, this cost saving may be delayed, as you pay to keep the production on-premises solution running until you complete the migration. So, for a time you pay to keep the as-is data solution running while you build the new to-be Data and Analytics Solution, kind of like a double mortgage. The size of, and adoption by, the Data Community is going to be a big part of the justification for continued investment in the solution, so you had best get them on board early; early involvement also builds ongoing commitment and ownership. The following diagram depicts how Azure Data Services can extend an organization’s infrastructure, and how the Data Community can extend the workforce required for the creation, ongoing maintenance, and growth of Data and Analytics Solutions.

How does Azure make this possible? This diagram illustrates how the cloud vendor manages more of the infrastructure and how that can provide a faster time to value, standardization, and lower cost.

Demystifying Service Offerings from Microsoft Azure, Amazon AWS, and Google Cloud Platform
By Charlie Crocker
https://www.kdnuggets.com/2019/06/understanding-cloud-data-services.html
Instead of just lifting and shifting on-premises data solutions to IaaS, bridging more workloads to PaaS Data Services on Azure and SaaS BI services like Power BI can also come with people resource savings and enable existing resources to focus on activities higher up the stack.

Demystifying Service Offerings from Microsoft Azure, Amazon AWS, and Google Cloud Platform
By Charlie Crocker
https://www.kdnuggets.com/2019/06/understanding-cloud-data-services.html
While Azure Data Services provide infrastructure benefits Power BI provides self-service capabilities that can help anyone in the organization be more engaged with data. Users can easily collaborate with others to Tom Sawyer some additional volunteers and build a data-driven culture and Data Community. Azure and Power BI also provide differentiated capabilities for users in the Data Community who have a diversity of needs. Examples by User Role:
Data Analyst – Power BI, Notebooks (Jupyter in Azure ML, Databricks, Synapse Analytics)
Data Scientist – Notebooks (Jupyter in Azure ML, Databricks, Synapse Analytics) and ML models deployed to the Kubernetes service AKS
Data Engineer – Azure Data Factory, Databricks Notebooks, Synapse Notebooks
DBA – Azure SQL Database, PostgreSQL, MySQL, Synapse as well as Database Management tools like SQL Server Management Studio (SSMS) and Azure Data Studio.
Developer/Software Engineers – Relational Databases like those mentioned for DBA role, NoSQL Databases like CosmosDB as well as Integrated Development Environments and Code Editors
Just as the painters of the trim, main walls, lower story, second story, and chimney (35 ft up) of a house need different tools (different sized paint brushes, ladders, paint, and buckets), a Data Community needs different tools.
Data and Analytics Solutions require a Data Community and Azure Data Services to successfully solve business problems! Put hands on keyboard and do the work. Do the work and learn the skills. Just like Mr. Miyagi in The Karate Kid used Paint the Fence to upskill Daniel: https://www.youtube.com/watch?v=R37pbIySnjg
Enjoy the rest of your summer!
Darwin
News this week includes:
Microsoft 365 apps say farewell to Internet Explorer 11 and Windows 10 sunsets Microsoft Edge Legacy.
The Natural Language Queries feature introduced by Excel at Ignite last November is now generally available in Windows, Mac, and Excel for the web.
Microsoft announced it has transitioned Azure HDInsight to the Microsoft-engineered distribution of Apache Hadoop and Spark, built to drastically improve performance, deliver a faster release cadence for powerful open-source data analytics frameworks, and run natively at cloud scale in Azure.
@harveer singh is our Member of the Week, and a great contributor in the Office 365 community.
View the Weekly Roundup for August 17-21 in Sway and attached PDF document.
Final Update: Friday, 21 August 2020 17:48 UTC
We’ve confirmed that all systems are back to normal with no customer impact as of 8/21, 17:17 UTC. Our logs show the incident started on 8/21, 17:06 UTC and that during the 11 minutes that it took to resolve the issue a small number of customers in the South Central US region experienced data latency and data gaps, as well as possible alert misfires.
- Root Cause: The failure was due to an unexpected service interruption during a service update task.
- Incident Timeline: 11 minutes – 8/21, 17:06 UTC through 8/21, 17:17 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.
-Jeff
Initial Update: Friday, 21 August 2020 17:39 UTC
We are aware of issues within Application Insights in the South Central US region and are actively investigating. Some customers may experience latency and data gaps.
- Work Around: None
- Next Update: Before 08/21 20:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Jeff