Azure Cognitive Services Hands-on Workshop resources and insights.


This article is contributed. See the original author and article here.


Agenda


The agenda of the workshop was to provide students with a hands-on experience of Microsoft Azure Cognitive Services, focusing mainly on Custom Vision and QnA Maker.


We also provided a brief introduction to Microsoft Azure and the fundamentals of cloud computing, and helped students figure out how to showcase Artificial Intelligence, Machine Learning, and Natural Language Processing (NLP) projects on their resumes.


Overview


The workshop was attended by 30 students. Each student received a Microsoft Azure for Students subscription.




We started with the basic concepts of cloud computing and how Azure Cognitive Services fits into the Microsoft Azure ecosystem, elaborating on how API calls embed the ability to see, hear, speak, search, understand, and accelerate advanced decision-making into modern applications.


We then gave a quick overview of the services under the Cognitive Services umbrella, such as Vision, Speech, Language, and Decision.



Flow of the workshop


After the introduction, the students created their subscriptions. Activating Azure for Students provided each student with a balance of $100 USD to explore and experiment with the services on the Azure portal.


All the activities performed were strictly guided by the corresponding learning modules on Microsoft Learn.


Custom Vision



Note



Click the Custom Vision title to view the Microsoft Learn Document for this activity.

The first activity to be performed was on Custom Vision, where we discussed:




  • What is Custom Vision?




  • What are the applications of custom vision?




  • How to implement Custom Vision in your application?



    1. Create a Custom Vision resource to get started.

    2. Select the subscription, resource group, region, name, and pricing tier.

    3. Upload existing images of an object to train the model.

    4. Select the “Quick Training” option to prepare the model in minutes.

    5. Train an image classification model based on the uploaded images.

    6. Publish the model to use it in your applications.




  • How to verify the functionality of the trained model?


    We verified and tested the model by running a simple command-line program in the Cloud Shell; real-world solutions, such as websites or phone applications, use the same ideas and functionality. A minimal sketch of calling the published model from Python appears after this list.




  • Troubleshooting the errors and blockers faced by attendees throughout the workshop
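
As a rough illustration of the steps above, the snippet below calls a published classifier with the Custom Vision Python SDK. It is a minimal sketch: the endpoint, prediction key, project ID, and published iteration name are placeholders you would copy from your own resource.

# pip install azure-cognitiveservices-vision-customvision
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

# Placeholder values: copy these from your own Custom Vision resource and project.
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
prediction_key = "<your-prediction-key>"
project_id = "<your-project-id>"
published_name = "<your-published-iteration-name>"

credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, credentials)

# Classify a local test image with the published model.
with open("test_image.jpg", "rb") as image_file:
    results = predictor.classify_image(project_id, published_name, image_file.read())

for prediction in results.predictions:
    print(f"{prediction.tag_name}: {prediction.probability:.2%}")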




After the Custom Vision session, there was a short FAQ session to answer all the queries regarding Custom Vision.


QnA Maker



Note



Click the QnA Maker title to view the Microsoft Learn Document for this activity.

In the QnA Maker session, we aimed to create a live chatbot using Python by:



  • Understanding what chatbots are

  • Exploring the applications of chatbots

  • Creating a chatbot

    1. Create a QnA Maker resource to get started.

    2. Select the subscription, resource group, region, name, and pricing tier.

    3. Create a custom question answering knowledge base.

    4. Edit the knowledge base.

    5. Train and test the knowledge base.

    6. Create a bot for the knowledge base.

    7. Test the bot to verify its functionality (a minimal sketch of querying the published knowledge base follows this list).
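
As a rough illustration of step 7, the snippet below queries a published knowledge base through the QnA Maker runtime REST endpoint. It is a minimal sketch: the runtime host, knowledge base ID, and endpoint key are placeholders from your own resource.

import requests

# Placeholder values from your deployed QnA Maker resource.
runtime_host = "https://<your-resource>.azurewebsites.net"
kb_id = "<your-knowledge-base-id>"
endpoint_key = "<your-endpoint-key>"

url = f"{runtime_host}/qnamaker/knowledgebases/{kb_id}/generateAnswer"
headers = {"Authorization": f"EndpointKey {endpoint_key}"}

# Ask the knowledge base a question and print the returned answers with their scores.
response = requests.post(url, headers=headers, json={"question": "What are your opening hours?"})
response.raise_for_status()
for answer in response.json()["answers"]:
    print(f"{answer['score']}: {answer['answer']}")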




Conclusion


A Q&A session was held where students could ask questions about Microsoft services, especially regarding Microsoft Azure. We also had a brief discussion about Microsoft certifications and how students can leverage Microsoft Learn to excel in the certification exams.


We also provided a roadmap for students who want to pursue a career in Artificial Intelligence, Data Science, and Cloud Computing.


The takeaway from this session was hands-on experience with Custom Vision and QnA Maker as service offerings from Microsoft Azure, and the encouragement to build real projects on them to showcase on a resume.

CISA Releases Security Advisory on Dominion Voting Systems Democracy Suite ImageCast X

This article is contributed. See the original author and article here.

CISA has released an Industrial Control Systems Advisory (ICSA) detailing vulnerabilities affecting versions of the Dominion Voting Systems Democracy Suite ImageCast X, which is an in-person voting system used to allow voters to mark their ballot.

Exploitation of these vulnerabilities would require physical access to individual ImageCast X devices, access to the Election Management System (EMS), or the ability to modify files before they are uploaded to ImageCast X devices. Jurisdictions can prevent and/or detect the exploitation of these vulnerabilities by diligently applying the mitigations recommended in ICSA-22-154A, including technical, physical, and operational controls that limit unauthorized access or manipulation of voting systems. Many of these mitigations are already typically standard practice in jurisdictions where these devices are in use and can be enhanced to further guard against exploitation of these vulnerabilities.

While these vulnerabilities present risks that should be mitigated as soon as possible, CISA has no evidence that these vulnerabilities have been exploited in any elections. 

Atlassian Releases New Versions of Confluence Server and Data Center to Address CVE-2022-26134

This article is contributed. See the original author and article here.

Atlassian has released new Confluence Server and Data Center versions to address remote code execution vulnerability CVE-2022-26134 affecting these products. An unauthenticated remote attacker could exploit this vulnerability to execute code remotely. Atlassian reports that there is known exploitation of this vulnerability.

CISA strongly urges organizations to review Confluence Security Advisory 2022-06-02 and upgrade Confluence Server and Confluence Data Center.

Note: per BOD 22-01 Catalog of Known Exploited Vulnerabilities, federal agencies are required to immediately block all internet traffic to and from Atlassian’s Confluence Server and Data Center products AND either apply the software update to all affected instances OR remove the affected products by 5 pm ET on Monday, June 6, 2022.

Active Learning at scale, with Azure SQL and Azure ML


This article is contributed. See the original author and article here.



 Figure 1: Example demonstration of the value of storing model inference results in Azure SQL DB. We performed a query to retrieve a video frame that shows young Fred (FI) with his mother Fifi (FF) and close family members.


 


Introduction


 


Organizations often sit on a treasure trove of unstructured data, without the ability to derive insights from the data.


 


We experienced this situation while working on a co-innovation project with the Jane Goodall Institute (JGI), MediaValet, and the University of Oxford. JGI had digitized and uploaded many decades of videos of chimpanzees in the wild and wanted to enable primate researchers to use this data for quantitative scientific analyses. To this end, we built a no-code active learning solution for training state-of-the-art computer vision models. This solution allows researchers at JGI to index and understand their unstructured data assets and to join the unstructured data with other, structured data sources, eventually enabling statistical analysis for scientific inquiries. For example, how does the social network structure change over the first few months after a new chimp is born?


 


In this blog post, we provide an overview of the use case, challenges, and solutions. Briefly, to enable active learning at scale, we implemented PyTorch dataset classes, which load image data from Azure Blob Storage and annotations from an Azure SQL database. Model predictions are written to the same database. The Azure SQL database can then be used for gaining new insights, using quantitative analytics (see Figure 1).
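
To make this concrete, here is a minimal sketch of such a dataset class. The connection strings are placeholders, and the annotations table and its columns (frame_id, blob_path, label) are illustrative assumptions rather than the project's actual schema.

import io

import pyodbc
import torch
from azure.storage.blob import BlobServiceClient
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class SqlAnnotationDataset(Dataset):
    """Loads labels from Azure SQL DB and image bytes from Azure Blob Storage.

    The table and column names used here are illustrative assumptions.
    """

    def __init__(self, sql_conn_str, blob_conn_str, container):
        self.conn = pyodbc.connect(sql_conn_str)
        self.blob = BlobServiceClient.from_connection_string(blob_conn_str)
        self.container = container
        # Fetch only the primary keys up front; each row is read lazily per item,
        # so the full annotation set never has to fit into host memory.
        cursor = self.conn.cursor()
        self.ids = [row[0] for row in cursor.execute("SELECT frame_id FROM annotations")]
        self.tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, idx):
        # Look up one annotation row, then download the corresponding frame.
        row = self.conn.cursor().execute(
            "SELECT blob_path, label FROM annotations WHERE frame_id = ?", self.ids[idx]
        ).fetchone()
        blob = self.blob.get_blob_client(self.container, row.blob_path)
        image = Image.open(io.BytesIO(blob.download_blob().readall())).convert("RGB")
        return self.tf(image), torch.tensor(row.label)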


 


Challenges


 


We faced several challenges while working on this project. The largest challenge was that there is only one person in the world who can reliably recognize the over 300 individual chimpanzees by name: the famous wildlife cinematographer and scientific advisor Bill Wallauer. Over the course of several years, he spent many months living in the Gombe National Park, filming chimpanzees in the wild.


 


The second challenge was the sheer scale of the project. We had to store annotations for over 30 million video frames in such a way that they could be used for machine learning. At the same time, the annotations needed to be accessible to primate researchers, to enable scientific inquiry.


 


The third challenge was to build a no-code solution that would allow JGI staff to annotate and train deep learning models without requiring expertise in computer programming and machine learning.


 


Minimizing data labeling costs with active learning


 


To address the challenge that only Bill Wallauer can reliably recognize the over 300 individual chimpanzees by name, we needed to build a no-code solution that would maximize the returns on every data label he provides. That is, the brute-force approach of crowd-sourcing data labeling to get as much labeled data as possible couldn't be applied here.


 


Active learning is a machine learning technique that tries to minimize required labeling efforts by strategically selecting those samples for annotation that are expected to benefit the model the most. In this context, the goal is to find an optimal policy of selecting samples for annotation to maximally increase model performance on a validation set. Active learning is a relatively new technique in machine learning, and we will cover this and related topics in depth in future blog posts.
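
To give an idea of what a selection policy can look like, the sketch below implements least-confidence sampling, a common and simple strategy; it is an illustration, not necessarily the exact policy used in this project.

import torch


def select_for_annotation(model, unlabeled_loader, budget=100):
    """Least-confidence sampling, one of the simplest active learning policies.

    Assumes the loader yields (frame_ids, images) batches of unlabeled frames.
    """
    model.eval()
    confidences, frame_ids = [], []
    with torch.no_grad():
        for ids, images in unlabeled_loader:
            probs = torch.softmax(model(images), dim=1)
            top_prob, _ = probs.max(dim=1)  # confidence of the predicted class
            confidences.append(top_prob)
            frame_ids.extend(ids)
    confidences = torch.cat(confidences)
    # Send the least confident frames to the annotator: these are the samples
    # expected to benefit the model the most.
    ranked = confidences.argsort()[:budget]
    return [frame_ids[i] for i in ranked]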


 


Azure SQL Server and Database enable active learning at scale


 


Another challenge we faced was the large scale of the project. We had to find a way to efficiently store data annotations, so that they could be used for model training, inference, and allow primate researchers to perform quantitative analysis.


 


A common approach to training deep learning models is to store annotations in JSON or CSV files, which are loaded into host memory at the beginning of training. We quickly reached limitations in terms of speed and memory usage with this approach. There are several workarounds for more advanced use cases, but we decided to use Azure SQL DB for this project, which immediately alleviated all concerns around increases in dataset size. There are some very real advantages to using Azure SQL DB for a project of this scale:



  • Memory limitations on the host machines used for model training and inference are no longer an issue, because there is no need to load the annotations for the entire dataset into memory.

  • Speed! Our implementation scaled extremely well as the dataset grew, because Azure SQL DB had no issues handling a dataset of this size.


 


Finally, the same SQL database we are using for training and inference can also be used by primate researchers for quantitative analytics.
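
For illustration, a query in the spirit of Figure 1 might look as follows; the predictions table and its columns are hypothetical stand-ins for the real schema.

import pyodbc

# Hypothetical schema: one row per (video, frame, individual) model prediction.
# Find frames where young Fred (FI) appears together with his mother Fifi (FF).
conn = pyodbc.connect("<your-azure-sql-connection-string>")
rows = conn.execute(
    """
    SELECT a.video_id, a.frame_number
    FROM predictions AS a
    JOIN predictions AS b
      ON a.video_id = b.video_id AND a.frame_number = b.frame_number
    WHERE a.individual = 'FI' AND b.individual = 'FF'
    """
).fetchall()
print(f"Found {len(rows)} frames showing Fred together with Fifi.")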


 


Azure ML enables the automation of model training and monitoring


 


It was our explicit goal to build a no-code solution that would empower JGI staff and volunteers, without requiring expertise in computer programming and machine learning. We were able to achieve this goal via a set of Azure ML Pipelines, with triggers for automatic execution in response to well-defined events. These pipelines automate data ingestion, model training and re-training, monitoring for model and data drift, batch inference, and active learning.
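
Sketched with the Azure ML Python SDK (v1), such a pipeline might be wired up as follows; the compute target, script names, and schedule are assumptions for illustration.

from azureml.core import Workspace
from azureml.pipeline.core import Pipeline, Schedule, ScheduleRecurrence
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

# Hypothetical scripts; each step encapsulates one stage of the workflow.
train_step = PythonScriptStep(name="train", script_name="train.py",
                              compute_target="gpu-cluster", source_directory="src")
inference_step = PythonScriptStep(name="batch_inference", script_name="inference.py",
                                  compute_target="gpu-cluster", source_directory="src")
inference_step.run_after(train_step)

pipeline = Pipeline(workspace=ws, steps=[inference_step])
published = pipeline.publish(name="active-learning-pipeline")

# Re-run on a schedule; a Schedule can also react to changes in a datastore
# path, e.g. newly ingested video frames.
recurrence = ScheduleRecurrence(frequency="Week", interval=1)
Schedule.create(ws, name="weekly-retrain", pipeline_id=published.id,
                experiment_name="active-learning", recurrence=recurrence)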


 


Other Applications


 


Here we demonstrate how to use Azure SQL database and Azure ML to enable active learning at scale for a particular use case, but the same principles can be applied to a wide variety of applications across industries:



  • Worker Safety. Supervisors suspect that a particular kind of worker behavior leads to accidents. They have a very large repository of video footage and records of work accidents, and they would like to investigate whether these videos contain evidence that certain kinds of behavior have indeed historically led to accidents.

  • Public Safety. Public employees suspect that a particular type of traffic intersection is associated with an increased number of traffic accidents. They have historical GIS data on traffic accidents and footage from traffic cameras, so they train a model to categorize intersections and join the results with the GIS data on traffic accidents.

  • Manufacturing. A manufacturer suspects that a particular kind of manufacturing defect leads to warranty claims later. The manufacturer has a large dataset of images from manufacturing pipelines. Investigators train a model to recognize the anomaly and join the data with warranty claims to test their hypothesis. Based on their findings, they can start a product recall to avoid costly warranty claims.

  • Predictive Maintenance. Operators hope that acoustic sensor data from manufacturing machines provide a signal that is predictive of outages and other equipment failures. They would like to know whether this unstructured acoustic data can be joined with maintenance records to enable predictive maintenance.


 


Related Tools and Services


 


Azure ML Data Labeling. Data Labeling in Azure Machine Learning offers a powerful web interface within Azure ML Studio that allows users to create, manage, and monitor labeling projects. To increase productivity and to decrease costs for a given project, users can take advantage of the ML-assisted labeling feature, which uses Azure ML Automated ML computer vision models under the hood. However, in contrast to the approach described here, Azure ML Data Labeling does not support active learning.


Azure Custom Vision service is a mature and convenient managed service that allows customers to label data and to train and deploy computer vision models. In contrast to the approach discussed here, the focus is on developing a performant model, rather than understanding and indexing very large amounts of unstructured data. Like the Azure ML Data Labeling tool above, it does not have support for active learning.


Video Indexer is a powerful managed service for indexing large assets of video data. It currently offers only limited options for customizing models to understand the subject domain of the dataset at hand. It also does not offer a straightforward approach to use the generated index for secondary analysis.


 


Conclusion


 


This blog post represents the first of a series of blog posts on combining Azure SQL Database and Azure ML to index and understand very large repositories of unstructured data. Future blog posts will offer more depth on the topics touched upon above. For example:



  • Writing a PyTorch Dataset class for SQL

  • Implementing Active Learning at scale with SQL DB and Azure ML

  • Optimizing SQL tables and queries to increase training and inference speed

  • Ensuring AI fairness

  • Gaining scientific insights after all unstructured data has been indexed


We also welcome requests in the comment section for other topics you would like us to cover in these future blog posts.

Finance and Operations (Dynamics 365) mobile app to be deprecated


This article is contributed. See the original author and article here.

The Microsoft Finance and Operations (Dynamics 365) mobile app, the associated mobile platform, and related mobile workspaces are deprecated effective June 2022. Existing assets will be supported through October 2024. New mobile Finance and Operations experiences should be built in Power Apps, using virtual tables from Microsoft Power Platform to access finance and operations data.

What’s happening to the Finance and Operations (Dynamics 365) mobile workspaces?

Some of the existing mobile workspaces will be replaced. The Microsoft Dynamics 365 Project Timesheet mobile app is already available as a replacement for the Project time entry mobile workspace.

Replacement experiences are planned to be released in 2023 for the following mobile workspaces:

  • Expense management
  • Inventory on-hand
  • Asset management
  • Invoice approval
  • Purchase order approval

Replacement experiences are not currently planned for the remaining mobile workspaces:

  • Company directory
  • My team
  • Cost controlling
  • Vendor collaboration
  • Sales orders

If you need to continue using one of these mobile workspaces after the end-of-support date, we encourage you to build a mobile experience in Power Apps.

Note that the Warehouse Management mobile app, which is not built on the Finance and Operations (Dynamics 365) mobile app, is not impacted by this deprecation.

What’s next

Here are some additional things we encourage you to do now that the Finance and Operations (Dynamics 365) mobile app has been deprecated.

  1. Stop building new mobile experiences in the Finance and Operations mobile app.
  2. Start learning about virtual tables and how to create mobile experiences in Power Apps.
  3. Begin planning for converting your existing mobile experiences to Power Apps. You have some time before the end-of-support date, but it’s never too early to start getting ready!

The post Finance and Operations (Dynamics 365) mobile app to be deprecated appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.