Virtual live events and cloud-enabled content workflows now supported in Dynamics 365 media and entertainment accelerator



In the media and entertainment industry, production timelines continue to accelerate, file sizes continue to grow, and remote workflows have become the norm.

Content creators must scale and accelerate their production workflows to meet ever-increasing audience demands for personalization, immediacy, and choice. At the same time, media and entertainment organizations must look for every opportunity to reduce cost, better manage and monetize assets, and enable collaboration and productivity across their production and post-production teams, all while maintaining the security of their high-value creative IP.

The Dynamics 365 media and entertainment accelerator empowers partners and developers serving the media industry to adopt cloud workflows to address these challenges. Our 2.0 release with enhanced features and expanded data model is now generally available.

First introduced in July 2020, the Dynamics 365 media and entertainment accelerator enables organizations to develop and deploy their own business applications for quick access to data insights and new workplace automation. It builds on the Common Data Model and Microsoft Power Platform, and includes sample code and various industry-specific customizations to support media and entertainment applications and business logic.

Version 1.0 of the media and entertainment accelerator included a data model with entities and attributes centered on fan and guest engagement. These are used when building new applications to drive guest experiences and sales workflows, and to support broad data collection and analysis. The accelerator allows ISV partners and developers working in Microsoft Power Platform to add media-specific data entities, sample user interface forms and canvas apps, and business flows that can be used for:

  • Event and physical venue management.
  • Sports management.
  • Ticketing and advertising sales.
  • Media sponsorships.
  • Guest interactions, such as automating event registrations, creating and managing new loyalty programs, or tracking guest preferences.

Dynamics 365 media and entertainment accelerator events portal

Enhanced live, hybrid, and virtual events with Microsoft Power Apps portals and Microsoft Teams integrations

In the 2.0 release, Microsoft has expanded the media data model and code samples that ship with the accelerator to support the industry’s recent shift to hosted online and hybrid live events. Microsoft Teams API integration has been added so event producers can easily create schedules and support live events and broadcasts, virtual conferences and seminars, remote press conferences and briefings, or even sports matches and league activities where guests plan to participate through Microsoft Teams. A sample events and venues model-driven application has been added to enable these capabilities. Microsoft has also included a Power Apps portals template so event producers can launch their own fan-facing websites to promote events, register guests for upcoming live events, and allow registrants to RSVP for activities. Registrants can even join a live event in progress, such as a sports game, symposium, or music concert, directly from the portal without the need for a separate Microsoft Teams client.

Accelerate the development of apps for cloud-based content production workflows

What’s more, in the 2.0 release we have added a content production management solution that addresses the needs of creative production houses, advertising agencies, special effects houses, and television and motion picture studios.


The content production management solution adds 15 data entities to the media data model, ranging from studios, shows, episodes, and seasons to production assets, video and audio tracks, and descriptive and AI-generated metadata. The accelerator includes sample user interfaces, dashboards, and automated business flows to assist with the rapid development of online applications that can be used for optimized collaboration and productivity throughout the creative process.

These elements enable the development of applications that address most of today’s collaborative cloud-based production and post-production workflows by using Microsoft Power Automate for:

  • Adding data and processes for managing access and asset sharing.
  • Improved searching capabilities.
  • Relational asset management.
  • Automation of common functions, such as:
    • Asset uploads
    • Archiving
    • Tagging and discovery
    • Distribution
    • Quality control

As described by Harry Grinling, founder and CEO of Support Partners, “The new capabilities of the Microsoft Dynamics 365 Media and Entertainment Accelerator will allow us to rapidly extend our solution portfolio for cloud-enabled archiving, asset management, and remote collaboration. The combination of this latest release of the accelerator and our tried-and-tested production frameworks will help make the migration of workflows to the cloud frictionless for our global roster of media and entertainment customers.” Support Partners is a Microsoft ISV partner that designs, deploys, and supports innovation in the cloud.

Mark Keller, head of strategy and innovation at WPP, added “We are really excited to collaborate with Microsoft and see huge potential and massive scalability with the Content Production features of the M&E Accelerator for building and expanding new virtual studio offerings for our clients.” WPP is one of the world’s largest advertising agencies, operating in 110 countries, and leverages Microsoft technology to build customized solutions for their brand clients.

Next steps

Get started right away with a test drive of the latest release of the Dynamics 365 media and entertainment accelerator on AppSource. You can interact with a preconfigured test drive environment that demonstrates key features and benefits without the need to set up or use your own Power Apps or Dynamics 365 subscription.

When you’re ready, the data model, solutions, application and data samples, Power BI reports, and UX controls that come with the Dynamics 365 media and entertainment accelerator are available to any Microsoft Power Platform developer from the “Get It Now” feature on AppSource, or for download on GitHub, with additional supporting documentation and configuration information on Microsoft Docs.

Microsoft empowers media and entertainment organizations to achieve more with our powerful and flexible development platforms supported by a comprehensive partner ecosystem with industry-leading solutions for creativity, collaboration, content management, audience insights, and personalized customer experiences.



MAR-10336935-2.v1: Pulse Secure Connect



Notification

This report is provided “as is” for informational purposes only. The Department of Homeland Security (DHS) does not provide any warranties of any kind regarding any information contained herein. The DHS does not endorse any commercial product or service referenced in this bulletin or otherwise.

This document is marked TLP:WHITE–Disclosure is not limited. Sources may use TLP:WHITE when information carries minimal or no foreseeable risk of misuse, in accordance with applicable rules and procedures for public release. Subject to standard copyright rules, TLP:WHITE information may be distributed without restriction. For more information on the Traffic Light Protocol (TLP), see http://www.cisa.gov/tlp.

Summary

Description

CISA received two Common Gateway Interface (CGI) scripts for analysis. The two CGI scripts are Pulse Secure system files that were modified by a malicious actor. The files contain a malicious modification which allows the attacker to maintain remote command and control (C2) access to a target system. This analysis is derived from malicious files found on Pulse Connect Secure devices.

For a downloadable copy of indicators of compromise, see: MAR-10336935-2.v1.stix.
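
If you need to sweep a file system or mounted image for these files, a minimal sketch along the following lines may help. It is illustrative only: the scan root and the restriction to .cgi files are my own assumptions, not part of this report.

import hashlib
import pathlib

# SHA-256 values of the two malicious CGI scripts described in this report.
IOC_SHA256 = {
    "c287cd9e3c37f5869dbce168a89a78836a61791a72b36d048c086576b9af2769",
    "d27730060be3099846a673cfee890da05dc9f7b34d987c65f4299980b6865822",
}

def sha256_of(path: pathlib.Path) -> str:
    """Hash a file in chunks so large files do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical scan root; point it at the mount of the device under review.
for candidate in pathlib.Path("/mnt/evidence").rglob("*.cgi"):
    if sha256_of(candidate) in IOC_SHA256:
        print(f"IOC match: {candidate}")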

Submitted Files (2)

c287cd9e3c37f5869dbce168a89a78836a61791a72b36d048c086576b9af2769 (licenseserverproto.cgi)

d27730060be3099846a673cfee890da05dc9f7b34d987c65f4299980b6865822 (licenseserverproto.cgi)

Findings

d27730060be3099846a673cfee890da05dc9f7b34d987c65f4299980b6865822

Tags

backdoor, trojan

Details
Name: licenseserverproto.cgi
Size: 3377 bytes
Type: Perl script text executable
MD5: ae76be46d7e1ca140cf4d30d5a60d407
SHA1: 0dc2f82d9392b9b0646fa65523e2da712a401e99
SHA256: d27730060be3099846a673cfee890da05dc9f7b34d987c65f4299980b6865822
SHA512: 29f46f49a3d700d1f8b88df8d20eed3a834fccaf0057754d465cd27017332dd9ef2efc47c49315091d55d1c0afdbb14b433a4a3458372e74ae24f0524fccc664
ssdeep: 48:ErLYmeAJAZo6HMeQT808inRbxhcQjQkBQVeWo7BuswT4o7oo7vpBBBQWBZ7zSH74:EfYkJAZnqpxhcOQVHo0v/wO27YJ
Entropy: 5.316307
Antivirus

No matches found.

YARA Rules

No matches found.

ssdeep Matches
91 ade49335dd276f96fe3ba89de5eb02ea380901b5ef60ff6311235b6318c57f66
97 c287cd9e3c37f5869dbce168a89a78836a61791a72b36d048c086576b9af2769
Description

This is a CGI script that was maliciously modified from a Pulse Secure system file (Figure 1). The malicious modification accepts a command of no more than 45 characters in length and executes it on the compromised system using the system function.

Screenshots

Figure 1 - Screenshot of the dependencies and the malicious main() function patched into the Pulse Secure file.


c287cd9e3c37f5869dbce168a89a78836a61791a72b36d048c086576b9af2769

Tags

backdoor, trojan

Details
Name: licenseserverproto.cgi
Size: 3378 bytes
Type: Perl script text executable
MD5: bff36121c5e6b7fdce02d5b076aee54e
SHA1: 45284d5ccc85e76f566ec25d46696ddb4eb861c0
SHA256: c287cd9e3c37f5869dbce168a89a78836a61791a72b36d048c086576b9af2769
SHA512: f6b51f28ebcad247f8910cb357a8f9f40a6d44262c9d00524651d04ff078612498dbf311e27184ad1f2f8ccc4a538bc851899b56769f0a90a48cf76c7150d601
ssdeep: 48:EbLYmeAJAZo6HMeQT808inRZxhcQjQkBQVeWo7BuswT4o7oo7vpBBBQWBZ7zSH74:EvYkJAZnqPxhcOQVHo0v/wO27YJ
Entropy: 5.316014
Antivirus

No matches found.

YARA Rules

No matches found.

ssdeep Matches
90 ade49335dd276f96fe3ba89de5eb02ea380901b5ef60ff6311235b6318c57f66
97 d27730060be3099846a673cfee890da05dc9f7b34d987c65f4299980b6865822
Description

This is a CGI script with the same malicious modification as the file “licenseserverproto.cgi” (d27730060be3099846a673cfee890da05dc9f7b34d987c65f4299980b6865822).

Screenshots

Figure 2 - Screenshot of the dependencies and the malicious main() function added to the Pulse Secure file.


Recommendations

CISA recommends that users and administrators consider using the following best practices to strengthen the security posture of their organization’s systems. Any configuration changes should be reviewed by system owners and administrators prior to implementation to avoid unwanted impacts.

  • Maintain up-to-date antivirus signatures and engines.
  • Keep operating system patches up-to-date.
  • Disable File and Printer sharing services. If these services are required, use strong passwords or Active Directory authentication.
  • Restrict users’ ability (permissions) to install and run unwanted software applications. Do not add users to the local administrators group unless required.
  • Enforce a strong password policy and implement regular password changes.
  • Exercise caution when opening e-mail attachments even if the attachment is expected and the sender appears to be known.
  • Enable a personal firewall on agency workstations, configured to deny unsolicited connection requests.
  • Disable unnecessary services on agency workstations and servers.
  • Scan for and remove suspicious e-mail attachments; ensure the scanned attachment is its “true file type” (i.e., the extension matches the file header). A rough sketch of this check follows this list.
  • Monitor users’ web browsing habits; restrict access to sites with unfavorable content.
  • Exercise caution when using removable media (e.g., USB thumb drives, external drives, CDs, etc.).
  • Scan all software downloaded from the Internet prior to executing.
  • Maintain situational awareness of the latest threats and implement appropriate Access Control Lists (ACLs).
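
As a rough illustration of the “true file type” check above, the sketch below compares a file’s leading bytes against a few well-known signatures. This is a minimal, assumption-laden sample: the signature table covers only a handful of formats of my own choosing and is not part of the report.

import pathlib

# A few well-known magic numbers (file signatures); extend as needed.
MAGIC_BYTES = {
    ".pdf": b"%PDF",
    ".zip": b"PK\x03\x04",  # also the container for docx/xlsx
    ".png": b"\x89PNG",
    ".exe": b"MZ",
}

def extension_matches_header(path: pathlib.Path) -> bool:
    """Return True when the file starts with the signature its extension implies."""
    expected = MAGIC_BYTES.get(path.suffix.lower())
    if expected is None:
        return True  # no signature on record for this extension
    with path.open("rb") as handle:
        return handle.read(len(expected)) == expected

# Hypothetical example file name.
print(extension_matches_header(pathlib.Path("invoice.pdf")))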

Additional information on malware incident prevention and handling can be found in National Institute of Standards and Technology (NIST) Special Publication 800-83, “Guide to Malware Incident Prevention & Handling for Desktops and Laptops”.

Contact Information

CISA continuously strives to improve its products and services. You can help by answering a very short series of questions about this product at the following URL: https://us-cert.cisa.gov/forms/feedback/

Document FAQ

What is a MIFR? A Malware Initial Findings Report (MIFR) is intended to provide organizations with malware analysis in a timely manner. In most instances this report will provide initial indicators for computer and network defense. To request additional analysis, please contact CISA and provide information regarding the level of desired analysis.

What is a MAR? A Malware Analysis Report (MAR) is intended to provide organizations with more detailed malware analysis acquired via manual reverse engineering. To request additional analysis, please contact CISA and provide information regarding the level of desired analysis.

Can I edit this document? This document is not to be edited in any way by recipients. All comments or questions related to this document should be directed to CISA at 1-888-282-0870 or the CISA Service Desk.

Can I submit malware to CISA? Malware samples can be submitted via three methods:

CISA encourages you to report any suspicious activity, including cybersecurity incidents, possible malicious code, software vulnerabilities, and phishing-related scams. Reporting forms can be found on CISA’s homepage at www.cisa.gov.

Design an AI enabled NVR with AVA Edge and Intel OpenVino



This is the first in a series of articles exploring how to integrate artificial intelligence into a video processing infrastructure using off-the-shelf cameras and the Intel OpenVINO Model Server running at the edge. In the sections below we will cover some background, the hardware and software prerequisites, and the steps to set up a production-ready, AI-enabled Network Video Recorder (NVR) that has the best of both worlds: Microsoft and Intel.


 


What is a video analytics platform?


In the last few years, video analytics, also known as video content analysis or intelligent video analytics, has attracted increasing interest from both industry and the academic world. Video analytics products add artificial intelligence to cameras by analyzing video content in real time, extracting metadata, sending out alerts, and providing actionable intelligence to security personnel or other systems. Video analytics can be embedded at the edge (even in-camera), in on-premises servers, and/or in the cloud. They extract only temporal and spatial events in a scene, filtering out noise such as lighting changes, weather, trees, and animal movements. Here is a logical flow of how it works.

(Figure: logical flow of a video analytics platform.)


Let’s face it: on-premises and legacy video surveillance infrastructure are still in the dark ages. Physical servers often have limited virtualization integration and support, as well as racks upon racks of servers that clog up performance regardless of whether the data center is using NVR, direct-attached storage, storage area network, or hyper-converged infrastructure. It has been that way for the last 10, if not 20, years. Buying and housing an NVR for five or six cameras is expensive and time-consuming from a management and maintenance point of view. With great improvements in connectivity, compression, and data transfer methods, a cloud-native solution becomes an excellent option. Here are some of the popular use cases in this field and a diagram of a sample deployment for critical infrastructure.


  • Motion Detection

  • Intrusion Detection

  • Line Crossing

  • Object Abandoned

  • License Plate Recognition

  • Vehicle Detection

  • Asset Management

  • Face Detection

  • Baby Monitoring

  • Object Counting

(Figure: sample deployment for critical infrastructure.)

Common approaches for client proposals involve either a new installation (greenfield project) or a lift-and-shift scenario (brownfield project). Video intelligence is one industry where it becomes important to follow a bluefield approach, a term describing a combination of both brownfield and greenfield, where some streams of information are already in motion and some will be new instances of technology. The reason is that existing hardware and software installations are very expensive, and although customers are open to new ideas, they want to keep what already works. The current article is about setting up this new technology in a way that it can later accept pipelines for inference and event generation on live video for the use cases above.


 


The rise of AI NVRs


Closed-circuit video surveillance was invented in 1942 by German engineer Walter Bruch so that he and others could observe the launch of V-2 rockets on a private system. While its purpose has not drastically changed in the past 75 years, the system itself has undergone radical changes. Since its development, users’ expectations have evolved exponentially, necessitating faster, better, and more cost-effective technology.


 


Initially, operators could only watch live streams as events happened; recordings became available much later with the VCR. Until the recent past these were analog systems using analog cameras and a Digital Video Recorder (DVR), not unlike the everyday television boxes that used to run off DVRs in every home. Recently, these have started to be replaced by Power over Ethernet (PoE) counterparts running off Network Video Recorders (NVRs). Here is a quick visual showing the difference between a DVR and an NVR.


(Figure: DVR vs. NVR.)



An AI NVR video analytics system is a plug-and-play turnkey solution, including video search for object detection, high-accuracy intrusion detection, face search and face recognition, license plate and vehicle recognition, people/vehicle counting, and abnormal-activity detection. All functions support live stream and batch mode processing, real-time alerts, and GDPR-friendly privacy protection when desired. An AI NVR overcomes the challenges of many complex environments and is fully integrated with AI video analytics features for various markets, including perimeter protection for businesses, access control for campuses and airports, traffic management for law enforcement, and business intelligence for shopping centers. Here is a logical flow of an AI NVR from video capture to data-driven applications.


 


(Figure: logical flow of an AI NVR, from video capture to data-driven applications.)


 


In this article we are going to see how to create such an AI NVR at the edge using Azure Video Analyzer (AVA) and Intel products.


 


Azure Video Analyzer – a one-stop solution from Microsoft


Azure Video Analyzer (AVA) is a brand-new service for building intelligent video applications that span the edge and the cloud. It offers the capability to capture, record, and analyze live video and to publish the results, whether video or video analytics. Video can be published to the edge or to the Video Analyzer cloud service, while video analytics can be published to Azure services in the cloud and/or at the edge. With Video Analyzer, you can continue to use your existing video management systems (VMS) and build video analytics apps independently. AVA can be used in conjunction with computer vision SDKs and toolkits to build cutting-edge IoT solutions. The diagram below illustrates this.


(Figure: Azure Video Analyzer spanning the edge and the cloud.)


 


This is the most essential component of the AI NVR. As you may have guessed, in this article we are going to deploy an AVA module on IoT Edge to coordinate between the model server and the video feeds through an HTTP extension. You can bring your own model and call it through either an HTTP or a gRPC endpoint.


 


Intel OpenVINO toolkit


OpenVINO (Open Visual Inference and Neural network Optimization) is a toolkit provided by Intel to facilitate faster inference of deep learning models. It helps developers to create cost-effective and robust computer vision applications. It enables deep learning inference at the edge and supports heterogeneous execution across computer vision accelerators — CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA. It supports a large number of deep learning models out of the box.



(Figure: AVA working with the OpenVINO toolkit.)



OpenVINO uses its own Intermediate Representation (IR) format (link), similar to ONNX (link), and works with all your favourite deep learning tools like TensorFlow, PyTorch, etc. You can either convert your resulting model to OpenVINO or use and optimize the pretrained models available in the Intel model zoo. In this article we are specifically using the OpenVINO Model Server (OVMS) available through this Azure marketplace module. Out of the many models in their catalogue I am only using those that count faces, vehicles, and people. These are identified by their call signs: personDetection, faceDetection, and vehicleDetection.
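
To make the call-sign idea concrete, here is a minimal sketch of invoking the extension over HTTP with the Python requests library. The host name openvino, the port 4000, the model path, and the frame.jpg file are assumptions based on the deployment manifest shown later in this article, not an official API reference.

import requests

# Assumed endpoint: the "openvino" module from the deployment manifest,
# listening on --ams_port=4000, with the model call sign as the path.
ENDPOINT = "http://openvino:4000/vehicleDetection"

# Hypothetical sample frame grabbed from a camera.
with open("frame.jpg", "rb") as image:
    response = requests.post(
        ENDPOINT,
        headers={"Content-Type": "image/jpeg"},
        data=image.read(),
        timeout=10,
    )

response.raise_for_status()
# The extension replies with AVA-style JSON inferences (see the object model later).
print(response.json())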



Prerequisites


There are some hardware and software prerequisites for creating this platform.



  1. ONVIF PoE camera able to send encoded RTSP streams (link)

  2. Intel edge device with Ubuntu 18/20 (link)

  3. Active Azure subscription

  4. Development machine with VSCode & IoT Extension

  5. Working knowledge of Computer Vision & Model Serving


 


ONVIF PoE camera

ONVIF (the Open Network Video Interface Forum) is a global and open industry forum with the goal of facilitating the development and use of a global open standard for the interface of physical IP-based security products. ONVIF creates a standard for how IP products within video surveillance and other physical security areas can communicate with each other. This is different from proprietary equipment, and you can use all the open source libraries with them. A decent quality camera like the Reolink 410 is enough. Technically you can use a wireless camera, but I would not recommend that in a professional setting.
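
Because ONVIF devices answer standard WS-Discovery probes, you can enumerate cameras on the local network with nothing but the Python standard library. The sketch below is a rough illustration: the multicast address, port, and SOAP envelope come from the WS-Discovery specification, and the 3-second timeout is an arbitrary choice.

import socket
import uuid

# WS-Discovery Probe message; ONVIF cameras listen on this multicast group.
PROBE = f"""<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:w="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery">
  <e:Header>
    <w:MessageID>uuid:{uuid.uuid4()}</w:MessageID>
    <w:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</w:To>
    <w:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</w:Action>
  </e:Header>
  <e:Body><d:Probe/></e:Body>
</e:Envelope>"""

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.settimeout(3)
sock.sendto(PROBE.encode(), ("239.255.255.250", 3702))

# Each responder's XML reply carries its ONVIF service URLs.
try:
    while True:
        data, addr = sock.recvfrom(65535)
        print(addr[0], "answered the WS-Discovery probe")
except socket.timeout:
    pass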


 


Intel edge device with Ubuntu

This can be any device with one or more Intel CPUs. The Intel NUC makes a great low-cost IoT edge device, and even the cheap ones can handle around 10 cameras running at 30 fps. I am using a base model with a Celeron processor priced at around $130. The camera(s), the device, and some cables are all you need to implement this. Optionally, like me, you may need a PoE switch or network extender to get connected. Check that the PoE switch supplies at least 5 W per port and at least 20 Mbps of bandwidth per camera. You also need to install Ubuntu Linux.


 


Active Azure subscription

Surely you will need this one. Azure has an immense suite of products, and while ideally we would have access to everything, that may not be practically feasible. For practical purposes you might have to ask for access to particular services, meaning you have to know ahead of time exactly which ones you want to use. We will need the following:



  • Azure IoT Hub (link)

  • Azure Container Registry (link)

  • Azure Media Services (link)

  • Azure Video Analyzer (link)

  • Azure Stream Analytics (link) (future article)

  • Power BI / React App (link)(future article)

  • Azure Linux VM (link)(optional)


 


Computer Vision & Model Serving

Generally this prerequisite takes a lot of engineering and is expensive. Thankfully the OVMS extension from Intel is capable of serving high quality models from the Intel model zoo; without it you would have to build and host your own inference server (for example with Flask or raw sockets), and it wouldn’t be half as good. Whatever models you need, you can mention their call signs and they will be served for you at the edge by the extension. We will see more about this in the next article once things are set up. Note: we are building the platform in such a way that you can use Azure Custom Vision or Azure Machine Learning models on this same setup in the future with very minimal changes.


 


Reference Architecture


We are definitely living in interesting times when something as complex as video analytics is almost an out-of-the-box (OOTB) feature! Below is a ready-to-deploy architecture recommended and maintained by Microsoft Azure for video analytics. Technically, if you know what you are doing, you can deploy this entire thing with the push of a few buttons. However, I found it to be a bit too much for a Minimum Viable Product (MVP); it is ‘viable’ but not ‘minimum’, so to speak.


(Figure: Microsoft's ready-to-deploy reference architecture for video analytics.)


 


 


Here I present an alternate architecture that we followed and implemented, with results comparable to the official one. It is a stripped-down version of that architecture, contains only the components necessary for an AI NVR MVP, and is much easier to dissect.


(Figure: the alternate, stripped-down MVP architecture.)

Notice it looks somewhat similar to the logical flow of an AI NVR shown in one of the prior sections.


 


Inbound feed to the AI NVR


Before we go into the implementation I wanted to mention some aspects about the inputs and outputs of this system.



  • Earlier we said the system needs RTSP input, even though there are other streaming protocols such as RTMP (link), HTTP, etc. We chose RTSP mostly because it is optimized for viewing experience and scalability.

  • For development purposes it is recommended to use the excellent RTSP simulator provided by Microsoft.

  • To display the video being processed, use an RTSP-capable player such as VLC.

  • You can technically use a USB webcam and create your own RTSP stream (link). However, underneath it uses GStreamer, RTSPServer, and pipelines. From my experience you should be careful with this method, especially since you will need an understanding of hardware/software media encoding (e.g., H.264) and GStreamer dockerization.

  • One very interesting option that I used as a video source was the RTSP Camera Server app, which instantly turns your smartphone camera into an RTSP feed that your AI NVR can consume!

  • Last but not least, make sure your incoming feed has the resolution your CV algorithms need. The trick is not to use cameras that are too high-resolution; 4 to 5 MP is fine for maintaining pixel distribution parity with the available pretrained models. A quick way to script this check is shown in the sketch after this list.
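
Here is the sketch mentioned above: a quick probe of an incoming feed with OpenCV. It assumes the opencv-python package is installed; the URL is a placeholder in the same format as the example later in this article.

import cv2  # pip install opencv-python

# Placeholder RTSP URL; substitute your own camera's address.
URL = "rtsp://username:difficultpassword@192.168.0.35:554//h264Preview_01_main"

capture = cv2.VideoCapture(URL)
if not capture.isOpened():
    raise SystemExit("Could not open the RTSP stream")

# Report the properties your CV models care about.
width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = capture.get(cv2.CAP_PROP_FPS)
print(f"Resolution: {width}x{height} at {fps:.1f} fps")

ok, frame = capture.read()
print("Grabbed a frame" if ok else "No frame received")
capture.release()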


 


Outbound events from the AI NVR


In Azure Video Analyzer, each inference object, regardless of whether it uses the HTTP-based or the gRPC-based contract, follows the object model described below.


(Figure: Azure Video Analyzer inference object model.)



The example below contains a single Inference event with vehicleDetection. We will see more of these in a future article.

{
  "timestamp": 145819820073974,
  "inferences": [
    {
      "type": "entity",
      "subtype": "vehicleDetection",
      "entity": {
        "tag": {
          "value": "vehicle",
          "confidence": 0.9147264
        },
        "box": {
          "l": 0.6853116,
          "t": 0.5035262,
          "w": 0.04322505,
          "h": 0.03426218
        }
      }
    }
  ]
}
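
As an illustration of consuming this object model, the hedged sketch below walks the inferences array and prints entity detections above a confidence threshold. The sample_event.json file name and the 0.5 threshold are my own placeholders.

import json

def summarize_inferences(message: str, min_confidence: float = 0.5) -> None:
    """Print entity detections above a confidence threshold."""
    event = json.loads(message)
    for inference in event.get("inferences", []):
        if inference.get("type") != "entity":
            continue
        entity = inference["entity"]
        tag = entity["tag"]
        if tag["confidence"] < min_confidence:
            continue
        box = entity["box"]
        print(f'{inference["subtype"]}: {tag["value"]} '
              f'({tag["confidence"]:.2f}) at l={box["l"]:.3f}, t={box["t"]:.3f}')

# Example using an event saved from IoT Hub, such as the one shown above.
with open("sample_event.json") as handle:
    summarize_inferences(handle.read())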

Apart from the inference events there are many other types of events, such as the MediaSessionEstablished event, which occurs when you are recording the media to either a File Sink or a Video Sink.

[IoTHubMonitor] [9:42:18 AM] Message received from [avasampleiot-edge-device/avaedge]:
{
  "body": {
    "sdp": "SDP:nv=0rno=- 1586450538111534 1 IN IP4 XXX.XX.XX.XXrns=Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Serverrni=media/camera-300s.mkvrnt=0 0rna=tool:LIVE555 Streaming Media v2020.03.06rna=type:broadcastrna=control:*rna=range:npt=0-300.000rna=x-qt-text-nam:Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Serverrna=x-qt-text-inf:media/camera-300s.mkvrnm=video 0 RTP/AVP 96rnc=IN IP4 0.0.0.0rnb=AS:500rna=rtpmap:96 H264/90000rna=fmtp:96 packetization-mode=1;profile-level-id=4D0029;sprop-parameter-sets=XXXXXXXXXXXXXXXXXXXXXXrna=control:track1rn"
  },
  "applicationProperties": {
    "dataVersion": "1.0",
    "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/videoanalyzers/{ava-account-name}",
    "subject": "/edgeModules/avaedge/livePipelines/Sample-Pipeline-1/sources/rtspSource",
    "eventType": "Microsoft.VideoAnalyzers.Diagnostics.MediaSessionEstablished",
    "eventTime": "2021-04-09T09:42:18.1280000Z"
  }
}

The above examples show what some of the expected outputs look like. With that covered, let's see exactly how you can create a foundation for your AI NVR.


 


Implementation


In this section we will see how to use these tools to our benefit. For the Azure resources I will not go through the entire creation or installation process, as there are quite a few articles on the internet covering those; I shall only mention the main things to look out for. Here is an outline of the steps involved in the implementation.


 



  1. Create a resource group in Azure (link)

  2. Create an IoT hub in Azure (link)

  3. Create an IoT Edge device in Azure (link)

  4. Create and name a new user-assigned managed identity (link)

  5. Create Azure Video Analyzer Account (link)

  6. Create AVA Edge provisioning token

  7. Install Ubuntu 18/20 on the edge device

  8. Prepare the device for AVA module (link)

  9. Use Dev machine to turn on ONVIF camera(s) RTSP (link)

  10. Set a local static IP for the camera(s) (link)

  11. Use any of the players to confirm input streaming video (link)

  12. Note down RTSP url(s), username(s), and password(s)

  13. Install docker on the edge device 

  14. Install VSCode on development machine 

  15. Install IoT Edge runtime on the edge device (link)

  16. Provision the device to Azure IoT using connection string (link)

  17. Check that the IoT Edge runtime is running on the edge device and in the portal

  18. Create an IoT Edge solution in VSCode (link)

  19. Add env file to solution with AVA/ACR/Azure details

  20. Add Intel OVMS, AVA Edge, and RTSP Simulator modules to manifest 

  21. Create deployment from template (link)

  22. Deploy the solution to the device 

  23. Check Azure portal for deployed modules running


 


Let’s go through some of the items in the list in detail.


 


Steps 1 and 2 are common to many use cases and can be done by following this. For 3 you need to make sure you are creating an ‘IoT Edge’ device and not a simple IoT device. Follow the link for 4 to create a managed identity. For 5, use the interface to create an AVA account. Enter a name for your Video Analyzer account; the name must be all lowercase letters or numbers with no spaces, 3 to 24 characters in length. Fill in the proper subscription, resource group, storage account, and identity from the previous steps. You should now have a running AVA account. Use these steps to create an ‘Edge Provisioning Token’ for step 6. Remember, this is just for AVA Edge, not to be confused with provisioning through DPS. For 7, Ubuntu Linux is good; support for Windows is a work in progress. After you create the account, keep the following information on standby.

AVA_PROVISIONING_TOKEN="<Provisioning token>"


Step 8, although simple, is an important step in the process. All you actually need to do is run the command below.

bash -c "$(curl -sL https://aka.ms/ava-edge/prep_device)"


However, underneath this there is a lot going on in preparation for the NVR. The Azure Video Analyzer module should be configured to run on the IoT Edge device with a non-privileged local user account. The module needs certain local folders for storing application configuration data. The RTSP camera simulator module needs video files with which it can synthesize a live video feed. The prep-device script in the above command automates the tasks of creating input and configuration folders, downloading video input files, and creating user accounts with correct privileges. 


 


Steps 9, 10, and 11 are for setting up your ONVIF camera(s). The things to note here are that you need to set a static class C IP address for each camera, and use HTTPS along with difficult-to-guess passwords. Again, take extra caution if you are doing this with a wireless camera. I use VLC to confirm the live feed from each camera. You may think this is obvious, or choose to automate it, but I have seen a lot of issues with both. I personally recommend that clients confirm the feed and frame rate from every camera manually using the URLs. VLC is my player of choice, but you have many more options.


 


Before you bring Azure into the picture, you must have all your RTSP URLs ready and tested, per step 12. Here is an example RTSP URL for the main feed. Notice the port number ‘554’ and the encoding ‘h264’.

rtsp://username:difficultpassword@192.168.0.35:554//h264Preview_01_main


For 13 to 18, keep going by the book (see the links in the list above). For step 19, fill in your details in the following block and create the ‘.env’ file.

SUBSCRIPTION_ID="<Subscription ID>"
RESOURCE_GROUP="<Resource Group>"
AVA_PROVISIONING_TOKEN="<Provisioning token>"
VIDEO_INPUT_FOLDER_ON_DEVICE="/home/localedgeuser/samples/input"
VIDEO_OUTPUT_FOLDER_ON_DEVICE="/var/media"
APPDATA_FOLDER_ON_DEVICE="/var/lib/videoAnalyzer"
CONTAINER_REGISTRY_USERNAME_myacr="<your container registry username>"
CONTAINER_REGISTRY_PASSWORD_myacr="<your container registry password>"


For 20, add the following module definitions to your deployment JSON. This covers Azure AVA, Intel OVMS, and the RTSP simulator. Also follow this for more details.

"modules": {
          "avaedge": {
            "version": "1.1",
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "mcr.microsoft.com/media/video-analyzer:1",
              "createOptions": {
                "Env": [
                  "LOCAL_USER_ID=1010",
                  "LOCAL_GROUP_ID=1010"
                ],
                "HostConfig": {
                  "Dns": [
                    "1.1.1.1"
                  ],
                  "LogConfig": {
                    "Type": "",
                    "Config": {
                      "max-size": "10m",
                      "max-file": "10"
                    }
                  },
                  "Binds": [
                    "$VIDEO_OUTPUT_FOLDER_ON_DEVICE:/var/media/",
                    "$APPDATA_FOLDER_ON_DEVICE:/var/lib/videoanalyzer"
                  ]
                }
              }
            }
          },
"openvino": {
            "version": "1.0",
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "marketplace.azurecr.io/intel_corporation/open_vino:latest",
              "createOptions": {
                "HostConfig": {
                  "Dns": [
                    "1.1.1.1"
                  ]
                },
                "ExposedPorts": {
                  "4000/tcp": {}
                },
                "Cmd": [
                  "/ams_wrapper/start_ams.py",
                  "--ams_port=4000",
                  "--ovms_port=9000"
                ]
              }
            }
          },
"rtspsim": {
            "version": "1.0",
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "mcr.microsoft.com/lva-utilities/rtspsim-live555:1.2",
              "createOptions": {
                "HostConfig": {
                  "Dns": [
                    "1.1.1.1"
                  ],
                  "LogConfig": {
                    "Type": "",
                    "Config": {
                      "max-size": "10m",
                      "max-file": "10"
                    }
                  },
                  "Binds": [
                    "$VIDEO_INPUT_FOLDER_ON_DEVICE:/live/mediaServer/media"
                  ]
                }
              }
            }
          }
}


Steps 21 to 23 are again the usual steps for all IoT solutions; once you deploy the template, you should have the following modules running, as below.


(Figure: the deployed modules running on the edge device.)


 


There, we have created the foundation for our Azure IoT Edge device to perform as a powerful AI NVR. Here ‘avaedge’ is the Azure Video Analyzer service, ‘openvino’ provides the model server extension, and ‘rtspsim’ creates the simulated ‘live’ input video feed. In the next article we will see how to use this setup to detect faces, vehicles, and more.


 


Future Work


 


I hope you enjoyed this article on setting up an AI-enabled NVR for video analytics applications. We love to share our experiences and get feedback from the community on how we are doing. Look out for upcoming articles, and have a great time with Microsoft Azure.

