Microsoft Search and More – Mid-Day Cafe 02-22-2021 Recording




On 02/22 at 12:00 noon Eastern, we hosted Microsoft’s Bill Baer, Senior Technical Product Manager for Microsoft Search. Search at Microsoft is a rapidly evolving service built on the power of the Microsoft Graph, and, properly leveraged within an organization, search-driven applications can be transformational. Bill brought us the latest in Microsoft Search to help organizations unlock the potential in their Microsoft 365 data and more. Check out the recording below:


 


 


Resources:


Microsoft Search Resources from Bill Baer:



News in 2:



Upcoming Mid-Day Cafe Webcast Schedule:



  • March 1st – Dan Holme, Community/Yammer

  • March 8th – Mark Kashman, Microsoft Lists

  • March 15th – Karuana Gatimu, Teams Adoption and Governance


Keep up to date with Mid-Day Café:



 


Have questions/comments/suggestions/requests for the Mid-Day Café team? Post them to our Mailbag! Click here to access the Mid-Day Café Mailbag form.


 


Thanks for visiting!


Sam Brown, Microsoft Teams Technical Specialist



Quick Tip: Does my NIC support VMMQ?


Hi Folks – Most often, when a virtual machine or container is receiving network traffic, the traffic passes through the virtualization stack in the host. This requires host (parent partition) CPU cycles.


 


Synthetic Data Path


 


If the amount of traffic being processed exceeds what a single core can handle, the received network traffic must be spread across multiple CPUs. This “spreading” can occur in the operating system (at the expense of more CPU cycles) or in hardware (the NIC) as an offload. In hardware, we call this capability Virtual Machine Multi-Queue (VMMQ). The benefit of VMMQ is two-fold:



  • It allows you to reach higher throughput into your virtual systems (VMs/Containers)

  • It reduces the cost (in terms of host resources) of processing that network traffic


VMMQ is a combined feature of the NIC, driver/firmware, and operating system. All of these must support VMMQ and be configured properly for you to leverage this offload.


 


To identify whether your adapter supports VMMQ, use the Get-NetAdapterAdvancedProperty cmdlet to look for the advanced registry property *RSSOnHostVPorts, displayed as “Virtual Switch RSS.” We won’t go into what the naming means, but suffice it to say that if you see this capability displayed using the command below, your NIC and driver/firmware combination supports VMMQ.
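
A minimal sketch of that check in PowerShell (the adapter name “Ethernet” is a placeholder; drop -Name to query every adapter):

# If this returns the *RSSOnHostVPorts property, the NIC and
# driver/firmware combination supports VMMQ
Get-NetAdapterAdvancedProperty -Name "Ethernet" -RegistryKeyword "*RSSOnHostVPorts"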


 




 


Now you simply need to follow the instructions in this article to configure it.


 


Hope this quick tip was helpful!

Teams Platform for Gov Quick Start | FREE Virtual Event




 


Learn more about accelerating and automating work processes in the US Federal Government & DoD using Microsoft Teams!


 


Click here to register for this FREE event!!


 


You are invited to an exclusive experience with Microsoft Teams Engineering.  During this event we will show how you can leverage your investment in Microsoft Teams to drive real innovation in your organization using the Teams Platform for US Gov.


 


We will showcase how to extend Microsoft Teams into custom applications that accelerate and automate your processes. We will also highlight best practices from other government organizations, perform live demos surrounding real life government use cases, and tell you how to get started on your journey right away!



Following the event, we’ll connect with you (for free!) to understand your specific organizational needs.


 


Event Details


 


Available Dates/Times:



  • Wednesday, February 24th from 1:00pm to 2:30pm EST

  • Tuesday, March 2nd from 2:00pm to 3:30pm EST

  • Tuesday, March 16th from 2:00pm to 3:30pm EST


 


Agenda:  



  • How to Extend Microsoft Teams into Custom Applications that Accelerate and Automate Your Processes

  • Live Demos of Real Life Gov Use Cases

  • Next Steps On How to Start Implementing Solutions Now


 


Presenters:



  • Dave Jennings, Principal Program Manager, Microsoft Teams Engineering

  • Joshua Armant, Technical Customer Success Manager, Microsoft Federal


 


Register Here:


Utility scams are snow joke


This article was originally posted by the FTC.

Winter often brings the blues, but when it brings Arctic blasts, burst pipes, power outages, and even icicles indoors, scammers aren’t far behind with weather-related scams.

Scammers know severe weather may have shut off your electricity, heat, and water and might pose as your utility company. They might call to say that they’re sorry your power went out and offer a reimbursement, but first they need your bank account information. They might email you to say that there’s an error in their system, and you have to give them personal information so they can turn your gas on again. They could even threaten to leave your utilities shut off if you don’t send them money immediately. But those are all lies.

If you get one of these calls, texts, or emails, here are some things you can do:

  • If you get a call, thank the caller and hang up. Never call a number left in a voicemail, text, or email. Instead, if you’re worried, contact the utility company directly using the number on your bill or on the company’s website. Verify if the message came from them.
  • If you get a call out of the blue and the caller claims you have to pay a past due bill or your services will be shut off, never give banking information over the phone. To pay your bill over the phone, always place the call to a number you know is legitimate.
  • Utility companies don’t demand payment information by email, text, or phone. And they won’t force you to pay by phone as your only option.
  • If the caller tells you to pay by gift card, cash reload card, money transfer, or cryptocurrency, it’s a scam. Every time. No matter what they say.

It’s cold out there. Help protect your community by reporting any scams you see at ReportFraud.ftc.gov.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Ombromanie: Creating Hand Shadow stories with Azure Speech and TensorFlow.js Handposes


 


Have you ever tried to cast hand shadows on a wall? It is the easiest thing in the world, and yet to do it well requires practice and just the right setup. To cultivate your #cottagecore aesthetic, try going into a completely dark room with just one lit candle, and casting hand shadows on a plain wall. The effect is startlingly dramatic. What fun!


 




 



Even a tea light suffices to create a great effect



In 2020, and now into 2021, many folks are returning to basics as they look around their houses, reopening dusty corners of attics and basements and remembering the simple crafts they used to love. Papermaking, anyone? All you need is a few tools and torn-up, recycled paper. Pressing flowers? All you need is newspaper, some heavy books, and patience. And hand shadows? Just a candle.


 




 



This TikTok creator has thousands of views for their hand shadow tutorials



But what’s a developer to do when trying to capture that #cottagecore vibe in a web app?


High Tech for the Cottage


While exploring the art of hand shadows, I wondered whether some of the recent work I had done for body poses might be applicable to hand poses. What if you could tell a story on the web using your hands, and somehow save a video of the show and the narrative behind it, and send it to someone special? In lockdown, what could be more amusing than sharing shadow stories between friends or relatives, all virtually?


 






 



Hand shadow casting is a folk art probably originating in China; if you go to tea houses with stage shows, you might be lucky enough to view one like this!



A Show Of Hands


When you start researching hand poses, it’s striking how much content there is on the web on the topic. There has been work since at least 2014 on creating fully articulated hands within the research, simulation, and gaming sphere:


 




 



MSR throwing hands



There are dozens of handpose libraries already on GitHub:


 



There are many applications where tracking hands is a useful activity:


 


• Gaming
• Simulations / Training
• “Hands free” remote interaction with devices by moving the body
• Assistive technologies
• TikTok effects 🏆
• Useful things like Accordion Hands apps


 


One of the more interesting new libraries, handsfree.js, offers an excellent array of demos in its effort to move to a hands free web experience:


 




 



Handsfree.js, a very promising project



As it turns out, hands are pretty complicated things. They each include 21 keypoints (vs. PoseNet’s 17 keypoints for an entire body). Building a model to support inference for such a complicated grouping of keypoints has proven challenging.


 




 


There are two main libraries available to the web developer when incorporating hand poses into an app: TensorFlow.js’s handposes, and MediaPipe’s. HandsFree.js uses both, to the extent that they expose APIs. As it turns out, neither TensorFlow.js nor MediaPipe’s handposes are perfect for our project. We will have to compromise.


 




  • TensorFlow.js’s handposes allow access to each hand keypoint and the ability to draw the hand to canvas as desired. HOWEVER, it currently supports only single-hand poses, which is not optimal for good hand shadow shows.




  • MediaPipe’s handpose models (which are used by TensorFlow.js) do allow for dual hands, BUT its API does not allow much styling of the keypoints, so drawing shadows with it is not obvious.





One other library, fingerpose, is optimized for finger spelling in a sign language context and is worth a look.



Since it’s more important to use the Canvas API to draw custom shadows, we are obliged to use TensorFlow.js, hoping that either it will soon support multiple hands OR handsfree.js helps push the envelope to expose a more styleable hand.


 


Let’s get to work to build this app.


Scaffold a Static Web App


As a Vue.js developer, I always use the Vue CLI to scaffold a standard app with vue create my-app. I set up a basic app with two routes: Home and Show. Since this is going to be deployed as an Azure Static Web App, I follow my standard practice of placing my app files in a folder named app and creating an api folder for an Azure function that stores a key (more on this in a minute).
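
A sketch of the resulting layout (only the app and api folders are prescribed by the setup described here; the nested names are illustrative):

my-app/
├── api/        # Azure Functions, e.g. the getKey function shown later
└── app/        # the Vue app itself (src/, public/, package.json)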


 


In my package.json file, I import the important packages for using TensorFlow.js and the Cognitive Services Speech SDK in this app. Note that TensorFlow.js has divided its imports into individual packages:


 



"@tensorflow-models/handpose": "^0.0.6",
"@tensorflow/tfjs": "^2.7.0",
"@tensorflow/tfjs-backend-cpu": "^2.7.0",
"@tensorflow/tfjs-backend-webgl": "^2.7.0",
"@tensorflow/tfjs-converter": "^2.7.0",
"@tensorflow/tfjs-core": "^2.7.0",

"microsoft-cognitiveservices-speech-sdk": "^1.15.0",


 



Set up the View


We will draw an image of a hand, as detected by TensorFlow.js, onto a canvas, superimposed onto a video supplied by a webcam. In addition, we will redraw the hand to a second canvas (shadowCanvas), styled like shadows:


 



<div class="canvas-wrapper column is-half">
  <canvas id="output" ref="output"></canvas>
  <video
    id="video"
    ref="video"
    playsinline
    style="
      -webkit-transform: scaleX(-1);
      transform: scaleX(-1);
      visibility: hidden;
      width: auto;
      height: auto;
      position: absolute;
    "
  ></video>
</div>
<div class="column is-half">
  <canvas
    class="has-background-black-bis"
    id="shadowCanvas"
    ref="shadowCanvas"
  ></canvas>
</div>


 



Load the Model, Start Keyframe Input


Working asynchronously, load the Handpose model. Once the backend is set up and the model is loaded, load the video via the webcam, and start watching the video’s keyframes for hand poses. It’s important at these steps to ensure error handling in case the model fails to load or there’s no webcam available.


 



async mounted() {
  await tf.setBackend(this.backend);
  // async load model, then load video, then pass it to start landmarking
  this.model = await handpose.load();
  this.message = "Model is loaded! Now loading video";
  let webcam;
  try {
    webcam = await this.loadVideo();
  } catch (e) {
    this.message = e.message;
    throw e;
  }

  this.landmarksRealTime(webcam);
},



 



Set up the Webcam


Still working asynchronously, set up the camera to provide a stream of images:


 



async setupCamera() {
  if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
    throw new Error(
      "Browser API navigator.mediaDevices.getUserMedia not available"
    );
  }
  this.video = this.$refs.video;
  const stream = await navigator.mediaDevices.getUserMedia({
    video: {
      facingMode: "user",
      width: VIDEO_WIDTH,
      height: VIDEO_HEIGHT,
    },
  });

  return new Promise((resolve) => {
    this.video.srcObject = stream;
    this.video.onloadedmetadata = () => {
      resolve(this.video);
    };
  });
},



 



Design a Hand to Mirror the Webcam’s


Now the fun begins, as you can get creative in drawing the hand on top of the video. This landmarking function runs on every keyframe, watching for a hand to be detected and drawing lines onto the canvases – red on top of the video, and white on top of the shadowCanvas. Since the shadowCanvas background is white and the hand is drawn in white as well, the viewer only sees the offset shadow, in fuzzy black with rounded corners. The effect is rather spooky!


 



async landmarksRealTime(video) {
  // start showing landmarks
  this.videoWidth = video.videoWidth;
  this.videoHeight = video.videoHeight;

  // set up skeleton canvas
  this.canvas = this.$refs.output;

  // set up shadowCanvas
  this.shadowCanvas = this.$refs.shadowCanvas;

  this.ctx = this.canvas.getContext("2d");
  this.sctx = this.shadowCanvas.getContext("2d");

  // paint to main
  this.ctx.clearRect(0, 0, this.videoWidth, this.videoHeight);
  this.ctx.strokeStyle = "red";
  this.ctx.fillStyle = "red";
  // mirror the drawing horizontally, matching the CSS-mirrored video
  this.ctx.translate(this.shadowCanvas.width, 0);
  this.ctx.scale(-1, 1);

  // paint to shadow box
  this.sctx.clearRect(0, 0, this.videoWidth, this.videoHeight);
  this.sctx.shadowColor = "black";
  this.sctx.shadowBlur = 20;
  this.sctx.shadowOffsetX = 150;
  this.sctx.shadowOffsetY = 150;
  this.sctx.lineWidth = 20;
  this.sctx.lineCap = "round";
  this.sctx.fillStyle = "white";
  this.sctx.strokeStyle = "white";

  this.sctx.translate(this.shadowCanvas.width, 0);
  this.sctx.scale(-1, 1);

  // with the canvases set up, start framing landmarks
  this.frameLandmarks();
},



 



For Each Frame, Draw Keypoints


 


As the keyframes progress, the model predicts new keypoints for each of the hand’s elements, and both canvases are cleared and redrawn.


 



async frameLandmarks() {
  const predictions = await this.model.estimateHands(this.video);

  if (predictions.length > 0) {
    const result = predictions[0].landmarks;
    this.drawKeypoints(
      this.ctx,
      this.sctx,
      result,
      predictions[0].annotations
    );
  }
  requestAnimationFrame(this.frameLandmarks);
},



 



Draw a Lifelike Hand


Since TensorFlow.js allows you direct access to the keypoints of the hand and the hand’s coordinates, you can manipulate them to draw a more lifelike hand. Thus we can redraw the palm to be a polygon, rather than resembling a garden rake with points culminating in the wrist.


 


Re-identify the fingers and palm:


 



fingerLookupIndices: {
  thumb: [0, 1, 2, 3, 4],
  indexFinger: [0, 5, 6, 7, 8],
  middleFinger: [0, 9, 10, 11, 12],
  ringFinger: [0, 13, 14, 15, 16],
  pinky: [0, 17, 18, 19, 20],
},
palmLookupIndices: {
  palm: [0, 1, 5, 9, 13, 17, 0, 1],
},


 



…and draw them to screen:


 



const fingers = Object.keys(this.fingerLookupIndices);
for (let i = 0; i < fingers.length; i++) {
  const finger = fingers[i];
  const points = this.fingerLookupIndices[finger].map(
    (idx) => keypoints[idx]
  );
  this.drawPath(ctx, sctx, points, false);
}
const palmArea = Object.keys(this.palmLookupIndices);
for (let i = 0; i < palmArea.length; i++) {
  const palm = palmArea[i];
  const points = this.palmLookupIndices[palm].map(
    (idx) => keypoints[idx]
  );
  this.drawPath(ctx, sctx, points, true);
}
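
Both loops call a drawPath helper that the excerpts don’t include. Here is a minimal sketch of such a helper, assuming each keypoint is an [x, y, z] array as returned by the handpose model; the name and signature come from the calls above, and the body is an assumption modeled on the standard Canvas Path2D pattern:

drawPath(ctx, sctx, points, closePath) {
  // trace the keypoint sequence as a single path
  const region = new Path2D();
  region.moveTo(points[0][0], points[0][1]);
  for (let i = 1; i < points.length; i++) {
    region.lineTo(points[i][0], points[i][1]);
  }
  if (closePath) {
    // close the palm polygon
    region.closePath();
  }
  // red skeleton on the main canvas; white stroke on the shadowCanvas,
  // where only its offset black shadow is visible
  ctx.stroke(region);
  sctx.stroke(region);
},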


 



With the models and video loaded, keyframes tracked, and hands and shadows drawn to canvas, we can implement a speech-to-text SDK so that you can narrate and save your shadow story.


To do this, get a key from the Azure portal for Speech Services by creating a Speech service.
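
If you prefer the command line to the portal, here is a minimal sketch using the Azure CLI (the resource name, resource group, region, and free F0 SKU are placeholders to adjust):

# Create a Speech resource; list its keys afterward with
# az cognitiveservices account keys list
az cognitiveservices account create --name my-speech --resource-group my-group --kind SpeechServices --sku F0 --location eastus --yes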


 




 


You can connect to this service by importing the sdk:


 


import * as sdk from "microsoft-cognitiveservices-speech-sdk";


 


…and start audio transcription after obtaining an API key, which is fetched from an Azure function in the /api folder. That function returns the key, stored as an application setting of the Azure Static Web App where the app is hosted.


 



async startAudioTranscription() {
  try {
    // get the key from the /api/getKey Azure function
    // (axios is imported at the top of the component)
    const response = await axios.get("/api/getKey");
    this.subKey = response.data;

    // configure the Speech SDK
    let speechConfig = sdk.SpeechConfig.fromSubscription(
      this.subKey,
      "eastus"
    );
    let audioConfig = sdk.AudioConfig.fromDefaultMicrophoneInput();
    this.recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);

    this.recognizer.recognized = (s, e) => {
      this.text = e.result.text;
      this.story.push(this.text);
    };

    this.recognizer.startContinuousRecognitionAsync();
  } catch (error) {
    this.message = error;
  }
},



 



In this function, the SpeechRecognizer gathers text in chunks that it recognizes and organizes into sentences. That text is printed into a message string and displayed on the front end.
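
The /api/getKey function itself can be tiny. A minimal sketch of a JavaScript Azure Function serving this purpose, assuming the key lives in an application setting named SPEECH_KEY (the setting name and file path are assumptions, not the article’s exact code):

// api/getKey/index.js
module.exports = async function (context, req) {
  context.res = {
    // SPEECH_KEY is a hypothetical app setting configured in the portal
    body: process.env["SPEECH_KEY"],
  };
};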


Display the Story


In this last part, the output cast onto the shadowCanvas is saved as a stream and recorded using the MediaRecorder API:


 



const stream = this.shadowCanvas.captureStream(60); // 60 FPS recording
this.recorder = new MediaRecorder(stream, {
  mimeType: "video/webm;codecs=vp9",
});
this.recorder.ondataavailable = (e) => {
  this.chunks.push(e.data);
};
this.recorder.start(500);


 



…and displayed below as a video with the storyline in a new div:


 



const video = document.createElement("video");
const fullBlob = new Blob(this.chunks);
const downloadUrl = window.URL.createObjectURL(fullBlob);
video.src = downloadUrl;
document.getElementById("story").appendChild(video);
video.autoplay = true;
video.controls = true;
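
One step not shown above is ending the capture before the video is assembled. A minimal sketch, assuming a stop button bound to a hypothetical method like this:

stopRecording() {
  // stop gathering chunks and end continuous speech recognition
  this.recorder.stop();
  this.recognizer.stopContinuousRecognitionAsync();
},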


 



This app can be deployed as an Azure Static Web App using the excellent Azure plugin for Visual Studio Code. And once it’s live, you can tell durable shadow stories!


 




 



Try Ombromanie here. The codebase is available here.



Take a look at Ombromanie in action:


 





 


Learn more about AI on Azure
Azure AI Essentials Video covering speech and language
Azure free account sign-up