This article is contributed. See the original author and article here.
There’s no question about it: People are the make-or-break element of a high-performing IT organization. To stay competitive, organizations must ensure that they have the right people with the right skills in the right positions. But with accelerating tech advancements, doing so has never been tougher.
IDC data reveals that IT skills shortages are widening in both scope and severity. According to 811 IT leaders responding to the IDC 2024 IT Skills Survey (January 2024), organizations are experiencing a wide range of negative impacts relating to a dearth of enterprise skills. Nearly two-thirds (62%) of IT leaders tell IDC that a lack of skills has resulted in missed revenue growth objectives. More than 60% say that it has led to quality problems and a loss of customer satisfaction overall.
As a result, IDC now predicts that by 2026, more than 90% of organizations worldwide will feel similar pain, amounting to some $5.5 trillion in losses caused by product delays, stalled digital transformation journeys, impaired competitiveness, missed revenue goals, and product quality issues.
To maintain competitiveness, organizations everywhere must improve IT training. They must go beyond just putting more modern IT training platforms and learning systems in place, however. Rather, they must invest in instilling and promoting a culture of learning within the organization — one that values ongoing, continuous learning and rewards the accomplishments of learners. Vendor- and technology-based credentials have long been core to organizational training efforts, and for good reason: They provide organizations with a way to ascertain important technical skills. And full-vendor credentials still matter, of course. But IDC data shows growing excitement about badges and micro-credentials as a means of verifying more specific scenario- or project-based skills.
When less is more
Without a doubt, vendor certifications remain a critical tool for hiring new professionals and upskilling existing ones. IDC data reveals that they can enhance career mobility, engagement, and salary. In fact, according to IDC’s 2024 Full and Micro Credential Survey (March 2024), some 70% of IT leaders say that certifications are important when hiring, regardless of how experienced candidates are in their careers. But organizations these days also find specific scenario- or project-based micro-credentials to be increasingly appealing as a complement to full-fledged vendor credentials.
To this point, more than 60% of organization leaders tell IDC that they use micro-credentials to test and validate specific skills related to real-world technical scenarios. They also say that micro-credentials help them prepare themselves and their employees for in-demand job roles (55%) and fulfill upskilling and reskilling needs (53%).
[Figure] Q. What are the main factors driving the consideration of the use of micro-credentials? (Source: IDC 2024 Digital Skilling Survey, March 2024)
Moving forward
As the IT skills shortage continues to expand and worsen, IDC predicts that global IT leaders will face increasing pressure to get the right people with the right skills into the right roles. Adding to the pressure is the rapidly growing demand for AI skills. According to IDC’s Future Enterprise and Resiliency Survey, 58% of CEOs worldwide are deeply concerned about their ability to deliver on planned AI initiatives over the next 12 months.
A full-featured IT training program that lets organizations and employees hone the skills they have makes all the difference. Credentials, whether full or project based, are a key part of any thoughtful IT training and continuous skilling initiative. Full credentials and micro-credentials together allow organizations to determine where skill gaps lie and create a plan of action to skill up employees on the technologies and projects on which they rely.
Organizations should focus on involving key stakeholders across departments to get their buy-in and partner with established credentialing organizations to add credibility. They should design clear career pathways that show how reskilling with micro-credentials and certifications can align with company goals and lead to faster promotions. They should foster a culture of continuous learning to keep skills current as well as update the program with industry trends and emerging technologies to maintain its relevance. Finally, they should regularly assess the program’s effectiveness in closing the skills gap and adjust as necessary.
Investing in continuing education for your employees just makes plain sense. IDC’s research has long shown that companies that invest in training are better able to retain their best, most talented employees.
Discover some key insights from the Work Trend Index report that can impact small and medium-sized business leaders, as well as the actions you can take to prepare your organization for AI and better leverage its benefits so you can maintain your competitive edge.
More often than we’d like to admit, customers come to us with cases where a Consumption logic app was unintentionally deleted. Although you can somewhat easily recover a deleted Standard logic app, you can’t get the run history back, nor do the triggers use the same URL. For more information, see GitHub – Logic-App-STD-Advanced Tools.
However, for a Consumption logic app, this process is much more difficult and might not always work correctly. The definition for a Consumption logic app isn’t stored in any accessible Azure storage account, nor can you run PowerShell cmdlets for recovery. So, we highly recommend that you have a repository or backup to store your current work before you continue. By using Visual Studio, DevOps repos, and CI/CD, you have the best tools to keep your code updated and your development work secure for a disaster recovery scenario. For more information, see Create Consumption workflows in multitenant Azure Logic Apps with Visual Studio Code.
Despite these challenges, one possibility exists for you to retrieve the definition, but you can’t recover the workflow run history or the trigger URL. One of our partners documented the following technique a few years ago and described it as a “recovery” method:
We’re publishing the approach now as a blog post, with the disclaimer that this method doesn’t completely recover your Consumption logic app; it only retrieves the definition of the lost or deleted resource. The associated records aren’t restored because they’re permanently destroyed, as the warnings describe when you delete a Consumption logic app in the Azure portal.
Recommendations
We recommend applying locks to your Azure resources and having some form of Continuous Integration/Continuous Deployment (CI/CD) solution in place. Locking your resources is extremely important and easy, not only to limit user access, but also to protect resources from accidental deletion.
To lock a logic app, on the resource menu, under Settings, select Locks. Create a new lock, and select either Read-only or Delete to prevent edit or delete operations. If anyone tries to delete the logic app, either accidentally or on purpose, the operation fails with an error.
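You can also create the same delete lock declaratively. The following ARM template fragment is only a sketch, assuming a `logicAppName` template parameter that holds the name of your workflow resource:

```json
{
  "type": "Microsoft.Authorization/locks",
  "apiVersion": "2020-05-01",
  "name": "do-not-delete-logic-app",
  "scope": "[format('Microsoft.Logic/workflows/{0}', parameters('logicAppName'))]",
  "properties": {
    "level": "CanNotDelete",
    "notes": "Protects the logic app from accidental deletion."
  }
}
```

Deploying the lock alongside the workflow in the same template keeps the protection in place every time your CI/CD pipeline runs.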
Before you start, be aware of the following limitations:
- If the Azure resource group is deleted, the activity log is also deleted, which means that no recovery is possible for the logic app definition.
- Run history won’t be available.
- The trigger URL will change.
- Not all API connections are restored, so you might have to recreate them in the workflow designer.
- If API connections are deleted, you must create new API connections.
- If a certain amount of time has passed, the changes might no longer be available.
Procedure
In the Azure portal, browse to the resource group that contained your deleted logic app.
On the resource group menu, select Activity log.
In the operations table, in the Operation name column, find the operation named Delete Workflow.
Select the Delete Workflow operation. On the pane that opens, select the Change history tab. This tab shows what was modified, for example, versioning in your logic app.
As previously mentioned, if the Changed Property column doesn’t contain any values, retrieving the workflow definition is no longer possible.
In the Changed Property column, select .
You can now view your logic app workflow’s JSON definition.
Copy this JSON definition into a new logic app resource.
Although there’s no button that restores this definition for you, the workflow should load without problems when you paste it into a new logic app.
You can also use this JSON workflow definition to create a new ARM template and deploy the logic app to an Azure resource group with the new connections or by referencing the previous API connections.
If you’re restoring this definition in the Azure portal, you must go to the logic app’s code view and paste your definition there.
The complete JSON definition contains all the workflow’s properties, so if you directly copy and paste everything into code view, the portal shows an error because you’re copying the entire resource definition. However, in code view, you only need the workflow definition, which is the same JSON that you’d find on the Export template page.
So, you must copy the definition JSON object’s contents and the parameters object’s contents, paste them into the corresponding objects in your new logic app, and save your changes.
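To illustrate that last step, here’s a minimal Python sketch (not part of the original procedure) that pulls just the definition and parameters objects out of a full workflow resource JSON, assuming the resource shape you’d see on the Export template page:

```python
import json

def extract_workflow(full_resource: dict) -> dict:
    """Keep only the pieces that the logic app's code view expects:
    the workflow definition and its parameters."""
    props = full_resource["properties"]
    return {
        "definition": props["definition"],
        "parameters": props.get("parameters", {}),
    }

# Example with a stripped-down resource payload
full_resource = {
    "name": "my-logic-app",
    "type": "Microsoft.Logic/workflows",
    "properties": {
        "definition": {"triggers": {}, "actions": {}},
        "parameters": {"$connections": {"value": {}}},
    },
}
print(json.dumps(extract_workflow(full_resource), indent=2))
```

Pasting the output into the new logic app’s code view avoids the error you’d get from pasting the entire resource definition.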
In this scenario, the API connection for the Azure Resource Manager connector was lost, so we have to recreate the connection by adding a new action. If the connection ID is the same, the action should re-reference the connection.
After we save and refresh the designer, the previous operation loads successfully, and nothing is lost. Now you can delete the actions that you created to reprovision the connections, and you’re all set to go.
We hope that this guidance helps you mitigate such occurrences and speeds up your work.
Today we are thrilled to announce the latest milestone in our journey towards modernizing customer service: Microsoft Dynamics 365 Contact Center, a Copilot-first contact center solution that delivers generative AI to every customer engagement channel. With general availability on July 1, this standalone Contact Center as a Service (CCaaS) solution enables customers to maximize their current investments by connecting to preferred customer relationship management systems (CRMs) or custom apps.
Modernizing service experiences with generative AI
Customer service expectations are higher than ever. It’s not only frustrating for customers to deal with long wait times, being transferred to the wrong agent or having to repeat themselves multiple times — it’s detrimental to business. When people have poor customer service experiences, over half of them end up spending less or decide to take their business elsewhere (Qualtrics).
Generative AI is transforming customer service and revolutionizing the way contact centers operate — from delivering rich experiences across digital and voice channels that enable customers to resolve their own needs, to equipping agents with relevant context within the flow of work, and ultimately unifying operations to drive efficiency and reduce costs.
We have experienced the transformational impact of generative AI firsthand with Microsoft’s Customer Service and Support (CSS) team, one of the largest customer service organizations in the world. Before the support team migrated to Microsoft’s own tools, CSS was previously using 16 different systems and over 500 individual tools — slowing down service, hindering collaboration and producing inefficient workflows. With Copilot as part of the solution, the CSS team achieved a 12 percent decrease in average handle time for chat engagements and 13 percent decrease in agents requiring peer assistance to resolve an incident. And more broadly, CSS has seen a 31 percent increase in first call resolution and a 20 percent reduction in missed routes.
Dynamics 365 Contact Center
Applying learnings and insights from our own Copilot usage, coupled with multi-year investments in voice and digital channels, Dynamics 365 Contact Center infuses generative AI throughout the contact center workflow — spanning the channels of communication, self-service, intelligent routing, agent-assisted service and operations to help contact centers solve problems faster, empower agents and reduce costs.
Additionally, Dynamics 365 Contact Center is built natively on the Microsoft cloud to deliver extensive scalability and reliability across voice, digital channels and routing while at the same time allowing organizations to retain their existing investments in CRM or custom apps.
Key Dynamics 365 Contact Center capabilities include:
Next-generation self-service: With sophisticated pre-integrated Copilots for digital and voice channels that drive context-aware, personalized conversations, contact centers can deploy rich self-service experiences. Combining the best of interactive voice response (IVR) technology from Nuance and Microsoft Copilot Studio’s no-code/low-code designer, contact centers can provide customers with engaging, individualized experiences powered by generative AI.
Accelerated human-assisted service: Across every channel, intelligent unified routing steers incoming requests that require a human touch to the agent best suited to help, enhancing service quality and efficiency. When a customer reaches an agent, Dynamics 365 Contact Center gives the agent a 360-degree view of the customer with generative AI. Real-time conversation tools such as sentiment analysis, translation, conversation summary, and transcription help improve service, while others automate repetitive tasks for agents, such as case summaries, email drafts, and suggested responses, and Copilot can answer agent questions grounded in your trusted knowledge sources.
Operational efficiency: Contact center efficiency depends just as much on what happens behind the scenes as it does on customer and agent experiences. We’ve built a solution that helps service teams detect issues early, improve critical KPIs and adapt quickly. With generative AI-based, real-time reporting, Dynamics 365 Contact Center allows service leaders to optimize contact center operations across all support channels, including their workforce.
Here’s what customers are saying:
“At 1-800-Flowers.com, we pride ourselves on exceptional service and continually raising the bar. With Microsoft Dynamics 365 Contact Center, we’re creating a best-in-class solution that furthers our mission and helps inspire people to give more, connect more, and build more and better relationships.” — Arnie Leap, CIO, 1-800-FLOWERS.COM, Inc.
“MSC has always been known for the personal service that we give to our customers; Microsoft Dynamics 365 Contact Center helps us elevate that customer-centric approach.”— Fabio Catassi, CIO, Mediterranean Shipping Company
“For our support teams, efficient problem-solving and smooth customer interactions are key to delivering exceptional service. With Dynamics 365 Contact Center and by leveraging its AI capabilities, we see a future where our support teams will deliver that level of service every day.”— Stephen Currie, Vice President Support Operations, Synoptek
If you’re attending Customer Contact Week in Las Vegas, join me for my main stage panel on Thursday, June 6. Be sure to also stop by the Microsoft booth (#151) during the event to see Dynamics 365 Contact Center in action.
Stay tuned for the general availability of Dynamics 365 Contact Center on July 1.
Introduction
In this article, we will demonstrate how to leverage GPT-4o’s capabilities, combining image inputs with function calling to unlock multimodal use cases.
We will simulate a package routing service that routes packages based on the shipping label using OCR with GPT-4o.
The model will identify the appropriate function to call based on the image analysis and the predefined actions for routing to the appropriate continent.
Background
The new GPT-4o (“o” for “omni”) can reason across audio, vision, and text in real time.
It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation.
It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API.
GPT-4o is especially better at vision and audio understanding compared to existing models.
GPT-4o now enables function calling.
The application
We will run a Jupyter notebook that connects to GPT-4o to sort packages based on the printed labels with the shipping address.
Here are some sample labels. We’ll use GPT-4o to OCR each label and extract the destination country, and GPT-4o function calling to route the packages.
Make sure you create your Python virtual environment and fill in the environment variables as stated in the README.md file.
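For reference, the .env file would contain the three variables that the code below reads; the values here are placeholders that you replace with your own Azure OpenAI details:

```shell
# .env - placeholder values; replace with your Azure OpenAI details
GPT4o_API_KEY="<your-api-key>"
GPT4o_DEPLOYMENT_ENDPOINT="https://<your-resource>.openai.azure.com/"
GPT4o_DEPLOYMENT_NAME="<your-gpt-4o-deployment-name>"
```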
The code
Connecting to Azure OpenAI GPT-4o deployment.
from dotenv import load_dotenv
from IPython.display import display, HTML, Image
import os
from openai import AzureOpenAI
import json

load_dotenv()

GPT4o_API_KEY = os.getenv("GPT4o_API_KEY")
GPT4o_DEPLOYMENT_ENDPOINT = os.getenv("GPT4o_DEPLOYMENT_ENDPOINT")
GPT4o_DEPLOYMENT_NAME = os.getenv("GPT4o_DEPLOYMENT_NAME")

client = AzureOpenAI(
    azure_endpoint=GPT4o_DEPLOYMENT_ENDPOINT,
    api_key=GPT4o_API_KEY,
    api_version="2024-02-01",
)
Defining the functions to be called after GPT-4o answers.
# Defining the functions - in this case a toy example of a shipping function
def ship_to_Oceania(location):
    return f"Shipping to Oceania based on location {location}"

def ship_to_Europe(location):
    return f"Shipping to Europe based on location {location}"

def ship_to_US(location):
    return f"Shipping to Americas based on location {location}"
Defining the available functions in the format sent to GPT-4o.
It’s very important to include descriptions for each function and its parameters so that GPT-4o knows which method to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "ship_to_Oceania",
            "description": "Shipping the parcel to any country in Oceania",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The country to ship the parcel to.",
                    }
                },
                "required": ["location"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "ship_to_Europe",
            "description": "Shipping the parcel to any country in Europe",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The country to ship the parcel to.",
                    }
                },
                "required": ["location"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "ship_to_US",
            "description": "Shipping the parcel to the United States",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The country to ship the parcel to.",
                    }
                },
                "required": ["location"],
            },
        },
    },
]

available_functions = {
    "ship_to_Oceania": ship_to_Oceania,
    "ship_to_Europe": ship_to_Europe,
    "ship_to_US": ship_to_US,
}
A function to base64-encode our images, which is the format GPT-4o accepts.
# Encoding the images to send to GPT-4o
import base64

def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")
The method to call GPT-4o.
Notice below that we send the parameter “tools” with the JSON describing the functions to be called.
def call_OpenAI(messages, tools, available_functions):
    # Step 1: send the prompt and available functions to GPT
    response = client.chat.completions.create(
        model=GPT4o_DEPLOYMENT_NAME,
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )
    response_message = response.choices[0].message

    # Step 2: check if GPT wanted to call a function
    if response_message.tool_calls:
        print("Recommended Function call:")
        print(response_message.tool_calls[0])
        print()

        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        function_name = response_message.tool_calls[0].function.name

        # verify the function exists
        if function_name not in available_functions:
            return "Function " + function_name + " does not exist"
        function_to_call = available_functions[function_name]

        # verify the function has the correct number of arguments
        function_args = json.loads(response_message.tool_calls[0].function.arguments)
        if check_args(function_to_call, function_args) is False:
            return "Invalid number of arguments for function: " + function_name

        # call the function
        function_response = function_to_call(**function_args)
        print("Output of function call:")
        print(function_response)
        print()
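The code above relies on a check_args helper that isn’t shown in the snippet. A minimal sketch of one possible implementation, using the standard inspect module, could look like this (ship_to_Europe is repeated so the snippet is self-contained):

```python
import inspect

def check_args(function, args):
    """Return True when `args` supplies exactly the parameters that
    `function` declares: no unknown names, no missing required ones."""
    sig = inspect.signature(function)
    provided = set(args)
    # reject unknown argument names
    if provided - set(sig.parameters):
        return False
    # every parameter without a default value must be provided
    required = {
        name
        for name, param in sig.parameters.items()
        if param.default is inspect.Parameter.empty
    }
    return required <= provided

# Example against one of the shipping functions above
def ship_to_Europe(location):
    return f"Shipping to Europe based on location {location}"

print(check_args(ship_to_Europe, {"location": "France"}))  # True
print(check_args(ship_to_Europe, {}))                      # False
```

Validating the arguments before calling the function guards against the model returning malformed or incomplete JSON.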
Please note that we, and not GPT-4o, call the methods in our code, based on the answer returned by GPT-4o.
# call the function
function_response = function_to_call(**function_args)
Iterate through all the images in the folder.
Notice the system prompt, where we tell GPT-4o what we need it to do: sort the labels for package routing by calling functions.
# iterate through all the images in the data folder
import os

data_folder = "./data"
for image in os.listdir(data_folder):
    if image.endswith(".png"):
        IMAGE_PATH = os.path.join(data_folder, image)
        base64_image = encode_image(IMAGE_PATH)
        display(Image(IMAGE_PATH))

        messages = [
            {"role": "system", "content": "You are a customer service assistant for a delivery service, equipped to analyze images of package labels. Based on the country to ship the package to, you must always ship to the corresponding continent. You must always use tools!"},
            {"role": "user", "content": [
                {"type": "image_url", "image_url": {
                    "url": f"data:image/png;base64,{base64_image}"}
                }
            ]}
        ]

        call_OpenAI(messages, tools, available_functions)
Let’s run our notebook!
Running our code for the label above produces the following output:
Recommended Function call:
ChatCompletionMessageToolCall(id='call_lH2G1bh2j1IfBRzZcw84wg0x', function=Function(arguments='{"location":"United States"}', name='ship_to_US'), type='function')
Output of function call:
Shipping to Americas based on location United States