Boost your sales business with smart organization charts


This article is contributed. See the original author and article here.

Organization charts enable sellers to better understand their customers’ organizational structures and identify key decision-makers. This information helps sellers develop and execute targeted sales strategies, improve their sales effectiveness, and build stronger relationships with their customers. Additionally, having an org chart in a CRM system helps improve collaboration among sales teams and improves overall communication and coordination with the customer’s organization.

With our new organization charts, you can build your entire org chart with ease and precision!

Creating organization charts made easy

The new feature in Dynamics 365 Sales makes building an organization chart fast and efficient. The contacts for a given account are automatically gathered and displayed in the side pane, and with simple drag-and-drop actions you can build the entire org chart in just a few minutes!

With the new organization chart, users can leverage tags to indicate key players and decision-makers in the org. This helps sellers quickly identify the right people to engage with during the sales process, reducing the time it takes to close deals and improving the overall customer experience. Users can create assistant cards to include executive assistants in the chart as well.

Organization chart

Monitor Contact Health

The new feature allows users to monitor the health and risks of customer relationships using relationship health embedded in organization charts. This capability helps sellers identify potential risks to customer relationships, such as inactive accounts or unresolved issues, and take proactive measures to address them. It improves the overall health of customer relationships and reduces the risk of losing valuable customers. You can learn more about relationship intelligence by reading the Overview of Relationship intelligence | Microsoft Learn.

Users can capture notes directly from organization charts on-the-go, enabling them to capture critical information about customers quickly. This feature helps sellers remember important details about their customers and allows them to keep track of their customer interactions. Users can access the org chart directly from the Contacts form, making it easier to navigate and manage customer information.

Contact health

Do more with LinkedIn

LinkedIn Sales Navigator is a powerful tool that enables sales professionals to build and maintain relationships with their clients and contacts. With a Microsoft Relationship Sales license, users can receive notifications when one of their contacts leaves an account. This is particularly useful for sales teams, which rely on accurate and up-to-date information to achieve their goals. Additionally, with a Sales Navigator license, users can continue to send InMail and access the LinkedIn profiles of their contacts. Organization charts therefore offer even more when you combine them with LinkedIn Sales Navigator, as users get notifications that help maintain data accuracy.

Organization chart with LinkedIn update

To summarize, the smart organization charts offer the following capabilities:

  • Build the entire org chart via simple drag-and-drop action.
  • Leverage tags to indicate key players and decision-makers.
  • Create Assistant cards to include executive assistants in the organization chart.
  • Capture notes directly from org charts on-the-go.
  • Access your organization chart directly from the Contacts form as well.
  • Monitor the health and risks of the customer relationships using relationship health embedded in organization charts.
  • Get notified when contacts leave the organization with a LinkedIn Sales Navigator license.

Next Steps

Increasing your sales team’s collaboration could be as simple as having an organization chart where you can visualize all your stakeholders, and Dynamics 365 Sales makes it easy.

To get started with the new org charts:

Not a Dynamics 365 Sales customer yet? Take a guided tour and sign up for a free trial at Dynamics 365 Sales overview.

The post Boost your sales business with smart organization charts appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Cost Optimization Considerations for Azure VMs – Part 1: VM services


Azure Virtual Machines are an excellent solution for hosting both new and legacy applications. However, as your services and workloads become more complex and demand increases, your costs may also rise. Azure provides a range of pricing models, services, and tools that can help you optimize the allocation of your cloud budget and get the most value for your money.


 


Let’s explore Azure’s various cost-optimization options to see how they can significantly reduce your Azure compute costs.


The major Azure cost optimization options can be grouped into three categories: VM services, pricing models and programs, and cost analysis tools. 


 


Let’s have a quick overview of these three categories:


 


VM services – Several VM services give you various options to save, depending on the nature of your workloads.  These include dynamically autoscaling VMs according to demand, or utilizing spare Azure capacity at up to a 90% discount versus pay-as-you-go rates.


 


Pricing models and programs – Azure also offers various pricing models and programs that you can take advantage of, depending on how you plan to spend on Azure.  For example, committing to purchase compute capacity for a certain time period can lower your average cost per VM by up to 72%.


 


Cost analysis tools – This category features various tools for calculating, tracking, and monitoring your Azure spend.  This deep insight into your spending allows you to make better decisions about where your compute costs go and how to allocate them in a way that best suits your needs.


 


When it comes to VMs, the various VM services are probably the first place to start when looking to save costs.  While this blog will focus mostly on VM services, stay tuned for blogs about pricing models and programs, and cost analysis tools!


 


Spot Virtual Machines


 


Spot Virtual Machines provide compute capacity at drastically reduced cost by leveraging capacity that isn’t currently being used.  While it’s possible to have your workloads evicted, this capacity is offered at discounts of up to 90%.  This makes Spot Virtual Machines ideal for workloads that are interruptible and not time sensitive, such as machine learning model training, financial modeling, or CI/CD.
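As a sketch of how this looks in practice, the following Azure CLI command requests a single Spot VM (the resource group, VM name, and image are placeholders; adjust them for your environment):

```shell
# Hypothetical example: create a Spot VM rather than a standard one.
# --max-price -1 means "charge up to the current standard price and never
# evict for price reasons" (capacity evictions can still occur).
az vm create \
  --resource-group my-rg \
  --name my-spot-vm \
  --image Ubuntu2204 \
  --priority Spot \
  --eviction-policy Delete \
  --max-price -1
```
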


 


Incorporating Spot VMs can undoubtedly play a key role in your cost savings strategy. Azure provides significant pricing incentives to utilize any current spare capacity.  The opportunity to leverage Spot VMs should be evaluated for every appropriate workload to maximize cost savings.  Let’s learn more about how Spot Virtual Machines work and if they are right for you.


 


Deployment Scenarios


There are a variety of cases for which Spot VMs are ideal. Let’s look at some examples:


 



  • CI/CD – CI/CD is one of the easiest places to get started with Spot Virtual Machines. The temporary nature of many development and test environments makes them well suited for Spot VMs.  A difference of a couple of minutes to a couple of hours when testing an application is often not business-critical.  Thus, deploying CI/CD workloads and build environments with Spot VMs can drastically lower the cost of operating your CI/CD pipeline. Customer story

  • Financial modeling – creating financial models is also compute-intensive, but often transient in nature.  Researchers often struggle to test all the hypotheses they want with inflexible infrastructure.  With Spot VMs, they can add extra compute resources during periods of high demand without committing to purchasing more dedicated VM resources, creating more and better models faster. Customer story

  • Media rendering – media rendering jobs like video encoding and 3D modeling can require lots of computing resources but may not demand them consistently throughout the day.  These workloads are also often computationally similar, not dependent on each other, and not in need of immediate responses.  This makes rendering another ideal case for Spot VMs. For rendering infrastructure that is often at capacity, Spot VMs are also a great way to add extra compute resources during periods of high demand without committing to purchasing more dedicated VM resources, lowering the overall TCO of running a render farm. Customer story


 


Generally speaking, if a workload is stateless, scalable, or flexible in time, location, and hardware, it may be a good fit for Spot VMs.  While Spot VMs can offer significant cost savings, they are not suitable for all workloads: those that require high availability, consistent performance, or long-running tasks may not be a good fit. 


 


Features & Considerations


Now that you have learned more about Spot VMs and may be considering using them for your workloads, let’s talk a bit more about how Spot VMs work and the controls available to you to optimize cost savings even further.


 


Spot VMs are priced according to demand.  With this flexible pricing model, you can also set a price limit for the Spot VMs you use.  If demand is high enough that the price for a Spot VM exceeds what you’re willing to pay, this limit lets you opt not to run your workloads at that time and wait for demand to decrease.  If you anticipate that the Spot VMs you want are in a region with high utilization at certain times of the day or month, you may want to choose another region, or plan for higher price limits for workloads that occur during higher-demand times.  If when the workload runs isn’t important, you can set the price limit low, so that your workloads only run during the periods when Spot capacity is cheapest, minimizing your Spot VM costs.
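As an illustrative sketch (resource names are placeholders), the price limit can be set at creation time with `--max-price`, or changed later on an existing Spot VM through the generic update mechanism:

```shell
# Hypothetical example: change the max price on an existing Spot VM.
# The VM must be deallocated before its billing profile can be changed.
az vm deallocate --resource-group my-rg --name my-spot-vm
az vm update --resource-group my-rg --name my-spot-vm \
  --set billingProfile.maxPrice=0.05
az vm start --resource-group my-rg --name my-spot-vm
```
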


 


While using Spot VMs with price limits, we also need to look at the different eviction types and policies, the options you can set to determine what happens to your Spot VMs when their capacity is reclaimed for pay-as-you-go customers.   To maximize cost savings, it’s generally best to choose the Delete eviction policy: VMs can be redeployed faster, meaning less downtime waiting for Spot capacity, and you don’t pay for disk storage while evicted.  However, if your workload is region- or size-specific and requires some level of persistent data in the event of an eviction, the Deallocate policy is the better option. 


 


These things may only be a small slice of all the considerations to best utilize Spot VMs.  Learn more about best practices for building apps with Spot VMs here.
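One of those best practices is watching for eviction notices from inside the VM. A minimal sketch using the Azure Instance Metadata Service Scheduled Events endpoint, which surfaces a Preempt event shortly before a Spot eviction:

```shell
# Poll the non-routable instance metadata endpoint from inside the VM;
# an event with type "Preempt" signals an impending Spot eviction, giving
# the workload a short window to checkpoint and shut down cleanly.
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
```
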


 


So how can we actually deploy and manage Spot VMs at scale? Virtual Machine Scale Sets are likely your best option. In addition to supporting Spot VMs, Virtual Machine Scale Sets offer a plethora of cost-saving features and options for your VM deployments, and easily allow you to deploy Spot VMs alongside standard VMs.  In the next section, we’ll look at some of these features and how to use them to deploy Spot VMs at scale.


 


Virtual Machine Scale Sets


 


Virtual Machine Scale Sets enable you to manage and deploy groups of VMs at scale with a variety of load balancing, resource autoscaling, and resiliency features.  While a variety of these features can indirectly save costs like making deployments simpler to manage or easier to achieve high availability, some of these features contribute directly to reducing costs, namely autoscaling and Spot Mix.  Let’s dive deeper into how these two features can optimize costs.


 


Autoscaling


Autoscaling is a critical feature of Virtual Machine Scale Sets that gives you the ability to dynamically increase or decrease the number of virtual machines running within the scale set. This lets you scale out your infrastructure to meet demand when required, and scale it in when compute demand drops, reducing the likelihood that you’ll pay for extra VMs you don’t need.


 


VMs can be autoscaled according to rules that you define yourself from a variety of metrics.  These rules can be based on host metrics from your VM, like CPU usage or memory demand, or on application-level metrics like session counts and page load performance.  This flexibility lets you scale your workload in or out to very specific requirements, and it is with this specificity that you can control your infrastructure scaling to optimally meet your compute demand without extra overhead.
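As a hedged sketch (resource and setting names are placeholders), a CPU-based rule like the ones described above can be attached to a scale set with the Azure CLI:

```shell
# Create an autoscale setting for the scale set, bounded between 2 and 10
# instances, then add a rule that scales out by 2 instances whenever
# average CPU exceeds 70% over a 5-minute window.
az monitor autoscale create \
  --resource-group my-rg \
  --resource my-vmss \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name my-autoscale \
  --min-count 2 --max-count 10 --count 2

az monitor autoscale rule create \
  --resource-group my-rg \
  --autoscale-name my-autoscale \
  --condition "Percentage CPU > 70 avg 5m" \
  --scale out 2
```
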


You can also scale in or out according to a schedule, for cases in which you can anticipate cyclical changes to VM demand throughout certain times of the day, month, or year.  For example, you can automatically scale out your workload at the beginning of the workday when application usage increases, and then scale in the number of VM instances to minimize resource costs overnight when application usage lowers.  It’s also possible to scale out on certain days when events occur such as a holiday sale or marketing launch.  Additionally, for more complex workloads, Virtual Machines Scale Sets also provides the option to leverage machine learning to predictively autoscale workloads according to historical CPU usage patterns. 
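A recurring schedule can be expressed as an autoscale profile. This sketch assumes an existing autoscale setting named `my-autoscale` (all names are placeholders) and scales to six instances during weekday business hours:

```shell
# Hypothetical example: hold 6 instances from 08:00 to 18:00 on weekdays,
# falling back to the default profile outside that window.
az monitor autoscale profile create \
  --resource-group my-rg \
  --autoscale-name my-autoscale \
  --name business-hours \
  --start 08:00 --end 18:00 \
  --timezone "Pacific Standard Time" \
  --recurrence week mon tue wed thu fri \
  --count 6
```
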


 


These autoscaling policies make it easy to adapt your infrastructure usage to many variables, and leveraging autoscale rules that fit your application demand is critical to reducing cost.


 


Spot Mix


With Spot Mix in Virtual Machine Scale Sets, you can configure your scale-in or scale-out policy to specify a ratio of standard to Spot VMs to maintain as the number of VMs increases or decreases.  Say you specify a ratio of 50%: for every 10 new VMs the scale-out policy adds to the scale set, 5 will be standard VMs and the other 5 will be Spot.  To maximize cost savings, you may want a low ratio of standard to Spot VMs, meaning more Spot VMs are deployed as the scale set grows.  This works well for workloads that don’t need much guaranteed capacity at larger scales.  For workloads that need greater resiliency at scale, however, you may want to increase the ratio to ensure an adequate baseline of standard capacity.
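As a sketch of what such a configuration might look like with the Azure CLI (names are placeholders; the Spot mix parameters require flexible orchestration mode):

```shell
# Hypothetical example: keep 2 standard VMs as a guaranteed base, then split
# additional instances 50/50 between standard and Spot as the set scales out.
az vmss create \
  --resource-group my-rg \
  --name my-mixed-vmss \
  --image Ubuntu2204 \
  --orchestration-mode Flexible \
  --instance-count 4 \
  --priority Spot \
  --eviction-policy Delete \
  --regular-priority-count 2 \
  --regular-priority-percentage 50
```
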


 


You can learn more about choosing which VM families and sizes might be right for you with the VM selector and the Spot Advisor, which we will cover in more depth in a later blog of this VM cost optimization series. 


 


Wrapping up


 


We’ve learned how Spot VMs and Virtual Machine Scale Sets, especially when combined, equip you with various features and options to control how your VMs behave, and how you can use those controls to maximize your cost savings. 


Next time, we’ll go in depth on the various pricing models and programs available in Azure that can further optimize your costs, allowing you to do more with less with Azure VMs.  Stay tuned for more blogs!

Lesson Learned #347: String or binary data would be truncated applying batch file in DataSync.


Today, we got a service request from a customer using DataSync to transfer data from on-premises to Azure SQL Database. They got the following error message: Sync failed with the exception ‘An unexpected error occurred when applying batch file sync_aaaabbbbcccddddaaaaa-bbbb-dddd-cccc-8825f4397b31.batch.


 



  • See the inner exception for more details.Inner exception: Failed to execute the command ‘BulkUpdateCommand‘ for table ‘dbo.Table1’; the transaction was rolled back.

  • Ensure that the command syntax is correct.Inner exception: SqlException Error Code: -2146232060 – SqlError Number:2629, Message: String or binary data would be truncated in object ID ‘-nnnnn’. Truncated value: ”.

  • SqlError Number:8061, Message: The data for table-valued parameter ‘@changeTable’ doesn’t conform to the table type of the parameter. SQL Server error is: 2629, state: 1 SqlError Number:3621, Message: The statement has been terminated. 


 


We reviewed the object ID exposed in the error and found that the data type of a column belonging to Table1 on premises had been changed from NCHAR(100) to NVARCHAR(255). Once the sync started again, it was not possible to update the data in the DataSync subscribers. 



In this case, our recommendation was: 
     1. Remove the affected table from the sync group.
     2. Trigger a sync.
     3. Re-add the affected table to the sync group.
     4. Trigger a sync.
     5. The sync in Step 2 removes the metadata for the affected table, and Step 4 re-adds it correctly.
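Before re-adding the table, it can also help to confirm that the column definitions now match on both ends. A hedged sketch using `sqlcmd` (server names, database names, and credentials are placeholders):

```shell
# Compare the column type and length on the on-premises source...
sqlcmd -S onprem-server -d SourceDb -E \
  -Q "SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'Table1';"

# ...against the same table in the Azure SQL Database sync member.
sqlcmd -S myserver.database.windows.net -d TargetDb -U azureuser -P '<password>' \
  -Q "SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'Table1';"
```
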


 


Regards, 

Introducing the Microsoft 365 Copilot Early Access Program and new capabilities in Copilot 



In March, we introduced Microsoft 365 Copilot—your copilot for work. Today, we’re announcing that we’re bringing Microsoft 365 Copilot to more customers with an expanded preview and new capabilities.

The post Introducing the Microsoft 365 Copilot Early Access Program and new capabilities in Copilot  appeared first on Microsoft 365 Blog.


Securing Windows workloads on Azure Kubernetes Service with Calico



This blog post has been co-authored by Microsoft and Dhiraj Sehgal, Reza Ramezanpur from Tigera.


 


Container orchestration pushes the boundaries of containerized applications by preparing the necessary foundation to run containers at scale. Today, customers can run Linux and Windows containerized applications in a container orchestration solution, such as Azure Kubernetes Service (AKS).


 


This blog post will examine how to set up a Windows-based Kubernetes environment to run Windows workloads and secure them using Calico Open Source. By the end of this post, you will see how simple it is to apply your current Kubernetes skills and knowledge to run a hybrid environment.


 


Container orchestration at scale with AKS


After creating a container image, you will need a container orchestrator to deploy it at scale. Kubernetes is a modular container orchestration software that will manage the mundane parts of running such workloads, and AKS abstracts the infrastructure on which Kubernetes runs, so you can focus on deploying and running your workloads.


 


In this blog post, we will share all the commands required to set up a mixed Kubernetes cluster (Windows and Linux nodes) in AKS – you can open up your Azure Cloud Shell window from the Azure Portal and run the commands if you want to follow along.


 


If you don’t have an Azure account with a paid subscription, don’t worry—you can sign up for a free Azure account to complete the following steps.


 


Resource group


To run a Kubernetes cluster in Azure, you must create multiple resources that share the same lifespan and assign them to a resource group. A resource group is a way to group related resources in Azure for easier management and accessibility. Keep in mind that each resource group must have a unique name.


 


The following command creates a resource group named calico-win-container in the australiaeast location. Feel free to adjust the location to a different region.


 

az group create --name calico-win-container --location australiaeast

 


 


Cluster deployment


Note: Azure free accounts cannot create any resources in busy locations. Feel free to adjust your location if you face this problem.


 


A Linux control plane is necessary to run the Kubernetes system workloads, and Windows nodes can only join a cluster as participating worker nodes.


 

az aks create --resource-group calico-win-container --name CalicoAKSCluster --node-count 1 --node-vm-size Standard_B2s --network-plugin azure --network-policy calico --generate-ssh-keys --windows-admin-username 

 


 


Windows node pool


Now that we have a running control plane, it is time to add a Windows node pool to our AKS cluster.


 


Note: Use `windows` as the value for the `--os-type` argument.


 

az aks nodepool add --resource-group calico-win-container --cluster-name CalicoAKSCluster --os-type Windows --name calico --node-vm-size Standard_B2s --node-count 1

 


 


Calico for Windows


Calico for Windows is officially integrated into the Azure platform. Every time you add a Windows node in AKS, it will come with a preinstalled version of Calico. To check this, use the following command to ensure EnableAKSWindowsCalico is in a Registered state:


 

az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableAKSWindowsCalico')].{Name:name,State:properties.state}"

 


 


Expected output:


 

Name                                               State
-------------------------------------------------  ----------
Microsoft.ContainerService/EnableAKSWindowsCalico  Registered

 


 


If your query returns a Not Registered state or no items, use the following command to enable AKS Calico integration for your account:


 

az feature register --namespace "Microsoft.ContainerService" --name "EnableAKSWindowsCalico"

 


 


After EnableAKSWindowsCalico shows as registered, use the following command to propagate the registration of the Microsoft.ContainerService resource provider to your subscription:


 

az provider register --namespace Microsoft.ContainerService

 


 


Exporting the cluster key


Kubernetes implements an API Server that provides a REST interface to maintain and manage cluster resources. Usually, to authenticate with the API server, you must present a certificate, username, and password. The Azure command-line interface (Azure CLI) can export these cluster credentials for an AKS deployment.


 


Use the following command to export the credentials:


 

az aks get-credentials --resource-group calico-win-container --name CalicoAKSCluster

 


 


 


After exporting the credential file, we can use the kubectl binary to manage and maintain cluster resources. For example, we can check which operating system is running on our nodes by using the OS labels.


 

kubectl get nodes -L kubernetes.io/os

 


 


You should see a similar result to:


 

NAME                                STATUS   ROLES   AGE     VERSION   OS
aks-nodepool1-64517604-vmss000000   Ready    agent   6h8m    v1.22.6   linux
akscalico000000                     Ready    agent   5h57m   v1.22.6   windows

 


 


Windows workloads


If you recall, Kubernetes API Server is the interface that we can use to manage or maintain our workloads.


 


We can use the same syntax to create a deployment, pod, service, or Kubernetes resource for our new Windows nodes. For example, we can use the same OS selector that we previously used for our deployments to ensure Windows and Linux workloads are deployed to their respective nodes:


 

kubectl apply -f https://raw.githubusercontent.com/frozenprocess/wincontainer/main/Manifests/00_deployment.yaml

 


 


Since our workload is a web server built with Microsoft’s .NET technology, the deployment YAML file also includes a LoadBalancer service to expose the HTTP port to the Internet.


 


Use the following command to verify that the load balancer successfully acquired an external IP address:


 

kubectl get svc win-container-service -n win-web-demo

 


 


You should see a similar result:


 


 

NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
win-container-service   LoadBalancer   10.0.203.176   20.200.73.50   80:32442/TCP   141m

 


 


 


Use the “EXTERNAL-IP” value in a browser, and you should see a page with the following message:


Picture1.png


 


Perfect! Our pod can communicate with the Internet.


 


Securing Windows workloads with Calico


The default security behavior of the Kubernetes NetworkPolicy resource permits all traffic. While this is a great way to set up a lab environment, in a real-world scenario it can severely impact your cluster’s security.


 


First, apply the following manifest to enable the Calico API server:


 

kubectl apply -f https://raw.githubusercontent.com/frozenprocess/wincontainer/main/Manifests/01_apiserver.yaml

 


 


Use the following command to get the API Server deployment status:


 

kubectl get tigerastatus

 


 


You should see a similar result to:


 

NAME        AVAILABLE   PROGRESSING   DEGRADED   SINCE
apiserver   True        False         False      10h
calico      True        False         False      10h

 


 


 


Calico offers two security policy resources that can cover every corner of your cluster: the namespaced NetworkPolicy and the cluster-wide GlobalNetworkPolicy. We will implement a global policy, since it can restrict Internet addresses without the daunting procedure of explicitly writing every IP/CIDR in a policy.


 

kubectl apply -f https://raw.githubusercontent.com/frozenprocess/wincontainer/main/Manifests/02_default-deny.yaml

 


 


If you go back to your browser and click the Try again button, you will see that the container is isolated and cannot initiate communication to the Internet.
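For reference, the linked 02_default-deny.yaml likely resembles a minimal Calico GlobalNetworkPolicy along these lines (an illustrative sketch, not the exact manifest):

```shell
# Apply an ingress/egress default-deny to all non-system workloads.
# The selector is illustrative; adjust it to spare your system namespaces.
kubectl apply -f - <<'EOF'
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: projectcalico.org/namespace not in {"kube-system", "calico-system"}
  types:
  - Ingress
  - Egress
EOF
```
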


Picture2.png


 


Note: The source code for the workload is available here.


 


Clean up
If you have been following this blog post and did the lab section in Azure, please make sure that you delete the resources, as cloud providers will charge you based on usage.


Use the following command to delete the resource group:

az group delete --name calico-win-container


 


Conclusion


While network policy may not matter much in lab scenarios, production workloads have a far stricter level of security requirements to meet. Calico offers a simple and integrated way to apply network policies to Windows workloads on Azure Kubernetes Service. In this blog post, we covered the basics of applying a network policy to a simple web server. You can find more information on how Calico works with Windows on AKS in our documentation.


 


Additional links: