CISA Adds Ten Known Exploited Vulnerabilities to Catalog

This article is contributed. See the original author and article here.

CISA has added ten new vulnerabilities to its Known Exploited Vulnerabilities Catalog, based on evidence of active exploitation. These types of vulnerabilities are a frequent attack vector for malicious cyber actors and pose significant risk to the federal enterprise. Note: to view the newly added vulnerabilities in the catalog, click on the arrow in the “Date Added to Catalog” column, which will sort by descending dates.      

Binding Operational Directive (BOD) 22-01: Reducing the Significant Risk of Known Exploited Vulnerabilities established the Known Exploited Vulnerabilities Catalog as a living list of known CVEs that carry significant risk to the federal enterprise. BOD 22-01 requires FCEB agencies to remediate identified vulnerabilities by the due date to protect FCEB networks against active threats. See the BOD 22-01 Fact Sheet for more information.     

Although BOD 22-01 only applies to FCEB agencies, CISA strongly urges all organizations to reduce their exposure to cyberattacks by prioritizing timely remediation of Catalog vulnerabilities as part of their vulnerability management practice. CISA will continue to add vulnerabilities to the Catalog that meet the specified criteria.

The AI-powered contact center, part 3: Build powerful conversational AI solutions


To remain competitive and thrive, organizations must differentiate their brand through outstanding customer service experiences. As customer expectations and business needs continuously shift, enterprises need the agility to rapidly create, maintain, and optimize those experiences with the latest technologies, including sophisticated AI, without relying on external vendors.

What enterprises need are options that keep them in control: a choice of no-code, low-code, and pro-code AI development tools that offer the freedom to quickly build the conversational AI applications they need. With the recent launch of the Microsoft Digital Contact Center Platform, organizations now have those options.

The right tools for the job 

Some organizations want the flexibility and agility to build their own conversational solution in-house on an end-to-end software-as-a-service (SaaS) platform that is quick to deploy, monitor, and tune; one that can easily be self-managed by their subject matter experts. For such organizations, Microsoft Power Virtual Agents and the underlying low-code Power Platform is the right choice. Power Virtual Agents solutions are used widely in many industries, from airlines to technology firms, and for a variety of use cases, from internal-facing help desk solutions to external-facing chatbots across multiple channels and languages.

Some examples of Power Virtual Agents users include:

  • A municipal government with 1.4 million citizens using Power Virtual Agents and Microsoft Teams to empower subject matter experts to easily address employee queries.
  • A sports and events organization using Dynamics 365 and Power Virtual Agents to scale their contact center with a 97 percent resolution rate.
  • A financial institution using Power Virtual Agents and Dynamics 365 to enable text- and voice-enabled bots to serve customers' banking needs.

Figure 1: Power Virtual Agents' intuitive graphical interface showing the conversational flow design and topic navigation.

For large enterprises that need a custom-built natural language model or those with sophisticated interactive voice response (IVR) omnichannel needs, Nuance Mix is the right choice. Such organizations can also call on Nuance Professional Services teams at any stage of their project, drawing on deep expertise in developing, deploying, tuning, and optimizing custom-built conversational AI applications. Omnichannel IVRs and chatbots built using Mix by Nuance Professional Services are widely used by large enterprises, including Fortune 100 companies, from large telcos to major retailers across the world in multiple languages.

Some examples of Nuance Mix users include:

  • A large airline putting Nuance Mix at the center of its virtual assistant (VA) and IVR omnichannel strategy to add transactional self-service functionalities.
  • A 5G mobile network operator using Nuance Mix to create a customer care VA that enables its customers to resolve issues by simply speaking or typing.
  • A recognized leader in clinical communication and workflow solutions using Nuance Mix to enable care teams to communicate and collaborate in real-time.

Figure 2: In Mix.dialog, call flow designers build on core components that can orchestrate mixed‑initiative dialogs.

Shaping the future of the contact center together

Regardless of which solution organizations begin with, the Digital Contact Center Platform will provide interoperability between them. As we continue to bring Nuance Mix and Power Virtual Agents closer together on the Digital Contact Center Platform, their combined strengths will make powerful conversational AI solutions even easier and faster to build. And as ever, we remain committed to protecting your current investments through backward compatibility and a clear, disruption-free migration path to any future solutions.

Next steps

In the final article in this series, we will turn our attention to biometrics and fraud prevention in the digital contact center. In the meantime, learn more about how to create engaging, personalized digital experiences, and achieve superior self-service voice support with the Microsoft Digital Contact Center Platform.

The post The AI-powered contact center, part 3: Build powerful conversational AI solutions appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Backup restore speed improvement for SQL Managed Instance


The importance of fast database restore operations in any environment cannot be overstated: restore time is time the business cannot carry on, because everything is offline.


 


There is a known DBA saying that “one should not have a backup strategy, but a recovery strategy”, which means that just taking backups isn’t enough, and that the restore process should also be regularly tested, so that you measure and know how much time it takes to restore a production database if and when the need arises.


 


Azure SQL Managed Instance allows customers not only to restore their databases from the points at which FULL or DIFFERENTIAL backups were taken, but also to restore a database to any given point in time through PITR (Point-In-Time Restore) functionality. Besides restoring data from recent backups, Azure SQL Managed Instance also supports recovering databases from discrete, individual backups in long-term retention (LTR) storage, which can be configured for up to 10 years.
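As a rough illustration of the PITR concept (a sketch with hypothetical dates and function names, not an Azure API), a requested restore point is only valid if it falls inside the configured retention window, between the earliest retained backup and the latest log backup:

```python
from datetime import datetime, timedelta

def restore_point_valid(requested: datetime, earliest: datetime, latest: datetime) -> bool:
    """A PITR target must lie between the earliest retained backup and the latest log backup."""
    return earliest <= requested <= latest

# Example: a 7-day PITR retention window (hypothetical values).
now = datetime(2022, 8, 24, 12, 0)
earliest = now - timedelta(days=7)
print(restore_point_valid(now - timedelta(days=2), earliest, now))  # a point 2 days ago is restorable
```

A request outside that window would have to fall back to a discrete LTR backup instead.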


 


Once the restore process has been initiated, there is nothing a customer can do to speed it up: the process executes asynchronously and cannot be cancelled. While customers can scale up an Azure SQL Managed Instance before a restore operation to increase restore speed, this is only possible before a planned restore, not when the need is sudden and unexpected. Otherwise, no matter how fast the scaling operation executes, it would still further delay the overall database restoration process.


Improving restore speed



There are multiple phases in the SQL MI database restore process which, aside from the usual SQL Server restore operations, include registering the database as an Azure asset for visibility and management within the Azure portal. Additionally, the restore process forces a number of specific internal options and property changes, such as switching to the FULL recovery model and setting the database option PAGE_VERIFY to CHECKSUM, and it eventually performs a full backup to start the log chain and provide full point-in-time restore capability through the combination of full and log database backups.


 


The restore operation on SQL MI also includes log truncation, and the execution time for the truncation has been vastly improved, which means that customers can expect their entire database restore process to become faster on both service tiers.


Service tiers differences



It is important to understand that faster means different speeds on different service tiers, due to an additional operation required on the Business Critical service tier. As described in High Availability in Azure SQL MI: Business Critical service tier, the Business Critical service tier runs on a synchronous Availability Group with four replicas, meaning the initially restored database must be replicated to all replicas to complete the setup. The current implementation of the restore process on the Business Critical service tier uses direct seeding of the Availability Group to distribute the newly restored database between replicas. As such, the restore operation will not complete until the backup has been restored to every replica, and a full backup can't be taken until direct seeding to the secondary replicas finishes.


 


The following diagram shows a conceptual explanation of some of the most time-consuming execution phases of the database restore process on Azure SQL Managed Instance.


(Figure: conceptual phases of the database restore process on Azure SQL Managed Instance)


 


Although the boxes are similar sizes, they don't represent the real amount of time spent on each of those specific functions. The real amount of time varies vastly based on the number of transaction log backups, the data compression ratio, and, of course, the sheer speed of the write and log operations available to the Azure SQL Managed Instance. Another important factor to consider is the set of operations executed against the Azure SQL Managed Instance during the database restore process, either directly by the user or automatically by the engine, such as the Tuple Mover in Columnstore Indexes, or automated operations configured by the customer, such as index maintenance jobs.


 


Test setup & obtained results



For the performance testing, we created two identical Azure SQL Managed Instances: one with the log truncation improvement enabled, and one without it. The two instances use the following hardware specifications:



  • Service tier: General Purpose

  • Hardware generation: Standard Series (Gen 5)

  • CPU vCores: 8

  • Storage: 4 TB

  • Region: East US 2




We have tested three databases, each one with a different size and a different fullness percentage, representing ranges from 60 GB up to 800 GB.



  • db1: 800 GB
  • db2: 600 GB
  • db3: 60 GB


Here are the results, in seconds, for multiple restore tests for each of the databases, presented both for the default Azure SQL Managed Instance and for the Azure SQL Managed Instance that has the feature enabled.

                         60 GB     600 GB    800 GB
Without feature          266 s     1718 s    5048 s
With feature enabled     155 s      704 s    2054 s


We went a step further with the 800 GB database, testing not just a restore from a fresh FULL backup, but also configurations such as a FULL backup with a number of transaction log backups, and a FULL backup plus a DIFFERENTIAL backup plus transaction log backups. The observed results can be seen in the table below:

                         Full only   Full + logs   Full + Diff + Logs
Without feature             84 min       106 min              120 min
With feature enabled        34 min        46 min               90 min
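The percentage improvements implied by the two result tables can be computed directly (a small sketch; the figures are the measurements quoted above):

```python
# Observed restore times from the two result tables above: (without feature, with feature).
gp_seconds = {"60 GB": (266, 155), "600 GB": (1718, 704), "800 GB": (5048, 2054)}
chain_minutes = {"Full only": (84, 34), "Full + logs": (106, 46), "Full + Diff + Logs": (120, 90)}

def improvement_pct(before: float, after: float) -> float:
    """Percentage reduction in restore time."""
    return round(100 * (before - after) / before, 1)

for label, (before, after) in {**gp_seconds, **chain_minutes}.items():
    print(f"{label}: {improvement_pct(before, after)}% faster")
```

Note that the gain shrinks as the backup chain grows: the Full + Diff + Logs case improves by 25%, versus roughly 59% for a fresh FULL backup alone.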




 


Benefit estimates


 


Exact benefits are very specific to the configuration and workload of each Azure SQL Managed Instance. On the General Purpose service tier, the size of the database data and log files directly impacts the benefit, as it does with any kind of workload. If you are interested in more details, please consult our documentation article How-to improve data loading performance on SQL Managed Instance with General Purpose service tier. Additionally, customers should be mindful of the geographical region where their backup files are stored, as well as the number of files: a high number of transaction log backups significantly impacts restore times, as do the number and size of differential backups.


 


For databases smaller than 1 TB on the General Purpose service tier, we expect database restoration speed to improve by 20% to 70%. For databases greater than 1 TB, expect restoration speed improvements between 30% and 70%.
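Applied to a concrete baseline, those estimates translate into a time range as follows (a sketch using the quoted 30% to 70% range; actual results will vary per instance):

```python
def estimated_restore_range(baseline_minutes: float, low: float = 0.30, high: float = 0.70):
    """Expected restore time after the improvement, given a fractional speedup range (low, high)."""
    return (round(baseline_minutes * (1 - high), 1), round(baseline_minutes * (1 - low), 1))

# A database larger than 1 TB that previously restored in 120 minutes:
print(estimated_restore_range(120))  # (36.0, 84.0) minutes
```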


However, overall, customers on both service tiers should see a performance improvement to their database backup restoration process – but the improvement will vary for each case.





So, what is the catch, you might ask…?
How can you get an Azure SQL Managed Instance with this feature?
How can you enable faster restores?


The best news is that this feature is rolling out to all of our customers right now, and they won't have to enable anything: their restore operations will simply start working faster.

Preparing Critical Infrastructure for Post-Quantum Cryptography


CISA has released CISA Insights: Preparing Critical Infrastructure for Post-Quantum Cryptography, which outlines the actions that critical infrastructure stakeholders should take now to prepare for their future migration to the post-quantum cryptographic standard that the National Institute of Standards and Technology (NIST) will publish in 2024.  

CISA strongly urges critical infrastructure stakeholders to follow the recommendations in the Insights now to ensure a smooth migration to the post-quantum cryptography standard.