Modern Application Development

This article is contributed. See the original author and article here.

Blog Overview 


This blog provides an overview of modern application development. It first defines the modern application development approach, then delves into the seven building blocks of that approach: cloud-native architecture, AI, integration, data, software delivery, operations, and security.


 


Each segment defines a building block and explains how the modern application development approach leverages it to produce more robust applications.


 


What is Modern Application Development (MAD)? 


Modern application development is an approach that enables you to innovate rapidly by using cloud-native architectures with loosely coupled microservices, managed databases, AI, DevOps support, and built-in monitoring.


 


The resulting modern applications leverage cloud-native architectures by packaging code and dependencies in containers and deploying them as microservices, using DevOps practices to increase developer velocity.


 


Subsequently, modern applications use continuous integration and continuous delivery (CI/CD) technologies and processes to improve system reliability. Modern apps employ automation to identify and quickly mitigate issues, applying best practices like infrastructure as code and increasing data security with threat detection and protection.


 


Lastly, modern applications are faster to build: infusing AI into cloud-native architectures reduces manual tasks and accelerates workflows, and low-code application development tools simplify and expedite development processes.


 


Cloud-native architectures 


The Cloud Native Computing Foundation (CNCF) defines cloud native as follows: “Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.  


Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.


 


These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”


 


Using that definition, what are the key tenets of a cloud-native approach, and how does each tenet benefit you?  


As stated above, cloud-native architectures center on speed and agility. That speed and agility derive from six factors: 



  1.  Cloud infrastructure 

  2.  Modern design 

  3.  Microservices 

  4.  Containers 

  5.  Backing services 

  6.  Automation 




 


 


Cloud infrastructure is the most important factor that contributes to the speed and agility of cloud-native architecture.


 


Key Factors  



  1.  Cloud-native systems fully leverage the cloud service model using PaaS compute infrastructure and managed services.  

  2.  Cloud-native systems continue to run as the infrastructure scales in or out; because the infrastructure is fully managed, there is no back end to worry about. 

  3.  Cloud-native systems have auto scaling, self-healing, and monitoring capabilities. 


Modern design is highly effective in part due to the Twelve-Factor App methodology, a set of principles and practices that developers follow to construct applications optimized for modern cloud environments.
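As an illustration of one of the twelve factors (storing configuration in the environment), the sketch below reads settings from environment variables rather than hard-coding them. The variable names and defaults are invented for the example:

```python
import os

def load_config(env=os.environ):
    """Read settings from the environment (Twelve-Factor: store config in the environment)."""
    return {
        # Illustrative settings; real apps define their own names and defaults.
        "database_url": env.get("DATABASE_URL", "sqlite:///local.db"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "max_workers": int(env.get("MAX_WORKERS", "4")),
    }

# The same build artifact runs in any environment; only the variables change.
config = load_config({"DATABASE_URL": "postgres://db.internal/app", "MAX_WORKERS": "16"})
print(config["max_workers"])  # 16
```

Because configuration lives outside the code, the identical container image can be promoted from dev to staging to production unchanged.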


 


Most Critical Considerations for Modern Design 



  1.  Communication – How front ends communicate with back-end services, and how back-end services communicate with each other. 

  2.  Resiliency – How services in your distributed architecture respond in less-than-ideal scenarios, given the in-process and out-of-process network communication of a microservices architecture. 

  3.  Distributed Data – How do you query data or implement a transaction across multiple services? 

  4.  Identity – How does your service identify who is accessing it and their allotted permissions? 
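To make the resiliency consideration concrete, here is a minimal retry-with-backoff sketch showing one common way a caller copes with transient network faults between services. The function names and delays are illustrative, not a prescribed pattern:

```python
import time

def with_retries(call, attempts=3, base_delay=0.1):
    """Retry a flaky remote call with exponential backoff between attempts."""
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the fault
            time.sleep(base_delay * 2 ** attempt)

# Simulated back-end service that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network fault")
    return "ok"

print(with_retries(flaky))  # ok
```

Production systems typically add jitter, timeouts, and circuit breakers on top of this basic idea.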


What are Microservices?  


Microservices are built as a distributed set of small, independent services that interact through a shared fabric.  




 


 


Improved Agility with Microservices  



  1.  Each microservice has an autonomous lifecycle and can evolve independently and deploy frequently.   

  2.  Each microservice can scale independently, enabling services to scale to meet demand.  
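As a toy illustration of a small, independent service, the sketch below implements the request-handling core of a hypothetical “orders” microservice as a pure function. A real service would wrap this in an HTTP framework and package it in a container; the routes and fields are invented for the example:

```python
import json

def handle_request(path, body):
    """Request handler for a single-purpose 'orders' microservice.

    Each microservice owns one capability and can be deployed and
    scaled independently of the rest of the system.
    """
    if path == "/health":
        return 200, {"status": "healthy"}
    if path == "/orders":
        order = json.loads(body)
        if "item" not in order:
            return 400, {"error": "missing item"}
        return 201, {"order_id": 1, "item": order["item"]}
    return 404, {"error": "not found"}

status, payload = handle_request("/orders", '{"item": "widget"}')
print(status, payload["item"])  # 201 widget
```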


Each microservice is then packaged as a container image, and those images are stored in a container registry. When needed, you transform an image into a running container instance to use the stored microservice. How do containers benefit cloud-native apps?


 


Benefits of Containers 



  1.  Provide portability and guarantee consistency across environments. 

  2.  Containers can isolate microservices and their dependencies from the underlying infrastructure.  

  3.  Smaller footprints than full virtual machines (VMs). That smaller size increases density, the number of microservices that a given host can run at a time.  


Cloud native solutions also increase application speed and agility via backing services.  




 


Benefits of Backing Services 



  1.  Save time and labor 

  2.  Treating backing services as attached resources lets them be attached and detached as needed, without code changes to the microservices that consume them, enabling greater dynamism 


Lastly, cloud-native solutions leverage automation. Using cloud-native architectures, your infrastructure and deployment are automated, consistent, and repeatable.


 


Benefits of Automation 



  1. Infrastructure as Code (IaC) avoids manual environment configuration and delivers stable environments rapidly at scale. 

  2.  Automated deployment leverages CI/CD to speed up innovation and deployment, updating on demand and saving money and time.  
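The declarative idea behind infrastructure as code can be sketched as reconciling a desired state against the current state: you declare what should exist, and tooling computes the actions to get there. The resource names and specs below are invented for the example:

```python
def plan(desired, current):
    """Compute the actions needed to move current infrastructure to the
    declared desired state -- the core idea behind infrastructure as code."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name, spec))
        elif current[name] != spec:
            actions.append(("update", name, spec))
    for name in current:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

desired = {"web": {"instances": 3}, "db": {"tier": "standard"}}
current = {"web": {"instances": 1}, "cache": {"size": "small"}}
print(plan(desired, current))
# [('update', 'web', {'instances': 3}), ('create', 'db', {'tier': 'standard'}), ('delete', 'cache', None)]
```

Real IaC tools (ARM/Bicep, Terraform, and similar) follow this declare-then-reconcile loop at much larger scale.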


Artificial Intelligence 


The second building block in the modern application development approach is Artificial intelligence (AI).


 


What comprises artificial intelligence? How do I add AI to my applications? Azure Artificial Intelligence comprises machine learning, knowledge mining, and AI apps and agents. Under the apps and agents domain, there are two overarching products that we’re going to focus on: Azure Cognitive Services and Azure Bot Service.


  


Cognitive Services is a collection of domain-specific, pre-trained AI models that can be customized with your data. Bot Service is a purpose-built bot development environment with out-of-the-box templates. To learn how to add AI to your applications, watch the short video titled Easily add AI to your applications.
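As a rough sketch of calling a pre-trained model over REST, the function below builds a request for the Text Analytics sentiment endpoint of Cognitive Services. The endpoint URL and key are placeholders, and the API version and request shape should be confirmed against the official Azure documentation before use:

```python
import json
import urllib.request

def sentiment_request(endpoint, key, texts):
    """Build (not send) a request for the Text Analytics v3.0 sentiment API.

    `endpoint` and `key` come from your own Cognitive Services resource;
    the values used below are placeholders.
    """
    payload = {
        "documents": [
            {"id": str(i), "language": "en", "text": t}
            for i, t in enumerate(texts, start=1)
        ]
    }
    return urllib.request.Request(
        url=f"{endpoint}/text/analytics/v3.0/sentiment",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/json",
        },
    )

req = sentiment_request("https://example.cognitiveservices.azure.com", "<your-key>", ["I love this product"])
print(req.get_full_url())
# https://example.cognitiveservices.azure.com/text/analytics/v3.0/sentiment
```

Sending the request with `urllib.request.urlopen(req)` against a real resource returns per-document sentiment scores as JSON.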


 




 


Innate Benefits 


 


User benefits: Translation, chatbots, and voice for AI-enabled user interfaces. 


Business benefits: Enhanced business logic for scenarios like search, personalization, document processing, image analytics, anomaly detection, and speech analytics.


 


Modern Application Development unique benefit: 


Enable developers of any skill to add AI capabilities to their applications with pre-built and customizable AI models for speech, vision, language, and decision-making.


 


Integration 


The third building block is integration.


 


Why is integration needed, and how is it accomplished? 


Integration is needed to connect multiple independent systems into a single application. The four core cloud services to meet integration needs are: 



  1.  A way to publish and manage application programming interfaces (APIs).  

  2.  A way to create and run integration logic, typically with a graphical tool for defining the workflow’s logic.  

  3.  A way for applications and integration technologies to communicate in a loosely coupled way via messaging.  

  4.  A technology that supports communication via events. 
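The third need, loosely coupled messaging, can be illustrated with a toy in-memory message bus. Real systems would use a managed broker, but the decoupling idea is the same: publishers and subscribers share only a topic name, so either side can be replaced or scaled without the other changing. The topic and payload below are invented for the example:

```python
from collections import defaultdict

class MessageBus:
    """Tiny in-memory stand-in for a message broker such as a service bus."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a handler to receive every message published to `topic`."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        """Deliver `message` to all current subscribers of `topic`."""
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
received = []
bus.subscribe("orders.created", received.append)
bus.publish("orders.created", {"order_id": 42})
print(received)  # [{'order_id': 42}]
```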


 




 


What are the benefits of Azure integration services and how do they translate to the modern app dev approach? 


Azure meets all four needs: the first is met by Azure API Management, the second by Azure Logic Apps, the third by Azure Service Bus, and the fourth by Azure Event Grid.


 


The four components of Azure Integration Services address the core requirements of application integration. Yet real scenarios often require more, and this is where the modern application development approach comes into play.


  


Perhaps your integration application needs a place to store unstructured data, or a way to include custom code that does specialized data transformations.


  


Azure Integration Services is part of the larger Azure cloud platform, making it easier to integrate data and APIs into your modern app to meet your needs.


 


You might store unstructured data in Azure Data Lake Store, for instance, or write custom code with Azure Functions to meet serverless compute needs.


 


Data 


The fourth building block is data, and more specifically managed databases.


 


What are the advantages of managed databases? 


Fully managed, cloud-based databases provide limitless scale, low-latency access to rich data, and advanced data protection—all built in, regardless of languages or frameworks.


 


How does the modern application development approach benefit from fully managed databases? 


Modern application development leverages microservices and containers; the benefit of both technologies is their ability to operate independently and scale as demand warrants.


  


The limitless scale and low-latency access to data let apps run unimpeded, ensuring the greatest user satisfaction and app functionality. 




 


Software Delivery 


The fifth building block is software delivery.


 


What constitutes modern development software delivery practices? 


Modern app development software delivery practices enable you to meet rapid market changes that require shorter release cycles without sacrificing quality, stability, and security.


 


The practices help you to release in a fast, consistent, and reliable way by using highly productive tools, automating mundane and manual steps, and iterating in small increments through CI/CD and DevOps practices.


 


What is DevOps? 


A compound of development (Dev) and operations (Ops), DevOps is the union of people, process, and technology to continually provide value to customers. DevOps enables formerly siloed roles—development, IT operations, quality engineering, and security—to coordinate and collaborate to produce better, more reliable products.


 


By adopting a DevOps culture along with DevOps practices and tools, teams gain the ability to better respond to customer needs, increase confidence in the applications they build, and achieve development goals faster.


 


DevOps influences the application lifecycle throughout its plan, develop, deliver, and operate phases. 




 


Plan 


In the plan phase, DevOps teams ideate, define, and describe features and capabilities of the applications and systems they are building. Creating backlogs, tracking bugs, managing agile software development with Scrum, using Kanban boards, and visualizing progress with dashboards are some of the ways DevOps teams plan with agility and visibility.


 


Develop 


The develop phase includes all aspects of coding—writing, testing, reviewing, and the integration of code by team members—as well as building that code into build artifacts that can be deployed into various environments. To develop rapidly, teams use highly productive tools, automate mundane and manual steps, and iterate in small increments through automated testing and continuous integration. 


 


Deliver 


Delivery is the process of deploying applications into production environments and deploying and configuring the fully governed foundational infrastructure that makes up those environments.  


In the deliver phase, teams define a release management process with clear manual approval stages. They also set automated gates that move applications between stages until they’re made available to customers. 


 


Operate 


The operate phase involves maintaining, monitoring, and troubleshooting applications in production environments. In adopting DevOps practices, teams work to ensure system reliability, high availability, and aim for zero downtime while reinforcing security and governance.


 


What is CI/CD? 


Under continuous integration, the develop phase—building and testing code—is fully automated. Each time you commit code, changes are validated and merged to the master branch, and the code is packaged in a build artifact.


 


Under continuous delivery, anytime a new build artifact is available, the artifact is automatically placed in the desired environment and deployed. With continuous deployment, you automate the entire process from code commit to production.
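The commit-to-production flow described above can be sketched as a sequence of gated stages, where any failing gate stops the pipeline. The stage names below are illustrative, not a prescribed pipeline:

```python
def run_pipeline(stages):
    """Run CI/CD stages in order; stop at the first failing gate."""
    completed = []
    for name, step in stages:
        if not step():
            return completed, f"pipeline stopped at '{name}'"
        completed.append(name)
    return completed, "deployed"

stages = [
    ("build", lambda: True),           # compile and package a build artifact
    ("unit tests", lambda: True),      # automated validation gate
    ("deploy to staging", lambda: True),
    ("deploy to production", lambda: True),
]
print(run_pipeline(stages))
# (['build', 'unit tests', 'deploy to staging', 'deploy to production'], 'deployed')
```

Continuous delivery stops before the final stage and waits for approval; continuous deployment runs every stage automatically.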


  


Operations 


The sixth building block is operations, with a focus on maximizing automation.


 


How do you maximize automation in your modern application development approach?  


With an increasingly complex environment to manage, maximizing the use of automation helps you improve operational efficiency, identify issues before they affect customer experiences, and quickly mitigate issues when they occur.


 


Fully managed platforms provide automated logging, scaling, and high availability. Rich telemetry, actionable alerting, and full visibility into applications and the underlying system are key to a modern application development approach.


 


Automating regular checkups and applying best practices like infrastructure as code and site reliability engineering promotes resiliency and helps you respond to incidents with minimal downtime and data loss.


 


Security 


The seventh building block is multilayered security. 


 


Why do I need multi-layered security in my modern applications? 


Modern applications require multilayered security across code, delivery pipelines, app runtimes, and databases. Start by providing developers secure dev boxes with well-governed identity. As part of the DevOps lifecycle, use automated tools to examine dependencies in code repositories and scan for vulnerabilities as you deploy apps to the target environment.
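The idea of automatically scanning dependencies for known vulnerabilities can be sketched as a simple lookup against an advisory feed. Real scanners and their data formats are far richer, and the package names, versions, and advisories below are invented for the example:

```python
def scan_dependencies(dependencies, advisories):
    """Flag pinned dependencies whose versions appear in an advisory feed."""
    findings = []
    for package, version in dependencies.items():
        vulnerable_versions = advisories.get(package, set())
        if version in vulnerable_versions:
            findings.append((package, version))
    return findings

# Hypothetical lock file and advisory data, for illustration only.
deps = {"requests": "2.19.0", "flask": "2.3.0"}
advisories = {"requests": {"2.19.0", "2.19.1"}}
print(scan_dependencies(deps, advisories))  # [('requests', '2.19.0')]
```

In a DevOps pipeline, a step like this runs on every commit and fails the build when findings are non-empty.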


 


Enterprise-grade secrets and policy management encrypt the applications and give the operations team centralized policy enforcement. With fully managed compute and database services, security control is built in and threat protection is executed in real time.


 


Conclusion 


While modern application development can seem daunting, it is an approach that can be adopted iteratively, and each step can yield large benefits for your team.


 


Access webinars, analyst reports, tutorials, and more on the Modern application development on Azure page.

Enhance your fraud workflow with efficient manual review


Managing robust fraud operations can be complex and time consuming. To help simplify the process and increase your fraud detection efficiency and accuracy, Dynamics 365 Fraud Protection takes a cohesive approach to manual review. With the Manual Review tool, now available in preview, you can set the rules to identify transactions that can benefit from further human review. You then place those items in a queue to facilitate and amplify the review process. The tool enables rule-based or business process-based queues with intelligent routing and feedback integration to help keep manual reviewers on schedule in their tasks. This seamless integration helps reduce the complex feedback loop and is scalable to accommodate any type of manual review operations.

Key capabilities

  1.  Queue management – Create workflows that route suspected fraudulent transactions to different queues for manual review based on specific criteria, and manage them in one place.

  2.  Review dashboard – Use a dashboard to see a curated view of data, complete with previous transaction history, so that you can review a transaction and analyze the fraud pattern efficiently.

  3.  Customized actions – Dynamically create remedy actions, such as decisions and fraud labeling, which can be applied for tracking and analysis purposes. You can escalate complex transactions that may require further review.

  4.  Customized performance dashboard – Access a dashboard that displays a list of reviewed orders, fraudulent orders, the false positive rate, and so on, calculated by the team or analyst, with daily and monthly views. Reports can also be exported and shared internally for review and discussion.

Next steps

To learn more about manual review capabilities and details, check out the GitHub site for Dynamics 365 Fraud Protection – Manual review. Also, join the Dynamics 365 Fraud Protection Insider Program to get an early view of upcoming features and discuss best practices to combat fraud.

The post Enhance your fraud workflow with efficient manual review appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

How to Enable AD Authentication for SQL 2019 Containers in Less than 5 Minutes | Data Exposed


We all know that one of the most important and secure methods of authentication in SQL Server is AD authentication. In this episode of Data Exposed, learn with Amit Khandelwal how to enable AD authentication for a SQL container and log in using an AD account, all in less than 5 minutes, using a tool called adutil.



Watch on Data Exposed






ION – We Have Liftoff!



 


Four years ago, we started a journey to help develop and advance decentralized identity, an emerging form of identity technology that empowers individuals and creates new business capabilities. Our goal is to put individuals, organizations, and other entities at the center of the apps, services, and digital exchanges that increasingly play a pivotal role in our lives. Among all the technical development required to deliver decentralized identity, none is more important than Decentralized Identifiers (DIDs).


 


DIDs are identifiers that can be used to secure access to resources, sign and verify credentials, and facilitate application data exchange. Unlike traditional usernames and email addresses, DIDs are owned and controlled by the entity itself (be it a person, device, or company), and exist independently of any external organization or trusted intermediary. Without DIDs, you can’t have a vibrant, interoperable decentralized identity and application ecosystem. Early on we recognized the existence of a secure, scalable DID implementation was a prerequisite for the kinds of applications and services we wanted to offer, so in 2019 we set out to build one.
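A DID itself is just a URI of the form did:&lt;method&gt;:&lt;method-specific-id&gt;, per the W3C DID syntax. The sketch below parses that structure; the ION identifier shown is shortened and illustrative:

```python
def parse_did(did):
    """Split a DID into its method and method-specific identifier
    (W3C DID syntax: did:<method>:<method-specific-id>)."""
    parts = did.split(":", 2)
    if len(parts) != 3 or parts[0] != "did" or not parts[1] or not parts[2]:
        raise ValueError(f"not a valid DID: {did!r}")
    return {"method": parts[1], "id": parts[2]}

# The identifier below is shortened for illustration.
print(parse_did("did:ion:EiClkZMDxPKqC9c"))
# {'method': 'ion', 'id': 'EiClkZMDxPKqC9c'}
```

The method (here, `ion`) tells software which network and protocol rules to use when resolving the identifier to its keys and endpoints.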


 


We are excited to share that v1 of ION is complete and has been launched on Bitcoin mainnet. We have deployed an ION node to our production infrastructure and are working together with other companies and organizations to do so as well. ION does not rely on centralized entities, trusted validators, or special protocol tokens – ION answers to no one but you, the community. Because ION is an open, permissionless system, anyone can run an ION node; in fact, the more nodes in operation, the stronger the network becomes. Development of ION, and the Sidetree standard ION is based on, takes place in the Decentralized Identity Foundation (DIF). Read on to learn how you can integrate ION, DIDs, and Verifiable Credentials in your applications and services.

Learn more about ION here: https://identity.foundation/ion/


 




 


Use ION DIDs


Creating an open, public, permissionless DID implementation that runs at massive scale, to the tune of thousands of operations per second, while maintaining decentralization and security was a long road – now it’s time to drive adoption. To help get DIDs into the hands of users and enable developers to easily integrate ION DIDs in wallets, decentralized apps, and credential-related services, we have contributed an open source library for generating DIDs and have opened up our ION node to provide a no-hassle option for anchoring ION DIDs:


Generate ION DIDs and keys – the high-level ION.js helper library is the easiest way to start generating ION DIDs as fast as possible: github.com/decentralized-identity/ion-tools (ION.js library).


An example of generating an ION DID with the ION.js library:


 


[screenshot: generating an ION DID with ION.js]


 


Use the lower-level SDK – access a larger set of ION-related APIs that provide more granular functionality: github.com/decentralized-identity/ion-sdk (TypeScript/Node)


 


Anchor DIDs you generate – easily anchor your DIDs via our ION node, without having to interact with a cryptocurrency wallet or run an ION node locally: github.com/decentralized-identity/ion-tools


[ NOTE: ownership of your DIDs is based on keys you generate locally, and all ION operations are signed with those keys, so even if you use our node for anchoring DID operations (or any other node), you are always in sole control. ]


 


Run an ION node


Running an ION node provides the fastest lookup of ION DIDs, the highest level of security when interacting with ION DIDs, and ensures you can always resolve ION DIDs without depending on intermediaries. There are two options for running an ION node:


 



  1. Run the Dockerized version of ION: https://github.com/decentralized-identity/ion/tree/master/docker (provides an option to connect to an existing Bitcoin node)

  2. Install a node natively on your machine: https://identity.foundation/ion/install-guide/


 


Look up ION DIDs


You can resolve ION DIDs to view their keys and routing endpoints using the ION Explorer interface: https://identity.foundation/ion/explorer/. This dashboard (which you’ll soon be able to run against your own local ION node) is being built-out with more views and tools as we speak, and will eventually contain interfaces to help operators monitor their local ION nodes.


 




 


 


Leverage ION DIDs today


Here are a few ways you can use ION DIDs right now:



  1. If you are a business or organization, sign up for the public preview of the Azure AD Verifiable Credential service: http://aka.ms/vcpreview

  2. Explore integrating OpenID Connect Self-Issued for DIDs to authenticate with sites, apps, and services that implement the draft specification: https://bitbucket.org/openid/connect/src/master/openid-connect-self-issued-v2-1_0.md

  3. Create a DID for yourself or your company and cryptographically link it to Web domains you control, using the DIF Well-Known DID Configuration specification: https://identity.foundation/.well-known/resources/did-configuration/.

  4. Use a DID to issue Verifiable Credentials, which are digital proofs that can be used to represent just about any verifiable assertion or asset, such as diplomas, membership cards, event tickets, etc. 


 


ION’s core protocol has been standardized


Along with ION reaching v1, so too has the protocol at its core: Sidetree. Sidetree is a specification developed alongside many others at the Decentralized Identity Foundation (DIF) that enables scalable DID networks (i.e. ION, Element, Orb) to be built atop any decentralized event record system (e.g. blockchains). We would like to thank the following collaborators who have worked on specs, contributed code, or provided feedback during this process:


 



 


This work would not have been possible without the contributions of folks like Orie Steele of Transmute and Troy Ronda of SecureKey, who played key roles in shaping the Sidetree specification, our colleagues in Microsoft Research, as well as Dietrich Ayala and the Protocol Labs team, who helped integrate IPFS as the P2P file replication protocol used in ION.


 


Open source development and codification of standards is essential to the creation of a vibrant decentralized identity ecosystem. If you are a developer or organization interested in contributing to the Sidetree specification, ION’s open source code, or any other work underway in this area, we encourage you to join the Decentralized Identity Foundation (DIF) and its Sidetree Development & Operating Group. This group is the primary place where contributors meet to discuss various technical and operational aspects of ION and the general Sidetree protocol.


 


Beyond v1


With ION v1 out the door, we will be turning our attention toward optimizing the ION node implementation and adding other important features, such as:



  • Deliver a light node configuration, making node operation easier for low-resource devices.

  • Add tooling and support for Ed25519 and BLS12-381 keys

  • Enable optimistic operation ingestion for transactions still in the mempool (reduces time to resolution)

  • Codify an initial set of DID type tags (used in tagging DIDs as IoT devices, software packages, etc.)

  • Enable querying of ION’s decentralized DID directory based on DID type – for example: once organizations and businesses establish DIDs, you will be able to fetch all DIDs typed as Organization, LocalBusiness, etc., to build a decentralized directory. You will also be able to find all DIDs of types like SoftwareSourceCode, to create decentralized code package and app registries. (NPM? How about DPM?)


 


While launching v1 of ION is a significant milestone, we’re still in the early phases of this journey. We have a lot left to do before we can fully realize a better, more trustworthy, more decentralized Web that empowers every person and every organization on the planet to achieve more.



Daniel Buchner
Decentralized Identity, Microsoft

What’s New in Microsoft Endpoint Manager – 2103 (March) Edition


This month’s Microsoft Endpoint Manager highlights include a guided scenario for Windows 10 in cloud configuration, Microsoft Tunnel health metrics, scale improvements to your Automated Device Enrollment experience for iOS, iPadOS, or macOS devices, and more.


 


On this blog and across many social platforms – including LinkedIn and Twitter – you’ve shared your feedback on What’s New in Microsoft Endpoint Manager – Microsoft Ignite 2021 Edition. Based on your response, I’m continuing the series, and I’m excited to share more of our new management and security capabilities! While you can find the full list of engineering investments we’ve made in What’s New, here are a few of my favorite additions. Which one’s your favorite? Let me know by leaving a comment, connecting with me on LinkedIn, or by tagging me on Twitter.


 


We recently announced Windows 10 in cloud configuration. With the Microsoft Endpoint Manager service release 2103 in March, we are providing a guided scenario for Windows 10 that makes it even easier for you to apply a uniform, Microsoft-recommended device configuration to any Windows 10 device. We focused on engineering a simplified cloud configuration experience so it’s faster to set up and easier to use. In less than a minute, you can now go from zero policy to managing Windows 10 devices that are cloud-optimized. Watch Senior Program Manager Ravi Ashok demonstrate this new guided scenario.


 


 


Guided scenario for Windows 10 in cloud configuration


 


 


Microsoft Tunnel is an IT Pro favorite from the past two Microsoft Ignite conferences. From the start, we wanted to make the Tunnel experience as simple and easy to use as possible. New in 2103, Tunnel performance and health metrics are easier for you to see right away. In the Microsoft Endpoint Manager admin center, you can easily see the top four health checks – CPU, memory, latency, and your Transport Layer Security (TLS) certificate. You no longer need to log in to your gateway server to do this troubleshooting – this simplification brings troubleshooting to you. From the UI, you can quickly see what you need to act on, with logs available if you need to dive deeper.


 


 




 


Microsoft Endpoint Manager admin center view of Tunnel performance and health metrics


 


 


If you’re not up to speed on Tunnel, watch a demo starting at 08:24 in our What’s New in Microsoft Endpoint Manager session.


 


Finally, with this release, we’ve made significant architectural changes in our support of Apple’s Automated Device Enrollment (ADE). Many of our customers use Automated Device Enrollment to enroll large numbers of devices without touching them – perfect for a remote or distributed workforce. By making these architectural changes, we have enabled you to enroll three times the number of devices per single token with the same profile. In future releases, we’ll focus on optimizing and scaling this enrollment experience to make it even simpler. While this may seem like a minor change to highlight in this blog post, customers from healthcare to school districts have requested this improved Automated Device Enrollment experience. I’m glad we could simplify your management experience.


 


Next month is already shaping up to include many favorite features. As I shared before, I am incredibly proud of the work the team does, and we always work with our customers top of mind. We listen to your feedback and goals and make changes and investments that help improve the user experience and simplify IT.


 


As always we welcome your feedback, so leave a comment below, connect with me on LinkedIn, or tag me @RamyaChitrakar on Twitter.