by Contributed | Apr 26, 2021 | Technology
This article is contributed. See the original author and article here.
Millions of developers use GitHub daily to build, ship, and maintain their software – but software development isn’t performed in silos; it requires open communication and collaboration across teams. Microsoft Teams is one of the key tools developers use for collaboration, so it was important that the integration of our platforms provide a seamless, connected experience. Context switching drains productivity and stifles the flow of work, which is why the GitHub integration in Teams is so important – it gives you and your team full line of sight into your GitHub projects without having to leave Teams. Now, when you collaborate with your team, you can stay engaged in your channels and ideate, triage issues, and work together to move your projects forward.
GitHub has made tremendous updates refreshing the integration – with a public preview last September and general availability of the app last month. Many in the developer community have been looking forward to the updates in the new GitHub app, which they’ve experienced on other collaboration platforms, so we’re excited to share some of the new features and existing capabilities you can experience today.
New support for personal app and scheduling reminders
Since the public preview launch of the GitHub app last September, we’ve made great updates in a couple of key areas: we’ve added support for the personal app experience, and we’ve added capabilities for scheduling reminders for pending pull requests.
Personal app experience
As part of personal app experience, you can now subscribe to your repositories and receive notifications for:
- issues
- pull requests
- commits
All the commands available in your channel are now available on your personal app experience with the GitHub app.
Image showing subscription experience within the personal app view
Schedule reminders
You can now schedule reminders for pending pull requests, delivered periodically in your channel or personal chat. Scheduled reminders help ensure your teammates unblock your workflows by reviewing your pull requests, which can improve business metrics like time-to-release for features and bug fixes.
Image showing user setting up scheduled reminders within the GitHub Teams app
From your Teams channel, you can run the following command to configure a reminder for pending pull requests on your Organization or Repository:
@github schedule organization/repository
This will create a reminder for weekdays at 9:30 AM. If you want to configure the reminder for a different day or time, pass the day and time with the command below:
@github schedule organization/repository <Day format> <Time format>
Learn more about channel reminders.
You can also configure personal reminders in your personal chats using the command below:
@github schedule organization
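The accepted day and time formats are described in the channel reminders documentation linked above. As an illustration (the exact format here is an assumption – verify it against the app’s documentation), a Monday-morning reminder for a single repository might look like:

```
@github schedule myorg/myrepo Monday 10:00
```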
Stay notified on updates that matter to you through subscriptions
Subscriptions allow you to customize the notifications you receive from the GitHub app for pull requests and issues. You can use filters to create subscriptions that are helpful for your projects, without the noise of non-relevant updates – and you can create these in channels or in the personal app.
Subscribing and Unsubscribing
You can subscribe to get notifications for pull requests and issues for an Organization or Repository’s activity.
@github subscribe <organization>/<repository>
Image showing user subscribing to a specific organization and repository via GitHub app
To unsubscribe from notifications from a repository, use:
@github unsubscribe <organization>/<repository>
Customize notifications
You can customize your notifications by subscribing to activity that is relevant to your Teams channel and unsubscribing from activity that is less helpful to your project.
@github subscribe owner/repo
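The command also accepts filters after the repository name. As an illustration (the filter names below are assumptions based on the app’s documentation – run @github help in your channel to confirm the supported set), filtered subscriptions might look like:

```
@github subscribe owner/repo issues comments
@github unsubscribe owner/repo commits
@github subscribe owner/repo +label:"priority: high"
```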
Learn more about subscription notifications on the GitHub app.
Use threading to synchronize comments between Teams and GitHub
Notifications for any pull request and issue are grouped under a parent card as replies. The parent card always shows the latest status of the pull request/issue along with other metadata like description, assignees, reviewers, labels, and checks. Threading gives context and helps improve collaboration in the channel.
Image showing pull request card staying updated with latest comments and information
Make your conversations actionable
Stay in the flow of work by making the conversations you have with your teammates on GitHub actionable. You can use the app to create an issue, close or reopen an issue, or leave comments on issues and pull requests.
With the GitHub app, you can perform the following:
- Create an issue
- Close an issue
- Reopen an issue
- Comment on an issue
- Comment on a pull request
You can perform these actions directly from the notification card in the channel by clicking on the buttons available in the parent card.
Image of a GitHub pull request notification on a card and the user from Teams commenting
More to come!
We’re excited for you to use these new and existing features as you continue to build world-class software and services – and we’re equally excited to see the growth and evolution of the app in the future. If you haven’t already installed the GitHub app for Teams, you can easily do so today to get started.
Happy coding!
by Scott Muniz | Apr 26, 2021 | Security, Technology
The Federal Bureau of Investigation (FBI), Department of Homeland Security (DHS), and Cybersecurity and Infrastructure Security Agency (CISA) assess Russian Foreign Intelligence Service (SVR) cyber actors—also known as Advanced Persistent Threat 29 (APT 29), the Dukes, CozyBear, and Yttrium—will continue to seek intelligence from U.S. and foreign entities through cyber exploitation, using a range of initial exploitation techniques that vary in sophistication, coupled with stealthy intrusion tradecraft within compromised networks. The SVR primarily targets government networks, think tank and policy analysis organizations, and information technology companies. On April 15, 2021, the White House released a statement on the recent SolarWinds compromise, attributing the activity to the SVR. For additional detailed information on identified vulnerabilities and mitigations, see the National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA), and FBI Cybersecurity Advisory titled “Russian SVR Targets U.S. and Allied Networks,” released on April 15, 2021.
The FBI and DHS are providing information on the SVR’s cyber tools, targets, techniques, and capabilities to aid organizations in conducting their own investigations and securing their networks.
Click here for a PDF version of this report.
Threat Overview
SVR cyber operations have posed a longstanding threat to the United States. Prior to 2018, several private cyber security companies published reports about APT 29 operations to obtain access to victim networks and steal information, highlighting the use of customized tools to maximize stealth inside victim networks and APT 29 actors’ ability to move within victim environments undetected.
Beginning in 2018, the FBI observed the SVR shift from using malware on victim networks to targeting cloud resources, particularly e-mail, to obtain information. The exploitation of Microsoft Office 365 environments following network access gained through use of modified SolarWinds software reflects this continuing trend. Targeting cloud resources probably reduces the likelihood of detection by using compromised accounts or system misconfigurations to blend in with normal or unmonitored traffic in an environment not well defended, monitored, or understood by victim organizations.
SVR Cyber Operations Tactics, Techniques, and Procedures
Password Spraying
In one 2018 compromise of a large network, SVR cyber actors used password spraying to identify a weak password associated with an administrative account. The actors conducted the password spraying activity in a “low and slow” manner, attempting a small number of passwords at infrequent intervals, possibly to avoid detection. The password spraying used a large number of IP addresses all located in the same country as the victim, including those associated with residential, commercial, mobile, and The Onion Router (TOR) addresses.
The organization unintentionally exempted the compromised administrator’s account from multi-factor authentication requirements. With access to the administrative account, the actors modified permissions of specific e-mail accounts on the network, allowing any authenticated network user to read those accounts.
The actors also used the misconfiguration for compromised non-administrative accounts. That misconfiguration enabled logins using legacy single-factor authentication on devices which did not support multi-factor authentication. The FBI suspects this was achieved by spoofing user agent strings to appear to be older versions of mail clients, including Apple’s mail client and old versions of Microsoft Outlook. After logging in as a non-administrative user, the actors used the permission changes applied by the compromised administrative user to access specific mailboxes of interest within the victim organization.
While the password sprays were conducted from many different IP addresses, once the actors obtained access to an account, that compromised account was generally only accessed from a single IP address corresponding to a leased virtual private server (VPS). The FBI observed minimal overlap between the VPSs used for different compromised accounts, and each leased server used to conduct follow-on actions was in the same country as the victim organization.
During the period of their access, the actors consistently logged into the administrative account to modify account permissions, including removing their access to accounts presumed to no longer be of interest, or adding permissions to additional accounts.
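As a rough illustration of what detecting this pattern involves, the sketch below (not from the advisory; the event format and thresholds are assumptions) flags accounts that accumulate failed logins from many distinct source IPs, each of which makes only a handful of attempts – the “low and slow” spray signature:

```python
from collections import defaultdict

def flag_low_and_slow(auth_events, max_attempts_per_ip=3, min_distinct_ips=10):
    """Flag accounts targeted from many IPs that each try only a few passwords.

    auth_events: iterable of (account, source_ip, success) tuples,
    e.g. parsed from authentication logs.
    """
    ips_by_account = defaultdict(set)   # account -> set of failing source IPs
    failures_by_ip = defaultdict(int)   # source IP -> total failed attempts
    for account, ip, success in auth_events:
        if not success:
            failures_by_ip[ip] += 1
            ips_by_account[account].add(ip)

    flagged = []
    for account, ips in ips_by_account.items():
        # "Quiet" IPs stay under the per-IP attempt threshold to evade lockouts.
        quiet_ips = [ip for ip in ips if failures_by_ip[ip] <= max_attempts_per_ip]
        if len(quiet_ips) >= min_distinct_ips:
            flagged.append(account)
    return flagged
```

Thresholds like these would need tuning per environment; the point is that correlation across source IPs, not per-IP lockout counters, is what exposes this tradecraft.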
Recommendations
To defend against this technique, the FBI and DHS recommend that network operators follow best practices for configuring access to cloud computing environments, including:
- Mandate use of an approved multi-factor authentication solution for all users, from both on-premises and remote locations.
- Prohibit remote access to administrative functions and resources from IP addresses and systems not owned by the organization.
- Regularly audit mailbox settings, account permissions, and mail forwarding rules for evidence of unauthorized changes.
- Where possible, enforce the use of strong passwords and prevent the use of easily guessed or commonly used passwords through technical means, especially for administrative accounts.
- Regularly review the organization’s password management program.
- Ensure the organization’s information technology (IT) support team has well-documented standard operating procedures for password resets and user account lockouts.
- Maintain a regular cadence of security awareness training for all company employees.
Leveraging Zero-Day Vulnerability
In a separate incident, SVR actors used CVE-2019-19781, a zero-day exploit at the time, against a virtual private network (VPN) appliance to obtain network access. Following exploitation of the device in a way that exposed user credentials, the actors identified and authenticated to systems on the network using the exposed credentials.
The actors worked to establish a foothold on several different systems that were not configured to require multi-factor authentication and attempted to access web-based resources in specific areas of the network in line with information of interest to a foreign intelligence service.
Following initial discovery, the victim attempted to evict the actors. However, the victim had not identified the initial point of access, and the actors used the same VPN appliance vulnerability to regain access. Eventually, the initial access point was identified, removed from the network, and the actors were evicted. As in the previous case, the actors used dedicated VPSs located in the same country as the victim, probably to make it appear that the network traffic was not anomalous with normal activity.
Recommendations
To defend against this technique, the FBI and DHS recommend network defenders ensure endpoint monitoring solutions are configured to identify evidence of lateral movement within the network, and:
- Monitor the network for evidence of encoded PowerShell commands and execution of network scanning tools, such as NMAP.
- Ensure host-based anti-virus/endpoint monitoring solutions are enabled and set to alert if monitoring or reporting is disabled, or if communication is lost with a host agent for more than a reasonable amount of time.
- Require use of multi-factor authentication to access internal systems.
- Immediately configure systems newly added to the network, including those used for testing or development work, to follow the organization’s security baseline, and incorporate them into enterprise monitoring tools.
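To make the first recommendation above concrete: PowerShell’s -EncodedCommand flag carries the script as Base64 over UTF-16LE, so a monitoring pipeline can recover the underlying command for inspection. The sketch below is an illustrative decoder, not part of the advisory:

```python
import base64
import re

def decode_powershell_encoded(cmdline):
    """If a command line uses PowerShell's -EncodedCommand flag (or its
    -e / -enc abbreviations), return the decoded script text; else None.
    PowerShell expects the payload to be Base64 over UTF-16LE."""
    match = re.search(r"-e(?:nc(?:odedcommand)?)?\s+([A-Za-z0-9+/=]+)",
                      cmdline, re.IGNORECASE)
    if not match:
        return None
    try:
        return base64.b64decode(match.group(1)).decode("utf-16-le")
    except (ValueError, UnicodeDecodeError):
        return None  # malformed payload; surface the raw command line instead
```

Feeding decoded commands into the same alerting used for plain-text PowerShell closes an easy evasion path.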
WELLMESS Malware
In 2020, the governments of the United Kingdom, Canada, and the United States attributed intrusions perpetrated using malware known as WELLMESS to APT 29. WELLMESS was written in the Go programming language, and the previously-identified activity appeared to focus on targeting COVID-19 vaccine development. The FBI’s investigation revealed that following initial compromise of a network—normally through an unpatched, publicly-known vulnerability—the actors deployed WELLMESS. Once on the network, the actors targeted each organization’s vaccine research repository and Active Directory servers. These intrusions, which mostly relied on targeting on-premises network resources, were a departure from historic tradecraft, and likely indicate new ways the actors are evolving in the virtual environment. More information about the specifics of the malware used in this intrusion has been previously released and is referenced in the ‘Resources’ section of this document.
Tradecraft Similarities of SolarWinds-enabled Intrusions
During the spring and summer of 2020, using modified SolarWinds network monitoring software as an initial intrusion vector, SVR cyber operators began to expand their access to numerous networks. The SVR’s modification and use of trusted SolarWinds products as an intrusion vector is also a notable departure from the SVR’s historic tradecraft.
The FBI’s initial findings indicate similar post-infection tradecraft with other SVR-sponsored intrusions, including how the actors purchased and managed infrastructure used in the intrusions. After obtaining access to victim networks, SVR cyber actors moved through the networks to obtain access to e-mail accounts. Targeted accounts at multiple victim organizations included accounts associated with IT staff. The FBI suspects the actors monitored IT staff to collect useful information about the victim networks, determine if victims had detected the intrusions, and evade eviction actions.
Recommendations
Although defending a network from a compromise of trusted software is difficult, some organizations successfully detected and prevented follow-on exploitation activity from the initial malicious SolarWinds software. This was achieved using a variety of monitoring techniques including:
- Auditing log files to identify attempts to access privileged certificates and creation of fake identity providers.
- Deploying software to identify suspicious behavior on systems, including the execution of encoded PowerShell.
- Deploying endpoint protection systems with the ability to monitor for behavioral indicators of compromise.
- Using available public resources to identify credential abuse within cloud environments.
- Configuring authentication mechanisms to confirm certain user activities on systems, including registering new devices.
While few victim organizations were able to identify the initial access vector as SolarWinds software, some were able to correlate different alerts to identify unauthorized activity. The FBI and DHS believe those indicators, coupled with stronger network segmentation (particularly “zero trust” architectures or limited trust between identity providers) and log correlation, can enable network defenders to identify suspicious activity requiring additional investigation.
General Tradecraft Observations
SVR cyber operators are capable adversaries. In addition to the techniques described above, FBI investigations have revealed infrastructure used in the intrusions is frequently obtained using false identities and cryptocurrencies. VPS infrastructure is often procured from a network of VPS resellers. These false identities are usually supported by low reputation infrastructure including temporary e-mail accounts and temporary voice over internet protocol (VoIP) telephone numbers. While not exclusively used by SVR cyber actors, a number of SVR cyber personas use e-mail services hosted on cock[.]li or related domains.
The FBI also notes SVR cyber operators have continuously used open-source or commercially available tools, including Mimikatz—an open-source credential-dumping tool—and Cobalt Strike—a commercially available exploitation tool.
by Contributed | Apr 26, 2021 | Technology
By John Mighell, Sr. Product Marketing Manager, Viva Learning Marketing Lead
Since we announced Microsoft Viva in February, we’ve heard consistently from customers, industry analysts, and our own internal users how important learning is to them. Every day we hear examples of creative learning – not just as an important aspect of personal and professional growth – but also as an avenue for social engagement, helping people feel more connected in a largely virtual work environment. These scenarios are top of mind as we get closer to launching Viva Learning – a central hub for learning in Microsoft 365 where people can discover, share, recommend, and learn from best-in-class content libraries to help teams and individuals make learning a natural part of their day.
At Microsoft Ignite in March, we shared a new set of product features and admin capabilities, derived from our private preview customer feedback and internal use scenarios. With this feedback we’re continuing to build Viva Learning to seamlessly integrate learning into the tools you already use every day – you can share learning via chat, pin learning content in existing Teams channels, recommend learning, see all your available learning sources in a personalized view, search across available sources, and so much more.
As we strive to build a product that facilitates an organization’s learning culture by bringing learning into the flow of work, we’re thrilled to announce today we’re ready to take the next step in that journey.
Viva Learning enters public preview today
There are a few items to keep in mind as we kick off public preview:
Eligibility
Viva Learning public preview is open to all organizations with paid subscription access to Microsoft Teams, with the exception of Education or Government customers. We’ll have additional information to share when Viva Learning is available to those audiences in the future.
Approval from your IT admin
Public preview onboarding requests should either come from your IT admin directly or have the support of your IT admin. In the Teams Admin Center, your IT admin will have the ability to configure who within your organization has access to the Viva Learning app in preview – this can range from the entire organization to just a small subset of users.
Onboarding timelines
We expect a high volume of public preview requests and will manage requests as they come in through a rolling onboarding process. Expect it to take a few weeks from request submission to tenant activation.
Product features in preview
As with any product in preview or beta stages, please be aware that we are still hard at work building the final version. The preview product will not initially include all the features that will be available at product general availability. We will roll out additional features to our preview customers continually as we lead up to general availability later this year.
Submitting feedback
As you start to use Viva Learning we want to hear your feedback! Please use the “Help” button in Teams (bottom of the left nav rail) and select either “Suggest a feature” or “Report a problem”.
How to sign up for Viva Learning public preview
Once you’ve read through the 5 items above, you’re ready to get started. The public preview request process is simple and should only take you a few minutes:
1. Gather required information
To activate the Viva Learning preview for your Microsoft 365 tenant, we need the tenant ID your organization uses to log in to Teams (the tenant with associated AAD users) and the type of SKU your organization is currently licensed with. You will need that information to accurately fill out the form linked below.
2. Submit your request
Complete our onboarding request form with the required information. Completing this form should take about 5 minutes.
3. Next steps
After submitting your request, please stay tuned for a confirmation email within 24 business hours. Later, you will receive a notification email once your tenant is activated which will include setup and usage documentation. We will keep you posted using the vivalp@microsoft.com alias. Please add this alias to your Safe Senders list in Outlook (Home → Junk → Junk E-mail Options → Safe Senders). Please do not send emails directly to this alias.
Looking ahead
We will have additional important updates, features, and news on Viva Learning in the coming weeks and months. Don’t forget to visit aka.ms/VivaLearning and click “sign up for updates” to receive the latest customer and partner updates as they become available.
by Contributed | Apr 26, 2021 | Technology
Many ISVs today are developing on the AKS platform and looking to take advantage of the flexibility that the Kubernetes platform provides to run multiple workloads in a shared cluster environment. There are many benefits to this approach including cost, management, and performance.
However, many customers have concerns about how to run their workloads in a shared Kubernetes environment while maintaining a secure and performant runtime environment for their tenants. In this video we’ll cover some of the considerations for your AKS solution, as well as some of the Kubernetes primitives that will help you achieve a successful multi-tenant deployment.
For more information on the topics covered in this video please refer to the following docs:
AKS
Kubernetes
Please let us know in the comments what you think, and if you have any questions!
by Contributed | Apr 26, 2021 | Technology
Welcome to the April update of Java Azure Tools! This blog series provides updates for all the Azure tooling support we provide for Java users, covering Maven/Gradle plugins for Azure, Azure Toolkit for IntelliJ/Eclipse, and Azure Extensions for VS Code. Follow us for more exciting updates in future blogs.
If you use Azure with Java apps deployed on VMs, App Service, AKS, or on-premises, you probably store application data on Azure as well, using data services like Azure Database for MySQL. The Azure Toolkit for IntelliJ 3.50.0 release brings a brand new experience in IntelliJ for connecting your Java app with Azure Database for MySQL. We will also show you how to deploy the app seamlessly to Azure Web Apps with the database connection.
In brief, the Azure Toolkit for IntelliJ can manage your Azure database credentials and supply them to your app automatically through:
- If running the app locally: environment variables, through a before-launch task named “Connect Azure Resource”.
- If running the app on Azure Web Apps: App Settings deployed along with the artifact.
Check out the showcase GIF below; detailed steps are explained in later sections.
Create an Azure Database for MySQL
Let’s start by creating an Azure Database for MySQL server instance. You can follow the steps here right from the Azure Toolkit for IntelliJ plugin, or use any other tool, like the Azure Portal.
- Right-click on the Azure Database for MySQL node in the Azure Explorer, select Create, and then select More settings to open the wizard shown in the image below.
- (Optional) Customize the resource group name and server name.
- Choose a location you prefer, here we use West US.
- Specify admin username and password.
- Select the two checkboxes in the Connection security section. This step will automatically add corresponding IP whitelist rules to the firewalls protecting your MySQL server.
- Click OK.
The background operation can take a few minutes to complete. After that, you can refresh the Azure Database for MySQL node in the Azure Explorer, right-click on the server you just created, and select Show Properties to see some key information. If you are using the IntelliJ IDEA Ultimate edition, selecting Open by Database Tools will connect the MySQL server to the embedded database tools.
Connect with your local project
Here we use a sample Spring Boot project called PetClinic. You can also try this with your own project consuming MySQL.
- Clone the project to your dev machine and import with IntelliJ IDEA.
git clone https://github.com/spring-projects/spring-petclinic.git
- Enable the MySQL profile by adding the following line to application.properties:
spring.profiles.active=mysql
- Connect to the MySQL server using MySQL Workbench or the MySQL CLI, for example:
mysql -u <admin name>@<mysql server name> -h <mysql server name>.mysql.database.azure.com -P 3306 -p --ssl
- Run the commands in resources/db/mysql/user.sql on the MySQL server to create the petclinic database and user.
- In Azure Explorer, right-click on the MySQL server you created and select Connect to Project to open the wizard below. Select the petclinic database, specify the password, and select Until restart to save the password for this IDE session. You can click Test Connection to verify the connection from your IDE, then click OK.
- Open application-mysql.properties and comment out the original spring.datasource.url/username/password. Type those properties again and accept the autocomplete suggestion Connect to Azure Datasource.
# database init, supports mysql too
database=mysql
#spring.datasource.url=${MYSQL_URL:jdbc:mysql://localhost/petclinic}
#spring.datasource.username=${MYSQL_USER:petclinic}
#spring.datasource.password=${MYSQL_PASS:petclinic}
spring.datasource.url=${AZURE_MYSQL_URL}
spring.datasource.username=${AZURE_MYSQL_USERNAME}
spring.datasource.password=${AZURE_MYSQL_PASSWORD}
# SQL is written to be idempotent so this is safe
spring.datasource.initialization-mode=always
- Run the application by right-clicking on the PetClinicApplication main class and choosing Run ‘PetClinicApplication’. This launches the app locally, connected to the MySQL server on Azure.
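The ${AZURE_MYSQL_*} placeholders in the properties above are resolved from environment variables at startup, which is how the “Connect Azure Resource” task injects credentials without putting them in source control. As an illustration of that resolution rule (a simplified sketch, not Spring’s implementation), ${VAR:default} falls back to the default when VAR is unset:

```python
import re

def resolve_placeholder(value, env):
    """Resolve a Spring-style ${VAR} or ${VAR:default} placeholder
    against a dict of environment variables (simplified sketch)."""
    match = re.fullmatch(r"\$\{([^}:]+)(?::(.*))?\}", value)
    if not match:
        return value              # plain literal, no placeholder
    name, default = match.group(1), match.group(2)
    if name in env:
        return env[name]          # e.g. injected by "Connect Azure Resource"
    if default is not None:
        return default            # fallback text after the colon
    raise KeyError(f"unresolved placeholder: {value}")
```

This is also why the commented-out lines like ${MYSQL_USER:petclinic} worked against a local database: with no environment variable set, the default after the colon was used.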
Deploy the app seamlessly to Azure Web Apps
Once you have finished all the steps above, there are no extra steps needed to make the database connection also work on Azure Web Apps. Just follow the ordinary steps to deploy the app to Azure: right-click on the project and select Azure -> Deploy to Azure Web Apps. You can also see the before-launch task “Connect Azure Resource” added here, which will upload your database credentials as App Settings to Azure. After clicking Run and waiting for the deployment to complete, you will see the app working on Azure without any further configuration.
Try our tools
Please don’t hesitate to give it a try! Your feedback and suggestions are very important to us and will help shape our product in the future.