Deep Dive: How To Debug Syslog Ingestion for Sentinel and Log Analytics


This article is contributed. See the original author and article here.

 


Hello everybody, Simone here to talk about a situation that comes up with my customers all the time: understanding how syslog ingestion works.


Before diving in, make sure you are familiar with the references below:



Most of the time it is not clear what needs to be collected and how, so with this article I want to clarify what happens behind the scenes.


Starting from the RFC, we have a list of “Facility” values, as in the screenshot below:


sifriger_0-1608569858351.png


 


And for each of them we could have a specific “Severity” (see the corresponding picture below):


sifriger_1-1608569858358.png


 


Back to the situation, the natural question that comes up is: how can we clearly understand which facility and severity each product is using, if we have no information about them for the products in question?


To find the information we need, we must capture some TCP/UDP packets on the syslog server, rebuild them in Wireshark, and then analyze the results.


Let’s start with the first step: packet capture. Below are the macro steps to follow:



  • From the syslog server (in this case a Linux server) we will use the tcpdump command;
    if it is not available, follow this link to set it up:
    https://opensource.com/article/18/10/introduction-tcpdump

  • the command could be, for example:
    tcpdump -i any -c50 -nn src xxx.xxx.xxx.xxx (replace xxx.xxx.xxx.xxx with the source IP address under analysis)

  • the results, after rebuilding the packets in Wireshark, should look similar to the following image: sifriger_2-1608569858371.png


The header of every row contains exactly the information we are looking for. How do we deal with this piece of info? Easy: use the formula described in the following excerpt, taken directly from the RFC:


“The Priority value is calculated by first multiplying the Facility number by 8 and then adding the numerical value of the Severity.  For example, a kernel message (Facility=0) with a Severity of Emergency (Severity=0) would have a Priority value of 0.  Also, a “local use 4” message (Facility=20) with a Severity of Notice (Severity=5) would have a Priority value of 165.  In the PRI of a syslog message, these values would be placed between the angle brackets as <0> and <165> respectively.
The only time a value of “0” follows the “<” is for the Priority value of “0”.  Otherwise, leading “0”s MUST NOT be used.”
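
As a quick sanity check, the calculation can be expressed in a couple of lines of Python, reproducing the two examples from the RFC excerpt above:

# Priority = Facility * 8 + Severity, as described in the RFC excerpt above.
def priority(facility, severity):
    return facility * 8 + severity

print(priority(0, 0))   # kernel (0), Emergency (0)    -> 0,   sent on the wire as <0>
print(priority(20, 5))  # local use 4 (20), Notice (5) -> 165, sent on the wire as <165>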


 


In the example above, we have the value <46>. According to the above-mentioned RFC, the formula used to translate that number into something human-readable is the following:


Priority = 8 x Facility + Severity


 


We now look up the formula result in the following matrix:

| Facility      | Emergency | Alert | Critical | Error | Warning | Notice | Informational | Debug |
|---------------|-----------|-------|----------|-------|---------|--------|---------------|-------|
| Kernel        | 0   | 1   | 2   | 3   | 4   | 5   | 6   | 7   |
| user-level    | 8   | 9   | 10  | 11  | 12  | 13  | 14  | 15  |
| mail          | 16  | 17  | 18  | 19  | 20  | 21  | 22  | 23  |
| system        | 24  | 25  | 26  | 27  | 28  | 29  | 30  | 31  |
| security/auth | 32  | 33  | 34  | 35  | 36  | 37  | 38  | 39  |
| message       | 40  | 41  | 42  | 43  | 44  | 45  | 46  | 47  |
| printer       | 48  | 49  | 50  | 51  | 52  | 53  | 54  | 55  |
| network news  | 56  | 57  | 58  | 59  | 60  | 61  | 62  | 63  |
| UUCP          | 64  | 65  | 66  | 67  | 68  | 69  | 70  | 71  |
| clock         | 72  | 73  | 74  | 75  | 76  | 77  | 78  | 79  |
| security/auth | 80  | 81  | 82  | 83  | 84  | 85  | 86  | 87  |
| FTP daemon    | 88  | 89  | 90  | 91  | 92  | 93  | 94  | 95  |
| NTP           | 96  | 97  | 98  | 99  | 100 | 101 | 102 | 103 |
| Log Audit     | 104 | 105 | 106 | 107 | 108 | 109 | 110 | 111 |
| Log Alert     | 112 | 113 | 114 | 115 | 116 | 117 | 118 | 119 |
| Clock         | 120 | 121 | 122 | 123 | 124 | 125 | 126 | 127 |
| local0        | 128 | 129 | 130 | 131 | 132 | 133 | 134 | 135 |
| local1        | 136 | 137 | 138 | 139 | 140 | 141 | 142 | 143 |
| local2        | 144 | 145 | 146 | 147 | 148 | 149 | 150 | 151 |
| local3        | 152 | 153 | 154 | 155 | 156 | 157 | 158 | 159 |
| local4        | 160 | 161 | 162 | 163 | 164 | 165 | 166 | 167 |
| local5        | 168 | 169 | 170 | 171 | 172 | 173 | 174 | 175 |
| local6        | 176 | 177 | 178 | 179 | 180 | 181 | 182 | 183 |
| local7        | 184 | 185 | 186 | 187 | 188 | 189 | 190 | 191 |


So now, let’s take a step back to the customer’s question and work out what the “Facility” and the “Severity” are in the provided example.


Since the header was <46>, the result is the following (a quick scripted check is shown right after the list):



  • Facility = message

  • Severity = Informational
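
If you have to translate many PRI values, inverting the formula in a few lines of Python saves repeated trips to the matrix; the facility and severity names below come straight from the table above:

# Decode a PRI value into facility and severity names: facility = PRI // 8, severity = PRI % 8.
FACILITIES = [
    "Kernel", "user-level", "mail", "system", "security/auth", "message",
    "printer", "network news", "UUCP", "clock", "security/auth", "FTP daemon",
    "NTP", "Log Audit", "Log Alert", "Clock",
    "local0", "local1", "local2", "local3", "local4", "local5", "local6", "local7",
]
SEVERITIES = ["Emergency", "Alert", "Critical", "Error", "Warning", "Notice", "Informational", "Debug"]

def decode_pri(pri):
    facility, severity = divmod(pri, 8)
    return FACILITIES[facility], SEVERITIES[severity]

print(decode_pri(46))  # ('message', 'Informational')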


Now that we understand what we are dealing with, it’s time to configure Log Analytics / Sentinel by enabling the Syslog data source in Azure Monitor.


All we have to do is to:



  • add the facilities to the workspace (by entering their names and leveraging IntelliSense).
    sifriger_3-1608569858377.png

     



  • select which severities to import.
    sifriger_4-1608569858407.png


 



  •  and click Save.
    sifriger_5-1608569858411.png


 


Using some real-life examples: if we want to collect the logs for FTP, the corresponding facility to enter is “ftp” and the following logs will be imported:


























| Syslog file | Log Path |
|---|---|
| ftp.info; ftp.notice | /log/ftplog/ftplog.info |
| ftp.warning | /log/ftplog/ftplog.warning |
| ftp.debug | /log/ftplog/ftplog.debug |
| ftp.err; ftp.crit; ftp.emerg | /log/ftplog/ftplog.err |



 


Similarly, for user-level messages, the facility is “user” and the imported logs will be:


























| Syslog file | Log Path |
|---|---|
| user.info; user.notice | /log/user/user.info |
| user.warning | /log/user/user.warning |
| user.debug | /log/user/user.debug |
| user.err; user.crit; user.emerg | /log/user/user.err |



 


Another one: for Apache, the facility is “local0” and the logs will be:


























| Syslog file | Log Path |
|---|---|
| local0.info; local0.notice | /log/httpd/httpd. |
| local0.warning | /log/httpd/httpd.warning |
| local0.debug | /log/httpd/httpd.debug |
| local0.err; local0.crit; local0.emerg | /log/httpd/httpd.err |



 


We have everything in place, but are we really sure that the info is being produced?
What if you would like to actually test that data is flowing into the corresponding facility?
We can leverage the following sample commands for CEF and Syslog, using the built-in logger utility:



logger -p auth.notice "Some message for the auth.log file"


logger -p local0.info "Some message for the local0.log file"


logger "CEF:0|Microsoft|MOCK|1.9.0.0|SuspiciousActivity|Demo suspicious activity|5|start=2020-12-12T18:52:58.0000000Z app=mock suser=simo msg=Demo suspicious activity externalId=2024 cs1Label=tag cs1=my test"
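
If you prefer scripting the test rather than calling logger by hand, a minimal Python sketch using the standard library SysLogHandler can produce equivalent messages. This is only a sketch: it assumes the local syslog daemon (rsyslog/syslog-ng) is listening on UDP port 514 and forwarding to the agent; adjust the address and facility to your setup.

import logging
import logging.handlers

# Send test messages to the local syslog daemon on UDP 514 using facility local0.
handler = logging.handlers.SysLogHandler(
    address=("127.0.0.1", 514),
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0,
)
test_logger = logging.getLogger("ingestion-test")
test_logger.setLevel(logging.DEBUG)
test_logger.addHandler(handler)

test_logger.info("Some message for the local0 facility")     # local0.info    -> PRI <134>
test_logger.warning("Some warning for the local0 facility")  # local0.warning -> PRI <132>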



Note: pay attention to the time window when you query for these results! ;)
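
For example, a quick way to check from code that the test events actually landed in the workspace is the azure-monitor-query client library. This is only a sketch: it assumes the azure-monitor-query and azure-identity packages are installed, the signed-in identity can read the workspace, and the workspace ID placeholder is replaced with your own.

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "YOUR-WORKSPACE-ID"  # placeholder: your Log Analytics workspace ID

client = LogsQueryClient(DefaultAzureCredential())

# Look for the test messages sent above; keep the time window small and remember
# that ingestion is not instantaneous.
query = 'Syslog | where Facility == "local0" | project TimeGenerated, Facility, SeverityLevel, SyslogMessage | take 10'
response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(hours=1))

for table in response.tables:
    for row in table.rows:
        print(row)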


That’s it from my side, thank you for reading my article till the end.


Special thanks go to Bruno Gabrielli for the review.


Simone


 


 

Azure Service Fabric 7.2 Fifth Refresh Release

This article is contributed. See the original author and article here.

The Azure Service Fabric 7.2 fifth refresh release includes stability fixes for standalone and Azure environments, and has started rolling out to the various Azure regions. The updates for the .NET SDK, Java SDK, and Service Fabric Runtime will be available through Web Platform Installer, NuGet packages, and Maven repositories in 7-10 days within all regions.



  • Service Fabric Runtime


    • Windows – 7.2.452.9590

    • Ubuntu 16 – 7.2.454.1

    • Ubuntu 18 – 7.2.454.1804

    • Service Fabric for Windows Server Service Fabric Standalone Installer Package – 7.2.452.9590




  • .NET SDK


    • Windows .NET SDK –  4.2.452

    • Microsoft.ServiceFabric –  7.2.452

    • Reliable Services and Reliable Actors –  4.2.452

    • ASP.NET Core Service Fabric integration –  4.2.452


  • Java SDK –  1.0.6


Key Announcements



  • Key Vault references for Service Fabric applications are now GA on Windows and Linux.

  • .NET 5 apps for Windows on Service Fabric are now supported as a preview. Look out for the GA announcement of .NET 5 apps for Windows on Service Fabric.

  • .NET 5 apps for Linux on Service Fabric will be added in the Service Fabric 8.0 release.


For more details, please read the release notes.  

Azure Service Fabric 7.1 Tenth Refresh Release

This article is contributed. See the original author and article here.

The Azure Service Fabric 7.1 tenth refresh release includes stability fixes for standalone and Azure environments, and has started rolling out to the various Azure regions. The updates for the .NET SDK, Java SDK, and Service Fabric Runtime will be available through Web Platform Installer, NuGet packages, and Maven repositories in 7-10 days within all regions. This release is only available on Windows.



  • Service Fabric Runtime


    • Windows – 7.1.510.9590

    • Service Fabric for Windows Server Service Fabric Standalone Installer Package – 7.1.510.9590




  • .NET SDK


    • Windows .NET SDK –  4.1.510

    • Microsoft.ServiceFabric –  7.1.510

    • Reliable Services and Reliable Actors –  4.1.510

    • ASP.NET Core Service Fabric integration –  4.1.510


  • Java SDK –  1.0.6


Key Announcements



  • .NET 5 apps for Windows on Service Fabric are now supported as a preview. Look out for the GA announcement of .NET 5 apps for Windows on Service Fabric.

  • .NET 5 apps for Linux on Service Fabric will be added in the Service Fabric 8.0 release.


For more details, please read the release notes.  

How to build a voice-enabled grocery chatbot with Azure AI


This article is contributed. See the original author and article here.

Chatbots have become increasingly popular in providing useful and engaging experiences for customers and employees. Azure services allow you to quickly create bots, add intelligence to them using AI, and customize them for complex scenarios.


In this blog, we’ll walk through an exercise which you can complete in under two hours, to get started using Azure AI Services. This intelligent grocery bot app can help you manage your shopping list using voice commands. We’ll provide high level guidance and sample code to get you started, and we encourage you to play around with the code and get creative with your solution!


Features of the application:


iPhoneview.png



  • Add or delete grocery items by dictating them to Alexa.



  • Easily access the grocery list through an app.

  • Check off items using voice commands; for example, “Alexa, remove Apples from my grocery list.”

  • Ask Alexa to read the items you have in your grocery list.

  • Automatically organize items by category to help save time at the store.

  • Use any laptop or Web Apps to access the app and sync changes across laptop and phone.


Prerequisites:



 


Key components:



Solution Architecture


App Ref Architecture.png


App Architecture Description: 



  • The user accesses the chatbot by invoking it as an Alexa skill.

  • User is authenticated with Azure Active Directory.

  • User interacts with the chatbot powered by Azure Bot Service; for example, user requests bot to add grocery items to a list.

  • Azure Cognitive Services process the natural language request to understand what the user wants to do. (Note: If you wanted to give your bot its own voice, you can choose from over 200 voices and 54 languages/locales. Try the demo to hear the different natural sounding voices.)

  • The bot adds or removes content in the database.


Another visual of the flow of data within the solution architecture is shown below.


App flow.png


 


 


 


 


 


 


Implementation


High level overview of steps involved in creating the app along with some sample code snippets for illustration:


We’ll start by creating an Azure Bot Service instance, and adding speech capabilities to the bot using the Microsoft Bot Framework and the Alexa skill. Bot Framework, along with Azure Bot Service, provides the tools required to build, test, deploy, and manage the end-to-end bot development workflow. In this example, we are integrating Azure Bot Service with Alexa, which can process speech inputs for our voice-based chatbot. However, for chatbots deployed across multiple channels, and for more advanced scenarios, we recommend using Azure’s Speech service to enable voice-based scenarios. Try the demo to listen to the over 200 high quality voices available across 54 languages and locales.



  1. The first step in the process is to log in to the Azure portal and follow the steps here to create an Azure Bot Service resource and a Web App Bot. To add voice capability to the bot, click on Channels to add Alexa (see the snapshot below) and note the Alexa Service Endpoint URI.


 Azure Bot Service Channels



  2. Next, we need to log in to the Alexa Developer Console and create an Amazon Alexa skill. After creating the skill, we are presented with the interaction model. Replace the contents of the JSON Editor with the example below.


 


 


 

{
  "interactionModel": {
    "languageModel": {
      "invocationName": "get grocery list",
      "intents": [
        { "name": "AMAZON.FallbackIntent", "samples": [] },
        { "name": "AMAZON.CancelIntent", "samples": [] },
        { "name": "AMAZON.HelpIntent", "samples": [] },
        { "name": "AMAZON.StopIntent", "samples": [] },
        { "name": "AMAZON.NavigateHomeIntent", "samples": [] },
        {
          "name": "GetGroceryItems",
          "slots": [
            { "name": "name", "type": "AMAZON.US_FIRST_NAME" }
          ],
          "samples": [
            "Get grocery items in the list",
            "Do I have bread in my list"
          ]
        }
      ],
      "types": []
    }
  }
}

 


 


 


 



  3. Next, we’ll integrate the Alexa skill with our Azure bot. We’ll need two pieces of information to do this: the Alexa Skill ID and the Alexa Service Endpoint URI. First, get the Skill ID either from the URL in the Alexa portal, or by going to the Alexa Developer Console and clicking “View Skill ID”. The Skill ID should be a value like ‘amzn1.ask.skill.’ followed by a GUID. Then, get the Alexa Service Endpoint URI by going to the channels page of our Azure Web App Bot in the Azure portal and clicking on Alexa to copy the Alexa Service Endpoint URI. Then integrate as shown:


 



  • Amazon Developer Console: After building the Alexa Skill, click on Endpoint and paste the Alexa Service Endpoint URI that we copied from the Azure portal and save the Endpoints.
    Amazon Developer Console.jpg

  • Azure Portal: Go to the channels page of the Azure Bot, click on Alexa, and paste the Alexa Skill ID that we copied from the Alexa Developer Console.
    Alexa config settings in Azure bot service.jpg


 



  4. Now, we’ll download and run the bot locally for testing using the Bot Framework Emulator. Click on “Build” in the Azure Web App Bot to download the source code locally. Modify app.py as below:

    # Copyright (c) Microsoft Corporation. All rights reserved.
    # Licensed under the MIT License.
    
    from http import HTTPStatus
    
    from aiohttp import web
    from aiohttp.web import Request, Response, json_response
    from botbuilder.core import (
        BotFrameworkAdapterSettings,
        ConversationState,
        MemoryStorage,
        UserState,
    )
    from botbuilder.core.integration import aiohttp_error_middleware
    from botbuilder.schema import Activity
    
    from config import DefaultConfig
    from dialogs import MainDialog, groceryDialog
    from bots import DialogAndWelcomeBot
    # NOTE: the module name below is assumed; import the IntelligentGrocery LUIS
    # recognizer used further down from wherever it is defined in your project.
    from intelligent_grocery import IntelligentGrocery
    
    from adapter_with_error_handler import AdapterWithErrorHandler
    
    CONFIG = DefaultConfig()
    
    # Create adapter.
    # See https://aka.ms/about-bot-adapter to learn more about how bots work.
    SETTINGS = BotFrameworkAdapterSettings(CONFIG.APP_ID, CONFIG.APP_PASSWORD)
    
    # Create MemoryStorage, UserState and ConversationState
    MEMORY = MemoryStorage()
    USER_STATE = UserState(MEMORY)
    CONVERSATION_STATE = ConversationState(MEMORY)
    
    # Create adapter.
    # See https://aka.ms/about-bot-adapter to learn more about how bots work.
    ADAPTER = AdapterWithErrorHandler(SETTINGS, CONVERSATION_STATE)
    
    # Create dialogs and Bot
    RECOGNIZER = IntelligentGrocery(CONFIG)
    grocery_DIALOG = groceryDialog()
    DIALOG = MainDialog(RECOGNIZER, grocery_DIALOG)
    BOT = DialogAndWelcomeBot(CONVERSATION_STATE, USER_STATE, DIALOG)
    
    # Listen for incoming requests on /api/messages.
    async def messages(req: Request) -> Response:
        # Main bot message handler.
        if "application/json" in req.headers["Content-Type"]:
            body = await req.json()
        else:
            return Response(status=HTTPStatus.UNSUPPORTED_MEDIA_TYPE)
    
        activity = Activity().deserialize(body)
        auth_header = req.headers["Authorization"] if "Authorization" in req.headers else ""
    
        response = await ADAPTER.process_activity(activity, auth_header, BOT.on_turn)
        if response:
            return json_response(data=response.body, status=response.status)
        return Response(status=HTTPStatus.OK)
    
    APP = web.Application(middlewares=[aiohttp_error_middleware])
    APP.router.add_post("/api/messages", messages)
    
    if __name__ == "__main__":
        try:
            web.run_app(APP, host="localhost", port=CONFIG.PORT)
        except Exception as error:
            raise error
    ​


  5. Next, we’ll run and test the bot with the Bot Framework Emulator. From the terminal, navigate to the code folder and run pip install -r requirements.txt to install the required packages. Once the packages are installed, run python app.py to start the bot. The bot is ready to test as shown below:
    BF Emulator test.jpg

    Open the bot and add the port number shown below into the following URL.
    Bot Framework Emulator view


 



  6. Now we’re ready to add natural language understanding so the bot can understand user intent. Here, we’ll use Azure’s Language Understanding Cognitive Service (LUIS) to map user input to an “intent” and extract “entities” from the sentence. In the illustration below, the sentence “add milk and eggs to the list” is sent as a text string to the LUIS endpoint, and LUIS returns the JSON seen on the right.
    Language Understanding utterances diagram


 



  7. Use the below template to create a LUIS JSON model file where we specify intents and entities manually. After the “IntelligentGrocery” app is created in the LUIS portal under “Import New App”, upload the JSON file with the below intents and entities.


 


 


 

    {
      "text": "access the groceries list",
      "intent": "Show",
      "entities": [
        {
          "entity": "ListType",
          "startPos": 11,
          "endPos": 19,
          "children": []
        }
      ]
    },
    {
      "text": "add bread to the grocery list",
      "intent": "Add",
      "entities": [
        {
          "entity": "ListType",
          "startPos": 23,
          "endPos": 29,
          "children": []
        }
      ]
    }

 


 


 


The above sample intents are for adding items and accessing the items in the grocery list. Now, it’s your turn to add additional intents to perform the below tasks, using the LUIS portal. Learn more about how to create the intents here.


Intents






















| Name | Description |
|---|---|
| CheckOff | Mark the grocery items as purchased. |
| Confirm | Confirm the previous action. |
| Delete | Delete items from the grocery list. |



 


Once the intents and entities are added, we will need to train and publish the model so the LUIS app can recognize utterances pertaining to these grocery list actions.

Language Understanding (LUIS) Portal


 



  8. After the model has been published in the LUIS portal, click ‘Access your endpoint URLs’ and copy the primary key, example query, and endpoint URL for the prediction resource.
    Language Understanding endpoint

    Language Understanding (LUIS) Prediction view


Navigate to the Settings page in the LUIS portal to retrieve the App ID.
Application settings


 



  9. Finally, test your Language Understanding model. The endpoint URL will be in the format below, with your own custom subdomain, and your app ID and endpoint key replacing APP-ID and KEY-ID. Go to the end of the URL and enter a query; for example, “get me all the items from the grocery list”. The JSON result will identify the top scoring intent and prediction with a confidence score. This is a good test to see whether LUIS predicts the intent correctly. A scripted version of the same test is shown after the URL below.









https://YOUR-CUSTOM-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/APP-ID/slots/production/predict?subscription-key=KEY-ID&verbose=true&show-all-intents=true&log=true&query=YOUR_QUERY_HERE
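
If you would rather exercise the endpoint from code than from the browser, a small Python sketch using the v3.0 prediction URL format shown above could look like the following; the subdomain, app ID, and key are placeholders:

import requests

# Placeholders - replace with your own prediction resource values.
ENDPOINT = "https://YOUR-CUSTOM-SUBDOMAIN.api.cognitive.microsoft.com"
APP_ID = "YOUR-APP-ID"
KEY = "YOUR-PREDICTION-KEY"

url = f"{ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"
params = {
    "subscription-key": KEY,
    "verbose": "true",
    "show-all-intents": "true",
    "log": "true",
    "query": "get me all the items from the grocery list",
}

result = requests.get(url, params=params).json()
prediction = result.get("prediction", {})
print("Top intent:", prediction.get("topIntent"))
print("All intents:", prediction.get("intents"))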



 


Additional Ideas


We’ve now seen how to build a voice bot leveraging Azure services to automate a common task. We hope it gives you a good starting point towards building bots for other scenarios as well. Try out some of the ideas below to continue building upon your bot and exploring additional Azure AI services.



  • Add Google Home assistant as an additional channel to receive voice commands.

  • Add a PictureBot extension to your bot and add pictures of your grocery items. You will need to create intents that trigger actions that the bot can take, and create entities that require these actions. For example, an intent for the PictureBot may be “SearchPics”. This could trigger Azure Cognitive Search to look for photos, using a “facet” entity to know what to search for. See what other functionality you can come up with!

  • Use Azure QnA maker to enable your bot to answer FAQs from a knowledge base. Add a bit of personality using the chit-chat feature.

  • Integrate Azure Personalizer with your voice chatbot to enable the bot to recommend a list of products to the user, providing a personalized experience.

  • Include Azure Speech service to give your bot a custom, high quality voice, with 200+ Text to Speech options across 54 different locales/languages, as well as customizable Speech to Text capabilities to process voice inputs.

  • Try building this bot using Bot Framework Composer, a visual authoring canvas.

How to build an intelligent travel journal using Azure AI


This article is contributed. See the original author and article here.

AI capabilities can enhance many types of applications, enabling you to improve your customer experience and solve complex problems. With Azure Cognitive Services, you can easily access and customize industry-leading AI models, using the tools and languages of your choice.


In this blog, we’ll walk through an exercise which you can complete in under an hour, to get started using Azure AI Services. Many of us are dreaming of traveling again, and building this intelligent travel journal app can help you capture memories from your next trip, whenever that may be. We’ll provide high level guidance and sample code to get you started, and we encourage you to play around with the code and get creative with your solution!


 


Features of the application:



  • Capture voice memos, voice tag photos, and transcribe speech to text.

  • Automatically tag your photos based on key phrase extraction and analysis of text in pictures.

  • Translate tags and text into desired language.

  • Organize your memos by key phrase and find similar travel experiences you enjoyed with AI-powered search.


travel blog app image.jpg


 


Prerequisites:



 


Key Azure technologies:



NOTE: For more information, refer to the “References.txt” file under the respective folders within the JournalHelper library project in the sample solution provided with this blog.


 


Solution Architecture


 


travel blog architecture image.png


 


App Architecture Description:



  • User records a voice memo; for example, to accompany an image they’ve captured. The recorded file is stored in a file repository (alternatively, you could use a database).

  • The recorded voice memo (e.g. .m4a) is converted into desired format (e.g. .wav), using Azure’s Speech Service batch transcription capability.

  • The folder containing voice memos is uploaded to a Blob container.

  • Images are uploaded into a separate container for analysis of any text within the photos, using Azure Computer Vision.

  • Use Translator to translate text to different languages, as needed. This may be useful to translate foreign street signs, menus, or other text in images.

  • Extract tags from the generated text files using Text Analytics, and send tags back to the corresponding image file. Tags can be travel related (#milan, #sunset, #Glacier National Park), or based on geotagging metadata, photo metadata (camera make, exposure, ISO), and more.

  • Create a search indexer with Azure Cognitive Search, and use the generated index to search your intelligent travel journal.


Implementation


Sample code


The entire solution code is available for download at this link. Download/clone and follow instructions in ReadMe.md solution item for further setup.


 


Implementation summary


The sample is implemented using various client libraries and samples available for Azure Cognitive Services. All these services are grouped together into a helper library project named “journalhelper”. In the library we introduce a helper class to help with scenarios that combine various Cognitive Services to achieve desired functionality.


We use “.Net Core console app” as the front end to test the scenarios. This sample also uses another open source library (FotoFly), which is ported to .Net Core here, to access and edit image metadata.


 


High level overview of steps, along with sample code snippets for illustration:



  1. Start by batch transcribing voice memos and extracting key tags from the text output. Group the input voice memos into a folder, upload them into an Azure Blob container or specify a list of their URLs, and use batch transcription to get the results back into the Azure Blob container, as well as a folder in your file system. The following code snippet illustrates how helper functions can be grouped together for a specific functionality. It combines the local file system, Azure storage containers, and the Cognitive Services speech batch transcription API.


 


 

Console.WriteLine("Uploading voice memos folder to blob container...");
Helper.UploadFolderToContainer(
HelperFunctions.GetSampleDataFullPath(customSettings.SampleDataFolders.VoiceMemosFolder),
customSettings.AzureBlobContainers.InputVoiceMemoFiles, deleteExistingContainer);
Console.WriteLine("Branch Transcribing voice memos using containers...");
//NOTE: Turn the pricing tier for Speech Service to standard for this below to work.

await Helper.BatchTranscribeVoiceMemosAsync(
customSettings.AzureBlobContainers.InputVoiceMemoFiles,
customSettings.AzureBlobContainers.BatchTranscribedJsonResults,
          customSettings.SpeechConfigSettings.Key,
          customSettings.SpeechConfigSettings.Region);

Console.WriteLine("Extract transcribed text files into another container and folder, delete the intermediate container with json files...");

await Helper.ExtractTranscribedTextfromJsonAsync(
customSettings.AzureBlobContainers.BatchTranscribedJsonResults,
customSettings.AzureBlobContainers.InputVoiceMemoFiles,
customSettings.AzureBlobContainers.ExtractedTranscribedTexts,
HelperFunctions.GetSampleDataFullPath(customSettings.SampleDataFolders.BatchTranscribedFolder), true);

 


 



  2. Next, create tags from the transcribed text. A sample helper function using the Text Analytics client library is listed below.


 


 

//text analytics
public static void CreateTagsForFolderItems(string key, string endpoint, string batchTranscribedFolder, string extractedTagsFolder)
{
    if (!Directory.Exists(batchTranscribedFolder))
    {
       Console.WriteLine("Input folder for transcribed files does not exist");
       return;
    }

    // ensure destination folder path exists
    Directory.CreateDirectory(extractedTagsFolder);
    TextAnalyticsClient textClient = TextAnalytics.GetClient(key, endpoint);

    var contentFiles = Directory.EnumerateFiles(batchTranscribedFolder);
    foreach (var contentFile in contentFiles)
    {
var tags = TextAnalytics.GetTags(textClient, 
contentFile).ConfigureAwait(false).GetAwaiter().GetResult();

// generate output file with tags 
string outFileName = Path.GetFileNameWithoutExtension(contentFile);
                outFileName += @"_tags.txt";
string outFilePath = Path.Combine(extractedTagsFolder, outFileName);
File.WriteAllLinesAsync(outFilePath, tags).Wait() ;
    }
}

 


 


The actual client library or service calls are made as shown:


 


 

static public async Task<IEnumerable<string>> GetTags(TextAnalyticsClient client, string inputTextFilePath)
{
   string inputContent = await File.ReadAllTextAsync(inputTextFilePath);
   var entities = EntityRecognition(client, inputContent);
   var phrases = KeyPhraseExtraction(client, inputContent);
   var tags = new List<string>();
   tags.AddRange(entities);
   tags.AddRange(phrases);
   return tags;
}

 


 



  3. Write the tags to the photo/image file, using the open source FotoFly library. Alternatively, you can update the blob metadata with these tags and include that in the search index, but the functionality will then be limited to Azure Blob storage.


 


 

string taggedPhotoFile = photoFile.Replace(inputPhotosFolder, OutPhotosFolder);
File.Copy(photoFile, taggedPhotoFile, true);

if (tags.Count > 0)
{
    ImageProperties.SetPhotoTags(taggedPhotoFile, tags);
}

 


 



  4. Other useful functions to complete the scenario are:

    1. Helper.ProcessImageAsync, and

    2. Helper.TranslateFileContent




The first one can be used to extract text from images using OCR or regular text processing with Computer Vision. The second can detect the source language, translate it into the desired output language using Azure’s Translator service, and then create more tags for an image file (a minimal sketch of the underlying translation call follows below).
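
Helper.TranslateFileContent wraps Azure’s Translator service. For reference, the equivalent REST call looks roughly like the sketch below, shown in Python for brevity; the key, region, target language, and sample text are placeholders, not values taken from the sample solution.

import requests

# Placeholders - use your own Translator resource key and region.
KEY = "YOUR-TRANSLATOR-KEY"
REGION = "YOUR-RESOURCE-REGION"
ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def translate(text, to_lang="en"):
    """Detect the source language and translate the text to the target language."""
    response = requests.post(
        ENDPOINT,
        params={"api-version": "3.0", "to": to_lang},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Ocp-Apim-Subscription-Region": REGION,
            "Content-Type": "application/json",
        },
        json=[{"Text": text}],
    )
    return response.json()[0]["translations"][0]["text"]

print(translate("Benvenuti a Milano"))  # illustrative sample text, e.g. from a photographed sign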



  5. Finally, use Azure Cognitive Search to create an index from the extracted text files saved in the Blob container, enabling you to search for documents and create journal text files. For example, you can search for images by cities or countries visited, date, or even cuisines. You can also search for images by camera-related metadata or geolocation.


In this sample we have demonstrated simple built-in skillsets for entity and language detection. The solution can be further enhanced by adding additional data sources to process tagged images and their metadata, and adding additional information to the searches.


NOTE:  The helper functions can be made more generic to take additional skillset input.


 


 

public static async Task CreateSearchIndexerAsync(
    string serviceAdminKey, string searchSvcUrl,
    string cognitiveServiceKey,
    string indexName, string jsonFieldsFilePath,
    string blobConnectionString, string blobContainerName
    )
{
    // It's a temporary arrangement. This function is not complete.
    IEnumerable<SearchField> fields = SearchHelper.LoadFieldsFromJSonFile(jsonFieldsFilePath);

    // create index
    var searchIndex = await 
Search.Search.CreateSearchIndexAsync(serviceAdminKey, 
searchSvcUrl, indexName, fields.ToList());

    // get indexer client
    var indexerClient = 
Search.Search.GetSearchIndexerClient(serviceAdminKey, searchSvcUrl);

    // create azure blob data source
    var dataSource = await 
Search.Search.CreateOrUpdateAzureBlobDataSourceAsync(indexerClient, 
blobConnectionString, indexName, blobContainerName);

    // create indexer

    // create skill set with minimal skills
    List<SearchIndexerSkill> skills = new List<SearchIndexerSkill>();
            skills.Add(Skills.CreateEntityRecognitionSkill());
            skills.Add(Skills.CreateLanguageDetectionSkill());
     var skillSet = await 
Search.Search.CreateOrUpdateSkillSetAsync(indexerClient,
             indexName + "-skillset", skills, cognitiveServiceKey);

     var indexer = await Search.Search.CreateIndexerAsync(indexerClient, 
dataSource, skillSet, searchIndex);

     // wait for some time to have indexer run and load documents
     Thread.Sleep(TimeSpan.FromSeconds(20));

     await Search.Search.CheckIndexerOverallStatusAsync(indexerClient, 
             indexer);
}

 


 


Finally, search documents and generate the corresponding journal files, utilizing the following functions:



  1. Helper.SearchDocuments

  2. Helper.CreateTravelJournal


Additional Ideas


In addition to the functionality described so far, there are many other ways you can leverage Azure AI to further enhance your intelligent travel journal and learn more advanced scenarios. We encourage you to explore some of the following ideas to enrich your app:



  • Add real time voice transcription and store transcriptions in an Azure managed database, to correlate voice transcription with images in context.

  • Include travel tickets and receipts as images for OCR-based image analysis (Form Recognizer) and include them as journal artifacts.

  • Use multiple data sources for a given search index. We have simplified and only included text files to index in this sample, but you can include the tagged photos from a different data source for the same search index.

  • Add custom skills and data extraction for search indexer. Extract metadata from images and include as search content.

  • Extract metadata from video and audio content using Video Indexer.

  • Experiment with Language Understanding and generate more elaborate and relevant search content based on top scoring intents and entities. Sample keywords and questions related to current sample data are included in Objectives.docx solution item.

  • Build a consumer front-end app that stitches all of this together and displays the journal in a UI.