This article is contributed. See the original author and article here.

Hi!

I have written several posts about how to use local Large Language Models with C#.

But what about Small Language Models?



Well, today’s demo is an example of how to use SLMs with ONNX. So let’s start with a quick intro to Phi-3 and ONNX, and why to use ONNX. Then, let’s showcase some interesting code samples and resources.

In the Phi-3 C# Labs sample repo we can find several samples, including a Console Application that loads a Phi-3 Vision model with ONNX, and analyzes and describes an image.

[Animated demo: the Phi-3 Vision console sample describing an image]

Introduction to Phi-3 Small Language Model


The Phi-3 Small Language Model (SLM) represents a groundbreaking advancement in AI, developed by Microsoft. It’s part of the Phi-3 family, which includes the most capable and cost-effective SLMs available today. These models outperform others of similar or even larger sizes across various benchmarks, including language, reasoning, coding, and math tasks. The Phi-3 models, including the Phi-3-mini, Phi-3-small, and Phi-3-medium, are designed to be instruction-tuned and optimized for ONNX Runtime, ensuring broad compatibility and high performance.

You can learn more about Phi-3 in the official Microsoft announcement and the Phi-3 technical report.

Introduction to ONNX


ONNX, or Open Neural Network Exchange, is an open-source format that allows AI models to be portable and interoperable across different frameworks and hardware. It enables developers to use the same model with various tools, runtimes, and compilers, making it a cornerstone for AI development. ONNX supports a wide range of operators and offers extensibility, which is crucial for evolving AI needs.

Why Use ONNX for Local AI Development


Local AI development benefits significantly from ONNX due to its ability to streamline model deployment and enhance performance. ONNX provides a common format for machine learning models, facilitating the exchange between different frameworks and optimizing for various hardware environments. 

For C# developers, this is particularly useful because we have a set of libraries specifically created to work with ONNX models. For example:



Microsoft.ML.OnnxRuntime, https://github.com/microsoft/onnxruntime
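As a quick illustration of how little ceremony this takes, here is a minimal sketch that loads an ONNX model with Microsoft.ML.OnnxRuntime and lists the inputs the model expects. The model path is a placeholder, not a file from the sample repo, and the snippet assumes the Microsoft.ML.OnnxRuntime NuGet package is installed.

```csharp
using System;
using Microsoft.ML.OnnxRuntime;

// Minimal sketch: requires the Microsoft.ML.OnnxRuntime NuGet package.
// The path below is a placeholder - point it at any local .onnx file.
var modelFile = @"C:\models\model.onnx";
using var session = new InferenceSession(modelFile);

// The input names and shapes come from the ONNX graph itself,
// regardless of the framework that originally produced the model.
foreach (var input in session.InputMetadata)
    Console.WriteLine($"{input.Key}: {string.Join("x", input.Value.Dimensions)}");
```

Because the graph carries its own metadata, the same file runs unchanged whether it was exported from PyTorch, TensorFlow, or any other framework.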

C#, ONNX, Phi-3 and Phi-3 Vision

The Phi-3 Cookbook GitHub repository contains C# labs and workshop sample projects that demonstrate the use of Phi-3 mini and Phi-3-Vision models in .NET applications.

It showcases how these powerful models can be utilized for tasks like question-answering and image analysis within a .NET environment.


Project | Description | Location
LabsPhi301 | Sample project that uses a local Phi-3 model to answer a question. The project loads a local ONNX Phi-3 model using the Microsoft.ML.OnnxRuntime libraries. | .\src\LabsPhi301
LabsPhi302 | Sample project that implements a console chat using Semantic Kernel. | .\src\LabsPhi302
LabsPhi303 | Sample project that uses a local Phi-3 Vision model to analyze images. The project loads a local ONNX Phi-3 Vision model using the Microsoft.ML.OnnxRuntime libraries. | .\src\LabsPhi303
LabsPhi304 | Sample project that uses a local Phi-3 Vision model to analyze images. The project loads a local ONNX Phi-3 Vision model using the Microsoft.ML.OnnxRuntime libraries, and also presents a menu with different options to interact with the user. | .\src\LabsPhi304

To run the projects, follow these steps:


  1. Clone the repository to your local machine.




  2. Open a terminal and navigate to the desired project. For example, let’s run LabsPhi301.



    cd .\src\LabsPhi301

  3. Run the project with the command



    dotnet run


  4. The sample project asks for user input and replies using the local model.


    The running demo is similar to this one:




[Animated demo: the Phi-3 console chat sample]

Sample Console Application using an ONNX model

Let’s take a look at the first demo application. The following code snippet is from /src/LabsPhi301/Program.cs. The main steps to use a model with ONNX are:



  • The Phi-3 model, stored in modelPath, is loaded into a Model object.

  • This model is then used to create a Tokenizer, which will be responsible for converting our text inputs into a format that the model can understand.


And this is the chatbot implementation.



  • The chatbot operates in a continuous loop, waiting for user input.

  • When a user types a question, the question is combined with a system prompt to form a full prompt.

  • The full prompt is then tokenized and passed to a Generator object.

  • The generator, configured with specific parameters, generates a response one token at a time.

  • Each token is decoded back into text and printed to the console, forming the chatbot’s response.

  • The loop continues until the user decides to exit by entering an empty string. 
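The prompt-assembly step above can be sketched in isolation. Phi-3 instruct models expect a chat template that wraps each turn in special tokens; the BuildPrompt helper below is our own name for illustration, not part of the sample repo.

```csharp
using System;

// Sketch of assembling the full prompt before tokenization.
// Phi-3 instruct models use the <|system|>, <|user|>, <|assistant|>
// and <|end|> markers; BuildPrompt is a hypothetical helper.
static string BuildPrompt(string systemPrompt, string userQuestion) =>
    $"<|system|>{systemPrompt}<|end|><|user|>{userQuestion}<|end|><|assistant|>";

var prompt = BuildPrompt("You are a helpful assistant.", "What is ONNX?");
Console.WriteLine(prompt);
```

Everything after the final `<|assistant|>` marker is what the generator produces, one token at a time, in the loop below.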

using Microsoft.ML.OnnxRuntimeGenAI;

var modelPath = @"D:\phi3\models\Phi-3-mini-4k-instruct-onnx\cpu_and_mobile\cpu-int4-rtn-block-32";
var model = new Model(modelPath);
var tokenizer = new Tokenizer(model);

var systemPrompt = "You are an AI assistant that helps people find information. Answer questions using a direct style. Do not share more information than requested by the users.";

// chat start
Console.WriteLine(@"Ask your question. Type an empty string to Exit.");

// chat loop
while (true)
{
    // Get user question
    Console.WriteLine();
    Console.Write(@"Q: ");
    var userQ = Console.ReadLine();    
    if (string.IsNullOrEmpty(userQ))
    {
        break;
    }

    // show phi3 response
    Console.Write("Phi3: ");
    // Phi-3 instruct models expect the chat template special tokens
    var fullPrompt = $"<|system|>{systemPrompt}<|end|><|user|>{userQ}<|end|><|assistant|>";
    var tokens = tokenizer.Encode(fullPrompt);

    var generatorParams = new GeneratorParams(model);
    generatorParams.SetSearchOption("max_length", 2048);
    generatorParams.SetSearchOption("past_present_share_buffer", false);
    generatorParams.SetInputSequences(tokens);

    var generator = new Generator(model, generatorParams);
    while (!generator.IsDone())
    {
        generator.ComputeLogits();
        generator.GenerateNextToken();
        var outputTokens = generator.GetSequence(0);
        var newToken = outputTokens.Slice(outputTokens.Length - 1, 1);
        var output = tokenizer.Decode(newToken);
        Console.Write(output);
    }
    Console.WriteLine();
}

This is a great example of how you can leverage the power of Phi-3 and ONNX in a C# application to create an interactive AI experience. Please take a look at the other scenarios and if you have any questions, we are happy to receive your feedback!


Best


Bruno Capuano

Note: Part of the content of this post was generated by Microsoft Copilot, an AI assistant.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.