Integrating Twitter, SharePoint, and Azure Sentiment Analysis with Flow

Last month, I wrote a post with steps for setting up Sentiment Analysis, an Azure Cognitive Service, and for using it to score how positive your emails are.  This time, I’m going to leverage the service configured in that post by using it in a Flow.  The Flow will pull content from Twitter, store it in SharePoint, and determine the sentiment score for each tweet.

To begin, I set up a SharePoint list with a number field named Sentiment Score.  For the purpose of this demo, I’ll use the Title field to store the tweet text, but in production, I’d create a separate field for it.

 empty sp list.PNG

Next, I click on Flow in the menu and select Create a flow.

 create a flow.PNG

A menu will appear to the right of the page with a few templates, but we’ll want to create our own, so we’ll click on “See your flows” at the bottom.

 See your flows.PNG

Next, you’ll be taken to the Flow page, where you’ll want to select “Create from blank”.

 Create from blank.PNG


The trigger for our flow will fire when a tweet matching a particular hashtag is posted, so you can either select the Twitter trigger titled “When a new tweet is posted” or, if it’s not there, click the Search button below it and find the Twitter trigger that way.

 start with trigger.PNG

Your flow designer will start you with the Twitter trigger.  When you first select it, you’ll need to provide credentials for Twitter.  After you do, your trigger will display a simple text box that lets you enter the text you’d like to search for.  In this case, I chose to search for #Microsoft.  This will grab any new tweets with that hashtag. 

flow - twitter

Next, I want to run that tweet against the sentiment analysis action.  I’m going to assume that you have the sentiment analysis service configured but if not, you can go back to my previous post where I walk through those steps.  To narrow down the actions, I searched for “sentiment” and it filtered it down to the results below. 

 Flow - sentiment.PNG

I then selected “Text Analytics” from the connector to show that there are multiple options, but I could’ve just selected the action titled “Text Analytics – Detect Sentiment”. 

 text analysis - actions.PNG


Next, it’s time to configure the sentiment action.  When the action first comes up, it’ll ask for a key and endpoint which you can get from the sentiment analysis service in Azure.  Once you provide that, you’ll get the action below which asks for the text that you want to analyze.  Using the Dynamic Content, you can tell the action to analyze the Tweet Text that is coming from the Tweet trigger and you can specify a language as well. 

configure sentiment.PNG

configured sentiment.PNG
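Behind the scenes, the Detect Sentiment action posts a small JSON “documents” payload to the Text Analytics endpoint and gets back a score between 0 (negative) and 1 (positive).  A minimal sketch of that exchange, with a made-up tweet and score for illustration:

```javascript
// Sketch of the payload the "Detect Sentiment" action sends for a tweet.
// The tweet text and the 0.92 score below are invented for illustration.
function buildSentimentRequest(tweetText, language) {
  return {
    documents: [
      { id: "1", language: language || "en", text: tweetText }
    ]
  };
}

// The response carries a score between 0 (negative) and 1 (positive):
// { "documents": [ { "id": "1", "score": 0.92 } ] }
function extractScore(response) {
  return response.documents[0].score;
}

const body = buildSentimentRequest("Loving the new #Microsoft release!", "en");
const score = extractScore({ documents: [{ id: "1", score: 0.92 }] });
```

The Flow designer builds this payload for you; the sketch only shows what the Dynamic Content fields map to.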

 Once the text is scored, I can create another action to Create a SharePoint Item. 

 create item action.PNG

That will give me the action below, which simply needs the site URL and the list where we want to store our results.  This is the list that I created in the beginning with the Title and Sentiment Score fields.  Using dynamic content, you can save the Tweet Text to the Title field and the score from the sentiment action to the Sentiment Score field.

 create sharepoint item.PNG

Once you’re done, the Flow should look something like this.  (Don’t forget to give your Flow a proper name by clicking on the text at the top left of the screen.  I named mine “Twitter Sentiment Analysis”)

flow complete.PNG

The result is a list that is populated with tweets and scores. 

 sharepoint populated list.PNG


This is just a simple proof of concept to show how easy this can be.  Depending on how many tweets you expect to have, you may not want to create SharePoint list items for this.  Instead, you may want to store the content in a spreadsheet or database.  With a little more effort, you can create better ways to present the data using column formatting or SPFx web parts.  If you release a new product or have some sort of event, you can keep an eye on your social media buzz to see how people are receiving it.


Creating an Outlook Add-In with Sentiment Analysis

A few months ago, I was reading an email that had a snippy tone to it, and several people didn’t take too kindly to it.  Despite the tone, I don’t believe that the person who composed the email intended it to come off that way.  It got me thinking that it would be cool to have an Outlook Add-In that uses the Sentiment Analysis API made available by Azure Cognitive Services to score an email.  I finally got around to putting this together this weekend, and this is a quick overview of the setup.

Setting up Sentiment Analysis in Azure

Start by logging into your Azure portal and searching for Cognitive Services

cognitive services search.PNG

On the cognitive services blade, click the Add button which will show the “AI + Machine Learning” blade where you can search for “Text Analytics”.

cognitive services - add.PNG

text analytics search.PNG

After you select Text Analytics, you can provide your service name, subscription, location, etc., as shown below.

create form.PNG

Your new service will have some basic information for getting started: the keys that your service will use, links to additional information and tutorials, and so on.

text analytics - configure.PNG

There isn’t anything else to configure.  The keys needed are in the link under Step 1.  You can regenerate the keys if you need to but this is where you’ll get the keys that you need when you call the api.

step 1 - manage keys.PNG

Building the Outlook Add-In

For this demo, I’m using a circular progress bar sample by Anders Ingemann.  I start by creating an Outlook Add-In project in Visual Studio 2017.  I’ll show how I created an Add-In that is available in the composer and scores your email message.

vs - outlook add-in.PNG

Similar to SharePoint Add-Ins, the Outlook Add-In template creates 2 projects: one contains the manifest and the other is a web application.

vs- project files.PNG

If you run the project without making any changes, you’ll notice that the Add-In is available when reading an email, but I want it to be visible on compose.  You configure this via the manifest file found in the first project.  Look for the following lines (I apologize for the code images.  Posting code was causing issues):



Replace the first line with:
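As a hedged sketch (the exact lines from the original code images may differ, and your generated manifest may use slightly different elements), the typical change is swapping the read command surface for the compose command surface in the manifest:

```xml
<!-- Default: the add-in is activated when reading a message -->
<ExtensionPoint xsi:type="MessageReadCommandSurface">
  <!-- ...command surface contents unchanged... -->
</ExtensionPoint>

<!-- Changed: the add-in is activated while composing a message -->
<ExtensionPoint xsi:type="MessageComposeCommandSurface">
  <!-- ...command surface contents unchanged... -->
</ExtensionPoint>
```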


Now, since this is just a demo that I was playing with, I didn’t bother to change the file names, and I left the MessageRead.css, MessageRead.html, and MessageRead.js files as is.  If this were an actual project, I would probably create new files for Compose.  Open MessageRead.html and find the div with id “content-main.”  This is where the main body of your Add-In is rendered.  Using the sample from the link above with some minor tweaks, I came up with the following:

compose html.PNG

The main change is the div with the class “numbers”, where I removed the list of spans (1 for each available percentage) and replaced it with a span where I will pass in the score provided by the sentiment analysis results. I created a new styles.less file and added the following contents, straight from the link above with one minor change.  I removed the inset class which is no longer needed since I’m passing in the score.

/*@import url(,300,400,700,900,100italic,300italic,400italic,700italic,900italic);*/
.radial-progress {
    @circle-size: 120px;
    @circle-background: #d6dadc;
    @circle-color: #97a71d;
    @inset-size: 90px;
    @inset-color: #fbfbfb;
    @transition-length: 1s;
    @shadow: 6px 6px 10px rgba(0,0,0,0.2);
    @percentage-color: #97a71d;
    @percentage-font-size: 22px;
    @percentage-text-width: 57px;

    margin: 50px;
    width: @circle-size;
    height: @circle-size;
    background-color: @circle-background;
    border-radius: 50%;

    .mask, .fill, .shadow {
        width: @circle-size;
        height: @circle-size;
        position: absolute;
        border-radius: 50%;
    }
    .shadow {
        box-shadow: @shadow inset;
    }
    .mask, .fill {
        -webkit-backface-visibility: hidden;
        transition: -webkit-transform @transition-length;
        transition: -ms-transform @transition-length;
        transition: transform @transition-length;
        border-radius: 50%;
    }
    .mask {
        clip: rect(0px, @circle-size, @circle-size, @circle-size/2);
        .fill {
            clip: rect(0px, @circle-size/2, @circle-size, 0px);
            background-color: @circle-color;
        }
    }
    .inset {
        width: @inset-size;
        height: @inset-size;
        position: absolute;
        margin-left: (@circle-size - @inset-size)/2;
        margin-top: (@circle-size - @inset-size)/2;
        background-color: @inset-color;
        border-radius: 50%;
        box-shadow: @shadow;

        .percentage {
            height: @percentage-font-size;
            width: @percentage-text-width;
            overflow: hidden;
            position: absolute;
            top: (@inset-size - @percentage-font-size) / 2;
            left: (@inset-size - @percentage-text-width) / 2;
            line-height: 1;

            .numbers {
                margin-top: -@percentage-font-size;
                transition: width @transition-length;

                span {
                    width: @percentage-text-width;
                    display: inline-block;
                    vertical-align: top;
                    text-align: center;
                    font-weight: 800;
                    font-size: @percentage-font-size;
                    font-family: "Lato", "Helvetica Neue", Helvetica, Arial, sans-serif;
                    color: @percentage-color;
                }
            }
        }
    }

    @i: 0;
    @increment: 180deg / 100;
    .loop (@i) when (@i <= 100) {
        &[data-progress="@{i}"] {
            .mask.full, .fill {
                -webkit-transform: rotate(@increment * @i);
                -ms-transform: rotate(@increment * @i);
                transform: rotate(@increment * @i);
            }
            .fill.fix {
                -webkit-transform: rotate(@increment * @i * 2);
                -ms-transform: rotate(@increment * @i * 2);
                transform: rotate(@increment * @i * 2);
            }
            .inset .percentage .numbers {
                /*width: @i * @percentage-text-width + @percentage-text-width;*/
            }
        }
        .loop(@i + 1);
    }
    .loop(@i);
}

The JavaScript logic consists of just a few simple steps.  First, we get the message and pass it to the calculateSentiment function.  Then we construct the “document” that the sentiment analysis will score.  For more info on this document schema, visit the how-to page.  We then do an HTTP POST to your endpoint.  The Ocp-Apim-Subscription-Key is the key that we get from Azure that we saw at the beginning of this post.  Finally, we get the result if the POST succeeds, and we pass it to the function that sets the chart value.

(function () {
    'use strict';

    var item;

    // The Office initialize function must be run each time a new page is loaded.
    Office.initialize = function (reason) {
        item = Office.context.mailbox.item;
        $(document).ready(function () {
            // get the body of the message and calculate the score
            item.body.getAsync('text', function (async) {
                calculateSentiment(async.value);
            });
        });
    };

    function calculateSentiment(emailMessage) {
        // build the document that the sentiment analysis will score
        var documentsVal = [{
            "language": "en",
            "id": 1,
            "text": emailMessage
        }];
        var documentsKey = {
            "documents": documentsVal
        };
        $.ajax({
            type: 'POST',
            url: "", // your Text Analytics sentiment endpoint
            headers: {
                "Ocp-Apim-Subscription-Key": "", // the key from Azure
                "Content-Type": "application/json",
                "Accept": "application/json"
            },
            data: JSON.stringify(documentsKey, null, 2),
            dataType: "json",
            success: function (result) {
                var documents = result.documents;
                for (var i = 0; i < documents.length; i++) {
                    var msg = documents[i];
                    var score = msg.score;
                    setChart(score);
                }
            },
            error: function (xhr, ajaxOptions, thrownError) {
                console.log("Error status: " + xhr.status);
                console.log("Thrown error: " + thrownError);
            }
        });
    }

    function setChart(pct) {
        window.randomize = function () {
            $('.radial-progress').attr('data-progress', Math.floor(pct * 100));
            let displayVal = Math.round(pct * 100) + '%';
            // write the score into the span inside the progress circle
            $('.radial-progress .numbers span').text(displayVal);
        };
        setTimeout(window.randomize, 200);
    }
})();

The end result is as follows:


Integrating FAQ Bots with SharePoint

Every day, we hear more and more about bots, and I’ve had the opportunity to build several over the last few months.  They do offer some pretty cool possibilities around process automation and productivity.  Lately, I’ve been talking to people about creating an FAQ bot to free people up from answering common questions.

Creating an FAQ bot with QnA Maker

To get started, you’ll want to go to the QnA Maker site, where you will log in with your Azure credentials and create and set up your new bot.

Once you’re logged in, click the “Create a knowledge base” button at the top of the page.


The next page is a simple 5 step process that will guide you through creating your knowledge base.




Step 1: Click the “Create a QnA service” button which will take you to Azure to create your App Service.

step 1

Once in Azure, you’ll enter some basic information about your bot, like the name (which needs to be unique), the pricing tier that you want to use, the resource group, the site location, and so on.  Note: Something odd happens after this step.  When the resource group, app service plan, and web app are created, the Web App Bot is missing.  In the next section, we’ll fix that.

azure - create

Step 2: Select the Azure directory ID that you want to use to create your bot, your subscription, and the service name that you created in the previous step.  If it doesn’t appear in the Azure QnA service dropdown, then you likely need to click the “refresh this page” link above the dropdown lists.

step 2

Step 3: Provide a name for your knowledge base.

step 3

Step 4: This is where you point to the source of your knowledge base.  If you have an FAQ page somewhere, you can provide a link to the page.  In my case, I chose to upload a Word document that I created and stored on my local machine.

step 4


Step 5: Click the “Create your KB” button and wait for the knowledge base to be created from your source entered in step 4.

step 5


step 5 - saving

Once it’s done, you’ll see that your knowledge base has been created.  If you see any issues or need to make changes, you can do that here.

knowledge base

Once everything looks fine, click the Publish link in the menu bar and the Publish button on the next page.

qna publish


When everything is published, you’ll get the Success page, which contains some information that you’ll want to keep handy.  Specifically, the guid in the 1st line, which is your knowledge base Id, the host url on the 2nd line, and the guid on the 3rd line, which is used for authorization.

published qna bot
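With those three values, you can also call the published knowledge base directly over REST.  A sketch of the request shape (the host, the guid from the 1st line, and the authorization key from the 3rd line are all placeholders here):

```javascript
// Sketch of a generateAnswer call to a published QnA Maker knowledge base.
// The host, knowledge base id, and endpoint key are placeholder values.
function buildQnaRequest(host, kbId, endpointKey, question) {
  return {
    url: host + "/knowledgebases/" + kbId + "/generateAnswer",
    method: "POST",
    headers: {
      "Authorization": "EndpointKey " + endpointKey,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ question: question })
  };
}

const req = buildQnaRequest(
  "https://contoso-qna.azurewebsites.net/qnamaker",  // host (2nd line)
  "00000000-0000-0000-0000-000000000000",            // guid from the 1st line
  "00000000-0000-0000-0000-000000000000",            // authorization guid (3rd line)
  "What are your office hours?");
```

Sending that request returns a JSON payload with the matched answers and their confidence scores.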


Creating the Web App Bot in Azure

In the previous section, I mentioned that the Web App Bot is missing.  I don’t know if that’s intentional, but either way, we need to introduce it.

Resource Group - missing bot app

At the top of your resources group, click the Add button and search for Web App Bot.

Add - Web App Bot

Fill out the form to create your bot.  You can decide what plan you want to use, but choose the existing resource group that was created by the QnA Maker and under Bot template, choose Question and Answer.

Create - Bot Service

It’ll take a few minutes to create the bot.  Once it’s done, you’ll see a few more entries in your resource group, and then it’s time to configure some settings.

Resources revisited

Click on the new Web App Bot that you just created, go to the App Settings, and update the 3 settings that are highlighted: QnAKnowledgebaseId, QnAAuthKey, and QnAEndpointHostName.

add app keys

The 3 values can be found in the QnA Maker.  We saw them on the success page, but you can view them anytime from the QnA Maker site.

published qna bot

Once entered, you can then go to the Test in Web Chat page to make sure that the bot is working as expected.  If you visit this page before editing the 3 values previously mentioned, the bot will respond by telling you to enter those values.

bot - test results

Once it’s working, you can go to Channels.  This is where you can configure your bot to work in other apps like Skype and Teams, but we are simply adding it to a SharePoint site, so we can use the Web Chat that is already configured.  Click on the Edit button next to Web Chat.

bot channels

You’ll want to copy the text in the Embed Code section and replace the ‘Your_Secret_here’ text with one of the secret keys above it.  Clicking the Show button will let you see the key that you’ll insert into the iframe markup.

embed code
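For reference, the embed code you copy has roughly this shape (the bot handle and secret below are placeholders); the height and width attributes are the ones the SharePoint Embed web part will ask for later:

```html
<iframe src="https://webchat.botframework.com/embed/your-bot-handle?s=YOUR_SECRET_HERE"
        height="600" width="400"></iframe>
```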


Adding the QnA bot to SharePoint

This is the easy part.  Go to your SharePoint site, edit the page, and insert the Embed webpart.

embed QnA


Edit the web part and insert the iframe markup that you copied from the Channels page in the final step of the previous section.  The web part will require you to add a height and width to the markup.

embed editor part

Close the editor part, publish your page, and voilà!  You now have a simple bot that will use natural language and an FAQ list to answer those questions that you get ALL the time.

end result

Experimenting with SharePoint Hub Sites

A colleague (@spbrusso) and I have been playing with hub sites for the last few days.  We noticed some quirky things and started digging around and this is what we found.


When you create a hub site, a directory is created under _Catalogs and it’s appropriately called “hubsite”.

hubsite catalog

In that folder is a json file containing your new nav bar details which include the logo and urls.  This nav bar doesn’t appear to automatically pick up joined sites.  You must manually manage your navigation from the hub site.

hub site nav

Here is a sample of that json file.


{
  "themeKey": null,
  "name": "Hub Site",
  "url": "",
  "logoUrl": null,
  "usesMetadataNavigation": false,
  "navigation": [
    {
      "Id": 2003,
      "Title": "Team Two",
      "Url": "",
      "IsDocLib": false,
      "IsExternal": false,
      "ParentId": 1002,
      "ListTemplateType": 0,
      "Children": [ ]
    }
  ]
}




Associated Sites

The top navigation doesn’t propagate immediately and isn’t controlled by the normal Navigation settings that we’ve used in the past.  My guess is that there is a timer job running in the background that updates the top navigation for associated sites.  When it runs, the json file at the hub is copied to the same location at the associated site.  If you change the navigation at the hub, know that you may need to wait an hour or so for the associated sites to show the same top nav.  (I didn’t time it, but I believe I waited 45-60 minutes before I saw a change).  You can’t edit the top nav from the associated site; you have to do it from the hub.

If you crack open the hub site with the SharePoint Client Browser tool, you’ll see the navigation node with the links that you’ve created.  This is not the case on the associated sites, so my thinking is that a user creates the nav, which is stored in that Top Nav Bar; the items get copied to the json file; the file gets copied to the associated sites; and the associated sites read from their copies.

hub site top nav


When you first create a modern team site, not associated with a hub site yet, you will see a random theme applied.  If you leave the default theme on the hub site and then associate sites to it, they appear to keep their own theme.

If you set a theme at the hub and then associate a site to it, that site will instantly pick up the theme.

If you change the hub site’s theme, the associated sites won’t pick up the change until you disassociate them from the hub and re-associate.  I tried changing the top navigation and waiting to see if the theme would change when the nav changes were propagated but they didn’t.

Update: If you change the hub site’s theme, the associated sites may take 2 or more hours to pick up the theme change.  





Unable to Delete Some Sites in the Admin Center’s New Site Management Page

I noticed a little quirk in the new Site Management page.  It’s new so I don’t expect it to be perfect.  I was playing with Hub Sites for a separate blog post and I wanted to start from scratch so I deleted my Hub Site via the UI.  I then tried to delete the site that was joined to the hub but I wasn’t given the option to.  The issue wasn’t limited to sites that were associated with hub sites either.

Here’s a communication site that I am able to delete.  It also happens to be a hub site but the issue isn’t specific to hub sites or communication sites.

Site Management with delete

And here’s the site that I wanted to delete.  You’ll notice that the Delete icon is missing.  A few other sites were like that too.

Site Management without delete

I had to resort to PowerShell to remove the site.

Remove-SPOSite -Identity https://tenant.sharepoint.com/sites/teamone

How to get a User Profile Image from the Microsoft Graph

I have been seeing a few people ask how to get a profile image from the Microsoft Graph… they’re usually just one step away from figuring it out.

Most will use the following to get the photo:

The above returns the following information about the image but not the image itself:

{
  "@odata.context": "$metadata#users('48d31887-5fad-4d73-a9f5-3c356e68a038')/photo/$entity",
  "@odata.mediaContentType": "image/jpeg",
  "@odata.mediaEtag": "\"2E24ABF6\"",
  "id": "240X240",
  "height": 240,
  "width": 240
}

If you want the image itself, you need to add “$value” to the end of the URL.

The above will return your image.
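In other words, there are two distinct URLs involved; a quick sketch (Graph v1.0 endpoints, authentication omitted):

```javascript
// The photo metadata endpoint vs. the actual image bytes.
// A bearer token would be required on real requests; it's omitted here.
const graphBase = "https://graph.microsoft.com/v1.0";

// Returns the JSON metadata shown above (content type, dimensions)
const photoMetadataUrl = graphBase + "/me/photo";

// Appending $value returns the image itself as a binary stream
const photoBytesUrl = photoMetadataUrl + "/$value";
```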


Note: You need a work or school account for this to work.



Office 365, the Microsoft Bot Framework, & Cognitive Services: 50,000 Foot View

Over the summer, I started working with the Microsoft Bot Framework for three reasons:

  1. Bots are a cool technology that has become mainstream
  2. It provides awesome new possibilities in terms of productivity and customer engagement
  3. The company that I work for is making a clear push toward AI and Machine Learning

I’ve been sitting on this blog post for a while because I didn’t want to regurgitate the information that I was sourcing from, then I landed a project that kept me busy, then I agreed to do a presentation on the topic, and finally, I ended up having to build a proof of concept with it.  NOW… I have some time to talk about it.

Building a bot is not overly complicated, but a bot is a little underwhelming if that’s all you do with it.  Integrating artificial intelligence, to me, feels like a must because it provides that next level of interaction that makes it worthwhile.  AI is what will make your bot feel like you’re not interacting with a bot… at least until it tells you that it doesn’t understand what you’re asking when you say “hello”, but we just need to account for those things.

This post will try not to be a repeat of info that is readily available.  Instead, I’ll walk you through a bot that I built which was a help desk triage bot.  It’s a pure proof of concept that I wouldn’t necessarily put in a production environment, but it shows you what’s possible.

The bot, which I named ANA, is made up of 3 major components.

  • First (and obviously) is the Microsoft Bot Framework, which handles the messaging.
  • Second, I leveraged Microsoft’s Cognitive Services (specifically the Language Understanding Intelligent Service or LUIS) to provide a simple level of interaction.
  • Third, I used the Microsoft Graph to further integrate it into the Office 365 ecosystem.

Setting up ANA

I’ll go over the steps I took to set my bot up but there are a few other resources available for getting started so I won’t do my normal step by step.

First, you can visit the Bot Framework Documentation site, which has plenty of good information on getting started.  Really check this out; the Bot Framework isn’t just a simple message-handling framework.  It has some cool features, like the ability to take “3 days from now” and know how to parse that text into a date value equivalent to 3 days after the current date.

Another source that I saw recently was by @zimmergren (Tobias Zimmergren), who recently did a fairly comprehensive post on building a bot via the Azure Bot Service.  I used the steps for creating the bot using the .NET SDK.  I haven’t tried the Azure Bot Service steps yet, so I can’t recommend one option over the other.


In order to create the bot, you will need Visual Studio 2017.  I tried using VS2015 (because that’s what I had at the time) but the Bot Application template wasn’t available for it.  The nuget package can be found here and here is the Bot Template for Visual Studio.

If you need anything else, you’ll likely find the relevant resources in the Start Building Bots page.

You’ll also want to install the BotFramework-Emulator so that you can connect to your bot without having to deploy it.  For information about the emulator, visit the Debug bots with the Bot Framework Emulator page.  You can find the emulator setup file on GitHub.

The Project Setup

Create a new Bot Application project in Visual Studio 2017


Solution Structure

When you create your new Bot Application, you’ll get 2 classes: the MessagesController and a RootDialog.  Dialogs are meant to be a way to compartmentalize your various types of communications.  Your message controller takes the message received from the user and determines what to do with it.  When you create a new project, it’s set up to send messages to the RootDialog, which does the simple task of counting the number of characters in your message.

public async Task<HttpResponseMessage> Post([FromBody]Activity activity)
{
    if (activity.Type == ActivityTypes.Message)
    {
        //await Conversation.SendAsync(activity, () => new Dialogs.RootDialog());
        await Conversation.SendAsync(activity, () => MakeDialog());
    }
    var response = Request.CreateResponse(HttpStatusCode.OK);
    return response;
}

Your web.config file also has 3 app settings that you’ll need when it’s time to deploy.

<!-- update these with your BotId, Microsoft App Id and your Microsoft App Password-->
<add key="BotId" value="YourBotId" />
<add key="MicrosoftAppId" value="" />
<add key="MicrosoftAppPassword" value="" />

Integrating the Language Understanding Intelligent Service (LUIS)

At this point, you have what amounts to a Hello, World bot, where you can see how things plug into each other.  I suggest you go over the Bot Framework documentation to get the details around creating dialogs and forms.  As you build your bot, you’ll find other things that you may want to incorporate.  For example, you may want to provide a Describe or Prompt attribute to change the way the bot asks users for input.

If you want your bot to have some intelligence, you’ll want to start incorporating the Cognitive Services.  The Cognitive Services are so cool in my opinion.  They’re fairly simple to use and offer so much potential.

A very high-level description of how they work: you feed the service some input (which differs depending on which service you use) and it returns a score.  That score is used to make ‘predictions’.  I know… that was maybe a bit too high level.  Let’s take a look at how I used it.
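To make that concrete, here’s a simplified sketch of the shape of a LUIS-style result (the intent names and scores below are invented for illustration): each intent comes back with a confidence between 0 and 1, and the bot routes the message to the top-scoring one.

```javascript
// Simplified LUIS-style result: each candidate intent carries a score
// between 0 and 1, and the dialog routes to the highest-scoring intent.
function topIntent(luisResult) {
  return luisResult.intents.reduce(function (best, intent) {
    return intent.score > best.score ? intent : best;
  });
}

// Invented example result for the utterance below
const result = {
  query: "i need to submit a help desk ticket",
  intents: [
    { intent: "None", score: 0.02 },
    { intent: "Greeting", score: 0.05 },
    { intent: "HelpDesk", score: 0.97 }
  ]
};
const chosen = topIntent(result); // → the HelpDesk intent
```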

ANA has a few dialogs.  Some I incorporated from demos; others I created myself once I got the hang of it.  Two of those dialogs are the Greeting dialog and the HelpDesk dialog.

The Greeting dialog will receive a message like ‘hi’ or ‘hello’ and respond with a greeting of its own.  The Help Desk dialog was one that I developed once I got the hang of things.  Its job is to ask you questions about the kind of help desk support that you require and it takes that information and creates a task in Planner.  Again, we’re living in proof of concept land so this wasn’t intended for practical application.

Given that my solution accepts multiple dialogs or multiple types of conversations, a greeting and a help desk request, I need to be able to differentiate “hello” from “i need to submit a help desk ticket” and any variation of that.  This is where the Cognitive Services come in!

In the image below, you’ll see the list of Intents.  When someone communicates with my bot, the bot will expect the intention to be a Greeting, a Help Desk request, or None, which is where unknown requests go.  When that one is hit, that’s when your bot will throw its hands up… if it had hands.

LUIS Intents

Inside an intent is what is called an Utterance.  An utterance is a sentence associated with the intent, along with the probability that the sentence was intended to match it.  If we look at the Greeting intent, we can see that there are 2 utterances associated with it.  ‘Hi’ has an 80% chance of being a greeting.  ‘Howdy’ has a 70% chance.

greeting utterance.PNG

Similarly, our help desk ticket has its own list of utterances.  In this case, “I’d like to submit a help desk ticket” and “I’d like to submit a ticket” are both 100% intended to trigger my help desk ticket logic.

help desk utternance

The Suggested Utterances tab also stores a list of utterances that the service had no idea what to do with.  You can go in there and set the intent on the utterances so that those same words/sentences can also trigger the appropriate logic.

help desk suggested utterance

There’s more to it than just this but I highly recommend you read up on the Cognitive Services.  The site has some cool demos and there are other types of services like image recognition services and others.

LUIS and ANA Start Communicating

The following code starts with a couple of attributes.  LuisModel takes 2 guids, and the placeholder text in the code shows you where you can get those.  The 2nd attribute is Serializable, which is needed by the bot framework.

There are 3 methods, which are also decorated with LuisIntent attributes.  The first has a blank value, and this catches anything that falls into the None intent seen in the previous images.  The second is the Greeting intent.  This Greeting method will be called if the user enters Hi or Howdy, as we saw in the Greeting intent.  Finally, we have the Help Desk Ticket, which is called… yep, you guessed it… when the Help Desk intent is triggered.  There is also a HelpDeskTicket class that contains the logic for submitting a ticket, which uses the Microsoft Graph to create a task in Planner.

[LuisModel("<APP ID from LUIS Dashboard page>", "<key string from LUIS Publish App page>")]
[Serializable]
public class LuisDialog : LuisDialog<object>
{
    internal BuildFormDelegate<HelpDeskTicket> Ticket;

    public LuisDialog()
    {
    }

    [LuisIntent("")]
    public async Task None(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("I'm sorry, I don't know what you mean.");
    }

    [LuisIntent("Greeting")]
    public async Task Greeting(IDialogContext context, LuisResult result)
    {
        context.Call(new GreetingDialog(), Callback);
    }

    public async Task Callback(IDialogContext context, IAwaitable<object> result)
    {
    }

    [LuisIntent("HelpDeskTicket")]
    public async Task HelpDeskTicket(IDialogContext context, LuisResult result)
    {
        if (this.Ticket is null)
        {
            this.Ticket = Models.HelpDeskTicket.BuildForm;
        }
        var helpDeskForm = new FormDialog<HelpDeskTicket>(new Models.HelpDeskTicket(), this.Ticket, FormOptions.PromptInStart);
        context.Call<HelpDeskTicket>(helpDeskForm, Callback);
    }
}


Data Access with the Microsoft Graph

The HelpDeskTicket class is where I kept my form logic.  It’s fairly direct.  First it collects your help desk ticket information, then it authenticates, creates a Planner task object, assigns its values based on the user’s responses, and then posts the task.

namespace Bots.Ana.Models
{
    public enum RequiredService
    {
        // service options not shown in the original post
    }

    public enum Severity
    {
        High, Medium, Low, Emergency
    }

    public class HelpDeskTicket
    {
        // the order of the variables determines the order of the questions
        public RequiredService ServiceRequired;
        public Severity IssueSeverity;
        [Prompt("Tell me about your issue.")]
        public String Description;

        public static IForm<HelpDeskTicket> BuildForm()
        {
            // when the form is completed, run the submission logic
            OnCompletionAsyncDelegate<HelpDeskTicket> submission = async (context, state) =>
               await SubmitTicket(context, state);

            return new FormBuilder<HelpDeskTicket>()
                .Message("Welcome to the help desk ticketing bot!")
                .OnCompletion(submission)
                .Build();
        }

        private static async Task SubmitTicket(IDialogContext context, HelpDeskTicket state)
        {
            string token = string.Empty;
            context.UserData.TryGetValue("AccessToken", out token);

            GraphServiceClient graphClient = SDKHelper.GetAuthenticatedClient(token);

            GraphService service = new GraphService(context);
            var me = await service.GetCurrentUser();

            PlannerTask task = new PlannerTask();

            task.Title = $"{state.ServiceRequired} issue reported by {me.DisplayName}";
            task.PercentComplete = 0;
            task.PlanId = ConfigurationManager.AppSettings["HelpDeskPlanner"].ToString();

            // get the guid for the Planner Bucket from the web.config that matches the severity chosen
            switch (state.IssueSeverity)
            {
                case Severity.High:
                    task.BucketId = ConfigurationManager.AppSettings["HighPriority"].ToString();
                    break;
                case Severity.Medium:
                    task.BucketId = ConfigurationManager.AppSettings["MediumPriority"].ToString();
                    break;
                case Severity.Low:
                    task.BucketId = ConfigurationManager.AppSettings["LowPriority"].ToString();
                    break;
                case Severity.Emergency:
                    task.BucketId = ConfigurationManager.AppSettings["HighPriority"].ToString();
                    break;
                default:
                    task.BucketId = ConfigurationManager.AppSettings["NoPriority"].ToString();
                    break;
            }

            PlannerTask planTask = await service.CreatePlan(task);

            await context.PostAsync($"Your ticket has been submitted.  A technician will contact you soon.");
        }
    }
}


That was your 50,000 ft view of the application. I recently presented it at a user group where I went over the various integration points in Office 365. I showed the Bot running in Microsoft Teams and Skype. Here’s what the bot looks like in action.

First, a look at Microsoft Teams.

ana teams

Next, a look at Skype

ana skype