Building an App with SPFx, React, and TypeScript Part 1: Stateless Functional Components

Introduction

This is the first in a series of posts where I'll walk through the process of building an event management web part for SharePoint Online. I decided to do this to go over several concepts in React and how they're implemented using TypeScript. I'll cover the main points of the solution and walk through how I went about it. The solution is available on GitHub.

I'm going to skip over setting up the project. There are several resources that can walk you through that process. I wrote a series a while back on quickly setting up a project that you can refer to, but keep in mind that some of the details may be outdated since the packages used were older versions.

Setup

To start, after creating my Event Hub project, I did a little bit of organization. Inside the components directory, I created an EventHub folder and moved my related event hub files into it. I also created a "statelessComponents" directory to separate my stateless functional components.

Note: This folder setup is just how I decided to do it for this particular solution. Typically my components are in a “Containers” folder and my stateless functional components are in a “Components” folder. You can organize your files however you want.

The EventHub component that was created for me is a component that manages state, not a Stateless Functional Component. We'll briefly touch on the EventHub just for some initial setup, but this post will focus on Stateless Functional Components, as most of your app's components should be made up of these types of components.

Creating a Stateless Functional Component

Inside my statelessComponents directory, I created another directory called "chrome" and inside of it, I created chrome.tsx. This will be a stateless functional component since, as of now, I don't plan on managing state with it. Its purpose is to define the app's layout, and for now, it'll simply be a top nav bar and a main content area.

Before I write anything in chrome.tsx, I'm going to install a required package that we'll need.

npm i --save @types/react-router-dom

Once that's installed, we can start building our chrome. As I mentioned, its role is to define the app's layout. We'll keep it simple and set up some placeholders. The following code is really simple. It's just a stateless functional component, so there's no state; you can see a wrapping div and two elements inside. The first element is a div where the top nav will appear, and the main element will display the contents of any component that we wrap with this chrome component.

import * as React from 'react';

const chrome = (props:any) => (
    <div>
        <div>Here is where the top nav will be</div>
        <main>
            {props.children}
        </main>
    </div>
);

export default chrome;

If we decide to add more to the layout, like a footer, a left nav, or anything else, we can edit the chrome component. We'll revisit this component later as we start building, but right now, it's enough to show a header and the contents of other components wrapped by it.
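To make that concrete, here's a quick sketch of what chrome.tsx could look like with a hypothetical footer added. This isn't part of the solution; it's only to illustrate that the layout grows by adding elements to this one component.

import * as React from 'react';

// Sketch only: the same chrome component with a hypothetical footer element.
const chrome = (props: any) => (
    <div>
        <div>Here is where the top nav will be</div>
        <main>
            {props.children}
        </main>
        <footer>Here is where a footer could go</footer>
    </div>
);

export default chrome;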

Using our Chrome

At this point, let's go back to EventHub.tsx, which was created for us when we set up our SPFx web part. The following is the default code that appears in the class. This is where the web part's UI is defined. We're going to remove most of it and apply our own.

import * as React from 'react';
import styles from '../EventHub.module.scss';
import { IEventHubProps } from './IEventHubProps';
import { escape } from '@microsoft/sp-lodash-subset';

export default class EventHub extends React.Component<IEventHubProps, {}> {
  public render(): React.ReactElement<IEventHubProps> {
    return (
      <div className={ styles.eventHub }>
        <div className={ styles.container }>
          <div className={ styles.row }>
            <div className={ styles.column }>
              <span className={ styles.title }>Welcome to SharePoint!</span>
              <p className={ styles.subTitle }>Customize SharePoint experiences using Web Parts.</p>
              <p className={ styles.description }>{escape(this.props.description)}</p>
              <a href="https://aka.ms/spfx" className={ styles.button }>
                <span className={ styles.label }>Learn more</span>
              </a>
            </div>
          </div>
        </div>
      </div>
    );
  }
}

I'm going to edit this component by removing the JSX that is being returned and replacing it with some simple text wrapped by my chrome component. We need to import Chrome from our chrome component, which you can see right before the class definition. Then you'll see that I use Chrome in the return statement with some simple text. We'll replace the text with a component later, but this should be enough to have our layout produce a header and a welcome message underneath.

import * as React from 'react';
import styles from '../EventHub.module.scss';
import { IEventHubProps } from './IEventHubProps';
import { escape } from '@microsoft/sp-lodash-subset';

import Chrome from '../../statelessComponents/chrome/chrome'

export default class EventHub extends React.Component<IEventHubProps, {}> {
  public render(): React.ReactElement<IEventHubProps> {
    return (
      <Chrome>
        Welcome to the event hub
      </Chrome>
    );
  }
}

Running gulp serve will open our workbench, where we can drop our Event Hub web part onto the page. Again, I assume you know the basics of creating and running an SPFx web part. The result should look like the following image. Nothing special just yet, but we have chrome.tsx defining a top nav that we'll see regardless of which component we render, and we can see the contents of another component rendered underneath the top nav section.

Adding Navigation

Let's focus on the top nav bar, which we'll call the Menu Bar. I'm going to create a series of stateless functional components (SFCs) to make it up. The components will be the MenuBar, which will contain a logo and navigation items.

Let’s start building these components in sort of a reverse order. Structurally, our menu will look something like:

  • MenuBar
    • Logo
    • Navigation Items
      • Navigation Item

We'll begin with the Navigation Items. Under the statelessComponents directory, I created a Navigation folder which will contain the MenuBar component, the Navigation Items, and the Navigation Item. With that said, the path to my Navigation Items is /statelessComponents/Navigation/NavigationItems/NavigationItems.tsx.

For now, I’ll hard code a few items and start with an unordered list.

import * as React from 'react';

import styles from '../NavigationItems/NavigationItems.module.scss';

const navigationItems = () => (
    <ul className={styles.NavigationItems}>
        <li>Home</li>
        <li>About</li>
        <li>Members</li>
    </ul>
);

export default navigationItems;

Let's create a component for the logo. I'm going to use Office Fabric's icons for this one, so we'll need to install the package. I'm installing an older version of the package because, as of the time that I'm writing this, the latest version isn't working with SPFx.

npm install --save office-ui-fabric-react@5.135.0

Once installed, we can create our logo component which will simply be a call to an Office Fabric icon. This component will be located under statelessComponents/Logo/Logo.tsx.

import * as React from 'react';

import { Icon } from 'office-ui-fabric-react';

const logo = () => (
    <div>
        <Icon iconName='ScheduleEventAction' className='ScheduleEventAction' />
    </div>
);

export default logo;

Next, we’ll bring the Logo and NavigationItems together inside the MenuBar component. This new component will be found under statelessComponents/Navigation/MenuBar/MenuBar.tsx.

import * as React from 'react';

import NavigationItems from '../NavigationItems/NavigationItems';
import Logo from '../../Logo/Logo';
import styles from '../MenuBar/MenuBar.module.scss';

const menuBar = () => {
    return (
        <header className={styles.MenuBar}>
            <Logo  />
            <nav>
                <NavigationItems /> 
            </nav>
        </header>
    );
}

export default menuBar;

Now we need to go back to our chrome.tsx and introduce MenuBar.tsx so that we can start seeing something other than our placeholder text. Back in chrome.tsx, we now import MenuBar and replace our placeholder with the new element.

import * as React from 'react';
import MenuBar from '../Navigation/MenuBar/MenuBar';

const chrome = (props:any) => (
    <div>
        <MenuBar />
        <main>
            {props.children}
        </main>
    </div>
);

export default chrome;

At this point, we have a menu bar at the top of our web part with a logo, followed by a few links. Our navigation items component has a hard-coded set of links. I want to pull them out into their own component (navigation item). This SFC will change later, and if you are familiar with routing in React, the href should provide a hint about what those changes will be.

import * as React from 'react';

import styles from '../NavigationItem/NavigationItem.module.scss';

export interface NavigationItemProps {
    url: string,
    children: React.ReactNode
}

const navigationItem = (props: NavigationItemProps) => (
    <li className={styles.NavigationItem}>
        <a href={'#' + props.url}>{props.children}</a>
    </li>
);

export default navigationItem;

The above code defines our list item, applies a class, adds an anchor tag inside of it, and assigns the url prop to the href. The title of the link is the content that this navigationItem component wraps. We'll see that shortly.

Now that we have an SFC for our individual items, we need to go back to our NavigationItems SFC to start using our single items. Back in NavigationItems.tsx, we will import NavigationItem and replace our links. (The names are similar but hopefully, it's not too difficult to follow along.)

import * as React from 'react';

import NavigationItem from '../NavigationItem/NavigationItem';
import styles from '../NavigationItems/NavigationItems.module.scss';


const navigationItems = () => (
    <ul className={styles.NavigationItems}>
        <NavigationItem url='/' >Home</NavigationItem>
        <NavigationItem url='/about'>About</NavigationItem>
        <NavigationItem url='/members'>Members</NavigationItem>
    </ul>
);

export default navigationItems;

After some style updates, we have a simple bar along the top of our web part with a logo made from an Office Fabric UI icon and a few links. Here are a few shots of what our project structure and web part look like.

Conclusion

We started with an SPFx solution and restructured the directories a little bit. I used the components folder for our stateful components and created a "statelessComponents" folder for our stateless functional components. That's not a common practice. I did that just for this demo because I'm using this solution to explain certain concepts outside of this blog series, and naming the folders this way will probably help people unfamiliar with React remember what is in those folders.

We did make some changes to our EventHub component just so that we can see our new additions but this post focused mostly on the components that don’t maintain their own state.

In the next post, we’re going to focus on components that will handle state. These components will maintain the content and pass down the appropriate data as props. We already saw an example of that when we set up our navigation items to pass props down to the navigation item.
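As a quick preview of what's coming, a stateful container looks roughly like the sketch below. The EventList name and its state shape are hypothetical and only illustrate the pattern: state lives in the container and flows down to stateless functional components as props.

import * as React from 'react';

// Hypothetical container component: it owns data in state and passes
// pieces of it down to stateless functional components as props.
export interface IEventListState {
  events: string[];
}

export default class EventList extends React.Component<{}, IEventListState> {
  public state: IEventListState = { events: ['Kickoff', 'Planning', 'Retrospective'] };

  public render(): React.ReactElement<{}> {
    return (
      <ul>
        {this.state.events.map((name: string) => <li key={name}>{name}</li>)}
      </ul>
    );
  }
}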

The Return of SharePoint Saturday Philly

Last night, we announced that June 22, 2019 will be the return of SharePoint Saturday Philly, with a twist. The innovations made in Office 365 over the years, with SharePoint as a core product, make it nearly impossible to look at SharePoint in a bubble. We've decided that SharePoint Saturday Philly will focus on the Microsoft Cloud as a whole. What does that mean? We want to see our speakers talk about Azure, Dynamics, DevOps, the suite of Office 365 products, and of course, SharePoint.

There's been some buzz lately about bringing this event back to our area, so we got a small group together and we're making it happen, but we will need help from sponsors to pull it off. We've opened the call for speakers. Attendee registration is also open.

Seats are limited. Let’s make this a successful return!

For more information, please visit our SPSPhilly page on SPSEvents.org or contact the SPSPhilly team at contact@spsphilly.org.

Read Names from a Person Field with PnPJS

If you're starting out and you need to read names from a person field, it may not be clear how to go about this. In this example, I have a list that has an Organizers person field. In order for me to get the name(s) in that field for a given item, I name the fields that I need in my select and then expand my Organizers field. If you don't do this, all that gets returned is an OrganizersId field.

import { sp } from '@pnp/sp';

sp.web.lists.getByTitle(this.props.listName)
  .items.getById(id)
  .select("Title", "Organizers/Title", "Event_x0020_Location/Address", "Members")
  .expand("Organizers/Title")
  .get().then((item: any) => {

    // array of meeting organizers
    const meetingOrganizers = item['Organizers'];

    console.log(meetingOrganizers[0].Title);
});

In the example, we're naming a few fields that we want returned: Title, Organizers/Title, Event_x0020_Location/Address, and Members. Then we expand Organizers/Title. Expanding it will include the names of the individuals as an array of objects. I used a console.log to display the first record in this example. Hope that saves you some time.
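Since the expanded field comes back as an array, you can just as easily read every name instead of only the first one. Continuing from the snippet above:

// meetingOrganizers is the expanded array of person objects from above;
// map it to just the display names.
const organizerNames: string[] = meetingOrganizers.map((o: any) => o.Title);
console.log(organizerNames.join(', '));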

How to Extract Location Data with PnPJS

The SharePoint location field is a nice way of adding location information with the help of Bing Maps. It contains quite a bit of information about a given location. The field stores the data in JSON format so it’s simple to get the data that you need.

A location will be stored in the following format:

{
	"EntityType": "LocalBusiness",
	"LocationSource": "Bing",
	"LocationUri": "https://www.bingapis.com/api/v6/localbusinesses/YN873x128404500",
	"UniqueId": "https://www.bingapis.com/api/v6/localbusinesses/YN873x128404500",
	"DisplayName": "Microsoft",
	"Address": {
		"Street": "45 Liberty Boulevard",
		"City": "Malvern",
		"State": "PA",
		"CountryOrRegion": "US",
		"PostalCode": "19355"
	},
	"Coordinates": {
		"Latitude": 40.05588912963867,
		"Longitude": -75.52118682861328
	}
}

If you're building a web part with React and are using the PnPJS libraries, it's pretty straightforward. Here's a snippet showing how you might do it.

import { sp } from '@pnp/sp';

...

sp.web.lists.getByTitle(this.props.listName)
  .items.getById(id).get().then((item: any) => {
    // parse the event location info
    const location = JSON.parse(item['Event_x0020_Location']);

    console.log(location.Address.City);
    console.log(location.Address.State);
});

So once you have an item, you can parse its location field into a variable/const using JSON.parse. Once you have that, you can access the data via properties. Address is available and has properties of its own: City, State, etc. That's it. Pretty simple.
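If you'd like a little type safety on top of that, you could describe the parsed value with an interface based on the JSON format above. This is just a sketch; the interface name is mine, and the field internal name comes from the earlier example.

import { sp } from '@pnp/sp';

// Shape of the parsed location value, based on the JSON example above.
interface ILocationValue {
  DisplayName: string;
  Address: {
    Street: string;
    City: string;
    State: string;
    CountryOrRegion: string;
    PostalCode: string;
  };
  Coordinates: {
    Latitude: number;
    Longitude: number;
  };
}

sp.web.lists.getByTitle(this.props.listName)
  .items.getById(id).get().then((item: any) => {
    const location: ILocationValue = JSON.parse(item['Event_x0020_Location']);
    console.log(location.Address.City + ', ' + location.Address.State);
});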

Telephone Links In SharePoint

I was asked to build a contact directory in SharePoint. I started with the People web part but the list included people who didn’t work for the company so I scrapped that.

On attempt #2, I created a site page and tried the text and markdown web parts. The text web part has a hyperlink option, but this doesn't work because it expects a full link, and all we want is something like <a href="callto:5551233777">555-123-3777</a> or <a href="tel:5551233777">555-123-3777</a>.

Next, I tried the markdown web part but this only half worked. In the web part, I typed:

[Call Now](callto:5551233777)
[Call Now](tel:5551233777)

In the browser, the above creates a link that will prompt you to choose an app to make that call. That same page on a mobile browser or the SharePoint app will produce the same content, but no link. Bummer.

What does work is the hyperlink field in a list, but callto isn't valid. Instead, you need to use "tel". Using tel in a list produces a link that will work on a mobile device.

Troubleshooting Permissions Issues with Flow

The Problem

I was helping a client with a permissions issue with Flow.  The Flow would get triggered manually by a user to kick off their performance review.  The problem was that some users didn’t have the option to start the Flow.

My Findings

I went into the Flow to make sure that the SharePoint library had run permissions by ensuring that it was in the "Run-only users" list, as mentioned in this post.

Manage Run Only Users

That didn't work.  So I got an end-user on the phone and had her share her screen as I started digging.  What I found was that even though the user had permission to her content in the library, and was also in the members group on the site, she was missing one important set of permissions.

The Solution

In the library itself, the members group’s permissions were removed.  Once I reintroduced the group with edit and contribute permissions, the user was able to select her flow from the menu.

Automatically Tag and Caption Images in SharePoint

I was recently asked if SharePoint could meet a certain set of requirements that included auto tagging images uploaded to SharePoint.  The requirements came from a team that needs to manage many pictures for hundreds of locations.  By leveraging cognitive services, I was able to slap a solution together in a fairly short amount of time.

Setting up the Computer Vision API in Azure

Start by logging into your Azure portal and searching for Cognitive Services.

Cognitive Services search

On the cognitive services blade, click the Add button which will show the “AI + Machine Learning” blade where you can search for “Computer Vision”.

Adding Computer Vision

 

After you select Computer Vision, you can provide your service name, subscription, location, etc., as shown below.

Computer Vision Create 2

 

Your new service will have some basic information for getting started: the keys that your service will use, links to additional information and tutorials, etc.

Computer Vision Quick Start

That's it for configuring the service.  You'll need to copy a key, which will be needed for the API calls.  To do so, click the Keys link under section 1, "Grab your keys".  You'll also want to copy the endpoint in section 2.  We'll need both later when setting up the service in Flow.

Computer Vision Keys
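For context, the Analyze Image action we'll use in Flow is calling this same service over REST with that key and endpoint. Here's a rough TypeScript sketch of an equivalent call; the v2.0 path, query string, and response shape are my assumptions based on the Computer Vision documentation, not something the Flow action exposes.

// Hedged sketch: calling the Computer Vision Analyze endpoint directly.
// Replace the placeholders with the key and endpoint copied from the Azure portal.
const endpoint: string = 'https://<your-region>.api.cognitive.microsoft.com';
const key: string = '<your-key>';

async function analyzeImage(imageUrl: string): Promise<void> {
  const response = await fetch(endpoint + '/vision/v2.0/analyze?visualFeatures=Description,Tags', {
    method: 'POST',
    headers: {
      'Ocp-Apim-Subscription-Key': key,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ url: imageUrl })
  });
  const result = await response.json();

  // Roughly what the Flow later maps to the Description and Image Tags fields.
  console.log(result.description.captions[0].text);
  console.log(result.tags.map((t: any) => t.name).join(', '));
}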

Setup your Library

Now that our service is set up, we will need a library.  We're going to use a simple document library with two text fields.  The first is a single line of text for the caption; I'm naming this field "Description."  The second is a multi-line text field to store our tags; I named it "Image Tags."  Ideally, we'd use a managed metadata field or Enterprise Keywords, but currently, Microsoft Flow doesn't support creating new terms, and for this solution, it isn't feasible to know what those tags are going to be in order to pre-populate the terms.

doclib

Create your Flow

Let’s take a look at the Flow at a high level.  We start by listening for files being created/uploaded to our library.  Get file metadata will get your Item Id.  Next, the Get File Properties will get you the columns that we’re going to populate.  Analyze Image will call our Computer Vision API.  Compose will get our tags from the previous action and the last step will populate our item with the Caption and Tags provided by the Computer Vision API.

flow-overview

Now, let’s look at the important parts.  Before we can use the Analyze Image action, we have to set it up.

analyze image

If it's your first time using the service, you'll need to do some basic setup.  For the Connection Name, I entered the name of my Azure resource group for simplicity.  The key that I said you'd want to copy earlier is what you'll use for the Account Key, and the endpoint that you copied will be used in the Site URL box.

Computer Vision API Connection

When you hit the Create button, it’ll switch to the following screen.  Here you’ll be able to select a source.  We’ll choose Image Content and that will give you the Image Content field.  For the Image Content, we’ll provide “File Content” which comes from the very first step in the Flow.

Analyze Image 2

The next step is the Compose step which we’ll gloss over because all it’s doing is taking the “Tag Names” output from the Analyze Image step.

The final step is to update the file properties. We're going to feed the "Captions Caption Text" to our Description field, and we'll give the output from the Compose step to the Image Tags field.

Computer Vision Update Properties

Once the Flow is saved, and a few images have been uploaded, you’ll have a series of auto-tagged images.

tagged lib

Conclusion

The screenshot above shows an image of a skateboarder.  The Computer Vision API generated the caption "a man riding a skateboard" and a series of tags in the Image Tags field. The caption and tags are searchable; however, they are not filterable.

 

5 Cool Features in Microsoft Stream

I recently did a presentation at a local user group where I gave an overview of Microsoft Stream.  I think it's a really good way to manage an organization's video content, and the extra AI-based features that it offers make it a huge time saver when searching for content within videos.  With that said, here are some features that make Microsoft Stream cool and interesting.

Facial Detection

When you upload a video to Stream, part of the processing that you'll see happening after the upload is the service scanning the video for faces.  Any faces.  If the video has an image or presentation in the background with a picture of a person, that face will be detected too.  Those faces are then displayed in a timeline that you can click on to navigate directly to that point in the video.  So if you have a 2-hour video of a staff meeting and you want to watch the part where your boss was speaking, you don't have to click repeatedly through the video to find when he or she was speaking.  You can just look for his or her face on the timeline and go directly to that point.

enable-faces
Image from the Stream product site

Text Analysis

In addition to facial detection, the audio is transcribed and displayed for you on the site.  You can perform a text search on the transcript.  So if you know that someone mentioned "cyber security" in the video, type it in and the transcript will show you all instances of that phrase.  You can then click on that text and go directly to that point in the video.  You know how annoying it is to go through a long video looking for that one small topic.  This is a huge time saver.

Image from the Stream product site

Sometimes, the text analysis gets a word wrong.  Stream allows you to edit the transcript to correct any incorrect transcriptions.

Links and Hashtags

When you upload a video, you can also provide a description.  In that description, you can provide hashtags to help people filter videos by topic.  If you want to tag all the videos for marketing events, or product demonstrations, you can simply add a meaningful hashtag that can be clicked on to show other videos with that hashtag.

Another cool feature is the ability to add a time code to the description.  You can provide a table of contents of sorts for your video by simply entering when it occurs in the video.  That time becomes a link that will take the user to that spot in the video.

time and hashtags

Integration

In the past, you’d likely have videos all over the place.  You might have a marketing site, or a product site, or a corporate events site and each would have their own videos.  With Stream, you can centralize all of your videos and the other Office 365 products have ways to show that video content.  For example, Teams has a Stream tab that lets you display a series of videos for an Office 365 Group or individual videos and SharePoint has the Stream web part which lets you do the same.

SP Stream

Live Events (preview)

This part is currently in preview, but it's worth mentioning.  Stream provides the ability to broadcast live video.  If you have an all-staff meeting, Stream, with the help of a streaming application, can broadcast it live to the rest of your organization.  Additionally, that live broadcast will get processed, and it will also allow you to use the facial detection, closed captioning, and text search capabilities.

Monitor
Image from docs.microsoft.com/stream

The Desktop Diet Challenge

Two months ago, I was listening to "The Intrazone," a podcast on the SharePoint intelligent intranet.  Episode 6 discussed their experiences with the Desktop Diet Challenge, where they attempt to use the browser versions of popular Microsoft products like Word, Excel, PowerPoint, SharePoint, etc.  After listening to that episode, I decided to give it a try for a week.

Step 1: Outlook

I started off by immediately switching to the Outlook web app.  The first thing that I noticed was that I was a little more productive.  I found that I spent less time checking email because I wasn't constantly seeing the notification appear every time an email came in.  The web app does have a chime whenever an email comes in, but my laptop is usually muted so I don't notice it much.  A common cause of productivity loss is context switching, and this one change dramatically reduced how often I "paused" what I was doing to check the latest message.

The web app also has some additional features that aren’t available by default for the desktop.  One feature that I liked was the ability to let meeting attendees vote for a meeting time before scheduling.  You can send the attendees several times to choose from and each selected time shows up as a “HOLD” on your calendar.  If everyone votes for the same time, it will automatically book that time and delete the other holds.  I also liked the contact card better on the web.  It provided more information about the content that users were working on and people that they’ve been working with lately.

Step 2: OneNote

I take lots of notes in OneNote and I have it open almost all of the time.  This one was difficult to switch from.  Surprisingly, it wasn’t because of any changes or lack of functionality.  For me, I found the client more convenient when switching from one notebook to another.  This switch was almost derailed before I even got started.  I was meeting with a client and I couldn’t connect to their wifi.  Since the purpose of the challenge was to avoid the desktop client, I wouldn’t have been able to access my notebooks.  Realistically, you’d sync your OneDrive files locally and then use the desktop app to work offline and when you get back online the files would sync.  Luckily, the meeting started a little late, and I managed to connect on time.

 Step 3: Word, Excel, PowerPoint

On day 2 or 3, I needed to crack open some spreadsheets, write a statement of work, and prepare slides for a presentation, and the results were mostly positive… Mostly.

I was determined to stick to the challenge so I would go out of my way to do so.  If I had a file saved locally, I would upload it to OneDrive just to open it via the browser.

As far as the experience goes, it was mixed for me.  Overall, the functionality was there but there were some quirks.  Images in Word get positioned awkwardly.  They don’t appear to be properly aligned but if you open the same doc on the desktop client, the image looks fine.

PowerPoint… this was the one that I wasn't a fan of.  Similar to Word, it was awkward to align images and other things.

Excel was a good experience.  Overall, the experience felt the same.  The only exception was around filenames.  I had 2 files in different directories with the same name and the web app wouldn’t allow me to open both at the same time.

Step 4: Teams… kind of

Honestly, this one went nowhere.  At the time, I was working on a chat bot in a development tenant of mine and I had trouble logging into Teams on my work tenant so I stuck with the desktop version; however, I did use the mobile app to chat with coworkers when I wasn’t near a laptop.  Conversations feel natural on the mobile app but opening files felt a little slow, especially when viewing PowerPoint slides.  It wasn’t anything major but I could see it becoming a nuisance if you need to skim through a large number of slides.

Results

As I mentioned, the goal was to try it out for a week.  It has been two months and I still use Outlook’s web app almost exclusively.  I have opened the desktop version a couple of times since completing the challenge but not for anything like missing functionality.  I believe that the only times that I’ve opened it, it had to do with 3rd party tools that integrated and opened it for me but I haven’t sent an email or scheduled a meeting from it in 2 months.

I will flip flop between the versions of Word and Excel.  If I have a file saved locally, I won’t go out of my way to open it from the browser but files that get emailed to me will get saved to OneDrive and most likely opened via the browser.  If I have a document where I need to adjust the location of an image, I’ll use the client for that but I would say that most of my Word usage is through the web app but most of my Excel usage is through the client.  I can’t really say that there’s any reason preventing me from using Excel’s web app; it just kind of turned out that way.

I have not used PowerPoint’s web app for creating or editing presentations since the challenge but if I’m just reviewing slides, I will.  I thought that the experience of editing a presentation online just felt awkward so I haven’t bothered to try it again.

I really prefer the client for OneNote.  I just think it’s convenient and I switch notebooks a lot which as I mentioned above, wasn’t a great experience on the web.

As for Teams, that didn’t work for me.  I keep the desktop client open at all times and I will use the mobile app from time to time.

Integrating Twitter, SharePoint, and Azure Sentiment Analysis with Flow

Last month, I wrote a post that included steps for setting up Sentiment Analysis, an Azure Cognitive Service, and how to use it to score how positive your emails are.  This time, I’m going to leverage the service that was configured in that post by using it in a Flow.  The Flow will pull content from Twitter, store it in SharePoint, and determine the Sentiment score for the tweet. 

To begin, I set up a SharePoint list with a number field named Sentiment Score.  For the purpose of this demo, I'll use the Title field to store the tweet text, but in production, I'd create a separate field for it.

Empty SharePoint list

Next, I click on Flow in the menu and select Create a flow.

Create a flow

A menu will appear to the right of the page with a few templates, but we'll want to create our own, so we'll click on "See your flows" at the bottom.

See your flows

Next, you'll be taken to the Flow page, where you'll want to select "Create from blank".

Create from blank

 

The trigger for our flow will be when an item matching a particular hashtag is created in Twitter, so you can either select the Twitter icon titled "When a new tweet is posted" or, if it's not there, click the Search button below it and find the Twitter trigger there.

Start with the trigger

Your flow designer will start you with the Twitter trigger.  When you first select it, you’ll need to provide credentials for Twitter.  After you do, your trigger will display a simple text box that lets you enter the text you’d like to search for.  In this case, I chose to search for #Microsoft.  This will grab any new tweets with that hashtag. 

flow - twitter

Next, I want to run that tweet against the sentiment analysis action.  I’m going to assume that you have the sentiment analysis service configured but if not, you can go back to my previous post where I walk through those steps.  To narrow down the actions, I searched for “sentiment” and it filtered it down to the results below. 

Flow - sentiment

I then selected “Text Analytics” from the connector to show that there are multiple options, but I could’ve just selected the action titled “Text Analytics – Detect Sentiment”. 

Text analysis - actions

 

Next, it’s time to configure the sentiment action.  When the action first comes up, it’ll ask for a key and endpoint which you can get from the sentiment analysis service in Azure.  Once you provide that, you’ll get the action below which asks for the text that you want to analyze.  Using the Dynamic Content, you can tell the action to analyze the Tweet Text that is coming from the Tweet trigger and you can specify a language as well. 

Configure sentiment
Configured sentiment
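Behind the scenes, the Detect Sentiment action is calling the Text Analytics API with that same key and endpoint. Here's a rough TypeScript sketch of an equivalent request; the v2.1 path and response shape are assumptions based on the Text Analytics documentation, not something the Flow designer shows you.

// Hedged sketch: calling the Text Analytics sentiment endpoint directly.
// The key and endpoint are the same values the Flow connection asks for.
const endpoint: string = 'https://<your-region>.api.cognitive.microsoft.com';
const key: string = '<your-key>';

async function detectSentiment(tweetText: string): Promise<number> {
  const response = await fetch(endpoint + '/text/analytics/v2.1/sentiment', {
    method: 'POST',
    headers: {
      'Ocp-Apim-Subscription-Key': key,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      documents: [{ id: '1', language: 'en', text: tweetText }]
    })
  });
  const result = await response.json();

  // Score is between 0 (negative) and 1 (positive); this is the value
  // the Flow writes to the Sentiment Score column.
  return result.documents[0].score;
}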

 Once the text is scored, I can create another action to Create a SharePoint Item. 

Create item action

That will give me the action below, which simply needs a site URL and the list where we want to store our results.  This is the list that I created in the beginning with the Title and Sentiment Score fields.  Using dynamic content, you can save the Tweet Text to the Title field and the score from the sentiment action to the Sentiment Score field.

Create SharePoint item

Once you’re done, the Flow should look something like this.  (Don’t forget to give your Flow a proper name by clicking on the text at the top left of the screen.  I named mine “Twitter Sentiment Analysis”)

Flow complete

The result is a list that is populated with tweets and scores. 

SharePoint populated list

Conclusion

This is just a simple proof of concept to show how easy it can be to do this.  Depending on how many tweets you expect to have, you may not want to create SharePoint list items for this.  Instead, you may want to store the content in a spreadsheet or database.  With a little more effort, you can create better ways to present the data using column formatting or SPFx web parts.  If you release a new product or have some sort of event, you can keep an eye on your social media buzz to see how people are receiving it.