Is This Good?
David Colarusso
Co-director, Suffolk's Legal Innovation & Tech Lab
This is the 24th post in my series 50 Days of LIT Prompts.
To the extent this week's templates have shared a theme, it has been analyze and structure. So, it was only a matter of time before we ran into a use case built around sentiment analysis. Today, we'll build a template that lets you select the text of several social media posts and label each post's sentiment while creating a JSON object to hold data about each post, including this new sentiment label. Customarily, this sort of analysis is used on social media to help product or brand management folks stay abreast of public sentiment, or to feed algorithmic stock traders: buy when sentiment is positive, sell when it's negative. ;)
To show off the output, I decided to turn today's template on a Mastodon post from earlier this week in which Meredith Whittaker announced a new Signal feature. Users can now use Signal without sharing their phone number with other users. It's a feature folks have wanted for a long time, and though the assembled replies are mostly positive, there are one or two neutral and negative comments to be found. For context, positive, neutral, and negative are the triad of traditional sentiment markers. Here's the output for the first ten posts, the original plus replies.
{
  "posts": [
    {
      "author": "Meredith Whittaker",
      "body": "IT IS HAPPENING! Today, Signal launches phone number privacy & usernames! These features let you use Signal w/o sharing your phone number with the people you talk to. Proud to add more privacy to Signal, & proud of the smart, careful work the team did to make this happen ♥️",
      "sentiment": "positive"
    },
    {
      "author": "Meredith Whittaker",
      "body": "Also, note: You can sign up for beta to test these features. Note that Apple caps the number of iOS beta testers, and we have reached that limit. If you use Signal on iOS, you can get around this by signing up for Desktop beta, linked to your iOS account.",
      "sentiment": "neutral"
    },
    {
      "author": "Janet Vertesi",
      "body": "@Mer__edith hooray and congrats, Meredith and Signal Foundation! a big step :))",
      "sentiment": "positive"
    },
    {
      "author": "can",
      "body": "@Mer__edith The link for the MacOS desktop version doesn't seem to work?",
      "sentiment": "negative"
    },
    {
      "author": "f.rift :fire_blue:",
      "body": "@Mer__edith is it still impossible to downgrade in place if you choose to leave the beta program later on android?",
      "sentiment": "neutral"
    },
    {
      "author": "Eye",
      "body": "@Mer__edith Thank you for enabling this 👏👍🙂",
      "sentiment": "positive"
    },
    {
      "author": "Nakul Bhalla :verified:",
      "body": "@Mer__edith Yay!! 🥳",
      "sentiment": "positive"
    },
    {
      "author": "Marc Hedlund",
      "body": "@Mer__edith nice! Congrats on the launch.",
      "sentiment": "positive"
    },
    {
      "author": "Ethan Marcotte",
      "body": "@Mer__edith Yessss congratulations on the launch!",
      "sentiment": "positive"
    },
    {
      "author": "Mr. Completely",
      "body": "@Mer__edith very welcome update, thank you",
      "sentiment": "positive"
    }
  ]
}
What I find interesting about this template isn't its ability to assign labels but the fact that everything it does is based on a simple text input. It is identifying individual posts and their authors, assigning a sentiment, and packaging it all up in JSON. We're not parsing posts via an API; we're just dumping them all into a prompt and letting the LLM do the work. That's quite something. Of course, as with all of the outputs we've seen before, we shouldn't take it as a given that they're right. Proceed with caution.
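One way to exercise that caution is to check the structure of whatever comes back before relying on it. Below is a minimal Python sketch, assuming you've saved the template's JSON output to a file; posts.json is a name I made up for illustration. It confirms each post has the expected fields and one of the three expected sentiment labels.

import json

ALLOWED = {"positive", "neutral", "negative"}

# posts.json is a hypothetical file holding the JSON the template produced.
with open("posts.json", "r", encoding="utf-8") as f:
    data = json.load(f)  # raises an error if the model returned malformed JSON

for i, post in enumerate(data.get("posts", [])):
    # Every post should carry an author, a body, and a recognized sentiment.
    missing = {"author", "body", "sentiment"} - post.keys()
    if missing:
        print(f"Post {i} is missing fields: {missing}")
    elif post["sentiment"] not in ALLOWED:
        print(f"Post {i} has an unexpected sentiment: {post['sentiment']!r}")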
Let's build something!
We'll do our building in the LIT Prompts extension. If you aren't familiar with the LIT Prompts extension, don't worry. We'll walk you through setting things up before we start building. If you have used the LIT Prompts extension before, skip to The Prompt Pattern (Template).
Up Next
Questions or comments? I'm on Mastodon @Colarusso@mastodon.social
Setup LIT Prompts
LIT Prompts is a browser extension built at Suffolk University Law School's Legal Innovation and Technology Lab to help folks explore the use of Large Language Models (LLMs) and prompt engineering. LLMs are sentence completion machines, and prompts are the text upon which they build. Feed an LLM a prompt, and it will return a plausible-sounding follow-up (e.g., "Four score and seven..." might return "years ago our fathers brought forth..."). LIT Prompts lets users create and save prompt templates based on data from an active browser window (e.g., selected text or the whole text of a webpage) along with text from a user. Below we'll walk through a specific example.
To get started, follow the first four minutes of the intro video or the steps outlined below. Note: The video only shows Firefox, but once you've installed the extension, the steps are the same.
Install the extension
Follow the links for your browser.
- Firefox: (1) visit the extension's add-ons page; (2) click "Add to Firefox;" and (3) grant permissions.
- Chrome: (1) visit the extension's web store page; (2) click "Add to Chrome;" and (3) review permissions / "Add extension."
If you don't have Firefox, you can download it here. Would you rather use Chrome? Download it here.
Point it at an API
Here we'll walk through how to use an LLM provided by OpenAI, but you don't have to use their offering. If you're interested in alternatives, you can find them here. You can even run your LLM locally, avoiding the need to share your prompts with a third party. If you need an OpenAI account, you can create one here. Note: When you create a new OpenAI account you are given a limited amount of free API credits. If you created an account some time ago, however, these may have expired. If your credits have expired, you will need to enter a billing method before you can use the API. You can check the state of any credits here.
Login to OpenAI, and navigate to the API documentation.
Once you are looking at the API docs, follow the steps outlined in the image above. That is:
- Select "API keys" from the left menu
- Click "+ Create new secret key"
On LIT Prompts' Templates & Settings screen, set your API Base to https://api.openai.com/v1/chat/completions and your API Key to the value you got above after clicking "+ Create new secret key". You get there by clicking the Templates & Settings button in the extension's popup:
- open the extension
- click on Templates & Settings
- enter the API Base and Key (under the section OpenAI-Compatible API Integration)
Once those two bits of information (the API Base and Key) are in place, you're good to go. Now you can edit, create, and run prompt templates. Just open the LIT Prompts extension, and click one of the options. I suggest, however, that you read through the Templates and Settings screen to get oriented. You might even try out a few of the preloaded prompt templates. This will let you jump right in and get your hands dirty in the next section.
If you receive an error when trying to run a template after entering your Base and Key, and you are using OpenAI, make sure to check the state of any credits here. If you don't have any credits, you will need a billing method on file.
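If you'd like to confirm that your Base and Key work outside the extension, the sketch below sends a tiny test request to the same endpoint. It's just an illustration, assuming Python and the requests library; a 200 response means the key works, while a 401 usually points to a bad key and a 429 to missing credits.

import requests

API_BASE = "https://api.openai.com/v1/chat/completions"
API_KEY = "YOUR-SECRET-KEY"  # paste the key you created above

resp = requests.post(
    API_BASE,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Say hello."}],
        "max_tokens": 10,
    },
    timeout=30,
)
print(resp.status_code)  # 200 means the key works
print(resp.json())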
If you found this hard to follow, consider following along with the first four minutes of the video above. It covers the same content. It focuses on Firefox, but once you've installed the extension, the steps are the same.
The Prompt Pattern (Template)
When crafting a LIT Prompts template, we use a mix of plain language and variable placeholders. Specifically, you can use double curly brackets to encase predefined variables. If the text between the brackets matches one of our predefined variable names, that section of text will be replaced with the variable's value. Today we'll be using {{highlighted}}. See the extension's documentation.
The {{highlighted}} variable contains any text you have highlighted/selected in the active browser tab when you open the extension. Like yesterday, this prompt pattern is pretty straightforward. Structurally, it also looks a lot like yesterday's template. Highlight the text of the posts you want to examine, and run the template.
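Under the hood, filling a template is just string substitution. This isn't the extension's actual code, only a rough Python sketch of the idea: the {{highlighted}} placeholder is swapped for whatever text you selected.

TEMPLATE = "Summarize the following text:\n\n{{highlighted}}"

def fill_template(template, highlighted):
    # Swap the {{highlighted}} placeholder for the selected text.
    return template.replace("{{highlighted}}", highlighted)

print(fill_template(TEMPLATE, "Four score and seven years ago..."))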
Here's the template's title.
Social Sentiment
Here's the template's text.
The following is a collection of posts from social media. Your job is to read them and then provide a JSON object specifying their author, content, and assigning them a sentiment (i.e., negative, positive, or neutral). I'll provide an example of the JSON after the text of the posts. Be sure to read all of the posts and use the whole context to determine sentiments.
Here are the posts:
------ START POSTS ------
{{highlighted}}
------ END POSTS ------
Now provide a JSON object listing out each of the posts and the sentiment you assigned (i.e., negative, positive, or neutral). If you see a boosted or retweeted post, and it isn't a quote tweet, use the author and text from the original post. The list should be of the following structure which shows a single post:
{
  "posts": [
    {
      "author":"author of post",
      "body":"text of the post",
      "sentiment":"sentiment of the post"
    }
  ]
}
Remember, you should include objects for each of the posts found above.
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-4. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models.
- Temperature: 0. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Since we're seeking fidelity to a text, I went with the least "creative" setting, 0.
- Max Tokens: 1000. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: Yes. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON
- Output To: Screen + clipboard. We can output the first reply from the LLM to a number of places: the screen, the clipboard... Here, I've chosen the screen and clipboard so the results will be ready to paste where we like.
- Post-run Behavior: FULL STOP. Like the choice of output, we can decide what to do after a template runs. To keep things simple, I went with "FULL STOP."
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
Working with the above template
To work with the above template, you could copy it and its parameters into LIT Prompts one by one, or you could download a single prompts file and upload it from the extension's Templates & Settings screen. This will replace your existing prompts.
You can download a prompts file (the above template and its parameters) suitable for upload by clicking this button:
Kick the Tires
It's one thing to read about something and another to put what you've learned into practice. Let's see how this template performs.
- Take it for a spin. Find a thread or timeline you want to examine, and give it a try.
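If you want to go a step further, paste the clipboard output into a file and tally the labels. Here's a small Python sketch; results.json is just a placeholder name for that file.

import json
from collections import Counter

# results.json is a made-up name for wherever you pasted the clipboard output.
with open("results.json", "r", encoding="utf-8") as f:
    data = json.load(f)

# Count how many posts landed in each sentiment bucket.
counts = Counter(post["sentiment"] for post in data["posts"])
for sentiment, n in counts.most_common():
    print(f"{sentiment}: {n}")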
TL;DR References
If you didn't click through above, you might want to give these a look now.
- Planet Money Episode 763: BOTUS. Wall Street is increasingly being taken over by computers and bots, even in the realm of stock-picking. Bots are cheaper, less emotional, and more disciplined than human stock-pickers, and they can process large amounts of information at once. To understand how these stock-picking bots work, the Planet Money podcast built their own bot called @BOTUS. This bot looks at President Trump's Twitter feed and trades stocks based on his tweets, using real money. If Trump tweets positively about a company, the bot buys its stock, and if he tweets negatively, the bot sells it short. The bot holds the position for 30 minutes before getting out. The Planet Money staff members have invested $1,000 of their personal funds in this experiment to see if the bot can make money. Summary based on a draft from our day one template.