Your Own Personal Anger Translator
David Colarusso
Co-director, Suffolk's Legal Innovation & Tech Lab
This is the 25th post in my series 50 Days of LIT Prompts.
Key & Peele had a series of sketches built around President Obama's anger translator, Luther. The conceit of the bit was that Luther gave voice to the things Obama couldn't or wouldn't say. For example, Luther translated Obama's, "On the domestic front, I just want to say to my critics, I hear your voices, and I'm aware of your concerns," into, "So maybe if you could chill the hell out for like a second, then maybe I could focus on some shit, you know?" It was a brilliant exploration of Obama's place in the American experiment and the impossible needle he was asked to thread day in and day out. It resonated with many because they understood the righteous frustration that comes from having to conform one's outward expression to external expectations. There's something deeply cathartic about giving actual voice to that frustration, and over the years we were introduced to other translators, including Michelle's, Malia's, and Hillary's. So, I thought it might be interesting to see if we could harness a Large Language Model (LLM) to provide anyone with a personal anger translator. However, there's a twist. This translator would serve as Luther in reverse. That is, it would take an unvarnished, blunt, and imprudent expression of raw emotion and shape it into a more palatable variant. I'm told that sometimes it can be helpful to write out a snide reply or "subtweet" without the intention of hitting send just to work through your rage. Now, with modern technology, you can write that brutally honest reply and have it "translated" with the click of a mouse—mechanized catharsis on demand.
To test our translator, I thought I would turn it on Obama's own words from the one time Luther found himself unnecessary. In 2015 Keegan-Michael Key reprised his role as Luther at the White House Correspondents' Dinner. At the end of the bit, President Obama, the real President Obama, not Jordan Peele, goes off on an "angry tirade" about climate change, and Luther finds himself telling the President that he doesn't need an anger translator. See video above. So, here is a transcript of Obama's lines followed by our template's output.
"Angry" Obama:
Look at what's happening right now. Every serious scientist says we need to act. The Pentagon says it's a national security risk. Miami floods on a sunny day and instead of doing anything about it we've got elected officials throwing snowballs in the Senate! It is crazy! What about our kids?! What kind of stupid short-sighted irresponsible...
AI Obama (our template's output):
I understand your concerns about the current situation. Many scientists have emphasized the need for action, and even the Pentagon has recognized it as a national security risk. It can be frustrating to witness elected officials engaging in seemingly unproductive actions, like throwing snowballs in the Senate, instead of addressing the issue. We must consider the well-being of future generations and the responsibility we have towards them.
Obviously, "Angry" Obama was still constrained by the real Obama and things are cut off before they really get out of hand, but I think you get the idea. I suggest you create your own version of today's template and give it something a little more challenging.
Speaking of challenging, some of you may be wondering, "Did I just read that I should use LLMs to check my righteous anger and sound more respectable? WTF? Did you even get the point of the bit?" This series is a tightrope walk. I've tended to post daily updates in two places, Mastodon and LinkedIn. As you might imagine, they have very different cultures. On balance, my corner of Mastodon seems allergic to posts expressing positive framings of AI, while LinkedIn rejects anything with the hint of criticism. So, every other post is a Trojan Horse, luring in one side or the other with the hope they'll read far enough to see the other side. In keeping with my hot take on AI, I'm constantly trying to derail boosters from the AI hype train while simultaneously asking skeptics to look seriously at the potential for pro-social uses. We've talked in prior posts about how LLMs reinforce the biases in their training data and how the creation of AI tools can come with potentially high moral costs. There's a lot to unpack in relation to respectability politics, code switching, and the biases baked into language use. We can't touch all of these here, but we can scratch the surface.
I want to come back to where we began, with the expectation that some modes of communication are valid while others are not. Why is it that Obama could never show anger while his immediate successor did (and does) nothing but show anger? Perhaps Obama is preternaturally even-keeled, but even if this is the case, why was this something America insisted on? I suspect we all know the answer. And so, before we build our automated catharsis machine, let us take a moment to consider how others are using LLMs and what assumptions they and we hold. Here's a post I saw recently over on Mastodon, and I'd appreciate you giving it a read before I say more.
Now, a few things about myself: like Rua, I'm a neurodivergent scholar, and I share the heartache they must feel at the situation their students find themselves in. I'm dyslexic, and I know what it's like to have people judge you on your writing. I know what it is to have someone assume you must be lazy or not taking things seriously because you repeatedly misspelled the name of a Supreme Court justice on your law school final exam. I know what it is to curtail your choice of words because the words you can spell and the words you want to use don't always overlap, and you aren't allowed to use a dictionary or spellcheck. I'm also appalled that apparently no one prior to Rua had taken the time to help these students understand how these tools work. As you might guess from this series, I do, however, think they can be useful, just not the way a lot of folks tend to use them. It's clear the students above were not using them to good purpose.
Spoiler alert: I have NOT been using LLMs to help write the body of these blog posts. As is clearly noted in TL;DR References, I have used them to produce summaries of works there, but the text you have been reading "above the fold," as it were, is LLM-free. That's because this writing is where I figure things out. As Ted Chiang observed, "Your first draft isn't an unoriginal idea expressed clearly; it's an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That's what directs you during rewriting, and that's one of the things lacking when you start with text generated by an A.I." That being said, having AI write for you is not the only way to use these tools, and the TL;DR summaries are not "Writing" with a capital W. We find ourselves at an interesting moment in time, a moment where we as a society don't yet know how to navigate the possibilities available to us. I hope we can all agree that I am not somehow cheating when I use a spellcheck, but it's worth remembering that it wasn't too long ago that many might have disagreed. And I still find folks who look down on my "reading" via audio books.
Over the next few weeks, I'll explore how one might responsibly use LLMs in "Writing," but until then, I welcome the opportunity to yell into the void, to seek catharsis.
Let's build something!
We'll do our building in the LIT Prompts extension. If you aren't familiar with the LIT Prompts extension, don't worry. We'll walk you through setting things up before we start building. If you have used the LIT Prompts extension before, skip to The Prompt Pattern (Template).
Up Next
Questions or comments? I'm on Mastodon @Colarusso@mastodon.social
Setup LIT Prompts
LIT Prompts is a browser extension built at Suffolk University Law School's Legal Innovation and Technology Lab to help folks explore the use of Large Language Models (LLMs) and prompt engineering. LLMs are sentence completion machines, and prompts are the text upon which they build. Feed an LLM a prompt, and it will return a plausible-sounding follow-up (e.g., "Four score and seven..." might return "years ago our fathers brought forth..."). LIT Prompts lets users create and save prompt templates based on data from an active browser window (e.g., selected text or the whole text of a webpage) along with text from a user. Below we'll walk through a specific example.
To get started, follow the first four minutes of the intro video or the steps outlined below. Note: The video only shows Firefox, but once you've installed the extension, the steps are the same.
Install the extension
Follow the links for your browser.
- Firefox: (1) visit the extension's add-ons page; (2) click "Add to Firefox;" and (3) grant permissions.
- Chrome: (1) visit the extension's web store page; (2) click "Add to Chrome;" and (3) review permissions / "Add extension."
If you don't have Firefox, you can download it here. Would you rather use Chrome? Download it here.
Point it at an API
Here we'll walk through how to use an LLM provided by OpenAI, but you don't have to use their offering. If you're interested in alternatives, you can find them here. You can even run your LLM locally, avoiding the need to share your prompts with a third party. If you need an OpenAI account, you can create one here. Note: when you create a new OpenAI account you are given a limited amount of free API credits. If you created an account some time ago, however, these may have expired. If your credits have expired, you will need to enter a billing method before you can use the API. You can check the state of any credits here.
Login to OpenAI, and navigate to the API documentation.
Once you are looking at the API docs, follow the steps outlined in the image above. That is:
- Select "API keys" from the left menu
- Click "+ Create new secret key"
On LIT Prompts' Templates & Settings screen, set your API Base to https://api.openai.com/v1/chat/completions and your API Key to the value you got above after clicking "+ Create new secret key". You get to that screen by clicking the Templates & Settings button in the extension's popup:
- open the extension
- click on Templates & Settings
- enter the API Base and Key (under the section OpenAI-Compatible API Integration)
Once those two bits of information (the API Base and Key) are in place, you're good to go. Now you can edit, create, and run prompt templates. Just open the LIT Prompts extension, and click one of the options. I suggest, however, that you read through the Templates and Settings screen to get oriented. You might even try out a few of the preloaded prompt templates. This will let you jump right in and get your hands dirty in the next section.
If you receive an error when trying to run a template after entering your Base and Key, and you are using OpenAI, make sure to check the state of any credits here. If you don't have any credits, you will need a billing method on file.
If you found this hard to follow, consider following along with the first four minutes of the video above. It covers the same content. It focuses on Firefox, but once you've installed the extension, the steps are the same.
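If you're curious what those two settings actually do, the sketch below shows roughly what a request to an OpenAI-compatible chat completions endpoint looks like, using the API Base and Key you just entered. This is a minimal illustration for the curious, not the extension's own code, and the test message is just a stand-in.

```typescript
// A rough sketch of a request to an OpenAI-compatible chat completions API.
// Illustrative only; substitute your own API Base and Key.
const apiBase = "https://api.openai.com/v1/chat/completions";
const apiKey = process.env.OPENAI_API_KEY; // never hard-code your key

async function testConnection(): Promise<void> {
  const response = await fetch(apiBase, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "Say hello in five words or fewer." }],
    }),
  });
  if (!response.ok) {
    // A 401 usually means a bad key; other errors may point to expired credits.
    throw new Error(`API error: ${response.status} ${await response.text()}`);
  }
  const data = await response.json();
  console.log(data.choices[0].message.content);
}

testConnection();
```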
The Prompt Pattern (Template)
When crafting a LIT Prompts template, we use a mix of plain language and variable placeholders. Specifically, you can use double curly brackets to encase predefined variables. If the text between the brackets matches one of our predefined variable names, that section of text will be replaced with the variable's value. Today we'll be using {{highlighted}}. See the extension's documentation.
The {{highlighted}} variable contains any text you have highlighted/selected in the active browser tab when you open the extension. If the text within brackets is not the name of a predefined variable, like {{What do you want to say?}}, it will trigger a prompt for your user that echoes the placeholder (e.g., a text bubble containing, "What do you want to say?"). After the user answers, their reply will replace this placeholder. A list of predefined variables can be found in the extension's documentation.
To use this template, select the text you are responding to and execute the template. You will then be asked what you want to say. Don't hold back. The highlighted text will help provide context for the translation.
Here's the template's title.
Anger Translator
Here's the template's text.
You are an "anger translator." Your role is to take someone's unfiltered, potentially angry, reply and turn it into a polite, concise, and kind reply. That is, you turn angry or blunt text into a respectful, not angry, version. To help you craft your translated reply, here is the context to which it is replying:
---- START CONVERSATION SO FAR ----
{{highlighted}}
---- END CONVERSATION SO FAR ----
Here is the "angry" reply you need to translate: {{What do you want to say?}}
---
Now reply with your translation.
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 250. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close; 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: No. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Screen + clipboard. We can output the first reply from the LLM to a number of places, the screen, the clipboard... Here, I've chosen the screen and clipboard so the results will be ready to paste where we like.
- Post-run Behavior: FULL STOP. Like the choice of output, we can decide what to do after a template runs. To keep things simple, I went with "FULL STOP."
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
Working with the above template
To work with the above template, you could copy it and its parameters into LIT Prompts one by one, or you could download a single prompts file and upload it from the extension's Templates & Settings screen. This will replace your existing prompts.
You can download a prompts file (the above template and its parameters) suitable for upload by clicking this button:
Kick the Tires
It's one thing to read about something and another to put what you've learned into practice. Let's see how this template performs.
- Don't count to ten. Find a social media post that made your blood boil. Highlight the text, run the template, and tell it what you really think.
TL;DR References
ICYMI, here are blurbs for a selection of works I linked to in this post. If you didn't click through above, you might want to give them a look now.
- ChatGPT Is a Blurry JPEG of the Web by Ted Chiang. Writing at the beginning of ChatGPT's rise to prominence, this article discusses the analogy between language models like ChatGPT and lossy compression algorithms. Chiang argues that while models can repackage/compress web information, they lack true understanding. Ultimately, Chiang concludes that starting with a blurry copy is not ideal when creating original content and that the struggle to express thoughts is an essential element of the writing process. Summary based on a draft from our day one template.
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. There's a lot of history behind this paper. It was part of a chain of events that forced Timnit Gebru to leave Google where she was the co-lead of their ethical AI team, but more than that, it's one of the foundational papers in AI ethics, not to be confused with the field of "AI safety," which we will discuss later. It discusses several risks associated with large language models, including environmental/financial costs, biased language, lack of cultural nuance, misdirection of research, and potential for misinformation. If you want to engage critically with LLMs, this paper is a must read.
- Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence by Shakir Mohamed, Marie-Therese Png & William Isaac. The article discusses the integration of decolonial theory into artificial intelligence (AI) to address ethical and societal impacts. It highlights the importance of critical science and post-colonial theories in understanding AI's role in modern societies, emphasizing the need for a decolonial approach to prevent harm to vulnerable populations. The paper proposes tactics for developing a decolonial AI, including creating a critical technical practice, seeking reverse tutelage, and renewing affective and political communities. These strategies aim to align AI research and technology development with ethical principles, centering on the well-being of all individuals, especially those most affected by technological advancements. Summary based on a draft from our day one template.