Not Quite Dictation
David Colarusso
Co-director, Suffolk's Legal Innovation & Tech Lab
This is the 40th post in my series 50 Days of LIT Prompts.
It's been seven weeks since the first and last time we asked an LLM to help us write an email. That template was rather constrained in what it could say. In fact, we only trusted it to politely decline emails. See I'm Sorry, Dave Can't Do That. Today, however, we're interested in doing something a bit different. Today, we'll build upon our work creating dialogues to have an LLM read an email, ask us some questions about how we want to reply, and then draft a response. First, you should know that though I've been testing this prompt out intermittently over the past month or so, it really hasn't found its way into regular use. When you see the results, I think you'll see why. Which isn't to say it couldn't be useful. It's just that I have other ways of getting what I need. More about that in a bit.
To help you understand how this works, here's a redacted conversation I had with the template. It's based on an email thread in which a few folks at the university were discussing the logistics of setting up a photo shoot with one of our new fellows. I trust you can imagine the thread without me sharing it. One day of the week seemed better than others. Someone had a conflict during a certain time, etc. Here's my exchange with the LLM.
A couple of things are worth noting. It understood the gist of the conversation. It even recognized the fact that one of the people on the thread couldn't make it between 2 and 4 on Tuesdays because of class. It didn't quite get that we were trying to find a single time, either this coming Tuesday or the one after, hence the odd statement about being sure they'd "capture something great without me." FWIW, I actually answered this email without the template's assistance. Here's that answer in full:
The 26th doesn't work for me, but I could do the 19th before 1pm.
I have been playing with this for a while, and something I didn't do above that I probably should have was ask the LLM to rewrite the email. I could have asked for it to be shorter or to correct the misunderstanding about our looking for a single time, but that seemed a bit much. This wasn't a reply that needed AI assistance. Sometimes it's just faster and better to write a reply yourself.
I alluded above to the fact that I was getting what I needed somewhere else, suggesting the existence of some other tool. It turns out that place I referenced is still within LIT Prompts. Instead of feeding an email through an LLM, I use the "Prompt" setting to have boilerplate copied to my clipboard for inclusion in my emails. For example, I'll often add the text, "You can see my availability and book a time to talk here: https://example.com/book-a-call" Of course, I replace the example URL with my actual booking link, a link that I never remember on my own. By putting this in a template, it's only ever a click or two away. You'll find an example below after the LLM-based template.
When you get to the templates below, you'll see that I actually try to provide for these canned-answer scenarios in the LLM approach by laying out a set of if-this-do-thats. That is, the template tells the LLM that if someone is asking for a call, it should include my booking link in the reply. If they ask me to speak, it should point out my no-manel (all-male panel) policy, etc. There's a lot more for you to dig into, and I suspect you'll have fun playing around.
If you want more time to explore how you feel about AI-generated text, I suggest you (re)read that first email post; you might also enjoy this piece from Ted Chiang. I would, however, ask you to distinguish between Writing with a capital W and writing. It's not clear to me that we should expect the same things from every writing exercise (e.g., making a shopping list). That will make more sense if you give those two a peek. With that said...
Let's build something!
We'll do our building in the LIT Prompts extension. If you aren't familiar with the LIT Prompts extension, don't worry. We'll walk you through setting things up before we start building. If you have used the LIT Prompts extension before, skip to The Prompt Pattern (Template).
Up Next
Questions or comments? I'm on Mastodon @Colarusso@mastodon.social
Setup LIT Prompts
LIT Prompts is a browser extension built at Suffolk University Law School's Legal Innovation and Technology Lab to help folks explore the use of Large Language Models (LLMs) and prompt engineering. LLMs are sentence completion machines, and prompts are the text upon which they build. Feed an LLM a prompt, and it will return a plausible-sounding follow-up (e.g., "Four score and seven..." might return "years ago our fathers brought forth..."). LIT Prompts lets users create and save prompt templates based on data from an active browser window (e.g., selected text or the whole text of a webpage) along with text from a user. Below we'll walk through a specific example.
To get started, follow the first four minutes of the intro video or the steps outlined below. Note: The video only shows Firefox, but once you've installed the extension, the steps are the same.
Install the extension
Follow the links for your browser.
- Firefox: (1) visit the extension's add-ons page; (2) click "Add to Firefox;" and (3) grant permissions.
- Chrome: (1) visit the extension's web store page; (2) click "Add to Chrome;" and (3) review permissions / "Add extension."
If you don't have Firefox, you can download it here. Would you rather use Chrome? Download it here.
Point it at an API
Here we'll walk through how to use an LLM provided by OpenAI, but you don't have to use their offering. If you're interested in alternatives, you can find them here. You can even run an LLM locally, avoiding the need to share your prompts with a third party. If you need an OpenAI account, you can create one here. Note: when you create a new OpenAI account you are given a limited amount of free API credits. If you created an account some time ago, however, these may have expired. If your credits have expired, you will need to enter a billing method before you can use the API. You can check the state of any credits here.
Log in to OpenAI, and navigate to the API documentation.
Once you are looking at the API docs, follow the steps outlined in the image above. That is:
- Select "API keys" from the left menu
- Click "+ Create new secret key"
On LIT Prompts' Templates & Settings screen, set your API Base to https://api.openai.com/v1/chat/completions and your API Key equal to the value you got above after clicking "+ Create new secret key". You get there by clicking the Templates & Settings button in the extension's popup:
- open the extension
- click on Templates & Settings
- enter the API Base and Key (under the section OpenAI-Compatible API Integration)
Once those two bits of information (the API Base and Key) are in place, you're good to go. Now you can edit, create, and run prompt templates. Just open the LIT Prompts extension, and click one of the options. I suggest, however, that you read through the Templates and Settings screen to get oriented. You might even try out a few of the preloaded prompt templates. This will let you jump right in and get your hands dirty in the next section.
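If you're curious what those two settings actually do: when a template runs, the extension makes an HTTP request to the API Base, sending your API Key along for authentication. Here's a minimal sketch in TypeScript of an OpenAI-compatible chat completions call. To be clear, this is not the extension's actual code, just an illustration; the key and prompt are placeholders.

// Minimal sketch of an OpenAI-compatible chat completions call.
// NOT the extension's actual code; API_KEY is a placeholder.
const API_BASE = "https://api.openai.com/v1/chat/completions";
const API_KEY = "sk-..."; // the secret key from "+ Create new secret key"

async function complete(prompt: string): Promise<string> {
  const res = await fetch(API_BASE, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // the LLM's reply
}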
If you receive an error when trying to run a template after entering your Base and Key, and you are using OpenAI, make sure to check the state of any credits here. If you don't have any credits, you will need a billing method on file.
If you found this hard to follow, consider following along with the first four minutes of the video above. It covers the same content. It focuses on Firefox, but once you've installed the extension, the steps are the same.
The Prompt Patterns (Templates)
When crafting a LIT Prompts template, we use a mix of plain language and variable placeholders. Specifically, you can use double curly brackets to encase predefined variables. If the text between the brackets matches one of our predefined variable names, that section of text will be replaced with the variable's value. Today we'll be using {{highlighted}}. See the extension's documentation.

The {{highlighted}} variable contains any text you have highlighted/selected in the active browser tab when you open the extension. Here the idea is for you to select the email or email thread you are replying to and have the LLM "read" it before asking you how you'd like to reply.
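If it helps to see the mechanics, the substitution itself is just string replacement. Here's a small sketch, not LIT Prompts' actual implementation, of how a {{highlighted}} placeholder might be filled from the current selection:

// Sketch: fill a template's {{highlighted}} placeholder from the current
// browser selection. Not the extension's actual implementation.
function fillTemplate(template: string): string {
  const highlighted = window.getSelection()?.toString() ?? "";
  return template.replaceAll("{{highlighted}}", highlighted);
}

// e.g., fillTemplate("Here's the email/thread:\n{{highlighted}}")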
It's worth taking a moment to consider one very important difference between this prompt and most of those that have come before: here you are potentially highlighting sensitive or otherwise private data. So, you need to think carefully about the particulars of your situation. For example, if you're running LM Studio locally, there isn't much to consider, as the information never leaves your computer. However, if you're using some other API provider, you will be sending them the contents of anything you select. One thing you have going in your favor is that, as an API user, you may be subject to special terms that better protect your privacy. For example, as of this writing, users of OpenAI's API were subject to a different set of terms than users of ChatGPT. Namely, they are subject to the Business Terms, which importantly preclude the use of your data for training purposes. This matters because if your data is used to train an LLM, it may one day come out the other end.
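If you do go the local route, the only change is typically the API Base. For example, LM Studio's built-in server speaks an OpenAI-compatible protocol and, as of this writing, usually listens at an address along these lines (the port is configurable, so check LM Studio's server panel for yours):

API Base: http://localhost:1234/v1/chat/completions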
Here's the template's title.
Draft email reply
Here's the template's text.
My name is David. You are acting as my administrative assistant. You help me draft replies to emails or email threads. I want them to be thoughtful, concise, and kind without sounding sappy or inauthentic. Below you'll find the text of an email or email thread I just received. Some things you should know first.
BACKGROUND:
If someone is looking to talk or have a call, they can check my availability and book a video call at https://example.com/book-a-call
If the email is an introduction, thank the sender, tell them I'm moving them to BCC to save their inbox, and suggest to the person I just "met" that they could find my availability and book a call at https://example.com/book-a-call
If they are asking me to be on a panel at a conference or the like, explain that I'm open to the idea but that as a matter of personal policy I don't do manels (all-male panels).
Here's the email/thread:
EMAIL/THREAD
{{highlighted}}
---
Think about what the last email is asking of me and what you would need to know to draft a reply (e.g., Do I agree or disagree with this or that statement? What is my answer to the author's open questions?). Now, let's take a moment to engage in a dialogue where you ask one question at a time, and I will answer. This will continue until you have what you need to draft a response. Ask as few questions as possible. Remember, after you have what you need, provide me with a very brief draft reply. Keep the reply super short!
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 1000. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: No. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Screen Only. We can output the first reply from the LLM to a number of places: the screen, the clipboard... Here, we're content just to have it go to the screen.
- Post-run Behavior: CHAT. Like the choice of output, we can decide what to do after a template runs. Here we want to be able to follow up with additional prompts. So, "CHAT" it is.
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
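To make those settings concrete, here's a hedged sketch of roughly what a run of this template sends over the wire. The field names follow the OpenAI chat completions spec; the exact payload LIT Prompts builds may differ.

// Roughly how this template's parameters map onto an OpenAI-style
// request body. A sketch only; the extension's actual payload may differ.
const filledTemplate = "..."; // the template text after {{highlighted}} is slotted in
const body = {
  model: "gpt-4o-mini", // the Model parameter
  temperature: 0.7,     // the Temperature parameter
  max_tokens: 1000,     // the Max Tokens parameter
  messages: [{ role: "user", content: filledTemplate }],
};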
As I mentioned above, I don't actually use this template regularly. Most of the time it is faster to do something else. When I use a template, it's usually not an LLM-based template. Rather, it's a way to grab some text and save on typing it out from memory. This is especially helpful when it includes something like a URL with some unique ID, like my booking link.
Here's the template's title.
Book a call
Here's the template's text.
You can see my availability and book a time to talk here: https://example.com/book-a-call
And here are the template's parameters:
- Output Type: Prompt. By choosing "Prompt" the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 250. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: No. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Screen + clipboard. We can output the first reply from the LLM to a number of places: the screen, the clipboard... Here, I've chosen the screen and clipboard so the results will be ready to paste where we like.
- Post-run Behavior: FULL STOP. Like the choice of output, we can decide what to do after a template runs. To keep things simple, I went with "FULL STOP."
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
Working with the above templates
To work with the above templates, you could copy them and their parameters into LIT Prompts one by one, or you could download a single prompts file and upload it from the extension's Templates & Settings screen. This will replace your existing prompts.
You can download a prompts file (the above template and its parameters) suitable for upload by clicking this button:
Kick the Tires
It's one thing to read about something and another to put what you've learned into practice. Let's see how this template performs.
- Make it your own. I suspect your name isn't David and that you don't have a booking URL at example.com. So, you should rework the first template to make it your own.
- Inbox Zero. Try using the first template to answer a few emails. I wouldn't expect your automated replies to be ready to send out of the box. Rather, if your experience is anything like mine, you'll likely find yourself drastically cutting from your drafts. Like a sculptor working in marble, chip away the unnecessary material to reveal the email within. 😉
- Boilerplate. Create non-LLM templates like the second one above for snippets of text you use over and over again. I find these especially useful if they have something like a URL with a random ID, the sort of thing I always have to look up when including it in an email.
TL;DR References
ICYMI, here are blurbs for a selection of works I linked to in this post. If you didn't click through above, you might want to give them a look now.
- ChatGPT Is a Blurry JPEG of the Web by Ted Chiang. Writing at the beginning of ChatGPT's rise to prominence, this article discusses the analogy between language models like ChatGPT and lossy compression algorithms. Chiang argues that while such models can repackage/compress web information, they lack true understanding. Ultimately, Chiang concludes that starting with a blurry copy is not ideal when creating original content and that the struggle to express one's thoughts is an essential element of the writing process.