Sadly Not, Havoc Dinosaur

I'm Sorry, Dave Can't Do That

Use the text of an email to draft a polite reply declining any request(s)

Headshot of the author, Colarusso. David Colarusso

This is the 3rd post in my series 50 Days of LIT Prompts.

With apologies to Kubrick and Clarke, at least I can say I held out two full days before alluding to a killer AI. That's also how long I resisted sharing a use case in which we use an LLM to help write something. There's a great deal to be said about authorship in the age of ChatGPT, and perhaps we'll explore this more in future posts. For the moment, however, let me provide a framing I find useful. It starts by recognizing that context matters and that writing itself is not one task.

The production of written work involves the application of multiple overlapping tasks. Consider the traditional roles found in a newsroom (e.g., editors in chief, assignment editors, writers, fact checkers, copy editors, and the like). When delegating any of these tasks, to a human or otherwise, it is important to have in mind what role(s) we are delegating and why. Putting one's name to a document means different things in different contexts. Just ask the paralegal who writes all their partner's "first drafts."

For instructors worried about the use of AI by their students, I suggest they name the role(s) an assignment is looking to assess. This allows the instructor and student to properly evaluate whether or not the use of this or that tool is acceptable. If handwriting is among the matters being assessed, a word processor is out. Copy editing? This may or may not exclude the use of spell check. What about grammar check? If, however, there is no instructor, and the question is left to us alone, we have to be honest with ourselves about the job at hand.

Today we'll be using our LLM to respond to unwanted emails, some of which I suspect may themselves have been written by AI. I have in mind a particular class of emails for which I almost always answer, "no." For me, it's unsolicited emails from strangers who are trying to sell me something or ask for something, but not in a spam sort of way. The point is, I want to acknowledge the email and politely decline, but I really don't want to take too much time struggling over what to say. When we're done here, you'll be able to select the text of an email (assuming you're using a web interface), click a button, and have a draft email declining any request(s) in your clipboard ready to paste into a reply. This is a very narrow use case and probably not the type of writing Ted Chiang had in mind when he made the following observation:

Some might say that the output of large language models doesn't look all that different from a human writer's first draft, but, again, I think this is a superficial resemblance. Your first draft isn't an unoriginal idea expressed clearly; it's an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That's what directs you during rewriting, and that's one of the things lacking when you start with text generated by an A.I.

Sometimes writing doesn't need to express original ideas, and sometimes you aren't playing the role of a Writer with a capital W. Sometimes you just need to solve the blank page problem. Today's template will do that, but we need to stay vigilant and always be honest with ourselves about the goals of our writing. I suspect Chiang, confronted with this narrowing, might ask us to consider not only what writing does for the writer, but what a writer owes their audience. What roles have we reserved for ourselves? Is this "division" of labor appropriate given the context? Here's a thought experiment: how would you feel about having a personal assistant ghostwriting your emails, prepping a first draft? Would you tell people? Why or why not?

Of course, when answering such questions, it is best if we understand fully how our tools work. Time for our micro-lesson, followed immediately by prompt work.

Artificial Neurons

Yesterday, we took our first step towards understanding how LLMs are made. This involved learning about logistic regression, and as we noted then, yes, there's going to be some math, but there won't be a quiz, and I give you permission to skim over things if you like. That being said, if you can get through this, the payoff will be big. Like, "understand what this AI thing actually is" big.

You'll remember that our regression was able to take in a measure of snowfall and predict the odds class would be canceled. What if we wanted to consider more than just how much snow fell? It turns out that all we have to do is add two terms for each new input: the new input we want to consider and some "weight" to multiply by that input. This new weight is analogous to yesterday's B1, which we multiplied by the value of x (our snowfall). Remember, this was found by fiddling with values until our curve matched the data. Mathematically, all this means is that we add the new weight, B2, and the new input x2 to the equation from yesterday.

\[y = {{1} \over {1 + e^{-(B_0 + B_1·x + B_2·x_2)}}}\]

And we can keep doing this for as many new inputs as we like.

\[y = {{1} \over {1 + e^{-(B_0 + B_1·x + B_2·x_2 + B_3·x_3 + B_4·x_4 + \cdots + B_n·x_n)}}}\]

In this way, we can account for things like temperature and wind speed when predicting whether or not class will be canceled. Of course, we get the values of B0 through Bn by fiddling with those values until our graph "fits." It's just like yesterday except there are now more values (dimensions) to fiddle with. We put in values for x through xn and get out a value for y, which we read as our prediction for whether school will close.
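To make the arithmetic concrete, here's a small sketch of the equation above in code. The weights and inputs below are invented for illustration; they are not real fitted values.

```python
import math

def predict_closure(inputs, weights, bias):
    """Logistic regression with any number of inputs.

    Sums bias + weight_i * input_i, then squashes the total
    through the sigmoid so the output lands between 0 and 1.
    """
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 / (1 + math.exp(-total))

# Hypothetical values for B0 (bias) and B1..B3, paired with
# snowfall (inches), temperature (F), and wind speed (mph).
bias = -4.0
weights = [0.9, -0.05, 0.1]

# 8 inches of snow, 20 degrees F, 15 mph wind
p = predict_closure([8, 20, 15], weights, bias)
print(f"Chance class is canceled: {p:.2f}")
```

Adding another input is just a matter of appending one more weight and one more value, exactly as in the equation.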

Now, if we take our regression and lay it out like so... and set this next to a neuron...

We can start to see how one might analogize the two. Let's call our construction an artificial neuron. It takes in inputs and sums them up such that different inputs trigger different outputs. This very roughly acts "like" a neuron, which takes inputs in through the dendrites, "sums" these within the cell body, and outputs a signal via the axon. This is the point where I emphasize the fact that artificial neurons are at best cartoon versions of real neurons, and also, I'm glossing over a lot of nuance (e.g., most folks wouldn't want to say a neuron is the same as a logistic regression; it's more of a process than the final function and weights; also, folks don't tend to use a sigmoid anymore, etc.). I think the above, however, will serve as a good foundation for what follows.

We've come a long way from predicting snow days. Tomorrow, we'll deal with artificial neural networks; predicting words can't be that far off. Until then, let's build something!

We'll do our building in the LIT Prompts extension. If you aren't familiar with the extension, don't worry. We'll walk you through setting things up before we start building. If you have used the LIT Prompts extension before, skip to The Prompt Pattern (Template).

Up Next

Questions or comments? I'm on Mastodon @Colarusso@mastodon.social


Setup LIT Prompts

7 min intro video

LIT Prompts is a browser extension built at Suffolk University Law School's Legal Innovation and Technology Lab to help folks explore the use of Large Language Models (LLMs) and prompt engineering. LLMs are sentence completion machines, and prompts are the text upon which they build. Feed an LLM a prompt, and it will return a plausible-sounding follow-up (e.g., "Four score and seven..." might return "years ago our fathers brought forth..."). LIT Prompts lets users create and save prompt templates based on data from an active browser window (e.g., selected text or the whole text of a webpage) along with text from a user. Below we'll walk through a specific example.

To get started, follow the first four minutes of the intro video or the steps outlined below. Note: The video only shows Firefox, but once you've installed the extension, the steps are the same.

Install the extension

Follow the links for your browser.

  • Firefox: (1) visit the extension's add-ons page; (2) click "Add to Firefox;" and (3) grant permissions.
  • Chrome: (1) visit the extension's web store page; (2) click "Add to Chrome;" and (3) review permissions / "Add extension."

If you don't have Firefox, you can download it here. Would you rather use Chrome? Download it here.

Point it at an API

Here we'll walk through how to use an LLM provided by OpenAI, but you don't have to use their offering. If you're interested in alternatives, you can find them here. You can even run your LLM locally, avoiding the need to share your prompts with a third-party. If you need an OpenAI account, you can create one here. Note: when you create a new OpenAI account you are given a limited amount of free API credits. If you created an account some time ago, however, these may have expired. If your credits have expired, you will need to enter a billing method before you can use the API. You can check the state of any credits here.

Login to OpenAI, and navigate to the API documentation.

Once you are looking at the API docs, follow the steps outlined in the image above. That is:

  1. Select "API keys" from the left menu
  2. Click "+ Create new secret key"

On LIT Prompt's Templates & Settings screen, set your API Base to https://api.openai.com/v1/chat/completions and your API Key equal to the value you got above after clicking "+ Create new secret key". You get there by clicking the Templates & Settings button in the extension's popup:

  1. open the extension
  2. click on Templates & Settings
  3. enter the API Base and Key (under the section OpenAI-Compatible API Integration)

Once those two bits of information (the API Base and Key) are in place, you're good to go. Now you can edit, create, and run prompt templates. Just open the LIT Prompts extension, and click one of the options. I suggest, however, that you read through the Templates and Settings screen to get oriented. You might even try out a few of the preloaded prompt templates. This will let you jump right in and get your hands dirty in the next section.
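Under the hood, a tool pointed at an OpenAI-compatible API is just POSTing JSON to the API Base you entered. As a rough sketch of what that traffic looks like (the payload shape follows OpenAI's chat completions format; the model name and key below are placeholders, and this is illustrative, not the extension's actual source):

```python
import json

API_BASE = "https://api.openai.com/v1/chat/completions"
API_KEY = "sk-..."  # placeholder; use your real secret key

# The JSON body a client would POST to the API Base
payload = {
    "model": "gpt-3.5-turbo",  # example model name
    "messages": [
        {"role": "user", "content": "Four score and seven..."}
    ],
}

# The secret key travels in an Authorization header
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}

print(json.dumps(payload, indent=2))
# An actual request would then be sent with an HTTP client, e.g.:
#   requests.post(API_BASE, headers=headers, json=payload)
```

Swapping providers is mostly a matter of changing the API Base and Key, which is why the extension only asks for those two values.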

If you receive an error when trying to run a template after entering your Base and Key, and you are using OpenAI, make sure to check the state of any credits here. If you don't have any credits, you will need a billing method on file.

If you found this hard to follow, consider following along with the first four minutes of the video above. It covers the same content. It focuses on Firefox, but once you've installed the extension, the steps are the same.


The Prompt Pattern (Template)

When crafting a LIT Prompts template, we use a mix of plain language and variable placeholders. Specifically, you can use double curly brackets to encase predefined variables. If the text between the brackets matches one of our predefined variable names, that section of text will be replaced with the variable's value. Today we'll be using the {{highlighted}} variable. See the extension's documentation.

The {{highlighted}} variable contains any text you have highlighted/selected in the active browser tab when you open the extension. Like yesterday, this prompt pattern is pretty straightforward. Highlight the text of an email, and run the template. The LLM will generate a draft reply declining any request(s) and place it in your clipboard. To accomplish this last part, be sure to set output to "Screen + clipboard."
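Conceptually, filling in a template is just string replacement. Here's a minimal sketch of the idea (the variable name mirrors the extension's, but the code is illustrative, not the extension's actual source, and the email text is made up):

```python
def fill_template(template, variables):
    """Replace {{name}} placeholders with their values."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

template = (
    "{{highlighted}}\n\n---\n\n"
    "For the above email or email thread, draft a brief "
    "professional reply politely declining its request."
)

# The selected text gets swapped in for {{highlighted}},
# and the result is what actually gets sent to the LLM.
prompt = fill_template(template, {"highlighted": "Hi! Want to buy our widget?"})
print(prompt.splitlines()[0])  # → Hi! Want to buy our widget?
```

The instructions sit below the selected text so the LLM reads the email first and the ask second.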

It's worth taking a moment to consider one very important difference between this prompt template and those which have come before. Here you are potentially highlighting sensitive or otherwise private data. So, you need to think carefully about the particulars of your situation. For example, if you're running LM Studio locally, then there isn't much to consider, as the information won't leave your computer. However, if you're using some other API provider, you will be sending them the contents of anything you select. One thing you have going in your favor is that as an API user you may be subject to special terms that better protect your privacy. For example, as of this writing, users of OpenAI's API are subject to a different set of terms than users of ChatGPT. Namely, they are subject to the Business Terms, which importantly preclude the use of your data for training purposes. This matters because if your data is being used to train an LLM, it may one day come out the other end.

Here's the template text.

{{highlighted}}

---

For the above email or email thread, draft a brief professional reply politely declining its request. Keep the email super short while being responsive to the specific ask(s).

And here are the template's parameters:

Working with the above template

To work with the above template, you could copy it and its parameters into LIT Prompts one by one, or you could download a single prompts file and upload it from the extension's Templates & Settings screen. This will replace your existing prompts.

You can download a prompts file (the above template and its parameters) suitable for upload by clicking this button:


Kick the Tires

It's one thing to read about something and another to put what you've learned into practice. Let's see how this template performs.


TL;DR References

If you didn't click through above, you might want to give these a look now.