A Pome for the Moment
David Colarusso
Co-director, Suffolk's Legal Innovation & Tech Lab
This is the 10th post in my series 50 Days of LIT Prompts.
I'm sitting down to write this post the day before it will go live, but before I do so, I check the time.
In February's chill, at four twenty-two, Thursday's dusk whispers winter's adieu.
This was generated by today's prompt template at 4:22pm on Thursday, February 1st. It's a simple prompt, using the month, day, and time to write a short pome. I rather like it. As we've seen before, the LIT Prompts extension comes preloaded with some variables. Like last Friday's template, I thought it would be nice to seed a poem with some of these variables, namely the month, day, and time. I did not, however, say what hemisphere I'm in as part of the prompt. I happen to be typing this from my home in New England. So, the assumptions that it is both chilly and near dusk ring true. It's only the start of February, but pitchers and catchers report for spring training in two weeks. So, I'll allow "winter's adieu." Had I been in New South Wales, however, this pome would strike me as out of season. Of course, this shouldn't come as a surprise given what we know about how LLMs encode and reflect the assumptions found in their training data. It would seem most of those texts came from the northern hemisphere.
Incidentally, this week marked the launch of a Kickstarter for Poem/1: AI rhyming clock, an e-paper clock that uses an LLM to create poems for every minute of the day. Having planned this post some time back, I found this coincidence both exciting and stressful. Exciting because I think it's a cool project, stressful because Matt Webb, the clock's creator, seems like a nice guy on Mastodon, and I don't want him thinking I stole his idea. Honestly, I planned this a few weeks back. That being said, when I saw the announcement, I remembered having seen his original post on the concept. So, I went over to his Kickstarter and pledged to buy him a coffee. Maybe you want to do the same, or maybe you want one of these cool clocks. 😃
Now that we have that out of the way, let's build our own virtual rhyming clock template.
We'll do our building in the LIT Prompts extension. If you aren't familiar with the LIT Prompts extension, don't worry. We'll walk you through setting things up before we start building. If you have used the LIT Prompts extension before, skip to The Prompt Pattern (Template).
Up Next
Questions or comments? I'm on Mastodon @Colarusso@mastodon.social
Setup LIT Prompts
LIT Prompts is a browser extension built at Suffolk University Law School's Legal Innovation and Technology Lab to help folks explore the use of Large Language Models (LLMs) and prompt engineering. LLMs are sentence completion machines, and prompts are the text upon which they build. Feed an LLM a prompt, and it will return a plausible-sounding follow-up (e.g., "Four score and seven..." might return "years ago our fathers brought forth..."). LIT Prompts lets users create and save prompt templates based on data from an active browser window (e.g., selected text or the whole text of a webpage) along with text from a user. Below we'll walk through a specific example.
To get started, follow the first four minutes of the intro video or the steps outlined below. Note: The video only shows Firefox, but once you've installed the extension, the steps are the same.
Install the extension
Follow the links for your browser.
- Firefox: (1) visit the extension's add-ons page; (2) click "Add to Firefox;" and (3) grant permissions.
- Chrome: (1) visit the extension's web store page; (2) click "Add to Chrome;" and (3) review permissions / "Add extension."
If you don't have Firefox, you can download it here. Would you rather use Chrome? Download it here.
Point it at an API
Here we'll walk through how to use an LLM provided by OpenAI, but you don't have to use their offering. If you're interested in alternatives, you can find them here. You can even run an LLM locally, avoiding the need to share your prompts with a third party. If you need an OpenAI account, you can create one here. Note: when you create a new OpenAI account, you are given a limited amount of free API credits. If you created an account some time ago, however, these may have expired. If your credits have expired, you will need to enter a billing method before you can use the API. You can check the state of any credits here.
Login to OpenAI, and navigate to the API documentation.
Once you are looking at the API docs, follow the steps outlined in the image above. That is:
- Select "API keys" from the left menu
- Click "+ Create new secret key"
On LIT Prompts' Templates & Settings screen, set your API Base to https://api.openai.com/v1/chat/completions and your API Key to the value you got above after clicking "+ Create new secret key". You get to that screen by clicking the Templates & Settings button in the extension's popup:
- open the extension
- click on Templates & Settings
- enter the API Base and Key (under the section OpenAI-Compatible API Integration)
Once those two bits of information (the API Base and Key) are in place, you're good to go. Now you can edit, create, and run prompt templates. Just open the LIT Prompts extension, and click one of the options. I suggest, however, that you read through the Templates and Settings screen to get oriented. You might even try out a few of the preloaded prompt templates. This will let you jump right in and get your hands dirty in the next section.
If you receive an error when trying to run a template after entering your Base and Key, and you are using OpenAI, make sure to check the state of any credits here. If you don't have any credits, you will need a billing method on file.
If you found this hard to follow, consider following along with the first four minutes of the video above. It covers the same content. It focuses on Firefox, but once you've installed the extension, the steps are the same.
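If you want to double-check that your API Base and Key work before building templates, you can test them outside the extension. Below is a minimal sketch in Python using the requests library; it assumes OpenAI's chat completions endpoint and a key stored in an OPENAI_API_KEY environment variable, so adjust both if your setup differs.

```python
# Minimal sanity check for an OpenAI-compatible chat completions endpoint.
# Assumes the same API Base and Key you entered in LIT Prompts.
import os
import requests

API_BASE = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # or paste your key here (keep it secret)

response = requests.post(
    API_BASE,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Say hello in five words."}],
        "max_tokens": 20,
    },
    timeout=30,
)
# A 401 here usually means a bad key; an error about quota usually means no credits.
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

If that prints a short greeting, the extension should have no trouble reaching the same endpoint with the same Base and Key.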
The Prompt Pattern (Template)
When crafting a LIT Prompts template, we use a mix of plain language and variable placeholders. Specifically, you can use double curly brackets to encase predefined variables. If the text between the brackets matches one of our predefined variable names, that section of text will be replaced with the variable's value. Today we'll meet a few new variables: {{hours}}, {{minutes2d}}, {{ampm}}, {{Month}}, and {{DayOfWeek}}. See the extension's documentation.
FWIW, these come from a collection of date and time variables. Here's the full list:
- Day of week (0-6): {{dayOfWeek}}
- Day of week (English): {{DayOfWeek}}
- Month (1-12): {{month}}
- Month (01-12): {{month2d}}
- Month (English): {{Month}}
- Day of Month (1-31): {{day}}
- Day of Month (01-31): {{day2d}}
- Year: {{year}}
- Hour (1-12): {{hours}}
- Hour (01-12): {{hours2d}}
- Hour (0-23): {{hours24}}
- Hour (00-23): {{hours242d}}
- AM or PM: {{ampm}}
- Minute (0-59): {{minutes}}
- Minute (00-59): {{minutes2d}}
- Second (0-59): {{seconds}}
- Second (00-59): {{seconds2d}}
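To make the substitution concrete, here's a rough Python sketch of how placeholders like these could be resolved at run time. The variable names mirror the list above, but the dictionary-and-replace approach is just an illustration, not the extension's actual code.

```python
# Illustrative sketch of how {{...}} date/time placeholders might be resolved.
# The variable names mirror the list above; the lookup-and-replace logic here
# is my own illustration, not the extension's implementation.
from datetime import datetime

def fill_template(template, now=None):
    now = now or datetime.now()
    values = {
        "dayOfWeek": now.strftime("%w"),        # 0-6 (here 0 is Sunday; the extension's convention may differ)
        "DayOfWeek": now.strftime("%A"),        # e.g., "Thursday"
        "month": str(now.month),                # 1-12
        "month2d": now.strftime("%m"),          # 01-12
        "Month": now.strftime("%B"),            # e.g., "February"
        "day": str(now.day),                    # 1-31
        "day2d": now.strftime("%d"),            # 01-31
        "year": str(now.year),
        "hours": str(int(now.strftime("%I"))),  # 1-12
        "hours2d": now.strftime("%I"),          # 01-12
        "hours24": str(now.hour),               # 0-23
        "hours242d": now.strftime("%H"),        # 00-23
        "ampm": now.strftime("%p"),             # AM or PM
        "minutes": str(now.minute),
        "minutes2d": now.strftime("%M"),
        "seconds": str(now.second),
        "seconds2d": now.strftime("%S"),
    }
    for name, value in values.items():
        template = template.replace("{{" + name + "}}", value)
    return template

print(fill_template("It is {{hours}}:{{minutes2d}} {{ampm}} on a {{DayOfWeek}} in {{Month}}."))
```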
As we noted above, the time of day can provide a prompt with powerful seeds, like the idea that it's a certain season. Anywho, here's the template's title.
time2poem
Here's the template's text.
Write a two-line rhyming poem about it being {{hours}}:{{minutes2d}} {{ampm}} in {{Month}} on a {{DayOfWeek}}.
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See, e.g., OpenAI's list of models.
- Temperature: 0.9. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Today we're all about being creative, so I went with a pretty "creative" setting of 0.9.
- Max Tokens: 250. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words, but they're close; 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: No. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Screen Only. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, we're content just to have it go to the screen.
- Post-run Behavior: FULL STOP. Like the choice of output, we can decide what to do after a template runs. To keep things simple, I went with "FULL STOP."
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window. Here we left the option unchecked, but sometimes when running a chain of prompts, it can be useful to hide a button.
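To see how the template and its parameters might translate into an actual API call, here's a hedged, self-contained Python sketch. It fills in the time2poem prompt with the current date and time and sends it to an OpenAI-compatible chat completions endpoint using the same model, temperature, and max tokens as above; how LIT Prompts builds its requests internally may differ.

```python
# Illustrative sketch: running the time2poem template with the parameters above.
# The endpoint, key handling, and request shape follow OpenAI's chat completions API;
# LIT Prompts may construct its requests differently under the hood.
import os
from datetime import datetime

import requests

API_BASE = "https://api.openai.com/v1/chat/completions"

now = datetime.now()
prompt = (
    "Write a two-line rhyming poem about it being "
    f"{int(now.strftime('%I'))}:{now.strftime('%M')} {now.strftime('%p')} "
    f"in {now.strftime('%B')} on a {now.strftime('%A')}."
)

response = requests.post(
    API_BASE,
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",  # Model
        "temperature": 0.9,      # Temperature: lean creative
        "max_tokens": 250,       # Max Tokens: plenty of room for two rhyming lines
        "messages": [{"role": "user", "content": prompt}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])  # "Output To: Screen Only," more or less
```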
Working with the above templates
To work with the above template, you could copy it and its parameters into LIT Prompts one by one, or you could download a single prompts file and upload it from the extension's Templates & Settings screen. This will replace your existing prompts.
You can download a prompts file (the above template and its parameters) suitable for upload by clicking this button:
Kick the Tires
It's one thing to read about something and another to put what you've learned into practice. Let's see how this template performs.
- Variations on a theme. Perhaps instead of a poem, your clock should produce prose in the style of a pirate? Play with the form and tone. Tell it you're in the southern hemisphere. Have fun!
TL;DR References
ICYMI, here are blurbs for a selection of works I linked to in this post. If you didn't click through above, you might want to give them a look now.
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. There's a lot of history behind this paper. It was part of a chain of events that forced Timnit Gebru to leave Google, where she was co-lead of its ethical AI team, but more than that, it's one of the foundational papers in AI ethics, not to be confused with the field of "AI safety," which we will discuss later. It discusses several risks associated with large language models, including environmental/financial costs, biased language, lack of cultural nuance, misdirection of research, and potential for misinformation. If you want to engage critically with LLMs, this paper is a must-read.
- Poem/1: AI rhyming clock by Matt Webb on Kickstarter. This project is a clock called Poem/1 that features an e-paper display and uses AI to generate rhyming poems. Backers can buy the creator, Matt, a coffee, or purchase the Poem/1 clock. The clock is expected to be delivered in August 2024 for those who pledge £119. The project is part of Kickstarter, a platform that connects creators with backers to fund various projects. Summary based on a draft from our day one template.