Flip a Poem; Roll an "App"

David Colarusso
Co-director, Suffolk's Legal Innovation & Tech Lab
This is the 5th post in my series 50 Days of LIT Prompts.
This week we've used LLMs to talk with texts, reply to emails, and figure out unfamiliar phraseology with input provided by the browser. In future posts, we'll tackle more complex prompt patterns, but we'll also take advantage of LIT Prompts' virtual dice and export functionality to make some really interesting interactions you can share or save for later. Alas, before we run, we must walk, and before we walk, we must crawl. To get our bearings, today's prompt finds us flipping a virtual coin, writing a poem about the flip's outcome, and packaging the whole thing up as a "web app." By way of foreshadowing, LIT Prompts comes preloaded with a virtual coin and 4, 6, 8, 10, and 20-sided dice. 🤔 I wonder what could be coming?
If you aren't familiar with the LIT Prompts extension, don't worry. We'll walk you through setting things up before we start building. If you have used the LIT Prompts extension before, skip to The Prompt Pattern (Template).
Questions or comments? I'm on Mastodon @Colarusso@mastodon.social
Setup LIT Prompts
The Prompt Pattern (Template)

When crafting a LIT Prompts template, we use a mix of plain language and variable placeholders. Specifically, you can use double curly brackets to encase predefined variables. If the text between the brackets matches one of our predefined variable names, that section of text will be replaced with the variable's value. Today we'll meet our third predefined variable, {{coinFlip}}. See the extension's documentation.
The {{coinFlip}} variable returns either "heads" or "tails." This allows us to write prompts that behave differently based on the outcome of the flip. To illustrate the point, the prompt below asks the LLM to write a poem about the outcome. This is clearly a silly prompt, but consider that we'll soon meet a collection of many-sided dice. Big things are coming! If nothing comes to mind and you know someone who likes role-playing games, ask them about the power of combining rules, randomness, and narratives.
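To make the substitution concrete, here's a minimal Python sketch of how a placeholder like {{coinFlip}} gets filled in before the prompt is sent to the LLM. The function name is hypothetical, not LIT Prompts' actual code:

```python
import random

def fill_template(template: str) -> str:
    # Resolve the {{coinFlip}} placeholder with a random outcome,
    # mirroring how LIT Prompts swaps in a predefined variable's value.
    outcome = random.choice(["heads", "tails"])
    return template.replace("{{coinFlip}}", outcome)

prompt = fill_template("Write a short poem about a coin that came up {{coinFlip}}.")
print(prompt)  # e.g., "Write a short poem about a coin that came up heads."
```

Because the outcome is chosen before the LLM ever sees the text, the model receives an ordinary prompt; the randomness lives entirely in the template.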
Here's the template text.
And here are the template's parameters:
- Output Type: LLM. This choice means we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-4o-mini. This input specifies which model to use when running the prompt. Available models differ based on your API provider. See, e.g., OpenAI's list of models.
- Temperature: 0.9. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Today we're all about randomness, so I went with a pretty "creative" setting of 0.9.
- Max Tokens: 250. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words, but they're close; one token is roughly 3/4 of a word. Smaller token limits run faster.
- JSON: No. This asks the model to output its answer in a structured format called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Screen Only. We can send the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, we're content to have it go to the screen.
- Post-run Behavior: FULL STOP. As with the choice of output, we can decide what happens after a template runs. To keep things simple, I went with "FULL STOP."
- Hide Button: unchecked. This determines whether a button is displayed for this template in the extension's popup window. Here we left the option unchecked, but when running a chain of prompts it can sometimes be useful to hide a button.
Working with the above template
To work with the above template, you could copy it and its parameters into LIT Prompts one by one, or you could download a single prompts file and upload it from the extension's Templates & Settings screen. This will replace your existing prompts.
You can download a prompts file (the above template and its parameters) suitable for upload by clicking this button:
Export and Share
After you've made the template your own and have it behaving the way you like, you can export it and share it with others. Exporting produces a single HTML file that should work on any internet-connected device. To create your file, click the Export Interactions Page button. The contents of the textarea above the button will be appended to the top of your exported file. Importantly, if you don't want to share your API key, temporarily remove it from your settings before exporting.
If you want to see what an exported file looks like without making one yourself, you can use the buttons below. View export in browser will open the file in your browser, and Download export will download a copy. In either case, the following custom header will be inserted into your file. It will NOT include an API key, so you'll have to enter one when asked if you want to see things work. This information is saved in your browser; if you've provided it before, you won't be asked again, and it is not shared with me. To remove this information for this site (and only this site, not individual files), you can follow the instructions found on my privacy page. Remember, when you export your own file, whether or not it contains an API key depends on whether you have one defined at the time of export.
Custom header:
Not sure what's up with all those greater-than and less-than signs? Looking for tips on how to style your HTML? Check out this general HTML tutorial.