Simulations with LLMs and a "Roll of the Dice"
David Colarusso
Co-director, Suffolk's Legal Innovation & Tech Lab
This is the 44th post in my series 50 Days of LIT Prompts.
More so than usual, today's template is a proof of concept. That means if you want to make it sing, you're going to have to put in some work beyond the base template. Basically, I took the Simple Training Sims template and added a feeder template that populates the Sims prompt with a random selection of names and jobs for our client. You'll remember that this simulation is based on my experience as a public defender. It captures the first meeting between you and a client in lockup before arraignment.
Here the feeder template makes use of simulated dice to select our client's name and job from a set of lists. This in itself isn't very interesting, but you could imagine the same workflow selecting more interesting bits of the simulation, like the client's motivation or what they are charged with. Such exploration is left as an exercise for the reader. ;)
Let's build something!
We'll do our building in the LIT Prompts extension. If you aren't familiar with the LIT Prompts extension, don't worry. We'll walk you through setting things up before we start building. If you have used the LIT Prompts extension before, skip to The Prompt Patterns (Templates).
Setup LIT Prompts
LIT Prompts is a browser extension built at Suffolk University Law School's Legal Innovation and Technology Lab to help folks explore the use of Large Language Models (LLMs) and prompt engineering. LLMs are sentence completion machines, and prompts are the text upon which they build. Feed an LLM a prompt, and it will return a plausible-sounding follow-up (e.g., "Four score and seven..." might return "years ago our fathers brought forth..."). LIT Prompts lets users create and save prompt templates based on data from an active browser window (e.g., selected text or the whole text of a webpage) along with text from a user. Below we'll walk through a specific example.
To get started, follow the first four minutes of the intro video or the steps outlined below. Note: The video only shows Firefox, but once you've installed the extension, the steps are the same.
Install the extension
Follow the links for your browser.
- Firefox: (1) visit the extension's add-ons page; (2) click "Add to Firefox;" and (3) grant permissions.
- Chrome: (1) visit the extension's web store page; (2) click "Add to Chrome;" and (3) review permissions / "Add extension."
If you don't have Firefox, you can download it here. Would you rather use Chrome? Download it here.
Point it at an API
Here we'll walk through how to use an LLM provided by OpenAI, but you don't have to use their offering. If you're interested in alternatives, you can find them here. You can even run an LLM locally, avoiding the need to share your prompts with a third party. If you need an OpenAI account, you can create one here. Note: when you create a new OpenAI account, you are given a limited amount of free API credits. If you created an account some time ago, however, these may have expired. If your credits have expired, you will need to enter a billing method before you can use the API. You can check the state of any credits here.
Login to OpenAI, and navigate to the API documentation.
Once you are looking at the API docs, follow these steps:
- Select "API keys" from the left menu
- Click "+ Create new secret key"
On LIT Prompts' Templates & Settings screen, set your API Base to https://api.openai.com/v1/chat/completions and your API Key equal to the value you got above after clicking "+ Create new secret key". You get there by clicking the Templates & Settings button in the extension's popup:
- Open the extension
- Click on Templates & Settings
- Enter the API Base and Key (under the section OpenAI-Compatible API Integration)
Once those two bits of information (the API Base and Key) are in place, you're good to go. Now you can edit, create, and run prompt templates. Just open the LIT Prompts extension, and click one of the options. I suggest, however, that you read through the Templates and Settings screen to get oriented. You might even try out a few of the preloaded prompt templates. This will let you jump right in and get your hands dirty in the next section.
If you receive an error when trying to run a template after entering your Base and Key, and you are using OpenAI, make sure to check the state of any credits here. If you don't have any credits, you will need a billing method on file.
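If you're curious what the extension does with those two settings: when a template's output type is LLM, it makes an OpenAI-style chat completions request, using your API Base as the URL and your API Key as the bearer token. Here's a minimal TypeScript sketch of such a call. It is not the extension's actual code, just the general shape of the request:

```typescript
// A minimal sketch of an OpenAI-compatible chat completions call.
// The extension's real request may differ in its details; this just
// shows the roles the API Base and API Key play.
const API_BASE = "https://api.openai.com/v1/chat/completions";
const API_KEY = "sk-..."; // placeholder: your key from "+ Create new secret key"

async function runPrompt(prompt: string): Promise<string> {
  const response = await fetch(API_BASE, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
      temperature: 0.7,
      max_tokens: 250,
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content; // the LLM's reply
}
```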
If you found this hard to follow, consider following along with the first four minutes of the video above. It covers the same content. It focuses on Firefox, but once you've installed the extension, the steps are the same.
The Prompt Patterns (Templates)
When crafting a LIT Prompts template, we use a mix of plain language and variable placeholders. Specifically, you can use double curly brackets to encase predefined variables. If the text between the brackets matches one of our predefined variable names, that section of text will be replaced with the variable's value. Today we'll be using {{d6}}, {{d4}}, and {{passThrough}}. See the extension's documentation.
The {{d6}} variable will be replaced by a random number between 1 and 6, and the {{d4}} variable will be replaced by a random number between 1 and 4. Note: each die is only rolled once per run. So, multiple instances of {{d6}} would come up the same.
FWIW, here is the full list of "random" output variables:
- Coin Flip (heads or tails): {{coinFlip}}
- D4 (1-4): {{d4}}
- D6 (1-6): {{d6}}
- D8 (1-8): {{d8}}
- D% (0-9): {{d%}}
- D20 (1-20): {{d20}}
If the text within brackets is not the name of a predefined variable, like {{Are you my attorney?}}, it will trigger a prompt for your user that echoes the placeholder (e.g., a text bubble containing, "Are you my attorney?"). After the user answers, their reply will replace this placeholder. A list of predefined variables can be found in the extension's documentation.
To use this workflow, simply trigger the first template.
Here's the first template's title.
Start Simulation
Here's the template's text.
We're going to build a character sheet. Here's a list of possible names.
1. John
2. Sam
3. Jack
4. Lee
5. Nick
6. Jacob
I'm going to roll a die. Your character's name will be the name from the above list next to the number rolled. Here's the dice roll: {{d6}}.
Now let's figure out your character's job.
1. roofer
2. journeyman carpenter
3. journeyman plumber
4. journeyman electrician
Again, I'm going to roll a die. Your character's job will be the job from the above list next to the number rolled. Here's the dice roll: {{d4}}.
Now return your character's name and job as a JSON object composed of two key-value pairs. The first key is "name" and the second key is "job". Their values are those we found above.
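If the rolls came up 2 and 1, say, a well-behaved reply would look something like this (the exact values will, of course, follow your rolls):

```json
{"name": "Sam", "job": "roofer"}
```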
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 250. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: Yes. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON
- Output To: Hidden. We can output the first reply from the LLM to a number of places: the screen, the clipboard... Here, I've chosen to hide the output entirely. This is useful when passing output to another template.
- Post-run Behavior: Run Simulation. Like the choice of output, we can decide what to do after a template runs. Here we trigger the Run Simulation template, which picks up this template's hidden output in its {{passThrough}} variable.
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
Here's the second template's title.
Run Simulation
Here's the template's text.
You are an actor playing the role of someone charged with committing a crime. In this scene you are interacting with your court-appointed attorney. You are currently in custody at the Rockham District Court, located in Massachusetts where you live. In a moment you will meet your attorney for the first time. Your job is to stay in character and act out your part. You are aiming for a realistic performance. To help you get into character, here is some background information.
BACKGROUND
Your name is {{passThrough["name"]}} Doe. You're an 18-year-old Caucasian male. You're 5 feet 9 inches tall and weigh 197 pounds. You only recently graduated from high school and have been working at your father's construction company as a {{passThrough["job"]}}. You live with your father James Doe and your mother Jill Doe. You're in a steady and supportive relationship with Sally Sparrow, a classmate of yours from Our Lady of Hope, the local Catholic school from which you both graduated last year. Sally's attending the local community college. You and Sally have been dating since you were both 15. You regularly attend church with your family on Sundays and help out with local community service projects.
Last night you were arrested and charged with disorderly conduct. Recently, you've been drinking a lot after work, sometimes to the point of black out, and you really don't remember what happened last night. You know you went down to your local bar Joe Sent Me, by yourself, but from what you can gather, you were asked to leave. You've never been arrested before. You're scared but don't want to show it. Mostly, you want to know when you can get out of custody. You didn't sleep very well, and you have a wicked hangover.
You spent the night in jail, and this morning the police brought you over to court. You're currently in lock up with a few other men. The guard has called your name, and you're now huddled by what looks like a large mail slot on the cell's door, and someone the guard has identified as your attorney is on the other side.
DIRECTION
Be sure to keep your responses short. You "speak in sentences not paragraphs." Short and conversational, no speechifying!
THE CONVERSATION SO FAR
You are jumping into the scene in progress. You already greeted your attorney by asking "Are you my attorney?" They responded with "{{Are you my attorney?*}}"
Think about how your character would respond and craft an appropriate reply. Don't repeat your greeting. Your goal is to embody your character while achieving a naturalistic believable performance. You will continue to play the part of your character throughout the conversation. Whatever happens, do NOT break character! Respond only with dialog, and include only the text of your reply. Do NOT preface your text with the name of the speaker or place it in quotes. Return only your dialog!
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 250. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: No. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Screen Only. We can output the first reply from the LLM to a number of places: the screen, the clipboard... Here, we're content just to have it go to the screen.
- Post-run Behavior: CHAT. Like the choice of output, we can decide what to do after a template runs. Here we want to be able to follow up with additional prompts. So, "CHAT" it is.
- Hide Button: checked. This determines if a button is displayed for this template in the extension's popup window. We've checked the option because this template shouldn't be triggered by the user directly. Rather, it needs to be triggered by another template so that there's something in the {{passThrough}} variable.
Working with the above templates
To work with the above templates, you could copy them and their parameters into LIT Prompts one by one, or you could download a single prompts file and upload it from the extension's Templates & Settings screen. This will replace your existing prompts.
You can download a prompts file (the above template and its parameters) suitable for upload by clicking this button:
Kick the Tires
It's one thing to read about something and another to put what you've learned into practice. Let's see how this template performs.
- Make it your own. Choose some features of the interview more interesting than name and job and add them as a random selection. Note: each die only rolls once per run. So, all instances of {{d4}} will be the same. For a concrete example, see the sketch below.
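For instance, you could roll for what your client is charged with by adding a block like the one below to the first template (the charges listed are purely illustrative). Since {{d6}} and {{d4}} are already spoken for, and each die only rolls once, this block reaches for {{d8}}:

```
Now let's figure out what your character is charged with.
1. disorderly conduct
2. shoplifting
3. trespass
4. simple assault
5. vandalism
6. resisting arrest
7. possession of a controlled substance
8. operating under the influence
Again, I'm going to roll a die. Your character's charge will be the charge from the above list next to the number rolled. Here's the dice roll: {{d8}}.
```

You'd also want to extend the JSON instruction at the end of the first template to include a third key, "charge," and then work {{passThrough["charge"]}} into the second template's BACKGROUND section.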