Decodable Books On-Demand
David Colarusso
Co-director, Suffolk's Legal Innovation & Tech Lab
This is the 31st post in my series 50 Days of LIT Prompts.
If all you want to do is write a decodable book with "AI," be my guest. Otherwise, let's take a moment to talk about what's going on here.
My wife and I just finished editing our first book together—Ben's Frog. It's six pages long, what reading teachers call a decodable book. Think, "See Spot run." Such books help beginning readers practice decoding strategies for specific grapheme-to-phoneme patterns (e.g., how the letters "ay" make the sound /ā/). They provide readers with "books" that match what they're learning as they go, helping them build confidence. Functionally, this means a book's vocabulary is constrained by a list of "known words," or more accurately, a list of known grapheme-to-phoneme patterns. As the reader learns more, the list grows larger. Consequently, a reader's ability to decode/read such books depends on the patterns they've learned, and since different curricula introduce patterns in different orders, there is no universal set of decodable books. So, early elementary teachers are often left to make their own. I know this because my wife, Jessica, teaches first grade.
Recently, she found herself with a choice. She needed more decodable books and was considering buying them online, but a packet designed to work with her curriculum was $90. Of course, she could make her own, but she already has a lot on her plate. She wondered aloud if this was the sort of thing LLMs would be good at, and I was like, "Yes, please let me build something." Today's prompt templates are what came of that conversation. Well, that and Ben's Frog, which she wrote with today's templates. FWIW, Jessica continued on to make another book after that, and I suspect there will be more. I helped "illustrate" Ben's Frog, but more on that in a bit.
Alas, I didn't have the presence of mind to capture all the work that went into making these, and I made some tweaks informed by their creation. So, for your benefit, dear reader, I wrote my own decodable book—David's Ranch. What follows is a step-by-step walkthrough of the process. I also placed a "web app" of these templates here if you'd like to give it a go with my word lists. If you want to customize your word list or tweak the prompts, you'll have to build your own using the instructions below.
I started by using the LIT Prompts extension to trigger "Decodable List 08." See The Prompt Patterns (Templates) below.
Here's the exchange that followed.
The final output wasn't bad, but it also wasn't quite right. And this is something we come back to again and again in this series: the output should start, not end, the discussion. LLMs have a lot of problems; consequently, you need to keep them on a short leash. Luckily, the creation of decodable books is a use case where the output can be easily assessed by the user. So, I took the output as a first draft and removed words that really had no place being there. In the end I settled on the following.
David has a small ranch. He kept a flock of hens. He feeds them with a crop of grass and bugs. David loves to hear them cluck and see them peck. A frog swam in one of his ponds. David said, "Frogs."
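Because the only hard constraint is "every word is on the list," this kind of vetting is easy to automate. Here's a minimal sketch of such a check (my own illustration, not part of LIT Prompts):

```python
import re

def off_list_words(story, allowed):
    """Return the words in a draft story that aren't on the allowed list."""
    words = re.findall(r"[A-Za-z']+", story)
    return [w for w in words if w.lower() not in allowed]

allowed = {"david", "has", "a", "small", "ranch", "he", "kept", "flock", "of", "hens"}
draft = "David has a small ranch. He kept a flock of hens."
print(off_list_words(draft, allowed))  # [] -- every word checks out
```

Run the LLM's draft through a check like this, strike anything flagged, and editing becomes a quick scan rather than a word-by-word proofread.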
I asked the LLM to make a six-sentence story so I could use the same six-page book template Jessica made in Google Slides, one sentence per page. I'll get to the images in a bit, but here's the final product.
Most decodable books you find online make use of clip art illustrations, but since we were already using generative AI, we figured, let's see if we can get Dall-E to help us illustrate the book. To be clear, this isn't something you can do with LIT Prompts at the moment. So, you'll need to use another tool like ChatGPT.
There's a lot to consider when using text-to-image tools like Dall-E, and I plan to write a more robust consideration at some point, but it's worth touching on a few points here. First, when we tried to illustrate Ben's Frog, Ben was very decidedly white, a consequence of the model's training data. We wanted our book to better reflect a diverse reader audience. So, we iterated on our prompt until we landed on the images you see today. That journey, however, wasn't as smooth as one would like. We had to sift through a number of stereotypical depictions centered on our Asian protagonist before arriving at the current iteration (i.e., we actively had to craft our prompts to avoid such depictions).
Second, on a more mundane front, these tools are notoriously bad at maintaining consistency of character generation across unconnected images. That is, if you ask it to make multiple images of a boy doing something, you'll get a different boy in each image. There are custom tools designed to address this, but it's still a limitation of stand-alone diffusion models like Dall-E. The issue is they have no internal model of who a character is from run to run, unless of course they were in their training data. Consequently, some characters from pop culture might be reproduced consistently, but probably not your random "boy wearing a t-shirt and shorts." Hello, copyright nightmare. Anyhow, the workaround we landed on was to describe a multiple-image character study or comic strip focusing on a single character. This way it only has to be consistent across one image/run. Here's the prompt I used to produce the images for David's Ranch, followed by a cropped version of the output. The original had odd blue borders. 🤷 As you can see, I cut out panels from this image to use as individual images in the book above.
Draw a widescreen comic strip of a man using 9 square panels, three rows of three, surrounded by a thick white border. The panels progress as follows: The man is looking out over his ranch. He watches a group of chickens from behind a fence. He feeds the chickens by emptying a bag of grain. Then he walks over to a pond. We see a close up of a frog. The man looks intently at the frog in the pond. There should be NO speech bubbles. The man should be clean shaven. The images should look like those in a high-end black-and-white graphic novel (i.e., they should all be black and white line art with no shading).
I think Dall-E and I have different understandings of "clean shaven," but at least the character is consistent from panel to panel. Enough about my ranch...
Let's build something!
We'll do our building in the LIT Prompts extension. If you aren't familiar with the LIT Prompts extension, don't worry. We'll walk you through setting things up before we start building. If you have used the LIT Prompts extension before, skip to The Prompt Patterns (Templates).
Up Next
Questions or comments? I'm on Mastodon @Colarusso@mastodon.social
Setup LIT Prompts
LIT Prompts is a browser extension built at Suffolk University Law School's Legal Innovation and Technology Lab to help folks explore the use of Large Language Models (LLMs) and prompt engineering. LLMs are sentence completion machines, and prompts are the text upon which they build. Feed an LLM a prompt, and it will return a plausible-sounding follow-up (e.g., "Four score and seven..." might return "years ago our fathers brought forth..."). LIT Prompts lets users create and save prompt templates based on data from an active browser window (e.g., selected text or the whole text of a webpage) along with text from a user. Below we'll walk through a specific example.
To get started, follow the first four minutes of the intro video or the steps outlined below. Note: The video only shows Firefox, but once you've installed the extension, the steps are the same.
Install the extension
Follow the links for your browser.
- Firefox: (1) visit the extension's add-ons page; (2) click "Add to Firefox;" and (3) grant permissions.
- Chrome: (1) visit the extension's web store page; (2) click "Add to Chrome;" and (3) review permissions / "Add extension."
If you don't have Firefox, you can download it here. Would you rather use Chrome? Download it here.
Point it at an API
Here we'll walk through how to use an LLM provided by OpenAI, but you don't have to use their offering. If you're interested in alternatives, you can find them here. You can even run your LLM locally, avoiding the need to share your prompts with a third party. If you need an OpenAI account, you can create one here. Note: when you create a new OpenAI account you are given a limited amount of free API credits. If you created an account some time ago, however, these may have expired. If your credits have expired, you will need to enter a billing method before you can use the API. You can check the state of any credits here.
Login to OpenAI, and navigate to the API documentation.
Once you are looking at the API docs, follow the steps outlined in the image above. That is:
- Select "API keys" from the left menu
- Click "+ Create new secret key"
On LIT Prompts' Templates & Settings screen, set your API Base to https://api.openai.com/v1/chat/completions
and your API Key equal to the value you got above after clicking "+ Create new secret key". You get there by clicking the Templates & Settings button in the extension's popup:
- open the extension
- click on Templates & Settings
- enter the API Base and Key (under the section OpenAI-Compatible API Integration)
Once those two bits of information (the API Base and Key) are in place, you're good to go. Now you can edit, create, and run prompt templates. Just open the LIT Prompts extension, and click one of the options. I suggest, however, that you read through the Templates and Settings screen to get oriented. You might even try out a few of the preloaded prompt templates. This will let you jump right in and get your hands dirty in the next section.
If you receive an error when trying to run a template after entering your Base and Key, and you are using OpenAI, make sure to check the state of any credits here. If you don't have any credits, you will need a billing method on file.
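If you'd like to sanity-check your Base and Key outside the extension, you can assemble and send a minimal chat completion request yourself. This sketch uses only Python's standard library; the model name is just an example, and the key shown is a placeholder:

```python
import json
import urllib.request

API_BASE = "https://api.openai.com/v1/chat/completions"

def build_chat_request(api_base, api_key, prompt, model="gpt-4o-mini"):
    """Assemble an OpenAI-style chat completion request (not yet sent)."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    return urllib.request.Request(api_base, data=body, headers=headers, method="POST")

# To actually send it (requires a valid key and available credits):
# req = build_chat_request(API_BASE, "sk-YOUR-KEY", "Four score and seven")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If the call comes back with a 401, the key is wrong; an error about quota usually means you're out of credits, which is the same problem the extension will hit.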
If you found this hard to follow, consider following along with the first four minutes of the video above. It covers the same content. It focuses on Firefox, but once you've installed the extension, the steps are the same.
The Prompt Patterns (Templates)
Note: If this is your first LIT Prompt, you may want to consider picking something from earlier in this series to help orient yourself. That being said, if you feel comfortable with what you read below, there's no reason you have to start with something else.
When crafting a LIT Prompts template, we use a mix of plain language and variable placeholders. Specifically, you can use double curly brackets to encase predefined variables. If the text between the brackets matches one of our predefined variable names, that section of text will be replaced with the variable's value. If the text within brackets is not the name of a predefined variable, like {{What is your name?}}
, it will trigger a prompt for your user that echoes the placeholder (e.g., a text bubble containing, "What is your name?"). After the user answers, their reply will replace this placeholder. A list of predefined variables can be found in the extension's documentation.
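Conceptually, this templating is just a find-and-replace over the double-curly placeholders, with unrecognized names routed to the user as questions. A rough sketch of the idea (an illustration with made-up variable names, not the extension's actual code):

```python
import re

def fill_template(template, variables, ask_user):
    """Replace {{...}} placeholders: predefined names come from `variables`;
    anything else is treated as a question posed to the user."""
    def replace(match):
        name = match.group(1)
        if name in variables:
            return variables[name]
        return ask_user(name)  # e.g., shows "What is your name?"
    return re.sub(r"\{\{(.+?)\}\}", replace, template)

filled = fill_template(
    "Hi {{What is your name?}} Today's text: {{selectedText}}",
    {"selectedText": "Ben's Frog"},           # a stand-in predefined variable
    ask_user=lambda question: "Jessica",      # a stand-in for the popup prompt
)
print(filled)  # Hi Jessica Today's text: Ben's Frog
```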
Here we have a bunch of templates with word lists (e.g., "Decodable List 01") that, when triggered, send their content to a single template (i.e., "Write Decodable Books") using the {{passThrough}}
variable. Remember, we use the Post-run Behavior parameter to govern what happens after a template is run. If you use Post-run Behavior to send one template's output to another template, the first template's output can be read by the second template via the {{passThrough}}
variable.
We'll also use JSON mode to format the prompt outputs as JSON, and as we know from our translation template, when the passThrough variable is JSON, you can access top-level keys by calling them like this: {{passThrough["focus_words"]}}.
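To make that concrete, here's a rough sketch of what resolving such a placeholder involves: parse the handed-off JSON, then substitute the requested top-level key (again, an illustration, not the extension's code):

```python
import json
import re

def resolve_pass_through(template, pass_through_json):
    """Replace {{passThrough["key"]}} placeholders using top-level keys
    from the JSON string handed off by the previous template."""
    data = json.loads(pass_through_json)
    return re.sub(
        r'\{\{passThrough\["(.+?)"\]\}\}',
        lambda m: data[m.group(1)],
        template,
    )

handoff = '{"focus_words": "box, sad, sat", "all_words": "box, sad, sat, the, a"}'
print(resolve_pass_through('Focus Words: {{passThrough["focus_words"]}}', handoff))
# Focus Words: box, sad, sat
```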
To use our template, simply click the list you want to use in writing a story, and answer the prompts. We set the Post-run Behavior to CHAT so you can further refine your story (e.g., ask it to rewrite with some section removed or added).
Here's the first template's title. This is the template to which all other templates send their word lists.
Write Decodable Books
Here's the template's text.
You are helping a teacher write a phonetic story appropriate for beginning readers, sometimes called a decodable "book." It should be a cohesive story made of {{How many sentences long should the story be?}} short sentences!
These sentences should draw only from an Allowed Word list. That is, you should only use words from the Allowed Words list when writing the story. There is, however, a second list of Focus Words. These Focus Words are a subset of the Allowed Words list. As much as possible, the story should focus on using words from the Focus Words list, only using other words sparingly.
If you feel forced to use words not in the Allowed Words list, make sure they are short easy words that a first-grade beginning reader could read.
In a moment I'll share the lists. Then you will write a short {{How many sentences long should the story be?}} sentence story for a beginning reader based on the lists.
Here is the list of Allowed Words:
{{passThrough["all_words"]}}
And here is the list of Focus Words:
{{passThrough["focus_words"]}}
As much as possible, your story should use only the words found on the Focus Words list. Now write a phonetic story using only words from the Allowed Words list, but focusing on words from the Focus Words list. If possible, the story should make sense and have a cohesive plot with a beginning, middle, and end. Your primary goal, however, is only to use words from the Allowed Words list where most, if not all, of the words are from the Focus Words list.
Before writing your story, you asked the teacher if there was anything else you should know. Here's what they said: {{Here are the words I'm focusing on: <i>{{passThrough["focus_words"]}}</i><br><br>Is there anything else I should consider (e.g., focus on using words with certain digraphs, blends, or whatnot, include a theme or topic)?}}.
Take all of the above and write your story. End your output with "\n---\n" where "\n" is a carriage return/line break.
And here are the template's parameters:
- Output Type:
LLM
. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template. - Model:
gpt-4
. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models. - Temperature:
0.7
. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative." - Max Tokens:
1000
. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. 1 token is something like 3/4 of a word. Smaller token limits run faster. - JSON:
No
. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No." - Output To:
Screen + append to scratch pad
. We can output the first reply from the LLM to a number of places, the screen, the clipboard... Here, I've chosen the screen and appending the output to the end of the text already in the Scratch Pad. - Post-run Behavior:
CHAT
. Like the choice of output, we can decide what to do after a template runs. Here we want to be able to follow up with additional prompts. So, "CHAT" it is. - Hide Button:
checked
. This determines if a button is displayed for this template in the extension's popup window. We've checked the option because this template shouldn't be triggered by the user directly. Rather, it needs to be triggered by another template so that there's something in the {{passThrough}}
variable.
Here's the next template's title.
Decodable List 01
Here's the template's text.
{
"focus_words":"box, sad, sat, sap, mad, map, mat, rag, rap, rat, nag, nap, lad, lag, lap, sip, sit, rip, lip, mop, not, nod, rug, mud, mug, leg, let, sob, sub, bet, bit, fit, dig, fix, web, lid, pet, bus, wax, fox, gas, quit, quiz, gum, kit, did, him, den, cob, the, a, and, his, is, of",
"all_words":"box, sad, sat, sap, mad, map, mat, rag, rap, rat, nag, nap, lad, lag, lap, sip, sit, rip, lip, mop, not, nod, rug, mud, mug, leg, let, sob, sub, bet, bit, fit, dig, fix, web, lid, pet, bus, wax, fox, gas, quit, quiz, gum, kit, did, him, den, cob, the, a, and, his, is, of"
}
And here are the template's parameters:
- Output Type:
Prompt
. By choosing "Prompt" the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values. - Model:
gpt-4o-mini
. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models. - Temperature:
0.7
. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative." - Max Tokens:
250
. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. 1 token is something like 3/4 of a word. Smaller token limits run faster. - JSON:
Yes
. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON - Output To:
Hidden
. We can output the first reply from the LLM to a number of places, the screen, the clipboard... Here, I've chosen to hide the output entirely. This is useful when passing output to another template. - Post-run Behavior:
Write Decodable Books
. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the Write Decodable Books template.
- Hide Button:
unchecked
. This determines if a button is displayed for this template in the extension's popup window.
Here's the next template's title.
Decodable List 02
Here's the template's text.
{
"focus_words":"rash, such, chip, much, shot, moth, rich, lash, path, dash, whip, math, dish, shut, rush, shop, wish, fish, shed, chin, chop, chat, Beth, with, bath, Seth, thin, thud, ship, mash, duck, lick, rock, lock, pick, kick, shock, Rick, neck, back, pack, chick, Jack, sock, quick, dock, deck, sick, thick, luck, puck, rack, as, has, to, into, for, or, we, he, she, be, me",
"all_words":"box, sad, sat, sap, mad, map, mat, rag, rap, rat, nag, nap, lad, lag, lap, sip, sit, rip, lip, mop, not, nod, rug, mud, mug, leg, let, sob, sub, bet, bit, fit, dig, fix, web, lid, pet, bus, wax, fox, gas, quit, quiz, gum, kit, did, him, den, cob, the, a, and, his, is, of, rash, such, chip, much, shot, moth, rich, lash, path, dash, whip, math, dish, shut, rush, shop, wish, fish, shed, chin, chop, chat, Beth, with, bath, Seth, thin, thud, ship, mash, duck, lick, rock, lock, pick, kick, shock, Rick, neck, back, pack, chick, Jack, sock, quick, dock, deck, sick, thick, luck, puck, rack, as, has, to, into, for, or, we, he, she, be, me"
}
And here are the template's parameters:
- Output Type:
Prompt
. By choosing "Prompt" the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values. - Model:
gpt-4o-mini
. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models. - Temperature:
0.7
. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative." - Max Tokens:
250
. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. 1 token is something like 3/4 of a word. Smaller token limits run faster. - JSON:
Yes
. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON - Output To:
Hidden
. We can output the first reply from the LLM to a number of places, the screen, the clipboard... Here, I've chosen to hide the output entirely. This is useful when passing output to another template. - Post-run Behavior:
Write Decodable Books
. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the Write Decodable Books template.
- Hide Button:
unchecked
. This determines if a button is displayed for this template in the extension's popup window.
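Notice the pattern across these lists: each one's all_words is the running union of every earlier list plus its own focus_words. If you're adapting this to your own curriculum's scope and sequence, a small helper can generate the JSON for each list so they stay in sync (my own convenience sketch, not part of the extension):

```python
import json

def build_decodable_lists(focus_lists):
    """Turn an ordered series of per-lesson focus word lists into the
    cumulative JSON bodies used by the "Decodable List" templates."""
    bodies, cumulative = [], []
    for focus in focus_lists:
        cumulative.extend(focus)
        bodies.append(json.dumps({
            "focus_words": ", ".join(focus),
            "all_words": ", ".join(cumulative),
        }))
    return bodies

lists = build_decodable_lists([
    ["sat", "map", "rag"],      # lesson 1 patterns
    ["ship", "chin", "with"],   # lesson 2 adds digraphs
])
print(lists[1])
# {"focus_words": "ship, chin, with", "all_words": "sat, map, rag, ship, chin, with"}
```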
Here's the next template's title.
Decodable List 03
Here's the template's text.
{
"focus_words":"shell, cuff, fuss, miss, kiss, off, fill, puff, toss, hill, fell, chill, Russ, Bess, well, mess, Nell, mass, bell, pill, will, tell, wall, fall, hall, call, ball, tall, mall, you, your, I, they, was, one, said",
"all_words":"box, sad, sat, sap, mad, map, mat, rag, rap, rat, nag, nap, lad, lag, lap, sip, sit, rip, lip, mop, not, nod, rug, mud, mug, leg, let, sob, sub, bet, bit, fit, dig, fix, web, lid, pet, bus, wax, fox, gas, quit, quiz, gum, kit, did, him, den, cob, the, a, and, his, is, of, rash, such, chip, much, shot, moth, rich, lash, path, dash, whip, math, dish, shut, rush, shop, wish, fish, shed, chin, chop, chat, Beth, with, bath, Seth, thin, thud, ship, mash, duck, lick, rock, lock, pick, kick, shock, Rick, neck, back, pack, chick, Jack, sock, quick, dock, deck, sick, thick, luck, puck, rack, as, has, to, into, for, or, we, he, she, be, me, shell, cuff, fuss, miss, kiss, off, fill, puff, toss, hill, fell, chill, Russ, Bess, well, mess, Nell, mass, bell, pill, will, tell, wall, fall, hall, call, ball, tall, mall, you, your, I, they, was, one, said"
}
And here are the template's parameters:
- Output Type:
Prompt
. By choosing "Prompt" the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values. - Model:
gpt-4o-mini
. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models. - Temperature:
0.7
. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative." - Max Tokens:
250
. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. 1 token is something like 3/4 of a word. Smaller token limits run faster. - JSON:
Yes
. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON - Output To:
Hidden
. We can output the first reply from the LLM to a number of places, the screen, the clipboard... Here, I've chosen to hide the output entirely. This is useful when passing output to another template. - Post-run Behavior:
Write Decodable Books
. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the Write Decodable Books template.
- Hide Button:
unchecked
. This determines if a button is displayed for this template in the extension's popup window.
Here's the next template's title.
Decodable List 04
Here's the template's text.
{
"focus_words":"ham, Sam, can, than, pan, man, fan, Jan, am, jam, Dan, tan, Pam, ran, bam, ram, Nan, van, from, have, do, does",
"all_words":"box, sad, sat, sap, mad, map, mat, rag, rap, rat, nag, nap, lad, lag, lap, sip, sit, rip, lip, mop, not, nod, rug, mud, mug, leg, let, sob, sub, bet, bit, fit, dig, fix, web, lid, pet, bus, wax, fox, gas, quit, quiz, gum, kit, did, him, den, cob, the, a, and, his, is, of, rash, such, chip, much, shot, moth, rich, lash, path, dash, whip, math, dish, shut, rush, shop, wish, fish, shed, chin, chop, chat, Beth, with, bath, Seth, thin, thud, ship, mash, duck, lick, rock, lock, pick, kick, shock, Rick, neck, back, pack, chick, Jack, sock, quick, dock, deck, sick, thick, luck, puck, rack, as, has, to, into, for, or, we, he, she, be, me, shell, cuff, fuss, miss, kiss, off, fill, puff, toss, hill, fell, chill, Russ, Bess, well, mess, Nell, mass, bell, pill, will, tell, wall, fall, hall, call, ball, tall, mall, you, your, I, they, was, one, said, ham, Sam, can, than, pan, man, fan, Jan, am, jam, Dan, tan, Pam, ran, bam, ram, Nan, van, from, have, do, does"
}
And here are the template's parameters:
- Output Type:
Prompt
. By choosing "Prompt" the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values. - Model:
gpt-4o-mini
. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models. - Temperature:
0.7
. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative." - Max Tokens:
250
. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. 1 token is something like 3/4 of a word. Smaller token limits run faster. - JSON:
Yes
. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON - Output To:
Hidden
. We can output the first reply from the LLM to a number of places, the screen, the clipboard... Here, I've chosen to hide the output entirely. This is useful when passing output to another template. - Post-run Behavior:
Write Decodable Books
. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the Write Decodable Books template.
- Hide Button:
unchecked
. This determines if a button is displayed for this template in the extension's popup window.
Here's the next template's title.
Decodable List 05
Here's the template's text.
{
"focus_words":"dogs, pens, pups, shops, webs, nets, pegs, hams, chins, backs, mats, mills, chills, maps, tops, bills, necks, bells, rugs, shells, fans, tins, sheds, pins, nuts, packs, jugs, bugs, naps, tubs, buds, dads, socks, pills, chips, ships, kids, paths, pits, cans, rocks, cops, lips, mops, beds, zaps, tugs, bets, locks, sips, wets, rubs, lugs, shuts, kicks, tells, wins, runs, fills, sits, pats, zags, sets, fibs, dabs, quits, were, are, who, what, when, where, there, here",
"all_words":"box, sad, sat, sap, mad, map, mat, rag, rap, rat, nag, nap, lad, lag, lap, sip, sit, rip, lip, mop, not, nod, rug, mud, mug, leg, let, sob, sub, bet, bit, fit, dig, fix, web, lid, pet, bus, wax, fox, gas, quit, quiz, gum, kit, did, him, den, cob, the, a, and, his, is, of, rash, such, chip, much, shot, moth, rich, lash, path, dash, whip, math, dish, shut, rush, shop, wish, fish, shed, chin, chop, chat, Beth, with, bath, Seth, thin, thud, ship, mash, duck, lick, rock, lock, pick, kick, shock, Rick, neck, back, pack, chick, Jack, sock, quick, dock, deck, sick, thick, luck, puck, rack, as, has, to, into, for, or, we, he, she, be, me, shell, cuff, fuss, miss, kiss, off, fill, puff, toss, hill, fell, chill, Russ, Bess, well, mess, Nell, mass, bell, pill, will, tell, wall, fall, hall, call, ball, tall, mall, you, your, I, they, was, one, said, ham, Sam, can, than, pan, man, fan, Jan, am, jam, Dan, tan, Pam, ran, bam, ram, Nan, van, from, have, do, does, dogs, pens, pups, shops, webs, nets, pegs, hams, chins, backs, mats, mills, chills, maps, tops, bills, necks, bells, rugs, shells, fans, tins, sheds, pins, nuts, packs, jugs, bugs, naps, tubs, buds, dads, socks, pills, chips, ships, kids, paths, pits, cans, rocks, cops, lips, mops, beds, zaps, tugs, bets, locks, sips, wets, rubs, lugs, shuts, kicks, tells, wins, runs, fills, sits, pats, zags, sets, fibs, dabs, quits, were, are, who, what, when, where, there, here"
}
And here are the template's parameters:
- Output Type:
Prompt
. By choosing "Prompt" the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values. - Model:
gpt-4o-mini
. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models. - Temperature:
0.7
. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative." - Max Tokens:
250
. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. 1 token is something like 3/4 of a word. Smaller token limits run faster. - JSON:
Yes
. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON - Output To:
Hidden
. We can output the first reply from the LLM to a number of places, the screen, the clipboard... Here, I've chosen to hide the output entirely. This is useful when passing output to another template. - Post-run Behavior:
Write Decodable Books
. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the Write Decodable Books template.
- Hide Button:
unchecked
. This determines if a button is displayed for this template in the extension's popup window.
Here's the next template's title.
Decodable List 06
Here's the template's text.
{
"focus_words":"bang, ring, sang, long, song, lung, king, wing, hang, sing, fang, hung, thing, rang, sang, gong, think, junk, rink, sink, thank, tank, chunk, bank, dunk, link, bunk, Hank, sunk, wink, yank, mink, bonk, sank, pink, honk, banks, rings, things, honks, songs, lungs, wings, hangs, kings, thinks, winks, thanks, fangs, rinks, sinks, sings, tanks, chunks, why, by, my, try, put, two, too, very, also, some, come",
"all_words":"box, sad, sat, sap, mad, map, mat, rag, rap, rat, nag, nap, lad, lag, lap, sip, sit, rip, lip, mop, not, nod, rug, mud, mug, leg, let, sob, sub, bet, bit, fit, dig, fix, web, lid, pet, bus, wax, fox, gas, quit, quiz, gum, kit, did, him, den, cob, the, a, and, his, is, of, rash, such, chip, much, shot, moth, rich, lash, path, dash, whip, math, dish, shut, rush, shop, wish, fish, shed, chin, chop, chat, Beth, with, bath, Seth, thin, thud, ship, mash, duck, lick, rock, lock, pick, kick, shock, Rick, neck, back, pack, chick, Jack, sock, quick, dock, deck, sick, thick, luck, puck, rack, as, has, to, into, for, or, we, he, she, be, me, shell, cuff, fuss, miss, kiss, off, fill, puff, toss, hill, fell, chill, Russ, Bess, well, mess, Nell, mass, bell, pill, will, tell, wall, fall, hall, call, ball, tall, mall, you, your, I, they, was, one, said, ham, Sam, can, than, pan, man, fan, Jan, am, jam, Dan, tan, Pam, ran, bam, ram, Nan, van, from, have, do, does, dogs, pens, pups, shops, webs, nets, pegs, hams, chins, backs, mats, mills, chills, maps, tops, bills, necks, bells, rugs, shells, fans, tins, sheds, pins, nuts, packs, jugs, bugs, naps, tubs, buds, dads, socks, pills, chips, ships, kids, paths, pits, cans, rocks, cops, lips, mops, beds, zaps, tugs, bets, locks, sips, wets, rubs, lugs, shuts, kicks, tells, wins, runs, fills, sits, pats, zags, sets, fibs, dabs, quits, were, are, who, what, when, where, there, here, bang, ring, sang, long, song, lung, king, wing, hang, sing, fang, hung, thing, rang, sang, gong, think, junk, rink, sink, thank, tank, chunk, bank, dunk, link, bunk, Hank, sunk, wink, yank, mink, bonk, sank, pink, honk, banks, rings, things, honks, songs, lungs, wings, hangs, kings, thinks, winks, thanks, fangs, rinks, sinks, sings, tanks, chunks, why, by, my, try, put, two, too, very, also, some, come"
}
And here are the template's parameters:
- Output Type: Prompt. By choosing "Prompt," the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See, e.g., OpenAI's list of models.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 250. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close; 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: Yes. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON
- Output To: Hidden. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, I've chosen to hide the output entirely. This is useful when passing output to another template.
- Post-run Behavior: CHAT. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the next template.
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
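Because a decodable book's vocabulary is constrained by the cumulative "all_words" list, it's handy to check a draft page against that list before printing. Here's a minimal sketch of such a check; the helper function and the toy word list are my own illustration, not part of the templates.

```python
import re

# A small checker (my own helper, not part of the templates): flag any
# word in a draft that isn't on the decodable word list.
def off_list_words(text, all_words):
    allowed = {w.strip().lower() for w in all_words.split(",")}
    return [w for w in re.findall(r"[A-Za-z']+", text) if w.lower() not in allowed]

# A toy word list for illustration; in practice, paste in the full
# "all_words" string from the template above.
all_words = "box, sad, sat, the, a, and, king, sang, songs"

print(off_list_words("The king sang sad songs.", all_words))   # → []
print(off_list_words("The queen sang sad songs.", all_words))  # → ['queen']
```

An empty result means every word on the page is one the reader has been taught.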
Here's the next template's title.
Decodable List 07
Here's the template's text.
{
"focus_words":"sent, must, best, lend, drop, loft, pest, pond, flap, crib, bent, grab, jump, bend, chest, last, dent, trash, step, flag, swish, drag, drip, black, soft, fast, trap, crash, cloth, thump, list, small, chomp, stick, went, next, clap, slam, bunch, squish, pinch, munch, twig, shrug, shelf, swim, press, milk, drops, tests, clicks, pumps, stacks, cracks, frogs, camps, shrubs, ponds, clams, dents, trick, would, could, should, her, over, number",
"all_words":"box, sad, sat, sap, mad, map, mat, rag, rap, rat, nag, nap, lad, lag, lap, sip, sit, rip, lip, mop, not, nod, rug, mud, mug, leg, let, sob, sub, bet, bit, fit, dig, fix, web, lid, pet, bus, wax, fox, gas, quit, quiz, gum, kit, did, him, den, cob, the, a, and, his, is, of, rash, such, chip, much, shot, moth, rich, lash, path, dash, whip, math, dish, shut, rush, shop, wish, fish, shed, chin, chop, chat, Beth, with, bath, Seth, thin, thud, ship, mash, duck, lick, rock, lock, pick, kick, shock, Rick, neck, back, pack, chick, Jack, sock, quick, dock, deck, sick, thick, luck, puck, rack, as, has, to, into, for, or, we, he, she, be, me, shell, cuff, fuss, miss, kiss, off, fill, puff, toss, hill, fell, chill, Russ, Bess, well, mess, Nell, mass, bell, pill, will, tell, wall, fall, hall, call, ball, tall, mall, you, your, I, they, was, one, said, ham, Sam, can, than, pan, man, fan, Jan, am, jam, Dan, tan, Pam, ran, bam, ram, Nan, van, from, have, do, does, dogs, pens, pups, shops, webs, nets, pegs, hams, chins, backs, mats, mills, chills, maps, tops, bills, necks, bells, rugs, shells, fans, tins, sheds, pins, nuts, packs, jugs, bugs, naps, tubs, buds, dads, socks, pills, chips, ships, kids, paths, pits, cans, rocks, cops, lips, mops, beds, zaps, tugs, bets, locks, sips, wets, rubs, lugs, shuts, kicks, tells, wins, runs, fills, sits, pats, zags, sets, fibs, dabs, quits, were, are, who, what, when, where, there, here, bang, ring, sang, long, song, lung, king, wing, hang, sing, fang, hung, thing, rang, sang, gong, think, junk, rink, sink, thank, tank, chunk, bank, dunk, link, bunk, Hank, sunk, wink, yank, mink, bonk, sank, pink, honk, banks, rings, things, honks, songs, lungs, wings, hangs, kings, thinks, winks, thanks, fangs, rinks, sinks, sings, tanks, chunks, why, by, my, try, put, two, too, very, also, some, come, sent, must, best, lend, drop, loft, pest, pond, flap, crib, bent, grab, jump, bend, chest, last, dent, trash, step, flag, swish, drag, drip, black, 
soft, fast, trap, crash, cloth, thump, list, small, chomp, stick, went, next, clap, slam, bunch, squish, pinch, munch, twig, shrug, shelf, swim, press, milk, drops, tests, clicks, pumps, stacks, cracks, frogs, camps, shrubs, ponds, clams, dents, trick, would, could, should, her, over, number"
}
And here are the template's parameters:
- Output Type: Prompt. By choosing "Prompt," the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See, e.g., OpenAI's list of models.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 250. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close; 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: Yes. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON
- Output To: Hidden. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, I've chosen to hide the output entirely. This is useful when passing output to another template.
- Post-run Behavior: CHAT. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the next template.
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
Here's the next template's title.
Decodable List 08
Here's the template's text.
{
"focus_words":"flock, crop, plan, flat, west, snap, hint, cluck, blush, ranch, skip, pluck, kept, bench, clap, fled, chimp, small, mask, crush, pinch, band, punch, chomp, pump, clip, mint, tilt, gulp, self, grass, fluff, class, dress, press, still, belts, cliffs, drills, sniffs, champs, drops, ponds, pests, dents, stubs, grips, clocks, plugs, drums, vests, steps, flags, drags, drips, flips, tests, clicks, frogs, swims, say, says, see, between, each",
"all_words":"box, sad, sat, sap, mad, map, mat, rag, rap, rat, nag, nap, lad, lag, lap, sip, sit, rip, lip, mop, not, nod, rug, mud, mug, leg, let, sob, sub, bet, bit, fit, dig, fix, web, lid, pet, bus, wax, fox, gas, quit, quiz, gum, kit, did, him, den, cob, the, a, and, his, is, of, rash, such, chip, much, shot, moth, rich, lash, path, dash, whip, math, dish, shut, rush, shop, wish, fish, shed, chin, chop, chat, Beth, with, bath, Seth, thin, thud, ship, mash, duck, lick, rock, lock, pick, kick, shock, Rick, neck, back, pack, chick, Jack, sock, quick, dock, deck, sick, thick, luck, puck, rack, as, has, to, into, for, or, we, he, she, be, me, shell, cuff, fuss, miss, kiss, off, fill, puff, toss, hill, fell, chill, Russ, Bess, well, mess, Nell, mass, bell, pill, will, tell, wall, fall, hall, call, ball, tall, mall, you, your, I, they, was, one, said, ham, Sam, can, than, pan, man, fan, Jan, am, jam, Dan, tan, Pam, ran, bam, ram, Nan, van, from, have, do, does, dogs, pens, pups, shops, webs, nets, pegs, hams, chins, backs, mats, mills, chills, maps, tops, bills, necks, bells, rugs, shells, fans, tins, sheds, pins, nuts, packs, jugs, bugs, naps, tubs, buds, dads, socks, pills, chips, ships, kids, paths, pits, cans, rocks, cops, lips, mops, beds, zaps, tugs, bets, locks, sips, wets, rubs, lugs, shuts, kicks, tells, wins, runs, fills, sits, pats, zags, sets, fibs, dabs, quits, were, are, who, what, when, where, there, here, bang, ring, sang, long, song, lung, king, wing, hang, sing, fang, hung, thing, rang, sang, gong, think, junk, rink, sink, thank, tank, chunk, bank, dunk, link, bunk, Hank, sunk, wink, yank, mink, bonk, sank, pink, honk, banks, rings, things, honks, songs, lungs, wings, hangs, kings, thinks, winks, thanks, fangs, rinks, sinks, sings, tanks, chunks, why, by, my, try, put, two, too, very, also, some, come, sent, must, best, lend, drop, loft, pest, pond, flap, crib, bent, grab, jump, bend, chest, last, dent, trash, step, flag, swish, drag, drip, black, 
soft, fast, trap, crash, cloth, thump, list, small, chomp, stick, went, next, clap, slam, bunch, squish, pinch, munch, twig, shrug, shelf, swim, press, milk, drops, tests, clicks, pumps, stacks, cracks, frogs, camps, shrubs, ponds, clams, dents, trick, would, could, should, her, over, number, flock, crop, plan, flat, west, snap, hint, cluck, blush, ranch, skip, pluck, kept, bench, clap, fled, chimp, small, mask, crush, pinch, band, punch, chomp, pump, clip, mint, tilt, gulp, self, grass, fluff, class, dress, press, still, belts, cliffs, drills, sniffs, champs, drops, ponds, pests, dents, stubs, grips, clocks, plugs, drums, vests, steps, flags, drags, drips, flips, tests, clicks, frogs, swims, say, says, see, between, each"
}
And here are the template's parameters:
- Output Type: Prompt. By choosing "Prompt," the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See, e.g., OpenAI's list of models.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 250. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close; 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: Yes. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON
- Output To: Hidden. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, I've chosen to hide the output entirely. This is useful when passing output to another template.
- Post-run Behavior: CHAT. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the next template.
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
Here's the next template's title.
Decodable List 09
Here's the template's text.
{
"focus_words":"blast, grunt, stump, crunch, drift, crisp, draft, print, slant, trust, craft, slept, slump, stamp, stand, sting, trunk, prank, blink, drink, twist, blend, shrimp, shrink, blinks, brings, skunks, stings, trunks, pranks, drinks, stumps, blends, limps, plants, squinted, grunted, blended, trusted, printed, slanted, blasted, drifted, twisted, crusted, standing, spending, blinking, stinging, grunting, drinking, drifting, any, many, how, now, down, out, about, our",
"all_words":"box, sad, sat, sap, mad, map, mat, rag, rap, rat, nag, nap, lad, lag, lap, sip, sit, rip, lip, mop, not, nod, rug, mud, mug, leg, let, sob, sub, bet, bit, fit, dig, fix, web, lid, pet, bus, wax, fox, gas, quit, quiz, gum, kit, did, him, den, cob, the, a, and, his, is, of, rash, such, chip, much, shot, moth, rich, lash, path, dash, whip, math, dish, shut, rush, shop, wish, fish, shed, chin, chop, chat, Beth, with, bath, Seth, thin, thud, ship, mash, duck, lick, rock, lock, pick, kick, shock, Rick, neck, back, pack, chick, Jack, sock, quick, dock, deck, sick, thick, luck, puck, rack, as, has, to, into, for, or, we, he, she, be, me, shell, cuff, fuss, miss, kiss, off, fill, puff, toss, hill, fell, chill, Russ, Bess, well, mess, Nell, mass, bell, pill, will, tell, wall, fall, hall, call, ball, tall, mall, you, your, I, they, was, one, said, ham, Sam, can, than, pan, man, fan, Jan, am, jam, Dan, tan, Pam, ran, bam, ram, Nan, van, from, have, do, does, dogs, pens, pups, shops, webs, nets, pegs, hams, chins, backs, mats, mills, chills, maps, tops, bills, necks, bells, rugs, shells, fans, tins, sheds, pins, nuts, packs, jugs, bugs, naps, tubs, buds, dads, socks, pills, chips, ships, kids, paths, pits, cans, rocks, cops, lips, mops, beds, zaps, tugs, bets, locks, sips, wets, rubs, lugs, shuts, kicks, tells, wins, runs, fills, sits, pats, zags, sets, fibs, dabs, quits, were, are, who, what, when, where, there, here, bang, ring, sang, long, song, lung, king, wing, hang, sing, fang, hung, thing, rang, sang, gong, think, junk, rink, sink, thank, tank, chunk, bank, dunk, link, bunk, Hank, sunk, wink, yank, mink, bonk, sank, pink, honk, banks, rings, things, honks, songs, lungs, wings, hangs, kings, thinks, winks, thanks, fangs, rinks, sinks, sings, tanks, chunks, why, by, my, try, put, two, too, very, also, some, come, sent, must, best, lend, drop, loft, pest, pond, flap, crib, bent, grab, jump, bend, chest, last, dent, trash, step, flag, swish, drag, drip, black, 
soft, fast, trap, crash, cloth, thump, list, small, chomp, stick, went, next, clap, slam, bunch, squish, pinch, munch, twig, shrug, shelf, swim, press, milk, drops, tests, clicks, pumps, stacks, cracks, frogs, camps, shrubs, ponds, clams, dents, trick, would, could, should, her, over, number, flock, crop, plan, flat, west, snap, hint, cluck, blush, ranch, skip, pluck, kept, bench, clap, fled, chimp, small, mask, crush, pinch, band, punch, chomp, pump, clip, mint, tilt, gulp, self, grass, fluff, class, dress, press, still, belts, cliffs, drills, sniffs, champs, drops, ponds, pests, dents, stubs, grips, clocks, plugs, drums, vests, steps, flags, drags, drips, flips, tests, clicks, frogs, swims, say, says, see, between, each, blast, grunt, stump, crunch, drift, crisp, draft, print, slant, trust, craft, slept, slump, stamp, stand, sting, trunk, prank, blink, drink, twist, blend, shrimp, shrink, blinks, brings, skunks, stings, trunks, pranks, drinks, stumps, blends, limps, plants, squinted, grunted, blended, trusted, printed, slanted, blasted, drifted, twisted, crusted, standing, spending, blinking, stinging, grunting, drinking, drifting, any, many, how, now, down, out, about, our"
}
And here are the template's parameters:
- Output Type: Prompt. By choosing "Prompt," the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See, e.g., OpenAI's list of models.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 250. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close; 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: No. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Hidden. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, I've chosen to hide the output entirely. This is useful when passing output to another template.
- Post-run Behavior: CHAT. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the next template.
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
Here's the next template's title.
Decodable List 10
Here's the template's text.
{
"focus_words":"lime, ape, tide, these, cube, whine, lane, wide, cave, line, pole, flame, hose, nine, vase, tube, those, chase, spine, dare, grade, case, vote, file, care, ride, came, bone, rule, maze, rise, scrape, spoke, lake, prize, rope, skate, joke, snake, white, like, grape, quake, grapes, globes, mules, kites, wipes, skates, homes, bikes, shakes, stones, saves, apes, strikes, waves, shapes, notes, poles, friend, other, another, none, nothing",
"all_words":"box, sad, sat, sap, mad, map, mat, rag, rap, rat, nag, nap, lad, lag, lap, sip, sit, rip, lip, mop, not, nod, rug, mud, mug, leg, let, sob, sub, bet, bit, fit, dig, fix, web, lid, pet, bus, wax, fox, gas, quit, quiz, gum, kit, did, him, den, cob, the, a, and, his, is, of, rash, such, chip, much, shot, moth, rich, lash, path, dash, whip, math, dish, shut, rush, shop, wish, fish, shed, chin, chop, chat, Beth, with, bath, Seth, thin, thud, ship, mash, duck, lick, rock, lock, pick, kick, shock, Rick, neck, back, pack, chick, Jack, sock, quick, dock, deck, sick, thick, luck, puck, rack, as, has, to, into, for, or, we, he, she, be, me, shell, cuff, fuss, miss, kiss, off, fill, puff, toss, hill, fell, chill, Russ, Bess, well, mess, Nell, mass, bell, pill, will, tell, wall, fall, hall, call, ball, tall, mall, you, your, I, they, was, one, said, ham, Sam, can, than, pan, man, fan, Jan, am, jam, Dan, tan, Pam, ran, bam, ram, Nan, van, from, have, do, does, dogs, pens, pups, shops, webs, nets, pegs, hams, chins, backs, mats, mills, chills, maps, tops, bills, necks, bells, rugs, shells, fans, tins, sheds, pins, nuts, packs, jugs, bugs, naps, tubs, buds, dads, socks, pills, chips, ships, kids, paths, pits, cans, rocks, cops, lips, mops, beds, zaps, tugs, bets, locks, sips, wets, rubs, lugs, shuts, kicks, tells, wins, runs, fills, sits, pats, zags, sets, fibs, dabs, quits, were, are, who, what, when, where, there, here, bang, ring, sang, long, song, lung, king, wing, hang, sing, fang, hung, thing, rang, sang, gong, think, junk, rink, sink, thank, tank, chunk, bank, dunk, link, bunk, Hank, sunk, wink, yank, mink, bonk, sank, pink, honk, banks, rings, things, honks, songs, lungs, wings, hangs, kings, thinks, winks, thanks, fangs, rinks, sinks, sings, tanks, chunks, why, by, my, try, put, two, too, very, also, some, come, sent, must, best, lend, drop, loft, pest, pond, flap, crib, bent, grab, jump, bend, chest, last, dent, trash, step, flag, swish, drag, drip, black, 
soft, fast, trap, crash, cloth, thump, list, small, chomp, stick, went, next, clap, slam, bunch, squish, pinch, munch, twig, shrug, shelf, swim, press, milk, drops, tests, clicks, pumps, stacks, cracks, frogs, camps, shrubs, ponds, clams, dents, trick, would, could, should, her, over, number, flock, crop, plan, flat, west, snap, hint, cluck, blush, ranch, skip, pluck, kept, bench, clap, fled, chimp, small, mask, crush, pinch, band, punch, chomp, pump, clip, mint, tilt, gulp, self, grass, fluff, class, dress, press, still, belts, cliffs, drills, sniffs, champs, drops, ponds, pests, dents, stubs, grips, clocks, plugs, drums, vests, steps, flags, drags, drips, flips, tests, clicks, frogs, swims, say, says, see, between, each, blast, grunt, stump, crunch, drift, crisp, draft, print, slant, trust, craft, slept, slump, stamp, stand, sting, trunk, prank, blink, drink, twist, blend, shrimp, shrink, blinks, brings, skunks, stings, trunks, pranks, drinks, stumps, blends, limps, plants, squinted, grunted, blended, trusted, printed, slanted, blasted, drifted, twisted, crusted, standing, spending, blinking, stinging, grunting, drinking, drifting, any, many, how, now, down, out, about, our, lime, ape, tide, these, cube, whine, lane, wide, cave, line, pole, flame, hose, nine, vase, tube, those, chase, spine, dare, grade, case, vote, file, care, ride, came, bone, rule, maze, rise, scrape, spoke, lake, prize, rope, skate, joke, snake, white, like, grape, quake, grapes, globes, mules, kites, wipes, skates, homes, bikes, shakes, stones, saves, apes, strikes, waves, shapes, notes, poles, friend, other, another, none, nothing"
}
And here are the template's parameters:
- Output Type: Prompt. By choosing "Prompt," the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See, e.g., OpenAI's list of models.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 250. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close; 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: Yes. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON
- Output To: Hidden. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, I've chosen to hide the output entirely. This is useful when passing output to another template.
- Post-run Behavior: CHAT. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the next template.
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
Here's the next template's title.
Decodable List 11
Here's the template's text.
{
"focus_words":"upset, pigpen, hotrod, tomcat, sunfish, bathmat, catfish, Batman, suntan, sunlit, cobweb, undid, zigzag, bedbug, backstop, mascot, unzip, laptop, dentist, himself, contest, admit, absent, goblin, bathtub, sunset, sunbath, inflate, cupcake, reptile, excuse, inside, include, flagpole, mistake, trombone, admire, concrete, athlete, dislike, springtime, fireman, baseball, rosebud, public, panic, plastic, picnic, chipmunk, expect, backpack, comic, people, month, Mr., Mrs., Mx., Ms., little, been, own, want",
"all_words":"box, sad, sat, sap, mad, map, mat, rag, rap, rat, nag, nap, lad, lag, lap, sip, sit, rip, lip, mop, not, nod, rug, mud, mug, leg, let, sob, sub, bet, bit, fit, dig, fix, web, lid, pet, bus, wax, fox, gas, quit, quiz, gum, kit, did, him, den, cob, the, a, and, his, is, of, rash, such, chip, much, shot, moth, rich, lash, path, dash, whip, math, dish, shut, rush, shop, wish, fish, shed, chin, chop, chat, Beth, with, bath, Seth, thin, thud, ship, mash, duck, lick, rock, lock, pick, kick, shock, Rick, neck, back, pack, chick, Jack, sock, quick, dock, deck, sick, thick, luck, puck, rack, as, has, to, into, for, or, we, he, she, be, me, shell, cuff, fuss, miss, kiss, off, fill, puff, toss, hill, fell, chill, Russ, Bess, well, mess, Nell, mass, bell, pill, will, tell, wall, fall, hall, call, ball, tall, mall, you, your, I, they, was, one, said, ham, Sam, can, than, pan, man, fan, Jan, am, jam, Dan, tan, Pam, ran, bam, ram, Nan, van, from, have, do, does, dogs, pens, pups, shops, webs, nets, pegs, hams, chins, backs, mats, mills, chills, maps, tops, bills, necks, bells, rugs, shells, fans, tins, sheds, pins, nuts, packs, jugs, bugs, naps, tubs, buds, dads, socks, pills, chips, ships, kids, paths, pits, cans, rocks, cops, lips, mops, beds, zaps, tugs, bets, locks, sips, wets, rubs, lugs, shuts, kicks, tells, wins, runs, fills, sits, pats, zags, sets, fibs, dabs, quits, were, are, who, what, when, where, there, here, bang, ring, sang, long, song, lung, king, wing, hang, sing, fang, hung, thing, rang, sang, gong, think, junk, rink, sink, thank, tank, chunk, bank, dunk, link, bunk, Hank, sunk, wink, yank, mink, bonk, sank, pink, honk, banks, rings, things, honks, songs, lungs, wings, hangs, kings, thinks, winks, thanks, fangs, rinks, sinks, sings, tanks, chunks, why, by, my, try, put, two, too, very, also, some, come, sent, must, best, lend, drop, loft, pest, pond, flap, crib, bent, grab, jump, bend, chest, last, dent, trash, step, flag, swish, drag, drip, black, 
soft, fast, trap, crash, cloth, thump, list, small, chomp, stick, went, next, clap, slam, bunch, squish, pinch, munch, twig, shrug, shelf, swim, press, milk, drops, tests, clicks, pumps, stacks, cracks, frogs, camps, shrubs, ponds, clams, dents, trick, would, could, should, her, over, number, flock, crop, plan, flat, west, snap, hint, cluck, blush, ranch, skip, pluck, kept, bench, clap, fled, chimp, small, mask, crush, pinch, band, punch, chomp, pump, clip, mint, tilt, gulp, self, grass, fluff, class, dress, press, still, belts, cliffs, drills, sniffs, champs, drops, ponds, pests, dents, stubs, grips, clocks, plugs, drums, vests, steps, flags, drags, drips, flips, tests, clicks, frogs, swims, say, says, see, between, each, blast, grunt, stump, crunch, drift, crisp, draft, print, slant, trust, craft, slept, slump, stamp, stand, sting, trunk, prank, blink, drink, twist, blend, shrimp, shrink, blinks, brings, skunks, stings, trunks, pranks, drinks, stumps, blends, limps, plants, squinted, grunted, blended, trusted, printed, slanted, blasted, drifted, twisted, crusted, standing, spending, blinking, stinging, grunting, drinking, drifting, any, many, how, now, down, out, about, our, lime, ape, tide, these, cube, whine, lane, wide, cave, line, pole, flame, hose, nine, vase, tube, those, chase, spine, dare, grade, case, vote, file, care, ride, came, bone, rule, maze, rise, scrape, spoke, lake, prize, rope, skate, joke, snake, white, like, grape, quake, grapes, globes, mules, kites, wipes, skates, homes, bikes, shakes, stones, saves, apes, strikes, waves, shapes, notes, poles, friend, other, another, none, nothing, upset, pigpen, hotrod, tomcat, sunfish, bathmat, catfish, Batman, suntan, sunlit, cobweb, undid, zigzag, bedbug, backstop, mascot, unzip, laptop, dentist, himself, contest, admit, absent, goblin, bathtub, sunset, sunbath, inflate, cupcake, reptile, excuse, inside, include, flagpole, mistake, trombone, admire, concrete, athlete, dislike, springtime, fireman, 
baseball, rosebud, public, panic, plastic, picnic, chipmunk, expect, backpack, comic, people, month, Mr., Mrs., Mx., Ms., little, been, own, want"
}
And here are the template's parameters:
- Output Type: Prompt. By choosing "Prompt," the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See, e.g., OpenAI's list of models.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 250. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close; 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: Yes. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON
- Output To: Hidden. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, I've chosen to hide the output entirely. This is useful when passing output to another template.
- Post-run Behavior: CHAT. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the next template.
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
Here's the next template's title.
Decodable List 12
Here's the template's text.
{
"focus_words":"bedbugs, dishpans, napkins, unzips, cobwebs, sunsets, bathtubs, cupcakes, reptiles, mistakes, publishing, finishing, expected, invented, insisted, disrupting, boxes, dresses, foxes, kisses, tosses, classes, waxes, quizzes, benches, branches, brushes, crunches, dishes, inches, lunches, pinches, flashes, splashes, wishes, munches, finishes, punches, publishes, dashes, work, word, write, being, their, first, look, good, new",
"all_words":"box, sad, sat, sap, mad, map, mat, rag, rap, rat, nag, nap, lad, lag, lap, sip, sit, rip, lip, mop, not, nod, rug, mud, mug, leg, let, sob, sub, bet, bit, fit, dig, fix, web, lid, pet, bus, wax, fox, gas, quit, quiz, gum, kit, did, him, den, cob, the, a, and, his, is, of, rash, such, chip, much, shot, moth, rich, lash, path, dash, whip, math, dish, shut, rush, shop, wish, fish, shed, chin, chop, chat, Beth, with, bath, Seth, thin, thud, ship, mash, duck, lick, rock, lock, pick, kick, shock, Rick, neck, back, pack, chick, Jack, sock, quick, dock, deck, sick, thick, luck, puck, rack, as, has, to, into, for, or, we, he, she, be, me, shell, cuff, fuss, miss, kiss, off, fill, puff, toss, hill, fell, chill, Russ, Bess, well, mess, Nell, mass, bell, pill, will, tell, wall, fall, hall, call, ball, tall, mall, you, your, I, they, was, one, said, ham, Sam, can, than, pan, man, fan, Jan, am, jam, Dan, tan, Pam, ran, bam, ram, Nan, van, from, have, do, does, dogs, pens, pups, shops, webs, nets, pegs, hams, chins, backs, mats, mills, chills, maps, tops, bills, necks, bells, rugs, shells, fans, tins, sheds, pins, nuts, packs, jugs, bugs, naps, tubs, buds, dads, socks, pills, chips, ships, kids, paths, pits, cans, rocks, cops, lips, mops, beds, zaps, tugs, bets, locks, sips, wets, rubs, lugs, shuts, kicks, tells, wins, runs, fills, sits, pats, zags, sets, fibs, dabs, quits, were, are, who, what, when, where, there, here, bang, ring, sang, long, song, lung, king, wing, hang, sing, fang, hung, thing, rang, sang, gong, think, junk, rink, sink, thank, tank, chunk, bank, dunk, link, bunk, Hank, sunk, wink, yank, mink, bonk, sank, pink, honk, banks, rings, things, honks, songs, lungs, wings, hangs, kings, thinks, winks, thanks, fangs, rinks, sinks, sings, tanks, chunks, why, by, my, try, put, two, too, very, also, some, come, sent, must, best, lend, drop, loft, pest, pond, flap, crib, bent, grab, jump, bend, chest, last, dent, trash, step, flag, swish, drag, drip, black, 
soft, fast, trap, crash, cloth, thump, list, small, chomp, stick, went, next, clap, slam, bunch, squish, pinch, munch, twig, shrug, shelf, swim, press, milk, drops, tests, clicks, pumps, stacks, cracks, frogs, camps, shrubs, ponds, clams, dents, trick, would, could, should, her, over, number, flock, crop, plan, flat, west, snap, hint, cluck, blush, ranch, skip, pluck, kept, bench, clap, fled, chimp, small, mask, crush, pinch, band, punch, chomp, pump, clip, mint, tilt, gulp, self, grass, fluff, class, dress, press, still, belts, cliffs, drills, sniffs, champs, drops, ponds, pests, dents, stubs, grips, clocks, plugs, drums, vests, steps, flags, drags, drips, flips, tests, clicks, frogs, swims, say, says, see, between, each, blast, grunt, stump, crunch, drift, crisp, draft, print, slant, trust, craft, slept, slump, stamp, stand, sting, trunk, prank, blink, drink, twist, blend, shrimp, shrink, blinks, brings, skunks, stings, trunks, pranks, drinks, stumps, blends, limps, plants, squinted, grunted, blended, trusted, printed, slanted, blasted, drifted, twisted, crusted, standing, spending, blinking, stinging, grunting, drinking, drifting, any, many, how, now, down, out, about, our, lime, ape, tide, these, cube, whine, lane, wide, cave, line, pole, flame, hose, nine, vase, tube, those, chase, spine, dare, grade, case, vote, file, care, ride, came, bone, rule, maze, rise, scrape, spoke, lake, prize, rope, skate, joke, snake, white, like, grape, quake, grapes, globes, mules, kites, wipes, skates, homes, bikes, shakes, stones, saves, apes, strikes, waves, shapes, notes, poles, friend, other, another, none, nothing, upset, pigpen, hotrod, tomcat, sunfish, bathmat, catfish, Batman, suntan, sunlit, cobweb, undid, zigzag, bedbug, backstop, mascot, unzip, laptop, dentist, himself, contest, admit, absent, goblin, bathtub, sunset, sunbath, inflate, cupcake, reptile, excuse, inside, include, flagpole, mistake, trombone, admire, concrete, athlete, dislike, springtime, fireman, 
baseball, rosebud, public, panic, plastic, picnic, chipmunk, expect, backpack, comic, people, month, Mr., Mrs., Mx., Ms., little, been, own, want, bedbugs, dishpans, napkins, unzips, cobwebs, sunsets, bathtubs, cupcakes, reptiles, mistakes, publishing, finishing, expected, invented, insisted, disrupting, boxes, dresses, foxes, kisses, tosses, classes, waxes, quizzes, benches, branches, brushes, crunches, dishes, inches, lunches, pinches, flashes, splashes, wishes, munches, finishes, punches, publishes, dashes, work, word, write, being, their, first, look, good, new"
}
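Since the whole point of a decodable book is that every word comes from the approved list, it's worth checking a draft mechanically rather than by eye. Below is a minimal sketch of such a check. It assumes you've saved the template's JSON to a hypothetical `words.json` file and that sight words should match case-insensitively; none of this is part of LIT Prompts itself.

```python
import json
import re

def load_word_list(path):
    """Read the template's JSON and split its `all_words` string into a set."""
    with open(path) as f:
        data = json.load(f)
    return {w.strip().lower() for w in data["all_words"].split(",")}

def unknown_words(story, known):
    """Return story words (lowercased, deduplicated) not on the known-word list."""
    tokens = re.findall(r"[A-Za-z]+(?:'[A-Za-z]+)?", story)
    return sorted({t.lower() for t in tokens} - known)

# Stand-in list for illustration; in practice use load_word_list("words.json").
known = {"ben", "and", "his", "frog", "sat", "on", "a", "log"}
print(unknown_words("Ben and his frog sat on a log.", known))  # []
```

A draft that comes back with an empty list stays within the curriculum; anything else is a word the LLM slipped in that you'll want to revise or swap out.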
And here are the template's parameters:
- Output Type:
Prompt
. By choosing "Prompt" the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values. - Model:
gpt-4o-mini
. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models. - Temperature:
0.7
. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative." - Max Tokens:
250
. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. 1 token is something like 3/4 of a word. Smaller token limits run faster. - JSON:
Yes
. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON - Output To:
Hidden
. We can output the first reply from the LLM to a number of places: the screen, the clipboard... Here, I've chosen to hide the output entirely. This is useful when passing output to another template. - Post-run Behavior:
CHAT
. Like the choice of output, we can decide what to do after a template runs. Here, we are dropped into a chat session, which lets us iterate on the story after the template runs.
- Hide Button:
unchecked
. This determines if a button is displayed for this template in the extension's popup window.
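For templates that are actually submitted to a model, the parameters above map more or less directly onto an LLM API request. Here's a sketch of the request body for OpenAI's chat-completions endpoint, purely for illustration; the prompt text is a stand-in, and LIT Prompts assembles all of this for you behind the scenes.

```python
# Build the request body the parameters above would produce.
# (Illustrative only; the extension manages this for you.)
payload = {
    "model": "gpt-4o-mini",
    "temperature": 0.7,  # a little "creative," per the setting above
    "max_tokens": 250,   # caps the length of the reply
    "response_format": {"type": "json_object"},  # the JSON: Yes setting
    "messages": [
        # Stand-in prompt; the real one comes from the filled-in template.
        {"role": "user", "content": "Write a short decodable story..."},
    ],
}

# At roughly 3/4 of a word per token, 250 tokens is at most ~187 words.
print(payload["max_tokens"] * 0.75)  # 187.5
```

The back-of-the-envelope token math is worth keeping in mind when you set Max Tokens: a six-page decodable book is short, so 250 tokens leaves plenty of headroom.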
Working with the above templates
To work with the above templates, you could copy them and their parameters into LIT Prompts one by one, or you could download a single prompts file and upload it from the extension's Templates & Settings screen. This will replace your existing prompts.
You can download a prompts file (the above template and its parameters) suitable for upload by clicking this button:
Kick the Tires
It's one thing to read about something and another to put what you've learned into practice. Let's see how this template performs.
- Write a book. Look through the lists above, and find one that strikes your fancy, or better yet make a new template based on the above with your own word lists. Then write a story. Be sure to use the second question asking if there's anything else it should consider. There's a lot you can do there. Likewise, you can change things up after the fact by using the chat interface to request new versions of what came before.
Export and Share
After you've made the templates your own and have them behaving the way you like, you can export and share them with others. This will produce an HTML file you can share. This file should work on any internet-connected device. To create your file, click the Export Scratch Page & Interactions Page button. The contents of the textarea above the button will be appended to the top of your exported file. Importantly, if you don't want to share your API key, you should temporarily remove it from your settings before exporting.
If you want to see what an exported file looks like without having to make one yourself, you can use the buttons below. View export in browser will open the file in your browser, and Download export will download a file. In either case, the following custom header will be inserted into your file. It will NOT include an API key, so you'll have to enter one when asked if you want to see things work. This information is saved in your browser. If you've provided it before, you won't be asked again. It is not shared with me. To remove this information for this site (and only this site, not individual files), you can follow the instructions found on my privacy page. Remember, when you export your own file, whether or not it contains an API key depends on whether you have one defined at the time of export.
Custom header:
<h2>Decodable Books On-Demand</h2>
<p>Pick a list below to create a decodable book based on the words in that list. For an explanation, including the word lists used, check out the blog post <a href="https://sadlynothavocdinosaur.com/posts/decodable/" target="_blank">Decodable Books On-Demand: Use AI to create custom books for beginning readers</a>. The text of your stories will be appended to the textarea on this page.</p>
<p>If you're in the middle of a chat below, and you want to restart, just refresh this page. The text of completed stories in the text area will be saved.</p>
<hr style="border: solid 0px; border-bottom: solid 1px #555;margin: 5px 0 15px 0"/>
Not sure what's up with all those greater than and less than signs? Looking for tips on how to style your HTML? Check out this general HTML tutorial.
The export you'll see after clicking the buttons below is what you'll get out of LIT Prompts. However, I linked to a special version of this file above. See here. I edited that version to collect analytics and to provide access to some prepaid LLM credits. The following will prompt users to enter LLM API info.
TL;DR References
ICYMI, here are blurbs for a selection of works I linked to in this post. If you didn't click through above, you might want to give them a look now.
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. There's a lot of history behind this paper. It was part of a chain of events that forced Timnit Gebru to leave Google where she was the co-lead of their ethical AI team, but more than that, it's one of the foundational papers in AI ethics, not to be confused with the field of "AI safety," which we will discuss later. It discusses several risks associated with large language models, including environmental/financial costs, biased language, lack of cultural nuance, misdirection of research, and potential for misinformation. If you want to engage critically with LLMs, this paper is a must read.
- This is how AI image generators see the world by Jeremy B. Merrill. Artificial intelligence image generators, such as Stable Diffusion and DALL-E, have been found to amplify bias in gender and race, despite efforts to reduce bias in the data used to train these models. The data used to train AI image tools often contains toxic content, including pornography, misogyny, violence, and bigotry, which leads to the generation of stereotypes in the AI-generated images. For example, AI image generators tend to depict Asian women as hypersexual, Africans as primitive, and Europeans as worldly. Efforts to detoxify the data have focused on filtering out problematic content, but this approach is not a comprehensive solution and can even exacerbate cultural bias. The AI field is divided on how to address bias, with some experts believing that computational solutions are limited and that understanding the limitations of AI models is crucial. Summary based on a draft from our day one template.