The Library of Unwritten Books

David Colarusso
Co-director, Suffolk's Legal Innovation & Tech Lab
This is the 46th post in my series 50 Days of LIT Prompts.
The Library of Unwritten Books may be the coolest thing I've ever made, not the most important, but the coolest. It creates "novel novellas" on-demand. Unlike text-adventure games with fixed texts, these stories are an open-ended exercise in collaborative storytelling. You are a reader-author. Large language models (LLMs) mediate your collaboration, re-shaping and reflecting your words and those of authors past. I'm reminded of these words from Carl Sagan.
What an astonishing thing a book is. It's a flat object made from a tree with flexible parts on which are imprinted lots of funny dark squiggles. But one glance at it and you're inside the mind of another person, maybe somebody dead for thousands of years. Across the millennia, an author is speaking clearly and silently inside your head, directly to you. Writing is perhaps the greatest of human inventions, binding together people who never knew each other, citizens of distant epochs. Books break the shackles of time. A book is proof that humans are capable of working magic.
This Library is a different sort of magic, for instead of transporting its readers into the mind of a single author it places us somewhere in the zeitgeist. LLMs, as we know, are machines for completing sentences. They work by predicting the next plausible string of words. As Ted Chiang observed, they are blurry JPEGs of the Web. We harness this fact to produce something novel based on the input of our reader-authors, the "compressed" writings used to train the LLM, and random chance.
It's worth noting that some folks have equated the training of these models with theft, but I don't think that's right. What they offer is something much stranger than copies. In a real sense, they are mathematical distillations of a zeitgeist found in their training data. A rebuttal of "scraping is stealing" is beyond the scope of this post. So, I'll point you to the words of Cory Doctorow, who makes clear such a framing is not only ahistorical but also a trap! As for model outputs, that is a different matter. In the end, I suspect the real answer to the fears sparked by AI isn't copyright. It's antitrust and labor law. Unions. The answer involves unions. You really should read the Doctorow piece. FWIW, folks are also working on training models entirely on licensed or public domain works. Now, back to The Library.
Remember this: as a reader-author, what you read is a reflection of what you write. If you respond passively, providing short replies or taking only the road presented, your journey will stay safe and predictable. If, however, you embrace your role as an author, there is much to explore, for you are exploring the shadows cast by the cultural artifacts upon which the model was trained. Be warned, you might not like what you find. Then again, you may discover something beautiful. After all, we and our artifacts contain multitudes.
If you examine the prompts that power The Library, you'll see they are asked to lean into genre conventions. You'll also discover that the action is driven by mechanics similar to those of many popular role-playing games. As the reader-author, you are asked what actions you want to take. The LLM evaluates how likely you are to succeed in the world of your story (e.g., is this realistic fiction or fantasy?). Based on this assessment, it assigns a "difficulty" to the task and rolls a virtual die behind the scenes. If the roll is high enough, you succeed; if it's too low, you fail. The LLM shares the result in prose, moves the story forward a beat, and asks you to take the wheel.
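To make that mechanic concrete, here's a minimal sketch of a d20-style check in TypeScript. The difficulty scale, function names, and outcome labels are illustrative assumptions; the actual logic lives in the prompt templates below, where the LLM both sets the difficulty and narrates the result.

```typescript
// A minimal sketch of the d20-style check described above. The
// difficulty scale and outcome labels are illustrative assumptions,
// not the exact values used by the templates.

type Outcome = "success" | "failure";

// Roll a virtual twenty-sided die, like the {{d20}} variable below.
function rollD20(): number {
  return Math.floor(Math.random() * 20) + 1;
}

// The LLM assigns a difficulty (here, 1 = trivial, 20 = nearly
// impossible) based on how plausible the action is in the story's world.
function resolveAction(difficulty: number): Outcome {
  return rollD20() >= difficulty ? "success" : "failure";
}

// Picking a lock might be difficulty 15 in gritty realism...
console.log(resolveAction(15)); // more often "failure"
// ...but difficulty 5 in a high-fantasy world with magic.
console.log(resolveAction(5));  // more often "success"
```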
If you want to understand how it all fits together and make your own library, the templates below lay it all out. My hope is that you will download LIT Prompts and tweak the templates' language to your liking. However, I expect you may first be interested in experiencing this new role of reader-author. So, I invite you to check out your own unwritten book here.
Let's build something!
We'll do our building in the LIT Prompts extension. If you aren't familiar with the LIT Prompts extension, don't worry. We'll walk you through setting things up before we start building. If you have used the LIT Prompts extension before, skip to The Prompt Patterns (Templates).
Setup LIT Prompts
The Prompt Patterns (Templates)

When crafting a LIT Prompts template, we use a mix of plain language and variable placeholders. Specifically, you can use double curly brackets to encase predefined variables. If the text between the brackets matches one of our predefined variable names, that section of text will be replaced with the variable's value. Today we'll be using the {{scratch}}, {{passThrough}}, and {{d20}} variables. See the extension's documentation.

If the text within brackets is not the name of a predefined variable, like {{What do you want to do?}}, it will trigger a prompt for your user that echoes the placeholder (e.g., a text bubble containing, "What do you want to do?"). After the user answers, their reply will replace this placeholder. A list of predefined variables can be found in the extension's documentation. We'll use this behavior to ask you a set of questions before creating your story.
We'll also use JSON mode to format some of the prompt outputs as JSON, and as we know from our translation template, when the passThrough variable is JSON, you can access top-level keys by calling them like this: {{passThrough["title"]}}.
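To make the substitution rules concrete, here's a minimal sketch of a template filler in TypeScript. The function names and simplified resolution rules are my own assumptions for illustration; the real implementation is inside the LIT Prompts extension and may differ.

```typescript
// A minimal sketch of {{...}} placeholder substitution, under
// simplified assumptions; the real logic lives in the LIT Prompts
// extension and may differ.

type Vars = Record<string, string>;

function fillTemplate(
  template: string,
  predefined: Vars,                      // e.g., scratch, d20
  passThrough: unknown,                  // prior template's output (maybe JSON)
  askUser: (question: string) => string  // prompt the reader-author
): string {
  return template.replace(/\{\{(.+?)\}\}/g, (_, name: string) => {
    // {{passThrough["title"]}} -> a top-level key of the JSON passThrough
    const keyMatch = name.match(/^passThrough\["(.+)"\]$/);
    if (keyMatch && typeof passThrough === "object" && passThrough !== null) {
      return String((passThrough as Record<string, unknown>)[keyMatch[1]] ?? "");
    }
    // Predefined variables are swapped in directly.
    if (name in predefined) return predefined[name];
    // Anything else becomes a question for the user. (In the extension,
    // answers are cached once per run unless the name ends with an
    // asterisk, as discussed below; this sketch just asks every time.)
    return askUser(name.replace(/\*$/, ""));
  });
}

// Example usage with a JSON passThrough from an earlier template:
const filled = fillTemplate(
  'Title: {{passThrough["title"]}}. {{What do you want to do?}}',
  { scratch: "" },
  { title: "The Clockwork Orchard" },
  (q) => `USER ANSWER TO: ${q}`
);
console.log(filled);
```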
If you've been following along, the template behavior should be pretty straightforward. However, I discovered a bug in the extension while exporting today's templates. Normally, if you ask the same question (e.g., {{What do you want to do?}}) multiple times, the extension will only ever ask it once per run. To allow for the possibility that you might want to re-ask the same question when using a string of templates, I introduced the ability to force prompting of user questions by placing an asterisk before the final set of curly brackets (e.g., {{What do you want to do?*}}).
I have this working over at The Library of Unwritten Books and am updating the extension, but Chrome and Firefox can take a week or so to approve updates. Which is to say, if you try to use the templates as shared below, you will enter a loop because the extension will fail to ask you {{What do you want to do?}} more than once. That is, it will just reuse your first answer over and over again. You can avoid this behavior by changing the post-run behavior for "append 2" to "FULL STOP" instead of "Role Play 01." Of course, to move forward you will then have to run "Pick up where you left off" after each round. When the bug fix is live, I'll update this post to make that clear. That being said...
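Because the post-run behaviors below chain the templates into a loop, it may help to see the whole flow in one place. Here's a sketch assuming the behaviors described in this post; the runner itself is a stand-in for the extension, which also handles LLM calls, output routing, and user prompts.

```typescript
// A sketch of how the templates chain via their post-run behaviors,
// per the descriptions below. The runner is a simplification; the
// extension handles LLM calls, output routing, and user prompts.

const postRun: Record<string, string | null> = {
  "Read-write a new story": "Frame",
  "Frame": "Introduction",
  "Introduction": "The story begins",
  "The story begins": "Role Play 01",
  "Pick up where you left off": "Role Play 01",
  "Role Play 01": "move forward",
  "move forward": "append 2",
  "append 2": "Role Play 01", // the loop; set to null ("FULL STOP")
                              // to apply the bug workaround above
};

function runChain(start: string, maxRounds = 10): void {
  let current: string | null = start;
  let rounds = 0;
  // Cap the rounds, since the story loop would otherwise run forever.
  while (current !== null && rounds < maxRounds) {
    console.log(`Running template: ${current}`);
    current = postRun[current] ?? null; // null means FULL STOP
    rounds++;
  }
}

runChain("Read-write a new story");
```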
Here's the template's title.
Read-write a new story
Here's the template's text.
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-3.5-turbo-1106. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See, e.g., OpenAI's list of models.
- Temperature: 0. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Since we're seeking fidelity to a text, I went with the least "creative" setting, 0.
- Max Tokens: 500. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close; 1 token is something like 3/4 of a word, so 500 tokens is roughly 375 words. Smaller token limits run faster.
- JSON: Yes. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON
- Output To: Hidden. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, I've chosen to hide the output entirely. This is useful when passing output to another template.
- Post-run Behavior: Frame. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the Frame template.
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
Here's the template's title.
Frame
Here's the template's text.
And here are the template's parameters:
- Output Type: Prompt. By choosing "Prompt," the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values.
- Model: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- Temperature: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- Max Tokens: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- JSON: No. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Hidden + replace scratch pad. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, I've chosen to hide the output from the screen and replace the current text of the Scratch Pad with this output.
- Post-run Behavior: Introduction. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the Introduction template.
- Hide Button: checked. This determines if a button is displayed for this template in the extension's popup window. We've checked the option because this template shouldn't be triggered by the user directly. Rather, it needs to be triggered by another template so that there's something in the {{passThrough}} variable.
Here's the template's title.
Pick up where you left off
Here's the template's text.
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See, e.g., OpenAI's list of models.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 1000. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close; 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: No. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Screen Only. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, we're content just to have it go to the screen.
- Post-run Behavior: Role Play 01. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the Role Play 01 template.
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
Here's the template's title.
Introduction
Here's the template's text.
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See, e.g., OpenAI's list of models.
- Temperature: 0.9. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Today we're all about being creative. So, I went with a pretty "creative" setting, 0.9.
- Max Tokens: 1000. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close; 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: Yes. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON
- Output To: Hidden. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, I've chosen to hide the output entirely. This is useful when passing output to another template.
- Post-run Behavior: The story begins. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the The story begins template.
- Hide Button: checked. This determines if a button is displayed for this template in the extension's popup window. We've checked the option because this template shouldn't be triggered by the user directly. Rather, it needs to be triggered by another template so that there's something in the {{passThrough}} variable.
Here's the template's title.
The story begins
Here's the template's text.
And here are the template's parameters:
- Output Type: Prompt. By choosing "Prompt," the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values.
- Model: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- Temperature: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- Max Tokens: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- JSON: No. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Screen + append to scratch pad. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, I've chosen to send it to the screen and append the output to the end of the text already in the Scratch Pad.
- Post-run Behavior: Role Play 01. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the Role Play 01 template.
- Hide Button: checked. This determines if a button is displayed for this template in the extension's popup window. We've checked the option because this template shouldn't be triggered by the user directly. Rather, it needs to be triggered by another template so that there's something in the {{passThrough}} variable.
Here's the template's title.
Role Play 01
Here's the template's text.
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-3.5-turbo-1106. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See, e.g., OpenAI's list of models.
- Temperature: 0.8. Temperature runs from 0 to 1 and specifies how "random" the answer should be.
- Max Tokens: 1000. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close; 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: Yes. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON
- Output To: Hidden. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, I've chosen to hide the output entirely. This is useful when passing output to another template.
- Post-run Behavior: move forward. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the move forward template.
- Hide Button: checked. This determines if a button is displayed for this template in the extension's popup window. We've checked the option because this template shouldn't be triggered by the user directly. Rather, it needs to be triggered by another template so that there's something in the {{passThrough}} variable.
Here's the template's title.
move forward
Here's the template's text.
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-3.5-turbo-1106. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See, e.g., OpenAI's list of models.
- Temperature: 0.8. Temperature runs from 0 to 1 and specifies how "random" the answer should be.
- Max Tokens: 1000. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close; 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: Yes. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON
- Output To: Hidden. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, I've chosen to hide the output entirely. This is useful when passing output to another template.
- Post-run Behavior: append 2. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the append 2 template.
- Hide Button: checked. This determines if a button is displayed for this template in the extension's popup window. We've checked the option because this template shouldn't be triggered by the user directly. Rather, it needs to be triggered by another template so that there's something in the {{passThrough}} variable.
Here's the template's title.
append 2
Here's the template's text.
And here are the template's parameters:
- Output Type: Prompt. By choosing "Prompt," the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values.
- Model: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- Temperature: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- Max Tokens: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- JSON: No. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Screen + append to scratch pad. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, I've chosen to send it to the screen and append the output to the end of the text already in the Scratch Pad.
- Post-run Behavior: Role Play 01. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the Role Play 01 template.
- Hide Button: checked. This determines if a button is displayed for this template in the extension's popup window. We've checked the option because this template shouldn't be triggered by the user directly. Rather, it needs to be triggered by another template so that there's something in the {{passThrough}} variable.
Here's the template's title.
Download current story
Here's the template's text.
And here are the template's parameters:
- Output Type: Prompt. By choosing "Prompt," the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values.
- Model: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- Temperature: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- Max Tokens: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- JSON: No. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Hidden. We can output the first reply from the LLM to a number of places: the screen, the clipboard, and so on. Here, I've chosen to hide the output entirely. This is useful when passing output to another template.
- Post-run Behavior: SAVE TO FILE. Like the choice of output, we can decide what to do after a template runs. Here we will save the output to a file. This will trigger your browser's download feature.
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
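As a rough picture of what SAVE TO FILE does, here's a minimal browser-side sketch of triggering a download for the finished story text. It shows the general technique (a Blob plus a temporary link), not the extension's actual code; the function and file names are made up.

```typescript
// A minimal sketch of a browser-triggered download, the general
// technique behind "SAVE TO FILE"; not the extension's actual code.

function saveToFile(text: string, filename = "my-unwritten-book.txt"): void {
  const blob = new Blob([text], { type: "text/plain" });
  const url = URL.createObjectURL(blob);
  const link = document.createElement("a");
  link.href = url;
  link.download = filename;        // prompts the browser's download feature
  document.body.appendChild(link);
  link.click();
  link.remove();
  URL.revokeObjectURL(url);        // clean up the temporary object URL
}
```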
Working with the above templates
To work with the above templates, you could copy them and their parameters into LIT Prompts one by one, or you could download a single prompts file and upload it from the extension's Templates & Settings screen. This will replace your existing prompts.
You can download a prompts file (the above templates and their parameters) suitable for upload by clicking this button:
Kick the Tires
It's one thing to read about something and another to put what you've learned into practice. Let's see how this template performs.
- Make it your own. Without editing any of the templates, see what you can do just by providing answers to the 6 questions. Once you find arrangements you like, consider hard-coding these and building on them.
TL;DR References
ICYMI, here are blurbs for a selection of works I linked to in this post. If you didn't click through above, you might want to give them a look now.
- Cosmos by Carl Sagan. Though I quoted and linked to the PBS mini-series, I figured here I'd include a link to the book. Published in 1980 as a companion to the mini-series Cosmos: A Personal Voyage, the book consists of 13 chapters, each corresponding to an episode of the television series. It explores various topics such as the history of science and civilization, the nature of the Universe, space exploration, the inner workings of cells and DNA, and the implications of nuclear war. Sagan aimed to explain complex scientific ideas in a way that is accessible to anyone interested in learning. The book became a best-seller, spending 50 weeks on the Publishers Weekly list and 70 weeks on the New York Times Best Seller list. It received the Hugo Award for Best Non-Fiction Book in 1981 and contributed to the increased visibility of science-themed literature. Summary based on a draft from our day one template.
- ChatGPT Is a Blurry JPEG of the Web by Ted Chiang. Writing at the beginning of ChatGPT's rise to prominence, this article discusses the analogy between language models like ChatGPT and lossy compression algorithms. Chiang argues that while models can repackage and compress web information, they lack true understanding. Ultimately, Chiang concludes that starting with a blurry copy is not ideal when creating original content and that struggling to express one's thoughts is an essential element of the writing process.
- Bullies want you to think they're on your side by Cory Doctorow. The article discusses how bullies, including Big Tech companies, use a tactic of convincing their victims that they are the only ones who can keep them safe. This tactic involves creating a fortress-like environment where the victim surrenders their agency and becomes trapped under the bully's control. The author gives examples of how this tactic is used by tech companies, such as Apple and Amazon, to gain control over users and exploit their data. The article also explores how media companies are pursuing legal action against AI companies while simultaneously seeking to control and exploit their workers. The author argues that workers can only achieve a fair deal by forming unions to gain bargaining power. The article concludes by highlighting the importance of workers' rights in the face of AI advancements and the potential negative consequences of implementing stricter copyright laws. Summary based on a draft from our day one template.
- Wherein The Copia Institute Tells The Copyright Office There's No Place For Copyright Law In AI Training by Cathy Gellis. This article outlines a comment filed by the Copia Institute with the US Copyright Office, arguing that copyright law should not apply to AI training. The comment states that copyright law should not interfere with AI training because it would impede the public's right to consume works. They argue that AI training is an extension of the public's right to use tools, including software tools, to help them consume works. The comment also notes that AI training is not the same as copying or distributing copyrighted works, as it involves the analysis and processing of information rather than the creation of new works. They conclude that copyright law should not have a role in AI training and that AI training should be considered fair use or exempt from copyright altogether.
- Will A.I. Become the New McKinsey? by Ted Chiang. This article explores the potential risks and consequences of artificial intelligence (A.I.) in relation to capitalism. Chiang suggests that A.I. can be seen as a management-consulting firm, similar to McKinsey & Company, which concentrates wealth and disempowers workers. He argues that A.I. currently assists capital at the expense of labor, and questions whether there is a way for A.I. to assist workers instead of management. Chiang also discusses the need for economic policies to distribute the benefits of technology appropriately, as well as the importance of critical self-examination by those building world-shaking technologies. He concludes by emphasizing the need to question the assumption that more technology is always better and to engage in the hard work of building a better world. Summary based on a draft from our day one template.
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. There's a lot of history behind this paper. It was part of a chain of events that forced Timnit Gebru to leave Google where she was the co-lead of their ethical AI team, but more than that, it's one of the foundational papers in AI ethics, not to be confused with the field of "AI safety," which we will discuss later. It discusses several risks associated with large language models, including environmental/financial costs, biased language, lack of cultural nuance, misdirection of research, and potential for misinformation. If you want to engage critically with LLMs, this paper is a must read.