Magnifying Ideas and Expanding Text with AI
David Colarusso
Co-director, Suffolk's Legal Innovation & Tech Lab
This is the 28th post in my series 50 Days of LIT Prompts.
Yesterday we used AI to shorten text. Today, however, we'll write a template we can use to expand text. That's right, we'll select one or two sentences and turn them into a paragraph.
Here it's worth remembering some of what I said in the series' first post. I observed that I saw a lot of folks wanting to feed 5 words to AI and get out 500 (e.g., write me an essay discussing the lessons of the French Revolution) and suggested that this was like looking through the wrong end of the telescope. What people needed to be doing was feeding the AI 500 words and asking it to generate 5. In this way we could mitigate hallucinations and the like. You may also remember this observation from Ted Chiang which showed up the first time we considered having AI "write" something:
Some might say that the output of large language models doesn't look all that different from a human writer's first draft, but, again, I think this is a superficial resemblance. Your first draft isn't an unoriginal idea expressed clearly; it's an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That's what directs you during rewriting, and that's one of the things lacking when you start with text generated by an A.I.
In our template we dodged this bullet by distinguishing between Writing with a capital W and plain old writing. That is, we employed AI's assistance only in the drafting of boilerplate form-like text. In fact, I admitted to NOT using any LLM assistance in the writing of these introductions, citing that very text from Chiang and explaining, "this writing is where I figure things out." So, what gives? Am I advocating looking through the wrong end of the telescope? Not exactly.
In the same post where I admitted to forgoing an LLM's assistance, I teased the idea that I would start to explore how one might responsibly use large language models (LLMs) for "Writing." This is a first step in that exploration. It's also another step on our journey to create advanced simulations because the way we'll get an LLM to expand text is to have it simulate a writing assistant who questions us, the author. After our Socrates simulation, the idea of a dialogue shouldn't be a surprise, but as we've noted, our dialogs have been rather shallow, lacking the ability to do anything after reaching a goal. Today that changes.
Today's prompt will have our "assistant" ask us follow-up questions inspired by our original text and subsequent answers. After it has gotten enough context, it will take a stab at writing some text to help flesh things out. The trick to expanding our short sentence into a paragraph is to have us write a couple of paragraphs. This means we avoid looking through the wrong end of the telescope by feeding the LLM more text than we get out.
Is this really Writing with a capital W? Maybe. It's hard to say. I suspect the answer depends on how you approach the LLM's questions. It is, after all, a tool, and it can be used with varying degrees of skill. If you put a lot into your answers, the output is recognizably yours. If you're curt, it tends to "fill in the gaps," which we know to be a problem.
Here's an exchange I had with the prompt triggered by the text, "I went for a run today." Note: I said I'd show you how AI could help write something personal and original. I didn't say it would be good. ;)
Like I said, I didn't say it would be good, but as you can see, the output is clearly my words repackaged. This is not a blurry JPEG of the web. It might even be an okay journal entry. Of course, as with all of our AI outputs we should take this as a draft. What I find interesting is that it got me to admit something I wouldn't usually, that running is "me time." I feel a little guilty calling it that, but here we are. No doubt I fell victim to the ELIZA effect. Which is to say, the structure of a dialog tricked me into revealing something I might not have absent the prompt because on some level I felt like I was having a real conversation.
As we often do in this series, we have found a way to recast one of the issues with LLMs as a strength. Usually, the tendency to treat LLMs as people is a problem, but here it proves a useful way to break down one's guard. Many AI supporters suggest that AI writing can help folks overcome the blank page problem. Chiang reminds us that the blank page problem isn't a bug. It's a feature. We need the struggle, but maybe, just maybe, if we're very careful, we can use LLMs to help us get the ball rolling without having them do all of the hard work. This should bring to mind a quote from another piece by Chiang.
If there is any lesson that we should take from stories about genies granting wishes, it's that the desire to get something without effort is the real problem. Think about the story of "The Sorcerer's Apprentice," in which the apprentice casts a spell to make broomsticks carry water but is unable to make them stop. The lesson of that story is not that magic is impossible to control: at the end of the story, the sorcerer comes back and immediately fixes the mess the apprentice made. The lesson is that you can't get out of doing the hard work. The apprentice wanted to avoid his chores, and looking for a shortcut was what got him into trouble.
It turns out the way to get AI to write something interesting is to avoid using it as a shortcut. So...
Let's build something!
We'll do our building in the LIT Prompts extension. If you aren't familiar with the LIT Prompts extension, don't worry. We'll walk you through setting things up before we start building. If you have used the LIT Prompts extension before, skip to The Prompt Pattern (Template).
Up Next
Questions or comments? I'm on Mastodon @Colarusso@mastodon.social
Setup LIT Prompts
LIT Prompts is a browser extension built at Suffolk University Law School's Legal Innovation and Technology Lab to help folks explore the use of Large Language Models (LLMs) and prompt engineering. LLMs are sentence completion machines, and prompts are the text upon which they build. Feed an LLM a prompt, and it will return a plausible-sounding follow-up (e.g., "Four score and seven..." might return "years ago our fathers brought forth..."). LIT Prompts lets users create and save prompt templates based on data from an active browser window (e.g., selected text or the whole text of a webpage) along with text from a user. Below we'll walk through a specific example.
To get started, follow the first four minutes of the intro video or the steps outlined below. Note: The video only shows Firefox, but once you've installed the extension, the steps are the same.
Install the extension
Follow the links for your browser.
- Firefox: (1) visit the extension's add-ons page; (2) click "Add to Firefox;" and (3) grant permissions.
- Chrome: (1) visit the extension's web store page; (2) click "Add to Chrome;" and (3) review permissions / "Add extension."
If you don't have Firefox, you can download it here. Would you rather use Chrome? Download it here.
Point it at an API
Here we'll walk through how to use an LLM provided by OpenAI, but you don't have to use their offering. If you're interested in alternatives, you can find them here. You can even run your LLM locally, avoiding the need to share your prompts with a third-party. If you need an OpenAI account, you can create one here. Note: when you create a new OpenAI account you are given a limited amount of free API credits. If you created an account some time ago, however, these may have expired. If your credits have expired, you will need to enter a billing method before you can use the API. You can check the state of any credits here.
Log in to OpenAI, and navigate to the API documentation.
Once you are looking at the API docs, follow the steps outlined in the image above. That is:
- Select "API keys" from the left menu
- Click "+ Create new secret key"
On LIT Prompt's Templates & Settings screen, set your API Base to https://api.openai.com/v1/chat/completions and your API Key equal to the value you got above after clicking "+ Create new secret key". You get there by clicking the Templates & Settings button in the extension's popup:
- open the extension
- click on Templates & Settings
- enter the API Base and Key (under the section OpenAI-Compatible API Integration)
Once those two bits of information (the API Base and Key) are in place, you're good to go. Now you can edit, create, and run prompt templates. Just open the LIT Prompts extension, and click one of the options. I suggest, however, that you read through the Templates and Settings screen to get oriented. You might even try out a few of the preloaded prompt templates. This will let you jump right in and get your hands dirty in the next section.
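If you're curious what happens when a template runs, the extension simply makes an HTTP POST to the API Base carrying a JSON payload, with your API Key sent along in an authorization header. Here's a minimal sketch of what such a payload looks like for a chat completions endpoint (the extension's actual requests may include additional parameters):

{
  "model": "gpt-4o-mini",
  "temperature": 0.7,
  "max_tokens": 250,
  "messages": [
    {"role": "user", "content": "Four score and seven..."}
  ]
}

This is also why alternative providers work: as long as an endpoint accepts this OpenAI-compatible shape of request, the extension doesn't care who's on the other end.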
If you receive an error when trying to run a template after entering your Base and Key, and you are using OpenAI, make sure to check the state of any credits here. If you don't have any credits, you will need a billing method on file.
If you found this hard to follow, consider following along with the first four minutes of the video above. It covers the same content. It focuses on Firefox, but once you've installed the extension, the steps are the same.
The Prompt Patterns (Templates)
When crafting a LIT Prompts template, we use a mix of plain language and variable placeholders. Specifically, you can use double curly brackets to encase predefined variables. If the text between the brackets matches one of our predefined variable names, that section of text will be replaced with the variable's value. Today we'll be using {{scratch}} and {{highlighted}}. See the extension's documentation.
The {{highlighted}} variable contains any text you have highlighted/selected in the active browser tab when you open the extension. The {{scratch}} variable contains the text in your Scratch Pad. Remember, the Scratch Pad is accessible from the extension's popup window. The button is to the right of the Settings & Templates button that you have used before. However, today we'll be using the Scratch Pad as a place to store our work as we go. Which is to say, you won't need to access it manually when using this prompt.
If the text within brackets is not the name of a predefined variable, like {{What made you think that?}}, it will trigger a prompt for your user that echoes the placeholder (e.g., a text bubble containing, "What made you think that?"). After the user answers, their reply will replace this placeholder. A list of predefined variables can be found in the extension's documentation.
Here we'll have the prompts write novel questions such as {{What made you think that?}} and pass them to the next template using the {{passThrough}} variable. We use the Post-run Behavior parameter to govern what happens after a template is run. If you use Post-run Behavior to send one template's output to another template, the first template's output can be read by the second template via the {{passThrough}} variable. We'll also use JSON mode to format some of the prompt outputs as JSON, and as we know from our translation template, when the passThrough variable is JSON, you can access top-level keys by calling them like this: {{passThrough["reply"]}}.
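To make that concrete, suppose a template's JSON output were something like this (hypothetical values, borrowing the running example from above):

{
  "transcript": "WRITER: I went for a run today.",
  "reply": "What made you think that?"
}

In the next template, {{passThrough["transcript"]}} would then be replaced with the transcript line and {{passThrough["reply"]}} with the question.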
To use our template, simply select the text you want to expand and run the "Expand selected (short) text" template. That being said, this is actually the most complicated prompt we've seen so far. We'll use the DYNAMIC behavior for the first time, and there are actually six individual templates all feeding into each other. FWIW, using the DYNAMIC Post-run Behavior will trigger the prompt named in the {{passThrough["next"]}} variable.
Admittedly, that's a lot to deal with. So, to help make this easier to understand, I've used the LIT Prompts comments feature. That is, any text between [# and #] isn't part of the template's output. This will let me explain as we go. Remember, text in the comments is for YOU, not the LLM.
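For example, a hypothetical template reading [# A note for the reader. #]Summarize the following... would send only "Summarize the following..." to the model; everything between the comment markers is stripped before the prompt is run.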
Here's the first template's title.
Expand selected (short) text
Here's the template's text.
[# This template is the first in a chain of templates that can either end or loop back on itself. It works by getting the LLM to generate some dialog and send that along with text the user has highlighted to another template. That template takes an action and feeds into another template, and so on and so on. Note: we're using gpt-4o-mini as a model here and in some of the subsequent templates in this chain. When this model is retired it will break things and require updating. #]You are an actor playing the role of a helpful writing assistant. In this scene you will interact with a writer. You will ask them some questions about some copy they are working on. Your goal is to ask them enough questions that their answers can be used to expand on the existing text. That is, you want them to give you things one could use to expand on the existing text. As this is a dialogue, we will present it in the form of a transcript. The writer will start by reading what they have so far.
WRITER: {{highlighted}}
Think about how your character would respond and craft an appropriate reply. You will provide the text of this reply along with one other piece of information as a JSON object. The object will have two key-value pairs. The first key-value pair's key is "transcript" and the value is that of the transcript above, starting with "WRITER:" and followed by the text of their copy. Be sure to escape any quotation marks. The second key-value pair has a key called "reply" and its value is the response you crafted above (i.e., it is the text of your character's reply to the above, your first question for the writer). Include only the text of your reply (e.g., do NOT preface the text with the name of the speaker).
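Since JSON strings are delimited by quotation marks, any quotation marks inside the transcript have to be backslash-escaped. A well-formed output from this template might look something like this (hypothetical values, echoing the "me time" exchange above):

{
  "transcript": "WRITER: I went for a run today. It was \"me time.\"",
  "reply": "What do you mean when you call running \"me time\"?"
}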
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models. Here we're using gpt-4o-mini because of its support for JSON mode.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 250. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. One token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: Yes. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON
- Output To: Hidden. We can output the first reply from the LLM to a number of places (the screen, the clipboard...). Here, I've chosen to hide the output entirely. This is useful when passing output to another template.
- Post-run Behavior: Role Play 1. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the "Role Play 1" template.
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
Here's the next template's title.
Role Play 1
Here's the template's text.
{{passThrough["transcript"]}}
YOU: {{passThrough["reply"]}}
WRITER: {{{{passThrough["reply"]}}*}} [# Here we've encased {{passThrough["reply"]}} inside a set of curly brackets. Imagine {{passThrough["reply"]}} has the value "What made you think that?" Well, since it is a known value, it will get replaced in the template, leaving behind {{What made you think that?}}. However, this is not a known value. So the user will be asked "What made you think that?" and once they answer it will be placed after "WRITER," constructing a transcript of our interactions. Why the asterisk? It's a way to force user input. Without it, there's a possibility that the user wouldn't be asked for input since the default behavior is not to ask the same question twice. Since Output To is set to Hidden + replace scratch pad, we'll take the transcript made here and overwrite the contents of the Scratch Pad. And since Post-Run Behavior is set to "Role Play 2" that template will be triggered. #]
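To trace what this template actually produces, suppose the writer highlighted "I went for a run today." After one round of questions and answers, the Scratch Pad would hold a growing transcript along these lines (the question and answer here are hypothetical):

WRITER: I went for a run today.
YOU: What inspired you to go for a run today?
WRITER: I needed some me time.

It's this transcript, not the original selection alone, that the later templates read via {{scratch}}.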
And here are the template's parameters:
- Output Type: Prompt. By choosing "Prompt" the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values.
- Model: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- Temperature: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- Max Tokens: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- JSON: No. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Hidden + replace scratch pad. We can output the first reply from the LLM to a number of places (the screen, the clipboard...). Here, I've chosen to hide the output from the screen and replace the current text of the Scratch Pad with this output.
- Post-run Behavior: Role Play 2. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the "Role Play 2" template.
- Hide Button: checked. This determines if a button is displayed for this template in the extension's popup window. We've checked the option because this template shouldn't be triggered by the user directly. Rather, it needs to be triggered by another template so that there's something in the {{passThrough}} variable.
Here's the next template's title.
Role Play 2
Here's the template's text.
[# This template looks very much like the first in our chain, except it pulls from the Scratch Pad and feeds into "Role Play 3." #] You are an actor playing the role of a helpful writing assistant. In this scene you will interact with a writer. You will ask them questions about some copy they are working on. Your goal is to ask them enough questions that their answers can be used to expand on the existing text. That is, you want them to give you things one could use to expand on the existing text. As this is a dialogue, we will present it in the form of a transcript. The writer began by reading the copy they have so far.
{{scratch}}
Think about how your character would respond and craft an appropriate reply. You will provide the text of this reply along with one other piece of information as a JSON object. The object will have two key-value pairs. The first key-value pair's key is "transcript" and the value is that of the transcript above, starting with "WRITER:" followed by the text of their copy and the subsequent questions and answers. Be sure to escape any quotation marks. And DO NOT repeat yourself (i.e., ask new questions). The second key-value pair has a key called "reply" and its value is the response you crafted above (i.e., it is the text of your character's reply to the above, your question for the writer). Make sure it's a question. Include only the text of your reply (e.g., do NOT preface the text with the name of the speaker).
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models. Here we're using gpt-4o-mini because of its support for JSON mode.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 2000. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. One token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: Yes. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON
- Output To: Hidden. We can output the first reply from the LLM to a number of places (the screen, the clipboard...). Here, I've chosen to hide the output entirely. This is useful when passing output to another template.
- Post-run Behavior: Role Play 3. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the "Role Play 3" template.
- Hide Button: checked. This determines if a button is displayed for this template in the extension's popup window. We've checked the option because this template shouldn't be triggered by the user directly. Rather, it needs to be triggered by another template so that there's something in the {{passThrough}} variable.
Here's the next template's title.
Role Play 3
Here's the template's text.
YOU: {{passThrough["reply"]}}
WRITER: {{{{passThrough["reply"]}}*}} [# Here, unlike "Role Play 1," we append to, rather than overwrite, the Scratch Pad, meaning we just add to the transcript before passing things on to "Role Play 4." Again we place an asterisk before the closing curly brackets to force user input. #]
And here are the template's parameters:
- Output Type: Prompt. By choosing "Prompt" the template runs without being submitted to an LLM. Its output is just the template after slotting in variable values.
- Model: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- Temperature: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- Max Tokens: n/a. Since Output Type is set to Prompt, we don't have to set LLM-specific parameters.
- JSON: No. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Hidden + append to scratch pad. We can output the first reply from the LLM to a number of places (the screen, the clipboard...). Here, I've chosen to hide the output from the screen and append the output to the end of the text already in the Scratch Pad.
- Post-run Behavior: Role Play 4. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the "Role Play 4" template.
- Hide Button: checked. This determines if a button is displayed for this template in the extension's popup window. We've checked the option because this template shouldn't be triggered by the user directly. Rather, it needs to be triggered by another template so that there's something in the {{passThrough}} variable.
Here's the next template's title.
Role Play 4
Here's the template's text.
[# This looks a lot like "Role Play 2," but since it uses the Post-run Behavior DYNAMIC, it can trigger different templates based on the contents of the transcript (i.e., it will either loop back to "Role Play 2" or move us along to "Role Play 5"). #]You are an actor playing the role of a helpful writing assistant. In this scene you will interact with a writer. You will ask them questions about some copy they are working on. Your goal is to ask them enough questions that their answers can be used to expand on the existing text. That is, you want them to give you things one could use to expand on the existing text. As this is a dialogue, we will present it in the form of a transcript. The writer began by reading the copy they have so far.
{{scratch}}
You will provide a JSON object in response to the above with a key named `next`. In your role as a writing assistant, consider if there is enough material in the above transcript to pad the original copy by 20%. You probably need at least three or four rounds of Q&A. However, if the replies are light on content, you may need more. If you have enough material to add 20% in length to the original copy, set the value of `next` to "Role Play 5". Otherwise, if you feel you need more, the value of `next` should be "Role Play 2".
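Assuming the model follows its instructions, the entirety of this template's output is a tiny JSON object taking one of exactly two forms:

{"next": "Role Play 2"}

or

{"next": "Role Play 5"}

The DYNAMIC Post-run Behavior then reads {{passThrough["next"]}} and triggers the template with that name, either looping the conversation or moving on to the final rewrite.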
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models. Here we're using gpt-4o-mini because of its support for JSON mode.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 250. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. One token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: Yes. This asks the model to output its answer in something called JSON, which is a nice machine-readable way to structure data. See https://en.wikipedia.org/wiki/JSON
- Output To: Hidden. We can output the first reply from the LLM to a number of places (the screen, the clipboard...). Here, I've chosen to hide the output entirely. This is useful when passing output to another template.
- Post-run Behavior: DYNAMIC. Like the choice of output, we can decide what to do after a template runs. Here we will trigger the prompt named in the {{passThrough["next"]}} variable.
- Hide Button: checked. This determines if a button is displayed for this template in the extension's popup window. We've checked the option because this template shouldn't be triggered by the user directly. Rather, it needs to be triggered by another template so that there's something in the {{passThrough}} variable.
Here's the next template's title.
Role Play 5
Here's the template's text.
[# Having collected more context from the user, we're now ready to produce some new text and copy that to the clipboard (Output To = Screen + clipboard). #]You are a helpful writing assistant. You've just had a conversation with a writer about some copy they're working on, and your task is to take what you learned from that conversation and rewrite the original copy such that it's about 20% longer. Here's the text of your conversation. The writer began by reading the copy they have so far.
{{scratch}}
Use what you learned above to rewrite the original copy, adding details learned above. Do your best to keep the writer's voice and style while adding relevant details from your conversation to that first entry. Do NOT embellish! Do NOT make things up! Keep your additions firmly based on the content of your conversation, and don't make your copy too long! Your goal is simply to flesh out the original text (i.e., the writer's first utterance above), adding about 20% in length. That being said, provide your new longer copy below.
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 1000. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. One token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: No. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Screen + clipboard. We can output the first reply from the LLM to a number of places (the screen, the clipboard...). Here, I've chosen the screen and clipboard so the results will be ready to paste where we like.
- Post-run Behavior: FULL STOP. Like the choice of output, we can decide what to do after a template runs. To keep things simple, I went with "FULL STOP."
- Hide Button: checked. This determines if a button is displayed for this template in the extension's popup window. We've checked the option because this template shouldn't be triggered by the user directly. Rather, it needs to be triggered by another template so that there's something in the {{passThrough}} variable.
Working with the above templates
To work with the above templates, you could copy them and their parameters into LIT Prompts one by one, or you could download a single prompts file and upload it from the extension's Templates & Settings screen. This will replace your existing prompts.
You can download a prompts file (the above template and its parameters) suitable for upload by clicking this button:
Kick the Tires
It's one thing to read about something and another to put what you've learned into practice. Let's see how this template performs.
- Writer's block? The next time you find yourself suffering from writer's block, try turning this tool on whatever you've managed to get on the page.
- Try a different assistant. Edit the prompts to change the type of questions the assistant asks by suggesting a different persona. Give it some personality.
TL;DR References
ICYMI, here are blurbs for a selection of works I linked to in this post. If you didn't click through above, you might want to give them a look now.
- ChatGPT Is a Blurry JPEG of the Web by Ted Chiang. Writing at the beginning of ChatGPT's rise to prominence, this article discusses the analogy between language models like ChatGPT and lossy compression algorithms. Chiang argues that while models can repackage/compress web information, they lack true understanding. Ultimately, Chiang concludes that starting with a blurry copy is not ideal when creating original content and that struggling to express one's thoughts is an essential element of the writing process.
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. There's a lot of history behind this paper. It was part of a chain of events that forced Timnit Gebru to leave Google where she was the co-lead of their ethical AI team, but more than that, it's one of the foundational papers in AI ethics, not to be confused with the field of "AI safety," which we will discuss later. It discusses several risks associated with large language models, including environmental/financial costs, biased language, lack of cultural nuance, misdirection of research, and potential for misinformation. If you want to engage critically with LLMs, this paper is a must read.
- Will A.I. Become the New McKinsey? by Ted Chiang. This article explores the potential risks and consequences of artificial intelligence (A.I.) in relation to capitalism. Chiang suggests that A.I. can be seen as a management-consulting firm, similar to McKinsey & Company, which concentrates wealth and disempowers workers. He argues that A.I. currently assists capital at the expense of labor, and questions whether there is a way for A.I. to assist workers instead of management. Chiang also discusses the need for economic policies to distribute the benefits of technology appropriately, as well as the importance of critical self-examination by those building world-shaking technologies. He concludes by emphasizing the need to question the assumption that more technology is always better and to engage in the hard work of building a better world. Summary based on a draft from our day one template.