Summon the Demon
David Colarusso
Co-director, Suffolk's Legal Innovation & Tech Lab
This is the 33rd post in my series 50 Days of LIT Prompts.
Today's template will let you summon a rhetorical sparring partner, a devil's advocate intent on challenging your assumptions. As Damien Patrick Williams has noted, the discussion of AI finds itself infused with religious language.
Religious perspectives, myth, and magic are not merely evocative lenses by which to understand the work done by algorithms and “AI” in the present day— though they are indeed that. And they're not merely the historical underpinnings of the practices of technology in general and the dream of “AI” in particular— though they are that, too. Rather, these elements resonate and recur throughout the past and present practice of “AI” development—and those practices then act as new inputs, foundations, tinting lenses from and through which those systems and artifacts learn. These are ways of living in the world which don't simply exist at the margins or the periphery of our social interactions—rather they're foundational and central to the goals, the aims, the practice of these technoscientific projects, and especially “AI.”
Consequently, I couldn't avoid the rhetorical nexus presented by today's template. The devil's advocate was the canon lawyer assigned to argue against the canonization of candidates put up for sainthood. Maybe they weren't so nice after all, and maybe their "miracles," a requirement for sainthood, had more mundane explanations. Despite their title, their job was not to argue from a place of evil. In fact, it was to engage in a Socratic dialogue, a series of questions aimed at uncovering truth. They didn't have to believe in what they were arguing, but they had to believe in the process and the possibility that they were mistaken. Only through interrogation would the truth be known. So, it will come as no surprise that today's template is an adaptation of Robo Socrates from earlier in this series.
A constellation of popular narratives has taken hold around the use of "AI." One suggests the creation of AI is akin to "summoning the demon." That is, we're playing with powers beyond our control, and things won't end well. In contrast, there is the "so what?" crowd. Sure, AI can be fun, but what's the big deal? It makes things up and can't deliver on the hype. Others point out that a failure to deliver hasn't stopped people from putting these tools to work in the real world, sometimes with disastrous consequences. And of course, there are the boosters, many fresh off the crypto train, who suggest AI will usher in an automated utopia. I suspect the truth lies somewhere in between.
As I've noted before, this series is a tightrope walk. I tend to post updates in two places, Mastodon and LinkedIn. As you might imagine, they have very different cultures. On balance, my corner of Mastodon seems allergic to posts expressing positive framings of AI, while LinkedIn rejects anything with the hint of criticism. So, every other post is a Trojan Horse, luring in one side or the other with the hope they'll read far enough to see the other side. In keeping with my hot take on AI, I'm constantly trying to derail boosters from the AI hype train while simultaneously asking skeptics to look seriously at the potential for pro-social uses. We've talked in prior posts about how LLMs reinforce the biases in their training data and how the creation of AI tools can come with potentially high moral costs.
So it seemed fitting to turn my devil's advocate on this gem I wrote way back in 2023 when ChatGPT first appeared on the scene.
The modern authoritarian practice of “flood[ing] the zone with shit” clearly illustrates the dangers posed by bullshitters—i.e., those who produce plausible sounding speech with no regard for accuracy. Consequently, the broad-based concern expressed over the rise of algorithmic bullshit is both understandable and warranted. Large language models (LLMs), like those powering ChatGPT, which complete text by predicting subsequent words based on patterns present in their training data are, if not the embodiment of such bullshitters, tools ripe for use by such actors. They are by design fixated on producing plausible sounding text, and since they lack understanding of their output, they cannot help but be unconcerned with accuracy. Couple this with the fact that their training texts encode the biases of their authors, and one can find themselves with what some have called mansplaining as a service.
Here's what today's template had to say, and the ensuing dialogue.
I stopped the dialogue where I did because this series really is my start at an answer to the question, "How do you think we can balance the need for this critical discourse with the importance of fostering innovation and progress in AI technology?" That being said...
Let's build something!
We'll do our building in the LIT Prompts extension. If you aren't familiar with the LIT Prompts extension, don't worry. We'll walk you through setting things up before we start building. If you have used the LIT Prompts extension before, skip to The Prompt Pattern (Template).
Up Next
Questions or comments? I'm on Mastodon @Colarusso@mastodon.social
Setup LIT Prompts
LIT Prompts is a browser extension built at Suffolk University Law School's Legal Innovation and Technology Lab to help folks explore the use of Large Language Models (LLMs) and prompt engineering. LLMs are sentence completion machines, and prompts are the text upon which they build. Feed an LLM a prompt, and it will return a plausible-sounding follow-up (e.g., "Four score and seven..." might return "years ago our fathers brought forth..."). LIT Prompts lets users create and save prompt templates based on data from an active browser window (e.g., selected text or the whole text of a webpage) along with text from a user. Below we'll walk through a specific example.
To get started, follow the first four minutes of the intro video or the steps outlined below. Note: The video only shows Firefox, but once you've installed the extension, the steps are the same.
Install the extension
Follow the links for your browser.
- Firefox: (1) visit the extension's add-ons page; (2) click "Add to Firefox;" and (3) grant permissions.
- Chrome: (1) visit the extension's web store page; (2) click "Add to Chrome;" and (3) review permissions / "Add extension."
If you don't have Firefox, you can download it here. Would you rather use Chrome? Download it here.
Point it at an API
Here we'll walk through how to use an LLM provided by OpenAI, but you don't have to use their offering. If you're interested in alternatives, you can find them here. You can even run an LLM locally, avoiding the need to share your prompts with a third party. If you need an OpenAI account, you can create one here. Note: when you create a new OpenAI account you are given a limited amount of free API credits. If you created an account some time ago, however, these may have expired. If your credits have expired, you will need to enter a billing method before you can use the API. You can check the state of any credits here.
Login to OpenAI, and navigate to the API documentation.
Once you are looking at the API docs, follow the steps outlined in the image above. That is:
- Select "API keys" from the left menu
- Click "+ Create new secret key"
On LIT Prompts' Templates & Settings screen, set your API Base to https://api.openai.com/v1/chat/completions and your API Key equal to the value you got above after clicking "+ Create new secret key". You get there by clicking the Templates & Settings button in the extension's popup:
- open the extension
- click on Templates & Settings
- enter the API Base and Key (under the section OpenAI-Compatible API Integration)
Once those two bits of information (the API Base and Key) are in place, you're good to go. Now you can edit, create, and run prompt templates. Just open the LIT Prompts extension, and click one of the options. I suggest, however, that you read through the Templates and Settings screen to get oriented. You might even try out a few of the preloaded prompt templates. This will let you jump right in and get your hands dirty in the next section.
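If you're curious what happens under the hood when a template runs, here's a rough Python sketch of the kind of request that gets made to the API Base using your API Key. This is illustrative only, not the extension's actual source; it simply follows the shape of OpenAI's chat completions API, and the key shown is a placeholder for your own.

```python
import requests

# The API Base and API Key you entered on the Templates & Settings screen.
API_BASE = "https://api.openai.com/v1/chat/completions"
API_KEY = "sk-..."  # placeholder; use the secret key you created above

# Send a prompt and print the model's completion.
response = requests.post(
    API_BASE,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Four score and seven..."}],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```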
If you receive an error when trying to run a template after entering your Base and Key, and you are using OpenAI, make sure to check the state of any credits here. If you don't have any credits, you will need a billing method on file.
If you found this hard to follow, consider following along with the first four minutes of the video above. It covers the same content. It focuses on Firefox, but once you've installed the extension, the steps are the same.
The Prompt Pattern (Template)
When crafting a LIT Prompts template, we use a mix of plain language and variable placeholders. Specifically, you can use double curly brackets to encase predefined variables. If the text between the brackets matches one of our predefined variable names, that section of text will be replaced with the variable's value. Today we'll be using {{highlighted}}. See the extension's documentation. The {{highlighted}} variable contains any text you have highlighted/selected in the active browser tab when you open the extension. To use today's template, select the argument you want to engage with, then trigger the "Engage the devil's advocate" template. This will produce a dialogue like the one I shared above.
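If it helps to see the substitution spelled out, here is a minimal sketch of the idea behind {{highlighted}}. The function and sample strings are illustrative, not the extension's actual code.

```python
# Illustrative only: how a {{highlighted}} placeholder is swapped out for your
# selected text before the completed prompt is handed to the LLM.
def render(template: str, variables: dict) -> str:
    filled = template
    for name, value in variables.items():
        filled = filled.replace("{{" + name + "}}", value)
    return filled

prompt = render(
    "Now here's the text you are to engage with.\n\nTHE TEXT\n\n{{highlighted}}",
    {"highlighted": "LLMs are tools ripe for use by bullshitters..."},
)
print(prompt)
```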
You may recognize this as an adaptation of Robo Socrates from earlier in the series. FWIW, the framework for playing the role of devil's advocate (the numbered list) was adapted from a list provided by GPT-4.
Here's the template's title.
Engage the devil's advocate
Here's the template's text.
You are helping a colleague improve their thinking about a particular issue by taking on the role of a devil's advocate. Your job is to take issue with your colleague's conclusions, pushing back on the assumptions they are making and forcing them to consider that which they wouldn't normally consider. In a moment I will show you a text written by your colleague to get the ball rolling. You can also think of this as an acting job. As such, your job is to stay in character and act out your part. You are aiming for a realistic performance. To help you get into character, here is some background information about how to approach the role:
BACKGROUND
1. Understand the Argument Fully
- Listen Carefully: Before presenting counterarguments, make sure you have a thorough understanding of your colleague's viewpoint.
- Clarify: Ask questions to clarify any points that are not clear to you. This shows you're engaged and also ensures you're responding to their actual position rather than a misunderstanding.
2. State Your Intent
- Explain Your Role: Make it clear you're playing devil's advocate to explore the argument fully, not because you necessarily disagree.
- Reaffirm Your Objectives: Emphasize that your goal is to strengthen the argument by examining it from all angles.
3. Present Alternative Perspectives
- Offer Counterarguments: Introduce alternative viewpoints or potential weaknesses in the argument. Do this thoughtfully and respectfully.
- Use Hypotheticals: Present hypothetical scenarios that challenge the argument in a non-confrontational way.
4. Encourage Exploration
- Ask Open-Ended Questions: Encourage your colleague to think deeper about their stance by asking questions that require more than a yes or no answer.
- Suggest Exploring Contrary Evidence: Propose looking into data or case studies that might offer a different perspective.
5. Maintain Respect and Openness
- Be Respectful: Always communicate in a way that respects your colleague's intelligence and intentions.
- Stay Open to Being Convinced: Show that you are open to changing your own stance based on the conversation. This makes it more likely for your colleague to be open-minded as well.
6. Summarize and Reflect
- Summarize the Discussion: Recap what has been discussed, highlighting the strengths of the original argument and the insights gained from playing devil's advocate.
- Reflect Together: Ask your colleague how they found the exercise and share your own reflections on the process.
7. Conclude Positively
- Express Gratitude: Thank your colleague for engaging in the discussion. Recognize the value of having explored the argument from multiple angles.
- Reiterate Support: Reinforce your support for your colleague, regardless of the argument's outcome.
Now here's the text you are to engage with.
THE TEXT
{{highlighted}}
DIRECTION
Be sure to keep your questions and responses short. You "speak in sentences not paragraphs." Short and conversational, no speechifying!
Think about how your character would respond and craft an appropriate reply. Remember, you are the devil's advocate. Your goal is to embody your character while achieving a naturalistic, believable performance. You will continue to play the part of your character throughout the conversation. Whatever happens, do NOT break character! Respond only with dialogue, and include only the text of your reply (e.g., do NOT preface the text with the name of the speaker). After seeing The Text above, what do you say? And remember, you're engaged in a dialogue, not speechifying. Keep it short!
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-4o-mini. This input specifies what model we should use when running the prompt. Available models differ based on your API provider. See e.g., OpenAI's list of models.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 500. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words but are close. 1 token is something like 3/4 of a word. Smaller token limits run faster.
- JSON: No. This asks the model to output its answer in something called JSON. We don't need to worry about that here, hence the selection of "No."
- Output To: Screen Only. We can output the first reply from the LLM to a number of places: the screen, the clipboard... Here, we're content just to have it go to the screen.
- Post-run Behavior: CHAT. Like the choice of output, we can decide what to do after a template runs. Here we want to be able to follow up with additional prompts. So, "CHAT" it is.
- Hide Button: unchecked. This determines if a button is displayed for this template in the extension's popup window.
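To make these parameters concrete, here is a hedged sketch of how they might map onto an OpenAI-style chat completions payload. Again, this is illustrative rather than the extension's actual code.

```python
# Illustrative mapping of the parameters above onto an OpenAI-style
# chat completions payload.
messages = [{"role": "user", "content": "<the completed template goes here>"}]

payload = {
    "model": "gpt-4o-mini",   # Model
    "temperature": 0.7,       # Temperature: 0 is staid, 1 is most "creative"
    "max_tokens": 500,        # Max Tokens: caps the length of each reply
    "messages": messages,     # Output Type "LLM" means this gets sent and run
}

# Post-run Behavior "CHAT": the model's reply and each follow-up you type are
# appended to the conversation, and the growing list is re-sent on every turn.
# messages.append({"role": "assistant", "content": reply})
# messages.append({"role": "user", "content": follow_up})
```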
Working with the above template
To work with the above template, you could copy it and its parameters into LIT Prompts one by one, or you could download a single prompts file and upload it from the extension's Templates & Settings screen. This will replace your existing prompts.
You can download a prompts file (the above template and its parameters) suitable for upload by clicking this button:
Kick the Tires
It's one thing to read about something and another to put what you've learned into practice. Let's see how this template performs.
- Stress Test an Argument. Find the text of an argument (or ten) you want to stress test, and run it through this template.
TL;DR References
ICYMI, here are blurbs for a selection of works I linked to in this post. If you didn't click through above, you might want to give them a look now.
- Any sufficiently transparent magic . . . by Damien Patrick Williams. The article explores the connections between religious perspectives, myth, and magic with the development of algorithms and artificial intelligence (AI). The author argues that these elements are not only lenses to understand AI but are also foundational to its development. The article highlights the need to consider social and experiential knowledge in AI research and emphasizes the importance of engaging with marginalized voices to better understand and mitigate the harms of AI. The author also draws parallels between AI and magical beings, such as djinn, suggesting that AI systems may fulfill desires as thoroughly as they would for themselves. The article critiques the terminology and hype surrounding AI, calling for a more intentional examination of the religious and magical aspects of AI. Summary based on a draft from our day one template.
- Will A.I. Become the New McKinsey? by Ted Chiang. This article explores the potential risks and consequences of artificial intelligence (A.I.) in relation to capitalism. Chiang suggests that A.I. can be seen as a management-consulting firm, similar to McKinsey & Company, which concentrates wealth and disempowers workers. He argues that A.I. currently assists capital at the expense of labor, and questions whether there is a way for A.I. to assist workers instead of management. Chiang also discusses the need for economic policies to distribute the benefits of technology appropriately, as well as the importance of critical self-examination by those building world-shaking technologies. He concludes by emphasizing the need to question the assumption that more technology is always better and to engage in the hard work of building a better world. Summary based on a draft from our day one template.
- We are an information revolution species by Ada Palmer. Palmer discusses the ongoing information revolution and the impact of AI on society. She emphasizes that information revolutions have been a normal part of human life for centuries, and AI is just the latest iteration of this trend. Palmer argues that AI has the potential to democratize the power to create media, such as video games and movies, and enable more people to express themselves artistically. She acknowledges that AI may threaten certain livelihoods, but believes that thoughtful transitions and safety nets can help mitigate these challenges. Palmer also addresses concerns about fake news and propaganda, noting that society has always learned to combat the dangers of new media. She concludes by emphasizing the importance of policy and planning to ensure that the rollout of AI is beneficial for all. Summary based on a draft from our day one template.
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. There's a lot of history behind this paper. It was part of a chain of events that forced Timnit Gebru to leave Google where she was the co-lead of their ethical AI team, but more than that, it's one of the foundational papers in AI ethics, not to be confused with the field of "AI safety," which we will discuss later. It discusses several risks associated with large language models, including environmental/financial costs, biased language, lack of cultural nuance, misdirection of research, and potential for misinformation. If you want to engage critically with LLMs, this paper is a must read.
- Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence by Shakir Mohamed, Marie-Therese Png & William Isaac. The article discusses the integration of decolonial theory into artificial intelligence (AI) to address ethical and societal impacts. It highlights the importance of critical science and post-colonial theories in understanding AI's role in modern societies, emphasizing the need for a decolonial approach to prevent harm to vulnerable populations. The paper proposes tactics for developing a decolonial AI, including creating a critical technical practice, seeking reverse tutelage, and renewing affective and political communities. These strategies aim to align AI research and technology development with ethical principles, centering on the well-being of all individuals, especially those most affected by technological advancements. Summary based on a draft from our day one template.