Sadly Not, Havoc Dinosaur

Algorithmic BS

Shoot the breeze with your base LLM

Headshot of the author, David Colarusso

This is the 34th post in my series 50 Days of LIT Prompts.

I hope folks are adding these daily templates to a growing list of LIT Prompts. I find myself reaching for various prompt templates throughout the day. That being said, today we cover a template that really should be in your collection. You may have thought to add it yourself, or it may have been so obvious that you overlooked it. I'm talking about a template for chatting with an LLM. That's right, your very own generic chatbot.

It's almost too simple, but I genuinely find it useful to have access to an LLM from the browser without the need to open ChatGPT or the like. Hopefully, you do too. That being said, I'd be remiss if I didn't provide context for today's title. So, before we jump into making something, here's an expanded excerpt from the piece I quoted yesterday.

The modern authoritarian practice of “flood[ing] the zone with shit” clearly illustrates the dangers posed by bullshitters—i.e., those who produce plausible sounding speech with no regard for accuracy. Consequently, the broad-based concern expressed over the rise of algorithmic bullshit is both understandable and warranted. Large language models (LLMs), like those powering ChatGPT, which complete text by predicting subsequent words based on patterns present in their training data are, if not the embodiment of such bullshitters, tools ripe for use by such actors. They are by design fixated on producing plausible sounding text, and since they lack understanding of their output, they cannot help but be unconcerned with accuracy. Couple this with the fact that their training texts encode the biases of their authors, and one can find themselves with what some have called mansplaining as a service.

For one, "algorithmic BS artists" lack agency. They do not understand their input or output. Their "dishonesty" is a consequence of their use case, not their character. Context matters, and tools are not moral actors. Any agency, moral or otherwise, lies with the developers and users of such tools. By stepping into these roles we can better explore the questions presented by their use. Additionally, as educators, it is part of our duty to prepare our students for the realities of a world where such tools exist. To do that we think it's important to understand, not just how they work now, but to explore new use cases. The tools presented here are, in part, an attempt to imagine pro-social uses for such technology, ones that don't result in the death of scholarship or truth. In fact, they are attempts to use them in service of both. Of course, any assessment of a tool's use must consider a broad context, including its creation. This raises a good many questions. Readers can find more discussion of these at Coding The Law.org and see an example of how we've responded to some of them in our prior AI work. Below, however, we will focus on the tools found here on Find My Cite, which largely ask, "can we work with this particular 'bullshitter' (i.e., LLMs)?"

Since writing that, I've found a good number of readings to help people think about what it means to work with a BS artist. You can find some of them below. That being said...

Let's build something!

We'll do our building in the LIT Prompts extension. If you aren't familiar with the LIT Prompts extension, don't worry. We'll walk you through setting things up before we start building. If you have used the LIT Prompts extension before, skip to The Prompt Pattern (Template).

Questions or comments? I'm on Mastodon @Colarusso@mastodon.social


Setup LIT Prompts

7 min intro video

LIT Prompts is a browser extension built at Suffolk University Law School's Legal Innovation and Technology Lab to help folks explore the use of Large Language Models (LLMs) and prompt engineering. LLMs are sentence completion machines, and prompts are the text upon which they build. Feed an LLM a prompt, and it will return a plausible-sounding follow-up (e.g., "Four score and seven..." might return "years ago our fathers brought forth..."). LIT Prompts lets users create and save prompt templates based on data from an active browser window (e.g., selected text or the whole text of a webpage) along with text from a user. Below we'll walk through a specific example.

To get started, follow the first four minutes of the intro video or the steps outlined below. Note: The video only shows Firefox, but once you've installed the extension, the steps are the same.

Install the extension

Follow the links for your browser.

  • Firefox: (1) visit the extension's add-ons page; (2) click "Add to Firefox;" and (3) grant permissions.
  • Chrome: (1) visit the extension's web store page; (2) click "Add to Chrome;" and (3) review permissions / "Add extension."

If you don't have Firefox, you can download it here. Would you rather use Chrome? Download it here.

Point it at an API

Here we'll walk through how to use an LLM provided by OpenAI, but you don't have to use their offering. If you're interested in alternatives, you can find them here. You can even run an LLM locally, avoiding the need to share your prompts with a third party. If you need an OpenAI account, you can create one here. Note: when you create a new OpenAI account you are given a limited amount of free API credits. If you created an account some time ago, however, these may have expired. If your credits have expired, you will need to enter a billing method before you can use the API. You can check the state of any credits here.

Log in to OpenAI, and navigate to the API documentation.

Once you are looking at the API docs, follow these steps:

  1. Select "API keys" from the left menu
  2. Click "+ Create new secret key"

On LIT Prompt's Templates & Settings screen, set your API Base to https://api.openai.com/v1/chat/completions and your API Key equal to the value you got above after clicking "+ Create new secret key". You get there by clicking the Templates & Settings button in the extension's popup:

  1. open the extension
  2. click on Templates & Settings
  3. enter the API Base and Key (under the section OpenAI-Compatible API Integration)

Once those two bits of information (the API Base and Key) are in place, you're good to go. Now you can edit, create, and run prompt templates. Just open the LIT Prompts extension, and click one of the options. I suggest, however, that you read through the Templates and Settings screen to get oriented. You might even try out a few of the preloaded prompt templates. This will let you jump right in and get your hands dirty in the next section.
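If you'd like to sanity-check your API Base and Key before leaning on the extension, here's a minimal Python sketch that sends a single message to the same chat completions endpoint. This isn't part of LIT Prompts, and the model name is an assumption; use any chat model your account can access. The same request shape works for other OpenAI-compatible providers, including locally hosted ones, if you swap out the API Base.

# A minimal sketch (not part of LIT Prompts) for checking that your API Base
# and Key work. The model name below is an assumption; swap in whatever chat
# model your account (or other OpenAI-compatible provider) offers.
import json
import urllib.request

API_BASE = "https://api.openai.com/v1/chat/completions"  # same value as the extension's API Base
API_KEY = "sk-..."  # paste the secret key you created above

payload = {
    "model": "gpt-4o-mini",  # assumption: any chat model you have access to works
    "messages": [{"role": "user", "content": "Say hello in five words."}],
}

request = urllib.request.Request(
    API_BASE,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)

with urllib.request.urlopen(request) as response:
    reply = json.loads(response.read())
    # The assistant's text lives in the first choice's message content.
    print(reply["choices"][0]["message"]["content"])

If that prints a short greeting, your Base and Key are good, and the extension should work with the same values.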

If you receive an error when trying to run a template after entering your Base and Key, and you are using OpenAI, make sure to check the state of any credits here. If you don't have any credits, you will need a billing method on file.

If you found this hard to follow, consider following along with the first four minutes of the video above. It covers the same content. It focuses on Firefox, but once you've installed the extension, the steps are the same.


The Prompt Pattern (Template)

When crafting a LIT Prompts template, we use a mix of plain language and variable placeholders. Specifically, you can use double curly brackets to encase predefined variables (e.g., {{highlighted}} grabs the highlighted text from your active browser window). If the text between the brackets matches one of our predefined variable names, that section of text will be replaced with the variable's value. If the text within brackets is not the name of a predefined variable, like {{What is your name?}}, it will trigger a prompt for your user that echoes the placeholder (e.g., a text bubble containing, "What is your name?"). After the user answers, their reply will replace this placeholder. Here we use that behavior to get the conversation going by opening the chat with the question "Yes?" A list of predefined variables can be found in the extension's documentation.
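To make that substitution behavior concrete, here's a rough Python sketch of the idea. It is not the extension's actual code; the predefined variable names and the input() stand-in for the text bubble are just illustrative.

# A rough illustration (not LIT Prompts' actual code) of how double-curly
# placeholders get filled in: known names pull from predefined variables,
# anything else becomes a question posed to the user.
import re

def fill_template(template, predefined):
    def replace(match):
        name = match.group(1).strip()
        if name in predefined:                # e.g., {{highlighted}}
            return predefined[name]
        return input(f"{name} ")              # e.g., {{What is your name?}} or {{Yes?}}
    return re.sub(r"\{\{(.*?)\}\}", replace, template)

# Example: the predefined variable is filled automatically; {{Yes?}} asks the user.
prompt = fill_template(
    "Summarize this: {{highlighted}}\n\n{{Yes?}}",
    predefined={"highlighted": "Four score and seven years ago..."},
)
print(prompt)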

Here's today's template title.

BS with a "bot"

Here's the template's text.

{{Yes?}} [# {{Yes?}} isn't a predefined variable. So, the user will be presented with a text input, and since Post-run Behavior is set to CHAT, this ends up being a plain old chat with an LLM. #]

As for the template's parameters, the one doing the work here is Post-run Behavior, which is set to CHAT so the exchange continues as a back-and-forth conversation rather than ending after a single reply.
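If you're curious what that CHAT behavior amounts to under the hood, here's a rough Python sketch, reusing the same assumed endpoint, key, and model as the earlier example. It isn't the extension's code; it just shows that a "chat" is nothing more than the full message history being resent to the model with every turn.

# A minimal sketch of what a CHAT-style exchange looks like conceptually:
# keep a running list of messages and send the whole history on each turn.
# Not the extension's code; API_BASE, API_KEY, and the model name are the
# same assumptions as in the earlier sketch.
import json
import urllib.request

API_BASE = "https://api.openai.com/v1/chat/completions"
API_KEY = "sk-..."

def complete(messages):
    request = urllib.request.Request(
        API_BASE,
        data=json.dumps({"model": "gpt-4o-mini", "messages": messages}).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["choices"][0]["message"]["content"]

messages = []
user_turn = input("Yes? ")          # mirrors the {{Yes?}} placeholder
while user_turn.strip():
    messages.append({"role": "user", "content": user_turn})
    reply = complete(messages)
    print(reply)
    messages.append({"role": "assistant", "content": reply})  # remember the reply
    user_turn = input("> ")         # an empty line ends the chat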

Working with the above template

To work with the above template, you could copy it and its parameters into LIT Prompts one by one, or you could download a single prompts file and upload it from the extension's Templates & Settings screen. This will replace your existing prompts.

You can download a prompts file (the above template and its parameters) suitable for upload by clicking this button:


TL;DR References

Here are blurbs for a selection of works you should probably read before having too many chats with an LLM.