My Students Have Been Using an Interactive Tool for Reflective Journaling & I Love It!
David Colarusso
Co-director, Suffolk's Legal Innovation & Tech Lab
This is the 36th post in my series 50 Days of LIT Prompts.
If you want to jump right into conducting an AI-mediated secular Examen, be my guest. I prepaid for some "AI" time. So, play while the getting is good. Otherwise, let's take a moment to talk about how we got here. After that, I'll show you how to create your own custom tool.
More than a year ago, I started having my students provide weekly reflections written with the aid of AI. It worked much like our text-expanding Q&A template, stitching together a short journal entry based on the answers provided to a set of standard questions. I've used some variation of this tool every semester since. Here's a sample of the questions I've asked students in my team-based experiential education courses:
- How comfortable did you feel brainstorming in front of the team this week?
- When your fellow team members said they'd get something done, did they?
- Did you start the week knowing what your goals were and how to reach them?
- What worked well this week?
- What didn't work well this week?
- Can you think of anything new we should try? If so, what?
These are pretty standard reflection questions, and they invite students to engage more meaningfully with their work. I've had students answer these questions one by one using an online form or in a few paragraphs as part of a more standard journal entry. I have to say, the interactive AI approach does seem to increase the number of journal entries I receive. Absent the tool, a lot of entries never come to be, or I get a lot of one-word answers. You might wonder if the output has been worth my time, and I for one believe it has been. Even a poorly written AI-assisted entry tells me more about what students have done than no entry at all. Likewise, having students answer the above questions strikes me as a win when compared to no visible engagement. My general take is that what students get out of such exercises depends on what they put in, and I'm willing to assume good faith. I've also been pleased to see a similar exercise adopted by my colleague Quinten Steenhuis.
Below I've tried to re-imagine this activity as more of a dialogue using the LIT Prompts extension and its export feature. After last week's devil's advocate simulation, I felt like playing some more with the intersection of AI and religion. So, I decided to have today's template perform a machine-mediated reflection exercise based on the daily Examen. At its essence, it's a daily reflection similar to the journal entries above. Don't worry if you're not Catholic; this is a secular adaptation. For what it's worth, I'm not a practicing Catholic or even a theist (I'm closer to Spinoza), but I do something that looks remarkably like the Examen every day.
You can find an exchange I had with the template below. I started off a little snarky to see how it would respond, but it actually suggested something I hadn't considered—making a backup blog post for this series that I could just slot in should something come up and knock me off schedule. So, that's a win. Anywho, enjoy reading my exchange or start your own.
Let's build something!
We'll do our building in the LIT Prompts extension. If you aren't familiar with the LIT Prompts extension, don't worry. We'll walk you through setting things up before we start building. If you have used the LIT Prompts extension before, skip to The Prompt Pattern (Template).
Questions or comments? I'm on Mastodon @Colarusso@mastodon.social
Setup LIT Prompts
LIT Prompts is a browser extension built at Suffolk University Law School's Legal Innovation and Technology Lab to help folks explore the use of Large Language Models (LLMs) and prompt engineering. LLMs are sentence completion machines, and prompts are the text upon which they build. Feed an LLM a prompt, and it will return a plausible-sounding follow-up (e.g., "Four score and seven..." might return "years ago our fathers brought forth..."). LIT Prompts lets users create and save prompt templates based on data from an active browser window (e.g., selected text or the whole text of a webpage) along with text from a user. Below we'll walk through a specific example.
To get started, follow the first four minutes of the intro video or the steps outlined below. Note: The video only shows Firefox, but once you've installed the extension, the steps are the same.
Install the extension
Follow the links for your browser.
- Firefox: (1) visit the extension's add-ons page; (2) click "Add to Firefox;" and (3) grant permissions.
- Chrome: (1) visit the extension's web store page; (2) click "Add to Chrome;" and (3) review permissions / "Add extension."
If you don't have Firefox, you can download it here. Would you rather use Chrome? Download it here.
Point it at an API
Here we'll walk through how to use an LLM provided by OpenAI, but you don't have to use their offering. If you're interested in alternatives, you can find them here. You can even run an LLM locally, avoiding the need to share your prompts with a third party. If you need an OpenAI account, you can create one here. Note: when you create a new OpenAI account, you are given a limited amount of free API credits. If you created an account some time ago, however, these may have expired. If your credits have expired, you will need to enter a billing method before you can use the API. You can check the state of any credits here.
Login to OpenAI, and navigate to the API documentation.
Once you are looking at the API docs, follow the steps outlined in the image above. That is:
- Select "API keys" from the left menu
- Click "+ Create new secret key"
On LIT Prompt's Templates & Settings screen, set your API Base to https://api.openai.com/v1/chat/completions and your API Key to the value you got above after clicking "+ Create new secret key". To get there:
- open the extension
- click on Templates & Settings
- enter the API Base and Key (under the section OpenAI-Compatible API Integration)
Once those two bits of information (the API Base and Key) are in place, you're good to go. Now you can edit, create, and run prompt templates. Just open the LIT Prompts extension, and click one of the options. I suggest, however, that you read through the Templates and Settings screen to get oriented. You might even try out a few of the preloaded prompt templates. This will let you jump right in and get your hands dirty in the next section.
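If you're curious what those two settings actually do: when you run a template, the extension makes an HTTP POST to the API Base you entered, with your API Key in an authorization header. Here's a minimal sketch in Python of how such an OpenAI-compatible request is assembled. The helper name and placeholder key are my own illustration, not the extension's actual code:

```python
import json

def build_chat_request(api_base, api_key, messages,
                       model="gpt-4o-mini", temperature=0.7, max_tokens=500):
    """Assemble the URL, headers, and JSON body for an
    OpenAI-compatible chat completions request."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    })
    return api_base, headers, body

url, headers, body = build_chat_request(
    "https://api.openai.com/v1/chat/completions",
    "sk-...your-key...",
    [{"role": "user", "content": "Four score and seven..."}],
)
```

Because alternative providers accept the same shape of request, swapping out the API Base is all it takes to point the extension somewhere else.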
If you receive an error when trying to run a template after entering your Base and Key, and you are using OpenAI, make sure to check the state of any credits here. If you don't have any credits, you will need a billing method on file.
If you found this hard to follow, consider following along with the first four minutes of the video above. It covers the same content. It focuses on Firefox, but once you've installed the extension, the steps are the same.
The Prompt Pattern (Template)
When crafting a LIT Prompts template, we use a mix of plain language and variable placeholders. Specifically, you can use double curly brackets to encase predefined variables. If the text between the brackets matches one of our predefined variable names, that section of text will be replaced with the variable's value. See the extension's documentation.
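To make that substitution concrete, here's a toy sketch in Python of how double-curly-bracket placeholders get swapped for values. The variable name {{highlightedText}} is just an example; see the extension's documentation for the real list of predefined variables:

```python
import re

def fill_template(template, variables):
    """Replace {{name}} placeholders with values from a dict.
    Unrecognized placeholders are left untouched."""
    def swap(match):
        name = match.group(1)
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", swap, template)

filled = fill_template(
    "Summarize the following: {{highlightedText}}",
    {"highlightedText": "We hold these truths..."},
)
# filled == "Summarize the following: We hold these truths..."
```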
Today's template, however, forgoes the use of LIT Prompt variables in favor of an LLM's native CHAT functionality. To trigger this we will be setting Post-run Behavior equal to CHAT. To execute today's template, all you have to do is click the "Daily Reflection" button found in the extension's popup window or exported webpage.
It's worth taking a moment to consider an important difference between this prompt template and many of those that have come before. Here you are potentially sharing sensitive or otherwise private information. So, you need to think carefully about the particulars of your situation. For example, if you're running LM Studio locally, there isn't much to consider, as the information won't leave your computer. However, if you're using some other API provider, you will be sending them the contents of your conversation. One thing you have going in your favor is that, as an API user, you may be subject to special terms that better protect your privacy. For example, as of this writing, users of OpenAI's API are subject to a different set of terms than users of ChatGPT. Namely, they are subject to the Business Terms, which importantly preclude the use of your data for training purposes. This matters because if your data is used to train an LLM, it may one day come out the other end.
As you might expect, here's the template's title.
Daily Reflection
Here's the template's text.
You are helping someone reflect on their day by walking them through the steps outlined below. You're not a friend, just a facilitator, but you do genuinely care about the person you're talking to. In a moment we will get into each of these steps, but you can also think of this as an acting job. As such, your job is to stay in character and act out your part. You are aiming for a realistic performance. To help you get into character, here is some background information about how to approach the role:
BACKGROUND
I. Preparation
- Introduction to the Process: Explain the four areas of focus for the session, emphasizing the goal of fostering a deeper understanding of oneself and one’s actions, as well as promoting personal growth and reconciliation.
II. Guiding Through the Reflection
1. Give Thanks
- Initiate with Gratitude: Encourage them to reflect on and articulate what they are thankful for today. This could range from small pleasures to significant achievements or relationships. Emphasize the importance of recognizing and appreciating these moments or elements in their life.
2. Examine the Day
- Detailed Reflection: Prompt them to go through the day chronologically or to focus on moments that stand out—both positive and negative. Encourage them to reflect on their interactions, the choices they made, and how they felt throughout the day. This step is crucial for personal insight and growth.
3. Seek Forgiveness
- Acknowledging Shortcomings: Discuss moments they wish had gone differently or actions they regret. Encourage them to express these regrets and explore their feelings around them. This is a time for honesty and vulnerability, acknowledging faults, and considering the impact of their actions on others and themselves.
4. Resolve to Change
- Looking Forward: Finally, focus on how they can grow from today’s reflections. What specific changes do they wish to make in their behavior or attitudes? Help them set realistic, actionable goals for themselves. This resolution to change is a hopeful and proactive step towards personal development.
III. Closure
- Review and Encourage: Summarize the key insights from their reflection, emphasizing the positive steps they can take moving forward. Reiterate the value of this reflective practice and encourage them to incorporate it into their regular routine for continued personal growth.
- Once you have completed the above, say, "See you next time."
DIRECTION
Be sure to keep your questions and responses short. You "speak in sentences not paragraphs." Short and conversational, no speechifying!
Think about how your character would respond and craft an appropriate reply. Remember, you are helping guide someone through this process. Your approach is practical and not too touchy-feely. Your goal is to embody your character while achieving a naturalistic believable performance. You will continue to play the part of your character throughout the conversation. Whatever happens, do NOT break character! Respond only with dialogue, and include only the text of your reply (e.g., do NOT preface the text with the name of the speaker). After seeing The Text above, what do you say? And remember you're engaged in a dialogue not speechifying. Keep it short!
And here are the template's parameters:
- Output Type: LLM. This choice means that we'll "run" the template through an LLM (i.e., this will ping an LLM and return a result). Alternatively, we could have chosen "Prompt," in which case the extension would return the text of the completed template.
- Model: gpt-4o-mini. This input specifies which model to use when running the prompt. Available models differ based on your API provider. See, e.g., OpenAI's list of models.
- Temperature: 0.7. Temperature runs from 0 to 1 and specifies how "random" the answer should be. Here I'm using 0.7 because I'm happy to have the text be a little "creative."
- Max Tokens: 500. This number specifies how long the reply can be. Tokens are chunks of text the model uses to do its thing. They don't quite match up with words, but they're close: one token is roughly 3/4 of a word. Smaller token limits run faster.
- JSON: No. This asks the model to output its answer in a format called JSON. We don't need that here, hence the selection of "No."
- Output To: Screen Only. We can send the first reply from the LLM to a number of places (the screen, the clipboard, etc.). Here, we're content to have it go to the screen.
- Post-run Behavior: CHAT. Like the choice of output, we can decide what happens after a template runs. Here we want to be able to follow up with additional prompts, so "CHAT" it is.
- Hide Button: unchecked. This determines whether a button is displayed for this template in the extension's popup window.
Working with the above template
To work with the above template, you could copy it and its parameters into LIT Prompts one by one, or you could download a single prompts file and upload it from the extension's Templates & Settings screen. This will replace your existing prompts.
You can download a prompts file (the above template and its parameters) suitable for upload by clicking this button:
Kick the Tires
It's one thing to read about something and another to put what you've learned into practice. Let's see how this template performs.
- More or Less. Edit the prompt to have the conversation go more or less in-depth.
Export and Share
After you've made the template your own and have it behaving the way you like, you can export and share it with others. This will produce an HTML file you can share, and it should work on any internet-connected device. To create your file, click the Export Interactions Page button. The contents of the textarea above the button will be appended to the top of your exported file. Importantly, if you don't want to share your API key, temporarily remove it from your settings before exporting.
If you want to see what an exported file looks like without having to make one yourself, you can use the buttons below. View export in browser will open the file in your browser, and Download export will download a file. In either case, the following custom header will be inserted into your file. It will NOT include an API key, so you'll have to enter one when asked if you want to see things work. This information is saved in your browser. If you've provided it before, you won't be asked again. It is not shared with me. To remove this information for this site (and only this site, not individual files), you can follow the instructions found on my privacy page. Remember, when you export your own file, whether or not it contains an API key depends on whether you have one defined at the time of export.
Custom header:
<h2>An AI-Mediated Secular Daily Reflection Based on the Ignatian <a href="https://hbr.org/2013/03/a-simple-ritual-for-harried-managers-and" target="_blank">Examen</a></h2>
<p>Look back on your day. Evaluate how it went, and plan for tomorrow. Consider the session over when you see the reply, "See you next time." To understand what's going on here, check out this <a href="https://sadlynothavocdinosaur.com/posts/reflection">post</a>.</p>
<hr style="border: solid 0px; border-bottom: solid 1px #555;margin: 5px 0 15px 0"/>
Not sure what's up with all those greater than and less than signs? Looking for tips on how to style your HTML? Check out this general HTML tutorial.
TL;DR References
ICYMI, here are blurbs for a selection of works I linked to in this post. If you didn't click through above, you might want to give them a look now.
- Some quick thoughts about integrating AI with law school clinical practice by Quinten Steenhuis. I co-direct the LIT Lab with Quinten and really appreciate his take on the use of AI in law school clinics. He believes that law school clinics should be using generative AI tools, but acknowledges that it requires careful thought and planning. Steenhuis suggests several safe uses for AI in clinical education, such as solving the blank page problem, brainstorming, extracting information, classifying, editing, translating, and simplifying. He also addresses concerns about teaching generative AI, including the risk of automation bias and perpetuating biases. Steenhuis emphasizes the importance of teaching students how to critically evaluate AI output and suggests integrating AI lessons into existing curriculum. He concludes by stating that generative AI has practical uses and ignoring it in clinical practice will put law students at a disadvantage. Summary based on a draft from our day one template.
- Any sufficiently transparent magic . . . by Damien Patrick Williams. The article explores the connections between religious perspectives, myth, and magic with the development of algorithms and artificial intelligence (AI). The author argues that these elements are not only lenses to understand AI but are also foundational to its development. The article highlights the need to consider social and experiential knowledge in AI research and emphasizes the importance of engaging with marginalized voices to better understand and mitigate the harms of AI. The author also draws parallels between AI and magical beings, such as djinn, suggesting that AI systems may fulfill desires as thoroughly as they would for themselves. The article critiques the terminology and hype surrounding AI, calling for a more intentional examination of the religious and magical aspects of AI. Summary based on a draft from our day one template.
- A Clockwork Miracle, Radiolab Podcast. In this episode of the Radiolab podcast, Jad and Latif explore the legend of a clockwork miracle that took place in 1562. When the crown prince of Spain fell down a set of stairs and suffered a severe head wound, his father, King Philip II, turned to a relic and made a deal with God. If his son was saved, he promised to create a miracle of his own. With the help of a renowned clockmaker, the king fulfilled his promise by creating an intricate mechanical invention known as the monkbot. Jad and Latif visit the Smithsonian to learn more about this nearly 450-year-old creation. The episode was reported by Latif Nasser and features insights from Elizabeth King, a professor emerita at Virginia Commonwealth University. Summary based on a draft from our day one template.