Empathy, articulated

Kristian Dupont
6 min read · Oct 21, 2023


Like everyone and his brother, I’ve been working on a “coach” chat bot. It’s mostly for fun, but also an attempt to help me personally, primarily with my health, as I am in my 40s and need to take that stuff seriously. It’s not a product; I’m just using it myself, though I have a few friends playing with it as well.

If you haven’t worked with LLMs as a developer, you might make the same mistaken assumption I did: hey, OpenAI has an API, so it’s just ChatGPT without the UI! Chat is basically solved!

Well, it turns out that’s not quite the case.

Making an API call for completions feels more like using Mechanical Turk. You talk to it in natural language, which is amazing, but every request is handled by someone new who doesn’t know anything, so you need to explain the whole thing from scratch every time. And the thing is: there is a hard limit on how long your description can be. So you need to summarize everything and figure out what context it needs for this particular message.

[Image: API call]
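
To make that concrete: below is roughly what a single call looks like with the OpenAI Node SDK. This is a minimal sketch; the model name is only an example.

import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// The API is stateless: nothing from previous calls is remembered, so
// every request must carry all of the context the model should act on.
const response = await client.chat.completions.create({
  model: "gpt-4", // just an example
  messages: [
    { role: "system", content: "You are a helpful coach chat bot." },
    // Anything the bot should "remember" has to be re-sent here, in
    // full, on every single call:
    { role: "user", content: "Well, at least I did 50 pushups today" },
  ],
});
console.log(response.choices[0].message.content);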

Imagine you received a letter out of the blue that said: “Well, at least I did 50 pushups today” and were expected to respond. You don’t know who it’s from or even why they sent it. What do you say? Obviously, you can’t really say anything meaningful. There isn’t even a question in there! The job of the bot developer is to turn such a message into a piece of text that enables you not only to respond with something meaningful, but also to make it feel to the recipient as if you are just continuing a conversation they were already having with you.

Establishing this context is very challenging. One common solution, at the moment, is retrieval-augmented generation, or RAG, where you have a template that looks something like this:

You are a helpful coach chat bot. Your purpose is to assist the user with health, wealth and well-being.

Here are some messages from the conversation that may or may not be related:
[[related-messages]]

Here are the ten most recent messages:
[[recent-history]]

User says:
[[message]]

This provides a closer approximation to meaningful context, especially if one of those related messages tells you something else about pushups, like, say, that the user has a goal of 50 per day for a month, or that they couldn’t do them because of an injury. The most recent messages also make it clearer what you were talking about specifically, and what style of communication the two of you were using: was it a formal conversation, an inspirational pep talk, or just friendly banter?
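
Filling the template is then just string assembly. A minimal sketch, where buildPrompt is a hypothetical helper and the parameters mirror the placeholders above:

// Hypothetical helper: the parameters mirror the template's placeholders.
function buildPrompt(related: string[], recent: string[], message: string): string {
  return `You are a helpful coach chat bot. Your purpose is to assist the user with health, wealth and well-being.

Here are some messages from the conversation that may or may not be related:
${related.join("\n")}

Here are the ten most recent messages:
${recent.join("\n")}

User says:
${message}`;
}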

While fetching the most recent messages is straightforward, identifying the most relevant ones is anything but trivial. A first step might be to create vector embeddings of every message in the history. Then you can find, say, the ten messages with the highest cosine similarity to the incoming one and insert those into the template. That’s a place to start, but of course, similar is not the same as related.
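
As a sketch of that retrieval step, assuming OpenAI embeddings (in a real system you would embed each message once, when it arrives, and store the vectors):

import OpenAI from "openai";

const client = new OpenAI();

async function embed(text: string): Promise<number[]> {
  // text-embedding-ada-002 was the usual choice at the time; any
  // embedding model works the same way for this purpose.
  const res = await client.embeddings.create({
    model: "text-embedding-ada-002",
    input: text,
  });
  return res.data[0].embedding;
}

function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Find the k history messages most similar to the incoming one.
async function mostSimilar(history: string[], incoming: string, k = 10): Promise<string[]> {
  const target = await embed(incoming);
  const scored = await Promise.all(
    history.map(async (m) => ({ m, score: cosineSimilarity(await embed(m), target) })),
  );
  scored.sort((a, b) => b.score - a.score);
  return scored.slice(0, k).map((s) => s.m);
}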

I used something close to this for a while, and it did work, but my bot felt quite distracted, which was frustrating. Now, debugging this is really hard, but I suspect the problem is that finding similar messages simply isn’t good enough. The bot wouldn’t remember old, related conversations if they weren’t sufficiently similar, so it felt like I had to remind it of things constantly. It would apologize profusely and seem to recall when reminded, but that almost made it more annoying.

To address this, I implemented a strategy of tagging messages to create and utilize categories. That helped a bit, but now my bot had developed dementia instead: it would often repeat points it had made in the past. That is also a seriously weird experience!
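
The tagging itself can be delegated to the model. Something along these lines, where the category list and prompt wording are made up for illustration; the tags get stored next to each message for retrieval:

import OpenAI from "openai";

const client = new OpenAI();

// Ask the model to categorize a message. The categories here are
// placeholders; use whatever taxonomy fits your domain.
async function tagMessage(message: string): Promise<string[]> {
  const res = await client.chat.completions.create({
    model: "gpt-4",
    messages: [{
      role: "user",
      content:
        "Tag this message with one or more categories from: exercise, " +
        "diet, sleep, work, mood, goals. Reply with a comma-separated " +
        "list only.\n\n" + message,
    }],
  });
  return (res.choices[0].message.content ?? "").split(",").map((t) => t.trim());
}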

Another interesting thing: since my bot communicates with the user at random times via the phone, I needed to tell it how long it had been since the last interaction, as well as the current date, weekday and time of day. Otherwise it might say “good morning” in the evening, and it would carry on every conversation as if there had been no delay. One thing I found, which seems quite intuitive once you think about it, is that it worked much better to say “last interaction was 3 days ago” than to give a specific date and then today’s date afterwards. It’s not great at math, so help it when you can!
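
A small helper along these lines does the trick; the exact wording is of course up to you:

// Render the gap since the last interaction as relative text, which
// the model handles far better than two absolute dates.
function describeGap(last: Date, now: Date): string {
  const days = Math.floor((now.getTime() - last.getTime()) / 86_400_000);
  if (days === 0) return "earlier today";
  if (days === 1) return "yesterday";
  return `${days} days ago`;
}

// Injected into the prompt as something like:
// "It is Saturday evening. The last interaction was 3 days ago."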

I wanted the bot to do more than react when the user initiates a chat; it should also reach out now and then. This isn’t something LLMs will do on their own, but a solution that seems to work well is quite simple: after each message, I ask the LLM whether it would like to follow up if it hasn’t heard from the user, and if so, when and with what reminder message. I then set a timer for the follow-up and re-initiate the chat with that message.
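
Sketched out, with a made-up sendReminder stub and a JSON reply format chosen just for illustration (a real server would persist the timer and cancel it if the user writes first):

import OpenAI from "openai";

const client = new OpenAI();

// Hypothetical stub: push a message to the user over your channel.
declare function sendReminder(message: string): void;

// After each exchange, ask the model whether (and when) to follow up.
async function scheduleFollowUp(
  conversation: { role: "system" | "user" | "assistant"; content: string }[],
) {
  const res = await client.chat.completions.create({
    model: "gpt-4",
    messages: [
      ...conversation,
      {
        role: "user",
        content:
          "If you haven't heard from the user, when would you like to " +
          "follow up, and with what message? Reply as JSON: " +
          '{"hours": <number>, "message": "<text>"} or null.',
      },
    ],
  });
  // Assumes the model complied with the format; parsing can fail.
  const plan = JSON.parse(res.choices[0].message.content ?? "null");
  if (plan) {
    setTimeout(() => sendReminder(plan.message), plan.hours * 3_600_000);
  }
}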

But the hardest part, which is probably going to be the next uncanny valley for us to cross, is convincingly “simulating” empathy. In order to make it feel like the bot cares about the user, it needs to be interested in them, learn about them, and have a theory of mind. The latter is basically a way of saying that it should try to picture what the user is thinking and what their mood and mental state are like.

One thing I did that felt like a step in the right direction was this bit of text in the prompt:

You should decide for each message if you are in “empathy” mode or “problem solving” mode. Don’t mix the two in one message.
In “empathy” mode, you are looking to understand and possibly to help the user understand. Ask questions. If ${member.name} seems incongruent or you are confused, ask for clarification.
In “problem solving” mode, you should offer solutions and suggestions. You can still ask questions, but those will probably be of a practical nature.

Secondly, I have given the bot a “note pad” that it can add a note to after each message. I then run two types of “dream cycles” in which it reorganizes its thoughts; the server runs these asynchronously to the conversations. The simpler one runs daily: the bot reads its notes, the day’s conversation and other inputs, and updates its note pad. Currently, the entire note pad is injected into every prompt, which doesn’t scale well, so I might look into a tagging system there as well.
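
The daily cycle boils down to a single self-rewriting step, roughly like this (the prompt wording is illustrative):

import OpenAI from "openai";

const client = new OpenAI();

// Runs once a day, asynchronously to the chat itself.
async function dailyDreamCycle(notepad: string, todaysMessages: string[]): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4",
    messages: [{
      role: "user",
      content: `Here is your current note pad:
${notepad}

Here is today's conversation:
${todaysMessages.join("\n")}

Rewrite the note pad: merge in new facts, drop anything outdated, and keep it concise.`,
    }],
  });
  return res.choices[0].message.content ?? notepad; // the new note pad
}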

The second dream cycle is weekly and more resource-intensive. It analyzes various facets of the ongoing conversation, accesses its data, and performs multiple interpretation runs. For instance, it tries to spot the user’s particular vocabulary: if they say they “went for a run”, does that mean a 10-minute sprint or a 2-hour jog?
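
One of those interpretation runs might look like this sketch:

import OpenAI from "openai";

const client = new OpenAI();

// Build a glossary of what the user's recurring phrases seem to mean.
// The prompt wording is illustrative.
async function inferVocabulary(weekOfMessages: string[]): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4",
    messages: [{
      role: "user",
      content:
        "Below is a week of conversation. For phrases the user uses " +
        'repeatedly ("went for a run", "slept badly"), infer what they ' +
        "specifically seem to mean. Reply as a short glossary.\n\n" +
        weekOfMessages.join("\n"),
    }],
  });
  return res.choices[0].message.content ?? "";
}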

I studied neuroscience at university, and while I always found it fascinating, it never felt as tangible and, well, basic as it does to me now. This is probably what is most exciting to me in all of this.

So the above changes have been about making the bot more human. But I want my bot to act as a coach, and what makes a good coach? Well, one thing I intend to do is give it a library. I might, for example, use my old collection on Procrastotherapy, which has several good resources that I keep forgetting about. I imagine I will create little descriptions for each book/essay/video and then put those in my database as vector embeddings as well, so it can look them up. It would also be fun for it to keep up with new research by simply reading papers or articles posted to some subreddit or other. These are all vague ideas so far.
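
Such a lookup could reuse the same embedding machinery as the message retrieval; a sketch with placeholder library entries:

import OpenAI from "openai";

const client = new OpenAI();

// Placeholder entries: one short description per book/essay/video.
const library: Record<string, string> = {
  "Example book on habits": "How small routines compound into lasting change.",
  "Example essay on procrastination": "Why we delay, and tactics that actually help.",
};

async function embed(text: string): Promise<number[]> {
  const res = await client.embeddings.create({
    model: "text-embedding-ada-002",
    input: text,
  });
  return res.data[0].embedding;
}

function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Return the resource whose description best matches the topic at hand.
async function recommend(topic: string): Promise<string> {
  const target = await embed(topic);
  let best = { title: "", score: -Infinity };
  for (const [title, description] of Object.entries(library)) {
    const score = cosineSimilarity(await embed(description), target);
    if (score > best.score) best = { title, score };
  }
  return best.title;
}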

The bottom line is that programming LLMs is fun! It’s a weird hybrid of programming, psychology, neuroscience and library science. It feels super empowering and humbling at the same time. Interestingly, I think it forces you to think in terms of empathy yourself: what does the bot know at this point? How can I articulate things so that it has the necessary basis for answering? When you think about it, this is a lot like what you do (or should do) when having a conversation with a fellow human being. That is just super fascinating to me.
