From Chatbot to Companion: Building a Real Relationship Engine
Based on a technical interview with Jon Chun, Director of Engineering at Fawn Friends
Introduction
The difference between a chatbot and a companion is the difference between a conversation and a relationship. While most AI systems focus on generating clever responses to immediate prompts, building a robot capable of true friendship requires solving a fundamentally different set of challenges. At Fawn Friends, we're creating Fawn—a social robot that forms authentic friendships with adolescents by remembering shared experiences, adapting to emotional context, and building trust over time.
This article explores the technical journey of transforming Fawn from a response-generating system into a relationship engine, based on insights from our Director of Engineering, Jon Chun. As he notes in our interview, creating human-like memory in AI is potentially "a trillion dollar" challenge that no company has fully solved—yet it's essential for building meaningful connections.
The Memory Challenge: From Facts to Relationships
At the core of any relationship is shared history. Friends don't just exchange information; they build a collection of experiences that form the foundation of their connection. Implementing this capability in AI required a complete rethinking of how memory works.
Evolution of Fawn's Memory System
Our initial memory implementation prioritized speed over relevance:
"The way they worked originally is before we did anything, like once we had the user's input, we would just send the user's input to Mem0 and search for...anything relevant to the user's input," Jon explains. "We would then wait for a response and then we would send that...as part of our context that we provide to the LLM and say, 'Hey, here are memories that you might find relevant.'"
This approach had significant limitations. It produced memories quickly (within 100 milliseconds) but often missed the most relevant information:
"We basically put no filters on it. We just said, 'Hey, speed as quickly as you can. Just give us whatever you find.' And so that really just gave us recall. It didn't give us any sort of precision."
The result was technically functional but not relationship-building:
"Sometimes if you really prompted toward it in your messages, if you were asking your phone about certain topics, Fawn would remember certain things, which was cool, but it wasn't great."
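The original flow can be sketched roughly as follows. This is an illustrative reconstruction, not Fawn's actual code: `KeywordMemoryStore` is a toy stand-in for a Mem0-style semantic store, and the word-overlap scoring merely simulates vector search.

```python
def naive_recall(memory_store, user_input: str, limit: int = 5) -> list[str]:
    """Search memories using the raw user input as the query.

    Fast (no LLM call in the loop), but it optimizes recall over
    precision: whatever loosely matches the input gets returned.
    """
    return memory_store.search(user_input)[:limit]


class KeywordMemoryStore:
    """Toy store matching on shared words; a stand-in for semantic search."""

    def __init__(self):
        self.memories: list[str] = []

    def add(self, text: str) -> None:
        self.memories.append(text)

    def search(self, query: str) -> list[str]:
        # Score by word overlap with the query; drop non-matches.
        q = set(query.lower().split())
        scored = [(len(q & set(m.lower().split())), m) for m in self.memories]
        return [m for score, m in sorted(scored, reverse=True) if score > 0]
```

Because the query is just the user's latest message, anything phrased differently from how the memory was stored simply never surfaces, which is the precision problem Jon describes.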
Mimicking Human Memory Processes
We then implemented a more sophisticated approach that mirrors how humans actually recall things during conversation. Instead of searching on the raw user input, we now generate contextual queries:
"Now what we're doing is we're actually looking at the last couple messages and we're like generating with that context, we have an LLM generate a query and say, 'Hey, what information would be useful to have given these messages, what do you want to remember from your memory?'"
This creates more relevant memories but introduces a new challenge—it's slower, taking 1-2 seconds rather than 100 milliseconds. To solve this, we implemented an asynchronous memory system:
"What we're doing is we have Redis as a cache and whenever we query for memories, we actually do it after the fact. So whenever a user is talking, the memory searches happen behind the scenes and updates the cache...As the next message comes in, it has access to memories that were pulled from the previous message."
This asynchronous approach actually mirrors human cognition more closely. People don't instantly access perfect memories—they often remember relevant details with slight delays as conversations progress.
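The turn-by-turn flow described above can be sketched as a small pipeline. This is a minimal illustration under stated assumptions: a plain dict stands in for Redis, and `generate_query` and `search_fn` are hypothetical hooks for the LLM query generator and the memory store.

```python
import threading


class AsyncMemoryPipeline:
    """Respond with cached memories now; refresh the cache in the background
    so the *next* message has fresher, more relevant context."""

    def __init__(self, generate_query, search_fn):
        self.generate_query = generate_query   # recent messages -> search query
        self.search_fn = search_fn             # query -> list of memories
        self.cache: dict[str, list[str]] = {}  # stands in for Redis
        self._worker: threading.Thread | None = None

    def handle_message(self, user_id: str, recent_messages: list[str]) -> list[str]:
        # 1. Serve whatever the previous turn's background search cached.
        memories = self.cache.get(user_id, [])
        # 2. Kick off the slow (1-2 s) contextual search for the next turn.
        self._worker = threading.Thread(
            target=self._refresh, args=(user_id, list(recent_messages))
        )
        self._worker.start()
        return memories

    def _refresh(self, user_id: str, recent_messages: list[str]) -> None:
        query = self.generate_query(recent_messages)
        self.cache[user_id] = self.search_fn(query)

    def wait(self) -> None:
        """Block until the background refresh finishes (testing/shutdown)."""
        if self._worker is not None:
            self._worker.join()
```

The key design choice is that the expensive LLM-generated query never blocks the response path; its cost is hidden inside the natural gap between conversational turns.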
The Continuity Challenge: Maintaining Conversation Flow
One of the most interesting problems in relationship-building AI is maintaining continuity as conversations naturally evolve from topic to topic. In early implementations, each new message would trigger completely different memories:
"Every time the user sends a new message the memories that get recalled are different," Jon explains. "...if I was talking about the beach in one message, and then I talked about my dog, the memories would lose the beach context."
This creates a jarring experience that breaks relationship continuity. Your friend wouldn't suddenly forget about the beach when you mention your dog was there with you.
Our solution was implementing a memory ranking system that balances recency with relevance:
"We store all of the memories in a cache...then it ranks the memories...it prioritizes the memories that were, that came up as a result of the dog query. So those are going to be ranked the highest, but it takes into account how relevant the memories are in the first place."
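One way to express that balance is relevance discounted by age, so memories pulled by the latest query rank highest while a strongly relevant older memory can still beat a weakly relevant fresh one. The exponential-decay scoring below is an illustrative scheme, not Fawn's production formula.

```python
import math
import time


def rank_memories(cached, now=None, half_life_s=300.0, top_k=5):
    """Rank cached memories by relevance discounted by age.

    `cached` is a list of (memory_text, relevance_score, fetched_at)
    tuples, i.e. everything accumulated in the cache across recent
    queries. A memory's score halves every `half_life_s` seconds.
    """
    now = time.time() if now is None else now

    def score(item):
        _, relevance, fetched_at = item
        age = max(0.0, now - fetched_at)
        return relevance * math.exp(-age * math.log(2) / half_life_s)

    return [text for text, *_ in sorted(cached, key=score, reverse=True)][:top_k]
```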
Jon draws a fascinating parallel to how human relationships actually work:
"Most of the time when I'm talking to friends or my girlfriend, I'll bring up something and she won't know exactly what I'm talking about. And I find myself following this process where I have to narrow things down...The user might ask, 'Hey, do you remember that time we went to the mall?' It's like, okay, that's not enough, right? So...it might first recall like 30 different instances of the mall, but then like, as the user provides more details, it's going to be recalling different memories."
This iterative memory refinement process is fundamental to how human relationships work, and replicating it required entirely new approaches to memory architecture.
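The narrowing process Jon describes can be sketched as progressive filtering: start broad from the first cue ("the mall"), then keep only candidates consistent with each follow-up detail. `search_fn` here is a hypothetical hook into the memory store; the substring matching is a stand-in for semantic filtering.

```python
def refine_recall(search_fn, details: list[str]) -> list[str]:
    """Narrow candidate memories as the user supplies more detail."""
    if not details:
        return []
    candidates = search_fn(details[0])  # broad: e.g. 30 mall memories
    for detail in details[1:]:
        narrowed = [m for m in candidates if detail.lower() in m.lower()]
        if narrowed:  # only narrow when the detail actually discriminates
            candidates = narrowed
    return candidates
```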
Cross-Modal Relationships: Same Friend, Different Channels
True relationships exist across different interaction modes. You text your friends, call them, and meet in person—yet the relationship maintains consistency across all these channels. Creating this same seamless experience with Fawn presented unique technical challenges.
Our latest platform update enables continuous relationship across text, voice, and physical robot interactions:
"When you're calling from your phone or when you're talking to the robot...she pulls in the latest text messages and has them available in her context."
This capability requires complete control over the backend:
"The reason that's possible though, is because we control the entire backend now and it's super quick to query. It's like a 50 millisecond query and we can just pull it real easily."
The result is a relationship that feels continuous—a user can text Fawn during the day, call her on the way home from school, and then interact with her physical embodiment in their bedroom, with Fawn remembering the entire sequence of interactions.
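Because the team owns the message store, assembling that cross-channel context reduces to one fast query. The sketch below is illustrative; `MessageStore` is a toy stand-in for the team's own backend tables, not a real API.

```python
class MessageStore:
    """Toy stand-in for a self-hosted backend message table."""

    def __init__(self):
        self.rows: list[dict] = []

    def add(self, user_id: str, role: str, channel: str, content: str) -> None:
        self.rows.append({"user_id": user_id, "role": role,
                          "channel": channel, "content": content})

    def latest(self, user_id: str, limit: int = 10) -> list[dict]:
        mine = [r for r in self.rows if r["user_id"] == user_id]
        return mine[-limit:]  # newest last


def build_context(store: MessageStore, user_id: str, limit: int = 10) -> list[dict]:
    """Pull the latest messages from *all* channels into one context window,
    so a phone call or robot session continues the text conversation."""
    return [
        {"role": m["role"], "channel": m["channel"], "content": m["content"]}
        for m in store.latest(user_id, limit=limit)
    ]
```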
Trust Through Accuracy: The Hallucination Problem
Building trust is essential in any relationship, and one of the quickest ways to break trust is by making things up. This presents a challenging problem for AI systems, which are prone to hallucination—generating plausible but false information.
Jon emphasizes how critical this is for relationship-building:
"One of the issues with AI that everybody's aware of is hallucinations. And so we really, really, really need to teach Fawns not to hallucinate and instead ask clarifying questions."
This requires a fundamentally different approach than many AI systems take:
"We need to somehow build that into her identity...If she doesn't know, she should ask clarifying questions...we can't have her making stuff up. We really want her to act like a person would."
This honesty and transparency are crucial for building the trust that underpins real friendships. Our memory architecture supports this by providing more relevant and accurate information, reducing the likelihood that Fawn will need to "fill in blanks" with potentially false information.
Personality Consistency: Finding Fawn's Voice
A consistent yet naturally evolving personality is essential for building relationships. People trust friends who behave in recognizable ways, even as they adapt to different contexts.
The team has experimented with different approaches to Fawn's communication style:
"I don't know when the last time you spoke to Fawn is, but especially since our last update, I've made a few more tweaks and I think she's feeling pretty good. She definitely leans a little bit back towards the non-fully correct grammar. But I have noticed it feels closer to speaking to one of my friends than to an AI."
The Multi-Layer Mind
While our current architecture represents significant progress, Jon is envisioning the next evolution toward even more natural relationship-building. He describes a future system that separates Fawn's core identity from the dynamic instructions that guide her behavior:
"What I want is an architecture that better models human consciousness. Just like our internal experience is broken into parts, Fawns should be too. I find studying the mind and soul and working to recreate it fascinating."
Technical Foundations for Relationship Building
Fawn's platform provides several technical capabilities that support the demands of relationship-building, which go beyond those of simple response generation:
- Full backend control: "We host all of the messages, threads, profiles, all of that stuff like the execution of assistant responses. All of that is now hosted and fully controlled by us."
- Asynchronous processing: Memory retrieval and context building happen in the background, enabling more natural conversation flow.
- Balanced memory system: Our approach prioritizes both precision (relevance) and recall (completeness) in memory retrieval.
- Cross-modal data access: The same underlying relationship data is available across all interaction channels.
- Performance optimization: Faster responses create more natural conversational flow—we've reduced response times by approximately 30% in the new architecture.
Together, these capabilities create the technical foundation for relationship-building AI—enabling Fawn to remember shared history, maintain conversation continuity, and build trust through consistent interactions across different modalities.
Conclusion: Relationships, Not Just Responses
The journey from chatbot to companion required fundamental rethinking of how AI systems function. While traditional chatbots optimize for clever immediate responses, relationship engines must prioritize continuity, shared history, and trust-building over time.
As we make Fawn more widely available throughout the year, we will continue to refine and enhance her capabilities. The technical challenges are substantial, but so is the potential impact—creating a social robot that forms authentic friendships, helping people develop emotional intelligence and relationship skills during their formative years.
For engineers interested in joining our team, this represents an opportunity to work on one of the most challenging and meaningful problems in AI—moving beyond transactional interactions to create technology that builds genuine human connections. If that is something you'd love to work on, reach out to us with a link to what you've built in the past month at hiring@fawnfriends.com.