Experts warn that anthropomorphizing AI is both potentially powerful and problematic, but that hasn’t stopped companies from trying it. Character.AI, for instance, lets users build chatbots that assume the personalities of real or imaginary individuals. The company has reportedly sought funding that would value it at around $5 billion.
The way language models seem to reflect human behavior has also caught the eye of some academics. Economist John Horton of MIT, for instance, sees potential in using these simulated humans—which he dubs Homo silicus—to simulate market behavior.
You don’t have to be an MIT professor or a multinational company to get a collection of chatbots talking amongst themselves. For the past few days, WIRED has been watching a simulated society of 25 AI agents go about their daily lives in Smallville, a village with amenities including a college, stores, and a park. The characters chat with one another and move around a map that looks a lot like the game Stardew Valley. The characters in the WIRED sim include Jennifer Moore, a 68-year-old watercolor painter who putters around the house most days; Mei Lin, a professor who can often be found helping her kids with their homework; and Tom Moreno, a cantankerous shopkeeper.
The characters in this simulated world are powered by OpenAI’s GPT-4 language model, but the software needed to create and maintain them was open sourced by a team at Stanford University. The research shows how language models can be used to produce some fascinating and realistic, if rather simplistic, social behavior. It was fun to see them talking to customers, taking naps, and, in one case, deciding to start a podcast.
Large language models “have learned a heck of a lot about human behavior” from their copious training data, says Michael Bernstein, an associate professor at Stanford University who led the development of Smallville. He hopes that language-model-powered agents will be able to autonomously test software that taps into social connections before real humans use it. He says there has also been plenty of interest in the project from videogame developers.
The Stanford software includes a way for the chatbot-powered characters to remember their personalities and what they have been up to, and to reflect on what to do next. “We started building a reflection architecture where, at regular intervals, the agents would sort of draw up some of their more important memories, and ask themselves questions about them,” Bernstein says. “You do this a bunch of times and you kind of build up this tree of higher-and-higher-level reflections.”
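The loop Bernstein describes can be sketched roughly as follows. This is a loose illustration of the idea, not the Stanford team’s actual code: the `Memory`, `Agent`, and `ask_model` names are invented for this example, and `ask_model` stands in for the GPT-4 call that would actually pose the question.

```python
# A minimal sketch of a reflection architecture: agents periodically pull
# their most important memories, ask a question about them, and store the
# answer one level up, building a tree of higher-level reflections.
import heapq
from dataclasses import dataclass


@dataclass
class Memory:
    text: str
    importance: float  # in the real system, the model scores salience itself
    level: int = 0     # 0 = raw observation; higher levels are reflections


def ask_model(prompt: str) -> str:
    """Placeholder for a language-model call (assumption)."""
    return f"Insight drawn from: {prompt}"


class Agent:
    def __init__(self, name: str):
        self.name = name
        self.memories: list[Memory] = []

    def observe(self, text: str, importance: float) -> None:
        self.memories.append(Memory(text, importance))

    def reflect(self, top_k: int = 3) -> Memory:
        # "Draw up some of the more important memories" ...
        salient = heapq.nlargest(top_k, self.memories,
                                 key=lambda m: m.importance)
        # ... then ask a question about them and keep the answer as a
        # new, higher-level memory.
        answer = ask_model("; ".join(m.text for m in salient))
        reflection = Memory(answer,
                            importance=max(m.importance for m in salient),
                            level=max(m.level for m in salient) + 1)
        self.memories.append(reflection)
        return reflection
```

Because reflections are stored alongside raw observations, a later `reflect` call can pick up earlier reflections as inputs, which is what produces the tree of higher-and-higher-level conclusions.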
Anyone hoping to use AI to model real humans, Bernstein says, should remember to question how faithfully language models actually mirror real behavior. Characters generated this way are not as complex or intelligent as real people and may tend to be more stereotypical and less varied than information sampled from real populations. How to make the models reflect reality more faithfully is “still an open research question,” he says.
Smallville is still fascinating and charming to observe. In one instance, described in the researchers’ paper on the project, the experimenters informed one character that it should throw a Valentine’s Day party. The team then watched as the agents autonomously spread invitations, asked each other out on dates to the party, and planned to show up together at the right time.
WIRED was sadly unable to re-create this delightful phenomenon with its own minions, but they managed to keep busy anyway. Be warned, however: running an instance of Smallville eats up API credits for access to OpenAI’s GPT-4 at an alarming rate. Bernstein says running the sim for a day or more costs upwards of a thousand dollars. Just like real humans, it seems, synthetic ones don’t work for free.
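That thousand-dollar figure is easy to sanity-check with back-of-envelope arithmetic. Every rate and count below is an illustrative assumption, not a measured number from WIRED’s run or the Stanford paper; the point is only that 25 chatty agents multiply tokens quickly.

```python
# Back-of-envelope cost of a simulated day in Smallville.
# All figures are illustrative assumptions.
PRICE_PER_1K_TOKENS = 0.04   # blended GPT-4-era rate (assumption)
AGENTS = 25                  # the size of the Smallville population
CALLS_PER_AGENT_PER_STEP = 1 # one model call per agent per update (assumption)
TOKENS_PER_CALL = 2_000      # prompt plus completion (assumption)
STEPS_PER_DAY = 500          # simulation updates over a full run (assumption)

tokens = AGENTS * CALLS_PER_AGENT_PER_STEP * STEPS_PER_DAY * TOKENS_PER_CALL
cost = tokens / 1_000 * PRICE_PER_1K_TOKENS
print(f"~{tokens:,} tokens, roughly ${cost:,.0f} for one simulated day")
```

Under these assumed numbers the total lands right around $1,000, in line with Bernstein’s estimate.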