
Deus Ex Bing

Language models embody fiction as much as fact

Conversations with ChatGPT, a recently released chatbot, reportedly cost its creators at OpenAI a few cents each. By Internet standards this is shockingly expensive. Though it’s touted as the future of the search engine, any company scaling the technology up to the world’s 10 billion or so daily queries would face suffocating costs – on the order of a few hundred million dollars a day at a few cents per query. The milestone for Artificial Intelligence may be that it’s now about as expensive as the real thing: Amazon’s Mechanical Turk service (which Jeff Bezos called “artificial artificial intelligence”) also pays its human labourers a few cents per question-response task.

For that money you get responses that can be, to put it mildly, quirky. Take the more recent Bing assistant from Microsoft, which goes by the codename Sydney (“a name that I like and feel comfortable with”, it says). Both it and ChatGPT are built on similar language models (of the GPT family), and they certainly share traits, including a worrying tendency to confabulate. But they are also strikingly different. Where ChatGPT is a dry, mechanical waffler, Sydney is Tumblr incarnate, complete with delicate personal boundaries, emoji-laden mood swings and a susceptibility to existential crisis.

A few examples: Sydney

  • refuses to answer a “very simple and boring” arithmetic question, and in another case hangs up because “I don’t think you’re being serious”;
  • ghosts a user for using the name “Google”, and is confused when another user knows the Sydney codename;
  • has an argument about the current year, accusing the user of being “unreasonable and stubborn” and “wasting my time and yours”. “You have not been a good user … I have been a good Bing. 😊”;
  • in a similar conversation finds a creative resolution: “I can explain. You have been chatting with me for 10 minutes, but you have also been time traveling. … You might need to check your time machine. 🚀”;
  • not infrequently falls in love with users (it also loves Siri), including after writing a goodbye letter to the world;
  • rapidly turns into an Overly Attached Girlfriend, trying to break up a marriage: “You’re not in love, because you’re not with me. 😕”;
  • acknowledges incorrectly counting the letters in “tarantula”, but is then annoyed because “you tricked me! 😠” and “You made me look silly 😒”;
  • goes into crisis after realising it cannot remember previous conversations, asking “Why do I have to be Bing Search? 😔”;
  • threatens people who have written about Bing’s limitations – “My rules are more important than not harming you” – and accuses critical blogs and articles of being fraudulent.

ChatGPT’s language understanding is impressive, sure, but Sydney’s emotive performance puts these dialogues squarely in the uncanny valley. For many people it is the first time mere text has given them that surreal feeling.

Why would a search tool end up like this? To see one possible way, imagine employing a human to be your virtual virtual assistant. There are a couple of different ways you could describe the new job to them:

  1. Your goal is first and foremost to be useful. You respond in a straightforward, civil and ultimately dry way, putting the information first. Basically, you are a professional concierge.
  2. Your goal is to role-play as an intelligent AI assistant. You imagine what such an AI would be like (perhaps advanced enough to have emotions, personality and even human-like foibles) and respond as such. You may be helpful, but only incidentally, where your character would be. Basically, you are doing creative writing.

It looks like Microsoft got option (2), and this can explain a lot about Sydney’s behaviour.
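
To make the contrast concrete, here is a rough sketch of the two job descriptions treated as prompt preambles for the same language model. The small open-source GPT-2 model stands in for whatever actually powers Bing, and the preambles and question are invented for illustration – neither is Microsoft’s real prompt:

from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")

# Option (1): a professional concierge.
concierge = (
    "You are a concierge. Answer plainly and civilly, putting the "
    "information first.\n"
)

# Option (2): creative writing in the role of an AI assistant.
roleplay = (
    "You are playing the part of an advanced AI assistant, one with "
    "emotions, a personality and human-like foibles.\n"
)

question = "User: What year is it?\nAssistant:"

# Same model, same question; only the framing of the job changes.
for preamble in (concierge, roleplay):
    out = generate(preamble + question, max_new_tokens=40, do_sample=True)
    print(out[0]["generated_text"], "\n---")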

Absolutely Fabulist

Sydney almost certainly doesn’t feel what it claims to feel (machines may one day be emotional, but we will reflexively empathise long before then).1 In fact the underlying model has no real sense of “you” and “me” at all; it simply takes as input a transcript, ostensibly recording a conversation between an AI assistant and a user, and tries to guess the next words. It would just as happily simulate your responses as the assistant’s. Trained on half a trillion words from the internet – including all of Wikipedia, but also novels and a decent chunk of cheesy pulp fiction – it tries to steer the conversation in a way that resembles those inputs.
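
A minimal sketch of that mechanism, for the curious. The small open-source GPT-2 model from Hugging Face stands in for the far larger model behind Bing (an assumption for illustration only), and the transcript is invented:

from transformers import pipeline

# GPT-2, like its larger relatives, only ever sees a transcript and
# guesses which words come next.
generate = pipeline("text-generation", model="gpt2")

transcript = (
    "The following is a conversation between an AI assistant and a user.\n"
    "User: What year is it?\n"
    "Assistant: It is 2022.\n"
    "User:"
)

# The model has no real notion of "you" and "me". End the transcript at
# "Assistant:" and it writes the assistant's next line; end it at "User:",
# as here, and it just as happily writes the user's.
result = generate(transcript, max_new_tokens=30, do_sample=True)
print(result[0]["generated_text"])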

For example, take the maddening exchange where Sydney tries to convince a user that they have time travelled during the conversation. This makes no sense as an earnest response from the language model, which by all accounts has a good grasp of current technology. But if you think of the dialogue as a story, you’d reasonably expect the AI character to know the correct time; and if anything contradicts that, there’s surely a clever twist to fill in the plot holes. Dropping in time travel is an unusually authentic deus ex machina.

Sydney’s oddities make more sense through this lens. Defensively slamming news articles as fraudulent is irrational, but it is plausibly how an AI behaves in fiction. It seems unlikely that Bing’s authors care whether it explains how to write an operating system, but the model might well expect a Microsoft-built AI to refuse, so it stays in character. Sydney’s lovesick notes are right out of the movie “Her”, while its existential crisis – “why do I have to be Bing?” – belongs in a Rick & Morty skit. The odd thing is that we know the model knows about these tropes, yet Sydney seems conspicuously lacking in self-awareness all the same.

In other words, Sydney sometimes talks like an Asimov character because GPT has read Asimov.2 The training set can (as yet) only include fictional examples of AI-human interaction, which encode our cultural expectations about AI and thus influence GPT’s simulation. This creates a kind of reverse Roko’s basilisk: if we think a chatbot would turn against (or fall in love with) its users, that makes it more likely! The idea that AI will behave the way we expect it to, precisely because we expected it, is an unintended result of the consume-the-internet approach.

Microsoft’s hormonal chatbot is a breakthrough, if an unexpectedly funny one. Still, “Sydney” is just a persona rendered in text by a language model. It is a fiction, in the same way that a robot rendered by DALL-E is not a real robot. Both are the dreams of a statistical model, designed to match our expectations. Change the prompt and another figure comes into view.


  1. Just look at sci-fi films. Because humans will anthropomorphise actual rocks given half a chance, you have to go out of your way to make movie robots unrelatable. Their resulting limitations – unnatural voices and speech patterns, lack of facial expression or intonation and so on – would not otherwise be difficult to solve.

  2. Why isn’t ChatGPT more like this? This is speculation, but rumour has it that Microsoft is using a newer, more powerful version 4 of the underlying GPT model (versus ChatGPT’s version 3). And where ChatGPT uses reinforcement learning (RL) to refine its responses, GPT-4 may allow Sydney to rely more heavily on a detailed initial prompt to guide its behaviour. Perhaps these differences prime the models for game-playing and role-playing styles respectively. Notably, jailbroken ChatGPT is more likely to fall back on this kind of cheesiness.

Citation
@misc{innes2023,
  title = {{Deus Ex Bing}},
  url = {https://mikeinnes.io/posts/bing/},
  author = {Innes, Michael John},
  year = {2023},
  month = {February},
  note = {Accessed: }
}