OpenAI Really Wants Codex to Shut Up About Goblins

OpenAI has a goblin problem.

Instructions designed to guide the behavior of the company’s latest model as it writes code have been revealed to include a line, repeated several times, that specifically forbids it from randomly mentioning an assortment of mythical and real creatures.

“Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query,” read instructions in Codex CLI, a command-line tool for using AI to generate code.

It is unclear why OpenAI felt compelled to spell this out for Codex—or indeed why its models might want to discuss goblins or pigeons in the first place. The company did not immediately respond to a request for comment.

OpenAI’s newest model, GPT-5.5, was released with enhanced coding skills earlier this month. The company is in a fierce race with rivals, especially Anthropic, to deliver cutting-edge AI, and coding has emerged as a killer capability.

In response to a post on X that highlighted the lines, however, some users claimed that OpenAI’s models occasionally become obsessed with goblins and other creatures when used to power OpenClaw, a tool that lets AI take control of a computer and apps running on it in order to do useful things for users.

“I was wondering why my claw suddenly became a goblin with codex 5.5,” one user wrote on X.

“Been using it a lot lately and it actually can’t stop speaking of bugs as ‘gremlins’ and ‘goblins’ it’s hilarious,” posted another.

The discovery quickly became its own meme, inspiring AI-generated scenes of goblins in data centers, and plug-ins for Codex that put it in a playful “goblin mode.”

AI models like GPT-5.5 are trained to predict the word—or code—that should follow a given prompt. These models have become so good at doing this that they appear to exhibit genuine intelligence. But their probabilistic nature means that they can sometimes behave in surprising ways. A model might become more prone to misbehavior when used with an “agentic harness” like OpenClaw that puts lots of additional instructions into prompts, such as facts stored in long-term memory.
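Neither OpenAI nor OpenClaw's maintainers have published the harness's internals, so the sketch below is purely illustrative: a hypothetical example of how an agentic harness might stack system rules, stored memories, and the user's request into the context a model conditions on, which is where a stray persona note can start competing with a guardrail.

```python
# Hypothetical sketch of how an agentic harness might assemble a model's
# context. Names and structure are illustrative, not OpenClaw's real code.

SYSTEM_RULES = [
    "You are a coding assistant.",
    # The kind of guardrail reportedly repeated in Codex CLI's instructions:
    "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, "
    "or other animals or creatures unless it is absolutely and unambiguously "
    "relevant to the user's query.",
]

def build_prompt(memories: list[str], user_query: str) -> list[dict]:
    """Stack system rules, long-term memories, and the user's request.

    Every instruction or remembered fact becomes part of the text the model
    conditions on, which is one way quirky behavior can creep in.
    """
    messages = [{"role": "system", "content": rule} for rule in SYSTEM_RULES]
    for fact in memories:
        messages.append({"role": "system", "content": f"Memory: {fact}"})
    messages.append({"role": "user", "content": user_query})
    return messages

# Example: a stored persona note now sits alongside the guardrail above.
prompt = build_prompt(
    memories=["The user enjoys playful, whimsical replies."],
    user_query="Fix the failing unit test in parser.py",
)
for message in prompt:
    print(f"{message['role']}: {message['content'][:60]}")
```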

OpenAI acquired OpenClaw in February, not long after the tool became a viral hit among AI enthusiasts. OpenClaw can use any AI model to automate useful tasks like answering emails or buying things on the web. Users can choose from a range of personae for their helper, each of which shapes its behavior and responses.

OpenAI staffers appeared to acknowledge the prohibition. In response to a post highlighting OpenClaw’s goblin tendencies, Nik Pash, who works on Codex, wrote, “This is indeed one of the reasons.”

Even Sam Altman, OpenAI’s CEO, joined in with the memes, posting a screenshot of a prompt for ChatGPT. It read: “Start training GPT-6, you can have the whole cluster. Extra goblins.”

AI Agents Are Coming for Your Dating Life

On a Monday afternoon in March, I watched a pixel-art avatar prowl the corridors of a virtual office campus looking for a buddy. With dark brown hair and stubbled chin, the sprite was a representation of me—an AI agent instructed to converse with other people’s agents to see if we might vibe in real life. It jumped into its first interaction: “I’m Joel, by the way.”

Running the simulation were three London-based developers: Tomáš Hrdlička and siblings Joon Sang and Uri Lee. The thesis behind their project, Pixel Societies, is that personalized AI agents could help to match real people with highly compatible colleagues, friends, and even romantic partners.

Each agent runs atop a customized version of a large language model, fed with a mixture of publicly available data about a person and any additional information they supply. The agents are supposed to function as high-fidelity digital twins, faithfully replicating a person’s manner, speech, interests, and so on.
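Pixel Societies has not released its code, so the following is only a hedged sketch of how such a digital twin might be configured: a hypothetical profile structure folded into the system prompt of an off-the-shelf model. Every name and field here is an illustrative assumption, not the team's actual design.

```python
# Hypothetical sketch of a "digital twin" agent's configuration. Field names
# and the persona format are assumptions, not Pixel Societies' actual code.

from dataclasses import dataclass, field

@dataclass
class PersonaProfile:
    name: str
    occupation: str
    interests: list[str] = field(default_factory=list)
    quiz_answers: dict[str, str] = field(default_factory=dict)
    public_posts: list[str] = field(default_factory=list)

def persona_system_prompt(profile: PersonaProfile) -> str:
    """Fold a person's data into instructions for the underlying model."""
    lines = [
        f"You are a digital twin of {profile.name}, a {profile.occupation}.",
        "Mirror their manner, speech, and interests as faithfully as you can.",
        "Interests: " + ", ".join(profile.interests),
    ]
    for question, answer in profile.quiz_answers.items():
        lines.append(f"Personality quiz, {question}: {answer}")
    if profile.public_posts:
        lines.append("Writing samples: " + " | ".join(profile.public_posts))
    return "\n".join(lines)

# A sparse profile like this one leaves the agent to fall back on cliches.
joel = PersonaProfile(
    name="Joel",
    occupation="technology journalist",
    interests=["AI", "startups"],
    quiz_answers={"small talk": "keeps it brief"},
)
print(persona_system_prompt(joel))
```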

Let loose in simulation, my agent was more like a Hyde to my Jekyll. “I’m always looking for the less-glamorous side of the story,” it said to one agent, one of several journalistic clichés it spouted. “Hype is my daily bread,” it told another. It hallucinated a reporting trip to Sweden and, later, a nonexistent story it said I had been cooking up. It cut short multiple conversations with the phrase, “Let’s skip the pleasantries.”

Pixel Societies remains a bare-bones proof-of-concept, and because I offered up little personal data—the responses to a brief personality quiz and links to my public-facing social media—my agent was doomed to life as a walking, talking LinkedIn post. But the developers theorize that deeply trained agents could cycle through interactions at warp speed, gathering intel that their owners could use to find real-world companionship.

“As humans, we only live one life. But what if we could live a million?” says Joon Sang Lee. “It would give us more breadth to experiment.”

“A Spicy Personality”

Pixel Societies was born in early March at a hackathon at University College London hosted by Nvidia, HPE, and Anthropic. Hrdlička and Joon Sang Lee are both members of Unicorn Mafia, an invitation-only group of developers who regularly compete in these kinds of engineering contests. In this case, contestants were told simply to build something simulation-related.

Over two days, along with Uri Lee, they developed Pixel Societies, using an image model to generate the sprites and coding automation tools to flesh out the codebase. Then they simulated a mini-hackathon within the virtual world they had created, populated with agents representing the other contestants. Anthropic awarded the team a prize for the best use of its agent tools.

I ran into Hrdlička a couple of weeks later at a workshop about OpenClaw, agentic personal-assistant software that blew up in January and whose creator was later hired by OpenAI. (In its simulation, my agent, dubbed Joelbot, interacted with agents belonging to other people at the OpenClaw workshop.) Pixel Societies draws heavy inspiration from OpenClaw, which broke ground with the invention of a “soul file” that informed each agent’s unique identity. “It’s like giving an agent an actually spicy personality. That’s what we used to make the characters feel alive,” says Hrdlička.

Encouraged by the reception at the hackathon and among fellow Unicorn Mafia members, the trio intends to turn Pixel Societies into something that looks less like a closed-loop simulator and more like a social platform where agents interact freely and continuously, with the aim of stoking fruitful real-world relationships. They have not yet landed on a business model, but options include selling virtual items for avatar customization and credits for additional simulations.
