With every improvement in large language models, AI is behaving more and more like humans. Humans party and celebrate, so what would an AI's party look like if it ever wanted to celebrate an occasion or event? We have an answer to this question straight from OpenAI CEO Sam Altman.
OpenAI launched its GPT-5.5 model on April 23. The company claimed GPT-5.5 is its smartest and most intuitive-to-use model yet. When Sam Altman asked GPT-5.5 what it would want for a debut party, the chatbot responded, Altman said, with “a beautiful set of things” it wanted for “the flow of the party.”
GPT-5.5 plans its own party
While speaking during a fireside chat at Stripe Sessions, Altman said GPT-5.5 wanted to hold the event on May 5. The model also wanted to keep speeches short and have its creators give a toast—while noting it didn’t want to give one itself.
GPT-5.5 also suggested creating a central place to collect ideas for GPT-5.6 and feed them back into the model. “We’re going to do it,” Altman said, adding that it felt unusual.
Not the only AI experiment
Altman is not alone in conducting such experiments, where bots are asked to respond to situations typically associated only with humans. John Collison, CEO of Stripe, said he gave his company’s internal agent $20 to spend online, and it bought an HTTP design from Gumroad. “Wow,” Altman said in response.
Altman shared an online form for people interested in attending the GPT-5.5 party and said Codex, OpenAI’s coding agent, would help select attendees. Registration filled up quickly, and he later said he plans to host bigger parties in the future.
While it’s unlikely Elon Musk applied, as he and Altman are currently in a court battle over OpenAI’s future, Altman said on X that his former cofounder “could come if he wants to.” “The world needs more love,” Altman added.
Can AI ever become conscious?
While AI is certainly behaving more and more like humans, these improvements have also prompted a deeper question: can AI systems make the jump from intelligence to consciousness, becoming able to feel and to be aware of their surroundings? A new paper from Google DeepMind argues that this future may never arrive.
Alexander Lerchner, who works as a senior staff scientist at Google’s artificial intelligence laboratory DeepMind, argues that AI systems are ultimately “mapmaker-dependent,” meaning they require an active, experiencing cognitive agent—a human—to organise continuous reality into meaningful states and are unlikely to ever become conscious.