What comes to mind when you hear "AI"? Perhaps you think of Claude Code and how it can do your work. Or maybe you imagine humanoid robots automating jobs. But did you picture an AI that could control your body? That is exactly what an MIT team built at a recent hackathon, and it won: an AI that can take control of your body and perform actions for you.
The six-member team won the MIT Hard Mode 2026 hackathon with its wearable AI system called Human Operator. The team describes it as a human augmentation tool that “allows AI to briefly take control of your body to help you learn or do things you cannot do.”
AI that controls your movements
A video uploaded to YouTube shows the Human Operator in action. The wearable system looks almost like an exoskeleton on the arm of the person in the clip. When the user says, "Hello AI," his hand starts waving, a movement the team says is produced entirely by the AI.
In the same video, the person asks, "How does it feel to have a body?" and the AI responds by moving the person's fingers into an "OK" gesture. Later, the man holds his hand over a piano and the AI appears to play a song on its own, even though the user has no idea how to play the instrument.
At a time when AI is already doing much of the work of coding and drafting documents, this could, in theory, let you ask the AI to move your body to do things too.
Now, this may look fascinating. But how does this actually work?
The science behind it
According to reports, the Human Operator combines a vision-language model, voice input and electrical muscle stimulation (EMS), which sends small currents through the skin to contract specific muscles, to help users learn or perform actions they could not manage on their own.
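The team has not published its hardware details, but conceptually, EMS delivers short pulse trains whose intensity and duration determine how strongly a muscle contracts. Here is a minimal sketch of that idea, assuming a hypothetical stimulator that accepts simple text commands over a serial port (the device path, command format and channel numbering are all invented for illustration):

```python
import time

import serial  # pyserial

# Hypothetical EMS stimulator on a serial port; the port name,
# baud rate and command protocol are assumptions for illustration.
stimulator = serial.Serial("/dev/ttyUSB0", baudrate=115200)

def stimulate(channel: int, intensity: int, duration_s: float) -> None:
    """Contract the muscle wired to `channel` for `duration_s` seconds.

    `intensity` (0-100) loosely stands in for pulse amplitude; real EMS
    devices expose pulse width, frequency and amplitude separately.
    """
    stimulator.write(f"STIM {channel} {intensity}\n".encode())
    time.sleep(duration_s)
    stimulator.write(f"STOP {channel}\n".encode())

# Example: a brief contraction on the channel wired to the wrist.
stimulate(channel=0, intensity=40, duration_s=0.5)
```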
That is, while most AI models usually stop at vision or voice capabilities (think Gemini Live), the Human Operator uses everything it sees and hears to send signals to your muscles and induce movement.
The system works by using a camera to capture what the user is looking at while voice input is processed through Anthropic’s Claude API. The model determines the movement required and maps it into a sequence of muscle commands. Those commands are then sent to electrical muscle stimulation electrodes placed on the wrist and fingers.
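The team's code has not been released, but the pipeline as described (a camera frame plus a voice command in, muscle commands out) could look roughly like the sketch below. The Claude call uses Anthropic's real Python SDK; the JSON step format, the electrode channel mapping and the `stimulate` helper from the earlier sketch are assumptions:

```python
import base64
import json

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def plan_movement(image_path: str, spoken_request: str) -> list[dict]:
    """Send the camera frame and the transcribed voice command to Claude,
    asking for a sequence of muscle commands as JSON."""
    with open(image_path, "rb") as f:
        frame_b64 = base64.standard_b64encode(f.read()).decode()

    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # any vision-capable Claude model
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/jpeg",
                            "data": frame_b64}},
                {"type": "text",
                 "text": f"The user said: '{spoken_request}'. "
                         "Reply with only a JSON list of steps, each "
                         '{"channel": int, "intensity": int, "duration_s": float}.'},
            ],
        }],
    )
    return json.loads(response.content[0].text)

# Replay each planned step on the EMS electrodes, using the hypothetical
# stimulate() helper sketched above (channel numbering is invented).
for step in plan_movement("frame.jpg", "Wave my hand"):
    stimulate(step["channel"], step["intensity"], step["duration_s"])
```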
There are limitations, though. The video shows a demo in which the user makes a drink just by asking the AI, but it is labelled an "imagined future use case."
Who built this AI system?

The system was built in 48 hours by Peter He, Ashley Neall, Valdemar Danry, Daniel Kaijzer, Yutong Wu and Sean Lewis, and the project won first place in the hackathon's Learn Track. Hard Mode is a 48-hour event focused on intelligent physical systems that can sense, adapt and respond to people in real time.