Robots have always been part of CES. For years, they have danced, played games, and entertained crowds on the show floor. But at CES this year, something feels different. Humanoid robots are no longer just performing. They are interacting, responding, and in some cases, making decisions on their own.
One of the busiest stops on the show floor was the booth for Intbot, where humanoid robots carried on conversations with attendees without a human operator guiding the interaction.
Nylo, a full-size humanoid, along with his little sister, “manned” the entire booth.

When I approached, Nylo, on his own, greeted me, saying, “You picked the right spot to visit. You’re getting the full humanoid social robot experience. No strings, no human puppeteers, just me running the show.”
Intbot’s approach stands out because it is not focused on flashy physical tricks. While other companies are building robots that can dance, box, or play ping pong, Intbot focuses on what it believes matters most:
“We build a social intelligence layer that connects robots with humans in a natural way,” explained Intbot’s CEO, Dr. Sharon Yang. “We think for robots to coexist with humans, they have to understand humans. They have to understand what humans want and what we’re interested in. When to talk to us and when not to talk to humans.”
The robots can adjust how they speak depending on who they are talking to, even changing language style when interacting with children. In some cases, they can recognize repeat visitors and remember previous interactions, creating a more personal experience. The most interesting, some might say concerning, part is how Dr. Yang described the humanoids.
The rapid advancement of AI is accelerating that development.
Vision is the other major piece of the puzzle. Several companies at CES showed how advances in 3D camera technology are giving robots a much better understanding of the world around them. Using depth-sensing cameras developed by RealSense, humanoid robots can track motion, predict where movement is headed, and react in real time. That means dodging an incoming punch, maintaining balance, or safely navigating crowded spaces.

Nadav Orbach is CEO of RealSense. “Basically, similar to human eyes, we work with similar concepts of stereoscopic vision. We are able to, in real time, give the robot an understanding of its surroundings in 3D.”
“As far as the question, what needs to happen in order to get this autonomy, we need AI and we need vision perception. And with the two coming together, that’s when we get the robustness for mass deployment,” he said.
So, how long before we start seeing humanoid robots in our businesses and homes?
“I think it’s going to be a while,” said Orbach. “I give it five years. But we will see relative deployments over the next two years, so it’s super exciting.”

When artificial intelligence and advanced vision systems come together, developers say, humanoid robots become robust enough for real-world deployment. In some places, that is already happening: humanoid robots are currently working in select hotels as concierges, greeting guests, answering questions, and providing basic assistance. Developers caution that widespread deployment will take time, but many believe the next five years will bring limited but meaningful rollouts in controlled environments like hotels, offices, and public venues.
As CES wraps up, robotics companies will leave Las Vegas with massive amounts of data gathered from real human interactions on the show floor. Every conversation, reaction, and unexpected moment helps them refine how these machines think and respond.
Five years may sound like a long time. At CES, it feels closer than ever.

