Designing physical AI with humans at center

Physical artificial intelligence (AI) must be thoughtfully developed so that it can work efficiently with humans in manufacturing.


In brief
  • Safety, speed and non-verbal communication need to be considered when designing physical AI systems.
  • To keep humans in control, continuous onboarding, clear intent and handoffs, and effective confidence signaling are necessary.
  • A dynamic, two-way relationship is essential if physical AI is to learn from and adapt to humans.

Designing for AI in the world 

AI is starting to move off the screen and into the world, within the products we carry, the tools we use, the machines we produce with and the spaces we share. This is happening rapidly in industrial and manufacturing contexts. Embodying AI into a dynamic physical system such as a robot brings exciting potential, but it also changes the goals, responsibilities and skills for product development teams. Traditional machines follow defined rules, typically with minimal human interaction, while physical AI systems can interact, learn and adapt in real time.
 

As Raj Sharma of Ernst & Young LLP recently advised, organizations must address the opportunities and challenges of physical AI to be successful, including managing data, maintaining compliance and, critically, keeping humans in the loop.1 Product teams are no longer just shipping hardware with software; physical AI enables behaviors that change over time. With this come tradeoffs — adaptivity over predictability, initiative over user control, personalization over privacy — that must be thoughtfully defined and communicated.

Human + physical AI communication

Design principles for human plus generative AI (GenAI) interaction are typically prescribed around the traditional human-computer model of one user and a screen.2 By comparison, communication with physical AI systems that move in space, interact with multiple people at once and can pose a safety risk may be less explicit and clear. Broad differences in purpose, use context and ergonomics can all influence the form factor and engagement with AI in physical systems.

Christopher Smith of EY Studio+ has described the “choreography” necessary for success between humans and agentic AI. Agents must be trustworthy, legible and aligned with human goals.

A physical AI system must effectively communicate the following with its human co-workers:

  • What are its capabilities and limits?
  • What is its next action?
  • How confident is it in its decisions?
  • How do we work together?

What are its capabilities and limits? Ongoing onboarding 

Physical AI systems must continuously communicate their capabilities and limitations. Unlike traditional machines, which operate within fixed parameters, physical AI systems are adaptive and can change behaviors over time. This changes onboarding from a one-time event to an ongoing dialogue between humans and AI.

At the start, teams should be introduced to what the system can sense and what it cannot, its operational boundaries and how to pause or override actions. This includes clear instructions for emergency stops and fail-safe mechanisms. Transparency at this stage builds trust and reduces anxiety about unpredictability. A well-designed onboarding experience should also set expectations for variability. It should make it clear that the system may evolve and how everyone will be kept informed.

As physical AI systems learn and evolve, their capabilities may expand or shift. Manufacturing culture is already well set up for communicating changes through stand-ups and shift handovers. Teams should build in regular updates on physical AI systems so personnel understand new features or constraints. For example, a collaborative system could announce that it has learned a new assembly technique or that its range of motion has been adjusted. These updates should be clear and contextualized within existing workflows for understanding.

Moreover, onboarding is not just for people. The AI must also continuously learn about its environment, materials and co-workers. A staged approach works best, ensuring confidence and safety while fostering collaboration. For example:

  • Shadowing: The AI observes and mimics human actions without autonomy.
  • Assisting: The AI performs tasks under close supervision.
  • Semi-autonomy: The AI takes initiative but always allows easy human override.

(See “Toward a framework for levels of robot autonomy in human-robot interaction” for a detailed approach to task allocation.)3
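The staged progression above can be sketched in code. This is a minimal, illustrative Python sketch — the class and method names are hypothetical, not from any cited framework — showing two design choices the article argues for: autonomy advances one stage at a time with explicit human sign-off, and a human override always wins.

```python
from enum import IntEnum


class AutonomyLevel(IntEnum):
    """Hypothetical staged autonomy levels for onboarding a physical AI system."""
    SHADOWING = 0      # observes and mimics human actions; no autonomous actuation
    ASSISTING = 1      # performs tasks under close human supervision
    SEMI_AUTONOMY = 2  # takes initiative, but always allows easy human override


class CollaborativeSystem:
    def __init__(self):
        self.level = AutonomyLevel.SHADOWING

    def promote(self, supervisor_approved: bool) -> AutonomyLevel:
        """Advance one stage at a time, and only with explicit human sign-off."""
        if supervisor_approved and self.level < AutonomyLevel.SEMI_AUTONOMY:
            self.level = AutonomyLevel(self.level + 1)
        return self.level

    def may_act_autonomously(self, human_override: bool) -> bool:
        """A human override always wins, regardless of the current autonomy level."""
        if human_override:
            return False
        return self.level >= AutonomyLevel.ASSISTING
```

In practice the promotion decision would draw on logged performance data and safety reviews rather than a single flag, but the invariant — no stage is skipped, and the human retains veto power — is the point of the sketch.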

Additionally, empowering people to ask the AI about its reasoning (“Why did you choose this action?”) creates transparency and supports informed consent. Over time, this two-way learning builds a shared model between humans and machines that is essential for trust and efficiency.

What is its next action? Legibility of intent

Legibility, the ability to understand what a system is about to do, is critical for collaboration and safety. In digital interfaces, this is largely achieved through text, as well as visual state changes. For physical AI, intent must often be conveyed through non-verbal signals because of practical limits on communication and speed needs in dynamic environments. Legibility is the new usability.

Physical AI should use a combination of visual, auditory and haptic signals to communicate intent. Examples include:

  • Visual: lights that change color to indicate states (e.g., sensing, deciding, acting), or projected arrows showing movement direction
  • Auditory: tones or spoken alerts for critical actions, in addition to speech for more detailed communications
  • Haptic: vibrations on wearable devices for proximity warnings
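A simple way to implement the redundancy the next paragraph calls for is to map each system state to every available cue and emit all of them together, so no single channel is a point of failure. The mapping below is a hypothetical Python sketch; the state names and cue values are illustrative only.

```python
# Hypothetical mapping of system states to redundant multimodal cues.
# None means the channel is unused for that state.
SIGNALS = {
    "sensing":  {"light": "blue",  "tone": None,        "haptic": None},
    "deciding": {"light": "amber", "tone": None,        "haptic": None},
    "acting":   {"light": "green", "tone": "soft_beep", "haptic": None},
    "warning":  {"light": "red",   "tone": "alert",     "haptic": "wrist_buzz"},
}


def cues_for(state: str) -> list:
    """Return every active cue for a state, so signals stay redundant
    in noisy or visually cluttered environments."""
    spec = SIGNALS[state]
    return [(channel, value) for channel, value in spec.items() if value is not None]
```

Note that the highest-risk state ("warning") engages all three channels at once, matching the guidance that redundancy should grow with the stakes.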

Signals must be salient under real-world conditions. Consider line-of-sight, ambient noise and human perceptual capabilities. Redundant cues (e.g., combining lights and sounds) improve reliability in noisy or visually cluttered environments. Testing these signals under real-world conditions is essential to ensure they remain effective when attention is divided.

Borrowing from human-to-human interactions, the system can use “gaze” direction (direction of optical sensors), posture (articulation) and motion pacing (cadence of movement) to indicate intent. For instance, slowing down before turning can signal caution, much like human body language. These cues feel natural and reduce cognitive load.

Like onboarding, legibility is bidirectional. Physical AI must interpret human signals like speech, gestures and movement to ensure safe and collaborative operation. This mutual understanding forms the foundation of safety and trust in shared spaces and rapid activity.

How confident is it in its decisions? Level of confidence

Because physical AI operates in real space, where errors can have tangible consequences, confidence signaling is essential. A system should not act when certainty falls below a prescribed minimum, but acting with reduced confidence may be necessary and useful when learning new tasks or dealing with novel situations.

Co-workers need to know when the system is uncertain before it acts. Low-confidence actions could lead to collisions, dropped objects or privacy breaches. Communicating uncertainty allows humans to attend and intervene proactively, preventing small errors from cascading into major failures.

As with signaling intent, confidence signals should be immediate and multimodal. In fact, intent and confidence should be a combined communication where appropriate. They should also scale with risk: the higher the potential impact, the more prominent the signal. For example, a high-confidence action could be performed at normal speed, paired with a solid green light and a slow pulsed sound. When confidence is below a set threshold (but above a minimum for allowable action), the task can proceed at reduced speed with a flashing amber light and a higher-frequency sound to signal relative uncertainty and provide more time for human redirection.
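The green/amber example above can be expressed as a small decision function. This Python sketch is illustrative only — the threshold values and signal names are assumptions, and a real system would calibrate thresholds per task and risk level.

```python
# Illustrative thresholds; a real system would calibrate these per task and risk.
MIN_CONFIDENCE = 0.60   # below this, do not act; hand off to a human
HIGH_CONFIDENCE = 0.90  # at or above this, act normally


def plan_signaling(confidence: float) -> dict:
    """Map a model confidence score to action speed and multimodal signals."""
    if confidence >= HIGH_CONFIDENCE:
        # High confidence: normal speed, calm cues.
        return {"act": True, "speed": "normal",
                "light": "solid_green", "sound": "slow_pulse"}
    if confidence >= MIN_CONFIDENCE:
        # Allowable but uncertain: slow down and use more urgent cues,
        # buying time for human redirection.
        return {"act": True, "speed": "reduced",
                "light": "flashing_amber", "sound": "fast_pulse"}
    # Below the minimum: stop and revert control to the human.
    return {"act": False, "speed": "stopped",
            "light": "flashing_red", "sound": "alert", "handoff": True}
```

The lowest branch also implements the handoff principle discussed in the next section: when the system cannot meet its confidence floor, control reverts to the human rather than the system guessing.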

How do we work together? Shared responsibility

Collaboration with physical AI requires clear boundaries of responsibility. Unlike digital systems, where handoffs are typically informational, physical AI involves tangible objects and environments, making accountability critical.

In a simplified situation, task allocation may be split such that the AI handles repetitive or precision tasks and humans oversee judgment-based decisions and safety-critical interventions. In reality, responsibility can shift fluidly based on context. If the AI encounters uncertainty or an unexpected scenario, control should revert to the human seamlessly. Interfaces for handoff, such as voice commands or physical buttons, must be simple and accessible. These transitions should feel natural, not abrupt, to maintain workflow continuity.

Shared responsibility fosters trust when workers feel empowered to intervene and when the AI transparently communicates its status and rationale. Over time, these interactions should evolve into a choreography where humans and AI anticipate each other’s moves.

Physical AI will succeed or fail on human terms. Interactions need to go beyond the transactional to the emotional, supporting empathy demand — the human desire “to feel recognized, respected, and responded to at an appropriate emotional level, specific to their unique circumstance, and influenced by the technological context of the interaction.”

In the context of physical AI, empathy demand means making sure people feel understood and supported, not just helped with tasks. Ongoing onboarding, clear communication of intent and confidence, and effective handoffs are core to these goals. When designed effectively, these interactions will build trust and empower people while still protecting their safety, privacy and independence.

Key takeaways

  • Beyond screen-based AI: Physical AI shares the same human-AI interaction challenges as digital systems but adds the critical complexities of safety, speed and non-verbal communication in real-world environments. 
  • Core principles: Effective human-machine interaction relies on ongoing onboarding, clarity of intent, confidence signaling and clear handoffs to keep people informed and in control. 
  • Two-way interactions: These principles are not just one-sided. Physical AI must also learn from and adapt to people, creating a dynamic, reciprocal relationship.

Summary 

Careful consideration of how humans and artificial intelligence (AI) are to work together safely and efficiently is needed for physical AI to succeed in advanced manufacturing.
