Stanford and Meta inch towards AI that acts human with new ‘CHOIS’ interaction model

Researchers from Stanford University and Meta’s Facebook AI Research (FAIR) lab have developed a breakthrough AI system that can generate natural, synchronized motions between virtual humans and objects based solely on text descriptions.

The new system, dubbed CHOIS (Controllable Human-Object Interaction Synthesis), uses conditional diffusion model techniques to produce smooth, precise interactions from commands such as "lift the table above your head, walk, and put the table down."
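To give a rough sense of how a conditional diffusion model turns a text command into motion, here is a minimal sketch of a DDPM-style sampling loop in Python. The denoiser stub, the pose dimensions, and the conditioning inputs are hypothetical placeholders for illustration only, not CHOIS's actual architecture or code.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(x_t, t, text_embedding, object_geometry):
    # Hypothetical stand-in for a learned network that predicts the noise
    # added to a human-motion sequence, conditioned on the text prompt and
    # the object's geometry. A real system would use a trained model;
    # here we return zeros so the loop runs end to end.
    return np.zeros_like(x_t)

def sample_motion(text_embedding, object_geometry,
                  num_frames=120, pose_dims=75, steps=50):
    """Reverse-diffusion sampling: start from Gaussian noise and iteratively
    denoise it into a motion sequence (num_frames x pose_dims)."""
    betas = np.linspace(1e-4, 0.02, steps)      # noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal((num_frames, pose_dims))  # start from pure noise
    for t in reversed(range(steps)):
        eps = denoiser(x, t, text_embedding, object_geometry)
        # Remove the predicted noise (standard DDPM mean update)...
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        # ...and re-inject a smaller amount of noise on all but the final step.
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x  # per-frame pose parameters, ready to drive a virtual character

motion = sample_motion(text_embedding=None, object_geometry=None)
print(motion.shape)  # (120, 75)
```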

The work, published in a paper on arXiv, provides a glimpse into a future where virtual beings can understand and respond to language commands as fluidly as humans.
