Today's Editorial

15 January 2020

‘Virtual human’ NEONs

Source: Nandagopal Rajan, The Indian Express

Among the most-discussed new concepts at the annual Consumer Electronics Show (CES) in Las Vegas this year was NEON. The first project of Samsung’s Star Labs, NEONs are being called the world’s first artificial humans. They look and behave like real humans, and could one day develop memories and emotions, though from behind a 4K display. It won’t be wrong to call them life-size human avatars, or maybe a human interface for whatever you want to do with technology.

Star Labs is headed by India-born scientist Pranav Mistry, who underlines that what was showcased at CES was the product of just four months’ work. Mistry, who is president and CEO of Star Labs, was earlier behind several products released by Samsung, including the Galaxy Gear smartwatch. Star Labs stands for Samsung Technology & Advanced Research Labs (not to be confused with the fictional labs of DC Comics), and is an independent project funded by the Korean tech giant.

So what are NEONs?

The company says NEONs are computationally created virtual humans — the word derives from NEO (new) + humaN. For now the virtual humans can show emotions when manually controlled by their creators. But the idea is for NEONs to become intelligent enough to be fully autonomous, showing emotions, learning skills, creating memories, and being intelligent on their own. Star Labs thinks they can be “friends, collaborators, and companions”, but all that is a few years away. At CES, the idea was to showcase the concept to the world.

How do the NEONs work?

Mistry started work on NEONs by trying to replicate a friend. Initially, the models were trained on his face, and there were significant errors. But they gradually improved, to the point of being almost indistinguishable from the original.

Mistry says there are two core technologies behind his virtual humans. First, there is the proprietary CORE R3 technology that drives the “reality, real time and responsiveness” behind NEONs. The company claims CORE R3 “leapfrogs in the domains of Behavioral Neural Networks, Evolutionary Generative Intelligence and Computational Reality”, and is “extensively trained” on how humans look, behave and interact.

But in the end, it is like a rendering engine, converting the mathematical models into what looks like actual humans. At the moment, the latency for a response is less than a few milliseconds. CORE R3 can also connect to other domain-specific and value-added services, such as language kits.
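Star Labs has not published technical details or an API for CORE R3, so the sketch below is only a rough illustration of the description above: a front-end engine as a real-time loop with a latency budget of a few milliseconds and pluggable domain services. All names here (RealityEngine, LanguageKit) are invented for illustration.

```python
import time

# Hypothetical sketch only: Star Labs has not published a CORE R3 API.
# It mirrors the article's description of a front-end "reality engine":
# a real-time render step under a tight latency budget, plus pluggable
# domain-specific services such as a language kit.

LATENCY_BUDGET_MS = 5  # "less than a few milliseconds", per the article


class LanguageKit:
    """Stand-in for a domain-specific, value-added service."""

    def respond(self, utterance: str) -> str:
        return f"(reply to: {utterance})"


class RealityEngine:
    """Stand-in for the rendering front end (the CORE R3 role)."""

    def __init__(self, services: dict):
        self.services = services  # pluggable services, keyed by name

    def render_frame(self, behaviour_state: str) -> str:
        start = time.perf_counter()
        frame = f"frame[{behaviour_state}]"  # a real engine would render a 4K avatar
        elapsed_ms = (time.perf_counter() - start) * 1000
        assert elapsed_ms < LATENCY_BUDGET_MS, "missed the latency budget"
        return frame

    def handle(self, service: str, query: str) -> str:
        # Route a query to an attached service, e.g. a language kit.
        return self.services[service].respond(query)


engine = RealityEngine(services={"language": LanguageKit()})
print(engine.render_frame("smile"))        # frame[smile]
print(engine.handle("language", "Hello"))  # (reply to: Hello)
```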

The next stage will be SPECTRA, which will complement CORE R3 with the “spectrum of intelligence, learning, emotions and memory”. But SPECTRA is still in development, and is not expected before NEONWORLD 2020 later this year.

“CORE R3 is the front-end reality engine that is able to give you that real expression. My NEON, right now, does not know when to smile. When you come tomorrow and talk to a NEON, they don’t know that you were here yesterday,” Mistry explained. The spectrum of emotions and knowledge will come only when the NEONs are “actually in the field”; Mistry says this is the layer SPECTRA will enable. He believes an initial focus on B2B scenarios will give him more time to work on the technology needed to solve complex memory.
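SPECTRA itself is unreleased, so any concrete design is guesswork. The minimal sketch below, with an invented MemoryLayer class, only illustrates the layering Mistry describes: CORE R3 renders the expression, while a memory layer on top is what would let a NEON recognise a returning visitor.

```python
from collections import defaultdict
from datetime import date

# Illustrative only: SPECTRA is unreleased. This shows the kind of
# per-visitor memory that would let a NEON know "you were here
# yesterday", the gap Mistry says SPECTRA will fill.


class MemoryLayer:
    def __init__(self):
        self.visits = defaultdict(list)  # visitor name -> dates seen

    def greet(self, visitor: str, today: date) -> str:
        seen_before = bool(self.visits[visitor])
        self.visits[visitor].append(today)
        if seen_before:
            return f"Welcome back, {visitor}."  # memory-aware response
        return f"Nice to meet you, {visitor}."  # first encounter


memory = MemoryLayer()
print(memory.greet("Asha", date(2020, 1, 14)))  # Nice to meet you, Asha.
print(memory.greet("Asha", date(2020, 1, 15)))  # Welcome back, Asha.
```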

How could NEONs be used?

Mistry sees a world in which NEONs are the interface for technologies and services. They will answer your queries at a bank, welcome you at a restaurant, or read out the breaking news on television at an unearthly hour. Mistry says this form of virtual assistance would be more effective, for example, while teaching languages, as NEONs will be capable of understanding and sympathising.

However, Mistry is clear that a physical form for his NEONs is not possible in the near future. “I don’t think that we are anywhere close to having the physical embodiment of the NEONs in the next 25 or 30 years.” Also, he does not want to enable NEONs on existing robots — but would not mind collaborating with companies like Google, Facebook, and Baidu that have done work in similar fields.

How are NEONs different from Virtual Assistants?

Virtual Assistants now learn from all the data they are plugged into. NEONs will be limited to what they know and learn. Their learning could potentially be limited to the person they are catering to, and maybe her friends, but not the entire Internet. They will not be an interface for you to request a song; rather, they will be a friend to speak to and share experiences with, says Star Labs.

Unlike deep fakes, Star Labs says, CORE R3 does not manipulate any scenes, videos, or sequences; instead, it creates unique behaviours and interactions in real time. “CORE R3 creates new realities,” it says.

What about personal data with NEON?

Mistry does not want his NEONs to have a collective memory, or to share data among themselves, so what is known to one NEON will not be available to another. “My network is a small network that can live independently,” Mistry said, underlining the difference between his plan and what has been seen so far among other Internet-based companies. Also, Star Labs says “no one except you and your NEON can ever have access to your interactions”, and that private data will never be shared without your permission.
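As a rough sketch of that isolation model, assuming each NEON holds its own private store and nothing is pooled across instances (the Neon class and its methods are hypothetical, not a Star Labs API):

```python
from typing import Optional

# Hypothetical sketch: each NEON keeps a private, instance-local store,
# so what one NEON learns is never visible to another.


class Neon:
    def __init__(self, name: str):
        self.name = name
        self._private_memory = {}  # local to this NEON instance only

    def learn(self, key: str, value: str) -> None:
        self._private_memory[key] = value

    def recall(self, key: str) -> Optional[str]:
        return self._private_memory.get(key)


bank_neon = Neon("bank-teller")
hotel_neon = Neon("hotel-host")

bank_neon.learn("customer.favourite_branch", "MG Road")
print(bank_neon.recall("customer.favourite_branch"))   # MG Road
print(hotel_neon.recall("customer.favourite_branch"))  # None: nothing is shared
```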

So, what’s next for NEON?

Star Labs is talking of launching a beta later this year, and claims it is flooded with queries from potential partners. You could soon find NEONs trying to chat you up on airport screens or in bank lobbies. Mistry says he does not envision a physical form for his NEONs, and in any case that technology is not in the realm of the possible in his lifetime.