Facebook today announced that it intends to buy Ctrl-labs, a New York-based startup developing a wristband that translates musculoneural signals into machine-interpretable commands. The acquisition hasn’t yet closed, and the terms weren’t revealed publicly. But the Menlo Park company plans to bring the startup into its Reality Labs division, whose principal work concerns virtual and augmented reality.
Ctrl-labs CEO and cofounder Thomas Reardon will join Facebook, as will other employees who opt to do so. Prior to the acquisition, Ctrl-labs raised $67 million from investors including GV (Google’s venture capital arm), Amazon’s Alexa Fund, Lux Capital, Spark Capital, Matrix Partners, Breyer Capital, and Fuel Capital.
“We know there are more natural, intuitive ways to interact with devices and technology. And we want to build them,” Facebook AR/VR VP Andrew Bosworth wrote in a blog post announcing the deal. “It’s why we’ve agreed to acquire Ctrl-labs. They will be joining our Facebook Reality Labs team where we hope to build this kind of technology, at scale, and get it into consumer products faster.”
The deal comes months after Ctrl-labs announced it had acquired patents associated with Myo, a wearable created by North (formerly Thalmic Labs) that enables control of robotics and PCs via gestures and motion. Ctrl-labs chief strategy officer Josh Duyan said at the time that the patents would bolster Ctrl-labs’ developer tools and lay the cornerstone of an industry standard for surface electromyography (EMG) control ahead of the expanded availability of the company’s developer kit.
Ctrl-labs’ device — Ctrl-kit — currently comprises two parts: an enclosure roughly the size of a large watch that’s packed with wireless radios, and a tethered component with electrodes that sits further up the arm. The accompanying SDK ships with JavaScript and TypeScript toolchains and prebuilt demos, and programming is largely done over WebSockets.
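For developers, that means working with the kit looks less like embedded programming and more like consuming any other event stream. The TypeScript sketch below illustrates the general pattern; the endpoint, message schema, and field names are invented here for illustration, since the actual Ctrl-kit SDK’s interfaces aren’t public and may differ.

```typescript
// Hypothetical shape of a streamed sample; the real Ctrl-kit schema is not public.
interface EmgSample {
  timestamp: number;  // milliseconds since the stream started
  channels: number[]; // one reading per electrode (16 on the current dev kit)
}

// Placeholder address: the actual host, port, and path are defined by the SDK.
const socket = new WebSocket("ws://localhost:9999/stream");

socket.onopen = () => {
  // A subscription message is assumed here purely for illustration.
  socket.send(JSON.stringify({ type: "subscribe", stream: "emg" }));
};

socket.onmessage = (event: MessageEvent) => {
  const sample: EmgSample = JSON.parse(event.data);
  console.log(`t=${sample.timestamp} first channel=${sample.channels[0]}`);
};

socket.onerror = (err) => console.error("WebSocket error:", err);
```

Treating the device as a socket service keeps the tooling language-agnostic: anything that can open a WebSocket and parse JSON can consume the stream, whether it’s a browser demo or a game engine plugin.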
The final version of Ctrl-kit will be a single piece, but it won’t be an entirely self-contained affair. The developer kit has to be wirelessly tethered to a PC for some processing, according to Ctrl-labs, but the goal is to reduce that overhead to the point where everything can run on a wearable system-on-chip.
Ctrl-kit leverages EMG to translate mental intent into action. Sixteen electrodes monitor the motor neuron signals amplified by the muscle fibers of motor units, and machine learning algorithms trained with Google’s TensorFlow distinguish between the individual pulses of each nerve.
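To make that pipeline concrete, the sketch below trains a toy TensorFlow.js classifier on windows of 16-channel EMG data. The window length, network architecture, and gesture labels are assumptions for illustration only, and random tensors stand in for real recordings; none of this reflects Ctrl-labs’ actual models.

```typescript
import * as tf from "@tensorflow/tfjs";

const CHANNELS = 16; // one reading per electrode
const WINDOW = 50;   // samples per classification window (assumed)
const GESTURES = 4;  // e.g. rest, pinch, fist, point (assumed labels)

async function main() {
  const model = tf.sequential({
    layers: [
      // A 1D convolution picks up temporal patterns across the electrode channels.
      tf.layers.conv1d({
        inputShape: [WINDOW, CHANNELS],
        filters: 32,
        kernelSize: 5,
        activation: "relu",
      }),
      tf.layers.globalAveragePooling1d({}),
      tf.layers.dense({ units: 64, activation: "relu" }),
      // One softmax output per gesture class.
      tf.layers.dense({ units: GESTURES, activation: "softmax" }),
    ],
  });

  model.compile({
    optimizer: "adam",
    loss: "categoricalCrossentropy",
    metrics: ["accuracy"],
  });

  // Random tensors stand in for labeled EMG recordings.
  const x = tf.randomNormal([128, WINDOW, CHANNELS]);
  const y = tf.oneHot(tf.randomUniform([128], 0, GESTURES, "int32"), GESTURES).toFloat();
  await model.fit(x, y, { epochs: 3, batchSize: 32 });
}

main();
```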
The system works independently of muscle movement; generating a neural activity pattern that Ctrl-labs’ tech can detect requires merely the firing of a neuron down an axon, or what neuroscientists call an action potential. That puts it a class above wearables that use electroencephalography (EEG), a technique that measures electrical activity in the brain through contacts pressed against the scalp. EMG devices draw on the cleaner, clearer signals from motor neurons, and as a result are limited only by the accuracy of the software’s machine learning model and the snugness of the contacts against the skin.
Video games top the list of apps Ctrl-labs expects its early adopters to build, particularly virtual reality games, which the company believes are a natural fit for the sort of immersive experiences EMG can deliver. (Imagine swiping through an inventory screen with a hand gesture, or piloting a fighter jet just by thinking about the direction you want to fly.) And not too long ago, Ctrl-labs demonstrated a virtual keyboard that maps finger movements to PC inputs, allowing a wearer to type messages by tapping on a tabletop with their fingertips.
It’s not difficult to imagine Ctrl-labs’ tech complementing that which Facebook is actively developing. Earlier this year, Facebook provided an update on its brain-computer interface project, preliminary plans for which it unveiled at its F8 developer conference in 2017. In a paper published in the journal Nature Communications, a team of scientists at the University of California, San Francisco backed by Facebook Reality Labs — Facebook’s Pittsburgh-based division devoted to augmented reality and virtual reality R&D — described a prototype system capable of reading and decoding study subjects’ brain activity while they speak.
Ctrl-labs earlier this year joined the nonprofit consortium Khronos Group’s OpenXR working group, which seeks to create a royalty-free API and device layer for virtual reality and augmented reality apps. A provisional version (0.9) of the standard was released in March, with companies including AMD, Arm, Google, Microsoft, Nvidia, Mozilla, Qualcomm, Samsung, Valve, LG, Epic Games, HP, HTC, Intel, MediaTek, Razer, and Unity Technologies contributing to its development and implementation.
Ctrl-labs says that the partnership “represents [its] desire to support and collaborate with the [extended reality] developer community.”