Fanuc partners with Nvidia to bring intelligent robots to factories
Industrial robot giant Fanuc has announced a partnership with Nvidia, the Silicon Valley chipmaker specializing in artificial intelligence, to enable Fanuc robots to learn as they work.
The graphics processing units that Nvidia makes, which enable high-speed parallel computation, are especially well suited to deep learning. Fanuc will implement the Nvidia processors in a central system controlling all the robots in a factory, as well as inside each robot.
This arrangement shows how recent developments in artificial intelligence are poised to overhaul the manufacturing industry.
“The age of AI is here,” said Jen-Hsun Huang, founder and CEO of NVIDIA.
“GPU deep learning ignited this new wave of computing where software learns and machines reason. One of the most exciting creations will be intelligent robots that can understand their environment and interact with people. NVIDIA is delighted to partner with FANUC, the world leader of industrial robotics, to realize a future where intelligent machines accelerate the advancement of humanity.”
These next-generation capabilities will include:
- Deep learning robot training with NVIDIA GPUs
- GPU-accelerated AI inference in FANUC Fog units used to drive robots and machines
- Embedded systems for robots to do inference locally
Currently, robots are mostly programmed to repeat a single job with utmost precision and accuracy; however, this approach loses its efficiency when a production run’s needs change, because the robots then have to be entirely reprogrammed. Though robots are easier to program now than even just a few years ago, their ability to learn on their own hasn’t advanced much.
Reinforcement learning uses a large, or deep, neural network that controls a robotic arm’s movements and varies its behaviour, reinforcing actions that bring it closer to an end goal, such as picking up a particular object. The process can also be sped up by having many robots work in concert and then share what they have learned.
Reinforcement learning is a particularly hot research area in robotics. Google used the technique to build a program that taught itself to play the incredibly complex and subtle board game Go to a superhuman level. As with Go, the skills required for a robot to manipulate objects or perform other tasks can be too complex to program by hand.
Yezhou Yang, an assistant professor who runs a robot-learning lab at Arizona State University, says that because no one explicitly programs a robot trained with reinforcement learning, it will be difficult for human operators to understand how the system actually works.
“We need some sort of interface,” Yang says. “I believe a huge amount of work still needs to be done along the lines of explainability.”