Robot learning is a research field at the intersection of machine learning and robotics. It studies techniques that allow a robot to acquire novel skills or adapt to its environment through learning algorithms. The embodiment of the robot, situated in a physical environment, presents both specific challenges (e.g. high dimensionality, real-time constraints on collecting data and learning) and opportunities for guiding the learning process (e.g. sensorimotor synergies, motor primitives).

Examples of skills targeted by learning algorithms include sensorimotor skills such as locomotion, grasping, and active object categorization, as well as interactive skills such as joint manipulation of an object with a human peer, and linguistic skills such as the grounded and situated meaning of human language. Learning can happen either through autonomous self-exploration or through guidance from a human teacher, as in robot learning by imitation.

Robot learning is closely related to adaptive control, reinforcement learning, and developmental robotics, which considers the problem of autonomous lifelong acquisition of repertoires of skills. While machine learning is frequently used by computer vision algorithms employed in the context of robotics, these applications are usually not referred to as "robot learning".

Projects

Maya Cakmak, assistant professor of computer science and engineering at the University of Washington, is trying to create a robot that learns by imitating, a technique called "programming by demonstration". A researcher demonstrates a cleaning technique within view of the robot's vision system, and the robot generalizes the cleaning motion from the human demonstration while also identifying the "state of dirt" before and after cleaning.[1]

Similarly, the Baxter industrial robot can be taught how to do something by grabbing its arm and guiding it through the desired movements.[2] It can also use deep learning to teach itself to grasp an unknown object.[3][4][5]
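As a simplified illustration of the trajectory-generalization idea behind programming by demonstration, the sketch below rescales a demonstrated one-dimensional joint trajectory to a new start and goal position. The demonstration profile, the affine rescaling scheme, and all numeric values are illustrative assumptions, not the method used by Baxter or any particular research system.

    # Minimal sketch: reuse the shape of a demonstrated joint trajectory
    # between new endpoints. All values here are hypothetical.
    import numpy as np

    def generalize_trajectory(demo, new_start, new_goal):
        """Rescale a demonstrated 1-D joint trajectory to new endpoints."""
        demo = np.asarray(demo, dtype=float)
        d0, d1 = demo[0], demo[-1]
        # Normalize the demonstration to a "progress" profile in [0, 1],
        # then map that profile onto the new start and goal.
        shape = (demo - d0) / (d1 - d0) if d1 != d0 else np.zeros_like(demo)
        return new_start + shape * (new_goal - new_start)

    if __name__ == "__main__":
        # Fake demonstration: a smooth wipe from joint angle 0.2 rad to 1.0 rad.
        t = np.linspace(0.0, 1.0, 50)
        demo = 0.2 + 0.8 * (3 * t**2 - 2 * t**3)  # minimum-jerk-like profile
        # Reproduce the same wiping motion between different endpoints.
        new_traj = generalize_trajectory(demo, new_start=0.5, new_goal=1.4)
        print(new_traj[0], new_traj[-1])

Practical learning-from-demonstration systems use richer representations (for example, movement primitives or probabilistic models fitted to many demonstrations), but the core step of extracting a reusable motion shape from a human demonstration is the same.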

Sharing learned skills and knowledge

The goal of Stefanie Tellex's "Million Object Challenge" is for robots to learn how to identify and handle simple objects, and to upload their data to the cloud so that other robots can analyze and use the information.[4]

RoboBrain is a knowledge engine for robots which can be freely accessed by any device wishing to carry out a task. The database gathers new information about tasks as robots perform them, by searching the Internet and interpreting natural-language text, images, and videos, as well as through object recognition and interaction. The project is led by Ashutosh Saxena at Stanford University.[6][7]

RoboEarth is a project that has been described as a "World Wide Web for robots": a network and database repository where robots can share information and learn from each other, and a cloud for outsourcing heavy computation tasks. The project brings together researchers from five major universities in Germany, the Netherlands, and Spain, and is backed by the European Union.[8][9][10][11][12]

Google Research, DeepMind, and Google X have decided to allow their robots to share their experiences.[13][14][15]
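These projects differ in scale and scope, but they share a common pattern: robots write what they have learned to a shared store and read back what other robots have learned. The toy sketch below illustrates that pattern with an in-memory SQLite table of grasp attempts; the schema, table name, and success-rate query are illustrative assumptions and do not reflect the actual RoboBrain, RoboEarth, or Google infrastructure.

    # Toy sketch of shared robot experience, with an in-memory SQLite
    # database standing in for a cloud store. Schema and values are hypothetical.
    import sqlite3

    def open_shared_store(path=":memory:"):
        """Open a store standing in for a shared cloud database."""
        db = sqlite3.connect(path)
        db.execute("""CREATE TABLE IF NOT EXISTS grasp_experience (
                          object_label TEXT,
                          grasp_angle  REAL,
                          success      INTEGER)""")
        return db

    def report_experience(db, object_label, grasp_angle, success):
        """A robot uploads the outcome of one grasp attempt."""
        db.execute("INSERT INTO grasp_experience VALUES (?, ?, ?)",
                   (object_label, grasp_angle, int(success)))
        db.commit()

    def best_known_grasp(db, object_label):
        """Another robot queries the grasp angle with the highest success rate."""
        return db.execute("""SELECT grasp_angle, AVG(success) AS rate
                             FROM grasp_experience
                             WHERE object_label = ?
                             GROUP BY grasp_angle
                             ORDER BY rate DESC LIMIT 1""",
                          (object_label,)).fetchone()

    if __name__ == "__main__":
        db = open_shared_store()
        report_experience(db, "mug", 0.0, False)   # robot A fails with a top grasp
        report_experience(db, "mug", 1.57, True)   # robot A succeeds from the side
        report_experience(db, "mug", 1.57, True)   # robot B confirms the side grasp
        print(best_known_grasp(db, "mug"))         # robot C reuses what was learned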

References

  1. Rosenblum, Andrew. "The robot you want most is far from reality". MIT Technology Review. Retrieved 4 January 2017.
  2. "Hands-on with Baxter, the factory robot of the future". Ars Technica. 15 June 2014. Retrieved 4 January 2017.
  3. "Deep-Learning Robot Takes 10 Days to Teach Itself to Grasp". MIT Technology Review. Retrieved 4 January 2017.
  4. Schaffer, Amanda. "10 Breakthrough Technologies 2016: Robots That Teach Each Other". MIT Technology Review. Retrieved 4 January 2017.
  5. Platt, Robert (3 May 2023). "Grasp Learning: Models, Methods, and Performance". Annual Review of Control, Robotics, and Autonomous Systems. 6 (1): 363–389. arXiv:2211.04895. doi:10.1146/annurev-control-062122-025215. ISSN 2573-5144. Retrieved 4 May 2023.
  6. "RoboBrain: The World's First Knowledge Engine For Robots". MIT Technology Review. Retrieved 4 January 2017.
  7. Hernandez, Daniela. "The Plan to Build a Massive Online Brain for All the World's Robots". WIRED. Retrieved 4 January 2017.
  8. "Europe launches RoboEarth: 'Wikipedia for robots'". USA TODAY. Retrieved 4 January 2017.
  9. "European researchers have created a hive mind for robots and it's being demoed this week". Engadget. Retrieved 4 January 2017.
  10. "Robots test their own world wide web, dubbed RoboEarth". BBC News. 14 January 2014. Retrieved 4 January 2017.
  11. "'Wikipedia for robots': Because bots need an Internet too". CNET. Retrieved 4 January 2017.
  12. "New Worldwide Network Lets Robots Ask Each Other Questions When They Get Confused". Popular Science. 9 March 2013. Retrieved 4 January 2017.
  13. "Google Tasks Robots with Learning Skills from One Another via Cloud Robotics". allaboutcircuits.com. Retrieved 4 January 2017.
  14. Tung, Liam. "Google's next big step for AI: Getting robots to teach each other new skills | ZDNet". ZDNet. Retrieved 4 January 2017.
  15. "How Robots Can Acquire New Skills from Their Shared Experience". Google Research Blog. Retrieved 4 January 2017.