An Investigative Analysis using ‘Prior’ Knowledge of Physical Interaction to Learn State Representation in Robotics
Abstract
The effectiveness of robot learning depends heavily on the underlying state representation. Physics, in turn, structures both the ways the world can change and the ways a robot can influence those changes. By exploiting prior knowledge about physical interaction with the world, robots can learn state representations that are consistent with physics. We identify six physics-based priors and describe how they can be used for state representation learning. We demonstrate the effectiveness of our approach in a simulated slot-car racing task and a simulated navigation task with distracting visual motion. Our method learns task-relevant state representations from high-dimensional observations even in the presence of task-irrelevant distractions. We also show that the state representations learnt by our approach substantially improve the generalisation of reinforcement learning.
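To make the idea of prior-based state representation learning concrete, the sketch below shows how loss terms derived from physical priors can shape a learned mapping from observations to a low-dimensional state. It is an illustrative assumption, not the paper's exact formulation: the specific priors used here (temporal coherence, proportionality, causality), the linear mapping, and all names and hyperparameters are hypothetical stand-ins for the six priors discussed in the article.

```python
# Minimal sketch (assumed formulation, not the paper's) of learning a state
# representation by minimising losses derived from physical priors.
import torch

torch.manual_seed(0)
obs_dim, state_dim, T = 64, 2, 500

# Synthetic stand-in data: observations o_t, discrete actions a_t, rewards r_t.
observations = torch.randn(T, obs_dim)
actions = torch.randint(0, 4, (T,))
rewards = torch.randn(T)

# Assumed linear observation-to-state mapping phi(o) = W o.
W = (0.1 * torch.randn(state_dim, obs_dim)).requires_grad_()
optimiser = torch.optim.Adam([W], lr=1e-2)

for epoch in range(200):
    s = observations @ W.T                      # states s_t
    ds = s[1:] - s[:-1]                         # state changes s_{t+1} - s_t

    # Temporal coherence prior: states should change gradually over time.
    loss_temporal = ds.pow(2).sum(dim=1).mean()

    # Sample random time-step pairs for the pairwise priors.
    idx = torch.randint(0, T - 2, (256,))
    jdx = torch.randint(0, T - 2, (256,))
    same_action = (actions[idx] == actions[jdx]).float()

    # Proportionality prior: the same action should cause state changes
    # of similar magnitude at different times.
    mag_i = ds[idx].pow(2).sum(dim=1).sqrt()
    mag_j = ds[jdx].pow(2).sum(dim=1).sqrt()
    loss_proportional = (same_action * (mag_i - mag_j) ** 2).mean()

    # Causality prior: pairs with the same action but different reward
    # should be mapped to distant states (penalise small distances).
    diff_reward = ((rewards[idx + 1] - rewards[jdx + 1]).abs() > 0.5).float()
    pair_dist = ((s[idx] - s[jdx]).pow(2).sum(dim=1) + 1e-8).sqrt()
    loss_causal = (same_action * diff_reward * torch.exp(-pair_dist)).mean()

    loss = loss_temporal + loss_proportional + loss_causal
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

In practice the coherence term alone would favour a collapsed (constant) representation, so terms that push distinct situations apart, such as the causality-style loss above or an explicit variance constraint, are needed to keep the learned states informative.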