About

This is a joint workshop with two themes that share common interests and motivations.

Theme 1 - Declarative knowledge in learning and control of robot behaviors:

For the purposes of explainability, abstraction, efficiency, or robustness, declarative knowledge is increasingly being incorporated into robot decision making. We aim to explore novel ways of leveraging the complementary features of different forms of decision making, in order to inform future research on deliberative systems that rely on declarative knowledge which can be learned from, and shared with, humans.

Knowledge representation and reasoning is one of the earliest research topics in AI. From Prolog to PDDL, many declarative paradigms have been used in robot systems to represent human knowledge. However, these methods often fall short in scalability and robustness, both of which are particularly important in robotics. Numerical approaches to planning and learning have dominated recent developments at every level of robot behavior. Current trends in explainable AI and neuro-symbolic reasoning attempt to marry some of the desirable features of traditional logic-based approaches with the generalization offered by more recent numerical methods.

Under this theme, we intend to bring together robotics researchers from the once distant fields of knowledge representation and reasoning, symbolic and motion planning, reinforcement learning, and more generally machine learning for behavior recognition and synthesis, whose worlds are inevitably colliding. We aim to address the question of how to incorporate human knowledge in declarative forms into robot behaviors.

Theme 2 - Neurosymbolic robotics for learning symbolic representations from sub-symbolic representations:

Neurosymbolic AI has emerged to integrate successful ideas from deep learning and classical symbolic reasoning in a single framework. Such a framework would combine the desirable perception abilities of deep networks for bottom-up computation with the ability to perform symbolic reasoning for top-down computation.

Neurosymbolic approaches are underexplored in robotics, and we believe that advances in this field might produce a step-change towards truly intelligent robots: robots operate through high-bandwidth, low-level sensorimotor signals, yet reasoning requires low-bandwidth, high-level signals. Essentially, "neurosymbolic robotics" can serve as a unified framework for high-level intelligence. Under this theme, we aim to draw attention to neurosymbolic AI methods and to discuss their applicability and promising research directions in robotics. We will encourage participants to debate whether such an approach is feasible for robotics.

We want to draw researchers who are interested in, but not limited to, deep learning, classical symbolic AI, logical reasoning, and knowledge representation. In particular: robotics researchers who work on symbolic AI approaches and would like to incorporate recent advances in deep learning, and robotics researchers who work on deep learning and would like to exploit the reasoning and explainability capabilities of symbolic manipulation systems. We also want to draw researchers from other AI fields (such as vision and natural language processing) who work at the intersection of neural and symbolic approaches, and cognitive and developmental roboticists who work on symbol grounding and emphasize symbol emergence.

Participation

Registration link: https://forms.gle/mCdfUoXBfa4aU3eMA
YouTube live stream link: https://youtu.be/6IdzxYmxPIo
Zoom and gather.town links will be provided to registered participants.

Important dates:

  • Paper submission: 6/27/2021 (extended from 6/20/2021)
  • Acceptance notification: 6/30/2021
  • Workshop: 7/15/2021

Submissions:

We accept regular papers (up to 8 pages of unpublished work) and extended abstracts (up to 4 pages presenting novel work or summarizing a recently published paper) in standard RSS format; references do not count toward the page limit. The review process will be single-blind. 10 minutes will be allocated to each regular paper, and 2 minutes to each extended abstract. There will also be a poster presentation session on gather.town for discussions.

Papers will be submitted through EasyChair: https://easychair.org/cfp/DNR-ROB-2021

There is a one-page CFP in PDF format: Call for Papers

Live Recording

Invited Speakers

Schedule

08:00 - 08:20 (EDT, July 15)
14:00 - 14:20 (CEST, July 15)
21:00 - 21:20 (JST, July 15)
Introduction & Opening Remarks
08:20 - 08:40 (EDT, July 15)
14:20 - 14:40 (CEST, July 15)
21:20 - 21:40 (JST, July 15)
Invited talk 1: Benjamin Kuipers

Title: Acting, Learning, and Knowing in Large-Scale Space

Abstract: For an embodied mobile agent, human or robot, knowledge of its spatial environment (a “cognitive map”) can be learned from its changing perceptions, and can be used to plan how to act to achieve its goals. The human cognitive map is remarkable for its ability to express and use multiple kinds of spatial knowledge: of perceptual space, of small-scale and large-scale navigational space, and of metrical and topological relations within and among those spaces. To have the flexibility and robustness of the human cognitive map, robots need the same. Considering interactions between different kinds of spatial knowledge suggests roles for symbolic and neurosymbolic inference.

Bio: Benjamin Kuipers is a Professor of Computer Science and Engineering at the University of Michigan. He was previously at the University of Texas at Austin, where he held an endowed professorship and served as Computer Science department chair. He received his B.A. from Swarthmore College, his Ph.D. from MIT, and he is a Fellow of AAAI, IEEE, and AAAS. His research in artificial intelligence and robotics has focused on the representation, learning, and use of foundational domains of commonsense knowledge, including knowledge of space, dynamical change, objects, and actions. He is currently investigating ethics as a foundational domain of knowledge for robots and other AIs that may act as members of human society.
08:40 - 09:00 (EDT, July 15)
14:40 - 15:00 (CEST, July 15)
21:40 - 22:00 (JST, July 15)
Invited talk 2: Katerina Fragkiadaki

Title: Object-centric 3D neural scene representations for visuomotor control

Abstract: Current state-of-the-art neural architectures localize rare object categories in images, yet they miss basic facts that a two-year-old has mastered: that objects have 3D extent, that they persist over time despite changes in camera view, that they do not intersect in 3D, and others. We will discuss models that learn to map 2D and 2.5D images and videos into amodally completed 3D feature maps of the scene and the objects in it by predicting views. We will show that the proposed models learn object permanence, have objects emerge in 3D without human annotations, and learn action-conditioned object dynamics that generalize across scene arrangements and camera placements. We will describe learning libraries of object-centric controllers that build upon such amodal 3D feature representations to manipulate diverse objects under diverse camera placements via sample-efficient reinforcement learning.

Bio: Katerina Fragkiadaki is an Assistant Professor in the Machine Learning Department at Carnegie Mellon University. She received her Ph.D. from the University of Pennsylvania and was subsequently a postdoctoral fellow at UC Berkeley and Google Research. Her work is on learning visual representations with little supervision and on combining spatial reasoning with deep visual learning. Her group develops algorithms for mobile computer vision and for learning physics and common sense for agents that move around and interact with the world. Her work has been recognized with a best Ph.D. thesis award, an NSF CAREER award, an AFOSR YIP award, and faculty research awards from Google, TRI, Amazon, and Sony.
09:00 - 09:20 (EDT, July 15)
15:00 - 15:20 (CEST, July 15)
22:00 - 22:20 (JST, July 15)
Invited talk 3: Cynthia Matuszek

Title: Grounding Language Symbols in Percepts and Contexts

Abstract: As robots and other intelligent, embodied agents move from labs and factories into human spaces, it is becoming progressively more impractical to assume that we will be able to predetermine the environments, tasks, and human interactions they will need to be able to handle. Letting robots learn from end users via natural language is an intuitive, versatile approach to handling novel situations robustly. Grounded language acquisition is concerned with learning the meaning of language as it applies to the physical world in which robots operate; at the same time, physically embodied agents offer a way to learn to understand natural language in the context of the world to which it refers. Neural approaches have shown success in grounding language given large corpora of information, while language itself is an inherently symbolic, hierarchical structure. In this presentation, I will describe several outcomes of treating language and perceptual data as combined projections of a shared, non-observable embedding. I will give an overview of our work on joint statistical models to learn the grounded semantics of natural language describing objects, spaces, and actions, and present ongoing work on learning from unconstrained human-robot interactions.

Bio: Cynthia Matuszek is an assistant professor of computer science and electrical engineering at the University of Maryland, Baltimore County, and the director of UMBC’s Interactive Robotics and Language lab. After working as a researcher on the Cyc project, she received her Ph.D. in computer science and engineering from the University of Washington in 2014. Her research is focused on how robots can learn grounded language from interactions with non-specialists, which includes work not only in robotics, but also in human-robot interaction, natural language processing, machine learning, machine bias, and collaborative robot learning, informed by a background in common-sense reasoning and classical artificial intelligence. Dr. Matuszek has been named to the IEEE biennial “10 to Watch in AI” list, and has published in machine learning, artificial intelligence, robotics, and human-robot interaction venues.
09:20 - 10:00 (EDT, July 15)
15:20 - 16:00 (CEST, July 15)
22:20 - 23:00 (JST, July 15)
Panel discussion 1
Chair: Roderic Grupen
Co-chair: Justus Piater
RSS CONFERENCE BREAK
13:00 - 13:20 (EDT, July 15)
19:00 - 19:20 (CEST, July 15)
02:00 - 02:20 (JST, July 16)
Invited talk 4: Anthony Cohn

Title: Human-like planning for reaching in cluttered environments

Abstract: Humans, in comparison to robots, are remarkably adept at reaching for objects in cluttered environments. The best existing robot planners are based on random sampling of the configuration space, which becomes excessively high-dimensional with a large number of objects. Consequently, such planners often fail to efficiently find object manipulation plans in these environments. We addressed this problem by identifying high-level manipulation plans in humans and transferring these skills to robot planners. We used virtual reality to capture human participants reaching for a target object on a tabletop cluttered with obstacles. From this, we devised a qualitative representation of the task space to abstract the decision making, irrespective of the number of obstacles. Based on this representation, human demonstrations were segmented and used to train decision classifiers. Using these classifiers, our planner produced a list of waypoints in task space. These waypoints provided a high-level plan, which could be transferred to an arbitrary robot model and used to initialise a local trajectory optimiser. We evaluated this approach through testing on unseen human VR data, a physics-based robot simulation, and a real robot (dataset and code are publicly available). We found that the human-like planner outperformed a state-of-the-art trajectory optimisation algorithm and was able to generate effective strategies for rapid planning, irrespective of the number of obstacles in the environment.

Bio: Anthony (Tony) Cohn is Professor of Automated Reasoning in the School of Computing at the University of Leeds. His current research interests range from theoretical work on spatial calculi (receiving a KR test-of-time classic paper award in 2020) and spatial ontologies, to cognitive vision, grounding language to vision, robotics, modelling spatial information in the hippocampus, and decision support systems, particularly for the built environment. He is Editor-in-Chief of Spatial Cognition and Computation and was previously Editor-in-Chief of the AI journal. He is the recipient of the 2015 IJCAI Donald E. Walker Distinguished Service Award, as well as the 2012 AAAI Distinguished Service Award. He is a Fellow of the Royal Academy of Engineering and of the Alan Turing Institute in the UK, and is also a Fellow of AAAI, AISB, and EurAI.
13:20 - 13:40 (EDT, July 15)
19:20 - 19:40 (CEST, July 15)
02:20 - 02:40 (JST, July 16)
Invited talk 5: Nick Hawes

Title: Mission Planning with Uncertain Models

Abstract: Mission planning for long-horizon tasks requires the planning agent to use a model to encode its interaction with its environment. In most robotic tasks some parts of this model are known with certainty, whereas other parts may only be known with uncertainty at design time, and must be updated via learning either between missions (i.e. “offline”) or during execution (“online”). In this talk I’ll give a high-level summary of our recent work on planning under uncertainty with such uncertain models. This will range from planning in MDPs with a Gaussian Process prior over a single state feature, to planning in Uncertain MDPs and Bayes-Adaptive MDPs where the true model cannot be known with certainty.
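
To give a flavor of acting while updating an uncertain model online, here is a minimal, hypothetical sketch (not the speaker's methods or code): a single unknown success probability is kept as a Beta posterior and refined from experience via Thompson sampling. All names and numbers below are illustrative assumptions.

import random

TRUE_P = 0.7            # the risky action's true success probability, unknown to the agent
alpha, beta = 1.0, 1.0  # Beta(1, 1) prior over that probability

for step in range(200):
    p_sample = random.betavariate(alpha, beta)  # Thompson sample from the current posterior
    if p_sample * 1.0 > 0.5:                    # plan with the sampled model: risky pays 1.0 on success, safe pays 0.5
        success = random.random() < TRUE_P      # execute the risky action
        alpha += success                        # online Bayesian update of the model
        beta += not success
    # otherwise take the safe action; nothing new is learned about the model

print(f"posterior mean after acting: {alpha / (alpha + beta):.2f} (true value {TRUE_P})")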

Bio: Nick Hawes is an Associate Professor in the Oxford Robotics Institute, part of the Department of Engineering Science at the University of Oxford, and a Fellow of Pembroke College. He leads the GOALS research group, which performs research on mission planning and decision making for autonomous systems, particularly goal-oriented, long-lived robots acting in uncertain environments. He is an Associate Editor for the Journal of AI Research, and a Group Leader for AI and Robotics at the UK’s Alan Turing Institute.
13:40 - 14:00 (EDT, July 15)
19:40 - 20:00 (CEST, July 15)
02:40 - 03:00 (JST, July 16)
Invited talk 6: George Konidaris

Title: Signal to Symbol (via Skills)

Abstract: I will address the question of how a robot should learn an abstract, task-specific representation of an environment. I will present a constructivist approach, where the computation the representation is required to support - here, planning using a given set of motor skills - is precisely defined, and then its properties are used to build the representation so that it is capable of doing so by construction. The result is a formal link between the skills available to a robot and the symbols it should use to plan with them. I will present an example of a robot autonomously learning a (sound and complete) abstract representation directly from sensorimotor data, and then using it to plan. I will also discuss ongoing work on making the resulting abstractions portable across tasks.

Bio: George Konidaris is the John E. Savage Assistant Professor of Computer Science at Brown and the Chief Roboticist of Realtime Robotics, a startup commercializing his work on hardware-accelerated motion planning. He holds a BScHons from the University of the Witwatersrand, an MSc from the University of Edinburgh, and a PhD from the University of Massachusetts Amherst. Prior to joining Brown, he held a faculty position at Duke and was a postdoctoral researcher at MIT. George is a recent recipient of an NSF CAREER award, young faculty awards from DARPA and the AFOSR, and the IJCAI-JAIR Best Paper Prize.
14:00 - 14:20 (EDT, July 15)
20:00 - 20:20 (CEST, July 15)
03:00 - 03:20 (JST, July 16)
Invited talk 7: Sheila McIlraith

Title: Building Human-Taskable Robots that Learn

Abstract: Wouldn’t it be great if robots could be given a few simple directives and learn the details of how to realize a task? Reinforcement Learning (RL) is proving to be a powerful technique for building robots that learn, but a challenge to their broad adoption is the difficulty of reward function specification, and more generally how to build taskable RL-based robots. In this talk I’ll show how formal languages and automata can be used to represent complex non-Markovian reward functions. I’ll present the notion of a Reward Machine, an automata-based structure that provides a normal form representation for reward functions, exposing function structure in a manner that greatly expedites learning. Finally, I’ll also show how these machines can be generated via symbolic planning or learned from data, solving (deep) RL problems that otherwise could not be solved.
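
As a concrete illustration of the reward-machine idea sketched above, here is a minimal, hypothetical toy example (not the speaker's formulation or code): a small automaton whose states track task progress and whose transitions, triggered by propositional events, emit rewards for the non-Markovian task "get coffee, then deliver it to the office". The event names and reward values are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class RewardMachine:
    # transitions[(state, event)] = (next_state, reward); unlisted pairs self-loop with reward 0
    transitions: dict = field(default_factory=dict)
    state: str = "u0"

    def step(self, event: str) -> float:
        """Advance the machine on an observed propositional event and return the reward."""
        next_state, reward = self.transitions.get((self.state, event), (self.state, 0.0))
        self.state = next_state
        return reward

# Non-Markovian task: reward 1.0 only once 'at_coffee' has been seen and then 'at_office'.
rm = RewardMachine(transitions={
    ("u0", "at_coffee"): ("u1", 0.0),
    ("u1", "at_office"): ("u_done", 1.0),
})

print(rm.step("at_office"))  # 0.0 -- office before coffee earns nothing
print(rm.step("at_coffee"))  # 0.0 -- subgoal reached, machine advances to u1
print(rm.step("at_office"))  # 1.0 -- the sequenced goal is now satisfied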

Bio: Sheila McIlraith is a Professor in the Department of Computer Science at the University of Toronto, a Canada CIFAR AI Chair (Vector Institute), and a Research Lead at the Schwartz Reisman Institute for Technology and Society. McIlraith’s research is in the area of AI sequential decision making broadly construed, with a focus on human-compatible AI. Her research straddles machine learning and knowledge representation and reasoning. McIlraith is a Fellow of the ACM and AAAI.
14:20 - 15:00 (EDT, July 15)
20:20 - 21:00 (CEST, July 15)
03:20 - 04:00 (JST, July 16)
Panel discussion 2
Chair: Matteo Leonetti
Co-chair: Yuqian Jiang
BREAK
15:15 - 15:30 (EDT, July 15)
21:15 - 21:30 (CEST, July 15)
04:15 - 04:30 (JST, July 16)
Contributed talks 1 (2 minutes for each paper):

  1. Learning Robot Manipulation Programs: A Neuro-symbolic Approach, Parag Singla, Rohan Paul, Rahul Jain and Vishwajeet Agrawal
  2. Towards Learning Grounding for Abstract Control Policies, Stevan Tomic, Federico Pecora and Alessandro Saffiotti
  3. Planning from Pixels in Environments with Combinatorially Hard Search Spaces, Marco Bagatella, Mirek Olšák, Michal Rolínek and Georg Martius
  4. Integrating Knowledge-based Reasoning and Data-driven Learning in Robotics, Mohan Sridharan
  5. Perceptual Reasoning and Interactive Learning for Planning Urban Driving Behaviors, Cheng Cui, Saeid Amiri, Xingyue Zhan and Shiqi Zhang
  6. SORNet: Spatial Object-Centric Representations for Sequential Manipulation, Wentao Yuan, Chris Paxton, Karthik Desingh and Dieter Fox
15:30 - 15:50 (EDT, July 15)
21:30 - 21:50 (CEST, July 15)
04:30 - 04:50 (JST, July 16)
Invited talk 8: Masataro Asai

Title: Symbol Grounding with Graphical Models: Classical Planning as an Example

Abstract: Building a rational, logical autonomous agent that can make high-level decisions in the real world is one of the ultimate goals of Artificial Intelligence and Robotics. However, symbolic systems that excel at logical reasoning are typically not directly compatible with raw inputs from the real world, due to the knowledge-acquisition bottleneck: for example, domain-independent classical planners require symbolic models of the problem domain and instance as input.
We present Latplan, an unsupervised architecture that uses deep learning to produce the PDDL inputs for classical planners without manual annotation. Given only a set of unlabeled image transitions in the environment, Latplan generates a propositional PDDL definition of the environment, performs planning, and returns a visualized plan.
The process involves a fundamental problem of symbol grounding for propositional and action symbols, which require separate mechanisms. I will discuss how, at the basis of each mechanism, lie generative models, which provide mathematical rigor for symbol grounding.
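
As a rough illustration of the kind of propositional grounding described above (a hypothetical sketch, not Latplan's actual code), the snippet below derives STRIPS-style preconditions and effects from a pair of binary latent state vectors such as those a discrete autoencoder might produce.

def diff_to_action(before, after):
    """Derive a STRIPS-style action description from two binary latent state vectors."""
    precond = {i for i, b in enumerate(before) if b}                             # bits that must hold before
    add = {i for i, (b, a) in enumerate(zip(before, after)) if not b and a}      # bits turned on
    delete = {i for i, (b, a) in enumerate(zip(before, after)) if b and not a}   # bits turned off
    return {"pre": precond, "add": add, "del": delete}

# Two latent states, as a discrete autoencoder might (hypothetically) produce them:
s_before = [1, 0, 1, 0]
s_after  = [1, 1, 0, 0]
print(diff_to_action(s_before, s_after))  # {'pre': {0, 2}, 'add': {1}, 'del': {2}}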

Bio: Masataro Asai is a Research Staff Member at IBM Research Cambridge (MIT-IBM Watson AI Lab). He received a Ph.D. from the University of Tokyo. His main expertise is in classical planning and heuristic graph search, while his recent work focuses on symbol grounding: the automatic identification, with the help of deep neural networks, of discrete symbolic entities that aid cognitive tasks such as planning.
15:50 - 16:10 (EDT, July 15)
21:50 - 22:10 (CEST, July 15)
04:50 - 05:10 (JST, July 16)
Invited talk 9: Leslie P. Kaelbling

Title: Rich Representations for Rational Robots

Abstract: For robots to operate flexibly and intelligently in complex domains over long horizons, they will need to represent and “reason” with information about objects, space, physics, geometry, and people. They will need to represent their own uncertainty as well as possibly the beliefs and objectives of others, and the ways in which their own sensors and effectors connect to external reality. There is unlikely to be one representation that will serve all these purposes. I’ll share some high-level ideas about, and some concrete technical progress toward, an approach that builds and uses multiple representations, creating them dynamically to address important subproblems as they arise.

Bio: Leslie is a Professor in EECS at MIT. She has an undergraduate degree in Philosophy and a PhD in Computer Science from Stanford, and was previously on the faculty at Brown University. She was the founder of the Journal of Machine Learning Research. Her goal is to make robots that are as smart as you are.
16:10 - 16:30 (EDT, July 15)
22:10 - 22:30 (CEST, July 15)
05:10 - 05:30 (JST, July 16)
Invited talk 10: Luís C. Lamb

Title: Neurosymbolic AI: A Short Overview

Abstract: The integration of learning and reasoning has been the subject of growing research interest in AI. However, the two areas have been developed on clearly different technical foundations, by separate research communities. Neurosymbolic AI aims at integrating neural learning with symbolic methods from computational logic and knowledge representation. In this talk, we present a brief overview of the evolution of neurosymbolic AI methods, with attention to developments towards integrating machine learning and reasoning into a unified foundation that contributes to explainable AI. We conclude by illustrating how advances in neural-symbolic computing can lead to the construction of richer AI systems.

Bio: Luis C. Lamb is a Full Professor and Secretary of Innovation, Science and Technology of the State of Rio Grande do Sul, Brazil. He was formerly Vice President for Research (2016-2018) and Dean of the Institute of Informatics (2011-2016) at the Federal University of Rio Grande do Sul (UFRGS), Brazil. He holds a Ph.D. in Computer Science from Imperial College London (2000) and the Diploma of Imperial College, as well as an MSc by research (1995) and a BSc in Computer Science (1992) from UFRGS, Brazil. His research interests include neurosymbolic AI, the integration of learning and reasoning, and AI fairness. He co-authored two research monographs: Neural-Symbolic Cognitive Reasoning, with Garcez and Gabbay (Springer, 2009), and Compiled Labelled Deductive Systems, with Broda, Gabbay, and Russo (IoP, 2004). His research has led to publications at flagship AI and neural computation conferences and journals. He co-organized two Dagstuhl Seminars on neurosymbolic AI (Dagstuhl Seminar 14381: Neural-Symbolic Learning and Reasoning, 2014, and Dagstuhl Seminar 17192: Human-Like Neural-Symbolic Computing, 2017), as well as several workshops on neural-symbolic learning and reasoning at AAAI and IJCAI.
16:30 - 17:10 (EDT, July 15)
22:30 - 23:10 (CEST, July 15)
05:30 - 06:10 (JST, July 16)
Panel discussion 3
Chair: Shiqi Zhang
Co-chair: Emre Ugur
17:10 - 17:25 (EDT, July 15)
23:10 - 23:25 (CEST, July 15)
06:10 - 06:25 (JST, July 16)
Contributed talks 2 (2 minutes for each paper):

  1. Learning Goal-Based Abstractions from Human Teachers, Nakul Gopalan and Matthew Gombolay
  2. Efficient Hierarchical Navigation and Manipulation by Constraint-induced Option-Reward Design, Zhiao Huang, Xiaochen Li and Hao Su
  3. Composable Causality in Semantic Robot Programming, Emily Sheetz, Xiaotong Chen, Zhen Zeng, Kaizhi Zheng, Qiuyu Shi and Chad Jenkins
  4. A Framework for Creative Problem Solving Through Action Discovery, Evana Gizzi, Mateo Guaman Castro, Wo Wei Lin and Jivko Sinapov
  5. Learning Quadruped Locomotion Policies with Reward Machines, David DeFazio and Shiqi Zhang
  6. A Neurosymbolic Framework for Symbol Emergence from Interaction Experience, Alper Ahmetoglu, M. Yunus Seker, Justus Piater, Erhan Oztop and Emre Ugur
17:25 - 18:30 (EDT, July 15)
23:25 - 00:30 (CEST, July 15-16)
06:25 - 07:30 (JST, July 16)
Poster session on gather.town (link will be sent after registration)

Accepted Papers

Perceptual Reasoning and Interactive Learning for Planning Urban Driving Behaviors
Cheng Cui, Saeid Amiri, Xingyue Zhan and Shiqi Zhang


SORNet: Spatial Object-Centric Representations for Sequential Manipulation
Wentao Yuan, Chris Paxton, Karthik Desingh and Dieter Fox


Planning from Pixels in Environments with Combinatorially Hard Search Spaces
Marco Bagatella, Mirek Olšák, Michal Rolínek and Georg Martius


Learning Goal-Based Abstractions from Human Teachers
Nakul Gopalan and Matthew Gombolay



Learning Robot Manipulation Programs: A Neuro-symbolic Approach
Parag Singla, Rohan Paul, Rahul Jain and Vishwajeet Agrawal



Composable Causality in Semantic Robot Programming
Emily Sheetz, Xiaotong Chen, Zhen Zeng, Kaizhi Zheng, Qiuyu Shi and Chad Jenkins

A Framework for Creative Problem Solving Through Action Discovery
Evana Gizzi, Mateo Guaman Castro, Wo Wei Lin and Jivko Sinapov


Towards Learning Grounding for Abstract Control Policies
Stevan Tomic, Federico Pecora and Alessandro Saffiotti


A Neurosymbolic Framework for Symbol Emergence from Interaction Experience
Alper Ahmetoglu, M. Yunus Seker, Justus Piater, Erhan Oztop and Emre Ugur

Organizing Team

Acknowledgments

This workshop is supported by TÜBİTAK (The Scientific and Technological Research Council of Turkey) ARDEB 1001 program (120E274).