Arthur Wandzel, Panpan Cai, and David Hsu, Perception for Planning: Integrating Attention into Planning for Deep Robot Navigation among Crowds, Work in Progress.

Arthur Wandzel, Seungchan Kim, Stefanie Tellex, and Yoonseon Oh, Sample Bounds for Robust Multi-Object POMDP Planning via Rademacher Complexity, Preprint (2020)

Object-based reasoning in real-world environments poses a challenge: as the number of objects under consideration grows, planning becomes increasingly computationally intractable. In this paper, we derive general upper bounds on the number of samples needed for Q-value estimation in object-based, online POMDP planning. Our bounds feature a novel application of Rademacher complexity to POMDPs and comprise two terms: a regularization term that penalizes complex POMDP models and a counting term that scales with the size of the POMDP problem. We compare bounds as we vary the degree of model factorization over objects. We conclude by empirically validating our theoretical findings, demonstrating the advantage of belief factorization for sample-efficient multi-object POMDP planning across a number of domains.

Keywords: object-based reasoning, Rademacher complexity,  computational learning theory, POMDPs, planning

Arthur Wandzel, Yoonseon Oh, Michael Fishman, Nishanth Kumar, Lawson L.S. Wong, and Stefanie Tellex, Multi-Object Search using Object-Oriented POMDPs, IEEE International Conference on Robotics and Automation (ICRA 2019)

A core capability of robots is to reason about multiple objects under uncertainty. Partially Observable Markov Decision Processes (POMDPs) provide a means of reasoning under uncertainty for sequential decision making, but are computationally intractable in large domains. In this paper, we propose Object-Oriented POMDPs (OO-POMDPs), which represent the state and observation spaces in terms of classes and objects. The structure afforded by OO-POMDPs supports a factorization of the agent's belief into independent object distributions, which enables the size of the belief to scale linearly rather than exponentially in the number of objects. We formulate a novel Multi-Object Search (MOS) task as an OO-POMDP for mobile robotics domains in which the agent must find the locations of multiple objects. Our solution exploits the structure of OO-POMDPs by using human language commands to selectively update the belief at task onset. Using this structure, we develop a new algorithm for efficiently solving OO-POMDPs: Object-Oriented Partially Observable Monte-Carlo Planning (OO-POMCP). We show that OO-POMCP with grounded language commands is sufficient for solving challenging MOS tasks both in simulation and on a physical mobile robot.
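The linear-versus-exponential scaling claim above can be made concrete with a small sketch. This is not the paper's implementation; the grid size, object count, and helper names are illustrative assumptions, but the arithmetic shows why a factored belief of independent per-object distributions stays tractable where a joint belief does not.

```python
# Toy illustration of belief factorization in an OO-POMDP-style setting:
# n independent per-object distributions vs. one joint distribution.

GRID_CELLS = 16      # a hypothetical 4x4 grid of candidate object locations
NUM_OBJECTS = 3

# Factored belief: one uniform distribution per object -> n * |cells| entries.
factored = [
    {cell: 1.0 / GRID_CELLS for cell in range(GRID_CELLS)}
    for _ in range(NUM_OBJECTS)
]
factored_size = sum(len(b) for b in factored)    # 3 * 16 = 48 entries

# Joint belief over all object configurations -> |cells|^n entries.
joint_size = GRID_CELLS ** NUM_OBJECTS           # 16^3 = 4096 entries

def update_object_belief(belief, likelihood):
    """Bayes update of a single object's distribution. The other objects'
    distributions are untouched, which is what keeps updates cheap."""
    posterior = {c: p * likelihood(c) for c, p in belief.items()}
    z = sum(posterior.values())
    return {c: p / z for c, p in posterior.items()}

# e.g. an observation (or a language cue) that down-weights cell 0 for object 0
factored[0] = update_object_belief(factored[0], lambda c: 0.5 if c == 0 else 1.0)
```

Selective updates at task onset, as described above, fit naturally here: a grounded language command need only touch the distributions of the objects it mentions.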

Keywords: multi-object search, grounded language, POMDPs, reinforcement learning, planning

Steven J. Jones, Arthur Wandzel, and John E. Laird, Efficient Computation of Spreading Activation Using Lazy Evaluation, International Conference on Cognitive Modeling (ICCM 2016)

Spreading activation is an important component of many computational models of declarative long-term memory retrieval, but it can be computationally expensive. This overhead has led to severe restrictions on its use, especially in real-time cognitive models. In this paper, we describe a series of successively more efficient algorithms for spreading activation. The final model uses lazy evaluation to avoid much of the computation normally associated with spreading activation. We evaluate its efficiency on a commonly used word sense disambiguation task, where it is significantly faster than a naive model, achieving an average time of 0.43 ms per query for a spread to 300 nodes.

Keywords: cognitive modeling, probabilistic methods, memory retrieval, Soar