USING BIOROBOTS TO INVESTIGATE EXTANT AND EXTINCT ANIMAL LOCOMOTION
The ability to move efficiently in complex environments is fundamental both for animals and for robots, and the problem of locomotion and movement control is an area in which neuroscience, biomechanics, and robotics can fruitfully interact. In this talk, I will present how biorobots and numerical models can be used to explore the interplay of the four main components underlying animal locomotion, namely central pattern generators (CPGs), reflexes, descending modulation, and the musculoskeletal system. Going from lamprey to human locomotion, I will present a series of models suggesting that the respective roles of these components have shifted during evolution, with CPGs playing a dominant role in lamprey and salamander locomotion, and sensory feedback and descending modulation playing a more important role in human locomotion. I will also present a recent project showing how robotics can provide scientific tools for paleontology by offering quantitative estimates of the likelihood of gaits of extinct animals.
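As an illustration of the kind of CPG model referred to above, the sketch below implements a chain of coupled phase oscillators that settles into a travelling wave, loosely in the spirit of lamprey/salamander CPG models. The number of segments, frequency, coupling strength, and phase lag are illustrative assumptions, not values from the speaker's work.

```python
# Minimal chain of coupled phase oscillators as an illustrative CPG model.
# All parameter values are illustrative assumptions, not taken from any
# specific model by the speaker.
import numpy as np

N = 10                      # number of segmental oscillators along the body
freq = 1.0                  # intrinsic frequency (Hz)
w = 4.0                     # coupling weight between neighbouring segments
phase_lag = 2 * np.pi / N   # desired phase lag per segment (one wave along the body)
dt = 0.01

theta = np.random.uniform(0, 2 * np.pi, N)   # oscillator phases

def step(theta):
    dtheta = np.full(N, 2 * np.pi * freq)
    # couple each oscillator to its neighbours, pulling toward a fixed phase lag
    for i in range(N):
        if i > 0:
            dtheta[i] += w * np.sin(theta[i - 1] - theta[i] - phase_lag)
        if i < N - 1:
            dtheta[i] += w * np.sin(theta[i + 1] - theta[i] + phase_lag)
    return theta + dt * dtheta

for _ in range(2000):
    theta = step(theta)

# Output of each "segment": a rhythmic motor command forming a travelling wave
motor_output = np.cos(theta)
print(np.round(motor_output, 2))
```

Starting from random phases, the coupling drives the chain toward a constant phase lag between segments, which is the basic mechanism by which such models generate swimming-like body waves.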
Auke Ijspeert is a professor at the Swiss Federal Institute of Technology in Lausanne, Switzerland (EPFL), an IEEE Fellow, and head of the Biorobotics Laboratory. He has a B.Sc./M.Sc. in physics from EPFL (1995) and a PhD in artificial intelligence from the University of Edinburgh (1999). He has been at EPFL since 2002, where he was first a Swiss National Science Foundation assistant professor, then an associate professor (2009), and since 2016 a full professor. His research interests are at the intersection of robotics and computational neuroscience. He is interested in using numerical simulations and robots to gain a better understanding of animal locomotion and movement control, and in using inspiration from biology to design novel types of robots and locomotion controllers (see for instance Ijspeert et al., Science, Vol. 315, No. 5817, pp. 1416–1420, 2007, and Ijspeert, Science, Vol. 346, No. 6206, 2014). He is also interested in the control of exoskeletons for lower limbs. With his colleagues, he has received paper awards at ICRA 2002, CLAWAR 2005, IEEE Humanoids 2007, IEEE ROMAN 2014, CLAWAR 2015, and CLAWAR 2019. He is an associate editor for Soft Robotics, the International Journal of Humanoid Robotics, and the IEEE Transactions on Medical Robotics and Bionics, and a member of the Board of Reviewing Editors of Science magazine.
ADAPTIVE BEHAVIOR THROUGH DECENTRALIZED REINFORCEMENT LEARNING IN SOFT ROBOTIC MATTER
Researchers have started to explore the use of compliance in the design of soft robotic devices, which have the potential to be more robust, more adaptable, and safer for human interaction than traditional rigid robots. State-of-the-art developments push these robotic systems towards applications such as soft rehabilitation and diagnostic devices, exoskeletons for gait assistance, grippers that can handle highly diverse objects, and electronics that can be embedded in the human body. Despite these exciting recent developments, current soft robotic systems are difficult to scale due to their inherently non-linear response, and they are mainly passive: they typically do not adjust their behavior to changes in their environment. To enable modularly scalable and autonomous soft robots, we have developed a new type of soft robot that is assembled from identical building blocks with embedded actuation, sensing, and computation. In this robotic system, behavior emerges from local interactions rather than being centrally controlled. Here we show that we are able to implement decentralized learning in this system. Using a stochastic optimization approach, the assembled soft robot achieves overall self-learned locomotion by having each individual building block update its phase based only on its own position. As such, this robotic system has material-like properties, e.g. it is scalable and insensitive to damage; if we were to cut the robotic system in two, both parts should maintain similar bulk behavior.
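To make the decentralized learning idea concrete, here is a minimal sketch in which each building block perturbs only its own actuation phase and keeps the perturbation when its locally measured displacement improves. The toy displacement model, module count, and perturbation size are assumptions for illustration, not the speaker's actual robot, sensors, or optimizer.

```python
# Illustrative sketch of decentralized stochastic phase learning: each module
# perturbs only its own phase and keeps the change if its locally measured
# displacement improves. The displacement model below is a stand-in assumption.
import numpy as np

rng = np.random.default_rng(0)
N_MODULES = 6
phases = rng.uniform(0, 2 * np.pi, N_MODULES)

def local_displacement(phases):
    """Toy surrogate for the displacement each module measures on board:
    locomotion is assumed best when neighbouring modules keep a fixed phase lag."""
    target_lag = 2 * np.pi / N_MODULES
    lags = np.diff(phases)
    # wrapped mismatch between actual and target lag for each pair of neighbours
    error = np.abs(np.angle(np.exp(1j * (lags - target_lag))))
    disp = np.zeros(N_MODULES)
    disp[:-1] -= error      # each module "feels" the mismatch with one neighbour
    disp[1:] -= error       # ... and with the other
    return disp

for episode in range(500):
    for i in range(N_MODULES):                   # purely local, per-module updates
        baseline = local_displacement(phases)[i]
        trial = phases.copy()
        trial[i] += rng.normal(0.0, 0.3)         # random perturbation of own phase
        if local_displacement(trial)[i] > baseline:
            phases[i] = trial[i]                 # keep it only if own reading improved

print(np.round(np.diff(phases) % (2 * np.pi), 2))  # learned inter-module phase lags
```

Because no module ever reads another module's sensor or phase, adding or removing blocks (or cutting the chain in two) leaves the update rule of the remaining blocks unchanged, which is the sense in which such a system behaves like a material.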
Johannes T.B. (Bas) Overvelde started as a tenure-track Group Leader at AMOLF in 2016, where he founded the Soft Robotic Matter Group. His group focuses on the design, fabrication, and fundamental understanding of mechanical metamaterials and soft robots that are capable of autonomously adapting to – and even harnessing – variations in their environment. The group aims to uncover principles that help us understand how non-linearity and feedback can result in the emergence of complex – but useful – behaviour in soft actuated systems. Overvelde received both his BSc and MSc degrees cum laude from TU Delft (NL), and, after receiving a Fulbright grant in 2012, completed his PhD in Applied Mathematics at Harvard University (US) in April 2016.
FROM WALKING TO COGNITION: A DECENTRALIZED, INSECT-INSPIRED HEXAPOD CONTROLLER
Simulation studies provide an important tool for understanding how the behavior of animals may be controlled. Generally, such studies focus on a comparatively narrow behavioral segment, an approach that necessarily sidesteps the question of how an architecture might be designed that is able to control the broad range of behaviors observed in animals. To address this question, we apply an ANN-based decentralized and semi-hierarchical architecture. This architecture consists of separate, partially autonomous sensori-motor memories arranged at various levels. These memories pursue local goals while receiving memory-specific sensory signals, and may operate at different time scales. To describe the properties of this network, we start with “lower-level” behavior, in this case hexapod walking. The network shows both adaptivity and stability against disturbances, and accounts for a considerable body of behavioral as well as neurophysiological data. It will then be shown how such a network may be expanded to deal with higher-level control tasks such as navigation and, after a minor further expansion, with cognitive behavior in a strict sense. The latter means that, in critical situations, the network is able to invent new behaviors by exploiting the decentralized memory architecture, and to plan ahead by using an internal body model for simulation, thereby searching for a solution to the current problem.
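As a rough illustration of decentralized, local-rule leg coordination (not the speaker's actual network), the sketch below lets each simulated leg decide on its own when to lift, subject only to a simple inhibition rule from neighbouring legs; the thresholds, speeds, and neighbourhood structure are assumed for illustration.

```python
# Highly simplified sketch of decentralized hexapod leg coordination:
# each leg swings on its own schedule, but is inhibited from lifting while
# an adjacent leg is in swing. Timings and rules are illustrative assumptions.
import numpy as np

LEGS = ["L1", "L2", "L3", "R1", "R2", "R3"]          # front/middle/hind, left/right
NEIGHBOURS = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5],
              3: [4, 0], 4: [3, 5, 1], 5: [4, 2]}    # ipsilateral + contralateral

pos = np.linspace(0.0, 1.0, 6, endpoint=False)       # leg position along its stroke
swinging = np.zeros(6, dtype=bool)

STANCE_SPEED, SWING_SPEED, DT = 0.4, 1.6, 0.05

history = []
for t in range(200):
    for i in range(6):
        if swinging[i]:
            pos[i] -= SWING_SPEED * DT               # swing: leg moves forward again
            if pos[i] <= 0.0:
                pos[i], swinging[i] = 0.0, False     # touch down
        else:
            pos[i] += STANCE_SPEED * DT              # stance: body moves over the leg
            # local rule: lift off at end of stroke, unless a neighbour is swinging
            if pos[i] >= 1.0 and not any(swinging[j] for j in NEIGHBOURS[i]):
                swinging[i] = True
    history.append("".join("." if s else "#" for s in swinging))

print("\n".join(history[-20:]))                      # '#' = stance, '.' = swing
```

Even this crude rule set produces a stable stepping pattern without any central gait clock, which conveys why decentralized coordination rules are attractive for both insects and walking machines.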
Holk Cruse studied biology, physics, and mathematics at the University of Freiburg/Breisgau (Germany). From 1981 until 2008 he was Professor of Biological Cybernetics/Theoretical Biology at the University of Bielefeld (Germany). His research focuses on insect locomotion, including behavioural studies as well as software and hardware simulations at both the reactive and the cognitive level.
"LOOKING" AND "SEEING" IN VISION AND OTHER SENSES IN MAN, ANIMALS AND MACHINES
In human vision, looking orients the head and gaze to bring attended objects into the central visual field for seeing, or scrutiny. It enables attention to select a tiny fraction of the sensory input information into the attentional bottleneck. This bottleneck, more severe in lower animals, should also apply to most robots. I will present recent findings in a new framework for understanding vision. This framework views vision as containing encoding, selection, and decoding stages, putting attentional selection (looking) at center stage. In primates, selection starts in the primary visual cortex (V1), suggesting a massive loss of non-selected information from V1 downstream along the visual pathway. Hence, feedback from downstream visual cortical areas to V1 to aid seeing (decoding), through analysis-by-synthesis, should query for additional information and be directed mainly at the foveal region. Looking and seeing are thus carried out mainly by peripheral and central vision, respectively. Non-foveal vision is not only poorer in spatial resolution, but also more susceptible to many illusions (in seeing). In some animals (such as rodents), senses like vision and audition serve the "peripheral" role of "looking", orienting the central "seeing" senses (e.g. whiskers, tentacles, snouts, noses, lips, and tongues) towards the attended object for better scrutiny.
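The following toy sketch illustrates the looking-versus-seeing distinction: a saliency map formed by taking the maximum feature response at each location selects where to "look", and only a small foveal window around that location is passed through the attentional bottleneck for "seeing". The feature maps, pop-out item, and window size are illustrative assumptions, not a model from the talk.

```python
# Toy sketch of "looking" (saliency-based selection) vs. "seeing" (decoding
# only a foveal window). Feature maps and window size are assumptions.
import numpy as np

rng = np.random.default_rng(1)

H, W, N_FEATURES = 32, 32, 4
feature_maps = rng.random((N_FEATURES, H, W)) * 0.2    # weak background responses
feature_maps[2, 20, 7] = 1.0                           # one strongly responding unit (a "pop-out" item)

# Looking: saliency at each location is the maximum response across features,
# and attention/gaze is directed to the most salient location.
saliency = feature_maps.max(axis=0)
gaze_y, gaze_x = np.unravel_index(saliency.argmax(), saliency.shape)

# Seeing: only a small foveal window around the gaze survives the bottleneck.
FOVEA = 3
y0, y1 = max(0, gaze_y - FOVEA), min(H, gaze_y + FOVEA + 1)
x0, x1 = max(0, gaze_x - FOVEA), min(W, gaze_x + FOVEA + 1)
foveal_patch = feature_maps[:, y0:y1, x0:x1]

print("gaze directed to:", (gaze_y, gaze_x))
print("fraction of input passed through the bottleneck:",
      round(foveal_patch.size / feature_maps.size, 3))
```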
Li Zhaoping obtained her B.S. in Physics in 1984 from Fudan University, Shanghai, and her Ph.D. in Physics in 1989 from the California Institute of Technology. She was a postdoctoral researcher at Fermi National Laboratory in Batavia, Illinois, USA, the Institute for Advanced Study in Princeton, New Jersey, USA, and Rockefeller University in New York, USA. She has been a faculty member in Computer Science at the Hong Kong University of Science and Technology, and was a visiting scientist at various academic institutions. In 1998, Zhaoping and her colleagues co-founded the Gatsby Computational Neuroscience Unit at University College London. Since October 2018, she has been a professor at the University of Tuebingen and head of the Department of Sensory and Sensorimotor Systems at the Max Planck Institute for Biological Cybernetics in Tuebingen, Germany. Zhaoping's research experience over the years ranges from high energy physics to neurophysiology and marine biology, with most of her work devoted to understanding brain functions in vision and olfaction, and to nonlinear neural dynamics. In the late 1990s and early 2000s, she proposed a theory (which is being extensively tested) that the primary visual cortex in the primate brain creates a saliency map to automatically attract visual attention to salient visual locations. This theory, and the supporting experimental evidence, have led Zhaoping to propose a new framework for understanding vision. She is the author of Understanding Vision: Theory, Models, and Data, Oxford University Press, 2014.