Workshops & Tutorials

Sunday, October 1st, Full-Day Workshops and Tutorials (8:30 am to 5:30 pm)

Organizers: Francesco Maurelli, Huihuan (Alex) Qian, Nina Mahmoudian, Hyun-Taek Choi

Abstract: The vast marine environment offers extensive opportunities for robots to work long-term in service of human society, e.g. in observation, transportation, and environmental protection. Significant challenges exist as well, e.g. in autonomy, energy, communication, and robustness. To advance research on long-term marine robotics, this workshop will gather scholars in robot design, perception, control, energy, planning, human-robot interaction, and related areas to share their insights on how to enable marine robots to reach remote locations, explore for longer periods, become more autonomous, overcome faults, and perceive their surroundings. Young researchers will also present their latest findings in workshop papers. Final discussions will enable speakers and audiences to interact and brainstorm how to equip marine robots with long-term capabilities. This workshop will provide a platform for marine robotics researchers at various career stages to exchange ideas toward long-term marine applications. It will also elaborate on the opportunities and challenges in marine robotics so as to inspire and attract researchers from other robotics fields to join the collaboration.

Website: https://crai.cuhk.edu.cn/event/MarineRoboticsWorkshop

Organizers: Christian Pek, Sanne van Waveren, Hang Yin

Abstract: Reinforcement learning (RL) has shown remarkable achievements in applications ranging from autonomous driving and object manipulation to beating the best players in complex board games. However, elementary problems of RL remain open: exploratory and learned policies may cause unsafe situations, lack task-robustness, be unstable, or require many samples during learning. By satisfactorily addressing these problems, RL research will have a long-lasting impact and see breakthroughs on real physical systems and in human-centered environments. Different communities have proposed multiple techniques to increase the safety, transparency, and robustness of RL. The aim of this workshop is to provide a multidisciplinary platform to (1) jointly identify and clearly define major challenges in RL, (2) propose and debate existing approaches to ensure desired properties of learned policies from various perspectives, and (3) discuss opportunities to accelerate RL research. The themes of the workshop will comprise (but not be limited to) RL and control theory, RL and human-robot interaction, RL and formal methods, benchmarking of RL, etc. In the tradition of our previous RL-CONFORM workshops, we encourage a fruitful and lively discussion between researchers that is open to anyone.

Website: https://rlconform-workshop.github.io

Organizers: Drew Hanover, Markus Wulfmeier, Ben Moran

Abstract: The world around us is inherently “multi-agent”. When you drive your car or walk through a crowd, you are engaging in a collaborative and/or competitive environment with other agents. This phenomenon makes the world an exciting place because it requires individuals to learn game-oriented strategies for all aspects of life that maximize their returns, whether it be longevity, wealth, or happiness. The environment is typically partially observable, necessitating decision-making strategies capable of accounting for uncertainties in highly complex domains. In this workshop, we aim to explore these ideas by presenting the latest advancements in multi-agent reinforcement learning and game theory from top researchers and practitioners in the field. The workshop will cover a broad range of topics, including multi-robot coordination and communication, task allocation, resource management, planning, control, and game theory. Additionally, we encourage topics on algorithmic approaches for multi-task expertise and distillation. The workshop will include practical demonstrations and hands-on sessions to provide participants with the requisite tools to enable robots to interact in a highly complex, multi-agent world.

Website: https://djhanove.github.io/IROS23_MRS/

Organizers: Naveen Kumar Uppalapati, Marija Popovic, Anand Kumar Mishra, Arun Narenthiran Sivakumar, Girish Chowdhary, Girish Krishnan, Robert Shepherd

Abstract: Agriculture continues to face major challenges in feeding the growing population due to shrinking land availability and a changing climate. Sustainable intensification is necessary to increase agricultural productivity without negative impacts on the environment. Robotics can help with this goal both as a platform for crop monitoring and as an enabler of automation on farms. However, this requires addressing challenging problems in robot design, perception, planning, and control. In addition, increased interdisciplinary collaboration between crop science, robotics, and Artificial Intelligence (AI) is necessary to develop novel and affordable solutions for farmers. Our workshop aims to bring together leading researchers and entrepreneurs from multiple disciplines, such as robotics, crop science, and agronomy, to discuss the state of the art and key challenges in enabling smart farming using robotics and AI-based agricultural technologies. We focus on three key themes: 1) Farm of the Future: Advancing Agriculture through Robotics, 2) Sustainable Agriculture through Collaboration: Role of Crop Scientists in Guiding Robotic Innovation, and 3) Commercializing Agricultural Robotics: Bridging the Gap Between Research and Industry for Sustainable Farming Solutions. The workshop will include talks and panel discussions on the above themes and a poster session for researchers in this area to present their work.

Website: https://sites.google.com/illinois.edu/iros2023-agrobotics

Organizers: Craig Carignan, Mini C. Rai, Carol Martinez Luna, Giacomo Marani

Abstract: Robotic in-space assembly utilizing in-situ resources will enable the construction of large-scale assets at a significantly reduced cost. High-value infrastructures requiring large-scale assembly include, but are not limited to, solar power satellite systems (SSPS), large-aperture space telescopes for continent-scale Earth and astronomical observations, orbiting laboratories, commercial platforms, and transportation hubs. Invariably, robotic technology for these complex missions will require a variety of locomotion and assembly capabilities beyond those currently employed in orbit onboard the International Space Station (ISS). Moreover, future assembly missions will likely involve multiple mobile space robots with manipulation and grasping capabilities that can collaborate and operate in semi- to fully autonomous modes. This interdisciplinary workshop will bring together researchers from academia and practitioners from government, industry, and regulatory bodies to identify the frontiers of assembly missions, address barriers, formulate solutions, and share promising technologies for expediting challenging space missions. Notably, the workshop participants will examine the feasibility of employing intelligent sensing and planning technology to use robots for the autonomous construction of large-scale, high-value infrastructure. This workshop will feature invited talks by prominent speakers, a panel townhall, and poster/video presentations solicited from the community, emphasizing the importance of space sustainability and the vital roles to be played by autonomous robots built with next-generation technology.

Website: https://wvrtc.com/iros2023

Organizers: Luca Carlone, Margarita Chli, Tobias Fischer, Grace Gao, Sourav Garg, Stephen Hausler, Stephanie Lowry, Michael Milford, Amir Patel, Sebastian Scherer, Olga Vysotska, Peng Yin

Abstract: Localization, Visual Place Recognition (VPR), and Simultaneous Localization And Mapping (SLAM) techniques are never an end in themselves, but rather a means to enable higher-level tasks for robots. Major advances in localization capability have been made in the robotics, computer vision, and machine learning fields, especially over the past two decades, with the advent of mature SLAM systems and modern machine-learning-driven approaches. Yet localization technology is still sparsely deployed in enduring large-scale commercial applications, and despite the adage that “SLAM is solved”, for many applied roboticists it is abundantly clear that there are substantial challenges still to overcome. Involving both researchers and end-users from industry, this workshop will focus on the key reasons we are developing localization and mapping systems, and use those insights to drive a reflection on the key methods by which we are approaching research. We will evaluate how we can improve the metrics and benchmarks by which we assess performance in the research field to make them better proxies of performance in actual deployed situations. To maximize inclusivity, we are providing substantial funding to support researchers from under-represented and lower socio-economic regions to attend and participate in the workshop, both in person and remotely.

Website: https://tinyurl.com/Localization2023

Organizers: Robert Baines, Laura Blumenschein, Jennifer Case, Gina Olson, Andrew Spielberg

Abstract: Soft roboticists are facing challenges with reproducibility, which prevents researchers from making holistic comparisons to prior work, impedes full understanding of results, and forces them to “reinvent the wheel,” delaying fundamental advances. Reproducibility of results is key to advancing science as well as to achieving technology transfer from research laboratories to industrial applications. The purpose of this workshop is to put forth a set of voluntary draft guidelines for soft roboticists concerning fabrication/manufacturing, test procedures, and reporting. Keynote speakers from academia, government, and industry will share difficulties they have encountered when trying to reproduce results from the soft robotics literature. The workshop will include breakout sessions moderated by the organizers and speakers to discuss relevant guidelines related to the topic of each keynote. We will synthesize the perspectives of all the breakout groups into a set of draft guidelines that we intend to publish. Collectively, we hope to improve the reporting standards of soft robotics and drive the field as a whole toward more rigorous research practices.

Website: https://sites.google.com/umass.edu/softrobotreporting/home

Organizers: Jana Pavlasek, Anthony Opipari, Tom Power, Tucker Hermans, Fabio Ramos, Chad Jenkins

Abstract: Advances in robot learning in recent years have yielded outstanding performance across robotic tasks, compelling roboticists to reexamine the role of long-standing, reliable probabilistic inference algorithms. Probabilistic methods have a long history of enabling effective operation under uncertain and unstructured environments and provide a unifying perspective on perception, control and learning. Robot learning, on the other hand, promises generalizability, eliminating the need for carefully hand-crafted models required in many classical probabilistic algorithms. However, it remains unclear whether we can achieve reliable, adaptable behavior by relying on data alone. This workshop sets out to address the question: How can roboticists achieve the best of both deep learning and probabilistic inference? Recently, the robotics community has seen an emergence of interest in hybrid methods which aim to exploit the benefits of both underlying model classes. Key developments in Differentiable Probabilistic Robotics have included the introduction of learned components within inference frameworks and end-to-end differentiable algorithms for Bayesian inference. These methods offer the opportunity to develop robust and reliable learning and adaptive systems. This workshop aims to connect researchers working at the intersection of robotics, deep learning, and probabilistic inference to facilitate breakthrough research in these areas.

Website: https://diff-prob-rob.org

Organizers: Abdalla Swikir, Fares Abu-Dakka, Sami Haddadin, Gitta Kutyniok, Wolfram Burgard, Majid Zamani, Necmiye Ozay

Abstract: Formal methods are concerned with producing precise, unambiguous task specifications or constraints that systems are expected to satisfy. They allow us to design provably correct controllers, which ensure that the behavior of an AI agent, such as a robot, satisfies its formal specification in a dynamic or stochastic environment. Since robotic systems are hybrid, complex, and sometimes safety-critical, satisfying their formal specifications and verifying their closed-loop systems are uniquely challenging. Hence, using formal methods in designing and controlling life- and mission-critical robotic systems is highly desirable and essential. Specifically, it allows for the early detection of errors in the robotic system design, which can save time and resources. It also ensures that the robotic system satisfies its specifications, thereby reducing the risk of unexpected behavior that could result in accidents or damage to the robot or its environment. Finally, it enables the systematic verification of the robot’s behavior in different scenarios, which is particularly important in safety-critical applications. In this workshop, we will cover some essential concepts in formal methods and control theory and illustrate the potential of such techniques in developing high-quality and safe robotic systems. The talks in this workshop will cover topics including, but not limited to: abstraction-based methods for planning and control design for robotic systems; learning methods for robot control with formal specifications; game-theoretic planning and control frameworks for robot manipulation; formal synthesis for compositional and decentralized control of multi-robot systems; and reachability analysis and safety verification for safety-critical robotic applications.

Website: https://sites.google.com/view/w-iros23/home

Organizers: Luis Figueredo, Abdeldjallil Naceri, Waldez Gomes, Sylvain Calinon, Ayse Kucukyilmaz, Praminda Caleb-Solly, Hinrich Schütze, Emmanuel Senft, Sami Haddadin

Abstract: As life expectancy continuously rises, the world’s population grows older: according to recent projections by the United Nations, people over age 65 — so-called third-agers — are the fastest-growing age group. In 2025, third-agers are expected to constitute 25% of the world’s population. This demographic change substantially increases the prevalence of age-related conditions and raises novel socio-economic challenges related to the increasing demand for health care, assisted-living facilities, and retirement homes, while coping with a shrinking workforce. This workshop aims to bring to light current and envisioned AI and robotic solutions in the context of Geriatronics and to address pressing challenges regarding (i) how the new generation of robotics and AI technology can address the challenges stemming from the increasing shortage and geographic disparity of care providers and overburdened health-care systems; (ii) how to build a new generation of accessible, intelligent assistive robotic technologies that enhance independence and self-determination for people with lived experience of disability; (iii) how to systematically identify the potential facilitators of and barriers to the acceptance of robotic-technology-based assistive systems that enhance independence and overall health, and how to improve adaptation, interaction, and acceptance; and (iv) how to address morals and ethics and/or individual needs and dilemmas in a world of care-supporting robots.

Website: https://sites.google.com/view/geriatronics-iros-2023/home

Organizers: Nils Wilde, Javier Alonso-Mora, Daniel Brown, Connor Mattson, Katia Sycara

Abstract: With autonomous robots becoming increasingly capable, we expect to see their deployment in a broader range of applications and at a larger scale. Thus, there is a need for human-robot interaction (HRI) studies on how multi-robot systems can work with and around humans, how end-users can refine their behavior, and how to enable efficient coordination of robot fleets to accomplish complex tasks. In this workshop, we focus on fundamental challenges around the interaction of humans with multi-robot systems. From an HRI perspective, this prompts new challenges arising from the complexity and scalability issues specific to multi-robot problems. Conversely, the design of multi-robot systems needs to consider the usability by, and interaction with, different end-users such as system operators, human co-workers, or people sharing the robots’ environment. Thus, we aim to bring together experts from both HRI and multi-robot systems to discuss pressing research questions such as “How can we develop intuitive HRI frameworks that scale to complex multi-robot systems?”, “What are open algorithmic challenges for human multi-robot interaction?”, “How can multi-agent systems benefit from human feedback and human-in-the-loop learning?”, “How can multi-robot systems convey their strategies to users?”, “What are effective representations for multi-robot plans and strategies?” We envision that this workshop will initiate active discussions between academic and industrial researchers, encourage new connections and collaborations, and ultimately help advance the efficient deployment of robot fleets by respective end-users.

Website: https://sites.google.com/view/hmri-2023/home

Organizers: Omur Arslan, Nikolay Atanasov, Mehmet Dogar, H. Jin Kim, Rafael Papallas

Abstract: Robots capable of autonomous navigation and manipulation with advanced perception and decision-making skills offer tremendous potential to assist people with challenging and repetitive tasks in the service industry, including transportation, logistics, and healthcare. Recent advances in artificial perception enable robots to have semantic understanding and contextual awareness of their surroundings. Similarly, recent years have seen significant progress in decision-making for autonomous navigation and manipulation in complex situations. However, the gap between robot perception and decision-making remains large, as many techniques continue to rely on separation principles between perception, planning, and control. The objective of this workshop is to inspire the robotics community to pursue techniques that tightly integrate perception, planning, and control to achieve physically and contextually safe robot navigation and manipulation in real human environments. Robots sharing the same environment with people need novel semantic planning objectives that integrate perception and planning at a high level to generate contextually relevant robot behavior. At a low level, integrated perception and control require new metric and contextual safety constraints to enable physically safe and socially aware robot behavior. Uncertainty and error quantification in learning-based perception and control is essential for safe and robust robot operation. This workshop will bring together experts from academia and industry to identify and discuss the current challenges and emerging opportunities in perception-aware robot navigation and manipulation, leading to robot perception techniques that actively plan observations and interactions to acquire informative data, and robot planning and control techniques that actively utilize geometric and semantic perceptual information in generating, executing, and adapting robot actions.

Website: https://ippc-iros23.github.io/

Organizers: Rahul Mangharam, Johannes Betz, Venkat Krovi, Hongrui Zheng

Abstract: Decision-making in real-world multi-agent systems, especially where agent teams collaborate to compete while operating with full dynamics in changing environments and imperfect opponent information, is an important and challenging problem with many applications in next-generation robotics. The MAD-Games (Multi-Agent Dynamic Games) workshop at IROS aims to explore the latest advances in using game-theoretic and multi-agent learning approaches to help performant autonomous agents interact safely with diverse competitive strategies. One of the key challenges in designing effective algorithms is handling the complex and highly interactive behavior between agents. Existing game-theoretic models often rely on an oversimplified discretization of agent action spaces and strong assumptions such as perfect information and rational decision-making. These lead to inaccurate descriptions of the dynamics of real-world interactions. Additionally, the highly dynamic and unpredictable nature of real-world robotic systems can make it difficult to predict the behavior of the system, leading to safety risks. These dynamic games are inherently challenging because (a) crashing is dangerous and expensive, (b) current sensors cannot infer the intention or strategy of the opponent, and (c) we are in a small-data regime, since past games cannot be used for learning as strategies, environments, and vehicles change. The MAD Games workshop focuses on addressing these challenges by bringing together researchers and practitioners to (1) explore the latest developments in game theory for interactive decision-making with learning-enabled multi-agent systems and (2) investigate how game-theoretic models can be adapted to handle the complexities of real-world robotic systems.

Website: https://iros2023-madgames.f1tenth.org/

Organizers: Fabio Bonsignorio, Angela Faragasso, Tomoyuki Yamamoto, Signe A. Redfield, Angel Pasqual del Pobil

Abstract: The reproducibility of research results and the objective, operational comparison of different methods are still not mainstream in robotics, AI, and automation research. However, thanks to the reproducible-article process in the IEEE Robotics & Automation Magazine, it is now possible both to publish reproducible research results and to objectively compare the performance of different methods. This interactive and participatory workshop aims to provide the knowledge necessary to develop and report reproducible research and to enable objective comparison of performance. In this workshop we will focus on benchmarking methods, with special attention to the statistical significance of results, approaches to randomized experimental studies, and (ontologies for) the classification of intelligent and autonomous robotic systems. The basics of reproducible robotics methods will also be covered so that researchers not yet familiar with reproducible research can attend profitably. The workshop will encourage interaction by challenging the participants, in competing groups, to work through simple yet non-trivial case studies on benchmarking experiments. Attendees will be encouraged to submit their own case studies to the peer scrutiny of other participants, and the final discussion and analysis of the case studies will be an integral part of the learning process.

Website: http://www.robot.t.u-tokyo.ac.jp/TCPEBRAS_IROS2023/index.html

Organizers: Sehoon Oh, Jinoh Lee, Manuel Keppler, Nicholas Paine and Elliott J Rouse

Abstract: Emerging robotic systems hold the potential to transform nearly every aspect of daily life. However, while substantial contributions have been made in artificial intelligence, the physical abilities of most robotic systems remain limited; these systems often move slowly when performing precise motions and present challenges in scalability and adaptability across different uses. Flexible robots contain various elastic components that provide a host of potential benefits, including compliance, energy storage, and shock tolerance, making them suitable for a wide range of applications such as rehabilitation, exploration and rescue, and collaborative robotics. Despite this promise, the motion of elastic robots has been limited in speed and dynamism; solving the challenges of rapid, dynamic motion with flexible robotic systems requires a comprehensive approach. This workshop aims to bring together experts in the field of elastic robots to discuss the latest techniques and applications in the highly dynamic motion of flexible robots. Its goals are: 1) to provide a forum for researchers and practitioners to present and discuss advances in realizing dynamic motions on flexible/elastic robots; 2) to identify the challenges and opportunities in high-speed motions of flexible/elastic robots as an emerging research topic to be addressed; and 3) to promote collaboration and knowledge sharing among researchers and practitioners in the field.

Website: https://sites.google.com/view/flexible-robots-iros23-ws

Organizers: Fabian Otto, Nicolas Schreiber, Ning Gao, Vien Ngo, Danny Driess, Gerhard Neumann, Clemens Eppner, Georgia Chalvatzaki

Abstract: The field of robotics is constantly evolving and demands that robots operate effectively in complex, 3D environments. A critical requirement for this is the ability of robots to learn policies for tasks such as grasping, manipulation, motion planning, and more. In recent years, reinforcement learning has emerged as a promising approach for policy learning in 3D geometric spaces. Specifically, researchers have focused on the use of point clouds, neural representations of geometry (NeRFs, neural SDFs, or occupancy networks), occlusion maps, and other techniques to achieve this. The persistent challenge in integrating geometric representation in policy learning lies in effectively bridging the gap between high-dimensional and low-dimensional representations of complex spatial environments. To encourage knowledge sharing and foster collaborations, this workshop aims to bring together researchers and practitioners working in this area to discuss the latest developments and identify challenges. Through this collaborative effort, we hope to further advance the field of robotics and reinforce the importance of policy learning in geometric spaces.

Website: https://sites.google.com/view/iros23-policy-learning

Organizers: Tianwei Zhang, Emel Demircan, Yan Gu, Yuquan Wang

Abstract: Humanoid robots are increasingly expected to move with the same agility and speed as humans. However, traditional reactive control designs often lack a predictive horizon or rely on linear time-invariant models, which makes it difficult to guarantee the existence or uniqueness of a feasible solution. As a result, the current state of the art in predictive control is often limited to suboptimal motions, such as fixed angular momentum trajectories, fixed center of mass height, or coplanar foot contacts. This workshop aims to address these challenges by bringing together experts with diverse backgrounds from academia and industry to share the latest control design and software tools spanning optimization-based control, hybrid control, and planning. Topics will include, but are not limited to, high-order differential dynamics, centroidal dynamics approximation, angular momentum tracking, floating-base state estimation, balance control on uneven terrain, whole-body model predictive control, human motion understanding and imitation, control design on open-source platforms, and emerging data-driven approaches. Attendees will have the opportunity to engage in face-to-face discussions with both junior and senior scientists, gain hands-on experience with the latest control software, and network with researchers from the United States, Europe, and Asia. We will also live-cast the workshop on YouTube and disseminate recorded presentations through our website. Beyond scientific contributions in humanoid robotics, this workshop has a significant societal impact. Real-time locomotion control is essential for developing future assistive devices, such as exoskeletons or prostheses. Our U.S.-based organizers will host an outreach event for local college students during the proposed workshop. In summary, this workshop will highlight the latest key theories, techniques, and insights for the development of next-generation humanoid robot systems with agile and robust locomotion capabilities, while promoting idea exchange and community engagement.

Website: https://iros-2023-humanoid.github.io/

Organizers: Mohsen Kaboli, Vincent Hayward, Anirvan Dutta, Henrik Jörntell, Etienne Burdet

Abstract: Haptic intelligence, or the sense of touch, enables humans to interact with their environment and is crucial to manipulation challenges in everyday life. It enables grasping, manipulation, learning, and decision-making based on information from mechanoreceptors distributed in muscles and over the skin, as well as on the interaction dynamics. Humans use haptic exploration to interact with their environment, e.g. to recognise an object’s shape and mechanical properties. Haptic sensing is a capability that robotic systems need to acquire for safely and efficiently interacting with their environment and with humans. It is essential for a variety of tasks in industrial applications, consumer services, and other highly dynamic environments such as assistance and care for the elderly, housekeeping, etc. However, it is still challenging for robots to carry out haptic exploration smoothly and recognise objects efficiently, especially in an unstructured environment with soft materials and different textures, inherent sensor and motor noise, and delays in sensory signal processing. Traditionally, robotics has tended to treat haptics like vision, and the dominant aspect of haptics, mechanical interaction, has often been neglected. Mechanical interaction conditions haptic perception and greatly complicates the design of sensing techniques that are fundamentally robust to sensing conditions and resilient to wear and abrasion. Therefore, this year we will focus on the neuroscience, materials, and signal-processing aspects of haptics, taking the mechanical interaction dynamics into account. We have assembled a team of experts in all these fields contributing to cutting-edge research in haptics both in humans and for robots. The workshop will enable an exchange of ideas between these experts and the broader field, fostering fresh ideas through interdisciplinary discussions while paving the way for future collaborations.

Website: www.robotact.de/robotac-2023

Organizers: Zhenshan Bing, Fan Wu, Jihong Zhu, Fares Abu-Dakka, Korbinian Nottensteiner, Dawn Tilbury, Albrecht Schmidt, Birgit Vogel-Heuser

Abstract: The future of manufacturing requires factories to be flexible and reconfigurable to provide on-demand production. Various technologies have been identified as keys to realizing this goal, such as the IoT, digital twins, and AI-enabled robotics and automation. The workshop aims to provide a comprehensive understanding of the latest advancements in robotics and AI technologies in the context of future factories by bringing together researchers from different backgrounds to share their expertise and experience in implementing innovative solutions to challenging problems. It seeks to identify challenging problems in robotics and AI that are imperative for achieving flexible manufacturing and production as a service. Participants will be encouraged to think critically, explore new opportunities, and offer new prospects in various research directions to address these critical questions, while also considering their impact on society, such as job displacement, privacy, and inequality. They will also be given hands-on experience with AI tools and technologies, such as machine learning algorithms and robotic process automation.

Website: https://sites.google.com/view/robot-ai-future-factory/

Organizers: Ebubekir Avci, Eric Diller, Pietro Valdastri

Abstract: Robotic systems for treatment and diagnosis of the gastrointestinal tract have attracted increasing attention in the last two decades. Miniaturization of mechanical and electronic components has enabled researchers to develop functional capsules with robotic capabilities, including locomotion, delivery, and collection of samples. Numerous applications, such as drug delivery, tissue biopsy, and detection of gut-related diseases, have been realised in vivo. In addition to further development of novel platforms, this research area is now at the point where it faces the challenges of transforming exceptional research prototypes into robust and marketable products. Researchers must call on a number of technical domains to address the challenges in this field, including locomotion, localisation, actuation, sensing, manufacturing, and instrumentation. This workshop aims to bring researchers together from across the international community to present an overview of the application domains of gut-related robotics and the challenges that should be addressed for realization of these systems. Presentations by both recognized research teams (academia and industry) and highly promising early-career researchers will help advance state-of-the-art scientific results toward marketable products.

Website: https://seat-research.massey.ac.nz/iros2023ws/

Organizers: Jörn Syrbe, Michael Beetz, Petra Wenzl, Karinne Ramirez-Amaro, David Vernon, Leonie Dziomba

Abstract: The implementation and design of cognitive robot systems is a challenging process that requires an overview of the physics of robots and their environments, sensors and sensor data, perception, probabilistic state estimation, knowledge representation and reasoning, robot learning, task and motion control, and cognitive architectures. It takes just as much effort to consider these issues when designing robots as it does to teach the entirety of robotics. How can we establish and foster the capabilities to understand the interrelation of constituent system parts and how systems work over time and within the context of larger systems (systems thinking); decomposition, pattern recognition, abstraction, and algorithm design (computational thinking); strategies, processes, and abstraction (mathematical thinking); the ability to make reasoned decisions (decision making); awareness of how technology impacts society (sustainability); and the intrinsic motivation and willingness to solve the challenges of robotics (motivation)? In this workshop, we address this very challenge by systematizing best practices from experienced teachers and researchers. How is it possible to integrate the needs of all stakeholders? What are the perspectives of students, educators, and researchers, as well as the state of the art in cognitive robotics? All of these considerations should also include the demands of society on the subject of cognitive robotics.

Website: https://ease-crc.org/teaching-cognitive-robotics/

Organizers: Rajkumar Muthusamy, Tarek Taha, Paolo Dario, Maria Pia Fanti

Abstract: Last-mile delivery, which focuses on delivering goods to customers, is expensive, challenging, and contributes significantly to global carbon emissions. The recent pandemic and the expansion of e-commerce have accelerated the adoption of last-mile ecosystems, including micro-fulfilment, dark stores, and a variety of ground and aerial robots. Moreover, the norms of last-mile delivery are changing with evolving consumer needs and industry requirements for sustainability and human-centricity. The next wave of autonomous last-mile robots will be human-centric and energy-efficient, combining advanced technical capabilities with collaborative aspects. This workshop aims to explore and discuss existing and envisioned robot-assisted last-mile delivery models, systems, and technologies, identifying key challenges, research gaps, and real-world problems. The key questions to be addressed include: 1) the lessons learned from past last-mile delivery robot projects; 2) the autonomous features of last-mile robots that address human acceptance and include collaborative elements to enhance customer experience; 3) the opportunities and challenges in using heterogeneous fleets for rapid and frequent delivery; 4) how to advance and mature the technologies utilized in last-mile robots; and 5) how existing regulations and standards influence the technological development and deployment of these systems to ensure that they are safe, environmentally friendly, and sustainably deployed. The workshop will encourage poster/paper submissions exploring innovative, creative, and futuristic solutions for last-mile delivery using intelligent robots. It aims to build a research community that engages in and contributes to realising next-gen last-mile robots and sustainable delivery models, bridging the gap between researchers, logistics operators, and entrepreneurs.

Website: https://www.lastmilerobotics.dfl.ae/

Organizers: H.J. Terry Suh, Tao Pang, Xianyi Cheng, Alp Aydinoglu, Simon Le Cleac’h, Brian Pancher, Russ Tedrake

Abstract: Contact-rich manipulation remains a grand challenge for robotics. By enabling robots to automatically reason through a rich set of contacts with objects and the surrounding environment, dexterous manipulation capabilities can widely broaden the spectrum of physical tasks that can be automated. Traditionally, model-based methods have tackled contact-rich manipulation by utilizing and studying the structure of contact models to come up with planning and control algorithms. In contrast, recent learning-based methods have achieved new capabilities by utilizing massive amounts of data, yet these methods do not take advantage of model structure within contact-rich manipulation. The current paradigm shift towards learning-based methods presents many opportunities for the model-based manipulation community to combine the effectiveness of learning-based methods with the efficiency of models. The objective of this workshop is to bring together researchers in the model-based manipulation community to present their work and review state-of-the-art methods in the field. In conjunction, the workshop aims to facilitate discussions on the future of the field and ask several important questions, such as: how can we synthesize model-based approaches and recent learning approaches into a coherent whole? Do we believe that there is structure we can leverage from the models that we use to better inform planning and control algorithms?

Website: https://sites.google.com/view/iros2023-contactrich

Organizers: Nicholas R. Gans, William J. Beksi, Katherine A. Skinner

Abstract: The goal of this workshop is to engage experts and researchers on the synthesis of photo-realistic images and virtual environments, particularly in the form of public datasets, software tools, or infrastructures, for robotics research. Such public datasets, software tools, and infrastructures will lower entry barriers by enabling researchers that lack expensive hardware (e.g., complex camera systems, robots, autonomous vehicles, etc.) to simulate and create datasets representative of such hardware and scenarios. PIES-Rob will be a full-day workshop. It will feature a mix of presentations, open panel discussions, and an invited poster session. There will be eight invited speakers and a keynote speaker to discuss their related research, thoughts, and experiences on the needs and directions of creating public datasets, software tools, and infrastructures for synthesizing photorealistic images and environments.

Website: https://sites.google.com/view/pies-rob2023/pies

Sunday, October 1st, Half-Day Morning Workshops and Tutorials (8:30 am to 12:30 pm)

Organizers: Shingo Shimoda, Juan C. Moreno, Diego Torricelli, Jesus Tornero, Fady Alnajjar, Qi An, Sayako Ueda, Zen Koh, Yasuhisa Hirata, Jose L. Pons

Abstract: Recent advances in robotics and AI have opened up numerous possibilities for improving the usability of various devices and discovering new phenomena across many fields. One of the most important fields that can benefit from these advancements is medicine. In the medical domain, these applications go beyond surgical robots and rehabilitation devices that enhance or replace human capabilities. They include addressing issues beyond traditional medical approaches, such as identifying disease sources, visualizing the recovery process, developing new therapeutic treatments, and providing appropriate movement support, based on detailed measurement and modeling of bio-signals from humans. Meeting these unmet medical needs requires closer collaboration between the engineering and medical fields. One approach is the establishment of “AI Labs” attached to hospitals, which utilize robotics technology and AI to measure and support patients in actual medical settings. These AI Labs are generating new knowledge for treating and rehabilitating various diseases. However, each AI Lab operates independently, and the knowledge and methods obtained are not standardized across organizations. To accelerate research in this field, it is essential to share approaches for establishing novel standards. This workshop aims to establish a network of AI Labs in hospitals across five countries. Through this network, we believe that engineers and medical doctors, not only from these labs but also other researchers interested in this field, can easily collaborate and discuss possibilities for enhancing medical research and practice.

Website: https://sites.google.com/view/iros-workshop-worldwide/

Sunday, October 1st, Half-Day Afternoon Workshops and Tutorials (1:30 pm to 5:30 pm)

Organizers: Katie Skinner, Ram Vasudevan, Manikandasriram Srinivasan Ramanagopal, Radhika Ravi, Austin Buchan, Spencer Carmichael, Gaurav Pandey, Alexandra (Alexa) Carlson

Abstract: Current systems for autonomous vehicle (AV) perception rely on widely used sensors such as cameras and LiDAR systems. However, AV perception systems based on conventional sensors suffer in various lighting and adverse weather conditions. For example, conventional visual frame-based cameras are widely used due to their low cost and high spatial resolution. Yet, AV perception systems based on conventional cameras are challenged by low-light conditions, fog, rain, and snow, and suffer from issues including motion blur, limited frame rates, and low dynamic range. Novel sensors, such as event and thermal cameras, offer alternatives that address these limitations. Event cameras feature high temporal resolution, effectively no motion blur, high dynamic range, low power requirements, and low latency. These qualities make event cameras promising under adverse lighting, fast motion, and resource constraints. Thermal cameras sensitive to the Long Wave Infrared spectrum can operate in the absence of visible light and are robust to changes in illumination and visual obscurants such as fog, dust, and smoke. Alongside these advantages are challenges: thermal images have lower contrast and greater motion blur, while event cameras represent a new paradigm that is not directly compatible with existing algorithms. Advancing AV capabilities to leverage novel sensors will require further research and development across academia and industry. This workshop seeks to engage the research community in discussion of exciting new research topics in novel sensors for AV perception, and will feature the release of a unique dataset for AV perception with novel sensors as well as a student poster session.

Website: https://sites.google.com/umich.edu/novelsensors2023

Thursday, October 5th, Full-Day Workshops and Tutorials (8:30 am to 5:30 pm)

Organizers: Allan Wang, Nathan Tsoi, Benjamin Stoler, Phani Teja Singamaneni, Alhanof Alolyan, Rohan Chandra, Marynel Vázquez, Jean Oh, He Wang, Rachid Alami, Aaron Steinfeld

Abstract: With robots beginning to share spaces with humans in uncontrolled environments, social robot navigation is an increasingly relevant area of robotics research. However, robot navigation around humans still presents many challenges. Recent works have shown that it is insufficient for robots to simply consider humans as “dynamic obstacles”. How can a robot navigate safely and efficiently among crowds? How can a robot capture environmental context and obey social rules? How should we measure and benchmark a robot’s navigation performance? We propose to bring together experts and community members from diverse backgrounds to discuss advances in social navigation and to unify efforts around evaluation methods. This workshop is a sequel to our successful workshop at ICRA’22. The day-long event will feature invited expert speakers, poster presentations, a moderated panel discussion, and benchmark challenges. We will announce the winner of the SEANavbench challenge. SEANavbench is our open-source benchmarking suite, which enables training and evaluating social navigation systems in a high-visual-fidelity environment. We will also announce two new social-navigation benchmark tracks for 1) multi-agent planning and 2) trajectory forecasting.

Website: https://seanavbench23.pages.dev/

Organizers: Nikhil Deshpande, Helen Oleynikova, Mitsuhiro Kamezaki, S. Farokh Atashzar, Daniel Szafir, Fumihiro Kato, Jeffrey Delmerico

Abstract: Extended reality (XR) is a “catch-all” term for technologies that extend the representation of the real world to humans and enhance their interaction with it through (a) Augmented Reality, enhancing a user’s perception of the real world through spatially superimposed virtual information; (b) Virtual Reality, immersing the user in a 3D virtual world that allows presence and interaction with virtual information; and (c) Mixed Reality, combining the virtual and real worlds with spatial superimposition for perception and interaction. These topics intersect with robotics in important ways: (i) XR essentially requires the same spatial perception of the environment as a robotic agent interacting in a real-world environment; (ii) XR interfaces and robots utilize prior knowledge of how humans perceive, represent, and move through space for tasks, e.g., semantic scene understanding; and (iii) XR is a simulation environment for realistic physics-based interaction between elements, which can be exploited for robot learning. XR and robotics draw on contributions from diverse fields, e.g., robot mechanisms, control, haptics, simulations and digital twins, HCI/HRI, AI/ML/DL, SLAM, neural interfaces, natural language and gesture understanding, user interface design, and cognitive psychology. This workshop brings together researchers from the XR and robotics worlds, with the goal of discussing how XR helps address challenges in robotics through the theoretical and experimental bases of perception, cognition, and interaction behavior in XR technologies, including environment design, data representation (virtual models, interaction physics, rendering, video, point clouds, 3D reconstruction, ambient sensors), sensory substitution (tactile, auditory), natural gesture and intention understanding, real-time data transmission, etc.

Website: https://sites.google.com/view/xr-robotics-iros2023/

Organizers: Luka Peternel, Wansoo Kim, Heni Ben Amor, Arash Ajoudani, Eiichi Yoshida

Abstract: Collaborative and wearable robots in workspaces are potentially powerful tools for the field of ergonomics, paving the way to improved work efficiency and safety. These robots can anticipate and mitigate the physical risk factors related to work-related musculoskeletal disorders, for example, by incorporating human models into robot control to make the robot aware of the human co-worker’s ergonomic status and to actively reconfigure the work process. To tackle these major challenges and opportunities, we previously organized a series of successful workshops at IROS and ICRA. So far, our focus has been primarily on classical human physical and cognitive models and their associated pros and cons. Our previous workshop talks and discussions spotlighted the reliability of such models but raised several open questions about their personalization. Meanwhile, the robotics community has made substantial progress on machine learning over the past decade, which offers great potential for model personalisation and for enhancing the adaptiveness of models and robot behaviours to facilitate ergonomics. Hence, the objectives of the proposed workshop are i) to review the progress achieved since our last workshop, ii) to discuss how robot learning methods can help with ergonomic human-robot collaboration, and iii) to form guidelines for future research in this direction. This agenda requires experts from various research fields and interdisciplinary discussions. Thus, we have assembled a diverse yet synergistic set of organizers and speakers, who are leading experts in their respective areas highly relevant to the workshop topic.

Website: http://harco.hanyang.ac.kr/2023/03/03/IROS2023_workshop.html

Organizers: Julia Starke, Noémie Jaquier, Leimin Tian, Yasuhisa Hirata, Dana Kulic, Serena Ivaldi, Tamim Asfour

Abstract: The development of assistive robotic technologies is crucial to tackling upcoming critical societal challenges, including the aging society and the increasing intermixing of work and leisure, among others. Assistive robots may be of particular help in caregiving, in completing household chores, and in supporting and augmenting humans. In particular, the support of assistive robots may enable elderly and impaired people to lead an autonomous and self-determined life. To provide personalized assistance, future assistive robots must be able to operate and evolve around humans on a daily basis and in dynamic real-world environments, to efficiently and continually learn new tasks from humans and from interaction with the world, to coherently extrapolate their knowledge to solve previously unseen problems, and to quickly adapt to changes. This workshop is intended as a discussion on the development of assistive robots — including humanoid robots, exoskeletons, and other assistive devices — to support people in their daily life. We aim to bring together researchers from various robotics areas to explore the core challenges, ranging from building assistive robots tailored to human environments to incrementally learning safe robot skills based on experience and interaction. Finally, this workshop aims to build bridges between the robotics community and researchers in other disciplines, such as the social and sport sciences, which are crucial to complement core robotics research and strive towards versatile and holistic assistive robotic systems.

Website: https://sites.google.com/view/iros2023-assistive-robotics/

Organizers: Tabitha Edith Lee, Zizhao Wang, Sarvesh Patil, Caleb Chuck, Jiaheng Hu, Yuke Zhu, Oliver Kroemer

Abstract: The grand vision of robotics for general-purpose machines that can act, perceive, and learn in real-world environments must be met with intelligent capabilities of equal measure. To date, tremendous advances in robotic artificial intelligence have been achieved primarily through correlation-based methods. Yet, correlation is not causation: correlation-based methods can lack robust generalization, are brittle to distribution shifts, and can learn incorrect, spurious relationships in observed data. However, a hallmark of human intelligence — reasoning about cause-and-effect — can provide clues towards the next generation of embodied intelligence for robots. Indeed, the advantages of adopting the principles of causality have been witnessed to date in fields such as biomedical science, economics, and genomics. Recently, it has been argued that the machine learning community should adopt the principles of causal inference, towards causal learning of representations. Analogously, this workshop argues that the field of robotics stands to gain by integrating causality, moving robotics towards human-like embodied intelligence. Can robots learn and leverage the causal structure of problems? Can interventions and counterfactuals provide greater robot intelligence? And, ultimately, what next-generation capabilities can be unlocked through robots that can answer the question of “Why”? To this end, this workshop brings together experts in both causality and robotics to present recent advancements at the intersection of these areas and discuss the implications of causality-based methods for robotics. Academic and industry researchers will discuss the fundamental challenges and opportunities that arise in the development and deployment of robots and autonomous agents that leverage these principles, advancing robotics beyond correlations.

Website: https://sites.google.com/view/iros23-causal-robots

Organizers: Giovanni Pittiglio, Yash Chitalia, Animesh Garg, Xiaoguang Dong

Abstract: The application of robotic solutions to healthcare has soared in the past two decades, spanning from rigid to flexible and fully soft robots that can aid in diagnosis and therapy inside the human body. Control, design, sensing, and fabrication of these robots are based on understanding their behavior and interaction with the anatomy; this is a major challenge, and general approaches are hard to define. In addressing this fundamental challenge, researchers have considered several techniques, ranging from model-based to data-driven approaches. The former are based on mechanically accurate modelling, while the latter focus on information extracted from data and are devoted to learning. This workshop aims to bring together researchers from the areas of machine learning and medical robotics to explore the opportunities and interplay between these two sub-domains. The goal is to understand which areas to explore and how researchers can combine these main approaches, towards improving the current practice in the modeling and control of new robot architectures with inherent uncertainties (e.g. soft and continuum robots and wire-actuated robots).

Website: https://medrob-workshop.github.io/

Organizers: Steven Ceron, Kirstin Petersen, Yufeng Chen, Daniela Rus

Abstract: Natural swarms consistently demonstrate that a collective composed of simple constituents can exhibit functions that far exceed the capabilities of any single agent. The social amoeba, for example, is composed of thousands of cells that use local oscillatory chemical signaling to collectively reconfigure their morphology and functions in response to nutrients in the surrounding environment. This species embodies many features that swarm roboticists envision in scalable, self-reconfigurable robot collectives at all length scales: local-to-global behaviors, low-level communication, plasticity, and simple constituents. This vision drives our workshop’s goal: to bring together roboticists working on micron-scale and macro-scale systems with a shared interest in swarming systems. We will bring together eminent experts on swarm robotics who demonstrate that, regardless of the length scale, robot morphology, physical interactions among agents, and low-level coordination mechanisms can be exploited to enable collective behaviors for diverse applications. Our workshop will feature keynote presentations by experts and rising stars on recent research on swarm robotics ranging from simulation to physical realization studies; directed group discussions between researchers originating from diverse fields related to robot swarm fabrication, actuation, sensing, and coordination; and poster presentation sessions with video demonstrations of robot swarms. We aim to (1) inspire new robot collective systems that borrow methods of interaction between agents from the opposite end of the length scale, (2) introduce researchers to diverse sets of local interaction mechanisms that can enable complex and functional collective behaviors at various length scales, and (3) facilitate discussion between established researchers and rising stars.

Website: https://swarmsatallscales.weebly.com

Organizers: Michael Beetz, Jörn Syrbe, Arthur Niedzwiecki, Giang Nguyen, Sascha Jongebloed

Abstract: Cognitive robotics is a driving force for the development of AI systems that can master complex tasks autonomously. These knowledge-based robotic systems are able to interpret their environment, allowing them to interpret context and to understand vaguely formulated instructions. The skills to understand and develop such systems can best be taught using the robot control system as a structure that is then deconstructed into its components, enabling students to understand all relevant components as well as how they interact with each other. One of the milestones of the European Robotics and AI Network (euROBIN) is for a robot to receive a parcel from a human, open it, and empty it. Based on this scenario, we present a best-practice tutorial on how to implement a solution for this task that integrates all necessary software components within the framework of the robot control process. In the context of this tutorial, we focus on knowledge representation and reasoning, planning, and the simulation framework (MuJoCo), bringing these components together into a learning environment that – in the extended version – introduces the whole control process of cognitive robotics. The learning environment follows an immersive approach, using a physics-based simulation environment for visualization purposes that helps to illustrate the concepts taught in the tutorial. Using Jupyter Notebooks in a Docker environment, our learning environment is easily accessible without having to install different software packages and is independent of the learners’ technical setup. The tutorial is a shortened version of the full one-week tutorial presented at the EASE Fall School 2023 (which is a consistent further development of the hands-on courses of the Fall Schools, which have been held annually since 2018) and aims to introduce the teaching approach as a best-practice example.

Website: https://ease-crc.org/teaching-cognition-enabled-cognitive-robotics-in-an-integrated-learning-environment/

Organizers: Signe Redfield, Dejanira Araiza Illan, Emily C. Collins, Michael Fisher, Kevin Leahy, Joanna Olszewska, Nico Hochgeschwender, John S. Baras, Angelo Ferraro, Cristian Vasile, Javier Ibañez-Guzmán

Abstract: “It works really well!” is the expected conclusion of academic publications, technical reports, and project outcomes that seek to demonstrate that a designed system (e.g., a robot, autonomous vehicle, or controller) is functionally sound, safe, and/or trustworthy. In this workshop, we aim to dissect and examine under a critical eye each component of the phrase: the “It”, namely how the system under test has been defined for verification and validation purposes, and how this meets the needs of robotics in sectors such as autonomous vehicles and medical robots; the “works”, namely the definition of the system’s tasks or behaviours, including systems that might learn and systems deployed at industrial scale; the “really well”, namely the formulation of metrics, evaluation tools, reproducibility and replicability frameworks, and extensibility to an “industrial grade”; and, furthermore, the ability to openly report “failures”. The workshop will bring together experts in robotics, verification and validation, end-user experience, regulations and standards, formal methods, assurance and certification, and industries including healthcare, transportation, aviation, military, and automotive. We will have three sessions with invited talks followed by Q&A, and a fourth session with an interactive panel. We aim to open new avenues for future interdisciplinary collaboration among participants and expect that this will significantly advance the field of verification of autonomous systems.

Website: https://robotistry.org/vaswg/IROS23_Workshop/

Organizers: Raffaello Camoriano, Cristina Piazza, Giuseppe Averta, Lorenzo Natale, Carlo Masone

Abstract: Building robots capable of dexterous interaction with objects to carry out fine manipulation tasks has always been a grand challenge in robotics. The non-smooth, brittle nature of manipulator-object mechanics, together with perceptual uncertainty, easily violates the assumptions of early planning and control methods. Furthermore, accurate physical modeling of complex or non-rigid mechanical systems requires large amounts of computation, which is incompatible with real-time control. Such challenges led researchers to develop a wide range of approaches, from adaptive control tailored to the (potentially changing) properties of the object at hand, to advanced perception to tackle measurement uncertainty. Machine learning also contributed by providing actionable representations of complex geometries and visual appearance, and by encoding hard-to-model expert demonstrations to reduce the cost of trial-and-error. In turn, these advances informed the development of novel robot control methods enabling more robust and dexterous skills. At the same time, the employment of mechanical models proved effective for enforcing structural constraints in robot control systems (including learning-based ones), thus improving safety and guiding exploration. However, many open challenges must still be addressed to achieve long-horizon robotic manipulation and to sidestep the computational burden of accurately simulating contact-rich scenarios. The ambition of this workshop is to provide a comprehensive overview of the broad and scattered state of the art in robot manipulation and grasping, spanning model-based and learning-based approaches. Talks and interactive sessions will enable a deeper understanding of current approaches in different use cases, while stimulating the development of new methods.

Website: https://sites.google.com/view/learning-meets-models-iros2023

Organizers: Giuseppe Loianno, Davide Scaramuzza

Abstract: Autonomous aerial and ground robots have the potential to assist humans in complex, time-sensitive, and dangerous tasks such as search and rescue or monitoring in indoor and outdoor environments. This requires single and multiple robots to autonomously navigate in a coordinated, agile, and collaborative manner in uncertain, dynamic, cluttered, and extreme environments. The focus of this workshop is to study and analyze the role and benefits of data-driven techniques in the sense-and-act problem for achieving robot super autonomy. Robot super autonomy refers to unmanned, agile, resilient, and collaborative machines that can make decisions without human intervention and can outperform current autonomous vehicles in uncertain, complex, dynamic, extreme, and cluttered environments. In this context, learning-based solutions are becoming essential for complementing or replacing physics-based techniques at the control, perception, and planning levels to boost online navigation performance, robots’ resilience, and algorithms’ scalability in terms of the number and types of robots. This workshop will bring together researchers, industry experts, and practitioners from the aerial, ground, space, and legged robotics communities. The audience will be heterogeneous in terms of expertise and interests, involving people from academia, industry, and government agencies, as well as practitioners focusing on the broad areas of robotics, AI, and machine learning for autonomous robots working across air, space, ground, and off-terrain domains. The workshop will be extremely relevant for researchers to foster ideas in the areas of robotics, AI, and related disciplines, learn about the challenges and limitations of current approaches for robot super autonomy, and explore potential solutions. Finally, this event includes several activities tailored for early-career researchers, giving them an opportunity to learn about the latest advances in autonomous robot technology.

Website: https://wp.nyu.edu/workshopiros2023superautonomy/

Organizers: Soheil Gholami, Kunpeng Yao, James Hermus, Etienne Burdet, Aude Billard

Abstract: The ability of humans to effectively coordinate their limbs in diverse situations, such as manipulating objects, dancing, or playing the guitar, has captivated the attention of researchers from a variety of fields, from neuroscience to robotics. Neuroscience researchers often use robotics techniques to model human motor control. Conversely, roboticists strive to use new control and learning algorithms to endow robots with human-like abilities such as regulating many degrees of freedom, behaving robustly under disturbances, and managing complex contact situations. The interplay of human motor control and robotics has been fruitful, with achievements in inverse optimal control, learning-based methods, and variable impedance control. These interdisciplinary studies present new challenges that require further discussion and exploration. This full-day workshop will bring together young and senior researchers at the forefront of human motor control and robotics to discuss the trends and challenges in these fields. Moderated live Q&A panel discussions will give attendees an opportunity to discuss open questions and challenges facing the fields of behavioral neuroscience and robotics in future research, and to examine how state-of-the-art studies apply these approaches in both fields.

Website: https://www.epfl.ch/labs/lasa/events/multi-limb-coordination-in-human-neuroscience-and-robotics/

Organizers: Konstantinos Karydis, Stefano Carpin, Stavros Vougioukas, Cyrill Stachniss

Abstract: Increasing population, decreasing arable land, climate change, and a declining skilled workforce pose unprecedented challenges to satisfying the growing global demand for food in a sustainable manner. It is thus becoming ever more important to increase, or at the very least maintain, current productivity while using fewer inputs such as water and agrichemicals. Precision agriculture (PA) aims to address this issue. In recent years there has been a boom in agricultural robotics and allied technology (e.g., vegetation-specific sensors) employed within PA routines. Such efforts have so far been undertaken in a rather decoupled manner. However, several research labs (in both academic and private-sector settings) around the world are increasingly co-designing actuation and perception systems to help introduce the agricultural robots of the future. This workshop aims to highlight, and in turn help push forward, the co-design of actuation and perception for future agricultural robotics. Its main objective is to bring together researchers who focus on robotics and sensor technologies, from both academia and industry, creating stronger links between the two. Academic researchers will have the opportunity to learn more about specific technologies that are already field-ready, while industry partners will gather more detailed information about key current challenges holding back agricultural robotics research that may not be possible to resolve in a scalable manner within the limits of academic research.

Website: https://sites.google.com/view/agrobotics

Organizers: Nikolay Atanasov, Luca Carlone, Kevin Doherty, Kaveh Fathian, Golnaz Habibi, Jonathan How, John Leonard, Carlos Nieto, Hasan Poonawala, David Rosen, Sebastian Scherer, Chen Wang, Shibo Zhao

Abstract: This workshop aims to present the latest advancements and frontier techniques in computer vision and machine learning that are expected to have a significant impact on robotic perception and mapping and to set the direction of research in the next 5-10 years. Through a series of invited and contributed talks by renowned academic leaders and researchers, the event will discuss frontier technologies for robotic perception and mapping, with particular focus on addressing existing computer vision challenges such as dealing with dynamic environments and non-rigid objects and the trade-off between scalability (capturing large environments over long periods of operation without running out of memory) and expressivity (capturing precise details about environment characteristics, including geometry, semantics, dynamics, and topology), as well as addressing machine learning challenges such as reducing training and inference time of machine learning models, fitting large models on small robotic platforms, trade-offs between pre-training and fine-tuning environment models, and ensuring generalization and robustness. To encourage interaction among participants, the workshop will feature panel discussions, posters, and spotlight talks. The event will adopt a hybrid format with both in-person and remote participants. All talks and accepted contributions will be published on the workshop’s webpage to expand its reach and impact. This workshop is a follow-up to the well-received ICRA 2022 workshop on “Robotic Perception and Mapping: Emerging Techniques,” which had the largest attendance of all workshops, with over 1,000 participants. This follow-up workshop will provide a new perspective by inviting a new set of speakers and discussing frontier research on computer vision and machine learning (instead of mapping, which was the focus of the previous workshop). To differentiate the workshop from traditional computer vision and machine learning conferences, the talks are particularly focused on techniques with direct application in robotic perception and autonomy.

Website: https://sites.google.com/view/ropem/

Organizers: M. Ani Hsieh, Herbert Tanner, Kleio Baxevani, Victoria Edwards, Thales Costa Silva

Abstract: The goal of this workshop is to bring together ocean science experts and roboticists across academia, government, and industry. The ocean is one of the Earth’s most valuable natural resources and a predominant driver of climate and weather patterns across the globe. Academic robotics research efforts have mostly focused on environmental monitoring, exploration, and information acquisition in support of the ocean sciences. However, the growing need for climate adaptation, mitigation, and carbon emission reduction has increased the demand for commercial marine activities such as offshore wind exploitation and fisheries. The Blue Economy, a term used to refer to all economic activity along the coasts and out to deep water, encompasses a wide range of industries and disciplines whose focus and interests are diverse and lack a well-defined regulatory framework. This creates a unique set of challenges when setting technological and scientific goals and specifications for prospective science and technology innovators and developers. This workshop will bring roboticists and practitioners in ocean-related fields to a common forum where both can recognize each other’s perspectives on challenges and opportunities for sustainable and climate-friendly growth in the Blue Economy. The workshop will enable the development of new synergies and opportunities for collaboration, and the sharing of challenges in current approaches. The goal is to unlock the potential for exponential growth of the Blue Economy. The workshop will feature presentations by invited speakers, a framed panel discussion, and two interactive sessions. During the interactive sessions, students and companies will present cutting-edge research and development and have networking opportunities.

Website: https://sites.google.com/udel.edu/iros2023-robotsblueconworkshop/home

Organizers: Tommaso Lenzi, Bobby Gregg, Elliott Rouse

Abstract: Ambulation with conventional prostheses is slower, less stable, and less efficient than able-bodied ambulation, causing reduced mobility and quality of life. Robotic powered prostheses have the potential to close the gap between the performance of existing lower-limb prostheses and human legs. In contrast to conventional passive prostheses, powered prostheses can provide biomechanically accurate kinetics and kinematics, including during activities that require energy injection. Powered prostheses have evolved from devices tethered to a power supply or computer to devices with onboard electronics and batteries, and they are now able to assist during walking, ambulation on stairs and ramps, and sit-stand transitions. However, there are significant challenges that the field needs to address for powered prostheses to fully realize their potential: powered prostheses are often heavy, bulky, noisy, and fragile compared to conventional prostheses, and even the most advanced controllers cannot match the agility of the human body. This workshop will focus on the state of the art and open challenges in powered prosthetics. Top researchers in the field will present their current and future approaches to designing and controlling powered prostheses. Live demonstrations of powered prostheses will provide hands-on experience with leading technology, and poster sessions and panel discussions will provide further opportunities for participants to interact. This workshop will allow both established researchers and those new to the field of robotic prostheses to experience where the field is today and what challenges lie ahead.

Website: https://belab.mech.utah.edu/iros2023/

Organizers: Neil T. Dantam, Khen Elimelech, Federico Pecora, Lydia E. Kavraki

Abstract: Academic research into Task and Motion Planning (TAMP) has proceeded for well over a decade, producing key results in the analysis of, and algorithms for, TAMP scenarios. However, while certain aspects of TAMP research have been reduced to practice, the overall inroads thus far remain limited. This workshop aims to bring together academic and industry experts to discuss and reflect on the current state and direction of TAMP research and practice. Academic participants will offer the latest theoretical results and challenges in TAMP, while industry participants will present both successful applications and pragmatic obstacles to TAMP adoption. In particular, we aim to identify and discuss the unsolved scientific challenges, and the necessary systems developments, required to broaden the impact of TAMP research. We hope that this workshop will stimulate insightful discussions and prioritization of efforts in TAMP research to address key practical needs.

Website: https://dyalab.mines.edu/2023/iros-workshop/

Organizers: Xiaonan Huang, William Johnson, Kun Wang, Shiyang Lu, Luyang Zhao, Joran Booth, Rebecca Kramer-Bottiglio, Devin Balkcom, Kostas E. Bekris

Abstract: Composed of rigid struts and compliant tendons, tensegrity robots boast a remarkable strength-to-weight ratio and demonstrate extraordinary shape morphability, stiffness tuning, and impact resistance. These favorable properties make tensegrity robots an attractive technology for the next generation of adaptive, multi-terrain robots. The unique advantages of tensegrity robots show promise for impact-resistant planetary rovers, pipe-climbing robots that can change their shape, lightweight aerial robots, adaptive underwater robots, and more. However, tensegrity robots’ compliance, coupled dynamics, and many degrees of freedom pose challenges in actuation, sensing, and control. No work to date has achieved the ultimate goal of demonstrating a fully autonomous, untethered tensegrity robot that is capable of navigating unstructured terrain and surviving mechanical impacts. To address these grand challenges, the field of tensegrity robotics needs more research into tools and technologies for automated system design, state estimation, environmental sensing, and autonomous navigation. This workshop aims to bring together not only researchers who study tensegrity robots but also experts in complementary domains, specifically modular robotics, control systems, and sensing and perception, who can provide insights into and solutions to open challenges in our field. In this workshop, we will showcase the state of the art of tensegrity robotics, discuss the grand challenges tensegrity robots face, attract and engage new participants in the field, and facilitate future collaborations.

Website: https://www.eng.yale.edu/faboratory/tensegrityworkshop

Organizers: Jim Pippine, Robert Griffin, Paul Oh, Taskin Padir

Abstract: In October 2012, the US Defense Advanced Research Projects Agency initiated a three-year Robotics Challenge (DRC) designed to move humanoid robots out of the lab and into the field to assist in roles such as disaster relief. The Challenge culminated in 2015 with over 20 teams from around the world competing in a live event. More than 10 years have passed since the start of the DRC, and other competitions, including the Avatar XPRIZE (2022) and the DARPA Subterranean Challenge (2021), have advanced the technologies, but a robust humanoid still does not exist. The workshop will review what was learned at those events by both the organizers and the participants and ask how we move forward as a community. The presentations and panels will discuss important lessons from prior work and highlight the key barriers to advancing humanoids. The intent is to identify areas that both researchers and sponsor agencies see as technical roadblocks and to build a framework for pursuing further research.

Website: https://goldenknighttechnologies.com/

Organizers: Andrew Spielberg, Andrea Censi, Emilio Frazzoli, Gioele Zardini

Abstract: Building robots is the closest humans have come to creating artificial life. Embodied with sensing, action, and reasoning capabilities, robots hold the immense promise of surpassing the abilities of the animal kingdom. However, it is unclear that human designers alone can emulate or exceed life forms that have evolved over billions of years, filling various ecological niches and taking diverse physical forms. This tutorial explores the emerging field of computational robot design: computer-aided or computer-driven workflows that co-design robots by integrating geometry, topology, actuation, sensing, materiality, sensorimotor control, proprioception, and task planning with higher-level reasoning. The tutorial provides participants with an introduction to robot co-design and aims to connect multiple communities to enable the development of composable models, algorithms, fabrication processes, and hardware for embodied intelligence. Intended to be accessible to all backgrounds and seniority levels, the tutorial covers a wide array of topics of interest to the robotics community, including soft robotics, robot learning, future mobility systems, and more, and combines these themes into a cohesive theory that will be put into practice during an interactive session. Participants will have the opportunity to explore dedicated demos and “learn by doing” through guided exercises.

Website: https://sites.google.com/view/iros2023codesign/home

Organizers: Maximilian Naumann, Maximilian Igl, Simon Suo, Thomas Gilles, Yiren Lu, Fabien Moutarde, Anca Dragan, Shimon Whiteson

Abstract: Simulation is a crucial tool for accelerating the development of Autonomous Driving (AD) algorithms. Realistic traffic agent models reduce the sim-to-real gap and have the potential to massively scale the evaluation of AD algorithms, contributing to both the development and the safe deployment of AD systems. Research in traffic agent modeling has recently made great advances, for example due to the switch to graph neural networks and to training through sequential decisions. Yet this research is scattered across different robotics and machine learning venues. This workshop will provide a platform to highlight recent advances and future directions towards realistic traffic agent models. Furthermore, it will provide insights from related fields such as prediction, motion planning, and AV safety. Through a mix of invited talks from both academia and industry, paper presentations, and a panel discussion, attendees will be able to catch up with the latest advances, promising directions, and the most pressing challenges. Attendees will also be able to network in this growing field of research and related areas.

Website: https://agents4ad.github.io/

Organizers: Tadahiro Taniguchi, Emre Ugur, Masahiro Suzuki, Dimitri Ognibene, Lorenzo Jamone, Yukie Nagai, Tatsuya Matsushima, Tetsunari Inamura

Abstract: This workshop will explore new frontiers in robotics, highlighting world models, predictive coding, probabilistic generative models, and the free energy principle. The ultimate goal of cognitive and developmental robotics is to create autonomous robots capable of actively exploring their environment, acquiring knowledge, and continuously learning skills. Crucially, to develop robots that learn through interactions with their environment, their learning processes should emulate human cognitive development and learning, which are based on engagement with the physical and social world. This workshop focuses on world models and predictive coding in cognitive robotics. Recently, world models have attracted significant interest in artificial intelligence, as cognitive systems learn these models to improve future sensory predictions and optimize their policies or controllers. In neuroscience, predictive coding posits that the brain constantly anticipates its inputs and adjusts its models to manage dynamics and control behavior in its environment. Both concepts may underlie the cognitive development of robots and humans capable of continuous or lifelong learning. The workshop aims to provide a platform for researchers and practitioners in cognitive robotics to exchange ideas and explore new avenues for the development of autonomous cognitive and developmental robots. This enriching experience will contribute to the growth and advancement of the field of cognitive robotics.

Website: https://world-model.emergent-symbol.systems/

Thursday-Friday, October 5-6, Two-Day Workshop (By-Invitation-Only)

Note: The workshop is by invitation only and does not require IROS conference registration. If you are interested in attending, please contact the organizers.

Organizers: Claire Le Goues, Sebastian Elbaum

Abstract: The progress in robot development and its impact in the last decade have been astounding. Yet, software engineering techniques and tools have not kept up with this revolution, and in many cases are hindering it. The workshop will bring together thought leaders from academia and industry, in robotics and software engineering, to identify key problems in software engineering for robotics that we should tackle in the next 5 years, and to coalesce a community around those problems. The workshop will produce a roadmap for the community, as well as generate new connections, collaborations, and synergy amongst diverse researchers interested in this problem domain.

Website: https://se4robotics.github.io/