Today, human intervention is the only effective course of action after a natural or artificial disaster. This is true both for relief operations, where search and rescue of survivors is the priority, and for subsequent activities, such as those devoted to building assessment. In these contexts, the use of robotic systems would be beneficial to drastically reduce operators’ risk exposure. However, the readiness level of robots still prevents their effective exploitation in relief operations, which are highly critical and characterized by severe time constraints. By contrast, current robotic technologies can be profitably applied in procedures like building assessment after an earthquake. To date, these operations are carried out by engineers and architects who inspect numerous buildings over a large territory, with a high cost in terms of time and resources, and with a high risk due to aftershocks. The main idea is to have the robot act as an alter ego of the human operator, who teleoperates it through a virtual-reality device and a body-tracking system based on inertial sensors.
The goal of this article is to discuss the exploitation of the perception and manipulation capabilities of the WALK-MAN robot for building assessment in areas affected by earthquakes. The presented work illustrates the hardware and software characteristics of the developed robotic platform and results obtained with field testing in the real earthquake scenario of Amatrice, Italy. Considerations on the experience and feedback provided by civil engineers and architects engaged in the activities are reported and discussed.
Operations in Postearthquake Scenario and Robotic Applications
In the past few years, the high number of disasters, such as the Fukushima Daiichi nuclear accident, has drawn attention to the development and deployment of search-and-rescue robotic platforms in disaster scenarios [1]. Earthquakes often lead to structural-integrity failures in which buildings crack, tear apart, or collapse. The 2016 earthquake in Amatrice, Italy, can be considered an example of such a disaster (Figure 1). On 24 August 2016, a severe 6.0-magnitude earthquake, followed by at least five aftershocks ranging between 5.9 and 6.5 in magnitude, took place in Italy and affected four regions (Lazio, Abruzzo, Umbria, and Marche) and 180 municipalities. This set of earthquakes was the biggest in Italy in the last three decades and affected more than 25,000 people (who had to be evacuated from their homes) and more than 62,000 buildings.
An overview of the seismic event in Italy in August 2016. (a) The area affected by the earthquake with a color scale based on the moment of magnitude. (b) The town of Amatrice (earthquake epicenter). The central part is completely destroyed, whereas the buildings of the peripheral areas (red arrows) resisted. (c) The earthquake magnitude data, from August to October. (Images courtesy of Istituto Nazionale di Geofisica e Vulcanologia.)
Rescuer intervention in this scenario is usually characterized by two separate phases: 1) the rescue of and assistance to the people who are trapped under the rubble or injured, and 2) the technical assessment of damaged buildings and the assistance to inhabitants, who need to recover items from their homes. The rescue phase is always immediate, given that the operation time may affect the lives of the people in danger. The second phase, however, usually takes weeks or months, during which a limited number of technical experts enter and inspect all of the damaged buildings in the affected area [Figure 1(b)]. This procedure also has to be repeated after every aftershock [Figure 1(c)] [25]. During both phases, the emergency responders involved are at high risk because they need to enter partially collapsed buildings or areas with severely damaged masonry. Traversing doors, narrow passages, and areas obstructed by rubble or objects scattered on the ground makes the indoor environment very complex and the operations lengthy and tiring.
Unfortunately, events in the recent past have shown how dangerous and critical this kind of work can be. On 26 September 1997, technicians were inspecting the status of the Basilica di San Francesco in Assisi, Italy, after an earthquake. During the inspection, an aftershock caused a partial collapse of the Basilica, killing four of the people involved.
To support or replace humans in dangerous operations, robotic platforms should possess human-like capabilities, especially concerning locomotion and manipulation skills for traversing rubble, clearing paths, and retrieving objects [2], [3]. Research in this field has been nurtured through the organization of several competitions, such as RoboCupRescue, euRathlon, and the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge. In these contests, robots have to face a sequence of tasks inspired by real scenarios that highlight different aspects and challenges related to emergency operations.
Search-and-rescue robotics activities in real scenarios have mainly focused on providing three-dimensional (3-D) mapping of the environment or human localization [4], [5]. Often, these systems provide an integrated and intuitive interface for users who are not roboticists. In [1], the key features for search-and-rescue robots are summarized as survivability, mobility, sensing, communication, and operation. Moreover, autonomous operations in complex unstructured environments require extensive programming efforts to consider all of the environmental constraints, and often robots cannot cope with unforeseen events. An alternative approach for these tasks is to provide intuitive interfaces to the pilot for teleoperating the robot [6]. Similar approaches have been presented in various other fields, such as space [7] or surgery [8], [9].
Recent developments in legged locomotion for full-body humanoid or animaloid robots, although very promising, do not yet show sufficiently reliable and robust performance in these environments, especially in tasks with time-execution constraints, as demonstrated by the DARPA Robotics Challenge held in 2015. The FP7 European project WALK-MAN [26] is focused on developing a humanoid robot that can address several of the aforementioned challenges that may arise in a disaster. In this project, we collaborate with the Protezione Civile Città Metropolitana di Firenze, Italy, to identify the requirements and application technologies for a humanoid robot taking part in an intervention, such as after an earthquake.
This article presents a use case for humanoid robots in postearthquake scenarios as avatars for remote inspection, damage assessment, and object retrieval. We discuss the mission specifications coming from Protezione Civile Città Metropolitana di Firenze operators, present the system setup and a novel, intuitive, and immersive teleoperation interface designed to address this challenge, and report on the results of the on-site testing. This article focuses on the modifications and development of new components to address the challenges posed by very specific postearthquake scenarios. A detailed description of the WALK-MAN hardware and software architecture can be found elsewhere [10].
Given how critical rescue tasks are compared with the stability and time constraints of current robotic systems, it is still unrealistic to address the first phase of intervention. Hence, our work has been devoted to field testing of the perception and manipulation capabilities required to tackle the operations of the second phase, as described previously. For this scope, we developed a robotic platform based on the WALK-MAN robot technology, which consists of a wheeled base and a humanoid upper body. In this way, both perception and manipulation tasks can take place during the operation. Its compliant arms, with underactuated end-effectors, provide sturdy hardware for adaptive and powerful manipulation. At the same time, its perception capabilities, together with the teleoperation interfaces for vision and bimanual manipulation, provide the pilot with a set of tools for remote building assessment. Thanks to the introduced platform, the operators can remotely assess the building damage level through the evaluation table of the standard postearthquake form [11], and the collected data may also be streamed to a remote consulting engineering firm for a deeper analysis of the structural integrity of the building through postprocessing.
The wheeled base has been designed to focus on the assessment activities with a teleoperated robot, reducing the complexity of the system with respect to teleoperated control of legged locomotion. In this article, we present a description of the hardware platform, the software control architecture, the teleoperation interface used to complete several dexterous tasks, and the results of the building inspection. The system effectiveness was demonstrated both in the laboratory and during several field tests (for video footage of the robot deployment on site at Amatrice, see [27]). Finally, we report end-user feedback collected from the experts of Protezione Civile Città Metropolitana di Firenze and the Amatrice municipality during the field tests.
Mission Objectives and Requirements
Thanks to the support of the Italian Protezione Civile Città Metropolitana di Firenze, real field testing was organized in Amatrice in one of the buildings affected by the earthquake [Figure 2(a)]. The focus of the field activity was to evaluate the feasibility of the following tasks:
building a 3-D map of the house interior status
measuring the building structural damages
recovering some objects from the house
installing monitoring systems and sensors inside the damaged building.
An overview of the mission organization. (a) The inspected house and the location of the outdoor pilot station. (Background image courtesy of Google Maps.) (b) The inspected building layout and the mission plan (the mission objectives, the planned path, and the spot suitable for room scanning are indicated). (c) A part of the building assessment standard form. (Source: Dipartimento della Protezione Civile, Presidenza del Consiglio dei Ministri.)
As for the last point, the technical experts involved suggested the use of the robot to place indoor wall position sensors that monitor building movement and to equip the robot with additional sensors, such as a multigas detector or thermal camera.
Figure 2(b) shows an overview of the inspected four-room house, which includes several connecting doors. Two indoor mission targets were defined a priori: an object to be retrieved at spot A and a door to be opened at spot B. To complete the tasks, we drew up a mission plan [Figure 2(b)] to find a path for 1) reaching the mission targets and 2) reaching suitable locations to perform a room scan. A possible path is shown by the dotted line; its feasibility was verified online by the robot operators at every step.
During the robotic field tests, a group of technical experts stayed close to the pilot station to perform the building evaluation remotely through the robotic platform. The building assessment is normally done by filling out a suitable technical form following the postearthquake procedures [11]. Figure 2(c) shows excerpts of the forms that the technical team has to fill out for each inspected building. Such forms are meant for a fast and qualitative evaluation of the building's structural condition [e.g., Figure 2(c) section 4 lists very heavy, medium, and light damage]. The information to report is essential and strongly oriented toward short-term countermeasures [e.g., the right part of the table in Figure 2(c) sections 4–6], which are evaluated based on the experience of the operator and supported by the measurements that can be taken in the field (e.g., with a measuring tape). The analysis of these forms provides very useful guidelines for developing the specifications of the robotic mission. Accordingly, our aim was to provide the operator with appropriate sensory feedback, as if he or she were personally inspecting the building, together with the possibility of extracting basic quantitative measurements (point-to-point distances or angles between planes). Moreover, the assessment forms concern damage both to structural elements (walls, roof, and so forth) and to nonstructural elements, such as hydraulic or gas pipelines and electrical systems. To detect the latter in particular [Figure 2(c) section 5], given the limitations of autonomous recognition systems, it is essential to have the human in the loop to perform an evaluation based on his or her expertise.
To define the mission requirements, our design team went to the town of Amatrice one month before the official mission to visit the areas affected by the earthquake. Figure 3 shows some of the photographs taken during the inspection. Among the normal household features (e.g., doors, tables, and so on), the main characteristics of a postearthquake scenario are debris on the ground, collapsed furniture limiting accessibility to the rooms, and damage to the building structure.
The house interior status. In the photos, it is possible to recognize doors, objects scattered on the ground, collapsed furniture, and vision-obstructing items.
The combined information provided by the filled-out form and the inspection of the interior of the house was used to define the hardware specifications of the robotic platform, which are provided in Table 1. The requirements are divided into five domains, which define the specifications of the different subsystems that constitute our robotic platform: perception, manipulation, mobility, autonomy, and user interface for teleoperation. The specifications in Table 1 represent the input for the following sections, where the implementation of the setup is discussed.
Robotic Platform Setup
The mission field was organized in three areas: 1) the remote pilot station [shown in Figure 4(e)], 2) the outside zone peripheral to the building, and 3) the indoor zone, where the robot operates. The overall infrastructure was organized as depicted in Figure 4(a)–(c). The components that belong to the robot and to the operator are shown in Figure 4(a) and (c), respectively. Two Ethernet cables, one dedicated to the control data and one dedicated to the vision data, were connected to two wireless routers, one near the pilot station and one near the entrance of the building. In this way, the robot remotely receives commands and streams the visual data back to the teleoperator.
(a)–(c) The communication and control architecture scheme, (d) the robotic platform, based on the upper body of the WALK-MAN robot, and (e) the remote pilot station.
Robot
For this mission, we developed a prototype robotic platform based on the WALK-MAN technology [10] and on the specifications determined by the scenario requirements, listed in Table 1. The robot consists of a wheeled base for better stability and a humanoid upper body for visual inspection and manipulation task completion [Figure 4(b)]. The overall size of the platform is crucial for this application due to the restricted indoor passages. Moreover, it defines the mobility capabilities of the robot. In particular, the width of the base determines the minimum allowed corridor size, whereas its length affects the turning radius of the mobile base. For these reasons, the robot was provided with the smallest mobile base available on the market that is compatible with the upper-body weight and size. Overall dimensions are reported in Figure 4(d).
The end-effectors are based on the Pisa/IIT SoftHand [12], which increases the robustness, reliability, and efficiency of the manipulation system while reducing its mechanical and control complexity. Each end-effector is equipped with a six-axis force/torque sensor that provides feedback for the manipulation tasks.
The exteroceptive visual perception system of the robot is a MultiSense SL [28] integrated in the robotic head. It includes a stereo red, green, and blue (RGB) camera, a rotating two-dimensional (2-D) lidar scanner, and an inertial measurement unit (IMU) sensor. We set the resolution of the stereo camera to one megapixel for the RGB-depth (RGB-D) data with an update rate of 15 Hz, and the laser scanner returns 1,024 points at 60 Hz and rotates at 1 rad/s. A ZED stereo camera [29] is placed on top of the robotic head and returns images of the reconstructed 3-D environment to the pilot station for teleoperation and inspection purposes. To cope with the variety of light conditions in postearthquake scenarios, the robot head is equipped with four light-emitting diode units (brightness 690 lm/unit, power 6 W/unit). Their strobing and light intensity can be actively controlled by the pilot to tune them according to need. The robot is powered by a custom lithium-ion battery (29 V–63 Ah) that provides it with about 3 h of power autonomy.
Pilot Station and Teleoperation Interfaces
The WALK-MAN–pilot interface (PI) [13] is used by the operator to send high-level commands to the robot and to visualize its kinematic state within the 3-D representation of the surrounding environment (Figure 5). A monocular image of the scene is also visualized in the interface.
The PI used by the operator of PC1. The 3-D viewer is used to understand the scene and take measurements.
A custom human–machine interface (HMI) has been realized to teleoperate the robot [Figure 4(a)–(c)]. The HMI is composed of an immersive 3-D viewer and four inertial and electromyographic bracelet sensors to control the movement of the robot arms and hands. The Myo bracelets [30] are used to acquire the teleoperator’s electromyography (EMG) and inertial measurements. We decided to place one Myo bracelet on the forearm and one on the upper arm of each of the pilot’s arms. A Madgwick filtering algorithm [14] is used to obtain the orientation of each Myo. The relative orientation between the two devices is then used to calculate the wrist pose, given the lengths of the pilot’s arm segments. Finally, a linear combination of the electromyographic signals from the forearm is processed, as reported in [15], to extract a signal used as a reference for the control of the robot’s hand closure. This method also allowed us to cope with the issues of placement and repeatability of EMG sensors, because each operator follows a short training session (1 or 2 min) to obtain a mapping from the EMG signals to hand-closure signals. More information about the use of EMG sensors for controlling the Pisa/IIT SoftHand can be found in [16]. The Oculus Rift virtual-reality viewer [31] has been used to exploit human stereo vision and reproduce 3-D scenes, and its inertial unit and infrared sensors have been used to estimate its pose in space. The stereo images coming from the ZED camera are sent to the 3-D viewer to provide visual feedback from the robot. The orientation of the teleoperator’s head, used for robot gaze teleoperation, is computed using the inertial sensor of the Oculus system. The teleoperator’s wrist pose and level of hand closure are sent to the control module that translates the information into control inputs for the robot joints (see the “Teleoperation Module” section).
On the communication side, the main personal computer (PC1) was directly connected to the Ethernet cable dedicated to the commands sent by the teleoperator, while the second cable was connected to a router that also establishes a local network among all of the pilot PCs through an Ethernet connection. In this way, the teleoperator receives the visual data in the Oculus Rift while sending his or her head orientation, wrist pose, and hand-closure references to PC1. Finally, the Myo bracelets were connected via Bluetooth to their dedicated PCs, where the processing described previously was executed to retrieve the operator’s arm pose and orientation. Although it is not specifically addressed in the present work, the communication channel plays a paramount role in the achievement of our objectives. In fact, it has been shown that high communication delays in visuo-haptic applications (>150 ms) significantly degrade performance [17]. For these reasons, future development will address a robust and effective communication channel, e.g., by refining existing perceptually motivated compression approaches for the transmitted data (dead-band and prediction approaches) to enable a proper information exchange.
Software Architecture
Given the target of the mission and the new robot setup, a flexible and easily reconfigurable software platform was needed. We chose the Cross-Bot-Core (XBotCore) [18] robot control framework, which satisfies hard real-time (RT) requirements, ensuring a 1-kHz control loop in EtherCAT-based robots. The robot software architecture played a key role in the mission success: it guaranteed control-module code reusability and interoperability with the Yet Another Robot Platform (YARP) [19] non-RT framework. XBotCore introduces a novel approach to configuring low-level control systems using modern description formats, such as the Universal Robot Description Format (URDF) [32] and the Semantic Robot Description Format (SRDF) [33], which are traditionally used for high-level software components. Thanks to the introduced abstractions, it is possible to control different robots or different parts of the same robot without code changes: the application programming interface (API) provided to control the robot is dynamically built starting from the robot URDF/SRDF. Modifying the SRDF, e.g., removing a kinematic chain such as the torso, results in a different API for the user that is compatible with the available/desired parts of the robot to control. We exploited this feature by removing the leg chains from the SRDF, and we controlled the humanoid upper body using a YARP module without any code modification.
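As an illustration of this mechanism, the fragment below sketches what a reduced SRDF could look like; it is a hypothetical example, with invented group and link names that do not correspond to the actual WALK-MAN description files. Only the chains listed here would be exposed by the dynamically built API.

```xml
<?xml version="1.0"?>
<!-- Hypothetical upper-body-only SRDF: the leg chains are simply absent,
     so the control API built at start-up exposes only these groups. -->
<robot name="walkman_upper_body">
  <group name="torso">
    <chain base_link="pelvis" tip_link="torso_link"/>
  </group>
  <group name="left_arm">
    <chain base_link="torso_link" tip_link="left_wrist"/>
  </group>
  <group name="right_arm">
    <chain base_link="torso_link" tip_link="right_wrist"/>
  </group>
</robot>
```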
Control and Perception
Teleoperation Module
To remotely control the upper body of the WALK-MAN robot, we developed a dedicated control module, which receives from the pilot station the information needed to reproduce the teleoperator’s movements on the robot. In particular, three kinds of data are sent to the control module and then translated into robot joint motion: the head orientation, the pose of the wrists, and the amount of hand closure.
The quaternion representing the operator’s head orientation with respect to the plane perpendicular to the gravity vector is translated, by means of a linear map, into the yaw and pitch joints of the head and the yaw joint of the torso. The rotation corresponding to the roll angle is not considered. For each arm of the teleoperator, the Cartesian pose of the wrist with respect to the shoulder is computed from the relative orientation of the two Myo armband bracelets. This pose is then scaled to map the human arm to the robot arm, and it is sent through the network. When the pose is received by the control module, Jacobian-based inverse kinematics is performed to obtain the desired arm joint positions. Note that at system start-up, the teleoperator assumes a predefined homing position to define the reference orientation of the two Myos.
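A minimal sketch of this reconstruction is given below. It is not the WALK-MAN code: the frame conventions (a common world frame for both bracelets, arm segments along +x at the homing posture) and all names are assumptions made for illustration.

```cpp
#include <Eigen/Geometry>

// Wrist pose of one arm reconstructed from the Madgwick-filtered orientations
// of the two Myo bracelets (upper arm and forearm), via two-link forward
// kinematics referenced to the homing posture.
struct ArmModel {
  double upper_arm_length;            // shoulder-to-elbow length of the pilot [m]
  double forearm_length;              // elbow-to-wrist length of the pilot [m]
  Eigen::Quaterniond q_upper_home;    // upper-arm Myo orientation at homing
  Eigen::Quaterniond q_forearm_home;  // forearm Myo orientation at homing
};

struct WristPose {
  Eigen::Vector3d position;           // wrist position w.r.t. the shoulder
  Eigen::Quaterniond orientation;     // wrist orientation w.r.t. the torso
};

WristPose wristFromMyos(const ArmModel& arm,
                        const Eigen::Quaterniond& q_upper,
                        const Eigen::Quaterniond& q_forearm) {
  // Rotation of each segment relative to the calibrated homing posture.
  Eigen::Quaterniond R_u = q_upper * arm.q_upper_home.inverse();
  Eigen::Quaterniond R_f = q_forearm * arm.q_forearm_home.inverse();

  // At homing the arm is assumed straight along +x; each segment rotates
  // rigidly with its bracelet, so the wrist position is the sum of the two
  // rotated segment vectors.
  const Eigen::Vector3d x(1.0, 0.0, 0.0);
  Eigen::Vector3d p_elbow = R_u * (arm.upper_arm_length * x);
  Eigen::Vector3d p_wrist = p_elbow + R_f * (arm.forearm_length * x);

  return {p_wrist, R_f};
}
```

The resulting pose would then be scaled to the robot arm dimensions and passed to the Jacobian-based inverse kinematics on the control side, as described above.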
Thanks to the EMG sensors of the Myo armband bracelets, a value proportional to the muscular activity of each forearm is obtained using a linear map. This value represents the desired position for the hand motor. This is very convenient for the human operator: because the Myo bracelets are positioned on the forearm, muscular activity can be generated simply by opening and closing the hand; consequently, the robot closes its hand as the teleoperator does. The desired joint positions obtained for the hands, arms, torso, and head are then sent to the low-level controllers of the motor boards, resulting in a robot motion. In each part of this control scheme, safety bounds are checked before moving the robot to avoid self-collisions. A tuning phase for each teleoperator takes place before the experiments, because each person is characterized by different electromyographic signals. During this phase, the teleoperator is required to raise the arms and keep them fixed in a straight pose for 3 s.
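One possible form of this mapping is sketched below. The channel count, the equal weighting, and the calibration fields are illustrative assumptions; the implemented processing follows [15].

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <numeric>

// Per-operator calibration recorded during the short training/tuning session.
struct EmgCalibration {
  double rest_level;    // combined activity with the hand relaxed
  double grasp_level;   // combined activity with the hand fully closed
};

// Linear map from rectified forearm EMG channels to a hand-closure reference
// in [0, 1], sent as the position reference for the SoftHand closure motor.
double handClosureReference(const std::array<double, 8>& emg_channels,
                            const EmgCalibration& cal) {
  double activity = std::accumulate(
      emg_channels.begin(), emg_channels.end(), 0.0,
      [](double acc, double v) { return acc + std::abs(v); });

  double closure = (activity - cal.rest_level) /
                   (cal.grasp_level - cal.rest_level);
  return std::clamp(closure, 0.0, 1.0);   // 0 = hand open, 1 = hand closed
}
```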
Vision Module
To visually examine the inspected building, we used the exteroceptive sensors, i.e., the lidar and the RGB-D cameras, to acquire crucial information about the structure of the indoor environment. For this purpose, we developed two vision-processing modules dedicated to the acquisition of different measurements.
Plane Detection Module
The first module has been developed to analyze the structure of the scene by searching for planar regions in it. If the extracted planes are bigger than a certain threshold, they are categorized into four types: ceiling, floor, frontal wall, and lateral wall. This categorization is necessary for inspection in disaster scenarios, e.g., to recognize cracks or anomalous wall inclinations (see Figure 6). The classification uses the relative orientation between the planes and the robot head. Moreover, the pilot can compare the relative distance and orientation of two planes by selecting them through the PI.
The distances and angles between the wall, the floor, and the gravity vector in room 2. On the upper left is a 2-D lidar-based simultaneous localization and mapping (SLAM) path, and on the right is the RGB image of the scene.
The plane estimation algorithm uses as input the lidar data provided by the rotating laser scanner of the MultiSense-SL head. The point cloud used for plane classification is obtained by acquiring and accumulating 10 s of laser data to allow a full scan of the environment [Figure 7(a)]. Then, the point cloud is filtered using a 3-D pass-through filter to remove regions that are outside our interest. A statistical outlier removal filter and a voxel-grid downsampling filter are also applied to the point cloud. In this way, the laser image has a reduced number of points, allowing faster plane detection. The estimation uses the random sample consensus (RANSAC) algorithm [20] to search for the best plane in the cloud, keeping the number of iterations low even when the number of points is very large. Points belonging to the same plane are removed from the original laser point cloud at every iteration, until a specified number-of-points threshold is met. Then, for each plane, the mean normal vector and the four corners are computed to classify the plane as ceiling, floor, lateral wall, or front wall, visualized in different colors in Figure 7(b). Upon request, the pilot can use a Robot Operating System (ROS) service to compute the relative orientation of planes and the distances between identified planes’ corners, computed along the normal direction.
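A condensed sketch of this filtering and iterative RANSAC pipeline, written against the Point Cloud Library used by the module, is shown below; all thresholds, limits, and stopping criteria are illustrative assumptions, not the values used in the field.

```cpp
#include <vector>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/filters/passthrough.h>
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/segmentation/sac_segmentation.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

std::vector<pcl::ModelCoefficients> extractPlanes(const Cloud::ConstPtr& accumulated) {
  // 1. Pass-through filter: keep only the spatial range of interest.
  Cloud::Ptr cropped(new Cloud);
  pcl::PassThrough<pcl::PointXYZ> pass;
  pass.setInputCloud(accumulated);
  pass.setFilterFieldName("z");
  pass.setFilterLimits(-1.5, 3.0);          // assumed vertical limits [m]
  pass.filter(*cropped);

  // 2. Statistical outlier removal, then voxel-grid downsampling.
  Cloud::Ptr denoised(new Cloud);
  pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
  sor.setInputCloud(cropped);
  sor.setMeanK(50);
  sor.setStddevMulThresh(1.0);
  sor.filter(*denoised);

  Cloud::Ptr cloud(new Cloud);
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud(denoised);
  voxel.setLeafSize(0.03f, 0.03f, 0.03f);   // assumed 3-cm voxels
  voxel.filter(*cloud);

  // 3. Iterative RANSAC: extract the dominant plane, remove its inliers,
  //    and repeat until too few points remain.
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.02);           // assumed inlier tolerance [m]
  seg.setMaxIterations(500);

  std::vector<pcl::ModelCoefficients> planes;
  while (cloud->size() > 500) {             // assumed stopping threshold
    pcl::ModelCoefficients coefficients;
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    seg.setInputCloud(cloud);
    seg.segment(*inliers, coefficients);
    if (inliers->indices.size() < 200) break;   // no sizeable plane left
    planes.push_back(coefficients);

    // Remove the inliers of the extracted plane before the next iteration.
    Cloud::Ptr remaining(new Cloud);
    pcl::ExtractIndices<pcl::PointXYZ> extract;
    extract.setInputCloud(cloud);
    extract.setIndices(inliers);
    extract.setNegative(true);
    extract.filter(*remaining);
    cloud.swap(remaining);
  }
  // Each plane is then classified as ceiling, floor, or wall from its normal.
  return planes;
}
```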
(a) The 3-D point cloud of the first room, (b) the reconstructed planes of the first room, (c) the RGB view of a crack inspected in the first room from the PI point of view, (d) the crack estimated width measurement in the point cloud (in meters), and (e) the manual measurement of the crack width in the field (in centimeters).
Local Regions Measurements Module
The second vision module is dedicated to computing distances and orientations between selected local regions in the environment, using both the 3-D perceptual data from the stereo camera and the lidar scanner and the gravitational force vector from the IMU sensor that is part of the MultiSense-SL head. For the point cloud data, the pilot can select either to accumulate the laser scanner data so that the whole environment is scanned or to use the filtered stereo RGB-D data. The gravity vector is computed from the IMU data after Madgwick pose filtering in RT [21]. We analyzed the mean and standard deviation of the IMU rotational error for the estimated gravitational vector, which are 1.8° and 1.1°, respectively.
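The sketch below illustrates, under stated assumptions (a sensor-to-world orientation quaternion from the Madgwick filter and plane normals expressed in the same sensor frame), how the gravity direction and the measurements of Figures 6 and 7(d) could be obtained; names and conventions are illustrative, not the module's actual code.

```cpp
#include <Eigen/Geometry>
#include <algorithm>
#include <cmath>

// Gravity direction expressed in the sensor frame, assuming q_imu is the
// sensor-to-world orientation and "down" is -z in the world frame.
Eigen::Vector3d gravityInSensorFrame(const Eigen::Quaterniond& q_imu) {
  const Eigen::Vector3d down_world(0.0, 0.0, -1.0);
  return q_imu.inverse() * down_world;
}

// Angle between a plane normal and the gravity vector, in degrees:
// roughly 90° for a vertical wall and 0° for the floor or ceiling,
// matching the measurements shown in Figure 6.
double angleToGravityDeg(const Eigen::Vector3d& plane_normal,
                         const Eigen::Vector3d& gravity) {
  double c = std::abs(plane_normal.normalized().dot(gravity.normalized()));
  return std::acos(std::clamp(c, 0.0, 1.0)) * 180.0 / M_PI;
}

// Point-to-point distance between two selected seed points, e.g., the two
// edges of a crack, as in Figure 7(d).
double pointDistance(const Eigen::Vector3d& a, const Eigen::Vector3d& b) {
  return (a - b).norm();
}
```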
There are two options through the PI. First, the pilot can select two seed points in the environment. For each seed point, a local surface region around the point is extracted and fitted, so that the relative distance and orientation between the two regions can be computed. Second, the pilot can select a single region and measure its orientation with respect to the gravity vector.
Both modules are implemented in C++ as ROS nodes, using the Point Cloud Library [23]; the second module works in RT and is part of the Surface Patch Library [24]. The thresholds and parameter settings for the filtering and the plane estimation can be tuned dynamically through a graphical user interface to meet the specific demands of different environments. For example, the point cloud region can be limited to points close to the robot when only planes around the robot are required, rather than ceilings or floors.
Results and End-User Feedback
Figure 8 summarizes the indoor operations executed by the robot under the supervision of the technical experts. In detail, it highlights the locations of the various activities performed during our field tests, such as measurements and manipulation tasks.
The robot is shown scanning rooms 1–4, measuring cracks, manipulating objects, and opening a door during the field operations.
Measurements Acquisition
Figure 7(a) and (b) shows the 3-D scene sent to the pilot PC1 and the reconstructed planes computed by the dedicated vision module for the first explored room. Thanks to the acquired measurements, it was possible to evaluate the state of the building. In particular, the representative engineering and architecture professionals requested an assessment of the wall inclination with respect to the ground. For the three inspected rooms, the wall inclination with respect to the floor was approximately 90°.
We tested the accuracy of the lidar point cloud by accumulating the point measurements on a plane and calculating the average distance between neighboring points (lateral accuracy) as well as the depth displacement of the same point over a fixed time slot (depth accuracy). For surfaces 1 m from the sensor, the lateral accuracy is 6 mm and the depth accuracy is 11 mm. When the distance between the sensor and the surface increases, the accuracy degrades (±30 mm over 0.1–10 m, as reported in the laser sensor specifications). As can be seen in the images, the measurements are precise enough, within 6 mm, to allow the engineers and architects to assess the severity of the cracks and hence complete the estimation of the building state.
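One possible way to compute the lateral accuracy from the points accumulated on a planar target is sketched below; the nearest-neighbor formulation is an assumption about the exact procedure.

```cpp
#include <cmath>
#include <vector>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Average distance between each point and its nearest neighbor in the same
// cloud, used here as an estimate of the lateral accuracy on a planar target.
double lateralAccuracy(const pcl::PointCloud<pcl::PointXYZ>::Ptr& plane_points) {
  if (plane_points->empty()) return 0.0;

  pcl::KdTreeFLANN<pcl::PointXYZ> tree;
  tree.setInputCloud(plane_points);

  double sum = 0.0;
  std::vector<int> idx(2);
  std::vector<float> sq_dist(2);
  for (const auto& p : plane_points->points) {
    // k = 2 because the closest neighbor of a point is the point itself.
    if (tree.nearestKSearch(p, 2, idx, sq_dist) == 2)
      sum += std::sqrt(sq_dist[1]);
  }
  return sum / static_cast<double>(plane_points->size());
}
```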
Manipulation Tasks
The robot manipulation capabilities were fundamental during indoor operations to gain access to the inspected rooms. The robot opened two doors in the building: one door was opened by pushing it and the other by turning the handle and pulling it (Figure 9). Another manipulation task consisted of collecting relevant objects (Figure 10) to be examined subsequently. All of the manipulation tasks took place in teleoperation mode, using only visual feedback. The enhancement of the teleoperation module with haptic feedback is currently under study. In Figures 9 and 10, we report the six-axis force-torque data acquired during the manipulation tasks. A sequence of images of a remote manipulation task, from the operator’s and the robot’s points of view, is reported in Figure 11.
The WALK-MAN point of view when opening two doors. One door is opened by (a) pushing and the other by (b) turning the handle and pulling it. For the two cases of the (c) left and (d) right hand, the force-torque measurements are reported, where interactions with the environment are clearly distinguishable from the graphs. The letters in the graphs identify the peak loads related to the action of the corresponding photo.
The WALK-MAN point of view when (a)–(c) collecting different objects using different strategies. The force-torque measurements of the (d) left and (e) right hand are reported to highlight the interactions of the robot with the environment. The letters in the graphs identify the peak loads related to the action of the corresponding photo.
A detail of a manipulation task executed during the field test. (a)–(d) The pilot station is visible, with the operator wearing the Oculus and Myo bracelets. (e)–(h) The robot WALK-MAN executes the commanded actions.
End-User Feedback and Lessons Learned
During field tests, the WALK-MAN team cooperated with the technical groups that usually supervise all activities. On site, several experts from the Protezione Civile Città Metropolitana di Firenze (three), the Red Cross (two), and the Amatrice municipality (two architects and one structural engineer) were present to validate the feasibility of the tasks discussed in the “Mission Objectives and Requirements” section.
Tasks 1 and 2 concern the visual feedback provided by the interface and the vision modules as tools to retrieve information on the house interior status and to quantify the extent of the structural damage. The technical experts assessed in the field the effectiveness of these systems for a first evaluation of the building status, as required by the standard forms reported in Figure 2(c). Moreover, they confirmed that the use of these tools can go beyond the simple operation of measuring cracks, e.g., by streaming the collected data to a remote consulting engineering firm.
Concerning manipulation tasks, object retrieval (task 3) was demonstrated to be possible, although nontrivial, while the sensor placement (task 4) was difficult due to the lack of tactile feedback. Adding it would also enable the teleoperator to perform the sclerometer test, which is one of the most common nondestructive tests on concrete structures. The long-term objective is to develop a humanoid system with human-like capabilities, because wheeled systems have significant mobility limitations, especially when the environment contains large debris, holes, or stairs to overcome. In the present application, we decided to implement a wheeled base because legged locomotion was not at a development stage that could guarantee safe and robust navigation on uneven terrain. It is worth noting that, during postearthquake operations, the main aim is to identify those buildings that survived the earthquake and can be repaired. Buildings that are partially collapsed or visibly damaged are excluded from the inspection to speed up the operations. Therefore, a large number of the buildings to inspect do not present large quantities of debris on the ground, and wheeled systems can be used effectively, at least to explore the ground floor. As future work, we will study the capabilities required to navigate different terrains to define guidelines for choosing between a wheeled and a legged system.
From a hardware point of view, robustness and reliability are required to achieve safe interactions with the environment, whereas good perception capabilities are essential to support pilot operations. We proposed to control the robot through a teleoperation framework. The aim of this approach is to fill the gap between the robot and the human, unifying the physical performance of the former and the intelligence of the latter. Indeed, scene understanding is a difficult task, and autonomous methods are still far from matching human capabilities. Navigation is particularly challenging in scenarios with a high level of unpredictability, due to the presence of debris and ground with different characteristics (stiffness, friction, and so forth). Teleoperation offers the advantage of relying on the pilot's experience and perception for selecting a safe path inside the building or for locating stable footholds, which are very challenging tasks for artificial intelligence.

The teleoperation interface was based on the Oculus Rift and Myo bracelets. This resulted in a relatively cheap teleoperation system, with a cost of approximately €5,400 (Myo × 4 @ €200/unit, Oculus × 1 @ €600, laptop × 2 @ €2,000/unit). The presented teleoperation framework will be enhanced in the future with force feedback and other methods to help the user better perceive the spatial surroundings of his or her avatar, i.e., how far away the surrounding objects are.

Concerning the developed communication system, the final aim is to have completely wireless communication between the pilots and the robot to enhance autonomy. However, to have good coverage of the area that the robot has to explore, a dedicated infrastructure is needed: this can be achieved by means of wireless routers placed in the environment. Routers can be positioned by humans in safe locations or by other robots directly inside the dangerous area. These robots should be lighter and simpler than a humanoid (e.g., rovers and drones) and should be equipped with one or more Wi-Fi antennas. In the future, as already discussed, we will consider adopting different communication technologies, such as cellular data communication protocols. Future work will also include a usability analysis to assess the ease of use of the teleoperation framework, as well as the use of sensing redundancy and the implementation of fail-recovery mechanisms to further increase the robustness and dependability of the whole system in real conditions.
Conclusions
The use of robots as avatars for the inspection of buildings after earthquakes, or other disasters, represents a very relevant application for search-and-rescue operators, especially when the earthquake affects cultural heritage sites, which operators enter at high risk regardless of the level of damage. In this article, we reported the results of a field test in a building damaged by an earthquake, carried out to evaluate the technologies developed in the WALK-MAN project, with a special focus on perception and manipulation readiness. We successfully inspected four rooms visually, performing several manipulation activities for both object retrieval and path clearing (e.g., door opening). From our perspective, on-site testing is the best way to validate the maturity of newly developed technologies and to identify critical aspects, moving toward real advancement in the field of search-and-rescue robotics. The evaluation of the technical experts present on site was very positive and confirmed that this technology can address a real issue. Moreover, through a centralized control station far from the dangerous environment, visual information was collected to be evaluated by experts. Having multiple robotic platforms working in parallel in various buildings, with a centralized monitoring station, may speed up the whole second-phase operation. Research is ongoing to extend the current work by enabling teleimpedance control on the robot: using the electromyographic sensors, the operator can change the stiffness of the corresponding robotic arm through his or her muscular activity. This will allow the remote execution, using the same teleoperation framework, of tasks that require different levels of robot stiffness. The design of a control framework for teleoperated legged locomotion is under study and will be a key element to enhance the effectiveness of the WALK-MAN platform in disaster scenarios.
Acknowledgments
The development of the WALK-MAN platform is supported by the WALK-MAN FP7-ICT-2013-10 European Commission project. We thank Protezione Civile Città Metropolitana di Firenze and the municipality of Amatrice for their support during all of the activities described in this article. Finally, we thank Andrea di Basco and Marco Migliorini for their support in the development of the hardware prototype.