Proceedings Volume 9494

Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX


Purchase the printed version of this volume at proceedings.com or access the digital version at SPIE Digital Library.

Volume Details

Date Published: 19 June 2015
Contents: 7 Sessions, 25 Papers, 0 Presentations
Conference: SPIE Sensing Technology + Applications 2015
Volume Number: 9494

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9494
  • New Sensors for Robots
  • Robotic Applications
  • Control
  • Advances in Fundamental Research
  • Innovations in Applied Research
  • Improved Situational Awareness
Front Matter: Volume 9494
Front Matter: Volume 9494
This PDF file contains the front matter associated with SPIE Proceedings Volume 9494, including the Title Page, Copyright information, Table of Contents, Authors, and Conference Committee listing.
New Sensors for Robots
EHD printing of PEDOT:PSS inks for fabricating pressure and strain sensor arrays on flexible substrates
Caleb Nothnagle, Joshua R. Baptist, Joe Sanford, et al.
Robotic skins with multi-modal sensors are necessary to facilitate better human-robot interaction in non-structured environments. Integration of various sensors, especially onto substrates with non-uniform topographies, is challenging using standard semiconductor fabrication techniques. Printing is seen as a promising technology for sensor fabrication and integration, as it may allow direct printing of different sensors onto the same substrate regardless of topography. In this work, we investigate Electro-Hydro-Dynamic (EHD) printing, a method that allows printing of micron-sized features with a wide range of materials, for fabricating pressure sensor arrays using Poly(3,4-ethylenedioxythiophene):Polystyrene Sulfonate (PEDOT:PSS). Fabrication of such sensors has been achieved by prepatterning gold- or platinum-metallized interdigitated comb electrode arrays on a polyimide substrate, with three custom-made PEDOT:PSS-based inks printed directly onto the electrode arrays. These three inks comprise formulations of PEDOT:PSS and NMP; PEDOT:PSS, PVP, and NMP; and PEDOT:PSS, PVP, Nafion, and NMP. All these inks were successfully printed onto sensor elements. Initial bending-induced strain tests on the fabricated sensors show that all the inks are sensitive to strain, confirming their suitability for pressure and strain sensor applications; however, the behavior of each ink (sensitivity, linearity, and stability) is unique to its formulation.
Multi-material additive manufacturing of robot components with integrated sensor arrays
Matt Saari, Bryan Cox, Matt Galla, et al.
Fabricating a robotic component comprising hundreds of distributed, connected sensors can be very difficult with current approaches. To address these challenges, we are developing a novel additive manufacturing technology to enable the integrated fabrication of robotic structural elements with distributed, interconnected sensors and actuators. The focus is on resistive and capacitive sensors and electromagnetic actuators, though others are anticipated. Anticipated applications beyond robotics include advanced prosthetics, wearable electronics, and defense electronics. This paper presents preliminary results for printing polymers and conductive material simultaneously to form small sensor arrays. Approaches to optimizing sensor performance are discussed.
Micro-force sensing mobile microrobots
Wuming Jing, David J. Cappelleri
This paper presents the first microscale micro-force-sensing mobile microrobot. The design consists of a planar, vision-based micro force sensor end-effector, while the microrobot body is made from photoresist mixed with nickel particles and is driven by an external magnetic field. With a known stiffness, the manipulation forces can be determined by observing the deformation of the end-effector through a camera attached to an optical microscope. After analyzing and calibrating the stiffness of a micromachined prototype, proof-of-concept tests were conducted to verify that the microrobot prototype possesses both mobility and in-situ force-sensing capabilities. This microscale micro-Force Sensing Mobile Microrobot (μFSMM) can translate at speeds up to 10 mm/s in a fluid environment. The calibrated stiffness of the micro force sensor end-effector of the μFSMM is on the order of 10^-2 N/m. The force sensing resolution with the current vision system is approximately 100 nN.
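Once the end-effector stiffness is calibrated, the vision-based force measurement described above reduces to Hooke's law applied to the observed deflection. A minimal sketch (the pixel scale and stiffness values below are illustrative assumptions, not the paper's calibration):

```python
def force_from_deflection(deflection_px, px_to_m, stiffness_n_per_m):
    """Estimate the manipulation force from the observed end-effector deflection.

    deflection_px: deflection measured in the microscope camera image (pixels)
    px_to_m: calibrated pixel-to-meter scale of the optics (assumed value below)
    stiffness_n_per_m: calibrated stiffness of the compliant end-effector
    """
    deflection_m = deflection_px * px_to_m
    return stiffness_n_per_m * deflection_m  # Hooke's law: F = k * x

# With a stiffness on the order of 10^-2 N/m (as reported), a 10 um
# deflection corresponds to a force on the order of 100 nN.
f = force_from_deflection(deflection_px=20, px_to_m=0.5e-6, stiffness_n_per_m=1e-2)
```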
Force estimation in a 2-DoF piezoelectric actuator by using the inverse-dynamics based unknown input observer technique
Vincent Trenchant, Micky Rakotondrabe, Yassine Haddab
The aim of this paper is the estimation of force in a two-degrees-of-freedom (2-DoF) piezoelectric actuator devoted to microrobotic manipulation tasks. Due to the limited space and the small size of the actuator, the use of external sensors to measure both the displacement and the force at play during the tasks is impossible. This study therefore proposes observer techniques to bypass the use of force sensors. Based on the unknown input observer (UIO) technique, the force along the two axes (y and z) of the actuator can be estimated precisely and with convenient dynamics. In addition to the force, the state vector of the actuator is also estimated. Experimental tests are carried out and demonstrate the effectiveness of the method.
Robotic Applications
Sensor study for high speed autonomous operations
Anne Schneider, Zachary La Celle, Alberto Lacaze, et al.
As robotic ground systems advance in capabilities and begin to fulfill new roles in both civilian and military life, the limitation of slow operational speed has become a hindrance to the widespread adoption of these systems. For example, military convoys are reluctant to employ autonomous vehicles when these systems slow their movement from 60 miles per hour down to 40. However, these autonomous systems must operate at these lower speeds due to the limitations of the sensors they employ. Robotic Research, with its extensive experience in ground autonomy and the problems associated therein, in conjunction with the CERDEC Night Vision and Electronic Sensors Directorate (NVESD), has performed a study to specify system and detection requirements, determine how current autonomy sensors perform in various scenarios, and analyze how sensors should be employed to increase operational speeds of ground vehicles. The sensors evaluated in this study include the state of the art in LADAR/LIDAR, radar, electro-optical, and infrared sensors, and have been analyzed at high speeds to study their effectiveness in detecting and accounting for obstacles and other perception challenges. By creating a common set of testing benchmarks, and by testing in a wide range of real-world conditions, Robotic Research has evaluated where sensors can be successfully employed today, where sensors fall short, and which technologies should be examined and developed further. This study is the first step toward the overarching goal of doubling ground vehicle speeds on any given terrain.
Multi-modal sensor and HMI integration with applications in personal robotics
Rommel Alonzo, Sven Cremer, Fahad Mirza, et al.

In recent years, advancements in computer vision, motion planning, and task-oriented algorithms, together with the availability and falling cost of sensors, have opened the doors to affordable autonomous robots tailored to assist individual humans. One of the main tasks for a personal robot is to provide intuitive and non-intrusive assistance when requested by the user. However, some base robotic platforms cannot perform autonomous tasks or allow general users to operate them due to complex controls. Most users expect a robot to have an intuitive interface that allows them to directly control the platform as well as give them access to some level of autonomous tasks. We aim to introduce this level of intuitive control and task autonomy into teleoperated robotics.

This paper proposes a simple sensor-based HMI framework in which a base teleoperated robotic platform is sensorized, allowing for basic levels of autonomous tasks and providing a foundation for the use of new intuitive interfaces. Multiple forms of HMIs (human-machine interfaces) are presented and a software architecture is proposed. As test cases for the framework, manipulation experiments were performed on a sensorized KUKA YouBot® platform, mobility experiments were performed on a LABO-3 Neptune platform, and a Nexus 10 tablet was used with multiple users in order to examine the robot's ability to adapt to its environment and to its user.

Robotic situational awareness of actions in human teaming
When robots can sense and interpret the activities of the people they are working with, they become more of a team member and less of just a piece of equipment. This has motivated work on recognizing human actions using existing robotic sensors like short-range ladar imagers. These produce three-dimensional point cloud movies which can be analyzed for structure and motion information. We skeletonize the human point cloud and apply a physics-based velocity correlation scheme to the resulting joint motions. Twenty actions are then recognized using a nearest-neighbors classifier that achieves good accuracy.
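The final classification step admits a compact illustration. The sketch below uses hypothetical two-dimensional features and action labels (the paper's velocity-correlation features are higher-dimensional) to show a 1-nearest-neighbor classifier of the kind the abstract describes:

```python
import math

def nearest_neighbor_label(query, examples):
    """Classify a feature vector by the label of its nearest training example.

    query: feature vector (e.g. joint-velocity correlation features extracted
    from the skeletonized point cloud); examples: list of (vector, label) pairs.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(examples, key=lambda ex: dist(query, ex[0]))[1]

# Toy example: two hypothetical action classes in a 2-D feature space.
train = [((0.0, 0.0), "wave"), ((1.0, 1.0), "point")]
label = nearest_neighbor_label((0.2, 0.1), train)
```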
Performance evaluation and clinical applications of 3D plenoptic cameras
Ryan Decker, Azad Shademan, Justin Opfermann, et al.
The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, and assesses plenoptic imaging in a clinically relevant context and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, along with precision and accuracy results in ideal and simulated surgical settings. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.
Surface EMG and intra-socket force measurement to control a prosthetic device
Joe Sanford, Rita Patterson, Dan Popa
Surface electromyography (SEMG) has been shown to be a robust and reliable interaction method allowing for basic control of powered prosthetic devices. Research has shown a marked decrease in EMG-classification efficiency throughout activities of daily life due to socket shift, movement, and fatigue, as well as changes in the degree of fit of the socket throughout the subject's lifetime. Users with the most severe levels of amputation require the most complex devices with the greatest number of degrees of freedom. Controlling complex dexterous devices with limited available inputs requires the addition of sensing and interaction modalities. However, the greater the amputation severity, the fewer viable SEMG sites are available as control inputs. Previous work reported the use of intra-socket pressure, as measured during wrist flexion and extension, and has shown that it is possible to control a powered prosthetic device with pressure sensors. In this paper, we present correlations of SEMG data with intra-socket pressure data. Surface EMG sensors and force sensors were housed within a simulated prosthetic cuff fit to a healthy-limbed subject. EMG and intra-socket force data was collected from inside the cuff as the subject performed pre-defined grip motions with their dominant hand. Data fusion algorithms were explored and allowed a subject to use both intra-socket pressure and SEMG data as control inputs for a powered prosthetic device. This additional input modality allows for an improvement in input classification as well as information regarding socket fit throughout activities of daily life.
Resolving ranges of layered objects using ground vehicle LiDAR
Jim Hollinger, Brett Kutscher, Ryan Close
Lidar systems are well known for their ability to measure three-dimensional aspects of a scene. This attribute of Lidar has been widely exploited by the robotics community, among others. The problem of resolving ranges of layered objects (such as a tree canopy over the forest floor) has been studied from the perspective of airborne systems. However, little research exists in studying this problem from a ground vehicle system (e.g., a bush covering a rock or other hazard). This paper discusses the issues involved in solving this problem from a ground vehicle. This includes analysis of extracting multi-return data from Lidar and the various laser properties that impact the ability to resolve multiple returns, such as pulse length and beam size. The impacts of these properties are presented as they apply to three different Lidar imaging technologies: scanning pulse Lidar, Geiger-mode flash Lidar, and time-of-flight cameras. Tradeoffs associated with these impacts are then discussed for a ground vehicle Lidar application.
Performance analysis of a piezoelectric cantilever-based energy harvester devoted to mesoscale intra-body robots
Kanty Rabenorosoa, Micky Rakotondrabe
Mesoscale robots, including active capsules, are a promising and well-suited approach for minimally invasive intra-body intervention. However, across the numerous existing works, the main limitation of these robots is the embedded energy available for their locomotion and for the tasks they should accomplish. The limited autonomy and limited power ultimately make them unusable in real situations, such as active capsules operating inside the body for several tens of minutes. In this paper, we propose an approach to power mesoscale robots by using energy harvesting techniques through a piezoelectric cantilever structure embedded on the robot and an oscillating magnetic excitation. The physical model of the proposed system is derived, and simulation results are presented and analyzed according to influencing parameters such as the number of layers in the cantilever and its dimensions. Finally, the feasibility of this solution is demonstrated and perspectives are discussed.
Untethered microscale flight: mechanisms and platforms for future aerial MEMS microrobots
Syed A. Hussain, Spencer Ward, Omid Mahdavipour, et al.
This paper describes initial work on untethered microscale flying structures as a platform for a new class of aerial MEMS microrobots. We present and analyze both biomimetic structures, based partially on the wing designs of the smallest flying insects on Earth, and stress-engineered structures powered by radiometric (thermal) forces. The latter devices, also called MEMS microfliers, are 300 μm × 300 μm × 1.5 μm in size and are fabricated out of polycrystalline silicon. A convex chassis, formed through a novel in-situ masked post-release stress-engineering process, ensures their static in-flight stability. High-speed optical micrography was used to image these MEMS microfliers in mid-flight and analyze their flight profiles.
Control
Automated actuation of multiple bubble microrobots using computer-generated holograms
M. Arifur Rahman, Julian Cheng, Qihui Fan, et al.
Microrobots, sub-millimeter untethered microactuators, have applications including cellular manipulation, microsurgery, microassembly, tissue culture, and drug delivery. Laser-induced opto-thermocapillary flow-addressed bubble (OFB) microrobots are promising for these applications. In the OFB microrobot system, laser patterns generate thermal gradients within a liquid medium, creating thermocapillary forces that actuate the air bubbles that serve as microrobots. A unique feature of the OFB microrobot system is that the optical control enables the parallel yet independent actuation of microrobots. This paper reports on the development of an automated control system for the independent addressing of many OFB microrobots in parallel. In this system, a spatial light modulator (SLM) displayed computer-generated holograms to create an optical pattern consisting of up to 50 individual spots. Each spot can control a single microrobot, so the control of an array of microrobots was accomplished with a sequence of holograms. Using the control system described in this paper, single microrobots, multiple microrobots, and groups of microrobots were created, repositioned, and maneuvered independently within a set workspace. Up to 12 microrobots were controlled independently and in parallel. To the best of the authors' knowledge, this is the largest number of parallel, independent microrobot actuations reported to date.
An ontology to enable optimized task partitioning in human-robot collaboration for warehouse kitting operations
Ashis Gopal Banerjee, Andrew Barnes, Krishnanand N. Kaipa, et al.
Collaborative teams of human operators and mobile ground robots are becoming popular in manufacturing plants to assist humans with repetitive tasks such as the packing of related objects into different units, an operation known as kitting. In this paper, we present an ontology that provides a unified representation of all kitting-related tasks, which are decomposed into atomic actions that are either computational (sensing, perception, planning, and control) or physical (actuation and manipulation). The ontology is then used in a stochastic integer linear program for optimal partitioning of the atomic tasks between the robots and humans. Preliminary experiments on a single-robot, single-human case yield promising results: the kitting operations are completed with lower durations and manipulation failure rates under human-robot partnership than with the human or the robot alone. This success is achieved by the robot seeking human assistance for visual perception tasks while performing the other tasks primarily on its own.
Control of a powered prosthetic device via a pinch gesture interface
Oguz Yetkin, Kristi Wallace, Joseph D. Sanford, et al.
A novel system is presented to control a powered prosthetic device using a gesture tracking system worn on a user's sound hand in order to detect different grasp patterns. Experiments are presented with two different gesture tracking systems: one comprised of Conductive Thimbles worn on each finger (the Conductive Thimble system), and another comprised of a glove which leaves the fingers free (the Conductive Glove system). Timing tests were performed on the selection and execution of two grasp patterns using the Conductive Thimble system and the iPhone app provided by the manufacturer. A modified Box and Blocks test was performed using the Conductive Glove system and the iPhone app provided by Touch Bionics. The best prosthetic device performance in this test was achieved with the developed Conductive Glove system. Results show that these low-encumbrance, gesture-based wearable systems for selecting grasp patterns may provide a viable alternative to EMG and other prosthetic control modalities, especially for new prosthetic users who are not trained in using EMG signals.
Multi-mode vibration suppression in 2-DOF piezoelectric systems using zero placement input shaping technique
Yasser Al Hamidi, Micky Rakotondrabe
This paper deals with the feedforward control of the vibrations of a 2-DOF piezoelectric micropositioner, in order to damp the vibrations in the direct axes and in the cross-couplings. The actuator exhibits badly damped vibrations in its direct transfers as well as in its cross-coupling transfers. We therefore propose a bivariable control which does not require sensors to reduce the vibrations in the different axes. The proposed scheme reduces all modes of vibration for both outputs by extending the monovariable zero-placement input-shaping technique to the bivariable case. Experimental tests have been carried out and demonstrate the efficiency of the proposed method.
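The paper's zero-placement shaper differs in its design details, but the closely related zero-vibration (ZV) input shaper illustrates the principle: design a short impulse sequence whose zeros cancel a vibratory mode, so that convolving it with the command suppresses the residual oscillation. A sketch for one mode (the modal parameters below are assumed, not the micropositioner's):

```python
import math

def zv_shaper(wn, zeta):
    """Two-impulse zero-vibration (ZV) shaper for a single vibratory mode.

    wn: undamped natural frequency (rad/s); zeta: damping ratio, 0 <= zeta < 1.
    Returns (amplitudes, times); convolving the reference input with these
    impulses cancels the residual vibration of the mode.
    """
    wd = wn * math.sqrt(1.0 - zeta ** 2)              # damped natural frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))
    amplitudes = (1.0 / (1.0 + K), K / (1.0 + K))     # sum to 1: unit DC gain
    times = (0.0, math.pi / wd)                       # half a damped period apart
    return amplitudes, times

# Hypothetical mode at about 50 Hz with 5% damping; for a multi-mode, two-axis
# system, one shaper per mode (per axis) would be designed and convolved together.
amps, times = zv_shaper(wn=314.159, zeta=0.05)
```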
Simultaneous suppression of badly damped vibrations and cross-couplings in a 2-DoF piezoelectric actuator by using a feedforward standard H∞ approach
Didace Habineza, Micky Rakotondrabe, Yann Le Gorrec
This paper deals with the feedforward control of vibrations in a 2-axis piezoelectric actuator devoted to precise positioning. The actuator is highly prized for high-precision spatial positioning applications, but its positioning capability, as well as the stability of the final tasks, is compromised by badly damped vibrations, especially during high-speed positioning operation. In addition to these vibrations, the presence of strong cross-couplings between the different actuator axes poses a challenge to the feedforward control scheme. This paper proposes a bivariable feedforward standard H∞ approach to suppress the vibrations in the direct transfers and to reduce the amplitudes of the cross-couplings. The proposed approach is simple to handle and easy to implement compared with the techniques commonly used for vibration suppression. Experimental tests demonstrate the efficiency of the proposed approach.
Advances in Fundamental Research
Evolving spiking neural networks: a novel growth algorithm exhibits unintelligent design
J. David Schaffer
Spiking neural networks (SNNs) have drawn considerable excitement because of their computational properties, believed to be superior to those of conventional von Neumann machines and shared with living brains. Yet progress in building these systems has been limited because we lack a design methodology. We present a gene-driven network growth algorithm that enables a genetic algorithm (evolutionary computation) to generate and test SNNs. The genome for this algorithm grows as O(n), where n is the number of neurons; n is also evolved. The genome specifies not only the network topology but all of its parameters as well. Experiments show the algorithm producing SNNs that generate robust spike-bursting behavior given tonic inputs, an application suitable for central pattern generators. Even though evolution did not include perturbations of the input spike trains, the evolved networks showed remarkable robustness to such perturbations. In addition, the output spike patterns retain evidence of the specific perturbation of the inputs, a feature that could be exploited by network additions for refined decision making if required. On a second task, a sequence detector, a discriminating design was found that might be considered an example of “unintelligent design”: extra non-functional neurons were included that, while inefficient, did not hamper proper functioning.
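The generate-and-test loop the abstract refers to follows the standard genetic-algorithm pattern. The sketch below is a generic skeleton with a toy fitness function standing in for "grow the SNN from the genome and score its bursting behavior"; it is not the paper's growth algorithm or genome encoding:

```python
import random

def evolve(fitness, genome_len, pop_size=30, generations=40, seed=0):
    """Minimal generate-and-test genetic algorithm: real-valued genomes,
    truncation selection, averaging crossover, and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        pop = ranked[:2]                           # elitism: keep the best two
        while len(pop) < pop_size:
            a, b = rng.sample(ranked[:10], 2)      # breed among the fittest
            pop.append([(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)])
    return max(pop, key=fitness)

# Toy fitness: maximize -(sum of squares), i.e. drive all genes toward zero.
best = evolve(lambda g: -sum(x * x for x in g), genome_len=4)
```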
Experimental analysis of a Lotka-Volterra neural network for classification
This paper presents an experimental study of a neural network modeled by an adaptive Lotka-Volterra system. With totally inhibitory connections, this system can be embedded in a simple classification network, which is able to classify and monitor its inputs in a spontaneous, nonlinear fashion without prior training. We describe a framework for leveraging this behavior through an example involving breast cancer diagnosis.
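The competitive dynamics behind such training-free classification can be sketched directly from the Lotka-Volterra equations. Below, a minimal two-unit network with purely inhibitory coupling is integrated with forward Euler; the coefficients are illustrative assumptions, not the paper's adaptive system:

```python
def lv_step(x, growth, inhibit, dt=0.01):
    """One forward-Euler step of a Lotka-Volterra network:
        dx_i/dt = x_i * (growth_i - sum_j inhibit[i][j] * x_j)
    With nonnegative (purely inhibitory) coupling, the units compete, and the
    unit with the strongest input suppresses the others without any training."""
    n = len(x)
    return [max(0.0, x[i] + dt * x[i] * (growth[i] - sum(inhibit[i][j] * x[j] for j in range(n))))
            for i in range(n)]

# Two competing units; mutual inhibition stronger than self-inhibition yields
# winner-take-all behavior, won here by the unit with the larger input.
x = [0.5, 0.5]
growth = [1.0, 0.6]                    # hypothetical class evidence
inhibit = [[1.0, 1.2], [1.2, 1.0]]
for _ in range(5000):
    x = lv_step(x, growth, inhibit)
winner = max(range(len(x)), key=lambda i: x[i])
```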
Collaborative mining and transfer learning for relational data
Many real-world problems, including human knowledge, communication, biological, and cyber network analysis, deal with data entities for which the essential information is contained in the relations among those entities. Such data must be modeled and analyzed as graphs, with attributes on both objects and relations encoding and differentiating their semantics. Traditional data mining algorithms were originally designed for analyzing discrete objects for which a set of features can be defined, and thus cannot be easily adapted to deal with graph data. This gave rise to the relational data mining field of research, of which graph pattern learning is a key sub-domain [11]. In this paper, we describe a model for learning graph patterns in a collaborative, distributed manner. Distributed pattern learning is challenging due to dependencies between the nodes and relations in the graph, and variability across graph instances. We present three algorithms that trade off the benefits of parallelization and data aggregation, compare their performance to centralized graph learning, and discuss the individual benefits and weaknesses of each model. The presented algorithms are designed for linear speedup in distributed computing environments, and learn graph patterns that are both closer to ground truth and provide higher detection rates than a centralized mining algorithm.
Innovations in Applied Research
Bio-inspired approach for intelligent unattended ground sensors
Nicolas Hueber, Pierre Raymond, Christophe Hennequin, et al.
Improving the surveillance capacity over wide zones requires a set of smart, battery-powered Unattended Ground Sensors capable of issuing an alarm to a decision-making center. Only high-level information has to be sent when a relevant suspicious situation occurs. In this paper we propose an innovative bio-inspired approach that mimics the human bi-modal vision mechanism and the parallel processing ability of the human brain. The designed prototype exploits two levels of analysis: a low-level panoramic motion analysis, the peripheral vision, and a high-level event-focused analysis, the foveal vision. By tracking moving objects and fusing multiple criteria (size, speed, trajectory, etc.), the peripheral vision module acts as a fast relevant-event detector. The foveal vision module focuses on the detected events to extract more detailed features (texture, color, shape, etc.) in order to improve the recognition efficiency. The implemented recognition core is able to acquire human knowledge and to classify in real time a huge amount of heterogeneous data thanks to its natively parallel hardware structure. This UGS prototype validates our system approach under laboratory tests. The peripheral analysis module demonstrates a low false alarm rate, while the foveal vision correctly focuses on the detected events. A parallel FPGA implementation of the recognition core succeeds in fulfilling the embedded application requirements. These results pave the way for future reconfigurable virtual field agents. By locally processing the data and sending only high-level information, their energy requirements and electromagnetic signature are optimized. Moreover, the embedded Artificial Intelligence core enables these bio-inspired systems to recognize and learn new significant events. By duplicating human expertise in potentially hazardous places, our miniature visual event detector will allow early warning and contribute to better human decision making.
Improved Situational Awareness
Subset selection of training data for machine learning: a situational awareness system case study
M. McKenzie, S. C. Wong
Recent advances in machine learning with big data sets have allowed for significant advances in the optimisation of classification and recognition systems. However, for applications such as situational awareness systems, the entirety of the available data dwarfs the amount permissible for a training set with tractable machine learning optimisation times. Furthermore, the performance of any optimised system is highly dependent on the training set correctly and completely representing the entire data space of scenarios. In this paper we present a technique to characterise the entire data space to ascertain the key factors for representation, and subsequently select a subset that statistically represents the correct mix of scenarios. We demonstrate the effectiveness of these characterisation and subset selection techniques by using a genetic algorithm to optimise the performance of a gunfire recognition system.
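One simple way to realize the subset selection step is stratified sampling: characterize the data by a scenario label, then sample each stratum proportionally so the subset preserves the mix of scenarios. A sketch (the labels and fraction below are hypothetical; the paper's characterization derives key factors rather than assuming a given label):

```python
import random

def stratified_subset(items, key, frac, seed=0):
    """Select a subset that preserves the scenario mix of the full data set.

    items: full data set; key(item): scenario/stratum label; frac: fraction
    to keep. Sampling within each stratum keeps the subset statistically
    representative of the whole data space."""
    rng = random.Random(seed)
    strata = {}
    for item in items:
        strata.setdefault(key(item), []).append(item)
    subset = []
    for members in strata.values():
        k = max(1, round(frac * len(members)))  # keep at least one per stratum
        subset.extend(rng.sample(members, k))
    return subset

# 80 gunfire recordings and 20 background recordings; a 10% subset keeps
# the same 4:1 ratio (8 gunfire, 2 background).
data = [("gunfire", i) for i in range(80)] + [("background", i) for i in range(20)]
subset = stratified_subset(data, key=lambda item: item[0], frac=0.1)
```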
Realistic computer network simulation for network intrusion detection dataset generation
The KDD-99 Cup dataset is dead. While it can continue to be used as a toy example, the age of this dataset makes it all but useless for intrusion detection research and data mining. Many of the attacks used within the dataset are obsolete and do not reflect the features important for intrusion detection in today's networks. Creating a new dataset encompassing a large cross-section of the attacks found on the Internet today could be useful, but would eventually fall to the same problem as the KDD-99 Cup: its usefulness would diminish after a period of time. To continue research into intrusion detection, the generation of new datasets needs to be as dynamic and as quick as the attacker. Simply examining existing network traffic and using domain experts such as intrusion analysts to label traffic is inefficient, expensive, and not scalable. The only viable methodology is simulation using technologies including virtualization, attack toolsets such as Metasploit and Armitage, and sophisticated emulation of threat and user behavior. Simulating actual user behavior and network intrusion events dynamically not only allows researchers to vary scenarios quickly, but enables online testing of intrusion detection mechanisms by interacting with data as it is generated. As new threat behaviors are identified, they can be added to the simulation to make quicker determinations as to the effectiveness of existing and ongoing network intrusion technology, methodology, and models.
Change detection in Arctic satellite imagery using clustering of sparse approximations (CoSA) over learned feature dictionaries
Daniela I. Moody, Cathy J. Wilson, Joel C. Rowland, et al.
Advanced pattern recognition and computer vision algorithms are of great interest for landscape characterization, change detection, and change monitoring in satellite imagery, in support of global climate change science and modeling. We present results from an ongoing effort to extend neuroscience-inspired models for feature extraction to the environmental sciences, and we demonstrate our work using Worldview-2 multispectral satellite imagery. We use a Hebbian learning rule to derive multispectral, multiresolution dictionaries directly from regional satellite normalized band difference index data. These feature dictionaries are used to build sparse scene representations, from which we automatically generate land cover labels via our CoSA algorithm: Clustering of Sparse Approximations. These data adaptive feature dictionaries use joint spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. Land cover labels are estimated in example Worldview-2 satellite images of Barrow, Alaska, taken at two different times, and are used to detect and discuss seasonal surface changes. Our results suggest that an approach that learns from both spectral and spatial features is promising for practical pattern recognition problems in high resolution satellite imagery.
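The pipeline the abstract describes, sparse approximation over a learned dictionary followed by clustering of the codes, can be sketched in miniature. The atoms, features, and centroids below are hypothetical stand-ins for the learned multispectral dictionaries and CoSA cluster centers:

```python
def sparse_code(feature, dictionary, k=2):
    """Greedy k-sparse approximation: keep the k dictionary atoms with the
    largest |inner product| with the feature vector (a crude matching pursuit)."""
    scores = [sum(f * a for f, a in zip(feature, atom)) for atom in dictionary]
    top = sorted(range(len(scores)), key=lambda i: abs(scores[i]), reverse=True)[:k]
    return [scores[i] if i in top else 0.0 for i in range(len(scores))]

def nearest_centroid(code, centroids):
    """Assign a land-cover label: index of the nearest cluster centroid in code space."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(range(len(centroids)), key=lambda i: d2(code, centroids[i]))

# Hypothetical 2-atom dictionary over a 2-band feature; label via 2 centroids.
code = sparse_code((0.9, 0.1), [(1.0, 0.0), (0.0, 1.0)], k=1)
label = nearest_centroid(code, [[1.0, 0.0], [0.0, 1.0]])
```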