Proceedings Volume 8758

Next-Generation Analyst

Barbara D. Broome, David L. Hall, James Llinas
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 14 June 2013
Contents: 5 Sessions, 22 Papers, 0 Presentations
Conference: SPIE Defense, Security, and Sensing 2013
Volume Number: 8758

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 8758
  • Big Data
  • Soft Data Processing, Value, and Trust
  • Analyst Tools and Interfaces
  • Poster Session
Front Matter: Volume 8758
Front Matter: Volume 8758
This PDF file contains the front matter associated with SPIE Proceedings Volume 8758 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Big Data
Adaptive context exploitation
Alan N. Steinberg, Christopher L. Bowman
This paper presents concepts and an implementation scheme to improve information exploitation processes and products by adaptive discovery and processing of contextual information. Context is used in data fusion – and in inferencing in general – to provide expectations and to constrain processing. It is also used to infer or refine desired information (“problem variables”) on the basis of other available information (“context variables”). Contextual exploitation becomes critical in several classes of inferencing problems in which traditional information sources do not provide sufficient resolution between entity states or when such states are poorly or incompletely modeled. An adaptive evidence-accrual inference method – adapted from developments in target recognition and scene understanding – is presented, whereby context variables are selected on the basis of (a) their utility in refining explicit problem variables, (b) the probability of evaluating these variables to within a given accuracy, given candidate system actions (data collection, mining or processing), and (c) the cost of such actions. The Joint Directors of Laboratories (JDL) Data Fusion Model, with its extension to dual Resource Management functions, has been adapted to accommodate adaptive information exploitation, including adaptive context exploitation. The interplay of Data Fusion and Resource Management (DF&RM) functionality in exploiting contextual information is illustrated in terms of the dual-node DF&RM architecture. An important advance is the integration of data mining methods for data search/discovery and for abductive model refinement.
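Criteria (a)-(c) above amount to ranking candidate system actions by expected net value. A minimal sketch of that ranking follows, assuming invented context variables, utilities, success probabilities, and costs; it is not the authors' inference method.

```python
# Hedged sketch: rank candidate context variables by
# utility x P(success | action) - action cost, per criteria (a)-(c).
# All variables and numbers below are hypothetical.
candidates = [
    # (context variable, utility for problem variables,
    #  P(evaluated to required accuracy | action), action cost)
    ("terrain_type",  0.8, 0.90, 0.30),
    ("local_traffic", 0.6, 0.50, 0.10),
    ("weather",       0.4, 0.95, 0.05),
]

def expected_net_value(utility, p_success, cost):
    """Expected net value of acting to evaluate one context variable."""
    return utility * p_success - cost

for name, u, p, c in sorted(candidates,
                            key=lambda x: expected_net_value(*x[1:]),
                            reverse=True):
    print(f"{name}: net value {expected_net_value(u, p, c):+.2f}")
```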
Concept of operations for knowledge discovery from Big Data across enterprise data warehouses
Sreenivas R. Sukumar, Mohammed M. Olama, Allen W. McNair, et al.
The success of data-driven business in government, science, and private industry is driving the need for seamless integration of intra- and inter-enterprise data sources to extract knowledge nuggets in the form of correlations, trends, patterns, and behaviors previously not discovered due to physical and logical separation of datasets. Today, as the volume, velocity, variety, and complexity of enterprise data keep increasing, next-generation analysts are facing several challenges in the knowledge extraction process. Towards addressing these challenges, data-driven organizations that rely on the success of their analysts have to make investment decisions for sustainable data/information systems and knowledge discovery. Options that organizations are considering include newer storage/analysis architectures, better analysis machines, redesigned analysis algorithms, collaborative knowledge management tools, and query builders, amongst many others. In this paper, we present a concept of operations for enabling knowledge discovery that data-driven organizations can leverage towards making their investment decisions. We base our recommendations on the experience gained from integrating multi-agency enterprise data warehouses at the Oak Ridge National Laboratory to design the foundation of future knowledge-nurturing data-system architectures.
GOOSE: semantic search on internet connected sensors
Klamer Schutte, Freek Bomhof, Gertjan Burghouts, et al.
More and more sensors are getting Internet connected. Examples are cameras on cell phones, CCTV cameras for traffic control, as well as dedicated security and defense sensor systems. Due to the steadily increasing data volume, human exploitation of all this sensor data is impossible for effective mission execution. Smart access to all sensor data acts as an enabler for questions such as “Is there a person behind this building?” or “Alert me when a vehicle approaches”. The GOOSE concept has the ambition to provide the capability to search semantically for any relevant information within “all” (including imaging) sensor streams in the entire Internet of sensors. This is similar to the capability provided by presently available Internet search engines, which enable the retrieval of information on “all” web pages on the Internet. In line with current Internet search engines, any indexing services shall be utilized cross-domain. The two main challenges for GOOSE are the semantic gap and scalability. The GOOSE architecture consists of five elements: (1) an online extraction of primitives on each sensor stream; (2) an indexing and search mechanism for these primitives; (3) an ontology-based semantic matching module; (4) a top-down hypothesis verification mechanism; and (5) a controlling man-machine interface. This paper reports on the initial GOOSE demonstrator, which consists of the MES multimedia analysis platform and the CORTEX action recognition module. It also provides an outlook into future GOOSE development.
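As a rough illustration of architecture elements (1)-(3), the sketch below indexes per-stream primitives and expands a query term through a toy ontology before matching; every sensor, primitive, and ontology entry is invented, and the real GOOSE modules are far richer.

```python
# Toy stand-in for GOOSE elements (1)-(3): primitives already extracted
# per sensor stream are indexed, and a query concept is expanded through
# a small ontology before matching. Data and ontology are invented.
ontology = {"vehicle": {"car", "truck"}, "person": {"pedestrian"}}

index = {
    "cam01": {"car", "pedestrian"},
    "cam02": {"truck"},
    "cam03": {"dog"},
}

def search(term):
    wanted = {term} | ontology.get(term, set())
    return [stream for stream, prims in index.items() if wanted & prims]

print(search("vehicle"))  # ['cam01', 'cam02']
```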
Soft Data Processing, Value, and Trust
Controlled English to facilitate human/machine analytical processing
Dave Braines, David Mott, Simon Laws, et al.
Controlled English (CE) is a human-readable information representation format that is implemented using a restricted subset of the English language, but which is unambiguous and directly accessible by simple machine processes. We have been researching the capabilities of CE in a number of contexts, and exploring the degree to which a flexible and more human-friendly information representation format could aid the intelligence analyst in a multi-agent collaborative operational environment; especially in cases where the agents are a mixture of other human users and machine processes aimed at assisting the human users. CE itself is built upon a formal logic basis, but allows users to easily specify models for a domain of interest in a human-friendly language. In our research we have been developing an experimental component known as the “CE Store” in which CE information can be quickly and flexibly processed and shared between human and machine agents. The CE Store environment contains a number of specialized machine agents for common processing tasks and also supports execution of logical inference rules that can be defined in the same CE language. This paper outlines the basic architecture of this approach, discusses some of the example machine agents that have been developed, and provides some typical examples of the CE language and the way in which it has been used to support complex analytical tasks on synthetic data sources. We highlight the fusion of human and machine processing supported through the use of the CE language and CE Store environment, and show this environment with examples of highly dynamic extensions to the model(s) and integration between different user-defined models in a collaborative setting.
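To give a flavor of how a restricted English subset can be directly machine-processed, here is a toy parser for a single CE-style sentence pattern; the real CE grammar and the CE Store's agents are far richer than one regular expression.

```python
import re

# Toy parser for one Controlled English-style pattern:
#   "there is a <concept> named '<name>'"
# This is illustrative only, not the actual CE grammar.
PATTERN = re.compile(r"there is a (\w+) named '([^']+)'")

facts = []
for sentence in ("there is a person named 'John'",
                 "there is a vehicle named 'truck 7'"):
    match = PATTERN.match(sentence)
    if match:
        facts.append(match.groups())  # (concept, name)

print(facts)  # [('person', 'John'), ('vehicle', 'truck 7')]
```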
MIPS: a service-based aid for intelligence analysis
David Braines, John Ibbotson, Graham White
The Management of Information Processing Services (MIPS) project has two main objectives: the notification to analysts of the arrival of relevant new information, and the automatic processing of the new information. Within these objectives a number of significant challenges were addressed. To achieve the first objective, the team had to demonstrate the capability for specific analysts to be “tipped off” in real time that textual reports and sensor data have been received that are relevant to their analytical tasks, including the possibility that such reports have been made available by other nations. In the case of the second objective, the team had to demonstrate the capability for the infrastructure to automatically initiate processing of input data as it arrives, consistent with satisfying the analytical goals of teams of analysts, in as efficient a manner as possible (including the case where data is made available by more than one nation). Using the Information Fabric middleware developed as part of the International Technology Alliance (ITA) research program, the team created a service-based information processing infrastructure to achieve the objectives and challenges set by the customer. The infrastructure allows existing software to be wrapped as a service and/or specially written services to be integrated with each other as well as with other ITA technologies such as the Controlled English (CE) Store or the Gaian Database. This paper will identify the difficulties in designing and implementing the MIPS infrastructure together with describing its architecture and illustrating its use with a worked example use case.
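At its simplest, the tip-off objective reduces to matching incoming reports against analysts' registered interests, as in the sketch below; the subscription profiles and report text are invented, and the Information Fabric's matching services are far more capable than keyword overlap.

```python
# Hedged sketch of the "tip-off" idea: notify every analyst whose
# interest profile overlaps a newly arrived report. Keyword overlap is
# a deliberate oversimplification of MIPS's actual relevance matching.
subscriptions = {"analyst1": {"convoy", "bridge"}, "analyst2": {"port"}}

def analysts_to_notify(report_text):
    words = set(report_text.lower().split())
    return [a for a, keywords in subscriptions.items() if keywords & words]

print(analysts_to_notify("Convoy sighted near the northern bridge"))
# ['analyst1']
```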
A decision support system for fusion of hard and soft sensor information based on probabilistic latent semantic analysis technique
Amir Shirkhodaie, Vinayak Elangovan, Amjad Alkilani, et al.
This paper presents an ongoing effort towards development of an intelligent Decision-Support System (iDSS) for fusion of information from multiple sources consisting of data from hard (physical sensors) and soft (textual) sources. Primarily, this paper defines a taxonomy of decision support systems for latent semantic data mining from heterogeneous data sources. A Probabilistic Latent Semantic Analysis (PLSA) approach is proposed for latent semantic concept search from heterogeneous data sources. An architectural model for generating semantic annotation of multi-modality sensors in a modified Transducer Markup Language (TML) is described. A method for TML message fusion is discussed for alignment and integration of spatiotemporally correlated and associated physical sensory observations. Lastly, experimental results which exploit fusion of soft/hard sensor sources with the support of iDSS are discussed.
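For readers unfamiliar with PLSA, the sketch below runs its standard EM updates on a toy document-term count matrix; it shows only the textbook algorithm, not the paper's iDSS, which applies PLSA across heterogeneous hard/soft sources.

```python
import numpy as np

# Textbook PLSA via EM on a toy count matrix n(d, w); Z latent concepts.
rng = np.random.default_rng(0)
n = np.array([[4, 2, 0], [0, 3, 5], [2, 2, 2]], dtype=float)
D, W, Z = n.shape[0], n.shape[1], 2

p_w_z = rng.random((Z, W)); p_w_z /= p_w_z.sum(axis=1, keepdims=True)
p_z_d = rng.random((D, Z)); p_z_d /= p_z_d.sum(axis=1, keepdims=True)

for _ in range(50):
    # E-step: posterior P(z | d, w), normalized over z
    joint = p_z_d[:, :, None] * p_w_z[None, :, :]          # (D, Z, W)
    p_z_dw = joint / joint.sum(axis=1, keepdims=True)
    # M-step: re-estimate P(w | z) and P(z | d) from expected counts
    ndw = n[:, None, :] * p_z_dw                           # (D, Z, W)
    p_w_z = ndw.sum(axis=0); p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = ndw.sum(axis=2); p_z_d /= p_z_d.sum(axis=1, keepdims=True)

print(np.round(p_z_d, 2))  # per-document mixture over latent concepts
```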
Reusing information for high-level fusion: characterizing bias and uncertainty in human-generated intelligence
Dustin Burke, Alan Carlin, Paul Picciano, et al.
To expedite the intelligence collection process, analysts reuse previously collected data. This poses the risk of analysis failure, because these data are biased in ways that the analyst may not know. Thus, these data may be incomplete, inconsistent, or incorrect, have structural gaps and limitations, or simply be too old to accurately represent the current state of the world. Incorporating human-generated intelligence within the high-level fusion process enables the integration of hard (physical sensors) and soft information (human observations) to extend the ability of algorithms to associate and merge disparate pieces of information for a more holistic situational awareness picture. However, in order for high-level fusion systems to manage the uncertainty in soft information, a process needs to be developed for characterizing the sources of error and bias specific to human-generated intelligence and assessing the quality of these data. This paper outlines an approach, Towards Integration of Data for unBiased Intelligence and Trust (TID-BIT), that implements a novel Hierarchical Bayesian Model for high-level situation modeling, allowing the analyst to accurately reuse existing data collected for different intelligence requirements. TID-BIT constructs situational, semantic knowledge graphs that link the information extracted from unstructured sources to intelligence requirements, and performs pattern matching over these attributed network graphs for integrating information. By quantifying the reliability and credibility of human sources, TID-BIT makes it possible to estimate and account for the uncertainty and bias that affect the high-level fusion process, resulting in improved situational awareness.
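One simple way to see how quantified source reliability can temper belief updates (all priors, likelihood ratios, and reports below are invented; this is not TID-BIT's hierarchical Bayesian model) is to discount each report's likelihood ratio toward 1 as its source's reliability drops:

```python
# Reliability-discounted Bayesian updating: an unreliable source's
# report barely moves the posterior; a reliable one moves it strongly.
def update(prior, supports, reliability, lr_true=4.0):
    lr_report = lr_true if supports else 1.0 / lr_true
    lr = reliability * lr_report + (1.0 - reliability)  # shrink toward 1
    odds = prior / (1.0 - prior) * lr
    return odds / (1.0 + odds)

p = 0.5  # prior belief in the hypothesis
for supports, reliability in [(True, 0.9), (True, 0.4), (False, 0.7)]:
    p = update(p, supports, reliability)
print(round(p, 2))
```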
Reasoning with uncertain information and trust
Murat Sensoy, Geeth de Mel, Achille Fokoue, et al.
A limitation of standard Description Logics (DLs) is their inability to reason with uncertain and vague knowledge. Although probabilistic and fuzzy extensions of DLs exist, which provide an explicit representation of uncertainty, they do not provide an explicit means for reasoning about second-order uncertainty. The Dempster-Shafer theory of evidence (DST) overcomes this weakness and provides means to fuse and reason about uncertain information. In this paper, we combine DL-Lite with DST to allow scalable reasoning over uncertain semantic knowledge bases. Furthermore, our formalism allows for the detection of conflicts between the fused information and domain constraints. Finally, we propose methods to resolve such conflicts through trust revision by exploiting evidence regarding the information sources. The effectiveness of the proposed approaches is shown through simulations under various settings.
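For reference, the DST fusion step the abstract relies on, Dempster's rule of combination, is compact enough to sketch directly; the frame of discernment and the mass assignments are invented for illustration.

```python
from itertools import product

# Dempster's rule of combination over a small frame of discernment.
# Mass functions map frozensets of hypotheses to belief mass.
def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to disjoint sets
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

m1 = {frozenset({"hostile"}): 0.7, frozenset({"hostile", "neutral"}): 0.3}
m2 = {frozenset({"neutral"}): 0.5, frozenset({"hostile", "neutral"}): 0.5}
print(combine(m1, m2))
```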
Analyst Tools and Interfaces
Crowded: a crowd-sourced perspective of events as they happen
Richard Brantingham, Aleem Hossain
‘Crowded’ is a web-based application developed by the Defence Science & Technology Laboratory (Dstl) that collates imagery of a particular location from a variety of media sources to provide an operator with real-time situational awareness. Emergency services and other relevant agencies have detected or become aware of an event - a riot or an explosion, for instance - and its location or text associated with it. The ubiquity of mobile devices allows people to collect and upload media of the incident to the Internet in real time. Crowded manages the interactions with online sources of media (Flickr, Instagram, YouTube, Twitter, and Transport for London traffic cameras) to retrieve imagery that is being uploaded at that point in time. In doing so, it aims to provide human operators with near-instantaneous ‘eyes-on’ from a variety of different perspectives. The first instantiation of Crowded was implemented as a series of integrated web services with the aim of rapidly understanding whether the approach was viable. In doing so, it demonstrated how non-traditional, open sources can be used to provide a richer current intelligence picture than can be obtained from classified sources alone. The development of Crowded also explored how open-source technology and cloud-based services can be used in the modern intelligence and security environment to provide a multi-agency Common Operating Picture to help achieve a co-ordinated response. The lessons learned in building the prototype are currently being used to design and develop a second version, and to identify options and priorities for future development.
Supporting tactical intelligence using collaborative environments and social networking
Arthur B. Wollocko, Michael P. Farry, Robert F. Stark
Modern military environments place an increased emphasis on the collection and analysis of intelligence at the tactical level. The deployment of analytical tools at the tactical level helps support the Warfighter’s need for rapid collection, analysis, and dissemination of intelligence. However, given the lack of experience and staffing at the tactical level, most of the available intelligence is not exploited. Tactical environments are staffed by a new generation of intelligence analysts who are well-versed in modern collaboration environments and social networking. An opportunity exists to enhance tactical intelligence analysis by exploiting these personnel strengths, but is dependent on appropriately designed information sharing technologies. Existing social information sharing technologies enable users to publish information quickly, but do not unite or organize information in a manner that effectively supports intelligence analysis. In this paper, we present an alternative approach to structuring and supporting tactical intelligence analysis that combines the benefits of existing concepts, and provide detail on a prototype system embodying that approach. Since this approach employs familiar collaboration support concepts from social media, it enables new-generation analysts to identify the decision-relevant data scattered among databases and the mental models of other personnel, increasing the timeliness of collaborative analysis. Also, the approach enables analysts to collaborate visually to associate heterogeneous and uncertain data within the intelligence analysis process, increasing the robustness of collaborative analyses. Utilizing this familiar dynamic collaboration environment, we hope to achieve a significant reduction of time and skill required to glean actionable intelligence in these challenging operational environments.
Using the living laboratory framework as a basis for understanding next-generation analyst work
Michael D. McNeese, Vincent Mancuso, Nathan McNeese, et al.
The preparation of next-generation analyst work requires alternative levels of understanding and new methodological departures from the way current work transpires. Current work practices typically do not provide a comprehensive approach that emphasizes the role of, and interplay between, (a) cognition, (b) emergent activities in a shared situated context, and (c) collaborative teamwork. In turn, effective and efficient problem solving fails to take place, and practice is often composed of piecemeal, techno-centric tools that isolate analysts by providing rigid, limited levels of understanding of situation awareness. This, coupled with the fact that many analyst activities are classified, produces a challenging situation for researching such phenomena and for designing and evaluating systems to support analyst cognition and teamwork. Through our work with cyber, image, and intelligence analysts we have realized that more is required of researchers studying human-centered designs to provide for analysts’ needs in a timely fashion. This paper identifies and describes how the Living Laboratory Framework can be utilized as a means to develop a comprehensive, human-centric, and problem-focused approach to next-generation analyst work, design, and training. We explain how the framework is utilized for specific cases in various applied settings (e.g., crisis management analysis, image analysis, and cyber analysis) to demonstrate its value and power in addressing an area of utmost importance to our national security. Attributes of analyst work settings are delineated to suggest potential design affordances that could help improve cognitive activities and awareness. Finally, the paper puts forth a research agenda for the use of the framework for future work that will move the analyst profession in a viable manner to address the concerns identified.
Exploring client logs towards characterizing the user behavior on web applications
Leandro Guarino de Vasconcelos, Rafael Duarte Coelho dos Santos, Laercio Augusto Baldochi Jr.
Analysis of user interaction with computer systems can be used for several purposes, the most common being analysis of the effectiveness of the interfaces used for interaction (in order to adapt or enhance their usefulness) and analysis of the intention and behavior of users when interacting with these systems. For web applications, the analysis of user interaction is often done using the web server logs collected for every document sent to the user in response to his/her request. In order to capture more detailed data on users' interaction with sites, one can collect the actions the user performs on the client side. An effective approach to this is the USABILICS system, which also allows the definition and analysis of tasks in web applications. The fine granularity of logs collected by USABILICS allows a much more detailed log of users' interaction with a web application. These logs can be converted into graphs where vertices are users' actions and edges are paths made by the user to accomplish a task. Graph analysis and visualization tools and techniques allow the analysis of actions taken in relation to an expected action path, or the characterization of common (and uncommon) paths in the interaction with the application. This paper describes how to estimate users' behavior and characterize their intentions during interaction with a web application, presents analysis and visualization tools for those graphs, and shows some practical results with an educational site, commenting on the results and implications of the possibilities of using these techniques.
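The log-to-graph conversion is easy to sketch: consecutive logged actions become weighted edges, and each user's path can be compared with the expected one. The page names below are invented, and USABILICS logs carry much finer detail.

```python
from collections import Counter

# Turn per-user client-side action logs into weighted edges, then flag
# users whose path deviates from the expected task path.
logs = {
    "user1": ["home", "search", "lesson", "quiz"],
    "user2": ["home", "lesson", "quiz"],
    "user3": ["home", "search", "search", "lesson"],
}
expected = ["home", "search", "lesson", "quiz"]

edges = Counter()
for actions in logs.values():
    edges.update(zip(actions, actions[1:]))  # consecutive action pairs

for (src, dst), weight in edges.most_common():
    print(f"{src} -> {dst}: {weight}")

print("off expected path:", [u for u, a in logs.items() if a != expected])
```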
Exploring the dynamics of collective cognition using a computational model of cognitive dissonance
Paul R. Smart, Katia Sycara, Darren P. Richardson
The socially-distributed nature of cognitive processing in a variety of organizational settings means that there is increasing scientific interest in the factors that affect collective cognition. In military coalitions, for example, there is a need to understand how factors such as communication network topology, trust, cultural differences, and the potential for miscommunication affect the ability of distributed teams to generate high-quality plans, to formulate effective decisions, and to develop shared situation awareness. The current paper presents a computational model and associated simulation capability for performing in silico experimental analyses of collective sensemaking. This model can be used in combination with the results of human experimental studies in order to improve our understanding of the factors that influence collective sensemaking processes.
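As a hint of what such in silico experiments can look like, here is a bounded-confidence opinion dynamics sketch (a Deffuant-style toy, not the authors' dissonance model): paired agents move toward each other's belief only when their disagreement is tolerable.

```python
import random

# Bounded-confidence toy: beliefs in [0, 1]; a random pair converges
# toward its midpoint only if the gap is below a tolerance threshold.
random.seed(3)
beliefs = [random.random() for _ in range(20)]
TOLERANCE, RATE = 0.3, 0.5

for _ in range(200):
    i, j = random.sample(range(len(beliefs)), 2)
    if abs(beliefs[i] - beliefs[j]) < TOLERANCE:
        mid = (beliefs[i] + beliefs[j]) / 2
        beliefs[i] += RATE * (mid - beliefs[i])
        beliefs[j] += RATE * (mid - beliefs[j])

print(sorted(round(b, 2) for b in beliefs))  # clusters of shared belief
```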
CE-SAM: a conversational interface for ISR mission support
There is considerable interest in natural language conversational interfaces. These allow for complex user interactions with systems, such as fulfilling information requirements in dynamic environments, without requiring extensive training or a technical background (e.g. in formal query languages or schemas). To leverage the advantages of conversational interactions, we propose CE-SAM (Controlled English Sensor Assignment to Missions), a system that guides users through refining and satisfying their information needs in the context of Intelligence, Surveillance, and Reconnaissance (ISR) operations. The rapidly-increasing availability of sensing assets and other information sources poses substantial challenges to effective ISR resource management. In a coalition context, the problem is even more complex, because assets may be “owned” by different partners. We show how CE-SAM allows a user to refine and relate their ISR information needs to pre-existing concepts in an ISR knowledge base, via conversational interaction implemented on a tablet device. The knowledge base is represented using Controlled English (CE) - a form of controlled natural language that is both human-readable and machine-processable (i.e. can be used to implement automated reasoning). Users interact with the CE-SAM conversational interface using natural language, which the system converts to CE for feeding back to the user for confirmation (e.g. to reduce misunderstanding). We show that this process not only allows users to access the assets that can support their mission needs, but also assists them in extending the CE knowledge base with new concepts.
Poster Session
Characterization of gain-aware routing in delay tolerant networks
Faezeh Hajiaghajani, Yogesh Piolet Thulasidharan, Mahmoud Taghizadeh, et al.
The majority of existing Delay Tolerant Network (DTN) routing protocols attempt to minimize one of the popular DTN routing indices, i.e., message delay, forwarding count, or storage. However, for many DTN applications, such as distributing commercial content, targeting the best performance for one index while compromising the others is insufficient. A more practical solution is to strike a balance between several of these indices. The Gain Dissemination Protocol (GDP) is one of the protocols that targets this aim by introducing a gain concept: it seeks to maximize the gain of delivery by balancing the value achieved by delivering the packet to the destination against the forwarding cost involved. In this paper, we focus on characterizing the GDP protocol in the scope of mobility. We also propose an upper bound for gain in the multicast routing problem, i.e., the Union of Unicast Benchmark (UUB), and compare the performance of a few DTN routing protocols against this bound. This eventually reveals the performance scope of a potential gain-aware DTN dissemination protocol.
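At its core, a gain-aware forwarding decision weighs expected delivery value against forwarding cost, as in this sketch; the probabilities, value, and cost are invented, and GDP's actual decision logic is not reproduced here.

```python
# Hedged sketch of the gain idea: hand over a copy only when the
# expected delivery value exceeds the forwarding cost.
def should_forward(p_delivery, delivery_value, forwarding_cost):
    return p_delivery * delivery_value - forwarding_cost > 0.0

VALUE, COST = 10.0, 2.0  # illustrative gain parameters
for node, p in [("nodeA", 0.60), ("nodeB", 0.10), ("nodeC", 0.35)]:
    print(node, "forward" if should_forward(p, VALUE, COST) else "hold")
```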
Beyond visualization of big data: a multi-stage data exploration approach using visualization, sonification, and storification
Jeffrey Rimland, Mark Ballora, Wade Shumaker
As the sheer volume of data grows exponentially, it becomes increasingly difficult for existing visualization techniques to keep pace. The sonification field attempts to address this issue by enlisting our auditory senses to detect anomalies or complex events that are difficult to detect via visualization alone. Storification attempts to improve analyst understanding by converting data streams into organized narratives describing the data at a higher level of abstraction than the input stream that they are derived from. While these techniques hold a great deal of promise, they also each have a unique set of challenges that must be overcome. Sonification techniques must represent a broad variety of distributed heterogeneous data and present it to the analyst/listener in a manner that doesn’t require extended listening – visual “snapshots” are useful, but auditory sounds only exist over time. Storification still faces many human-computer interface (HCI) challenges as well as technical hurdles related to automatically generating a logical narrative from lower-level data streams. This paper proposes a novel approach that utilizes a service-oriented architecture (SOA)-based hybrid visualization/sonification/storification framework to enable distributed human-in-the-loop processing of data in a manner that makes optimized usage of both visual and auditory processing pathways while also leveraging the value of narrative explication of data streams. It addresses the benefits and shortcomings of each processing modality and discusses the information infrastructure and data representation concerns required by their utilization in a distributed environment. We present a generalizable approach with a broad range of applications including cyber security, medical informatics, facilitation of energy savings in “smart” buildings, and detection of natural and man-made disasters.
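A minimal parameter-mapping sonification, the simplest of the techniques the paper builds on, can be sketched as follows: scale each data value onto a pitch range and synthesize a short tone per sample (audio output is omitted; only the mapping is shown).

```python
import math

# Map a data stream onto a pitch range and synthesize sine tones.
def to_frequencies(values, f_lo=220.0, f_hi=880.0):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on flat data
    return [f_lo + (v - lo) / span * (f_hi - f_lo) for v in values]

def sine_samples(freq, seconds=0.2, rate=8000):
    n = int(seconds * rate)
    return [math.sin(2 * math.pi * freq * t / rate) for t in range(n)]

stream = [3, 7, 2, 9, 4]
freqs = to_frequencies(stream)
tones = [sine_samples(f) for f in freqs]
print([round(f, 1) for f in freqs], "-", len(tones[0]), "samples/tone")
```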
Visualization and characterization of users in a citizen science project
Alessandra Marli M. Morais, Jordan Raddick, Rafael D. Coelho dos Santos
Recent technological advances have allowed the creation and use of Internet-based systems where many users can collaborate, gathering and sharing information for specific or general purposes: social networks, e-commerce review systems, collaborative knowledge systems, etc. Since most of the data collected in these systems is user-generated, understanding the motivations and general behavior of users is a very important issue. Of particular interest are citizen science projects, where users without scientific training are asked to collaborate labeling and classifying information (either automatically, by giving away idle computer time, or manually, by actually seeing data and providing information about it). Understanding the behavior of users of these types of data collection systems may help increase the involvement of the users, categorize users according to different parameters, facilitate their collaboration with the systems, design better user interfaces, and allow better planning and deployment of similar projects and systems. The behavior of these users can be estimated through analysis of their collaboration track: records of which user did what and when can be easily and unobtrusively collected in several different ways, the simplest being a log of activities. In this paper we present some results on the visualization and characterization of almost 150,000 users with more than 80,000,000 collaborations with a citizen science project - Galaxy Zoo I, which asked users to classify images of galaxies. Basic visualization techniques are not applicable due to the number of users, so techniques to characterize users' behavior based on feature extraction and clustering are used.
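The feature-extraction-plus-clustering step can be sketched as below: each user is reduced to a small feature vector and a plain k-means loop groups similar behaviors. The feature values here are random stand-ins; the actual Galaxy Zoo features differ.

```python
import numpy as np

# K-means over per-user behavioral feature vectors (random stand-ins).
rng = np.random.default_rng(1)
features = rng.random((200, 3))   # 200 users x 3 behavioral features
k = 3
centers = features[rng.choice(len(features), k, replace=False)]

for _ in range(20):
    dist = np.linalg.norm(features[:, None, :] - centers[None], axis=2)
    labels = dist.argmin(axis=1)
    centers = np.array([features[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j]
                        for j in range(k)])

print(np.bincount(labels, minlength=k))  # users per behavioral cluster
```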
Representation of potential information gain to measure the price of anarchy on ISR activities
Hector J. Ortiz-Peña, Michael Hirsch, Mark Karwan, et al.
One of the main technical challenges facing intelligence analysts today is effectively determining information gaps from huge amounts of collected data. Moreover, getting the right information to/from the right person (e.g., analyst, warfighter on the edge) at the right time in a distributed environment has been elusive for our military forces. Synchronization of Intelligence, Surveillance, and Reconnaissance (ISR) activities to maximize the efficient utilization of limited resources (both in quantity and capabilities) has become critically important to increase the accuracy and timeliness of overall information gain. Given this reality, we are interested in quantifying the degradation of solution quality (i.e., information gain) as a centralized system synchronizing ISR activities (from information gap identification to information collection and dissemination) moves to a more decentralized framework. This evaluation extends the concept of the price of anarchy, a measure of the inefficiency of a system when agents maximize decisions without coordination, by considering different levels of decentralization. Our initial research representing the potential information gain in geospatially and temporally discretized spaces is presented. This potential information gain map can represent a consolidation of Intelligence Preparation of the Battlefield products as input to automated ISR synchronization tools. Using the coordination of unmanned vehicles (UxVs) as an example, we developed a mathematical programming model for multi-perspective optimization in which each UxV develops its own flight plan to support mission objectives based only on its perspective of the environment (i.e., its potential information gain map). Information is only exchanged when UxVs are part of the same communication network.
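To make the discretized potential-information-gain map concrete, here is a toy grid with a single vehicle planning greedily over it; this is a stand-in for, and far simpler than, the paper's multi-perspective mathematical program.

```python
import numpy as np

# Greedy single-vehicle walk over a 5x5 potential-information-gain map.
rng = np.random.default_rng(2)
gain = rng.random((5, 5))          # gain per geospatial cell (invented)
pos, path, total = (0, 0), [(0, 0)], 0.0

for _ in range(6):
    r, c = pos
    moves = [(r + dr, c + dc)
             for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if 0 <= r + dr < 5 and 0 <= c + dc < 5]
    pos = max(moves, key=lambda m: gain[m])   # move to richest neighbor
    total += gain[pos]
    gain[pos] = 0.0                # gain at this cell is now collected
    path.append(pos)

print(path, round(total, 2))
```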
Conserving analyst attention units: use of multi-agent software and CEP methods to assist information analysis
Jeffrey Rimland, Michael McNeese, David Hall
Although the capability of computer-based artificial intelligence techniques for decision-making and situational awareness has seen notable improvement over the last several decades, the current state of the art still falls short of creating computer systems capable of autonomously making complex decisions and judgments in many domains where data is nuanced and accountability is high. However, there is a great deal of potential for hybrid systems in which software applications augment human capabilities by focusing the analyst’s attention on relevant information elements based on both a priori knowledge of the analyst’s goals and the processing/correlation of a series of data streams too numerous and heterogeneous for the analyst to digest without assistance. Researchers at Penn State University are exploring ways in which an information framework influenced by Klein’s Recognition-Primed Decision (RPD) model, Endsley’s model of situational awareness, and the Joint Directors of Laboratories (JDL) data fusion process model can be implemented through a novel combination of Complex Event Processing (CEP) and Multi-Agent Software (MAS). Though originally designed for stock market and financial applications, the high-performance, data-driven nature of CEP techniques provides a natural complement to the proven capabilities of MAS for modeling naturalistic decision-making, performing process adjudication, and optimizing networked processing and cognition via the use of “mobile agents.” This paper addresses the challenges and opportunities of such a framework for augmenting human observational capability as well as enabling the ability to perform collaborative context-aware reasoning in both human teams and hybrid human/software agent teams.
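The CEP side of such a framework can be suggested in a few lines: watch an event stream for a temporal pattern and surface it for analyst attention only when it completes. The event schema, window, and threshold below are all invented.

```python
from collections import deque

# Tiny CEP sketch: flag moments when >= 3 "sensor_alert" events land
# inside a sliding 10-second window.
def detect(stream, window_s=10.0, threshold=3):
    recent = deque()
    for t, kind in stream:
        if kind == "sensor_alert":
            recent.append(t)
            while recent and t - recent[0] > window_s:
                recent.popleft()
            if len(recent) >= threshold:
                yield t

events = [(0.0, "heartbeat"), (1.0, "sensor_alert"),
          (4.0, "sensor_alert"), (8.0, "sensor_alert"),
          (30.0, "sensor_alert")]
print(list(detect(events)))  # [8.0]
```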
Participatory telerobotics
Alexander D. Wissner-Gross, Timothy M. Sullivan
We present a novel “participatory telerobotics” system that generalizes the existing concept of participatory sensing to include real-time teleoperation and telepresence by treating humans with mobile devices as ad-hoc telerobots. In our approach, operators or analysts first choose a desired location for remote surveillance or activity from a live geographic map and are then automatically connected via a coordination server to the nearest available trusted human. That human’s device is then activated and begins recording and streaming back to the operator a live audiovisual feed for telepresence, while allowing the operator in turn to request complex teleoperative motions or actions from the human. Supported action requests currently include walking, running, leaning, and turning, all with controllable magnitudes and directions. Compliance with requests is automatically measured and scored in real time by fusing information received from the device’s onboard sensors, including its accelerometers, gyroscope, magnetometer, GPS receiver, and cameras. Streams of action requests are visually presented by each device to its human in the form of an augmented reality game that rewards prompt physical compliance while remaining tolerant of network latency. Because of its ability to interactively elicit physical knowledge and operations through ad-hoc collaboration, we anticipate that our participatory telerobotics system will have immediate applications in the intelligence, retail, healthcare, security, and travel industries.
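Compliance scoring of the kind described, comparing a requested motion against motion estimated from device sensors, might look like the toy function below; the error model and every number are invented, not the authors' fusion method.

```python
# Toy compliance score in [0, 1]: penalize heading and speed error
# between the requested action and the sensor-estimated motion.
def compliance_score(req_heading, req_speed, est_heading, est_speed):
    heading_err = abs((req_heading - est_heading + 180) % 360 - 180) / 180
    speed_err = min(abs(req_speed - est_speed) / max(req_speed, 1e-6), 1.0)
    return 1.0 - 0.5 * (heading_err + speed_err)

# Requested: walk at heading 90 deg, 1.4 m/s; estimated: 100 deg, 1.2 m/s
print(round(compliance_score(90, 1.4, 100, 1.2), 2))  # ~0.9
```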
Multimodal scenario analysis and visualization tool
Erin Ontiveros, Rolando Raqueno, Andrew Scott, et al.
Rochester Institute of Technology has developed a prototype environment around a fictional scenario describing a specific intelligence question. This environment ingests data across a wide range of modalities, allows the analyst to interact with the data and perform analysis within the environment. This in addition to the ability to visualize the scenario according to date and time of the data capture in order to make predictions and understand observations as a function of time. We were able to characterize the power output capacity of a power plant, which we assumed to be correlated to activity at the neighboring facility.