- Front Matter: Volume 9913
- Project Overviews and Progress I
- Telescope Control I
- Software Quality and Testing
- Data Management and Archives I
- Cyberinfrastructure
- Instrumentation Control
- UI/Web Technologies
- Project Overviews and Progress II
- Data Processing and Pipelines I
- Project Management
- Data Management and Archives II
- Telescope Control II
- Software Engineering
- Data Processing and Pipelines II
- Poster Session: Cyberinfrastructure, High-performance and Parallel Computing, Big Data
- Poster Session: Observatory, Telescope and Instrumentation Control
- Poster Session: Project Overviews and Progress Reports
- Poster Session: Software Engineering, Design, and Implementation
Front Matter: Volume 9913
This PDF file contains the front matter associated with SPIE Proceedings Volume 9913, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Project Overviews and Progress I
SKA Telescope Manager (TM): status and architecture overview
The SKA radio telescope project is building two telescopes: SKA-Low in Australia and SKA-Mid in South Africa. The Telescope Manager is responsible for the observations lifecycle and for monitoring and control of each instrument, and is being developed by an international consortium. The project is currently in the design phase, with the Preliminary Design Review having been successfully completed, along with re-baselining to match project scope to available budget. This report presents the status of the Telescope Manager work, the key architectural challenges, and our approach to addressing them.
The software architecture to control the Cherenkov Telescope Array
I. Oya,
M. Füßling,
P. Oliveira Antonino,
et al.
The Cherenkov Telescope Array (CTA) project is an initiative to build two large arrays of Cherenkov gamma-ray telescopes. CTA will be deployed as two installations, one in the northern and the other in the southern hemisphere, containing dozens of telescopes of different sizes. CTA is a big step forward in the field of ground-based gamma-ray astronomy, not only because of the expected scientific return, but also due to the order-of-magnitude larger scale of the instrument to be controlled. The performance requirements associated with such a large and distributed astronomical installation require a thoughtful analysis to determine the best software solutions. The array control and data acquisition (ACTL) work-package within the CTA initiative will deliver the software to control and acquire the data from the CTA instrumentation. In this contribution we present the current status of the formal ACTL system decomposition into software building blocks and the relationships among them. The system is modelled via the Systems Modelling Language (SysML) formalism. To cope with the complexity of the system, this architecture model is sub-divided into different perspectives. The relationships with the stakeholders and external systems are used to create the first perspective, the context of the ACTL software system. Use cases are employed to describe the interaction of those external elements with the ACTL system and are traced to a hierarchy of functionalities (abstract system functions) describing the internal structure of the ACTL system. These functions are then traced to fully specified logical elements (software components), whose deployment as technical elements is also described. This modelling approach allows us to decompose the ACTL software into the elements to be created and the flow of information within the system, providing us with a clear way to identify sub-system interdependencies. This architectural approach allows us to build the ACTL system model and trace requirements to deliverables (source code, documentation, etc.), and permits the implementation of a flexible use-case-driven software development approach thanks to the traceability from use cases to the logical software elements. The ALMA Common Software (ACS) container/component framework, used for the control of the Atacama Large Millimeter/submillimeter Array (ALMA), is the basis for the ACTL software and as such is considered an integral part of the software architecture.
The transition from construction to operations on the ALMA control software
The Atacama Large Millimeter/submillimeter Array (ALMA) is a set of 66 millimeter-wave antennas in the Andes in northern Chile. All antennas are connected and operate as an interferometer, making ALMA the most powerful millimeter telescope in the world. In 2013 ALMA formally marked the end of construction and the beginning of operations. This paper focuses on the impact of this transition from construction to operations on the ALMA control software.
Telescope Control I
Improving the pointing and tracking performance of the Keck telescopes
Pointing and tracking performance is one of the key metrics that characterize a telescope's overall efficiency. The pointing performance of the Keck telescopes, which use rotary friction encoders to provide position feedback to the control system, has been surpassed by newer large telescopes with more precise encoder systems. While poor tracking can be compensated for by guiding, poor blind-pointing performance can lead to loss of observing time. In this paper we present a history of the efforts to reduce the impact of poor pointing, as well as the improvements achieved after the installation of new tape encoders. We will discuss the calibration and testing methods and the tools for monitoring and maintaining the desired pointing performance. A comparative analysis of the pointing performance before and after the telescope control system upgrade will also be presented.
The active surface control system for the Tian Ma Telescope
The Tian Ma Telescope (TM) is the largest fully steerable radio telescope in Asia. It has a primary reflector 65 m in diameter with a shaped Cassegrain configuration. The primary reflector of the TM is an active surface with 1104 actuators for the 1008 surface panels. The panels of the telescope are divided into 18 rings and 24 fan sections; each section includes three sub-sections. The active surface system uses a TCP/IP Ethernet network and an RS-485 bus as its primary modes of communication. The control software runs on Windows and adopts object-oriented technology. Photogrammetry and phase-coherent holography have been used to set the surface to about 0.3 mm at the rigging angle. The FEM model is currently being tested. Out-of-focus holography and other techniques will be used to correct the dynamic surface deformation.
Computer-aided star pattern recognition with astrometry.net: in-flight support of telescope operations on SOFIA
Karsten Schindler,
Dustin Lang,
Liz Moore,
et al.
SOFIA is an airborne observatory, operating a gyroscopically stabilized telescope with an effective aperture of 2.5 m on board a modified Boeing 747SP. Its primary objective is to conduct observations at mid- to far-infrared wavelengths. When SOFIA opens its door to the night sky, the initial telescope pointing is estimated from the aircraft's position and heading as well as the telescope's attitude relative to the aircraft. This initial pointing estimate needs to be corrected using stars that are manually identified in tracking camera images; telescope pointing also needs to be verified and refined at the beginning of each flight leg. We report on the implementation of the astrometry.net package on the telescope operator workstations on board SOFIA. This package provides a very robust, reliable and fast algorithm for blind astrometric image calibration. Using images from SOFIA's Wide Field Imager, we are able to display almost instant, continuous feedback of the calculated right ascension, declination and field rotation in the GUI for the telescope operator. The computer-aided recognition of star patterns will support telescope pointing calibrations in the future, further increasing the efficiency of the observatory. We also discuss other current and future use cases of the astrometry.net package in the SOFIA project and at the German SOFIA Institute (DSI).
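As a minimal sketch of the kind of blind calibration described here (not the SOFIA implementation), an image can be solved with astrometry.net's solve-field command-line tool and the resulting WCS read back; the file name below is a placeholder.

```python
# Illustrative sketch: blind-solve a tracking-camera frame with astrometry.net's
# solve-field CLI and read the pointing solution back. "wfi_frame.fits" is a
# hypothetical file name, not part of the SOFIA system.
import subprocess
from astropy.io import fits
from astropy.wcs import WCS

subprocess.run(
    ["solve-field", "--overwrite", "--no-plots", "wfi_frame.fits"],
    check=True,
)

# solve-field writes a copy of the image with a WCS solution (".new" suffix).
with fits.open("wfi_frame.new") as hdul:
    wcs = WCS(hdul[0].header)
    ra, dec = wcs.wcs.crval  # approximate field centre RA/Dec in degrees
    print(f"RA={ra:.4f} deg, Dec={dec:.4f} deg")
```

Field rotation could similarly be derived from the orientation of the solved WCS, which is the quantity fed back to the telescope operator GUI in the system described above.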
Control and monitoring software for the Greenland Telescope
The Greenland Telescope (GLT) is a 12m diameter antenna that is being developed from the ALMA North America prototype antenna, for VLBI observations and single-dish science approaching THz, at the Summit station in Greenland. The GLT is a collaboration between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics. We describe the control and monitoring software that is being developed for GLT. The present version of the software is ready for the initial tests of the antenna at Thule, including optical and radio pointing calibration, holography, and VLBI observations at 230 GHz.
LSST control software component design
Construction of the Large Synoptic Survey Telescope system involves several different organizations, a situation that poses many challenges at the time of the software integration of the components. To ensure commonality for the purposes of usability, maintainability, and robustness, the LSST software teams have agreed to the following for system software components: a summary state machine, a manner of managing settings, a flexible solution to specify controller/controllee relationships reliably as needed, and a paradigm for responding to and communicating alarms. This paper describes these agreed solutions and the factors that motivated them.
Software Quality and Testing
An automated qualification framework for the MeerKAT CAM (Control-And-Monitoring)
This paper introduces and discusses the design of an Automated Qualification Framework (AQF) that was developed to automate as much as possible of the formal qualification testing of the Control And Monitoring (CAM) subsystem of the 64-dish MeerKAT radio telescope currently under construction in the Karoo region of South Africa. The AQF allows each Integrated CAM Test to reference the MeerKAT CAM requirement and associated verification requirement it covers, and automatically produces the Qualification Test Procedure and Qualification Test Report from the test steps and evaluation steps annotated in the Integrated CAM Tests. The MeerKAT System Engineers are extremely happy with the AQF results, and even more so with the approach and process it enforces.
Rules of thumb to increase the software quality through testing
M. Buttu,
M. Bartolini,
C. Migoni,
et al.
Software maintenance typically accounts for 40-80% of the overall project costs, and this considerable variability mostly depends on the software's internal quality: the more the software is designed and implemented to constantly welcome new changes, the lower the maintenance costs will be. Internal quality is typically enforced through testing, which in turn also affects the development and maintenance costs. This is the reason why testing methodologies have become a major concern for any company that builds - or is involved in building - software. Although there is no testing approach that suits all contexts, we infer some general guidelines learned during the development of the Italian Single-dish COntrol System (DISCOS), a project aimed at producing the control software for the three INAF radio telescopes (the Medicina and Noto dishes, and the newly built SRT). These guidelines concern both the development and the maintenance phases, and their ultimate goal is to maximize the DISCOS software quality through a Behavior-Driven Development (BDD) workflow alongside a continuous delivery pipeline. We consider different topics and patterns: the proper apportionment of tests (from end-to-end to low-level tests), the choice between hardware simulators and mocks, why and how to apply TDD and dependency injection to increase test coverage, the emerging technologies available for test isolation, bug fixing, how to protect the system from changes in external resources (firmware updates, hardware substitutions, etc.) and, finally, how to accomplish BDD starting from functional tests and going through integration and unit tests. We discuss the pros and cons of each solution and point out the motivations for our choices, either as general rules or narrowed to the context of the DISCOS project.
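To illustrate the dependency-injection-plus-mock pattern mentioned above (an illustrative sketch, not DISCOS code; class and method names are hypothetical), a unit test can substitute the hardware link with a mock, while integration tests could inject a full simulator instead.

```python
# Illustrative sketch of dependency injection for testability: the servo link
# is passed in, so a unit test can supply a mock in place of real hardware.
from unittest import mock


class AntennaPointing:
    def __init__(self, servo_link):
        # The hardware interface is injected rather than created internally.
        self.servo_link = servo_link

    def slew_to(self, azimuth, elevation):
        self.servo_link.send(f"SLEW {azimuth:.3f} {elevation:.3f}")
        return self.servo_link.read_status() == "TRACKING"


def test_slew_reports_tracking():
    servo = mock.Mock()
    servo.read_status.return_value = "TRACKING"
    pointing = AntennaPointing(servo)

    assert pointing.slew_to(180.0, 45.0)
    servo.send.assert_called_once_with("SLEW 180.000 45.000")
```

The same constructor could accept a hardware simulator object for integration tests, which is the trade-off between simulators and mocks discussed in the abstract.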
Behavior driven testing in ALMA telescope calibration software
The ALMA software development cycle includes well-defined testing stages that involve developers, testers and scientists. We adapted Behavior Driven Development (BDD) to the testing activities applied to the Telescope Calibration (TELCAL) software. BDD is an agile technique that encourages communication between roles by defining test cases in natural language to specify features and scenarios, which allows participants to share a common language and provides a high-level set of automated tests. This work describes how we implemented and maintain BDD testing for TELCAL, the infrastructure needed to support it, and proposals to expand this technique to other subsystems.
The evolution of the simulation environment in the ALMA Observatory
The Atacama Large Millimeter/submillimeter Array (ALMA) entered its operations phase in 2013. This transition changed the priorities within the observatory: most of the available time is now dedicated to science observations at the expense of technical time. It was therefore planned to design and implement a new simulation environment, which must be comparable to - or at least representative of - the production environment. Concepts of model-in-the-loop and hardware-in-the-loop were explored. In this paper we review experiences gained and lessons learnt during the design and implementation of the new simulation environment.
Modernized build and test infrastructure for control software at ESO: highly flexible building, testing, and automatic quality practices for telescope control software
The paper describes the introduction of a new automated build and test infrastructure, based on the open-source software Jenkins, into the ESO Very Large Telescope control software to replace the preexisting in-house solution. A brief introduction to software quality practices is given, followed by a description of the previous solution, its limitations, and new upcoming requirements. The modifications required to adopt the new system are described, along with how these were applied to the current software and the results obtained. An overview of how the new system may be used in future projects is also presented.
Data Management and Archives I
High-contrast imaging in the cloud with klipReduce and Findr
Astronomical data sets are growing ever larger, and the area of high contrast imaging of exoplanets is no exception. With the advent of fast, low-noise detectors operating at 10 to 1000 Hz, huge numbers of images can be taken during a single hours-long observation. High frame rates offer several advantages, such as improved registration, frame selection, and improved speckle calibration. However, advanced image processing algorithms are computationally challenging to apply. Here we describe a parallelized, cloud-based data reduction system developed for the Magellan Adaptive Optics VisAO camera, which is capable of rapidly exploring tens of thousands of parameter sets affecting the Karhunen-Loève image processing (KLIP) algorithm to produce high-quality direct images of exoplanets. We demonstrate these capabilities with a visible wavelength high contrast data set of a hydrogen-accreting brown dwarf companion.
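The general pattern of exploring many parameter sets in parallel can be sketched as follows (a hedged illustration only; the function and parameter names are hypothetical and this is not the actual Findr/klipReduce interface).

```python
# Hedged sketch of parallel parameter-space exploration: expand a grid of
# KLIP-style parameter sets and farm them out to worker processes.
from itertools import product
from multiprocessing import Pool


def run_klip_reduction(params):
    """Placeholder for invoking the real reduction executable on one
    parameter set (e.g. via subprocess in a production system)."""
    n_modes, annulus_width, min_rotation = params
    return {"params": params, "contrast_metric": 0.0}


if __name__ == "__main__":
    grid = list(product(
        [5, 10, 20, 50],     # number of KL modes retained
        [5, 10, 15],         # annulus width in pixels
        [1.0, 2.5, 5.0],     # minimum field rotation in degrees
    ))
    with Pool(processes=8) as pool:
        results = pool.map(run_klip_reduction, grid)
    best = min(results, key=lambda r: r["contrast_metric"])
```

In a cloud setting the worker pool would span many nodes, but the map-over-a-parameter-grid structure is the same.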
Investigating interoperability of the LSST data management software stack with Astropy
The Large Synoptic Survey Telescope (LSST) will be an 8.4m optical survey telescope sited in Chile and capable of imaging the entire sky twice a week. The data rate of approximately 15TB per night and the requirements to both issue alerts on transient sources within 60 seconds of observing and create annual data releases mean that automated data management systems and data processing pipelines are a key deliverable of the LSST construction project. The LSST data management software has been in development since 2004 and is based on a C++ core with a Python control layer. The software consists of nearly a quarter of a million lines of code covering the system from fundamental WCS and table libraries to pipeline environments and distributed process execution. The Astropy project began in 2011 as an attempt to bring together disparate open source Python projects and build a core standard infrastructure that can be used and built upon by the astronomy community. This project has been phenomenally successful in the years since it began and has grown to be the de facto standard for Python software in astronomy. Astropy brings with it considerable expectations from the community on how astronomy Python software should be developed, and it is clear that by the time LSST is fully operational in the 2020s many of the prospective users of the LSST software stack will expect it to be fully interoperable with Astropy. In this paper we describe the overlap between the LSST science pipeline software and Astropy software and investigate areas where the LSST software provides new functionality. We also discuss the possibilities of re-engineering the LSST science pipeline software to build upon Astropy, including the option of contributing affiliated packages.
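As a minimal illustration of the kind of interoperability users expect (not LSST stack code; the data values are invented), pipeline measurements can be exposed as an Astropy Table with unit-aware columns and coordinates consumable by community tools.

```python
# Illustrative sketch: wrap catalog-like measurements in an astropy Table and
# derive SkyCoord objects usable by any Astropy-compatible tool.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table

measurements = np.array(
    [(10.684, 41.269, 18.2), (10.712, 41.301, 19.5)],
    dtype=[("ra", "f8"), ("dec", "f8"), ("mag", "f8")],
)

catalog = Table(measurements)
catalog["ra"].unit = u.deg
catalog["dec"].unit = u.deg

coords = SkyCoord(ra=catalog["ra"], dec=catalog["dec"], unit="deg")
print(coords.to_string("hmsdms"))
```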
VIALACTEA knowledge base homogenizing access to Milky Way data
Marco Molinaro,
Robert Butora,
Marilena Bandieramonte,
et al.
The VIALACTEA project has a work package dedicated to "Tools and Infrastructure" and, within it, a task for the "Database and Virtual Observatory Infrastructure". This task aims at providing an infrastructure to store all the resources needed by the more purely scientific work packages of the project. This infrastructure includes a combination of storage facilities, relational databases and web services on top of them, and has taken, as a whole, the name of VIALACTEA Knowledge Base (VLKB). This contribution illustrates the current status of the VLKB. It details the set of data resources assembled, describes the database that allows data discovery through VO-inspired metadata maintenance, and illustrates the discovery, cutout and access services built on top of them that let users exploit the data content.
A case study in adaptable and reusable infrastructure at the Keck Observatory Archive: VO interfaces, moving targets, and more
The Keck Observatory Archive (KOA) (https://koa.ipac.caltech.edu) curates all observations acquired at the W. M. Keck Observatory (WMKO) since it began operations in 1994, including data from eight active instruments and two decommissioned instruments. The archive is a collaboration between WMKO and the NASA Exoplanet Science Institute (NExScI). Since its inception in 2004, the science information system used at KOA has adopted an architectural approach that emphasizes software re-use and adaptability. This paper describes how KOA is currently leveraging and extending open source software components to develop new services and to support delivery of a complete set of instrument metadata, which will enable more sophisticated and extensive queries than currently possible.
In August 2015, KOA deployed a program interface to discover public data from all instruments equipped with an imaging mode. The interface complies with version 2 of the Simple Image Access Protocol (SIAP), under development by the International Virtual Observatory Alliance (IVOA), which defines a standard mechanism for discovering images through spatial queries. The heart of the KOA service is an R-tree-based, database-indexing mechanism prototyped by the Virtual Astronomical Observatory (VAO) and further developed by the Montage Image Mosaic project, designed to provide fast access to large imaging data sets as a first step in creating wide-area image mosaics (such as mosaics of subsets of the 4.7 million images of the SDSS DR9 release). The KOA service uses the results of the spatial R-tree search to create an SQLite database for further relational filtering. The service uses a JSON configuration file to describe the association between instrument parameters and the service query parameters, which makes it applicable beyond the Keck instruments.
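The two-stage pattern described here can be sketched as follows (a hedged illustration only: the generic `rtree` Python package stands in for the Montage-derived indexer, and the footprint values and instrument names are invented).

```python
# Illustrative sketch of the two-stage search: a coarse spatial match with an
# R-tree index, followed by relational filtering of the candidates in SQLite.
import sqlite3
from rtree import index

# Toy image footprints: (id, ra_min, dec_min, ra_max, dec_max, instrument)
images = [
    (1, 150.10, 2.10, 150.30, 2.30, "NIRC2"),
    (2, 150.25, 2.25, 150.45, 2.45, "LRIS"),
]

spatial = index.Index()
for img_id, ra0, dec0, ra1, dec1, _ in images:
    spatial.insert(img_id, (ra0, dec0, ra1, dec1))

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE images (id INTEGER, instrument TEXT)")
db.executemany("INSERT INTO images VALUES (?, ?)",
               [(i[0], i[5]) for i in images])

# Stage 1: coarse spatial match against the query region.
candidates = list(spatial.intersection((150.20, 2.20, 150.35, 2.35)))

# Stage 2: relational filtering on the candidate set.
placeholders = ",".join("?" * len(candidates))
rows = db.execute(
    f"SELECT id, instrument FROM images "
    f"WHERE id IN ({placeholders}) AND instrument = 'NIRC2'",
    candidates,
).fetchall()
print(rows)
```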
The images generated at the Keck telescope usually do not encode the image footprints as WCS fields in the FITS file headers. Because SIAP searches are spatial, much of the effort in developing the program interface involved processing the instrument and telescope parameters to understand how accurately we can derive the WCS information for each instrument. This knowledge is now being fed back into the KOA databases as part of a program to include complete metadata information for all imaging observations.
The R-tree program was itself extended to support temporal (in addition to spatial) indexing, in response to requests from the planetary science community for a search engine to discover observations of Solar System objects. With this 3D-indexing scheme, the service performs very fast temporal and spatial matches between the target ephemerides, obtained from the JPL SPICE service, and the archived observations. Our experiments indicate these matches can be more than 100 times faster than when temporal and spatial searches are performed separately. Images of the tracks of the moving targets, overlaid with the image footprints, are computed with a new command-line visualization tool, mViewer, released with the Montage distribution. The service is currently in test and will be released in late summer 2016.
Cyberinfrastructure
A cyber infrastructure for the SKA Telescope Manager
Domingos Barbosa,
João Paulo Barraca,
Bruno Carvalho,
et al.
The Square Kilometre Array Telescope Manager (SKA TM) will be responsible for assisting SKA Operations and Observation Management, carrying out system diagnosis and collecting monitoring and control data from the SKA subsystems and components. To provide adequate compute resources, scalability, operation continuity and high availability, as well as strict quality of service, the TM cyber-infrastructure (embodied in the Local Infrastructure - LINFRA) consists of COTS hardware and infrastructural software (for example: server monitoring software, host operating system, virtualization software, device firmware), providing a specially tailored Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solution. The TM infrastructure provides services in the form of computational power, software-defined networking, power, storage abstractions, and high-level, state-of-the-art IaaS and PaaS management interfaces. This cyber platform will be tailored to each of the two SKA Phase 1 telescope instances (SKA_MID in South Africa and SKA_LOW in Australia), each presenting different computational and storage infrastructures and conditioned by location. This cyber platform will provide a compute model enabling TM to manage the deployment and execution of its multiple components (observation scheduler, proposal submission tools, M&C components, forensic tools, several databases, etc.). In this sense, the TM LINFRA is primarily focused on the provision of isolated instances, mostly resorting to virtualization technologies, while defaulting to bare hardware if specifically required for performance, security, availability, or other requirements.
The NOAO data lab: science-driven development
The NOAO Data Lab aims to provide infrastructure to maximize community use of the high-value survey datasets now being collected with NOAO telescopes and instruments. As a science exploration framework, the Data Lab allows users to access and search databases containing large (i.e. terabyte-scale) catalogs; visualize, analyze, and store the results of these searches; combine search results with data from other archives or facilities; and share these results with collaborators using a shared workspace and/or data publication service. In the process of implementing the needed tools and services, specific science cases are used to guide development of the system framework and tools. The result is a Year-1 capability demonstration that (fully or partially) implements each of the major architecture components in the context of a real-world science use case. In this paper, we discuss how this model of science-driven development helped us to build a fully functional system capable of executing the chosen science case, and how we plan to scale this system to support general use in the next phase of the project.
The AST3 controlling and operating software suite for automatic sky survey
We have developed from scratch a specialized software package, called ast3suite, to provide remote control and automatic sky surveying for AST3 (Antarctic Survey Telescope). It includes several daemon servers and many basic commands. Each program does only one single task, and they work together to make AST3 a robotic telescope. A survey script calls the basic commands to carry out an automatic sky survey. Ast3suite was carefully tested in Mohe, China in 2013 and has been used at Dome A, Antarctica in 2015 and 2016 with the real hardware for practical sky surveys. Both the test results and practical use showed that ast3suite worked very well without any manual assistance, as expected.
TMT common software update
TMT Common Software (CSW) consists of software services and library code that is used by developers to create the subsystems and components that participate in the software system. CSW also defines the types of components that can be constructed and their functional roles in the software system. TMT CSW has recently passed its preliminary design review. The unique features of CSW include its use of multiple open-source products as the basis for services, and an approach that works to reduce the amount of CSW-provided infrastructure code. Considerable prototyping was completed during this phase to mitigate risk, with results that demonstrate the validity of this design approach and the selected service implementation products. This paper describes the latest design of TMT CSW, its key features, and results from the prototyping effort.
DDS as middleware of the Southern African Large Telescope control system
The Southern African Large Telescope (SALT) software control system is realised as a distributed control system, implemented predominantly in National Instruments' LabVIEW. The telescope control subsystems communicate using cyclic, state-based messages. Currently, transmitting a message is accomplished by performing an HTTP PUT request to a WebDAV directory on a centralised Apache web server, while receiving is based on polling the web server for new messages. While the method works, it presents a number of drawbacks; a scalable distributed communication solution with minimal overhead is a better fit for control systems. This paper describes our exploration of the Data Distribution Service (DDS). DDS is a formal standard specification, defined by the Object Management Group (OMG), that presents a data-centric publish-subscribe model for distributed application communication and integration. It provides an infrastructure for platform-independent many-to-many communication. A number of vendors provide implementations of the DDS standard; RTI, in particular, provides a DDS toolkit for LabVIEW. This toolkit has been evaluated against the needs of SALT, and a few deficiencies have been identified. We have developed our own implementation that interfaces LabVIEW to DDS in order to address our specific needs. Our LabVIEW DDS interface implementation is built against the RTI DDS Core component, provided by RTI under their Open Community Source licence. Our needs dictate that the interface implementation be platform independent. Since we have access to the RTI DDS Core source code, we are able to build the RTI DDS libraries for any of the platforms on which we require support. The communications functionality is based on UDP multicasting. Multicasting is an efficient communications mechanism with low overhead that avoids duplicated point-to-point transmission of data on a network where there are multiple recipients of the data. In the paper we present a performance evaluation of DDS against the current HTTP-based implementation as well as the historical DataSocket implementation. We conclude with a summary and describe future work.
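Since the underlying transport is UDP multicast, the basic mechanism can be illustrated with a minimal Python sketch (illustrative only, not the SALT LabVIEW/DDS code; the group address and port are placeholders).

```python
# Minimal illustration of UDP multicast messaging: one socket publishes a
# state message, any number of subscribers on the group receive it.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5000  # example multicast group and port


def publish(message: bytes) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(message, (GROUP, PORT))


def receive_one() -> bytes:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the multicast group on all interfaces.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, _addr = sock.recvfrom(4096)
    return data
```

DDS layers discovery, quality-of-service policies and typed topics on top of this kind of transport, which is what the evaluation described above compares against the HTTP/WebDAV approach.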
Instrumentation Control
The DESI instrument control system
K. Honscheid,
A. E. Elliott,
L. Beaufore,
et al.
The Dark Energy Spectroscopic Instrument (DESI), a new instrument currently under construction for the Mayall 4m telescope at Kitt Peak National Observatory, will consist of a wide-field optical corrector with a 3.2 degree diameter field of view, a focal plane with 5,000 robotically controlled fiber positioners and 10 fiber-fed broadband spectrographs. This article describes the design of the DESI instrument control system (ICS). The ICS coordinates fiber positioner operations, interfaces to the Mayall telescope control system, monitors operating conditions, reads out the 30 spectrograph CCDs, and provides observer support and data quality monitoring.
Efficient receiver tuning using differential evolution strategies
Differential evolution (DE) is a powerful and computationally inexpensive optimization strategy that can be used to search an entire parameter space or to converge quickly on a solution. The Kilopixel Array Pathfinder Project (KAPPa) is a heterodyne receiver system delivering 5 GHz of instantaneous bandwidth in the tuning range of 645-695 GHz. The fully automated KAPPa receiver test system finds optimal receiver tuning using performance feedback and DE. We present an adaptation of DE for use in rapid receiver characterization. The KAPPa DE algorithm is written in Python 2.7 and is fully integrated with the KAPPa instrument control, data processing, and visualization code. KAPPa develops the technologies needed to realize heterodyne focal plane arrays containing ~1000 pixels. Finding optimal receiver tuning by investigating large parameter spaces is one of many challenges facing the characterization phase of KAPPa, and it is a difficult task to perform by hand. Characterizing or tuning in an automated fashion, without the need for human intervention, is desirable for future large-scale arrays. While many optimization strategies exist, DE is ideal under time and performance constraints because it can be set to converge to a solution rapidly with minimal computational overhead. We discuss how DE is utilized in the KAPPa system, evaluate its performance, and consider how the KAPPa DE approach might be applied to future ~1000-pixel array receivers.
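The shape of a DE-driven tuning loop can be sketched as follows (an illustrative sketch, not the KAPPa code: the objective function is a placeholder standing in for a live receiver measurement, and the parameter names and ranges are invented).

```python
# Illustrative sketch of DE-driven receiver tuning using SciPy's differential
# evolution. In a real system the objective would command the hardware, let it
# settle, and return a measured performance figure to minimize.
from scipy.optimize import differential_evolution


def receiver_noise(params):
    """Placeholder objective: pretend the optimum is at (2.1 mV, 7.0)."""
    bias_mv, lo_power = params
    return (bias_mv - 2.1) ** 2 + 0.5 * (lo_power - 7.0) ** 2


bounds = [(0.0, 5.0),    # example SIS bias voltage range (mV)
          (0.0, 10.0)]   # example LO power setting range (arbitrary units)

result = differential_evolution(receiver_noise, bounds,
                                popsize=15, maxiter=50, tol=1e-3)
print(result.x, result.fun)
```

Population size, iteration limit and tolerance control the trade-off between exploring the space and converging quickly, which is the property the abstract highlights for time-constrained characterization.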
The South African Astronomical Observatory instrumentation software architecture and the SHOC instruments
Until recently, software for instruments on the smaller telescopes at the South African Astronomical Observatory (SAAO) has not been designed for remote accessibility and frequently has not been developed using modern software best practice. We describe a software architecture we have implemented for use with new and upgraded instruments at the SAAO. The architecture was designed to allow for multiple components and to be fast, reliable and remotely operable, to support different user interfaces, to employ as much non-proprietary software as possible, and to take future-proofing into consideration. Individual component drivers exist as standalone processes, communicating over a network. A controller layer coordinates the various components, and allows a variety of user interfaces to be used. The Sutherland High-speed Optical Cameras (SHOC) instruments incorporate an Andor electron-multiplying CCD camera, a GPS unit for accurate timing and a pair of filter wheels. We have applied the new architecture to the SHOC instruments, with the camera driver developed using Andor's software development kit. We have used this to develop an innovative web-based user interface to the instrument.
World coordinate information for the Daniel K. Inouye Solar Telescope
It is a top-level science requirement that data from the Daniel K. Inouye Solar Telescope (DKIST) be archived and made available to the worldwide astronomical community. Data from DKIST must contain sufficient metadata to allow proper post-processing. This paper describes how the Telescope Control System (TCS), the Wavefront Correction Control System (WCCS) and the individual instrument control systems work together with the camera systems to provide the world coordinate information (WCI) metadata for 2-d imaging detectors.
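As a minimal sketch of the kind of world coordinate metadata involved (the axis types and numerical values below are placeholders, not DKIST's actual WCI scheme), a 2-d celestial WCS can be constructed and exported as FITS header keywords with Astropy.

```python
# Illustrative sketch: build a 2-d WCS for a solar imaging detector and export
# it as FITS header cards (CTYPEi, CRPIXi, CRVALi, CDELTi, PCi_j, ...).
from astropy.wcs import WCS

w = WCS(naxis=2)
w.wcs.ctype = ["HPLN-TAN", "HPLT-TAN"]     # helioprojective axes, gnomonic
w.wcs.crpix = [2048.5, 2048.5]             # reference pixel (detector centre)
w.wcs.crval = [0.0, 0.0]                   # world coordinate at reference pixel
w.wcs.cdelt = [0.01 / 3600, 0.01 / 3600]   # plate scale, degrees per pixel
w.wcs.pc = [[0.966, -0.259],               # example 15-degree detector rotation
            [0.259, 0.966]]

header = w.to_header()
print(repr(header))
```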
The Infrared Imaging Spectrograph (IRIS) for TMT: motion planning with collision avoidance for the on-instrument wavefront sensors
The InfraRed Imaging Spectrograph (IRIS) will be a first-light client instrument for the Narrow Field Infrared Adaptive Optics System (NFIRAOS) on the Thirty Meter Telescope. IRIS includes three configurable tip/tilt (TT) or tip/tilt/focus (TTF) On-Instrument Wavefront Sensors (OIWFS). These sensors are positioned over natural guide star (NGS) asterisms using movable polar-coordinate pick-off arms (POAs) that patrol an approximately 2-arcminute circular field of view (FOV). The POAs are capable of colliding with one another, so an algorithm for coordinated motion that avoids contact is required. We have adopted an approach in which arm motion is evaluated using gradient descent of a scalar potential field that includes an attractive component towards the goal configuration (locations of target stars) and repulsive components to avoid obstacles (proximity to adjacent arms). The resulting vector field is further modified by adding a component transverse to the repulsive gradient to avoid problematic local minima in the potential. We present path-planning simulations using this computationally inexpensive technique, which exhibit smooth and efficient trajectories.
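The potential-field idea can be sketched compactly (a simplified illustration for a single point moving in the plane, not the IRIS OIWFS implementation; the gains and geometry are arbitrary): an attractive term pulls toward the goal, a repulsive term pushes away from an obstacle, and a transverse term helps skirt local minima.

```python
# Illustrative sketch of gradient descent on an attractive/repulsive potential
# with an added transverse component, for a point robot in 2-d.
import numpy as np


def potential_gradient(p, goal, obstacle, k_att=1.0, k_rep=0.5, influence=1.0):
    grad = k_att * (p - goal)                       # attractive term
    d = np.linalg.norm(p - obstacle)
    if d < influence:                               # repulsion only when close
        rep = k_rep * (1.0 / d - 1.0 / influence) * (p - obstacle) / d**3
        grad -= rep                                 # push away from obstacle
        # Transverse component (repulsion rotated 90 degrees) to slide past
        # local minima instead of stalling on the obstacle-goal line.
        grad -= 0.5 * np.array([-rep[1], rep[0]])
    return grad


p = np.array([0.0, 0.0])
goal = np.array([1.0, 0.0])
obstacle = np.array([0.5, 0.02])
for _ in range(500):
    p = p - 0.02 * potential_gradient(p, goal, obstacle)  # descend the field
print(p)  # ends near the goal while detouring around the obstacle
```

In the real instrument each arm evaluates such a gradient with respect to its neighbours' positions, which is what keeps the method computationally inexpensive.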
AAO Starbugs: software control and associated algorithms
The Australian Astronomical Observatory's TAIPAN instrument deploys 150 Starbug robots to position optical fibres to accuracies of 0.3 arcsec, on a 32 cm glass field plate on the focal plane of the 1.2 m UK-Schmidt telescope. This paper describes the software system developed to control and monitor the Starbugs, with particular emphasis on the automated path-finding algorithms, and the metrology software which keeps track of the position and motion of individual Starbugs as they independently move in a crowded field. The software employs a tiered approach to find a collision-free path for every Starbug, from its current position to its target location. This consists of three path-finding stages of increasing complexity and computational cost. For each Starbug a path is attempted using a simple method. If unsuccessful, subsequently more complex (and expensive) methods are tried until a valid path is found or the target is flagged as unreachable.
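The tiered fallback described above can be expressed as a generic pattern (a hedged sketch only; the planner and method names are hypothetical, not the TAIPAN code): cheap planners are tried first, and the target is flagged unreachable only if every stage fails.

```python
# Generic sketch of a tiered path-finding strategy with increasing cost.
def plan_path(starbug, target, planners):
    """planners is ordered from cheapest to most expensive; each returns a
    path (list of waypoints) or None if it cannot find one."""
    for planner in planners:
        path = planner(starbug, target)
        if path is not None:
            return path
    starbug.flag_unreachable(target)
    return None

# Example wiring (names are placeholders):
# planners = [direct_line_planner, detour_planner, full_search_planner]
# path = plan_path(bug, target, planners)
```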
Collision-free coordination of fiber positioners in multi-object spectrographs
Many fiber-fed spectroscopic survey projects, such as DESI, PFS and MOONS, will use thousands of fiber positioners packed at a focal plane. To maximize observation time, the positioners need to move simultaneously and reach their targets swiftly. We have previously presented a motion planning method based on a decentralized navigation function for the collision-free coordination of the fiber positioners in DESI. In MOONS, the end-effector of each positioner handling the fiber can reach the centre of its neighbours. There is therefore a risk of collision with up to 18 surrounding positioners in the chosen dense hexagonal configuration. Moreover, the length of the second arm of the positioner is almost twice the length of the first one. As a result, the geometry of the potential collision zone between two positioners is not limited to the extremity of their end-effectors, but surrounds the second arm. In this paper, we modify the navigation function to take into account the larger collision zone resulting from the extended geometrical shape of the positioners. The proposed navigation function takes into account the configuration of the positioners as well as the constraints on the actuators, such as their maximal velocity and their mechanical clearance. Since all the positioners' bases are fixed to the focal plane, collisions can only occur locally, and the risk of collision is limited to the 18 surrounding positioners. The decentralized motion planning and trajectory generation takes advantage of this limited number of positioners and the locality of collisions, significantly reducing the complexity of the algorithm to linear order. The linear complexity ensures short computation time. In addition, the time needed to move all the positioners to their targets is independent of the number of positioners. These two key advantages of the chosen decentralized approach make this method a promising solution for the collision-free motion-planning problem in next-generation spectroscopic survey projects. A motion planning simulator, exploited as a software prototype, has been developed in Python. The pre-computed collision-free trajectories of the actuators of all the positioners are fed directly from the simulator to the electronics controlling the motors. A demonstration of the effectiveness of these trajectories, with the real positioners and their simulated counterparts shown side by side, is available in the following online video sequence (https://goo.gl/YuwwsE).
UI/Web Technologies
Exploratory visualization of astronomical data on ultra-high-resolution wall displays
Ultra-high-resolution wall displays feature a very high pixel density over a large physical surface, which makes them well suited to the collaborative, exploratory visualization of large datasets. We introduce FITS-OW, an application designed for such wall displays, that enables astronomers to navigate in large collections of FITS images, query astronomical databases, and display detailed, complementary data and documents about multiple sources simultaneously. We describe how astronomers interact with their data using both the wall's touch-sensitive surface and handheld devices. We also report on the technical challenges we addressed in terms of distributed graphics rendering and data sharing over the computer clusters that drive wall displays.
Prototyping the graphical user interface for the operator of the Cherenkov Telescope Array
The Cherenkov Telescope Array (CTA) is a planned gamma-ray observatory. CTA will incorporate about 100 imaging atmospheric Cherenkov telescopes (IACTs) at a southern site, and about 20 at a northern one. Previous IACT experiments have used up to five telescopes, so the design of a graphical user interface (GUI) for the operator of CTA involves new challenges. We present a GUI prototype, the concept for which is being developed in collaboration with experts from the field of Human-Computer Interaction (HCI). The prototype is based on web technology; it incorporates a Python web server, WebSockets and graphics generated with the d3.js JavaScript library.
Firefly: embracing future web technologies
W. Roby,
X. Wu,
T. Goldina,
et al.
At IPAC/Caltech, we have developed the Firefly web archive and visualization system. Used in production for the last eight years in many missions, Firefly gives scientists significant capabilities to study data. Firefly provided the first completely web-based FITS viewer as well as a growing set of tabular and plotting visualizers. Further, it will be used for the science user interface of the LSST telescope, which goes online in 2021. Firefly must meet the needs of archive access and visualization for the 2021 LSST telescope and must serve astronomers beyond the year 2030. Recently, our team has faced the fact that the technology behind the Firefly software was becoming obsolete. We were searching for ways to utilize current breakthroughs in maintaining the stability, testability, speed, and reliability of large web applications, which Firefly exemplifies. In the last year, we have ported Firefly to cutting-edge web technologies. Embarking on this massive overhaul is no small feat, to say the least. Choosing the technologies that will maintain a forward trajectory in a future development project is always hard and often overwhelming. When a team must port 150,000 lines of code for a production-level product, there is little room to make poor choices. This paper gives an overview of the most modern web technologies and the lessons learned in our conversion from a GWT-based system to a React/Redux-based system.
Observation management challenges of the Square Kilometre Array
The Square Kilometre Array (SKA) will be the world's most advanced radio telescope, designed to explore some of the biggest questions in astronomy today, such as the epoch of re-ionization, the nature of gravity and the origins of cosmic magnetism. SKA1, the first phase of SKA construction, is currently being designed by a large team of experts world-wide. SKA1 comprises two telescopes: a 200-element dish interferometer in South Africa and a 130000-element dipole antenna aperture array in Australia. To enable the ground-breaking science of the SKA, an advanced Observation Management system is required to support both the needs of the astronomical community users and the SKA Observatory staff. This system will ensure that the SKA realises its scientific aims and achieves optimal scientific throughput. This paper provides an overview of the design of the system that will accept proposals from SKA users and result in the execution of the scripts that will obtain science data, covering the stages of detailed preparation, planning and scheduling of the observations and onward tracking. It describes the unique challenges posed by the differing requirements of two telescopes, one of which is very much a software telescope, including the need to schedule the data processing as well as the acquisition, and to react to both internally and externally discovered transient events. The scheduling of multiple parallel sub-arrays is covered, along with the need to handle commensal observing - using the same data stream to satisfy the science goals of more than one project simultaneously. An international team from academia and industry, drawing on expertise and experience from previous telescope projects, the virtual observatory and comparable problems in industry, has been assembled to design the solution to this challenging but exciting problem.
Project Overviews and Progress II
Status report of the SRT radiotelescope control software: the DISCOS project
A. Orlati,
M. Bartolini,
M. Buttu,
et al.
The Sardinia Radio Telescope (SRT) is a 64-m fully-steerable radio telescope. It is provided with an active surface to correct for gravitational deformations, allowing observations from 300 MHz to 100 GHz. At present, three receivers are available: a coaxial LP-band receiver (305-410 MHz and 1.5-1.8 GHz), a C-band receiver (5.7-7.7 GHz) and a 7-feed K-band receiver (18-26.5 GHz). Several back-ends are also available in order to perform the different data acquisition and analysis procedures requested by scientific projects. The design and development of the SRT control software started in 2004, and now belongs to a wider project called DISCOS (Development of the Italian Single-dish COntrol System), which provides a common infrastructure to the three Italian radio telescopes (the Medicina, Noto and SRT dishes). DISCOS is based on the ALMA Common Software (ACS) framework, and currently consists of more than 500k lines of code. It is organized in a common core and three specific product lines, one for each telescope. Recent developments, carried out after the conclusion of the technical commissioning of the instrument (October 2013), consisted in the addition of several new features in many parts of the observing pipeline, spanning from the motion control to the digital back-ends for data acquisition and data formatting; we briefly describe such improvements. More importantly, in the last two years we have supported the astronomical validation of the SRT radio telescope, leading to the opening of the first public call for proposals in late 2015. During this period, while assisting both the engineering and the scientific staff, we massively employed the control software and were able to test all of its features: in this process we received our first feedback from the users and we could verify how the system performed in a real-life scenario, drawing the first conclusions about the overall system stability and performance. We examine how the system behaves in terms of network load and system load, how it reacts to failures and errors, and which components and services seem to be the most critical parts of our architecture, showing how the ACS framework impacts these aspects. Moreover, the exposure to public utilization has highlighted the major flaws in our development and software management process, which had to be tuned and improved in order to achieve faster release cycles in response to user feedback, and safer deployment operations. In this regard we show how the introduction of testing practices, along with continuous integration, helped us to meet higher quality standards. Having identified the most critical aspects of our software, we conclude by showing our intentions for the future development of DISCOS, both in terms of software features and software infrastructure.
Status report of the end-to-end ASKAP software system: towards early science operations
The Australian SKA Pathfinder (ASKAP) is a novel centimetre radio synthesis telescope currently in the commissioning phase and located in the midwest region of Western Australia. It comprises 36 12-m-diameter reflector antennas, each equipped with state-of-the-art, award-winning phased array feed (PAF) technology. The PAFs provide a wide, 30-square-degree field of view by forming up to 36 separate dual-polarisation beams at once. This results in a high data rate: 70 TB of correlated visibilities in an 8-hour observation, requiring custom-written, high-performance software running in dedicated High Performance Computing (HPC) facilities. The first six antennas equipped with first-generation PAF technology (Mark I), named the Boolardy Engineering Test Array (BETA), have been in use since 2014 as a platform to test PAF calibration and imaging techniques, and along the way BETA has been producing some great science results. Commissioning of ASKAP Array Release 1, that is, the first six antennas with second-generation PAFs (Mark II), is currently under way. An integral part of the instrument is the Central Processor platform hosted at the Pawsey Supercomputing Centre in Perth, which executes custom-written software pipelines designed specifically to meet the ASKAP imaging requirements of wide field of view and high dynamic range. There are three key hardware components of the Central Processor: the ingest nodes (a 16-node cluster), the fast temporary storage (a 1 PB Lustre file system) and the processing supercomputer (a 200 TFlop system). This HPC platform is managed and supported by the Pawsey support team. Due to the limited amount of data generated by BETA and the first ASKAP Array Release, the Central Processor platform has been running in a more “traditional”, user-interactive mode. But this is about to change: integration and verification of the online ingest pipeline starts in early 2016, which is required to support the full 300 MHz bandwidth for Array Release 1, followed by the deployment of the real-time data processing components. In addition to the Central Processor, the first production release of the CSIRO ASKAP Science Data Archive (CASDA) has also been deployed in one of the Pawsey Supercomputing Centre facilities and integrated into the end-to-end ASKAP data flow system. This paper describes the current status of the “end-to-end” data flow software system, from preparing observations to data acquisition, processing and archiving, and the challenges of integrating an HPC facility as a key part of the instrument. It also shares some lessons learned since the start of integration activities and the challenges ahead in preparation for the start of the Early Science program.
MAISIE: a multipurpose astronomical instrument simulator environment
Astronomical instruments often need simulators to preview their data products and test their data reduction pipelines. Instrument simulators have tended to be purpose-built with a single instrument in mind, and attempting to reuse one of these simulators for a different purpose is often a slow and difficult task. MAISIE is a simulator framework designed for reuse on different instruments. An object-oriented design encourages reuse of functionality and structure, while offering the flexibility to create new classes with new functionality. MAISIE is a set of Python classes, interfaces and tools to help build instrument simulators. MAISIE can just as easily build simulators for single- and multi-channel instruments, imagers and spectrometers, and ground- and space-based instruments. To remain easy to use and to facilitate the sharing of simulators across teams, MAISIE is written in Python, a freely available and open-source language. New functionality can be created for MAISIE by creating new classes that represent optical elements. This approach allows new and novel instruments to add functionality and take advantage of the existing MAISIE classes. MAISIE has recently been used successfully to develop the simulator for the JWST/MIRI Medium Resolution Spectrometer.
ACS from development to operations
The ALMA Common Software (ACS) provides the infrastructure of the distributed software system of ALMA and other projects. ACS, built on top of CORBA and Data Distribution Service (DDS) middleware, is based on a Component-Container paradigm and hides the complexity of the middleware, allowing the developer to focus on domain-specific issues. The transition of the ALMA observatory from construction to operations means that ACS effort now focuses primarily on scalability, stability and robustness rather than on new features. The transition also came with a shorter release cycle and more extensive testing. For scalability, the most problematic area has been the CORBA notification service, used to implement the publish-subscribe pattern, because of the asynchronous nature of the paradigm: a lot of effort has been spent on improving its stability and recovery from run-time errors. The original bulk data mechanism, implemented using the CORBA Audio/Video Streaming Service, showed its limitations and has been replaced with a more performant and scalable DDS implementation. Operational needs soon showed the difference between release cycles for online software (i.e. software used during observations) and offline software, which requires much more frequent releases. This paper attempts to describe the impact the transition from construction to operations has had on ACS, the solutions adopted so far, and a look at future evolution.
The ESO astronomical site monitor upgrade
Monitoring and prediction of astronomical observing conditions are essential for planning and optimizing observations. For this purpose, ESO, in the 90s, developed the concept of an Astronomical Site Monitor (ASM), as a facility fully integrated into the operations of the VLT observatory[1]. Identical systems were installed at Paranal and La Silla, providing comprehensive local weather information. By now, we had very good reasons for a major upgrade:
• The need to introduce new features to satisfy the requirements of observing with the Adaptive Optics Facility and to benefit other adaptive optics systems.
• Managing hardware and software obsolescence.
• Making the system more maintainable and expandable by integrating off-the-shelf hardware solutions.
The new ASM integrates:
• A new Differential Image Motion Monitor (DIMM) paired with a Multi Aperture Scintillation Sensor (MASS) to measure the vertical distribution of turbulence in the high atmosphere and its characteristic velocity.
• A new SLOpe Detection And Ranging (SLODAR) telescope, for measuring the altitude and intensity of turbulent layers in the low atmosphere.
• A water vapour radiometer to monitor the water vapour content of the atmosphere.
• The old weather tower, which is being refurbished with new sensors.
The telescopes and the devices integrated are commercial products, and we have used the vendors' control systems as much as possible. The existing external interfaces, based on the VLT standards, have been maintained for full backward compatibility. All data produced by the system are directly fed in real time into a relational database. A completely new web-based display replaces the obsolete plots based on HP-UX RTAP. Here we analyse the architectural and technological choices and discuss the motivations and trade-offs.
Data Processing and Pipelines I
ASTRI SST-2M prototype and mini-array data reconstruction and scientific analysis software in the framework of the Cherenkov Telescope Array
In the framework of the international Cherenkov Telescope Array (CTA) gamma-ray observatory, the Italian National Institute for Astrophysics (INAF) is developing a dual-mirror, small-sized, end-to-end prototype (ASTRI SST-2M), inaugurated in September 2014 at Mt. Etna (Italy), and a mini-array composed of nine ASTRI telescopes, proposed to be installed at the southern CTA site. The ASTRI mini-array is a collaborative effort led by INAF and carried out by institutes from Italy, Brazil, and South Africa. The project also includes the full data handling chain from raw data up to final scientific products. To this end, dedicated software for online/on-site/off-site data reconstruction and scientific analysis is under development for both the ASTRI SST-2M prototype and the mini-array. The software is designed following a modular approach in which each single component and the entire pipeline are developed in compliance with the CTA requirements. Data reduction is designed to run on parallel computing architectures, such as multi-core CPUs and graphics accelerators (GPUs), and on new hardware architectures based on low-power processors (e.g. ARM). The software components are coded in C++/Python/CUDA and wrapped by efficient pipelines written in Python. The final scientific products are then achieved by means of either science tools currently being used in the CTA Consortium (e.g. ctools) or specifically developed ones. In this contribution, we present the framework and the main software components of the ASTRI SST-2M prototype and mini-array data reconstruction and scientific analysis software package, and report the status of its development.
Implementing a real-time data stream for time-series stellar photometry
We present a new automated photometric pipeline optimized for time-series photometry that includes a real-time data streaming service. An observer using this resource can automatically stream photometric data over the Internet. Other observers can then monitor the data stream in real time and make an informed decision about whether to perform complementary observations of a transient event in progress. Our pipeline uses a modular design so that it can be easily implemented or customized as a real-time robotic telescope pipeline at any observatory. The pipeline is controlled through the user-friendly SAOImage DS9 package.
Automated spectral reduction pipelines
The Liverpool Telescope automated spectral data reduction pipelines both remove instrumental signatures and provide wavelength-calibrated data products promptly after observation. Unique science drivers for each of three instruments led to novel hardware solutions, which required reassessment of some of the conventional CCD reduction recipes. For example, we describe the derivation of bias and dark corrections on detectors with neither overscan nor shutter. In the context of spectroscopy, we compare the quality of flat fielding resulting from different algorithmic combinations of dispersed and non-dispersed sky and lamp flats in the case of spectra suffering from 2D spatial distortions.
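For reference, the conventional recipe being adapted can be sketched as follows (an illustrative NumPy sketch only, not the Liverpool Telescope pipeline): a median-combined bias, an exposure-scaled dark and a normalized flat applied to a science frame.

```python
# Illustrative sketch of a conventional CCD reduction: master bias, dark and
# flat are built from stacks of calibration frames and applied to the science
# image. Arrays are assumed to share the same shape.
import numpy as np


def reduce_frame(science, bias_stack, dark_stack, flat_stack,
                 dark_exptime, sci_exptime):
    master_bias = np.median(bias_stack, axis=0)
    master_dark = np.median(dark_stack, axis=0) - master_bias
    master_flat = np.median(flat_stack, axis=0) - master_bias
    master_flat /= np.median(master_flat)          # normalize to unit response

    dark_current = master_dark * (sci_exptime / dark_exptime)
    return (science - master_bias - dark_current) / master_flat
```

The hardware described above (no overscan region, no shutter) is precisely what forces departures from this textbook scheme, since the bias and dark terms can no longer be separated in the usual way.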
StarDock: shipping customized computing environments to the data
Show abstract
Surging data volumes make it increasingly unfeasible to transfer astronomical datasets to the local systems of individual scientists. Centralized pipelines offer some relief, but lack the flexibility to fulfill the needs of all users. We have developed a system that leverages the Docker container application virtualization software. Along with a suite of commonly used astronomy applications, users can configure a container with their own custom software and analysis tools. Our StarDock system will move the user's container to the data and expose the requested dataset, allowing our users to safely and securely process their data without needlessly transferring hundreds of gigabytes.
Project Management
TMT approach to observatory software development process
Show abstract
The purpose of the Observatory Software System (OSW) is to integrate all software and hardware components of the Thirty Meter Telescope (TMT) to enable observations and data capture; thus it is a complex software system that is defined by four principal software subsystems: Common Software (CSW), Executive Software (ESW), Data Management System (DMS) and Science Operations Support System (SOSS), all of which have interdependencies with the observatory control systems and data acquisition systems. Therefore, the software development process and plan must consider dependencies to other subsystems, manage architecture, interfaces and design, manage software scope and complexity, and standardize and optimize use of resources and tools. Additionally, the TMT Observatory Software will largely be developed in India through TMT’s workshare relationship with the India TMT Coordination Centre (ITCC) and use of Indian software industry vendors, which adds complexity and challenges to the software development process, communication and coordination of activities and priorities as well as measuring performance and managing quality and risk. The software project management challenge for the TMT OSW is thus a multi-faceted technical, managerial, communications and interpersonal relations challenge. The approach TMT is using to manage this multifaceted challenge is a combination of establishing an effective geographically distributed software team (Integrated Product Team) with strong project management and technical leadership provided by the TMT Project Office (PO) and the ITCC partner to manage plans, process, performance, risk and quality, and to facilitate effective communications; establishing an effective cross-functional software management team composed of stakeholders, OSW leadership and ITCC leadership to manage dependencies and software release plans, technical complexities and change to approved interfaces, architecture, design and tool set, and to facilitate effective communications; adopting an agile-based software development process across the observatory to enable frequent software releases to help mitigate subsystem interdependencies; defining concise scope and work packages for each of the OSW subsystems to facilitate effective outsourcing of software deliverables to the ITCC partner, and to enable performance monitoring and risk management. At this stage, the architecture and high-level design of the software system has been established and reviewed. During construction each subsystem will have a final design phase with reviews, followed by implementation and testing. The results of the TMT approach to the Observatory Software development process will only be preliminary at the time of the submittal of this paper, but it is anticipated that the early results will be a favorable indication of progress.
Don't get taken by surprise: planning for software obsolescence management at the ALMA Observatory
Show abstract
ALMA is still a young and evolving observatory with a very active software development group that produces new and updated software components regularly. Yet we are coming to realize that - after well over a decade of development - not only our own software, but also technologies and tools we depend upon, as well as the hardware we interface with, are coming of age. Software obsolescence management is needed, but surprisingly is not something we can just borrow from other observatories, or any other comparable organization. Here we present the challenges, our approaches and some early experiences.
Management of the science ground segment for the Euclid mission
Show abstract
Euclid is an ESA mission aimed at understanding the nature of dark energy and dark matter by simultaneously using two probes (weak lensing and baryon acoustic oscillations). The mission will observe galaxies and clusters of galaxies out to z~2, in a wide extra-galactic survey covering 15000 deg², plus a deep survey covering an area of 40 deg². The payload is composed of two instruments, an imager in the visible domain (VIS) and an imager-spectrometer (NISP) covering the near-infrared. The launch is planned for Q4 of 2020. The elements of the Euclid Science Ground Segment (SGS) are the Science Operations Centre (SOC), operated by ESA, and nine Science Data Centres (SDCs) in charge of data processing, provided by the Euclid Consortium (EC), which is formed by over 110 institutes spread over 15 countries. The SOC and the EC started a tight collaboration several years ago in order to design and develop a single, cost-efficient and truly integrated SGS. The distributed nature, the size of the data set, and the required accuracy of the results are the main challenges expected in the design and implementation of the SGS. In particular, the huge volume of data (not only Euclid data but also ground-based data) to be processed in the SDCs will require distributed storage to avoid data migration across SDCs. This paper describes the management challenges that the Euclid SGS is facing while dealing with such complexity. The main aspect is the organisation of a geographically distributed software development team. Algorithms and code are developed in a large number of institutes, while data are actually processed at fewer centres (the national SDCs) where the operational computational infrastructures are maintained. The software produced for data handling, processing and analysis is built within a common development environment defined by the SGS System Team, common to the SOC and the EC SGS, which has already been active for several years. The code is built incrementally through different levels of maturity, going from prototypes (developed mainly by scientists) to production code (engineered and tested at the SDCs). A number of incremental challenges (infrastructure, data processing and integrated) have been included in the Euclid SGS test plan to verify the correctness and accuracy of the developed systems.
Building a world-wide open source community around a software framework: progress, dos, and don'ts
Show abstract
As we all know too well, building up a collaborative community around a software infrastructure is not easy. Besides recruiting enthusiasts to work as part of it, mostly for free, to succeed you also need to overcome a number of technical, sociological, and, to our surprise, even political hurdles. The ALMA Common Software (ACS) was developed at ESO and partner institutions over the course of more than 10 years. While it was mainly intended for the ALMA Observatory, it was conceived early on as a generic distributed control framework. ACS has been periodically released to the public through an LGPL license, which encouraged around a dozen non-ALMA institutions to make use of ACS for both industrial and educational applications. In recent years, the Cherenkov Telescope Array and the LLAMA Observatory have also decided to adopt the framework for their own control systems. The aim of the “ACS Community” is to support independent initiatives in making use of the ACS framework and to further contribute to its development. The Community provides access to a growing network of volunteers eager to develop ACS in areas that are not necessarily in ALMA's interests, and/or were not within the original system scope. Current examples are: support for additional OS platforms, extension of supported hardware interfaces, a public code repository and a build farm. The ACS Community makes use of existing collaborations with Chilean and Brazilian universities, reaching out to promising engineers in the making. At the same time, projects actively using ACS have committed valuable resources to assist the Community's work. Well-established training programs like the ACS Workshops are also being continued through the Community's work. This paper aims to give a detailed account of the ongoing (second) journey towards establishing a world-wide open source collaboration around ACS. The ACS Community is growing into a horizontal partnership across a decentralized and diversified group of actors, and we are excited about its technical and human potential.
Data Management and Archives II
The new Gemini Observatory archive: a fast and low cost observatory data archive running in the cloud
Show abstract
We have developed and deployed a new data archive for the Gemini Observatory. Focused on simplicity and ease of use, the archive provides a number of powerful and novel features, including automatic association of calibration data with the science data and the ability to bookmark searches. A simple but powerful API allows programmatic search and download of data. The archive is hosted on Amazon Web Services, which provides us with excellent internet connectivity and significant cost savings, in both operations and development, over more traditional deployment options. The code is written in Python, utilizing a PostgreSQL database and an Apache web server.
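A programmatic search-and-download interface of this kind is typically exercised with a small HTTP client. The Python sketch below is illustrative only: the base URL, endpoint paths, query parameters and response layout are assumptions for the purpose of the example, not the archive's actual API.

```python
import requests

# Hypothetical base URL -- not the real Gemini archive endpoint.
ARCHIVE = "https://archive.example.edu/api"

def search(instrument, ut_date):
    """Return a list of file records matching the query (assumed JSON response)."""
    resp = requests.get(f"{ARCHIVE}/search",
                        params={"instrument": instrument, "date": ut_date},
                        timeout=30)
    resp.raise_for_status()
    return resp.json()

def download(filename, dest):
    """Stream one file to disk via an assumed /file/<name> endpoint."""
    with requests.get(f"{ARCHIVE}/file/{filename}", stream=True, timeout=300) as r:
        r.raise_for_status()
        with open(dest, "wb") as fh:
            for chunk in r.iter_content(chunk_size=1 << 20):
                fh.write(chunk)

# Example usage: fetch everything matching a (hypothetical) instrument/date query.
for rec in search("GMOS-N", "2016-08-01"):
    download(rec["filename"], rec["filename"])
```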
Petascale cyberinfrastructure for ground-based solar physics: approach of the DKIST data center
Show abstract
The Daniel K. Inouye Solar Telescope, under construction in Maui, is designed to perform high-resolution spectropolarimetric visible and infrared measurements of the Sun, and will annually produce 3 PB of data, via 5×10^8 images and 2×10^11 metadata elements, requiring calibration, long-term data management, and open and free distribution. After briefly describing the DKIST and its instrument suite, we provide an overview of the functions that the DKIST Data Center will provide and focus on the major challenges in its development. We conclude by discussing our approach and mention some of the technologies that the Data Center team is using to develop a petascale computational and data storage resource to support this unique world-class DKIST facility and its long-term scientific and operational goals.
Cloud services on an astronomy data center
Show abstract
The research on computational methods for astronomy performed during the first phase of the Chilean Virtual Observatory (ChiVO) led to the development of functional prototypes, implementing state-of-the-art computational methods and proposing new algorithms and techniques. The ChiVO software architecture is based on the IVOA protocols and standards. These protocols and standards are grouped in layers, with emphasis on the application and data layers, because their basic standards define the minimum operations that a VO should support. As a preliminary verification, the current implementation works with a 1 TB data set coming from the reduction of ALMA cycle 0 data. This research was mainly focused on spectroscopic data cubes from ALMA's cycle 0 public data. As the data set grows every month with the release of ALMA's cycle 1 public data, data processing is becoming a major bottleneck for scientific research in astronomy. When designing the ChiVO, we focused on improving both computation and I/O costs, and this led us to configure a data center with 424 high-speed cores at 2.6 GHz, 1 PB of storage (distributed over hard disk drives, HDD, and solid state drives, SSD) and high-speed InfiniBand communication. We are developing a cloud-based e-infrastructure for ChiVO services, in order to have a coherent framework for developing novel web services for on-line data processing in the ChiVO. We are currently parallelizing these new algorithms and techniques using HPC tools to speed up big data processing, and we will report our results in terms of data size, data distribution, number of cores and response time, in order to compare different processing and storage configurations.
Trident: scalable compute archives: workflows, visualization, and analysis
Show abstract
The astronomical science community has embraced Big Data processing challenges, e.g. those associated with time-domain astronomy, and come up with a variety of novel and efficient data processing solutions. However, data processing is only a small part of the Big Data challenge. Efficient knowledge discovery and scientific advancement in the Big Data era require new and equally efficient tools: modern user interfaces for searching, identifying and viewing data online without direct access to the data; tracking of data provenance; searching, plotting and analyzing metadata; interactive visual analysis, especially of (time-dependent) image data; and the ability to execute pipelines on supercomputing and cloud resources with minimal user overhead or expertise, even for novice computing users. The Trident project at Indiana University offers a comprehensive web- and cloud-based microservice software suite that enables the straightforward deployment of highly customized Scalable Compute Archive (SCA) systems, including extensive visualization and analysis capabilities, with a minimal amount of additional coding. Trident seamlessly scales up or down in terms of data volumes and computational needs, and allows feature sets within a web user interface to be quickly adapted to meet individual project requirements. Domain experts only have to provide code or business logic about handling/visualizing their domain's data products and about executing their pipelines and application workflows. Trident's microservices architecture is made up of light-weight services connected by a REST API and/or a message bus; web interface elements are built using the NodeJS, AngularJS, and HighCharts JavaScript libraries, among others, while backend services are written in NodeJS, PHP/Zend, and Python. The software suite currently consists of (1) a simple workflow execution framework to integrate, deploy, and execute pipelines and applications, (2) a progress service to monitor workflows and sub-workflows, (3) ImageX, an interactive image visualization service, (4) an authentication and authorization service, (5) a data service that handles archival, staging and serving of data products, and (6) a notification service that serves the statistical collation and reporting needs of various projects. Several other additional components are under development. Trident is an umbrella project that evolved from the One Degree Imager, Portal, Pipeline, and Archive (ODI-PPA) project, which we had initially refactored toward (1) a powerful analysis/visualization portal for Globular Cluster System (GCS) survey data collected by IU researchers, (2) a data search and download portal for the IU Electron Microscopy Center's data (EMC-SCA), and (3) a prototype archive for the Ludwig Maximilian University's Wide Field Imager. The new Trident software has been used to deploy (1) a metadata quality control and analytics portal (RADY-SCA) for DICOM-formatted medical imaging data produced by the IU Radiology Center, (2) several prototype workflows for different domains, (3) a snapshot tool within IU's Karst Desktop environment, and (4) a limited component set to serve GIS data within the IU GIS web portal. Trident SCA systems leverage supercomputing and storage resources at Indiana University but can be configured to make use of any cloud/grid resource, from local workstations/servers to (inter)national supercomputing facilities such as XSEDE.
The NOAO Data Lab virtual storage system
Show abstract
Collaborative research/computing environments are essential for working with the next generations of large astronomical data sets. A key component of them is a distributed storage system to enable data hosting, sharing, and publication. VOSpace is a lightweight interface providing network access to arbitrary backend storage solutions and endorsed by the International Virtual Observatory Alliance (IVOA). Although similar APIs exist, such as Amazon S3, WebDAV, and Dropbox, VOSpace is designed to be protocol agnostic, focusing on data control operations, and supports asynchronous and third-party data transfers, thereby minimizing unnecessary data movement. It also allows arbitrary computations to be triggered as a result of a transfer operation: for example, a file can be automatically ingested into a database when put into an active directory, or a data reduction task, such as SExtractor, can be run on it. In this paper, we describe the VOSpace implementations that we have developed for the NOAO Data Lab. These offer both dedicated remote storage, accessible as a local file system via FUSE, and a local VOSpace service to easily enable data synchronization.
Telescope Control II
Mount control system of the ASTRI SST-2M prototype for the Cherenkov Telescope Array
Show abstract
The ASTRI SST-2M telescope is an end-to-end prototype proposed for the Small Size class of Telescopes (SST) of the future Cherenkov Telescope Array (CTA). The prototype is installed in Italy at the INAF observing station located at Serra La Nave on Mount Etna (Sicily) and was inaugurated in September 2014. This paper presents the software and hardware architecture and development of the system dedicated to the control of the mount, health, safety and monitoring systems of the ASTRI SST-2M telescope prototype. The mount control system installed on the ASTRI SST-2M telescope prototype makes use of standard and widely deployed industrial hardware and software. State-of-the-art products from the control and automation industries were selected in order to fulfill the mount-related functional and safety requirements with assembly compactness, high reliability, and reduced maintenance. The software package was implemented with the Beckhoff TwinCAT version 3 environment for the software Programmable Logic Controller (PLC), while the control electronics were chosen in order to maximize the homogeneity and the real-time performance of the system. The integration with the high-level controller (Telescope Control System) has been carried out by choosing the Open Platform Communications Unified Architecture (OPC UA) protocol, which supports a rich data model while offering compatibility with the PLC platform. In this contribution we show how the ASTRI approach to the design and implementation of the mount control system has made the ASTRI SST-2M prototype a standalone intelligent machine, able to fulfill its requirements and easy to integrate in an array configuration such as the future ASTRI mini-array proposed to be installed at the southern site of the Cherenkov Telescope Array (CTA).
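For readers unfamiliar with OPC UA, the sketch below shows how a high-level control system could read and write PLC variables over the protocol using the open-source python-opcua client library. The endpoint URL and node identifiers are invented for illustration; they are not the actual ASTRI tag names or address space.

```python
from opcua import Client  # python-opcua (freeopcua) package

# Placeholder endpoint and node ids -- not the real ASTRI PLC namespace.
client = Client("opc.tcp://plc.example.org:4840")
client.connect()
try:
    actual_az = client.get_node("ns=4;s=MAIN.Mount.actualAzimuth")
    target_az = client.get_node("ns=4;s=MAIN.Mount.targetAzimuth")

    # Read the current axis position published by the PLC.
    print("current azimuth [deg]:", actual_az.get_value())

    # Write a new demanded position; the PLC-side logic decides whether to act on it.
    target_az.set_value(180.0)
finally:
    client.disconnect()
```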
Automation and control of the MMT thermal system
Show abstract
This study investigates the software automation and control framework for the MMT thermal system. Thermal-related effects on observing and telescope behavior have been considered during the entire software development process. Regression analysis of telescope and observatory subsystem data is used to characterize and model these thermal-related effects. The regression models help predict expected changes in focus and overall astronomical seeing that result from temperature variations within the telescope structure, within the primary mirror glass, and between the primary mirror glass and adjacent air (i.e., mirror seeing). This discussion is followed by a description of ongoing upgrades to the heating, ventilation and air conditioning (HVAC) system and the associated software controls. The improvements of the MMT thermal system have two objectives: 1) to provide air conditioning capabilities for the MMT facilities, and 2) to modernize and enhance the primary mirror (M1) ventilation system. The HVAC upgrade necessitates changes to the automation and control of the M1 ventilation system. The revised control system must factor in the additional requirements of the HVAC system, while still optimizing performance of the M1 ventilation system and the M1’s optical behavior. An industry-standard HVAC communication and networking protocol, BACnet (Building Automation and Control network), has been adopted. Integration of the BACnet protocol into the existing software framework at the MMT is discussed. Performance of the existing automated system is evaluated and a preliminary upgraded automated control system is presented. Finally, user interfaces to the new HVAC system are discussed.
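As an illustration of the kind of regression model described above, the sketch below fits a linear relation between a focus offset and two temperature differences using ordinary least squares. The file name, column layout and coefficients are hypothetical; the point is only the pattern of fitting telemetry against an observable.

```python
import numpy as np

# Hypothetical telemetry file with columns:
#   dT_struct (structure - ambient), dT_glass (M1 glass - ambient), focus_offset
data = np.loadtxt("mmt_thermal_telemetry.csv", delimiter=",", skiprows=1)

# Design matrix with an intercept term.
X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1]])
y = data[:, 2]

# Least-squares fit: focus_offset ~ c0 + c1*dT_struct + c2*dT_glass
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
residual_rms = np.sqrt(np.mean((y - X @ coeffs) ** 2))
print("coefficients:", coeffs)
print("residual RMS:", residual_rms)
```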
Software architecture of INO340 telescope control system
Show abstract
The software architecture plays an important role in the distributed control system of astronomical projects, because many subsystems and components must work together in a consistent and reliable way. We have utilized a customized architecture design approach based on the "4+1 view model" in order to design the INOCS software architecture. In this paper, after reviewing the top-level INOCS architecture, we present the software architecture model of INOCS inspired by the "4+1 model". For this purpose we provide the logical, process, development, physical, and scenario views of our architecture using different UML diagrams and other illustrative visual charts. Each view presents the INOCS software architecture from a different perspective. We finish the paper with the science data operation of INO340 and concluding remarks.
Introduction to FAST central control system
Show abstract
FAST is the largest single-dish radio telescope in the world. During observation, part of the spherical reflector forms a paraboloid pointing in the source direction, while the feed is placed at the instantaneous focus. Control of the telescope is therefore difficult and complicated. An autonomous central control system has been designed and implemented for methodical and efficient operation. The system connects and coordinates all subsystems, including control, measurement and health monitoring for the reflector, the feed support and the receivers. Its main functions are managing observation tasks, commanding subsystems, storing operating data, monitoring statuses and providing the uniform time standard. In this paper, the functions, software and hardware of the FAST central control system are presented. The related infrastructure, such as power, network and control room arrangement, is also introduced.
Prototyping the E-ELT M1 local control system communication infrastructure
Show abstract
The primary mirror of the E-ELT is composed of 798 hexagonal segments, each about 1.45 meters across. Each segment can be moved in piston and tip-tilt using three position actuators. Inductive edge sensors are used to provide feedback for global reconstruction of the mirror shape. The E-ELT M1 Local Control System will provide a deterministic infrastructure for collecting edge sensor and actuator readings and distributing the new position actuator references, while at the same time providing failure detection, isolation and notification, synchronization, monitoring and configuration management. The present paper describes the prototyping activities carried out to verify the feasibility of the E-ELT M1 local control system communication architecture design and to assess its performance and potential limitations.
A new telescope control software for the Mayall 4-meter telescope
Show abstract
The Mayall 4-meter telescope recently went through a major modernization of its telescope control system in preparation for DESI. We describe MPK (Mayall Pointing Kernel), our new software for telescope control. MPK outputs a 20 Hz position-based trajectory with a velocity component, which feeds into Mayall's new servo system over a socket. We wrote a simple yet realistic servo simulator that let us develop MPK mostly without access to real hardware, and also lets us provide other teams with a Mayall simulator as a test bed for the development of new instruments. MPK has a small core composed of prioritized, soft real-time threads. Access to the core's services is via MPK's main thread, a complete, interactive Tcl/Tk shell, which gives us the power and flexibility of a scripting language to add any other features, from GUIs, to modules for interaction with critical subsystems like the dome or guider, to an API for networked clients of a new instrument (e.g., DESI). MPK is designed for long-term maintainability: it runs on a stock computer and Linux OS, and uses only standard, open source libraries, except for commercial software that comes with source code in ANSI C/C++. We discuss the technical details of how MPK combines the Reflexxes motion library with the TCSpk/TPK pointing library to generically handle any motion requests, from slews to offsets to sidereal or non-sidereal tracking. We show how MPK calculates when the servos have reached a steady state. We also discuss our TPOINT modeling strategy and report performance results.
Software Engineering
Revisiting software specification and design for large astronomy projects
Show abstract
The separation of science and engineering in the delivery of software systems overlooks the true nature of the problem being solved and the organization that will solve it. A systems engineering approach to managing the requirements flow between these two groups, as between a customer and a contractor, has been used with varying degrees of success by well-known entities such as the U.S. Department of Defense. However, treating science as the customer and engineering as the contractor fosters unfavorable consequences that could be avoided, and misses opportunities. For example, the "problem" being solved is only partially specified through the requirements generation process, since that process focuses on the detailed specification guiding the parties to a technical solution. Equally important is the portion of the problem that will be solved through the definition of processes and the staff interacting through them. This interchange between people and processes is often underrepresented and under-appreciated. By concentrating on the full problem and collaborating on a strategy for its solution, a science-implementing organization can realize the benefits of driving towards common goals (not just requirements) and a cohesive solution to the entire problem. The initial phase of any project, when well executed, is often the most difficult yet most critical, and thus it is essential to employ a methodology that reinforces collaboration and leverages the full suite of capabilities within the team. This paper describes an integrated approach to specifying the needs induced by a problem and the design of its solution.
Software requirements flow-down and preliminary software design for the G-CLEF spectrograph
Show abstract
The Giant Magellan Telescope (GMT)-Consortium Large Earth Finder (G-CLEF) is a fiber-fed, precision radial velocity (PRV) optical echelle spectrograph that will be the first-light instrument on the GMT. The G-CLEF instrument device control subsystem (IDCS) provides software control of the instrument hardware, including the active feedback loops that are required to meet the G-CLEF PRV stability requirements. The IDCS is also tasked with providing operational support packages that include data reduction pipelines and proposal preparation tools. A formal but ultimately pragmatic approach is being used to establish a complete and correct set of requirements for both the G-CLEF device control and operational support packages. The device control packages must integrate tightly with the state-machine-driven software and controls reference architecture designed by the GMT Organization. A model-based systems engineering methodology is being used to develop a preliminary design that meets these requirements. Through this process we have identified some lessons that have general applicability to the development of software for ground-based instrumentation. For example, tasking an individual with overall responsibility for science/software/hardware integration is a key step to ensuring effective integration between these elements. An operational concept document that includes detailed routine and non-routine operational sequences should be prepared in parallel with the hardware design process to tie together these elements and identify any gaps. Appropriate time-phasing of the hardware and software design phases is important, but revisions to driving requirements that impact software requirements and preliminary design are inevitable. Such revisions must be carefully managed to ensure efficient use of resources.
Software framework for automatic learning of telescope operation
Show abstract
The “Gran Telescopio de Canarias” (GTC) is an optical-infrared 10-meter segmented-mirror telescope at the ORM observatory in the Canary Islands (Spain). The GTC Control System (GCS) is a distributed object- and component-oriented system based on RT-CORBA, and it is responsible for the operation of the telescope, including its instrumentation. The current development state of GCS is mature and fully operational. On the one hand, telescope users such as PIs implement the sequences of observing modes of future scientific instruments that will be installed on the telescope, and operators, in turn, design their own sequences for maintenance. On the other hand, engineers develop new components that provide new functionality required by the system. This large effort can be minimized, and costs reduced, especially if one considers that maintenance is the most expensive phase of the software life cycle. Could we design a system that allows the progressive assimilation of sequences of operation and maintenance of the telescope, through an automatic self-programming system, so that it can evolve from a component-oriented organization to a service-oriented organization? One possible way to achieve this is to use mechanisms of learning and knowledge consolidation to reduce to a minimum the effort of transforming the specifications of the different telescope users into operational deployments. This article proposes a framework for solving this problem based on the combination of the following tools: data mining, self-adaptive software, code generation, refactoring based on metrics, Hierarchical Agglomerative Clustering and Service Oriented Architectures.
Can your software engineer program your PLC?
Show abstract
The use of Programmable Logic Controllers (PLCs) in the control of large physics experiments is ubiquitous1, 2, 3. The programming of these controllers is normally the domain of engineers with a background in electronics; this paper introduces PLC program development from the software engineer's perspective. PLC programs provide the link between control software running on PC-architecture systems and physical hardware controlled and monitored by digital and analog signals. The higher-level software running on the PC is typically responsible for accepting operator input and from this deciding when and how the hardware connected to the PLC is controlled. The PLC accepts demands from the PC, considers the current state of its connected hardware and, if correct to do so (based upon interlocks or other constraints), adjusts its hardware output signals appropriately for the PC's demands. A published ICD (Interface Control Document) defines the PLC memory locations available to be written and read by the PC to control and monitor the hardware. Historically the method of programming PLCs has been ladder diagrams that closely resemble circuit diagrams; however, PLC manufacturers nowadays also provide, and promote, the use of higher-level programming languages4. Based on techniques used in the development of high-level PC software to control PLCs for multiple telescopes, this paper examines the development of PLC programs to operate the hardware of a medical cyclotron beamline controlled from a PC using the Experimental Physics and Industrial Control System (EPICS), which is also widely used in telescope control5, 6, 7. The PLC used is the new-generation Siemens S7-1200, programmed using Siemens' Pascal-based Structured Control Language (SCL), which is their implementation of Structured Text (ST). The approach described is that of a software engineer, utilising the Siemens Totally Integrated Automation (TIA) Portal integrated development environment (IDE) to create modular PLC programs based upon reusable functions capable of being unit tested without the PLC connected to hardware. Emphasis has been placed on designing an interface between EPICS and SCL that enforces correct operation of the hardware through a stringent separation of PC-accessible PLC memory and hardware I/O addresses used only by the PLC. The paper also introduces the method used to automate the creation, from the same source document, of the PLC memory structure (tag) definitions (defining memory used to access hardware I/O and that accessed by the PC) and of the PC program data structures (EPICS database records) used to access the permitted PLC addresses. From direct experience, this paper demonstrates the advantages of PLC program development being shared between electronic and software engineers, enabling the use of the most appropriate processes from the perspectives of both the hardware and the higher-level software used to control it.
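The idea of generating both sides of the interface from one source document can be pictured with a small generator script. Everything below is a simplified assumption for illustration: the CSV layout, the emitted tag syntax and the EPICS record fields are hypothetical and do not reproduce the authors' actual ICD format or tooling.

```python
import csv

# Assumed ICD columns: name, plc_address, epics_record_type, description
with open("icd.csv") as fh, \
     open("plc_tags.txt", "w") as tags, \
     open("ioc.db", "w") as db:
    for row in csv.DictReader(fh):
        # PLC side: one tag declaration per signal (illustrative syntax only).
        tags.write(f"{row['name']} AT {row['plc_address']} : REAL; // {row['description']}\n")

        # EPICS side: one database record per signal, referencing the same address.
        db.write(f'record({row["epics_record_type"]}, "{row["name"]}") {{\n')
        db.write(f'    field(DESC, "{row["description"]}")\n')
        db.write(f'    field(INP,  "{row["plc_address"]}")\n')
        db.write("}\n\n")
```

Because both outputs come from the same rows, the PC and PLC sides cannot silently drift apart, which is the main benefit the abstract describes.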
Data Processing and Pipelines II
Integrated data analysis in the age of precision spectroscopy: the ESPRESSO case
Show abstract
The Echelle SPectrograph for Rocky Exoplanets and Stable Spectral Observations (ESPRESSO) is an ultra-stable spectrograph for the coudé-combined focus of the VLT. With its unprecedented capabilities (resolution up to ~200,000, wavelength range from 380 to 780 nm, centimeter-per-second precision in wavelength calibration), ESPRESSO is a prime example of the now spreading "science machine" concept: a fully integrated system carefully designed to perform direct scientific measurements on the data, within minutes of the execution of the observations. This approach is motivated by the very specific science cases of the instrument (the search for terrestrial exoplanets with the radial velocity method; the measurement of the variation of fundamental constants using the spectral signatures of the inter-galactic medium) and is achieved by a dedicated tool for spectral analysis, the data analysis software or DAS, targeted at both stellar and quasar spectra. In this paper, we describe the characteristics and performance of the DAS, with particular emphasis on the novel algorithms for stellar and quasar analysis (continuum fitting and interpretation of the absorption features).
RabbitQR: fast and flexible big data processing at LSST data rates using existing, shared-use hardware
Show abstract
Processing astronomical data to science readiness was and remains a challenge, in particular in the case of multi-detector instruments such as wide-field imagers. One such instrument, the WIYN One Degree Imager, is available to the astronomical community at large and, in order to be scientifically useful to its varied user community on a short timescale, provides its users fully calibrated data in addition to the underlying raw data. However, time-efficient re-processing of the often large datasets with improved calibration data and/or software requires more than just a large number of CPU cores and disk space. This is particularly relevant if all computing resources are general purpose and shared with a large number of users, as in a typical university setup. Our approach to addressing this challenge is a flexible framework combining the best of both high-performance (large number of nodes, internal communication) and high-throughput (flexible/variable number of nodes, no dedicated hardware) computing. Based on the Advanced Message Queuing Protocol, we developed a Server-Manager-Worker framework. In addition to the server directing the workflow and the workers executing the actual work, the manager maintains a list of available workers, adds and/or removes individual workers from the worker pool, and re-assigns workers to different tasks. This provides the flexibility of optimizing the worker pool for the current task and workload, improves load balancing, and makes the most efficient use of the available resources. We present performance benchmarks and scaling tests, showing that, today, and using existing, commodity shared-use hardware, we can process data with throughputs (including data reduction and calibration) approaching those expected in the early 2020s for future observatories such as the Large Synoptic Survey Telescope.
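A minimal sketch of an AMQP worker in the spirit of such a Server-Manager-Worker framework is shown below, using the widely used pika client. The broker host, queue name and message format are assumptions for illustration, not the actual RabbitQR implementation.

```python
import json
import pika

# Placeholder broker host and queue name.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="broker.example"))
channel = connection.channel()
channel.queue_declare(queue="reduction_tasks", durable=True)
channel.basic_qos(prefetch_count=1)  # give each worker one task at a time

def on_task(ch, method, properties, body):
    task = json.loads(body)
    # ... run the reduction/calibration step described by `task` here ...
    print("processed", task.get("exposure_id"))
    # Acknowledge only after the work is done, so an unfinished task is re-queued
    # if this worker dies or is removed from the pool by the manager.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="reduction_tasks", on_message_callback=on_task)
channel.start_consuming()
```

Because workers only pull one message at a time and acknowledge on completion, the pool can be grown or shrunk at any moment without losing tasks, which is what makes the flexible/variable-node model practical on shared hardware.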
GAVIP: a platform for Gaia data analysis
Show abstract
Gaia is a major European Space Agency (ESA) astrophysics mission designed to map and analyse 10^9 stars, ultimately generating more than 1 petabyte of data products. As Gaia data become publicly available and reach a wider audience, there is an increasing need to facilitate the further use of Gaia products without needing to download large datasets. The Gaia Added Value Interface Platform (GAVIP) is designed to address this challenge by providing an innovative platform within which scientists can submit and deploy code, packaged as "Added Value Interfaces" (AVIs), which will be executed close to the data. Deployed AVIs and associated outputs may also be made available to other GAVIP platform users, thus providing a mechanism for scientific experiment reproducibility. This paper describes the capabilities and features of GAVIP.
Poster Session: Cyberinfrastructure, High-performance and Parallel Computing, Big Data
Is the work flow model a suitable candidate for an observatory supervisory control infrastructure?
Show abstract
This paper reports on an early investigation of using the workflow model for observatory infrastructure software. We researched several workflow engines and identified three for further detailed study: Bonita BPM, Activiti and Taverna. We discuss the business process model and how it relates to observatory operations, and identify a pathfinder exercise to further evaluate the applicability of these paradigms.
WAS: the data archive for the WEAVE spectrograph
Show abstract
The WAS (WEAVE Archive System) is a software architecture for archiving and delivering the data releases of the WEAVE instrument at the WHT (William Herschel Telescope). The WEAVE spectrograph will be mounted on the 4.2-m WHT and will provide millions of spectra in a 5-year program starting in early 2018. Access and retrieval of information will be through its dedicated archive, the WEAVE Archive System (WAS), which will be developed and maintained at the TNG premises on the same island as the WHT. Its design is built around the main axes of scalability, virtualization, and high availability. We present here the first performance results on a simulated data set of 20M spectra, using different architectures and hardware choices.
The very high energy source catalog at the ASI Science Data Center
Show abstract
The increasing number of Very High Energy (VHE) sources discovered by the current generation of Cherenkov telescopes has made particularly relevant the creation of a dedicated source catalog, as well as the cross-correlation of VHE and lower-energy band data in a multi-wavelength framework. The "TeGeV Catalog", hosted at the ASI Science Data Center (ASDC), is a catalog of VHE sources detected by ground-based Cherenkov detectors. The TeGeVcat collects all the relevant publicly available information about the observed GeV/TeV sources. The catalog also contains information about public light curves, while the available spectral data are included in the ASDC SED Builder tool, directly accessible from the TeGeV catalog web page. In this contribution we report a comprehensive description of the catalog and the related tools.
Telemetry correlation and visualization at the Large Binocular Telescope Observatory
Show abstract
Achieving highly efficient observatory operations requires continuous evaluation and improvement of facility and instrumentation metrics. High-quality metrics require a foundation of robust and complete observatory telemetry. At the Large Binocular Telescope Observatory (LBTO), a variety of telemetry-capturing mechanisms exist, but few tools have thus far been created to facilitate studies of the data. In an effort to make all observatory telemetry data easy to use and broadly available, we have developed a suite of tools using in-house development and open source applications. This paper explores our strategies for consolidating, parameterizing, and correlating any LBTO telemetry data to achieve easily available, web-based two- and three-dimensional time-series data visualization.
The ALMA high speed optical communication link is here: an essential component for reliable present and future operations
Show abstract
Announced in 2012, started in 2013 and completed in 2015, the ALMA high-bandwidth communication system has become a key factor in achieving the operational and scientific goals of ALMA. This paper summarizes the technical, organizational, and operational goals of the ALMA Optical Link Project, focused on the creation and operation of an effective and sustainable communication infrastructure to connect the ALMA Operations Support Facility and Array Operations Site, both located in the Atacama Desert in the northern region of Chile, with the point of presence of REUNA in Antofagasta, about 400 km away, and from there to the Santiago Central Office in the Chilean capital through the optical infrastructure created by the EC-funded EVALSO project, now an integral part of the REUNA backbone. This new infrastructure, completed in 2014 and now operated on behalf of ALMA by REUNA, the Chilean National Research and Education Network, uses state-of-the-art technologies, like dark fiber from newly built cables and DWDM transmission, extending the reach of high-capacity communication to the remote region where the Observatory is located. The paper also reports on the results obtained during the first year and a half of testing and operation, during which different operational setups have been used for data transfer, remote collaboration, etc. Finally, the authors present a forward look at its impact on both the future scientific development of the Chajnantor Plateau, where many installations are (and will be) located, and the potential long-term development of the Chilean scientific backbone.
Operational logs analysis at ALMA observatory based on ELK stack
Show abstract
During operations, the ALMA observatory generates a huge amount of logs which contain valuable information, not only related to specific failures but also for long-term performance analysis. We implemented a big data solution based on Elasticsearch, Logstash and Kibana, configured as a decoupled system with zero impact on existing operations. It is able to keep more than six months of operation logs online. In this paper, we describe this infrastructure, the applications built on top of it, and the problems that we faced during its implementation.
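As an example of the kind of query such an infrastructure enables, the sketch below counts error-level log entries per antenna over the last 24 hours through the official Elasticsearch Python client. The index pattern and field names are hypothetical and do not reflect the actual ALMA log schema; the exact client call signature also varies slightly between client versions.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://elk.example:9200"])

# Assumed index pattern and field names -- illustrative only.
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"loglevel": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-24h"}}},
            ]
        }
    },
    "aggs": {"per_antenna": {"terms": {"field": "antenna.keyword", "size": 70}}},
    "size": 0,  # we only need the aggregation, not the documents themselves
}

result = es.search(index="operations-logs-*", body=query)
for bucket in result["aggregations"]["per_antenna"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```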
The Open Microscopy Environment: open image informatics for the biological sciences
Colin Blackburn,
Chris Allan,
Sébastien Besson,
et al.
Show abstract
Despite significant advances in biological imaging and analysis, major informatics challenges remain unsolved: file formats are proprietary, storage and analysis facilities are lacking, as are standards for sharing image data and results. While the open FITS file format is ubiquitous in astronomy, astronomical imaging shares many challenges with biological imaging, including the need to share large image sets using secure, cross-platform APIs, and the need for scalable applications for processing and visualization. The Open Microscopy Environment (OME) is an open-source software framework developed to address these challenges. OME tools include: an open data model for multidimensional imaging (OME Data Model); an open file format (OME-TIFF) and library (Bio-Formats) enabling free access to images (5D+) written in more than 145 formats from many imaging domains, including FITS; and a data management server (OMERO). The Java-based OMERO client-server platform comprises an image metadata store, an image repository, visualization and analysis by remote access, allowing sharing and publishing of image data. OMERO provides a means to manage the data through a multi-platform API. OMERO’s model-based architecture has enabled its extension into a range of imaging domains, including light and electron microscopy, high content screening, digital pathology and recently into applications using non-image data from clinical and genomic studies. This is made possible using the Bio-Formats library. The current release includes a single mechanism for accessing image data of all types, regardless of original file format, via Java, C/C++ and Python and a variety of applications and environments (e.g. ImageJ, Matlab and R).
Advanced GLS map-making for the Herschel’s photometers
Show abstract
We discuss Generalised Least Squares (GLS) map-making for the data of the Herschel satellite’s photometers, which is a difficult task, due to the many disturbances affecting the data, and requires appropriate pre- and post-processing. Taking an existing map-maker as a reference, we propose several advanced techniques, which can improve both the quality of the estimate and the efficiency of the software. As a main contribution we discuss two disturbances, which have not been studied yet and may be detrimental to the image quality. The first is a data shift, due to delays in the timing system or in the processing chain. The second is a random noise, termed pixel noise, due to the jitter and the approximation of the pointing information. For both these disturbances, we develop a mathematical model and propose a compensation method. As an additional contribution, we note that the performance can be improved by properly adapting the algorithm parameters to the data being processed and discuss an automatic setting method. We also provide a rich set of examples and experiments, illustrating the impact of the proposed techniques on the image quality and the execution speed.
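For reference, the GLS estimate referred to above can be written in its standard textbook form for a linear data model; this is the generic expression, not necessarily the exact formulation adopted by the authors. Writing the time-ordered data as d = Pm + n, where P is the pointing matrix, m the map and n the noise with covariance N, the GLS map estimate is

```latex
\hat{m} = \left( P^{\mathsf{T}} N^{-1} P \right)^{-1} P^{\mathsf{T}} N^{-1} d
```

In this picture, the pre-processing and compensation steps described in the abstract can be thought of as acting on the data d and on the noise model N before the estimate is computed.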
Data reduction software for the Mid-Infrared E-ELT Imager and Spectrograph (METIS) for the European Extremely Large Telescope (E-ELT)
Show abstract
We present the current status of the design of the science data reduction pipeline and the corresponding dataflow system for METIS. It will be one of the first three instruments for the E-ELT and will work at wavelengths between 3 and 19 μm (L/M/N/Q1 bands). We will deliver software which is compliant with the standards of the European Southern Observatory (ESO), and will employ state-of-the-art techniques to produce science-grade data, master calibration frames and quality control parameters, and to handle instrument effects. The instrument currently offers a wealth of observing modes that are listed in this paper. Data reduction for a ground-based instrument at these wavelengths is particularly challenging because of the massive influence of thermal radiation from various sources. We give a comprehensive overview of the dataflow system for the imaging modes that the instrument offers and discuss a single-recipe versus a multi-recipe approach for the different observing modes for imaging.
A distributed infrastructure for publishing VO services: an implementation
Show abstract
This contribution describes both the design and the implementation details of a new solution for publishing VO services, highlighting its maintainable, distributed, modular and scalable architecture. Indeed, the new publisher is multi-threaded and multi-process: multiple instances of the modules can run on different machines to ensure high performance and high availability, and this is true both for the interface modules of the services and for the back-end data access modules. The system uses message passing to let its components communicate through an AMQP message broker, which can itself be distributed to provide better scalability and availability.
The HARPS-N archive through a Cassandra, NoSQL database suite?
Show abstract
The TNG-INAF is developing the science archive for the WEAVE instrument. The underlying architecture of the archive is based on a non-relational database, more precisely on an Apache Cassandra cluster, which uses NoSQL technology. In order to test and validate this architecture, we created a local archive which we populated with all the HARPS-N spectra collected at the TNG since the instrument's start of operations in mid-2012, and we developed tools for the analysis of this data set. The HARPS-N data set is two orders of magnitude smaller than WEAVE's, but we want to demonstrate the ability to walk through a complete data set and produce scientific output as valuable as that produced by an ordinary pipeline, though without directly accessing the FITS files. The analytics are done with Apache Solr and Spark, and on a relational PostgreSQL database. As an example, we produce observables such as metallicity indexes for the targets in the archive and compare the results with those coming from the HARPS-N regular data reduction software. The aim of this experiment is to explore the viability of a high-availability cluster and a distributed NoSQL database as a platform for complex scientific analytics on a large data set, which will then be ported to the WEAVE Archive System (WAS) that we are developing for the WEAVE multi-object fiber spectrograph.
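To make the NoSQL approach concrete, the sketch below creates a simple spectra table and queries it with the DataStax cassandra-driver for Python. The keyspace, table and column names are invented for illustration and do not reflect the actual HARPS-N archive schema.

```python
from cassandra.cluster import Cluster

cluster = Cluster(["cassandra1.example", "cassandra2.example"])
session = cluster.connect()

# Hypothetical keyspace/table, partitioned by target so that all observations
# of one star live on the same set of nodes.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS harpsn
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS harpsn.spectra (
        target text, obs_time timestamp, snr double, metallicity double,
        PRIMARY KEY (target, obs_time)
    )
""")

# Retrieve a time series of a derived observable for one target.
rows = session.execute(
    "SELECT obs_time, metallicity FROM harpsn.spectra WHERE target = %s",
    ("HD189733",))
for row in rows:
    print(row.obs_time, row.metallicity)

cluster.shutdown()
```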
Virtualizing observation computing infrastructure at Subaru Telescope
Show abstract
Subaru Telescope, an 8-meter class optical telescope located in Hawaii, has been using a high-availability commodity cluster as the platform for our Observation Control System (OCS). Until recently, we followed a tried-and-tested practice of running the system under a native (Linux) OS installation with dedicated attached RAID systems, and a strict cluster deployment model to facilitate failover handling of hardware problems.1, 2 Following the apparent benefits of virtualizing (i.e. running in Virtual Machines, VMs) many of the non-observation-critical systems at the base facility, we recently began to explore the idea of migrating other parts of the observatory's computing infrastructure to virtualized systems, including the summit OCS, data analysis systems and even the front ends of various Instrument Control Systems. In this paper we describe our experience with the initial migration of the Observation Control System to virtual machines running on the cluster, using a new-generation tool, Ansible, to automate installation and deployment. This change has significant impacts on ease of cluster maintenance, upgrades, snapshots/backups, risk management, availability, performance, cost savings and energy use. We discuss some of the trade-offs involved in this virtualization and some of the impacts on the above-mentioned areas, as well as the specific techniques we are using to accomplish the changeover, simplify installation and reduce management complexity.
Information and Communications Technology (ICT) Infrastructure for the ASTRI SST-2M telescope prototype for the Cherenkov Telescope Array
Show abstract
The Cherenkov Telescope Array (CTA) represents the next generation of ground-based observatories for very high energy gamma-ray astronomy. The CTA will consist of two arrays at two different sites, one in the northern and one in the southern hemisphere. The current CTA design foresees, at the southern site, the installation of many tens of imaging atmospheric Cherenkov telescopes of three different classes, namely large, medium and small, so defined in relation to their mirror area; the northern hemisphere array would consist of a few tens of the two larger telescope types. The Italian National Institute for Astrophysics (INAF) is developing the Cherenkov small-sized ASTRI SST-2M end-to-end prototype telescope within the framework of the international Cherenkov Telescope Array (CTA) project. The ASTRI prototype has been installed at the INAF observing station located at Serra La Nave on Mt. Etna, Italy. Furthermore, a mini-array composed of nine ASTRI telescopes has been proposed for installation at the southern CTA site. Among the several different infrastructures belonging to the ASTRI project, the Information and Communication Technology (ICT) equipment is dedicated to computing and data storage operations, as well as to the control of the entire telescope, and it is designed to achieve maximum efficiency for all performance requirements. A complete and stand-alone computer centre has therefore been designed and implemented. The goal is to obtain optimal ICT equipment, with an adequate level of redundancy, that can be scaled up for the ASTRI mini-array, taking into account the necessary control, monitoring and alarm system requirements. In this contribution we present the ICT equipment currently installed at the Serra La Nave observing station where the ASTRI SST-2M prototype will be operated. The computer centre and the control room are described, with particular emphasis on the Local Area Network scheme, the computing and data storage system, and the telescope control and monitoring.
Radio data archiving system
Show abstract
Radio astronomical data models are becoming very complex because of the huge range of possible instrumental configurations available with modern radio telescopes. What in the past were the frontiers of data formats in terms of efficiency and flexibility are now evolving, with new strategies and methodologies enabling the persistence of very complex, hierarchical and multi-purpose information. Such an evolution of data models and data formats requires new data archiving techniques in order to guarantee data preservation, following the directives of the Open Archival Information System and of the International Virtual Observatory Alliance for data sharing and publication. Currently, various formats (FITS, MBFITS, VLBI XML description files and ancillary files) of data acquired with the Medicina and Noto radio telescopes can be stored and handled by a common Radio Archive, which is planned to be released to the (inter)national community by the end of 2016. This state-of-the-art archiving system for radio astronomical data aims to delegate to the software, as much as possible, decisions about how and where the descriptors (metadata) are saved, while users perform user-friendly queries that the web interface translates into complex interrogations of the database to retrieve data. In this way, the Archive is ready to be Virtual Observatory compliant and as user-friendly as possible.
Poster Session: Observatory, Telescope and Instrumentation Control
Pre-selecting muon events in the camera server of the ASTRI telescopes for the Cherenkov Telescope Array
Show abstract
The Cherenkov Telescope Array (CTA) represents the next generation of ground-based observatories for very high energy gamma-ray astronomy. The CTA will consist of two arrays at two different sites, one in the northern and one in the southern hemisphere. The current CTA design foresees, at the southern site, the installation of many tens of imaging atmospheric Cherenkov telescopes of three different classes, namely large, medium, and small, so defined in relation to their mirror area; the northern hemisphere array would consist of a few tens of the two larger telescope types. The telescopes will be equipped with cameras composed either of photomultipliers or silicon photomultipliers, and with different trigger and read-out electronics. In such a scenario, several different methods will be used for the telescopes' calibration. Nevertheless, the optical throughput of any CTA telescope, independently of its type, can be calibrated by analyzing the characteristic image produced by local, highly energetic atmospheric muons, which induce the emission of Cherenkov light that is imaged as a ring onto the focal plane if their impact point is relatively close to the telescope's optical axis. Large-sized telescopes will be able to detect useful muon events under stereo coincidence, and such stereo muon events will be sent directly to the central CTA array data acquisition pipeline to be analyzed. For the medium- and small-sized telescopes, due to their smaller mirror area and large inter-telescope distance, the stereo coincidence rate will tend to zero; nevertheless, muon events will be detected by single telescopes, which must therefore be able to identify them as possible calibration candidates even if no stereo coincidence is available. This is the case for the ASTRI telescopes, proposed as pre-production units of the small-size array of the CTA, which are able to detect muon events during regular data taking without requiring any dedicated trigger. We present two fast algorithms that efficiently use uncalibrated data to recognize useful muon events within the single ASTRI camera server, while keeping the number of proton-induced triggers as low as possible to avoid saturating the readout budget towards the central CTA data analysis pipeline.
Automatization of the guiding process in the GTC
Show abstract
The "Gran Telescopio Canarias" (GTC) is an optical-infrared 10-meter segmented mirror telescope at the Observatorio del Roque de los Muchachos (ORM) observatory in Canary Islands (Spain). The GTC Control System (GCS) is continuously evolving to enhance the operational efficiency. In this work we present the new GCS subsystem to automatize the guiding setup process, both for Fast Guiding and for Slow Guiding. A set of restrictions (including vignetting and photometric computations) is used to select the stars appropriate for guiding, and a merit function is used to choose the best one. Then, the system computes the optical configuration that fits best the selected star, automatically performs the guide star acquisition process and it closes the guide loop.
Wendelstein Observatory control software
Show abstract
LMU München operates an astrophysical observatory on Mt. Wendelstein. The 2m Fraunhofer telescope is equipped with a 0.5 x 0.5 square degree field-of-view wide-field camera and a 3-channel optical/NIR camera. Two fiber-coupled spectrographs and a wavefront sensor will be added in the near future. The observatory hosts a multitude of supporting hardware, i.e. allsky cameras, webcams, a meteo station, air conditioning, etc. All scientific hardware can be controlled through a single, central "Master Control Program" (MCP). At the last SPIE astronomy venue we presented the overall Wendelstein Observatory software concept. Here we explain the concept and implementation of the MCP as a multi-threaded Python daemon in the area of conflict between debuggability and Don't Repeat Yourself (DRY).
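A minimal sketch of how such a multi-threaded Python daemon can stay DRY: a common device-thread base class implements the polling loop once, and each piece of hardware overrides only its readout method. Class names and the status payload are invented for illustration; this is not the MCP code.

```python
import threading
import queue
import time

class DeviceThread(threading.Thread):
    """Common base class: one polling thread per hardware device (DRY)."""
    poll_interval = 1.0  # seconds

    def __init__(self, name, status_queue):
        super().__init__(name=name, daemon=True)
        self.status_queue = status_queue
        self._stop_event = threading.Event()

    def read_status(self):
        raise NotImplementedError  # each device overrides only this method

    def run(self):
        while not self._stop_event.is_set():
            self.status_queue.put((self.name, self.read_status()))
            time.sleep(self.poll_interval)

    def stop(self):
        self._stop_event.set()

class MeteoStation(DeviceThread):
    def read_status(self):
        return {"wind_ms": 3.2}  # placeholder for the real sensor readout

if __name__ == "__main__":
    statuses = queue.Queue()
    meteo = MeteoStation("meteo", statuses)
    meteo.start()
    # Central daemon loop: collect and dispatch status messages.
    for _ in range(3):
        print(statuses.get(timeout=5))
    meteo.stop()
```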
Integration of the instrument control electronics for the ESPRESSO spectrograph at ESO-VLT
Show abstract
ESPRESSO, the Echelle SPectrograph for Rocky Exoplanet and Stable Spectroscopic Observations for the ESO Very Large Telescope, is now in its integration phase. The large number of functions of this complex instrument are fully controlled by a Beckhoff PLC-based control electronics architecture. Four small cabinets and one large cabinet host the main electronic parts to control all the sensors, motorized stages and other analogue and digital functions of ESPRESSO. The Instrument Control Electronics (ICE) is built following the latest ESO standards and requirements. Two main PLC CPUs are used and are programmed through the dedicated Beckhoff TwinCAT software. The assembly, integration and verification phase of ESPRESSO, due to its distributed nature and the different geographical locations of the consortium partners, is quite challenging. After the preliminary assembly and testing of the electronic components at the Astronomical Observatory of Trieste and the testing of some electronics and software parts at ESO (Garching), the complete system for the control of the four Front End Unit (FEU) arms of ESPRESSO was fully assembled and tested in Merate (Italy) at the beginning of 2016. After these first tests, the system will be located at the Geneva Observatory (Switzerland) until the Preliminary Acceptance Europe (PAE) and finally shipped to Chile for the commissioning. This paper describes the integration strategy of the ICE work package of ESPRESSO and the hardware and software tests that have been performed, with an overall view of the experience gained during these project phases.
The ICT monitoring system of the ASTRI SST-2M prototype proposed for the Cherenkov Telescope Array
Show abstract
In the framework of the international Cherenkov Telescope Array (CTA) observatory, the Italian National Institute for Astrophysics (INAF) has developed a dual mirror, small sized, telescope prototype (ASTRI SST-2M), installed in Italy at the INAF observing station located at Serra La Nave, Mt. Etna. The ASTRI SST-2M prototype is the basis of the ASTRI telescopes that will form the mini-array proposed to be installed at the CTA southern site during its preproduction phase. This contribution presents the solutions implemented to realize the monitoring system for the Information and Communication Technology (ICT) infrastructure of the ASTRI SST-2M prototype. The ASTRI ICT monitoring system has been implemented by integrating traditional tools used in computer centers, with specific custom tools which interface via Open Platform Communication Unified Architecture (OPC UA) to the Alma Common Software (ACS) that is used to operate the ASTRI SST-2M prototype. The traditional monitoring tools are based on Simple Network Management Protocol (SNMP) and commercial solutions and features embedded in the devices themselves. They generate alerts by email and SMS. The specific custom tools convert the SNMP protocol into the OPC UA protocol and implement an OPC UA server. The server interacts with an OPC UA client implemented in an ACS component that, through the ACS Notification Channel, sends monitor data and alerts to the central console of the ASTRI SST-2M prototype. The same approach has been proposed also for the monitoring of the CTA onsite ICT infrastructures.
Challenges and strategies for the maintenance of the SKA Telescope Manager
Show abstract
The Square Kilometre Array (SKA) is an ambitious project aimed at building a radio telescope that will enable breakthrough science not possible with current facilities over the next 50 years. Because of this long expected operational period, the maintenance of the Telescope Manager (TM), the SKA Element responsible for the coordination of all the Elements composing the Telescope (e.g. Dishes for mid-frequency or Low-Frequency Aperture Arrays), plays a crucial role in the overall SKA operation. A challenge is represented by the technological evolution of hardware and software, which is rather fast nowadays: in the last 10 years alone, for instance, new operating systems have appeared, as well as new technologies for data storage and computation. Dealing with such a changing environment therefore requires a deep analysis in terms of maintenance. In spite of the importance of hardware maintenance for TM, its software maintenance is actually the real challenge, given that TM is a system almost entirely composed of software applications. In computer science, indeed, it is almost impossible to build software that does not need to be changed over time: new requirements emerge, old requirements change during the application lifetime, errors are discovered and performance must be improved. For all these reasons the management of software changes is critical to maintain the value of the software developed, especially for a complex system like SKA TM. In this paper the maintenance of both SKA TM hardware and software is presented with respect to the Operational (i.e. related to the Maintenance Process) and Organizational (i.e. related to Logistic Support) aspects.
The technical CCDs in ESPRESSO: usage, performances, and network requirements
Show abstract
The Echelle Spectrograph for Rocky Exoplanets and Stable Spectral Observations (ESPRESSO) requires active-loop stabilization of the light path from the telescope to the spectrograph, in order to achieve its centimeter-per-second precision goal. This task is accomplished by moving the mirrors placed along the light path by means of piezoelectric actuators. Two cameras are used to acquire the field and pupil images, and the required corrections are dynamically calculated and applied to the piezos. In this paper we discuss the camera usage, performance and network bandwidth requirements for the ESPRESSO scientific operations.
The SKA observation control system
Show abstract
The Square Kilometre Array (SKA) will be the world's most advanced radio telescope, designed to be many times more sensitive and hundreds of times faster at mapping the sky than today's best radio astronomy facilities. The scale and advanced capabilities of the SKA present technical challenges for co-ordinating and executing observations. This paper discusses the requirements placed on the SKA's observation sequencer - the Observation Execution Tool - and the functions it must perform. A design and prototype implementation of the Observation Execution Tool are presented, with initial results showing that a Python implementation using a message-driven component architecture could be capable of meeting the SKA's requirements.
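To illustrate what a message-driven component architecture can look like in Python, here is a minimal in-process sketch (not the actual Observation Execution Tool design): components communicate only through published messages, so a toy sequencer can turn a scheduling block into telescope commands without calling other components directly. Topic and field names are assumptions.

```python
import queue
import threading

class MessageBus:
    """Tiny in-process message bus: components interact only via published messages."""
    def __init__(self):
        self.topics = {}

    def subscribe(self, topic):
        q = queue.Queue()
        self.topics.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, payload):
        for q in self.topics.get(topic, []):
            q.put(payload)

def sequencer(bus, inbox):
    """Toy observation executor: turns a scheduling block into telescope commands."""
    sb = inbox.get()
    for scan in sb["scans"]:
        bus.publish("telescope_command", {"configure": scan})
    bus.publish("telescope_command", {"end": sb["id"]})

if __name__ == "__main__":
    bus = MessageBus()
    commands = bus.subscribe("telescope_command")
    sb_inbox = bus.subscribe("scheduling_block")
    threading.Thread(target=sequencer, args=(bus, sb_inbox), daemon=True).start()
    bus.publish("scheduling_block", {"id": "SB-001", "scans": ["calibrator", "target"]})
    for _ in range(3):
        print(commands.get(timeout=5))
```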
Rejecting harmonic vibrations at Gemini with real-time vibration tracking
Show abstract
Fighting vibrations on large telescopes is an arduous task. At Gemini, vibrations originating from cryogenic coolers have been shown to degrade the optical wavefront, in certain cases by as much as 40%. This paper discusses a general solution to vibration compensation by tracking the real time vibration state of the telescope and using M2 to apply corrections. Two approaches are then presented: an open loop compensation at M2 based on the signal of accelerometers at the M1 glass, and a closed loop compensation at M2 based on optical measurements from the wave front sensor. The paper elaborates on the pros and cons of each approach and the challenges faced during commissioning. A conclusion is presented with the final results of vibration tracking integrated with operations.
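As a sketch of the open-loop approach, the snippet below estimates the amplitude and phase of a vibration line of known frequency from accelerometer samples by linear least squares, and builds the counter-motion to apply at M2. The frequency, gain and sampling values are illustrative only, not the Gemini implementation.

```python
import numpy as np

def track_harmonic(times, accel, freq_hz):
    """Least-squares estimate of amplitude and phase of a vibration line of known frequency."""
    w = 2.0 * np.pi * freq_hz
    A = np.column_stack([np.cos(w * times), np.sin(w * times)])
    (c, s), *_ = np.linalg.lstsq(A, accel, rcond=None)
    return np.hypot(c, s), np.arctan2(-s, c)     # amplitude, phase

def m2_correction(t, amplitude, phase, freq_hz, gain=1.0):
    """Open-loop counter-motion to apply at M2 (sign flipped to cancel the vibration)."""
    return -gain * amplitude * np.cos(2.0 * np.pi * freq_hz * t + phase)

# Example: a 55 Hz cryocooler line sampled at 1 kHz (synthetic accelerometer data).
t = np.arange(0.0, 1.0, 1e-3)
signal = 0.3 * np.cos(2 * np.pi * 55.0 * t + 0.7) + 0.02 * np.random.randn(t.size)
amp, ph = track_harmonic(t, signal, 55.0)
print(amp, ph, (signal + m2_correction(t, amp, ph, 55.0)).std())
```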
LSST OCS status and plans
Show abstract
This paper reports on progress and plans for all meta-components of the Large Synoptic Survey Telescope (LSST) observatory control system (OCS). After an introduction to the scope of the OCS we discuss each meta-component in alphabetical order: application, engineering and facility database, maintenance, monitor, operator-remote, scheduler, sequencer, service abstraction layer and telemetry. We discuss these meta-components and their relationship with the overall control and operations strategy for the observatory. At the end of the paper, we review the timeline and planning for the delivery of these items.
Target allocation and prioritized motion planning for MIRADAS probe arms
Show abstract
The Mid-resolution InfRAreD Astronomical Spectrograph (MIRADAS) is a near-infrared multi-object echelle spectrograph for the 10.4-meter Gran Telescopio Canarias. The instrument has 12 pickoff mirror optics on cryogenic probe arms, enabling it to concurrently observe up to 12 user-defined objects located in its field-of-view. In this paper, a method to compute collision-free trajectories for the arms of MIRADAS is presented. We propose a sequential approach with two stages: target-to-arm assignment and motion planning. For the former, we present a model based on linear programming that allocates targets according to their associated priorities. The model is constrained by two matrices specifying the targets’ reachability and the incompatibilities among each pair of feasible target-arm pairs. This model has been implemented and experiments show that it is able to determine assignments in less than a second. Regarding the second step, we present a prioritized approach which uses sampling-based roadmaps containing a variety of paths. The motions along a given path are coordinated with the help of a depth-first search algorithm. Paths are sequentially explored according to how promising they are, and those not leading to a solution are skipped. This motion planning approach has been implemented considering real probe arm geometries and joint velocities. Experimental results show that the method achieves good performance in scenarios presenting two different types of conflicts.
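A small linear-programming sketch of a priority-driven target-to-arm assignment, using scipy's linprog; for brevity it encodes only the reachability matrix and the at-most-one constraints, omitting the pair-pair incompatibility matrix mentioned in the abstract. All data are toy values, and this is not the MIRADAS model itself.

```python
import numpy as np
from scipy.optimize import linprog

def assign_targets(priority, reachable):
    """
    Priority-driven target-to-arm assignment posed as a linear program.
    priority[t, a]  : science priority of assigning target t to arm a
    reachable[t, a] : True if arm a can physically reach target t
    Returns a list of (target, arm) pairs.
    """
    n_t, n_a = priority.shape
    pairs = [(t, a) for t in range(n_t) for a in range(n_a) if reachable[t, a]]
    c = np.array([-priority[t, a] for t, a in pairs])   # maximise total priority
    A_ub, b_ub = [], []
    for t in range(n_t):                                # each target: at most one arm
        A_ub.append([1.0 if pt == t else 0.0 for pt, _ in pairs]); b_ub.append(1.0)
    for a in range(n_a):                                # each arm: at most one target
        A_ub.append([1.0 if pa == a else 0.0 for _, pa in pairs]); b_ub.append(1.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * len(pairs), method="highs")
    return [pairs[i] for i, x in enumerate(res.x) if x > 0.5]

# Toy example: 3 targets, 2 arms.
prio = np.array([[3.0, 1.0], [2.0, 2.0], [0.5, 4.0]])
reach = np.array([[True, False], [True, True], [False, True]])
print(assign_targets(prio, reach))   # e.g. [(0, 0), (2, 1)]
```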
GHOST and GIAPI: experience using Gemini's new instrument control system framework
Show abstract
The new Gemini High Resolution Optical Spectrograph (GHOST) will be controlled with software developed against the new Gemini software framework - the Gemini Instrument Application Programmer Interface (GIAPI). The developers describe their experience using this framework and compare it to control systems developed for earlier Gemini instruments using the original Gemini Core Instrument Control System (CICS) framework.
Using muon rings for the optical calibration of the ASTRI telescopes for the Cherenkov Telescope Array
Show abstract
High-energy muons constitute a very useful tool to calibrate the total optical throughput of any telescope of the Cherenkov Telescope Array (CTA). Differences in precision and efficiency can however be present due to the variety of telescope types and sizes. In this contribution we present some preliminary results on simulated muon ring images collected by the ASTRI small sized dual-mirror (SST-2M) telescope in the basic configuration installed in Italy at the Serra La Nave observing station. Using 6% of the detected muon events, ASTRI SST-2M is able to calibrate the optical throughput with muons down to a degradation of the optical efficiency of 30%. Moreover, its precision in reconstructing the muon arrival direction is about one camera pixel, and its error on the reconstructed ring radius is ~ 6.3%. The adopted procedures will be tested and validated with real data acquired by the prototype after the commissioning phase. The nine telescopes that will form the ASTRI mini-array, proposed to be installed at the final CTA southern site during the pre-production phase, will improve these results thanks to the higher detection efficiency and the lower optical cross-talk and after-pulse of their updated silicon photomultipliers.
INO340 telescope mount control system analysis and design
Show abstract
INO340 is the telescope of the Iranian National Observatory, an Alt-Az reflecting optical telescope with a 3.4 m main mirror diameter. At the moment, the conceptual design of the telescope control system (TCS) has been finished and the detailed design is being developed. A distributed control system configuration has been selected for the architecture of the TCS design. The TCS is responsible for the control of the telescope structure with its mirrors and comprises 3 major subsystems: TCSS, MCS and AOS. All subsystems of the TCS are designed with an adequate safety subsystem. This paper presents the TCS architecture of INOCS, and then focuses on the requirements and the major functionalities of the MCS. We provide different analyses of the MCS using related parameters such as wind effect, encoder resolution, etc. Based on the simulation results, the optimum sets of parameters and functions of the different modules are derived. The Alt balancing and mirror cover sub-systems are also briefly presented. Finally, we present the evaluation results of the MCS design based on the pre-defined telescope requirements.
Status, upgrades, and advances of RTS2: the open source astronomical observatory manager
Show abstract
RTS2 is an open source observatory control system. Under development since early 2000, it has continued to receive new features over the last two years. RTS2 is a modular, network-based distributed control system, featuring telescope drivers with advanced tracking and pointing capabilities, fast camera drivers, and high-level modules for the "business logic" of the observatory, connected to an SQL database. Running on all continents of the planet, it has accumulated considerable experience controlling partial or complete observatory setups.
Using Robotic Operating System (ROS) to control autonomous observatories
Francesc Vilardell,
Gabriel Artigues,
Josep Sanz,
et al.
Show abstract
Astronomical observatories are complex systems requiring the integration of numerous devices into a common platform. We present here the first steps to integrate the popular Robotic Operating System (ROS) into the control of a fully autonomous observatory. The observatory is also equipped with a decision-making procedure that can automatically react to a changing environment (such as weather events). The results obtained so far have shown that the automation of a small observatory can be greatly simplified when using ROS, and made more robust through the implementation of our decision-making algorithms.
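A minimal rospy node sketch of the kind of decision-making reaction described above: subscribe to a weather topic and command the dome to close when the wind exceeds a limit. The topic names, message types and threshold are assumptions, not the authors' configuration.

```python
#!/usr/bin/env python
# Minimal ROS node sketch: close the dome when the wind speed exceeds a limit.
import rospy
from std_msgs.msg import Float32, String

WIND_LIMIT_MS = 15.0   # hypothetical wind limit in m/s

def on_wind(msg, dome_pub):
    if msg.data > WIND_LIMIT_MS:
        dome_pub.publish(String(data="close"))   # simple decision-making reaction

if __name__ == "__main__":
    rospy.init_node("observatory_manager")
    dome_pub = rospy.Publisher("/dome/command", String, queue_size=10)
    rospy.Subscriber("/weather/wind_speed", Float32, on_wind, callback_args=dome_pub)
    rospy.spin()
```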
On-board target acquisition for CHEOPS
Show abstract
The CHaracterising ExOPlanet Satellite (CHEOPS) is the first ESA S-class and exoplanetary follow-up mission, headed for launch in 2018. It will perform ultra-high-precision photometry of stars hosting confirmed exoplanets on a 3-axis stabilised sun-synchronous orbit that is optimised for uninterrupted observations at minimum stray light and thermal variations. Nevertheless, due to the satellite's structural design, the alignment of the star trackers and the payload instrument telescope is affected by thermo-elastic deformations. This causes a high pointing uncertainty, which requires the payload instrument to provide an additional acquisition system for distinct target identification. Therefore a star extraction software package and two star identification algorithms, originally designed for star trackers, were adapted and optimised for the special case of CHEOPS. In order to evaluate the reliability of these algorithms, thousands of random star configurations were analysed in Monte Carlo simulations. We present the implemented identification methods and their performance as well as recommended parameters that guarantee a successful identification under all conditions.
Towards integrated modelling: full image simulations for WEAVE
Show abstract
We present an integrated end-to-end simulation of the spectral images that will be obtained by the WEAVE spectrograph, which aims to include full modelling of all effects from the top of the atmosphere to the detector. These data are based on input spectra from a combination of library spectra and synthetic models, and will be used to provide inputs for an end-to-end test of the full WEAVE data pipeline and archive systems, prior to first light of the instrument.
SKA CSP controls: technological challenges
Show abstract
The Square Kilometer Array (SKA) project is an international effort to build the world's largest radio telescope, with eventually over a square kilometer of collecting area. For SKA Phase 1, Australia will host the low-frequency instrument with more than 500 stations, each containing around 250 individual antennas, whilst South Africa will host an array of close to 200 dishes. The scale of the SKA represents a huge leap forward in both engineering and research and development towards building and delivering a unique instrument, with the detailed design and preparation now well under way. As one of the largest scientific endeavours in history, the SKA will bring together close to 100 organizations from 20 countries. Every aspect of the design and development of such a large and complex instrument requires state-of-the-art technology and an innovative approach. This paper addresses some aspects of the SKA monitor and control system, and in particular describes the development and test results of the CSP Local Monitoring and Control prototype. At the SKA workshop held in April 2015, the SKA monitor and control community chose the TANGO Control System as the framework for the implementation of the SKA monitor and control. This decision will have a large impact on the monitor and control development of the SKA. As the work to incorporate the TANGO Control System into the SKA is in progress, we have started to develop a prototype for the SKA Central Signal Processor to mitigate the associated risks. In particular, we have developed a uniform class schema proposal for the sub-Element systems of the SKA-CSP.
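For illustration, a minimal TANGO device written with the PyTango high-level API, of the kind a CSP Local Monitoring and Control prototype might expose; the class, attribute and command names are invented and this is not the actual SKA-CSP class schema.

```python
# Minimal TANGO device sketch (names are illustrative, not the SKA-CSP schema).
from tango.server import Device, attribute, command, run

class CspSubelement(Device):

    @attribute(dtype=str)
    def observing_state(self):
        # Read method of the attribute exposed to the TANGO control system.
        return getattr(self, "_obs_state", "IDLE")

    @command(dtype_in=str)
    def Configure(self, scan_config):
        # In a real LMC this would forward the configuration to the signal-processing firmware.
        self._obs_state = "READY"

if __name__ == "__main__":
    run((CspSubelement,))
```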
Remote observing environment using a KVM-over-IP for the OAO 188 cm telescope
Show abstract
We have prepared a remote observing environment for the 188 cm telescope at Okayama Astrophysical Observatory. A KVM-over-IP and a VPN gateway are employed as core devices, which offer a reliable, secure and fast link between the telescope site and remote sites. We have confirmed that the KVM-over-IP has ideal characteristics for serving the remote observing environment: it is simple to use for both users and maintainers; access from any platform is available; multiple, simultaneous access is possible; and the maintenance load is small. We also demonstrated that the degradation of observing efficiency specific to remote observing is negligibly small. The remote observing environment has been fully open since semester 2016A; about 30% of the total observing time in the last semester was occupied by remote observing.
The Cherenkov Telescope Array Observatory: top level use cases
Show abstract
Today the scientific community is facing the increasing complexity of scientific projects, from both a technological and a management point of view. The reason for this is the advance of science itself, where new experiments with unprecedented levels of accuracy, precision and coverage (temporal and spatial) are realised. Astronomy is one of the fields of the physical sciences where a strong interaction between the scientists, the instrument and the software developers is necessary to achieve the goals of any Big Science project. The Cherenkov Telescope Array (CTA) will be the largest ground-based very high-energy gamma-ray observatory of the next decades. To achieve the full potential of the CTA Observatory, a system must be put into place to enable users to operate the telescopes productively. The software will cover all stages of the CTA system, from the preparation of the observing proposals to the final data reduction, and must also fit into the overall system. Scientists, engineers, operators and others will use the system to operate the Observatory, hence they should be involved in the design process from the beginning. We have organised a workgroup and a workflow for the definition of the CTA Top Level Use Cases in the context of the Requirement Management activities of the CTA Observatory. Scientists, instrument and software developers are collaborating and sharing information to provide a common and general understanding of the Observatory from a functional point of view. Scientists who will use the CTA Observatory will provide mainly science-driven Use Cases, whereas software engineers will subsequently provide more detailed Use Cases, comments and feedback. The main purposes are to define observing modes and strategies, and to provide a framework for the flow-down of the Use Cases and requirements, to check for missing requirements, and to check against the Use-Case models already developed at the CTA sub-system level. Use Cases will also provide the basis for the definition of the Acceptance Test Plan for the validation of the overall CTA system. In this contribution we present the organisation and the workflow of the Top Level Use Cases workgroup.
Remote operations at UKIRT, Cassegrain included, 2 years later
Show abstract
This is a progress report on UKIRT remote operations, by now with the experience of 3 Cassegrain blocks with up to 3 Cassegrain instruments.
Agile development approach for the observatory control software of the DAG 4m telescope
Show abstract
Observatory Control Software for the upcoming 4 m infrared telescope of DAG (Eastern Anatolian Observatory in Turkish) is at the beginning of its lifecycle. After the process of elicitation and validation of the initial requirements, we have focused on the preparation of a rapid conceptual design, not only to see the big picture of the system but also to clarify the further development methodology. The existing preliminary designs for both software (including the TCS and the active optics control system) and hardware are presented here in brief to illustrate the challenges the DAG software team has been facing. The potential benefits of an agile approach for the development are discussed based on the published experience of the community and on the resources available to us.
CARMENES: The CARMENES instrument control software suite
Show abstract
The main goal of the CARMENES instrument is to perform high-accuracy measurements of stellar radial velocities (1 m/s) with long-term stability. CARMENES is installed at the 3.5 m telescope of the Calar Alto Observatory (Spain) and is equipped with two spectrographs covering the visible to the near-infrared. We present the software packages that are included in the instrument control layer. The coordination and management of CARMENES is handled by the Instrument Control System (ICS), which is responsible for carrying out the operations of the different subsystems, providing a tool to operate the instrument in an integrated manner from low to high user interaction levels. The ICS interacts with the following subsystems: the near-infrared (NIR) and visible channels, composed of the detectors and exposure meters; the calibration units; the environment sensors; the front-end electronics; the acquisition and guiding module; the interfaces with telescope and dome; and, finally, the software subsystems for operational scheduling of tasks, data processing, and data archiving. The software control framework and all the software modules and layers for the different subsystems contribute to maximizing the scientific return of the instrument. The CARMENES workflow covers everything from the translation of the survey strategy into a detailed schedule to the data processing routines that extract radial velocity data from the observed targets. The control suite has been integrated in the instrument since the end of 2015.
Target-based fiber assignment for large survey spectrographs
Show abstract
Next generation massive spectroscopic survey projects have to process a massive number of targets, and the preparation of subsequent observations should be feasible in a reasonable amount of time. We present a fast algorithm for target assignment that scales as O(log(n)). Our proposed algorithm follows a target-based approach, which makes it possible to assign a large number of targets to their positioners quickly and with a very high assignment efficiency. We also discuss additional optimizations of the fiber positioning problem that take into account positioner collisions, and how to use the algorithm for an optimal survey strategy. We apply our target-based algorithm in the context of the MOONS project.
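A sketch of a target-based assignment loop under simple assumptions (circular patrol areas, no collision handling): targets are visited in priority order and a KD-tree query finds candidate positioners, which keeps the per-target search roughly logarithmic in the number of positioners. This is an illustration of the approach, not the MOONS algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def assign_fibers(targets_xy, priorities, positioners_xy, patrol_radius):
    """Give each target (highest priority first) a free positioner that can reach it."""
    tree = cKDTree(positioners_xy)
    free = np.ones(len(positioners_xy), dtype=bool)
    assignment = {}
    for t in np.argsort(priorities)[::-1]:                 # highest priority first
        for p in tree.query_ball_point(targets_xy[t], patrol_radius):
            if free[p]:
                assignment[t] = p
                free[p] = False
                break
    return assignment

# Toy example: 5000 targets and 2400 positioners on a unit field.
rng = np.random.default_rng(1)
targets = rng.uniform(0, 1, size=(5000, 2))
prio = rng.uniform(0, 10, size=5000)
positioners = rng.uniform(0, 1, size=(2400, 2))
print(len(assign_fibers(targets, prio, positioners, patrol_radius=0.02)), "fibres assigned")
```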
The 4MOST facility control software
Show abstract
The 4-m Multi-Object Spectroscopic Telescope (4MOST) consists of one high-resolution (R ~ 18000) and two low-resolution (R ~ 5000) spectrographs covering the wavelength range between 390 and 950 nm. The spectrographs will be installed on the ESO VISTA telescope and will be fed by approximately 2400 fibres. The instrument is capable of simultaneously obtaining spectra of about 2400 objects distributed over a hexagonal field-of-view of four square degrees. This paper aims at giving an overview of the control software design, which is based on the standard ESO VLT software architecture and customised to fit the needs of the 4MOST instrument. In particular, the facility control software is intended to arrange the precise positioning of the fibres, to schedule and observe many surveys in parallel, and to combine the output from the three spectrographs. Moreover, 4MOST's software will include user-friendly graphical user interfaces that enable users to interact with the facility control system and to monitor all data-taking and calibration tasks of the instrument. A secondary guiding system will be implemented to correct for any fibre flexure and thus to improve 4MOST's guiding performance. The large number of fibres requires a custom design of the data exchange to avoid performance issues. The observation sequences are designed to use the spectrographs in parallel, with synchronous points for data exchange between subsystems. In order to control hardware devices, Programmable Logic Controller (PLC) components will be used, the new standard for future instruments at ESO.
Modified deformable mirror stroke minimization control for direct imaging of exoplanets
Show abstract
For direct imaging of faint exoplanets, coronagraphs are widely used to suppress light and achieve a high contrast. Wavefront correction algorithms based on adaptive optics are introduced simultaneously to mitigate aberrations in the optical system. Stroke minimization is one of the primary control algorithms used for high-contrast wavefront control. This technique calculates the minimum deformation across the deformable mirrors' surface under the constraint that a targeted average contrast level in the search areas, namely the dark holes, is achieved. In this paper we present a modified, linearly constrained stroke minimization algorithm. Instead of using a single constraint on the intensity averaged over all pixels, we constrain the real and imaginary parts of the electric field at each pixel in the dark holes. The new control algorithm can be written as a linear programming problem. Model reduction methods, including pixel binning and singular value decomposition (SVD), are further employed to avoid over-constraining the problem and to speed up computation. In numerical simulation, we find that the revised algorithm leads to more uniform dark holes and faster convergence.
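The following sketch shows how such a problem can be posed as a linear program with scipy: minimize the total actuator stroke (via auxiliary variables bounding the absolute commands) subject to per-pixel bounds on the real and imaginary parts of the linearized dark-hole field. The Jacobian, field and limits are random toy values, not a coronagraph model, and this is not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

def stroke_min_lp(G, E0, field_limit):
    """
    Minimise the total actuator stroke subject to per-pixel bounds on the real
    and imaginary parts of the linearised dark-hole field E = E0 + G @ u.
    Decision variables are [u, s] with s_i >= |u_i|; the objective is sum(s).
    """
    n_pix, n_act = G.shape
    c = np.concatenate([np.zeros(n_act), np.ones(n_act)])
    rows, rhs = [], []
    I = np.eye(n_act)
    rows += [np.hstack([I, -I]), np.hstack([-I, -I])]           # |u_i| <= s_i
    rhs += [np.zeros(n_act), np.zeros(n_act)]
    for M, e0 in ((G.real, E0.real), (G.imag, E0.imag)):        # per-pixel field bounds
        Z = np.zeros((n_pix, n_act))
        rows += [np.hstack([M, Z]), np.hstack([-M, Z])]
        rhs += [field_limit - e0, field_limit + e0]
    res = linprog(c, A_ub=np.vstack(rows), b_ub=np.concatenate(rhs),
                  bounds=[(None, None)] * n_act + [(0, None)] * n_act,
                  method="highs")
    return res.x[:n_act] if res.success else None

# Toy example: 8 dark-hole pixels, 20 actuators, random Jacobian and initial field.
rng = np.random.default_rng(0)
G = rng.normal(size=(8, 20)) + 1j * rng.normal(size=(8, 20))
E0 = 0.1 * (rng.normal(size=8) + 1j * rng.normal(size=8))
print(stroke_min_lp(G, E0, field_limit=0.01))
```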
The instrument control software package for the Habitable-Zone Planet Finder spectrometer
Show abstract
We describe the Instrument Control Software (ICS) package that we have built for The Habitable-Zone Planet Finder (HPF) spectrometer. The ICS controls and monitors instrument subsystems, facilitates communication with the Hobby-Eberly Telescope facility, and provides user interfaces for observers and telescope operators. The backend is built around the asynchronous network software stack provided by the Python Twisted engine, and is linked to a suite of custom hardware communication protocols. This backend is accessed through Python-based command-line and PyQt graphical frontends. In this paper we describe several of the customized subsystem communication protocols that provide access to and help maintain the hardware systems that comprise HPF, and show how asynchronous communication benefits the numerous hardware components. We also discuss our Detector Control Subsystem, built as a set of custom Python wrappers around a C-library that provides native Linux access to the SIDECAR ASIC and Hawaii-2RG detector system used by HPF. HPF will be one of the first astronomical instruments on sky to utilize this native Linux capability through the SIDECAR Acquisition Module (SAM) electronics. The ICS we have created is very flexible, and we are adapting it for NEID, NASA's Extreme Precision Doppler Spectrometer for the WIYN telescope; we will describe this adaptation, and describe the potential for use in other astronomical instruments.
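As an illustration of the asynchronous, protocol-per-device style enabled by Twisted, here is a minimal client protocol that queries a fictitious temperature controller over TCP; the device address, port and command string are invented, and this is not the HPF ICS code.

```python
# Minimal sketch of an asynchronous hardware-communication protocol in Twisted.
from twisted.internet import reactor, protocol
from twisted.protocols.basic import LineOnlyReceiver

class TemperatureControllerProtocol(LineOnlyReceiver):
    delimiter = b"\r\n"

    def connectionMade(self):
        self.sendLine(b"READ:TEMP?")          # query fires as soon as we connect

    def lineReceived(self, line):
        print("controller replied:", line.decode())
        reactor.stop()

class TemperatureControllerFactory(protocol.ClientFactory):
    protocol = TemperatureControllerProtocol

if __name__ == "__main__":
    # Hypothetical device address and port for illustration only.
    reactor.connectTCP("192.168.0.50", 5000, TemperatureControllerFactory())
    reactor.run()
```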
Poster Session: Project Overviews and Progress Reports
Development of a real-time data processing system for a prototype of the Tomo-e Gozen wide field CMOS camera
Show abstract
The Tomo-e Gozen camera is a next-generation, extremely wide field optical camera, equipped with 84 CMOS sensors. The camera records an approximately 20 square degree area at 2 Hz, providing “astronomical movie data”. We have developed a prototype of the Tomo-e Gozen camera (hereafter, Tomo-e PM) to evaluate the basic design of the Tomo-e Gozen camera. Tomo-e PM, equipped with 8 CMOS sensors, can capture a 2 square degree area at up to 2 Hz. Each CMOS sensor has about 2.6 M pixels. The data rate of Tomo-e PM is about 80 MB/s, corresponding to about 280 GB/hour. We have developed an operating system and reduction software to handle such a large amount of data. Tomo-e PM was mounted on the 1.0-m Schmidt Telescope of Kiso Observatory at the University of Tokyo. Experimental observations were carried out in the winter of 2015 and the spring of 2016. The observations and software implementation were successfully completed. The data reduction is now in progress.
A reorganization cyberinfrastructure of history observing data in China
Show abstract
Astronomical data analysis depends on the accumulation of data, including the completeness of the data in observing location and time, and the diversity of the data. We are now developing a reorganization project for the historical solar physics data of China, which comprise 90 years and 44 kinds of solar observations. In the project, we will complete the imaging, digitization and standardization of these data. This article introduces the project framework, the data, the data processing, and how the data will be shared.
The survey operation software system development for Prime Focus Spectrograph (PFS) on Subaru Telescope
Show abstract
The Prime Focus Spectrograph (PFS) is a wide-field, multi-object spectrograph accommodating 2394 fibers to observe the sky at the prime focus of the Subaru telescope. The software system to operate a spectroscopic survey is structured into four packages: instrument control software, exposure targeting software, the data reduction pipeline, and survey planning and tracking software. In addition, we operate a database system where various information such as properties of target objects, instrument configurations, and observation conditions is stored and organized via a standardized data model, for future reference to update survey plans and for scientific research. In this article, we present an overview of the software system and describe the workflows that need to be performed in the PFS operation, with some highlights on the database that organizes various information from sub-processes in the survey operation, and on the process of fiber configuration from the software perspective.
Status of the array control and data acquisition system for the Cherenkov Telescope Array
Matthias Füßling,
Igor Oya,
Arnim Balzer,
et al.
Show abstract
The Cherenkov Telescope Array (CTA) will be the next-generation ground-based observatory using the atmospheric Cherenkov technique. The CTA instrument will allow researchers to explore the gamma-ray sky in the energy range from 20 GeV to 300 TeV. CTA will comprise two arrays of telescopes, one with about 100 telescopes in the Southern hemisphere and another smaller array of telescopes in the North. CTA poses novel challenges in the field of ground-based Cherenkov astronomy, due to the demands of operating a large and distributed system with the robustness and reliability that characterize an observatory. The array control and data acquisition system of CTA (ACTL) provides the means to control, readout and monitor the telescopes and equipment of the CTA arrays. The ACTL system must be flexible and reliable enough to permit the simultaneous and automatic control of multiple sub-arrays of telescopes with a minimum effort of the personnel on-site. In addition, the system must be able to react to external factors such as changing weather conditions and loss of telescopes and, on short timescales, to incoming scientific alerts from time-critical transient phenomena. The ACTL system provides the means to time-stamp, readout, filter and store the scientific data at aggregated rates of a few GB/s. Monitoring information from tens of thousands of hardware elements needs to be channeled to high performance database systems and will be used to identify potential problems in the instrumentation. This contribution provides an overview of the ACTL system and a status report of the ACTL project within CTA.
The TESS science processing operations center
Show abstract
The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover ∼1,000 small planets with Rp < 4 R⊕ and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).
Poster Session: Software Engineering, Design, and Implementation
Software design of the ASTRI camera server proposed for the Cherenkov Telescope Array
Show abstract
The Italian National Institute for Astrophysics (INAF) is leading the ASTRI project within the ambitious Cherenkov Telescope Array (CTA), the next generation of ground-based observatories for very high energy gamma-ray astronomy. In the framework of the small-sized telescopes (SST), a first goal of the ASTRI project is the realization of an end-to-end prototype in dual-mirror configuration (2M) with the camera composed of a matrix of silicon photomultiplier sensors managed by innovative front-end and back-end electronics. The prototype, named ASTRI SST-2M, is installed in Italy at the INAF “M.G. Fracastoro” observing station located at Serra La Nave, 1735 m a.s.l. on Mount Etna, Sicily. As a second step, the ASTRI project is focused on the implementation of a mini-array composed of at least nine ASTRI telescopes and proposed to be placed at the CTA southern site. This paper outlines the design of the camera server software that will be installed on the ASTRI mini-array. The software is based on the version installed on the ASTRI SST-2M prototype operating in a single-telescope configuration. The migration from the single-telescope to the mini-array context has required additional interfaces in order to guarantee high interoperability with other software and hardware components. In the mini-array configuration each camera communicates with its own camera server via a dedicated high-rate data link. The primary goal of the camera server is to acquire the bulk data, packet by packet, without any data loss and to timestamp each packet very precisely. During array operation, the camera server receives from the SoftWare Array Trigger (SWAT) the list of science events that participate in stereo triggered events. These science events, and all others that are flagged either by the camera as interleaved calibration or by the camera server as possible single-muon events, are sent to the Array DAQ. All remaining science events will be discarded. A suitable buffer is provided to perform this processing on all the incoming event packets. The camera server provides interfaces to the array control software to allow for monitoring and control during array operations. In this paper we present the design of the camera server software with particular emphasis on the external interfaces. In addition, we report the results of the first integration activities and performance tests.
SINBAD flight software, the on-board software of NOMAD in ExoMars 2016
Show abstract
The Spacecraft INterface and control Board for NomAD (SINBAD) is an electronic interface designed by the Instituto de Astrofísica de Andalucía (IAA-CSIC). It is part of the Nadir and Occultation for MArs Discovery (NOMAD) instrument on board ESA's ExoMars Trace Gas Orbiter mission. This mission was launched in March 2016. The SINBAD Flight Software (SFS) is the software embedded in SINBAD. It is in charge of managing the interfaces, devices, data, observing sequences, patching and contingencies of NOMAD. This paper presents the most remarkable aspects of the SFS design, as well as the main problems and lessons learned during the software development process.
Porting the ALMA Correlator Data Processor from hard real-time to plain Linux
Show abstract
The ALMA correlator back-end consists of a cluster of 16 computing nodes and a master collector/packager node. The mission of the cluster is to process time domain lags into auto-correlations and complex visibilities, integrate them for some configurable amount of time and package them into a workable data product. Computers in the cluster are organized such that individual workloads per node are kept within achievable levels for different observing modes and antennas in the array. Over the course of an observation the master node transmits enough state information to each involved computing node to specify exactly how to process each set of lags received from the correlator. For that distributed mechanism to work, it is necessary to unequivocally identify each individual lag set arriving at each computing node. The original approach was based on a custom hardware interface to each node in the cluster plus a real-time version of the Linux operating system. A modification recently introduced in the ALMA correlator consists of tagging each lag set with a time stamp before delivering them to the cluster. The time stamp identifies a precise 16-millisecond window during which that specific data set was streamed to the computing cluster. From the time stamp value a node is able to identify a centroid (in absolute time units), base-lines, and correlator mode during that hardware integration, that is, enough information to let the digital signal processing pipeline in each node process time domain lags into frequency domain auto-correlations per antenna and visibilities per base-line. The scheme also means that a good degree of concurrency can be achieved in each node by having individual CPU cores process individual lag sets at the same time, thus rendering enough processing power to cope with a maximum 1 GiB/sec output from the correlator. The present paper describes how we time stamp lag sets within the correlator hardware, the implications for their on-line processing in software and the benefits that this extension has brought in terms of software maintainability and overall system simplifications.
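The bookkeeping implied by the time stamps is simple arithmetic; below is a sketch (with an assumed millisecond time base and epoch) of mapping a lag-set time stamp to its 16-millisecond dump window and to the centroid time of that hardware integration.

```python
# Illustrative arithmetic only: time base, epoch and units are assumptions.
DUMP_MS = 16

def dump_window(timestamp_ms):
    """Return (window_index, centroid_ms) for a time stamp given in milliseconds."""
    window_index = timestamp_ms // DUMP_MS
    centroid_ms = window_index * DUMP_MS + DUMP_MS / 2.0
    return window_index, centroid_ms

print(dump_window(123_456))   # -> (7716, 123464.0)
```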
Implementing the concurrent operation of sub-arrays in the ALMA correlator
Show abstract
The ALMA correlator processes the digitized signals from 64 individual antennas to produce a grand total of 2016 correlated base-lines, with runtime-selectable lag resolution and integration time. The on-line software system can process a maximum of 125M visibilities per second, producing an archiving data rate close to one sixteenth of the former (7.8M visibilities per second with a network transfer limit of 60 MB/sec). Mechanisms in the correlator hardware design make it possible to split the total number of antennas in the array into smaller subsets, or sub-arrays, such that they can share correlator resources while executing independent observations. The software part of the sub-system is responsible for configuring and scheduling correlator resources in such a way that observations among independent sub-arrays occur simultaneously while internally sharing correlator resources under a cooperative arrangement. Configuration of correlator modes through the CAN-bus interface and periodic geometric delay updates are the most relevant activities to schedule concurrently while observations happen at the same time among a number of sub-arrays. For that to work correctly, the software interface to sub-arrays schedules shared correlator resources sequentially before observations actually start on each sub-array. Start times for specific observations are optimized and reported back to the higher-level observing software. After that initial sequential phase has taken place, simultaneous executions and recording of correlated data across different sub-arrays move forward concurrently, sharing the local network to broadcast results to other software sub-systems. The present paper presents an overview of the different hardware and software actors within the correlator sub-system that implement the degree of concurrency and synchronization needed for seamless and simultaneous operation of multiple sub-arrays, the limitations stemming from the resource-sharing nature of the correlator, the limitations intrinsic to the digital technology available in the correlator hardware, and the milestones so far reached by this new ALMA feature.
The ASTRI mini-array software system (MASS) implementation: a proposal for the Cherenkov Telescope Array
Show abstract
The ASTRI mini-array, composed of nine small-size dual-mirror (SST-2M) telescopes, has been proposed to be installed at the southern site of the Cherenkov Telescope Array (CTA), as a set of preproduction units of the CTA observatory. The ASTRI mini-array is a collaborative and international effort carried out by Italy, Brazil and South Africa and led by the Italian National Institute of Astrophysics, INAF. We present the main features of the current implementation of the Mini-Array Software System (MASS), now in use for the activities of the ASTRI SST-2M telescope prototype located at the INAF observing station on Mt. Etna, Italy, and the characteristics that make it a prototype for the CTA control software system. CTA Data Management (CTADATA) and CTA Array Control and Data Acquisition (CTA-ACTL) requirements and guidelines, as well as the ASTRI use cases, were considered in the MASS design; most of its features are derived from the Atacama Large Millimeter/sub-millimeter Array control software. The MASS will provide a set of tools to manage all onsite operations of the ASTRI mini-array in order to perform the observations specified in the short-term schedule (including monitoring and controlling all the hardware components of each telescope and calibration device), to analyze the acquired data online and to store/retrieve all the data products to/from the onsite repository.
Concept study of an observation preparation tool for MICADO
Show abstract
MICADO, the near-infrared Multi-AO Imaging Camera for Deep Observations and first light instrument for the European ELT, will provide capabilities for imaging, coronagraphy, and spectroscopy. As usual, MICADO observations will have to be prepared in advance, including AO and secondary guide star selection, offset/dither pattern definition, and an optimization for the most suitable configuration. A visual representation of the latter along with graphical and scripting interfaces is desirable. We aim at developing a flexible and user-friendly application that enhances or complements the ESO standard preparation software. Here, we give a summary of the requirements on such a tool, report on the status of our conceptual study and present a first proof-of-concept implementation.
A new generation of spectral extraction and analysis package for Fibre Optics Cassegrain Echelle Spectrograph (FOCES)
Show abstract
We describe a new generation spectral extraction and analysis software package (EDRS2) for the Fibre Optics Cassegrain Echelle Spectrograph (FOCES), which will be attached to the 2m Fraunhofer Telescope at the Wendelstein Observatory. The package is developed in the Python language and relies on a variety of third-party, open source packages such as NumPy and SciPy. EDRS2 contains generalized image calibration routines, including overscan correction, bias subtraction, flat fielding and background correction, and can be supplemented by user-customized functions to fit other echelle spectrographs. An optimal extraction method is adopted to obtain the one-dimensional spectra, and the output multi-order, wavelength-calibrated spectra are saved in FITS files in binary table format. We introduce the algorithms and performance of the major routines in EDRS2.
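A minimal numpy sketch of the generalized calibration steps listed above (overscan correction, bias subtraction, flat fielding); the array shapes and overscan region are toy values and this is not the EDRS2 code.

```python
import numpy as np

def calibrate_frame(raw, bias, flat, overscan_region):
    """
    Basic CCD calibration steps of a typical echelle reduction chain: overscan
    correction, bias subtraction and flat fielding.  All inputs are 2-D numpy
    arrays; overscan_region is a tuple of slices selecting the overscan strip.
    """
    overscan_level = np.median(raw[overscan_region])
    frame = raw.astype(float) - overscan_level       # overscan correction
    frame -= bias                                    # bias subtraction
    frame /= np.where(flat > 0, flat, 1.0)           # flat fielding (avoid /0)
    return frame

# Toy frame: a 100x100 illuminated area plus a 10-column overscan strip.
raw = np.zeros((100, 110))
raw[:, :100] = 1000.0
raw[:, 100:] = 300.0                                 # overscan at the electronic bias level
bias = np.full_like(raw, 20.0)
flat = np.ones_like(raw)
calibrated = calibrate_frame(raw, bias, flat, (slice(None), slice(100, 110)))
print(calibrated[:, :100].mean())                    # -> 680.0
```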
Monitoring service for the Gran Telescopio Canarias control system
Show abstract
The Monitoring Service collects, persists and propagates the telescope and instrument telemetry for the Gran Telescopio CANARIAS (GTC), an optical-infrared 10-meter segmented mirror telescope at the ORM observatory in the Canary Islands (Spain). A new version of the Monitoring Service has been developed in order to improve performance, provide high availability, and guarantee fault tolerance and scalability to cope with a high volume of data. The architecture is based on a distributed in-memory data store with a Producer/Consumer design pattern. The producer generates the data samples. The consumers either persist the samples in a database for further analysis or propagate them to the consoles in the control room to monitor the state of the whole system.
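A toy producer/consumer sketch of the pattern described above, with one producer emitting telemetry samples and a consumer that both persists them (here, to a list standing in for the database) and forwards them to the console; names and values are illustrative, not the GCS implementation.

```python
import queue
import threading
import time

samples = queue.Queue()
archive = []                       # stands in for the persistence back-end

def producer():
    """Telemetry producer: emits monitor samples as (name, value, timestamp)."""
    for i in range(5):
        samples.put(("m1.temperature", 12.0 + 0.1 * i, time.time()))
        time.sleep(0.1)
    samples.put(None)              # end-of-stream marker

def consumer():
    """Consumer that both persists samples and forwards them to the console."""
    while (sample := samples.get()) is not None:
        archive.append(sample)     # persist for later analysis
        print("console:", sample)  # propagate to the control-room display

threading.Thread(target=producer).start()
consumer()
print(len(archive), "samples archived")
```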
ESPRESSO front end guiding algorithms: from design phase to implementation and validation toward the commissioning
Show abstract
In this paper we review the ESPRESSO guiding algorithm for the Front End subsystem. ESPRESSO, the Echelle Spectrograph for Rocky Exoplanets and Stable Spectroscopic Observations, will be installed on ESO's Very Large Telescope (VLT). The Front End Unit (FEU) is the ESPRESSO subsystem that collects the light coming from the Coudé trains of all four Telescope Units (UTs), provides field and pupil stabilization better than 0.05'' via piezoelectric tip-tilt devices, and injects the beams into the spectrograph fibers. The field and pupil stabilization is obtained through a re-imaging system that collects the halo of the light outside the injection fiber and the image of the telescope pupil. In particular, we focus on the software design of the system, from the class diagram to the actual implementation. A review of the theoretical mathematical background required to understand the final design is also reported. We show the performance of the algorithm on the actual Front End using a telescope simulator, exploring the various scientific requirements.
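As a sketch of one step of such a stabilization loop, the snippet below computes the star centroid on a re-imaged halo frame and converts the offset from the reference position into a tip/tilt command; the gain, plate scale and reference position are hypothetical, not the ESPRESSO values.

```python
import numpy as np

def tip_tilt_correction(halo_image, x_ref, y_ref, gain=0.5, arcsec_per_pixel=0.01):
    """
    One step of a centroid-based field-stabilisation loop: measure the star
    position on the halo image and return the piezo tip/tilt command (arcsec)
    that pushes it back towards the reference position.
    """
    y, x = np.indices(halo_image.shape)
    total = halo_image.sum()
    xc = (x * halo_image).sum() / total
    yc = (y * halo_image).sum() / total
    return (-gain * (xc - x_ref) * arcsec_per_pixel,
            -gain * (yc - y_ref) * arcsec_per_pixel)

# Toy frame: a Gaussian spot displaced from the reference pixel (32, 32).
yy, xx = np.indices((64, 64))
frame = np.exp(-((xx - 35.0) ** 2 + (yy - 30.0) ** 2) / (2 * 3.0 ** 2))
print(tip_tilt_correction(frame, x_ref=32.0, y_ref=32.0))
```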
EELT-HIRES the high resolution spectrograph for the E-ELT: software and hardware solutions for its control
Show abstract
The current E-ELT instrumentation plan foresees a High Resolution Spectrograph, conventionally indicated as EELT-HIRES, whose Phase A study started in March 2016. Since 2013, however, a preliminary study of a modular E-ELT instrument able to provide high-resolution spectroscopy (R~100,000) in a wide wavelength range (0.37-2.5 μm) has already been conducted by an international consortium (termed the “HIRES initiative”). Taking into account the requirements inferred from this preliminary work in terms of both high-level operations and low-level control, we present in this paper possible solutions for the HIRES hardware and software architecture. The validity of the proposed architectural and hardware choices is then discussed, based also on the experience gained on a real working instrument, ESPRESSO, the next-generation high-stability spectrograph for the VLT and to some extent the precursor of HIRES.
Monitoring and controlling the SKA telescope manager: a peculiar LMC system in the framework of the SKA LMCs
Show abstract
The SKA Telescope Manager (TM) is the core package of the SKA Telescope: it is aimed at scheduling observations, controlling their execution, monitoring the telescope health status, diagnosing and fixing its faults, and so on. To do that, TM directly interfaces with the Local Monitoring and Control systems (LMCs) of the various SKA Elements (e.g. Dishes, Low-Frequency Aperture Array, etc.), exchanging commands and data with each of them. TM in turn needs to be monitored and controlled, in order to ensure its continuous and proper operation – and therefore that of the whole SKA Telescope. Indeed, while the unavailability of one or more instances of any other SKA Element should result only in degraded operation for the whole telescope, a problem in TM could cause a complete stop of all operations. In addition to this higher responsibility, a local monitoring and control system for TM has to collect and display logging data directly to operators, perform lifecycle management of TM applications and directly deal – when possible – with the management of TM faults (which also includes the direct handling of TM status and performance data). In this paper, the peculiarities presented by TM monitoring and control, and the consequences they have on the design of a related LMC system, are addressed and discussed.
The software architecture of the camera for the ASTRI SST-2M prototype for the Cherenkov Telescope Array
Pierluca Sangiorgi,
Milvia Capalbi,
Renato Gimenes,
et al.
Show abstract
The purpose of this contribution is to present the current status of the software architecture of the ASTRI SST-2M Cherenkov Camera. The ASTRI SST-2M telescope is an end-to-end prototype for the Small Size Telescope of the Cherenkov Telescope Array. The ASTRI camera is an innovative instrument based on SiPM detectors and has several internal hardware components. In this contribution we give a brief description of the hardware components of the camera of the ASTRI SST-2M prototype and of their interconnections. We then present the outcome of the software architectural design process that we carried out in order to identify the main structural components of the camera software system and the relationships among them. We analyze the architectural model that describes how the camera software is organized as a set of communicating blocks. Finally, we show where these blocks are deployed in the hardware components and how they interact. We describe in some detail the management of the physical communication ports and external ancillary devices, the high-precision time-tag management, the fast data collection and the fast data exchange between different camera subsystems, and the interfacing with the external systems.
INO340 telescope control system: middleware requirements, design, and evaluation
Show abstract
The INO340 Control System (INOCS) is being designed as a distributed real-time architecture. The real-time (soft and firm) nature of many processes inside INOCS makes the communication paradigm between its different components time-critical and sensitive. For this purpose, we have chosen the Data Distribution Service (DDS) standard as the communications middleware, which is itself based on the publish-subscribe paradigm. In this paper, we review and compare the main middleware types, and then we illustrate the middleware architecture of INOCS and its specific requirements. Finally, we present the experimental results obtained to evaluate our middleware and to ensure that it meets our requirements.
ASTRI SST-2M data reduction and reconstruction software on low-power and parallel architectures
Show abstract
In the framework of the international Cherenkov Telescope Array (CTA) gamma-ray observatory, a mini-array of nine small-sized, dual-mirror (SST-2M) telescopes developed by the ASTRI Collaboration has been proposed to be installed at the future CTA southern site. In such a location, the capability of each telescope to process its own data before sending them to a central acquisition system provides a key advantage. We implemented the complete analysis chain required by a single telescope on an NVIDIA® Jetson™ TK1 development board, exceeding the nominal required real-time processing speed by more than a factor of two, while staying within a very small power budget.
Towards a dynamical scheduler for ALMA: a science - software collaboration
Jorge Avarias,
Ignacio Toledo,
Daniel Espada,
et al.
Show abstract
State-of-the-art astronomical facilities are costly to build and operate, hence it is essential that these facilities are operated as efficiently as possible, maximizing the scientific output while minimizing overheads. Over the last decades the scheduling problem has drawn research attention because new facilities have demonstrated that it is unfeasible to schedule observations manually, due to the complexity of satisfying the astronomical and instrumental constraints and the number of scientific proposals to be reviewed and evaluated in near real time. In addition, the dynamic nature of some constraints makes this problem even more difficult. The Atacama Large Millimeter/submillimeter Array (ALMA) is a major collaborative effort between Europe (ESO), North America (NRAO) and East Asia (NAOJ), in operation on the Chilean Chajnantor plateau, at 5,000 meters altitude. During normal operations at least two independent arrays are available, aiming to achieve different types of science. Since ALMA does not observe in the visible spectrum, observations are not limited to night time only, so 24/7 operation with as little downtime as possible is expected when the full operations state is reached. However, during preliminary operations (early science) ALMA has been operated on tight schedules, using around half of the day time to conduct scientific observations. The purpose of this paper is to explain how observation scheduling and its optimization are done within ALMA, giving details about the problem complexity and its similarities to and differences from traditional scheduling problems found in the literature. The paper delves into the current recommendation system implementation and the difficulties found on the road to its deployment in production.
Software design and code generation for the engineering graphical user interface of the ASTRI SST-2M prototype for the Cherenkov Telescope Array
Show abstract
ASTRI is an on-going project developed in the framework of the Cherenkov Telescope Array (CTA). An end-to-end prototype of a dual-mirror small-size telescope (SST-2M) has been installed at the INAF observing station on Mt. Etna, Italy. The next step is the development of the ASTRI mini-array composed of nine ASTRI SST-2M telescopes proposed to be installed at the CTA southern site. The ASTRI mini-array is a collaborative and international effort carried out by Italy, Brazil and South Africa and led by the Italian National Institute of Astrophysics, INAF. To control the ASTRI telescopes, a specific ASTRI Mini-Array Software System (MASS) was designed using a scalable and distributed architecture to monitor all the hardware devices of the telescopes. Using code generation we built automatically, from the ASTRI Interface Control Documents, a set of communication libraries and extensive Graphical User Interfaces that provide full access to the capabilities offered by the telescope hardware subsystems for testing and maintenance. Leveraging these generated libraries and components, we then implemented a human-designed, integrated Engineering GUI for MASS to perform the verification of the whole prototype and to test shared services such as alarms, configurations, control systems, and scientific on-line outcomes. In our experience, the use of code generation dramatically reduced the amount of effort in the development, integration and testing of the more basic software components and resulted in a fast software release life cycle. This approach could be valuable for the whole CTA project, which is characterized by a large diversity of hardware components.
A real-time prediction system for solar weather based on magnetic nonpotentiality (I)
Show abstract
The Sun is the source of space weather. The characteristics and evolution of the solar active-region magnetic field are closely related to violent solar eruptions such as flares and coronal mass ejections. The Solar Magnetic Field Telescope at Huairou Solar Observing Station has accumulated vector magnetogram data of solar photospheric active regions (ARs) covering nearly 30 years. Using these precious historical data to establish statistical prediction models for solar eruptive events can not only provide a reference for the timely adjustment of observation modes for specific active regions, but also offer a valuable reference to the monitoring and forecasting departments for solar and space weather. In this part of the work, we focus on yes/no and occurrence-time predictions for AR-related solar flares; the predictions rely solely on vector magnetic-field observations of the solar surface.
Software use cases to elicit the software requirements analysis within the ASTRI project
Show abstract
The Italian National Institute for Astrophysics (INAF) is leading the Astrofisica con Specchi a Tecnologia Replicante Italiana (ASTRI) project, whose main purpose is the realization of small-size telescopes (SST) for the Cherenkov Telescope Array (CTA). The first goal of the ASTRI project has been the development and operation of an innovative end-to-end telescope prototype using a dual-mirror optical configuration (SST-2M) equipped with a camera based on silicon photo-multipliers and very fast read-out electronics. The ASTRI SST-2M prototype has been installed in Italy at the INAF “M.G. Fracastoro” Astronomical Station located at Serra La Nave, on Mount Etna, Sicily. This prototype will be used to test several mechanical, optical, control hardware and software solutions which will be used in the ASTRI mini-array, comprising nine telescopes proposed to be placed at the CTA southern site. The ASTRI mini-array is a collaborative international effort led by INAF and carried out by Italy, Brazil and South Africa. We present here the use cases, expressed through UML (Unified Modeling Language) diagrams and textual details, that describe the functional requirements of the software that will manage the ASTRI SST-2M prototype, and the lessons learned through these activities. We intend to adopt the same approach for the Mini-Array Software System that will manage the ASTRI mini-array operations. Use cases are of importance for the whole software life cycle; in particular they provide valuable support to the validation and verification activities. Following the iterative development approach, which breaks down the software development into smaller chunks, we have analysed the requirements, developed, and then tested the code in repeated cycles. The use case technique allowed us to formalize the problem through user stories that describe how the user procedurally interacts with the software system. Through the use cases we improved the communication among team members, fostered common agreement about system requirements, defined the normal and alternative courses of events, gained a better understanding of the business process, and defined the system tests to ensure that the delivered software works properly. We present a summary of the ASTRI SST-2M prototype use cases, and how the lessons learned can be exploited for the ASTRI mini-array proposed for the CTA Observatory.
M&C Domain Map Maker: an environment complementing MDE with M&C knowledge and ensuring solution completeness
Show abstract
Model Driven Engineering (MDE) as a key driver to reduce development cost of M&C systems is beginning to find acceptance across scientific instruments such as Radio Telescopes and Nuclear Reactors. Such projects are adopting it to reduce time to integrate, test and simulate their individual controllers and increase reusability and traceability in the process. The creation and maintenance of models is still a significant challenge to realizing MDE benefits. Creating domain-specific modelling environments reduces the barriers, and we have been working along these lines, creating a domain-specific language and environment based on an M&C knowledge model. However, large projects involve several such domains, and there is still a need to interconnect the domain models, in order to ensure modelling completeness. This paper presents a knowledge-centric approach to doing that, by creating a generic system model that underlies the individual domain knowledge models. We present our vision for M&C Domain Map Maker, a set of processes and tools that enables explication of domain knowledge in terms of domain models with mutual consistency relationships to aid MDE.
The RTE inversion on FPGA aboard the solar orbiter PHI instrument
Show abstract
In this work we propose a multiprocessor architecture that achieves high floating-point performance using radiation-tolerant FPGA devices under tight time and power constraints. This architecture is used in the PHI instrument, which carries out its scientific analysis on board ESA’s Solar Orbiter mission. The proposed architecture, of SIMD flavor, is designed as an accelerator within the Data Processing Unit (composed of a main LEON processor and two FPGAs) for carrying out the RTE inversion on board the spacecraft using a relatively slow FPGA device, the Xilinx XQR4VSX55. The architecture squeezes the FPGA resources in order to meet the computational requirements and improves on the ground-based reference system based on commercial CPUs in terms of both processing time and power consumption. In this work we demonstrate the feasibility of using this FPGA device embedded in the SO/PHI instrument. With that goal in mind, we perform tests to evaluate the scientific results and to measure the processing time and power consumption of the on-board RTE inversion.
Knowledge-based engineering of a PLC controlled telescope
Show abstract
As the new control system of the Mercator Telescope is being finalized, we can review some technologies and design methodologies that are advantageous, despite their relative uncommonness in astronomical instrumentation. A distinctive feature of the Mercator Telescope is that it is controlled by a single high-end soft PLC (Programmable Logic Controller). Using off-the-shelf components only, our distributed embedded system controls all subsystems of the telescope, such as the pneumatic primary mirror support, the hydrostatic bearing, the telescope axes, the dome, the safety system, and so on. We show how real-time application logic can be written conveniently in typical PLC languages (IEC 61131-3) and in C++ (to implement the pointing kernel) using the commercial TwinCAT 3 programming environment. This software processes the inputs and outputs of the distributed system in real time via an observatory-wide EtherCAT network, which is synchronized with high precision to an IEEE 1588 (PTP, Precision Time Protocol) time reference clock. Taking full advantage of the ability of soft PLCs to run both real-time and non-real-time software, the same device also hosts the most important user interfaces (HMIs or Human Machine Interfaces) and communication servers (OPC UA for process data, FTP for XML configuration data, and VNC for remote control). To manage the complexity of the system and to streamline the development process, we show how most of the software, electronics and systems engineering aspects of the control system have been modeled as a set of scripts written in a Domain Specific Language (DSL). When executed, these scripts populate a Knowledge Base (KB) which can be queried to retrieve specific information. By feeding the results of those queries to a template system, we were able to generate very detailed “browsable” web-based documentation about the system, but also PLC software code, Python client code, model verification reports, etc. The aim of this paper is to demonstrate the added value that technologies such as soft PLCs and DSL scripts and design methodologies such as knowledge-based engineering can bring to astronomical instrumentation.
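The sketch below illustrates, in miniature, the knowledge-based generation idea: a single model entry is queried to emit both a documentation line and an IEC 61131-3 style variable declaration. It assumes a toy dictionary knowledge base and an invented ST_AxisConfig type; it is not the Mercator DSL or template system.

# Illustrative sketch of knowledge-based artefact generation: one model of an
# axis drives two outputs, a documentation snippet and a Structured Text style
# variable declaration. The knowledge base and type names are hypothetical.
KB = {
    "axes": [
        {"name": "Azimuth", "encoder_bits": 26, "max_speed_deg_s": 2.0},
        {"name": "Elevation", "encoder_bits": 26, "max_speed_deg_s": 1.5},
    ]
}

def doc_for(axis: dict) -> str:
    """Generate a human-readable documentation line from the model entry."""
    return (f"* {axis['name']} axis: {axis['encoder_bits']}-bit encoder, "
            f"max speed {axis['max_speed_deg_s']} deg/s")

def plc_var_for(axis: dict) -> str:
    """Generate an IEC 61131-3 style declaration from the same model entry."""
    return (f"{axis['name']}Axis : ST_AxisConfig := "
            f"(maxSpeed := {axis['max_speed_deg_s']});")

if __name__ == "__main__":
    for axis in KB["axes"]:
        print(doc_for(axis))
        print(plc_var_for(axis))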
Aided generation of search interfaces to astronomical archives
Show abstract
Astrophysical data providers that host web-based interfaces giving access to their data resources have to cope with changes in data management that imply partial rewrites of the web applications. To avoid doing this manually, we decided to develop a dynamically configurable Java EE web application that sets itself up by reading the necessary information from configuration files. The specification of what information the astronomical archive database has to expose is managed using the TAP_SCHEMA schema from the IVOA TAP recommendation, which can be edited through a graphical interface. Once the configuration steps are complete, the tool builds a WAR file to allow easy deployment of the application.
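A minimal sketch of the configuration-driven approach is shown below: a TAP_SCHEMA-like list of exposed columns determines which user constraints are accepted and how the query is assembled. The table and column names are hypothetical, and the code is not part of the Java EE tool described above.

# Sketch of a configuration-driven search interface: the exposed-column
# description drives query construction, so a schema change only requires
# editing the configuration, not the application code. Names are made up.
EXPOSED_COLUMNS = [
    {"table": "obscore", "column": "obs_id", "datatype": "char", "searchable": True},
    {"table": "obscore", "column": "s_ra", "datatype": "double", "searchable": True},
    {"table": "obscore", "column": "s_dec", "datatype": "double", "searchable": True},
]

def build_query(constraints: dict) -> str:
    """Build a simple ADQL-like query from user constraints on searchable columns."""
    allowed = {c["column"] for c in EXPOSED_COLUMNS if c["searchable"]}
    clauses = [f"{col} = {value!r}" for col, value in constraints.items() if col in allowed]
    where = " AND ".join(clauses) if clauses else "1=1"
    return f"SELECT * FROM obscore WHERE {where}"

if __name__ == "__main__":
    # Constraints on columns that are not exposed are silently ignored.
    print(build_query({"obs_id": "X123", "unknown_col": 5}))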
A control system framework for the Hobby-Eberly telescope
Jason Ramsey,
Niv Drory,
Randy Bryant,
et al.
Show abstract
We present the development framework for the distributed control systems, scripting frontend, and monitoring facilities of the recently upgraded Hobby-Eberly Telescope (HET). A common flexible control and data acquisition layer in C++, with message passing implemented on top of ZeroMQ, wraps the final designs of each new hardware component including tracking, metrology, instrumentation and calibration equipment. A homogeneous command, response and event layer normalizes the diversity of the lower-level software interfaces, easing the development of the Telescope Control System (TCS). Applications developed in the framework easily interface to the new tracker and legacy instrumentation of the primary mirror, weather, dome, and tracker support structure. The framework facilitates testing, vetting, and characterization of the telescope and TCS. Examples of the real-time monitoring capabilities and the Python scripting methods of various telescope components yield insight into overall system performance. Lessons learned along the way, future refinements, and anticipated enhancements are detailed.
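As a flavour of the message-passing layer described above, the sketch below exchanges a JSON command and response over ZeroMQ from Python. The endpoint, message fields and toy echo server are assumptions for illustration, not the HET wire protocol.

# Illustrative ZeroMQ command/response exchange (pyzmq). A toy echo server
# runs in a thread so the example is self-contained; the message schema
# ("cmd", "params", "status") is hypothetical.
import threading
import zmq

ENDPOINT = "tcp://127.0.0.1:5555"

def toy_server():
    ctx = zmq.Context.instance()
    rep = ctx.socket(zmq.REP)
    rep.bind(ENDPOINT)
    request = rep.recv_json()                       # e.g. {"cmd": "move_dome", ...}
    rep.send_json({"status": "ok", "echo": request})
    rep.close()

def send_command(cmd: str, **params) -> dict:
    """Send one command and wait for the reply (REQ/REP pattern)."""
    ctx = zmq.Context.instance()
    req = ctx.socket(zmq.REQ)
    req.connect(ENDPOINT)
    req.send_json({"cmd": cmd, "params": params})
    reply = req.recv_json()
    req.close()
    return reply

if __name__ == "__main__":
    threading.Thread(target=toy_server, daemon=True).start()
    print(send_command("move_dome", azimuth=123.4))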
A user interface framework for the Square Kilometre Array: concepts and responsibilities
Show abstract
The Square Kilometre Array (SKA) project is responsible for developing the SKA Observatory, the world’s largest radio telescope, with eventually over a square kilometre of collecting area and including a general headquarters as well as two radio telescopes: SKA1-Mid in South Africa and SKA1-Low in Australia. The SKA project consists of a number of subsystems (elements), among which the Telescope Manager (TM) is the one involved in controlling and monitoring the SKA telescopes. The TM element has three primary responsibilities: management of astronomical observations; management of telescope hardware and software subsystems; and management of data, supporting system operations and all stakeholders (operators, maintainers, engineers and science users) in achieving operational, maintenance and engineering goals. Operators, maintainers, engineers and science users will interact with TM via appropriate user interfaces (UI). The TM UI framework envisaged is a complete set of general technical solutions (components, technologies and design information) for implementing a generic computing system (UI platform). Such a system will enable UI components to be instantiated to allow for human interaction via screens, keyboard and mouse, and to implement the necessary logic for acquiring or deriving the information needed for interaction. It will provide libraries and specific Application Programming Interfaces (APIs) to implement operator and engineer interactive interfaces. This paper provides a status update of the TM UI framework, UI platform and UI components design effort, including the technology choices, and discusses key challenges in the TM UI architecture, as well as our approaches to addressing them.
Queue software reuse and implementation at the Large Binocular Telescope Observatory
Show abstract
In this paper we detail the process the LBTO followed to choose software for reuse and modification to support binocular queue operations. We outline the survey of initial candidate solutions, how and why the final selection was made, and describe our requirements gap analysis for LBTO binocular use. We provide details of our software development approach, including a project road map and phased release strategy. We provide details of added LBTO functionality, discuss issues, and suggest some reuse lessons learned. We conclude with a discussion of known desired enhancements to be addressed in future release cycles.
The Infrared Imaging Spectrograph (IRIS) for TMT: data reduction system
Show abstract
IRIS (InfraRed Imaging Spectrograph) is the diffraction-limited first light instrument for the Thirty Meter Telescope (TMT) that consists of a near-infrared (0.84 to 2.4 μm) imager and integral field spectrograph (IFS). The IFS makes use of a lenslet array and slicer for spatial sampling, and will be able to operate in hundreds of different modes, including a combination of four plate scales from 4 milliarcseconds (mas) to 50 mas with a large range of filters and gratings. The imager will have a field of view of 34×34 arcsec2 with a plate scale of 4 mas and many selectable filters. We present the preliminary design of the data reduction system (DRS) for IRIS that needs to address all of these observing modes. Reduction of IRIS data will have unique challenges since the DRS must provide real-time reduction and analysis of the imaging and spectroscopic data during observational sequences, as well as advanced post-processing algorithms. The DRS will support three basic modes of operation of IRIS: reducing data from the imager, the lenslet IFS, and the slicer IFS. The DRS will be written in Python, making use of available open-source astronomical packages. In addition to real-time data reduction, the DRS will utilize real-time visualization tools, providing astronomers with up-to-date evaluation of the target acquisition and data quality. The quick-look suite will include visualization tools for 1D, 2D, and 3D raw and reduced images. We discuss the overall requirements of the DRS and visualization tools, as well as the calibration data necessary to achieve optimal data quality, in order to exploit science cases across all cosmic distance scales.
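The sketch below illustrates one small aspect of such a design: dispatching a reduction recipe by observing mode (imager, lenslet IFS, slicer IFS). The recipes themselves are placeholders and do not represent the actual IRIS DRS algorithms.

# Toy mode-dispatch sketch for a multi-mode data reduction system. The
# "reduction" steps are placeholders (flat-field correction only) used to
# show the structure, not real IRIS recipes.
import numpy as np

def reduce_imager(frame, flat):
    return frame / flat                      # flat-field correction only

def reduce_lenslet_ifs(frame, flat):
    return np.stack([frame / flat] * 3)      # placeholder stand-in for a cube

def reduce_slicer_ifs(frame, flat):
    return np.stack([frame / flat] * 5)      # placeholder stand-in for a cube

RECIPES = {
    "imager": reduce_imager,
    "lenslet": reduce_lenslet_ifs,
    "slicer": reduce_slicer_ifs,
}

def reduce(mode: str, frame, flat):
    """Route a frame to the reduction recipe matching the observing mode."""
    try:
        return RECIPES[mode](frame, flat)
    except KeyError:
        raise ValueError(f"Unknown observing mode: {mode}")

if __name__ == "__main__":
    frame = np.ones((4, 4)) * 100.0
    flat = np.full((4, 4), 2.0)
    print(reduce("imager", frame, flat).mean())   # 50.0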
Key software architecture decisions for the automated planet finder
Show abstract
The Automated Planet Finder (APF) at Lick Observatory on Mount Hamilton is a modern 2.4 meter computer-controlled telescope. At one Nasmyth focus is the Levy Spectrometer, at present the sole instrument used with the APF. The primary research mission of the APF and the Levy Spectrometer is high-precision Doppler spectroscopy. Observing at the APF is unattended; custom software written by diverse authors in diverse languages manages all aspects of a night’s observing.
This paper covers some of the key software architecture decisions made in the development of autonomous observing at the APF. The relevance of these decisions to future projects is emphasized throughout.
AVU/BAM: software refurbishment (design and implementation) for the CU3 Gaia verification pipeline
Show abstract
AVU/BAM is the Gaia software of the Astrometric Verification Unit (AVU) devoted to the monitoring of the Basic Angle Monitoring (BAM) device, one of the metrology instruments on board the Gaia payload. AVU/BAM has been integrated and operative at the Data Processing Center of Turin (DPCT) since the beginning of the Gaia mission. The DPCT infrastructure performs the ingestion of pre-processed data coming from the satellite and is responsible for running the code of the different verification packages. The new structure of the pipeline consists of three phases: the first is a pre-analysis, in which a preliminary study of the data is performed and the quantities needed for the analysis are calculated; the second processes the interferograms coming from the instrument; the third analyzes the data obtained from the previous processing. Part of the long-term analysis has also been changed, and a calibration phase for the data obtained from the processing has been added.
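Purely as an illustration of the three-phase structure (pre-analysis, interferogram processing, analysis), the toy pipeline below chains placeholder functions; it does not reproduce the AVU/BAM algorithms.

# Toy three-phase pipeline: derive preliminary quantities, process the
# interferogram, then analyze the result. All operations are placeholders.
import numpy as np

def pre_analysis(raw: np.ndarray) -> dict:
    # Derive quantities needed downstream, e.g. a background estimate.
    return {"data": raw, "background": float(np.median(raw))}

def process_interferogram(prep: dict) -> dict:
    fringes = prep["data"] - prep["background"]      # background subtraction
    return {"fringes": fringes}

def analyse(processed: dict) -> float:
    return float(processed["fringes"].std())         # a single summary metric

if __name__ == "__main__":
    raw = np.random.default_rng(0).normal(100.0, 5.0, size=(64, 64))
    print(analyse(process_interferogram(pre_analysis(raw))))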
ImageX: new and improved image explorer for astronomical images and beyond
Show abstract
The One Degree Imager - Portal, Pipeline, and Archive (ODI-PPA) has included the Image Explorer interactive image visualization tool since it went operational. Portal users were able to quickly open several ODI images within any HTML5-capable web browser, adjust the scaling, apply color maps, and perform other basic image visualization steps typically done in a desktop client like DS9. However, the original design of the Image Explorer required lossless PNG tiles to be generated and stored for all raw and reduced ODI images, thereby taking up tens of TB of spinning disk space even though only a small fraction of those images were being accessed by portal users at any given time. It also caused significant overhead on the portal web application and the Apache webserver used by ODI-PPA, and we found it hard to merge in improvements made to a similar deployment in another project's portal. To address these concerns, we re-architected the Image Explorer from scratch and came up with ImageX, a set of microservices that are part of the IU Trident project software suite, with rapid interactive visualization capabilities useful for ODI data and beyond. We generate a full-resolution JPEG image for each raw and reduced ODI FITS image before producing a JPEG tileset that can be rendered using the ImageX frontend code at various locations within a web portal (for example, in tabular image listings, or in views allowing quick perusal of a set of thumbnails or other image-sifting activities). The new design has decreased spinning disk requirements, uses AngularJS for the client-side Model/View code (instead of depending on the backend PHP Model/View/Controller code previously used), uses OpenSeaDragon to render the tile images, and uses nginx and a lightweight NodeJS application to serve tile images, thereby decreasing the Time To First Byte latency by a few orders of magnitude. We plan to extend ImageX to non-FITS images, including electron microscopy and radiology scan images, and to extend its feature set to include basic functions like image overlay and colormaps. Users needing more advanced visualization and analysis capabilities can use a desktop tool like DS9+IRAF on another IU Trident project called StarDock, without having to download gigabytes of FITS image data.
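The core tiling operation behind such a viewer can be sketched as follows: an image is contrast-stretched to 8 bits and cut into fixed-size JPEG tiles. This is not the ImageX code; the tile size, naming scheme and synthetic input are assumptions.

# Simplified tile-generation sketch: stretch an array to 8 bits and save
# fixed-size JPEG tiles. A synthetic array stands in for an ODI FITS frame.
import numpy as np
from PIL import Image

TILE = 256   # arbitrary tile size

def to_8bit(data: np.ndarray) -> np.ndarray:
    lo, hi = np.percentile(data, [1, 99])            # simple contrast stretch
    scaled = np.clip((data - lo) / (hi - lo), 0, 1)
    return (scaled * 255).astype(np.uint8)

def write_tiles(data: np.ndarray, prefix: str = "tile") -> int:
    """Write JPEG tiles named <prefix>_<row>_<col>.jpg and return the count."""
    img = to_8bit(data)
    count = 0
    for y in range(0, img.shape[0], TILE):
        for x in range(0, img.shape[1], TILE):
            Image.fromarray(img[y:y + TILE, x:x + TILE]).save(f"{prefix}_{y}_{x}.jpg")
            count += 1
    return count

if __name__ == "__main__":
    fake_frame = np.random.default_rng(1).normal(1000.0, 50.0, size=(512, 512))
    print(write_tiles(fake_frame), "tiles written")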
The ExoMars DREAMS scientific data archive
Show abstract
DREAMS (Dust Characterisation, Risk Assessment, and Environment Analyser on the Martian Surface) is a payload accommodated on the Schiaparelli Entry and Descent Module (EDM) of ExoMars 2016, the ESA-Roscosmos mission to Mars successfully launched on 14 March 2016. The DREAMS data will be archived and distributed to the scientific community through ESA’s Planetary Science Archive (PSA). All data shall be compliant with NASA’s Planetary Data System (PDS4) standards for formatting and labelling files. This paper summarizes the format and content of the DREAMS data products and associated metadata. The pipeline that converts the raw telemetries into the final products for the archive is sketched as well.
Image processing improvement for optical observations of space debris with the TAROT telescopes
Show abstract
CNES is involved in the Inter-Agency Space Debris Coordination Committee (IADC) and observes space debris with two robotic, fully automated ground-based telescopes called TAROT, operated by the CNRS. An image processing algorithm devoted to debris detection in geostationary orbit is implemented in the standard pipeline. Nevertheless, this algorithm is unable to deal with images taken in debris-tracking mode, even though this mode is the preferred one for debris detectability. We present an algorithm improvement for this mode and give results in terms of false detection rate.
Thirty Meter Telescope (TMT) Narrow Field Infrared Adaptive Optics System (NFIRAOS) real-time controller preliminary architecture
Show abstract
The Narrow Field Infrared Adaptive Optics System (NFIRAOS) is the first light Adaptive Optics (AO) system for the Thirty Meter Telescope (TMT). A critical component of NFIRAOS is the Real-Time Controller (RTC) subsystem, which provides real-time wavefront correction by processing wavefront information to compute Deformable Mirror (DM) and Tip/Tilt Stage (TTS) commands. The National Research Council of Canada - Herzberg (NRC-H), in conjunction with TMT, has developed a preliminary design for the NFIRAOS RTC. The preliminary architecture for the RTC is comprised of several Linux-based servers. These servers are assigned various roles, including the High-Order Processing (HOP) servers, the Wavefront Corrector Controller (WCC) server, the Telemetry Engineering Display (TED) server, the Persistent Telemetry Storage (PTS) server, and additional testing and spare servers. There are up to six HOP servers that accept high-order wavefront pixels, and perform parallelized pixel processing and wavefront reconstruction to produce wavefront corrector error vectors. The WCC server performs low-order mode processing, and synchronizes and aggregates the high-order wavefront corrector error vectors from the HOP servers to generate wavefront corrector commands. The TED server is the RTC interface to TMT and other subsystems. It receives all external commands and dispatches them to the rest of the RTC servers, and is responsible for aggregating several offloading and telemetry values that are reported to other subsystems within NFIRAOS and TMT. The TED server also provides the engineering GUIs and real-time displays. The PTS server contains fault-tolerant data storage that receives and stores telemetry data, including data for Point-Spread Function Reconstruction (PSFR).
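Conceptually, the hard real-time step reduces to a matrix-vector multiply followed by an integrator update, as in the toy sketch below. The matrix, gain and dimensions are arbitrary, and this is not the NFIRAOS RTC implementation.

# Conceptual AO control step: wavefront-sensor slopes are mapped to corrector
# error vectors by a reconstructor matrix, and an integrator accumulates the
# commands. All sizes and values are toy assumptions.
import numpy as np

N_SLOPES, N_ACTUATORS = 8, 4
rng = np.random.default_rng(2)
reconstructor = rng.normal(size=(N_ACTUATORS, N_SLOPES))   # control matrix R
gain = 0.4

def rtc_step(slopes: np.ndarray, commands: np.ndarray) -> np.ndarray:
    error = reconstructor @ slopes            # wavefront corrector error vector
    return commands + gain * error            # simple integrator update

if __name__ == "__main__":
    commands = np.zeros(N_ACTUATORS)
    for _ in range(3):                        # a few simulated frames
        commands = rtc_step(rng.normal(size=N_SLOPES), commands)
    print(commands)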
FRIDA's mechanisms control system structure and tests
R. Flores-Meza,
G. Lara,
B. Sánchez,
et al.
Show abstract
FRIDA will be a near-infrared imager and integral field spectrograph covering the wavelength range from 0.9 to 2.5 microns. FRIDA will work in two observing modes: direct imaging and integral field spectroscopy. This paper presents the main structure of the FRIDA mechanism control system. In order to provide a high level of re-configurability, FRIDA will comprise eight cryogenic mechanisms and one room-temperature mechanism. Most of these mechanisms require high positioning repeatability to ensure that FRIDA meets its demanding astronomical specifications. In order to set up the mechanism positioning control parameters, a set of programs has been developed to perform several tests of the mechanisms in both room-temperature and cryogenic environments. The embedded control software for most of the FRIDA mechanisms has been developed. A description of some mechanism tests and the software used for this purpose is presented.
A virtual appliance as proxy pipeline for the Solar Orbiter/Metis coronagraph
Show abstract
Metis is the coronagraph on board Solar Orbiter, the ESA mission devoted to the study of the Sun that will be launched in October 2018. Metis is designed to perform imaging of the solar corona in the UV at 121.6 nm and in the visible range, where it will also carry out polarimetric studies thanks to a variable retarder plate. Due to mission constraints, the telemetry downlink of the spacecraft will be limited and data will be downloaded with delays that could reach, in the worst case, several months. In order to have a quick overview of the ongoing operations and to check the safety of the 10 instruments on board, a high-priority downlink channel has been foreseen to download a restricted amount of data. These so-called Low Latency Data will be downloaded daily and, since they could trigger corrective actions, they have to be processed on ground quickly, as soon as they are delivered. To do so, a proper processing pipeline has to be developed for each instrument. This tool will then be integrated into a single system at the ESA Science Operations Centre, which will receive the downloaded data from the Mission Operations Centre. This paper provides a brief overview of the on-board processing and the data produced by Metis, and describes the proxy pipeline currently under development to handle the Metis low-latency data.
The ALMA Snooping Project Interface (SnooPI)
Show abstract
In order to provide ALMA users with a comprehensive view of their observing projects, we developed the ALMA Snooping Project Interface (SnooPI). Its simple and intuitive interface allows scientists to follow the status of their projects, broken down into observing unit sets and scheduling blocks. The application itself consists of two separate parts: a Java back-end server and a JavaScript front-end client application. The application interacts with the REST interfaces of other ALMA software components to obtain the necessary project reports and details describing the observations, and to access statistics on the user’s ALMA Helpdesk tickets. All this information makes it possible to trace all stages of observation, processing and delivery of ALMA science projects.
Observatory software for the Maunakea Spectroscopic Explorer
Show abstract
The Canada-France-Hawaii Telescope (CFHT) is currently in the conceptual design phase of redeveloping its facility into the new Maunakea Spectroscopic Explorer (MSE). MSE is designed to be the largest non-ELT optical/NIR astronomical telescope and will be a fully dedicated facility for multi-object spectroscopy over a broad range of spectral resolutions. This paper outlines the software and control architecture envisioned for the new facility. The architecture will be designed around much of the existing software infrastructure currently used at CFHT, as well as the latest proven open-source software. CFHT plans to minimize risk and development time by leveraging existing technology.
Synchronization of off-centered dome and 3.6m Devasthal Optical Telescope
Show abstract
A 3.6m aperture telescope has recently been installed at Devasthal and, once commissioned, it will be the largest optical telescope in India. The integration of the telescope was carried out by lifting the components from inside the telescope building. To make this possible, the position of the telescope was shifted by 1.85m from the dome centre, at an angle of 255 degrees with respect to north. This posed a serious challenge in synchronizing the dome with the telescope movement. In this contribution we present the synchronization algorithm and the dome control software.
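The underlying geometry can be sketched as follows: because the pier is offset from the dome centre, the dome must be driven to the azimuth at which the optical axis pierces the dome, not to the telescope azimuth itself. The sketch uses the 1.85m offset at 255 degrees quoted above, but the dome radius and the calculation are illustrative assumptions, not the DOT control software.

# Geometric sketch of dome following for an off-centred telescope: intersect
# the pointing ray (starting at the offset pier) with a sphere of dome radius
# and take the azimuth of the intersection point. Dome radius is hypothetical.
import numpy as np

DOME_RADIUS_M = 8.0            # hypothetical dome radius
OFFSET_M = 1.85                # pier offset from dome centre (from the text)
OFFSET_AZ_DEG = 255.0          # direction of the offset w.r.t. north (from the text)

def dome_azimuth(tel_az_deg: float, tel_el_deg: float) -> float:
    az_off = np.radians(OFFSET_AZ_DEG)
    pier = OFFSET_M * np.array([np.sin(az_off), np.cos(az_off), 0.0])   # x=E, y=N, z=up
    az, el = np.radians(tel_az_deg), np.radians(tel_el_deg)
    v = np.array([np.sin(az) * np.cos(el), np.cos(az) * np.cos(el), np.sin(el)])
    # Solve |pier + t*v| = R for the positive root t.
    b = pier @ v
    t = -b + np.sqrt(b * b - (pier @ pier - DOME_RADIUS_M ** 2))
    hit = pier + t * v
    return float(np.degrees(np.arctan2(hit[0], hit[1])) % 360.0)

if __name__ == "__main__":
    # The required dome azimuth differs from the telescope azimuth,
    # most strongly at low elevation.
    print(dome_azimuth(0.0, 30.0), dome_azimuth(0.0, 80.0))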
A novel approach to visual rendering of astro-photographs
Show abstract
When we perform a visual analysis of a photograph of a cosmic object, contrast plays a fundamental role. A linear distribution of the observable values is not necessarily the best possible for the Human Visual System (HVS). In fact, the HVS has a non-linear response and exploits contrast locally, with different stretching for different lightness areas. As a consequence, depending on the observation task, local contrast can be adjusted to make the detection of relevant information easier. The proposed approach is based on Spatial Color Algorithms (SCA) that mimic HVS behavior. These algorithms compute each pixel value by a spatial comparison with all (or a subset of) the other pixels of the image. The comparison can be implemented as a weighted difference or as a product of ratios over a given sampling of the neighboring region. A final mapping allows all of the available dynamic range to be exploited. In the case of color images, SCA process the three chromatic channels separately, producing a color normalization effect without introducing cross-correlation between channels. We present very promising results on amateur photographs of deep-sky objects. The results are presented for qualitative, subjective visual evaluation and for quantitative evaluation through image quality measures, in particular to quantify the effect of the algorithms on noise. Moreover, our results help to better characterize contrast measures.
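A heavily simplified version of the spatial-comparison idea (each pixel compared, via a ratio, against a random sample of other pixels, followed by a final mapping) is sketched below; the sampling strategy and parameters are arbitrary and this is not the authors' SCA implementation.

# Simplified spatial-comparison sketch: every pixel is divided by the maximum
# of a random sample of other pixels (a ratio-based comparison), then the
# result is remapped to the full dynamic range. Channels are processed
# independently, as described in the text.
import numpy as np

def sca_channel(channel: np.ndarray, n_samples: int = 32, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    flat = channel.astype(float).ravel()
    out = np.empty_like(flat)
    for i, value in enumerate(flat):
        # Compare the pixel with a random sample of other pixels.
        sample = flat[rng.integers(0, flat.size, size=n_samples)]
        reference = max(sample.max(), value, 1e-6)
        out[i] = value / reference                               # ratio comparison
    out = (out - out.min()) / max(out.max() - out.min(), 1e-6)   # final mapping
    return out.reshape(channel.shape)

def sca_rgb(image: np.ndarray) -> np.ndarray:
    # Each chromatic channel is processed separately (no cross-correlation).
    return np.dstack([sca_channel(image[..., c], seed=c) for c in range(3)])

if __name__ == "__main__":
    img = np.random.default_rng(3).random((32, 32, 3))
    print(sca_rgb(img).shape)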