Proceedings Volume 12189

Software and Cyberinfrastructure for Astronomy VII

Jorge Ibsen, Gianluca Chiozzi

Volume Details

Date Published: 22 September 2022
Contents: 21 Sessions, 94 Papers, 46 Presentations
Conference: SPIE Astronomical Telescopes + Instrumentation 2022
Volume Number: 12189

Table of Contents
  • Project Management/Web Technologies
  • Software Engineering
  • Cyberinfrastructure
  • Observatory and Telescope Control I
  • Observatory and Telescope Control II
  • Data Science/Engineering and HPC
  • Data Management, Processing, and Pipelines I
  • Observatory and Telescope Control III
  • Instrumentation Control
  • Data Management, Processing, Pipelines II
  • Project Overviews and Progress
  • Poster Session: Cyberinfrastructure
  • Poster Session: Data Management, Processing and Pipelines
  • Poster Session: Data
  • Poster Session: Instrumentation Control
  • Poster Session: Observatory/Telescope Control
  • Poster Session: Software Engineering
  • Poster Session: Software Quality and Testing
  • Poster Session: UI/Web Technologies
  • Poster Session
  • Front Matter: Volume 12189
Project Management/Web Technologies
Inspecting and adapting via problem-solving workshops: the SKA experience
Valentina Alberti, Snehal Valame
The retrospective and problem-solving workshops held at the end of each Program Increment by Agile Release Trains are significant events that characterise the adoption of the Scaled Agile Framework (SAFe). Their purpose is to address systemic problems that have been identified and prioritised by agile team members and management, with the end goal of encouraging learning and growth through continuous reflection and process enhancement. They are the fuel of the relentless-improvement mindset, one of the critical pillars of the SAFe House of Lean. In this paper we report on the execution of retrospective and problem-solving workshops in the context of a large, multicultural, scientific project, the Square Kilometre Array Observatory (SKAO). The analysis describes the biggest challenges we experienced and how the format of the workshops (face-to-face or virtual), the tools and processes followed, the problems discussed, and the improvement items selected have gradually changed since the adoption of SAFe in 2019. It also assesses the efficacy of retrospective and problem-solving workshops in driving process improvements, and identifies areas that can be adjusted to increase their effectiveness.
Managing an agile build phase while keeping the client informed with your progress
In astronomy, software projects that are tied to hardware (interfacing with or controlling hardware, or sequencing it) are usually bound to their funding profiles, which means a project typically starts at a feasibility or conceptual level, moves to a preliminary design, and then to a final/critical design. Each of those phases typically culminates in a design review where the work is assessed, and at that point the external client understands what has been done in those stages. Once the build stage starts, progress becomes less clear, especially when the client is not a software engineer. Years ago, software was built and delivered in its entirety once finished, but this does not build confidence in the external client that the project will be completed in a timely manner. It is also becoming more common for an Agile development cycle to be used. This paper will discuss the different interpretations of Agile, the roles of client and contractor, conflicts that occur in an Agile build, and different ways to report on projects, including those that have worked and those that have not.
Software Engineering
CI-CD practices at SKA
M. Di Carlo, P. Harding, U. Yilmaz, et al.
The Square Kilometre Array (SKA) is an international effort to build two radio interferometers, in South Africa and Australia, forming one Observatory monitored and controlled from a global headquarters (GHQ) based in the United Kingdom at Jodrell Bank. SKA is highly focused on adopting CI/CD (Continuous Integration and Continuous Delivery/Deployment) practices for its software development. This paper analyses the CI/CD practices selected by the Systems Team (a specialised agile team devoted to developing and maintaining the tools that enable continuous practices) in relation to a specific software system of the SKA telescope: the Local Monitoring and Control (LMC) of the Central Signal Processor (CSP), hereafter CSP.LMC. CSP is the SKA element that processes the data coming from the receivers so that they can be used for scientific analysis. To achieve this, it is composed of several instruments, called subsystems, such as the Correlator Beam Former (CBF), the Pulsar Search (PSS) and the Pulsar Timing (PST). CSP.LMC provides the Telescope Manager (the software front-end to control the telescope operations) with all the information required to monitor the CSP's subsystems, and with the interface to configure them and send the commands needed to perform an observation. In other words, CSP.LMC permits the TM to monitor and control CSP as a single entity.
A middleware to confine obsolescence
Marco Buttu, Giuseppe Carboni, Antonietta Fara, et al.
Software obsolescence affects all control systems (CSs) designed to last for decades. They are often based on end-of-life operating systems, libraries, frameworks and programming-language versions that are no longer supported. This legacy code forces GUIs, clients and third-party applications to cope with the same constraints as the CS, spreading the obsolescence even more widely. Profitable mainstream online services for code hosting and continuous-deployment workflows are sometimes not exploitable. The software team can thus lose motivation because of the lack of the stimuli usually brought by innovation. On the other hand, a full CS refurbishment is sometimes unaffordable, either because it requires a large manpower effort or because it might impair system stability. Some of these issues can be solved by designing a middleware lying between the CS and the external world. The middleware exposes APIs to the clients and offers a level of abstraction from the operating system and the programming language. Moreover, the CS can be easily extended, bypassing the old framework and taking advantage of new architectures. In this paper we present the solution we chose for the Sardinia Radio Telescope and the other radio telescopes managed by the Italian National Institute for Astrophysics (INAF). We discuss the advantages and drawbacks of a middleware, and we provide the technical details and technologies of our implementation.
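To make the middleware idea concrete, here is a minimal sketch of such a facade in Python, assuming a hypothetical legacy binding (legacy_cs) and using Flask to expose a stable HTTP API; the endpoint names and legacy calls are illustrative, not the INAF implementation.

```python
# Minimal middleware sketch: a thin HTTP facade that isolates clients from a
# legacy control system. `legacy_cs` is a hypothetical binding to the old CS;
# everything else is standard Flask.
from flask import Flask, jsonify, request

import legacy_cs  # hypothetical Python binding to the legacy control system

app = Flask(__name__)

@app.route("/antenna/<name>/position", methods=["GET"])
def get_position(name):
    # Translate the legacy call into a stable, versionable JSON payload.
    az, el = legacy_cs.read_position(name)   # assumed legacy API
    return jsonify({"antenna": name, "azimuth_deg": az, "elevation_deg": el})

@app.route("/antenna/<name>/target", methods=["PUT"])
def set_target(name):
    body = request.get_json(silent=True) or {}
    # Validation lives in the middleware, so clients never depend on
    # legacy error conventions.
    if not {"ra_deg", "dec_deg"} <= body.keys():
        return jsonify({"error": "ra_deg and dec_deg are required"}), 400
    legacy_cs.slew_to(name, body["ra_deg"], body["dec_deg"])  # assumed legacy API
    return jsonify({"status": "accepted"}), 202

if __name__ == "__main__":
    app.run(port=8080)
```

Because clients only see the HTTP contract, the legacy system behind the facade can later be replaced piecemeal without breaking them.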
Experience of utilising CI/CD practices in the development of software for a modern astronomical observatory
Joao Bento, Doug M. Arnold, Robert J. Smith, et al.
The 4m-class New Robotic Telescope being built on La Palma, Canary Islands, will build upon the successful autonomous robotic operations model of the Liverpool Telescope. The software stack brings together Telescope Level Systems adapted from the GranTeCan Control System with a new Robotic Control System replacing a human operator. On top of this sit the observer and operations interface systems for submission of observations, retrieval of data and monitoring of operational progress. This software stack has been developed as a simulated end-to-end minimum viable product (MVP), complete with a simulated telescope and imaging instrument. We present our experiences of applying agile continuous-integration methodologies and practices to develop our software, and highlight the benefits of this approach in developing the systems that will power a modern astronomical observatory still under construction.
Cyberinfrastructure
The Sloan Digital Sky Survey cyberinfrastructure
José R. Sánchez-Gallego, Brian Cherinka, Joel Brownstein, et al.
The Sloan Digital Sky Survey V (SDSS-V) is an all-sky, multi-epoch spectroscopic survey designed to decode the stellar evolution of the Milky Way, reveal the inner workings of stars, study the interstellar medium in the Local Volume of galaxies, and track the growth of supermassive black holes across the Universe. SDSS-V presents significant innovations in hardware and instrumentation, with the introduction of a new Focal Plane System instrument that enables multi-object spectroscopy using an array of 500 robotic fibre positioners, and the development of a new robotic observatory for the Local Volume Mapper program. These advances in instrumentation and operations necessitate a similarly evolved computing and software architecture to ensure survey efficiency and to take advantage of the improvements in software engineering and development. In this paper we present the cyberinfrastructure of the SDSS project with a focus on the changes introduced since the previous iteration of the project, the adoption of new technologies, and the lessons learned in this process.
Unifying the deployment of ALMA's end user applications at its regional centers using a distributed infrastructure
Álvaro Aguirre, Víctor González, Lidia Dominguez-Faus, et al.
ALMA provides a wide range of web applications. Their main purpose is to support the work of its end users, be they staff astronomers or the external scientific community, who use them to propose and track observation projects, including downloading their scientific data. These web applications - internally known as the Offline Software, in contrast to the Online Software, which corresponds mainly to the Control Software - are separated into two groups. One group of applications, which needs to modify data contained in the ALMA Archive, is deployed at the JAO offices in Chile; a second group, which does not modify data in the ALMA Archive, is deployed at each ALMA Regional Center (ARC) to improve application response time by running closer to the final user. Building on previous improvements to the deployment of the web applications used by the Joint ALMA Observatory (JAO), ALMA has recently achieved a unified way of deploying the applications that run at each ARC. This has been accomplished by implementing an infrastructure/configuration-as-code approach. The code base holding the configuration and infrastructure definitions is kept under configuration control, following a set of DevOps best practices to handle the day-to-day operations of all these applications, in a unified way, across all ARCs and the JAO. To manage these tools at the different ARCs, a maintenance group for this deployment framework has recently been established. In this paper we detail the framework implemented in this process. We also explain the characteristics of the globally distributed maintenance group, the process by which we manage the deployment of the applications at each ARC, and the successes we have enjoyed thanks to this collaboration within ALMA's partner institutions.
Observatory and Telescope Control I
Creating a highly flexible and autonomous stratospheric observatory: the essential elements of the European Stratospheric Balloon Observatory payload control software
Balloon-based astronomical missions offer the potential for cost-effective, large and complex astronomical infrastructure in between ground- and space-based observatories. Balloons are particularly suitable for UV and IR observations. Even though stratospheric observations have a long history, until recently they have been one-shot missions. The design of balloon platforms is usually formed around one or more specific instruments and their requirements. Not surprisingly, flight and ground software architecture in these missions may often seem like an afterthought, with very limited flexibility intended. The relatively limited budget and short operations time of these platforms to some extent justify the decision not to invest in flexible flight or ground software. The same applies to the onboard autonomy of these scientific systems, including autonomous scheduling, data processing, and Fault Detection, Isolation and Recovery methods implemented onboard. The implemented autonomy is relatively limited and only answers the expected circumstances of a single mission; it is usually realised as separate scripts running automatically in parallel, with some communication between them, rather than as autonomous adaptive operations. However, progress in the development of super-pressure balloons will provide access to longer balloon missions, and safe-landing technologies will help to reuse these platforms and launch them several times. The European Stratospheric Balloon Observatory (ESBO) infrastructure is an ongoing effort to improve the way scientific balloons have been used up to now by creating an autonomous, highly flexible, and reusable platform, capable of integrating different instruments, long autonomous flights, and frequent launches. The promise of reusability and exchangeability of the instruments at the core of ESBO requires a flexible design at both the hardware and software levels. The goal of this paper is to describe the flexible and autonomous payload control software developed for ESBO's first prototype, STUDIO, and its various elements. The essential elements covered are the instrument-independent telescope stabilisation system based on COTS elements, the autonomous and highly flexible science-data downlink manager, the onboard scheduler, mode-based telescope operations and, finally, the onboard fault detection, isolation, and recovery routines. The software is developed on top of an open-source flight-software framework for space systems and is itself open source, to benefit the community working on similar missions. In addition to the flight software, the paper briefly describes the flexible autonomous science pipeline on the ground, designed to perform an arbitrary chain of processing steps based on the specific type of data received, and the pointing-monitoring software; both are web-based tools intended to enhance remote operations in the future.
A graph database solution for tracking the deployment and layout of a large radio interferometer
Adam D. Hincks, Anatoly Zavyalov, Dhananjhay Bansal
The Hydrogen Intensity and Real-Time Analysis eXperiment (HIRAX) is a 21 cm intensity-mapping experiment in the Karoo region of South Africa that will consist of 1,024 six-metre dishes operating interferometrically from 400 to 800 MHz; an initial 256-dish array is funded. The full experiment will have over 2,000 signal chains, each consisting of many individual components - dual-polarisation antennas, amplifiers, filters, cables, etc. - whose connections, locations and states need to be tracked as the experiment is deployed and modified. Data analysis requires accurate information about the physical location of each antenna and which digital channel it is connected to at any given moment in the observatory's history. Identifying the particular components within each signal chain, which in principle can have unique calibration information, may also be needed in order to reach the high levels of precision required by HIRAX. This complex bookkeeping task requires specialised software, and Padloper is a package under active development to meet this need. It uses JanusGraph, an open-source graph database, to represent hardware components as vertices and their connections as edges of a graph. A custom-written Python package populates the database and can be used to query it for the experiment configuration at a given date and time. A web interface built with React connects to this package via a Flask server for user-friendly access and provides useful visualisation tools. Padloper is open source and could easily be deployed for any experiment that needs to track signal chains, such as the upcoming CHORD and PUMA radio interferometers.
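As an illustration of the graph approach described above, the sketch below uses the gremlinpython driver to ask a JanusGraph server which components were connected to a given antenna at a given time; the vertex label, property names and the 'start'/'end' edge timestamps are hypothetical stand-ins for Padloper's actual schema.

```python
# Query sketch: find everything connected to one antenna at a given time.
# The schema (labels 'component'/'connection', properties 'name', 'start',
# 'end') is illustrative, not Padloper's real data model.
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.traversal import P

conn = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")
g = traversal().withRemote(conn)

t = 1656633600  # query time as a Unix timestamp

# Start from the antenna vertex, walk 'connection' edges that were live at
# time t (start <= t < end), and return the neighbouring components.
neighbours = (
    g.V()
    .has("component", "name", "antenna-0042")
    .bothE("connection")
    .has("start", P.lte(t))
    .has("end", P.gt(t))
    .otherV()
    .valueMap("name")
    .toList()
)
print(neighbours)
conn.close()
```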
Dynamic scheduling for SOXS instrument: environment, algorithms and development
We present the development progress of the scheduler for the Son Of X-Shooter (SOXS) instrument at the ESO-NTT 3.58-m telescope. SOXS will be a single-object spectroscopic facility, consisting of a two-arm high-efficiency spectrograph covering the spectral range 350-2000 nm with a mean resolving power R≈4500. SOXS will be uniquely dedicated to the UV-visible and near-infrared follow-up of astrophysical transients, with a very wide pool of targets available from the streaming services of current and future wide-field telescopes. The instrument will serve a variety of scientific goals in the astrophysical community, each eliciting specific requirements for observation planning that the observing scheduler has to meet. Following directions from the European Southern Observatory (ESO), the instrument will be operated only by La Silla staff, with no astronomer present on the mountain. This implies a new challenge for the scheduling process, requiring a fully automated algorithm that is able to present the operator not only with an ordered list of optimal targets, but also with optimal back-ups, should anything in the observing conditions change. This demands a fast-response capability of the scheduler, without compromising the optimisation process that ensures good quality of the observations. In this paper we present the current state of the scheduler, which is now almost complete, and of its web interface.
A metaheuristic approach for INO340 telescope flexible scheduling
Observatories are often oversubscribed, with observation proposals competing for the available time slots with the best observing conditions. The scheduling system plays a critical role in such matters. For the INO340 telescope, a flexible scheduling system has been developed to build optimal programs for the observation nights, minimising the idle time of the telescope and decreasing the cost of its mechanical motion while obtaining the best-quality images. A genetic algorithm is employed to take into account the predictable factors affecting the observing conditions and obtain an optimal schedule. This paper presents the design and implementation of the short-term flexible scheduling, the factors involved in the process, and the evaluation test results showing how it improved performance.
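A toy version of such a genetic-algorithm scheduler is sketched below, assuming the fitness is simply the total slew cost of an ordered target list; the real INO340 fitness function also folds in observing conditions and idle time, and the operators and parameters here are illustrative.

```python
# Toy genetic-algorithm scheduler: evolve an observing order that minimises
# total slew distance between targets. Fitness, operators and parameters are
# illustrative only; the INO340 scheduler also weighs conditions and idle time.
import random

TARGETS = [(random.uniform(0, 360), random.uniform(-30, 60)) for _ in range(20)]

def slew_cost(order):
    # Sum of angular moves between consecutive targets (crude metric).
    return sum(
        abs(TARGETS[a][0] - TARGETS[b][0]) + abs(TARGETS[a][1] - TARGETS[b][1])
        for a, b in zip(order, order[1:])
    )

def crossover(p1, p2):
    # Ordered crossover: keep a slice of p1, fill the rest in p2's order.
    i, j = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[i:j])
    rest = [g for g in p2 if g not in hole]
    return rest[:i] + p1[i:j] + rest[i:]

def mutate(order, rate=0.1):
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

population = [random.sample(range(len(TARGETS)), len(TARGETS)) for _ in range(100)]
for generation in range(200):
    population.sort(key=slew_cost)
    elite = population[:20]                      # keep the best schedules
    children = [
        mutate(crossover(random.choice(elite), random.choice(elite)))
        for _ in range(80)
    ]
    population = elite + children

best = min(population, key=slew_cost)
print("best slew cost:", slew_cost(best))
```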
Software architecture and development plan for a 4m fully autonomous observatory (New Robotic Telescope)
Joao Bento, Robert J. Smith, Doug M. Arnold, et al.
The New Robotic Telescope (NRT) is a new UK/Spain 4-m optical telescope on La Palma. When complete it will be the world's largest and fastest fully autonomous optical observatory, and it is being designed to address the coming era of time-domain and transient astrophysics. We summarise the planned software architecture, presented as a complete, coordinated observatory system from telescope axis control, through intelligent, automated scheduling, up to the user interfaces available at astronomers' home desks. We have adopted a blend of proven software from existing telescopes and newly developed systems suited to the telescope's unique requirements and to modern web-based collaboration models. We pay particular attention to the aspects of the software stack that distinguish this project and enable unsupervised, autonomous science operations in which observers around the globe can specify and monitor their observing requests intra-night without needing any support from observatory staff.
Observatory and Telescope Control II
The Software Architecture and development approach for the ASTRI Mini-Array gamma-ray air-Cherenkov experiment at the Observatorio del Teide
A. Bulgarelli, F. Lucarelli, G. Tosti, et al.
The ASTRI Mini-Array is an international collaboration led by the Italian National Institute for Astrophysics (INAF) and devoted to the imaging of atmospheric Cherenkov light for very-high-energy gamma-ray astronomy. The project is deploying an array of 9 telescopes sensitive above 1 TeV. In this contribution, we present the architecture of the software that covers the entire life cycle of the observatory, from scheduling to remote operations and data dissemination. The high-speed network connection available between the observatory site in the Canary Islands and the Data Center in Rome allows for ready data availability for stereo triggering and data processing.
EtherCAT as an alternative of the next generation real-time control system for telescopes
Tzu-Chiang Shen, Patricio Galeas, Sebastian Carrasco, et al.
The ALMA Observatory was inaugurated in 2013; after almost ten years of successful operation, obsolescence has emerged in different areas. One of the most critical is the real-time controller: around 80 controllers of this kind are distributed across the observatory. They monitor and control hardware devices through a customised protocol built on top of the CAN bus. Other observatories are facing similar obsolescence problems in this area. In collaboration with Universidad de La Frontera, initial studies were performed to explore alternatives that could provide state-of-the-art solutions for the next decades. One of the candidate solutions explored is based on EtherCAT technology. This project takes the ALMA control system as a challenge and evaluates a new design that is not only compatible with ALMA's existing hardware-device framework but also provides the foundation for the new subsystems associated with the ALMA 2030 initiatives. In this paper we report the progress of a proof of concept that explores the possibility of embedding the existing ALMA monitor and control data structures into EtherCAT frames and using EtherCAT as the primary communication protocol to monitor and control the hardware devices of ALMA telescope subsystems.
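A schematic illustration of what "embedding monitor and control data into EtherCAT frames" could look like is given below: packing monitor points into a fixed-layout process-data image with Python's struct module. The field names, sizes and offsets are invented for illustration and are not ALMA's real data structures.

```python
# Schematic only: pack a monitor point into a fixed-layout process-data image,
# the way cyclic EtherCAT exchanges expect. Field names, sizes and offsets are
# invented for illustration; they are not ALMA's real data structures.
import struct

# Hypothetical layout: device id (u16), monitor point id (u16),
# timestamp in microseconds (u64), value (f64) -> 20 bytes, little-endian.
MONITOR_FMT = struct.Struct("<HHQd")

def encode_monitor_point(device_id, point_id, timestamp_us, value):
    return MONITOR_FMT.pack(device_id, point_id, timestamp_us, value)

def decode_monitor_point(frame):
    return MONITOR_FMT.unpack(frame)

pdo = encode_monitor_point(0x002A, 0x0103, 1656633600_000000, 3.1415)
assert decode_monitor_point(pdo) == (0x002A, 0x0103, 1656633600_000000, 3.1415)
print(len(pdo), "bytes per monitor point")
```

A fixed binary layout like this is what makes deterministic, cyclic exchange possible: every device knows exactly where its data sit in the frame.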
New electronic brains for Halfmann telescopes
Jörg Weingrill, Thomas Granzer, Michael Weber, et al.
Many components of our STELLA telescopes on Tenerife, which were built by Halfmann in the 2000s, have reached the end of their life, with no replacement parts available. A solution was necessary to guarantee continuous operation and support for the next ten years. The prerequisite for the retrofit, however, was that the mechanical components remain largely untouched, in order to simplify the upgrade. We decided to remove all the existing electronics in the main control cabinet. To avoid electronic interference in the scientific instruments, we took several precautions, including an isolating transformer, line filters and power chokes for the servo drivers. All of the control electronics, as well as the sensory inputs, are now handled by Beckhoff components. A Beckhoff CX5140 PLC is the new "electronic brain", replacing a Linux computer running the telescope control firmware. The new telescope control firmware, written in TwinCAT 3, is available as open source. MQTT messages are used to command the telescope and to report sensor values and position information. Sensor measurements and the state of the telescope are logged in an InfluxDB database and visualised using Grafana. Future enhancements include improved guiding of the telescope using machine vision and a GigE camera in a closed loop on the PLC.
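The MQTT pattern described above is easy to prototype. The sketch below uses the paho-mqtt client (1.x callback API) to send a pointing command and listen for telemetry; the broker address and topic names are invented for illustration, not the STELLA topic scheme.

```python
# MQTT sketch: command a telescope and subscribe to its telemetry.
# Broker address and topic names ("stella/...") are invented for illustration.
# Uses the paho-mqtt 1.x callback API.
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Telemetry arrives as JSON payloads on the telemetry topics.
    print(msg.topic, json.loads(msg.payload))

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.org", 1883)

# Report channels: sensor values and position information.
client.subscribe("stella/telescope/position")
client.subscribe("stella/telescope/sensors/#")

# Command channel: ask the PLC to slew to a target.
client.publish(
    "stella/telescope/command",
    json.dumps({"cmd": "slew", "ra_deg": 279.23, "dec_deg": 38.78}),
    qos=1,
)

client.loop_forever()  # dispatch callbacks until interrupted
```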
The ELT Sequencer
The automated execution of scientific observations and engineering procedures at the Extremely Large Telescope (ELT) requires a standard tool: the Sequencer. For scientific observations, the Sequencer is responsible for controlling the telescope and its instruments to perform the observations; for engineering, it shall be used for commissioning and maintenance procedures. The ELT Sequencer allows building Directed Acyclic Graphs (DAGs) representing the tasks to be carried out. The generated graph defines every task needed (nodes), its order of execution, and its dependencies (edges). Python's asyncio library is used to control and schedule the tasks derived from the DAG, and allows for pseudo-parallelism between tasks. Despite being asyncio-based, the Sequencer is task-agnostic, allowing standard Python functions as well as coroutines to be executed. It is composed of several layers: the programmer's API, the execution kernel, command-line tools and a GUI.
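A minimal sketch of the DAG-execution idea follows: each node runs as soon as its dependencies finish, using only asyncio primitives. The node names and task bodies are illustrative, not the ELT Sequencer's API.

```python
# Minimal DAG executor in pure asyncio: a node starts as soon as all of its
# dependency nodes have finished. Node names and sleeps are illustrative.
import asyncio

# edges: node -> set of nodes it depends on
DAG = {
    "preset_telescope": set(),
    "configure_instrument": set(),
    "acquire_guide_star": {"preset_telescope"},
    "expose": {"acquire_guide_star", "configure_instrument"},
}

async def run_node(name):
    print("start", name)
    await asyncio.sleep(0.1)   # stand-in for real telescope/instrument work
    print("done ", name)

async def run_dag(dag):
    done = {name: asyncio.Event() for name in dag}

    async def wrapper(name):
        # Wait for every dependency before running this node.
        await asyncio.gather(*(done[dep].wait() for dep in dag[name]))
        await run_node(name)
        done[name].set()

    # Launch all nodes; dependencies gate their actual execution.
    await asyncio.gather(*(wrapper(name) for name in dag))

asyncio.run(run_dag(DAG))
```

Independent nodes ("preset_telescope" and "configure_instrument") run concurrently, which is the pseudo-parallelism the abstract refers to.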
Simonyi Survey Telescope M1M3 control system
The Rubin Observatory's Simonyi Survey Telescope M1M3 is a lightweight honeycomb 8.4-meter Ohara E-6 borosilicate glass mirror, cast by the University of Arizona (UofA) Mirror Lab. It combines primary and tertiary mirror surfaces, hence its acronym. Its control software can be regarded as a third-generation UofA mirror active control system, after those of the Multiple Mirror Telescope (MMT) and the Large Binocular Telescope Observatory (LBTO). The control software uses a combination of LabVIEW Field Programmable Gate Array (FPGA) code, C++ ("back office") code, and Python/Web (Graphical User Interface (GUI)/Engineering User Interface (EUI)) code to control the mirror. With the telescope's first light expected soon, the evolution of the control software, the changes performed, and the new development and status are described.
Data Science/Engineering and HPC
Real-time inversion of solar spectropolarimetric data at high spatial and temporal resolution: HPC and GPU implementations
The upcoming generation of 4-meter solar telescopes (such as DKIST and EST) and planned networks for synoptic solar observations (such as SPRING) will rely on full Stokes spectropolarimetric measurements to infer the properties of the solar atmosphere. They will produce a wealth of data whose analysis represents a formidable challenge. To solve this problem, we have pursued two approaches within the H2020 SOLARNET project: parallelization of a Milne-Eddington Stokes inversion code for use in mid-size servers and implementation in graphics processing units (GPUs). Here we present the results of those efforts. P-MILOS and G-MILOS are two Stokes inversion codes that can be used to produce maps of physical quantities in real time during the observations at the telescope, or to generate science-ready data from time series of spectropolarimetric measurements taken by both imaging and slit-based spectropolarimeters. These codes will open a new era in solar research.
Towards a data analytics platform for technical data in Paranal observatory
Eduardo Peña, Andres Anania, Juan Pablo Gil, et al.
During the last five years, Paranal has been developing a data-centric paradigm for the monitoring and maintenance of the different systems in the observatory. The main objectives of this paradigm are, on the one hand, to automate as many tasks as possible to improve the dependability of the observatory without increasing the FTEs needed to operate it and, on the other, to increase remote operation, reducing the need for in-situ access to the system under scrutiny. In principle, the data-centric approach is meant to complement, not replace, the traditional problem-solving methods used at Paranal; nevertheless, FTE-expensive tasks must be limited to exceptional situations. Over these years we have moved from prototypes to production, and the observatory culture is slowly changing towards this data-centric approach, including the gradual incorporation of AI/ML and NLP. Nonetheless, this is just stage one: we are now expanding the scope by incorporating, among other things, the cloud, and creating a homogeneous, hybrid data cyberinfrastructure.
Image quality evaluation and fast masking with deep neural networks
As more and more images are obtained by astronomical observations, a fast image-quality evaluation algorithm is required for data processing pipelines. The algorithm should be able to recognise blur or noise levels according to scientists' requirements and mask parts of images with low quality. In this paper, we introduce a deep-learning-based image-quality evaluation and fast masking algorithm. Our algorithm uses an auto-encoder neural network to obtain blur or noise levels, which we then use to generate mask maps for the input images. Tested with simulated and real data, our algorithm provides reliable results with a small number of images as the training set. It could be used as a reliable image-masking algorithm in different image processing pipelines.
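One common way to turn an auto-encoder into a mask generator is sketched below with NumPy: regions the model reconstructs poorly get flagged. The `reconstruct` function stands in for the trained network (here a trivial blur), and the tile size and sigma threshold are illustrative choices, not the paper's.

```python
# Sketch: derive a quality mask from auto-encoder reconstruction error.
# `reconstruct` stands in for a trained network; here it is a trivial blur,
# and the tile size and sigma threshold are illustrative choices.
import numpy as np

def reconstruct(image):
    # Placeholder for model inference: a crude 3x3 box smoothing.
    padded = np.pad(image, 1, mode="edge")
    return sum(
        padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0

def quality_mask(image, tile=32, n_sigma=3.0):
    error = (image - reconstruct(image)) ** 2
    ny, nx = image.shape[0] // tile, image.shape[1] // tile
    # Mean reconstruction error per tile.
    tiles = error[: ny * tile, : nx * tile].reshape(ny, tile, nx, tile)
    score = tiles.mean(axis=(1, 3))
    # Flag tiles whose error is an outlier with respect to the whole frame.
    bad = score > score.mean() + n_sigma * score.std()
    return np.kron(bad, np.ones((tile, tile), dtype=bool))

image = np.random.normal(1000.0, 10.0, (256, 256))
image[64:96, 128:160] += np.random.normal(0.0, 200.0, (32, 32))  # noisy patch
mask = quality_mask(image)
print("masked fraction:", mask.mean())
```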
The quality check system architecture for Son-Of-X-Shooter SOXS
Marco Landoni, Laurent Marty, Dave Young, et al.
We report on the architecture implemented for monitoring the health and data quality of the Son Of X-Shooter (SOXS) spectrograph for the New Technology Telescope in La Silla at the European Southern Observatory. Briefly, we report on the innovative NoSQL database approach used for storing time-series data, which is well suited to automatically triggering alarms, and on the high-quality graphs rendered on the dashboard used by the operation support team. The system is designed to constantly and actively monitor the Key Performance Indicator (KPI) metrics, as automatically as possible, reducing the overhead on the support and operation teams. Moreover, we detail the interface designed to inject quality-check metrics from the automated SOXS pipeline (Young et al. 2022).
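To illustrate the time-series-plus-alarms pattern, here is a sketch using MongoDB as an example NoSQL backend; the metric names, thresholds and database layout are invented, and the SOXS system's actual database choice may differ.

```python
# Sketch of a KPI time-series store with alarm triggering, using MongoDB as an
# example NoSQL backend. Metric names, thresholds and the database layout are
# invented; the SOXS system's actual choices may differ.
import datetime
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
kpis = client["qc"]["kpi_samples"]

THRESHOLDS = {"resolving_power_uvvis": (4000.0, None)}  # (min, max) per KPI

def record_kpi(name, value):
    kpis.insert_one(
        {"name": name, "value": value, "t": datetime.datetime.utcnow()}
    )
    low, high = THRESHOLDS.get(name, (None, None))
    # Raise an alarm as soon as the sample leaves its allowed band.
    if (low is not None and value < low) or (high is not None and value > high):
        print(f"ALARM: {name} = {value} outside [{low}, {high}]")

record_kpi("resolving_power_uvvis", 3875.0)  # triggers the alarm
```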
Data Management, Processing, and Pipelines I
Faro: a framework for measuring the scientific performance of petascale Rubin Observatory data products
Leanne P. Guy, Keith Bechtol, Jeffrey L. Carlin, et al.
The Vera C. Rubin Observatory will advance many areas of astronomy over the next decade with its unique wide-fast-deep multi-color imaging survey, the Legacy Survey of Space and Time (LSST). The LSST will produce approximately 20 TB of raw data per night, which will be automatically processed by the LSST Science Pipelines to generate science-ready data products: processed images, catalogs and alerts. To ensure that these data products enable transformative science with LSST, stringent requirements have been placed on their quality and scientific fidelity, for example on image quality and depth, astrometric and photometric performance, and object-recovery completeness. In this paper we introduce faro, a framework for automatically and efficiently computing scientific performance metrics on the LSST data products for units of data of varying granularity, ranging from single-detector to full-survey summary statistics. By measuring and monitoring metrics, we are able to evaluate trends in algorithmic performance and conduct regression testing during development, compare the performance of one algorithm against another, and verify that the LSST data products will meet performance requirements by comparing them to specifications. We present initial results using faro to characterize the performance of the data products produced on simulated and precursor data sets, and discuss plans to use faro to verify the performance of the LSST commissioning data products.
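The idea of metrics computed at several granularities and checked against specifications can be illustrated in a few lines. The sketch below is a generic metric registry over per-detector catalogs, not faro's actual API; the metric, spec value and catalog fields are invented.

```python
# Generic sketch of granular performance metrics, in the spirit of faro but
# not its actual API: metrics are registered once and can be evaluated per
# detector or aggregated over a whole survey.
import numpy as np

METRICS = {}

def metric(name):
    def register(fn):
        METRICS[name] = fn
        return fn
    return register

@metric("astrom_rms_mas")
def astrometric_rms(catalog):
    # RMS separation between measured and reference positions, in mas.
    dra = np.asarray(catalog["ra_meas"]) - np.asarray(catalog["ra_ref"])
    ddec = np.asarray(catalog["dec_meas"]) - np.asarray(catalog["dec_ref"])
    return float(np.sqrt(np.mean(dra**2 + ddec**2)) * 3.6e6)  # deg -> mas

def evaluate(catalogs, name, spec):
    """Evaluate one metric per detector catalog and check the spec."""
    values = {det: METRICS[name](cat) for det, cat in catalogs.items()}
    survey_value = float(np.mean(list(values.values())))
    return values, survey_value, survey_value <= spec

rng = np.random.default_rng(1)
catalogs = {
    det: {
        "ra_meas": rng.normal(180.0, 1e-6, 500), "ra_ref": np.full(500, 180.0),
        "dec_meas": rng.normal(0.0, 1e-6, 500), "dec_ref": np.zeros(500),
    }
    for det in range(3)
}
per_det, survey, passed = evaluate(catalogs, "astrom_rms_mas", spec=10.0)
print(survey, "mas; meets spec:", passed)
```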
The Array Data Acquisition System software architecture of the ASTRI Mini-Array project
Vito Conforti, Fulvio Gianotti, Valerio Pastore, et al.
The ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) Project was born as a collaborative international effort led by the Italian National Institute for Astrophysics (INAF) to design and realize an end-to-end prototype of the Small-Sized Telescope (SST) of the Cherenkov Telescope Array (CTA) in a dual-mirror configuration (2M). The prototype, named ASTRI-Horn, has been operational since 2014 at the INAF observing station located on Mt. Etna (Italy). The ASTRI Project is now building the ASTRI Mini-Array, consisting of nine ASTRI-Horn-like telescopes to be installed and operated at the Teide Observatory (Spain). The ASTRI software is aimed at supporting the Assembly, Integration and Verification (AIV) activities and the operations of the ASTRI Mini-Array. The Array Data Acquisition System (ADAS) includes all the hardware, software and communication infrastructure required to gather the bulk data of the Cherenkov cameras and the intensity interferometers installed on the telescopes, and to make these data available to the Online Observation Quality System (OOQS) for on-site quick look, and to the Data Processing System (DPS) for the off-site scientific pipeline. This contribution presents the ADAS software architecture according to the use cases and requirement specifications, with particular emphasis on the interfaces with the Back-End Electronics (BEE) of the instruments, the array central control, the OOQS, and the DPS.
The data processing, simulation, and archive systems of the ASTRI Mini-Array project
Saverio Lombardi, Fabrizio Lucarelli, Ciro Bigongiari, et al.
The ASTRI Mini-Array is an international project led by the Italian National Institute for Astrophysics (INAF) to build and operate an array of nine 4-m class Imaging Atmospheric Cherenkov Telescopes (IACTs) at the Observatorio del Teide (Tenerife, Spain). The system is designed to perform deep observations of the galactic and extragalactic gamma-ray sky in the TeV and multi-TeV energy band, with important synergies with other ground-based gamma-ray facilities in the Northern Hemisphere and space-borne telescopes. As part of the overall software system, the ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) Team is developing dedicated systems for Data Processing, Simulation, and Archive to achieve effective handling, dissemination, and scientific exploitation of the ASTRI Mini-Array data. Thanks to the high-speed network connection available between the Canary Islands and Italy, data acquired on-site will be delivered to the ASTRI Data Center in Rome immediately after acquisition. The raw data will then be reduced and analyzed by the Data Processing System up to the generation of the final scientific products. Detailed Monte Carlo simulated data will be produced by the Simulation System and exploited in several data processing steps in order to achieve precise reconstruction of the physical characteristics of the detected gamma rays and to reject the overwhelming background due to charged cosmic rays. Data access at different user levels and for different use cases, each one with a customized data organization, will be provided by the Archive System. In this contribution we present these three ASTRI Mini-Array software systems, focusing on their main functionalities, components, and interfaces.
High-volume spectral data processing pipeline at the Dominion Radio Astrophysical Observatory
Dustin Lagoy, Michael A. Smith, Stephen T. Harrison, et al.
A novel approach for a high-volume radio telescope data processing pipeline is under development at the Dominion Radio Astrophysical Observatory (DRAO). The pipeline is designed to temporarily store raw telescope data, filter and repackage the raw data packets into standard astronomical products, and upload the generated results to the Canadian Astronomy Data Centre for archival storage and distribution, all in near real time. The system is designed to support the processing tasks of a common DRAO spectrometer infrastructure currently being commissioned for both the John A. Galt 26-m telescope and the Dish Verification Antenna 2 telescope at DRAO.
Observatory and Telescope Control III
Software architecture of the Intelligent Observatory Local Control Unit
We describe the software architecture of the Local Control Units (LCU) being deployed as part of the Intelligent Observatory project of the South African Astronomical Observatory. This is an integrated system for scheduling and controlling observations across several telescopes and instruments. As part of this, each telescope and its associated instruments fall under the control of an LCU. The LCU interfaces with the observatory-wide scheduler, executing observations as requested. It also monitors observing conditions and shuts down the telescope if necessary. The software is layered, modular and distributed, and allows remote and robotic control of the various instruments and telescopes.
Challenges of containerization and robotization the telescope control system for large robotic telescope
J. J. Fernández-Valdivia, Josué Barrera Martín, M. Torres, et al.
The NRT is an international collaboration to design and build a leading astronomical facility, focused on the optical and near-infrared ranges for the emergent area of time-domain astronomy. It will rely on the combination of a large collecting area (4 m diameter), quick response (<30 s), and fully robotic operation. The Telescope Level System (TLS) will be responsible for controlling, coordinating, monitoring and planning both the hardware and software systems involved in the operation of the telescope. The NRT control-system architecture aims to follow best practices in service decoupling and deployment, following recent techniques in containerization and orchestration (dockerization). This type of system provides great stability, scalability, and flexibility, allowing new services to be added or removed while minimizing downtime scenarios. The approach is based on the know-how gathered with the control system (GCS) of the Spanish 10-m telescope GTC (Gran Telescopio de Canarias), which has been operating successfully for more than a decade. Currently, GCS does not support robotic control; the challenge for the NRT project is to extend the functionality of GCS with this new feature of autonomous operation. The NRT aims to keep the GCS model of decoupled system components, with distributed execution and communications. Another advantage is the abstraction from low-level hardware and software that GCS offers when integrating new entities into the system. We discuss the interest in and possible deployment of this kind of TLS for future robotic facilities.
Modernizing observation planning for accessible, science-ready data
Matthew K. Brown, John O'Meara, Max Broadheim, et al.
For 25 years, W. M. Keck Observatory has relied on observers to do their own planning for their observing nights. This would usually result in a star list and a notion of what would be best to observe next, based on its priority to the science being conducted. Under the Data Services Initiative, planning will become a required part of observing. The Database-Driven Observing Infrastructure aims to support the creation of science-ready data by carrying observation metadata throughout the observing process; the result is a file with all the data about the observation, ready to be processed by the pipelines. To facilitate this, tools are being developed to help create better observing plans. One of the big complexities is that W. M. Keck Observatory currently supports ten active instruments, with more on the horizon and no clear plan for retiring old instruments. With that in mind, the Database-Driven Observing Infrastructure has been designed to be modular and instrument-agnostic, so that instrument differences are abstracted from the system and handled only at the entrance and exit points of an observation. The benefit is that new instruments are easy to implement and old instruments are easy to update.
A high-performance data acquisition on COTS hardware for astronomical instrumentation
Julien Plante, Damien Gratadour, Lionel Matias, et al.
Data throughput in modern telescope instrumentation has been steadily increasing over the last decade. The few-gigabit-per-second range is now the lower bound, and bandwidths as high as tens of terabits per second are expected with the Square Kilometer Array. We present a new approach to very-high-throughput data acquisition in astronomy, based on DPDK and its support for GPUDirect, recently introduced by Nvidia, to perform DMA from the Network Interface Controller (NIC) to GPU memory.
Instrumentation Control
MOONS fibre positioner control and path planning software
Steven Beard, Bart Willemse, Stephen Watson, et al.
The MOONS multi-object spectrograph relies on an array of 1000 fibre positioners to acquire targets in the focal plane. The fibre positioners have a larger overlap than similar instruments because MOONS can observe in the infrared. The large overlap gives MOONS the ability to acquire close pairs of object and sky targets, but it makes moving positioners to their targets without a collision even more technically challenging. We describe how the MOONS fibre positioner control system overcomes those challenges with custom electronics to manage the synchronisation between the positioners, a collision protection system, and a grid driver software system which manages the control of the fibre positioners. We also describe our experiments with different path planning algorithms and present the latest results from MOONS testing.
How Taranta provides tools to build user interfaces for TANGO devices in the SKA integration environment without writing a line of code
Matteo Canzari, Valentina Alberti, Hèlder Ribeiro, et al.
The Square Kilometer Array (SKA) is a project to build the largest radio telescope in the world, and it has just entered the construction phase. In this phase, the ability to develop and integrate software in an integration environment is crucial, as is the ability to visualize system-related information via a user interface to rapidly verify the correctness of the system behavior and spot any anomaly. SKA teams achieve this thanks to the deployment of the Taranta suite in the integration environment. Taranta is a web-based toolset, jointly developed by the MAX IV Laboratory and SKA, that allows the fast development of graphical user interfaces connected to TANGO devices, based on a set of predefined widgets and a drag-and-drop mechanism, and therefore without the need to write any additional code. In this paper, we present the general architecture of Taranta and the main widgets currently available, describe how the Taranta suite is deployed in the SKA integration environment, and explain the process used to collect feedback from the SKA community to define the roadmap for the future development of the tool.
MAVIS instrument control software: toward the preliminary design
E. Costa, B. Salasnich, A. Baruffolo, et al.
The MCAO Assisted Visible Imager and Spectrograph (MAVIS) is a new instrument being built for ESO's Very Large Telescope (VLT). It will operate at the Nasmyth focus of the UT4 telescope and is composed of two main parts: a Multi-Conjugate Adaptive Optics (MCAO) module and two post-focal scientific channels, an imager and an integral-field spectrograph, both operating in the visible spectrum. The project is approaching the final steps of the preliminary design phase, and first light is expected in 2027. We present the status of the Instrument Control Software (ICSS). In particular, we focus on the software architecture and the interaction between the ICSS and the real-time computer (RTC), the telescope control system (TCS) and the VLT Laser Guide Stars Facility (4LGSF). Despite the complexity of the instrument, we present a software architecture that is simple yet modular, guaranteeing the overall functionality of the instrument.
The control software of the BEaTriX x-ray beam calibration facility: problems and solutions
In the context of the ATHENA mission, the BEaTriX (Beam Expander Testing X-ray) facility has been developed for the test and acceptance of the Silicon Pore Optics Mirror Modules (MM) that, once assembled, will compose the mirror of the X-ray telescope. This paper describes the software developed to control the entire facility. The language employed is LabVIEW, commonly used for data acquisition, instrument control, and industrial automation. The software is composed of two independent sections: the first is dedicated to the management of the facility during the tests of the mirror modules, and incorporates automated control of all the functionalities of the facility; the second will be used for the maintenance of the facility, permitting independent access to every single component of the system for functional checks. In the paper, the program and its functionalities are described, presenting what we have implemented to address specific problems.
Development of the spectrograph control software package for SDSS-V Local Volume Mapper Instrument
Changgon Kim, José Sanchez-Gallego, Pavan Bilgi, et al.
The Local Volume Mapper (LVM) project in the fifth iteration of the Sloan Digital Sky Survey (SDSS-V) will produce a large integral-field spectroscopic survey to understand the physical conditions of the interstellar medium in the Milky Way, the Magellanic Clouds, and other local-volume galaxies. We developed the Local Volume Mapper Spectrograph Control Package (LVMSCP), which controls the instruments for the operation of the spectrograph. We use the new SDSS message-passing protocol CLU (Codified Likeness Utility) for the interaction, based on RabbitMQ, which implements the Advanced Message Queuing Protocol (AMQP). Asynchronous programming with non-blocking procedures is applied throughout the package, since three spectrographs must be operated simultaneously. The software is implemented in Python 3.9 and will provide an Application Programming Interface (API) to the Robotic Observation Package (ROP) for integrated observation.
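A minimal sketch of the non-blocking pattern described above follows, publishing one command per spectrograph over AMQP with aio-pika and awaiting them concurrently; the queue names and message format are invented for illustration, and this is not the actual CLU API.

```python
# Sketch of non-blocking AMQP commanding with aio-pika: send an "expose"
# command to three spectrographs concurrently. Queue names and message format
# are invented for illustration; this is not the actual CLU protocol.
import asyncio
import json
import aio_pika

async def send_command(channel, spectrograph, command):
    message = aio_pika.Message(body=json.dumps(command).encode())
    # Publish to the default exchange, one queue per spectrograph.
    await channel.default_exchange.publish(
        message, routing_key=f"lvm.{spectrograph}.commands"
    )

async def main():
    connection = await aio_pika.connect_robust("amqp://guest:guest@localhost/")
    async with connection:
        channel = await connection.channel()
        # Make sure the per-spectrograph queues exist before publishing.
        for spec in ("sp1", "sp2", "sp3"):
            await channel.declare_queue(f"lvm.{spec}.commands")
        # The three sends are awaited concurrently: none blocks the others.
        await asyncio.gather(*(
            send_command(channel, spec, {"cmd": "expose", "exptime": 900.0})
            for spec in ("sp1", "sp2", "sp3")
        ))

asyncio.run(main())
```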
Data Management, Processing, Pipelines II
The sky at one terabit per second: architecture and implementation of the Argus Array Hierarchical Data Processing System
Hank Corbett, Alan Vasquez Soto, Lawrence Machia, et al.
The Argus Optical Array is a synoptic survey observatory, currently in development, that will have a total collecting area equivalent to a 5-meter monolithic telescope and an all-sky field of view, multiplexed from 900 commercial off-the-shelf telescopes. The Array will observe 7916 deg² every second during high-speed operations (m_g ≤ 16.1) and every 30 seconds at base cadence (m_g ≤ 19.1), producing 4.3 PB and 145 TB of data per night, respectively, with its 55-gigapixel mosaic of cameras. The Argus Array Hierarchical Data Processing System (Argus-HDPS) is the instrument control and analysis pipeline for the Argus Array project, able to create fully reduced data products in real time. We pair sub-arrays of cameras with co-located compute nodes responsible for distilling the raw 11 Tbps data rate into transient alerts, full-resolution image segments around selected targets at 30-second cadence, and full-resolution coadds of the entire field of view at 15+ min cadences. Production of long-term light curves and transient discovery in deep coadds out to 5-day cadence (m_g ≤ 24.0) will be scheduled for daytime operations. In this paper, we describe the data reduction strategy for the Argus Optical Array and demonstrate image segmentation, coaddition, and difference image analysis using the GPU-enabled Argus-HDPS pipelines on representative data from the Argus Array Technology Demonstrator.
The Vera C. Rubin Observatory Data Butler and pipeline execution system
Tim Jenness, James F. Bosch, Andrei Salnikov, et al.
The Rubin Observatory’s Data Butler is designed to allow data file location and file formats to be abstracted away from the people writing the science pipeline algorithms. The Butler works in conjunction with the workflow graph builder to allow pipelines to be constructed from the algorithmic tasks. These pipelines can be executed at scale using object stores and multi-node clusters, or on a laptop using a local file system. The Butler and pipeline system are now in daily use during Rubin construction and early operations.
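The abstraction can be pictured with a toy butler that resolves a dataset type plus a data ID to storage behind a uniform get/put interface. This is a schematic of the concept only, not the real lsst.daf.butler API; the dataset type, data ID keys and pickle backend are invented for illustration.

```python
# Toy illustration of the Butler concept: pipeline code asks for datasets by
# type and data ID, never by path or format. This is a schematic, not the
# real lsst.daf.butler API.
import pathlib
import pickle

class ToyButler:
    def __init__(self, root):
        self.root = pathlib.Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def _path(self, dataset_type, data_id):
        # The butler, not the science code, decides layout and format.
        key = "_".join(f"{k}-{data_id[k]}" for k in sorted(data_id))
        return self.root / dataset_type / f"{key}.pickle"

    def put(self, obj, dataset_type, data_id):
        path = self._path(dataset_type, data_id)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(pickle.dumps(obj))

    def get(self, dataset_type, data_id):
        return pickle.loads(self._path(dataset_type, data_id).read_bytes())

butler = ToyButler("/tmp/toy_repo")
butler.put({"seeing": 0.7}, "calexp_metadata", {"visit": 903342, "detector": 10})
# Science code retrieves the dataset without knowing where or how it is stored.
print(butler.get("calexp_metadata", {"visit": 903342, "detector": 10}))
```

Swapping the pickle/file-system backend for an object store would leave the calling code unchanged, which is the point of the abstraction.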
The BlueMUSE data reduction pipeline: lessons learned from MUSE and first design choices
Peter M. Weilbacher, Sven Martens, Martin Wendt, et al.
BlueMUSE is an integral field spectrograph in an early development stage for the ESO VLT. For the design of the data reduction software for this instrument, we first review the capabilities and issues of the pipeline of the existing MUSE instrument. MUSE has been in operation at the VLT since 2014 and has led to discoveries published in more than 600 refereed scientific papers. While BlueMUSE and MUSE have many common properties, we briefly point out a few key differences between the two instruments. We outline a first version of the flowchart for the science reduction and discuss the changes necessitated by the blue wavelength range covered by BlueMUSE. We also detail specific new features: for example, how the pipeline and subsequent analysis will benefit from improved handling of the data covariance and a more integrated approach to the line-spread function, as well as improvements to the wavelength calibration, which is of extra importance in the blue optical range. We finally discuss how simulations of BlueMUSE datacubes are being implemented and how they will be used to prepare the science of the instrument.
The spectroscopic pipeline design for the ELT METIS
Nadeen B. Sabha, Wolfgang Kausch, Norbert Przybilla
The Mid-Infrared ELT Imager and Spectrograph (METIS) will be one of the first-light instruments of the upcoming ESO Extremely Large Telescope and the only one operating in the mid-infrared regime. It will have five observational modes, ranging from direct imaging to long-slit spectroscopy and integral-field spectroscopy. All five modes will operate at the diffraction limit of the ELT, assisted by adaptive optics. In this paper, we describe the reduction process and discuss the workflow design and algorithms for the long-slit spectroscopic mode of METIS at its final design stage.
Automatic spectroscopic data reduction using BANZAI
Curtis McCully, Matthew Daily, G. Mirek Brandt, et al.
Time-domain astronomy has increased both the data volume and the urgency of data reduction in recent years. Spectra provide key insights into astrophysical phenomena but require complex reductions. Las Cumbres Observatory has six spectrographs: two low-dispersion FLOYDS instruments and four NRES high-resolution echelle spectrographs. We present an extension of the data reduction framework BANZAI to process spectra automatically, with no human interaction. We also present interactive tools we have developed for human vetting and improvement of the spectroscopic reduction. Tools like those presented here are essential to maximize the scientific yield from current and future time-domain astronomy.
Project Overviews and Progress
Design, development, and testing of flight software for EIRSAT-1: a university-class CubeSat enabling astronomical research
Maeve Doyle, Andrew Gloster, Meadhbh Griffin, et al.
The capabilities of CubeSats have grown significantly since the first of these small satellites was launched in the early 2000s. These capabilities enable a wide range of mission profiles, with CubeSats emerging as viable platforms for certain space-based astronomical research applications. The Educational Irish Research Satellite (EIRSAT-1) is a CubeSat being developed by a student-led team as part of the European Space Agency’s Fly Your Satellite! programme. In addition to its educational aims, the mission is driven by several scientific and technological goals, including a novel gamma-ray instrument for the detection of bright transient astrophysical sources such as gamma-ray bursts. This work provides a detailed description of the software development lifecycle for EIRSAT-1, addressing the design, development and testing of robust flight software, aspects of payload interfacing, and risk mitigation. A design-to-testing approach has been implemented in order to establish, prior to launch, that EIRSAT-1 can perform its intended mission. Constraints and challenges typically experienced by CubeSat teams, which can impact the likelihood of mission success, have been considered throughout and lessons learned are discussed. The aim of this work is to highlight the advanced capabilities of CubeSats while also providing a useful resource for other university-based teams implementing their own flight software.
Development of the Program Execution System Architecture (PESA) for MSE
The Maunakea Spectroscopic Explorer (MSE) is a telescope dedicated to multi-fiber spectroscopy and IFU observations of the sky. The Program Execution System Architecture (PESA) is the MSE system responsible for planning, executing, reducing, and distributing science products from survey programs. Work is being done to design PESA in a modular way, to include several sophisticated software tools organized into an operational framework. This paper describes the first step of its organization and the concepts that will be used in the development of PESA.
Latest developments for the Giant Magellan Telescope (GMT) control system
The Giant Magellan Telescope (GMT) is a complex observatory with thirty major subsystems, many low-level subsystems, components, external contracts, and interfaces. Almost all subsystems require software and controls to operate. An important goal for GMT is to have software and control subsystems that are easy to develop, test, integrate, operate, and maintain. To provide consistency across all controlled subsystems, a set of standards and a reference architecture are provided. Software components are specified using a Domain Specific Language (DSL), which enables code-generation in several languages and automatic validation of architectural conformance and interfaces. Some of the main observatory control subsystems have already been modeled using this approach, and initial implementations are currently being tested. The most advanced control subsystem is the primary mirror Device Control System (M1 DCS), which is currently under testing before the integration of the optical mirror in the test cell. This paper describes the status of the GMT control system, the main lessons learned, and the future steps in the development of the GMT control system.
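The DSL-driven approach can be pictured with a small code-generation sketch: a declarative component spec is turned into a language-specific stub whose interface is guaranteed to match the spec. The spec format, template and component names below are invented for illustration and are not GMT's actual DSL.

```python
# Sketch of generating a component stub from a declarative spec, in the spirit
# of GMT's DSL-driven code generation; the spec format and template are
# invented for illustration.
SPEC = {
    "name": "m1_dcs",
    "inputs": {"force_setpoints": "float[170]"},
    "outputs": {"measured_forces": "float[170]"},
    "properties": {"control_rate_hz": 1000},
}

TEMPLATE = '''\
class {class_name}:
    """Auto-generated stub for component '{name}'."""
    CONTROL_RATE_HZ = {rate}

{ports}
'''

def generate(spec):
    # Inputs become read accessors, outputs become write accessors, so the
    # generated interface always matches the declared spec.
    ports = "\n".join(
        f"    def read_{p}(self): ...   # input, type {t}"
        for p, t in spec["inputs"].items()
    ) + "\n" + "\n".join(
        f"    def write_{p}(self, value): ...   # output, type {t}"
        for p, t in spec["outputs"].items()
    )
    return TEMPLATE.format(
        class_name=spec["name"].title().replace("_", ""),
        name=spec["name"],
        rate=spec["properties"]["control_rate_hz"],
        ports=ports,
    )

print(generate(SPEC))
```

Generating interfaces from one spec is also what enables the automatic conformance checking the abstract mentions: the spec, not the hand-written code, is the single source of truth.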
TMT observatory software construction update
The design and development of the TMT Software System is a complex, multi-year project that includes management, reviews, design work, and construction of software with a multi-organization team that spans three continents. The initial conceptual design was completed in 2014 and following multiple reviews, the construction phase began in 2017 with our India-based development partners. With completion of the TMT Common Software in 2019, construction development moved to the first phase of the Executive Software system, which was completed in late 2021. This paper describes the current state of the TMT Software System summarizing what has been accomplished to date and the next steps in design and development. Within the last year, TMT has become part of the larger USELT project, and this paper describes how this has influenced the software design and future development plans.
The ELT high level coordination and control
Gianluca Chiozzi, Nick Kornweibel, Ulrich Lampater, et al.
The Extremely Large Telescope (ELT) is a 39-meter optical telescope under construction in the Chilean Atacama desert. The optical design is based on a five-mirror scheme and incorporates adaptive optics. The primary mirror consists of 798 segments. Scientific first light is planned for the end of 2027. The status of the project is described in [1]. The major challenges for the control of the telescope and the instruments lie in the number of sensors (~25,000) and actuators (~15,000) to be controlled in a coordinated fashion, and in the computing-performance and low-latency requirements for phasing the primary mirror, performing adaptive optics and coordinating all sub-systems in the optical path. Industrial contractors are responsible for the low-level control of individual subsystems, while ESO develops the coordination functions and control strategies requiring astronomical domain knowledge. In this paper we focus on the architecture and design of the High-Level Coordination and Control (HLCC), the component of the control software responsible for coordinating all telescope subsystems to properly perform the activities required by scientific and technical operations. We first set the HLCC in context by introducing the global architecture of the telescope control system and discussing the role of the HLCC and its interfaces with the other components of the control system. We then analyze the internal architecture of the HLCC and the primary design patterns adopted, and discuss how the features identified from the requirements and use cases are mapped into the design. Finally, the timeline and the current status of development activities are presented.
Software design for CSP.LMC in SKA
G. Marotta, E. Giani, I. Novak, et al.
The Square Kilometre Array (SKA) project is devoted to the construction of a giant radio telescope composed of two arrays. The design and implementation of the SKA monitor and control software involves about 100 people organised into eight Agile teams developing the different software elements of the telescope. Each of these elements is implemented as a 'device' within the TANGO Control System framework and written in Python. This paper analyses the implemented design of the Local Monitoring and Control (LMC) of the Central Signal Processor (CSP), hereafter CSP.LMC. CSP is the SKA element that makes the data coming from the antennas available for scientific analysis. It is composed of different data processing components, i.e. the Correlator and Beam Former, the Pulsar Search, and the Pulsar Timing. In this larger system, CSP.LMC has the role of communicating with the Telescope Manager (TM), i.e. the software front-end for operations, as if the CSP were a single entity. The paper shows the detailed structure of the software, implemented with an object-oriented approach and a design largely inspired by standard design patterns such as Observer, Command, and Aggregator. Another essential feature is the separation of the business logic from the TANGO communication layer, which improves the testability and maintainability of the code.
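As a minimal sketch of the separation described above, the following uses PyTango's high-level server API to keep the business logic in a plain Python class that a thin TANGO device merely delegates to; the class and command names are hypothetical, not the actual CSP.LMC code.

```python
# Hedged sketch of business-logic/TANGO-layer separation with PyTango;
# class and command names are invented for illustration.
from tango.server import Device, command

class SubarrayLogic:
    """Plain-Python business logic: unit-testable without TANGO."""
    def __init__(self):
        self.observing = False

    def start_scan(self) -> str:
        self.observing = True
        return "scan started"

class CspSubarray(Device):
    """Thin TANGO layer delegating every call to the logic object."""
    def init_device(self):
        super().init_device()
        self._logic = SubarrayLogic()

    @command(dtype_out=str)
    def StartScan(self):
        return self._logic.start_scan()

if __name__ == "__main__":
    CspSubarray.run_server()
```

With this split, `SubarrayLogic` can be exercised by plain pytest cases, while the TANGO wrapper stays trivial enough to need little testing of its own.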
Poster Session: Cyberinfrastructure
Containerizing the telemetry data pipeline for MMTO subsystems
The telemetry data pipeline for the MMT Observatory (MMTO) describes the flow of data sampled from diverse hardware devices within MMTO subsystems, through logging into various databases, to user interfaces and monitoring services. Subsystems within the pipeline include the telescope mount, primary and secondary mirrors, instruments, and environmental sensors. Data acquisition services within the pipeline post new data with a uniform data structure to a master Redis server. These incoming data are transported in real time to replicated Redis servers, where they are logged into local MariaDB relational databases. Database tables for logged subsystem data are highly optimized for storage, allowing the archival of billions of data points for thousands of parameters over the past 10-15 years. Because of the ever-increasing difficulty of supporting legacy servers and software, a large-scale effort is underway to containerize the various components of the telemetry pipeline and the underlying cyberinfrastructure. These critical servers and services are single points of failure that could result in up to weeks of operational downtime. Containerization helps to reduce the risk posed by hardware failures, operating system upgrades, and software incompatibilities. Containerizing a service defines all the software requirements for that service, including the code, runtime, system tools, system libraries, and settings, and allows rapid and reliable redeployment of new and legacy services with minimal concern for the underlying hardware. Finally, a summary of the ongoing and planned future work is presented.
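A minimal sketch of the publish/subscribe step of such a pipeline with the redis-py client is shown below; the channel name and sample layout are invented for illustration and are not the MMTO's actual data structure.

```python
# Hedged sketch of posting a uniformly structured telemetry sample to
# Redis and consuming it on a logger; names are hypothetical.
import json
import time
import redis

r = redis.Redis(host="localhost", port=6379)

sample = {
    "subsystem": "m1",            # e.g. primary mirror support
    "parameter": "cell_temp_c",
    "value": 4.2,
    "timestamp": time.time(),
}
# Data acquisition services post samples with a uniform structure...
r.publish("telemetry", json.dumps(sample))

# ...and a logger on a replica consumes them:
pubsub = r.pubsub()
pubsub.subscribe("telemetry")
for message in pubsub.listen():
    if message["type"] == "message":
        row = json.loads(message["data"])
        # here an INSERT into the local MariaDB table would be issued
        print(row)
        break
```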
Assembly and integration of the ALMA hardware-in-the-loop simulation environment
Tzu-Chiang Shen, Alejandro Saez, Rodrigo Cabezas, et al.
The Atacama Large Millimeter/submillimeter Array (ALMA) has been in operations since 2013. After almost 10 years of successful operation, obsolescence of hardware and software has emerged. In addition, the ALMA 2030 plan will add new, disruptive capabilities to the ALMA telescope. Both efforts will require an increased amount of technical time for testing in order to minimize the risk of introducing instability into operations when new equipment and software are integrated into the telescope. Therefore, a process to design and implement a new simulation environment, comparable to the production environment, was started in 2017 and passed its Critical Design and Manufacturing Review (CDMR) in 2020. In this paper, we review the current status of the project, focusing on the assembly and integration period and on the use cases that are starting to be built on top of this testing facility.
ASTRI Mini-Array on-site Information and Communication Technology infrastructure
The ASTRI ("Astrofisica con Specchi a Tecnologia Replicante Italiana") program is a collaborative international effort led by the Italian National Institute for Astrophysics (INAF) to develop and operate an array of nine 4-m class Imaging Atmospheric Cherenkov Telescopes (IACTs), sensitive to gamma-ray radiation at energies above 1 TeV, under deployment at the Teide Observatory in Tenerife, in the Canary Islands. In order to support the development, installation, and operations of the ASTRI Mini-Array, an on-site Information and Communication Technology (ICT) infrastructure has been designed. In this paper we describe the main objectives of the ICT infrastructure project and its configuration in the initial phase. This ICT infrastructure, which we call the mini-ICT (m-ICT), includes all hardware and services needed to support the installation and testing of the first three telescope mechanical structures of the ASTRI Mini-Array, which will be installed at the Teide site by 2022, before the definitive ICT infrastructure is up and running. The m-ICT includes a virtualization system (Proxmox) and a container system to run the ASTRI Mini-Array on-site control and monitoring software. It also includes all interconnection functions of its Local Area Network (LAN) and the necessary network services: Network Time Protocol (NTP), Domain Name System (DNS), Network Address Translation (NAT), Virtual Private Network (VPN), and routing. An Internet connection will also be supported so that the link with the ASTRI Data Centre in Rome, Italy, can be tested and all test results transferred to this persistent storage.
The monitoring, logging, and alarm system of the ASTRI mini-array gamma-ray air-Cherenkov experiment at the Observatorio del Teide
Federico Incardona, Alessandro Costa, Kevin Munari, et al.
The ASTRI Mini-Array is a project for Cherenkov astronomy in the TeV energy range. The ASTRI Mini-Array consists of nine Imaging Atmospheric Cherenkov Telescopes located at the Teide Observatory (Canary Islands). Large volumes of monitoring and logging data result from the operation of a large-scale astrophysical observatory. In the last few years, several “Big Data” technologies have been developed to deal with such volumes of data, especially in the Internet of Things (IoT) framework. We present the Monitoring, Logging, and Alarm (MLA) system for the ASTRI Mini-Array, aimed at supporting the analysis of scientific data and improving the operational activities of the telescope facility. The MLA system was designed and built considering the latest software tools and concepts from Big Data and the IoT to respond to the challenges posed by the operation of the array. Particular attention has been paid to satisfying the reliability, availability, and maintainability requirements of all the array sub-systems and auxiliary devices. The system architecture has been designed to scale with the number of devices to be monitored and with the number of software components to be considered in the distributed logging system.
Extending the life of MegaCam: redesign of the data link
Kevin K. Y. Ho, Sidik Isani, Simon Prunet
MegaCam has been CFHT's one-degree wide-field optical imager and primary dark time instrument since 2003. After nearly twenty years of operation, demand for the instrument remains high, but maintenance has been a challenge as many electronic components have become obsolete and difficult to find. Other off-the-shelf assemblies, such as the S-LINK data transmission pair for the CCD controllers from CERN (European Organization for Nuclear Research), are also no longer available and cannot be repaired. Ongoing failures, only one working spare, and a lack of a plug-n-play upgrade path forced the development of an alternative solution.
Poster Session: Data Management, Processing and Pipelines
The Gamma-Flash real-time data pipeline for ground observation of terrestrial gamma-ray flashes
A. Addis, A. Aboudan, A. Bulgarelli, et al.
Gamma-Flash is an Italian project funded by the Italian Space Agency (ASI) and led by the National Institute for Astrophysics (INAF), devoted to the observation and study of high-energy phenomena such as terrestrial gamma-ray flashes and gamma-ray glows produced in Earth’s atmosphere during thunderstorms. The project represents the ground-based supplement to the work of the ASI AGILE satellite in this field. This contribution presents the architecture of the Gamma-Flash data pipeline installed at the Osservatorio Climatico “O. Vittori” on the top of Mt. Cimone (2165 m a.s.l., northern-central Italy). It consists of Red Pitaya ARM-FPGA boards designed to acquire events at different energies from scintillator crystals coupled to photomultiplier tubes, and a main computer that executes a real-time software pipeline. The software performs several data-processing steps (data acquisition, data reduction, and waveform-selection algorithms) and finally produces the cumulative energy spectrum of the gamma radiation collected by the photomultipliers. Data are stored in different layers, each with a different purpose, and are available to the scientific community as HDF5 files. The pipeline has a modular architecture for good maintainability and flexibility, allowing easy extension in the future. A specific subset of data is stored in a database connected to a real-time graphical dashboard for quick-look analysis, showing the acquisition products and the environmental telemetry data.
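The following is a hedged sketch of layered HDF5 storage with h5py; the group and dataset names are hypothetical, not the pipeline's actual layout.

```python
# Illustrative sketch of layered HDF5 storage with h5py; layer names
# and dataset layout are assumptions, not the Gamma-Flash format.
import numpy as np
import h5py

waveforms = np.random.normal(size=(1000, 256))   # stand-in PMT waveforms
spectrum, edges = np.histogram(waveforms.max(axis=1), bins=64)

with h5py.File("gammaflash_run.h5", "w") as f:
    raw = f.create_group("l0_raw")
    raw.create_dataset("waveforms", data=waveforms, compression="gzip")
    red = f.create_group("l2_reduced")
    red.create_dataset("cumulative_spectrum", data=spectrum)
    red.create_dataset("bin_edges", data=edges)
    red.attrs["detector"] = "pmt-0"               # per-layer metadata
```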
The TolTEC camera: the citlali data reduction pipeline engine
Michael McCrackan, Zhiyuan Ma, Nat S. DeNigris, et al.
TolTEC is an imaging polarimeter installed on the Large Millimeter Telescope that simultaneously images the sky at 1.1, 1.4, and 2.0 mm. We have developed the open-source, fully parallelized C++ data reduction pipeline, citlali, to process TolTEC’s raw time-ordered data for science and calibration observations into on-sky maps, while also performing map coaddition and post-map-making analyses. Here, we describe citlali’s structure, including its reduction stages, algorithms, and parallelization scheme. We also present the results of the application of citlali to both TolTEC commissioning data and synthetic observations, characterizing the resulting map properties, as well as the software performance and memory usage.
The Son-Of-X-Shooter (SOXS) data-reduction pipeline
David R. Young, Marco Landoni, Stephen J. Smartt, et al.
The Son-Of-XShooter (SOXS) is a single-object spectrograph (UV-VIS and NIR) and acquisition camera scheduled to be mounted on the European Southern Observatory (ESO) 3.58-m New Technology Telescope at the La Silla Observatory. Although the underlying data reduction processes that convert raw detector data to fully reduced, science-ready data are complex and multi-stepped, we have designed the SOXS Data Reduction pipeline with the core aims of providing end-users with a simple-to-use, well-documented command-line interface while also allowing the pipeline to run in a fully automated state, streaming reduced data into the ESO Science Archive Facility (SAF) without need for human intervention. To keep up with the stream of data coming from the instrument, the software must be optimised to reduce each observation block of data well within the typical observation exposure time. The pipeline is written in Python 3 and has been built with an agile development philosophy that includes continuous integration and adaptive planning.
The on-ground data reduction and calibration pipeline for SO/PHI-HRT
J. Sinjan, D. Calchetti, J. Hirzberger, et al.
The ESA/NASA Solar Orbiter space mission was successfully launched in February 2020. Onboard is the Polarimetric and Helioseismic Imager (SO/PHI), which has two telescopes: the High Resolution Telescope (HRT) and the Full Disc Telescope (FDT). The instrument is designed to infer the photospheric magnetic field and line-of-sight velocity through differential imaging of the polarised light emitted by the Sun. It calculates the full Stokes vector at 6 wavelength positions across the Fe I 617.3 nm absorption line. Due to telemetry constraints, the instrument nominally processes these Stokes profiles onboard; however, when telemetry is available, the raw images are downlinked and reduced on ground. Here the architecture of the on-ground pipeline for HRT is presented, which also offers additional corrections not currently available onboard the instrument. The pipeline can reduce raw images to the full Stokes vector with a polarimetric sensitivity of 10⁻³·Ic or better.
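As a rough illustration of the demodulation step at the heart of such a pipeline, the numpy sketch below combines a set of modulated intensity images into the Stokes vector using the pseudo-inverse of a (here arbitrary) modulation matrix; the real SO/PHI modulation scheme differs.

```python
# Hedged sketch of polarimetric demodulation: the modulation matrix O
# below is arbitrary and stands in for the instrument's calibrated one.
import numpy as np

n_mod, ny, nx = 4, 128, 128
intensities = np.random.rand(n_mod, ny, nx)      # one image per mod. state

# Each row maps (I, Q, U, V) to one modulated intensity.
O = np.array([[1,  1,  0,  0],
              [1, -1,  0,  0],
              [1,  0,  1,  0],
              [1,  0,  0,  1]], dtype=float)
D = np.linalg.pinv(O)                            # demodulation matrix

# stokes[s] = sum_m D[s, m] * intensities[m], applied per pixel
stokes = np.einsum("sm,myx->syx", D, intensities)
I, Q, U, V = stokes
```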
Final pipeline design of the MICADO spectroscopic mode
Wolfgang Kausch, Nadeen B. Sabha, Norbert Przybilla
The Multi-AO Imaging Camera for Deep Observations (MICADO) will be one of the first-generation instruments for the Extremely Large Telescope (ELT), currently being built by the European Southern Observatory (ESO) at Cerro Armazones in the Chilean Atacama desert, and comprises several observing modes for a wide range of astrophysical applications. Its spectroscopic mode is an echelle design aimed at point and compact sources, with medium resolving power (R ≈ 20,000) covering the entire I to K bands in two setups. The goal of the respective pipeline is to fully calibrate the raw data, i.e. to deliver science data products fully compliant with the Phase 3 requirements of the ESO Science Data Archive Facility. This includes the removal of the instrument signature, geometrical distortion correction, wavelength calibration, and conversion to physical units (flux calibration). In addition, a comprehensive set of quality-control parameters is foreseen for monitoring the instrument health and the data quality.
Liger at Keck Observatory: design of the data reduction system and software interfaces
Nils Rundquist, Andrea Zonca, Arun Surya, et al.
Liger is a second generation near-infrared imager and integral field spectrograph (IFS) for the W. M. Keck Observatory that will utilize the capabilities of the Keck All-sky Precision Adaptive-optics (KAPA) system. Liger operates at a wavelength range of 0.81 μm - 2.45 μm and utilizes a slicer and a lenslet array IFS with varying spatial plate scales and fields of view resulting in hundreds of modes available to the astronomer. Because of the high level of complexity in the raw data formats for the slicer and lenslet IFS modes, Liger must be designed in conjunction with a Data Reduction System (DRS) which will reduce data from the instrument in real-time and deliver science-ready data products to the observer. The DRS will reduce raw imager and IFS frames from the readout system and provide 2D and 3D data products via custom quick-look visualization tools suited to the presentation of IFS data. The DRS will provide the reduced data to the Keck Observatory Archive (KOA) and will be available to astronomers for offline post-processing of observer data. We present an initial design for the DRS and define the interfaces between observatory and instrument software systems.
Poster Session: Data
Monitoring the performance of the SKA CICD infrastructure
M. Di Carlo, P. Harding, U. Yilmaz, et al.
The selected solution for monitoring the SKA CICD (continuous integration and continuous deployment) infrastructure is Prometheus and Grafana. Starting from a study of its modifiability aspects, the Grafana project emerged as an important tool for displaying data in order to reason about and debug particular aspects of the infrastructure in place. Its plugin architecture makes it easy to add new data sources such as Prometheus, and a data source for the TANGO-controls framework has been added as well. The central concept in Grafana is the dashboard, which enables real analyses to be composed. In this paper the monitoring platform is presented, which takes advantage of different data sources and a variety of panels (widgets) for reasoning about archiving data, monitoring data, and the state and general health of the system.
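A minimal sketch of the Prometheus side of such a setup: exposing a custom metric with the official prometheus_client package so that Prometheus can scrape it and a Grafana dashboard can plot it. The metric name is hypothetical.

```python
# Hedged sketch of a custom exporter; the metric and its meaning are
# invented, not the SKA CICD platform's actual instrumentation.
import random
import time
from prometheus_client import Gauge, start_http_server

runner_queue = Gauge("cicd_runner_queue_length",
                     "Jobs waiting for a CI runner")

start_http_server(8000)          # scrape endpoint at :8000/metrics
while True:
    runner_queue.set(random.randint(0, 10))   # stand-in measurement
    time.sleep(15)
```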
Using Elasticsearch for archiving with the TANGO-controls framework
The TANGO-controls framework community has put a lot of effort into creating the HDB++ software system, a high-performance, event-driven archiving system. Its design allows data to be stored in traditional database management systems such as MySQL as well as in NoSQL databases such as Apache Cassandra. The architecture also makes it easy to extend to other NoSQL databases, for instance Elasticsearch. This paper describes the Elasticsearch extension that was developed and how to use it alongside its graphical tool, Kibana.
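A hedged sketch of what indexing and querying one archived attribute event might look like with the official Elasticsearch Python client (8.x keyword style); the index name and document fields are illustrative and are not the actual HDB++ Elasticsearch mapping.

```python
# Illustrative sketch only; index name and fields are assumptions.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

event = {
    "attribute": "sys/tg_test/1/double_scalar",
    "value": 3.14,
    "quality": "ATTR_VALID",
    "@timestamp": datetime.now(timezone.utc).isoformat(),
}
es.index(index="hdbpp-events", document=event)

# Kibana (or a direct query) can then aggregate over the same index:
hits = es.search(index="hdbpp-events",
                 query={"match": {"attribute": "sys/tg_test/1/double_scalar"}})
print(hits["hits"]["total"])
```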
Development of a high-speed identification model for infrared-ring structures using deep learning
Shimpei Nishimoto, Shota Ueda, Shinji Fujita, et al.
Machine learning-based analysis has become essential to efficiently handle the increasingly massive data volumes from modern astronomical instruments. Churchwell et al. (2006, 2007) identified infrared ring structures, which are believed to be related to the formation of massive stars, by human eye. Recently, Ueda et al. (2020) showed that Convolutional Neural Networks (CNNs) can detect objects with indistinct boundaries, such as infrared rings, with accuracy comparable to the human eye. However, such a classification-based object detector requires a long processing time, making it impractical to apply to the existing all-sky 12 μm and 22 μm data captured by WISE. We introduced the Single Shot MultiBox Detector (SSD; Liu et al. 2016), which directly outputs the locations and confidences of targets, to significantly reduce the identification time. We applied an SSD model to the rings in the 6 deg² region of the Galactic plane used in Ueda et al. (2020), and confirmed that the identification time was reduced by a factor of about 80 while maintaining almost the same accuracy. Since detecting small rings remains difficult even for this model, input images must be cropped into smaller tiles, which increases the number of times the model is applied; there is therefore still room for reducing the processing time. In the future, we will address this problem and detect the rings faster.
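As a rough illustration of the approach, the sketch below runs a stock torchvision SSD300/VGG16 model (untrained here) on a single image tile; the actual work used a model trained on WISE 12 μm and 22 μm images, and the two-class setup is an assumption.

```python
# Hedged sketch: stock SSD as a stand-in for the paper's trained model.
import torch
from torchvision.models.detection import ssd300_vgg16

model = ssd300_vgg16(weights=None, weights_backbone=None,
                     num_classes=2)          # ring / background (assumed)
model.eval()

tile = torch.rand(3, 300, 300)               # one cropped sub-image
with torch.no_grad():
    (detections,) = model([tile])            # SSD outputs boxes directly

for box, score in zip(detections["boxes"], detections["scores"]):
    if score > 0.5:
        print(box.tolist(), float(score))
```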
The Stereo Event Builder software system of the ASTRI Mini-Array project
S. Germani, S. Lombardi, V. La Parola, et al.
The ASTRI Mini-Array is an international project led by the Italian National Institute for Astrophysics (INAF) aimed at the construction and operation of an array of nine Imaging Atmospheric Cherenkov Telescopes (IACTs) at the Observatorio del Teide in Tenerife (Spain). The project is designed to detect very high-energy gamma rays up to the multi-TeV energy scale. The telescope design, based on the Schwarzschild-Couder two-mirror configuration and Silicon Photomultiplier sensors, leads to a very wide field of view of 10.5 degrees, which makes it possible to cover a large ground surface area with an average inter-telescope distance of about 160 m. Upon completion, it will be for some time the largest IACT array in operation below 2,500 m a.s.l., both in terms of number of telescopes and of ground surface area, with the primary goal of investigating gamma-ray emission from celestial sources. The ASTRI Mini-Array design and expected performance are based on the stereoscopic technique, i.e. the detection of the same atmospheric shower event with two or more telescopes; the correct identification of the single-telescope triggers participating in the same stereo event is therefore of paramount importance. This strong requirement must be balanced against the need to observe muon events with each single telescope to allow for calibrations with adequate precision. In the ASTRI Mini-Array operation concept, all single-telescope events are acquired independently and stored for off-line processing. The Stereo Event Builder (SEB) software system is the part of the off-line reconstruction chain responsible for identifying single and stereo Cherenkov events. The SEB constraints, design, and expected performance are described in this article.
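The core stereo-matching idea can be illustrated with a simple coincidence-window grouping, sketched below in plain Python; the window size, event format, and acceptance rule are invented for illustration and are not the SEB's actual specification.

```python
# Hedged sketch: group single-telescope triggers whose timestamps fall
# within a coincidence window; parameters are illustrative only.
def build_stereo_events(events, window_ns=500):
    """events: list of (timestamp_ns, telescope_id), any order."""
    events = sorted(events)
    stereo, current = [], [events[0]]
    for ev in events[1:]:
        if ev[0] - current[0][0] <= window_ns:
            current.append(ev)
        else:
            if len({tel for _, tel in current}) >= 2:  # >=2 telescopes
                stereo.append(current)
            current = [ev]
    if len({tel for _, tel in current}) >= 2:
        stereo.append(current)
    return stereo

triggers = [(1000, 1), (1200, 4), (9000, 2), (9100, 2), (20000, 7)]
print(build_stereo_events(triggers))  # one stereo event: telescopes 1 and 4
```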
EFTE-Rocks, a framework to discriminate fast optical transient phenomena from orbital debris
Alan Vasquez Soto, Nicholas Law, Hank Corbett, et al.
Wide-field telescopes like the Evryscope enable all-sky searches for fast optical transient events such as kilonovae, optical counterparts to fast radio bursts, and other exotic events. To further understand these phenomena, we need infrastructure with the capability to monitor and quickly analyze these events. The Evryscopes are an all-sky system with a total field of view of 16,512 sq. deg. that, coupled with the Evryscope Fast Transient Engine (EFTE), can catalogue fast optical transients down to g=16. In the past two years, EFTE has seen millions of transients across the sky, including hundreds of flaring events from cool stars and a population of millisecond glints produced by Earth-orbiting objects that appear morphologically similar to transient astrophysical phenomena. In order to further characterize these events, the Evryscope and other all-sky optical surveys, such as the upcoming Argus Pathfinder and Argus Optical Array, require a framework to discriminate between this fog of imposter transients and real astrophysics. EFTE-Rocks is an automated orbit determination pipeline that takes short-duration transients from EFTE and associates them into tracklets based on an initial trajectory. Here we present a framework to characterize the orbital debris that produces glints seen by fast, wide-field telescopes; lessons learned; and future software improvements. We also discuss its applications to upcoming surveys that are capable of probing for fainter objects at faster cadences.
Data processing pipeline for photo plates digital archives with deep neural networks
Photographic plates were used to capture and store astronomical images for a long time. In recent years, several projects have been carried out to digitize photographic plates, and the digitized plates are shared through the Internet. Invaluable astronomical data can be extracted from digitized plates to analyse astronomical targets with very long temporal variations (up to decades). Extracting the positions of celestial objects from plates and computing their celestial coordinates is the first step. However, since astronomers often used multiple exposures to obtain images on plates, and scratches and mildew have accumulated during storage, it is hard to obtain the necessary information directly from the digitized plates. In this paper, we discuss the data processing pipeline we developed to process digital archives of photographic plates.
A general purpose image restoration method with deep neural network and active learning
The degree of blurring in the astronomical images we observe is usually uncertain, due to the complex space environment, random noise, unpredictable atmospheric turbulence, and other external factors. Images obtained with ground-based large-aperture optical telescopes are mainly affected by atmospheric turbulence. Therefore, the restoration of astronomical images under the influence of arbitrary atmospheric turbulence is of great significance for the theoretical development and technological progress of astronomy. In this paper, a novel astronomical image restoration algorithm is proposed, which connects a deep-learning-based image restoration algorithm with a data generation method. The algorithm can effectively restore images within predefined blur or noise levels. We test the algorithm with long-exposure galaxy images and short-exposure solar images, and find that a well-trained algorithm can restore these images.
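A hedged sketch of the data-generation side of such a method: degrading sharp images with a randomly sized Gaussian PSF plus noise to build (blurred, sharp) training pairs. The paper's actual turbulence model is not specified here, so the Gaussian PSF is a stand-in assumption.

```python
# Illustrative sketch; the Gaussian PSF stands in for a turbulence model.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=21, fwhm=3.0):
    sigma = fwhm / 2.355
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def degrade(sharp, fwhm_range=(1.0, 6.0), noise_sigma=0.01):
    """Return a (blurred, sharp) training pair with a random blur level."""
    fwhm = np.random.uniform(*fwhm_range)
    blurred = fftconvolve(sharp, gaussian_psf(fwhm=fwhm), mode="same")
    blurred += np.random.normal(0.0, noise_sigma, sharp.shape)
    return blurred, sharp

sharp = np.zeros((64, 64)); sharp[32, 32] = 1.0   # toy point source
blurred, target = degrade(sharp)
```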
Poster Session: Instrumentation Control
MORFEO (formerly known as MAORY) instrument control software: toward a consolidated design
Bernardo Salasnich, Andrea Baruffolo, Fulvio Laudisio, et al.
MORFEO (formerly known as MAORY) is the multi-conjugate adaptive optics module for ESO’s Extremely Large Telescope (ELT). It will serve the first-light instrument MICADO. We present the current preliminary design of the Instrument Control Software (ICSS), illustrating the most demanding requirements the ICSS has to deal with and how we are going to integrate the MORFEO ICSS architecture with the control software framework ESO is developing for new instruments.
Scientific camera driver and application software based on ASCOM
In order to improve the flexibility and portability of astronomical camera software and its driver, an astronomical camera software system is designed and implemented based on the ASCOM astronomical software interface standard. In this scheme, the camera software is divided into a driver layer, a logic layer, an interface layer, and an application layer; the ASCOM-based camera driver, the astronomical observation function modules, the interface function modules, and the human-computer interaction interface are implemented in turn. This paper introduces the structural design and working principle of the software scheme, describes the implementation of the main key units in detail, and finally verifies the feasibility of the scheme through test cases. This work lays a foundation for research on the standardized control of astronomical camera terminals and a general architecture for system software.
Design of a remote control system for a camera system based on EPICS and web technology
In order to meet the requirement of remote control for a scientific camera system, a distributed remote control system was built based on the EPICS framework and Web services. EPICS provides an implementation framework for distributed soft real-time control systems based on the Channel Access protocol. A single device control program is called an IOC, and it is convenient to monitor and maintain the status of devices by operating the interfaces of the IOC program, namely Process Variables (PVs). This paper discusses the IOC implementations for the CCD controller, ion pump controller, vacuum pressure sensor, and temperature controller, as well as the construction of a Web monitoring platform based on the Quasar and Flask frameworks. The remote control system has been put into service on the CCD290-99 camera named PXE290.
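A minimal sketch of the pattern, assuming pyepics for Channel Access and Flask for the web layer (the paper's front end uses Quasar on top of Flask); the PV names are hypothetical.

```python
# Hedged sketch: a Flask endpoint that reads device status from EPICS
# PVs with pyepics. PV names below are invented for illustration.
import epics
from flask import Flask, jsonify

app = Flask(__name__)

PVS = {
    "ccd_temp": "PXE290:TEMP:CCD",
    "vacuum": "PXE290:VAC:PRESSURE",
}

@app.route("/status")
def status():
    # caget returns None if a PV is unreachable
    return jsonify({name: epics.caget(pv) for name, pv in PVS.items()})

if __name__ == "__main__":
    app.run(port=5000)
```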
Design and development of the IGRINS-2 control software as a facility instrument of the Gemini observatory
Hye-In Lee, Francisco Ramos, Pablo Prado, et al.
We are developing IGRINS-2, the second generation of the Immersion GRating INfrared Spectrometer, which will be a dedicated facility instrument of the Gemini Observatory. IGRINS has been in active operation for more than 8 years since 2014, including recent visits to the Gemini South telescope. The House Keeping Package (HKP) of the IGRINS-2 control software monitors the temperature, vacuum pressure, and Power Distribution Unit (PDU) of the hardware components, and controls the PDU and the calibration unit (motors and lamps). The Slit Camera Package (SCP) and Data Taking Package (DTP) operate the infrared array detectors through the Detector Control System (DCS). The interface board for each H2RG detector in IGRINS-2 has been changed from JADE2 to MACIE, which led us to develop our own control software using the MACIE library in the DCS. The IGRINS-2 software will communicate with the Gemini Master Process (GMP) through the Gemini Instrument Application Programmer Interface (GIAPI). This work presents the design and development process of the IGRINS-2 control software.
Design and development of the Supervisor software component for the ASTRI Mini-Array Cherenkov Camera
Mattia Corpora, Alessandro Grillo, Pierluca Sangiorgi, et al.
The ASTRI Mini-Array is a project led by INAF to construct nine Imaging Atmospheric Cherenkov Telescopes in order to study gamma-ray sources emitting up to the multi-TeV energy band. These telescopes, which will be deployed at the Observatorio del Teide (Tenerife, Spain), are based on the prototype ASTRI-Horn telescope, successfully tested since 2014 at the Serra La Nave Astronomical Station of the INAF Observatory of Catania. Each telescope will be equipped with the new version of the ASTRI Silicon Photomultiplier (SiPM) Cherenkov Camera. In order to monitor and control the different subsystems, a Supervisory Control And Data Acquisition (SCADA) system will be developed to manage a set of software components. Among them, the Cherenkov Camera Supervisor (CCS), a software subsystem of the Telescope Control System (TCS), is the component that controls each Cherenkov camera. It realizes the interface between each camera and the central SCADA software through the ALMA Common Software (ACS). Furthermore, the CCS is based on the Open Platform Communications Unified Architecture (OPC UA) standard, forming a client/server system: the server side is implemented in the software subsystem deployed on board the camera, while the CCS contains the client side, which uses the server's services. This work presents the design and the technologies used to implement the CCS. It describes the architecture and functionalities, starting from the definition of the use cases and the system requirements, and reports on the various phases of the CCS development.
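The client side of such an OPC UA link might look like the following hedged sketch using the asyncua package; the endpoint URL and node identifier are invented and do not correspond to the camera's actual address space.

```python
# Illustrative sketch only; URL and node id are assumptions.
import asyncio
from asyncua import Client

async def read_camera_state():
    async with Client("opc.tcp://camera-bee:4840") as client:
        node = client.get_node("ns=2;s=Camera.State")
        state = await node.read_value()
        print("camera state:", state)

asyncio.run(read_camera_state())
```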
Improvements to SHINS, the SHARK-NIR instrument software, during the AIT phase
Davide Ricci, Fulvio Laudisio, Sona Chavan, et al.
In the context of SHARK-NIR (System for coronagraphy with High Order adaptive optics in Z and H band), we present the development of SHINS, the SHARK-NIR INstrument control Software, focusing in particular on the changes introduced during the Assembly, Integration, and Test (AIT) phase. SHARK-NIR observing sessions will be carried out with “ESO-style” Observation Blocks (OBs) based on so-called Template scripts prepared by observers. We decided to develop Templates also for the large number of AIT tests (flexures, coronagraphic mask alignment, scientific camera performance, and so on). Here we present the HTTP API adopted for OB generation and a web-based frontend that implements it. Taking advantage of this approach, we also expose APIs for individual device movement and monitoring, as well as for general status; these APIs are then used in the web-based instrument control and synoptic panels. During the recent AIT phase, a potential collision issue between two motorized components emerged. While we are exploring the possibility of a hardware interlock, we present a software solution developed at the Observation Software level, which also remains in effect while using other software such as engineering panels. The system is based on three protection layers and has been successfully tested.
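A hedged sketch of driving such an HTTP API with the requests library; the host, port, endpoints, and OB payload below are invented for illustration only.

```python
# Illustrative sketch; endpoint names and payload fields are assumptions.
import requests

ob = {
    "template": "flexure_test",            # one of the AIT-style templates
    "params": {"n_positions": 12, "exptime": 2.0},
}
resp = requests.post("http://shins-host:8000/api/ob", json=ob, timeout=10)
resp.raise_for_status()
print("queued OB id:", resp.json().get("id"))

# The same API style would serve device monitoring, e.g.:
status = requests.get("http://shins-host:8000/api/status", timeout=10).json()
```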
Synchronized observations with multiple detectors in GRIS: a demonstrator of an instrument for the European Solar Telescope (EST)
J. Quintero Nehrkorn, H. Rodriguez Delgado, A. Matta Gómez, et al.
This contribution describes the software and electronics improvements implemented in the GREGOR Infrared Spectrograph (GRIS), installed on the GREGOR telescope at the Teide Observatory in Tenerife, Canary Islands, Spain. As a demonstrator of an instrument for the European Solar Telescope (EST), this project aims to perform simultaneous spectropolarimetric observations in several spectral lines using several synchronized detectors that may operate at different, synchronized frame rates. Throughout the article, the problems encountered in synchronizing two or more sensors, and the solutions proposed to solve them, are explained.
Gemini Infrared Multi-Object Spectrograph (GIRMOS) real-time controller using the Herzberg Extensible Adaptive Real-time Toolkit (HEART)
This paper discusses the Gemini Infrared Multi-Object Spectrograph (GIRMOS), with a focus on the design of its facility-class Adaptive Optics (AO) Real-Time Controller (RTC). The GIRMOS RTC will be developed using the Herzberg Extensible Adaptive Real-time Toolkit (HEART), a C/C++ software framework for constructing RTCs that targets general-purpose CPUs and standard networking hardware. The GIRMOS RTC has just finished a successful pre-build phase, in which the custom parts of GIRMOS were designed and it was shown how the design incorporates HEART’s software modules. As a multi-object implementation of HEART, the GIRMOS RTC will leverage a decade of design, modelling, and prototyping effort aimed at supporting the performance and configurability requirements of AO systems, with support for multiple client science instruments. This paper discusses how HEART can be customized for a Multi-Object AO (MOAO) system.
Array data acquisition system interface for online distribution of acquired data in the ASTRI Mini-Array project
Valerio Pastore, Vito Conforti, Fulvio Gianotti, et al.
The ASTRI project was born as a collaborative international effort led by the Italian National Institute for Astrophysics (INAF) to design and realise an end-to-end prototype of the Small-Sized Telescope (SST) of the Cherenkov Telescope Array (CTA) in a dual-mirror configuration (2M). The ASTRI Mini-Array is being installed at the Teide Observatory on the island of Tenerife (Canary Islands) and represents the first system of atmospheric Cherenkov telescopes completely dedicated to the study of very high-energy gamma emission. The ASTRI software supports the operations of the ASTRI Mini-Array. The Array Data Acquisition System (ADAS) includes all the hardware, software, and communication infrastructure required to acquire, buffer, and store the bulk data of the instruments installed on the ASTRI telescopes. The Cherenkov Camera Data Acquisition (one per telescope) is a component of ADAS. It connects to the Back-End Electronics (BEE) of the Cherenkov cameras to acquire and save the raw data. The Cherenkov Camera Dispatcher gets data from the Camera Data Acquisition and interfaces with the Online Observation Quality System (OOQS), decoding and sending the acquired data in near real time. The OOQS performs the data quality analysis during observations. According to the requirement specifications, we are redesigning the software to decode and send the raw data to the OOQS at a rate of 1 kHz. This contribution presents the assessment of a solution based on Avro for data serialisation and a Kafka server for data transmission to the OOQS.
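A minimal sketch of the Avro-over-Kafka step, assuming fastavro for serialisation and confluent-kafka for transport; the schema, topic name, and broker address are illustrative.

```python
# Hedged sketch; schema and topic are assumptions, not ADAS definitions.
import io
import fastavro
from confluent_kafka import Producer

schema = fastavro.parse_schema({
    "name": "CameraEvent", "type": "record",
    "fields": [{"name": "event_id", "type": "long"},
               {"name": "telescope", "type": "int"},
               {"name": "pixels", "type": {"type": "array", "items": "int"}}],
})

def serialize(record):
    buf = io.BytesIO()
    fastavro.schemaless_writer(buf, schema, record)
    return buf.getvalue()

producer = Producer({"bootstrap.servers": "localhost:9092"})
event = {"event_id": 1, "telescope": 3, "pixels": [10, 12, 9]}
producer.produce("ooqs-camera-events", value=serialize(event))
producer.flush()
```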
Real-time exposure control and instrument operation with the NEID spectrograph GUI
The NEID spectrograph on the WIYN 3.5-m telescope at Kitt Peak has completed its first full year of science operations and is reliably delivering sub-m/s precision radial velocity measurements. The NEID instrument control system uses the TIMS package (Bender et al. 2016), a client-server software system built around the Twisted Python software stack. During science observations, interaction with the NEID spectrograph is handled through a pair of graphical user interfaces (GUIs), written in PyQt, which wrap the underlying instrument control software and provide straightforward and reliable access to the instrument. Here, we detail the design of these interfaces and present an overview of their use for NEID operations. Observers can use the NEID GUIs to set the exposure time, signal-to-noise ratio (SNR) threshold, and other relevant parameters for observations; configure the calibration bench and observing mode; track or edit observation metadata; and monitor the current state of the instrument. These GUIs facilitate automatic spectrograph configuration and target ingestion from the nightly observing queue, which improves operational efficiency and consistency across epochs. By interfacing with the NEID exposure meter, the GUIs also allow observers to monitor the progress of individual exposures and trigger the shutter on user-defined SNR thresholds. In addition, inset plots of the instantaneous and cumulative exposure meter counts as each observation progresses allow for rapid diagnosis of changing observing conditions as well as guiding failures and other emergent issues.
Herzberg Extensible Adaptive Real-time Toolkit (HEART): internal structure: blocks, pipes, and composition of a new RTC
Herzberg Extensible Adaptive Real-time Toolkit (HEART) is a collection of libraries and other software that can be used to create different types of Adaptive Optics (AO) systems. Pixels can be received from Laser Guide Star (LGS) Wavefront Sensors (WFSs), high-order Natural Guide Star (NGS) WFSs, On-Instrument WFSs (OIWFSs) located in the science instruments, and on-detector guide windows (ODGWs) from science imagers. These inputs are processed in real time by HEART to compute commands to configure the deformable mirrors (DMs) and the tip-tilt stage (TTS), as well as offloading information to selected mechanisms in the RTC, in the telescope, and in the client instruments. This paper explores the internal structure of HEART: in particular, the concept of “blocks”, which are reusable software units from which an RTC can be composed; how “pipes” are used to combine blocks in a meaningful manner; and ultimately how those pipes can be used to realize many different types of real-time controllers (RTCs), such as Single-Conjugate AO (SCAO), Multi-Conjugate AO (MCAO), Multi-Object AO (MOAO), and Ground-Layer AO (GLAO). HEART is currently being implemented for use in NFIRAOS (Near Field Infra-Red AO System) for TMT, GNAO (Gemini North Adaptive Optics system), GIRMOS (Gemini Infrared Multi-Object Spectrograph), GPI2.0 (Gemini Planet Imager upgrade), and REVOLT (Research, Experiment and Validation of adaptive Optics with a Legacy Telescope).
A flexible automation solution for the Gemini North Adaptive Optics facility
Part of GNAO's "one button" approach to control requires the ability to script and automate a number of high-level actions in a way that provides repeatable and seamless operation, and error recovery without requiring user input beyond the initial command. To achieve this, we focused on an automation solution that allowed enough flexibility to implement our goals without compromising integration with the rest of our framework. Our final candidate was BNL's Bluesky, which covered most of our needs, is written in Python, and is open source, which led to its adoption as the core of our high-level command sequencing software. To integrate Bluesky into our framework we have expanded on it, including custom Ophyd components that talk to our internal database (Redis, instead of the default EPICS) and a number of modifications to concurrently monitor and control our commands. We present our choice for the core sequencing component and the modifications we implemented on top of it, as well as the challenges we faced and lessons learned.
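A minimal runnable sketch of the Bluesky core described above, using the simulated detector that ships with ophyd; GNAO's custom Redis-backed Ophyd components are not public, so the stock simulator stands in.

```python
# Hedged sketch of the Bluesky RunEngine driving a scripted "plan".
from bluesky import RunEngine
from bluesky.plans import count
from ophyd.sim import det  # simulated detector shipped with ophyd

RE = RunEngine({})

# Subscribe a simple callback, then run a three-reading acquisition plan.
RE.subscribe(lambda name, doc: print(name))
RE(count([det], num=3))
```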
CUBES and its software ecosystem: instrument simulation, control, and data processing
Giorgio Calderone, Roberto Cirami, Guido Cupani, et al.
CUBES (Cassegrain U-Band Efficient Spectrograph) is the recently approved high-efficiency VLT spectrograph aimed at observing the sky in the ground-based UV region (305-400 nm) with a high-resolution mode (∼20K) and a low-resolution mode (∼5K). In this paper we briefly describe the requirements and the design of the several software packages involved in the project, namely the instrument control software, the exposure time calculator, the end-to-end simulator, and the data reduction software suite. We discuss how the above-mentioned blocks cooperate to build up a “software ecosystem” for the CUBES instrument, and to support users from proposal preparation to science-grade data products.
Employing ELT software technologies for the upgrade of the FORS instrument at ESO VLT
R. Cirami, V. Baldini, G. Calderone, et al.
FORS (FOcal Reducer/low dispersion Spectrograph) is a multi-mode (imaging, polarimetry, long-slit and multi-object spectroscopy) optical instrument mounted at the Cassegrain focus of one of the Unit Telescopes of ESO’s Very Large Telescope (VLT). Since FORS is a workhorse and a rather unique instrument in Paranal, there is a strong need to upgrade it, both to address possible new scientific goals and to ensure regular instrument availability for the forthcoming years. The current instrument control software and electronics were developed at the end of the ’90s, and several parts are becoming obsolete and do not follow the latest standards imposed by ESO for VLT instruments. A collaboration between ESO and INAF – Astronomical Observatory of Trieste was set up in 2018 for the feasibility study of the upgrade of the FORS control software and electronics with the latest VLT-standard technologies (the FORS-Up project). In recent years, however, ESO has been developing new software and electronics control standards for the forthcoming ELT, with the aim of building a full-fledged control system able to efficiently counter hardware obsolescence, offer modern software tools, lower the costs and effort of integration and maintenance, and ease installation. This paper focuses on the FORS-Up control system based on the ELT Instrument Control Software Framework, as presented at the FORS-Up Final Design Review in October 2021.
Status of the Automated Data Extraction, Processing, and Tracking System (ADEPTS) for CHARIS/SCExAO
Taylor L. Tobin, Jonathan Pal, Jeffrey Chilcote, et al.
CHARIS is a near-infrared (JHK) coronagraphic integral field spectrograph (IFS) housed on the Subaru Telescope. In conjunction with the extreme adaptive optics system SCExAO, it provides high-contrast spectral imaging of substellar companions and circumstellar disks. However, data extraction, calibration, and processing are time-consuming processes with a steep learning curve, creating a bottleneck and slowing the production of scientific results. The new Automated Data Extraction, Processing, and Tracking System (ADEPTS) will automatically process all data taken by CHARIS, building calibration files and extracting science-grade data cubes, as well as performing post-processing on the results using default parameters. ADEPTS also serves as a backend for CHARIS in terms of file organization, logging all raw and produced files in a searchable SQLite database and providing a mechanism for user flagging of bad files. By using automation to remove wait times and reducing elapsed computation time via parallelization on its 72-CPU host machine, ADEPTS will be able to produce science-grade data products within a day of observations. We present an update on the current status of ADEPTS, as well as new features and design modifications.
Poster Session: Observatory/Telescope Control
LVMECP: SDSS-V Local Volume Mapper Enclosure Control Package
Mingyeong Yang, Felipe Besser, José Sánchez-Gallego, et al.
We developed control software for the enclosure system of the SDSS-V Local Volume Mapper (LVM), which provides a contiguous 2,500 deg² integral-field survey. The LVM enclosure, located at the Las Campanas Observatory in Chile, is a building that hosts the LVM instruments (LVM-I). The enclosure system consists of four main systems: 1) a roll-off dome, 2) building lights, 3) a Heating, Ventilation, and Air Conditioning (HVAC) system, and 4) a safety system. Two Programmable Logic Controllers (PLCs), acting as middleware, directly operate the complex mechanisms of the dome and the HVAC system via the Modbus protocol. The LVMECP is implemented in Python 3.9 following the SDSS software framework, which adopted a protocol called CLU, with message passing based on RabbitMQ and the Advanced Message Queuing Protocol (AMQP). We also applied asynchronous programming to our system to process multiple requests simultaneously. The dome PLC system remotely accepts commands for the operation of the roll-off dome and the enclosure lights, while the HVAC PLC system keeps track of the changing environmental values of the HVAC system in real time. This software provides observers with remote access through high-level commands.
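The PLC interaction might look like the following hedged sketch with pymodbus (3.x-style API; argument names vary between versions); the host, register addresses, coil numbers, and scaling are hypothetical, not the LVM PLC map.

```python
# Illustrative sketch only; addresses and scaling are assumptions.
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("dome-plc.example.org", port=502)
client.connect()

# Command: energize the coil that starts the roll-off dome motion.
client.write_coil(0, True)

# Telemetry: read a block of HVAC registers (e.g. temperatures x100).
rr = client.read_holding_registers(100, count=4)
if not rr.isError():
    temps = [v / 100.0 for v in rr.registers]
    print("HVAC temperatures:", temps)

client.close()
```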
LVMAGP: SDSS-V local volume mapper acquisition and guiding package
The Sloan Digital Sky Survey fifth-generation (SDSS-V) Local Volume Mapper (LVM) is a wide-field IFU survey that uses an array of four 160 mm telescopes. It provides IFU spectra over the optical range with R ∼ 4,000 to reveal the inner components of galaxies and the evolution of the universe. Each telescope observes the science field or the calibration field independently, but all of them must be simultaneously synchronized with the science exposure. To minimize the number of moving parts, the LVM adopted a siderostat design with a field derotator. We designed control software optimized for LVM observations, lvmagp, which controls four focusers, three K-mirror derotators, one fiber selector, four mounts (siderostats), and seven guide cameras. It was built on the survey's own user-interface and messaging protocols, called actor and CLU, based on asynchronous programming. The lvmagp provides three key sequences: an autofocus sequence, a field acquisition sequence, and an autoguide sequence. We also designed and fabricated a proto-model siderostat for software testing. A real-sky test was performed with the proto-model siderostat, and lvmagp showed arcsecond-level field acquisition and autoguiding accuracy.
INO340 telescope interlock, safety, and alarm management systems
Interlock and safety systems (ILS) play a significant role in the safe and reliable operation of cyber-physical systems such as optical telescopes. We performed a comprehensive hazard analysis to identify and classify the various hazard issues and their severities, together with the required actions. The ILS has been deployed on reliable PLC platforms in the INO340 telescope to deliver all related functionalities in safe and stable conditions. Since the ILS provides dedicated engineering panels for specific operators, we have also developed a dedicated alarm/warning management system for the operator/astronomer applications in parallel. In this paper, we briefly present the INO340 telescope hazard analysis process, the ILS architecture and development methodologies, and the design and implementation of the alarm/warning management system.
HARMONI at ELT: an evolvable software architecture for the instrument pointing model
HARMONI is the first-light visible and near-IR integral field spectrograph for the ELT. To achieve its optimal image quality, an accurate measurement of the telescope’s pointing error is necessary. These measurements are affected by both systematic and random error contributions. To characterise the impact of the latter (as well as the performance of any necessary corrective model), simulations of the pointing-error measurement process are required. We introduce harmoni-pm: a Python-based prototype which, starting from a geometric-optics model of the instrument, attempts to reproduce the main drivers of the instrumental pointing error. harmoni-pm features a software architecture that is resilient to instrument model refinements and enables performance analyses of corrective models based on simulated calibrations. Results showed that the relay optics are the main driver of the instrumental pointing error (of order 100 μm). Additionally, simulated calibrations of corrective models based on truncated Zernike expansions can compensate for systematic pointing errors down to a residual of order 1 μm.
The online observation quality system software architecture for the ASTRI Mini-Array project
The ASTRI Mini-Array is an international collaboration led by the Italian National Institute for Astrophysics. This project aims to construct and operate an array of nine Imaging Atmospheric Cherenkov Telescopes to study gamma-ray sources at very high energy (TeV) and perform stellar intensity interferometry observations. We describe the software architecture and the technologies used to implement the Online Observation Quality System (OOQS) for the ASTRI Mini-Array project. The OOQS executes data quality checks on the data acquired in real time by the Cherenkov cameras and intensity interferometry instruments, and provides feedback to both the Central Control System and the Operator about any abnormal conditions detected. The OOQS can notify other sub-systems, triggering their reaction to promptly correct anomalies. The results from the data quality analyses (e.g. camera plots, histograms, tables, and more) are stored in the Quality Archive for further investigation and summarised in reports available to the Operator. Once the OOQS results are stored, the Operator can visualize them using the Human Machine Interface. The OOQS is designed to manage the high data rate generated by the instruments (up to 4.5 GB/s) and received from the Array Data Acquisition System through the Kafka service. The data are serialized and deserialized during transmission using the Avro framework. The Slurm workload scheduler executes the analyses, exploiting key features such as parallel analyses and scalability.
The telescope control system for the ASTRI Mini-Array of imaging atmospheric Cherenkov telescopes
Federico Russo, Gino Tosti, Pietro Bruno, et al.
The ASTRI Mini-Array is an international project led by INAF to construct and operate nine Imaging Atmospheric Cherenkov Telescopes, with the scientific goals of studying several classes of objects possibly emitting at energies above a few TeV and of performing stellar intensity interferometry observations. The telescope array will be installed at the Teide Observatory (Tenerife, Spain). A Supervisory Control And Data Acquisition (SCADA) software system will be developed to manage the ASTRI Mini-Array, allowing it to be controlled remotely from several locations. One of the most important components of the SCADA system is the Telescope Control System (TCS), i.e. the system responsible for the control and supervision of each telescope. The TCS includes several supervisor components that interface, via the Open Platform Communications Unified Architecture (OPC UA) standard, with the telescope local control systems: the hardware and software that control telescope hardware devices such as the mount drive systems and the Cherenkov camera. These supervisors are in turn controlled by a telescope manager component responsible for the execution of the single-telescope scientific and technical operations requested, orchestrated, and synchronized centrally by the SCADA array central controller. This contribution describes the TCS architecture, design, and development approach in the context of the general SCADA architecture and of the ALMA Common Software, the framework chosen for the development of all SCADA software of the ASTRI Mini-Array.
CCAT-prime: observatory control software for FYST
Reinhold Schaaf, Mike Nolta, Ronan Higgins, et al.
The Fred Young Submillimeter Telescope (FYST) is a 6-meter diameter telescope with a surface accuracy of 10 microns, operating at submillimeter to millimeter wavelengths. It will be located at 5600 meters elevation on Cerro Chajnantor in the Atacama desert of northern Chile, overlooking the ALMA site. Its novel optical “crossed-Dragone” design will deliver a high-throughput, wide field-of-view telescope capable of mapping the sky very rapidly and efficiently. The telescope can host up to three instruments, with the heterodyne array “CHAI” and the direct-detection camera “Prime-Cam” as first-generation instruments. The often harsh environmental conditions at the telescope site require that FYST be operated remotely, either from the base station near San Pedro de Atacama or from the scientists’ home institutions in the US, Canada, and Germany. Automated observations will therefore be the dominant observation mode. FYST’s Observatory Control System (OCS) gives instrument teams the responsibility to control observations. We believe that this model is a good fit for FYST because the observatory will operate exclusively in campaign mode. Furthermore, instrument teams have significant investments in software they want to preserve. The OCS adopts a micro-service design using off-the-shelf components as far as possible to minimize development effort. We will present the OCS design and the selection of off-the-shelf components used.
ZeroMQ-based control system for optical telescope
This paper introduces a telescope control system based on the ZeroMQ communication architecture to realize the coordination and scheduling control of the mount, camera, dome, weather station, and other equipment. The system is divided into three layers: an equipment interface layer, a telescope control layer, and a business layer. In the equipment interface layer, information interaction with the different devices and systems is realized through the ZeroMQ communication protocol. In the telescope control layer, each type of equipment is abstractly defined to realize the logical control of the equipment and the decomposition of upper-level control commands. The business layer mainly takes the observation time series as its main line: it schedules equipment to complete various tasks; monitors and manages environmental status information, telescope equipment status, task execution status, and other data; and realizes automatic observation and exception handling. The system has been deployed on the 36 cm telescope at the Korla observation station, where it is mainly used for the automatic observation of space debris and has achieved good results.
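A minimal sketch of the request/reply pattern such an equipment interface layer can be built on, using pyzmq; the endpoint and command vocabulary are invented for illustration.

```python
# Hedged sketch: REQ/REP command round-trip with pyzmq. For brevity the
# "device service" lives in the same process; normally it is separate.
import zmq

ctx = zmq.Context()

# --- device side (e.g. the mount service) ---
server = ctx.socket(zmq.REP)
server.bind("tcp://127.0.0.1:5555")

# --- business layer side ---
client = ctx.socket(zmq.REQ)
client.connect("tcp://127.0.0.1:5555")
client.send_json({"device": "mount", "cmd": "slew", "ra": 83.6, "dec": 22.0})

request = server.recv_json()            # device decodes the command
server.send_json({"status": "ok", "echo": request["cmd"]})

print(client.recv_json())               # -> {'status': 'ok', 'echo': 'slew'}
```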
pytelpoint: an open-source package for modeling and assessing telescope pointing performance
This paper presents pytelpoint, an open-source Python package that uses PyMC (https://www.pymc.io/) to perform robust analysis of telescope pointing performance. It implements pointing models in a way similar to TPOINT and uses compatible parameter names and definitions. This way, results can easily be compared with previous TPOINT analyses and implemented in telescope control systems that use TPOINT or TPOINT-compatible pointing models. The Bayesian modeling techniques that PyMC enables allow for much more robust determinations of the uncertainties in model parameters and the correlations between them. Several visualization routines are provided to help assess the results and the residuals of the model fits. Some examples are shown of how this has been used at the MMTO. The initial release only supports elevation-azimuth telescopes; support for other kinds of mounts is planned.
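As a rough illustration of the Bayesian approach (not pytelpoint's actual model, which is richer), the sketch below fits two classic pointing-style terms to synthetic elevation residuals with PyMC and reads off posterior uncertainties.

```python
# Hedged sketch: an index-error term plus one azimuth-dependent term
# fitted to synthetic residuals; term definitions are simplified.
import numpy as np
import pymc as pm

az = np.radians(np.random.uniform(0, 360, 100))
true_ie, true_an = 45.0, 20.0                     # arcsec
dalt = true_ie + true_an * np.cos(az) + np.random.normal(0, 2.0, az.size)

with pm.Model():
    ie = pm.Normal("IE", mu=0.0, sigma=100.0)
    an = pm.Normal("AN", mu=0.0, sigma=100.0)
    sigma = pm.HalfNormal("sigma", sigma=10.0)
    pm.Normal("obs", mu=ie + an * pm.math.cos(az), sigma=sigma, observed=dalt)
    idata = pm.sample(1000, tune=1000, progressbar=False)

# Posterior means and spreads quantify parameter uncertainty directly.
print(idata.posterior["IE"].mean().item(), idata.posterior["AN"].mean().item())
```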
Poster Session: Software Engineering
SciDevOps: accelerating scientific software delivery under a continuous integration model
Álvaro Aguirre, Víctor González, Lidia Dominguez-Faus, et al.
ALMA has been operating for several years now, and many of the scientific tools used to assess its scientific data products have been in use since its beginnings. Recently, it has become clear that the delivery and deployment of improvements and new features for these tools need to become more agile. However, since these tools are developed by scientists and not by ALMA's Integrated Computing Team, the existing delivery process fails to address all the needs these tools have. In this paper we show how we have automated the software delivery process for a particular tool used within science operations at ALMA. Our approach was to apply DevOps principles to the Science Operations within ALMA, hence the concept SciDevOps. We explain what the previous situation was and what changes were implemented in order to achieve a fully automated delivery and deployment process.
Poster Session: Software Quality and Testing
An exposure time calculator for the Maunakea Spectroscopic Explorer
The Maunakea Spectroscopic Explorer (MSE) will convert the 3.6-m Canada-France-Hawaii Telescope (CFHT) into an 11.25-m primary aperture telescope with a 1.5 square degree field of view at the prime focus. It will produce multi-object spectroscopy with a suite of low (R∼3,000), moderate (R∼6,000), and high (R∼40,000) spectral resolution spectrographs in optical and near-infrared bands that are capable of detecting over 4,000 objects per pointing. Generally, an exposure time calculator (ETC) simulates system performance by computing a signal-to-noise ratio (SNR) and exposure time from parameters such as the target magnitude, the total throughput of the system, and the sky conditions. The ETC that we have developed for MSE has individual computation modes for SNR, exposure time, SNR as a function of AB magnitude, and SNR as a function of wavelength. The code is based on an agile development methodology and allows for a variety of user input. Users must select the LR, MR, or HR spectral resolution setting in order to pull the associated MSE instrument parameters, and must specify the target and background sky magnitudes (with the ability to alter the default airmass and water vapor values). The software is developed in Python 3.7, and a Tkinter graphical user interface is implemented to facilitate cross-platform use. In this paper, we present the logic structure and various functionalities of our MSE-ETC, including the software design and a demonstration.
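A hedged sketch of a typical ETC core, the CCD signal-to-noise equation evaluated for a given exposure time; all instrument numbers below are placeholders, not MSE's actual throughput model.

```python
# Illustrative sketch of the standard CCD SNR equation (electron units);
# the rates and pixel count are placeholder values.
import numpy as np

def snr(t, target_rate, sky_rate, dark_rate, read_noise, npix):
    """SNR = S*t / sqrt(S*t + sky*t + npix*(dark*t + RN^2))."""
    signal = target_rate * t
    noise = np.sqrt(signal + sky_rate * t
                    + npix * (dark_rate * t + read_noise**2))
    return signal / noise

# Example: object at 50 e-/s over 20 pixels, sky 10 e-/s, RN 5 e-.
for t in (60, 300, 1800):
    print(t, round(snr(t, 50.0, 10.0, 0.02, 5.0, 20), 1))
```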
Poster Session: UI/Web Technologies
Designing the MICADO observation preparation software for a distributed architecture
Michael Wegner, Jörg Schlichter, Valentin Ziel
MICADO, the near-infrared Multi-AO Imaging Camera for Deep Observations and designated ELT first-light instrument, will require dedicated software tools for observation preparation. In this context, the European Southern Observatory is adopting a uniform, distributed architecture that expects business logic to be provided as microservices on an ESO host. This applies to MICADO-specific features as well: automatic guide star selection, AO performance calculation, and telescope offset permissibility checks are the first candidates for a microservice implementation. We present our approach to adapting the MICADO tasks to ESO's distributed architecture and share our view on the new paradigm in general.
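As a concrete illustration, a check such as telescope offset permissibility maps naturally onto a small HTTP microservice. The sketch below uses FastAPI with an invented endpoint and limit purely for illustration; ESO's actual framework and the real permissibility rules are not described here.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class OffsetRequest(BaseModel):
    d_ra_arcsec: float
    d_dec_arcsec: float

@app.post("/micado/offset-check")
def offset_check(req: OffsetRequest) -> dict:
    # Illustrative limit only; real rules depend on guide star and AO state
    limit = 30.0
    permitted = abs(req.d_ra_arcsec) <= limit and abs(req.d_dec_arcsec) <= limit
    return {"permitted": permitted}
```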
Rapid and painless development of python INDI drivers to elegant and responsive web GUIs
Scott Swindell, Dan Avner, Timothy Pickering, et al.
In this paper we present pyINDI, a web-friendly Python implementation of the widely adopted Instrument Neutral Distributed Interface (INDI) protocol. The INDI model separates the GUI, or “client,” from the software that communicates directly with the hardware, or “driver.” pyINDI includes tools for building either a client or a driver and is compatible with any INDI-compliant software. On the client side, a JavaScript library communicates with the INDI driver, and HTML and CSS tools auto-generate a GUI based on the INDI properties; a developer can also use these tools to build a custom GUI. The driver and client APIs utilize Python's asyncio library for low-overhead concurrency. We summarize the current pyINDI drivers and clients at the Bok, Kuiper, and MMT observatories, and then discuss potential uses and extensions of pyINDI.
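To give a flavor of the driver side, the sketch below shows how an asyncio coroutine can poll hardware and publish an INDI-style number property without threads. The class and function names are hypothetical and do not reproduce pyINDI's actual API.

```python
import asyncio
import random

class NumberProperty:
    """A minimal stand-in for an INDI number vector (device, name, value)."""
    def __init__(self, device: str, name: str, value: float = 0.0):
        self.device, self.name, self.value = device, name, value

async def read_sensor() -> float:
    await asyncio.sleep(0.1)  # stands in for real hardware I/O
    return 20.0 + random.random()

async def poll(prop: NumberProperty, period: float = 1.0) -> None:
    while True:  # a driver loop runs for the lifetime of the process
        prop.value = await read_sensor()
        # A real driver would serialize this as an INDI XML setNumberVector message
        print(f"{prop.device}.{prop.name} = {prop.value:.2f}")
        await asyncio.sleep(period)

asyncio.run(poll(NumberProperty("Weather", "TEMPERATURE")))
```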
Poster Session
2dFdr Pipeline As a Web Service (PAWS): on demand reduction of archival 2dF-AAOmega observations
Brent Miszalski, Simon J. O'Toole, Kate Sheng, et al.
The Two-degree Field (2dF) facility of the Anglo-Australian Telescope (AAT) continues to take regular observations with millions of spectra collected over its lifetime. While individual projects have used the 2dFdr data reduction package to reduce and publish their own spectra, the majority of 2dF spectra are relatively inaccessible inside raw files located in the AAT archive. Here we introduce our 2dFdr Pipeline As a Web Service (PAWS) system that allows users to reduce 2dF-AAOmega observations on demand from the upgraded AAT archive. Without downloading data or installing 2dFdr, users can select science observations and reduction parameters before jobs are submitted for reduction. The system uses docker-py and Celery to robustly execute the reduction workflows, while a custom job tracking system keeps users informed of job progress. Data products may be downloaded and individual spectra can be viewed interactively. We intend to support additional instruments in the future.
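The docker-py and Celery combination described here typically takes the form of a worker task that launches the reduction container and records progress for the job tracker. The sketch below is a minimal, hypothetical example; the image name, command arguments, and broker URL are invented and are not the actual PAWS code.

```python
import docker
from celery import Celery

app = Celery("paws", broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

@app.task(bind=True)
def reduce_observations(self, raw_files, idx_file):
    """Run 2dFdr in a container and surface progress via Celery task state."""
    self.update_state(state="RUNNING", meta={"n_files": len(raw_files)})
    client = docker.from_env()
    logs = client.containers.run(
        "example/2dfdr:latest",                       # hypothetical image
        command=["aaorun", *raw_files, idx_file],     # illustrative invocation
        volumes={"/data/raw": {"bind": "/data/raw", "mode": "ro"}},
        remove=True,
    )
    return logs.decode()
```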
Data Central's data aggregation service
Brent Miszalski, Simon J. O'Toole, James Tocknell, et al.
Astronomers routinely have to collate heterogeneous observational data for one or several targets from a variety of online resources. Traditionally, this process of data aggregation is time-consuming and error-prone, especially if multiple telescope archives or data centres are searched individually. To streamline this task we have developed Data Central's Data Aggregation Service (DAS), an interactive web application that leverages Aladin Lite to display images and catalogues resulting from queries of multiple online services for a given target. The modern asynchronous Python design allows these queries to be sent simultaneously, and individual results are displayed as soon as they are received. The DAS also hosts Pipeline As a Web Service (PAWS) data reduction workflows that may be triggered on demand. The DAS can effectively unlock science from unreduced data in telescope archives and may help manage the massive data volumes expected from next-generation facilities.
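The concurrent query pattern the DAS relies on can be expressed with asyncio and aiohttp as below; the endpoints are placeholders, and the real DAS queries many more services with richer response handling.

```python
import asyncio
import aiohttp

SERVICES = {                                  # hypothetical cone-search endpoints
    "archive_a": "https://example.org/a/cone?target={target}",
    "archive_b": "https://example.org/b/search?obj={target}",
}

async def query(session, name, url):
    async with session.get(url) as resp:
        return name, await resp.json()

async def aggregate(target):
    async with aiohttp.ClientSession() as session:
        tasks = [query(session, name, url.format(target=target))
                 for name, url in SERVICES.items()]
        for fut in asyncio.as_completed(tasks):  # handle each result as it arrives
            name, data = await fut
            print(f"{name}: {type(data).__name__} received")  # hand off to the UI

asyncio.run(aggregate("NGC 300"))
```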
Front Matter: Volume 12189
Front Matter: 12189
This PDF file contains the front matter associated with SPIE Proceedings Volume 12189, including the Title Page, Copyright information, Table of Contents, and Conference Committee Page.