A next-generation open-source toolkit for FITS file image viewing
Author(s):
Eric Jeschke;
Takeshi Inagaki;
Russell Kackley
The astronomical community has a long tradition of sharing and collaborating on FITS file tools, including viewers. Several excellent viewers such as DS9 and Skycat have been successfully reused again and again. Yet this "first generation" of viewers predates the emergence of a new class of powerful object-oriented scripting languages such as Python, which has quickly become a very popular language for astronomical (and general scientific) use. Integration and extension of these viewers from Python is cumbersome. Furthermore, these viewers are also built on older widget toolkits such as Tcl/Tk, which are becoming increasingly difficult to support and extend as time passes.
Subaru Telescope's second-generation observation control system (Gen2) is built on a foundation of Python-based technologies and leverages several important astronomically useful packages such as numpy and pyfits. We have written a new flexible core widget for viewing FITS files which is available in versions for both the modern Gtk and Qt-based desktops. The widget offers seamless integration with pyfits and numpy arrays of FITS data. A full-featured viewer based on this widget has been developed, and supports a plug-in architecture in which new features can be added by scripting simple Python modules. In this paper we describe and demonstrate the capabilities of the new widget and viewer and discuss the architecture of the software, which allows new features and widgets to be easily developed by subclassing a powerful abstract base class. The software will be released as open source.
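By way of illustration, the plug-in pattern described above might look roughly like the following sketch; every class and method name below is hypothetical, not the widget's actual API.

```python
# Hypothetical sketch of the plug-in pattern described above; all class
# and method names are illustrative, not the widget's actual API.
import numpy
import pyfits  # the FITS I/O package named in the abstract


class FakeWidget(object):
    """Stand-in for the Gtk/Qt display widget; it just holds the array."""
    def set_data(self, data):
        self.data = data


class ViewerPlugin(object):
    """Assumed abstract base class that viewer plug-ins subclass."""
    def __init__(self, widget):
        self.widget = widget

    def load(self, path):
        # pyfits exposes the image as a plain numpy array, which the
        # widget can display directly -- no format conversion needed.
        self.widget.set_data(pyfits.open(path)[0].data)


class Statistics(ViewerPlugin):
    """A new feature added by scripting a simple Python module."""
    def load(self, path):
        super(Statistics, self).load(path)
        print("mean pixel value:", numpy.mean(self.widget.data))
```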
Data mining and knowledge discovery resources for astronomy in the web 2.0 age
Author(s):
S. Cavuoti;
M. Brescia;
G. Longo
The emerging field of AstroInformatics, while crucial for facing current technological challenges, is also opening exciting new perspectives for astronomical discovery through the implementation of advanced data mining procedures. The complexity of astronomical data and the variety of scientific problems, however, call for innovative algorithms and methods as well as for intensive use of ICT technologies. The DAME (DAta Mining and Exploration) Program exposes a series of web-based services to perform scientific investigation on massive astronomical data sets. The engineering design and requirements, driving its development since the beginning of the project, are projected towards a new paradigm of Web-based resources, reflecting the final goal of becoming a prototype of an efficient data mining framework in the data-centric era.
Science ground segment for the ESA Euclid Mission
Author(s):
Fabio Pasian;
John Hoar;
Marc Sauvage;
Christophe Dabin;
Maurice Poncet;
Oriana Mansutti
The Scientific Ground Segment (SGS) of the ESA M2 Euclid mission, foreseen to be launched in the fourth quarter of
2019, is composed of the Science Operations Center (SOC) operated by ESA and a number of Science Data Centers
(SDCs) in charge of data processing, provided by a Consortium of 14 European countries. Many individuals, scientists
and engineers, are and will be involved in the SGS development and operations. The distributed nature of the data
processing and of the collaborative software development, the data volume of the overall data set, and the needed
accuracy of the results are the main challenges expected in the design and implementation of the Euclid SGS. In
particular, the huge volume of data (not only Euclid data but also ground based data) to be processed in the SDCs will
require distributed storage to avoid data migration across SDCs. The leading principles driving the development of the SGS are expected to be simplicity of system design, component-based software engineering, virtualization, and a
data-centric approach to the system architecture where quality control, a common data model and the persistence of the
data model objects play a crucial role. ESA/SOC and the Euclid Consortium have developed, and are committed to
maintain, a tight collaboration in order to design and develop a single, cost-efficient and truly integrated SGS.
The VO-Dance web application at the IA2 data center
Author(s):
Marco Molinaro;
Cristina Knapic;
Riccardo Smareglia
The Italian center for Astronomical Archives (IA2, http://ia2.oats.inaf.it) is a national infrastructure project of the
Italian National Institute for Astrophysics (Istituto Nazionale di AstroFisica, INAF) that provides services for
the astronomical community. Besides data hosting for the Large Binocular Telescope (LBT) Corporation, the
Galileo National Telescope (Telescopio Nazionale Galileo, TNG) Consortium and other telescopes and instruments,
IA2 offers proprietary and public data access through user portals (both developed and mirrored) and
deploys resources complying with the Virtual Observatory (VO) standards. Archiving systems and web interfaces are
developed to be extremely flexible about adding new instruments from other telescopes. The publishing of VO resources, along with the data access portals, implements the International Virtual Observatory Alliance (IVOA) protocols, providing astronomers with new ways of analyzing data. Given the large variety of data flavours and IVOA standards,
the need for tools to easily accomplish data ingestion and data publishing arises. This paper describes
the VO-Dance tool, which IA2 began developing to publish VO resources in a dynamic way from existing database tables or views. The tool consists of a Java web application, potentially DBMS and platform independent, that stores the services' metadata and information internally, exposes RESTful endpoints to accept VO queries for these services, and dynamically translates calls to these endpoints into SQL queries consistent with the published table or view. VO-Dance then translates the database response back into a VO-compliant form.
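As a rough Python sketch of the endpoint-to-SQL translation idea (the real tool is a Java web application, and the table schema below is invented):

```python
# Minimal sketch of translating a VO cone-search call into SQL, in the
# spirit of VO-Dance (which is Java); table name and schema are invented.
def cone_search_sql(table, params):
    ra = float(params["RA"])   # degrees, from the REST query string
    dec = float(params["DEC"])
    sr = float(params["SR"])   # search radius in degrees
    # Crude box pre-filter standing in for a real spherical-distance test.
    return (
        "SELECT * FROM {t} "
        "WHERE ra BETWEEN {r1} AND {r2} AND dec BETWEEN {d1} AND {d2}"
    ).format(t=table, r1=ra - sr, r2=ra + sr, d1=dec - sr, d2=dec + sr)


print(cone_search_sql("published_view",
                      {"RA": "180.0", "DEC": "-30.0", "SR": "0.5"}))
```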
Distributed agile software development for the SKA
Author(s):
Andreas Wicenec;
Rebecca Parsons;
Slava Kitaeff;
Kevin Vinsen;
Chen Wu;
Paul Nelson;
David Reed
The SKA software will most probably be developed by many groups distributed across the globe and coming from different backgrounds, like industries and research institutions. The SKA software subsystems will have to cover a very wide range of different areas, but still they have to react and work together like a single system to achieve the scientific goals and satisfy the challenging data flow requirements. Designing and developing such a system in a distributed fashion requires proper tools and the setup of an environment to allow for efficient detection and tracking of interface and integration issues, in particular in a timely way. Agile development can
provide much faster feedback mechanisms and also much tighter collaboration between the customer (scientist)
and the developer. Continuous integration and continuous deployment on the other hand can provide much faster
feedback of integration issues from the system level to the subsystem developers. This paper describes the results
obtained from trialing a potential SKA development environment based on existing science software development
processes like ALMA, the expected distribution of the groups potentially involved in the SKA development and
experience gained in the development of large scale commercial software projects.
Evolution of the top level control software of astronomical instruments at ESO
Author(s):
Eszter Pozna
The Observation Software (OS) is the top level control software of astronomical instruments which is managing the
actions during exposures and calibrations carried out at ESO (at various sites VLT, VLTI, La Silla, VISTA). The
software framework Base Observation Software Stub (BOSS) provides the foundation of the OS, in use for a decade.
BOSS contains 26000 lines of C++ code and covers the functionalities of a simple OS (configuration, synchronization of
the subsystems, state alignment, exposure and image file handling). The need for ever increasing precision and speed
imposes a consequent increase in complexity on the astronomical instrument control software. This makes the OS a critical component of the instrument design, reflected in the size of the BOSS applications, varying between 0 and 12,000 lines and including additional scheduler mechanisms, calculation of optical phenomena, online calibrations, etc. This article focuses on the progress of OS and BOSS, and on their functionality over time.
Discovery Channel Telescope software component template and state design: principles and implementation
Author(s):
Paul J. Lotz;
Michael J. Lacasse;
Ryan C. Godwin
The Discovery Channel Telescope is a 4.3m astronomical research telescope in northern Arizona constructed through a partnership between Discovery Communications and Lowell Observatory. The control software for the telescope and observatory systems consists of stand-alone, state-based components that respond to triggers (external signals or internal data changes). Component applications execute on Windows, real-time, and FPGA targets. The team has developed a template for a system component, the implementation of which has yielded large gains in productivity, robustness, and maintainability. These benefits follow from the dependence of the template on common, well-tested code, allowing a developer to focus on application-specific particulars unencumbered by details of infrastructure elements such as communication, and from the separation of concerns the architecture provides, ensuring that modifications are straightforward, separable, and consequently relatively safe. We describe a repeatable design process for developing a state machine design, and show how this translates directly into a concrete implementation utilizing several design patterns, illustrating this with examples from components of the functioning active optics system. We also present a refined top-level state machine design and rules for highly independent component interactions within and between hierarchies that we propose offer a general solution for large component-based control systems.
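A toy sketch of the trigger-driven, state-based component pattern described above (purely illustrative; the actual components run on Windows, real-time and FPGA targets):

```python
# Toy sketch of a state-based component responding to triggers; purely
# illustrative of the pattern, not the DCT template itself.
class Component(object):
    TRANSITIONS = {
        ("standby", "start"): "tracking",
        ("tracking", "fault"): "error",
        ("error", "reset"): "standby",
    }

    def __init__(self):
        self.state = "standby"

    def trigger(self, signal):
        # External signals or internal data changes drive transitions;
        # unknown (state, signal) pairs are ignored rather than crashing.
        self.state = self.TRANSITIONS.get((self.state, signal), self.state)


c = Component()
c.trigger("start")
assert c.state == "tracking"
```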
Instrument control software development process for the multi-star AO system ARGOS
Author(s):
M. Kulas;
L. Barl;
J. L. Borelli;
W. Gässler;
S. Rabien
The ARGOS project (Advanced Rayleigh guided Ground layer adaptive Optics System) will upgrade the Large
Binocular Telescope (LBT) with an AO System consisting of six Rayleigh laser guide stars. This adaptive optics
system integrates several control loops and many different components like lasers, calibration swing arms and
slope computers that are dispersed throughout the telescope. The purpose of the instrument control software
(ICS) is running this AO system and providing convenient client interfaces to the instruments and the control
loops. The challenges for the ARGOS ICS are the development of a distributed and safety-critical software
system with no defects in a short time, the creation of huge and complex software programs with a maintainable
code base, the delivery of software components with the desired functionality and the support of geographically
distributed project partners. To tackle these difficult tasks, the ARGOS software engineers reuse existing software
like the novel middleware from LINC-NIRVANA, an instrument for the LBT, provide many tests at different
functional levels like unit tests and regression tests, agree about code and architecture style and deliver software
incrementally while closely collaborating with the project partners. Many ARGOS ICS components are already
successfully in use in the laboratories for testing ARGOS control loops.
Data management for the EVLA
Author(s):
Bryan J. Butler;
Claire J. Chandler
The Expanded Very Large Array (EVLA) project is the next generation instrument for high resolution long-millimeter to
short-meter wavelength radio astronomy. It is currently in early science operations, with full science operations to
commence in January 2013. The EVLA construction project provided new software for all aspects of operation of the
telescope, including both that required for controlling and monitoring the instrument and that involved with the scientific
workflow. As the telescope transitions into full operations we are also developing the software and operations policies
that allow us to manage the large amounts of data collected by the instrument (up to terabytes for a single observation;
petabytes per year for all observations). We present an overview of our data management software and policies for the
EVLA, as well as some early experience we have gained with the storage and distribution of data, post-processing,
automatic processing, and centralized reprocessing of data, and storage of derived products back into our science
archive.
Design and capabilities of the MUSE data reduction software and pipeline
Author(s):
Peter M. Weilbacher;
Ole Streicher;
Tanya Urrutia;
Aurélien Jarno;
Arlette Pécontal-Rousset;
Roland Bacon;
Petra Böhm
MUSE, the Multi Unit Spectroscopic Explorer,1 is an integral-field spectrograph under construction for the ESO VLT to
see first light in 2013. It can record spectra of a 1′x1′ field on the sky at a sampling of 0″.2x0″.2, over a wavelength range
from 4650 to 9300Å.
The data reduction for this instrument is the process which converts raw data from the 24 CCDs into a combined
datacube (with two spatial and one wavelength axis) which is corrected for instrumental and atmospheric effects. Since
the instrument consists of many subunits (24 integral-field units, each slicing the light into 48 parts, i.e. 1152 regions with
a total of almost 90000 spectra per exposure), this task requires many steps and is computationally expensive, in terms of
processing speed, memory usage, and disk input/output.
The data reduction software is designed to be mostly run as an automated pipeline and to fit into the open source
environment of the ESO data flow as well as into a data management system based on AstroWISE. We describe the
functionality of the pipeline, highlight details of new and unorthodox processing steps, discuss which algorithms and code
could be used from other projects. Finally, we show the performance on both laboratory data as well as simulated scientific
data.
Significantly reducing the processing times of high-speed photometry data sets using a distributed computing model
Author(s):
Paul Doyle;
Fred Mtenzi;
Niall Smith;
Adrian Collins;
Brendan O'Shea
The scientific community is in the midst of a data analysis crisis. The increasing capacity of scientific CCD instrumentation and its falling cost are contributing to an explosive growth in raw photometric data. This data must go through a process of cleaning and reduction before it can be used for high-precision photometric analysis. Many
existing data processing pipelines either assume a relatively small dataset or are batch processed by a High Performance
Computing centre. A radical overhaul of these processing pipelines is required to allow terabyte-sized datasets to be reduced and cleaned at near-capture rates using an elastic processing architecture. The ability to access
computing resources and to allow them to grow and shrink as demand fluctuates is essential, as is exploiting the parallel
nature of the datasets. A distributed data processing pipeline is required. It should incorporate lossless data compression,
allow for data segmentation and support processing of data segments in parallel. Academic institutes can collaborate and
provide an elastic computing model without the requirement for large centralized high performance computing data
centers. This paper demonstrates how a base 10 order of magnitude improvement in overall processing time has been
achieved using the "ACN pipeline", a distributed pipeline spanning multiple academic institutes.
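The segment-and-parallelize idea can be sketched with the Python standard library alone; the file names and the reduce step below are placeholders, not the ACN pipeline's code:

```python
# Sketch of segmenting a dataset and reducing segments in parallel;
# reduce_segment is a placeholder for real cleaning/reduction steps.
from multiprocessing import Pool


def reduce_segment(path):
    # In a real pipeline: decompress, bias/flat-correct, extract photometry.
    return (path, "reduced")


if __name__ == "__main__":
    segments = ["frame_%04d.fits.gz" % i for i in range(16)]
    with Pool(processes=4) as pool:   # pool size grows/shrinks with demand
        results = pool.map(reduce_segment, segments)
    print(len(results), "segments processed")
```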
The Dark Energy Survey data processing and calibration system
Author(s):
Joseph J. Mohr;
Robert Armstrong;
Emmanuel Bertin;
Greg Daues;
Shantanu Desai;
Michelle Gower;
Robert Gruendl;
William Hanlon;
Nikolay Kuropatkin;
Huan Lin;
John Marriner;
Donald Petravic;
Ignacio Sevilla;
Molly Swanson;
Todd Tomashek;
Douglas Tucker;
Brian Yanny
The Dark Energy Survey (DES) is a 5000 deg2 grizY survey reaching characteristic photometric depths of 24th magnitude (10 sigma) and enabling accurate photometry and morphology of objects ten times fainter than in SDSS. Preparations for DES have included building a dedicated 3 deg2 CCD camera (DECam), upgrading the existing CTIO Blanco 4m telescope and developing a new high performance computing (HPC) enabled data management system (DESDM). The DESDM system will be used for processing, calibrating and serving the DES data. The total data volumes are high (~ 2PB), and so considerable effort has gone into designing an automated processing and quality control system. Special purpose image detrending and photometric calibration codes have been developed to meet the data quality requirements, while survey astrometric calibration, coaddition and cataloging rely on new extensions of the AstrOmatic codes which now include tools for PSF modeling, PSF homogenization, PSF corrected model fitting cataloging and joint model fitting across multiple input images. The DESDM system has been deployed on dedicated development clusters and HPC systems in the US and Germany. An extensive program of testing with small rapid turn-around and larger campaign simulated datasets has been carried out. The system has also been tested on large real datasets, including Blanco Cosmology Survey data from the Mosaic2 camera. In Fall 2012 the DESDM system will be used for DECam commissioning, and, thereafter, the system will go into full science operations.
The Italian DPC: infrastructure and operations for the Italian contribution to the Gaia data processing and analysis consortium
Author(s):
R. Messineo;
R. Morbidelli;
M. Martino;
E. Pigozzi;
A. F. Mulone;
A. Vecchiato
This paper describes the design and the implementation of the Italian Data Processing Centre multi-tier software and
hardware infrastructure, built by ALTEC and funded by ASI, to support the Italian participation to the Gaia data
processing tasks. In particular the paper focuses on the software and hardware architectural choices adopted to manage
both big data volumes and complex operations scenarios. The DPCT system has been designed as an integrated system with the capability to manage all data processing pipeline phases: data receiving, data processing, data extraction, data archiving and data sending. In addition, the DPCT system also includes data access and analysis tools allowing Italian scientists to be active users of the system during operations.
Automated and generalized integral-field spectroscopy data reduction using p3d
Author(s):
Christer Sandin;
Peter Weilbacher;
Fachreddin Tabataba-Vakili;
Sebastian Kamann;
Ole Streicher
Integral-field spectrograph (IFS) instruments are well suited to observe extended and faint objects, such as planetary nebulæ
and galaxies. Such observations result in large quantities of raw data, which mostly require an expert to derive accurate scientific spectra. Most instruments handle up to several thousand spectra simultaneously, each using a unique file format and presenting numerous instrument-specific issues that only an experienced expert can resolve. p3d is an open
source processing tool that is designed to handle raw data of any fiber-fed IFS, reducing IFS data quickly, easily, and
accurately. Separate tools are available that handle many tasks including cosmic-ray hit rejection in single spectrum images,
the combination of images, tracing of spectra on the detector, determination of spatial profiles of any shape, handling of
images for flat fielding, different versions of spectrum extraction, combination of multi-detector data, and correction for atmospheric differential refraction. The same approach and code is used with all instruments. No license is required when
using p3d, even though it is based on the proprietary software IDL; this is made possible through precompiled binary files
that are distributed together with the source code. p3d has been much improved with numerous releases since the first
version early in 2010. Here we present the latest capabilities of a nearly complete program.
Service-oriented architecture for the ARGOS instrument control software
Author(s):
J. Borelli;
L. Barl;
W. Gässler;
M. Kulas;
Sebastian Rabien
The Advanced Rayleigh Guided ground layer Adaptive optic System, ARGOS, equips the Large Binocular
Telescope (LBT) with a constellation of six Rayleigh laser guide stars. By correcting atmospheric turbulence near
the ground, the system is designed to increase the image quality of the multi-object spectrograph LUCIFER
approximately by a factor of 3 over a field of 4 arc minute diameter. The control software has the critical task
of orchestrating several devices, instruments, and high level services, including the already existing adaptive
optic system and the telescope control software. All these components are widely distributed over the telescope,
adding more complexity to the system design. The approach used by the ARGOS engineers is to write loosely
coupled and distributed services under the control of different ownership systems, providing a uniform mechanism
to offer, discover, interact with and use these distributed capabilities. The control system includes several finite state machines, vibration and flexure compensation loops, and safety mechanisms such as interlocks and aircraft and satellite avoidance systems.
Development of the ACS+OPC UA based control system for a CTA medium size telescope prototype
Author(s):
Bagmeet Behera;
Igor Oya;
Emrah Birsin;
Hendryk Köppel;
David Melkumyan;
Stefan Schlenstedt;
Torsten Schmidt;
Ullrich Schwanke;
Peter Wegner;
Stephan Wiesand;
Michael Winde
The Cherenkov Telescope Array (CTA) is the next-generation Very High Energy (VHE, defined as >50 GeV to several hundred TeV) telescope facility, currently in the design and prototyping phase, and expected to come on-line around 2016. The
array would have both a Northern and Southern hemisphere site, together delivering nearly complete sky coverage. The
CTA array is planned to have ~100 telescopes of several different sizes to fulfill the sensitivity and energy coverage needs.
Each telescope has a number of subsystems with varied hardware and control mechanisms; a drive system that gets
commands and inputs via OPC UA (OPC Unified Architecture), mirror alignment systems based on XBee/ZigBee protocol
and/or CAN bus, weather monitor accessed via serial/Ethernet ports, CCD cameras for calibration, Cherenkov camera, and
the data read out electronics, etc. Integrating the control and data-acquisitions of such a distributed heterogeneous system
calls for a framework that can handle such a multi-platform, multi-protocol scenario. The CORBA based ALMA Common
software satisfies these needs very well and is currently being evaluated as the base software for developing the control
system for CTA.
A prototype for a Medium Size Telescope (MST, ~12m) is being developed and will be deployed in Berlin, by end of
2012. We present the development being carried out to integrate and control the various hardware subsystems of this MST
prototype using ACS.
Software control of the Advanced Technology Solar Telescope enclosure PLC hardware using COTS software
Author(s):
Alastair J. Borrowman;
Lander de Bilbao;
Javier Ariño;
Gaizka Murga;
Bret Goodrich;
John R. Hubbard;
Alan Greer;
Chris Mayer;
Philip Taylor
As PLCs evolve from simple logic controllers into more capable Programmable Automation Controllers (PACs),
observatories are increasingly using such devices to control complex mechanisms1, 2. This paper describes use of COTS
software to control such hardware using the Advanced Technology Solar Telescope (ATST) Common Services
Framework (CSF). We present the Enclosure Control System (ECS) under development in Spain and the UK.
The paper details selection of the commercial PLC communication library PLCIO. Implemented in C and delivered with
source code, the library separates the programmer from communication details through a simple API. Capable of
communicating with many types of PLCs (including Allen-Bradley and Siemens) the API remains the same irrespective
of PLC in use.
The ECS is implemented in Java using the observatory's framework that provides common services for software
components. We present a design following a connection-based approach where all components access the PLC through
a single connection class. The link between Java and PLCIO C library is provided by a thin Java Native Interface (JNI)
layer. Also presented is a software simulator of the PLC based upon the PLCIO Virtual PLC. This creates a simulator
operating below the library's API and thus requires no change to ECS software. It also provides enhanced software
testing capabilities prior to hardware becoming available.
Results are presented in the form of communication timing test data, showing that the use of CSF, JNI and PLCIO provides a control system capable of controlling enclosure tracking mechanisms, and one that would be equally valid for telescope mount control.
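The connection-based approach described above, in which all components share a single connection class, might be outlined as follows; the real ECS is Java calling the PLCIO C library through JNI, so this Python sketch with invented names is only illustrative:

```python
# Illustrative sketch of the single-connection pattern; the real ECS is
# Java over the PLCIO C library via JNI. All names here are invented.
import threading


class PlcConnection(object):
    """One shared, lock-protected connection used by every component."""

    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self._lock = threading.Lock()
        self._tags = {}  # stands in for reads/writes over the PLC link

    def write(self, tag, value):
        with self._lock:   # serialize access from all components
            self._tags[tag] = value

    def read(self, tag):
        with self._lock:
            return self._tags.get(tag)


PlcConnection.instance().write("enclosure.az_demand", 123.4)
print(PlcConnection.instance().read("enclosure.az_demand"))
```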
Simultaneous control of multiple instruments at the Advanced Technology Solar Telescope
Author(s):
Erik M. Johansson;
Bret Goodrich
The Advanced Technology Solar Telescope (ATST) is a 4-meter solar observatory under construction at Haleakala,
Hawaii. The simultaneous use of multiple instruments is one of the unique capabilities that makes the ATST a premier
ground based solar observatory. Control of the instrument suite is accomplished by the Instrument Control System (ICS),
a layer of software between the Observatory Control System (OCS) and the instruments. The ICS presents a single
narrow interface to the OCS and provides a standard interface for the instruments to be controlled. It is built upon the
ATST Common Services Framework (CSF), an infrastructure for the implementation of a distributed control system.
The ICS responds to OCS commands and events, coordinating and distributing them to the various instruments while
monitoring their progress and reporting the status back to the OCS. The ICS requires no specific knowledge about the
instruments. All information about the instruments used in an experiment is passed by the OCS to the ICS, which
extracts and forwards the parameters to the appropriate instrument controllers. The instruments participating in an
experiment define the active instrument set. A subset of those instruments must complete their observing activities in
order for the experiment to be considered complete and are referred to as the must-complete instrument set. In addition,
instruments may participate in eavesdrop mode, outside of the control of the ICS. All instrument controllers use the same
standard narrow interface, which allows new instruments to be added without having to modify the interface or any
existing instrument controllers.
The STELLA robotic observatory on Tenerife
Author(s):
Michael Weber;
Thomas Granzer;
Klaus G. Strassmeier
The STELLA project is made up of two 1.2m robotic telescopes to simultaneously monitor stellar activity
using a high-resolution spectrograph on one telescope, and an imaging instrument on the other telescope. The
STELLA Echelle spectrograph (SES) along with the building has been in operation successfully since 2006, and
is producing spectra covering the visual wavelength range between 390 and 900 nm at a resolution of 55 000. The
stability of the spectrograph over the entire two year span, measured by monitoring 15 radial velocity standard
stars, is 30 to 150 m/s rms. The Wide-field stellar imager and photometer (WIFSIP) was put into operation in
2010, when the SES light feed was physically moved to the second telescope. We describe the final instrument configuration now in use, and report on the efficiency of the robotic scheduling employed at the observatory.
Design and implementation of a distributed system for the PAUCam camera control system
Author(s):
O. Ballester;
C. Pio;
C. Hernández-Ferrer;
S. Serrano;
N. Tonello
PAUCam consists of an array of 18 red-sensitive CCDs of 4K x 2K pixels with a system of 36 narrow-band (10 nm)
filters and 6 wide-band filters which will be installed at the William Herschel Telescope (WHT). The PAUCam Camera
Control System (CCS) is the software system in charge of the coordination of the several subsystems to acquire
exposures with PAUCam.
Towards dynamic light-curve catalogues
Author(s):
Bart Scheers;
Fabian Groffen
Time-domain astronomy is becoming a fundamental aspect of the next generation of astronomical instruments.
The timing properties will revolutionise the studies of all kinds of astronomical objects. Consequently, the huge, complex data volumes and high cadences of these facilities will force us to overhaul and extend current software solutions. LOFAR, laying the groundwork for this, will produce a continuously updated spectral light-curve catalogue of all detected sources, with real-time capabilities to cope with a growth of 50-100 TB/yr,
making it the largest dynamic astronomical catalogue. Automated pipelines use the column-store MonetDB as
their key component. We exploit SciLens, a 300+ node, 4-tier locally distributed cluster focussed on massive
I/O. Introduction of the new array-based query language, SciQL, simplifies data exploration and mining. I
will demonstrate how MonetDB/SQL & SciQL on its SciLens platform manages the millions of lightcurves for
LOFAR. Initial benchmark results confirm the linear scale-up performance over tens of TBs using tens of nodes.
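For flavor, fetching a single source's light curve from MonetDB via the standard Python DB-API might look like this sketch (the table and column names are invented, and the array-oriented SciQL syntax is not shown):

```python
# Hedged sketch: querying an invented light-curve table in MonetDB via
# the pymonetdb driver; connection details are placeholders.
import pymonetdb

conn = pymonetdb.connect(username="monetdb", password="monetdb",
                         hostname="localhost", database="lofar")
cur = conn.cursor()
cur.execute("SELECT taustart_ts, f_int FROM lightcurves "
            "WHERE source_id = %(id)s ORDER BY taustart_ts", {"id": 42})
for timestamp, flux in cur.fetchall():
    print(timestamp, flux)
conn.close()
```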
Design and implementation of a general main axis controller for the ESO telescopes
Author(s):
Stefan Sandrock;
Nicola Di Lieto;
Lorenzo Pettazzi;
Toomas Erm
Show Abstract
Most of the real-time control systems at the existing ESO telescopes were developed with "traditional" methods, using
general purpose VMEbus electronics, and running applications that were coded by hand, mostly using the C
programming language under VxWorks.
As we are moving towards more modern design methods, we have explored a model-based design approach for real-time
applications in the telescope area, and used the control algorithm of a standard telescope main axis as a first example.
We wanted to have a clear work-flow that follows the "correct-by-construction" paradigm, where the implementation is
testable in simulation on the development host, and where the testing time spent by debugging on target is minimized. It
should respect the domains of control, electronics, and software engineers in the choice of tools. It should be a target-independent approach so that the result could be deployed on various platforms.
We have selected the Mathworks tools Simulink, Stateflow, and Embedded Coder for design and implementation, and
LabVIEW with NI hardware for hardware-in-the-loop testing, all of which are widely used in industry. We describe how
these tools have been used in order to model, simulate, and test the application. We also evaluate the benefits of this
approach compared to the traditional method with respect to testing effort and maintainability.
For a specific axis controller application we have successfully integrated the result into the legacy platform of the
existing VLT software, as well as demonstrated how to use the same design for a new development with a completely
different environment.
A standard framework for developing instrument controllers for the ATST
Author(s):
John R. Hubbard;
Erik M. Johansson
The Advanced Technology Solar Telescope (ATST) is a 4-meter solar observatory under construction at Haleakala,
Hawaii. The simultaneous use of multiple instruments is one of the unique capabilities that makes the ATST the premier
ground based solar observatory. Although the operation of the instruments and the data collected varies widely across the
ATST instrument suite, the basic control functions and data recording capabilities are similar. Each instrument must be
capable of controlling its devices, mechanisms and hardware, interacting with the Instrument Control System (ICS), and
saving science data. Because of these similarities, the ATST Software Group has developed the Standard Instrument
Framework (SIF), a set of software components comprising a framework that can be used to implement instrument
controllers with common functionality for all ATST instrumentation.
The SIF is built upon the ATST Common Services Framework (CSF) and includes controllers capable of interfacing
with the ICS, managing sub-controllers and multiple camera systems, as well as coordinating the instrument’s
mechanical mechanisms and other hardware. The key to this framework is the principle that each controller has a small,
well defined task and when the individual pieces are combined, a powerful control system may easily be implemented.
Moreover, because most of the instruments for the ATST are being developed by partner institutions, the SIF allows for
standardization of the instrument control systems throughout the instrument suite and reduced software development
effort for the partners. This will lead to significant code reuse and a smaller code base that is easier to maintain.
UAF: a generic OPC unified architecture framework
Author(s):
Wim Pessemier;
Geert Deconinck;
Gert Raskin;
Philippe Saey;
Hans Van Winckel
As an emerging Service Oriented Architecture (SOA) specifically designed for industrial automation and process control, the OPC Unified Architecture specification should be regarded as an attractive candidate for controlling scientific instrumentation. Even though an industry-backed standard such as OPC UA can offer substantial added value to these projects, its inherent complexity poses an important obstacle for adopting the technology. Building OPC UA applications requires considerable effort, even when taking advantage of a COTS Software Development Kit (SDK). The OPC Unified Architecture Framework (UAF) attempts to reduce this burden by introducing an abstraction layer between the SDK and the application code in order to achieve a better separation of the technical and the functional concerns. True to its industrial origin, the primary requirement of the framework is to maintain interoperability by staying close to the standard specifications, and by expecting the minimum compliance from other OPC UA servers and clients. UAF can therefore be regarded as a software framework for quickly and comfortably developing and deploying OPC UA-based applications, while remaining compatible with third-party OPC UA-compliant toolkits, servers (such as PLCs) and clients (such as SCADA software). In the first phase, as covered by this paper, only the client side of UAF has been tackled, in order to transparently handle discovery, session management, subscriptions, monitored items, etc. We describe the design principles and internal architecture of our open-source software project, the first results of the framework running at the Mercator Telescope, and we give a preview of the planned server-side implementation.
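For comparison, the following sketch shows roughly the client-side boilerplate (session setup, a read, and subscription of a monitored item) that a framework like UAF abstracts away; it uses the separate open-source python-opcua package rather than UAF itself, and the endpoint URL and node id are made up:

```python
# Sketch of raw OPC UA client boilerplate that a framework like UAF
# hides; uses the python-opcua package, with endpoint and node invented.
from opcua import Client


class Handler(object):
    def datachange_notification(self, node, val, data):
        print("monitored item changed:", node, val)


client = Client("opc.tcp://plc.example.org:4840")
client.connect()                    # discovery + session management
try:
    temp = client.get_node("ns=2;s=Dome.Temperature")
    print("current value:", temp.get_value())
    sub = client.create_subscription(500, Handler())  # 500 ms publish rate
    sub.subscribe_data_change(temp)                   # a monitored item
finally:
    client.disconnect()
```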
The last mile of the ALMA software development: lessons learned
Author(s):
A. M. Chavan;
B. E. Glendenning;
J. Ibsen;
J. Kern;
G. Kosugi;
G. Raffi;
E. Schmid;
J. Schwarz
At the end of 2012, ALMA software development will be completed. While new releases are still being prepared
following an incremental development process, the ALMA software has been in daily use since 2008. Last year it was
successfully used for the first science observations proposed by and released to the ALMA scientific community. This
included the whole project life cycle from proposal preparation to data delivery, taking advantage of the software being
designed as an end-to-end system. This presentation will report on software management aspects that became relevant in
the last couple of years. These include a new feature driven development cycle, an improved software verification
process, and a more realistic test environment at the observatory. It will also present a forward look at the planned
transition to full operations, given that upgrades, optimizations and maintenance will continue for a long time.
Adoption of new software and hardware solutions at the VLT: the ESPRESSO control architecture case
Author(s):
R. Cirami;
P. Di Marcantonio;
I. Coretti;
P. Santin;
M. Mannetta;
V. Baldini;
S. Cristiani;
M. Abreu;
A. Cabral;
M. Monteiro;
D. Mégevand;
F. Zerbi
ESPRESSO is a fiber-fed cross-dispersed echelle spectrograph which can be operated with one or up to four Unit Telescopes (UTs) of ESO's Very Large Telescope (VLT). It will be located in the Combined-Coudé Laboratory (CCL) of the
VLT and it will be the first permanent instrument using a 16-m equivalent telescope. The ESPRESSO control software
and electronics are in charge of the control of all instrument subsystems: the four Coudé Trains (one for each UT), the
front-end and the fiber-fed spectrograph itself contained within a vacuum vessel. The spectrograph is installed inside a
series of thermal enclosures following an onion-shell principle with increasing temperature stability from outside to
inside. The proposed electronics architecture will use the OPC Unified Architecture (OPC UA) as a standard layer to
communicate with PLCs (Programmable Logic Controllers), replacing the old Instrument Local Control Units (LCUs)
for ESO instruments based on VME technology. The instrument control software will be based on the VLT Control
Software package and will use the IC0 Field Bus extension for the control of the instrument hardware. In this paper we
present the ESPRESSO software architectural design proposed at the Preliminary Design Review as well as the control
electronics architecture.
Conceptual design of the control software for the European Solar Telescope
Author(s):
P. Di Marcantonio;
R. Cirami;
P. Romano;
R. Cosentino;
I. Ermolli;
F. Giorgi
The aim of this paper is to present an overview of the conceptual design of the Control Software for the European Solar Telescope (EST), as it emerged after the successful Conceptual Design Review held in June 2011, which formally concluded the EST Preliminary Design Study. After a general end-to-end description of the ECS (EST Control Software) architecture, from operation concepts and observation preparation to the control of the planned focal plane instruments, the paper focuses on the arrangement of ECS devised to date to cope with the foreseen scientific requirements. The major EST subsystems, together with the functions to be controlled, are finally detailed and discussed.
South Pole Telescope software systems: control, monitoring, and data acquisition
Author(s):
K. Story;
E. Leitch;
P. Ade;
K. A. Aird;
J. E. Austermann;
J. A. Beall;
D. Becker;
A. N. Bender;
B. A. Benson;
L. E. Bleem;
J. Britton;
J. E. Carlstrom;
C. L. Chang;
H. C. Chiang;
H-M. Cho;
T. M. Crawford;
A. T. Crites;
A. Datesman;
T. de Haan;
M. A. Dobbs;
W. Everett;
A. Ewall-Wice;
E. M. George;
N. W. Halverson;
N. Harrington;
J. W. Henning;
G. C. Hilton;
W. L. Holzapfel;
S. Hoover;
N. Huang;
J. Hubmayr;
K. D. Irwin;
M. Karfunkle;
R. Keisler;
J. Kennedy;
A. T. Lee;
D. Li;
M. Lueker;
D. P. Marrone;
J. J. McMahon;
J. Mehl;
S. S. Meyer;
J. Montgomery;
T. E. Montroy;
J. Nagy;
T. Natoli;
J. P. Nibarger;
M. D. Niemack;
V. Novosad;
S. Padin;
C. Pryke;
C. L. Reichardt;
J. E. Ruhl;
B. R. Saliwanchik;
J. T. Sayre;
K. K. Schaffer;
E. Shirokoff;
G. Smecher;
B. Stalder;
C. Tucker;
K. Vanderlinde;
J. D. Vieira;
G. Wang;
R. Williamson;
V. Yefremenko;
K. W. Yoon;
E. Young
We present the software system used to control and operate the South Pole Telescope. The South Pole Telescope is
a 10-meter millimeter-wavelength telescope designed to measure anisotropies in the cosmic microwave background
(CMB) at arcminute angular resolution. In the austral summer of 2011/12, the SPT was equipped with a new
polarization-sensitive camera, which consists of 1536 transition-edge sensor bolometers. The bolometers are read
out using 36 independent digital frequency multiplexing (DfMux) readout boards, each with its own embedded
processors. These autonomous boards control and read out data from the focal plane with on-board software
and firmware. An overall control software system running on a separate control computer controls the DfMux
boards, the cryostat and all other aspects of telescope operation. This control software collects and monitors
data in real-time, and stores the data to disk for transfer to the United States for analysis.
A modern approach to upgrading the telescope control system of the CTIO Blanco 4-m telescope
Author(s):
Michael Warner;
Rolando Cantarutti;
German Schumacher;
Eduardo Mondaca;
Omar Estay;
Manuel Martinez;
Victor Aguirre;
Rodrigo Alvarez;
Rodrigo Leiva;
Timothy M. C. Abbott;
Nicole S. van der Bliek
In preparation for the arrival of the Dark Energy Camera (DECam) at the CTIO Blanco 4-m telescope, both the hardware
and the software of the Telescope Control System (TCS) have been upgraded in order to meet the more stringent requirements on cadence and tracking needed for efficient execution of the Dark Energy Survey1. This upgrade was
also driven by the need to replace obsolete hardware, some of it now over half a century old.
In this paper we describe the architecture of the new mount control system, and in particular the method used to develop
and implement the servo-driver portion of the new TCS. This portion of the system had to be completely rethought,
when an initial approach, based on commercial off the shelf components, lacked the flexibility needed to cope with the
complex behavior of the telescope. Central to our design approach was the early implementation of extensive telemetry,
which allowed us to fully characterize the real dynamics of the telescope. These results then served as input to extensive
simulations of the proposed new servo system allowing us to iteratively refine the control model. This flexibility will be
important later when DECam is installed, since this will significantly increase the moving mass and inertia of the
telescope.
Based on these results, a fully digital solution was chosen and implemented. The core of this new servo system is modern cRIO hardware, which combines an embedded processor with a high-performance FPGA, allowing the
execution of LabVIEW applications in real time.
Data management cyberinfrastructure for the Large Synoptic Survey Telescope
Author(s):
D. Michael Freemon;
Kian-Tat Lim;
Jacek Becla;
Gregory P. Dubois-Felsman;
Jeffrey Kantor
The Large Synoptic Survey Telescope (LSST) project is a proposed large-aperture, wide-field, ground-based telescope
that will survey half the sky every few nights in six optical bands. LSST will produce a data set suitable for answering a
wide range of pressing questions in astrophysics, cosmology, and fundamental physics. The 8.4-meter telescope will be
located in the Andes mountains near La Serena, Chile. The 3.2 Gpixel camera will take 6.4 GB images every 15
seconds, resulting in 15 TB of new raw image data per night. An estimated 2 million transient alerts per night will be
generated within 60 seconds of when the camera’s shutter closes. Processing such a large volume of data, converting the
raw images into a faithful representation of the universe, automated data quality assessment, automated discovery of
moving or transient sources, and archiving the results in useful form for a broad community of users is a major
challenge. We present an overview of the planned computing infrastructure for LSST. The cyberinfrastructure required
to support the movement, storing, processing, and serving of hundreds of petabytes of image and database data is
described. We also review the sizing model that was developed to estimate the hardware requirements to support this
environment beginning during project construction and continuing throughout the 10 years of operations.
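The quoted figures are mutually consistent, as a quick check shows (assuming roughly ten hours of observing per night, an assumption not stated in the abstract):

```python
# Back-of-envelope check of the LSST data rates quoted above, assuming
# ~10 hours of observing per night (the abstract does not state this).
image_gb = 6.4                       # one 3.2 Gpixel image at 2 bytes/pixel
cadence_s = 15
exposures = 10 * 3600 // cadence_s   # ~2400 images per night
print(exposures * image_gb / 1000)   # ~15 TB of raw image data per night
```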
ALMA software scalability experience with growing number of antennas
Author(s):
Matias Mora;
Jorge Avarias;
Alexis Tejeda;
Juan Pablo Gil;
Heiko Sommer
The ALMA Observatory is a challenging project in many ways. The hardware and software pieces were often designed
specifically for ALMA, based on overall scientific requirements. The observatory is still in its construction
phase, but already started Early Science observations with 16 antennas in September 2011, and has currently
(June 2012) 39 accepted antennas, with 1 or 2 new antennas delivered every month. The finished array will
integrate up to 66 antennas in 2014.
The on-line software is a critical part of the operations: it controls everything from the low level real-time
hardware and data processing up to the observations scheduler and data storage. Many pieces of the software are
eventually affected by a growing number of antennas, as more processes are integrated into the distributed system,
and more data flows to the Correlator and Database. Although some early scalability tests were performed in
a simulated environment, the system proved to be very dependent on real deployment conditions and several
unforeseen scalability issues have been found in the last year, starting with a critical number of about 15
antennas. Processes that grow with the number of antennas tend to quickly demand more powerful machines,
unless alternatives are implemented.
This paper describes the practical experience of dealing with (and hopefully preventing) blocking scalability
issues during the construction phase, while the expectant users push the system to its limits. This may also be
a very useful example for other upcoming radio-telescopes with a large number of receivers.
The DIRP framework: flexible HPC-based post-processing of TB-size datasets
Author(s):
Andreas Wicenec;
Christopher J. Harris;
Kevin Vinsen;
Peter J. Quinn
The immense scale of data from modern radio interferometer arrays results in processing demands requiring HPC facilities to produce scientific results. However, in the modern era such facilities are more complex than a single monolithic HPC system. The transfer and processing of scientific data must be managed across hierarchies of storage and processing architectures, including traditional HPC, heterogeneous HPC, database and visualisation systems. The ICRAR Data Intensive Research Pathfinder (DIRP) will consist of an integrated system of the hardware, middleware, tools and interfaces to support ICRAR data-intensive research, primarily focused on data flowing from the Australian SKA Pathfinder (ASKAP1) and the Murchison Widefield Array (MWA) telescopes.
A complete history of everything
Author(s):
Kyle Lanclos;
William T. S. Deich
This paper discusses Lick Observatory's local solution for retaining a complete history of everything. Leveraging
our existing deployment of a publish/subscribe communications model that is used to broadcast the state of all
systems at Lick Observatory, a monitoring daemon runs on a dedicated server that subscribes to and records
all published messages. Our success with this system is a testament to the power of simple, straightforward
approaches to complex problems. The solution itself is written in Python, and the initial version required about
a week of development time; the data are stored in PostgreSQL database tables using a distinctly simple schema.
Over time, we addressed scaling issues as the data set grew, which involved reworking the PostgreSQL
database schema on the back-end. We also duplicate the data in flat files to enable recovery or migration of the
data from one server to another. This paper covers both the initial design and the solutions to the subsequent deployment issues, the trade-offs that motivated those choices, and the integration of this history database with existing client applications.
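A daemon of this kind really can be this small; the following hedged sketch stores messages in PostgreSQL via psycopg2, with the publish/subscribe callback and the one-table schema as stand-ins for Lick's actual ones:

```python
# Sketch of a subscribe-and-record daemon in the spirit described above;
# the bus callback is a stand-in and the one-table schema is illustrative.
import psycopg2

conn = psycopg2.connect(dbname="history", user="monitor")
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS keyword_history (
                   service text, keyword text, value text,
                   ts timestamptz DEFAULT now())""")
conn.commit()


def on_message(service, keyword, value):
    # One row per published message: a complete history of everything.
    cur.execute("INSERT INTO keyword_history (service, keyword, value) "
                "VALUES (%s, %s, %s)", (service, keyword, value))
    conn.commit()


on_message("dome", "azimuth", "123.4")  # stand-in for real bus callbacks
```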
UKIRT remote operations fail-safe system
Author(s):
Bryan Gorges;
Craig Walther;
Tim Chuter
Remote operation of a four meter class telescope on the summit of Mauna Kea from 40 kilometers away presents unique
challenges. Concerns include communication links being severed, the computer controlling the enclosure becoming inoperable, non-responsive software, inclement weather, or the operator forgetting, or being unable, to close the dome during a personal emergency. These issues are addressed at the United Kingdom Infrared Telescope (UKIRT) by a series of
deadman handshakes starting on the operator's end with a graphical user interface that requires periodic attention and
culminates with hardware in the telescope that will initiate a closing sequence when regular handshake signals do not
continue. Software packages including Experimental Physics and Industrial Control Systems1 (EPICS) and a distributed,
real time computing system for instrumentation2 (DRAMA) were used in this project to communicate with hardware
control systems and to coordinate systems. After testing, this system has been used in operation since January 2011.
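The software end of such a deadman handshake reduces to a timer that must keep being reset; a minimal stand-alone sketch (the dome-closing action is a stub, not UKIRT's actual sequence):

```python
# Minimal deadman-handshake sketch: if pet() is not called again within
# TIMEOUT seconds, the fail-safe action fires. close_dome is a stub.
import threading

TIMEOUT = 60.0  # seconds allowed between handshakes


def close_dome():
    print("no handshake received -- initiating dome closing sequence")


class Deadman(object):
    def __init__(self):
        self._timer = None
        self.pet()

    def pet(self):
        # Each handshake cancels the pending action and re-arms the timer.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(TIMEOUT, close_dome)
        self._timer.daemon = True
        self._timer.start()


watchdog = Deadman()
watchdog.pet()  # called whenever a GUI or hardware handshake arrives
```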
Interaction design challenges and solutions for ALMA operations monitoring and control
Author(s):
Emmanuel Pietriga;
Pierre Cubaud;
Joseph Schwarz;
Romain Primet;
Marcus Schilling;
Denis Barkats;
Emilio Barrios;
Baltasar Vila Vilaro
The ALMA radio-telescope, currently under construction in northern Chile, is a very advanced instrument that
presents numerous challenges. From a software perspective, one critical issue is the design of graphical user
interfaces for operations monitoring and control that scale to the complexity of the system and to the massive
amounts of data users are faced with. Early experience operating the telescope with only a few antennas has
shown that conventional user interface technologies are not adequate in this context. They consume too much
screen real-estate, require many unnecessary interactions to access relevant information, and fail to provide
operators and astronomers with a clear mental map of the instrument. They increase extraneous cognitive load,
impeding tasks that call for quick diagnosis and action.
To address this challenge, the ALMA software division adopted a user-centered design approach. For the
last two years, astronomers, operators, software engineers and human-computer interaction researchers have
been involved in participatory design workshops, with the aim of designing better user interfaces based on
state-of-the-art visualization techniques. This paper describes the process that led to the development of those
interface components and to a proposal for the science and operations console setup: brainstorming sessions,
rapid prototyping, joint implementation work involving software engineers and human-computer interaction
researchers, feedback collection from a broader range of users, further iterations and testing.
GMT software and controls overview
Author(s):
José M. Filgueira;
Matthieu Bec;
José Soto;
Ning Liu;
Chien Y. Peng
The Giant Magellan Telescope Organization is designing and building a ground-based 25-meter extremely large telescope. This project represents a significant increase in complexity and performance requirements over 8-10 meter class telescope control systems. This paper presents how recent software and hardware technologies and the lessons learned from the previous generation of large telescopes can help to address some of these challenges. We illustrate our model-centric approach to capture all the functionalities and workflows of the observatory subsystems, and discuss its benefits for implementing and documenting the software and control systems. The same modeling approach is also used
to capture and facilitate the development process.
The Readout and Control System of the Dark Energy Camera
Author(s):
Klaus Honscheid;
Ann Elliott;
James Annis;
Marco Bonati;
Elizabeth Buckley-Geer;
Francisco Castander;
Luiz daCosta;
Angelo Fausti;
Inga Karliner;
Steve Kuhlmann;
Eric Neilsen;
Kenneth Patton;
Kevin Reil;
Aaron Roodman;
Jon Thaler;
Santiago Serrano;
Marcelle Soares Santos;
Eric Suchyta
The Dark Energy Camera (DECam) is a new 520 Mega Pixel CCD camera with a 3 square degree field of view designed
for the Dark Energy Survey (DES). DES is a high precision, multi-bandpass, photometric survey of 5000 square degrees
of the southern sky. DECam is currently being installed at the prime focus of the Blanco 4-m telescope at the Cerro Tololo Inter-American Observatory (CTIO). In this paper we describe SISPI, the data acquisition and control system of the
Dark Energy Camera. SISPI is implemented as a distributed multi-processor system with a software architecture based
on the Client-Server and Publish-Subscribe design patterns. The underlying message passing protocol is based on
PYRO, a powerful distributed object technology system written entirely in Python. A distributed shared variable system
was added to support exchange of telemetry data and other information between different components of the system. We
discuss the SISPI infrastructure software, the image pipeline, the observer console and user interface architecture, image
quality monitoring, the instrument control system, and the observation strategy tool.
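To give a flavor of PYRO-style distributed objects (sketched with the current Pyro4 API for illustration; SISPI's actual component and method names are not shown):

```python
# Flavor of PYRO-style client-server message passing, using the Pyro4
# API for illustration; the component and method names are invented.
import Pyro4


@Pyro4.expose
class ExposureServer(object):
    def take_exposure(self, exptime):
        return "exposure of %.1f s queued" % exptime


daemon = Pyro4.Daemon()                  # network server for remote objects
uri = daemon.register(ExposureServer())  # clients reach the object via URI
print("client side: Pyro4.Proxy(%r).take_exposure(30.0)" % str(uri))
daemon.requestLoop()                     # blocks, serving remote calls
```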
Instrument control software for the Visible Broadband Imager using ATST common services framework and base
Author(s):
Andrew Ferayorni
The Advanced Technology Solar Telescope (ATST) Common Services Framework (CSF) and ATST Base provide the
technical framework and building blocks for developing telescope and instrument control systems. The ATST Visible
Broadband Imager (VBI) is a high priority instrument with science use cases requiring a control system capable of
executing deterministic motion control tasks and synchronizing those tasks with other systems in the observatory. The
VBI control system is the first designed and developed using the ATST CSF and Base components, and therefore
provides insight into the strengths and weaknesses of using distributed control software in the instrument domain. In this
paper we lay out the design of the VBI control system, examine how the underlying ATST CSF and Base components
are utilized, and discuss where custom software is incorporated to meet real-time performance and synchronization
requirements. We present our analysis of the system design against three of the VBI use cases.
Intercontinental network control platform and robotic observation for Chinese Antarctic telescopes
Author(s):
Lingzhe Xu
Chinese astronomical exploration of the Antarctic region has begun and is moving forward, with an R&D roadmap that identifies each progressive step. Over the past several years China has set up the Kunlun station at Antarctic Dome A, and the Chinese Small Telescope ARray (CSTAR) has been up and running regularly. In addition, the first of the Antarctic Schmidt Telescopes (AST3_1) was transported to the area in 2011 and has recently been placed in service, with larger telescopes predictably to come. The Antarctic region offers some of the best sites left on Earth for astronomical observation, yet among the worst conditions for human survival and activity. To meet this challenge it is essential to establish an efficient and reliable means of remote access for routine telescope observation. This paper outlines the remote communication for CSTAR and AST3_1, and further proposes an intercontinental network control platform for the Chinese Antarctic telescope array, with fully automatic remote control and robotic observation and management. A number of technical issues for telescope access, such as unattended operation, the bandwidth available over Iridium satellite transmission, and the means of reliable and secure communication, are reviewed and analyzed.
A distributed data management system for data-intensive radio astronomy
Author(s):
Arne Grimstrup;
Venkat Mahadevan;
Olivier Eymere;
Ken Anderson;
Cameron Kiddle;
Rob Simmonds;
Erik Rosolowsky;
Andrew R. Taylor
The next generation of telescopes, such as the Square Kilometre Array (SKA), will generate orders of magnitude
more data than previous instruments, far in excess of current storage and networking system handling abilities.
To address this problem, we propose an architecture where data is distributed over several archive sites, each
holding only a portion of the overall data, that provides efficient and transparent access to the archive as a whole.
This paper describes that architecture in detail and the design and implementation of a prototype system, based
on the Integrated Rule-Oriented Data System (iRODS) software.
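The transparent-access idea can be pictured with a toy resolver: a central catalogue records which site holds each product, so a client addresses one logical archive. This is only an illustration in plain Python; the actual prototype delegates this to iRODS, and all names below are hypothetical.

```python
# Hypothetical catalogue: product id -> archive site holding that portion
SITE_CATALOGUE = {
    "obs_000123.ms": "archive-a.example.org",
    "obs_000124.ms": "archive-b.example.org",
}

def resolve(product_id):
    """Hide the physical distribution behind one logical namespace."""
    site = SITE_CATALOGUE.get(product_id)
    if site is None:
        raise KeyError("unknown product: %s" % product_id)
    return "https://%s/fetch/%s" % (site, product_id)

print(resolve("obs_000123.ms"))   # the client never needs to know the site
```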
J-PAS data management pipeline and archiving
Author(s):
D. Cristóbal-Hornillos;
N. Gruel;
J. Varela;
A. López-Sainz;
A. Ederoclite;
M. Moles;
A. J. Cenarro;
A. Marín-Franch;
J. Hernández-Fuertes;
A. Yanes-Díaz;
S. Chueca;
S. Rueda-Teruel;
F. Rueda-Teruel;
R. Luis-Simoes
Show Abstract
The J-PAS survey will observe 8000 deg² in 54 narrow-band optical filters plus 3 broad-band ones. The survey
will produce 1.2 PB of raw data in six years of observations. The treatment of about 1.5 TB per night, coming
from the 14 detectors of the JPCam camera in the JST/T250, plus one detector in the JAST/T80 camera,
shall be performed during the day after the operations. This contribution presents the software and hardware
architecture designed to process and validate the data. The processing of the images is divided into two main stages. The first, which deals with instrumental correction and data validation, is run daily. The second stage is run when a tile is completed and combines the individual corrected frames and weight maps.
To perform the astrometric calibration, image coadding and source extraction the data management pipeline uses
software from the community which is integrated through Python. The software uses a database to control the
process by storing the operations performed, parameters used and quality checks. This allows fast reprocessing
to retrieve intermediate stages of the treatment from the raw data for any data release. This approach saves
disk space by avoiding the storage of the processed individual frames. The data archiving and processing will be
done in a data center 30 km away from the observatory. It will be equipped with ~2.5 PB of storage capacity to
store the raw data and the final mosaics of the 57 filters, and processing power to deal with the incoming data.
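A minimal sketch of the database-driven bookkeeping described above, assuming a SQLite store and an invented schema (the real J-PAS database is certainly richer): every operation is recorded with its parameters and quality checks, so intermediate frames can be regenerated on demand instead of stored.

```python
import sqlite3

con = sqlite3.connect("ops.db")
con.execute("""CREATE TABLE IF NOT EXISTS operations (
    frame_id TEXT, step TEXT, params TEXT, qc_passed INTEGER)""")

def record(frame_id, step, params, qc_passed):
    """Log one processing operation for later replay or auditing."""
    con.execute("INSERT INTO operations VALUES (?, ?, ?, ?)",
                (frame_id, step, params, int(qc_passed)))
    con.commit()

record("JPCam-0001", "bias_subtraction", "master_bias=2012-06-01", True)

# Reprocessing for a data release amounts to replaying the recorded steps
# against the raw frame, which is why processed frames need not be kept.
for step, params in con.execute(
        "SELECT step, params FROM operations WHERE frame_id=?",
        ("JPCam-0001",)):
    print(step, params)
```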
The LOFAR long-term archive: e-infrastructure on petabyte scale
Author(s):
Hanno Holties;
Adriaan Renting;
Yan Grange
Show Abstract
The Low Frequency Array (LOFAR) is a large distributed radio telescope that observes at frequencies from 10 MHz to
240 MHz. LOFAR combines phased array antenna stations in Germany, the UK, France, and Sweden with forty stations
in the Netherlands. The archive of science data products is expected to grow by five petabytes per year. The LOFAR
long-term archive (LTA) provides an e-Infrastructure for the storage, distribution, and analysis of the science data
produced by LOFAR. It builds on national and international e-Infrastructure on a European scale, pioneering the international cyber-infrastructures of the future such as those needed for the Square Kilometre Array. For astronomers, the
LOFAR LTA is the principal interface not only to LOFAR data retrieval and data mining but also to processing facilities
for this data. Each site involved in the LTA provides storage capacity and optionally processing capabilities. To allow
collaboration with a variety of institutes and projects, the LOFAR LTA merges different technologies (EGI, global file
systems, Astro-WISE dataservers). For its connectivity it utilizes the national research networks and explores new
technologies for high bandwidth on demand and long distance data streaming. A centrally operated catalogue provides a
searchable database for scientific products stored in the LTA from raw visibilities to calibrated images and derived
source lists. The user administration synchronizes accounts across the LTA and is designed to connect to federated
infrastructures. For its data analysis capabilities it builds on the data processing frameworks provided by EGI and Astro-WISE.
The MWA archive infrastructure: archiving terabytes of data over dedicated WAN connections
Author(s):
Andreas Wicenec;
Dave Pallot;
Alessio Checcucci;
Slava Kitaeff;
Kevin Vinsen;
Chen Wu
Show Abstract
The Murchison Wide Field Array (MWA) is being upgraded from 32 tiles to 128 tiles of 16 dual-polarization
dipole antennas. In the course of this project the software and the data infrastructure are also undergoing a
major overhaul in order to cope with a more continuous and remote operational model; and the substantial
increase in data rate (400 MB/s from the correlator plus 160 MB/s from the real time imaging pipeline). During
the course of 2012/13 the data collected by the MWA will be transported via a dedicated 40 Gbit WAN network
link between the Murchison Radio Observatory (MRO) and Perth (700 km). However, this network will not be
available for some time; and until then, the data will be transported using disk arrays instead. Once in Perth,
the data will be ingested into a tape library. The archiving process itself consists of various steps executed either
at the MRO site or in Perth, and makes use of a modified version of the archiving system from the Atacama Large Millimeter Array (ALMA). This includes the extraction of metadata from the original raw data and the ingestion and generation of the appropriate data links for the MWA archive. The MWA correlator generates a collection of small files all belonging to the same observation. In order to optimise the network transfer and the storage of those files, they are transparently packed into larger containers at the MRO site and subsequently handled as one big file. This paper describes the setup of the MWA archiving infrastructure.
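The packing step can be sketched with a plain tar container: all the small correlator files of one observation are bundled into a single file before transfer and storage. The MWA system's actual container format and paths may differ; those below are placeholders.

```python
import glob
import tarfile

def pack_observation(obs_id, src_pattern):
    """Bundle the many small files of one observation into one container."""
    container = "%s.tar" % obs_id
    with tarfile.open(container, "w") as tar:
        for path in sorted(glob.glob(src_pattern)):
            tar.add(path)          # network and tape then see one big file
    return container

pack_observation("obs_1065880128", "/data/obs_1065880128/*.fits")
```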
ESO Archive data and metadata model
Author(s):
Adam Dobrzycki;
Cristiano da Rocha;
Ignacio Vera;
My-Hà Vuong;
Thomas Bierwirth;
Vincenzo Forchì;
Nathalie Fourniol;
Christophe Moins;
Stefano Zampieri
Show Abstract
We present the data model utilised in maintaining the lifecycle of astronomical frames in the ESO Archive
activities. The principal concept is that complete file metadata are managed separately from the data and
merged only upon delivery of the data to the end user. This concept is now applied to all ESO Archive assets:
raw observation frames originated in ESO telescopes in all Chilean sites, reduced frames generated intra-ESO
using pipeline processing, as well as the processed data generated by the PIs and delivered to the ESO Archive
through "Phase 3" infrastructure. We present the implementation details of the model and discuss future
applications.
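The merge-on-delivery concept can be illustrated with astropy.io.fits: curated metadata live in a separate store and are stamped into the FITS header only when the frame is handed to the user. Keywords and file names here are invented.

```python
from astropy.io import fits

def deliver(raw_path, metadata, out_path):
    """Copy a frame and merge the separately managed metadata into it."""
    with fits.open(raw_path) as hdul:
        for key, value in metadata.items():
            hdul[0].header[key] = value
        hdul.writeto(out_path, overwrite=True)

deliver("raw_frame.fits",
        {"OBJECT": "NGC 253", "PROG_ID": "089.X-9999"},
        "delivered_frame.fits")
```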
The ALMA OT in early science: supporting multiple customers
Author(s):
Alan Bridger;
Stewart Williams;
Stewart McLay;
Hiroshi Yatagai;
Marcus Schilling;
Andrew Biggs;
Rodrigo Tobar;
Rein H. Warmels
Show Abstract
The ALMA Observatory is currently conducting 'Early Science' observing. The Cycle 0 and Cycle 1 Calls for Proposals are
part of this Early Science, and in both the ALMA Observing Tool plays a crucial role. This paper describes how the
ALMA OT tackles the problem of making millimeter/sub-millimeter interferometry accessible to the wider community,
while allowing "experts" the power and flexibility they need.
We will also describe our approach to the challenges of supporting multiple customers, and explore the lessons learnt
from the Early Science experiences. Finally we look ahead to the challenges presented by future observing cycles.
Evolution of the phase 2 preparation and observation tools at ESO
Author(s):
D. Dorigo;
B. Amarand;
T. Bierwirth;
Y. Jung;
P. Santos;
F. Sogni;
I. Vera
Show Abstract
Throughout the course of many years of observations at the VLT, the phase 2 software applications supporting the
specification, execution and reporting of observations have been continuously improved and refined. Specifically the
introduction of astronomical surveys propelled the creation of new tools to express more sophisticated, longer-term
observing strategies often consisting of several hundreds of observations. During the execution phase, such survey
programs compete with other service and visitor mode observations and a number of constraints have to be considered.
In order to maximize telescope utilization and execute all programs in a fair way, new algorithms have been developed to
prioritize observable OBs taking into account both current and future constraints (e.g. OB time constraints, technical
telescope time) and suggest the next OB to be executed. As a side effect, a higher degree of observation automation
enables operators to run telescopes mostly autonomously with little supervision by a support astronomer. We describe
the new tools that have been deployed and the iterative and incremental software development process applied to
develop them. We present our key software technologies used so far and discuss potential future evolution both in terms
of features as well as software technologies.
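A toy version of such an OB-prioritisation step is sketched below: observable OBs are scored on current conditions plus a penalty for time windows about to close, and the top-ranked OB is suggested next. The fields and weights are invented for the example and are not ESO's actual algorithm.

```python
def suggest_next_ob(obs, now, seeing):
    """Return the highest-ranked observable OB (illustrative scoring only)."""
    def score(ob):
        if not ob["observable"] or seeing > ob["max_seeing"]:
            return float("-inf")                  # cannot execute right now
        urgency = 1.0 / max(ob["window_ends"] - now, 0.1)
        return ob["priority"] + 10.0 * urgency    # favour closing windows
    return max(obs, key=score)

queue = [
    {"id": "A", "observable": True, "max_seeing": 1.5,
     "priority": 3.0, "window_ends": 5.0},
    {"id": "B", "observable": True, "max_seeing": 1.5,
     "priority": 2.0, "window_ends": 0.5},        # window closing soon
]
print(suggest_next_ob(queue, now=0.0, seeing=1.0)["id"])   # -> "B"
```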
Accelerated speckle imaging with the ATST visible broadband imager
Author(s):
Friedrich Wöger;
Andrew Ferayorni
Show Abstract
The Advanced Technology Solar Telescope (ATST), a 4 meter class telescope for observations of the solar
atmosphere currently in its construction phase, will generate data at rates of the order of 10 TB/day with its state-of-the-art instrumentation. The high-priority ATST Visible Broadband Imager (VBI) instrument alone
will create two data streams with a bandwidth of 960 MB/s each. Because of the related data handling issues,
these data will be post-processed with speckle interferometry algorithms in near-real time at the telescope using
the cost-effective Graphics Processing Unit (GPU) technology that is supported by the ATST Data Handling
System.
In this contribution, we lay out the VBI-specific approach to its image processing pipeline, put this into the
context of the underlying ATST Data Handling System infrastructure, and finally describe the details of how
the algorithms were redesigned to exploit data parallelism in the speckle image reconstruction algorithms. An
algorithm re-design is often required to efficiently speed up an application using GPU technology; we have chosen
NVIDIA's CUDA language as the basis for our implementation. We present our preliminary results on algorithm performance using our test facilities, and use these results to make a conservative estimate of the requirements of a full system that could achieve near-real-time performance at ATST.
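The data parallelism being exploited can be shown with a NumPy stand-in for the GPU kernels: the field is cut into subfields whose power spectra are computed independently, which is exactly the structure that maps onto CUDA thread blocks. Sizes are illustrative only.

```python
import numpy as np

frame = np.random.rand(4096, 4096).astype(np.float32)
tile = 256                                   # subfield size in pixels

# reshape into a grid of independent (tile x tile) subfields
subfields = (frame.reshape(frame.shape[0] // tile, tile, -1, tile)
                  .swapaxes(1, 2))           # shape (16, 16, 256, 256)

# every subfield's power spectrum is independent of all the others,
# so on a GPU each one becomes its own block of threads
power = np.abs(np.fft.fft2(subfields)) ** 2
print(power.shape)                           # (16, 16, 256, 256)
```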
Advanced PANIC quick-look tool using Python
Author(s):
José-Miguel Ibáñez;
Antonio J. García Segura;
Clemens Storz;
Josef W. Fried;
Matilde Fernández;
Julio F. Rodríguez Gómez;
V. Terrón;
M. C. Cárdenas
Show Abstract
PANIC, the Panoramic Near Infrared Camera, is an instrument for the Calar Alto Observatory, currently being integrated in the laboratory, whose first light is foreseen for late 2012 or early 2013. We present here how the PANIC Quick-Look
tool (PQL) and pipeline (PAPI) are being implemented, using existing rapid programming Python technologies and
packages, together with well-known astronomical software suites (Astromatic, IRAF) and parallel processing techniques.
We will briefly describe the structure of the PQL tool, whose main characteristics are the use of the SQLite database and
PyQt, a Python binding of the GUI toolkit Qt.
Astro-WISE information system
Author(s):
E. A. Valentijn;
A. N. Belikov;
G. A. Verdoes Kleijn;
O. R. Williams
Show Abstract
Astro-WISE is the first information system in astronomy which covers all aspects of data processing, storage
and visualization. We describe the various concepts behind Astro-WISE, their realization and use, and the migration of Astro-WISE to other astronomical and non-astronomical information systems.
Enabling efficient electronic collaboration between LIGO and other astronomy communities using federated identity and COmanage
Author(s):
Heather Flanagan;
Marie Huynh;
Ken Klingenstein;
Scott Koranda;
Benjamin Oshrin
Show Abstract
Identity federations throughout the world including InCommon in the United States, SURFnet in the Netherlands, DFN-AAI in Germany, GakuNin in Japan, and the UK Access Management Federation for Education and
Research have made federated identities available for a large number of astronomers, astrophysicists, and other
researchers. The LIGO project has recently joined the InCommon federation and is beginning the process to
both consume federated identities from outside of LIGO and to make the LIGO identities issued to collaboration
members available for consumption by other research communities.
Consuming federated identity, however, is only the beginning. Realizing the promise of multi-messenger
astronomy requires efficient collaboration among individuals from multiple communities. Efficient collaboration
begins with federated identity but also requires robust collaboration management platforms providing consistent,
scalable identity and access control information to collaboration applications including wikis, calendars, mailing
lists and science portals. LIGO, together with collaborators from Internet2, is building the COmanage suite of
tools for Collaborative Organization Management. Using COmanage and leveraging federated identities we plan
to streamline electronic collaboration between LIGO and other astronomy projects so that scientists spend less
time managing accounts and access control and more time doing science.
REMOTES: reliable and modular telescope solution for seamless operation and monitoring of various observation facilities
Author(s):
M. Jakubec;
P. Skala;
M. Sedlacek;
M. Nekola;
J. Strobl;
M. Blazek;
R. Hudec
Show Abstract
Astronomers often need to put several pieces of equipment together and have to deploy them at a particular location.
This task can prove to be a really tough challenge, especially for distant observing facilities with intricate operating conditions, poor communication infrastructure and an unreliable power source. To complicate the task further, astronomers also expect secure and reliable operation in both attended and unattended modes, comfortable software with a user-friendly interface, and full supervision of the observation site at all times.
During reconstruction of the D50 robotic telescope facility, we faced many of the issues mentioned above. To get rid of
them, we based our solution on a flexible group of hardware modules controlling the equipment of the observation site,
connected together by the Ethernet network and orchestrated by our management software. This approach is both
affordable and powerful enough to fulfill all of the observation requirements at the same time. We quickly figured out
that the outcome of this project could also be useful for other observation facilities, because they are probably facing the
same issues we have solved during our project.
In this contribution, we will point out the key features and benefits of the solution for observers. We will demonstrate
how the solution works at our observing location. We will also discuss typical management and maintenance scenarios
and how we have supported them in our solution. Finally, the overall architecture and technical aspects of the solution
will be presented and particular design and technology decisions will be clarified.
A symbiotic relationship between HST and JWST operations software systems development
Author(s):
Denise C. Taylor;
Maria Bertch;
Robert E. Douglas Jr.;
Mark Giuliano;
Anthony Roman
Show Abstract
The Space Telescope Science Institute's development of the James Webb Space Telescope's science operations systems
has benefitted from and has been a benefit to the current operations for the Hubble Space Telescope. Changes and
improvements to systems shared by both missions have helped the HST mission keep up with newer technologies, while
providing a free, live testbed for further JWST development.
The EMIR experience in the use of software control simulators to speed up the time to telescope
Author(s):
Pablo Lopez Ramos;
J. C. López-Ruiz;
Heidy Moreno Arce;
Josefina Rosich;
José Maria Perez Menor
Show Abstract
One of the main problems facing development teams working on instrument control systems is the need to access mechanisms which are not available until well into the integration phase. The need to work with real hardware
creates additional problems like, among others: certain faults cannot be tested due to the possibility of hardware damage,
taking the system to the limit may shorten its operational lifespan and the full system may not be available during some
periods due to maintenance and/or testing of individual components.
These problems can be treated with the use of simulators and by applying software/hardware standards. Since
information on the construction and performance of electro-mechanical systems is available at relatively early stages of
the project, simulators are developed in advance (before the existence of the mechanism) or, if conventions and standards
have been correctly followed, a previously developed simulator might be used.
This article describes our experience in building software simulators and the main advantages we have identified, which are: the control software can be developed even in the absence of real hardware; critical tests can be prepared using the simulated systems; system behavior can be tested in hardware-failure situations that would put the real system at risk; and in-house integration of the entire instrument is sped up. The use of simulators allows us to reduce development, testing and integration time.
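The pattern reduces to giving the control software a software model with the same interface and limits as the future mechanism, so fault cases can be exercised without risking hardware. A minimal sketch, with invented class and parameter names:

```python
class SimulatedWheel(object):
    """Software stand-in for a filter-wheel mechanism (hypothetical)."""
    def __init__(self, n_positions=6):
        self.n_positions = n_positions
        self.position = 0

    def move_to(self, target):
        if not 0 <= target < self.n_positions:
            raise ValueError("position out of range")  # testable fault case
        self.position = target                         # instantaneous model

# A critical test prepared against the simulator, long before the real
# mechanism exists; the same test later runs against the hardware driver.
wheel = SimulatedWheel()
try:
    wheel.move_to(99)
except ValueError:
    print("out-of-range command correctly rejected")
```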
More flexibility in representing geometric distortion in astronomical images
Author(s):
David L. Shupe;
Russ R. Laher;
Lisa Storrie-Lombardi;
Jason Surace;
Carl Grillmair;
David Levitan;
Branimir Sesar
Show Abstract
A number of popular software tools in the public domain are used by astronomers, professional and amateur
alike, but some of the tools that have similar purposes cannot be easily interchanged, owing to the lack of a
common standard. For the case of image distortion, SCAMP and SExtractor, available from Astromatic.net,
perform astrometric calibration and source-object extraction on image data, and image-data geometric distortion
is computed in celestial coordinates with polynomial coefficients stored in the FITS header with the PVi_j keywords. Another widely used astrometric-calibration service, Astrometry.net, solves for distortion in pixel
coordinates using the SIP convention that was introduced by the Spitzer Science Center. Up until now, due to
the complexity of these distortion representations, it was very difficult to use the output of one of these packages
as input to the other. New Python software, along with faster-computing C-language translations, have been
developed at the Infrared Processing and Analysis Center (IPAC) to convert FITS-image headers from PV to
SIP and vice versa. It is now possible to straightforwardly use Astrometry.net for astrometric calibration and
then SExtractor for source-object extraction. The new software also enables astrometric calibration by SCAMP
followed by image visualization with tools that support SIP distortion but not PV. The software has been
incorporated into the image-processing pipelines of the Palomar Transient Factory (PTF), which generate FITS
images with headers containing both distortion representations. The software permits the conversion of archived
images, such as from the Spitzer Heritage Archive and NASA/IPAC Infrared Science Archive, from SIP to PV
or vice versa. This new capability renders unnecessary any new representation, such as the proposed TPV
distortion convention.
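The full polynomial conversion is the substance of the IPAC software and is not reproduced here, but telling the two conventions apart in a header is simple, as the hedged sketch below shows (file name hypothetical):

```python
from astropy.io import fits

def distortion_convention(header):
    """Report which distortion representation a FITS header carries."""
    if "A_ORDER" in header or "B_ORDER" in header:
        return "SIP"           # pixel-space polynomials (Spitzer convention)
    if any(k.startswith("PV1_") or k.startswith("PV2_") for k in header):
        return "PV"            # sky-space polynomials (SCAMP/TPV style)
    return "none"

print(distortion_convention(fits.getheader("calibrated_image.fits")))
```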
Virtualization in network and servers infrastructure to support dynamic system reconfiguration in ALMA
Author(s):
Tzu-Chiang Shen;
Nicolás Ovando;
Marcelo Bartsch;
Max Simmond;
Gastón Vélez;
Manuel Robles;
Rubén Soto;
Jorge Ibsen;
Christian Saldias
Show Abstract
ALMA is the first astronomical project constructed and operated under an industrial approach, owing to the huge number of elements involved. In order to achieve maximum throughput during the engineering and scientific commissioning phase, several production lines have been established to work in parallel. This decision required modifications to the original system architecture, in which all the elements are controlled and operated within a unique Standard Test Environment (STE). Advances in the network industry, together with the maturity of the virtualization paradigm, allow us to provide a solution which can replicate the STE infrastructure without changing its network address
definition. This is only possible with Virtual Routing and Forwarding (VRF) and Virtual LAN (VLAN) concepts. The
solution allows dynamic reconfiguration of antennas and other hardware across the production lines with minimum time
and zero human intervention in the cabling. We are also pushing virtualization further: classical rack-mount servers are being replaced and consolidated by blade servers, on top of which virtualized servers are centrally administered with VMware ESX. Hardware costs and system administration effort will be reduced considerably. This mechanism has been established and operated successfully during the last two years. This experience gave us the confidence to propose a solution to divide the main operational array into subarrays using the same concept, which will introduce great flexibility and efficiency into ALMA operations and may eventually simplify the ALMA core observing software, since subarray complexity would no longer need to be handled at the software level.
Reflector adjustment for a large radio telescope based on active optics
Author(s):
Tongying Li;
Zhenchao Zhang;
Aihua Li;
You Wang
Show Abstract
The reflector deformation caused by gravity, temperature, humidity, wind loading and other effects can reduce the overall performance of a large radio telescope. In this paper, considering the characteristics of the primary reflector of a 13.7 m millimeter-wave telescope, a novel reflector adjustment method based on active optics is proposed. It controls the active surface of the reflector through communication between the active-surface computer and embedded intelligent controllers driving a large number of displacement actuators. The active-surface computer estimates and controls the active surface figure in real time at any elevation angle, reducing or eliminating the adverse effects of reflector deformation and thereby increasing the resolution and sensitivity of the radio telescope through improved signal collection. A Controller Area Network/Ethernet protocol converter is designed for the communication between the active-surface control computer, acting as the host computer on the Ethernet side, and the displacement-actuator controllers on the Controller Area Network side. Each displacement actuator is driven by a stepper motor and managed by an intelligent controller using data from the active-surface computer. Closed-loop control of the stepper motor, with feedback from an optical encoder, greatly improves the control accuracy.
A mask quality control tool for the OSIRIS multi-object spectrograph
Author(s):
J. C. López-Ruiz;
Jacinto Javier Vaz Cedillo;
Alessandro Ederoclite;
Ángel Bongiovanni;
Víctor González Escalera
Show Abstract
The OSIRIS multi-object spectrograph uses a set of user-customised masks, which are manufactured on demand. The manufacturing process consists of drilling the specified slits into the mask with the required accuracy. Ensuring that the slits are in the right place when observing is of vital importance.
We present a tool for checking the quality of the mask manufacturing process, based on analyzing instrument images obtained with the manufactured masks in place. The tool extracts the slit information from these images, relates the specifications to the extracted slit information, and finally reports to the operator whether the manufactured mask fulfills the expectations of the mask designer. The tool has been built using scripting languages and standard libraries such as OpenCV, PyRAF and SciPy. The software architecture, advantages and limits of this tool in the lifecycle of a multi-object acquisition are presented.
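One plausible core for such a check, using OpenCV as the abstract mentions: threshold a through-mask exposure and take the bounding boxes of bright regions as measured slits, then compare them with the design. The thresholding choices and the OpenCV 4 findContours signature are our assumptions, not a description of the actual tool.

```python
import cv2

image = cv2.imread("mask_exposure.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
_, binary = cv2.threshold(image, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)       # measured slit geometry
    print("slit at (%d, %d), %d x %d px" % (x, y, w, h))
# Each measured box would then be matched to the designed slit position and
# size, flagging any deviation beyond the manufacturing tolerance.
```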
ALMA software regression tests: the evolution under an operational environment
Author(s):
Ruben Soto;
Víctor González;
Jorge Ibsen;
Matias Mora;
Norman Sáez;
Tzu-Chiang Shen
Show Abstract
The ALMA software is a large collection of modules, which implements the functionality needed for the observatory day-to-day operations, including among others Array/Antenna Control, Correlator, Telescope Calibration
and Data Archiving. Many software patches must periodically be applied to fix problems detected during operations or to introduce enhancements after a release has been deployed and used under regular operational
conditions. In this scenario, it has been imperative to establish, besides a strict configuration control system, a weekly regression test to ensure that the modifications applied do not impact system stability and functionality. A test suite has been developed for this purpose, which reflects the operations performed by the commissioning and operations groups and aims to detect problems associated with the changes introduced in different versions
of ALMA software releases. This paper presents the evolution of the regression test suite, which started at the
ALMA Test Facility, and that has been adapted to be executed in the current operational conditions. Topics
about the selection of the tests to be executed, the validation of the obtained data and the automation of the
test suite are also presented.
ALMA operation support software and infrastructure
Author(s):
Tzu-Chiang Shen;
Ruben Soto;
Matias Mora;
Johnny Reveco;
Jorge Ibsen
Show Abstract
The Atacama Large Millimeter /submillimeter Array (ALMA) will be a unique research instrument composed of at least
66 reconfigurable high-precision antennas, located at the Chajnantor plain in the Chilean Andes at an elevation of 5000
m. Each antenna contains instruments capable of receiving radio signals from 31.3 GHz up to 950 GHz. These signals
are correlated inside a Correlator and the spectral data are finally saved into the Archive system together with the
observation metadata. This paper describes the progress in the development of the ALMA operation support software,
which aims to increase the efficiency of the testing, distribution, deployment and operation of the core observing
software. This infrastructure has become critical as the main array software evolves during the construction phase. In
order to support and maintain the core observing software, it is essential to have a mechanism to align and distribute the
same version of software packages across all systems. This is achieved rigorously with weekly based regression tests and
strict configuration control. A build farm to provide continuous integration and testing in simulation has been established
as well. Given the large number of antennas, it is also imperative to have a monitoring system to allow trend analysis of each component in order to trigger preventive maintenance activities. A challenge for which we are preparing this year consists of testing the whole ALMA software in complete end-to-end operation, from proposal submission to data distribution to the ALMA Regional Centers. The experience gained during deployment, testing and operation
support will be presented.
Development of telescope control system for the 50cm telescope of UC Observatory Santa Martina
Author(s):
Tzu-Chiang Shen;
Ruben Soto;
Johnny Reveco;
Leonardo Vanzi;
Jose M. Fernández;
Pedro Escarate;
Vincent Suc
Show Abstract
The main telescope of the UC Observatory Santa Martina is a 50cm optical telescope donated by ESO to Pontificia
Universidad Catolica de Chile. During the past years the telescope has been refurbished and used as the main facility for
testing and validating new instruments under construction by the center of Astro-Engineering UC. As part of this work,
the need to develop a more efficient and flexible control system arises. The new distributed control system has been
developed on top of Internet Communication Engine (ICE), a framework developed by Zeroc Inc. This framework
features a lightweight but powerful and flexible inter-process communication infrastructure and provides binding to
classic and modern programming languages such as C/C++, Java, C#, Ruby, Objective-C, etc. The result of this work shows ICE to be a real alternative to CORBA and other de facto distributed-programming frameworks. A classical control software architecture has been chosen, comprising an observation control system (OCS), the orchestrator of the observation, which controls the telescope control system (TCS) and the detector control system (DCS). The real-time control and monitoring system is deployed and running on ARM-based single-board computers. Other features such as logging and configuration services have been developed as well. Interoperation with other major astronomical control frameworks is foreseen in order to achieve a smooth integration of instruments when they are deployed at the main observatories in the north of Chile.
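For flavour, a bare-bones Ice client in Python is sketched below; the proxy string, host name and the commented TelescopePrx interface are hypothetical and would in practice come from Slice definitions compiled with slice2py.

```python
import sys
import Ice

communicator = Ice.initialize(sys.argv)
try:
    # untyped proxy to a (hypothetical) telescope servant
    base = communicator.stringToProxy("Telescope:default -h tcs-host -p 10000")
    # with the Slice-generated module one would narrow and call it, e.g.:
    #   telescope = ObsCtrl.TelescopePrx.checkedCast(base)
    #   telescope.slew(ra, dec)
    print("obtained proxy:", base)
finally:
    communicator.destroy()
```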
The MUSE observation preparation tool
Author(s):
L. Piqueras;
J. Richard;
R. Bacon;
A. Pecontal;
Pedro Baksai;
Joel Vernet
Show Abstract
MUSE (Multi Unit Spectroscopic Explorer) is an integral-field spectrograph which will be mounted on the Very Large
Telescope (VLT). MUSE is being built for ESO by a European consortium under the supervision of the Centre de
Recherche Astrophysique de Lyon (CRAL).
In this context, CRAL is responsible for the development of dedicated software to help MUSE users prepare and submit
their observations. This software, called MUSE-PS, is based on the ESO SkyCat tool that combines visualization of
images and access to catalogs and archive data for astronomy. MUSE-PS has been developed as a plugin to SkyCat to
add new features specific to MUSE observations.
In this paper, we present the MUSE observation preparation tool itself and especially its specific functionalities:
definition of the center of the MUSE field of view and orientation, selection of the VLT guide star for the different
modes of operations (Narrow Field Mode or Wide Field Mode, with or without AO). We will also show customized
displays for MUSE (zoom on specific area, help with MUSE mosaïcing and generic offsets, finding charts …).
Korea microlensing telescope network: data management plan
Author(s):
Chung-Uk Lee;
Dong-Jin Kim;
Seung-Lee Kim;
Byeong-Gon Park;
Sang-Mok Cha
Show Abstract
We are developing three 1.6m optical telescopes and 18k by 18k mosaic CCD cameras. These telescopes will be
installed and operated at three southern astronomical sites in Chile, South Africa, and Australia for the Korea
Microlensing Telescope Network (KMTNet) project. The main scientific goal of the project is to discover earth-like
extrasolar planets using the gravitational microlensing technique. To achieve the goal, each telescope at three sites will
continuously monitor a specific region of the Galactic bulge with a 2.5-minute cadence for five years. Assuming at most 12 hours of observation per night, about 200 GB of file space is required for one night of observations at each observatory. If we consider the whole project period and the data-processing procedure, petabyte-class data storage, a high-speed network, and high-performance computers are essential. In this paper, we introduce the KMTNet data management plan that handles these gigantic data volumes: observation data collection, image calibration, the data reduction pipeline, database archiving, and backup.
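The quoted nightly volume is easy to sanity-check, assuming 16-bit pixels and ignoring FITS overheads (both assumptions ours, not the paper's):

```python
n_pix = 18432 ** 2                 # 18k x 18k mosaic CCD
bytes_per_exposure = n_pix * 2     # 16 bits per pixel (assumed)
exposures = 12 * 60 / 2.5          # 12 h night at a 2.5-minute cadence
total_gb = exposures * bytes_per_exposure / 1e9
print("%.0f exposures, ~%.0f GB per night" % (exposures, total_gb))
# -> 288 exposures, ~196 GB: consistent with the quoted ~200 GB per night
```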
HARPS-N: software path from the observation block to the image
Author(s):
D. Sosnowska;
M. Lodi;
X. Gao;
N. Buchschacher;
A. Vick;
J. Guerra;
M. Gonzalez;
D. Kelly;
C. Lovis;
F. Pepe;
E. Molinari;
A. C. Cameron;
D. Latham;
S. Udry
Show Abstract
HARPS North is the twin of the HARPS (High Accuracy Radial velocity Planet Searcher) spectrograph operating at La Silla (Chile); it was recently installed on the TNG at the La Palma observatory and is used to follow up the "hot" candidates delivered by the Kepler satellite. HARPS-N is delivered with its own software that integrates completely with the TNG control system. Special care has been dedicated to developing tools that assist the astronomers during the whole process of taking images, from the observation schedule to the raw image acquisition. All these tools are presented in the
paper. In order to provide a stable and reliable system, the software has been developed with concepts like failover and high availability in mind. HARPS-N is made of heterogeneous systems, from ordinary computers to real-time systems, which is why the standard message-queue middleware ActiveMQ was chosen to provide the communication between the different processes. The path of operations, starting with the Observation Blocks and ending with the FITS frames, is fully automated and could allow, in the future, completely remote observing runs optimized for time and quality constraints.
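Talking to ActiveMQ from Python is commonly done over the STOMP protocol; the snippet below uses the stomp.py package to publish a message, with broker address, queue name and payload all invented (HARPS-N's own clients may use a different binding).

```python
import stomp

conn = stomp.Connection([("localhost", 61613)])    # ActiveMQ STOMP port
conn.connect(wait=True)
conn.send(destination="/queue/observation.blocks",
          body="OB 42: start exposure")
conn.disconnect()
```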
Research of simulation framework for telescope wireless networks control system
Author(s):
Xiaoying Shuai;
Huanyan Qian
Show Abstract
To capture both the system dynamics and the network communication events in a large telescope control system (TCS), and to depict the real network control system (NCS) accurately, a co-simulation platform must be developed. This is challenging for very large or large telescopes because the TCS contains thousands of controlled objects. Using a Wireless Network Control System (WNCS) in a TCS is a growing trend, and building a distributed control system supported by wireless networks is a challenging task that requires a new design and simulation approach. This paper describes a co-simulation framework for a telescope wireless network control system. The co-simulation platform consists of a wireless network simulation subsystem, an actuator simulation subsystem, a control simulation subsystem and an interface subsystem. The platform can help telescope WNCS designers optimize the design of the telescope control system and improve the performance of the TCS.
Toolkit of automated database creation and cross-match
Author(s):
Yanxia Zhang;
Hongwen Zheng;
Tong Pei;
Yongheng Zhao
Show Abstract
Astronomy has entered a full-wavelength, data-avalanche era. Astronomical data are measured in terabytes, even petabytes. How to store, manage and analyze such massive data is an important issue in astronomy. To free astronomers from the data-processing burden and let them concentrate on science, various valuable and convenient tools (e.g. Aladin, VOSpec, VOPlot) have been developed by VO projects. To meet this requirement, we have developed a toolkit that performs automated database creation, automated database-index creation and cross-matching. The toolkit provides a good interface for users. The cross-match task may be carried out between local databases, between remote databases, or between a local and a remote database. Large-scale cross-matches are also easily achieved, and their speed is quite satisfactory.
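As one concrete realisation of the cross-match task, the standard positional match between two catalogues can be written with astropy's SkyCoord; the toolkit itself works against databases, so this is purely illustrative.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

cat1 = SkyCoord(ra=[10.68, 83.82] * u.deg, dec=[41.27, -5.39] * u.deg)
cat2 = SkyCoord(ra=[10.69, 83.80] * u.deg, dec=[41.26, -5.40] * u.deg)

# nearest neighbour in cat2 for every source in cat1
idx, sep2d, _ = cat1.match_to_catalog_sky(cat2)
matched = sep2d < 1 * u.arcmin          # accept matches within 1 arcmin
print(idx, sep2d.arcsec, matched)
```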
SPHERE instrumentation software: a progress report
Author(s):
A. Baruffolo;
D. Fantinel;
L. Gluck;
B. Salasnich;
G. Zins;
P. Steiner;
M. Micallef;
P. Bruno;
D. Popovic;
R. H. Donaldson;
E. Fedrigo;
M. Kiekebusch;
C. Soenke;
M. Suarez Valles
Show Abstract
SPHERE INS is the software devoted to the control of the SPHERE "Planet Finder Instrument". SPHERE is a second
generation instrument for the VLT whose prime objective is the discovery and study of new extra-solar giant planets
orbiting nearby stars. The instrument is currently assembled and being tested. It is expected to undergo Preliminary
Acceptance in Europe before the end of 2012.
SPHERE INS, besides controlling the instrument functions, implements all observation, calibration and maintenance
procedures. It includes on-line data reduction procedures, necessary during observations and calibrations, as well as
quick-look procedures that allow monitoring the status of ongoing observations. SPHERE INS also manages the external
interfaces with the VLT Telescope Control Software, the High-level Observing Software and the Data Handling System.
It provides both observing and engineering graphical user interfaces. In this paper we give a brief review of the SPHERE
INS design. We then report about the current status of the software, the activities concerning its integration with the
Instrument and the testing and validation procedures.
Remote monitoring and fault recovery for FPGA-based field controllers of telescope and instruments
Author(s):
Yuhua Zhu;
Dan Zhu;
Jianing Wang
Show Abstract
With their increasing size and functionality, modern telescopes widely use a control architecture of a central control unit plus field controllers. An FPGA-based field controller has the advantage of being field-programmable, which makes it very convenient to modify the software and hardware of the control system, and it provides a good platform for implementing new control schemes. Because there are many controlled nodes working in poor environments at scattered locations, the reliability and stability of the field controllers deserve full attention.
This paper mainly describes how we use FPGA-based field controllers and remote Ethernet access to construct a multi-node monitoring system. When a failure appears, the FPGA chip first attempts self-recovery in accordance with predefined recovery strategies. If the chip cannot be restored, remote reconstruction of the field controller can be performed through network intervention. The paper also introduces the network-based remote reconstruction solution for the controller, the system structure and transport protocol, and the implementation methods, presenting the hardware and software design ideas based on the FPGA. In actual operation on large telescopes the desired results have been achieved: the improvement increases system reliability and reduces the maintenance workload, showing good prospects for application and popularization.
Commissioning the VST Telescope Control Software
Author(s):
Pietro Schipani;
Javier Argomedo;
Laurent Marty
Show Abstract
Although the VST telescope control software is based on the heritage of the other ESO telescopes, there is almost no
module which has not been, at least, customized for its specific system. This paper reviews the lessons learned during the
telescope commissioning in terms of advantages and disadvantages coming from the reuse of the VLT and ATs software.
Using ODGWs with GSAOI: software and firmware implementation challenges
Author(s):
Peter J. Young;
Peter McGregor;
Jan van Harmelen;
Benoît Neichel
Show Abstract
The Gemini South Adaptive-Optics Imager (GSAOI) has recently been commissioned on the Gemini South telescope.
Designed for use with the Gemini GeMS Multi-Conjugate Adaptive Optics System, GSAOI makes use of the HAWAII-
2RG (H2RG) On-Detector Guide Window (ODGW) feature where guide windows positioned in each of the four H2RG
detectors provide GeMS with tip-tilt and flexure corrections. This paper concentrates on the complex software and
firmware required for operating the ODGWs and for delivering the performance required by GeMS. Software
architecture, algorithms, performance and the implementation platform for the current on-telescope solution are detailed.
Cure-WISE: HETDEX data reduction with Astro-WISE
Author(s):
J. M. Snigula;
M. E. Cornell;
N. Drory;
Max. Fabricius;
M. Landriau;
G. J. Hill;
K. Gebhardt
Show Abstract
The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) is a blind spectroscopic survey to map the
evolution of dark energy using Lyman-alpha emitting galaxies at redshifts 1.9 < z < 3.5 as tracers. The survey
instrument, VIRUS, consists of 75 IFUs distributed across the 22-arcmin field of the upgraded 9.2-m HET. Each
exposure gathers 33,600 spectra. Over the projected five year run of the survey we expect about 170 GB of data
per night. For the data reduction we developed the Cure pipeline. Cure is designed to automatically find and
calibrate the observed spectra, subtract the sky background, and detect and classify different types of sources.
Cure employs rigorous statistical methods and complete pixel-level error propagation throughout the reduction
process to ensure Poisson-limited performance and meaningful significance values. To automate the reduction of
the whole dataset we implemented the Cure pipeline in the Astro-WISE framework. This integration provides
for HETDEX a database backend with complete dependency tracking of the various reduction steps, automated
checks, and a searchable interface to the detected sources and user management. It can be used to create various
web interfaces for data access and quality control. Astro-WISE allows us to reduce the data from all the IFUs in
parallel on a compute cluster. This cluster allows us to reduce the observed data in quasi real time and still have
excess capacity for rerunning parts of the reduction. Finally, the Astro-WISE interface will be used to provide
access to reduced data products to the general community.
Multiple guide star acquisition software for LINC-NIRVANA
Author(s):
T. Bertram;
F. Kittmann;
L. Mohr
Show Abstract
LINC-NIRVANA is the near-infrared interferometric imaging camera for the Large Binocular Telescope. Once
operational, it will provide an unprecedented combination of angular resolution, sensitivity and field of view.
Its layer-oriented MCAO systems (one for each arm of the interferometer) are conjugated to the ground layer
and an additional layer in the upper atmosphere. The wavefront sensors can use up to 12 natural guide stars
for wavefront sensing. Up to 12 opto-mechanical units have to be accurately positioned to coincide with the
positions of the natural guide stars in the focal plane. Positioning software will coordinate the motion of these units. It has to fulfill a number of requirements: collisions between the opto-mechanical units have to be prevented at all times; the units shall be positionable as close to each other as possible without touching their neighbors; and, to reduce the acquisition overhead, the units shall move in parallel. Different positioning modes have to be supported: not only guide star acquisition, but also positioning-model corrections and common offsets will be commanded.
In this presentation we will outline the requirements and use cases of the positioning software. The logic
that will be used to prevent collisions will be discussed as well as the algorithm that can be used to assign the
opto-mechanical units to the guide stars.
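One way to compute such an assignment, offered purely as an illustration rather than LINC-NIRVANA's actual algorithm, is to minimise the total travel with a linear assignment solver; collision checks along the resulting paths would still have to follow.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

units = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # current positions
stars = np.array([[9.0, 1.0], [1.0, 9.0], [1.0, 1.0]])    # guide star targets

# cost[i, j] = travel distance of unit i to star j
cost = np.linalg.norm(units[:, None, :] - stars[None, :, :], axis=-1)
unit_idx, star_idx = linear_sum_assignment(cost)   # minimal total travel
for u_i, s_i in zip(unit_idx, star_idx):
    print("unit %d -> star %d (%.1f)" % (u_i, s_i, cost[u_i, s_i]))
```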
OpenROCS: a software tool to control robotic observatories
Author(s):
Josep Colomé;
Josep Sanz;
Francesc Vilardell;
Ignasi Ribas;
Pere Gil
Show Abstract
We present the Open Robotic Observatory Control System (OpenROCS), an open source software platform developed
for the robotic control of telescopes. It acts as a software infrastructure that executes all the necessary processes to
implement responses to the system events that appear in the routine and non-routine operations associated to data-flow
and housekeeping control. The OpenROCS software design and implementation provides a high flexibility to be adapted
to different observatory configurations and event-action specifications. It is based on an abstract model that is
independent of the specific hardware or software and is highly configurable. Interfaces to the system components are
defined in a simple manner to achieve this goal. We give a detailed description of the version 2.0 of this software, based
on a modular architecture developed in PHP with XML configuration files, using standard communication protocols
to interface with applications for hardware monitoring and control, environment monitoring, scheduling of tasks, image
processing and data quality control. We provide two examples of how it is used as the core element of the control system
in two robotic observatories: the Joan Oró Telescope at the Montsec Astronomical Observatory (Catalonia, Spain) and
the SuperWASP Qatar Telescope at the Roque de los Muchachos Observatory (Canary Islands, Spain).
The 3.6 m optical telescope for ARIES: the control system
Author(s):
Eric Gabriel;
Christian Bastin;
Maxime Piérard
Show Abstract
AMOS SA has been awarded the contract for the design, manufacturing, assembly, testing and on-site installation (Devasthal site, Nainital, at 2540 m altitude) of the 3.6 m optical telescope for ARIES (Aryabhatta Research Institute of Observational Sciences), Nainital (India).
This paper describes the architecture deployed for the whole telescope control. It comprises, among others, the Telescope Control System (TCS), the Active Optics System (AOS) and the Auto-Guiding Unit (AGU). The TCS generates the telescope axis trajectories from the celestial coordinates selected by the operator and drives the main axes. The AOS generates the force set points for each M1 actuator and the position set point of the M2 hexapod from the data given by a wavefront sensor. The AGU derives the main-axis corrections from the movement of the guide star on the guiding CCD. The modules communicate among themselves to optimize the telescope behavior, and with the Observatory Control System (OCS) for data reporting and synchronization with the instrument.
Technical solutions in preparing data for the Keck Observatory Archive (KOA)
Author(s):
Hien D. Tran;
Jeff A. Mader;
Robert W. Goodrich;
Myrna Tsubota
Show Abstract
The Keck Observatory Archive (KOA), which began curating and serving data in 2004, was developed many years after
the W. M. Keck Observatory (WMKO) came into operation. Since the instruments on the twin Keck telescopes were never designed with an archive in mind, the metadata contained in the original FITS headers were not adequate for proper archiving. Some examples of the challenges facing the process of making the data suitable
for archiving include: assigning data to the correct owner and program, especially on nights split between two or more
PIs; distinguishing science files from calibration files; and identifying the type of calibration. We present some software
techniques that prepare and evaluate the data, adding content to the FITS headers and "retrofitting" the metadata in order
to support archiving Keck legacy data. We also describe tools developed to ensure a smooth ingestion of data for current
and future instruments. We present briefly our method for controlling and monitoring the data transfer between WMKO
in Hawaii and the NASA Exoplanet Science Institute (NExScI) in California, where the data are physically hosted.
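In spirit, the retrofitting amounts to classifying each legacy frame and writing the assigned metadata back into its header. The sketch below uses astropy.io.fits with an invented classification rule and invented keywords; it simplifies what KOA actually does.

```python
from astropy.io import fits

def retrofit(path, prog_id):
    """Stamp archive metadata into a legacy FITS header (illustrative)."""
    with fits.open(path, mode="update") as hdul:
        hdr = hdul[0].header
        # crude science/calibration split: shutter closed => calibration
        is_cal = hdr.get("SHUTTER", "open") == "closed"
        hdr["IMAGETYP"] = "calib" if is_cal else "science"
        hdr["PROGID"] = prog_id      # owner resolved from the night schedule
        hdul.flush()

retrofit("legacy_frame.fits", "U123")
```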
Open source pipeline for ESPaDOnS reduction and analysis
Author(s):
Eder Martioli;
Doug Teeple;
Nadine Manset;
Daniel Devost;
Kanoa Withington;
Andre Venne;
Megan Tannock
Show Abstract
OPERA is a Canada-France-Hawaii Telescope (CFHT) open source collaborative software project currently under
development for an ESPaDOnS echelle spectro-polarimetric image reduction pipeline. OPERA is designed to be
fully automated, performing calibrations and reduction, producing one-dimensional intensity and polarimetric
spectra. The calibrations are performed on two-dimensional images. Spectra are extracted using an optimal
extraction algorithm. While primarily designed for CFHT ESPaDOnS data, the pipeline is being written to be
extensible to other echelle spectrographs. A primary design goal is to make use of fast, modern object-oriented
technologies. Processing is controlled by a harness, which manages a set of processing modules, that make use
of a collection of native OPERA software libraries and standard external software libraries. The harness and
modules are completely parametrized by site configuration and instrument parameters. The software is open-
ended, permitting users of OPERA to extend the pipeline capabilities. All these features have been designed to
provide a portable infrastructure that facilitates collaborative development, code re-usability and extensibility.
OPERA is free software with support for both GNU/Linux and MacOSX platforms. The pipeline is hosted on
SourceForge under the name "opera-pipeline".
MESA: Mercator scheduler and archive system
Author(s):
Florian Merges;
Saskia Prins;
Wim Pessemier;
Gert Raskin;
Jesus Perez Padilla;
Hans Van Winckel;
Conny Aerts
Show Abstract
We have developed an observing scheduling and archive system for the 1.2 meter Mercator Telescope. The goal
was to optimize the specific niche of this modern small telescope in observational astrophysics: the building-up
of long-term time series of photometric or high-resolution spectroscopic data with appropriate sampling for any
given scientific program. This system allows PIs to easily submit their technical requirements and keep track of
the progress of the observing programmes. The scheduling system provides the observer with an optimal schedule
for the night which takes into account the current observing conditions as well as the priorities and requirements
of the programmes in the queue. The observer can conveniently plan an observing night but also quickly adapt
it to changing conditions. The archiving system automatically processes new files as they are created, including
reduced data. It extracts the metadata and performs the normalization. A user can query, inspect and retrieve
observing data. The progress of individual programmes, including timelines and reduced-data plots, can be seen at any time. Our MESA project is based on free and open source software (FOSS) using the Python programming language. The system is fully integrated with the Mercator Observing Control System (MOCS).
SDAI: a key piece of software to manage the new wideband backend at Robledo
Author(s):
J. R. Rizzo;
M. Gutiérrez Bustos;
T. B. H. Kuiper;
J. Cernicharo;
I. Sotuela;
A. Pedreira
Show Abstract
A joint collaborative project was recently developed to provide the Madrid Deep Space Communications Complex
with a state-of-the-art wideband backend. This new backend provides from 100 MHz to 6 GHz of instantaneous bandwidth, and spectral resolutions from 6 to 200 kHz. The backend includes a new intermediate-frequency
processor, as well as an FPGA-based FFT spectrometer, which manages thousands of spectroscopic channels in real time. All this equipment needs to be controlled and operated by common software, which has to synchronize activities among the devices involved, and also with the observing program. The final output should be a calibrated spectrum, readable by standard radio astronomical tools for further processing. The software developed to this end is named the "Spectroscopic Data Acquisition Interface" (SDAI). SDAI is written in Python 2.5, using PyQt4 for the user interface. Via an Ethernet socket connection, SDAI receives astronomical information (source, frequencies, Doppler correction, etc.) and the antenna status from the observing program. It then synchronizes the observations at the required frequency by tuning the synthesizers through their USB ports;
finally SDAI controls the FFT spectrometers through UDP commands sent by sockets. Data are transmitted
from the FFT spectrometers by TCP sockets, and written as standard FITS files. In this paper we describe the
modules built, depict a typical observing session, and show some astronomical results using SDAI.
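A stripped-down version of that control flow in Python's standard socket module is shown below; host names, ports and command strings are placeholders, not the real SDAI protocol.

```python
import socket

SPECTROMETER = ("fft-spectrometer.example.org", 5000)   # hypothetical

# send a control command over UDP
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"START_INTEGRATION 10", SPECTROMETER)
udp.close()

# read the resulting spectrum back over TCP
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("fft-spectrometer.example.org", 5001))
data = b""
while True:
    chunk = tcp.recv(65536)
    if not chunk:
        break
    data += chunk
tcp.close()
# 'data' would then be calibrated and written out as a standard FITS file
```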
Confronting the numerical simulations of the VLT/MUSE instrument with the first real data
Author(s):
Aurélien Jarno;
Roland Bacon;
Arlette Pécontal-Rousset;
Ole Streicher;
Peter Weilbacher
Show Abstract
The Multi Unit Spectroscopic Explorer (MUSE) instrument is a second-generation integral-field spectrograph
in development for the Very Large Telescope (VLT), operating in the visible and near IR wavelength range
(465-930 nm). Given the complexity of MUSE, we have developed a numerical instrument simulator which covers the whole acquisition chain, from the atmosphere through the telescope down to the detectors. It takes both optical aberrations and diffraction effects into account by propagating a wavefront through the instrument according to the Fourier optics concept.
This simulator is used to produce simulated exposures, in order to develop the data reduction software
and to develop and validate the test procedures of the assembly, integration and tests phase. The MUSE
instrument is currently being integrated in CRAL, and first real exposures have been taken. This paper
compares and analyses the differences between the real data and the numerical simulations, in order to
improve the instrument simulator and make it more realistic.
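The heart of the Fourier-optics propagation is compact enough to sketch with NumPy: a pupil wavefront (aperture times a phase map of aberrations) is transformed to the focal plane by an FFT, and its squared modulus is the diffraction PSF. The grid size and aberration below are illustrative, not MUSE's.

```python
import numpy as np

n = 512
y, x = np.indices((n, n)) - n / 2
pupil = (np.hypot(x, y) < n / 8).astype(float)        # circular aperture
phase = 0.3 * ((x / n) ** 2 - (y / n) ** 2) * pupil   # toy aberration [rad]

wavefront = pupil * np.exp(1j * phase)
field = np.fft.fftshift(np.fft.fft2(wavefront))       # focal-plane amplitude
psf = np.abs(field) ** 2
psf /= psf.sum()                                      # normalised PSF
```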
The future of TNG telescope control system
Author(s):
Jose Guerra Sr.;
Jose San Juan;
Marcello Lodi;
Nauzet Hernandez
Show Abstract
At a glance, the Telescopio Nazionale Galileo (TNG) is going to present a new Telescope Control System (a.k.a. NTCS) following the new paths targeted for this medium-size, ground-based telescope. The core of this new TCS is a new system and software architecture which allows the telescope to reach the new goals with a higher degree of efficiency, automation and flexibility.
Software-centric view on the LINC-NIRVANA beam control concept
Author(s):
Jan Trowitzsch;
Thomas Bertram
Show Abstract
The near-infrared interferometric imaging camera for the Large Binocular Telescope, LINC-NIRVANA, is equipped
with dedicated multi-conjugated adaptive optics systems and will provide an unprecedented combination of angular
resolution, sensitivity, and field of view.
Tight requirements resulting from long exposure interferometric imaging over a large field of view need to
be fulfilled. Both incoming beams have to coincide in the focal plane of the science detector. Their pointing
origins, offsets, orientations, and plate scales have to match each other and must not change during observations.
Therefore, active beam control beyond fringe tracking and adaptive optics is essential. The beams need to be
controlled along the complete optical path down to the combined focal plane.
This paper describes the beam control aspects from a software-centric point of view. We give an outline on
the overall distributed control software architecture of LINC-NIRVANA. Furthermore, we center on the beam
control specific features and related functionality as foreseen and implemented in the LINC-NIRVANA software
packages.
Design of LAMOST data processing and production database
Author(s):
Yanxin Guo;
Ali Luo;
Fengfei Wang;
Zhongrui Bai;
Jian Li
Show Abstract
Before LAMOST spectra are released, the raw data need to go through a series of processes, i.e. a pipeline, after observation, including 2D reduction, spectral analysis and eyeball identification. Utilizing a database to integrate these steps is a sound strategy. Using a database reduces the coupling between the modules, making it more convenient to add or remove them, and makes the dataflow clearer. The information on a specific object, from target selection to intermediate results and spectrum production, can be efficiently accessed and traced back through database searches rather than by reading FITS files. Furthermore, since the pipeline has not yet been perfected, an eyeball check is needed before the spectra are released, and an appropriate database shortens the feedback cycle of the eyeball-check results, making improvement of the pipeline more targeted. Finally, the database can serve as a data-mining tool for the statistics and analysis of massive astronomical data. This article focuses on the database design and the data-processing flow built on it for LAMOST. The database design requirements of the existing routines, such as their inputs/outputs and the relationships or dependencies between them, are introduced. Accordingly, a database structure suited for multi-version data processing and eyeball verification is presented. The dataflow, how the pipeline is integrated on top of such a dedicated database system, and how it works are also explained. In addition, some user interfaces, eyeball-check interfaces and statistical functions are presented.
The improvement of CCD auto-guiding system for 2.5m telescope
Author(s):
Liyan Chen;
Zhenchao Zhang;
Hang Wang
Show Abstract
The CCD auto-guiding star system is a significant part of a telescope control system: it minimizes tracking
errors and helps acquire high-resolution data. In this paper, improved algorithms for the off-axis CCD
auto-guiding star system are designed and applied. In particular, a de-rotator algorithm is added for the
large-field Alt-Az telescope to keep stars at their original positions. Software was developed that communicates
with the CCD camera to collect, analyze, and process the data, and then sends the results to the Alt-Az control
system in real time, within 160 ms. Novel algorithms are used to calculate the centroids of stars. The
experimental results were high-resolution and reliable: the RMS of the star-offset vector reaches 0.03 pixel.
In addition, experiments showed that the software works steadily over long periods. The CCD auto-guiding
software has now been deployed on the f/8 reflecting Alt-Az telescope with a 2.5 m aperture and a 1-degree field of
view.
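The abstract calls the centroid algorithms novel without spelling them out; as a baseline for comparison, the following minimal sketch shows the standard intensity-weighted centroid that such guiders typically build on (the background handling here is illustrative).

    import numpy as np

    def centroid(img):
        """Sub-pixel (x, y) position of a star in a small guide-camera cutout."""
        data = img.astype(float) - np.median(img)   # crude background removal
        data[data < 0] = 0.0                        # clip residual noise
        y, x = np.indices(data.shape)
        total = data.sum()
        return (x * data).sum() / total, (y * data).sum() / total

The offset between successive centroids and the reference position gives the tracking-error vector that is sent, together with the de-rotator correction, to the Alt-Az control system.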
Diving into the Sardinia Radio Telescope minor servo system
Author(s):
M. Buttu;
A. Orlati;
G. Zacchiroli;
M. Morsiani;
F. Fiocchi;
F. Buffa;
G. Maccaferri;
G. P. Vargiu;
C. Migoni;
S. Poppi;
S. Righini;
A. Melis
The Sardinia Radio Telescope (SRT) is a new 64-metre, Gregorian-shaped antenna built in Sardinia (Italy). It
is designed to carry out observations up to 100 GHz.
The telescope is provided with six focal positions: primary, Gregorian and four beam-waveguide foci. This
paper describes the design of the servo system that allows focus and receiver selection during instrument
setup. During observations, this system also compensates for some of the structure deformations due to
gravity, temperature variations and other environmental effects.
We illustrate the system features following a bottom-up approach, analysing all the project layers, ranging
from low-level systems, such as the hardware controls, to the design and implementation of the high-level
software, which is based on the distributed-object ACS (ALMA Common Software) framework.
Particular focus will be put on the links among the hierarchical levels of the system, and on the solutions
adopted in order to guarantee that the control of the servo system is abstracted from the underlying hardware.
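A hypothetical sketch of the hardware-abstraction pattern described above; the class and method names are illustrative and do not reproduce the actual ACS components. High-level code programs against a focus-independent interface, while concrete subclasses wrap the real device protocols.

    from abc import ABC, abstractmethod

    class MinorServo(ABC):
        """What the high-level software sees, independent of the hardware."""
        @abstractmethod
        def set_position(self, axes): ...
        @abstractmethod
        def get_position(self): ...

    class GregorianFeedServo(MinorServo):
        """Wraps one concrete low-level controller behind the interface."""
        def set_position(self, axes):
            self._send(("SETPOS",) + tuple(axes))   # translate to a fieldbus command
        def get_position(self):
            return self._query("GETPOS")
        def _send(self, cmd):                       # placeholder for the hardware link
            pass
        def _query(self, cmd):
            return ()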
The control software for the Sardinia Radio Telescope
Author(s):
A. Orlati;
M. Buttu;
A. Melis;
C. Migoni;
S. Poppi;
S. Righini
The Sardinia Radio Telescope (SRT) is a new 64-meter shaped antenna designed to carry out observations up to 100
GHz. This large instrument has been built in Sardinia, 35 km north of Cagliari, and is now undergoing its technical
commissioning phase. This paper describes the architecture, the implementation solutions, and the development status of
NURAGHE, the SRT control software. The aim of the project is to produce software that is reliable, easy to keep up to
date, and portable to other telescopes. The most ambitious goal is to install NURAGHE at all three Italian
radio telescopes, allowing astronomers to access these facilities through a common interface with very limited extra
effort. We give a description of all the control software subsystems (servo systems, backends, receivers, etc.), focusing on
the resulting design, which is based on ACS (ALMA Common Software) patterns and builds on Linux-based, LGPL,
object-oriented development technologies. We also illustrate how NURAGHE deals with higher-level requirements
coming from telescope management or from the system users.
Open-source framework for documentation of scientific software written on MATLAB-compatible programming languages
Author(s):
Mikhail V. Konnik;
James Welsh
Numerical simulators for adaptive optics systems have become an essential tool for the research and development
of future advanced astronomical instruments. However, as the code of a numerical simulator grows, it
becomes difficult to keep supporting the code itself. Inadequate documentation of astronomical
software for adaptive optics simulators can hold back development, since the documentation must
contain up-to-date schemes and the mathematical descriptions implemented in the software. Although most
modern programming environments like MATLAB or Octave have built-in documentation facilities, they are
often insufficient for describing a typical adaptive optics simulator code.
This paper describes a general cross-platform framework for the documentation of scientific software using open-source
tools such as LaTeX, Mercurial, Doxygen, and Perl. Using a Perl script that translates the MATLAB
comments of M-files into C-like comments, one can use Doxygen to generate and update the documentation for the
scientific source code. The documentation generated by this framework contains the current code description with
mathematical formulas, images, and bibliographical references.
A detailed description of the framework components is presented, together with guidelines for deploying the
framework. Examples of code documentation for the scripts and functions of a MATLAB-based adaptive
optics simulator are provided.
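A minimal sketch in Python of the comment-translation step (the framework itself uses a Perl script): MATLAB '%' comments become C-style '///' Doxygen comments, and code lines are commented out so that Doxygen's C parser ignores them.

    import sys

    def translate(m_source):
        out = []
        for line in m_source.splitlines():
            stripped = line.lstrip()
            if stripped.startswith("%"):
                out.append("/// " + stripped.lstrip("%").strip())  # comment -> Doxygen
            else:
                out.append("// " + line)                           # hide MATLAB code
        return "\n".join(out)

    if __name__ == "__main__":
        sys.stdout.write(translate(open(sys.argv[1]).read()))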
A motion control networked solution for the PAUCam slow control
Author(s):
O. Ballester;
C. Pio;
C. Hernández-Ferrer;
M. Albareda-Sirvent
PAUCam consists of an array of 18 red-sensitive CCDs of 4K x 2K pixels with a system of 36 narrow-band (10 nm)
filters and 6 wide-band filters which will be installed at the William Herschel Telescope (WHT).
The PAUCam Slow Control (SC) is the work package of PAUCam in charge of implementing all system motion control,
sensor monitoring, actuator control, and the first level of safety reactions. It is implemented using a Siemens
Simotion D435 Motion Controller, which drives all the motors and the connected Profibus periphery.
GCS component development cycle
Author(s):
Jose A. Rodríguez;
Rosa Macias;
Jordi Molgo;
Dailos Guerra;
Marti Pi
The GTC is an optical-infrared 10-meter segmented-mirror telescope at the ORM observatory in the Canary Islands (Spain).
First light was on 13/07/2007, and the telescope has been in its operation phase since then.
The GTC control system (GCS) is a distributed object- and component-oriented system based on RT-CORBA, and it is
responsible for the management and operation of the telescope, including its instrumentation.
GCS has used the Rational Unified Process (RUP) in its development; RUP is an iterative software development process
framework.
After analysing (use cases) and designing (UML) any GCS subsystem, an initial description of its
component interface is obtained, and from that information a component specification is written. In order to improve
code productivity, GCS has adopted code generation to transform this component specification into skeleton
component classes based on a software framework called the Device Component Framework.
Using the GCS development tools, based on javadoc and gcc, the component is generated, compiled, and deployed
in a single step, ready to be tested for the first time through our GUI inspector.
The main advantages of this approach are the following: it reduces the learning curve of new developers and the
development error rate, allows systematic use of design patterns and software reuse, speeds up delivery of
the software product while improving design consistency and design quality, and
eliminates future refactoring of the code.
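A hypothetical sketch of specification-driven code generation; the specification format, names, and generated language are illustrative and are not the actual Device Component Framework.

    SPEC = {"name": "SecondaryMirror", "commands": ["park", "moveAbsolute", "stop"]}

    TEMPLATE = '''class {name}(DeviceComponent):
        """Auto-generated skeleton; fill in the command bodies."""
    {methods}
    '''

    def generate(spec):
        # one stub method per declared command
        methods = "\n".join(
            "    def %s(self):\n        raise NotImplementedError" % cmd
            for cmd in spec["commands"])
        return TEMPLATE.format(name=spec["name"], methods=methods)

    print(generate(SPEC))    # skeleton source, ready for the developer to fill in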
CARMENES. IV: instrument control software
Author(s):
Josep Guàrdia;
Josep Colomé;
Ignasi Ribas;
Hans-Jürgen Hagen;
Rafael Morales;
Miguel Abril;
David Galadí-Enríquez;
Walter Seifert;
Miguel A. Sánchez Carrasco;
Andreas Quirrenbach;
Pedro J. Amado;
Jose A. Caballero;
Holger Mandel
The overall purpose of the CARMENES instrument is to perform high-precision measurements of radial velocities of
late-type stars with long-term stability. CARMENES will be installed in 2014 at the 3.5 m telescope in the German-
Spanish Astronomical Center at Calar Alto observatory (CAHA, Spain) and will be equipped with two spectrographs in
the near-infrared and visible windows. The technology involved in such an instrument represents a challenge at all
levels. The instrument coordination and management is handled by the Instrument Control System (ICS), which is
responsible for carrying out the operations of the different subsystems and for providing a tool to operate the
instrument at levels from low to high user interaction. The main goal of the ICS and of the CARMENES control layer
architecture is to maximize the instrument efficiency by reducing time overheads and by operating the instrument in an
integrated manner. The ICS implements the CARMENES operational design. A description of the ICS architecture and of
the application programming interfaces for low- and high-level communication is given. The Internet Communications
Engine (Ice) is the technology selected to implement most of the interface protocols.
Use of RTS2 for LSST multiple channel CCD characterisation
Author(s):
Petr Kubánek;
Michael Prouza;
Ivan Kotov;
Paul O'Connor;
Peter Doherty;
James Frank
RTS2, or Remote Telescope System 2nd Version, is a modular observatory control system. Development of RTS2 began in 2003 and since then it has been used at more than 20 observatories world-wide. Its main users are small, fully autonomous observatories, performing target of opportunity observations.
Since June 2007 RTS2 has been used at Brookhaven National Laboratory (BNL) to control the acquisition of images for the Large Synoptic Survey Telescope (LSST) CCD characterisation. The CCD test laboratory includes multiple devices which need to be controlled in order to perform the electro-optical testing of the CCD.
The configuration of the devices must be recorded in order for that information to be used later during data analysis.
The main factors leading to the use of RTS2 were its availability, its open-source code, and its modular design, which allows fast customisation to fit the changing needs of an R&D project.
This article focuses on the changes to the system that allow for the integration of LSST's
multiple-output CCD imagers. The text provides details of the multiple-channel implementation, which parts
of the system were affected, and how these changes influenced the overall system design. It also describes how easily
and quickly the multiple-channel instrument was run on the night and twilight sky during prototype CCD testing,
and demonstrates how complex routines, such as twilight sky-flat acquisition, worked out of the box.
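One natural way to store such a multiple-output readout, sketched here with astropy (segment names and keywords are illustrative): one image extension per amplifier channel, so analysis code can reassemble the mosaic later from the recorded channel geometry.

    import numpy as np
    from astropy.io import fits

    channels = [np.zeros((2048, 512), dtype="float32") for _ in range(16)]

    hdus = [fits.PrimaryHDU()]
    for i, segment in enumerate(channels):
        hdu = fits.ImageHDU(data=segment, name="SEGMENT%02d" % i)
        hdu.header["CHANNEL"] = i        # amplifier index (illustrative keyword)
        hdus.append(hdu)
    fits.HDUList(hdus).writeto("ccd_frame.fits", overwrite=True)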
A complete solar eruption activities processing tool with robotization and real time
Author(s):
Ganghua Lin
The demands of forecasting and research on solar eruption activities are considered. Based on these demands, and
mainly on the observing data of the Huairou Solar Observing Station, the functional architecture of a complete,
automated, real-time processing tool for solar eruption activities is designed. The kernel implementation method of
the processing tool is analysed, a supporting function library is designed, and, finally, a usage pattern that benefits
users and a pattern that benefits extension of the tool software are presented.
Design and first commissioning results of PLC-based control systems for the Mercator telescope
Author(s):
Wim Pessemier;
Geert Deconinck;
Gert Raskin;
Philippe Saey;
Hans Van Winckel
The 1.2m optical Mercator Telescope (based at the Roque de Los Muchachos Observatory at La Palma) is currently
in the commissioning phase of a third permanently installed instrument called MAIA (Mercator Advanced
Imager for Asteroseismology), a three-channel frame-transfer imager optimized for rapid photometry. Despite
having three cryostats, MAIA is designed as a highly compact and portable instrument by using small Stirling-type
cryocoolers, and a single PLC in charge of all temperature control loops, cryocooler interaction, telemetry
acquisition and other instrument control related tasks. To accommodate MAIA at the Nasmyth B focal station of
the telescope, a new mechanism for the tertiary mirror had to be built since the former mechanism only allowed
motor controlled access to the Cassegrain and Nasmyth A focal stations. A second PLC has been installed in
order to control the two degrees of freedom of this mirror mechanism by interfacing with its motor controllers,
high-precision optical encoders, and limit switches. This PLC is not dedicated to the tertiary mirror control but
will serve as a general purpose controller for various tasks related to the telescope and the observatory, as part
of a new Telescope Control System primarily based on PLCs and OPC UA communication technology. Due to
the central location of the PLC inside the observatory, the position control loops of the mirror mechanism are
distributed using EtherCAT as the communication fieldbus. In this paper we present the design and the first
commissioning results of both the MAIA instrument control and the tertiary mirror control.
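As an illustration of how such a PLC can be queried over OPC UA, a minimal client-side sketch using the open-source python-opcua library; the endpoint URL and node identifier are hypothetical and do not reflect the actual Mercator address space.

    from opcua import Client

    client = Client("opc.tcp://plc.example.org:4840")
    client.connect()
    try:
        # Read a value published by the PLC, e.g. the tertiary mirror angle
        node = client.get_node("ns=2;s=M3.RotationAngle")
        print("M3 angle [deg]:", node.get_value())
    finally:
        client.disconnect()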
ESPRESSO front-end guiding algorithm
Author(s):
M. Landoni;
M. Riva;
F. M. Zerbi;
D. Mégevand;
A. Cabral;
S. Cristiani
This paper presents the ESPRESSO Front End guiding algorithm. ESPRESSO, the Echelle SPectrograph for Rocky
Exoplanets and Stable Spectroscopic Observations, will be installed on ESO's Very Large Telescope (VLT). The
Front End (FE) is the subsystem that collects the light coming from the Coudé trains of all four Telescope
Units (UTs), provides field and pupil stabilization via piezoelectric tip-tilt devices, and injects the beams into
the spectrograph fibers. The field and pupil guiding is obtained through a re-imaging system that processes
the halo of the light outside the injection fiber and a telescope pupil beacon. The first guiding algorithm that we
evaluated splits the focal plane into four areas and computes the sum of the photon counts of the pixels in each area.
The unbalance between the photon sums gives the centroid misalignment information, which is handled by the
Instrument Control Software (ICS). Different algorithms and controller architectures have been evaluated and
implemented in order to select a strategy that enables the FE to guide on stars as faint as apparent magnitude 20 in the V band.
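The quadrant-sum estimate described above fits in a few lines; in this minimal sketch the mapping of the unbalances onto tip-tilt corrections (signs and scaling) is illustrative.

    import numpy as np

    def quad_cell(img):
        """Normalized (x, y) unbalance of photon sums over four quadrants."""
        cy, cx = img.shape[0] // 2, img.shape[1] // 2
        a = img[:cy, :cx].sum()   # top-left
        b = img[:cy, cx:].sum()   # top-right
        c = img[cy:, :cx].sum()   # bottom-left
        d = img[cy:, cx:].sum()   # bottom-right
        total = a + b + c + d
        ex = ((b + d) - (a + c)) / total   # horizontal misalignment
        ey = ((c + d) - (a + b)) / total   # vertical misalignment
        return ex, ey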
Image acquisition system with three CCD cameras
Author(s):
Binhua Li;
Yigong Zhang;
Lei Yang;
Wei Mao;
Jiancheng Wang
A new astrometric telescope, the Multi-Function Astronomical Transit, has been built at Yunnan Astronomical Observatory.
Its main imaging device is a digital CCD camera without cooling, because the telescope only observes stars
brighter than 7.5 mag. As with astrometric telescopes such as the Lower Latitude Meridian Circle, some particular errors
related to the horizon and the altitude axis of the telescope must be measured; thus two analog CCD cameras are also
used. The digital camera is connected to a digital frame grabber via a Camera Link cable and a custom-made cable,
while the two analog cameras are connected to an analog frame grabber by two custom-made cables. The two frame
grabbers are mounted in separate image workstations and operate in external-trigger mode. Two trigger signals are
generated by the telescope control system: one for the digital camera, another for the two analog cameras. The
image-acquisition software is written in VC++ using Sapera LT. This paper presents the imaging-system solution for
the new transit, the programming methods, and the test observation results.
Final redshift determination of LAMOST pilot survey
Author(s):
Fengfei Wang;
A-Li Luo;
Haotong Zhang
Over 1 million spectra were obtained by the LAMOST telescope during its Pilot Survey, of which half a million
are released. The LAMOST 1D pipeline was designed to classify spectra, measure redshifts, and estimate
parameters. Data quality is a key factor that affects the results of the reduction. For high signal-to-noise (S/N)
spectra, which have good quality, the software shows reasonable performance on both spectral classification and
redshift measurement. For bad-quality data, the software loses stability and precision. A human check is therefore
adopted in our spectra processing and final data release. By calculating the confidence of our redshift measurements,
we assess their correctness and re-measure the low-confidence redshifts for the data release.
Design and realization of the backup field controllers for LAMOST spectrographs
Author(s):
Jianing Wang;
Zhongyi Han;
Yizhong Zeng;
Songxin Dai;
Zhongwen Hu;
Yongtian Zhu;
Lei Wang;
Yonghui Hou
The Chinese-made telescope LAMOST uses 16 spectrographs to record stellar spectra fed by 4000 optical fibers. In
each spectrograph, many movable parts work in phase; those parts are controlled and managed in real time by field
controllers based on FPGAs. The master control board of the controllers currently in use is built around Altera's
Cyclone II Development Kit. However, Altera no longer produces such kits. To meet the needs of maintenance and
improvement, a backup control board was developed, so that once any field controller breaks, another can be swapped in
quickly to keep the control system running without interruption. Using the newer Altera FPGA chip, the Cyclone III
3C40, as the master control chip minimizes the changes to the original design of the control structure and thus reduces
the workload of software and hardware migration.
This paper describes the design process of the spectrograph backup field controller based on the Cyclone 3C40 and
discusses the problems, and their solutions, encountered while migrating the controller hardware and software. The
improved field controller not only retains the original controller functions, but can also serve more motors and sensors
thanks to the increased number of input and output pins. In addition, it is free of commodity-supply limits, which saves
expense. The FPGA field controller can also be used in other telescopes, astronomical instruments, and industrial
control systems.
Design and practice multi-channel real time system on deformation control of optical plate
Author(s):
Yi Zheng;
Yin-hu Wang;
Ying Li;
Xin-nan Li
Optical plates (OPs) play an increasingly important role in modern ground-based telescopes. They can serve as the
segments composing a primary mirror, as deformable mirrors for correcting atmospheric turbulence, or as active
stressed laps used in polishing large aspherical optics. When controlling the deformation of these plates, common
challenges arise: high shape-precision requirements, rapid deformation rates with real-time demands, and intrinsic
multi-channel coupling. How to improve OP deformation performance therefore becomes a critical task in practical
design. In this paper, the control principle of OPs is first introduced. Then a three-layer control architecture is
presented, consisting of an application layer, a real-time control layer, and a motion-execution layer. We then
designed a prototype system following this framework, targeting an active stressed polishing lap with twelve motion
channels. Both the hardware and software development are discussed. OP surface-deformation experiments were carried
out, and the surface shape was measured with an LVDT array. The results verify the effectiveness of the design, and we
look forward to using this control design in applications with more channels and tighter timing demands.
Support vector machines for photometric redshift measurement of quasars
Author(s):
Hongwen Zheng;
Yanxia Zhang
Based on photometric and spectroscopic data of quasars from SDSS DR7 and UKIDSS DR7, support vector
machines (SVMs) are applied to predict photometric redshifts of quasars. Different input patterns are tried, and
the best pattern is presented. Comparing the results using optical data alone with those using optical and infrared
data, the experiments show that the accuracy improves with data from more bands. In addition, the
quasar sample is first clustered into two groups by a one-class SVM, and the photometric redshifts of the two
groups are then estimated separately by means of SVMs. The results based on the whole sample and the combined
results from the two groups are comparable.
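A minimal sketch, assuming scikit-learn, of SVM regression on photometric colors; the arrays below are random stand-ins, and the real input patterns are the optical and optical-plus-infrared color combinations compared in the paper.

    import numpy as np
    from sklearn.svm import SVR

    X_train = np.random.rand(500, 7)    # stand-in for optical + infrared colors
    z_train = np.random.rand(500) * 5   # stand-in for spectroscopic redshifts

    model = SVR(kernel="rbf", C=10.0, gamma="scale")
    model.fit(X_train, z_train)
    z_phot = model.predict(X_train[:10])   # photometric redshift estimates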
Review of techniques for photometric redshift estimation
Author(s):
Hongwen Zheng;
Yanxia Zhang
A photometric redshift provides an estimate of the distance of an astronomical object, such as a galaxy or
quasar, and is a powerful statistical tool for studying the evolutionary properties of galaxies, in particular
faint galaxies, whose spectroscopic data are hard or impossible to obtain. At present there are many
methods to estimate photometric redshifts of galaxies and quasars. These methods fall into two
kinds: template-fitting methods and empirical methods. The commonly used techniques of both kinds
are reviewed, and the differences between the approaches for quasars and for galaxies are pointed out:
methods that perform well on galaxies may perform poorly on quasars. Both template-fitting and empirical
methods have their pros and cons.
Survey of approaches for targeting quasars
Author(s):
Yanxia Zhang;
Yong-Heng Zhao
The study of quasars, especially high-redshift quasars, is of great importance to the formation and evolution of
galaxies and to the early history of the universe. With the development and deployment of large spectroscopic
sky-survey projects (e.g. 2dF, SDSS), the number of known quasars has grown to more than 200,000. To improve the
efficiency of high-cost telescopes, careful selection of observational targets is necessary. Various quasar-targeting
algorithms have therefore been developed and used, based on different data, and we review them in detail. Some
statistical approaches are based on photometric color, variability, UV excess, BRX, radio properties, color-color
cuts, and so on. Automated methods include support vector machines (SVMs), kernel density estimation (KDE), artificial
neural networks (ANNs), the extreme-deconvolution method, probabilistic principal surfaces (PPS), and negative
entropy clustering (NEC), among others. In addition, we touch upon some quasar-candidate catalogues created by different
algorithms.
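As a toy example of the color-cut family of approaches, a UV-excess style selection; the thresholds are purely illustrative and are not those of any survey pipeline.

    import numpy as np

    def uvx_candidates(u, g, r):
        """Boolean mask of blue, UV-excess quasar candidates."""
        return ((u - g) < 0.6) & ((g - r) < 0.5)

    u, g, r = 3.0 * np.random.rand(3, 1000)   # stand-in magnitudes
    mask = uvx_candidates(u, g, r)
    print(mask.sum(), "candidates out of", mask.size)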
Upper computer software design for active optics
Author(s):
Chen Li;
Guomin Wang;
Liang Gao
China has joined the international global-network SONG project and will build a 1-meter telescope as one node of
the SONG network. This paper presents the design of the upper computer software, running under the Linux operating
system, for the active-optics control system of the Chinese SONG telescope. The software has three functions:
Shack-Hartmann (S-H) wavefront detection, calculation of the mirror correction forces, and communication with the
hardware controller. We introduce the three modules developed under the Linux environment: the wavefront image-processing
module, the communication module, and the GUI module.
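A minimal sketch of the correction-force step: measured Shack-Hartmann slopes are mapped to actuator forces through the pseudo-inverse of a pre-calibrated influence matrix; all arrays here are random stand-ins.

    import numpy as np

    n_slopes, n_actuators = 128, 36
    influence = np.random.rand(n_slopes, n_actuators)   # stand-in calibration data
    slopes = np.random.rand(n_slopes)                   # measured S-H slopes

    # Least-squares force vector minimizing the residual wavefront error;
    # the result would be sent to the mirror-support controller hardware.
    forces, *_ = np.linalg.lstsq(influence, slopes, rcond=None)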
Comparison of different interpolation algorithm in feature-based template matching for stellar parameters analysis
Author(s):
Bing Du;
Ali Luo;
JianNan Zhang;
Yue Wu;
FengFei Wang
Referring to the SDSS/SEGUE stellar parameter pipeline (SSPP) and other pipelines, two methods,
ULySS and CFI (correlation function interpolation), are investigated to estimate stellar parameters
(effective temperature, surface gravity, and metallicity) for AFGK stars based on medium-resolution
spectroscopy. Both methods rely on an interpolator: ULySS provides an interpolator of the template
library consisting of polynomial expansions of each wavelength element in powers of the stellar
parameters, while CFI interpolates the maximal correlation coefficient as a function of the stellar
parameters. Their performance is tested on known objects observed by the Sloan Digital Sky
Survey (SDSS), and random and systematic errors are examined. By comparing CFI with ULySS, the
performance of the different interpolation schemes is assessed. The two methods will be integrated
into the LAMOST stellar parameter pipeline (LASP) and used for the data release of
the LAMOST pilot survey.
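To illustrate the CFI idea in one dimension: stand-in correlation coefficients on a Teff grid are interpolated with a parabola, and its vertex refines the grid maximum; the real method works in the full (Teff, log g, [Fe/H]) parameter space.

    import numpy as np

    teff_grid = np.array([5500., 5750., 6000., 6250., 6500.])
    corr = np.array([0.82, 0.91, 0.95, 0.90, 0.80])   # stand-in coefficients

    a, b, c = np.polyfit(teff_grid, corr, 2)   # parabolic interpolation
    teff_best = -b / (2.0 * a)                 # vertex: refined Teff estimate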
The optical synthetic aperture image restoration based on the improved maximum-likelihood algorithm
Author(s):
Zexun Geng;
Qing Xu;
Baoming Zhang;
Zhihui Gong
Optical synthetic aperture imaging (OSAI) is envisaged for improving image resolution from high-altitude orbits,
and several future projects for science or Earth observation are based on optical synthetic apertures. Compared
with equivalent monolithic telescopes, however, the partly filled aperture of OSAI attenuates the
modulation transfer function of the system. Consequently, images acquired by an OSAI instrument have to be
post-processed to restore images equivalent in resolution to those of a single filled aperture. The maximum-likelihood (ML)
algorithm proposed by Benvenuto performs better than the traditional Wiener filter, but it does not work stably, and the
point spread function (PSF) is assumed to be known and unchanged during the iterative restoration. In fact, the PSF is
unknown in most cases, and its estimate should be updated alternately during optimization. Facing these
limitations, an improved ML (IML) reconstruction algorithm is proposed in this paper, which
incorporates PSF estimation, by means of parameter identification, into the ML iteration and updates the PSF
successively. Accordingly, the IML algorithm converges stably and reaches better results. Experimental results show that
the proposed algorithm performs much better than ML in terms of peak signal-to-noise ratio, mean square error, and
average contrast.
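The alternating-update idea can be illustrated with Richardson-Lucy style multiplicative steps that refine the object and the PSF in turn; this sketch shows the generic blind-deconvolution principle, not the paper's exact IML formulation, and assumes for simplicity that the PSF array has the same shape as the image.

    import numpy as np
    from scipy.signal import fftconvolve

    def blind_rl(image, psf, n_iter=20, eps=1e-12):
        image = image.astype(float)
        obj = np.full_like(image, image.mean())
        for _ in range(n_iter):
            # multiplicative object update with the current PSF estimate
            ratio = image / np.maximum(fftconvolve(obj, psf, mode="same"), eps)
            obj = obj * fftconvolve(ratio, psf[::-1, ::-1], mode="same")
            # multiplicative PSF update with the current object estimate
            ratio = image / np.maximum(fftconvolve(obj, psf, mode="same"), eps)
            psf = psf * fftconvolve(ratio, obj[::-1, ::-1], mode="same")
            psf = psf / psf.sum()    # keep the PSF normalized
        return obj, psf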
Estimate LAMOST hot star's parameters by POLLUX
Author(s):
Fang Zuo;
Ali Luo;
Jiannan Zhang
Thanks to the high spectrum-gathering efficiency of the LAMOST telescope, a large number of spectra were obtained
during commissioning observations, including many spectra of O-type stars. Obtaining accurate parameters for hot
stars is a difficult task in the absence of a good model: several stellar models, such as MAFAGS, ATLAS, and MARCS,
do not cover the parameter range in which the temperature exceeds 25,000 K. POLLUX is a database of synthetic stellar
spectra in which CMFGEN provides the atmosphere models for O-type stars (Teff > 25,000 K) [5]. A method of estimating
stellar parameters for hot stars is presented in this paper, based on matching LAMOST observed spectra with the
theoretical spectral library. We convert the CMFGEN spectra from their resolution of about 150,000 to the LAMOST
resolution of 2,000. By comparison with the CMFGEN template spectra, we obtain the parameters of the observed hot
stars. Estimation of the errors on the final parameters shows that the low efficiency of the blue arms of the LAMOST
spectrographs does not affect O-type star observations.
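A minimal sketch, assuming NumPy and SciPy, of that resolution conversion: the width of the Gaussian broadening kernel follows from the quadrature difference between the input (R of about 150,000) and output (R of about 2,000) resolutions; a roughly uniform wavelength step is assumed.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def degrade(flux, wave, r_in=150000.0, r_out=2000.0):
        lam = np.median(wave)
        fwhm = lam * np.sqrt(1.0 / r_out**2 - 1.0 / r_in**2)  # quadrature difference
        step = np.median(np.diff(wave))                       # ~uniform grid assumed
        sigma_pix = fwhm / 2.3548 / step                      # FWHM -> sigma in pixels
        return gaussian_filter1d(flux, sigma_pix)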
Data access and analysis system for Gaia data handling during operations at Italian DPC: scientific validation and results monitoring approach in support of the AVU operations
Author(s):
Roberto Morbidelli;
Rosario Messineo;
Deborah Busonero;
Alberto Riva;
Alberto Vecchiato
This document is the first systematic description of the approach adopted to support the operations of the
Gaia Astrometric Verification Unit (AVU) systems. A further subsystem, which collects and provides useful tools
for a science-oriented approach to data analysis and access, has been designed and integrated in the Data Processing
Center of Turin. Specifically, its aim is to provide the AVU systems with an operative and versatile set of diagnostic
elements for the analysis and manipulation of the stored data. Examples of the different scenarios
targeted by the operations effort are: visualization of the runtime mission status; archiving and recovery of
data, using graphs and log files contained in the database; on-demand information for ad hoc analyses
and data mining; and production of tables and reports retrieving custom data from the database. The different cases
are described in terms of the methods and the environments in which they take place.
The global sphere reconstruction for the Gaia mission in the Astrometric Verification Unit
Author(s):
Alberto Vecchiato;
Ummi Abbas;
Marilena Bandieramonte;
Ugo Becciani;
Luca Bianchi;
Beatrice Bucciarelli;
Deborah Busonero;
Mario G. Lattanzi;
Rosario Messineo
The core task of the Gaia mission is the solution of the Global Astrometric Sphere, which provides the
materialization of the astrometric reference frame for the catalog that will be the main outcome of the mission. Given the absolute character of the measurements, the Gaia Data Processing and Analysis Consortium (DPAC) has decided to replicate a dedicated version of this task, together with two others selected for their mission criticality, in an Astrometric Verification Unit (AVU). This task, named Global Sphere Reconstruction (GSR), focuses on the importance of having an implementation of the astrometric sphere solution from a well-defined subset of objects, based on an independent astrometric model as well as on a different solution algorithm. We analyze here these two aspects in the context of the GSR implementation at the Data Processing Center of Torino (DPCT) and the solution adopted to implement the most computationally intensive part of the pipeline as a High-Performance Computing module.
Qsys NOC-based MPSOC design for LAMOST Spectrographs
Author(s):
Zhongyi Han;
Jianing Wang;
Yizhong Zeng
At present, an FPGA-based SOPC is used in the spectrograph control system of China's LAMOST telescope. With the
increase in the number of controlled objects and in the telescope's accuracy requirements, however, problems such as
system performance, I/O resource shortage, real-time multi-task processing, Fmax, and logic element (LE) usage have
to be solved. The combination of a multi-processor (Nios II) approach with NoC technology can meet these requirements
effectively. This article describes how an NoC-based MPSoC was realized on Altera's Cyclone III FPGA
evaluation board with the Qsys tool. According to task function, the system was divided into several subsystems,
including two Nios II CPU subsystems (implementing the control strategies and the remote-update tasks separately).
These subsystems are interconnected following a hierarchical NoC interconnection scheme. The results show that this
solution improves system performance, doubles the Fmax, decreases LE usage, and saves maintenance cost
compared with the previous SOPC-based approach. The motor control system designed with this approach can also be
applied to other astronomical equipment and industrial control fields.
The system software development for prime focus spectrograph on Subaru Telescope
Author(s):
Atsushi Shimono;
Naoyuki Tamura;
Hajime Sugai;
Hiroshi Karoji
The Prime Focus Spectrograph (PFS) is a wide field multi-fiber spectrograph using the prime focus of the Subaru
telescope, which is capable of observing up to 2400 astronomical objects simultaneously.
The instrument control software will manage the observation procedure communicating with subsystems
such as the fiber positioner "COBRA", the metrology camera system, and the spectrograph and camera systems.
Before an exposure starts, the instrument control system needs to access a database where target lists provided
by observers are stored in advance, and accurately position the fibers onto the astronomical targets requested therein.
This fiber positioning will be carried out interacting with the metrology system which measures the fiber positions.
In parallel, the control system can issue a command to point the telescope to the target position and to rotate
the instrument rotator. Finally the telescope pointing and the rotator angle will be checked by imaging bright
stars and checking their positions on the auto-guide and acquisition cameras. After the exposure finishes, the
data are collected from the detector systems and are finalized as FITS files to archive with necessary information.
Given target lists and an observation sequence, the observation preparation software is required to find
optimal fiber allocations while maximizing the number of guide stars. To carry out these operations efficiently,
the control system will be integrated seamlessly with a database system which will store information necessary
for observation execution such as fiber configurations.
In this article, the conceptual system design of the observation preparation software and the instrument
control software will be presented.
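As a toy illustration of the preparation problem, a greedy allocator that assigns each target, in priority order, to a free fiber that can reach it; all names are hypothetical, and the actual PFS software must additionally respect Cobra patrol regions, collision constraints, and survey strategy.

    def allocate(fibers, targets, reachable):
        """targets: (id, priority) pairs; reachable[fiber] = set of target ids."""
        assigned = {}
        for tid, _priority in sorted(targets, key=lambda t: -t[1]):
            for fiber in fibers:
                if fiber not in assigned and tid in reachable[fiber]:
                    assigned[fiber] = tid
                    break
        return assigned

    plan = allocate([0, 1], [("qso1", 2), ("gal7", 1)],
                    {0: {"qso1", "gal7"}, 1: {"gal7"}})   # {0: 'qso1', 1: 'gal7'}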
Test results for the Gemini Planet Imager data reduction pipeline
Author(s):
Jérôme Maire;
Marshall D. Perrin;
René Doyon;
Jeffrey Chilcote;
James E. Larkin;
Jason L. Weiss;
Christian Marois;
Quinn M. Konopacky;
Maxwell Millar-Blanchaer;
James R. Graham;
Jennifer Dunn;
Raphael Galicher;
Franck Marchis;
Sloane J. Wiktorowicz;
Kathleen Labrie;
Sandrine J. Thomas;
Stephen J. Goodsell;
Fredrik T. Rantakyro;
David W. Palmer;
Bruce A. Macintosh
The Gemini Planet Imager (GPI) is a new facility instrument for the Gemini Observatory designed to detect
and characterize planets and debris disks orbiting nearby stars; its science camera is a near infrared integral
field spectrograph. We have developed a data pipeline for this instrument, which will be made publicly available
to the community. The GPI data reduction pipeline (DRP) incorporates all necessary image reduction and
calibration steps for high contrast imaging in both the spectral and polarimetric modes, including datacube
generation, wavelength solution, astrometric and photometric calibrations, and speckle suppression via ADI and
SSDI algorithms. It is implemented in IDL as a flexible modular system, and includes both command line and
graphical interface tools including a customized viewer for GPI datacubes.
This GPI data reduction pipeline is currently working very well, and is in use daily processing data during
the instrument’s ongoing integration and test period at UC Santa Cruz. Here we summarize the results from
recent pipeline tests, and present reductions of instrument test data taken with GPI. We will continue to refine
and improve these tools throughout the rest of GPI’s testing and commissioning, and they will be released to the
community, including both IDL source code and compiled versions that can be used without an IDL license.
Electronics and mechanisms control system for FRIDA (inFRared Imager and Dissector for Adaptive optics)
Author(s):
R. Flores-Meza;
S. Cuevas;
J. J. Díaz;
C. Espejo;
C. Keiman;
G. Lara;
B. Sánchez;
J. Uribe
FRIDA will be a common-user near-infrared imager and integral field spectrograph covering the wavelength range from
0.9 to 2.5 microns. Two primary observing modes drive the instrument design: direct imaging and integral field
spectroscopy. FRIDA will be installed at the Nasmyth-B platform of the Gran Telescopio Canarias (GTC), behind the
GTC Adaptive Optics (GTCAO) system. The instrument will use diffraction-limited optics to avoid degrading the high
Strehl ratios delivered by the GTCAO system in the near infrared.
High-performance astronomical instruments with a high degree of reconfigurability, such as FRIDA, depend not only on
efficient optical and mechanical designs but also on the quality of their electronics and control-system design; in
fact, an instrument's operating performance at the telescope relies heavily on its electronics and control system.
This paper describes the main design topics for the FRIDA electronics and mechanisms control system, highlighting
the development status that these areas have reached in the project. The FRIDA Critical Design Review (CDR) was held
in September 2011.
SPIRou @ CFHT: data reduction software and simulation tools
Author(s):
Étienne Artigau;
François Bouchy;
Xavier Delfosse;
Xavier Bonfils;
Jean-François Donati;
Pedro Figueira;
Karun Thanjavur;
David Lafrenière;
René Doyon;
Christian Surace;
Claire Moutou;
Isabelle Boisse;
Leslie Saddlemyer;
David Loop;
Driss Kouach;
Francesco Pepe;
Christophe Lovis;
Olivier Hernandez;
Shiang-Yu Wang
SPIRou is a near-infrared, echelle spectropolarimeter/velocimeter under design for the 3.6m Canada-France-
Hawaii Telescope (CFHT) on Mauna Kea, Hawaii. The unique scientific capabilities and technical design features
are described in the accompanying papers at this conference. In this paper we focus on the data reduction software
(DRS) and the data simulation tool. The SPIRou DRS builds upon the experience of the existing SOPHIE,
HARPS and ESPADONS spectrographs, class-leading instruments for high-precision RV measurements and
spectropolarimetry. While SPIRou shares many characteristics with these instruments, moving to the near-
infrared domain brings specific data-processing challenges: the presence of a large number of telluric absorption
lines, strong emission sky lines, thermal background, science arrays with poorer cosmetics, etc. In order for the
DRS to be fully functional for SPIRou's first light in 2015, we developed a data simulation tool that incorporates
numerous instrumental and observational effects. We present an overview of the DRS and the simulation tool
architectures.
ORBS: A data reduction software for the imaging Fourier transform spectrometers SpIOMM and SITELLE
Author(s):
T. Martin;
L. Drissen;
G. Joncas
SpIOMM (Spectromètre-Imageur de l'Observatoire du Mont Mégantic) is still the only operational astronomical
Imaging Fourier Transform Spectrometer (IFTS) capable of obtaining the visible spectrum of every source of
light in a field of view of 12 arcminutes. Although it was designed to work with both outputs of the Michelson
interferometer, up to now only one output has been used. Here we present ORBS (Outils de Réduction Binoculaire
pour SpIOMM/SITELLE), the reduction software we designed to take advantage of the data from both outputs.
ORBS will also be used to reduce the data of SITELLE (Spectromètre-Imageur pour l'Étude en Long et en Large
des raies d'Émission), the direct successor of SpIOMM, which will be in operation at the Canada-France-Hawaii
Telescope (CFHT) in early 2013. SITELLE will deliver larger data cubes than SpIOMM (up to 2 cubes
of 34 GB each). We have therefore made a strong effort to optimize its speed and memory usage in order to ensure
the best compliance with the quality requirements discussed with the CFHT team. As a result, ORBS is now capable
of reducing 68 GB of data in less than 20 hours using only 5 GB of random-access memory (RAM).
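A minimal sketch of the memory strategy behind such figures, assuming the interferometric cube has been exported to a NumPy file: frames are streamed through a memory map in fixed-size chunks, so only a small slice of the 34 GB cube is ever resident in RAM.

    import numpy as np

    cube = np.lib.format.open_memmap("cube.npy", mode="r")       # (steps, ny, nx)
    out = np.lib.format.open_memmap("reduced.npy", mode="w+",
                                    dtype="float32", shape=cube.shape)
    for i in range(0, cube.shape[0], 64):                        # 64 frames per chunk
        chunk = np.asarray(cube[i:i + 64], dtype="float32")
        out[i:i + 64] = chunk - np.median(chunk, axis=(1, 2), keepdims=True)
    out.flush()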