Proceedings Volume 9149

Observatory Operations: Strategies, Processes, and Systems V


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 20 August 2014
Contents: 14 Sessions, 84 Papers, 0 Presentations
Conference: SPIE Astronomical Telescopes + Instrumentation 2014
Volume Number: 9149

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9149
  • Archive Operations and Data Flow
  • Time Domain Follow-up I
  • Time Domain Follow-up II
  • Operations Benchmarking and Metrics
  • Program and Observation Scheduling
  • Science Operations I
  • Science Operations II
  • Operations and Data Quality Control
  • User Support
  • Site and Facility Operations I
  • Site and Facility Operations II
  • Site and Facility Operations III
  • Posters: Thursday
Front Matter: Volume 9149
Front Matter: Volume 9149
This PDF file contains the front matter associated with SPIE Proceedings Volume 9149, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Archive Operations and Data Flow
The ALMA archive and its place in the astronomy of the future
Felix Stoehr, Mark Lacy, Stephane Leon, et al.
The Atacama Large Millimeter/submillimeter Array (ALMA), an international partnership of Europe, North America and East Asia in cooperation with the Republic of Chile, is the largest astronomical project in existence. While ALMA’s capabilities are ramping up, Early Science observations have started. The ALMA Archive is at the center of the operations of the telescope array and is designed to manage the 200 TB of data that will be taken each year once the observatory is in full operations. We briefly describe its design principles. The second part of this paper focuses on how astronomy is likely to evolve as the amount and complexity of the data taken grows. We argue that in the future observatories will compete for astronomers to work with their data, that observatories will have to reorient themselves from providing good data only to providing an excellent end-to-end user experience with all its implications, that science-grade data-reduction pipelines will become an integral part of the design of a new observatory or instrument, and that all this evolution will have a deep impact on how astronomers will do science. We show how ALMA’s design principles are in line with this paradigm.
Data products in the ESO Science Archive Facility
Jörg Retzlaff, Magda Arnaboldi, Martino Romaniello, et al.
The European Southern Observatory Science Archive Facility is evolving from an archive containing predominantly raw data into a resource also offering science-grade data products for immediate analysis and prompt interpretation. New products originate from two different sources. On the one hand, Principal Investigators of Public Surveys and other programmes reduce the raw observational data and return their products using the so-called Phase 3 - a process that extends the Data Flow System after proposal submission (Phase 1) and detailed specification of the observations (Phase 2). On the other hand, raw data of selected instruments and modes are uniformly processed in-house, independently of the original science goal. Current data product assets in the ESO Science Archive Facility include calibrated images and spectra, as well as catalogues, for a total volume in excess of 16 TB and increasing. Images alone cover more than 4500 square degrees in the NIR bands and 2400 square degrees in the optical bands; over 85000 individually searchable spectra are already available in the spectroscopic data collection. In this paper we review the evolution of the ESO Science Archive Facility content, illustrate the data access by the community, and give an overview of the implemented processes and the role of the associated data standards.
JWST science data products
Daryl Swade, Howard Bushouse, Gretchen Greene, et al.
Science data products for James Webb Space Telescope (JWST) observations will be generated by the Data Management Subsystem (DMS) within the JWST Science and Operations Center (S&OC) at the Space Telescope Science Institute (STScI). Data processing pipelines within the DMS will produce uncalibrated and calibrated exposure files, as well as higher level data products that result from combined exposures, such as mosaic images. Information to support the science observations, for example data from engineering telemetry, proposer inputs, and observation planning, will be captured and incorporated into the science data products. All files will be generated in Flexible Image Transport System (FITS) format. The data products will be made available through the Mikulski Archive for Space Telescopes (MAST) and adhere to International Virtual Observatory Alliance (IVOA) standard data protocols.
Telluric-line subtraction in high-accuracy velocimetry: a PCA-based approach
Étienne Artigau, Nicola Astudillo-Defru, Xavier Delfosse, et al.
Optical velocimetry has led to the detection of more than 500 planets to date, and there is a strong effort to push m/s velocimetry into the near-infrared to access cooler and lighter stars. The presence of numerous telluric absorption lines in the nIR poses an important challenge. As the star’s barycentric velocity varies through the year, the telluric absorption lines effectively vary in velocity relative to the star’s spectrum by the same amount, leading to important systematic RV offsets. We present a novel principal component analysis (PCA) based approach to telluric line subtraction and demonstrate its effectiveness with archival HARPS data for GJ436 and τ Ceti, over parts of the R-band that contain strong telluric absorption lines. The main results are: 1) better RV accuracy while excluding only a few percent of the domain, 2) better use of the entire spectrum to measure RVs, and 3) higher telescope time efficiency by using A0V telluric standards from the telescope archive.
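The general idea of a PCA-based correction can be sketched in a few lines. This is an illustrative toy, not the authors' pipeline: the function name, the single-component default, and the additive-residual model are our assumptions.

```python
import numpy as np

def pca_telluric_correct(spectra, n_components=1):
    """Remove the dominant shared (telluric-like) variance from a stack
    of spectra aligned in the observer rest frame.

    spectra: 2D array (n_epochs, n_pixels), continuum-normalized.
    Illustrative sketch only; not the method of Artigau et al.
    """
    spectra = np.asarray(spectra, dtype=float)
    mean_spec = spectra.mean(axis=0)
    resid = spectra - mean_spec
    # The leading singular vectors of the residuals capture variance
    # shared across epochs, e.g. telluric lines of varying depth.
    U, s, Vt = np.linalg.svd(resid, full_matrices=False)
    model = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    return spectra - model
```

Because the stellar lines are (nearly) constant in this frame, they land in the mean spectrum, while the varying telluric absorption dominates the leading principal components and is subtracted.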
Time Domain Follow-up I
Reengineering observatory operations for the time domain
Observatories are complex scientific and technical institutions serving diverse users and purposes. Their telescopes, instruments, software, and human resources engage in interwoven workflows over a broad range of timescales. These workflows have been tuned to be responsive to concepts of observatory operations that were applicable when various assets were commissioned, years or decades in the past. The astronomical community is entering an era of rapid change increasingly characterized by large time domain surveys, robotic telescopes and automated infrastructures, and – most significantly – by operating modes and scientific consortia that span our individual facilities, joining them into complex network entities. Observatories must adapt, and numerous initiatives are in progress that focus on redesigning individual components of the astronomical toolkit. New instrumentation is both more capable and more complex than ever, and even simple instruments may have powerful observation scripting capabilities. Remote and queue observing modes are now widespread. Data archives are becoming ubiquitous. Virtual observatory standards and protocols, and the astroinformatics data-mining techniques layered on them, are areas of active development. Indeed, new large-aperture ground-based telescopes may be as expensive as space missions and have similarly formal project management processes and large data management requirements. This piecewise approach is not enough. Whatever the challenges of funding or politics facing the national and international astronomical communities, it will be more efficient – scientifically as well as in the usual figures of merit of cost, schedule, performance, and risk – to explicitly address the systems engineering of the astronomical community as a whole.
Prospects and challenges in the electromagnetic follow-up of LIGO-Virgo gravitational wave transients
The kilometer-scale ground-based gravitational wave (GW) detectors, LIGO and Virgo, are being upgraded to their advanced configurations. We expect the two LIGO observatories to undertake a 3-month science run in 2015 with a limited sensitivity. Virgo should come online in 2016 and join LIGO for a 6-month science run. Through a sequence of science runs and commissioning periods, the final sensitivity should be reached by ~2019. LIGO and Virgo are expected to deliver the first direct detection of gravitational wave transients in the next few years. Most of the known sources of GWs targeted by LIGO and Virgo will likely be luminous in the electromagnetic (EM) spectrum as well. Compact binary coalescences are thought to be progenitors of short gamma-ray bursts, while long gamma-ray bursts are likely to be associated with core-collapse supernovae. A joint detection of gravitational and EM radiation may help confirm these associations and expand our understanding of those astrophysical systems. Due to the transient nature of these sources, the search for the EM counterparts to GW events must be conducted with the lowest possible latency. In this paper we describe the EM follow-up program of Advanced LIGO and Virgo, from the search for GWs to the production of sky maps. Furthermore, we quantify the expected sky localization errors in the first two years of operation of the advanced detector network.
Time Domain Follow-up II
ANTARES: a prototype transient broker system
Abhijit Saha, Thomas Matheson, Richard Snodgrass, et al.
The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is a joint project of the National Optical Astronomy Observatory and the Department of Computer Science at the University of Arizona. The goal is to build the software infrastructure necessary to process and filter alerts produced by time-domain surveys, with the ultimate source of such alerts being the Large Synoptic Survey Telescope (LSST). The ANTARES broker will add value to alerts by annotating them with information from external sources such as previous surveys from across the electromagnetic spectrum. In addition, the temporal history of annotated alerts will provide further annotation for analysis. These alerts will go through a cascade of filters to select interesting candidates. For the prototype, ‘interesting’ is defined as the rarest or most unusual alert, but future systems will accommodate multiple filtering goals. The system is designed to be flexible, allowing users to access the stream at multiple points throughout the process, and to insert custom filters where necessary. We describe the basic architecture of ANTARES and the principles that will guide development and implementation.
Operations Benchmarking and Metrics
Tracking progress: monitoring observing statistics and telescope usage at the Southern African Large Telescope
Steven M. Crawford, Anthony Koeslag, Encarni Romero Colmenero, et al.
Monitoring the performance of a facility is critical to successful scientific operations, even more so for queue-based telescopes such as SALT. We highlight the steps that have been undertaken to monitor the performance of the Southern African Large Telescope from proposal submission to on-sky observations, and finally to publication. A suite of dedicated software tools has been produced to monitor the performance of the telescope, weather conditions, and scientific productivity. We report on some of the key metrics for SALT since the start of science operations to provide a baseline for its current performance. Taking into account that science operations only began in September 2011, the number of papers produced by SALT since that time is similar to that of other 8m-class observatories at the beginning of their operations.
A bibliometric analysis of observatory publications for the period 2008-2012
The primary scientific output from an astronomical telescope is the collection of papers published in refereed journals. A telescope's productivity is measured by the number of papers published which are based upon data taken with the telescope. The scientific impact of a paper can be measured quantitatively by the number of citations that the paper receives. In this paper I will examine the productivity and impact of over 20 telescopes, mainly optical/IR, with apertures larger than 3.5-m for the years between 2008 and 2012.
The LSST metrics analysis framework (MAF)
R. Lynne Jones, Peter Yoachim, Srinivasan Chandrasekharan, et al.
We describe the Metrics Analysis Framework (MAF), an open-source python framework developed to provide a user-friendly, customizable, easily-extensible set of tools for analyzing data sets. MAF is part of the Large Synoptic Survey Telescope (LSST) Simulations effort. Its initial goal is to provide a tool to evaluate LSST Operations Simulation (OpSim) simulated surveys to help understand the effects of telescope scheduling on survey performance, however MAF can be applied to a much wider range of datasets. The building blocks of the framework are Metrics (algorithms to analyze a given quantity of data), Slicers (subdividing the overall data set into smaller data slices as relevant for each Metric), and Database classes (to access the dataset and read data into memory). We describe how these building blocks work together, and provide an example of using MAF to evaluate different dithering strategies. We also outline how users can write their own custom Metrics and use these within the framework.
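The Metric/Slicer/Database decomposition described above can be illustrated with a minimal toy. The class and function names below are our own simplification of the pattern, not MAF's actual API:

```python
import numpy as np

class MeanMetric:
    """Toy 'Metric': computes the mean of one column of a data slice."""
    def __init__(self, col):
        self.col = col

    def run(self, data_slice):
        return float(np.mean(data_slice[self.col]))

class OneDSlicer:
    """Toy 'Slicer': subdivides a structured array by binning one column."""
    def __init__(self, col, bin_edges):
        self.col = col
        self.bin_edges = bin_edges

    def slices(self, data):
        idx = np.digitize(data[self.col], self.bin_edges)
        for i in range(1, len(self.bin_edges)):
            yield data[idx == i]

def run_metric(metric, slicer, data):
    """Apply the metric to every non-empty slice of the data set."""
    return [metric.run(s) for s in slicer.slices(data) if len(s)]
```

The appeal of the pattern is that any Metric can be combined with any Slicer: the same mean-airmass metric works whether the data are sliced by night, by sky position, or by filter.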
Program and Observation Scheduling
Improving the LSST dithering pattern and cadence for dark energy studies
The Large Synoptic Survey Telescope (LSST) will explore the entire southern sky over 10 years starting in 2022 with unprecedented depth and time sampling in six filters, ugrizy. Artificial power on the scale of the 3.5 deg LSST field-of-view will contaminate measurements of baryonic acoustic oscillations (BAO), which fall at the same angular scale at redshift z ~ 1. Using the HEALPix framework, we demonstrate the impact of an “undithered” survey, in which 17% of each LSST field-of-view is overlapped by neighboring observations, generating a honeycomb pattern of strongly varying survey depth and significant artificial power on BAO angular scales. We find that adopting large dithers (i.e., telescope pointing offsets) of amplitude close to the LSST field-of-view radius reduces artificial structure in the galaxy distribution by a factor of ~10. We propose an observing strategy utilizing large dithers within the main survey and minimal dithers for the LSST Deep Drilling Fields. We show that applying various magnitude cutoffs can further increase survey uniformity. We find that a magnitude cut of r < 27.3 removes significant spurious power from the angular power spectrum with a minimal reduction in the total number of observed galaxies over the ten-year LSST run. We also determine the effectiveness of the observing strategy for Type Ia SNe and predict that the main survey will contribute ~100,000 Type Ia SNe. We propose a concentrated survey where LSST observes one-third of its main survey area each year, increasing the number of main survey Type Ia SNe by a factor of ~1.5, while still enabling the successful pursuit of other science drivers.
Planning and scheduling at STScI: from Hubble to the James Webb Space Telescope
David S. Adler, Wayne Kinzel, Ian Jordan
While HST’s planning and scheduling processes are mature, JWST’s, with a planned 2018 launch, are still in development. The STScI science, engineering, software, and operations teams are working together to get the JWST planning and scheduling systems up and running in the next few years. Here, we review the improvements made to HST’s planning and scheduling processes over the past three decades, as well as the current state of the observing program. We also discuss differences between the two telescopes and how they affect the creation of the JWST planning and scheduling system.
Novel scheduling approaches in the era of multi-telescope networks
E. S. Saunders, S. Lampoudi, T. A. Lister, et al.
Las Cumbres Observatory Global Telescope (LCOGT) is developing a worldwide network of fully robotic optical telescopes dedicated to time-domain astronomy. Observatory automation, longitudinal spacing of the sites, and a centralised network scheduler enable a range of observing modes impossible with traditional manual observing from a single location. We discuss the design goals of the LCOGT network scheduler, and in particular examine the unique network characteristics we seek to exploit for novel observing. We present an analysis of the key design trade-offs informing the scheduling architecture and data model, with special emphasis on both the unusual capabilities we have implemented, and some of the limitations of our approach. Finally, we describe some of the lessons we have learnt as we have moved from the beta test phase into full operational deployment in 2014.
Seeing and ground meteorology forecast for site quality and observatory operations
C. Giordano, J. Vernin, C. Muñoz-Tuñon, et al.
The quality of astronomical observations is strongly related to the properties of the atmosphere. These properties are important for the determination of the observation modes and for the observation program, the so-called flexible scheduling. We present the implementation of the WRF model to routinely and automatically forecast the optical conditions. The purpose of our study is to predict, 24 hours ahead, the optical conditions above an observatory in order to optimize the observation time: not only the meteorological conditions at ground level, but also the vertical distribution of the optical turbulence and the wind speed, i.e., the so-called astronomical seeing. The seeing is computed using the Trinquet-Vernin model coupled with the vertical profiles of the wind shear and the potential temperature predicted by the WRF model. We compare the WRF output with in situ measurements made with the DIMM and an automatic weather station at the Observatorio del Roque de los Muchachos, Canary Islands. We show that increasing the resolution of both the terrain model and the 3D grid yields better forecasts when compared with in situ optical and meteorological observations.
The LSST OCS scheduler design
The Large Synoptic Survey Telescope (LSST) is a complex system of systems with demanding performance and operational requirements. The nature of its scientific goals requires a special Observatory Control System (OCS) and particularly a very specialized automatic Scheduler. The OCS Scheduler is an autonomous software component that drives the survey, selecting the detailed sequence of visits in real time, taking into account multiple science programs, the current external and internal conditions, and the history of observations. We have developed a SysML model for the OCS Scheduler that fits coherently in the OCS and LSST integrated model. We have also developed a prototype of the Scheduler that implements the scheduling algorithms in the simulation environment provided by the Operations Simulator, where the environment and the observatory are modeled with real weather data and detailed kinematics parameters. This paper expands on the Scheduler architecture and the proposed algorithms to achieve the survey goals.
Artificial intelligence for the CTA Observatory scheduler
Josep Colomé, Pau Colomer, Jordi Campreciós, et al.
The Cherenkov Telescope Array (CTA) project will be the next generation ground-based very high energy gamma-ray instrument. The success of the precursor projects (i.e., HESS, MAGIC, VERITAS) motivated the construction of this large infrastructure, which has been included in the roadmap of the ESFRI projects since 2008. CTA is planned to start the construction phase in 2015 and will consist of two arrays of Cherenkov telescopes operated as a proposal-driven open observatory. Two sites are foreseen in the southern and northern hemispheres. The CTA observatory will handle several observation modes and will have to operate tens of telescopes with a highly efficient and reliable control. Thus, the CTA planning tool is a key element in the control layer for the optimization of the observatory time. The main purpose of the scheduler for CTA is the allocation of multiple tasks to one single array or to multiple sub-arrays of telescopes, while maximizing the scientific return of the facility and minimizing the operational costs. The scheduler considers long- and short-term varying conditions to optimize the prioritization of tasks. A short-term scheduler provides the system with the capability to adapt, in almost real-time, the selected task to the varying execution constraints (i.e., Targets of Opportunity, health or status of the system components, environment conditions). The scheduling procedure ensures that long-term planning decisions are correctly transferred to the short-term prioritization process for a suitable selection of the next task to execute on the array. In this contribution we present the constraints to CTA task scheduling that helped classify it as a Flexible Job-Shop Problem case and find its optimal solution based on Artificial Intelligence techniques.
We describe the scheduler prototype that uses a Guarded Discrete Stochastic Neural Network (GDSN), for an easy representation of the possible long- and short-term planning solutions, and Constraint Propagation techniques. A simulation platform, an analysis tool and different test case scenarios for CTA were developed to test the performance of the scheduler and are also described.
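As a loose sketch of short-term task selection under execution constraints (CTA's actual scheduler uses a Guarded Discrete Stochastic Neural Network with constraint propagation, not this greedy rule), a constraint-filtered priority pick might look like:

```python
def pick_next_task(tasks, now, conditions):
    """Return the highest-priority task whose constraints all pass,
    or None if nothing is feasible.

    Each task is a dict with 'name', 'priority', and 'constraints'
    (a list of callables taking (now, conditions)). This data model
    is hypothetical, for illustration only.
    """
    feasible = [t for t in tasks
                if all(check(now, conditions) for check in t['constraints'])]
    return max(feasible, key=lambda t: t['priority']) if feasible else None
```

Re-evaluating the constraints at each selection step is what lets such a scheduler react in near real time to Targets of Opportunity, subsystem health, and changing environmental conditions.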
Science Operations I
Remote access and operation of telescopes by the scientific users
P. G. Edwards, S. Amy, D. Brodrick, et al.
The Australia Telescope National Facility operates three radio telescopes: the Parkes 64m Telescope, the Australia Telescope Compact Array (ATCA), and the Mopra 22m Telescope. Scientific operation of all these is conducted by members of the investigating teams rather than by professional operators. All three can now be accessed and controlled from any location served by the internet, the telescopes themselves being unattended for part or all of the time. Here we describe the rationale, advantages, and means of implementing this operational model.
ALMA observations during its first early science cycles
Lars-Åke Nyman, Pierre Cox, Stuartt Corder, et al.
The Atacama Large Millimeter/submillimeter Array (ALMA) is a new interferometer operated on Llano de Chajnantor at 5050 m altitude in the Chilean Andes. It consists of 66 antennas operating in the mm/submm windows between 3 and 0.3 mm wavelength. Early science observations using 16 antennas (known as Cycle 0) started in parallel with construction in September 2011, in order to provide useful results to the astronomy community and to facilitate the ongoing characterization of its system. ALMA is currently in Cycle 2 of early science observations. This presentation describes the development and progress of ALMA observations and data processing from Cycle 0 towards full operations.
Flux-calibration of medium-resolution spectra from 300 nm to 2500 nm
Sabine Moehler, Andrea Modigliani, Wolfram Freudling, et al.
While the near-infrared wavelength regime is becoming more and more important for astrophysics, few spectrophotometric standard star data are available to flux-calibrate such observations. On the other hand, flux-calibrating high-resolution spectra is a challenge even in the optical wavelength range, because the available flux standard data are often too coarsely sampled. We describe a method to obtain reference spectra derived from stellar model atmospheres, which allow users to derive response curves from 300 nm to 2500 nm even for high-resolution spectra. We verified that they provide an appropriate description of the observed standard star spectra by checking for residuals in line cores and line overlap regions in the ratios of observed spectra to model spectra. The finally selected model spectra are then empirically corrected for remaining mismatches and photometrically calibrated using independent observations. In addition, we have defined an automatic method to correct for moderate telluric absorption using telluric model spectra of very high spectral resolution, which can easily be adapted to the observed data. This procedure eliminates the need to observe telluric standard stars, as long as some knowledge of the target spectrum exists.
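The response curve mentioned above can be sketched as the smooth ratio of an observed standard-star spectrum to its reference model spectrum. The function and the polynomial smoothing below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def response_curve(wave, observed, model, deg=5):
    """Instrument response as the polynomial-smoothed ratio of an
    observed standard-star spectrum to its reference model spectrum.
    Sketch only; the degree and normalization are assumptions."""
    w = (wave - wave.mean()) / wave.std()   # normalize for a stable fit
    coef = np.polyfit(w, observed / model, deg)
    return np.polyval(coef, w)
```

Dividing a science spectrum by such a response curve then converts instrumental counts to relative flux; this is why the reference model must be well sampled across line cores and overlap regions.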
Quantifying photometric observing conditions on Paranal using an IR camera
A Low Humidity and Temperature Profiling (LHATPRO) microwave radiometer, manufactured by Radiometer Physics GmbH (RPG), is used to monitor sky conditions over ESO’s Paranal observatory in support of VLT science operations. In addition to measuring precipitable water vapour (PWV), the instrument also contains an IR camera measuring sky brightness temperature at 10.5 μm. Due to its extended operating range down to -100 °C, it is capable of detecting very cold and very thin, even sub-visual, cirrus clouds. We present a set of instrument flux calibration values as compared with a detrended fluctuation analysis (DFA) of the IR camera zenith-looking sky brightness data measured above Paranal over the past two years. We show that it is possible to quantify photometric observing conditions and that the method is highly sensitive to the presence of even very thin clouds, but robust against variations of sky brightness caused by effects other than clouds, such as variations of precipitable water vapour. Hence it can be used to determine photometric conditions for science operations. About 60% of nights on Paranal are free of clouds. More work will be required to classify the clouds using this technique. In the future this approach might become part of VLT science operations for evaluating nightly sky conditions.
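A single-window-size version of detrended fluctuation analysis can be sketched as follows. This is a textbook simplification, not the operational Paranal implementation:

```python
import numpy as np

def dfa_fluctuation(x, window):
    """DFA fluctuation F(n) for a single window size n: integrate the
    series, detrend each window linearly, return the rms residual.
    Textbook simplification for illustration."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    n_win = len(y) // window
    ms = []
    for i in range(n_win):
        seg = y[i * window:(i + 1) * window]
        t = np.arange(window)
        trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
        ms.append(np.mean((seg - trend) ** 2))
    return np.sqrt(np.mean(ms))
```

The diagnostic power comes from how F(n) scales with n: uncorrelated noise (a stable clear sky) gives a small, slowly growing fluctuation, while correlated structure such as drifting cirrus produces a much larger one at the same window size.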
Science Operations II
Translating PI observing proposals into ALMA observing scripts
The ALMA telescope is a complex 66-antenna array working in the specialized domain of mm- and sub-mm aperture synthesis imaging. To make ALMA accessible to technically inexperienced but scientifically expert users, the ALMA Observing Tool (OT) has been developed. Using the OT, scientifically oriented user input is formatted as observing proposals that are packaged for peer-review and assessment of technical feasibility. If accepted, the proposal’s scientifically oriented inputs are translated by the OT into scheduling blocks, which function as input to observing scripts for the telescope’s online control system. Here I describe the processes and practices by which this translation from PI scientific goals to online control input and schedule block execution actually occurs.
Solar Wind Electrons Alphas and Protons (SWEAP) Science Operations Center initial design and implementation
Kelly E. Korreck, Justin C. Kasper, Anthony W. Case, et al.
Solar Probe Plus, scheduled to launch in 2018, is a NASA mission that will fly through the Sun's atmosphere for the first time. It will employ a combination of in situ plasma measurements and remote sensing imaging to achieve the mission's primary goal: to understand how the Sun's corona is heated and how the solar wind is accelerated. The Solar Wind Electrons Alphas and Protons (SWEAP) instrument suite consists of a Faraday cup and three electrostatic analyzers. In order to accomplish the science objectives, an encounter-based operations scheme is needed. This paper will outline the SWEAP science operations center design and schemes for data selection and downlink.
GBOT: ground based optical tracking of the Gaia satellite
Martin Altmann, Sebastien Bouquillon, Francois Taris, et al.
Gaia, the billion-star, high-precision astrometric satellite, will revolutionise our understanding in many areas of astronomy, ranging from bodies in our Solar System to the formation and structure of our Galaxy. To fully achieve the ambitious goals of the mission, and to completely eliminate effects such as aberration, we must know the position and velocity vectors of the spacecraft as it orbits the Lagrange point to an accuracy greater than can be obtained by traditional radar techniques, leading to the decision to conduct astrometric observations of the Gaia satellite itself from the ground. Therefore the Ground Based Optical Tracking (GBOT) project was formed and a small worldwide network using 1-2 m telescopes established in order to obtain one measurement per day with a precision/accuracy of 20 mas. We discuss all aspects of GBOT: setup, feasibility considerations, preliminary tests of observing methods, partner observatories, and the pipeline/database (see also the contribution by Bouquillon et al.).
The Gaia payload uplink commanding system
A. Mora, A. Abreu, N. Cheek, et al.
This document describes the uplink commanding system for the ESA Gaia mission. The need for commanding, the main actors, data flow and systems involved are described. The system architecture is explained in detail, including the different levels of configuration control, software systems and data models. A particular subsystem, the automatic interpreter of human-readable onboard activity templates, is also carefully described. Many lessons have been learned during the commissioning and are also reported, because they could be useful for future space survey missions.
NuSTAR observatory science operations: on-orbit acclimation
The Nuclear Spectroscopic Telescope Array (NuSTAR) is the first focusing high energy (3-79 keV) X-ray observatory. The NuSTAR project is led by Caltech, which hosts the Science Operations Center (SOC), with mission operations managed by UCB Space Sciences Laboratory. We present an overview of NuSTAR science operations and describe the on-orbit performance of the observatory. The SOC is enhancing science operations to serve the community with a guest observing program beginning in 2015. We present some of the challenges and approaches taken by the SOC to operating a full service space observatory that maximizes the scientific return from the mission.
Operations and Data Quality Control
Focus and alignment of the Space Surveillance Telescope: procedures and year 2 performance results
Deborah Freedman Woods, Richard L. Lambour, Walter J. Faccenda, et al.

The Space Surveillance Telescope (SST) is a three-mirror Mersenne-Schmidt telescope with a 3.5 m primary mirror that is designed for deep, wide-area sky surveys. The SST design incorporates a camera with charge-coupled devices (CCDs) on a curved substrate to match the telescope’s inherent field curvature, capturing a large field-of-view (6 square degrees) with good optical performance across the focal surface. The unique design enables a compact mount construction for agile pointing, contributing to survey efficiency. However, the optical properties make focus and alignment challenging due to an inherently small depth of focus and the additional degrees of freedom that result from having a powered tertiary mirror. Adding to the challenge, the optical focus and alignment of the mirrors must be accomplished without a dedicated wavefront sensor.

Procedures created or adapted for use at the SST have enabled a successful campaign for focus and alignment, based on a five-step iterative process to (1) position the tertiary mirror along the optical axis to reduce defocus; (2) reduce spherical aberration by a coordinated move of the tertiary and secondary mirrors; (3) measure the higher order aberrations including astigmatism and coma; (4) associate the measured aberrations with the predictions of optical ray-tracing analysis; and (5) apply the mirror corrections and repeat steps 1-4 until optimal performance is achieved (Woods et al. 2013). A set of predicted mirror motions is used to maintain system performance across changes in telescope elevation pointing and in temperature conditions, both nightly and seasonally. This paper provides an overview of the alignment procedure developed for the SST and reports on the focus performance through the telescope’s second year, including lessons learned over the course of operation.

Highly automated on-orbit operations of the NuSTAR telescope
Bryce Roberts, Manfred Bester, Renee Dumlao, et al.
UC Berkeley's Space Sciences Laboratory (SSL) currently operates a fleet of seven NASA satellites, which conduct research in the fields of space physics and astronomy. The newest addition to this fleet is a high-energy X-ray telescope called the Nuclear Spectroscopic Telescope Array (NuSTAR). Since 2012, SSL has conducted on-orbit operations for NuSTAR on behalf of the lead institution, principal investigator, and Science Operations Center at the California Institute of Technology. NuSTAR operations benefit from a truly multi-mission ground system architecture design focused on automation and autonomy that has been honed by over a decade of continual improvement and ground network expansion. This architecture has made flight operations possible with a nominal 40 hours per week of staffing, without compromising mission safety. The remote NuSTAR Science Operations Center (SOC) and Mission Operations Center (MOC) are joined by a two-way electronic interface that allows the SOC to submit automatically validated telescope pointing requests, and also to receive raw data products that are automatically produced after downlink. Command loads are built and uploaded weekly, and a web-based timeline allows both the SOC and MOC to monitor the state of currently scheduled spacecraft activities. Network routing and the command and control system are fully automated by the MOC's central scheduling system. A closed-loop data accounting system automatically detects and retransmits data gaps. All passes are monitored by two independent paging systems, which alert staff to pass support problems or anomalous telemetry. NuSTAR mission operations now require less than one attended pass support per workday.
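Closed-loop data accounting of the kind described amounts to comparing received frame sequence numbers against the expected range and requesting retransmission of the missing spans. A minimal sketch, assuming frames carry integer sequence numbers; the function and variable names are hypothetical, not SSL's actual interface:

```python
# Hypothetical sketch of closed-loop data accounting: find gaps in the
# received frame sequence so they can be queued for retransmission.

def find_gaps(received, expected_first, expected_last):
    """Return (start, end) inclusive ranges of missing frame numbers."""
    have = set(received)
    gaps, start = [], None
    for seq in range(expected_first, expected_last + 1):
        if seq not in have:
            if start is None:
                start = seq          # open a new gap
        elif start is not None:
            gaps.append((start, seq - 1))  # close the current gap
            start = None
    if start is not None:
        gaps.append((start, expected_last))
    return gaps

# A downlink that dropped frames 103-105 and 110:
received = [100, 101, 102, 106, 107, 108, 109, 111, 112]
print(find_gaps(received, 100, 112))  # [(103, 105), (110, 110)]
```

Each returned range would then be turned into a retransmission request for the next pass, closing the loop automatically.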
An update on the status and performance of the Radiometric All-Sky Infrared Camera (RASICAM)
Kevin Reil, Peter Lewis, Rafe Schindler, et al.
The Radiometric All-Sky Infrared Camera (RASICAM) has been operating on Cerro Tololo for over two years, looking for clouds in the 10 to 12 micron IR band. Every 90 seconds each night, RASICAM collects an integrated image of sky conditions and reports the results to the Blanco telescope control system (TCS) to be shared with other instruments. We report on the design, calibration, and performance of the system. Additionally, correlations with conditions observed by the Dark Energy Camera (DECam) will be presented.
The Dark Energy Survey and operations: Year 1
H. T. Diehl, T. M. C. Abbott, J. Annis, et al.
The Dark Energy Survey (DES) is a next generation optical survey aimed at understanding the accelerating expansion of the universe using four complementary methods: weak gravitational lensing, galaxy cluster counts, baryon acoustic oscillations, and Type Ia supernovae. To perform the 5000 sq-degree wide field and 30 sq-degree supernova surveys, the DES Collaboration built the Dark Energy Camera (DECam), a 3 square-degree, 570-Megapixel CCD camera that was installed at the prime focus of the Blanco 4-meter telescope at the Cerro Tololo Inter-American Observatory (CTIO). DES started its first observing season on August 31, 2013 and observed for 105 nights through mid-February 2014. This paper describes DES “Year 1” (Y1), the strategy and goals for the first year's data, provides an outline of the operations procedures, lists the efficiency of survey operations and the causes of lost observing time, provides details about the quality of the first year's data, and hints at the “Year 2” plan and outlook.
SALSA: a tool to estimate the stray light contamination for low-Earth orbit observatories
Thibault Kuntzer, Andrea Fortier, Willy Benz
Stray light contamination considerably reduces the precision of photometric measurements of faint stars for low-altitude spaceborne observatories. When measuring faint objects, stray light contamination must be dealt with to avoid systematic effects on low signal-to-noise images. Stray light contamination can be represented by a flat offset in CCD data. Mitigation begins with a comprehensive study during the design phase, followed by target-pointing optimisation and post-processing methods. We present a code that simulates the stray-light contamination in low-Earth orbit arising from the reflection of solar light by the Earth. StrAy Light SimulAtor (SALSA) is intended for use at an early stage to evaluate the effectively visible region of the sky and, therefore, to optimise the observation sequence. SALSA can compute Earth stray light contamination for significant periods of time, allowing mission-wide parameters to be optimised (e.g. imposing constraints on the point source transmission function (PST) and/or on the altitude of the satellite). It can also be used to study the behaviour of the stray light in different seasons or at different latitudes. Given the position of the satellite with respect to the Earth and the Sun, SALSA computes the stray light at the entrance of the telescope following a geometrical technique. After characterising the illuminated region of the Earth, the portion of illuminated Earth that affects the satellite is calculated. Then, the flux of reflected solar photons is evaluated at the entrance of the telescope. Using the PST of the instrument, the final stray light contamination at the detector is calculated. The analysis tools include time-series analysis of the contamination, evaluation of the sky coverage, and an object-visibility predictor. Effects of the South Atlantic Anomaly and of any shutdown periods of the instrument can be added.
Several designs or mission concepts can easily be tested and compared. The code is not intended as a stand-alone mission designer: its mandatory inputs are a time series describing the trajectory of the satellite and the characteristics of the instrument. This software suite has been applied to the design and analysis of CHEOPS (CHaracterizing ExOPlanet Satellite), a mission that requires very high precision photometry to detect very shallow transits of exoplanets. Different altitudes and detector characteristics have been studied in order to find the parameters that best reduce the effect of contamination.
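The geometrical technique SALSA follows can be illustrated at its simplest: an Earth surface element contributes stray light only if it is both sunlit and visible from the satellite, and its reflected flux is attenuated by the instrument's PST. The toy sketch below is not the SALSA code; the Lambertian albedo, the constant PST value, and the function names are invented assumptions, and the real code integrates such contributions over the whole illuminated Earth region.

```python
# Toy sketch of one surface element's reflected-solar contribution.
# Albedo and PST values below are invented placeholders.
import math

def unit(v):
    """Normalize a 3-vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def element_contribution(normal, sun_dir, sat_dir, albedo=0.3, pst=1e-8):
    """Reflected-solar contribution of one Earth surface element (arbitrary units)."""
    sunlit = max(0.0, dot(unit(normal), unit(sun_dir)))   # Lambert cosine toward the Sun
    visible = max(0.0, dot(unit(normal), unit(sat_dir)))  # cosine toward the satellite
    return albedo * sunlit * visible * pst

# Element at the sub-solar point with the satellite overhead: maximum contribution.
peak = element_contribution((0, 0, 1), (0, 0, 1), (0, 0, 1))
# Element on the night side (normal faces away from the Sun): zero contribution.
dark = element_contribution((0, 0, -1), (0, 0, 1), (0, 0, 1))
```

Summing such terms over the illuminated, visible portion of the Earth gives the flux at the telescope entrance, which the PST then maps to contamination at the detector, matching the sequence of steps in the abstract.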
User Support
The European ALMA Regional Centre: a model of user support
P. Andreani, F. Stoehr, M. Zwaan, et al.
The ALMA Regional Centres (ARCs) form the interface between the ALMA observatory and the user community from the proposal preparation stage to the delivery of data and their subsequent analysis. The ARCs provide critical services to both the ALMA operations in Chile and to the user community. These services were split by the ALMA project into core and additional services. The core services are financed by the ALMA operations budget and are critical to the successful operation of ALMA. They are contractual obligations and must be delivered to the ALMA project. The additional services are not funded by the ALMA project and are not contractual obligations, but are critical to achieving ALMA's full scientific potential. A distributed network of ARC nodes (with ESO being the central ARC) has been set up throughout Europe at the following seven locations: Bologna, Bonn-Cologne, Grenoble, Leiden, Manchester, Ondrejov, Onsala. These ARC nodes work together with the central node at ESO and provide both core and additional services to the ALMA user community. This paper presents the European ARC and how it operates in Europe to support the ALMA community. This model, although complex in nature, is proving very successful, providing a service that the scientific community has so far highly appreciated. The ARC could become a reference support model in an age where very large collaborations are required to build large facilities, and support is needed for geographically and culturally diverse communities.
The human pipeline: distributed data reduction for ALMA
Scott L. Schnee, Crystal Brogan, Daniel Espada, et al.
Users of the Atacama Large Millimeter/submillimeter Array (ALMA) are provided with calibration and imaging products in addition to raw data. In Cycle 0 and Cycle 1, these products are produced by a team of data reduction experts spread across Chile, East Asia, Europe, and North America. This article discusses the lines of communication between the data reducers and ALMA users that enable this model of distributed data reduction. This article also discusses the calibration and imaging scripts that have been provided to ALMA users in Cycles 0 and 1, and what will be different in future Cycles.
The Gemini Observatory fast turnaround program
R. E. Mason, S. Côté, M. Kissler-Patig, et al.
Gemini's Fast Turnaround program is intended to greatly decrease the time from having an idea to acquiring the supporting data. The scheme will offer monthly proposal submission opportunities, and proposals will be reviewed by the principal investigators or co-investigators of other proposals submitted during the same round. Here, we set out the design of the system and outline the plan for its implementation, leading to the launch of a pilot program at Gemini North in January 2015.
Site and Facility Operations I
LCOGT network observatory operations
Andrew Pickles, Annie Hjelstrom, Todd Boroson, et al.
We describe the operational capabilities of the Las Cumbres Observatory Global Telescope Network. We summarize our hardware and software for maintaining and monitoring network health. We focus on methodologies to utilize the automated system to monitor availability of sites, instruments and telescopes, to monitor performance, permit automatic recovery, and provide automatic error reporting. The same jTCS control system is used on telescopes of apertures 0.4m, 0.8m, 1m and 2m, and for multiple instruments on each. We describe our network operational model, including workloads, and illustrate our current tools, and operational performance indicators, including telemetry and metrics reporting from on-site reductions. The system was conceived and designed to establish effective, reliable autonomous operations, with automatic monitoring and recovery - minimizing human intervention while maintaining quality. We illustrate how far we have been able to achieve that.
Instrumentation on Paranal Observatory: how to keep their reliability and performances over their lifetime?
Instrumentation on Paranal Observatory currently comprises 18 scientific instruments (operational, in commissioning, or on standby) and 8 technical instruments (test camera, fringe tracking, AO modules, laser guide star facility, tip-tilt sensor). Their implementation and operation started 15 years ago. Over the years, enough information about their typical behavior was gathered to define a preventive maintenance plan for each instrument and/or a general refurbishment in order to maintain their reliability and performance.
A comparison of operation models and management strategies for the Spitzer Space Telescope and the Nuclear Spectroscopic Telescope Array
The Spitzer Space Telescope was launched in 2003 as part of NASA’s Great Observatory Program to measure the infrared universe. As a 100% Community Observatory, Spitzer started with a large infrastructure, which has been trimmed during its extended missions to less than two-thirds of its original budget. The Nuclear Spectroscopic Telescope Array is a NASA Small Explorer Mission targeting the high-energy X-ray sky. It was launched in June 2012 and is currently carrying out its two-year primary mission. This paper compares the two missions: the differences between large and small missions, between Community and Principal Investigator missions, and the operations and management strategies of each. In addition, the paper discusses the process of downsizing a large mission to a model similar to that of an Explorer-class spacecraft.
Maintaining a suite of binocular facility instruments at the Large Binocular Telescope
Robert O. Reynolds, John Morris, Jennifer Power, et al.
Facility Instruments at the Large Binocular Telescope (LBT) include the Large Binocular Camera (LBC), a pair of wide-field imagers at the prime focus, the LUCIFER (or LUCI) near-infrared imager and spectrograph pair, and the Multi-Object Double Spectrograph (MODS), a pair of long-slit spectrographs. The disciplines involved in instrument support are reviewed, as well as scheduling of support personnel. A computerized system for instrument maintenance scheduling and spare parts inventory is described. Instrument problems are tracked via an online reporting system, and statistics on types of instrument problems are discussed, as well as applicability of the system to troubleshooting.
LBTO's long march to full operation - step 1
For the LBT Observatory, the next couple of years promise to be both exciting and challenging: exciting as the long-awaited suite of first-generation instruments and GLAO become available for binocular operations, and as regular interferometric observations make the LBT the first operational ELT; challenging because LBTO will have to handle the maintenance and upgrades of instruments and of key components, such as its adaptive secondaries, about which it still has much to learn. This first step outlines a plan for optimizing LBTO's scientific production while mitigating the consequences of the inevitable setbacks these challenges will bring.
Site and Facility Operations II
Commissioning and operation of the new Karl G. Jansky Very Large Array
We give an overview of the scientific commissioning and early operation of the Karl G. Jansky Very Large Array (VLA). The Expanded VLA Construction Project was a decade-long project to transform the capabilities of the VLA, culminating in its re-dedication in 2012 as the Jansky VLA. The need to keep a vibrant and engaged user community throughout the entire construction project translated into operational requirements (one of which was allowing the minimum down-time possible), and the need for a mechanism to provide the community with early access to the new capabilities alongside on-going construction and commissioning, using a staged approach. This access was enabled during the EVLA Construction Project by defining an Open Shared Risk Observing (OSRO) program for the general community, and a Resident Shared Risk Observing (RSRO) program for those requesting capabilities not fully commissioned in exchange for a period of residency to help commission and test those capabilities with the assistance of NRAO staff. The OSRO program has become the General Observing (GO) program in full operations, and the RSRO program has continued as a means of maintaining, and adding to, an active pool of users with innovative ideas for new capabilities, driven by their science. Besides the new technical capabilities, the start of full operations of the Jansky VLA also introduced full dynamic scheduling, including the ability for fast (less than 24 hour) response to triggers and targets of opportunity, and the delivery of pipeline-calibrated visibility data for continuum projects. We discuss some of the challenges resulting from the new capabilities and operational model for the VLA.
Auxiliary instruments for the absolute calibration of the ASTRI SST-2M prototype for the Cherenkov Telescope Array
Maria Concetta Maccarone, Alberto Segreto, Osvaldo Catalano, et al.
ASTRI SST-2M is the end-to-end prototype telescope under development by the Italian National Institute of Astrophysics, INAF, proposed for the investigation of the highest-energy gamma-ray band in the framework of the Cherenkov Telescope Array, CTA. The ASTRI SST-2M prototype will be installed in Italy at the INAF station located at Serra La Nave on Mount Etna during Fall 2014; the calibration and scientific validation phase will start soon after. The calibration of a Cherenkov telescope involves several items and tools. The ASTRI SST-2M camera is equipped with an internal fiber illumination system that allows relative calibration through monitoring of the gain and efficiency variations of each pixel. The absolute calibration of the overall system, including the optics, will take advantage of auxiliary instrumentation, namely UVscope and UVSiPM, two small-aperture multi-pixel photon detectors NIST-calibrated in the laboratory. During the commissioning phase, to measure the main features of ASTRI SST-2M, such as its overall spectral response, the main telescope and the auxiliary UVscope-UVSiPM will be illuminated simultaneously by a spatially uniform flux generated by a ground-based light source, named the Illuminator, placed at a distance of a few hundred meters. Periodically, during clear nights, the flux profiles of a reference star tracked simultaneously by ASTRI SST-2M and UVscope-UVSiPM will allow the total atmospheric attenuation and the absolute calibration constant of the ASTRI SST-2M prototype to be evaluated. In this contribution we describe the auxiliary UVscope-UVSiPM and Illuminator sub-systems, together with an overview of the end-to-end calibration procedure foreseen for the ASTRI SST-2M telescope prototype.
Calibration strategies for the Cherenkov Telescope Array
Markus Gaug, David Berge, Michael Daniel, et al.
The Central Calibration Facilities work package of the Cherenkov Telescope Array (CTA) observatory for very-high-energy gamma-ray astronomy defines the overall calibration strategy of the array, develops dedicated hardware and software for the overall array calibration, and coordinates the calibration efforts of the different telescopes. The latter include LED-based light pulsers and various methods and instruments to calibrate the overall optical throughput. On the array level, methods for inter-telescope calibration and for the absolute calibration of the entire observatory are being developed. Additionally, the atmosphere above the telescopes, used as a calorimeter, will be monitored constantly with state-of-the-art instruments to obtain a full molecular and aerosol profile up to the stratosphere. The aim is to achieve a maximum uncertainty of 10% on the reconstructed energy scale, obtained through various independent methods. Different types of LIDAR in combination with all-sky cameras will provide the observatory with an online, intelligent scheduling system which, if the sky is partially covered by clouds, gives preference to sources observable under good atmospheric conditions. Wide-field optical telescopes and Raman lidars will provide online information about the height-resolved atmospheric extinction throughout the field of view of the cameras, allowing the reconstructed energy of each gamma-ray event to be corrected. The aim is to maximize the duty cycle of the observatory, in terms of usable data, while reducing the dead time introduced by calibration activities to an absolute minimum.
Operating observatories: the need for a new paradigm
At a time of declining funding, the managers of ground-based observatories may not be in the best position to secure adequate resources for developing new facilities or instruments or for upgrading existing facilities. Nor can they depend on the traditional support for researchers, which in turn implies inadequate funding to cover the cost of operations. For historical reasons, an overwhelming number of observatories in the USA are affiliated with, or hosted by, universities; yet, because of the traditional lack of entrepreneurial thinking and the complexity and extent of their administrations, a university may not be the best environment in which to develop new approaches to the management of observatories, nor is an academic background necessarily the best preparation for best management practices. We propose that observatories adopt a business-like approach, become service providers, and use the same metrics as a business. This approach may entail forming corporations or consortia, spreading the risk, and finding additional sources of income from sales and spin-offs.
The Isaac Newton Group of Telescopes on La Palma
The ING runs the highly productive 4.2-m William Herschel Telescope (WHT) and 2.5-m Isaac Newton Telescope (INT) on La Palma in the Canary Islands. I give an overview of the current operational model, commenting on how the model has evolved since the mid-1980s and on the experience gained with, e.g., instrument development; adaptive-optics/LGS deployment; hosting visiting instruments; scheduling; fault handling; student vs. staff support of observers; and performance monitoring.
Site and Facility Operations III
Tiers of the maintenance concept at ALMA in operations
The Atacama Large Millimeter/submillimeter Array finds itself in transition to full operations. Previous construction activities are being wrapped up, and regular, repetitive maintenance and upkeep will dominate daily life, which calls for a consolidation and streamlining of activities at the observatory. The shifting focus to the high site of the observatory especially deserves attention, since assembly, integration, and verification activities at the base camp have by now ceased. In parallel, adjustments to the host country's labor legislation for operations at high geographic altitude demand a review of the way things are done. This talk outlines the underlying operational concepts, lists the limiting constraints, describes how we have responded to them, and outlines our future intentions, which form one of a number of steps toward optimizing the productivity of the observatory: the top-level goal to which the Joint ALMA Observatory (JAO) has committed.
Creation of an instrument maintenance program at W. M. Keck Observatory
G. M. Hill, S. H. Kwok, J. A. Mader, et al.
Until a few years ago, the W. M. Keck Observatory (WMKO) did not have a systematic program of instrument maintenance at a level appropriate for a world-leading observatory. We describe the creation of such a program within the context of WMKO’s lean operations model, which posed challenges but also guided the design of the system and resulted in some unique and notable capabilities. These capabilities and the flexibility of the system have led to its adoption across the Observatory for virtually all preventive maintenance (PM) tasks. The success of the Observatory in implementing the program and its impact on instrument reliability are presented. Lessons learned are reviewed and strategic implications discussed.
Science operations for LCOGT: a global telescope network
T. Boroson, T. Brown, A. Hjelstrom, et al.
The Las Cumbres Observatory Global Telescope Network comprises nine 1-meter and two 2-meter telescopes, all robotic and dynamically scheduled, at five sites spanning the globe. Instrumentation includes optical imagers and low-dispersion spectrographs. A suite of high-dispersion, high-stability spectrographs is being developed for deployment starting late this year. The network has been designed and built to allow regular monitoring of time-variable or moving objects with any cadence, as well as rapid response to external alerts. Our intent is to operate it in a totally integrated way, both in terms of scheduling and in terms of data quality. The unique attributes of the LCOGT network make it different enough from any existing facility that alternative approaches to optimize science productivity can be considered. The LCOGT network V1.0 began full science operations this year. It is being used in novel ways to undertake investigations related to supernovae, microlensing events, solar system objects, and exoplanets. The network’s user base includes a number of partners, who are providing resources to the collaboration. A key project program brings together many of these partners to carry out large projects. In the long term, our vision is to operate the network as a part of a time-domain system, in which pre-planned monitoring observations are interspersed with autonomously detected and classified events from wide-area surveys.
Setting the standard: 25 years of operating the JCMT
Jessica T. Dempsey, Graham S. Bell, Antonio Chrysostomou, et al.
The James Clerk Maxwell Telescope (JCMT) is the largest single-dish submillimetre telescope in the world, and throughout its lifetime the volume and impact of its science output have steadily increased. A key factor in this continuing productivity is an ever-evolving approach to optimising operations, data acquisition, and science product pipelines and archives. The JCMT was one of the first common-user telescopes to adopt flexible scheduling, in 2003, and its impact over a decade of observing will be presented. The introduction of an advanced data-reduction pipeline played an integral role, both for fast real-time reduction during observing and for science-grade reduction in support of individual projects, legacy surveys, and the JCMT Science Archive. More recently, these foundations have facilitated the commencement of remote observing in addition to traditional on-site operations to further increase on-sky science time. The contribution of highly trained and engaged operators and of support and technical staff to efficient operations will be described. The long-term returns of this evolution are presented here, noting that they were achieved in the face of external pressures for leaner operating budgets and reduced staffing levels. In an era when visiting observers are being phased out of many observatories, we argue that maintaining a critical level of observer participation is vital to improving and maintaining scientific productivity and facility longevity.
SciOps2.0: an evolution of ESO/VLT's science operations model
Christophe Dumas, Henri Boffin, Stéphane Brillant, et al.
This paper presents the recent changes undergone by the Science Operations department of the ESO Paranal Observatory. The revised science operations model, named SciOps2, aims at improving the efficiency of operations and the quality of the data delivered to our community of users. The changes to the department structure, its staffing, and the distribution of tasks and responsibilities are described in detail, as well as the measured impact of these changes.
SUMO: operation and maintenance management web tool for astronomical observatories
SUMO is an operation and maintenance management web tool for managing the activities and resources required for the exploitation of a complex facility. SUMO's main capabilities are: an information repository; asset and stock control; a task scheduler; an archive of executed tasks; configuration and anomaly control; and notification and user management. The information needed to operate and maintain the system is initially stored in the tool's database. SUMO automatically schedules periodic tasks and facilitates the searching and programming of non-periodic tasks. Task plans can be visualized in different formats and dynamically edited to adjust to the available resources, anomalies, dates, and other constraints that arise during daily operation. SUMO warns users of potential conflicts in personnel availability or spare stock for the scheduled tasks. To conclude, SUMO has been designed to support the operational management of a scientific facility, in particular an astronomical observatory, by controlling all operating parameters: personnel, assets, spare and supply stocks, tasks, and time constraints.
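Periodic-task scheduling with availability warnings of the kind SUMO provides can be sketched in a few lines. This is a hypothetical illustration, not SUMO's implementation: the task list, periods, staffing numbers, and function names are all made up.

```python
# Hypothetical sketch: lay out periodic maintenance tasks over a horizon
# and flag dates where required personnel exceed daily availability.
from collections import defaultdict
from datetime import date, timedelta

def schedule(tasks, start, horizon_days, staff_per_day=2):
    """Return (plan, warnings): task occurrences per date, plus overbooked dates.

    tasks: list of (name, period_days, staff_needed) tuples.
    """
    plan = defaultdict(list)
    end = start + timedelta(days=horizon_days)
    for name, period_days, staff_needed in tasks:
        day = start + timedelta(days=period_days)
        while day <= end:
            plan[day].append((name, staff_needed))
            day += timedelta(days=period_days)
    warnings = sorted(d for d, entries in plan.items()
                      if sum(n for _, n in entries) > staff_per_day)
    return dict(plan), warnings

# Invented example: weekly and biweekly tasks collide every second week.
tasks = [("cryostat refill", 7, 1), ("mirror inspection", 14, 2)]
plan, warnings = schedule(tasks, date(2014, 8, 1), 28)
```

With two staff available per day, the dates where both tasks coincide (3 people needed) appear in `warnings`, mirroring SUMO's conflict notifications for personnel availability.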
The Observatorio Astrofísico de Javalambre: current status, developments, operations, and strategies
A. J. Cenarro, M. Moles, A. Marín-Franch, et al.
The Observatorio Astrofísico de Javalambre (OAJ) is a new Spanish astronomical facility particularly designed for carrying out large sky surveys. The OAJ is mainly motivated by the development of J-PAS, the Javalambre-PAU Astrophysical Survey, an unprecedented astronomical survey that aims to observe 8500 deg2 of the sky with a set of 54 contiguous narrow-band optical filters (FWHM ~14 nm) and 5 mid- and broad-band ones. J-PAS will provide a low-resolution spectrum (R ~ 50) for every pixel of the Northern sky down to AB ~ 22.5-23.5 per square arcsecond (at the 5σ level), depending on the narrow-band filter, and ~2 magnitudes deeper for the redder broad-band filters. The main telescope at the OAJ is the Javalambre Survey Telescope (JST/T250), an innovative Ritchey-Chrétien, alt-azimuthal, large-etendue telescope with a primary mirror diameter of 2.55 m and a 3 deg (diameter) FoV. The JST/T250 is the telescope devoted to conducting J-PAS with JPCam, a panoramic camera with a 4.7 deg2 FoV and a mosaic of 14 large-format CCDs that, overall, amounts to 1.2 Gpix. The second largest telescope at the OAJ is the Javalambre Auxiliary Survey Telescope (JAST/T80), a Ritchey-Chrétien, German-equatorial telescope with an 82 cm primary mirror and a 2 deg FoV, whose main goal is to perform J-PLUS, the Javalambre Photometric Local Universe Survey. J-PLUS will cover the same sky area as J-PAS using the panoramic camera T80Cam with 12 filters in the optical range, specifically defined to perform the photometric calibration of J-PAS. The OAJ project officially started in mid 2010. Four years later, the OAJ is mostly completed and the first OAJ operations have already started. The civil work and engineering installations are finished, including the telescope buildings and the domes. JAST/T80 is at the OAJ undertaking commissioning tasks, and JST/T250 is in its AIV phase at the OAJ.
Related astronomical subsystems like the seeing and atmospheric extinction monitors and the all-sky camera are fully operative. This paper aims to present a brief description and status of the OAJ main installations, telescopes and cameras. The current development and operation plan of the OAJ in terms of staffing organization, resources, observation scheduling, and data archiving, is also described.
Posters: Thursday
Upgrading, monitoring and operation of a dome drive system
Steven E. Bauman, Bill Cruise, Ivan Look, et al.
CFHT’s decision to move away from classical observing prompted the development of a remote observing environment aimed at producing science observations from the headquarters facility in Waimea, HI. This remote observing project, commonly referred to as the Observatory Automation Project (OAP), was completed at the end of January 2011 and has provided the majority of science data ever since. A comprehensive feasibility study was conducted to determine the options available for achieving remote operation of the observatory dome drive system. After evaluation, the best option was to upgrade the original hydraulic system to variable frequency drive (VFD) technology. The project upgraded the hydraulic drive system, which initially used a hydraulic power unit and three identical drive units to rotate the dome. The new electric drive system replaced the hydraulic power unit with electric motor controllers, and each drive unit reuses the original drive while swapping the original hydraulic motors one-for-one with electric motors. The motor controllers provide status and monitoring parameters for each drive unit that convey the functionality and health of the system. This paper will discuss the design upgrades to the dome drive rotation system, as well as their benefits in control, energy savings, and monitoring.
Dome venting: the path to thermal balance and superior image quality

The Canada France Hawaii Telescope operates a 3.6m Optical/Infrared telescope on the summit of Mauna Kea. As an effort to improve delivered image quality in a cost-effective manner, a dome venting project was initiated to eliminate local contributions to 'seeing' that exist along the optical path and arise to a large extent due to temperature gradients throughout the dome volume.

The quality of images delivered by the telescope is adversely affected by variations in air temperature within the telescope dome. These temperature differences are caused by the air’s contact with large structures, which differ from ambient temperature because their large thermal inertia prevents them from following rapid air temperature changes.

The dome venting project is an effort to add a series of large openings, “vents”, in the skin of the dome to allow free-stream summit winds to flush out “stagnant air”: thermally mixed air from inside the dome that has been heated or cooled by surfaces in the dome environment.

The addition of vents to the CFHT dome is intended to facilitate the passive flushing of interior air by the local wind, thereby greatly reducing air temperature variations. This process has been successfully demonstrated to improve image quality at other telescope facilities and is supported by recent water tunnel tests conducted by CFHT staff.

Strategies for personnel sustainable lifecycle at astronomical observatories and local industry development
Eduardo A. Bendek, Michael Leatherbee, Heather Smith, et al.
The specialized manpower required to efficiently operate world-class observatories demands large investments of time and resources to train personnel in very specific areas of engineering. Isolation and the distances to major cities pose a challenge to retaining motivated and qualified personnel on the mountain. This paper presents strategies that we believe may be effective for retaining this specific know-how in the astronomy field, while at the same time developing a local support industry for observatory operations and astronomical instrumentation development. For this study we chose Chile as a research setting because it will host more than 60% of the world’s ground-based astronomical infrastructure by the end of the decade, and because the country has an underdeveloped industry for astronomy services. We identify the astronomical infrastructure that exists in the country as well as the major research groups and industrial players. We further identify the needs of observatories that could be outsourced to the local economy. As a result, we suggest spin-off opportunities that can be started by former observatory employees, thereby retaining the know-how of experienced people who decide to leave on-site jobs. We also identify tools to facilitate this process, such as the creation of a centralized repository of local capabilities and observatory needs, as well as exchange programs within astronomical instrumentation groups. We believe that these strategies will contribute to a positive work environment at the observatories, reduce operation and development costs, and develop a new industry for the host country.
Measure fiber position errors from spectra data for LAMOST
LAMOST is a 4m reflecting Schmidt telescope specially designed for conducting a multifiber spectroscopic survey with 4000 fibers. Fiber position errors greatly impact the SNR of the spectral data. Three groups of sources contribute to fiber position errors: errors orthogonal to the optical axis of the telescope, errors parallel to the optical axis, and the tilt of the fiber from the telescope optical axis. These errors are difficult to measure, especially during observation. In this poster, we propose an indirect method to calculate the total and systematic position errors for each individual fiber from the spectral data by constructing a model of the magnitude loss due to fiber position error for a point source.
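The magnitude-loss model described in the poster can be illustrated with a small numerical sketch: a Gaussian point-source PSF integrated over a circular fiber aperture whose center is displaced by the position error. The fiber radius, seeing FWHM, and PSF shape below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def aperture_fraction(offset, fiber_radius=1.65, fwhm=3.0, n=400):
    """Fraction of a circular Gaussian PSF's flux entering a fiber
    aperture whose center is displaced by `offset` (all in arcsec).
    The defaults are hypothetical, chosen only for illustration."""
    sigma = fwhm / 2.3548  # Gaussian sigma from FWHM
    # Sample the PSF on a grid centered on the star; the fiber is offset.
    half = fiber_radius + offset + 5 * sigma
    x = np.linspace(-half, half, n)
    xx, yy = np.meshgrid(x, x)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    psf /= psf.sum()  # normalize total flux to 1 on the grid
    inside = (xx - offset)**2 + yy**2 <= fiber_radius**2
    return psf[inside].sum()

def magnitude_loss(offset, **kw):
    """Magnitude loss relative to a perfectly centered fiber."""
    return -2.5 * np.log10(aperture_fraction(offset, **kw) /
                           aperture_fraction(0.0, **kw))
```

Inverting such a model, i.e. inferring the position error from the observed flux deficit of standard stars, is the essence of the indirect measurement the poster proposes.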
Acquiring multiple stars with the LINC-NIRVANA Pathfinder
The LINC-NIRVANA Pathfinder (LN-PF), a ground-layer adaptive optics (AO) system recently commissioned at the Large Binocular Telescope (LBT), is one of four sensors that provide AO-corrected images to the full LINC-NIRVANA instrument. With first light having taken place on November 17, 2013, the core goals for the LN-PF have been accomplished. In this report, we look forward to one of the LN-PF extended goals. In particular, we review the acquisition mechanism required to place each of several star probes on its corresponding star in the target asterism. For emerging AO systems in general, co-addition of light from multiple stars stands as one of several methods being pursued to boost sky coverage. With 12 probes patrolling a large field of view (an annulus 6 arcminutes in diameter), the LN-PF will provide a valuable testbed to verify this method.
ESPRESSO data flow: from design to development
P. Di Marcantonio, V. D'Odorico, G. Cupani, et al.
The Echelle SPectrograph for Rocky Exoplanets and Stable Spectral Observations (ESPRESSO) is an extremely stable high-resolution spectrograph currently under construction, to be placed at Paranal Observatory in the ESO VLT Combined Coudé Laboratory (CCL). With its groundbreaking characteristics (resolution up to ∼200,000; wavelength range from 380 to 780 nm; centimeter-per-second precision in wavelength calibration) and its very specific science cases (search for terrestrial exoplanets with the radial velocity method; measurement of the variation of fundamental constants through observations of QSO spectra), ESPRESSO aims to be a real "science machine", an instrument whose data flow subsystems are designed in a fully integrated way to directly extract scientific results from observations. To this purpose, an end-to-end operations scheme is tackled through tailored observation strategy, observation preparation, data reduction, and data analysis tools. The software design successfully passed the ESO final design review in May 2013 and is now in the development phase. In this paper we present the final design for the ESPRESSO data flow system (DFS), with some insights into the new concepts and algorithms that will be introduced for observation strategy/preparation and data reduction/analysis. Finally, the peculiarities and challenges of adapting the ESPRESSO DFS to the pre-existing ESO/VLT DFS framework are outlined.
Operational support and service concepts for observatories
Peter Emde, Pierre Chapus
Operational support and service for observatories aim at the provision, preservation, and improvement of the availability and performance of the entire structural, mechanical, drive, and control systems of telescopes and the related infrastructure. The service levels range from basic service (inspections, preventive maintenance, remote diagnostics, and spare parts supply), through availability service (telephone hotline, online and on-site support, condition monitoring, and spare parts logistics), to extended service (operations plus site and facility management). At the level of improvements and lifecycle management support, they consist of expert assessments and studies, refurbishments, and upgrades, including the related engineering and project management activities.
Status of ALMA offline software in the transition from construction to full operations
Daniel Espada, Masao Saito, Lars-Åke Nyman, et al.
The transition from construction to full operations of the Atacama Large Millimeter/submillimeter Array (ALMA) brings the challenge to have not only software subsystems that are functional and stable but also to develop a system that works flawlessly as a single entity from proposal preparation to the delivery of the final data products to ALMA users. This is especially challenging as the different subsystems have to be constantly updated and improved to accommodate new observing modes and increasing capabilities. We present recent progress and future initiatives in the different offline subsystems that are currently being developed and used in ALMA operations: proposal preparation, submission and observation preparation (Observing Tool and submission server), proposal review process (Ph1M), project tracking (Project Tracker, Life Cycle), observation bookkeeping (Shift Log Tool), calibrator database (Source Catalogue), monitor and control of observations (Operations Monitoring and Control tool), dynamic scheduler, data reduction pipeline, quality assurance and trend analysis (AQUA), archive, as well as additional user support systems such as the Science Portal.
The NOAO Data Laboratory: a conceptual overview

The NOAO Data Lab will allow users to efficiently utilize catalogs of billions of objects, augment traditional telescope imaging and spectral data with external archive holdings, publish high level data products of their research, share custom results with collaborators and experiment with analysis toolkits. The goal of the Data Lab is to provide a common framework and workspace for science collaborations and individuals to use and disseminate data from large surveys.

In this paper we describe the motivations behind the NOAO Data Lab and present a conceptual overview of the activities we plan to support. Specific science cases will be used to develop a prototype framework and tools, allowing us to work directly with scientists from survey teams to ensure development will remain focused on scientifically productive tasks. This will additionally develop a pool of both scientific and technical experts who can provide ongoing advice and support for community users as the scope and capabilities of the Data Lab expand.

Artificial intelligence for the EChO long-term mission planning tool
The Exoplanet Characterisation Observatory (EChO) was an ESA mission candidate competing for a launch opportunity within the M3 call. Its main aim was to carry out research on the physics and chemistry of the atmospheres of transiting planets. This requires the observation of two types of events: primary and secondary eclipses. The events of each exoplanet have to be observed several times in order to obtain measurements with an adequate signal-to-noise ratio. Furthermore, several criteria must be considered when scheduling an observation, among which we can highlight the exoplanet's visibility, its event duration, and the avoidance of overlap with other tasks. It is important to emphasize that, since communications between ground stations and the spacecraft are restricted, it is necessary to compute a long-term plan of observations in order to give the observatory autonomy. A suitable mission plan thus increases the efficiency of telescope operation, resulting in a higher scientific return and reduced operational costs. Obtaining a long-term mission plan is unaffordable for human planners due to the complexity of evaluating the enormous number of possible combinations in search of a near-optimal solution. In this contribution we present a long-term mission planning tool based on genetic algorithms, which are well suited to optimization problems such as task planning. Specifically, the proposed tool finds a solution that strongly optimizes the defined objectives, which are based on maximizing the time spent on scientific observations and the scientific return (e.g., the coverage of the mission survey). The results obtained on a large experimental setup show that the proposed scheduler technology is robust and can function in a variety of scenarios, offering competitive performance that does not depend on the collection of objects to be observed.
Finally, it is noteworthy that the experiments conducted allow us to size some aspects of the mission with the aim of guaranteeing its feasibility.
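As a rough illustration of the genetic-algorithm approach described above, the following sketch evolves a bit-vector selection of candidate eclipse events so as to maximize total observing time while rejecting plans with overlapping events. The event list, fitness function, and GA parameters are invented for illustration and are far simpler than the mission planner's real objectives.

```python
import random

# Candidate eclipse events as (start, duration) pairs in arbitrary time
# units. These values are invented for illustration only.
EVENTS = [(0, 4), (3, 5), (9, 2), (12, 6), (14, 3), (20, 4)]

def fitness(genome):
    """Total observed time of selected events; overlapping picks score 0."""
    chosen = sorted((s, d) for (s, d), g in zip(EVENTS, genome) if g)
    for (s1, d1), (s2, _) in zip(chosen, chosen[1:]):
        if s1 + d1 > s2:            # two selected events overlap: infeasible
            return 0
    return sum(d for _, d in chosen)

def evolve(pop_size=30, generations=60, p_mut=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in EVENTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]              # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(EVENTS))      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

A real planner would add visibility windows, slew/settling constraints, and survey-coverage terms to the fitness function, but the selection/crossover/mutation loop has the same shape.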
Phoenix: automatic science processing of ESO-VLT data
ESO has implemented a process to automatically create science-grade data products and offer them to the scientific community, ready for scientific analysis. This process, called 'phoenix', is built on two main concepts: 1. a certification procedure for pipelines, which includes a code review and, if necessary, an upgrade; and 2. a certification procedure for calibrations, which are processed into master calibrations, scored, and trended. These master calibrations contain all information about the intrinsic instrumental variations and instabilities inevitable for ground-based telescopes. The phoenix process then automatically processes all science data using the certified pipeline and the certified master calibrations. Phoenix currently focuses on spectroscopic data. The first phoenix project was the processing of all science data from UVES, ESO's high-resolution Echelle spectrograph at the VLT. More than 100,000 Echelle spectra of point sources, from the beginning of operations (March 2000) until now, have been reduced and are available to the public from the ESO archive. The phoenix process will also feed future UVES data into the archive. The second project has been X-SHOOTER slit spectroscopy, which currently comprises more than 30,000 Echelle spectra from the UV to the infrared (up to 2.5μm). The phoenix process will be extended to other, mostly spectroscopic, instruments with certified pipelines, such as FLAMES. All future VLT instruments will also be supported by phoenix.
The ALMA CONOPS project: the impact of funding decisions on observatory performance
Jorge Ibsen, John Hibbard, Giorgio Filippi
In times when every penny counts, many organizations face the question of how much scientific impact a budget cut can have or, in more general terms, what the science impact of alternative (less costly) operational modes is. In reply to such questions posed by its governing bodies, the ALMA project had to develop a methodology (ALMA Concepts for Operations, CONOPS) that attempts to measure the impact that alternative operational scenarios may have on the overall scientific production of the Observatory. Although the analysis and the results are ALMA-specific, the approach is rather general and provides a methodology for a cost-performance analysis of alternatives before any radical alterations to the operations model are adopted. This paper describes the key aspects of the methodology: a) the definition of Figures of Merit (FoMs) for the assessment of quantitative science performance impacts as well as qualitative impacts, and a methodology using these FoMs to evaluate the cost and impact of the different operational scenarios; b) the definition of a REFERENCE operational baseline; c) the identification of alternative scenarios, each replacing one or more concepts in the REFERENCE with a different concept that has a lower cost and some level of scientific and/or operational impact; d) the use of a cost-performance plane to graphically combine the effects that the alternative scenarios can have in terms of cost reduction and affected performance. Although this is a first-order assessment, we believe the approach is useful for comparing different operational models and for understanding the cost-performance impact of these choices. It can be used to make decisions to meet budget cuts as well as to evaluate possible new emerging opportunities.
Characterisation of atmospheric Cherenkov transparency with all-sky camera measurements
Felix Jankowsky, Stefan Wagner

The High Energy Stereoscopic System (H.E.S.S.) in Namibia measures gamma-ray emission via the detection of Cherenkov light in the optical waveband and is therefore highly sensitive to changes in the transparency of the atmosphere. Aerosols are a particular concern: these small dust particles cover the sky at the H.E.S.S. site for several days each year, severely reducing the atmospheric transparency for blue Cherenkov light.

To quantify this effect, the Cherenkov Transparency Coefficient has been introduced as a hardware-independent parameter, which enables a correction of measured gamma-ray brightnesses.

Neighbouring the Cherenkov array, the Automated Telescope for Optical Monitoring (ATOM) operates an all-sky cloud camera as a secondary instrument. Due to its high exposure frequency, the cloud camera may act as a detection system if image parameters indicating low Cherenkov transparency can be identified. However, the current instrument – originally conceived as a weather warning system – only produces white-light frames in low resolution. This study examines all frames taken with the current instrument since 2008 that coincide with H.E.S.S. observations in order to characterise their relation to the measured Cherenkov transparency.

As a result of this preliminary study, trivial relations between the examined sky monitor observations and gamma-ray brightness can be excluded. However, it is planned to expand the scope of this activity with an upgraded device by introducing colour dependency and more advanced photometry with a larger number of objects in the near future.

Full automation of the Automatic Telescope for Optical Monitoring
Felix Jankowsky, Stefan Wagner

The Automatic Telescope for Optical Monitoring (ATOM) is a 75 cm Ritchey-Chrétien telescope situated in Göllschau, Namibia, which forms part of the High Energy Stereoscopic System (H.E.S.S.). This paper presents ANDAQ, which enables the step from robotic to fully automatic observation by eliminating the need for daily human interaction. The main module responsible for telescope operation is a newly developed observer program, which also controls the telescope enclosure and performs various other tasks, such as automated flat-fielding with live analysis.

ANDAQ features its own TCP server for outside communication, making it possible to issue commands during the night. It possesses various means of monitoring internal and environmental parameters, and adjusts observations if necessary. This paper includes a description of the all-sky camera serving as a cloud detector, supplemented by an additional rain detection device, and shows how operation is stopped as soon as weather parameters fall below a defined standard and automatically restarted once conditions recover. ANDAQ has a modular design based on a management core that starts and stops components as needed. This considerably eases the introduction of further functionality; current development efforts include closer links to the main H.E.S.S. operation as well as live analysis of exposures, allowing repeated observation in case of increased activity of a source.

ANDAQ has undergone extensive testing and has not seen any major problems so far. It may thus serve well as a basis for a future automated monitoring programme for the Cherenkov Telescope Array.

The NIRSpec MSA Planning Tool for multi-object spectroscopy with JWST
Diane Karakla, Alexander Shyrokov, Klaus Pontoppidan, et al.
The James Webb Space Telescope Near-Infrared Spectrograph (NIRSpec) instrument will offer a powerful multi-object spectroscopic capability enabled by the micro-shutter arrays (MSAs). The MSAs are fixed grids of configurable shutters that can be opened and closed on astronomical scenes. With this mode, the NIRSpec instrument can observe more than 100 targets simultaneously. The NIRSpec team and software developers at the Space Telescope Science Institute (STScI) have been implementing specialized algorithms in an MSA Planning Tool (MPT) to facilitate the complex observation planning process. Two main algorithms, the “Fixed Dithers” and “Flexible Dithers” algorithms, have been defined to achieve optimal multiplexing results with different observing strategies. The MPT is available to the astronomical community as part of the Astronomer's Proposal Tool (APT), an integrated software package for the preparation of observing proposals developed by STScI.
Web-based data providing system for Hyper Suprime-Cam
Michitaro Koike, Hisanori Furusawa, Tadafumi Takata, et al.
We describe a data providing system for Hyper Suprime-Cam (HSC) of Subaru Telescope. The data providing system provides HSC data including images and catalogs of celestial objects derived from them to individual co-investigators of the Subaru Strategic Survey Program with HSC through a website. Users can select the data that they need by using its graphical user interface or writing a query in SQL and download the selected images or the catalogs.
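The paper does not show the HSC database schema, so the following is only a hypothetical illustration, using an in-memory SQLite table with invented `ra`/`dec`/`mag_i` columns, of the kind of SQL selection a user might submit through such a web interface.

```python
import sqlite3

# Illustrative only: the real HSC database schema, table, and column
# names are assumptions, not taken from the paper.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE object (id INTEGER, ra REAL, dec REAL, mag_i REAL)")
con.executemany("INSERT INTO object VALUES (?, ?, ?, ?)",
                [(1, 150.1, 2.2, 22.5),
                 (2, 150.3, 2.4, 25.9),
                 (3, 10.0, -5.0, 21.0)])

# The kind of query a user might write: bright objects inside a small
# rectangular sky region.
rows = con.execute("""
    SELECT id, ra, dec FROM object
    WHERE ra BETWEEN 150.0 AND 150.5
      AND dec BETWEEN 2.0 AND 2.5
      AND mag_i < 24.0
""").fetchall()
```

Exposing raw SQL alongside a graphical query builder, as the system does, lets expert users express selections the GUI does not anticipate.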
Turning a remotely controllable observatory into a fully autonomous system
Scott Swindell, Chris Johnson, Paul Gabor, et al.
We describe the complex process needed to turn an existing, old, operational observatory - the Steward Observatory’s 61” Kuiper Telescope - into a fully autonomous system, one that observes without an observer. For this purpose, we employed RTS2, an open-source, Linux-based observatory control system, together with other open-source programs and tools (GNU compilers, the Python language for scripting, jQuery UI for the web user interface). This presentation provides a guide, with time estimates, for newcomers to the field who face such challenging tasks as fully autonomous observatory operations.
Study on fault diagnose expert system for large astronomy telescope
Jia-jing Liu, Ming-Cheng Luo, Peng-yi Tang, et al.
The development of astronomical techniques and telescopes has entered a new, vigorous period. Telescopes, whether optical, radio, or space-based, are trending toward giant, complex systems with diverse equipment and a wide span of control. This means that an observatory's control system must be flexible, scalable, distributed, cross-platform, and real-time; fault locating and fault processing are especially important when faults or exceptions arise. Based on an analysis of the structure of large telescopes, a fault diagnosis expert system for large telescopes, built on fault trees and a distributed log service, is presented.
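A fault-tree diagnosis of the kind proposed can be sketched as a recursive evaluation of AND/OR gates over observed basic events. The tiny tree below (a hypothetical dome-drive fault) is purely illustrative and not taken from the paper.

```python
# Minimal fault-tree evaluator: AND/OR gates over named basic events.
# In a real system the event states would come from the distributed
# log service rather than a hand-built dictionary.

def evaluate(node, events):
    """Return True if the fault represented by `node` is present.
    `node` is ("event", name) or ("and"/"or", [child nodes])."""
    kind = node[0]
    if kind == "event":                      # leaf: look up observed state
        return events.get(node[1], False)
    children = [evaluate(c, events) for c in node[1]]
    return all(children) if kind == "and" else any(children)

# Hypothetical top event: the drive faults if the power supply fails,
# or if both (redundant) encoders fail.
DRIVE_FAULT = ("or", [
    ("event", "power_supply_failure"),
    ("and", [("event", "encoder_a_failure"),
             ("event", "encoder_b_failure")]),
])
```

Walking the tree top-down also yields the minimal cut sets, i.e. the smallest combinations of basic events that explain an observed fault, which is what the expert system reports to operators.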
AO operations at Gemini South
Eduardo Marin, Andrew Cardwell, Peter Pessev
The 8m Gemini South telescope is entering an exciting new era of AO operations, which puts it at the forefront of astronomical AO in terms of both wide-field AO and extreme-AO systems. Major milestones achieved were the successful commissionings of GeMS in 2012 and of GPI in late 2013 and early 2014. Currently we are operating two of the world's most advanced astronomical AO systems. Gemini, running primarily in queue mode, must balance the promise of AO against the demands of the community to use non-AO instruments. We discuss the current state of the two AO systems and their operational models. The preparations that go into planning each AO run, the difficulties in scheduling around non-AO instruments, and the differences between scheduling LGS AO and non-LGS AO are discussed.
Scheduling and calibration strategy for continuous radio monitoring of 1700 sources every three days
Walter Max-Moerbeck
The Owens Valley Radio Observatory 40 meter telescope is currently monitoring a sample of about 1700 blazars every three days at 15 GHz, with the main scientific goal of determining the relation between the variability of blazars at radio and gamma-ray energies as observed with the Fermi Gamma-ray Space Telescope. The time domain relation between radio and gamma-ray emission, in particular its correlation and time lag, can help us determine the location of the high-energy emission site in blazars, a current open question in blazar research. To achieve this goal, continuous observation of a large sample of blazars on a time scale of less than a week is indispensable. Since we only look at bright targets, the time available for target observations is mostly limited by source observability, calibration requirements, and slewing of the telescope. Here I describe the implementation of a practical solution to this scheduling, calibration, and slewing time minimization problem. This solution combines ideas from optimization, in particular the traveling salesman problem, with astronomical and instrumental constraints. A heuristic solution using well-established optimization techniques and astronomical insights particular to this situation allows us to observe all the sources at the required three-day cadence while obtaining reliable calibration of the radio flux densities. Problems of this nature will only become more common in the future, and the ideas presented here can be relevant for other observing programs.
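As a minimal illustration of the traveling-salesman flavor of the problem, the sketch below orders sources with a greedy nearest-neighbor heuristic to reduce total slew. The flat (az, el) distance metric and the hypothetical pointings are simplifications; the actual scheduler also handles source observability, calibration cadence, and proper spherical geometry.

```python
import math

def slew(a, b):
    """Rough slew-cost proxy between two (az, el) pointings in degrees.
    A real scheduler would use spherical geometry, axis wrap limits,
    and the distinct drive speeds of the two axes."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor_order(sources, start=(0.0, 45.0)):
    """Greedy tour: always slew to the closest unobserved source next."""
    remaining = list(sources)
    order, here = [], start
    while remaining:
        nxt = min(remaining, key=lambda s: slew(here, s))
        remaining.remove(nxt)
        order.append(nxt)
        here = nxt
    return order
```

Greedy construction like this is typically followed by local improvement (e.g., 2-opt swaps) and by inserting calibrator visits at the required cadence, which is where the problem departs from the textbook TSP.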
Two years of ALMA bibliography: lessons learned
Silvia Meakins, Uta Grothkopf, Marsha J. Bishop, et al.
Telescope bibliographies are integral parts of observing facilities. They are used to associate the published literature with archived observational data, to measure an observatory's scientific output through publication and citation statistics, and to define guidelines for future observing strategies. The ESO and NRAO librarians as well as NAOJ jointly maintain the ALMA (Atacama Large Millimeter/submillimeter Array) bibliography, a database of refereed papers that use ALMA data. In this paper, we illustrate how relevant articles are identified, which procedures are used to tag entries in the database and link them to the correct observations, and how results are communicated to ALMA stakeholders and the wider community. Efforts made to streamline the process will be explained and evaluated, and a first analysis of ALMA papers published after two years of observations will be given.
LaNotte: the TNG metric system after two years of data
Emilio Molinari, Nauzet Hernandez
The night accounting system laNotte, presented two years ago at SPIE Amsterdam, has now been working without interruption, and a wealth of data is becoming available. We can confirm that the human interaction of the night operator is a key factor, so a manual editor has been given to the operator in order to fix the unavoidable errors and misevaluations that occur during the night. Connections to several logging and telemetry databases complete the necessary inputs. We were thus able to monitor the effect of the introduction of a major new instrument (Harps-N@TNG), and to measure with accuracy the weather variability at the Roque de los Muchachos Observatory and the frequency of technical problems of the various devices. LaNotte is also a straightforward way to account for operations to funding bodies, providing them with a standard set of data and the ability to quickly examine historic trends.
Safety management of an underground-based gravitational wave telescope: KAGRA
Naoko Ohishi, Shinji Miyoki, Takashi Uchiyama, et al.
KAGRA is a unique gravitational wave telescope, located underground and using cryogenic mirrors. Safety management plays an important role in the secure development and operation of such a unique and large facility. Based on the relevant laws in Japan, the Labor Standards Act and the Industrial Safety and Health Law, various countermeasures are mandated to avoid foreseeable accidents and occupational diseases. In addition to the usual safety management of hazardous equipment and materials, such as cranes, organic solvents, and lasers, there are specific safety issues in the tunnel. Prevention of collapse, flood, and fire accidents is among the most critical issues for the underground facility. Ventilation is also important to prevent air pollution by carbon monoxide, carbon dioxide, organic solvents, and radon. Oxygen deficiency must also be prevented.
Automating engineering verification in ALMA subsystems
José Ortiz, Jorge Castillo
The Atacama Large Millimeter/submillimeter Array is an interferometer comprising 66 individual high-precision antennas located at over 5000 meters altitude in the north of Chile. Several complex electronic subsystems need to be meticulously tested at different stages of an antenna commissioning, both independently and when integrated together. First subsystem integration takes place at the Operations Support Facilities (OSF), at an altitude of 3000 meters. Second integration occurs at the high-altitude Array Operations Site (AOS), where the combined performance with the Central Local Oscillator (CLO) and Correlator is also assessed. In addition, there are several other events requiring complete or partial verification of compliance with instrument specifications, such as parts replacements, calibration, relocation within the AOS, preventive maintenance, and troubleshooting due to poor performance in scientific observations. Restricted engineering time allocation and the constant pressure to minimize downtime in a 24/7 astronomical observatory impose the need to complete (and report) the aforementioned verifications in the least possible time. Array-wide disturbances, such as global power interruptions and the subsequent recovery, add the challenge of executing this checkout on multiple antenna elements at once. This paper presents the outcome of automating engineering verification setup, execution, notification, and reporting in ALMA, and how these efforts have resulted in a dramatic reduction of both the time and the operator training required. Signal Path Connectivity (SPC) checkout is introduced as a notable case of such automation.
Early laser operations at the Large Binocular Telescope Observatory
ARGOS is the GLAO (Ground-Layer Adaptive Optics) Rayleigh-based LGS (Laser Guide Star) facility for the Large Binocular Telescope Observatory (LBTO). It is dedicated to observations with LUCI1 and LUCI2, LBTO's pair of NIR imagers and multi-object spectrographs. The system projects three laser beams from the back of each of the two secondary mirror units, creating two constellations circumscribed on circles of 2 arcmin radius with 120-degree spacing. Each of the six Nd:YAG lasers provides a beam of green (532 nm) pulses at a rate of 10 kHz with a power of 14 W to 18 W. We achieved first on-sky propagation on the night of November 5, 2013, and commissioning of the full system will take place during 2014. We present the initial results of laser operations at the observatory, including safety procedures and the required coordination with external agencies (FAA, Space Command, and the Military Airspace Manager). We also describe our operational procedures and report on our experiences with aircraft spotters. Future plans for safer and more efficient aircraft monitoring and detection are discussed.
Gemini planet imager integration to the Gemini South telescope software environment
Fredrik T. Rantakyrö, Andrew Cardwell, Jeffrey Chilcote, et al.
The Gemini Planet Imager is an extreme AO instrument with an integral field spectrograph (IFS) operating in Y, J, H, and K bands. Both the Gemini telescope and the GPI instrument are very complex systems. Our goal is that the combined telescope and instrument system may be run by one observer operating the instrument, and one operator controlling the telescope and the acquisition of light to the instrument. This requires a smooth integration between the two systems and easily operated control interfaces. We discuss the definition of the software and hardware interfaces, their implementation and testing, and the integration of the instrument with the telescope environment.
Exploring remote operation for ALMA Observatory
Tzu-Chiang Shen, Ruben Soto, Nicolás Ovando, et al.
The Atacama Large Millimeter/submillimeter Array (ALMA) will be a unique research instrument composed of at least 66 reconfigurable high-precision antennas, located on the Chajnantor plain in the Chilean Andes at an elevation of 5000 m. The observatory also has an office in Santiago, Chile, 1600 km from the Chajnantor plain. In the Atacama desert, the wonderful observing conditions come with precarious living conditions and extremely high operating costs: flights, lodging, infrastructure, water, electricity, etc. It is clear that a purely remote operational model is impossible, but we believe that a mixture of remote and local operation would be beneficial to the observatory, not only in reducing cost but also in increasing the observatory's overall efficiency. This paper describes the challenges and experience gained in an experimental proof of concept. The experiment was performed over the existing 100 Mbps link, which connects both sites through third-party telecommunication infrastructure. During the experiment, all of the existing capabilities of the observing software were validated successfully, although room for improvement was clearly identified. Network virtualization, MPLS configuration, L2TPv3 tunneling, NFS tuning, and operational workstation design were part of the experiment.
Gaia downlink data processing
H. Siddiqui, S. G. Els, R. Guerra, et al.
The Gaia survey mission, operated by the European Space Agency (ESA) and launched on 19 December 2013, will survey approximately 10⁹ stars, or 1% of the galactic stellar population, over a 5.5-year period. The main purpose of the mission is micro-arcsecond astrometry, which will yield important insights into the kinematics and evolution of the Galaxy, as well as important additional results, including an updated coordinate reference system to succeed that provided by the ICRS. Gaia performs its observations using two telescopes with fields of view separated by 106.5 degrees, spinning around an orthogonal axis with a period of about 6 hours. The spin axis itself precesses: it is always oriented at 45 degrees from the Sun, around which it precesses every 63 days. Thus each part of the sky is observed approximately every 63 days. The 6-hour spin, or scan rate, matches the CCD readout rate. The amount of data to process per day, 50-130 gigabytes, corresponds to over 30 million stellar sources. To perform this processing, the Gaia Data Processing and Analysis Consortium (DPAC) has developed approximately 2 million lines of software, divided into subsystems specific to given functional needs and run across 6 different Data Processing Centres (DPCs). The final result will be a catalogue of the 10⁹ observed stars. Most of the daily processing is performed at the DPC at ESAC, Spain (DPCE), which runs 3 main subsystems: the MOC Interface Task (MIT), the Initial Data Treatment (IDT), and First Look (FL). The MIT ingests the raw binary data provided by the MOC and writes, amongst other things, `star packets' containing the raw stellar information needed by IDT, which performs a basic level of processing, including stellar positions, photometry, radial velocities, cross-matching and catalogue updates. FL determines the payload health (e.g. the health of the 106 CCDs, the geometric calibration) and the astrometric performance via the one-day astrometric solution. This presentation provides an overview of the DPAC software as a whole, and focuses on the daily pipeline processing: the systems used, the teams involved, the challenges during development and operations, and lessons learned.
Future-oriented maintenance strategy based on automated processes is finding its way into large astronomical facilities at remote observing sites
Armin Silber, Christian Gonzalez, Francisco Pino, et al.
With the expanding size and increasing complexity of large astronomical observatories at remote observing sites, the call for an efficient, resource-saving maintenance concept grows louder. The increasing number of subsystems on telescopes and instruments forces large observatories, as in industry, to rethink conventional maintenance strategies in order to reach this demanding goal. Implementing fully or semi-automatic processes for standard service activities can help keep the number of operating staff at an efficient level and significantly reduce the consumption of valuable consumables and equipment. In this contribution we use the example of the 80 cryogenic subsystems of the ALMA Front End instrument to demonstrate how an automated service process increases the availability of spare parts and Line Replaceable Units, and how valuable staff resources can be freed from continuous, repetitive maintenance activities to focus more on system diagnostics, troubleshooting and the exchange of Line Replaceable Units. The required service activities are decoupled from the day-to-day work, eliminating dependencies on workload peaks or logistical constraints. The automatic refurbishing processes run in parallel with the operational tasks, with constant quality and without compromising the performance of the serviced system components. This results in increased efficiency and less downtime, and keeps the observing schedule on track. Automatic service processes, in combination with proactive maintenance concepts, provide the necessary flexibility for the complex operational work structures of large observatories. The planning flexibility gained allows operational procedures and sequences to be optimized with the required cost efficiency in mind.
Implementing extended observing at the JCMT
The James Clerk Maxwell Telescope (JCMT) is the largest single-dish submillimetre telescope in the world. Recently the Joint Astronomy Centre (JAC) learned that the JCMT will no longer receive financial support from its original funding agencies after September 2014. There is significant pressure to complete several surveys that have been in progress at the JCMT for many years. To complete a higher percentage of these surveys, it was decided to take advantage of the hours between when the telescope operator leaves the telescope and when the day crew arrives. These hours generally have reasonable seeing and a low integrated water vapour column, so they are good for observing. This observing is performed remotely, from Hilo, without staff at the telescope, by staff members who do not have telescope operation in their job descriptions. This paper describes the hardware changes necessary to implement remote observing at the JCMT, as well as the software needed for remote, fail-safe operation of the telescope. The protocols and rules for passing control of the telescope between the various groups are discussed. Since these Extended Operators are not expert telescope operators, the system was simplified as much as possible, but some training was necessary and proper checklists are essential. Following the success of the first phase of Extended Observing at the JCMT, the hours when the weather is good but no one is at the telescope and no day crew is on the way are now also being utilized. Extended Observing has already yielded a considerable amount of science observing time.
Problems with twilight/supersky flat-field for wide-field robotic telescopes and the solution
Peng Wei, Zhaohui Shang, Bin Ma, et al.
Twilight/night-sky images are often used to flat-field CCD images, but the brightness gradient across the twilight/night sky hampers accurate flat-field correction of astronomical images from wide-field telescopes. Using data from the Antarctic Survey Telescope (AST3), we found that even when the sky brightness gradient is minimal and stable, a gradient of 1% remains across AST3's 4.3-square-degree field of view. We tested various approaches to remove the varying gradients in individual flat-field images. Our final, optimal method reduces the spatially dependent errors caused by the gradient to a negligible level. We also suggest guidelines for flat-fielding with twilight/night-sky images for wide-field robotic autonomous telescopes.
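The abstract does not give the authors' actual correction method, so purely as an illustration of the underlying idea, removing a roughly linear large-scale gradient from a single twilight flat by fitting and dividing out a 2-D plane, a minimal sketch in Python/NumPy (function name our own, not from the paper) might look like:

```python
import numpy as np

def remove_plane_gradient(flat):
    """Fit and divide out a 2-D linear gradient from a flat-field frame.

    Illustrative only: the paper's optimal method is more involved.
    This shows the basic idea of normalising away a ~1% linear sky
    gradient before combining flats.
    """
    ny, nx = flat.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Least-squares fit of a plane a*x + b*y + c to the pixel values.
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(flat.size)])
    coeffs, *_ = np.linalg.lstsq(A, flat.ravel(), rcond=None)
    plane = (A @ coeffs).reshape(ny, nx)
    corrected = flat / plane                 # large-scale gradient removed
    return corrected / np.median(corrected)  # renormalise to unit median
```

Gradient-free flats produced this way could then be median-combined into a master flat in the usual manner.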
Development of database system for data obtained by Hyper Suprime-Cam on Subaru Telescope
Yoshihiko Yamada, Tadafumi Takata, Hisanori Furusawa, et al.
Hyper Suprime-Cam (HSC) is the optical and near-infrared wide-field camera on the Subaru Telescope. Its huge field of view (1.5 degrees in diameter, covered by 104 CCDs), combined with the telescope's large 8.2 m mirror, will enable us to study the Universe far more efficiently. The analysis pipeline for HSC data produces processed images and object catalogues for each CCD, as well as stacked images. Over the 5-year survey, the number of rows in the object catalogue table will reach at least 5 × 10⁹. We outline the database system that will store these huge volumes of HSC data.
Multi-object spectroscopy data reduction: the AF2+WYFFOS pipeline
The scientific productivity of complex instrumentation depends strongly on the availability of data-reduction pipelines. In the case of AF2+WYFFOS, the multi-object, one-degree field-of-view, fibre-fed spectrograph at the 4.2 m William Herschel Telescope (WHT), the full scientific exploitation of the data has often been slowed by the lack of such a pipeline. A dedicated pipeline has been developed to overcome this. Written in IDL, it performs a full reduction of AF2+WYFFOS data: fibre-to-fibre sensitivity corrections, fibre tracing, wavelength calibration, optimal extraction and sky subtraction.
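The pipeline itself is written in IDL and its interfaces are not described in the abstract; purely as a hedged illustration of two of the listed stages, fibre-to-fibre sensitivity correction and sky subtraction, a sketch in Python (with hypothetical function names and array conventions, not the pipeline's real API) might look like:

```python
import numpy as np

def throughput_correct(spectra, flat_spectra):
    """Fibre-to-fibre sensitivity correction (illustrative sketch).

    Each fibre's relative throughput is estimated from a fibre-flat
    exposure and divided out.  `spectra` and `flat_spectra` are
    (n_fibres, n_pixels) arrays.
    """
    throughput = flat_spectra.mean(axis=1)
    throughput /= np.median(throughput)   # normalise to unit median
    return spectra / throughput[:, None]

def subtract_sky(spectra, sky_fibre_idx):
    """Subtract the median spectrum of dedicated sky fibres from all fibres."""
    sky = np.median(spectra[sky_fibre_idx], axis=0)
    return spectra - sky[None, :]
```

In a multi-fibre instrument like AF2+WYFFOS, some fibres are typically placed on blank sky; the median of those fibres (after throughput correction) serves as the sky estimate for the object fibres.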