Proceedings Volume 7019

Advanced Software and Control for Astronomy II


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 11 August 2008
Contents: 18 Sessions, 112 Papers, 0 Presentations
Conference: SPIE Astronomical Telescopes + Instrumentation 2008
Volume Number: 7019

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 7019
  • Project Reports: Radio
  • Telescope Control I
  • Telescope Control II
  • Telescope Control III
  • Observatory Control I
  • Observatory Control II
  • Software Frameworks
  • Project Reports: Optical/IR
  • Instrument Control I
  • Software Engineering and Management
  • Data Handling and Processing I
  • Data Handling and Processing II
  • Instrument Control Poster Session
  • Telescope Control Poster Session
  • Observatory Control Poster Session
  • Software Engineering Poster Session
  • Data Handling Poster Session
Front Matter: Volume 7019
Front Matter: Volume 7019
This PDF file contains the front matter associated with SPIE Proceedings Volume 7019, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Project Reports: Radio
The ALMA computing project: initial commissioning
B. E. Glendenning, G. Raffi
The Atacama Large Millimeter/Submillimeter Array (ALMA) is a large radio interferometric telescope consisting of 66 antennas with variable positions, to be located at the Chajnantor high site (5000 m) in Chile. ALMA commissioning has now started with the arrival of several antennas in Chile and will continue for the next 4 years. The ALMA software has from the beginning been developed as an end-to-end system including: proposal preparation, dynamic scheduling, instrument control, data handling and formatting, data archiving and retrieval, automatic and manual data processing systems, and support for observatory operations. This presentation will expand mostly on the ALMA software issues on which we are concentrating in this phase: management, procedures, testing and validation. While software development was based on a common software infrastructure (ALMA Common Software - ACS) from the beginning, end-to-end testing was limited by the hardware available and was, until recently, possible only with computer models. Although the control software was available early in prototype stand-alone form to support testing of prototype antennas, it was only recently that dynamic interferometry was reached and the software could be tested end to end on a reasonably stable hardware platform. The lessons learned so far will be explained, in particular the need for a realistic validation environment, the balance to be achieved between incremental development and the need for stability and usability, and the way to achieve all the above with a development team distributed over four continents. Some general lessons can be drawn on the potential conflicts between software and system (hardware) testing, or in other words on the danger of taking short-cuts in software testing and validation.
Software for the EVLA: current status
Bryan J. Butler, David Harland, Brian Truitt, et al.
The Expanded Very Large Array (EVLA) project is the next generation instrument for high resolution long-millimeter to short-meter wavelength radio astronomy. It is currently funded by NSF, with completion scheduled for 2012. The EVLA will upgrade the VLA with new feeds, receivers, data transmission hardware, correlator, and a new software system to enable the instrument to achieve its full potential. This software includes both that required for controlling and monitoring the instrument and that involved with the scientific dataflow. This manuscript presents an update on the overall design, and details for the pre-observing portions of the software, including: user authentication; proposal preparation, submission, and handling; and observation preparation. It will focus particularly on the observation preparation software, describing an implementation of a web-based interface for creation of a detailed observation description, and plans to achieve common observation preparation software with the ALMA telescope.
Software in the CARMA heterogeneous millimeter-wave array
The Combined Array for Research in Millimeter-wave Astronomy (CARMA) is a 15-element heterogeneous millimeter-wave array developed and operated by a university consortium that will be expanded to 23 elements in 2008. Commissioning began in August 2005 after completion of the relocation of antennas from the Owens Valley Radio Observatory (OVRO) and the Berkeley-Illinois-Maryland Association (BIMA) arrays to a new high site, and initial scientific operations began in April 2006. The array operates in the 3-mm and 1-mm bands and has a maximum resolution of 0.15 arc seconds. Most of the software and computing infrastructure for the array is new, allowing modern technology to be introduced and to provide a common interface for the disparate antenna types. The new system is proving to be both easy to use for routine observations and yet capable enough for the development of new observing techniques by the experienced astronomer. Some of the details of the computing and software are described here, with emphasis on the control system.
Telescope Control I
A dual-consumer design for the Atacama Large Millimeter Array control subsystem
Ralph Marson, Jeffrey Kern, Allen Farris, et al.
The control subsystem for the Atacama Large Millimeter Array (ALMA) must fulfill a number of roles. Principal amongst these is the ability to conduct observations and the ability to monitor and maintain the health of the hardware. These two roles impose different requirements on the control subsystem. The ALMA control subsystem uses a design which explicitly recognizes these different roles and provides capabilities that are targeted at the astronomers, engineers and other users of the ALMA control subsystem. In this paper we will describe this aspect of the design of the ALMA control subsystem with emphasis on how the various components of the software interact to meet the requirements of these different users and produce a coherent control subsystem that can transition from a high level, astronomical perspective of the array to a detailed low-level perspective with a focus on a particular piece of hardware.
Design and field performance of the KVN main axis control system
David R. Smith, Kamal Souccar, Jeff Kendall
In early December 2007, the first of the three Korean VLBI Network (KVN) 21m diameter telescopes was brought under servo control of the main axes. In addition to the usual slewing and tracking modes common to most radio telescopes, the KVN antennas will be used extensively for VLBI and thus must also be able to move and settle rapidly during fast switching motions to improve phase stability. These requirements place substantial demands on the controller. To reach the required precision, a digital Drive Control Unit (DCU) is coupled with a digital Antenna Control Unit (ACU). The control law incorporates a state space controller with a full state estimator, combined with a trajectory generator and both velocity and acceleration feedforward to improve tracking performance during fast switching motions. We discuss the design of the DCU, ACU, and the control algorithm, and we present the initial results for the encoder-based pointing accuracy during calm and windy conditions while tracking, as well as the path tracking and settling performance during fast switching.
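A minimal sketch of the control-law structure described above, combining state feedback on the estimated axis state with velocity and acceleration feedforward taken from the trajectory generator; the gains, state layout and units are illustrative assumptions, not the KVN values.

```python
import numpy as np

K = np.array([8.0, 2.5])    # state-feedback gains on [position error, velocity error]
KV, KA = 1.0, 0.12          # velocity and acceleration feedforward gains (assumed)

def control_output(x_est, traj_pos, traj_vel, traj_acc):
    """x_est is the estimated [position, velocity] from the state estimator."""
    err = np.array([traj_pos - x_est[0], traj_vel - x_est[1]])
    feedback = K @ err                            # state feedback on the tracking error
    feedforward = KV * traj_vel + KA * traj_acc   # feedforward from the trajectory generator
    return feedback + feedforward
```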
Magdalena Ridge Observatory Interferometer: the control system of the unit telescopes
Chris Mayer, Marion Fisher, Alan Greer, et al.
This paper describes the telescope control system for the Magdalena Ridge Observatory Interferometer. To achieve the rapid development time required by the project we made use of two software packages, LabVIEW from National Instruments and TCSpk from Tpoint Software. The telescope control system is built from a set of components that conform to a standard interface and implement a set of component specific commands. Data is distributed throughout the system in a uniform manner by an event system that uses the publish-subscribe paradigm.
Concise telescope pointing algorithm using IAU 2000 precepts
The accuracy requirements for pointing a ground-based telescope or antenna are comparatively modest; the latest Earth orientation models used by specialists have precision goals measured in microarcseconds and are excessive for such humble applications. Abridged formulations offer an attractive alternative: easier to get right, and much quicker to compute. Moreover, the revised computational procedures that the IAU introduced in 2000 to assist high-precision studies of Earth rotation lend themselves to approximation. Together with basic models for aberration and refraction, a page of inline C code is enough to predict the observed altazimuth coordinates of a star to an accuracy of 1-2 arcseconds, which is adequate for pointing a small telescope. This can be complemented by a similarly concise formulation of the basic pointing corrections for an equatorial or altazimuth mount.
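As an illustration of the kind of concise mount-correction terms the abstract refers to, the sketch below applies a few classical altazimuth pointing terms; the term names (IA, IE, CA, NPAE, TF) follow the widely used TPOINT-style convention, and the sign conventions are assumptions rather than the paper's own formulation.

```python
import math

def apply_altaz_pointing_model(az, el, ia, ie, ca, npae, tf):
    """Apply a basic altazimuth pointing model (all angles in radians).

    Illustrative only: IA/IE are encoder index errors, CA is collimation
    error, NPAE is azimuth/elevation axis non-perpendicularity, TF is
    tube flexure.  Sign conventions vary between implementations.
    """
    daz = -ia                                   # azimuth index error
    del_ = ie                                   # elevation index error
    daz += -ca / math.cos(el) - npae * math.tan(el)   # collimation and axis skew
    del_ += -tf * math.cos(el)                  # flexure: largest near the horizon
    return az + daz, el + del_

# Example: 30 arcsec elevation index error and 20 arcsec of tube flexure
ARCSEC = math.pi / (180.0 * 3600.0)
obs_az, obs_el = apply_altaz_pointing_model(
    math.radians(120.0), math.radians(40.0),
    ia=0.0, ie=30 * ARCSEC, ca=0.0, npae=0.0, tf=20 * ARCSEC)
```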
A polynomial-based trajectory generator for improved telescope control
For any telescope, a fundamental performance requirement is the acquisition and tracking of the source. While this depends on many factors, the system accuracy is fundamentally limited by the servo tracking performance on the encoders. This tracking performance must be balanced with the need for large slewing motions to new sources. While the classical rate loop and position loop model permits basic operation, there has been increasing use through the years of gain scheduling or command pre-processors to improve telescope path planning and enable better performance. This is particularly important for telescopes that employ scanning or fast switching motions. As telescope control systems have moved to fully digital systems running at high update rates, more sophisticated approaches have become possible for telescope path planning. Taking advantage of the speed of available computation, we have developed a new real time trajectory generator that provides improved performance over previous implementations. Given a position command, the system generates a path to the desired end point. The resulting path is guaranteed to be continuous in position, velocity, and acceleration, as well as to respect specified limits in velocity, acceleration, and jerk. Significantly, the calculation provides not only the desired position over the interval, but also the velocity and acceleration, permitting their use in feedforward control to improve the tracking accuracy at all points on the path. The algorithm is presented, as well as some results with the system implemented on a real telescope.
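The continuity and feedforward properties described above can be illustrated with a single constant-jerk segment; the sketch below is a simplified assumption of how such a generator evaluates one segment, not the paper's actual algorithm, which chains segments while enforcing velocity, acceleration and jerk limits.

```python
def jerk_segment(t, p0, v0, a0, jerk):
    """Evaluate one constant-jerk trajectory segment at time t.

    Returns (position, velocity, acceleration).  When the end state of
    one segment is used as the start state of the next, the resulting
    path is continuous in all three quantities, and the velocity and
    acceleration outputs can be fed forward to the servo.
    """
    a = a0 + jerk * t
    v = v0 + a0 * t + 0.5 * jerk * t**2
    p = p0 + v0 * t + 0.5 * a0 * t**2 + jerk * t**3 / 6.0
    return p, v, a

# Example: state 0.5 s into a segment starting at rest with 0.1 deg/s^3 jerk
print(jerk_segment(0.5, p0=0.0, v0=0.0, a0=0.0, jerk=0.1))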
Telescope Control II
The software for MAGIQ: a new acquisition, guiding, and image quality monitoring system at the W. M. Keck Observatory
Shui Hung Kwok, Jimmy Johnson, Sean M. Adkins, et al.
The W. M. Keck Observatory has completed the development and initial deployment of MAGIQ, the Multi-function Acquisition, Guiding and Image Quality monitoring system. MAGIQ is an integrated system for acquisition, guiding and image quality measurement for the Keck telescopes. This system replaces the acquisition and guiding hardware and software for existing instruments at the Observatory and is now the standard for visible wavelength band acquisition cameras for future instrumentation. Innovative features are provided in the MAGIQ software for use by observers and telescope operators including advanced capabilities for acquisition and image quality monitoring. In this paper we report on the design and implementation of the MAGIQ software components, including the process for developing requirements, the implementation choices and strategies, the software features and user interfaces, and the challenges of test and deployment in a working observatory.
Modern computer control for Lick Observatory telescopes
John Gates, William T. S. Deich, Anthony Misch, et al.
Poco, short for Pointing Control, is a modern telescope control system for use with the telescopes at Lick Observatory. It is currently in use with the Shane 3-meter and Nickel 1-meter telescopes. It may also be used with other telescopes in the future. The software is designed to be very reliable, accurate, flexible, and full featured while still being very easy to use. It needs to communicate with other systems such as auto-guiders, instruments, remote observing watchdogs, and possible robotic control. The telescopes use motor systems installed in the 1970s. Upgrading to modern servo motors was not practical, so the telescopes use their stepper motors for fine motor control while switching to much larger and less accurate motors for large moves. A variety of techniques is required to quickly and smoothly reach target locations and maintain tracking. The software achieves these goals, overcoming the significant hardware limitations of these older telescopes using mostly off-the-shelf hardware. This paper will describe the more interesting aspects of the system such as locating objects from catalog coordinates, motor control algorithms, user interfaces, communications between systems, and software architecture.
Instrument control tool kit for the Subaru Telescope laser guide star adaptive optics system
The Subaru Adaptive Optics instrument development requires reliable software that can be quickly modified to facilitate testing and changing demands. A software tool kit was created to allow rapid inclusion of diverse hardware, to isolate specific hardware, to allow expert users to write programs that interface hardware, and to allow multiple access points for control and status. The flexibility of this system allows the software to not only control the Subaru Adaptive Optics Instrument, but also a wide variety of instruments. Also, once a low level interface is written for the hardware, the hardware controls can be combined into any configuration without any programming. This analysis explains the overall software architecture of the system, the methods used to promote hardware testing, the process for adding hardware, and the flexibility inherent in the software's architecture.
PCR: a PC-based wave front reconstructor for MMT-AO
PC Reconstructor (PCR) is the control software for the natural guide star and the laser guide star systems at the 6.5m Multiple Mirror Telescope (MMT) operating with the adaptive secondary mirror on Mt. Hopkins south of Tucson, AZ. The PCR computes and corrects for atmospheric turbulence, featuring a common interface between the wave front sensor camera control link and the deformable mirror, diagnostic data management, vibration control, closed-loop data distribution and saving routines, and housekeeping modules. We report here on the development, use and the on-sky performance of the PCR.
Telescope Control III
The LBT-AdOpt arbitrator: coordinating many loosely coupled processes
Luca Fini, Fabio Tosetti, Lorenzo Busoni, et al.
The LBT-AdOpt Supervisor is a collection of software processes which control the operations on the set of devices which make up the Adaptive Optics subsystem. The Arbitrator is the software component which coordinates the operations of the Supervisor in order to support operations at the telescope in reply to requests issued by the Instrument Control Software. In this paper we describe the architecture of the Arbitrator, based on an extremely modular, extensible and maintainable approach, designed using object-oriented techniques, that include intensive use of classes, exception handling and design patterns, as well as a clear division of tasks.
Applications of high-rate data logging to telescope system development and operations
The Gemini Secondary Mirror Tip/tilt Systems (M2TS) have greatly benefited from the availability of software-based data logging-to-disk of internal variables at servo loop rates, enabling efficient testing and troubleshooting. Similar 'fast-logging-to-disk' systems are now being considered for other Gemini subsystems. We describe how this technique was successfully applied to the M2TS, solving intractable tuning problems; a forward look will show how extensive and fully integrated logging and diagnostic capabilities are at the heart of the new design for the M2TS-2. Designers of new, ever-larger and more complex telescope systems are challenged to consider the benefits of including such systems in their own designs at an early stage, and to consider the costs, in terms of ease of performing diagnostics and loss of maintainability, of not doing so.
Inspector: the GTC graphical user interface
The Inspector is the graphical user interface of the GTC Control System. It is implemented in Java and gives a unified view of the whole system by representing it as a hierarchical browser of distributed objects. It is able to resolve at runtime the domain objects running distributed on the real-time systems and to use that domain information to dynamically generate different views of the system. Using the exact same set of tools and editing capabilities, it is as simple to create an engineering view of the GCS as it is to create a science view. Such flexibility and simplicity have made the Inspector not only the interface of the final system, but also one of the most important tools used by the engineers from very early in the development process to test the functionality of their respective components. Persistency of dynamically created views, command execution flows, and visualization of system alarms and logs are also important aspects of the Inspector which will be explained in this paper.
The GTC primary mirror control system
The Gran Telescopio Canarias (GTC) primary mirror control system is responsible for making the 36 segments behave like a monolithic mirror. It deals with 108 positioners giving 3 degrees of freedom to each segment, 168 position sensors installed between adjacent segments measuring nanometric displacements, 216 actuators controlling the figure of each segment by creating torques, 216 load cells quantifying the applied deforming forces, and 216 PT100 sensors monitoring the primary mirror temperature gradient to predict structural dilatations. It provides simple engineering access to all functionalities of each device as well as the real-time capabilities required to work in closed loop. All the critical parameters can be monitored from any observatory workstation thanks to the fully embedded distributed environment included in the control system framework. Hardware interfaces such as VME and CAN field buses are fully transparent to the user thanks to the Java front-end (Inspector), which allows the user to start, control and turn off each part of the system with a simple mouse click.
The GTC main axes servos and control system
M. Suárez, J. Rosich, J. Ortega, et al.
The GTC azimuth and elevation axes control systems employ large custom direct-drive motors operated by means of embedded fully-digital current loops. A high-performance position loop has been developed based on sinusoidal encoder feedback with interpolation error compensation. Real-time servo feedback and trajectory tracking is implemented by object-oriented software components at CPU-level which trigger encoder sampling, interpolate the remote CORBA demands and perform high-frequency setpoint streaming for the servo controller.
Wind Evaluation Breadboard electronics and software
Miguel Núñez, Marcos Reyes, Teodora Viera, et al.
WEB, the Wind Evaluation Breadboard, is an Extremely Large Telescope primary mirror simulator, developed with the aim of quantifying the ability of a segmented primary mirror to cope with wind disturbances. This instrument, supported by the European Community (Framework Programme 6, ELT Design Study), is developed by ESO, IAC, MEDIA-ALTRAN, JUPASA and FOGALE. The WEB is a bench of about 20 tons and 7 m diameter emulating a segmented primary mirror and its cell, with 7 hexagonal segment simulators, including electromechanical support systems. In this paper we present the WEB central control electronics and the software development, which has to interface with: position actuators, auxiliary slave actuators, edge sensors, azimuth ring, elevation actuator, meteorological station and air-balloon enclosure. The set of subsystems to control is a reduced version of a real telescope segmented primary mirror control system with high real-time performance, but emphasizing development-time efficiency and flexibility, because WEB is a test bench. The paper includes a detailed description of the hardware and software, paying special attention to real-time performance. The hardware is composed of three computers, and the software architecture has been divided into three intercommunicating applications implemented using LabVIEW on Windows XP and the Phar Lap ETS real-time operating system. The edge sensor and position actuator closed loop has a sampling and commanding frequency of 1 kHz.
Observatory Control I
The STELLA robotic observatory: first two years of high-resolution spectroscopy
The STELLA project consists of two robotic 1.2m telescopes to simultaneously monitor stellar activity with a high resolution echelle spectrograph on one telescope, and a photometric imaging instrument on the other telescope. The STELLA observatory is located at the Observatorio del Teide on the Canary island of Tenerife. The STELLA Echelle spectrograph (SES) has been operated in robotic mode for two years now, and produced approximately 10,000 spectra of the entire optical range between 390 and 900 nm at a spectral resolution of 55,000 with a peak shutter-open time of 93%. Although we do not use an iodine cell nor an actively stabilized chamber, its average radial velocity precision over the past two years was 60 to 150m/s rms, depending on target. The Wide-Field STELLA Imaging Photometer (WIFSIP) is currently being tested and will enter operation early 2009. In this paper, we present an update report on the first two years of operation.
Observation scheduling simulation framework: design and first results
This paper describes a modular component architecture for the construction of observation schedulers along with a simulation framework with which schedulers can be tested under a variety of environmental scenarios. We discuss a series of basic efficiency and quality metrics which can be used to measure the value of schedules. Results are presented from a series of simulations using this framework in which a set of observation scheduling paradigms ranging from on-demand despatching to a short-horizon look-ahead scheduler are tested under a series of increasingly challenging environmental conditions.
The SALT observation control system
Janus Brink, Anne Charles, Christian Hettlage, et al.
With the Southern African Large Telescope (SALT) on the brink of entering its fully operational phase, its suite of telescope control software has matured significantly towards the fully fledged control system intended to meet the demands of the user community. In this paper the authors present an overview of the design and implementation of the SALT Telescope Control System (TCS); detailing its main components and the interfaces between them - specifically in relation to the Observation Control System (OCS) that will allow the SALT to be used in an efficient queue-scheduled fashion. Finally, the capabilities and constraints of the design are highlighted to guide the SALT user community in preparing proposals that make optimal use of the available telescope time.
A framework for the Subaru Telescope observation control system based on the command design pattern
Eric Jeschke, Bruce Bon, Takeshi Inagaki, et al.
Subaru Telescope is developing a second-generation Observation Control System that specifically addresses some of the deficiencies of the current Subaru OCS. One area of concern is better extensibility: the current system uses a custom language for implementing commands with a complex macro processing subsystem written in C. It is laborious to improve the language and awkward for scientists to extend and use standard programming techniques. Our Generation 2 OCS provides a lightweight, object-oriented task framework based on the Command design pattern. The framework provides a base task class that abstracts services for processing status and other common infrastructure activities. Upon this are built and provided a set of "atomic" tasks for telescope and instrument commands. A set of "container" tasks based on common sequential and concurrent command processing paradigms is also included. Since all tasks share the same exact interface, it is straightforward to build up compound tasks by plugging simple tasks into container tasks and container tasks into other containers, and so forth. In this way various advanced astronomical workflows can be readily created, with well controlled behaviors. In addition, since tasks are written in Python, it is easy for astronomers to subclass and extend the standard observatory tasks with their own custom extensions and behaviors, in a high-level, full-featured programming language. In this talk we will provide an overview of the task framework design and present preliminary results on the use of the framework during two separate engineering runs.
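A minimal sketch of the Command-pattern idea described above, with atomic tasks and a sequential container sharing one interface; class and method names here are hypothetical, not the actual Gen2 API.

```python
class Task:
    """Base task in the spirit of the Command design pattern (illustrative only)."""
    def execute(self):
        raise NotImplementedError

class SlewTelescope(Task):
    def __init__(self, ra, dec):
        self.ra, self.dec = ra, dec
    def execute(self):
        print(f"slewing to {self.ra}, {self.dec}")

class TakeExposure(Task):
    def __init__(self, exptime):
        self.exptime = exptime
    def execute(self):
        print(f"exposing for {self.exptime}s")

class SequentialTask(Task):
    """Container task: runs its children one after another."""
    def __init__(self, *children):
        self.children = children
    def execute(self):
        for child in self.children:
            child.execute()

# Because containers and atomic tasks share one interface, compound
# workflows are built by plugging tasks into containers:
observation = SequentialTask(SlewTelescope(187.5, 12.4), TakeExposure(300))
observation.execute()
```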
The ALMA/EVLA project data model: steps toward a common project description for astronomy
The "Project Data Model" (PDM) is a model of the information that describes an astronomical observing project. In this paper we consider the PDM to cover the Proposal and Observing Preparation phases (also often called Phase 1 and Phase 2), and also the intermediate phase of reviewing and approving the project. At the back end of observing, the production of calibrated or partially calibrated science data, such models or data structures have been common for some time, albeit evolving (FITS, Measurement Set, etc.), but modelling the front end of observing is a relatively recent phenomenon, with most observatories creating their own versions of the "PDM". This paper describes work towards a common PDM for two radio observatories that are in development, ALMA and the EVLA. It goes further to explore the prospect of a wider common PDM that could be shared across astronomy. Is there a case to produce such a common PDM? And is it feasible? It is likely that a common model for Phase 1, an observing proposal, is possible. However, for a number of reasons a common model for Phase 2 is a much tougher challenge.
A common framework for the observation software of astronomical instruments at ESO
Eszter Pozna, G. Zins, P. Santin, et al.
The Observation Software (OS) is the supervisory software which manages all the exposures and calibrations made by an ESO/VLT instrument. It forms part of the multi-process and multi-layer ESO/VLT instrument software package, receiving astronomer instructions either from a template script or directly from the instrument's graphical user interface. In order to speed up development, ease maintenance and hence decrease the costs of the Observation Software of different instruments (at various sites VLT, VLTI, La Silla, VISTA), a software framework "Base Observation Software Stub" (BOSS) is supplied by ESO. This article introduces the objectives of the tool collecting the general features of all instrument OS, such as configuration and synchronization of the subsystems, state alignment, exposure and image file handling. The basic structure of the implementation is explained (using design patterns), showing the way the framework copes with a challenge of being constantly adjusted to new generic requirements imposed by the complexity of new instruments, performance requirements, increasing image file size and file numbers, and at the same time remaining backward compatible. The instrument-specific features are illustrated via three of many applications: FLAMES is an example of a complex instrument using a "super OS" controlling three instruments as subsystems; AMBER is a VLTI instrument; and VISTA has high performance requirements on image file handling.
ALMA Observing Tool
We present a report on the current development status of the ALMA Observing Tool, describing how the tool operates as an integrated environment for proposal and program preparation. The paper also covers the science-oriented graphical tools for both spatial and spectral setup, their system-oriented equivalents, local oscillator and correlator setup assistants as well as program validation.
Observatory Control II
Automatic validation of science programs in the Gemini Observing Tool
The key to a successful observing experience at Gemini is a well-prepared science program. The astronomer uses a software application called the Gemini Observing Tool (OT) to fill in the specifics of instrument and telescope configuration during the Phase 2 process. This task involves knowing several details about the Gemini instruments as well as particularities of the telescope and the best way to observe with them. Unfortunately, reviewing these programs can be tedious and error prone. Failure to catch a simple misconfiguration could lead to suboptimal science results or even lost time at the telescope. As part of an effort to make it easier for investigators to define the details of their programs and for the National Gemini Offices and Gemini contact scientists to check and validate them, we have included an automatic program-checking engine in the OT. The "Phase 2 Checker" continually examines the science program configuration as edits are made, finds significant problems, and reports them to the user along with suggested corrections. Since its introduction in the 2007B semester release of the Observing Tool, this feature has been very well received by the community. This paper describes the software (infrastructure and user interface) that supports the Phase 2 Checker, results of validating new and existing science programs, and future improvements we are currently considering.
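As a rough illustration of a continuously applied, rule-based program checker, the sketch below validates a toy observation description; the rules, field names and message format are invented for illustration, and the real Phase 2 Checker is part of the Java-based Observing Tool.

```python
# Hypothetical rules: each inspects the configuration and yields
# (severity, problem, suggested correction) tuples.
def check_exposure_time(obs):
    if obs.get("exposure_time", 0) <= 0:
        yield ("error", "Exposure time must be positive",
               "Set an exposure time appropriate for the target brightness")

def check_guide_star(obs):
    if not obs.get("guide_star"):
        yield ("warning", "No guide star selected",
               "Run the automatic guide-star search before submitting")

RULES = [check_exposure_time, check_guide_star]

def validate(obs):
    """Re-run every rule after each edit and collect the problems found."""
    return [problem for rule in RULES for problem in rule(obs)]

problems = validate({"exposure_time": 0, "guide_star": None})
for severity, message, fix in problems:
    print(severity, message, "->", fix)
```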
KARMA: the observation preparation tool for KMOS
KMOS is a multi-object integral field spectrometer working in the near infrared which is currently being built for the ESO VLT by a consortium of UK and German institutes. It is capable of selecting up to 24 target fields for integral field spectroscopy simultaneously by means of 24 robotic pick-off arms. For the preparation of observations with KMOS a dedicated preparation tool KARMA ("KMOS Arm Allocator") will be provided which optimizes the assignment of targets to these arms automatically, thereby taking target priorities and several mechanical and optical constraints into account. For this purpose two efficient algorithms, both being able to cope with the underlying optimization problem in a different way, were developed. We present the concept and architecture of KARMA in general and the optimization algorithms in detail.
Software Frameworks
A lightweight fault-tolerant middleware for a Subaru Telescope second generation observation control system
Eric Jeschke, Bruce Bon, Takeshi Inagaki, et al.
Subaru Telescope is developing a second-generation Observation Control System that specifically addresses some of the deficiencies of the current Subaru OCS. Two areas of concern are complexity and failure handling. The current system has over 1000 dedicated OCS processes spread across a dozen hosts and provides nothing in the way of automated failover. Furthermore, manual failover is so fraught with difficulty that it is rarely attempted. Our Generation 2 OCS is written almost entirely in Python and builds upon a Subaru-developed middleware based on the XML-RPC protocol. This framework offers the following benefits:
  • has very few dependencies outside of standard Python
  • provides a nearly seamless remote proxy object-oriented interface
  • provides optional user/password authentication and/or SSL encryption
  • is extremely simple to use from client applications
  • is connectionless, and assists transparent failover of communications and services on a cluster of hosts
  • has reasonable performance for a wide range of needs
  • allows multiple language bindings
  • for dynamic languages, requires no interface stub files
The "back end" (service side) of the OCS is nearing completion, and has already been used successfully during two separate OCS engineering runs. It is comprised of only a couple dozen processes, and provides automated failover capabilities on a rack of commodity x86 Linux servers. We provide an overview of the middleware design and its failover capabilities. Some data on the performance of communications using the middleware protocol is included.
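A minimal sketch of the remote-proxy style of such middleware, using only the XML-RPC modules from the Python standard library; the Subaru middleware layers authentication, SSL, name resolution and failover on top of this basic mechanism, so this is an illustration of the pattern, not its API.

```python
from xmlrpc.server import SimpleXMLRPCServer
import xmlrpc.client

class StatusService:
    """Hypothetical status service exposed over XML-RPC."""
    def get(self, alias):
        return {"TSCS.AZ": 123.4}.get(alias)   # stand-in status lookup

def serve():
    server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
    server.register_instance(StatusService())
    server.serve_forever()

def client_example():
    # The client obtains a proxy object and calls methods as if they were local.
    status = xmlrpc.client.ServerProxy("http://localhost:8000", allow_none=True)
    print(status.get("TSCS.AZ"))
```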
The Large Synoptic Survey Telescope middleware messaging system
The Large Synoptic Survey Telescope (LSST) is a project with stringent requirements on the control aspects and telemetry capture demands, to command the cadence of the survey process, and to help analyze and discover the systematics of the observing process. For that purpose, the Data Distribution Service (DDS) standard has been selected as the communications middleware to distribute information across the entire system. This paper describes the new architecture of the control system and the middleware messaging, for handling the commands and telemetry based on the use of the DDS standard.
The ALMA common software: dispatch from the trenches
The ALMA Common Software (ACS) provides both an application framework and CORBA-based middleware for the distributed software system of the Atacama Large Millimeter Array. Building upon open-source tools such as the JacORB, TAO and OmniORB ORBs, ACS supports the development of component-based software in any of three languages: Java, C++ and Python. Now in its seventh major release, ACS has matured, both in its feature set as well as in its reliability and performance. However, it is only recently that the ALMA observatory's hardware and application software has reached a level at which it can exploit and challenge the infrastructure that ACS provides. In particular, the availability of an Antenna Test Facility (ATF) at the site of the Very Large Array in New Mexico has enabled us to exercise and test the still evolving end-to-end ALMA software under realistic conditions. The major focus of ACS, consequently, has shifted from the development of new features to consideration of how best to use those that already exist. Configuration details which could be neglected for the purpose of running unit tests or skeletal end-to-end simulations have turned out to be sensitive levers for achieving satisfactory performance in a real-world environment. Surprising behavior in some open-source tools has required us to choose between patching code that we did not write or addressing its deficiencies by implementing workarounds in our own software. We will discuss these and other aspects of our recent experience at the ATF and in simulation.
Project Reports: Optical/IR
Thirty Meter Telescope: observatory software requirements, architecture, and preliminary implementation strategies
The Thirty Meter Telescope (TMT) will be a ground-based, 30-m optical-IR alt-az telescope with a highly segmented primary mirror located in a remote location. Efficient science operations require the asynchronous coordination of many different sub-systems including telescope mount, three independent active optics sub-systems, adaptive optics, laser guide stars, and user-configured science instrument. An important high-level requirement is that target acquisition and observatory system configuration must be completed in less than 5 minutes (or 10 minutes if moving to a new instrument). To meet this coordination challenge and target acquisition time requirement, a distributed software architecture is envisioned consisting of software components linked by a service-based software communications backbone. A master sequencer coordinates the activities of mid-layer sequencers for the telescope, adaptive optics, and selected instrument. In turn, these mid-layer sequencers coordinate the activities of groups of sub-systems. In this paper, TMT observatory requirements are presented in more detail, followed by a description of the design reference software architecture and a discussion of preliminary implementation strategies.
Enabling technologies and constraints for software sharing in large astronomy projects
The new observatories currently being built, upgraded or designed represent a big step up in terms of complexity (laser guide star, adaptive optics, 30/40m class telescopes) with respect to the previous generation of ground-based telescopes. Moreover, the high cost of observing time imposes challenging requirements on system reliability and observing efficiency as well as challenging constraints in implementing major upgrades to operational observatories. Many of the basic issues are common to most of the new projects, while each project also brings an additional set of very specific challenges, imposed by the unique characteristics and scientific objectives of each telescope. Finding ways to share the solution and the risk for these common problems would allow the teams in the different projects to concentrate more resources on the specific challenges, while at the same time realizing more reliable and cost efficient systems. In this paper we analyze the many dimensions that might be involved in sharing and re-using observatory software (e.g. components, design, infrastructure frameworks, applications, toolkits, etc.). We also examine observatory experiences and technology trends. This work is the continuation of an effort started in the middle of 2007 to analyze the trends in software for the control systems of large astronomy projects.
Instrument Control I
A new approach for instrument software at Gemini
Gemini Observatory is now developing its next generation of astronomical instruments, the Aspen instruments. These new instruments are sophisticated and costly, requiring large, distributed, collaborative teams. Instrument software groups often include experienced team members with existing mature code. Gemini has taken its experience from the previous generation of instruments and current hardware and software technology to create an approach for developing instrument software that takes advantage of the strengths of our instrument builders and our own operations needs. This paper describes this new software approach that couples a lightweight infrastructure and software library with aspects of modern agile software development. The Gemini Planet Imager instrument project, which is currently approaching its critical design review, is used to demonstrate aspects of this approach. New facilities under development will face similar issues in the future, and the approach presented here can be applied to other projects.
Gemini Planet Imager autonomous software
Jennifer Dunn, Robert Wooff, Malcolm Smith, et al.
The Gemini Planet Imager (GPI) is an "extreme" adaptive optics coronagraph system that will have the ability to directly detect and characterize young Jovian-mass exoplanets. The design of this instrument involves eight principal institutions geographically spread across North America, with four of those sites writing software that must run seamlessly together while maintaining autonomous behaviour. The objective of the software teams is to provide Gemini with a unified software system that not only performs well but also is easy to maintain. Issues such as autonomous behaviour in a unified environment, common memory to share status and information, examples of how this is being implemented, plans for early software integration and testing, command hierarchy, plans for common documentation and updates are explored in this paper. The project completed its preliminary design phase in 2007, and has just recently completed its critical design phase.
The read-out and control system of the DES camera (SISPI)
K. Honscheid, T. Abbott, J. Annis, et al.
We describe the data acquisition and control system of the Dark Energy Camera (DECam), which will be the primary instrument used in the Dark Energy Survey (DES). DECam will be a 3 sq. deg. mosaic camera mounted at the prime focus of the Blanco 4m telescope at the Cerro Tololo Inter-American Observatory (CTIO). The DECam data acquisition system (SISPI) is implemented as a distributed multi-processor system with a software architecture built on the Client-Server and Publish-Subscribe design patterns. The underlying message passing protocol is based on the SML inter-process communication software developed at CTIO [1]. For the DECam read-out and control system this software package was ported from LabVIEW to the Python and C programming languages. A shared variable system was added to support exchange of telemetry data and other information between different components of the system. In this paper we discuss the SISPI architecture, new concepts used in the design of the infrastructure software and provide an overview of the remaining components of the DES read-out and control system.
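The publish-subscribe and shared-variable ideas can be sketched in a few lines; the in-process bus below only illustrates the pattern, whereas SISPI's real implementation is distributed across processes and built on the SML-derived messaging layer.

```python
from collections import defaultdict

class MessageBus:
    """Toy publish-subscribe bus with a shared-variable cache (illustrative only)."""
    def __init__(self):
        self.subscribers = defaultdict(list)
        self.shared = {}                          # last published value per topic

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, value):
        self.shared[topic] = value                # telemetry available on demand
        for callback in self.subscribers[topic]:
            callback(topic, value)                # push to every subscriber

bus = MessageBus()
bus.subscribe("ccd.temperature", lambda topic, value: print(topic, value))
bus.publish("ccd.temperature", -100.2)
print(bus.shared["ccd.temperature"])              # latest value for late joiners
```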
The NEWFIRM observing software: from design to implementation
NEWFIRM is the wide-field infra-red mosaic camera just delivered and commissioned on the Mayall 4-m telescope on Kitt Peak. As with other major instrumentation projects, the software was part of a design, development, implementation and delivery strategy. In this paper, we describe the final implementation of the NEWFIRM software from acquisition within a MONSOON controller environment, directed by the observation control system, to the quick-look functionality at the telescope and final delivery of standardized data products via the pipeline. NEWFIRM is, therefore, the culmination of several years of design and development effort on several fronts.
PyDevCom: a generic device communications application
PyDevCom is a small application written in the Python programming language for communicating with astronomical instrumentation devices (e.g. temperature monitors and controllers, motion controllers, etc.) that use serial communication interfaces. It provides a highly configurable framework for defining an interface for communicating with a serial device. The configuration information for PyDevCom is stored in an XML file which is designed to be easily read and customised. Therefore when an interface to a new device is required, a new configuration file for the device is all that is needed. This avoids having to write a new device-specific communications application. The core PyDevCom application can be used interactively in a Python terminal, or may be executed inside a script, providing a great deal of flexibility for testing hardware in the lab. PyDevCom has its own platform-independent GUI, based on wxPython, which automatically constructs the interface for a given device from the information in the XML configuration file. Future development for PyDevCom will add several new user interface features that include a plug-in architecture for adding specially tailored GUI interfaces written in Python. Once these features have been implemented they will extend PyDevCom to function as a lightweight instrument control system.
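A sketch of the configuration-driven approach: a device's command set is described in XML and command strings are built at run time from that description. The XML schema, device and commands shown are hypothetical, not PyDevCom's actual file format.

```python
import xml.etree.ElementTree as ET

DEVICE_XML = """
<device name="tempmon" port="/dev/ttyS0" baudrate="9600">
  <command name="read_temp" send="KRDG? {channel}" reply="float"/>
  <command name="set_point" send="SETP {loop},{value}" reply="none"/>
</device>
"""

def build_command(xml_text, name, **params):
    """Look up a command template by name and fill in its parameters."""
    root = ET.fromstring(xml_text)
    for cmd in root.findall("command"):
        if cmd.get("name") == name:
            return cmd.get("send").format(**params)
    raise KeyError(name)

# The formatted string would then be written to the serial port.
print(build_command(DEVICE_XML, "read_temp", channel="A"))   # -> "KRDG? A"
```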
Software Engineering and Management
UDP: an integral management system of embedded scripts implemented into the IMaX instrument of the Sunrise mission
The UDP (User Defined Program) system is a scripting framework for controlling and extending instrumentation software. It has been specially designed for air- and space-borne instruments with flexibility, error control, reuse, automation, traceability and ease of development as its main objectives. All the system applications are connected through a database containing the valid script commands including descriptive information and source code. The system can be adapted to different projects without changes in the framework tools, thus achieving great level of flexibility and reusability. The UDP system comprises: an embedded system for the execution of scripts by the instrument software; automatic tools for aiding in the creation, modification, documentation and tracing of new scripting language commands; and interfaces for the creation of scripts and execution control.
The software development process at the Chandra X-ray Center
Janet D. Evans, Ian N. Evans, Giuseppina Fabbiano
Software development for the Chandra X-ray Center Data System began in the mid 1990's, and the waterfall model of development was mandated by our documents. Although we initially tried this approach, we found that a process with elements of the spiral model worked better in our science-based environment. High-level science requirements are usually established by scientists, and provided to the software development group. We follow with review and refinement of those requirements prior to the design phase. Design reviews are conducted for substantial projects within the development team, and include scientists whenever appropriate. Development follows agreed upon schedules that include several internal releases of the task before completion. Feedback from science testing early in the process helps to identify and resolve misunderstandings present in the detailed requirements, and allows review of intangible requirements. The development process includes specific testing of requirements, developer and user documentation, and support after deployment to operations or to users. We discuss the process we follow at the Chandra X-ray Center (CXC) to develop software and support operations. We review the role of the science and development staff from conception to release of software, and some lessons learned from managing CXC software development for over a decade.
Data Handling and Processing I
CADOR and TAROT: a virtual observatory
Myrtille Bourez-Laas, Frédéric Vachier, Alain Klotz, et al.
TAROT (Telescope Action Rapide pour les Objets Transitoires - Rapid Action Telescope for Transient Objects) is a network of two robotic ground-based telescopes. The telescopes are fully automated, from the scheduling of the observation requests to the processing of the data. All the applications use a specific automated processing pipeline which has been continuously improved. CADOR (Coordination et Analyse des Donnees d'Observatoires Robotises - Coordination and Data Analysis of Robotic Observatories) is a set of database servers which manage the TAROT telescopes. CADOR is the prime interface to request new observations from TAROT and to access all saved images, with the possibility of additional processing and analysis. TAROT and CADOR are compliant with Virtual Observatory standards and protocols.
High performance astronomical data communications in the LSST data management system
Jeff Kantor, Ron Lambert, Chip Cox, et al.
The Large Synoptic Survey Telescope (LSST) is an 8.4m (6.5m effective), wide-field (9.6 square degree), ground-based telescope with a 3.2 GPixel camera. It will survey over 20,000 square degrees with 1,000 re-visits over 10 years in six visible bands, and is scheduled to begin full scientific operations in 2016. The Data Management System will acquire and process the images, issue transient alerts, and catalog the world's largest database of optical astronomical data. Every 24 hours, 15 terabytes of raw data will be transferred via redundant 10 Gbps fiber optics down from the mountain summit at Cerro Pachon, Chile to the Base Facility in La Serena for transient alert processing. Simultaneously, the data will be transferred at 2.5 Gbps over fiber optics to the Archive Center in Champaign, Illinois for archiving, further scientific processing, and creation of scientific data catalogs. Finally, the Archive Center will distribute the processed data and catalogs at 10 Gbps to a number of Data Access Centers for scientific, educational, and public access. Redundant storage and network bandwidth is built into the design of the system. The current networking acquisition strategy involves leveraging existing dark fiber for the links within Chile, between Chile and the U.S., and within the U.S. A significant number of carriers and networks are involved in the acquisition, deployment, and operation of this capability, which must be coordinated. Advanced protocols are being investigated during our Research and Development phase to address anticipated challenges in effective utilization. We describe the data communications requirements, architecture, and acquisition strategy in this paper.
Data Vault: providing simple web access to NRAO data archives
Ron DuPlain, John Benson, Eric Sessoms
In late 2007, the National Radio Astronomy Observatory (NRAO) launched Data Vault, a feature-rich web application for simplified access to NRAO data archives. This application allows users to submit a Google-like free-text search, and browse, download, and view further information on matching telescope data. Data Vault uses the model-view-controller design pattern with web.py, a minimalist open-source web framework built with the Python programming language. Data Vault implements an Ajax client built on the Google Web Toolkit (GWT), which creates structured JavaScript applications. This application supports plug-ins for linking data to additional web tools and services, including Google Sky. NRAO sought the inspiration of Google's remarkably elegant user interface and notable performance to create a modern search tool for the NRAO science data archive, taking advantage of the rapid development frameworks of web.py and GWT to create a web application on a short timeline, while providing modular, easily maintainable code. Data Vault provides users with a NRAO-focused data archive while linking to and providing more information wherever possible. Free-text search capabilities are possible (and even simple) with an innovative query parser. NRAO develops all software under an open-source license; Data Vault is available to developers and users alike.
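A minimal web.py sketch of a free-text search endpoint in the spirit of Data Vault; the URL, query parameter and archive lookup below are stand-ins, not the actual Data Vault code.

```python
import web

urls = ("/search", "Search")

class Search:
    def GET(self):
        query = web.input(q="").q             # free-text query string from the URL
        results = search_archive(query)       # hypothetical archive lookup
        return "\n".join(r["project"] for r in results)

def search_archive(query):
    # Placeholder for the real query parser and archive database.
    return [{"project": "AGBT08A_001"}] if query else []

if __name__ == "__main__":
    web.application(urls, globals()).run()
```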
HERSCHEL/PACS on-board reduction flight software
Roland Ottensamer, Franz Kerschbaum
PACS, the Photodetector Array Camera and Spectrometer for the HERSCHEL Space Observatory (HSO), has a sophisticated on-board software performing data reduction and compression to reach a data rate that is compatible with the downlink requirements. For this purpose, highly specialized lossy and lossless techniques are combined to retain a maximum of the original signal quality. The FM generation of detector hardware for the HSO has given rise to adaptation of the already qualified flight software. In response to changed detector characteristics and observational needs, the reduction/compression scheme has undergone substantial modifications, such as an additional quantization step in photometry. In spectroscopy, on-board deglitching has been sacrificed in favour of higher temporal resolution, thereby freeing CPU resources that were utilised for an improved semi-adaptive arithmetic compression model for lossless compression. The modular concept allows for upgrades during the verification phase to increase in-flight performance. A detailed overview of the reduction/compression software and its capabilities is given, along with lessons learned from the FM instrument level test campaigns that had to be taken into consideration, as well as demands from ground segment infrastructure to guarantee a sound operational phase.
Data Handling and Processing II
Toward a graphical user interface for the SPIRE spectrometer pipeline
C. Ordenovic, C. Surace, J.P. Baluteau, et al.
Herschel is a satellite mission led by ESA and involving an international consortium of countries. The HCSS is in charge of the data processing pipeline. This pipeline is written in Jython and includes Java classes. We present a convenient way for a user to deal with the SPIRE photometer and spectrometer pipeline scripts. The provided graphical user interface is built up automatically from a Jython script. The user can choose tasks to be executed, parameterise them and set breakpoints during the pipeline execution. Results can be displayed and saved in FITS and VOTable formats.
Launching GUPPI: the Green Bank Ultimate Pulsar Processing Instrument
Ron DuPlain, Scott Ransom, Paul Demorest, et al.
The National Radio Astronomy Observatory (NRAO) is launching the Green Bank Ultimate Pulsar Processing Instrument (GUPPI), a prototype flexible digital signal processor designed for pulsar observations with the Robert C. Byrd Green Bank Telescope (GBT). GUPPI uses field programmable gate array (FPGA) hardware and design tools developed by the Center for Astronomy Signal Processing and Electronics Research (CASPER) at the University of California, Berkeley. The NRAO has been concurrently developing GUPPI software and hardware using minimal software resources. The software handles instrument monitor and control, data acquisition, and hardware interfacing. GUPPI is currently an expert-only spectrometer, but supports future integration with the full GBT production system. The NRAO was able to take advantage of the unique flexibility of the CASPER FPGA hardware platform, develop hardware and software in parallel, and build a suite of software tools for monitoring, controlling, and acquiring data with a new instrument over a short timeline of just a few months. The NRAO interacts regularly with CASPER and its users, and GUPPI stands as an example of what reconfigurable computing and open-source development can do for radio astronomy. GUPPI is modular for portability, and the NRAO provides the results of development as an open-source resource.
KISIP: a software package for speckle interferometry of adaptive optics corrected solar data
We present a speckle interferometry code for solar data taken with the help of an adaptive optics (AO) system. As any AO correction is only partial, there is a need to use post-facto reconstruction algorithms to achieve the diffraction limit of the telescope over a large field of view most of the observational time. However, data rates of current and future solar telescopes are ever increasing with camera chip sizes. In order to overcome the tedious and expensive data handling, we investigate the possibility of using the presented speckle reconstruction program in a real-time application at the telescope sites themselves. The program features Fourier phase reconstruction algorithms using either an extended Knox-Thompson or a triple correlation scheme. The Fourier amplitude reconstruction has been adjusted for use with models that take the correction of an AO system into account. The code has been written in the C programming language and optimized for parallel processing in a multi-processor environment. We analyze the scalability of the code to find possible bottlenecks. Finally, the phase reconstruction accuracy is validated by comparison of reconstructed data with satellite data. We conclude that the presented code is capable of running in future real-time reconstruction applications at solar telescopes if care is taken that the multi-processor environments have low latencies between the processing nodes.
CRBLASTER: a fast parallel-processing program for cosmic ray rejection
Many astronomical image-analysis programs are based on algorithms that can be described as being embarrassingly parallel, where the analysis of one subimage generally does not affect the analysis of another subimage. Yet few parallel-processing astrophysical image-analysis programs exist that can easily take full advantage of today's fast multi-core servers costing a few thousand dollars. A major reason for the shortage of state-of-the-art parallel-processing astrophysical image-analysis codes is that the writing of parallel codes has been perceived to be difficult. I describe a new fast parallel-processing image-analysis program called crblaster which does cosmic ray rejection using van Dokkum's L.A.Cosmic algorithm. crblaster is written in C using the industry standard Message Passing Interface (MPI) library. Processing a single 800×800 HST WFPC2 image takes 1.87 seconds using 4 processes on an Apple Xserve with two dual-core 3.0-GHz Intel Xeons; the efficiency of the program running with the 4 processors is 82%. The code can be used as a software framework for easy development of parallel-processing image-analysis programs using embarrassingly parallel algorithms; the biggest required modification is the replacement of the core image processing function with an alternative image-analysis function based on a single-processor algorithm. I describe the design, implementation and performance of the program.
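The embarrassingly parallel pattern can be sketched as scatter, independent per-subimage work, and gather. The example below uses Python with mpi4py for consistency with the other sketches in this summary (crblaster itself is written in C against the MPI library), and the cleaning step is a simple placeholder rather than the L.A.Cosmic algorithm.

```python
import numpy as np
from mpi4py import MPI

def clean_subimage(subimage):
    # Stand-in for the per-subimage cosmic-ray rejection step.
    return np.clip(subimage, 0, np.median(subimage) + 5 * subimage.std())

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Rank 0 holds the full image and splits it into row strips.
image = np.random.poisson(100.0, (800, 800)).astype(float) if rank == 0 else None
strips = np.array_split(image, size, axis=0) if rank == 0 else None

strip = comm.scatter(strips, root=0)      # distribute one strip per rank
cleaned = clean_subimage(strip)           # independent work, no communication
result = comm.gather(cleaned, root=0)     # rank 0 reassembles the pieces
if rank == 0:
    cleaned_image = np.vstack(result)
```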
Instrument Control Poster Session
Design and implementation of a service-oriented driver architecture for LINC-NIRVANA
Frank Kittmann, Florian Briegel, Lars Mohr, et al.
LINC-NIRVANA (LN) is a German-Italian Fizeau (imaging) interferometer for the Large Binocular Telescope (LBT). The Instrument Control Software (ICS) of this instrument is a hierarchical, distributed software package which runs on several computers. In this paper we present the bottom layer of the hierarchy - the Basic Device Application (BASDA) layer. This layer simplifies the development of the ICS through a general driver architecture, which supports different types of hardware. This generic device architecture provides a high-level interface that encapsulates the hardware-dependent driver. The benefit of such a device architecture is that it keeps the basic device-driver layer flexible and independent of the hardware, and keeps the hardware transparent to the ICS. Additionally, the basic device-driver layer supports interfaces to IDL-based applications for calibration and laboratory testing of astronomical instruments, and interfaces to engineering GUIs that allow the software components to be maintained easily.
Installation and first light of the BOOTES-IR near-IR camera
BIRCAM is a near-infrared (0.8-2.5 μm) cryogenic camera based on a 1K×1K HgCdTe array. It was designed for - and is now mounted at - one of the Nasmyth foci of the fast-slewing 0.6 m BOOTES-IR telescope at the Sierra Nevada Observatory (OSN) in Spain. The primary science mission is prompt gamma-ray burst afterglow research, with an implied demand for extremely time-efficient operation. We describe the challenges of installing a heavy camera on a small high-speed telescope, of integrating the dithering mechanism, the filter wheel, and the array itself into a high-efficiency instrument, and the design of the software to meet these requirements.
FPGA based control system for space instrumentation
Anna M. Di Giorgio, Pasquale Cerulli Irelli, Francesco Nuzzolo, et al.
The prototype for a general-purpose FPGA-based control system for space instrumentation is presented, with particular attention to the instrument control application software. The system hardware is based on the LEON3FT processor, which gives the flexibility to configure the chip with only the necessary hardware functionality, from simple logic up to small dedicated processors. The instrument control software is developed in ANSI C and, for time-critical (<10 μs) commanding sequences, implements an internal instruction sequencer triggered via an interrupt service routine bound to a high-priority hardware interrupt.
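A minimal sketch of such an interrupt-triggered instruction sequencer is shown below, assuming a table of register writes stepped once per interrupt; the register addresses, table contents and function name are illustrative assumptions, not the flight software.

    /* Illustrative command sequencer stepped by a high-priority hardware
     * interrupt; all addresses and values are hypothetical. */
    #include <stdint.h>

    typedef struct {
        volatile uint32_t *reg;    /* hardware register to write */
        uint32_t           value;  /* value to write             */
    } seq_instr_t;

    static const seq_instr_t sequence[] = {
        { (volatile uint32_t *)0x80000100u, 0x0001u },
        { (volatile uint32_t *)0x80000104u, 0x00FFu },
        { (volatile uint32_t *)0x80000108u, 0x0000u },
    };

    static volatile unsigned seq_index = 0;

    /* Body of the interrupt service routine bound to the high-priority
     * trigger interrupt: one instruction is issued per interrupt. */
    void sequencer_isr(void)
    {
        if (seq_index < sizeof(sequence) / sizeof(sequence[0])) {
            *sequence[seq_index].reg = sequence[seq_index].value;
            seq_index++;
        }
    }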
LBTI software architecture
LBTI is a thermal imager and nulling interferometer to be installed on the Large Binocular Telescope (LBT). Here we present the distributed component architecture model and its simple yet powerful software structure, designed to complement the LBTI hardware model, which comprises the pyramid wavefront sensors with their control electronics, the universal beam combiner, the phase sensor, the science imager, and all housekeeping duties needed to run the cryogenics, compressors, vibration monitors, and the interface to the telescope control systems.
A first-generation software product line for data acquisition systems in astronomy
J. C. López-Ruiz, Rubén Heradio, José Antonio Cerrada Somolinos, et al.
This article presents a case study on developing a software product line for data acquisition systems in astronomy based on the Exemplar Driven Development methodology and the Exemplar Flexibilization Language tool. The main strategies used to build the software product line are based on domain commonality and variability, incremental scoping, and the use of existing artifacts. It is a lean methodology with little impact on the organization, suitable for small projects, which reduces product-line start-up time. Software product lines focus on creating a family of products instead of individual products. This approach has spectacular benefits in reducing time to market, maintaining know-how, reducing development costs, and increasing the quality of new products. Maintenance of the products is also enhanced, since all the data acquisition systems share the same product-line architecture.
ESPRESSO control software and electronics
P. Di Marcantonio, P. Santin, I. Coretti, et al.
The Astrophysical Technology Group of the INAF-AOT, as part of a consortium led by ESO, has carried out a feasibility study for the control software and electronics of a new-generation optical spectrograph named ESPRESSO. ESPRESSO has been conceived as a highly efficient, high-resolution, fiber-fed spectrograph of high mechanical and thermal stability, to be located at the Coudé Combined Laboratory of the VLT. These features, together with its ability to gather light from the four UTs simultaneously, make ESPRESSO a very challenging instrument. This paper presents an overview of the control software and electronics concept design, focusing on the more critical and innovative aspects of the spectrograph.
The WIYN ODI instrument software architecture
Andrey K. Yeatts, Daniel Harbeck, John Cavin, et al.
As camera focal planes become larger, with higher resolutions and increasingly higher data throughputs, the more they resemble the enterprise data systems found in commercial data centers. The WIYN One Degree Imager (ODI) is such a system. ODI is a mosaic imager with 64 independent CCD detectors with a total resolution of approximately a gigapixel, covering 1 square degree of the sky at the WIYN 3.5 m telescope at Kitt Peak. The ODI camera will bring improved seeing, widefield imaging, new modes of operation and automated integration with the NOAO Science Archive. It will also become the workhorse instrument of the observatory, with high availability and reliability. The new flexibility of the camera will allow (and require) constant refinement of imaging techniques, and calibration and maintenance processes. Large scale, parallel data processing, management and control will be a constant in the operation of the instrument. We are developing an enterprise level data system using typical Java J2EE constructs. With the advent of relatively inexpensive clustered hardware, scaling of image operations and management to the large volumes of data in ODI should be simplified. We describe an architecture in construction for ODI's 2010 deployment.
Design and realization of the control system for the three-channel birefringent filter
The Space Solar Telescope is one of the large-scale scientific programs under development in China. An important part of it is the filter, a three-channel birefringent filter consisting of 17 rotatable wave plates. In coordination with other mechanical and optical components, complicated and precise adjustments of their attitudes are necessary, which requires a high-accuracy control system to ensure coordinated motion. This paper describes the design and realization of that control system. It has a hardware part and a software part: the former uses an industrial controller, a control card, and stepper motors, while the latter uses an object-oriented, modular design, divided lengthwise by function and breadthwise by element layer. A wavelength-shift algorithm over the whole spectrum provides intelligent spectral scanning. At the same time, all control information is recorded in the system database. Tests show that the system is characterized by high precision, good stability, high data safety, and a user-friendly interface, fully meeting the design requirements. The paper also discusses some new ideas for making the filter more compact and easier to handle for future use in space flight.
Design and realization of the real-time spectrograph controller for LAMOST based on FPGA
Jianing Wang, Liyan Wu, Yizhong Zeng, et al.
A large Schmidt reflecting telescope, the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), is being built in China; it has an effective aperture of 4 meters and can observe the spectra of as many as 4000 objects simultaneously. To handle such a large number of objects, the dispersion part is composed of a set of 16 multipurpose fiber-fed double-beam Schmidt spectrographs, each of which has about ten movable components accommodated and manipulated in real time by a controller. An industrial Ethernet network connects these 16 spectrograph controllers. The light from stars is fed to the entrance slits of the spectrographs with optical fibers. In this paper we mainly introduce the design and realization of our real-time controller for the spectrograph. The design uses the System-on-a-Programmable-Chip (SOPC) technique based on a Field Programmable Gate Array (FPGA) and realizes control of the spectrographs through the NIOS II soft-core embedded processor. We encapsulate the stepper-motor controller as a reusable intellectual property (IP) core, greatly simplifying the design process and shortening the development time. Under the embedded operating system μC/OS-II, a multi-task control program has been written to realize real-time control of the movable parts of the spectrographs. At present, a number of such controllers have been applied in the LAMOST spectrographs.
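As a very rough sketch of how such a program could be organized under μC/OS-II (our own assumption, not the LAMOST source), one task might be dedicated to each movable component; the task priorities, stack sizes, and the step_motor_towards() helper below are invented for illustration.

    /* Illustrative multi-task layout under uC/OS-II: one task per movable
     * spectrograph component.  Priorities, stack sizes and the motor helper
     * are assumptions for the sake of the example. */
    #include "ucos_ii.h"

    #define STK_SIZE 128u

    static OS_STK shutter_stk[STK_SIZE];
    static OS_STK grating_stk[STK_SIZE];

    static void step_motor_towards(int motor_id)   /* hypothetical helper that   */
    {                                              /* pulses the stepper IP core */
        (void)motor_id;
    }

    static void motor_task(void *p_arg)
    {
        int motor_id = (int)(long)p_arg;
        for (;;) {
            step_motor_towards(motor_id);   /* move toward the commanded position */
            OSTimeDly(1);                   /* yield for one system tick          */
        }
    }

    int main(void)
    {
        OSInit();
        /* one task per component; stack top passed assuming a descending stack */
        OSTaskCreate(motor_task, (void *)1, &shutter_stk[STK_SIZE - 1u], 10u);
        OSTaskCreate(motor_task, (void *)2, &grating_stk[STK_SIZE - 1u], 11u);
        OSStart();                          /* hand control to the scheduler */
        return 0;
    }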
Software structure for Vega/Chara instrument
VEGA (Visible spEctroGraph and polArimeter) is one of the focal instruments of the CHARA array at Mount Wilson near Los Angeles. Its control system is based on techniques developed for the GI2T interferometer (Grand Interféromètre à 2 Télescopes) and for the SIRIUS fibered hypertelescope testbed at OCA (Observatoire de la Côte d'Azur). This article describes the software and electronics architecture of the instrument. It is based on a local network architecture and also uses Virtual Private Network connections. The server part is based on Windows XP (VC++), while the control software runs on Linux (C, GTK). For the control of the science detector and the fringe-tracking systems, distributed APIs use real-time techniques. The control software gathers all the necessary information about the instrument and allows automatic management of the instrument through an original task scheduler. This architecture is intended to allow the instrument to be driven from remote sites, such as our institute in the south of France.
The software upgrade of NICS
Emanuel Rossetti, Vincenzo Guido, Ernesto Oliva
NICS (the Near Infrared Camera Spectrometer) is a cooled near-infrared camera-spectrometer developed in the late 1990s at the INAF-Arcetri Astrophysical Observatory for the 3.5 m "Telescopio Nazionale Galileo" (TNG) at the La Palma Observatory. The instrument has been in regular scientific operation since the beginning of 2001. During the 2001-2007 period it was used on about 410 nights, yielding data that contributed to 60 refereed papers which have collected a total of more than 800 citations. At the age of 8 years, NICS is still among the most efficient and versatile infrared instruments existing worldwide. To improve its observational efficiency, we have designed, and are currently developing, new control software and GUI interfaces. The former has been devised to optimize the low-level tasks (in particular the motor controls), the latter to simplify the communication between the observer and the instrument. We give here a short description of the NICS software upgrade.
The GIANO control software system
Emanuel Rossetti, Ernesto Oliva, Livia Origlia
GIANO is an ultra-stable IR echelle spectrometer, optimized for both low (R≃400) and high (R≃50,000) resolution, that will be installed at the Nasmyth-B focus of the Italian national telescope (TNG). The assembly phase of GIANO started at the beginning of this year at the Infrared Laboratory of INAF-Arcetri and is currently in progress. We describe here the general control software structure of the instrument, covering both the user interface and the controls of all subsystems. We also present the software interface which provides the communication with the cryogenic system of the instrument and is handled by means of a Programmable Logic Controller.
An SOA developer framework for astronomical instrument control software
We present a new and flexible developer framework for high-performance service-oriented architecture (SOA) based systems, using the ICE middleware from ZeroC Inc. for interprocess communication. The framework was developed at the Max Planck Institute for Astronomy within the scope of the LBT interferometer LINC-NIRVANA control software but, owing to its flexibility, may also be used for other astronomical instruments. The system architecture was designed to decrease the development effort of large SOA-based systems such as astronomical instrument control software. The advantages of this new framework are the combination of online instrument data management, validation, and the ability to integrate user-defined data manipulation.
A component based astronomical visualization tool for instrument control
For various astronomical instruments developed at the Max Planck Institute in Heidelberg there was a need for a highly flexible display and control tool. Many display tools (ximtool, DS9, skycat, ...) are available for astronomy, but all these applications are monolithic and cannot easily be extended with plugins for interaction with the graphical display, or with other functionality for remote access and control of the instrument and data pipeline. The tool was developed on top of Trolltech's cross-platform rich-client development framework Qt, the Internet Communications Engine (ICE) middleware from ZeroC, and NICE, the template-based SOA developer framework for astronomical instrumentation. The display tool is used at the Calar Alto Observatory (Spain) as a guider, for a wide-field imager and guider at the Wise Observatory (Israel), and for the LBT interferometer LINC-NIRVANA (USA).
A very sensitive all-sky CCD camera for continuous recording of the night sky
Alberto J. Castro-Tirado, Martin Jelínek, Stanislav Vítek, et al.
We present a novel design for an all-sky 4096×4096 pixel camera devoted to continuous observations of the sky. A prototype camera has been running at the BOOTES-1 astronomical station in Huelva (Spain) since December 2002, and a second one has been operating at the BOOTES-2 station in Málaga (Spain) since July 2004. Scientific applications are the search for optical emission simultaneous with gamma-ray bursts, the study of meteor showers, and the determination of possible areas for meteorite recovery from the reconstruction of fireball trajectories. This last application requires at least two such devices simultaneously recording the sky at a separation of the order of ~100 km. Fifteen GRB error boxes (13 for long/soft events and 2 for short/hard GRBs) have been imaged simultaneously with the gamma-ray emission, but no optical emission has been detected. Bright fireballs have also been recorded, allowing the determination of trajectories, as in the case of the fireball of 30 July 2005. This device is a very promising instrument for continuous recording of the night sky with moderate angular resolution and limiting magnitude (up to R ~ 10).
JMaCS: a Java monitoring and control system
JMaCS is a software package intended to facilitate, in soft real time, the local or remote interactive and programmatic monitoring and control of some distributed target, such as an astronomical telescope or telescope network. It is derived from experimental software written for a radar used for observing the Earth's ionosphere, and aims to bring to bear the remote polymorphism afforded by Java RMI (Remote Method Invocation). The core software does not provide all these facilities itself, but only a standard way to plug device interfaces into a third-party JMaCS implementation. It is presented here, together with the JMaCS implementation developed in parallel and a demonstration target, as a proof of concept.
Workstation software framework
The Workstation Software Framework (WSF) is a state machine model driven development toolkit designed to generate event driven applications based on ESO VLT software. State machine models are used to generate executables. The toolkit provides versatile code generation options and it supports Mealy, Moore and hierarchical state machines. Generated code is readable and maintainable since it combines well known design patterns such as the State and the Template patterns. WSF promotes a development process that is based on model reusability through the creation of a catalog of state machine patterns.
Architectural design of the control software for the SPHERE Planet Finder VLT instrument
A. Baruffolo, P. Bruno, S. Cétre, et al.
SPHERE (Spectro-Polarimetric High-contrast Exoplanet REsearch) is a second generation instrument for the VLT, currently under design, whose prime objective is the discovery and study of new extrasolar giant planets orbiting nearby stars by direct imaging of their circumstellar environment. It is a complex instrument, consisting of an extreme Adaptive Optics System (SAXO), various coronagraphs, an infrared differential imaging camera (IRDIS), an infrared integral field spectrograph (IFS) and a visible differential polarimeter (ZIMPOL). Its complexity is reflected in the large number of devices that have to be controlled and of the calibration procedures required for a full characterization of the instrument. In this paper we report on the current status of the design of the control software for the SPHERE instrument. We begin by describing the engineering process that we adopted for all phases of the project. We then discuss the architecture of the software and of the control hardware, and we give an outline of the calibration and observation procedures. Finally, we provide some details on the on-line data processing procedures required for quick-look and calibration, as well as a description of the format used for archiving of data from the scientific detectors and from the adaptive optics system.
Telescope Control Poster Session
Distributed modeling and control of a segmented mirror surface
The next generation of ground-based optical telescopes will employ increasingly large primary mirrors to achieve superior resolution and light-collecting ability. Many of these large mirror surfaces will be segmented into an array of hundreds of smaller mirror segments. The corresponding number of required sensors and actuators will be on the order of thousands, which creates a challenging control problem: to stabilize and align each segment against external disturbances - wind shake, gravity forces, thermal effects, seismic effects and induced vibrations from surrounding equipment and telescope motion - so that the telescope's image-quality requirements can be met. A centralized control scheme may be infeasible because of the large number of inputs and outputs of the resulting control system, while a decentralized control scheme would lack global performance. An attractive alternative is an interconnected network of distributed controllers that provides global control with a highly scalable design and implementation. A segmented mirror can be considered an interconnected system comprising many similar discrete subsystems, where each subsystem represents an individual mirror segment whose dynamics are coupled directly to those of its neighboring segments. The resulting network of controller subsystems is similarly coupled, working cooperatively to achieve the desired global performance.
The Large Binocular Telescope azimuth and elevation encoder system
David S. Ashby, Tom Sargent, Dan Cox, et al.
A typical high-resolution encoder interpolator relies on careful mechanical alignment of the encoder read-heads and tight electrical tolerances of the signal-processing electronics to ensure linearity. As the interpolation factor increases, maintaining these tight mechanical and electrical tolerances becomes impractical. The Large Binocular Telescope (LBT) is designed to use strip-type encoders on the main axes. Because of the very large scale of the telescope, the cumulative length of the azimuth and elevation encoder strips exceeds 80 meters, making optical tape prohibitively expensive. Consequently, the designers of the LBT incorporated the far less expensive Farrand Controls Inductosyn® linear strip encoder to encode the positions of the main axes and the instrument rotators. Since the cycle pitch of these encoders is very large compared to that of optical strip encoders, the interpolation factor must also be large in order to achieve the specified 0.005 arcsecond encoder resolution. The authors present a description of the innovative DSP-based hardware/software solution that adaptively characterizes and removes common systematic cycle-to-cycle encoder interpolation errors. These errors can be caused by mechanical misalignment, encoder manufacturing flaws, variations in electrical gain, signal offset, or cross-coupling of the encoder signals. Simulation data are presented to illustrate the performance of the interpolation algorithm, and telemetry data are presented to demonstrate the actual performance of the LBT main-axis encoder system.
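For orientation only: one textbook way to remove offset, gain and quadrature errors from a sine/cosine channel pair before interpolation is a Heydemann-style correction, sketched below in C. The parameter names are ours and this is not the LBT DSP implementation, in which the correction terms are characterized adaptively.

    /* Illustrative correction of offset, gain and quadrature errors on a
     * sin/cos encoder channel pair, followed by phase interpolation.
     * In a real system the parameters would be fitted adaptively from the
     * measured Lissajous figure of the two channels. */
    #include <math.h>

    typedef struct {
        double off_c, off_s;   /* channel offsets                        */
        double gain_ratio;     /* cosine/sine amplitude ratio            */
        double quad_err;       /* departure from 90-deg quadrature [rad] */
    } enc_corr_t;

    /* Return the interpolated phase within one encoder cycle, in radians. */
    double encoder_phase(double raw_cos, double raw_sin, const enc_corr_t *k)
    {
        double c = raw_cos - k->off_c;                     /* remove offsets  */
        double s = (raw_sin - k->off_s) * k->gain_ratio;   /* equalize gains  */
        s = (s - c * sin(k->quad_err)) / cos(k->quad_err); /* fix quadrature  */
        return atan2(s, c);                                /* cycle phase     */
    }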
The Gemini secondary mirror tip/tilt system: past, present, and future
Christopher J. Carter, Mathew J. Rippa, Roberto Rojas, et al.
The Gemini Observatory is currently in the early stages of a major upgrade of the Secondary Mirror Tip/tilt Systems (M2TS). Although these systems continue to deliver good fast-steering and chopping performance at both sites, there are persistent and occasionally time-consuming issues that need to be addressed in order for them to deliver their full potential and further reduce downtime. We present an overview of the system, outline its capabilities, and review the early commissioning process and some of the issues encountered. We describe the augmentation of the original system with data logging features which made possible some critical servo tuning work that was key in delivering improved performance. The hardware and software upgrade project to date is discussed, along with a brief overview of items it intends to address.
Research and realization of message bus architecture for LAMOST control system
The LAMOST (Large sky Area Multi-Object fibre Spectroscopic Telescope) has now reached the final stage of its R&D. The major functions of the telescope recently passed a series of site tests, and the various applications integrated into the automation of the telescope chamber are under vigorous testing as well. The TCS (Telescope Control System) is built on a multi-layer distributed network platform with many subsystems at different levels. How to efficiently process the enormous volume of messages, each with its particular implications, flowing in and out of the TCS is one of the major issues in the TCS software. This paper describes the mechanism and methodology of the LAMOST message bus structure. The realisation of the message bus architecture, the result of years of research and site testing, is presented in general, while the handling of message priorities and the manipulation of the smallest message units in parallel or in serial sequence are elaborated in particular.
Implementation of advanced modified PCF in large telescope control system
Xiaoying Shuai, Zhenchao Zhang, Yongtian Zhu
A large Telescope Control System (TCS) is a complicated system containing thousands of actuators. A wired TCS is inconvenient for pointing and tracking with a large telescope. This paper proposes a TCS based on an IEEE 802.11 Wireless Local Area Network (WLAN), which provides flexibility, reduced infrastructure costs, and greater convenience. The IEEE 802.11 standard MAC protocol includes the DCF and the PCF: the DCF is designed for asynchronous data transmission, while the PCF is designed for real-time data. The performance of a WLAN using the DCF falls as the number of wireless stations in a basic service set (BSS) increases. An advanced modified PCF (APCF) is presented to poll data from the AP to the stations and to collect response data from the stations to the AP during the contention-free period (CFP). The analysis indicates that APCF can improve communication performance and is very suitable for a large TCS.
Research and implementation of a large telescope control system based on wireless smart sensors
Xiaoying Shuai, Zhenchao Zhang, Changzhi Ren, et al.
A Telescope Control System (TCS) becomes more and more complex, especially for large telescopes with force actuators for deformable mirrors and position actuators for mirror alignment. It is very difficult to connect thousands of sensors, actuators and controllers with wired links. This paper presents a large telescope control system based on wireless smart sensors (WLTCS), connecting sensors and controllers over wireless links and employing TCP/IP as the communication protocol. Polling access overcomes contention and guarantees that every sensor can communicate with the controller in time; when some channels suffer interference, intelligent control methods and multi-hop wireless paths improve throughput and performance. Analysis and simulation indicate that WLTCS can greatly reduce the complexity of implementation and improve communication performance.
The study on servo-control system in the large aperture telescope
Wei Hu, Zhenchao Zhang, Daxing Wang
Servo tracking is one of the crucial technologies that must be mastered in the research and manufacture of large and extremely large astronomical telescopes. To address the control characteristics of such telescopes, this paper designs a servo tracking control system for large astronomical telescopes. The system is a master-slave distributed control system: the host computer sends steering instructions and receives the slave computer's operational status, while the slave computer executes the control algorithm and performs real-time control. The servo control uses a direct-drive motor and adopts DSP technology to implement a direct torque control algorithm. This design not only improves control-system performance but also greatly reduces the volume and cost of the control system, which is significant. The design scheme is shown to be reasonable by calculation and simulation, and the system can be applied to large astronomical telescopes.
VST telescope primary mirror active optics actuators firmware implementation
C. Molfese, P. Schipani, L. Marty
The VST (VLT Survey Telescope) is a 2.6 m class alt-az telescope in its installation phase at Cerro Paranal in northern Chile, at the European Southern Observatory (ESO) site. The VST is a wide-field imaging telescope dedicated to supplying databases for ESO Very Large Telescope (VLT) science and to carrying out stand-alone observations in the ultraviolet to infrared spectral range. The VST is provided with an active optics control system to actively compensate the optical aberrations; it is based on 84 actuators controlling the shape of the primary mirror and a hexapod for secondary mirror positioning. The present paper focuses on the implementation of the microcontroller firmware for the Primary Mirror Actuator Electronic Control Board. The most relevant problems encountered during the implementation of this real-time, multitasking, distributed control application are described; optimization problems due to a low-performance hardware platform without an operating system are also reported. Several of the topics described are applicable to other distributed control systems requiring closed-loop control and communication with a higher-level computer.
VST primary mirror active optics electronics
C. Molfese, P. Schipani, M. Capaccioli, et al.
The VLT Survey Telescope (VST), a telescope with a 2.6 m primary mirror designed and implemented by I.N.A.F. in cooperation with the European Southern Observatory (ESO), is provided with an active optics system to correct the optical aberrations due to polishing imperfections, misalignments, and thermal and gravitational effects. For the primary mirror, a distributed control system is required to impose the desired force values at a sufficient number of points to maintain the optimal shape at different positions of the altitude axis. The forces are applied by means of 84 electro-mechanical actuators, each provided with an electronic Primary Mirror Actuator Control Board (M1ACB). This paper focuses mainly on the hardware electronics and refers to the new version of the control system, designed in 2007, whose implementation is in progress. The new design takes into account all the experience gained with the previous version of the system, solving the functionality and reliability problems that were encountered.
A type of displacement actuator applied on LAMOST
This paper describes a type of displacement actuator used for active optics on the LAMOST telescope; it has already been used successfully on the small LAMOST system. Tests of the actuator with a dual-frequency laser interferometer yield its main parameters and show the influence of varying conditions such as pull or push force. They show how these conditions affect the actuator and how to use it for active optics. After their characteristics had been determined in testing, the actuators were applied on the telescope; the problems encountered, and their solutions, are described in the paper. In the end we could accurately command the actuators forward or backward by several tens of nanometers. With these technological problems solved in testing and on site, such actuators can be applied to LAMOST or to larger astronomical telescopes.
The micro-displacement worktable control system of mirror detection
Yu Ye, Zhenchao Zhang, Aihua Li
This paper mainly introduces the hardware design and control method of the system used for testing the MA segmented mirrors of LAMOST. To meet the demands of sub-aperture stitching interferometry, the system uses a control card to control stepping motors that drive the worktable in X and Y. The MA sub-mirror surface is adjusted through active-optics correction by adding or subtracting force at the force actuators. The measurement of MA segment 14 shows a root-mean-square (RMS) surface error of 21.387 nm, less than 0.035λ (λ = 632.8 nm). This demonstrates that the control system works well and shortens the measurement time.
Pre-research on arithmetic for facing control of segmented-mirror in LAMOST
The main mirror of LAMOST is a spherical mirror with a 4-meter effective aperture, assembled from 37 hexagonal segments whose orientations are adjusted by actuators to achieve a co-focal optical state. An algorithm for figure ("facing") control of the segmented mirror is investigated in this paper, based on the present conditions of LAMOST. The core of this control is to bring the main mirror to the desired figure and to preserve it during tracking. To achieve this figure, the established method in segmented-mirror technology is to form a closed loop between sensors and actuators. First, the relationship between the sensor measurements and the actuator movements was calculated and the figure-control equation set was established. Second, the characteristics of the coefficient matrix of this equation set were analyzed. Finally, several methods for solving the equation system were compared; the Damped Least-Squares (DLS) solution was selected as the most suitable, programmed, and applied in a subsystem experiment with good results. A petal effect was noticed in the experiment, and analysis shows that control of the whole main mirror would eliminate it.
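For reference, the damped least-squares step mentioned above has the familiar form below, where A is the matrix relating actuator motions to sensor readings in the figure-control equations A a = s, s is the vector of sensor readings, a the vector of actuator commands, and λ the damping factor (the notation is ours, not the paper's):

    % Damped least-squares (DLS) solution of the figure-control equations A a = s
    \hat{a} = \left( A^{\mathsf{T}} A + \lambda^{2} I \right)^{-1} A^{\mathsf{T}} s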
Servo control system for friction drive with ultra-low speed and high accuracy
Shihai Yang, Zhenchao Zhang
Due to its high accuracy and good performance at low speed, friction drive is widely used in turntables and large astronomical telescopes such as LAMOST and Keck. In particular, friction drives are implemented on the azimuth, altitude and field-rotation axes of the LAMOST telescope. This paper describes a study of a servo control system for friction drive with ultra-low speed and high accuracy. The principle, constitution, control algorithm and realization of the friction-drive servo system are analyzed and explored.
Gemini all-sky camera for laser guide star operation
As part of its Safe Aircraft Localization and Satellite Acquisition System (SALSA), Gemini is building an All-Sky Camera (ASCAM) system to detect aircraft in order to prevent propagation of the laser when it could be a safety hazard for pilots and passengers. ASCAM detections, including trajectory parameters, are made available to neighboring observatories so they may compute impact parameters for their own location. We present in this paper an overview of the system architecture, a description of the software solution and detection algorithm, and some performance and on-sky results.
Improved guiding accuracy through slit viewer of Subaru Telescope
Akihiro Iseki, Daigo Tomono, Akito Tajitsu, et al.
The Subaru telescope provides a feature for auto-guiding the telescope using the slit viewer (SV). The SV guide uses the target star itself as the guide star. There are advantages to guiding on the target star directly; however, the guiding accuracy was not good. The guide star is located on the slit, and some of its light is vignetted by the slit, but the SV guide simply took the centroid of the light to measure the position of the star, without taking the slit vignetting into account. In 2006, we improved the SV guiding to compute the center of gravity with the slit vignetting taken into account, assuming a Gaussian distribution of light outside the slit. With this improvement, the guiding accuracy of the telescope improved from 0.37 arcsec to less than 0.2 arcsec. The effect of the improvement was also confirmed with actual observations.
Oeil 1.0 visual control system for the pointing and tracking of ground-based telescopes
Julián Rodríguez Ferreira, Ángela Gélvez Espinel, Arturo Plata Gómez, et al.
An inexpensive solution to the problem of pointing, tracking and guiding a telescope is presented, using an artificial vision system. A stellar sensor called "R2D2" has been developed. The elements of the system are a CMOS sensor with its optics and "Oeil 1.0", a software package that we have developed in the object-oriented environment of Matlab 7.0. The software is built around a graphical user interface that allows easy interaction between the user and the system. The stellar sensor is mounted in parallel with the telescope, captures images of the night sky and sends them to the software, which applies preprocessing and noise-reduction routines, identifies the stars and their centroids, and calculates the equatorial coordinates of all the stars, of the center of the image, and of any other point in the image. Finally, tracking of the stellar object placed at the center of the field of view is performed thanks to the continuous feedback of images and the successful results obtained with the image-processing techniques.
An amateur telescope control system: toward a generic telescope control model
Rodrigo J. Tobar, Horst H. von Brand, Mauricio A. Araya, et al.
Control System for an Amateur Telescope (CSAT) is a distributed telescope control system model for amateur telescopes with transparent, interchangeable components, built using the ALMA Common Software (ACS) framework. The CSAT project is conceived as the first step towards a generic telescope control model, which will consist of a generic control framework for any telescope mount. With the ACS Container/Component model, completely different hardware can be supported by simply re-implementing the low-level components for the new setup. In this way, CSAT becomes a very good example of the features that ACS provides for building a generic telescope control framework.
A nonlinear disturbance-decoupled elevation axis controller for the Multiple Mirror Telescope
Dusty Clark, Tom Trebisky, Keith Powell
The Multiple Mirror Telescope (MMT), upgraded in 2000 to a monolithic 6.5m primary mirror from its original array of six 1.8m primary mirrors, was commissioned with axis controllers designed early in the upgrade process without regard to structural resonances or the possibility of the need for digital filtering of the control axis signal path. Post-commissioning performance issues led us to investigate replacement of the original control system with a more modern digital controller with full control over the system filters and gain paths. This work, from system identification through controller design iteration by simulation, and pre-deployment hardware-in-the-loop testing, was performed using latest-generation tools with Matlab® and Simulink®. Using Simulink's Real Time Workshop toolbox to automatically generate C source code for the controller from the Simulink diagram and a custom target build script, we were able to deploy the new controller into our existing software infrastructure running Wind River's VxWorks™real-time operating system. This paper describes the process of the controller design, including system identification data collection, with discussion of implementation of non-linear control modes and disturbance decoupling, which became necessary to obtain acceptable wind buffeting rejection.
Observatory Control Poster Session
OAdM robotic observatory: solutions for an unattended small-class observatory
J. Colomé, I. Ribas, D. Fernández, et al.
The Montsec Astronomical Observatory (OAdM) is a small-class observatory working toward completely unattended control, owing to the isolation of the site. Robotic operation is therefore mandatory for its routine use. The level of robotization of an observatory is given by the confidence with which it responds to changes in the environment and by the human interaction required to handle possible alarms. These two points establish the level of human attendance needed to ensure low risk at any time. There are key problems to solve when robotic control is envisaged. The lessons learned and the solutions adopted at the OAdM are discussed here. We present a description of the general control software (SW) and of several SW packages developed. The general control SW protects the system at the identified single points of failure and performs distributed control of every subsystem, each of which is able to respond independently when an alarm is triggered, thanks to a top-down control flow. The specific SW packages developed are: environment-monitoring SW, a set of alarm routines, a pipeline for calibration and analysis of the images taken, and an observation scheduler. Together they compose a SW suite designed to achieve the complete robotization of an observatory.
Systems and control software for the Atacama Cosmology Telescope
E. R. Switzer, C. Allen, M. Amiri, et al.
The Atacama Cosmology Telescope (ACT) is designed to measure temperature anisotropies of the cosmic microwave background (CMB) at arcminute resolution. It is the first CMB experiment to employ a 32×32 close-packed array of free-space-coupled transition-edge superconducting bolometers. We describe the organization of the telescope systems and software for autonomous, scheduled operations. When paired with real-time data streaming and display, we are able to operate the telescope at the remote site in the Chilean Altiplano via the Internet from North America. The telescope had a data rate of 70 GB/day in the 2007 season, and the 2008 upgrade to three arrays will bring this to 210 GB/day.
Achieving design reuse: a case study
Peter J. Young, Jon J. Nielsen, William H. Roberts, et al.
The RSAA CICADA data acquisition and control software package uses an object-oriented approach to model astronomical instrumentation and a layered architecture for implementation. Emphasis has been placed on building reusable C++ class libraries and on the use of attribute/value tables for dynamic configuration. This paper details how the approach has been successfully used in the construction of the instrument control software for the Gemini NIFS and GSAOI instruments. The software is again being used for the new RSAA SkyMapper and WiFeS instruments.
Gathering headers in a distributed environment
The Advanced Technology Solar Telescope (ATST) has implemented a novel method for gathering header information on data products. At the time of data collection, the specific state of the telescope and instrumentation needs to be collected and associated with the saved data. The ATST performs this task by issuing a header-request event across the ATST event system. All observatory software components that are registered for the event and are participating in the current experiment or observation report status information to a central header repository. Various types of header-request events may be selected for the start or stop of individual frames, groups of frames, or entire observations. The final data products are created by combining the data files with all or some of the stored header information in the database. The resulting data file may be generated in any desired format, including FITS. Much of the implementation of this approach is integrated into the ATST technical framework, simplifying the development process for component writers and ensuring consistent responses to header-request events.
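A toy sketch of the idea, with every identifier invented for illustration (the real ATST framework APIs are different), might register a handler that answers a header-request event with FITS-like keyword cards:

    /* Toy illustration of a component answering a header-request event with
     * FITS-like keyword cards.  All names are invented; they are not the
     * ATST framework APIs. */
    #include <stdio.h>

    typedef void (*event_handler_t)(const char *event_name, void *user_data);

    /* stand-ins for the framework's subscription and repository calls */
    static event_handler_t g_handler;
    static void *g_handler_data;
    static void event_subscribe(const char *name, event_handler_t h, void *d)
    { (void)name; g_handler = h; g_handler_data = d; }
    static void header_repository_post(const char *component, const char *card)
    { printf("%s: %s\n", component, card); }

    /* the component's response to a header-request event */
    static void on_header_request(const char *event_name, void *user_data)
    {
        char card[81];
        double focus_mm = *(const double *)user_data;   /* component state to report */
        (void)event_name;
        snprintf(card, sizeof card, "FOCUSPOS= %20.6f / focus position [mm]", focus_mm);
        header_repository_post("focus.stage", card);
    }

    int main(void)
    {
        static double focus_mm = 12.345;
        event_subscribe("observation.headerRequest", on_header_request, &focus_mm);
        g_handler("observation.headerRequest", g_handler_data);  /* simulate the event */
        return 0;
    }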
Observation process of LAMOST using observatory control system: testing for the command model and interface agent
Shi Wei Sun, A-Li Luo
Test observations of LAMOST controlled by the Observatory Control System (OCS) software have been carried out. In this paper, the process of the observation and some tests are presented, and the command model and interface agent of the OCS are introduced. The driving of the model and the communication between the OCS and the different subsystems are also analyzed. Tests of the observation flow in single steps have been achieved.
Software design for the control system of small LAMOST
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (hereafter LAMOST) will, on its completion in 2008, be the 4-m-class telescope with the largest field of view and the most efficient observation in the world. In June 2007, the Small System for LAMOST (Small LAMOST) was completed successfully. Small LAMOST is composed of a mirror with a 3-meter aperture (2-meter effective aperture), 250 fibers, one spectrograph, two 4k×4k CCD cameras, and the tracking and control system. This paper presents the study carried out on Small LAMOST. It comprises three main parts. First, it introduces the software design for the control system of Small LAMOST, including mount tracking, focal-plane tracking, GPS time ticking and time synchronization between computers, auto-guiding, etc. The design has proved correct and feasible. Second, it describes some technical solutions to the requirements of precision, real-time performance and open architecture for Small LAMOST. Lastly, some experimental data and curves are given to show the tracking precision of Small LAMOST.
Optimizing real-time web-based user interfaces for observatories
In using common HTML/Ajax approaches for web-based data presentation and telescope control user interfaces at the MMT Observatory (MMTO), we rapidly were confronted with web browser performance issues. Much of the operational data at the MMTO is highly dynamic and is constantly changing during normal operations. Status of telescope subsystems must be displayed with minimal latency to telescope operators and other users. A major motivation of migrating toward web-based applications at the MMTO is to provide easy access to current and past observatory subsystem data for a wide variety of users on their favorite operating system through a familiar interface, their web browser. Performance issues, especially for user interfaces that control telescope subsystems, led to investigations of more efficient use of HTML/Ajax and web server technologies as well as other web-based technologies, such as Java and Flash/Flex. The results presented here focus on techniques for optimizing HTML/Ajax web applications with near real-time data display. This study indicates that direct modification of the contents or "nodeValue" attribute of text nodes is the most efficient method of updating data values displayed on a web page. Other optimization techniques are discussed for web-based applications that display highly dynamic data.
Implementation of the software systems for the SkyMapper automated survey telescope
A. Vaccarella, T. Preston, A. Czezowski, et al.
This paper describes the software systems implemented for the wide-field, automated survey telescope, SkyMapper. The telescope is expected to operate completely unmanned and in an environment where failures will remain unattended for several days. Failure analysis was undertaken and the control system extended to cope with subsystem failures, protecting vulnerable detectors and electronics from damage. The data acquisition and control software acquires and stores 512 MB of image data every twenty seconds. As a consequence of the short duty cycle, the preparation of the hardware subsystems for the successive images is undertaken in parallel with the imager readout. A science data pipeline will catalogue objects in the images to produce the Southern Sky Survey.
The RTS2 protocol
Petr Kubánek, Martin Jelínek, John French, et al.
Remote Telescope System, 2nd version (RTS2) is an open-source project aimed at developing a software environment to control a fully robotic observatory. RTS2 consists of various components which communicate via an ASCII-based protocol. As the protocol was designed from the beginning for an observatory control system, it provides some unique features which are hard to find in other communication systems. These features include advanced synchronisation mechanisms and strategies for setting variables. This presentation describes the protocol and its unique features. It also assesses protocol performance and provides examples of how the RTS2 library can be used to quickly build an observatory control system.
The control and data concept for the robotic solar telescope ChroTel
C. Halbgewachs, Ch. Bethge, P. Caligari, et al.
The solar telescope ChroTel is designed as a robotic telescope so that no user interaction is necessary for observation. The telescope will start tracking in the morning as soon as weather conditions are appropriate and will process a user defined observation routine until sunset. Weather conditions and system status are continuously monitored to close the telescope shutter in case of bad weather or to drive to the stow position in case of an error. The ChroTel control software was programmed in LabVIEW.
PLATO control and robotics
Daniel M. Luong-Van, Michael C. B. Ashley, Jon R. Everett, et al.
PLATO, the 'PLATeau Observatory', is a robotic Antarctic observatory developed by UNSW for deployment to Dome A, the highest point on the Antarctic plateau. PLATO is designed to run autonomously for up to a year, providing power, communications and thermal management for a suite of scientific and site-testing instruments. To achieve this degree of autonomy, multiple-redundant Linux-based 'supervisor' computers, each with their own watchdog-timer and Iridium satellite-modem, communicate with each other and with the outside world. The active supervisor computer monitors and controls the PLATO power distribution, thermal and engine management subsystems via a CAN (Control Area Network) bus. High-bandwidth communication between the instruments and the supervisor computers is via a 100 Mbps Local Area Network. Data is stored in cold-verified flash memory. The PLATO computers monitor up to 140 analog channels and distribute electrical power and heating to 96 current-monitored channels via an intelligent load-shedding algorithm.
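Purely as a toy illustration of priority-ordered load shedding (this is not the PLATO algorithm, and all names, structures and numbers below are assumptions), channels could be switched off from the least important upward until the measured total current fits the available budget:

    /* Toy priority-ordered load-shedding loop: switch off the lowest-priority
     * loads until the total current fits the available budget.  This is an
     * illustration only, not the PLATO algorithm. */
    #include <stdbool.h>

    #define N_CHANNELS 96

    typedef struct {
        double current_amps;   /* measured channel current  */
        int    priority;       /* 0 = most important        */
        bool   enabled;
    } channel_t;

    void shed_load(channel_t ch[N_CHANNELS], double budget_amps)
    {
        for (;;) {
            double total = 0.0;
            int victim = -1, worst = -1;
            for (int i = 0; i < N_CHANNELS; i++) {
                if (!ch[i].enabled) continue;
                total += ch[i].current_amps;
                if (ch[i].priority > worst) {      /* largest number = least important */
                    worst = ch[i].priority;
                    victim = i;
                }
            }
            if (total <= budget_amps || victim < 0)
                break;                             /* within budget or nothing left */
            ch[victim].enabled = false;            /* shed the least important load */
        }
    }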
Controlling the Hamburg Robotic Telescope: a description of the software
Jose N. González-Pérez, Alexander Hempelmann, Marco Mittag, et al.
The Hamburg Robotic Telescope (HRT) is a fully automatic 1.2m telescope designed for high resolution spectroscopy of active stars. It uses the Heidelberg Extended Range Optical Spectrograph (HEROS) which is fed by a 50μm fiber connected to the Nasmyth focus of the telescope through an adapter. Here we present the software that controls the whole HRT system. This software works both in fully automatic and in interactive mode. It organizes the interaction between the Central Control System (CCS: the core of the system) and the subsystems: building, telescope, spectrograph, adapter, environmental sensors (weather station and sky monitor) and scheduler. The CCS performs its operation by sending commands (ASCII messages through TCP/IP sockets) to the different subsystems. The robotic operation is divided into discrete procedures, such as "Initialization", "Observation" or "Calibration". Each procedure consists of a set of commands which will be carried out (sequentially or even in parallel) if a set of conditions is met: e.g. only when one command is successfully accomplished will the next be sent. Furthermore, the Error Handler takes the necessary actions when a problem inhibits the normal progress of the observation (e.g. bad weather, non-detection of the target or technical problems). The scheduler selects the target from a primary list in a manner which combines the scientific priority with observational feasibility and the history of observations. Finally, we present the Automatic Reduction Pipeline developed on the basis of REDUCE, an IDL reduction package, to obtain the final spectrum from the raw data.
Software Engineering Poster Session
Virtualization as an alternative for astronomical software integration
Luis A. Martínez, Abel Bernal, Fernando Garfias
The integration of software which requires different operating system platforms to run, is a common challenge that has to be overcome by astronomical software developers. In recent years, the possibility to execute different operating systems (OS) and programs at the same time, on a single computer by means of virtual machines, known as virtualization, has emerged as a novel tool to integrate software from different platforms. In this paper, we share our virtualization experiences and how virtualization has improved the software integration of two astronomical software projects developed at the Instituto de Astronomía, Universidad Nacional Autónoma de Mexico (IAUNAM).
Software regression testing: practical experience at the ALMA test facility
B. Lopez, R. Araya, N. Barriga, et al.
The Atacama Large Millimeter Array (ALMA) is a joint project between astronomical organizations in Europe, North America, and Japan. ALMA will consist of at least 50 twelve-meter antennas operating in the millimeter and submillimeter wavelength range. It will be located at an altitude above 5000 m in the Chilean Atacama desert. The ALMA Test Facility (ATF), located in New Mexico, USA, is a proving ground for the development and testing of hardware, software, commissioning, and operational procedures. At the ATF, emphasis has shifted from hardware testing to software and operational functionality. Supporting the varied goals of the ATF requires stable control software and, at the same time, flexibility for integrating newly developed features. For this purpose, regression testing has been introduced in the form of a semi-automated procedure. This supplements the established offline testing and focuses on operational functionality as well as verifying that previously fixed faults do not re-emerge. The regression tests are carried out on a weekly basis as a compromise between the developers' response time and the available technical time. The frequent feedback allows the validation of submitted fixes and the prompt detection of side effects and reappearing issues. Results from nine months of testing are presented; they show the evolution of test outcomes, supporting the conclusion that regression testing helped to improve the speed of convergence towards stable releases at the ATF. The tests also provided an opportunity to validate newly developed or re-factored software at an early stage at the test facility, supporting its eventual integration. Hopefully this regression test procedure will be adapted to commissioning operations in Chile.
A case of reliable remote functionality
All data are provided by the Huairou Solar Observing Station (HSOS), Beijing, China, a main observing station designed to support the prediction of solar activity and the space environment in China. With an increasingly complicated Internet environment, many security issues arise, such as viruses, attacks on potential security weaknesses in network protocols, and improper use of operating systems within a network; we therefore have to design a more secure and reliable network architecture in order to implement the remote functionality demanded by the data users: automatic data transmission, and remote observation, control and maintenance of the various systems. In this article, the author presents a real case and discusses various aspects of its implementation, including a new network topology changed from the original architecture to a more secure one, the selection of security products, the deployment of security strategies, the problems encountered during operation, and the solutions in use at HSOS. The article also gives some insights from the author's experience with network security. The case has been implemented across the entire observing station, which includes multiple observing systems, at HSOS.
Data Handling Poster Session
Storage options for petabytes of data
Lisa Paton, Jonathan Cirtain, Paul Grant
New instrumentation that produces extremely large quantities of data presents a challenge in data processing and management. The Solar Dynamics Observatory (SDO) will house instruments that will produce 1.4TB of data per day. Processing and storing that quantity of data is a serious challenge. The instrument team for the Atmospheric Imaging Assembly (AIA) that will fly on SDO spent the last 2 weeks in September doing a large-scale side-by-side comparison of archive equipment from Apple, BlueArc, EMC, Network Appliance, SGI and Sun Microsystems. Each vendor provided 100TB of SATA disk space and the required servers to showcase their unique solutions to the problem of petabyte sized archives. The results of the testing demonstrate some of the options available in this arena. We will discuss the results of the testing, the differences and similarities between the vendors and the applicability of the technologies to various environments.
SPHERE baseline software for reducing calibration data
The Spectro-Polarimetric High-contrast Exoplanet Research (SPHERE) instrument for the VLT is designed for discovering and studying new extra-solar giant planets orbiting nearby stars by direct imaging. In this paper, we describe the philosophy behind the SPHERE baseline data processing sequences dealing with calibration observations, and how these can affect the reduction of subsequent calibrations and scientific data. Additionally, we present the result of our detector simulations and the first tests of data reduction recipe prototypes.
Automated HST/STIS reference file generation pipeline using OPUS
Rosa I. Diaz, Michel Swam, Paul Goudfrooij
Bias and dark reference files are part of the basic reduction of the CCD data taken by the Space Telescope Imaging Spectrograph (STIS) aboard the Hubble Space Telescope (HST). At STScI, the STIS team has been creating these reference files using the Bias and Dark Pipeline. This pipeline system starts with automatic retrieval of bias and dark exposures from the HST archive after they have been ingested. After data retrieval, a number of automatic scripts are executed in a manner compatible with the OPUS pipeline architecture. We encourage any group looking to streamline a stepwise calibration process to look into this software.
Bayesian approach to estimation of the map of dark current in wavelet domain
This paper deals with advanced methods for the elimination of thermally generated charge in astronomical images acquired with a Charge-Coupled Device (CCD) sensor. There exist a number of light images acquired by telescopes that were not corrected with a dark frame; the reason is simple: the dark frame does not exist because it was not acquired. This situation may arise, for instance, when sufficient memory space is not available. We discuss a correction method based on modeling the light and dark images in the wavelet domain. The generalized Laplacian was chosen as the model for both the dark-frame image and the light image. The model parameters were estimated using the method of moments, and an extensive set of measurements on an astronomical camera was proposed and carried out. These measurements simplify the estimation of the dark-frame model parameters. Finally, a set of astronomical test images was corrected and objective image-quality criteria based on aperture photometry were applied.
The lossy compression technique based on KLT
Higher precision can be achieved with more recent observation techniques and superior detection sensors, which has brought a very rapid increase in data volume. Images of high spatial resolution (up to ten million pixels) and high grey-scale bit depth (quantization depths of up to 16 bits) are used in astronomy and other scientific applications, and a very large volume of image data is acquired during the operation of modern automatic (i.e. robotic) sky-observation systems. A coder based on the Karhunen-Loeve transform (KLT) was chosen for astronomical image compression in this paper. Astrometric and photometric measurements have confirmed that the coder blocks can be arranged so as to produce an acceptable error and an efficient data stream.
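The basic idea can be sketched as follows (block size, number of retained components and the test image are assumptions; the paper's coder, quantization and bit-stream formatting are not reproduced): learn an orthogonal basis from the image blocks themselves and keep only the strongest components.

import numpy as np

def klt_compress(image, block=8, k=12):
    """Block-based KLT: project 8x8 blocks onto their k principal components."""
    h, w = (image.shape[0] // block) * block, (image.shape[1] // block) * block
    blocks = (image[:h, :w]
              .reshape(h // block, block, w // block, block)
              .transpose(0, 2, 1, 3)
              .reshape(-1, block * block))
    mean = blocks.mean(axis=0)
    centered = blocks - mean
    cov = centered.T @ centered / len(centered)       # block covariance matrix
    _, eigvec = np.linalg.eigh(cov)                   # ascending eigenvalues
    basis = eigvec[:, ::-1][:, :k]                    # k strongest components
    coeffs = centered @ basis                         # compressed representation
    recon = ((coeffs @ basis.T + mean)                # lossy reconstruction
             .reshape(h // block, w // block, block, block)
             .transpose(0, 2, 1, 3)
             .reshape(h, w))
    return coeffs, recon

rng = np.random.default_rng(1)
img = rng.normal(1000.0, 30.0, size=(512, 512))       # placeholder image
coeffs, recon = klt_compress(img)
print("r.m.s. reconstruction error:", np.sqrt(np.mean((img - recon) ** 2)))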
An adaptive algorithm based on RBF for extracting the flux of fiber spectrum
An adaptive algorithm is presented for extracting the flux of fiber spectra from the two-dimensional images observed by LAMOST (Large Sky Area Multi-Object Fiber Spectroscopic Telescope). The new algorithm is based on a radial basis function (RBF) neural network, employing Gaussian basis functions to approximate the profile of the spectrum in the spatial direction. In this study, an experiment is performed with simulated data. The experimental results show that the new algorithm greatly increases the computing speed while preserving the accuracy of the flux extraction, offering a feasible approach for extracting the flux of fiber spectra for LAMOST.
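A minimal sketch of the underlying idea (with assumed fibre positions and profile width, not the LAMOST implementation): model each cross-dispersion cut as a linear combination of Gaussian basis functions centred on the known fibre positions and solve for the per-fibre fluxes by linear least squares.

import numpy as np

def extract_column(column, fiber_centers, sigma=1.5):
    """Return one flux per fibre for a 1-D cross-dispersion cut."""
    y = np.arange(column.size)[:, None]                       # pixel coordinates
    basis = np.exp(-0.5 * ((y - fiber_centers[None, :]) / sigma) ** 2)
    fluxes, *_ = np.linalg.lstsq(basis, column, rcond=None)
    return fluxes

# Toy example: three overlapping fibre profiles on a 40-pixel cut.
centers = np.array([10.0, 20.0, 30.0])
true_flux = np.array([500.0, 800.0, 300.0])
y = np.arange(40)
col = sum(f * np.exp(-0.5 * ((y - c) / 1.5) ** 2) for f, c in zip(true_flux, centers))
col = col + np.random.default_rng(2).normal(0.0, 2.0, size=col.shape)
print(extract_column(col, centers))    # recovers approximately [500, 800, 300]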
Automated stellar spectral analysis software for survey spectra
A-Li Luo, Yue Wu, Jingkun Zhao, et al.
A spectral analysis pipeline for LAMOST (Large sky Area Multi-Object fiber Spectroscopic Telescope), which produces archived spectral-type data, is introduced. By studying observed and theoretical stellar spectra, the spectral features accessible at medium resolution are discussed, and the lines and bands most sensitive to the stellar atmospheric parameters, viz. effective temperature (Teff), surface gravity (log g) and metallicity ([Fe/H]), are selected. Based on this analysis, the selected features are fed into different objective algorithms to extract the parameters. The application of three algorithms to SDSS/SEGUE spectra, namely a radial basis function neural network (RBFN), a back-propagation neural network (BPN) and non-parametric regression (NPR), shows intrinsic statistical consistency. Based on this research, a stellar atmospheric parameter pipeline for LAMOST has been designed.
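As an illustration of one of the three techniques (with placeholder centres, widths and data, not the LAMOST training set), a basic RBF network regression from line indices to (Teff, log g, [Fe/H]) can be written as:

import numpy as np

class RBFNetwork:
    """Gaussian RBF layer followed by a linear least-squares output layer."""
    def __init__(self, n_centers=50, width=2.0):
        self.n_centers, self.width = n_centers, width

    def _design(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        idx = np.random.default_rng(0).choice(len(X), self.n_centers, replace=False)
        self.centers = X[idx]                    # centres: random training subset
        Phi = self._design(X)
        self.weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, X):
        return self._design(X) @ self.weights

# Placeholder data: 10 line indices mapped to 3 atmospheric parameters.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(2000, 10)), rng.normal(size=(2000, 3))
model = RBFNetwork().fit(X[:1500], y[:1500])
predicted_params = model.predict(X[1500:])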
EVALSO: enabling virtual access to Latin American southern observatories
In the field of observational astrophysics, the remoteness of the facilities and the ever increasing data volumes produced by modern detectors pose new technological challenges. As an example, the VISTA and VST wide-field telescopes, which are being constructed at ESO's Cerro Paranal Observatory and will be ready in the next few years, carry cameras that after just one year of operation will produce a volume of data exceeding all the data collected by the VLT since the start of operations in 1999. This imposes serious limitations if such large quantities of data must be transferred to, and accessed within a short time by, the participating European institutions. The EVALSO project, approved by the European Community, addresses these challenges in two major ways. It will create a physical infrastructure to connect these facilities efficiently to Europe, complementing the international infrastructure already created in recent years with EC support (RedCLARA, ALICE, GEANT). Besides this, it will provide astronomers with Virtual Presence (VP), i.e. the tools to perform and control an astronomical observation from the user's site. The main role of INAF - Astronomical Observatory of Trieste (OAT) within the project will be the definition of the architecture, the development of the VP system and the integration of a prototype to be used as a demonstrator. This paper will focus on the description of the Virtual Presence system.
An automatic system for photometric redshift estimation based on sky survey data
With large-scale multicolor photometric and fiber-based spectroscopic projects now in operation, millions of uniform samples are available to astronomers. In this context we have developed an automatic system to estimate photometric redshifts for both galaxies and quasars, and in this paper we give a thorough introduction to it. We first describe the series of methods integrated in the system, such as template fitting, the color-magnitude-redshift relation, polynomial regression, support vector machines and kernel regression, and point out the merits and drawbacks of each approach, so that users can choose a suitable algorithm according to the data characteristics and science requirements. We then present a case study to illustrate how the system works. To build a robust system that increases both the accuracy and the speed of photometric redshift estimation, we pay special attention to algorithm choice and data preparation, and an easy-to-use interface is provided. Finally, we point out promising techniques for measuring photometric redshifts and the application prospects of the system. In the future, the system should become an essential tool for automatically determining photometric redshifts in studies of the large-scale structure of the Universe and of the formation and evolution of galaxies.
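One of the listed methods, kernel regression, reduces to a Nadaraya-Watson estimator over colors; a short sketch follows (the color set, bandwidth and training data are placeholders, not the survey samples used by the system).

import numpy as np

def kernel_regression(X_train, z_train, X_query, bandwidth=0.2):
    """Gaussian-kernel weighted average of training redshifts."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return (w @ z_train) / w.sum(axis=1)

# Placeholder example: 4 colors (u-g, g-r, r-i, i-z) against spectroscopic z.
rng = np.random.default_rng(3)
colors = rng.normal(size=(2000, 4))
spec_z = rng.uniform(0.0, 0.6, size=2000)
photo_z = kernel_regression(colors[:1500], spec_z[:1500], colors[1500:])
rms = np.sqrt(np.mean((photo_z - spec_z[1500:]) ** 2))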
Knowledge discovery in astronomical data
Yanxia Zhang, Hongwen Zheng, Yongheng Zhao
With the construction and development of ground-based and space-based observatories, astronomical data volumes are reaching the terascale and even the petascale. How to extract knowledge from such huge data volumes with automated methods is a major challenge for astronomers. In response, many researchers have studied various approaches and developed different software packages to address this issue. For a given data mining task, an appropriate technique must be selected to suit the characteristics of the data, and all algorithms have their own pros and cons. We introduce the characteristics of astronomical data, present a taxonomy of knowledge discovery, and describe its functionalities in detail. The methods of knowledge discovery are then touched upon. Finally, successful applications of data mining techniques in astronomy are summarized and reviewed. Facing the data avalanche in astronomy, knowledge discovery in databases (KDD) demonstrates its strengths.
SPHERE data reduction and handling system: overview, project status, and development
The SPHERE project is an ESO second-generation instrument which aims to detect giant extra-solar planets in the vicinity of bright stars and to characterise the objects found through spectroscopic and polarimetric observations. Its technical tolerances are the tightest ever for an instrument installed at the VLT, and SPHERE demands a rather unique data reduction and handling (DRH) software package to accompany the data from observation preparation to the search for planetary signals. This paper addresses the current status of the data reduction and handling system (DRHS) for the SPHERE instruments. It includes descriptions of the calibration and science data, the reduction steps and their data products. The development strategy for creating a coherent software system that achieves high observation efficiency is briefly discussed.
Predicting photometric redshifts with polynomial regression
The Sloan Digital Sky Survey (SDSS) is an ambitious photometric and spectroscopic project that provides huge and abundant samples for photometric redshift estimation. We employ polynomial regression to estimate photometric redshifts using 330,000 galaxies with known spectroscopic redshifts from the SDSS Data Release 4 spectroscopic catalog, and compare three polynomial regression methods, i.e. linear, quadratic and cubic regression, on different samples. The technique converges in a finite number of steps, achieves a good fit with few coefficients, and yields the result as an explicit mathematical expression, which makes it much easier for astronomers to use and understand than other empirical methods. Our results indicate that it provides comparable or better accuracy; the best r.m.s. dispersion obtained with this approach is 0.0256. In addition, a comparison of our results with other work is presented.
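A minimal sketch of quadratic polynomial regression of redshift on magnitudes follows (the feature set and sample below are placeholders; the paper's exact inputs and the quoted dispersion are not reproduced):

import numpy as np
from itertools import combinations_with_replacement

def quadratic_design(X):
    """Design matrix with constant, linear and quadratic/cross terms."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(4)
mags = rng.normal(20.0, 1.0, size=(3000, 5))       # placeholder u, g, r, i, z
spec_z = rng.uniform(0.0, 0.6, size=3000)          # placeholder redshifts

coeffs, *_ = np.linalg.lstsq(quadratic_design(mags[:2500]), spec_z[:2500], rcond=None)
photo_z = quadratic_design(mags[2500:]) @ coeffs   # explicit polynomial expression
rms = np.sqrt(np.mean((photo_z - spec_z[2500:]) ** 2))
print(f"r.m.s. dispersion on the hold-out set: {rms:.4f}")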
The oversampling mode for CoRoT exo-field observations
C. Surace, R. Alonso, P. Barge, et al.
CoRoT (Convection, Rotation and planetary Transits) is a satellite mission led by CNES and was successfully launched on 27 December 2006. One of its goals is to discover new exo-planets using the transit method. It observes stars and samples their light every 512 seconds, so that each observing run yields 12,000 light curves over a six-month period. For each run, 1,000 of these light curves can be over-sampled at a 32-second cadence, allowing transit detection. In order to select the targets to be over-sampled, the ground-segment team at LAM set up an infrastructure to retrieve and analyse preliminary N1 data within a week. The selected targets are ordered in a list that is transmitted to the "Centre de Mission Corot" (CMC). We present the infrastructure of the over-sampling mode, the over-sampling software used for detection in raw light curves, and the mechanisms of list ordering and selection. The paper also describes the feedback gathered over the past one and a half years of operation.