- Project Status
- Systems and Servos
- Instrument/Guider Software
- Real-Time Systems
- Frameworks
- Software Engineering
- Remote Observing Reports
- Remote Observing Infrastructure
- Poster Session a: Project Status
- Poster Session b: Real-Time Systems
- Poster Session c: Instruments
- Poster Session d: Telescope Systems
- Poster Session e: Frameworks
- Poster Session f: Software Engineering
- Poster Session g: Modeling, Simulation, and Control
- Poster Session h: Remote Observing
- Poster Session i: Data Management
Project Status
ALMA communications requirements and design
The Atacama Large Millimeter Array (ALMA) is a joint project involving astronomical organizations in Europe and North America. ALMA will consist of at least 64 12-meter antennas operating from millimeter to sub-millimeter wavelengths. ALMA will be located at an altitude of about 5000m in the Chilean Atacama desert. The main challenge for the development of the ALMA software, which will support the whole end-to-end operation, is the fact that the computing group is extremely distributed. Groups at different institutes have started the design of all subsystems based on the ALMA Common Software framework (ACS) that provides the necessary standardization.
The operation of ALMA by a community of astronomers distributed over various continents will need an adequate network infrastructure. The operation centers in Chile are split between an ALMA high altitude site, a lower altitude control centre, and a support centre in Santiago. These centers will be complemented by ALMA Regional Centers (ARCs) in Europe, North America, and Japan.
All this will require computing and communications equipment at more than 5000m in a radio-quiet area. This equipment must be connected to high bandwidth and reliable links providing access to the ARCs. The design of a global computing and communication infrastructure is on-going and aims at providing an integrated system addressing both the operational computing needs and normal IT support. The particular requirements and solutions foreseen for ALMA in terms of computing and communication systems will be explained.
ALMA test interferometer control system: past experiences and future developments
The Atacama Large Millimeter Array (ALMA) will, when it is completed in 2012, be the world's largest millimeter and sub-millimeter radio telescope. It will consist of 64 antennas, each one 12 meters in diameter, connected as an interferometer.
The ALMA Test Interferometer Control System (TICS) was developed as a prototype for the ALMA control system. Its initial task was to provide sufficient functionality for the evaluation of the prototype antennas. The main antenna evaluation tasks include surface measurements via holography and pointing accuracy, measured at both optical and millimeter wavelengths.
In this paper we will present the design of TICS, which is a distributed computing environment. In the test facility there are four computers: three real-time computers running VxWorks (one on each antenna and a central one) and a master computer running Linux. These computers communicate via Ethernet, and each of the real-time computers is connected to the hardware devices via an extension of the CAN bus.
We will also discuss our experience with this system and outline changes we are making in light of our experiences.
A status update of the VLTI control system
In the last two years the Very Large Telescope Interferometer (VLTI) has, on the one hand, grown with the addition of new subsystems and, on the other, matured with experience from commissioning and operation. Two adaptive optics systems for the 8-m unit telescopes have been fully integrated in the VLTI infrastructure. The first scientific instrument, MIDI, has been commissioned and is now being offered to the community. A second scientific instrument, AMBER, is currently being commissioned. The performance of the interferometer is being enhanced by the installation of a dedicated fringe sensor, FINITO, and a tip-tilt sensor in the interferometric laboratory, IRIS, together with the associated control loops. Four relocatable 1.8-m auxiliary telescopes and three additional delay lines are being added to the infrastructure. At the same time, the design and development of the dual-feed PRIMA facility, which will have a major impact on the existing control system, is in full swing. In this paper we review the current status of the VLTI control system and assess the impact on complexity and reliability caused by this explosion in size. We describe the methods and technologies applied to maximize performance and reliability, in order to keep VLTI and its control system a competitive, reliable and productive facility.
SOAR TCS: from implementation to operation
The 4.1 meter Southern Astrophysical Research (SOAR) Telescope is now entering the operations phase, after a period of construction and system commissioning. The SOAR TCS, implemented in the LabVIEW software package, has kept pace throughout development with the installation of the other telescope subsystems, and has proven to be a key component for the successful deployment of SOAR. In this third article of the SOAR TCS series, we present the results achieved when operating the SOAR telescope under control of the SOAR TCS software. A review is made of the design considerations and the implementation details, followed by a presentation of the software extensions that allow a seamless integration of instruments into the system, as well as the programming techniques that permit the execution of remote observing procedures.
Systems and Servos
Development of a state machine sequencer for the Keck Interferometer: evolution, development, and lessons learned using a CASE tool approach
This paper presents a discussion of the evolution of a sequencer from a simple Experimental Physics and Industrial Control System (EPICS) based sequencer into a complex implementation designed utilizing UML (Unified Modeling Language) methodologies and a Computer Aided Software Engineering (CASE) tool approach. The main purpose of the Interferometer Sequencer (called the IF Sequencer) is to provide overall control of the Keck Interferometer to enable science operations to be carried out by a single operator (and/or observer). The interferometer links the two 10m telescopes of the W. M. Keck Observatory at Mauna Kea, Hawaii.
The IF Sequencer is a high-level, multi-threaded, Harel finite state machine software program designed to orchestrate several lower-level hardware and software hard real-time subsystems that must perform their work in a specific and sequential order. The sequencing need not be done in hard real-time. Each state machine thread commands either a high-speed real-time multiple mode embedded controller via CORBA, or slower controllers via EPICS Channel Access interfaces. The overall operation of the system is simplified by the automation.
The UML is discussed and our use of it to implement the sequencer is presented. The decision to use the Rhapsody product as our CASE tool is explained and reflected upon. Most importantly, a section on lessons learned is presented, including the difficulty of integrating CASE-tool-generated C++ code into a large control system consisting of multiple infrastructures.
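The sequencing idea described above — lower-level subsystems driven in a specific, sequential order by a state machine — can be illustrated with a deliberately flat sketch. State and event names here are invented for illustration; the real IF Sequencer uses hierarchical Harel statecharts and CASE-generated C++.

```python
class Sequencer:
    """Minimal flat state machine: a transition table keyed by
    (current state, event). A Harel machine adds nesting, history,
    and concurrency on top of this same dispatch idea."""

    TRANSITIONS = {
        ("idle", "start"): "slewing",
        ("slewing", "on_target"): "tracking",
        ("tracking", "fringes_found"): "observing",
        ("observing", "stop"): "idle",
    }

    def __init__(self):
        self.state = "idle"
        self.history = [self.state]

    def dispatch(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            # Out-of-order events are rejected, enforcing the sequence.
            raise ValueError(f"event {event!r} not valid in state {self.state!r}")
        self.state = self.TRANSITIONS[key]
        self.history.append(self.state)

seq = Sequencer()
for ev in ("start", "on_target", "fringes_found", "stop"):
    seq.dispatch(ev)
```

The table-driven form makes the legal orderings explicit, which is the property the sequencer exists to guarantee.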
The design of 'cancelable' data acquisition environments
This paper presents a discussion of the architectural issues resulting when software systems need to cancel operations once they have been initiated. This may seem a minor issue, but our experience is that this requirement can have a huge effect on the design of instrumental software environments. A number of major constraints on the structure of command-based environments such as the AAO's DRAMA system can be traced to the perceived need to be able to cancel any operation cleanly. This becomes particularly difficult to implement if these operations involve significant amounts of time or even potentially indefinite amounts of time, such as operations involving blocking I/O. In general, the cleanest results come from having a process or thread cancel itself, rather than relying on the ability to cancel it externally, but this turns the problem into one of finding mechanisms whereby processes can discover, reliably, that they need to cancel themselves. As system architectures are considered for the next generation of telescopes, it seems timely to consider these design problems and even to what extent the ideal requirement of cleanly cancellable operations may have been reduced by the move towards queue-scheduled operations and away from traditional interactive observing.
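The self-cancellation pattern argued for above can be sketched as follows — a minimal Python illustration with invented names, not DRAMA's actual API. External code merely requests cancellation; the worker discovers the request between bounded waits and exits cleanly on its own.

```python
import threading

class CancellableTask:
    """A task that discovers, between bounded waits, that it should
    cancel itself, rather than being killed externally."""

    def __init__(self):
        self._cancel = threading.Event()
        self.steps_done = 0
        self.cancelled = False

    def request_cancel(self):
        # External code only sets a flag; the task cleans up on its own.
        self._cancel.set()

    def run(self, n_steps, step_time=0.01):
        for _ in range(n_steps):
            # A bounded wait stands in for blocking I/O with a timeout,
            # so the task can never block indefinitely.
            if self._cancel.wait(timeout=step_time):
                self.cancelled = True
                return  # clean, self-initiated exit
            self.steps_done += 1

task = CancellableTask()
worker = threading.Thread(target=task.run, args=(1000,))
worker.start()
task.request_cancel()
worker.join()
```

The key design choice is that every potentially long operation is replaced by a bounded one, turning "cancel it externally" into "tell it to cancel itself".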
Altair interactions
Altair, ALTitude-conjugated Adaptive optics for InfraRed at Gemini North, was commissioned last October and is one of Canada’s major contributions to the Gemini Project, a seven-nation consortium that built identical 8m telescopes in Hawaii (Gemini North) and Chile (Gemini South). Altair coordinates and transfers data and status to both local and external subsystems at very high speeds. External Gemini subsystems include the Telescope Control System (TCS), Acquisition and Guiding (A&G), Observatory Control System (OCS), Gemini Interlock System (GIS), Time Server, Data Handling System (DHS), and Status and Alarm Database. This paper focuses on a few select sequences, such as closing the control loop and delivering a corrected image, collecting statistics, and displaying data, to highlight the complexity of the interactions within Altair.
Time to go H-∞
Traditionally, telescope main-axis controllers use a cascaded PI structure. We investigate the benefits and limitations of this approach and ask whether better performance can be achieved with modern control techniques. Our interest is mainly in improving disturbance rejection, since the tracking performance is normally easy to achieve. Comparison is made to more advanced controller structures using H-∞ design. This type of controller is more complex and needs a mathematical model of the telescope dynamics. We discuss how to obtain this model and also how to reduce it to a more manageable size using state-of-the-art model reduction techniques. As a design example the VLT altitude axis is chosen.
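For reference, the classical PI update that the cascaded structure is built from can be written in a few lines — a generic textbook form, not the VLT implementation:

```python
def pi_step(state, error, kp, ki, dt):
    """One discrete update of a PI controller:
    u = kp * e + ki * integral(e dt)."""
    state["integral"] += error * dt
    return kp * error + ki * state["integral"]

# Example: one control period with assumed (illustrative) gains.
state = {"integral": 0.0}
u = pi_step(state, error=2.0, kp=1.5, ki=0.5, dt=0.01)
```

An H-∞ controller replaces this fixed structure with a higher-order state-space filter synthesized against a plant model, which is why the model identification and reduction steps mentioned above become necessary.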
Instrument/Guider Software
UML modeling of the LINC-NIRVANA control software
LINC-NIRVANA is a Fizeau interferometer for the Large Binocular Telescope (LBT) performing imaging in the near infrared (J, H, and K bands). Multi-conjugate adaptive optics is used to increase sky coverage and to obtain diffraction-limited images over a 2 arcminute field of view. The control system consists of five independent loops, which are mediated through a master control. Due to its configuration, LINC-NIRVANA has no delay line like other interferometers. To remove residual atmospheric piston, the system must control both the primary and secondary mirrors, in addition to a third, dedicated piston mirror. This leads to a complex and interlocked control scheme and software. We will present parts of the instrument software design, which was developed in an object-oriented manner using UML. Several diagram types were used to structure the overall system and to evaluate the needs and interfaces of each sub-system with respect to the others.
The VISTA IR camera software design
VISTA is a wide-field survey telescope with a 1.6° field of view, sampled with a camera containing a 4 x 4 array of 2K x 2K pixel infrared detectors. The detectors are spaced so an image of the sky can be constructed without gaps by combining 6 overlapping observations, each part of the sky being covered at least twice, except at the tile edges. Unlike a typical ESO-VLT instrument, the camera also has a set of on-board wavefront sensors. The camera has a filter wheel, a collection of pressure and temperature sensors, and a thermal control system for the detectors and the cryostat window, but the most challenging aspect of the camera design is the need to maintain a sustained data rate of 26.8 Mb/second from the infrared detectors. The camera software needs to meet the requirements for VISTA, to fit into the ESO-VLT software architecture, and to interface with an upgraded IRACE system being developed by ESO-VLT. This paper describes the design for the VISTA camera software and discusses the software development process. It describes the solutions we have adopted to achieve the desired data rate, maximise survey speed, meet ESO-VLT standards, interface to the IRACE software and interface the on-board wavefront sensors to the VISTA telescope software.
The OmegaCAM instrument software: implementation and integration
OmegaCAM is the wide field optical imager for the VLT Survey Telescope (VST), part of the VLT Observatory, operated by the European Southern Observatory (ESO). The camera consists of a mosaic of 32 4k x 2k CCDs that form a 16k x 16k pixel array almost completely filling its 1 degree squared field of view. The instrument will start scientific operations in the first quarter of 2005. In this paper, after a brief review of the instrument software design, we describe the functionality of each major software subsystem: ICS (Instrument Control Software), which is in charge of the control of the opto-mechanics, in particular of the filter system; AG, which takes care of autoguiding; IA (Image Analysis), in charge of measuring aberrations using a curvature-like wavefront sensor; and OS (Observation Software), which coordinates all instrument subsystems in the execution of scientific observations and creates data files for the archive. Finally, we report on the activities for the integration of the software with the opto-mechanics and the instrument electronics.
ESO-VLT/FLAMES: control software for a multi-object observing facility
FLAMES is a complex observational facility for multi-object spectroscopy installed at the ESO VLT UT2 telescope at Paranal. It consists of a Fibre Positioner that feeds GIRAFFE, a medium-high resolution spectrograph, and UVES, a high resolution stand-alone spectrograph operational in slit mode since 1999. The Positioner is the core component of FLAMES. It is a rather large and complex system comprising two spherical focal plates of approx. 90 cm in diameter, an exchanger mechanism, R-θ robot motions and a pneumatic gripper mechanism with a built-in miniature CCD camera. The main task of the Positioner is to place a fibre (button) at a given focal plate position with an accuracy better than 40 microns. The fibre positioning process is performed on the plate attached to the robot while an observation is being performed on the plate attached to the telescope rotator. The whole instrument is driven by software designed in accordance with the VLT Common Software standards, allowing the complete integration of the instrument in the VLT environment. The paper mainly focuses on two areas: the low-level control and the performance of the Fibre Positioner; and the high-level coordinating software architecture that provides facilities for the parallel operation of multiple instruments.
CFHT MegaPrime guide and focus control system
The Canada-France-Hawaii Telescope is now operating a wide-field visible camera with a one-degree field of view. We have developed a guiding and auto-focus system that uses two stage-mounted CCD cameras fed by Shack-Hartmann optics providing position and focus error signals to the telescope guiding and focus control systems. The two camera stages patrol guide fields separated by more than a degree, one to the north and one to the south of the main camera field. Guiding generates a 50 Hz correction signal applied to a tip-tilt plate in the light path and a low frequency correction signal sent to control telescope position. During guiding a focus error signal is used to adjust telescope focus. Calibration issues include guide camera focusing, image distortion produced by the wide field corrector, guide stage positioning, and determining ideal guide star positions on the cameras. This paper describes the resulting system, including preselected guide star acquisition, guiding, telescope focus control, and calibration.
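The split between the 50 Hz tip-tilt path and the low-frequency telescope path can be sketched with a simple first-order low-pass filter. This is a minimal illustration, and `alpha` is an assumed smoothing constant, not a CFHT value: slow drifts go to the mount, and the fast residual goes to the tip-tilt plate.

```python
def split_correction(errors, alpha=0.1):
    """Split a guide-error sample stream into a slow component (sent to
    the telescope drives) and a fast residual (sent to the tip-tilt
    plate), using an exponential low-pass filter."""
    slow = 0.0
    fast_path, slow_path = [], []
    for e in errors:
        slow += alpha * (e - slow)     # low-pass: drifts go to the mount
        fast_path.append(e - slow)     # residual: fast jitter to tip-tilt
        slow_path.append(slow)
    return fast_path, slow_path

# A constant offset should migrate entirely to the slow (telescope) path.
fast, slow = split_correction([1.0] * 50)
```

With a step input, the slow path converges to the full offset while the fast residual decays toward zero, which is the desired division of labor between the two actuators.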
Active optics and auto-guiding control for VISTA
The VISTA wide field survey telescope will use the ESO Telescope Control System as used on the VLT and NTT. However the sensors for both auto-guiding and active optics are quite different and so the ESO TCS will require some significant modifications. VISTA will use large format CCDs at fixed locations in the focal plane for auto-guiding and a pair of curvature sensors, also fixed in the focal plane, for wave-front sensing. As a consequence, three reference stars are required for each science observation in contrast to the VLT which uses a single star for both auto-guiding and active optics. This paper will outline the reasons for adopting this design, review how it differs from the VLT/NTT and describe the modifications that are being made to the ESO TCS to enable it to be used for VISTA. It will describe the software that implements auto-guiding and active optics in the VLT TCS and how the design has been adapted to the different requirements of VISTA. This will show how the modular and distributed design of the ESO TCS has enabled it to be adapted to a new telescope with radically different design choices whilst maintaining the existing architecture and the bulk of the existing implementation.
Real-Time Systems
Real-time operation without a real-time operating system for instrument control and data acquisition
We are building the Field-Imaging Far-Infrared Line Spectrometer (FIFI LS) for the US-German airborne observatory SOFIA. The detector read-out system is driven by a clock signal at a certain frequency. This signal has to be provided and all other sub-systems have to work synchronously to this clock. The data generated by the instrument has to be received by a computer in a timely manner. Usually these requirements are met with a real-time operating system (RTOS).
In this presentation we show how we meet these demands differently, avoiding the stiffness of an RTOS. Digital I/O cards with a large buffer separate the asynchronously working computers from the synchronously working instrument. The advantage is that the data processing computers do not need to process the data in real time; it is sufficient that each computer can process the incoming data stream on average. But since the data is read in synchronously, the problem of relating commands and responses (data) has to be solved: the data arrives at a fixed rate, and the receiving I/O card buffers it until the computer can access it. To relate the data to commands sent previously, the data is tagged by counters in the read-out electronics. These counters count the system's heartbeat and signals derived from it. The heartbeat, and control signals synchronous with the heartbeat, are sent by an I/O card working as a pattern generator. Its buffer is continuously programmed with a pattern which is clocked out on the control lines. A counter in the I/O card keeps track of the number of pattern words clocked out. By reading this counter, the computer knows the state of the instrument and the meaning of the data that will arrive with a given time-tag.
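The counter-tagging scheme can be sketched as a toy Python model with invented names; in the real system this bookkeeping happens in the I/O-card hardware. The pattern generator's word counter defines the timeline, and every data frame carries the counter value at which it was taken, so commands and data can be related after the fact.

```python
class PatternGenerator:
    """Clocks out control words and counts them; the counter value is
    the system's shared notion of 'now'."""
    def __init__(self):
        self.counter = 0
    def clock_out(self, n_words):
        self.counter += n_words
        return self.counter

class Readout:
    """Tags each data frame with the counter value at acquisition time."""
    def __init__(self, generator):
        self.gen = generator
    def acquire(self, value):
        return {"tag": self.gen.counter, "value": value}

gen = PatternGenerator()
readout = Readout(gen)

gen.clock_out(100)               # command A's pattern active from count 100
frame_a = readout.acquire("A-data")
gen.clock_out(100)               # command B's pattern active from count 200
frame_b = readout.acquire("B-data")

def command_for(frame):
    # The computer relates frames to commands purely by comparing tags.
    return "A" if frame["tag"] < 200 else "B"
```

Because the matching is done on tags rather than arrival times, the computer is free to fall behind momentarily, which is exactly what removes the need for a hard real-time OS.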
ALMA correlator computer systems
We present a design for the computer systems which control, configure, and monitor the Atacama Large Millimeter Array (ALMA) correlator and process its output. Two distinct computer systems implement this functionality: a rack-mounted PC controls and monitors the correlator, and a cluster of 17 PCs processes the correlator output into raw spectral results. The correlator computer systems interface to other ALMA computers via gigabit Ethernet networks utilizing CORBA and raw socket connections. ALMA Common Software provides the software infrastructure for this distributed computer environment. The control computer interfaces to the correlator via multiple CAN busses, and the data processing computer cluster interfaces to the correlator via sixteen dedicated high-speed data ports. An independent array-wide hardware timing bus connects to the computer systems and the correlator hardware, ensuring synchronous behavior and imposing hard deadlines on the control and data processor computers. An aggregate correlator output of 1 gigabyte per second with 16 millisecond periods and computational data rates of approximately 1 billion floating point operations per second define other hard deadlines for the data processing computer cluster.
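The quoted figures imply concrete per-period deadlines. A quick back-of-envelope check, assuming the 1 GB/s aggregate is spread evenly over the sixteen ports:

```python
# Back-of-envelope check of the correlator figures quoted above.
output_rate = 1_000_000_000   # bytes per second (1 GB/s aggregate output)
period = 0.016                # seconds (16 ms dump period)
n_ports = 16                  # dedicated high-speed data ports

bytes_per_period = output_rate * period       # data each deadline must absorb
bytes_per_port = bytes_per_period / n_ports   # per-port share per period
```

So every 16 ms the cluster must absorb 16 MB of correlator output, about 1 MB per port per period, which is what makes the deadlines "hard" even though the processing itself is distributed.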
Designing a common real-time controller for VLT applications
The increasing number of digital control applications in the context of the VLT, and particularly the VLT Interferometer, brought the need to find a common solution to address the problems of performance and maintainability. Tools for Advanced Control (TAC) aims at helping both control and software engineers in the design and prototyping of real-time control applications by providing them with a set of standard functions and an easy way to combine them to create complex control algorithms. In this paper we describe the software architecture and design of TAC, the VLT standard for digital control applications. Algorithms are described at schematic level and take the form of a set of interconnected function blocks. Periodic execution of the algorithm, as well as features like runtime modification of parameters and probing of internal data, are also managed by TAC, allowing application designers to avoid spending time writing low-value software code and to focus instead on application-specific concerns. We also summarize the results achieved on the first actual applications using TAC, to manage real-time control or digital signal processing algorithms, currently deployed and being commissioned at Paranal Observatory.
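The function-block idea can be sketched as a toy Python model — not the actual TAC API — in which blocks are wired together, executed once per period, and have parameters that can be changed at runtime without touching code:

```python
class Block:
    """A function block: a computation plus named parameters that can
    be modified while the algorithm is running."""
    def __init__(self, fn, **params):
        self.fn = fn
        self.params = params
    def step(self, *inputs):
        return self.fn(self.params, *inputs)

# Wiring: (setpoint, measurement) -> error -> proportional gain -> command.
blocks = {
    "error": Block(lambda p, sp, meas: sp - meas),
    "gain":  Block(lambda p, e: p["k"] * e, k=2.0),
}

def run_period(setpoint, measurement):
    """One periodic execution of the interconnected blocks."""
    e = blocks["error"].step(setpoint, measurement)
    return blocks["gain"].step(e)

u1 = run_period(10.0, 6.0)
blocks["gain"].params["k"] = 0.5   # runtime parameter change, no recompile
u2 = run_period(10.0, 6.0)
```

Separating the wiring diagram from the block implementations is what lets control engineers prototype at the schematic level while the framework handles scheduling and parameter access.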
The Generic Pixel Server dictionary
Instruments and telescopes being planned for the US community include a wide assortment of facilities. These will require a consistent interface. Existing controllers use a variety of interfaces that will make using multiple controller types difficult. A new architecture that takes maximum advantage of code and hardware re-use, maintainability and extensibility is being developed at NOAO. The MONSOON Image acquisition/Detector controller system makes maximum use of COTS hardware and Open-Source development and can support OUV and IR detectors, singly or in very large mosaics. A basic requirement of the project was the ability to seamlessly handle even massive focal planes like LSST and ODI.
Software plays a vital role in the flexibility of the MONSOON system. The authors have built on their experience with previous systems (e.g., GNAAC, wildfire, ALICE, SDSU) to develop a command interface, based on a dictionary of commands, that can be applied to any detector controller project. The Generic Pixel Server (GPX) concept consists of a dictionary that not only supports the needs of projects that use MONSOON controllers, but whose set of commands can also be used as the interface to any detector controller with only modest additional effort. This generic command set (the GPX dictionary) is defined here as an introduction to the GPX concept.
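A dictionary-driven command interface of this kind can be sketched as follows. The command names and state layout here are invented for illustration; the real GPX dictionary defines its own command set.

```python
# Hypothetical sketch of a dictionary-driven detector-controller interface.
GPX_DICTIONARY = {
    "setMode":  {"args": 1, "handler": lambda ctl, m: ctl.update(mode=m)},
    "startExp": {"args": 0, "handler": lambda ctl: ctl.update(exposing=True)},
    "abortExp": {"args": 0, "handler": lambda ctl: ctl.update(exposing=False)},
}

def dispatch(controller_state, command, *args):
    """Validate a command against the dictionary, then run its handler.
    Any controller exposing this dictionary presents the same interface."""
    entry = GPX_DICTIONARY.get(command)
    if entry is None:
        raise KeyError(f"unknown command {command!r}")
    if len(args) != entry["args"]:
        raise ValueError(f"{command} expects {entry['args']} argument(s)")
    entry["handler"](controller_state, *args)
    return controller_state

state = {"mode": None, "exposing": False}
dispatch(state, "setMode", "fowler")
dispatch(state, "startExp")
```

Because clients program against the dictionary rather than a specific controller, swapping controller hardware only requires supplying new handlers, which is the portability argument made above.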
Optimization of SDSU-2 CCD controller hardware and software for CCD mosaics
The San Diego State University Generation 2 CCD controller (SDSU-2) architecture is widely used in both optical and infrared astronomical instruments. This architecture was employed in the CCD controllers for the DEIMOS instrument commissioned on Keck-II in June 2002. In 2004, the CCD dewar in the HIRES instrument on Keck-I will be upgraded to a 3 x 1 mosaic of MIT/LL 2K x 4K CCDs controlled by an SDSU-2 CCD controller.
For each of these SDSU-2 CCD controllers, customized versions of PAL chips were developed to extend the capabilities of this controller architecture. For both mosaics, a custom timing board PAL enables rapid, software-selectable switching between dual- and single-amplifier-per-CCD readout modes while reducing excess utilization of fiber optic bandwidth for the latter. For the HIRES CCD mosaic, a custom PAL for the clock generation boards provides software selection of different clock waveforms that can address the CCDs of the mosaic either individually or globally, without any need to reset the address jumpers on these boards.
The custom PAL for the clock generation boards enables a method for providing differing exposure times on each CCD of the mosaic. These distinct exposure times can be implemented in terms of a series of sub-exposures within a single, global mosaic observation. This allows for more effective observing of sources that have flux gradients across the spectral dimension of the CCD mosaic, because those CCDs located near the higher end of the flux gradient can be read out more frequently, thus reducing the number of cosmic rays in each individual sub-exposure from those CCDs.
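The sub-exposure scheme can be sketched numerically with a hypothetical helper (this is not the actual PAL/timing implementation): each CCD's share of a single global observation is split into equal sub-exposures, with more, shorter reads assigned to CCDs at the bright end of the flux gradient.

```python
def sub_exposure_plan(total_time, readouts_per_ccd):
    """Split one global observation of total_time seconds into per-CCD
    sub-exposures. CCDs given more readouts get shorter individual
    sub-exposures, so each collects fewer cosmic rays."""
    return {ccd: [total_time / n] * n
            for ccd, n in readouts_per_ccd.items()}

# Illustrative 1200 s observation: the 'red' CCD sits at the bright end
# of the flux gradient, so it is read four times; CCD names are invented.
plan = sub_exposure_plan(1200.0, {"blue_ccd": 1, "mid_ccd": 2, "red_ccd": 4})
```

Every CCD still integrates for the full observation time in total; only the number of intervening readouts differs.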
Frameworks
The ALMA software architecture
The software for the Atacama Large Millimeter Array (ALMA) is being developed by many institutes on two continents. The software itself will function in a distributed environment, from the 0.5-14 km baselines that separate antennas to the larger distances that separate the array site at the Llano de Chajnantor in Chile from the operations and user support facilities in Chile, North America and Europe. Distributed development demands 1) interfaces that allow separated groups to work with minimal dependence on their counterparts at other locations; and 2) a common architecture to minimize duplication and ensure that developers can always perform similar tasks in a similar way. The Container/Component model provides a blueprint for the separation of functional from technical concerns: application developers concentrate on implementing functionality in Components, which depend on Containers to provide them with services such as access to remote resources, transparent serialization of entity objects to XML, logging, error handling and security. Early system integrations have verified that this architecture is sound and that developers can successfully exploit its features. The Containers and their services are provided by a system-oriented development team as part of the ALMA Common Software (ACS), middleware that is based on CORBA.
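The separation of concerns described above can be sketched in a toy Python model; the real ACS containers are CORBA-based and provide far more services. Here the component implements only functional code, while the container supplies a technical service (logging) on its behalf.

```python
class Container:
    """Hosts components and provides them with technical services;
    here the only service is a logger (a deliberate simplification)."""
    def __init__(self):
        self.log = []
        self._components = {}

    def get_logger(self, name):
        return lambda msg: self.log.append(f"{name}: {msg}")

    def activate(self, name, component_cls):
        # The container injects services at activation time.
        comp = component_cls(self.get_logger(name))
        self._components[name] = comp
        return comp

class Mount:
    """Purely functional code: the component never touches how logging
    (or any other service) is implemented."""
    def __init__(self, logger):
        self._log = logger
        self.az = 0.0

    def slew(self, az):
        self.az = az
        self._log(f"slewed to az={az}")

container = Container()
mount = container.activate("MOUNT1", Mount)
mount.slew(180.0)
```

Swapping the logging (or serialization, or security) implementation then touches only the container, never the components — the maintainability argument the architecture makes.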
The ALMA common software: a developer-friendly CORBA-based framework
The ALMA Common Software (ACS) is a set of application frameworks built on top of CORBA. It provides a common software infrastructure to all partners in the ALMA collaboration. The usage of ACS extends from high-level applications such as the Observation Preparation Tool [7] that will run on the desk of astronomers, down to the Control Software [6] domain. The purpose of ACS is twofold: from a system perspective, it provides the implementation of a coherent set of design patterns and services that will make the whole ALMA software [1] uniform and maintainable; from the perspective of an ALMA developer, it provides a friendly programming environment in which the complexity of the CORBA middleware and other libraries is hidden and coding is drastically reduced. The evolution of ACS is driven by a long-term development plan; however, on each 6-month release cycle the plan is adjusted based on incoming requests from ALMA subsystem development teams. ACS was presented at SPIE 2002 [2]. In the two years since then, the core services provided by ACS have been extended, while the coverage of the application framework has been increased to satisfy the needs of high-level and data flow applications. ACS is available under the LGPL public license. The patterns implemented and the services provided can also be of use outside the astronomical community; several projects have already shown their interest in ACS. This paper presents the status of ACS and the progress over the last two years. Emphasis is placed on showing how requests from ACS users have driven the selection of new features.
Container-component model and XML in ALMA ACS
ALMA software, from high-level data flow applications down to instrument control, is built using the ACS framework. To meet the challenges of developing distributed software in distributed teams, ACS offers a container/component model that integrates the use of XML transfer objects. ACS containers are built on top of CORBA and are available for C++, Java, and Python, so that ALMA software can be written as components in any of these languages. The containers handle the technical aspects of the software system, while components can focus on the implementation of functional requirements.
Like Web services, components can use XML to exchange structured data by value. For Java components, the container seamlessly integrates the use of XML binding classes, which are Java classes that encapsulate access to XML data through type-safe methods. Binding classes are generated from XML schemas, allowing the Java compiler to enforce compliance of application code with the XML schemas.
This presentation will explain the capabilities of the ACS container/component model, and how it relates to other middleware technologies that are popular in industry.
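The binding-class idea can be approximated in Python with a hand-written stand-in (element names invented; in ACS the Java binding classes are generated automatically from XML schemas): callers get typed properties instead of raw document navigation.

```python
import xml.etree.ElementTree as ET

class SchedBlockBinding:
    """Hand-written stand-in for a schema-generated binding class:
    access to the XML payload goes through type-safe properties, so a
    misspelled element name fails in one place rather than everywhere."""

    def __init__(self, xml_text):
        self._root = ET.fromstring(xml_text)

    @property
    def name(self) -> str:
        return self._root.findtext("name")

    @property
    def repeat_count(self) -> int:
        # The binding layer owns the string-to-int conversion.
        return int(self._root.findtext("repeatCount"))

doc = ("<SchedBlock><name>M51 mosaic</name>"
       "<repeatCount>3</repeatCount></SchedBlock>")
sb = SchedBlockBinding(doc)
```

Generating such classes from the schema, as ACS does for Java, additionally lets the compiler reject code that does not match the schema — something this hand-written Python sketch cannot enforce.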
An enterprise software architecture for the Green Bank Telescope (GBT)
The enterprise architecture presents a view of how software utilities and applications are related to one another under unifying rules and principles of development. By constructing an enterprise architecture, an organization will be able to manage the components of its systems within a solid conceptual framework. This largely prevents duplication of effort, focuses the organization on its core technical competencies, and ultimately makes software more maintainable. At the beginning of 2003, software development at the GBT faced several prominent challenges. The telescope was not easily configurable, and observing often presented a challenge, particularly to new users. High-priority projects required new experimental developments on short time scales. Migration paths were required for applications which had proven difficult to maintain. In order to solve these challenges, an enterprise architecture was created, consisting of five layers: 1) the telescope control system, and the raw data produced during an observation, 2) Low-level Application Programming Interfaces (APIs) in C++, for managing interactions with the telescope control system and its data, 3) High-level APIs in Python, which can be used by astronomers or software developers to create custom applications, 4) Application Components in Python, which can be either standalone applications or plug-in modules to applications, and 5) Application Management Systems in Python, which package application components for use by a particular user group (astronomers, engineers or operators) in terms of resource configurations. This presentation describes how these layers combine to make the GBT easier to use, while concurrently making the software easier to develop and maintain.
A standard control system for the Large Millimeter Telescope and instruments
The Large Millimeter Telescope monitor and control system is automatically generated from a set of XML configuration files. This ensures that all inter-system communications and user interfaces adhere to a common standard. The system was originally designed to control the electro-mechanical components of the telescope, but it maps well to the control of instruments. Properties of the instruments are defined in XML, and the corresponding control and communication code and user interfaces are generated from them. This approach works well in theory; however, when it comes to installing the system on the actual instruments, several problems arise: the goals of instrument developers, software support for instrument developers, hardware compatibility issues, and the choice of computer architecture and development environment.
In this paper, we present a discussion of the above issues and suggest tried solutions.
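The generate-from-XML approach described above can be sketched in a few lines. The XML element and attribute names below are invented for illustration; they are not the actual LMT configuration schema, and the emitted stub assumes hypothetical `_read`/`_write` primitives.

```python
import xml.etree.ElementTree as ET

# Hypothetical property definition in the spirit of the LMT XML
# configuration files; element and attribute names are invented.
XML_CONFIG = """
<subsystem name="Dewar">
  <property name="Temperature" type="float" units="K" writable="false"/>
  <property name="FilterWheel" type="int" units="position" writable="true"/>
</subsystem>
"""

def generate_stub(xml_text):
    """Emit Python source for a control stub from an XML subsystem spec."""
    root = ET.fromstring(xml_text)
    lines = [f"class {root.get('name')}:"]
    for prop in root.findall("property"):
        name = prop.get("name")
        # Every property gets a getter; only writable ones get a setter.
        lines.append(f"    def get_{name.lower()}(self):")
        lines.append(f"        return self._read('{name}')  # {prop.get('units')}")
        if prop.get("writable") == "true":
            lines.append(f"    def set_{name.lower()}(self, value):")
            lines.append(f"        self._write('{name}', value)")
    return "\n".join(lines)

print(generate_stub(XML_CONFIG))
```

Because every subsystem and instrument is generated from the same schema, the communication layer and user interfaces stay uniform by construction rather than by convention.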
Software Engineering
James Webb Space Telescope: supporting multiple ground system transitions in one year
Show abstract
Ideas, requirements, and concepts developed during the very early phases of mission design often conflict with the reality of the situation once the prime contracts are awarded. This happened for the James Webb Space Telescope (JWST) as well. The high-level requirement of a common real-time ground system for both the Integration and Test (I&T) and Operations phases of the mission is meant to reduce the cost and time needed later in mission development for recertification of databases, command and control systems, scripts, display pages, etc. In the case of JWST, the early Phase A flight software development needed a real-time ground system and database before the spacecraft prime contractor had been selected. To compound the situation, the very low-level requirements for the real-time ground system were not well defined. These two situations caused the initial real-time ground system to be swapped out for a system that had previously been used by the flight software development team. To meet the high-level requirement, a third ground system was then selected based on the prime spacecraft contractor's needs and JWST Project decisions. The JWST ground system team has responded to each of these changes successfully. The lessons learned from each transition have not only made each subsequent transition smoother, but have also resolved issues earlier in mission development than would normally occur.
Facing software complexity on large telescopes
Show abstract
The successful development of any complex control system requires a blend of good software management, an appropriate computer architecture and good software engineering. Due to the large number of controlled parts, high performance goals and required operational efficiency, the control systems for large telescopes are particularly challenging to develop and maintain.
In this paper the authors highlight some of the specific challenges that need to be met by control system developers to meet the requirements within a limited budget and schedule. They share some of the practices applied during the development of the Southern African Large Telescope (SALT) and describe specific aspects of the design that contribute to meeting these challenges. The topics discussed include: development methodology, defining the level of system integration, computer architecture, interface management, software standards, language selection, user interface design and personnel selection.
Time will reveal the full truth, but the authors believe that the significant progress achieved in commissioning SALT (now 6 months from telescope completion), can largely be attributed to the combined application of these practices and design concepts.
Applying VLT software to a new telescope: methods and observations
Show abstract
The VISTA wide field survey telescope will be operated and maintained from 2006 by ESO at their Cerro Paranal Observatory. To minimise both development costs and operational costs, the telescope's software will reuse software from the VLT wherever feasible. Some software modules will be reused without modification, others will include modifications or enhancements and yet others will be complete rewrites or completely new. This paper examines the methods used in the software development process to integrate existing and new software in a transparent and maintainable manner. On the basis of the work so far performed, some lessons are presented for the reuse of VLT software for a new telescope by an organisation without previous knowledge of VLT software.
Software engineering practices for the EGO Virgo project
Show abstract
The Virgo Gravitational Waves Detector has recently entered its commissioning phase. An important element in this phase is the application of Software Engineering (SE) practices to the Control and Data Analysis Software. This article focuses on the experience of applying those SE practices as a simple but effective set of standards and tools. The main areas covered are software configuration management, problem reporting, integration planning, software testing, and system performance monitoring.
Key elements of Software Configuration Management (SCM) are source code control, allowing check-in/check-out of sources from a software archive, combined with a backup plan. This is supported by SCVS, a tool developed on top of CVS to provide an easier and more structured mode of use.
Tracking bugs and modifications is a necessary complement of SCM. A central database with email and web interface to submit, query and modify Software Problem Reports (SPR) has been implemented on top of the WREQ tool.
Integrating software components that were not designed with integration in mind is one of the major problems in software development. An explicit integration plan is therefore absolutely essential. We are currently implementing Common Software Release management with a slow upgrade cycle as a structured integration plan.
Software testing must be closely integrated with development and, to the extent feasible, automated. With the automated test tool tat, the developer can incrementally build a unit/regression test suite that helps measure progress, spot unintended side effects, and focus development efforts.
One of the characteristics of large and complex projects like Virgo is the difficulty of understanding how well the different subsystems are performing and then planning for changes. To support system performance monitoring, the tool Big Brother has been adopted; it makes it possible to trace the reliability of the different subsystems and thus provides essential information for software improvements.
Remote Observing Reports
Telescope networking and user support via Remote Telescope Markup Language
Show abstract
Remote Telescope Markup Language (RTML) is an XML-based interface/document format designed to facilitate the exchange of astronomical observing requests and results between investigators and observatories, as well as within networks of observatories. While originally created to support simple imaging telescope requests (Versions 1.0-2.1), RTML Version 3.0 now supports a wide range of applications, from request preparation, exposure calculation, spectroscopy, and observation reports to remote telescope scheduling, target-of-opportunity observations, and telescope network administration. The elegance of RTML is that all of this is made possible using a public XML Schema, which provides a general-purpose, easily parsed, and syntax-checked medium for the exchange of astronomical and user information while not restricting or otherwise constraining the use of the information at either end. Thus, RTML can be used to connect heterogeneous systems and their users without requiring major changes in existing local resources and procedures. Projects as different as a number of advanced amateur observatories, the global Hands-On Universe project, the MONET network (robotic imaging), the STELLA consortium (robotic spectroscopy), and the 11-m Southern African Large Telescope are now using or intend to use RTML in various forms and for various purposes.
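The exchange pattern can be illustrated by assembling a minimal observing request. The tag names and structure below are a simplified illustration only, not the normative RTML 3.0 schema; the target is an arbitrary example.

```python
import xml.etree.ElementTree as ET

def build_request(target_name, ra_deg, dec_deg, exposure_s):
    """Assemble a simplified observing request as an XML document.
    Element names are illustrative, not the real RTML 3.0 schema."""
    root = ET.Element("RTML", version="3.0", mode="request")
    tgt = ET.SubElement(root, "Target", name=target_name)
    coords = ET.SubElement(tgt, "Coordinates")
    ET.SubElement(coords, "RightAscension").text = str(ra_deg)
    ET.SubElement(coords, "Declination").text = str(dec_deg)
    sched = ET.SubElement(root, "Schedule")
    ET.SubElement(sched, "Exposure", unit="seconds").text = str(exposure_s)
    return ET.tostring(root, encoding="unicode")

# A request for an imaging observation of M31, serialized for transport.
doc = build_request("M31", 10.68, 41.27, 120)
print(doc)
```

Because the document is schema-checked XML, the receiving observatory can validate and parse it with standard tools, which is exactly what lets heterogeneous systems interoperate without sharing code.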
TALON: the telescope alert operation network system: intelligent linking of distributed autonomous robotic telescopes
Show abstract
The internet has brought about great change in the astronomical community, but this interconnectivity is just starting to be exploited for use in instrumentation. Utilizing the internet for communicating between distributed astronomical systems is still in its infancy, but it already shows great potential. Here we present an example of a distributed network of telescopes that performs more efficiently in synchronous operation than as individual instruments. RAPid Telescopes for Optical Response (RAPTOR) is a system of telescopes at LANL that features intelligent intercommunication, combined with wide-field optics, temporal monitoring software, and deep-field follow-up capability, all working in closed-loop real-time operation. The Telescope ALert Operations Network (TALON) is a network server that allows intercommunication of alert triggers from external and internal resources and controls the distribution of these to each of the telescopes on the network. TALON is designed to grow, allowing any number of telescopes to be linked together and communicate. Coupled with an intelligent alert client at each telescope, it can analyze and respond to each distributed TALON alert based on the telescope's needs and schedule.
eSTAR: intelligent observing and rapid responses
Show abstract
The eSTAR Project uses intelligent agent technologies to carry out resource discovery, submit observation requests, and analyze the reduced data returned from a network of robotic telescopes in an observational grid. The agents are capable of data mining and cross-correlation tasks using on-line catalogues and databases and, if necessary, requesting additional data and follow-up observations from the telescopes on the network. We discuss how the maturing agent technologies can be used both to provide rapid follow-up of time-critical events and for long-term monitoring of known sources, utilising the available resources in an intelligent manner.
Remote Observing Infrastructure
Optimizing the use of X and VNC protocols for support of remote observing
Show abstract
Remote observing is the dominant mode of operation for both Keck Telescopes and their associated instruments. Over 90% of all Keck observations are carried out remotely from Keck Headquarters in Waimea, Hawaii (located 40 kilometers from the telescopes on the summit of Mauna Kea). In addition, an increasing number of observations are now conducted by geographically-dispersed observing teams, with some team members working from Waimea while others collaborate from Keck remote observing facilities located in California. Such facilities are now operational on the Santa Cruz and San Diego campuses of the University of California, and at the California Institute of Technology in Pasadena.
This report describes our use of the X and VNC protocols for providing remote and shared graphical displays to distributed teams of observers and observing assistants located at multiple sites. We describe the results of tests involving both protocols, and explore the limitations and performance of each under different regimes of network bandwidth and latency. We also examine other constraints imposed by differences in the processing performance and bit depth of the various frame buffers used to generate these graphical displays.
Other topics covered include the use of ssh tunnels for securely encapsulating both X and VNC protocol streams, and the results of tests of ssh compression to improve performance under conditions of limited network bandwidth. We also examine trade-offs between different topologies for locating VNC servers and clients when sharing displays between multiple sites.
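The ssh-encapsulation setup mentioned above follows a standard pattern: forward the local VNC port (5900 + display number, by convention) through an encrypted tunnel, optionally with stream compression for slow links. The sketch below builds such a command line; the gateway hostname is invented for illustration, and this is a generic ssh recipe rather than the observatory's actual configuration.

```python
def vnc_tunnel_argv(gateway, display, compress=True):
    """Build the argv for an ssh tunnel that forwards a local VNC
    display to the same display number on a remote server.
    VNC display N listens on TCP port 5900 + N by convention."""
    port = 5900 + display
    argv = ["ssh", "-N", "-L", f"{port}:localhost:{port}"]
    if compress:
        argv.append("-C")  # ssh stream compression; helps on low-bandwidth links
    argv.append(gateway)
    return argv

# Example: tunnel VNC display :3 through a hypothetical gateway host.
print(" ".join(vnc_tunnel_argv("gateway.observatory.example", 3)))
```

A VNC viewer on the local machine then connects to `localhost:3`, and all protocol traffic crosses the wide-area network only inside the encrypted (and optionally compressed) ssh stream.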
Poster Session a: Project Status
The study on the scheme of measurement and control for FAST
Show abstract
A newly developed method and technology for determining the spatial position of the feeds of FAST are introduced in this paper. Based on measurements of the position and orientation of the cabin in which the feeds are mounted, a feedback control loop enables the feeds to be driven accurately along desired tracks. The key technique of this implementation is the precise measurement of the six-degree-of-freedom coordinates of the suspended cabin at a high sampling rate. An innovative approach for this purpose, combining data from different types of sensors, is put forward and tested. The measurement errors and their influence on the control accuracy are analyzed theoretically and checked against model tests. The experiments show the feasibility and effectiveness of the proposed measurement and control scheme for the telescope.
Operational performance of the EVLA optical systems LO and IF
Show abstract
The Expanded Very Large Array (EVLA) uses fiber optic technologies for intermediate frequency (IF) digital data transmission and for local oscillator and reference distribution (LO). These signals are sent on separate fibers to each of the 27 EVLA antennas. The data transmission system transmits the four digitized IF signals from the antennas to the central electronics building. A sustained data rate of 10.24 Gbit/s per channel, or 122.88 Gbit/s of formatted data per antenna, is achieved. Each IF signal uses a set of three channels (twelve channels in total), which are wavelength-division multiplexed onto a single fiber. The IF system configuration includes an EML CW laser, an erbium-doped fiber amplifier (EDFA), passive optical multiplexers, up to 22 km of standard single-mode fiber, and an APD optical receiver.
The LO system uses two fibers to provide a round trip phase measurement at 1310 nm. The phase requirement for the LO system requires that a phase stability of less than 2.8 picoseconds per hour at 40 GHz be maintained across the entire array. To accomplish this, a near real-time continuous measurement is made of the phase delay of the amplitude modulated 512 MHz signals that are distributed to each antenna. This information is used by the correlator to set the delay on each of the baselines in the array. This paper presents a complete description of the two EVLA fiber systems, LO and IF, including specific component specifications.
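The quoted figures can be sanity-checked with simple arithmetic: twelve channels at 10.24 Gbit/s give the per-antenna aggregate, and a 2.8 ps drift expressed as a fraction of one 40 GHz LO cycle gives the corresponding phase angle. The check below is a back-of-the-envelope calculation, not part of the EVLA design documents.

```python
# Consistency check of the quoted EVLA figures.
per_channel_gbps = 10.24
channels = 12                       # 4 IF signals x 3 channels each
aggregate_gbps = per_channel_gbps * channels   # formatted rate per antenna

drift_s = 2.8e-12                   # allowed phase drift per hour
lo_hz = 40e9                        # LO frequency
# Fraction of one LO cycle the drift represents, converted to degrees.
drift_deg = drift_s * lo_hz * 360

print(f"aggregate: {aggregate_gbps:.2f} Gbit/s")
print(f"phase drift at 40 GHz: {drift_deg:.2f} degrees/hour")
```

So the 2.8 ps/hour requirement corresponds to holding each antenna's LO phase to roughly 40 degrees of drift per hour at the highest observing frequency, which the round-trip measurement then corrects in the correlator delays.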
Poster Session b: Real-Time Systems
An implementation of the ATST virtual instrument model
Show abstract
The Advanced Technology Solar Telescope (ATST) is intended to be the premier facility for experimental solar physics. A premium has been placed on operating ATST as a laboratory-style observatory to maximize the flexibility available to solar physicists. In particular, the main observation platform is a rotating coudé platform supporting eight optical benches on which instruments may be assembled from available components. The Virtual Instrument Model has been developed to formalize the operation of a facility where an instrument may exist for a single experiment before its components are reassembled into a new instrument. The model allows this laboratory-style operation to fit easily within a typical modern telescope control system. This paper presents one possible implementation of the Virtual Instrument Model, based on the container/component model that is becoming increasingly popular in software development.
The MONSOON Generic Pixel Server software design
Show abstract
MONSOON is the next generation OUV-IR controller development project being conducted at NOAO. MONSOON was designed from the start as an "architecture" that provides the flexibility to handle multiple detector types, rather than as a set of specific hardware to control a particular detector. The hardware design was done with maintainability and scalability as key factors. We have, wherever possible, chosen commercial off-the-shelf components rather than in-house or proprietary systems.
From first principles, the software design had to be configurable in order to handle many detector types and focal plane configurations. The MONSOON software is multi-layered with simulation of the hardware built in. By keeping the details of hardware interfaces confined to only two libraries and by strict conformance to a set of interface control documents the MONSOON software is usable with other hardware systems with minimal change. In addition, the design provides that focal plane specific details are confined to routines that are selected at load time.
At the top level, the MONSOON Supervisor Level (MSL), we use the GPX dictionary, a defined interface to the software system that instruments and high-level software can use to control and query the system. Below this are PAN-DHE pairs that interface directly with portions of the focal plane. The number of PAN-DHE pairs can be scaled up to increase channel counts and processing speed or to handle larger focal planes. The range of detector applications supported goes from single-detector lab systems and four-detector IR systems like NEWFIRM up to 500-CCD focal planes like LSST. In this paper we discuss the design of the PAN software and its interaction with the detector head electronics.
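A dictionary-style entry point like the GPX dictionary can be sketched as a registry of named attributes with uniform get/set semantics, so high-level software never touches hardware-specific code. The class and attribute names below are invented for illustration; this is not the real MONSOON GPX dictionary.

```python
class AttributeDictionary:
    """Toy sketch of a dictionary-style controller interface: every
    interaction is a get/set on a named entry, so clients need no
    knowledge of the hardware behind it.  Names are illustrative."""

    def __init__(self):
        self._entries = {}
        self._on_set = {}

    def register(self, name, initial, on_set=None):
        """Add a named entry, optionally with a hardware-side callback."""
        self._entries[name] = initial
        if on_set is not None:
            self._on_set[name] = on_set

    def get(self, name):
        return self._entries[name]

    def set(self, name, value):
        self._entries[name] = value
        handler = self._on_set.get(name)
        if handler:
            handler(value)   # e.g. forward the change to a PAN-DHE pair

# Hypothetical usage: a hardware layer registers its attributes at startup.
hardware_log = []
gpx = AttributeDictionary()
gpx.register("exposureTime", 0.0, on_set=hardware_log.append)
gpx.register("detectorTemp", 150.0)
gpx.set("exposureTime", 30.0)
```

The scaling story follows naturally: adding another PAN-DHE pair just registers more entries in the same dictionary, leaving the client-facing interface unchanged.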
The MONSOON implementation of the Generic Pixel Server
Show abstract
MONSOON is NOAO's diverse, future-proof array controller project that holds the promise of a common hardware and software platform for the whole of US astronomy. As such it is an implementation of the Generic Pixel Server, a new concept that serves OUV-IR pixel data. The fundamental element of the server is the GPX dictionary, which is the only entry point into the system from instrumentation or observatory-level software. In the MONSOON implementation, which uses mostly commercial off-the-shelf hardware and software components, the MONSOON supervisor layer (MSL) is the highest-level layer, and it communicates with multiple Pixel-Acquisition-Node / Detector-Head-Electronics (PAN-DHE) pairs to co-ordinate the acquisition of the celestial data. The MSL is the MONSOON implementation of the GPX, and this paper discusses the design requirements and the techniques used to meet them.
Automated software configuration in the MONSOON system
Show abstract
MONSOON is the next generation OUV-IR controller project being developed at NOAO. The design is flexible, emphasizing code re-use, maintainability and scalability as key factors. The software needs to support widely divergent detector systems, ranging from multi-chip mosaics (for LSST, QUOTA, ODI and NEWFIRM) down to large single- or multi-detector laboratory development systems. In order for this flexibility to be effective and safe, the software must be able to configure itself to the requirements of the attached detector system at startup. The basic building block of all MONSOON systems is the PAN-DHE pair, which makes up a single data acquisition node. In this paper we discuss the software solutions used in the automatic PAN configuration system.
Instrumentation control using the Rabbit 2000 embedded microcontroller
Show abstract
Embedded microcontroller modules offer many advantages over the standard PC such as low cost, small size, low power consumption, direct access to hardware, and if available, access to an efficient preemptive real-time multitasking kernel. Typical difficulties associated with an embedded solution include long development times, limited memory resources, and restricted memory management capabilities. This paper presents a case study on the successes and challenges in developing a control system for a remotely controlled, Alt-Az steerable, water vapour detector using the Rabbit 2000 family of 8-bit microcontroller modules in conjunction with the MicroC/OS-II multitasking real-time kernel.
ACS sampling system: design, implementation, and performance evaluation
Show abstract
By means of the ACS (ALMA Common Software) framework, we designed and implemented a sampling system which allows sampling of every Characteristic Component Property at a specific, user-defined, sustained frequency limited only by the hardware. Collected data are sent to various clients (one or more Java plotting widgets, a dedicated GUI, or a COTS application) using the ACS/CORBA Notification Channel. The data transport is optimized: samples are cached locally and sent in packets at a lower, user-defined frequency to keep the network load under control. Simultaneous sampling of the Properties of different Components is also possible. Together with the design and implementation issues, we present the performance of the sampling system evaluated on two different platforms: a VME-based system using the VxWorks RTOS (currently adopted by ALMA) and a PC/104+ embedded platform running the Red Hat 9 Linux operating system. The PC/104+ solution offers, as an alternative, a low-cost PC-compatible hardware environment with a free and open operating system.
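The cache-then-batch transport optimization described above can be sketched independently of ACS: samples accumulate in a local buffer and are shipped as packets at a lower rate. The class and parameter names below are illustrative, not the ACS API.

```python
class SampleBuffer:
    """Cache samples locally and flush them in packets at a lower,
    user-defined rate -- a simplified sketch of the batching idea,
    with invented names (the real system uses the ACS/CORBA
    Notification Channel as its transport)."""

    def __init__(self, flush_every, send):
        self.flush_every = flush_every   # samples per packet
        self.send = send                 # callback delivering one packet
        self._cache = []

    def add(self, timestamp, value):
        self._cache.append((timestamp, value))
        if len(self._cache) >= self.flush_every:
            self.flush()

    def flush(self):
        """Ship whatever is cached, even a partial packet."""
        if self._cache:
            self.send(list(self._cache))
            self._cache.clear()

# Hypothetical usage: sample at 1 kHz, transmit in packets of 5.
packets = []
buf = SampleBuffer(flush_every=5, send=packets.append)
for i in range(12):
    buf.add(i * 0.001, float(i))
buf.flush()   # drain the remaining partial packet
```

The network then sees one message per packet instead of one per sample, which is what keeps the load controlled at high sampling frequencies.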
Application of real time database to LAMOST control system
Show abstract
The QNX-based real-time database is one of the main features of the control system of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST). It serves as a storage and platform for the data flow, recording and updating in a timely manner the various states of the moving components in the telescope structure as well as the environmental parameters around it. The database participates harmoniously in the administration of the Telescope Control System (TCS). The paper presents the methodology and techniques used in designing the EMPRESS database GUI software package, such as the dynamic creation of control widgets, dynamic queries, and shared memory. A seamless connection between EMPRESS and QNX's graphical development tool, the Photon Application Builder (PhAB), has been realized, providing a Windows look and feel under a Unix-like operating system. In particular, the real-time features of the database are analyzed to show that they satisfy the needs of the control system.
Design of PCI-based data acquisition, antenna control, and real-time web-based database for a solar radio observational system
Show abstract
Solar activity is one of the main sources of space disturbances, which are primarily responsible for hazardous space weather. Solar activity follows an 11-year cycle and has many manifestations, such as changes in the sunspot number and in the solar radio flux at 10.7 cm wavelength. The 1.0-2.0 GHz, 2.6-3.8 GHz, and 5.2-7.6 GHz solar radio spectrometers and the 2840 MHz solar radio telescope of the National Astronomical Observatory at the Huairou Solar Observing Station have collected a considerable amount of radio flux data since 1999. In order to carry out further research on solar activity and to develop space weather forecasting, the real-time observed data should be well utilized. We therefore designed the data acquisition, antenna control, and real-time web-based database system for the 2840 MHz solar radio telescope. The paper introduces the whole design of a PCI-based data acquisition, antenna control, and real-time web-based database system for solar radio observation at Huairou in China. The popular PCI controller, the PCI9052, is utilized to implement the interface between the PCI bus and peripheral devices. A PLD chip is applied for data transfer and antenna control. The Windows device driver is developed based on DriverWorks and the Windows DDK. The real-time database is based on MySQL and Apache.
Poster Session c: Instruments
Control software for OSIRIS: an infrared integral-field spectrograph for the Keck adaptive optics system
Show abstract
OSIRIS is an infrared integral-field spectrograph built for the Keck AO system. Integral-field spectrographs produce very complicated raw data products, and OSIRIS is no exception. OSIRIS produces frames that contain up to 4096 interleaved spectra. In addition to the IFU capabilities of OSIRIS, the instrument is equipped with a parallel-field imager to monitor current AO conditions by imaging an off-axis star and evaluating its PSF. The design of the OSIRIS software was driven by the complexity of the instrument, switching the focus from simply controlling the instrument components to targeting the acquisition of usable scientific data.
OSIRIS software integrates the planning, execution, and reduction of observations. An innovation in the OSIRIS control software is the formulation of observations into 'datasets' rather than individual frames. Datasets are functional groups of frames organized by the needs and capabilities of the data reduction software (DRS). A typical OSIRIS dataset consists of dithered spectral observations, coupled with the associated imaging data from the parallel-field AO PSF imager. A Java-based planning tool enables 'sequences' of datasets to be planned and saved both prior to and during observing sessions. An execution client interprets these XML-based files, configures the hardware servers for both OSIRIS and AO, and executes the observations. The DRS, working on one dataset of raw data at a time, produces science-quality data that is ready for analysis. This methodology should lead to superior observational efficiency, decreased likelihood of observer error, minimized reduction time, and therefore, faster scientific discovery.
EMIR and OSIRIS instruments: common data acquisition software architecture
Show abstract
OSIRIS (Optical System for Imaging and low/intermediate-Resolution Integrated Spectroscopy) and EMIR (InfraRed MultiObject Spectrograph) are instruments designed to obtain images and low resolution spectra of astronomical objects in the optical and infrared domains. They will be installed on Day One and Day Two, respectively, in the Nasmyth focus of the 10-meter Spanish GTC Telescope. This paper describes the architecture of the Data Acquisition System (DAS), emphasizing the functional and quality attributes. The DAS is a component oriented, concurrent, distributed and real time system which coordinates several activities: acquisition of images coming from the detectors controller, tagging, and data communication with the required telescope system resources. This architecture will minimize efforts in the development of future DAS. Common aspects, such as the data process flow, concurrency, asynchronous/synchronous communication, memory management, and exception handling, among others, are managed by the proposed architecture. This system also allows a straightforward inclusion of variable parts, such as dedicated hardware and different acquisition modes. The DAS has been developed using an object oriented approach and uses the Adaptive Communication Environment (ACE) to be operating system independent.
Performance of the Lowell Observatory instrumentation system
Show abstract
The Lowell Observatory Instrumentation System (LOIS) is an instrument control software system with a common interface that can control a variety of instruments. Its user interface includes GUI-based, scripted, and remote program control interfaces, and supports operational paradigms ranging from traditional direct observer interaction to fully automated operation. Currently LOIS controls a total of ten instruments built at Lowell Observatory (including one for SOFIA), NASA Ames Research Center, MIT (for Magellan), and Boston University. Together, these instruments include optical and near-IR imaging, spectroscopic, and polarimetric capability. This paper reviews the actual design of LOIS in comparison to its original design requirements and implementation approaches, and evaluates its strengths and weaknesses relative to operational performance, user interaction and feedback, and extensibility to new instruments.
The Goodman spectrograph control system
Show abstract
The Goodman spectrograph is an all-refracting articulated-camera high-throughput imaging spectrograph for the SOuthern Astrophysical Research telescope (SOAR). It is designed to take advantage of Volume Phase Holographic (VPH) gratings. Due to the high level of mechanical complexity, a fully graphical control system with parallel motor control was developed. We have developed a software solution in LabVIEW that functions as a control system, component management tool, and engineering platform. A modular software design allows other instrument projects to easily adopt our approach. Distinguishing features of the control system include automated configuration changes, remote capability, and PDA control for component swaps.
The data flow system for the AAO2 controllers
Show abstract
The AAO's new AAO2 detector controllers can handle both infra-red detectors and optical CCDs. IR detectors in particular place considerable demands on a data handling system, which has to get the data from the controllers into the data processing chain as efficiently as possible, usually with significant constraints imposed by the need to read out the detector as smoothly as possible. The AAO2 controller makes use of a VME chassis that contains both a real-time VxWorks system and a UNIX system. These share access to common VME memory: the VxWorks system reads from the controller into the shared memory, and the UNIX system reads from the shared memory and processes the data. Modifications to the DRAMA data acquisition environment's bulk-data sub-system hide this use of VME shared memory behind the normal DRAMA bulk-data API. This means that the code involved can be tested under UNIX, using standard UNIX shared memory mechanisms, and then deployed on the VxWorks/UNIX VME system without any code changes being needed. When deployed, the data transfer from the controller via VxWorks into the UNIX-based data processing chain is handled by consecutive DMA transfers into and out of VME memory, easily achieving the required throughput. We discuss aspects of this system, including a number of the less obvious problems that were encountered.
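The producer/consumer hand-off through shared memory can be illustrated with ordinary OS shared memory, which is exactly the substitution the UNIX-side testing relies on. The sketch below uses Python's `multiprocessing.shared_memory` and an invented flat frame layout; it stands in for, and is much simpler than, the DRAMA bulk-data machinery.

```python
from multiprocessing import shared_memory

# Invented frame size for illustration; real detector frames are far larger.
FRAME_BYTES = 1024

def producer(name):
    """Stand-in for the VxWorks side: write a 'read-out' frame."""
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:FRAME_BYTES] = bytes(range(256)) * 4
    shm.close()

def consumer(name):
    """Stand-in for the UNIX side: copy the frame out for processing."""
    shm = shared_memory.SharedMemory(name=name)
    frame = bytes(shm.buf[:FRAME_BYTES])
    shm.close()
    return frame

# Both sides attach to the same region by name, as with common VME memory.
shm = shared_memory.SharedMemory(create=True, size=FRAME_BYTES)
try:
    producer(shm.name)
    frame = consumer(shm.name)
finally:
    shm.close()
    shm.unlink()
```

Because the producer and consumer only agree on a named region and a layout, the same consumer code runs unchanged whether the region is UNIX shared memory (for testing) or, as in the deployed system, memory shared across a VME backplane.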
The development process of the LUCIFER control software
Show abstract
In this paper we present the software development process and history of the LUCIFER (LBT NIR spectroscopic Utility with Camera and Integral-Field Unit for Extragalactic Research) multi-mode near-infrared instrument, which is one of the first-light instruments of the LBT on Mt. Graham, Arizona. The software is realised as a distributed system in Java using its remote method invocation (RMI) service. We describe the current status of the software and give an overview of the planned computer hardware architecture.
The LBT double prime focus camera control software
Show abstract
The LBT double prime focus camera (LBC) is composed of twin CCD mosaic imagers. The instrument is designed to match the double-channel structure of the LBT telescope and to exploit the parallel observing mode by optimizing one camera for the blue and the other for the red side of the visible spectrum. Because of this, LBC activity will typically consist of simultaneous multi-wavelength observations of specific targets, with both channels working at the same time to acquire and download images at different rates. The LBC Control Software is responsible for coordinating these activities by managing the scientific sensors and all the ancillary devices such as rotators, filter wheels, optical corrector focusing, housekeeping information, tracking, and active optics wavefront sensors. This is achieved using four dedicated PCs to control the four CCD controllers and one dual-processor PC to manage all the other aspects, including the instrument operator interface. The general architecture of the LBC Control Software is described, as well as solutions and details of its implementation.
Data acquirement and process system based on ethernet for multichannel solar telescope
Show abstract
For astronomical observations there are many kinds of CCD cameras serving different scientific purposes, sometimes even on a single telescope. Traditionally, each CCD camera has an individual image grabber, data processing unit, and corresponding control computer. This brings inconvenience and problems not only to system management but also to system upgrades. This paper presents a solution to this problem for the Multi-Channel Solar Telescope (MCST). All CCD cameras are connected to an Ethernet through an Ethernet interface. A server sends commands to all cameras and transfers data via TCP/IP. Each CCD camera has an embedded system to control the camera, receive commands from the server and signals from the camera, and process and store the data. This paper describes the design of such an Ethernet-controlled camera. The camera is a PULNIX TM1010, controlled by an Altera embedded system built on a Cyclone EP1C20F400C7 FPGA with an embedded Nios processor.
Architecture of the software for LAMOST fiber positioning subsystem
Show abstract
The architecture of the software which controls the LAMOST fiber positioning subsystem is described. The software is composed of two parts: a main control program running on a computer, and a unit controller program in the ROM of an MCS51 single-chip microcomputer. Its functions include Client/Server model establishment, observation planning, collision handling, data transmission, pulse generation, CCD control, image capture and processing, and data analysis. Particular attention is paid to the ways in which the different parts of the software communicate. Software techniques for multithreading, socket programming, Microsoft Windows message handling, and serial communications are also discussed.
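Collision handling between neighbouring fibre positioner units can be illustrated with a deliberately reduced geometric check (the real LAMOST logic is more involved; here each unit is collapsed to a circular footprint around its commanded target, and the clearance value is an arbitrary assumption):

```python
import math

# Toy collision check: two commanded fibre targets conflict when their
# tips would come closer than an assumed mechanical clearance.
CLEARANCE = 2.0   # minimum allowed tip separation (assumed units)

def collides(target_a, target_b, clearance=CLEARANCE) -> bool:
    dx = target_a[0] - target_b[0]
    dy = target_a[1] - target_b[1]
    return math.hypot(dx, dy) < clearance
```

A check of this shape would run during observation planning, before any pulses are generated for the unit controllers.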
Slitmasks from observer to telescope: astrometric slitmask manufacturing and control for Keck spectrographs
Show abstract
This paper documents the astrometric slitmask design, submission, fabrication, control and configuration tools used for two large spectrographs at W. M. Keck Observatory on Mauna Kea, Hawai'i. For supplemental illustrations and documents, including an online version of the poster and interactive demos, we refer the reader to http://spg.ucolick.org/Docs/SPIE/2004 .
Poster Session d: Telescope Systems
Software controls for the ATST Solar Telescope
Show abstract
The Advanced Technology Solar Telescope (ATST) is intended to be the premier solar observatory for experimental solar physics. The ATST telescope control software is designed to operate much like that of current nighttime telescopes, but will contain added functionality required for solar observations. These additions include the use of solar coordinate systems, non-sidereal track rates, solar rotation models, alternate guide signal sources, the control of thermal loads on the telescope, unusual observation and calibration motions, and serendipitous acquisition of transient objects.
These requirements have resulted in a design for the ATST telescope control system (TCS) that is flexible and well-adapted for solar physics experiments. This report discusses both the classical design of the ATST TCS and the modifications required to observe in a solar physics environment. The control and servo loops required to operate both the pointing and wavefront correction systems are explained.
LBT-AdOpt control software
Show abstract
The LBT-AdOpt subsystem is a complex machine which includes several software-controlled parts. It is essentially divided into two parts: a real-time loop which implements the actual adaptive optics control loop, from the wavefront sensor to the deformable secondary mirror, and a supervisor which performs a number of coordination and diagnostics tasks. The coordination and diagnostics tasks are essential for the proper operation of the system, both as an aid in the preparation of observations and because only continuous monitoring of dynamic system parameters can guarantee optimal performance and system safety during operation. In this paper we describe the overall software architecture of the LBT-AdOpt supervisor and discuss the functionality required for proper operation.
Porting and refurbishment of the WSS TNG control software
Show abstract
The Workstation Software System (WSS) is the high-level control software of the Italian Galileo Galilei Telescope, located on La Palma in the Canary Islands, developed at the beginning of the 1990s for HP-UX workstations. WSS may be seen as a middle-layer software system that manages communications between the real-time systems (VME), different workstations, and high-level applications, providing a uniform distributed environment. The project to port the control software from the HP workstations to a Linux environment started at the end of 2001. It aims to refurbish the control software by introducing some of the new software technologies and languages available for free under the Linux operating system. The project was realized by gradually substituting each HP workstation with a Linux PC, with the goal of avoiding major changes to the original software running under HP-UX. Three main phases characterized the project: creation of a simulated control room with several Linux PCs running WSS (to check all the functionality); insertion into the simulated control room of some HPs (to check the mixed environment); and substitution of the HP workstations in the real control room. From a software point of view, the project introduces some new technologies, such as multithreading, and the possibility to develop high-level WSS applications in almost any programming language that implements Berkeley sockets. A library to develop Java applications has also been created and tested.
Poster Session e: Frameworks
A CORBA event system for ALMA common software
Show abstract
The ALMA Common Software notification channel framework provides developers with an easy to use, high-performance, event-driven system supported across multiple programming languages and operating systems. It sits on top of the CORBA notification service and hides nearly all CORBA from developers. The system is based on a push event channel model where suppliers push events onto the channel and consumers process these asynchronously. This is a many-to-many publishing model whereby multiple suppliers send events to multiple consumers on the same channel. Furthermore, these event suppliers and consumers can be coded in C++, Java, or Python on any platform supported by ACS. There are only two classes developers need to be concerned with: SimpleSupplier and Consumer. SimpleSupplier was designed so that ALMA events (defined as IDL structures) could be published in the simplest manner possible without exposing any CORBA to the developer. Essentially all that needs to be known is the channel's name and the IDL structure being published. The API takes care of everything else. With the Consumer class, the developer is responsible for providing the channel's name as well as associating event types with functions that will handle them.
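The many-to-many push model described above can be sketched with a minimal in-process analogue (this is NOT the ACS API: SimpleSupplier and Consumer wrap the CORBA notification service and dispatch asynchronously, whereas this toy dispatches synchronously within one process):

```python
from collections import defaultdict

class Channel:
    """Toy named event channel: suppliers push typed events, and every
    handler registered for that event type on the same channel fires."""
    _channels = defaultdict(list)          # name -> [(event_type, handler)]

    def __init__(self, name):
        self.name = name

    def subscribe(self, event_type, handler):
        Channel._channels[self.name].append((event_type, handler))

    def publish(self, event):
        for etype, handler in Channel._channels[self.name]:
            if isinstance(event, etype):
                handler(event)
```

As in the ACS framework, all a supplier needs is the channel name and the event structure, and all a consumer adds is a mapping from event types to handler functions.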
The use of object-oriented techniques and CORBA in astronomical instrumentation control systems
Show abstract
Control software for astronomy must match the ever-increasing complexity of new large instrumentation projects. In order to speed the development cycle, object-oriented techniques have been used to generate loosely coupled software objects and larger-scale components that can be reused in future projects. Such object-oriented systems provide for short development cycles which can respond to changing requirements and allow for extension. The Unified Modeling Language (UML) has been used for the analysis, design and implementation of this software. A distributed system is supported by the use of an object broker such as CORBA. These techniques are being applied to the development of an instrument control system for the UK spectrograph within FMOS (Fiber-fed Multi-Object Spectrograph). This is a second-generation instrument for the Subaru Telescope of the National Astronomical Observatory of Japan.
Integrating JSky into the Large Millimeter Telescope monitor and control system
Show abstract
The Large Millimeter Telescope monitor and control system (LMTMC) is an automatically generated software system that is implemented using XML and Java. One of the requirements of the system is catalog support. Rather than developing new catalog navigation techniques and building them into the automatically generated code, we chose to use JSky, a set of Java components providing catalog and image support for astronomy. The JSky classes are extended to form new classes with additional capabilities that tighten the integration with the LMTMC system. Not only can users navigate local and web-hosted catalogs, they can also direct output from catalogs into the control panels of the system, eliminating error-prone typing or cut-and-paste operations. In addition, users can retrieve digital sky survey images from the catalogs and superimpose scientific data on them to verify correct operation.
A new GUI system for ASPRO
Show abstract
ASPRO (Astronomical Software to PrepaRe Observations) is a software tool built and maintained by the Jean-Marie Mariotti Center (JMMC) that provides the means to prepare and test the validity of observations on various existing interferometers, notably the VLTI. As part of the web development of ASPRO, our new generic GUI system is a fast, lightweight solution that brings a GUI to applications or languages lacking such capability. The toolkit is conceptually divided into three parts. The main application is considered a server. The client handles widgets, graphic presentations, and user interactions on behalf of the applications. A gateway system manages the data flow: messages generated by applications are addressed to the client, which presents the information to the user and returns values or commands. This paper describes the overall architecture and details the application interfaces. It lists the widget capabilities and graphic functions, and presents the features of our first Java client, which supports XML exchanges. Finally, we describe how several distant applications can be linked to one client, and also to each other.
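The actual ASPRO message schema is not given in the abstract; the following sketch only illustrates the shape of an XML exchange between a server application and such a GUI client, with invented element and attribute names:

```python
import xml.etree.ElementTree as ET

def make_widget_message(widget: str, name: str, label: str) -> str:
    """Build a hypothetical server-to-client message asking the GUI
    client to present one widget to the user."""
    msg = ET.Element("message")
    w = ET.SubElement(msg, "widget", {"type": widget, "name": name})
    ET.SubElement(w, "label").text = label
    return ET.tostring(msg, encoding="unicode")

def parse_reply(xml_text: str) -> dict:
    """Extract the user's returned values from a client reply."""
    root = ET.fromstring(xml_text)
    return {v.get("name"): v.text for v in root.iter("value")}
```

The gateway would simply forward documents of this kind in both directions, which is what keeps the server application free of any GUI code.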
Rapid prototyping of the Large Millimeter Telescope monitor and control system for effective user interface design
Show abstract
The monitor and control system of a telescope must provide users with a way to control certain values in the system and view other constantly changing values. Users may also want to log system values to a database and chart changes to numerical values in real time. The components of a telescope system may change and instruments may be added and removed. The set of values that the monitor and control system must provide access to may therefore change. The challenge is to provide a flexible monitor and control system to accommodate changes to the system. The Large Millimeter Telescope monitor and control system is automatically generated from a set of XML configuration files. Because the code for the system's software objects is generated automatically it is easy to include in the generated code sufficient information about the objects to inform the display. This paper will present monitor, control, logging and charting tools that automatically change to reflect changes in the components and properties of the system. These tools depend on generating software objects that include information about their own fields.
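The core idea — software objects that carry enough metadata about their own fields to drive generic displays — can be sketched as follows. The real LMTMC generator emits source code from its XML configuration files; this toy builds a class at runtime instead, and the XML schema shown is invented for the example:

```python
import xml.etree.ElementTree as ET

# Invented configuration fragment, standing in for the LMTMC XML files.
CONFIG = """
<component name="Dewar">
  <property name="temperature" type="float" monitor="true"/>
  <property name="heater_on"  type="bool"  monitor="true"/>
</component>
"""

def build_component(xml_text: str):
    """Create a class whose monitored fields are listed in the XML, so
    generic monitor/logging/charting tools can adapt automatically."""
    root = ET.fromstring(xml_text)
    props = [p.get("name") for p in root.iter("property")]
    return type(root.get("name"), (), {"monitored_fields": props})

Dewar = build_component(CONFIG)
```

When an instrument is added or removed, only the configuration changes; the display tools interrogate `monitored_fields` rather than hard-coding any property list.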
An embeddable control system for astronomical instrumentation
Show abstract
Large experimental facilities, like telescopes and focal-plane instrumentation in the astronomical domain, are becoming more and more complex and expensive, as are the control systems that manage such instruments. The general trend, as can be seen from projects realized in recent years, is clearly towards more cost-effective solutions: widespread, stable standards in the software field, and COTS (commercial off-the-shelf) components and industry standards in the hardware field.
Therefore a new generation of control system products needs to be developed, in order to help the scientific community to minimize the cost and efforts required for maintenance and control of their facilities.
In the spirit of the aforementioned requirements, and to provide a low-cost software and hardware environment, we present a working prototype of a control system based on RTAI Linux and on the ACS (ALMA Common Software) framework ported to an embedded platform.
The hardware has been chosen among COTS components: a PC/104+ platform equipped with a PMAC2A motion controller card and a commercial StrongARM single board controller.
In this way we achieved a very powerful, inexpensive and robust real-time control system which can be used as a general purpose building block in the design of new instruments and could also be proposed as a standard in the field.
Poster Session f: Software Engineering
TWiki as a platform for collaborative software development management
Show abstract
The software development process in Green Bank is managed in six-week development cycles, where two cycles fall within one quarter. Each cycle, a Plan of Record is devised which outlines the team's commitments, deliverables, technical leads and scientific sponsors. To be productive and efficient, the team must not only be able to track its progress towards meeting commitments, but also to communicate and circulate the information that will help it meet its goals effectively. In the early summer of 2003, the Software Development Division installed a wiki web site using the TWiki product to improve the effectiveness of the team. Wiki sites contain web pages that are maintainable using a web interface by anyone who becomes a registered user of the site. Because the site naturally supports group involvement, the Plan of Record on the wiki now serves as the central dashboard for project tracking each development cycle. As an example of how the wiki improves productivity, software documentation is now tracked as evidence of the software deliverable. Written status reports are thus not required when the Plan of Record and associated wiki pages are kept up to date. The wiki approach has been quite successful in Green Bank for document management as well as software development management, and has rapidly extended beyond the bounds of the software development group for information management.
Poster Session g: Modeling, Simulation, and Control
The real-time control system of NAOMI
Show abstract
The Nasmyth Adaptive Optics for Multi-purpose Instrumentation (NAOMI) is the common-user Adaptive Optics (AO) system on the 4.2m William Herschel Telescope (WHT) operated by the Isaac Newton Group of Telescopes (ING).
The system comprises a 76-element Deformable Mirror (DM) with 228 degrees of freedom and Strain Gauge (SG) feedback capabilities, and an 8x8 Shack-Hartmann Wavefront Sensor (WFS). The wavefront corrector and wavefront sensor are controlled and coordinated by the third key component of the adaptive optics system, the Real-Time Control System (RTCS). The RTCS manages and processes interrupts and inputs, including WFS image data and SG feedback signals. It also provides calculated drive signals for the system's DM and Fast Steering Mirror (FSM), as well as debug, visualisation and logging data to the user's workstation.
This paper contains a description of both the control hardware and the software architecture of the RTCS, including the WFS and SG real-time control loops. Each loop contains 8 Texas Instruments TMS320C44 digital signal processors, housed on DBV44 cards seated inside the NAOMI Real-Time Control Rack (RTCR) VME crate. A description of the complete processor architecture and ring structure is provided, detailing each processor's connections and external hardware communications.
The described software architecture incorporates Bulk Synchronisation Parallelism (BSP) methodology, Interrupt Service Routines (ISRs), "General Purpose" (GP) messaging, Lovetrains, Cowcatchers, the Data Transfer Mechanism (DTM) and Parameter Block Transactions (PBT).
The paper concludes by describing planned enhancements to the current RTCS.
Flexible pointing models for large Arecibo-type optical telescopes
Show abstract
The modern-day computing power-to-cost ratio has allowed flexible yet complex mathematical models to be implemented in many arenas. Current examples are the Southern African Large Telescope and the Hobby-Eberly Telescope, Arecibo-type large optical telescopes which have a moving prime focus confined to a spherical surface. The complexity of the moving tracking mechanism, the stationary self-aligning mirror, and the scale of the structures involved in such telescopes have led to the requirement for more flexible telescope mount models. In this way the combination of low cost and a requirement for flexibility has led to the design of new mathematical models for telescopes of this type.
A case in point is the Southern African Large Telescope: owing to the specific design and calibration requirements during the design and commissioning of the telescope, an adaptable mathematical model is required. Such a model should have multiple, easily accessible entry points and flexible conversion paths between the various coordinate systems involved. In this paper the authors present an overview of the special requirements of the Southern African Large Telescope and the eventual design and implementation of a mathematical model to cope with these requirements. Topics discussed include: tracking challenges on SALT; layering of the complexity of the mathematical model; software design and access to mathematical parameters; analytical and statistical tools for model design; and design consistency between coordinate conversions.
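For orientation, a conventional alt-az pointing correction with a few textbook terms (index errors IA/IE, azimuth-axis tilt AN/AW, collimation CA) looks like the sketch below. This is NOT the SALT model the paper describes — the SALT model is considerably richer and layered — and sign conventions for these terms vary between implementations:

```python
import math

def pointing_offsets(az_deg, el_deg, IA, IE, AN, AW, CA):
    """Return (d_az, d_el) corrections for a simple textbook alt-az
    pointing model; coefficients are in the same angular units as the
    returned offsets.  Signs follow one common convention."""
    az, el = math.radians(az_deg), math.radians(el_deg)
    d_az = (-IA
            - AN * math.sin(az) * math.tan(el)
            - AW * math.cos(az) * math.tan(el)
            - CA / math.cos(el))
    d_el = IE + AN * math.cos(az) - AW * math.sin(az)
    return d_az, d_el
```

A flexible mount model of the kind the paper advocates generalizes this: each term becomes a pluggable layer, and the coefficients become accessible, individually calibratable parameters.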
Pointing calibration of the SMA antennas
Show abstract
The Submillimeter Array (SMA) is a new radio interferometer consisting of 8 antennas of 6 meters diameter each, recently brought into operation at the summit of Mauna Kea in Hawaii. The antennas currently operate in the 230, 345 and 690 GHz bands and have sufficient surface accuracy to allow operation at 890 GHz. At the highest frequencies, the FWHM primary beam size of each antenna will be about 12", which imposes a stringent single-dish pointing accuracy requirement of 1". We summarize the current status of pointing of the SMA antennas and the methods we have implemented to derive the pointing model parameters. We discuss the stability of the pointing models over time scales of several weeks. The difference between the radio and optical pointing offsets is a function of elevation only, and can be calibrated by observing a common source or a pair of neighboring sources. We present results of such a calibration and its application to improving radio pointing performance during submillimeter observations.
The use of RF signal simulation in a radio telescope control system
Show abstract
In addition to reliably controlling hardware, a control system should instill confidence by clearly reflecting the user's commands. If the control system of a radio telescope is capable of simulating the effects of the electronics on the RF signal, the user can be provided with practical descriptions of his or her observing configurations. A simulation allows a direct characterization of the RF signal representation rather than a raw list of attenuator, mixer or filter settings. However, simulation is practical only if it can be kept current and accurate: it must keep pace with both engineering and operational modifications. This is possible if the software interfaces for each telescope device are identical, permitting hardware enhancements in the simulation to be implemented as formulated additions rather than as changes. The design of the portable monitor and control system, Ygor, used by the Green Bank Telescope treats each telescope device as an independent unit with identical control interfaces. Differences among devices are reflected by distinct sets of control Parameters. Those Parameter subsets that affect the RF signal representation are passed on to a simulation program which computes basic frequency characteristics throughout the telescope. The signal descriptions are provided to the observers as feedback, both in the user interfaces and as part of their data.
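The frequency bookkeeping this describes can be sketched as devices that share one interface and each report how they transform the signal representation. The device classes and values below are invented for the example — they are not the Ygor Parameter scheme:

```python
class Mixer:
    """Shifts a sky frequency to an intermediate frequency."""
    def __init__(self, lo_hz, lower_sideband=True):
        self.lo_hz, self.lower_sideband = lo_hz, lower_sideband
    def propagate(self, freq_hz):
        return abs(freq_hz - self.lo_hz) if self.lower_sideband \
               else freq_hz + self.lo_hz

class Filter:
    """Passes the signal only inside its passband."""
    def __init__(self, lo_cut_hz, hi_cut_hz):
        self.band = (lo_cut_hz, hi_cut_hz)
    def propagate(self, freq_hz):
        lo, hi = self.band
        if not (lo <= freq_hz <= hi):
            raise ValueError("signal falls outside the filter passband")
        return freq_hz

def simulate(chain, sky_freq_hz):
    """Push a frequency through every device in the chain in order."""
    for device in chain:
        sky_freq_hz = device.propagate(sky_freq_hz)
    return sky_freq_hz
```

Because every device exposes the same `propagate` interface, adding new hardware to the simulation is an addition of one class rather than a change to the chain logic — the point the abstract makes about identical software interfaces.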
Control of the Hobby-Eberly Telescope primary mirror array with the segment alignment maintenance system
Show abstract
The Segment Alignment Maintenance System (SAMS) is a control system that maintains the alignment of the 91-segment Hobby-Eberly Telescope (HET) primary mirror array. The system was developed by Blue Line Engineering (Colorado Springs, CO) and NASA Marshall Space Flight Center (Huntsville, AL). The core of the system is a set of 480 inductive edge sensors which measure the relative shear between adjacent segments. The relative shear is used to calculate segment tip/tilt and piston corrections. Although the system has dramatically improved the performance of the HET, it does not meet its error budget because of thermal drifts in the sensors. The system is now sufficiently stable that it routinely requires only one primary mirror alignment at the beginning of the night. We describe methods to calibrate this sensor drift.
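A one-dimensional toy conveys the reconstruction idea: edge sensors read piston differences between neighbouring segments, and segment pistons are recovered by integrating those differences with one segment held as the reference. The real 91-segment, 480-sensor system solves a two-dimensional least-squares problem that also yields tip/tilt:

```python
def reconstruct_pistons(edge_readings):
    """edge_readings[i] = piston(i+1) - piston(i) for a 1-D row of
    segments; segment 0 is taken as the zero-piston reference."""
    pistons = [0.0]
    for diff in edge_readings:
        pistons.append(pistons[-1] + diff)
    return pistons
```

This also makes the thermal-drift problem concrete: a slowly drifting offset in any sensor reading integrates directly into every downstream segment's inferred piston, which is why calibrating the drift matters.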
Poster Session h: Remote Observing
Generic control of robotic telescopes
Show abstract
Fully robotic observatories open a cheap but nevertheless compelling window for scientific research. Though the telescopes are small and, from the engineering point of view, mostly simple, the challenging part lies in the software necessary for robotic operation. If one plans the development carefully and tries to be as generic as possible, it is possible to create a basic software package that can be used on almost any robotic observatory. The aim of this article is to introduce the software used on the STELLA robotic telescope, operated by the Astrophysikalisches Institut Potsdam. Emphasis is put on issues concerning the adaptation of the software to different robotic telescopes. The entire package is written in Sun's Java 1.3. It is expected to be released under the GNU public license later this year.
A secure and reliable monitor and control system for remote observing with the Large Millimeter Telescope
Show abstract
Remote access to telescope monitor and control capabilities necessitates strict security mechanisms to protect the telescope and instruments from malicious or unauthorized use, and to prevent data from being stolen, altered, or corrupted. The Large Millimeter Telescope (LMT) monitor and control system (LMTMC) utilizes the Common Object Request Broker Architecture (CORBA) middleware technology to connect remote software components.
The LMTMC provides reliable and secure remote observing by automatically generating SSLIOP-enabled CORBA objects. TAO, the ACE open source Object Request Broker (ORB), now supports secure communications by implementing the Secure Socket Layer Inter-ORB Protocol (SSLIOP) as a pluggable protocol. This capability supplies the LMTMC with client and server authentication, data integrity, and encryption. Our system takes advantage of the hooks provided by TAO SSLIOP to implement X.509 certificate-based authorization. This access control scheme includes multiple authorization levels to enable granular access control.
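The abstract does not specify how certificates map to authorization levels, so the table, level names, and matching rule below are all assumptions — a sketch of what granular, multi-level access control keyed on an already-verified X.509 subject name could look like:

```python
# Hypothetical mapping from X.509 subject-DN components to access
# levels; authentication (certificate verification) is assumed to have
# already happened in the SSLIOP layer.
LEVELS = {"observe": 1, "configure": 2, "engineer": 3}

ACL = {
    "OU=Observers":   "observe",
    "OU=Staff":       "configure",
    "OU=Engineering": "engineer",
}

def authorization_level(subject_dn: str) -> int:
    """Return the highest level granted by any matching DN component;
    unknown subjects get level 0 (no access)."""
    granted = [LEVELS[role] for field, role in ACL.items()
               if field in subject_dn]
    return max(granted, default=0)
```

Each remotely invocable operation would then declare a minimum level, and the server would compare it against the caller's level before dispatching.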
Remote secure observing for the Faulkes Telescopes
Show abstract
Since the Faulkes Telescopes are to be used by a wide variety of audiences, both powerful engineering level and simple graphical interfaces exist giving complete remote and robotic control of the telescope over the internet. Security is extremely important to protect the health of both humans and equipment. Data integrity must also be carefully guarded for images being delivered directly into the classroom. The adopted network architecture is described along with the variety of security and intrusion detection software. We use a combination of SSL, proxies, IPSec, and both Linux iptables and Cisco IOS firewalls to ensure only authenticated and safe commands are sent to the telescopes. With an eye to a possible future global network of robotic telescopes, the system implemented is capable of scaling linearly to any moderate (of order ten) number of telescopes.
Remote observing capability with Subaru Telescope
Show abstract
We have implemented a remote observing function in the Subaru Telescope Observation Software System (SOSS). The Subaru telescope has three observing sites: the telescope's local site and two remote observing sites, the Hilo base facility in Hawaii and the NAOJ headquarters in Mitaka, Japan. Our remote observing system is designed to allow operation not only from any one of the three observing sites, but also from two or more sites concurrently. Considering the allowance for delay in observing operations and the bandwidth of the network between the telescope site and the remote observing sites, three types of interfaces (protocols) have been implemented. In remote observing mode, we use a socket interface for command and status communication, VNC for ready-made applications and pop-up windows, and FTP for the actual data transfer. All images taken at the telescope site are transferred to both remote observing sites immediately after acquisition to enable the observers to evaluate the data. We present the current status of remote observations with the Subaru telescope.
Fully integrated control system for the Discovery Channel Telescope
Show abstract
The Discovery Channel Telescope control system incorporates very demanding requirements regarding fast serviceability and remote operation of the telescope itself as well as facility management tools and security systems. All system capabilities are accessible from a central user interface anywhere, anytime. Although the mature stage of telescope control technology allows focusing more on science rather than on telescope operation, the time and effort needed to integrate a large suite of software modules still impose a challenge to which reusing existing software is one of the answers, especially for advanced subsystems with distributed collaborative development teams. DCT's large CCD camera presents enormous computational problems due to the overwhelming amount of generated data. Properly implemented preventive maintenance and reliability aspects of telescope operation call for historical and real time data in order to determine behavioral trends and permit early detection of failure factors. In this new approach utility monitoring and power conditioning and management are integral parts of the control system. Proposed real time spectral analysis system of sound and vibration of key mount components allows tracking mechanical component deterioration that could lead to performance degradation. Survival control cells and unmanned operation systems are other options being explored for operation in harsh climatic conditions.
Control of the TSU 2-m automatic telescope
Show abstract
Tennessee State University is operating a 2-m automatic telescope for high-dispersion spectroscopy. The alt-azimuth telescope is fiber-coupled to a conventional echelle spectrograph with two resolutions (R = 30,000 and 70,000). We control this instrument with four computers running Linux and communicating over Ethernet via the UDP protocol. A computer physically located on the telescope handles the acquisition and tracking of stars. We avoid the need for real-time programming in this application by periodically latching the positions of the axes in a commercial motion controller and the time in a GPS receiver. A second (spectrograph) computer sets up the spectrograph and runs its CCD, a third (roof) computer controls the roll-off roof and front flap of the telescope enclosure, and a fourth (executive) computer makes decisions about which stars to observe and when to close the observatory in bad weather. The only human intervention in the telescope's operation involves changing the observing program, copying data back to TSU, and running quality-control checks on the data. It has been running reliably in this completely automatic, unattended mode for more than a year, with all day-to-day administration carried out over the Internet. To support automatic operation, we have written a number of useful tools to predict and analyze what the telescope does. These include a simulator that predicts roughly how the telescope will operate on a given night, a quality-control program that parses log files from the telescope and identifies problems, and a rescheduling program that calculates new priorities to keep the frequency of observation of the various stars roughly as desired. We have also set up a database to keep track of the tens of thousands of spectra we expect to obtain each year.
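The rescheduling program's actual formula is not given; a cadence-based rule of the kind it could use — priority grows with how overdue a star is relative to its desired observing cadence — can be sketched as:

```python
def priority(days_since_last_obs: float, desired_cadence_days: float) -> float:
    """Ratio > 1 means the star is overdue; < 1 means recently observed."""
    return days_since_last_obs / desired_cadence_days

def rank_targets(targets):
    """targets: {name: (days_since_last_obs, cadence_days)}.
    Returns target names, most overdue first."""
    return sorted(targets, key=lambda n: priority(*targets[n]), reverse=True)
```

A rule like this self-corrects: stars skipped for weather or scheduling pressure automatically rise in priority until they are observed again.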
Poster Session i: Data Management
The JCMT observing queue and recipe sequencer
Show abstract
The James Clerk Maxwell Telescope (JCMT), the world's largest sub-mm telescope, will soon be switching operations from a VAX/VMS-based control system to a new, Linux-based Observatory Control System (OCS). A critical part of the OCS is the set of tasks associated with the observation queue and the observing recipe sequencer: 1) the JCMT observation queue task, 2) the JCMT instrument task, 3) the JCMT Observation Sequencer (JOS), and 4) the OCS console task. The JCMT observation queue task serves as a staging area for observations that have been translated from the observer's science program into a form suitable for the various OCS subsystems. The queue task operates by sending the observation at the head of the queue to the JCMT instrument task and then waits for the astronomer to accept the data before removing the observation from the queue. The JCMT instrument task is responsible for running up the set of tasks required to observe with a particular instrument at the JCMT and passing the observation on to the JOS. The JOS is responsible for executing the observing recipe, pausing/continuing the recipe when commanded, and prematurely ending or aborting the observation when commanded. The OCS console task provides the user with a GUI window from which they can control and monitor the observation queue and the observation itself. This paper shows where the observation queue and recipe sequencer fit into the JCMT OCS, presents the design decisions that resulted in the tasks being structured as they are, describes the external interfaces of the four tasks, and details the interactions between the tasks.
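The queue-task behaviour described above — the head observation is dispatched but only removed once the astronomer accepts the data — can be modelled minimally (a sketch of the semantics, not the JCMT implementation):

```python
from collections import deque

class ObservationQueue:
    """Staging queue: dispatch peeks at the head; only acceptance pops."""
    def __init__(self, observations):
        self._queue = deque(observations)
        self._in_progress = None

    def send_head(self):
        """Hand the head observation to the instrument task."""
        if self._in_progress is None and self._queue:
            self._in_progress = self._queue[0]   # peek, do not pop yet
        return self._in_progress

    def accept(self):
        """Astronomer accepts the data: now remove the head observation."""
        self._queue.popleft()
        self._in_progress = None

    def __len__(self):
        return len(self._queue)
```

Keeping the observation on the queue until acceptance means a rejected or failed observation can simply be re-sent, with no separate retry bookkeeping.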
AQuA: an automatic pipeline for fast transients detection
Show abstract
AQuA (Automatic QUick Analysis) is software designed to manage data reduction and prompt detection of near-infrared (NIR) afterglows of GRBs triggered by dedicated instruments onboard satellites and observed with the robotic telescope REM. NIR observations of the early afterglows of GRBs are of crucial importance for GRB science, revealing even optically obscured or high-redshift events. The core of the pipeline is an algorithm for automatic transient detection, based on a decision tree that is continuously upgraded through a Bayesian estimator (DecOAR). It assigns different reliability coefficients to every transient candidate and delivers an alert when a transient is found above the reliability threshold.
Image analysis algorithms for critically sampled curvature wavefront sensor images in the presence of large intrinsic aberrations
Show abstract
This paper describes the image analysis algorithm developed for VISTA to recover wavefront information from curvature wavefront sensor images. This technique is particularly suitable in situations where the defocused images have a limited number of pixels and the intrinsic or null aberrations contribute significantly to distorting the images. The algorithm implements the simplex method of Nelder and Mead. The simplex algorithm generates trial wavefront coefficients that are fed into a ray-tracing algorithm, which in turn produces a pair of defocused images. These trial defocused images are then compared against the images obtained from the sensor using a fitness function. The value returned by the fitness function is fed back to the simplex algorithm, which then decides how the next set of trial coefficients is produced.
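The fitting loop can be sketched with a stripped-down simplex search (reflection and shrink only — no expansion or contraction steps) and a toy linear forward model standing in for the paper's ray tracer and defocused-image comparison:

```python
def forward_model(coeffs):
    """Toy stand-in for ray tracing: maps 2 'wavefront coefficients'
    to a few 'image' values."""
    a, b = coeffs
    return [a + b, a - b, 2 * a]

def fitness(coeffs, measured):
    """Sum-of-squares mismatch between trial and measured images."""
    return sum((m - t) ** 2 for m, t in zip(measured, forward_model(coeffs)))

def nelder_mead(f, start, step=0.5, iters=400):
    """Simplified Nelder-Mead: reflect the worst vertex through the
    centroid of the rest; if that fails, shrink towards the best."""
    n = len(start)
    simplex = [list(start)] + [
        [start[j] + (step if j == i else 0.0) for j in range(n)]
        for i in range(n)]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        reflected = [2 * centroid[j] - worst[j] for j in range(n)]
        if f(reflected) < f(simplex[-2]):
            simplex[-1] = reflected          # accept the reflected point
        else:                                # otherwise shrink towards best
            simplex = [best] + [
                [(p[j] + best[j]) / 2 for j in range(n)]
                for p in simplex[1:]]
    return min(simplex, key=f)
```

In the real algorithm the forward model is a ray trace producing a pair of defocused images, and the fitness function compares those against the sensor frames; the outer loop structure is the same.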
Planck/LFI DPC pipeline integration and testing
Show abstract
A geographically distributed software project needs a well-defined software integration and development plan to avoid extra work in the pipeline creation phase. Here we describe the rationale in the case of the Planck/LFI DPC project, and what was designed and developed to build the integration and testing environment.
Design and implementation of the spectra reduction and analysis software for LAMOST Telescope
Show abstract
The Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST) will soon be set up and tested. A fully automated software system for reducing and analyzing the spectra has to be developed before the telescope is finished. A requirements analysis has been carried out and a data model has been designed. This paper gives an outline of the software design, including the data design, the architectural and component design, and the user interface design, as well as the database for the system. The paper also presents an example algorithm, PCAZ, for redshift determination.
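The core idea of template-based redshift determination, of which PCA-based methods such as PCAZ are a refinement (the single template below would be replaced by a principal-component basis), can be sketched with synthetic data. All spectra, grids, and names here are invented for illustration.

```python
import numpy as np

# Toy sketch of redshift determination by template matching in
# log-wavelength: a redshift corresponds to a rigid shift of
# log10(1 + z), so a grid search over trial shifts recovers z.

loglam = np.linspace(3.5, 3.9, 2000)  # log10(wavelength) grid

def rest_template(ll):
    # Toy rest-frame spectrum: flat continuum plus one emission line.
    return 1.0 + 5.0 * np.exp(-0.5 * ((ll - 3.60) / 0.001) ** 2)

true_z = 0.12
# Redshifting shifts the spectrum by log10(1 + z) in log-wavelength.
observed = rest_template(loglam - np.log10(1.0 + true_z))

def estimate_z(obs, z_grid):
    # Grid search: pick the trial z whose shifted template fits best.
    chi2 = [np.sum((obs - rest_template(loglam - np.log10(1.0 + z))) ** 2)
            for z in z_grid]
    return z_grid[int(np.argmin(chi2))]

z_grid = np.linspace(0.0, 0.3, 301)   # step 0.001 in z
print(estimate_z(observed, z_grid))   # recovers a value close to 0.12
```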
GO-CART: the GOHSS Calibration and Reduction Tool
Show abstract
The raw images produced by infrared multi-echelle fiber spectrographs are complex to process, extract, and calibrate. Available procedures are in general not exhaustive, or assume deep familiarity with command-line environments. For GOHSS, a fiber-fed high-resolution NIR spectrograph to be mounted at the Italian National Telescope TNG, we have therefore developed GO-CART (GOhss Calibration and Reduction Tool), a tool that automatically performs every stage from the assessment of the master instrument calibrations up to the final sky-subtracted scientific spectra, following predefined or user-written pipelines, with an error propagation analysis envisaged at each step of the process. GO-CART joins the powerful graphical and imaging capabilities of IDL with the widely acknowledged performance of the IRAF spectral extraction packages within an easy-to-use environment. It is fully configurable for use with different instruments and can work on any platform on which IDL and IRAF run. Smart data organization and proper file-naming rules allow convenient management of any final or intermediate result. GO-CART also provides specific capabilities to model and subtract scattered light from densely packed echelle images, and a custom optimal-matching algorithm to perform residual-free OH subtraction.
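Per-step error propagation, as envisaged in the GO-CART pipelines, can be illustrated with a minimal sketch: each reduction step carries a variance array alongside the data. The operations and names below are illustrative assumptions, not GO-CART's actual API (which is written in IDL).

```python
import numpy as np

# Sketch of per-step error propagation: each step transforms both the
# data and its variance, so uncertainties in the final spectrum can be
# traced back through the pipeline.

def subtract(data_a, var_a, data_b, var_b):
    # Variances add under subtraction of independent quantities.
    return data_a - data_b, var_a + var_b

def scale(data, var, factor):
    # Variance scales with the square of a multiplicative factor.
    return data * factor, var * factor ** 2

science, var_sci = np.array([10.0, 12.0]), np.array([1.0, 1.0])
sky, var_sky = np.array([2.0, 3.0]), np.array([0.5, 0.5])

spec, var = subtract(science, var_sci, sky, var_sky)  # sky subtraction
spec, var = scale(spec, var, 2.0)                     # flux calibration
print(spec, var)  # [16. 18.] [6. 6.]
```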
Poster Session d: Telescope Systems
Software and control system for SOAR Telescope active optical system (AOS)
Show abstract
The SOAR Telescope project has completed development of the Active Optical System (AOS) software. This paper describes the two Computer Software Components (CSCs) that make up the SOAR/AOS software. The first CSC is the Operations Control (OpCon) Software, which contains all of the software necessary for running and monitoring the Active Optics Control System (AOCS). This includes the software to run the Primary Mirror Assembly (PMA); to command the Secondary Mirror Assembly (SMA) and the Turret Controller; to set the modes of the Tip/Tilt mirror; and to monitor and report status from the status data acquisition board. It also includes the command and data interface to the Telescope Control System (TCS), the AOCS state logic, and the input routines for reading the database of command vectors. The second CSC is the Database Generation (DBGen) Software, which generates the database of PM force vectors and SM command vectors using either theoretical data or measured wavefront data.
This paper focuses particularly on the PMA actuator control software. We describe the use of Nastran modeling data for initial deployment of the telescope and the concept for using actual measured data for calibration optimization. We also describe the software implementation designed to allow the actuator control system to meet its timing requirements during telescope slew and to meet the primary figure requirements during telescope observations.
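How a database of primary-mirror force vectors might be applied at run time can be sketched as a lookup with interpolation against telescope pointing. This is a hypothetical illustration only: the SOAR/AOS database format, the actuator count, the tabulated values, and the choice of elevation as the lookup key are all invented for this example.

```python
import numpy as np

# Hypothetical sketch: force vectors tabulated against telescope
# elevation are interpolated at the current pointing before being
# sent to the primary-mirror actuators. All values are invented.

elevations = np.array([30.0, 60.0, 90.0])  # tabulated elevations (deg)
# One force vector (3 actuators in this toy example) per elevation.
force_table = np.array([
    [100.0, 95.0, 102.0],
    [80.0, 78.0, 81.0],
    [60.0, 61.0, 60.0],
])

def forces_at(elevation):
    # Linear interpolation, one actuator channel at a time.
    return np.array([np.interp(elevation, elevations, force_table[:, i])
                     for i in range(force_table.shape[1])])

print(forces_at(45.0))  # midway between the 30- and 60-degree rows
```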