- Front Matter: Volume 7738
- Systems Engineering for Ground-Based Telescopes I
- Modeling of Space Telescopes I
- Modeling of Ground-Based Telescopes I
- Modeling of Ground-Based Telescopes II
- Project Management I
- Project Management II
- Project Management III
- Systems Engineering for Space Telescopes
- Modeling of Space Telescopes II
- Systems Engineering for Ground-Based Telescopes II
- Systems Engineering for Ground-Based Telescopes III
- Poster Session: Modeling
- Poster Session: Systems Engineering
- Poster Session: Project Management
Front Matter: Volume 7738
This PDF file contains the Front Matter associated with SPIE Proceedings Volume 7738, including Title page, Copyright information, Table of Contents, and Conference Committee listing.
Systems Engineering for Ground-Based Telescopes I
System safety and hazard analysis for the Advanced Technology Solar Telescope
The Advanced Technology Solar Telescope (ATST) is a four-meter-class instrument being built to perform diffraction-limited observations of the sun. This paper describes how ATST has dealt with system safety, and in particular hazard
analysis during the design and development (D&D) phase. For ATST the development of a system safety plan and the
oversight of the hazard analysis fell, appropriately, to systems engineering. We have adopted the methodology described
in MIL-STD-882E, "Standard Practice for System Safety." While these methods were developed for use by the U.S.
Department of Defense, they are readily applicable to the safety needs of telescope projects. We describe the details of
our process, how it was implemented by the ATST design team, and some useful lessons learned. We conclude with a
discussion of our safety related plans during the construction phase of ATST and beyond.
Optical and system engineering in the development of a high-quality student telescope kit
The Galileoscope student telescope kit was developed by a volunteer team of astronomers, science education experts,
and optical engineers in conjunction with the International Year of Astronomy 2009. Over 180,000 units of this refracting telescope have been produced and distributed, with a further 25,000 in production. The telescope was
designed to be able to resolve the rings of Saturn and to be used in urban areas. The telescope system requirements,
performance metrics, and architecture were established after an analysis of current inexpensive telescopes and student
telescope kits. The optical design approaches used in the various prototypes and the optical system engineering tradeoffs
will be described. Risk analysis, risk management, and change management were critical as was cost management
since the final product was to cost around $15 (but had to perform as well as $100 telescopes). In the system engineering
of the Galileoscope a variety of analysis and testing approaches were used, including stray light design and analysis
using the powerful optical analysis program FRED.
MUSE instrument global performance analysis
MUSE (Multi Unit Spectroscopic Explorer) is a second-generation instrument developed for ESO (European Southern Observatory) and will be installed on the VLT (Very Large Telescope) in 2012. The MUSE instrument can simultaneously record 90,000 spectra in the visible wavelength range (465-930 nm) across a 1×1 arcmin² field of view, thanks to 24 identical Integral Field Units (IFUs). A collaboration of 7 institutes has successfully passed the Final Design Review and is currently working on the first sub-assemblies. The performance budget has been shared among 5 main functional sub-systems. The Fore Optics sub-system derotates and anamorphoses the VLT Nasmyth focal plane image; the Splitting and Relay Optics, together with the Main Structure, feed each IFU with 1/24th of the field of view.
Each IFU combines a 3D function, provided by an image slicer system and a spectrograph, with a detection function provided by a 4k×4k CCD cooled down to 163 K. The 5th function is the calibration and data reduction of the instrument. This article describes the breakdown of performance between these sub-systems (throughput, image quality, etc.) and underlines the constraining parameters of the interfaces, both internal and with the VLT. The validation of all these requirements is a critical task, started a few months ago, which requires clear traceability and performance analysis.
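A throughput budget of this kind rolls up multiplicatively: the end-to-end transmission is the product of the sub-system contributions, so each allocation can be traded against the others. A minimal sketch of such a roll-up, with purely illustrative sub-system names and values (these are assumptions, not MUSE figures):

```python
# Toy end-to-end throughput roll-up: the system value is the product of
# the sub-system allocations. Names and values are illustrative only.
budget = {
    "fore_optics": 0.92,
    "splitting_and_relay": 0.90,
    "image_slicer": 0.88,
    "spectrograph": 0.75,
    "detector_qe": 0.85,
}

total = 1.0
for subsystem, transmission in budget.items():
    total *= transmission

print(f"end-to-end throughput: {total:.3f}")  # 0.465
```

Image-quality terms combine differently (typically root-sum-square of wavefront contributions), which is why each budget line needs its own combination rule.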
Delivered image quality budget for the Discovery Channel Telescope
The Discovery Channel Telescope (DCT) is a 4.3-meter telescope designed for dual optical configurations: an f/6.1, 0.5° FoV Ritchey-Chrétien prescription and a corrected f/2.3, 2° FoV prime focus. The DCT is expected to
typically deliver sub-arcsecond images, with a telescope and local seeing contribution of <0.28" FWHM at the R-C
focus and <0.38" FWHM at the prime focus. The Delivered Image Quality (DIQ) budget considers errors from design
residuals, manufacturing, environmental effects, and control system limitations. We present an overview of the
analytical methods used, including sensitivity analysis for determining collimation effects, and a summary of
contributors to the overall system performance.
Modeling of Space Telescopes I
Effects of thermal deformations on the sensitivity of optical systems for space application
In this paper the results of the thermo-elastic analysis performed on the Stereo Imaging Channel of the imaging system SIMBIO-SYS for the BepiColombo ESA mission to Mercury are presented. The aim of the work is to determine the expected stereo reconstruction accuracy for the surface of the planet Mercury, the target of the BepiColombo mission, given the optics misalignments and deformations induced by temperature changes during the mission lifetime. The camera optics and their mountings are modeled and processed by a thermo-mechanical Finite Element Model (FEM) program, which reproduces the expected thermo-elastic variations of the optics and structure over the instrument's foreseen operative temperature range, i.e. between -20 °C and 30 °C. The FEM outputs are processed by a MATLAB optimization routine: a non-linear least squares algorithm is used to determine the surface equation (plane, sphere, nth-order polynomial) that best fits each deformed optical surface. The fitted surfaces are then imported directly into the ZEMAX raytracing code for sequential raytrace analysis.
Variations of the optical center position, boresight direction, focal length and distortion are then computed, together with the corresponding image shift on the detector.
The overall analysis favors kinematic constraints over the classical glued solution for the optical element mountings, as they minimize the uncertainty in the Mercury Digital Terrain Model (DTM) reconstructed via a stereo-vision algorithm based on triangulation from the two optical channels.
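The surface-fitting step described above can be sketched with a nonlinear least-squares solver. The sphere model, node count, radius, and noise level below are illustrative assumptions, not values from the SIMBIO-SYS analysis:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_sphere(points):
    """Best-fit sphere (center and radius) to a cloud of FEM surface nodes.

    points: (N, 3) array of deformed node coordinates.
    Minimizes the radial residuals ||p - center|| - r.
    """
    def residuals(params):
        center, r = params[:3], params[3]
        return np.linalg.norm(points - center, axis=1) - r

    c0 = points.mean(axis=0)                         # initial center guess
    r0 = np.linalg.norm(points - c0, axis=1).mean()  # initial radius guess
    sol = least_squares(residuals, x0=np.append(c0, r0))
    return sol.x[:3], sol.x[3]

# Synthetic "deformed mirror": nodes on a spherical cap of radius 500 mm,
# perturbed by ~1 um of noise to mimic thermo-elastic displacements.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 0.5, 2000)        # polar angle over the cap
phi = rng.uniform(0.0, 2.0 * np.pi, 2000)
pts = 500.0 * np.column_stack([np.sin(theta) * np.cos(phi),
                               np.sin(theta) * np.sin(phi),
                               np.cos(theta)])
pts += rng.normal(scale=1e-3, size=pts.shape)  # 1 um rms noise (units: mm)

center, radius = fit_sphere(pts)
print(round(radius, 2))  # recovers a radius close to 500.0
```

The same residual structure extends to higher-order polynomial surfaces by swapping the model inside `residuals`.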
Investigation of disturbance effects on space-based weak lensing measurements with an integrated model
Many astrophysicists consider the accelerated expansion of the universe, attributed to a field called dark energy, the greatest challenge to solve in cosmology. Gravitational weak lensing has been identified as one of the best methods to provide constraints on dark energy model parameters. Weak lensing introduces image shear, which can be measured statistically from a large sample of galaxies by determining their ellipticity parameters.
Several papers have suggested that a goal in the ability to measure shape biases should be <0.1% - this goal
will be reviewed in terms of the observatory "transfer function" with comments interspersed regarding allocation
inconsistencies. Time-varying effects introduced by thermoelastic deformations and vibration add bias and noise
to the galaxy shape measurements. This is compounded by the wide field-of-view required for the weak lensing
science, which leads to a spatially varying point spread function (PSF). To fully understand these effects, a detailed integrated model (IM) was constructed which includes a coupled scene/structure/optics/disturbance
model. This IM was applied to the Joint Dark Energy Mission (JDEM) Omega design concept. Results indicate
that previous models of vibration disturbance effects have been too simplified and the allocation for vibration
needs to be re-evaluated. Furthermore, because of the complicated processing required to accurately extract
shape parameters, it is argued that an IM is needed to maximize science return by iterating the telescope/instrument design against mission cost constraints, the processing effectiveness of shape-extraction algorithms, instrument calibration techniques, and measurement desensitization of observatory effects.
The Kepler end-to-end model: creating high-fidelity simulations to test Kepler ground processing
The Kepler mission is designed to detect the transit of Earth-like planets around Sun-like stars by observing
100,000 stellar targets. Developing and testing the Kepler ground-segment processing system, in particular the
data analysis pipeline, requires high-fidelity simulated data. This simulated data is provided by the Kepler End-to-End Model (ETEM). ETEM simulates the astrophysics of planetary transits and other phenomena, properties
of the Kepler spacecraft and the format of the downlinked data. Major challenges addressed by ETEM include
the rapid production of large amounts of simulated data, extensibility and maintainability.
Modeling of Ground-Based Telescopes I
Introducing atmospheric effects in the numerical simulation of the VLT/MUSE instrument
The Multi Unit Spectroscopic Explorer (MUSE) instrument is a second-generation integral-field spectrograph
in development for the Very Large Telescope (VLT), operating in the visible and near IR wavelength range
(465-930 nm). Given the complexity of MUSE we have developed a numerical model of the instrument, which
includes the whole chain of acquisition from the atmosphere down to the telescope and including the detectors,
and taking into account both optical aberrations and diffraction effects. Simulating atmospheric effects such as
turbulence, refraction and sky background within an instrument numerical simulator is computation intensive,
and the simulation of these effects is usually beyond the scope of an instrument simulator as it is done in
dedicated simulations from which only the results are available. In this paper we describe how these effects are
simulated in the VLT/MUSE numerical simulator, the simplifications that are used, as well as the assumptions
leading to these simplifications.
Thermal modeling environment for TMT
In a previous study we presented a summary of the TMT aero-thermal modeling effort to support thermal seeing
and dynamic loading estimates. In this paper a summary of the current status of Computational Fluid Dynamics (CFD)
simulations for TMT is presented, with the focus shifted in particular towards the synergy between CFD and the TMT
Finite Element Analysis (FEA) structural and optical models, so that the thermal and consequent optical deformations of
the telescope can be calculated.
To minimize thermal deformations and mirror seeing the TMT enclosure will be air conditioned during day-time to the
expected night-time ambient temperature. Transient simulations with closed shutter were performed to investigate the
optimum cooling configuration and power requirements for the standard telescope parking position.
A complete model of the observatory on Mauna Kea was used to calculate night-time air temperature inside the
enclosure (along with velocity and pressure) for a matrix of given telescope orientations and enclosure configurations.
Generated records of temperature variations inside the air volume of the optical paths are also fed into the TMT thermal
seeing model.
The temperature and heat transfer coefficient outputs from both models are used as input surface boundary conditions in
the telescope structure and optics FEA models. The results are parameterized so that sequential records several days long
can be generated and used by the FEA model to estimate the observing spatial and temporal temperature range of the
structure and optics.
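The CFD-to-FEA hand-off described above amounts to driving structural nodes with an air temperature record and a heat transfer coefficient. A minimal lumped-node sketch of such a convective boundary condition, with all parameter values invented for illustration:

```python
import numpy as np

h = 5.0       # W/m^2/K, convective heat transfer coefficient from CFD
area = 2.0    # m^2, wetted surface of the structural member
c_th = 4.0e5  # J/K, lumped thermal capacitance of the member
dt = 60.0     # s, time step

# One day of CFD-style ambient air temperature (deg C), sampled every minute
t = np.arange(0.0, 86400.0, dt)
t_air = 2.0 + 1.5 * np.sin(2.0 * np.pi * t / 86400.0)

t_node = np.empty_like(t_air)
t_node[0] = 5.0  # member starts warmer than ambient
for k in range(1, t_node.size):
    q = h * area * (t_air[k - 1] - t_node[k - 1])  # W into the node
    t_node[k] = t_node[k - 1] + q * dt / c_th      # explicit Euler step

print(round(t_node[-1], 2))  # the node has relaxed toward the ambient record
```

A full FEA carries thousands of such surface boundary conditions, but the governing time constant per node is still c_th / (h * area), which here is about eleven hours.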
Thermal analysis of the TMT telescope structure
The thermal performance of the Thirty Meter Telescope (TMT) structure was evaluated with finite element thermal models. The thermal models consist of the telescope optical assembly systems, instruments, laser facility, control and electronic equipment, and structural members. Temporal and spatial temperature distributions of the optical assembly systems and the telescope structure were calculated under various thermal conditions, including air convection, conduction, heat flux loadings, and radiation. In order to capture thermal responses faithfully, thermal environment data spanning three consecutive days were used. This thermal boundary condition was created by CFD based on the environmental conditions of the
corresponding TMT site. The thermo-elastic analysis was made to predict thermal deformations of the telescope
structure at every hour for three days. The line of sight calculation was made using the thermally induced structural
deformations. A merit function was utilized to calculate the OPD maps after repositioning the optics based on a best fit of M1 segment deformations. The goal of this thermal analysis is to establish credible thermal models by finite element
analysis to simulate the thermal effects with the TMT site environment data. These thermal models can be utilized for
estimating the thermal responses of the TMT structure. Thermal performance prediction of the TMT structure will guide
us to assess the thermal impacts, and enables us to establish a thermal control strategy and requirements in order to
minimize the thermal effects on the telescope structure due to heat dissipation from the telescope mounted equipment
and systems.
LSST camera heat requirements using CFD and thermal seeing modeling
The LSST camera is located above the LSST primary/tertiary mirror and in front of the secondary mirror in the shadow
of its central obscuration. Due to this position within the optical path, heat released from the camera has a potential
impact on the seeing degradation that is larger than traditionally estimated for Cassegrain or Nasmyth telescope
configurations. This paper presents the results of thermal seeing modeling combined with Computational Fluid
Dynamics (CFD) analyses to define the thermal requirements on the LSST camera.
Camera power output fluxes are applied to the CFD model as boundary conditions to calculate the steady-state
temperature distribution on the camera and the air inside the enclosure. Using a previously presented post-processing
analysis to calculate the optical seeing based on the mechanical turbulence and temperature variations along the optical
path, the optical performance resulting from the seeing is determined. The CFD simulations are repeated for different
wind speeds and orientations to identify the worst case scenario and generate an estimate of seeing contribution as a
function of camera-air temperature difference. Finally, after comparing with the corresponding error budget term, a
maximum allowable temperature for the camera is selected.
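The final step, inverting the seeing-versus-temperature-difference curve against the budget allocation, can be sketched as a simple interpolation. The grid and budget values below are hypothetical placeholders, not LSST numbers:

```python
import numpy as np

# Hypothetical seeing contribution (arcsec FWHM) versus camera-to-air
# temperature difference (K), as would come out of the worst-case CFD runs.
delta_t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])       # K
seeing  = np.array([0.00, 0.04, 0.07, 0.12, 0.21])  # arcsec

budget = 0.10  # assumed error-budget allocation for camera-induced seeing

# Invert the (monotonic) curve by interpolating delta_t as a function of
# seeing: the result is the maximum allowable temperature difference.
max_dt = np.interp(budget, seeing, delta_t)
print(f"max camera-air temperature difference: {max_dt:.1f} K")  # 3.2 K
```

`np.interp` requires the seeing grid to be monotonically increasing, which holds here because warmer camera surfaces always degrade the seeing further.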
Primary mirror dynamic disturbance models for TMT: vibration and wind
The principal dynamic disturbances acting on a telescope segmented primary mirror are unsteady wind pressure
(turbulence) and narrowband vibration from rotating equipment. Understanding these disturbances is essential
for the design of the segment support assembly (SSA), segment actuators, and primary mirror control system
(M1CS). The wind disturbance is relatively low frequency, and is partially compensated by M1CS; the response
depends on the control bandwidth and the quasi-static stiffness of the actuator and SSA. Equipment vibration is
at frequencies higher than the M1CS bandwidth; the response depends on segment damping, and the proximity
of segment support resonances to dominant vibration tones. We present here both disturbance models and
parametric response. Wind modeling is informed by CFD and based on propagation of a von Karman pressure
screen. The vibration model is informed by analysis of accelerometer and adaptive optics data from Keck. This
information is extrapolated to TMT and applied to the telescope structural model to understand the response
dependence on actuator design parameters in particular. Whether the vibration response or the wind response
is larger depends on these design choices; "soft" (e.g. voice-coil) actuators provide better vibration reduction
but require high servo bandwidth for wind rejection, while "hard" (e.g. piezo-electric) actuators provide good
wind rejection but require damping to avoid excessive vibration transmission to the primary mirror segments.
The results for both nominal and worst-case disturbances and design parameters are incorporated into the TMT
actuator performance assessment.
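The unsteady-wind model referenced above is built on a von Karman spectrum. As a toy illustration, white noise can be shaped in the frequency domain to obtain a time series with a von Karman-type power spectrum; the rms, knee frequency, and sample rate here are generic assumptions, not the TMT pressure-screen parameters:

```python
import numpy as np

def von_karman_series(n, dt, sigma, f0, seed=0):
    """Random time series with a von Karman-type power spectrum.

    White Gaussian noise is shaped in the frequency domain with
    PSD(f) ~ (1 + (f/f0)**2)**(-5/6), the high-frequency rolloff of the
    von Karman velocity spectrum, then scaled to an rms of sigma.
    """
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, dt)
    amp = (1.0 + (freqs / f0) ** 2) ** (-5.0 / 12.0)  # sqrt of the PSD shape
    spectrum = amp * (rng.normal(size=freqs.size)
                      + 1j * rng.normal(size=freqs.size))
    series = np.fft.irfft(spectrum, n)
    series *= sigma / series.std()  # enforce the target rms
    return series

# 4096 samples at 100 Hz of a 5 Pa rms "pressure" record, knee at 0.3 Hz
p = von_karman_series(n=4096, dt=0.01, sigma=5.0, f0=0.3)
```

A full pressure screen is the two-dimensional analogue of this, propagated across the aperture at the mean wind speed (Taylor's frozen-flow hypothesis).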
Modeling of Ground-Based Telescopes II
Optical modelling of the European Extremely Large Telescope for high-contrast imaging tasks
We study the capability of the European Extremely Large Telescope to image exoplanets. For this task we have developed a simulation which models the telescope, adaptive-optics systems, coronagraphs, the science instrument (an integral field spectrograph in our case), and image post-processing.
Normalized point source sensitivity for off-axis optical performance evaluation of the Thirty Meter Telescope
The Normalized Point Source Sensitivity (PSSN) has previously been defined and analyzed as an On-Axis
seeing-limited telescope performance metric. In this paper, we expand the scope of the PSSN definition to
include Off-Axis field of view (FoV) points and apply this generalized metric for performance evaluation of the
Thirty Meter Telescope (TMT). We first propose various possible choices for the PSSN definition and select
one as our baseline. We show that our baseline metric has useful properties including the multiplicative feature
even when considering Off-Axis FoV points, which has proven to be useful for optimizing the telescope error
budget. Various TMT optical errors are considered for the performance evaluation including segment alignment
and phasing, segment surface figures, temperature, and gravity, whose On-Axis PSSN values have previously
been published by our group.
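The multiplicative feature can be illustrated with a toy one-dimensional model in which each small error source multiplies the system OTF. This is a sketch of the property only, not the published TMT PSSN computation:

```python
import numpy as np

f = np.linspace(0.0, 5.0, 2001)  # normalized spatial-frequency grid

def pssn(otf_err, otf_ref):
    """Ratio of integrated |OTF|^2 with and without the error source."""
    return np.sum(np.abs(otf_ref * otf_err) ** 2) / np.sum(np.abs(otf_ref) ** 2)

otf_ref = np.exp(-f ** 2)      # toy atmosphere+telescope reference OTF
err1 = np.exp(-0.05 * f ** 2)  # small error source 1
err2 = np.exp(-0.08 * f ** 2)  # small error source 2

p1, p2 = pssn(err1, otf_ref), pssn(err2, otf_ref)
p12 = pssn(err1 * err2, otf_ref)  # both errors applied together
print(p12, p1 * p2)               # nearly equal: PSSN multiplies
```

Because individual PSSN values combine by multiplication to good approximation, each error-budget line can be assessed independently and the products compared against the top-level allocation.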
Investigation of Thirty Meter Telescope wavefront maintenance using low-order Shack-Hartmann wavefront sensors to correct for thermally-induced misalignments
We evaluate how well the performance of the Thirty Meter Telescope (TMT) can be maintained against thermally
induced errors during a night of observation. We first demonstrate that using look-up-table style correction for
TMT thermal errors is unlikely to meet the required optical performance specifications. Therefore, we primarily
investigate the use of a Shack-Hartmann Wavefront Sensor (SH WFS) to sense and correct the low spatial
frequency errors induced by the dynamic thermal environment. Given a basic SH WFS design, we position
single or multiple sensors within the telescope field of view and assess telescope performance using the JPL
optical ray tracing tool MACOS for wavefront simulation. Performance for each error source, wavefront sensing
configuration, and control scheme is evaluated using wavefront error, plate scale, pupil motion, pointing error,
and the Point Source Sensitivity (PSSN) as metrics. This study provides insight into optimizing the active optics
control methodology for TMT in conjunction with the Alignment and Phasing System (APS) and primary mirror
control system (M1CS).
Analysis of active alignment control of the Hobby-Eberly Telescope wide-field corrector using Shack-Hartmann wavefront sensors
One of the key aspects of the Wide-Field Upgrade (WFU) for the 10m Hobby-Eberly Telescope (HET) is the use of
wavefront sensing (WFS) to close the loop of active alignment control of the new four-mirror Wide-Field Corrector
(WFC), as it tracks sidereal motion, with respect to the fixed spherical segmented primary mirror. This makes the
telescope pupil dynamically change in shape. This is a unique challenge to the WFS on the HET, in addition to various
influences of seeing, primary mirror segment errors, and dynamic deflection of the internal optical components of the
WFC. We conducted extensive simulations to understand the robustness of the WFS in the face of these errors and the
results of these analyses are discussed in this paper.
Integrated finite element analysis and raytracing oriented to structural optimization for astronomical instrument design
The design of astronomical instruments is growing in size and complexity, following the new requirements imposed by ELT-class telescopes. The availability of new structural materials, such as composites, calls for more robust and reliable numerical design tools. This paper presents a possible integrated design framework.
The procedure starts by developing a raw structure, consisting of an assembly of plates and beams, directly from the optical design. The basic Finite Element Model is then prepared by joining plate and beam elements for the structure with mass and semi-rigid elements for the opto-mechanical subsystems. The technique, built on Matlab® commands, runs the FEA, extracts the optical displacements, implements them in the optical design, and evaluates the image quality in terms of displacement and spot size. Thanks to a simplified procedure, the routine is able to derive the full field of displacements from a reduced sequence of three different load sets. The automatic optimization routine modifies the properties of the plates and beams, also considering different materials and, in the case of composites, different lamination sequences. The algorithm is oriented to finding the best compromise between overall weight and eigenfrequencies, image stability, and image quality.
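The hand-off from FEA displacements to image quality is typically linearized through sensitivity coefficients. A minimal sketch with made-up numbers (the sensitivity values and perturbations are illustrative, not taken from the framework described):

```python
import numpy as np

# Linearized hand-off from structural displacements to image motion.
# The FEA supplies a decenter (mm) and a tilt (rad) of one mirror; a
# sensitivity row maps them to image-plane displacement.
focal_length = 2000.0  # mm, assumed

# image shift per unit perturbation: [per mm of decenter, per rad of tilt];
# a mirror tilt deviates the reflected beam by twice the tilt angle
sensitivity = np.array([-1.0, 2.0 * focal_length])

fea_output = np.array([0.010, 50e-6])   # 10 um decenter, 50 urad tilt
image_shift = sensitivity @ fea_output  # mm at the focal plane
print(f"{image_shift * 1e3:.1f} um image motion")  # prints 190.0 um
```

In a real pipeline the sensitivity matrix is built per element and per rigid-body degree of freedom by differencing raytrace results, which is exactly what makes the three-load-set reduction worthwhile.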
SOFIA telescope modal survey test and test-model correlation
The NASA/DLR Stratospheric Observatory for Infrared Astronomy (SOFIA) employs a 2.5-meter reflector telescope in
a Boeing 747SP. The telescope is housed in an open cavity and will be subjected to aeroacoustic and inertial
disturbances. The image stability goal for SOFIA is 0.2 arc-seconds (RMS). Throughout the development phase of the
project, analytical models were employed to predict the image stability performance of the telescope, and to evaluate
pointing performance improvement measures. These analyses clearly demonstrated that key aspects which determined
performance were:
1) Disturbance environment and relevant load-paths
2) Telescope modal behavior
3) Sensor and actuator placement
4) Control algorithm design
The SOFIA program is now entering an exciting phase in which the characteristics of the telescope and the cavity
environment are being verified through ground and airborne testing. A modal survey test (MST) was conducted in early
2008 to quantify the telescope modal behavior. We will give a brief overview of analytical methods which have been
employed to assess/improve the pointing stability performance of the SOFIA telescope. In this context, we will describe
the motivation for the MST, and the pre-test analysis which determined the modes of interest and the required MST
sensor/shaker placement. A summary will then be given of the FEM-test correlation effort, updated end-to-end
simulation results, and actual data coming from telescope activation test flights.
Project Management I
Management and systems engineering of the Kepler mission
James Fanson,
Leslie Livesay,
Margaret Frerking,
et al.
Kepler is the National Aeronautics and Space Administration's (NASA's) first mission capable of detecting Earth-size
planets orbiting in the habitable zones around stars other than the sun. Selected for implementation in 2001 and launched
in 2009, Kepler seeks to determine whether Earth-like planets are common or rare in the galaxy. The investigation
requires a large, space-based photometer capable of simultaneously measuring the brightnesses of 100,000 stars at part-per-million levels of precision. This paper traces the development of the mission from the perspective of project
management and systems engineering and describes various methodologies and tools that were found to be effective.
The experience of the Kepler development is used to illuminate lessons that can be applied to future missions.
Managing the development of the Wide-field Infrared Survey Explorer mission
William Irace,
Roc Cutri,
Valerie Duval,
et al.
The Wide-field Infrared Survey Explorer (WISE), a NASA Medium-Class Explorer (MIDEX) mission, is surveying the entire sky in four bands from 3.4 to 22 microns with a sensitivity hundreds to hundreds of thousands of times better than previous all-sky surveys at these wavelengths. The single WISE instrument consists of a 40 cm three-mirror anastigmatic telescope, a two-stage solid hydrogen cryostat, a scan mirror mechanism, and reimaging optics giving 6" resolution (full-width at half-maximum). WISE was placed into a Sun-synchronous polar orbit on a Delta II 7320 launch vehicle on
December 14, 2009. NASA selected WISE as a MIDEX in 2002 following a rigorous competitive selection process. To
gain further confidence in WISE, NASA extended the development period one year with an option to cancel the mission
if certain criteria were not met. MIDEX missions are led by the principal investigator, who in this case delegated day-to-day management to the project manager. With a cost cap and a relatively short development schedule, it was essential for
all WISE partners to work seamlessly together. This was accomplished with an integrated management team
representing all key partners and disciplines. The project was developed on budget and on schedule in spite of the need
to surmount significant technical challenges. This paper describes our management approach, key challenges and critical
decisions made. Results are described from a programmatic, technical and scientific point of view. Lessons learned are
offered for projects of this type.
Project Management II
Management evolution in the LSST project
The Large Synoptic Survey Telescope (LSST) project has evolved from just a few staff members in 2003 to about 100 in
2010; the affiliation of four founding institutions has grown to 32 universities, government laboratories, and industry.
The public-private collaboration aims to complete the estimated $450 M observatory in the 2017 timeframe. During the design phase of the project, from 2003 to the present, the management structure has been remarkably stable. At the same
time, the funding levels, staffing levels and scientific community participation have grown dramatically. The LSSTC
has introduced project controls and tools required to manage the LSST's complex funding model, technical structure and
distributed work force. Project controls have been configured to comply with the requirements of federal funding
agencies. Some of these tools for risk management, configuration control and resource-loaded schedule have been
effective and others have not. Technical tasks associated with building the LSST are distributed into three subsystems:
Telescope & Site, Camera, and Data Management. Each sub-system has its own experienced Project Manager and
System Scientist. Delegation of authority is enabling and effective; it encourages a strong sense of ownership within the
project. At the project level, subsystem management follows the principle that there is one Board of Directors, Director,
and Project Manager who have overall authority.
Advanced Technology Solar Telescope project management
The Advanced Technology Solar Telescope (ATST) has recently received National Science Foundation (NSF) approval
to begin the construction process. ATST will be the most powerful solar telescope and the world's leading resource for
studying solar magnetism that controls the solar wind, flares, coronal mass ejections and variability in the Sun's output.
This paper gives an overview of the project and describes the project management principles and practices that have been developed to optimize both the project's success and compliance with the requirements of the project's funding agency.
The poacher turned gamekeeper, or getting the most out of the design review process
This paper presents an accumulation of knowledge from both sides of the design review table. Using experience gained
over many reviews and post-mortems, some painful, some less painful; examining stakeholder's viewpoints and
expectations; challenging aspects of accepted wisdom and posing awkward questions, the author brings out what he
considers to be key criteria for a constructive design review. While this is not a guarantee to a successful outcome, it
may nudge the balance from the reviews being an obligatory milestone (millstone?) towards them being a beneficial
mechanism for project development.
The MUSE project from the dream toward reality
MUSE (Multi Unit Spectroscopic Explorer) is a second generation instrument developed for ESO (European Southern
Observatory) to be installed on the VLT (Very Large Telescope) in year 2012. The MUSE project is supported by a
European consortium of 7 institutes. After a successful Final Design Review the project is now facing a turning point
which consists of shifting from design to manufacturing, from calculation to test... from dream to reality.
At the start, many technical and management challenges were present, as well as unknowns. They could all be derived from the same simple question: how to deal with complexity? The complexity of the instrument, of the work to be done, of the organization, of the interfaces, of financial and procurement rules, etc.
This particular moment in the project life cycle is an opportunity to look back and evaluate the management methods implemented during the design phase in light of this original question. What are the lessons learned? What has been
successful? What could have been done differently? Finally, we will look forward and review the main challenges of the
MAIT (Manufacturing Assembly Integration and Test) phase which has just started as well as the associated new
processes and evolutions needed.
Project Management III
Management of the Herschel/Planck Programme
Thomas Passvogel,
Gerald Crone
The development of the Herschel and Planck Programme, the largest scientific space programme of the European Space Agency (ESA), culminated in May 2009 with the successful launch of the Herschel and Planck satellites onboard an Ariane 5 from the European Spaceport in Kourou. Both satellites have been operating flawlessly since then, and the scientific payload instruments are providing world-class science. The Herschel/Planck Programme is a multinational cooperation, with the managerial lead taken by the European Space Agency and major contributions from European industry for the spacecraft development and from scientific institutes, organized in five international consortia, for the payload instruments. The overall programme complexity called for various adapted management approaches to resolve technical and programmatic difficulties. Some of the management experience gained over the decade needed to realize such a satellite programme will be presented, giving the lessons learnt for future programmes of similar complexity.
The Javalambre Astrophysical Observatory project
A. J. Cenarro,
M. Moles,
D. Cristóbal-Hornillos,
et al.
The Javalambre Astrophysical Observatory is a new ground-based facility under construction at the Sierra de Javalambre (Teruel, Spain). The observatory is designed to carry out large all-sky surveys in robotic mode. It will consist of two telescopes: T250, a large-etendue telescope of 2.5 m aperture and 3 deg diameter field of view, and T80, a 0.8 m auxiliary telescope with a field of view 2 deg in diameter. The immediate objective of the T250 is a photometric survey of 8000 square degrees projected on the sky, using narrow-band filters (~100 Å width) across the whole optical range, following the specifications defined in this paper1 for the measurement of Baryon Acoustic Oscillations along the line of sight with photometric redshifts. To do this, T250 will hold a panoramic
camera of ~ 14 large format (~ 10.5k×10.5k) CCDs covering the entire focal plane.
This paper describes the overall project, conceived to comply with the scientific and technical requirements, and the managerial approach adopted. Other aspects, such as the expected instrumentation at the telescopes, the operation of the observatory, the software/hardware specifications, and the data handling, will also be outlined.
Using value-based total cost of ownership (TCO) measures to inform subsystem trade-offs
Total Cost of Ownership (TCO) is a metric from management accounting that helps expose both the direct and indirect costs of a business decision. However, TCO can be too simplistic for "make vs. buy" decisions (or for choosing between competing design alternatives) when value and extensibility are more critical than total cost. A three-dimensional value-based TCO, which was developed to clarify product decisions for an observatory prior to Final Design Review (FDR), will be presented in this session. This value-based approach incorporates priority of requirements, satisfiability of requirements, and cost, and can easily be applied in any environment.
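The abstract does not publish the authors' scoring formula; the following is a hypothetical sketch of how a value-based comparison over the three dimensions named above (requirement priority, satisfiability, and cost) might look. The `Requirement` fields, the weighting scheme, and all the numbers are illustrative assumptions, not the paper's method.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    priority: float        # 0..1: importance of the requirement
    satisfiability: float  # 0..1: how well this alternative meets it

def value_based_tco(requirements, total_cost):
    """Hypothetical score: priority-weighted satisfiability per unit
    of total cost of ownership (higher is better)."""
    value = sum(r.priority * r.satisfiability for r in requirements)
    weight = sum(r.priority for r in requirements)
    return (value / weight) / total_cost

# Score a "make" vs. "buy" alternative against the same requirement set.
make = [Requirement("throughput", 0.9, 0.95), Requirement("extensibility", 0.7, 0.90)]
buy = [Requirement("throughput", 0.9, 0.85), Requirement("extensibility", 0.7, 0.50)]
print(value_based_tco(make, total_cost=1.2), value_based_tco(buy, total_cost=1.0))
```

Here the "make" option wins despite its higher cost because it satisfies the highly weighted extensibility requirement better, which is exactly the kind of trade-off plain TCO obscures.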
Systems Engineering for Space Telescopes
Systems engineering on the James Webb Space Telescope
Michael T. Menzel,
Marie Bussman,
Michael Davis,
et al.
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2014. System-level verification of critical performance requirements will rely on integrated observatory models that predict the wavefront error accurately enough to verify that the allocated top-level wavefront error of 150 nm root-mean-square (rms) through to the wavefront sensor focal plane is met. This paper describes the systems engineering approach used on the JWST through the detailed design phase.
Ten years of Chandra: reflecting back on engineering lessons learned during the design, fabrication, integration, test, and verification of NASA's great x-ray observatory
This paper emphasizes how the Chandra telescope hardware was designed, tested, verified, and accepted for flight to
benefit all future missions that must find efficient ways to drive out cost and schedule while still maintaining an
acceptable level of risk. Examples of how the verification methodology mitigated risk will be provided, along with
actual flight telemetry which confirms the robustness of the Chandra Telescope design and its verification methodology.
NuSTAR: system engineering and modeling challenges in pointing reconstruction for a deployable x-ray telescope
The Nuclear Spectroscopic Telescope Array (NuSTAR) is a NASA Small Explorer mission that will make the first sensitive images of the sky in the high energy X-ray band (6 - 80 keV). The NuSTAR observatory consists of two co-aligned grazing incidence hard X-ray telescopes with a ~10 meter focal length, achieved by the on-orbit extension of a deployable mast.
A principal science objective of the mission is to locate previously unknown high-energy X-ray sources to an accuracy of 10 arcseconds (3-sigma), sufficient to uniquely identify counterparts at other wavelengths. In order to achieve this, a star tracker and laser metrology system are an integral part of the instrument; in conjunction, they will determine the orientation of the optics bench in celestial coordinates and also measure the flexures in the deployable mast as it responds to the varying on-orbit thermal environment, as well as aerodynamic and control torques. The architecture of the NuSTAR system for solving the attitude and aspect problems differs from that of previous X-ray telescopes, which
did not require ex post facto reconstruction of the instantaneous observatory alignment on-orbit.
In this paper we describe the NuSTAR instrument metrology system architecture and implementation, focusing on the systems engineering challenges associated with validating the instantaneous transformations between focal plane and celestial coordinates to within the required accuracy. We present a mathematical solution to photon source reconstruction, along with a detailed error budget that relates component errors to science performance. We also describe the architecture of the instrument simulation software being used to validate the end-to-end performance model.
The project office of the Gaia Data Processing and Analysis Consortium
Gaia is Europe's future astrometry satellite which is currently under development. The data collected
by Gaia will be processed and analyzed by the "Data Processing and Analysis Consortium" (DPAC). DPAC consists of over 400 scientists in more than 22 countries, who are currently developing the required data reduction, analysis, and handling algorithms and routines. DPAC is organized into Coordination Units (CUs) and Data Processing Centres (DPCs). Each of these entities is individually responsible for developing software for the processing of different data. In 2008, the DPAC Project Office (PO) was set up with the task of managing the day-to-day activities of the consortium, including implementation, development, and operations. This paper describes the tasks DPAC faces, the role of the DPAC PO in the Gaia framework, and how it supports the DPAC entities in their effort to fulfill the Gaia promise.
The role of stray light modeling and analysis in telescope system engineering, performance assessment, and risk abatement
Stray light modeling and analysis play a key role in new technology assessment, system engineering, and the overall
performance assessment of telescope/instrument systems under real use conditions. It also is a key tool in risk reduction
as stray light problems that appear late in the program are usually severe, expensive to fix, and often compromise final
system performance. This paper will review the current stray light software and testing tools of value to the astronomical
community and their capabilities/limitations for general and specialized telescope systems. We will describe the role of
stray light analysis in end-to-end modeling and integrated modeling for a number of systems we have analyzed and
discuss in detail the stray light modeling and analysis cycle for different types of programs. A key issue is how managers
might deal with the issues revealed by an analysis as well as the risks of an incomplete or improperly-timed analysis.
The importance of stray light analysis for end-to-end performance assessment and whether such an analysis can reduce
life-cycle costs will also be discussed. The paper will use examples from ground and space-based astronomical
telescope/instrument systems.
Modeling of Space Telescopes II
The JWST/NIRSpec instrument performance simulator software
NIRSpec is the near-infrared multi-object spectrograph for the future James Webb Space Telescope (JWST). It is
developed by EADS Astrium for the European Space Agency. The Centre de Recherche Astrophysique de Lyon (CRAL)
has developed the Instrument Performance Simulator (IPS) software, which is being used to model NIRSpec's performance and to simulate raw NIRSpec exposures. In this paper, we present the IPS software itself (main simulation
modules and user's interface) and discuss its intrinsic accuracy. We also show the results of simulations of calibration
exposures as they will be obtained during the NIRSpec on-ground calibration campaign.
Confronting the NIRSpec Instrument Performance Simulator outputs with results of the NIRSpec Demonstration Model calibration campaign
The James Webb Space Telescope (JWST) is the successor mission to the Hubble Space Telescope and will
operate in the near- and mid-infrared wavelength ranges. One of the four science instruments on board the
spacecraft is the multi-object spectrograph NIRSpec, currently developed by the European Space Agency (ESA)
with EADS Astrium Germany GmbH as the prime contractor. NIRSpec will be able to measure the spectra of
more than 100 objects simultaneously and will cover the near infrared wavelength range from 0.6 to 5.0 μm at
various spectral resolutions. To verify the performance of NIRSpec and simulate future on-ground and in-orbit
observations with this instrument, the Instrument Performance Simulator (IPS) software is being developed at the Centre de Recherche Astrophysique de Lyon (CRAL), a subcontractor to Astrium.
In early and mid-2009, the NIRSpec Demonstration Model (DM), fully representative up to the slit plane, underwent cryogenic tests and calibration runs. In the case of the DM, the detector was placed at the slit plane to measure specific optical performance aspects. A simplified version of the IPS was prepared, matching the DM
configuration and also serving as a testbed for the final software for the flight model. In this paper, we first
present the simulation approach used in the IPS, followed by results of the DM calibration campaign. Then, for
the first time, simulation outputs are confronted with measured data to verify their validity.
An update on the role of systems modeling in the design and verification of the James Webb Space Telescope
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2014. The imaging performance of the telescope will be diffraction limited at 2 μm, defined as having a Strehl ratio >0.8. System-level verification of critical performance requirements will rely on integrated observatory models that predict the wavefront error accurately enough to verify that the allocated top-level wavefront error of 150 nm root-mean-square (rms) through to the wavefront sensor focal plane is met. Furthermore, responses in several key disciplines are strongly cross-coupled.
The size of the lightweight observatory structure, coupled with the need to test at cryogenic temperatures,
effectively precludes validation of the models and verification of optical performance with a single test in 1 g. Rather, a complex series of incremental tests and measurements is used to anchor components of the end-to-end models at various levels of subassembly, with the ultimate verification of optical performance being by analysis using the assembled models. The assembled models themselves are complex and require the insight of technical experts to assess their ability
to meet their objectives. This paper describes the modeling approach used on the JWST through the detailed design
phase.
Verification of the observatory integrated model for the JWST
The JWST optical performance verification poses challenges not yet encountered in space-based telescopes. In particular, the large, lightweight deployable optics pose challenges to verification via direct measurement of the observatory on Earth. For example, the lightweight optics have surface distortion due to gravity that produces wavefront error greater than the wavefront error specification for the entire observatory. Because of difficulties such as this, the deployable, segmented Primary Mirror and deployable Secondary Mirror will be realigned after launch. The architecture of JWST was designed to accommodate these difficulties by including active positioning of the primary mirror segments and the secondary mirror. In fact, the requirements are written such that the active control system shall be used to meet
the requirements. Therefore many of the optical requirements are necessarily based on modeling and simulations. This
requires that the models used to predict the optical performance be verified and validated. This paper describes the
validation approach taken and a companion paper describes the optical performance analyses1.
Systems Engineering for Ground-Based Telescopes II
Application of systems engineering concepts in the Canada-France-Hawaii Telescope Observatory automation project
In 2007, the Canada-France-Hawaii Telescope (CFHT) undertook a project to enable the remote control of the
observatory at the summit of Mauna Kea from a control room in the Headquarters building in Waimea. Instead of
having two people operating the telescope and performing the observations from the summit, this project will allow one
operator to remotely control the observatory and perform observations for the night. It is not possible to have one person
operate from the summit, as our Two Person Rule requires at least two people for work at the summit for safety reasons.
This paper will describe how systems engineering concepts have shaped the design of the project structure and
execution.
Statistical approach to systems engineering for the Thirty Meter Telescope
Core components of systems engineering are the proper understanding of the top level system requirements, their
allocation to the subsystems, and then the verification of the system built against these requirements. System
performance, ultimately relevant to all three of these components, is inherently a statistical variable, depending on
random processes influencing even the otherwise deterministic components of performance, through their input
conditions. The paper outlines the Stochastic Framework facilitating both the definition and estimate of system
performance in a consistent way. The environmental constraints at the site of the observatory are significant design
drivers and can be derived from the Stochastic Framework, as well. The paper explains the control architecture capable
of achieving the overall system performance as well as its allocation to subsystems. An accounting for the error and
disturbance sources, as well as their dependence on environmental and operational parameters, is included. The most current simulation results, validating the architecture and providing early verification of the preliminary TMT design, are also summarized.
Systems engineering of the Thirty Meter Telescope through integrated opto-mechanical analysis
The merit function routine (MFR) is implemented in the National Research Council Canada Integrated Modeling (NRCIM) toolset and is based on the MATLAB numerical computing environment. It links ANSYS finite element structural models with ZEMAX optical models to provide a powerful integrated opto-mechanical engineering tool. The
MFR is utilized by the Thirty Meter Telescope Project to assess the telescope active optics system requirements and
performance. This paper describes the MFR tool, including the interfaces to ANSYS and ZEMAX, the method of calculating the results, and the internal data structures used. A summary of the required performance of the Thirty Meter Telescope and the MFR results for the telescope system design are presented.
Use of requirements engineering within the Thirty Meter Telescope project
John Rogers,
Hugh Thompson
The Thirty Meter Telescope comprises thirty-five individual sub-systems, which include optical systems, instruments, adaptive optics systems, controls, mechanical systems, supporting software and hardware, and the infrastructure required to support their operation. These thirty-five sub-systems must operate together as a system to
enable the telescope to meet the science cases for which it is being developed. These science cases are formalized and
expressed as science requirements by the project's Science Advisory Committee. From these, a top down requirements
engineering approach is used within the project to derive consistent operational, architectural and ultimately detailed
design requirements for the sub-systems. The various layers of requirements are stored within a DOORS requirements
database that also records the links between requirements, requirement rationale and requirement history. This paper
describes the development of the design requirements from science cases, the reasons for recording the links between
requirements and the benefits that documenting this traceability will yield during the design and verification of the
telescope. Examples are given of particular science cases, the resulting operational and engineering requirements on the
telescope system and how individual sub-systems will contribute to these being met.
Integrating AO in a performance budget: toward a global system engineering vision
EAGLE (Extremely large Adaptive telescope for GaLaxy Evolution) is one of the eight E-ELT instrument concepts developed as part of the Phase A E-ELT instrument studies. EAGLE is a near-infrared wide-field multi-object spectrograph1. It includes its own multi-object adaptive optics (MOAO) system, and its subsystems are cooled so that the instrument can both achieve the desired spatial resolution and remain background limited, as required by the primary science case, to deliver the performance in the K-band. In this paper we discuss the performance matrix developed to partition and allocate the important characteristics to the various subsystems, and we describe the process used to verify that the current concept design will deliver the required performance. Because of the integrated nature of the instrument, a large number of AO parameters must be controlled. The performance matrix also has to deal with the added complexity of active optical elements such as the science-channel deformable mirrors (DMs). The paper also defines a method for converting the ensquared energy (EE) and signal-to-noise ratio (SNR) required by the science cases into the "as designed" wavefront error and the overall residual wavefront error. To ensure successful integration and verification of the next generation of instruments for the ELT, it is of the utmost importance to have a method to control and manage the instrument's critical performance characteristics from the very early design steps.
The large observatories maintenance management: tools and strategies for maintenance manuals preparation
Large observatories require enormous effort in preparing maintenance manuals. The possibility of adopting a standardised system, associated with a centralized database and software tools, is investigated. This strategy implies a revolution in maintenance manuals: from paper-based information collection to a modular approach in which data modules are used. The initial effort of preparing data modules is compensated by several benefits (time savings for end users, reduced training requirements, reduced equipment downtime). Moreover, cost savings in the preparation process, even for different equipment, are also possible. Finally, this standardised strategy will assure compatibility between different programs or partners.
Systems Engineering for Ground-Based Telescopes III
Using SysML for MBSE analysis of the LSST system
The Large Synoptic Survey Telescope is a complex hardware-software system of systems, making up a highly automated observatory in the form of an 8.4m wide-field telescope, a 3.2 billion pixel camera, and a peta-scale data
processing and archiving system. As a project, the LSST is using model based systems engineering (MBSE)
methodology for developing the overall system architecture coded with the Systems Modeling Language (SysML).
With SysML we use a recursive process to establish three-fold relationships between requirements, logical & physical
structural component definitions, and overall behavior (activities and sequences) at successively deeper levels of
abstraction and detail. Using this process we have analyzed and refined the LSST system design, ensuring the
consistency and completeness of the full set of requirements and their match to associated system structure and
behavior. As the recursion process proceeds to deeper levels we derive more detailed requirements and specifications,
and ensure their traceability. We also expose, define, and specify critical system interfaces, physical and information
flows, and clarify the logic and control flows governing system behavior. The resulting integrated model database is
used to generate documentation and specifications and will evolve to support activities from construction through final
integration, test, and commissioning, serving as a living representation of the LSST as designed and built. We discuss
the methodology and present several examples of its application to specific systems engineering challenges in the LSST
design.
The Large Synoptic Survey Telescope OCS and TCS models
The Large Synoptic Survey Telescope (LSST) is a project envisioned as a system of systems with demanding science, technical, and operational requirements that must perform as a fully integrated unit. The design and implementation of such a system pose significant engineering challenges in requirements analysis, detailed interface definition, and studies of operational modes and control strategy. The OMG Systems Modeling Language (SysML) has been selected as the
framework for the systems engineering analysis and documentation for the LSST. Models for the overall system
architecture and different observatory subsystems have been built describing requirements, structure, interfaces and
behavior. In this paper we show the models for the Observatory Control System (OCS) and the Telescope Control
System (TCS), and how this methodology has helped in the clarification of the design and requirements. In one common
language, the relationships of the OCS, TCS, Camera and Data management subsystems are captured with models of the
structure, behavior, requirements and the traceability between them.
Conquering complexity with systems engineering as illustrated by EAGLE, a multi-object adaptive optics IFU spectrograph
This paper illustrates how the design of an instrument such as the Extremely Large Adaptive Telescope for GaLaxy Evolution (EAGLE) can be simplified. EAGLE is a wide-field multi-object integral field unit spectrometer intended as a cornerstone instrument for the European Extremely Large Telescope (E-ELT). The instrument is rich in capabilities and will require adaptive optics to ensure that the expected spatial resolution (typically 15 times finer than that of a seeing-limited instrument) can be met. The complexities introduced by the need to include a Multi-Object Adaptive Optics (MOAO) system can be managed by using well-defined systems engineering processes. These
processes include the capturing, analysis and flow down of requirements, functional and performance analysis and an
integrated system design approach. In this paper we will also show by example why the discipline imposed by the UK
ATC formal systems engineering process is necessary, especially given that projects such as EAGLE also have to deal
with the complexities of international collaborations. It also illustrates how the process promotes innovation and
creativity.
E-ELT phase-A instrument studies: a system engineering view
During the last two and a half years, ten Phase A instrument studies for the E-ELT were launched by ESO and carried out by consortia of institutes in the ESO member states and Chile. These studies have been undertaken in parallel with Phase B of the E-ELT telescope itself. This effort has pursued two main goals: to prove the feasibility and performance of a set of instruments that meet the project science goals, and to identify and incorporate in the telescope design those features that best satisfy the needs of its future hosts, i.e., the science instruments. To succeed in this goal, it is crucial to identify such needs as early as possible in the design process.
This concurrent approach clearly benefits both the instrument concept designs and the telescope development, but it also implies a number of difficult tasks. This paper compiles, from a systems engineering point of view, the benefits and difficulties, as well as the lessons learned during this concurrent process. In addition, the main outcomes of the process, in terms of telescope-instrument interface definitions and requirements flowing from the instruments to the telescope and vice versa, are reported.
Error budgets definition for the European Solar Telescope (EST)
The European Solar Telescope (EST) is a European collaborative project to build a 4m class solar telescope in the
Canary Islands, which is now in its design study phase. The telescope will provide diffraction limited performance for
several instruments observing simultaneously at the Coudé focus at different wavelengths.
In order to guarantee the achievement of the demanding scientific requirements, error budgets for the main performance metrics have been defined from the early design study phase in a top-down fashion. During the design study, analyses are being performed to update these error budgets in a bottom-up fashion. Error budget management is proposed from the design study phase onward, to be used throughout the complete project life cycle.
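Error budgets of the kind described above are commonly maintained as trees whose independent contributors combine in quadrature (root-sum-square). A minimal sketch of that bookkeeping follows; the branch names and numbers are placeholders for illustration, not EST's actual budget.

```python
import math

def rss(terms):
    """Combine independent error contributions in quadrature."""
    return math.sqrt(sum(t * t for t in terms))

# Illustrative top-down allocation in nm RMS wavefront error.
optics = rss([30.0, 20.0])    # e.g. polishing residual, alignment
dynamics = rss([25.0, 15.0])  # e.g. wind buffeting, vibration
total = rss([optics, dynamics])
print(round(total, 1))
```

The bottom-up update the abstract mentions amounts to replacing allocated leaf values with analyzed or measured ones and re-propagating the RSS up the tree.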
Poster Session: Modeling
Modeling of control system for LAMOST based on Petri net workflow
The ambitious Chinese project LAMOST (Large sky Area Multi-Object fibre Spectroscopic Telescope) has now reached the completion of its R&D stage. The major functions of the telescope recently passed a series of pilot observations, and the various applications integrated into the automation of the telescope chamber are undergoing vigorous site tests. The TCS (Telescope Control System) is built on a multi-layer distributed network platform with many sub-systems at different levels. How to efficiently process the enormous volume of messages, each with particular implications, flowing in and out of the TCS is one of the major issues for the TCS software package. This paper focuses on modelling the LAMOST control system with a Petri net workflow. The model is also analyzed and verified with the matrix equation.
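The "matrix equation" referred to above is presumably the standard Petri net state equation M' = M0 + C·σ, where C is the incidence matrix and σ the firing-count vector. A minimal pure-Python sketch, with a toy three-place net standing in for the LAMOST workflow model:

```python
def fire(marking, incidence, counts):
    """Petri net state equation M' = M0 + C @ sigma, in pure Python.
    marking: tokens per place; incidence: rows = places, cols = transitions;
    counts: number of times each transition fires."""
    return [m + sum(c * s for c, s in zip(row, counts))
            for m, row in zip(marking, incidence)]

# Toy workflow net: three places, two transitions. Column j of the
# incidence matrix gives the token change when transition j fires once.
C = [[-1, 0],   # p0 loses a token when t0 fires
     [1, -1],   # p1 gains from t0, loses to t1
     [0, 1]]    # p2 gains from t1
M0 = [1, 0, 0]
print(fire(M0, C, [1, 1]))  # firing t0 then t1 moves the token to p2
```

Verification amounts to checking that a claimed firing sequence yields a reachable, non-negative marking at every step.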
LAMOST control system: past and future
The much-anticipated LAMOST (Large sky Area Multi-Object fibre Spectroscopic Telescope) project has been successfully inspected and accepted at a national-level evaluation. It will become the world's most powerful meter-class ground-based optical survey telescope. This ambitious project, unprecedented in the development history of Chinese astronomical optical telescopes, has posed extraordinary challenges to its control system in all respects. Painstaking effort has gone into the R&D of the control system, from its design strategy and functionality analyses to the most subtle technical solutions, along with efficient engineering management. A number of papers highlighting the anticipated LAMOST control system have been published over the course of the project. However, many lessons have been learned over the past ten years. Now that the telescope, with all its facilities and observation chamber, has been put into trial observation, it is time to review the past and ponder the future of the control system as a whole against the functioning telescope in its current state. Lessons and experience are discussed, and some considerations for improving system efficiency and accessibility are also presented.
Phase retrieval analysis of the Hobby-Eberly Telescope primary mirror segment figure error and its implication for wavefront sensing for the new wide-field upgrade
Primary mirror segment figure error is potentially deleterious to wavefront sensing in the new Hobby-Eberly Telescope (HET) Wide-Field Upgrade (WFU). Previous measurements indicated the presence of figure errors, including prominent surface astigmatism on the segments, but a systematic analysis was needed to quantify them. We developed a phase retrieval procedure that estimates the surface figure map by applying the iterative transform method to a set of focus-diverse images of a point source formed by the 91 segments of the 11 m HET primary mirror. In this paper, we discuss this analysis and the implications of its results for wavefront sensing on the upgraded HET.
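The iterative transform method named above belongs to the Gerchberg-Saxton family: amplitude constraints are imposed alternately in the pupil and focal planes, with Fourier transforms carrying the phase estimate between them. The sketch below is a minimal single-image version using NumPy; the HET procedure works on a set of focus-diverse images and is considerably more elaborate.

```python
import numpy as np

def gerchberg_saxton(pupil_amp, focal_amp, n_iter=50, seed=0):
    """Estimate the pupil-plane phase from known pupil- and focal-plane
    amplitudes by alternating projections (iterative transform)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)
    for _ in range(n_iter):
        focal = np.fft.fft2(pupil_amp * np.exp(1j * phase))
        focal = focal_amp * np.exp(1j * np.angle(focal))  # impose measured focal amplitude
        phase = np.angle(np.fft.ifft2(focal))             # keep phase; pupil amplitude re-imposed above
    return phase

def focal_error(pupil_amp, focal_amp, phase):
    """Mismatch between modeled and measured focal-plane amplitudes."""
    model = np.abs(np.fft.fft2(pupil_amp * np.exp(1j * phase)))
    return np.linalg.norm(model - focal_amp)

# Synthetic check: circular pupil with a known astigmatism-like phase.
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
pupil_amp = (x**2 + y**2 <= 1.0).astype(float)
true_phase = 0.5 * (x**2 - y**2)
focal_amp = np.abs(np.fft.fft2(pupil_amp * np.exp(1j * true_phase)))
est = gerchberg_saxton(pupil_amp, focal_amp)
```

The classic error-reduction property of this iteration guarantees the focal-plane amplitude mismatch never increases from one pass to the next; focus diversity, as used on the HET, helps resolve the ambiguities a single image leaves behind.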
Efficient orthonormal aberration coefficient estimation for wavefront sensing over variable non-circular pupils of the Hobby-Eberly Telescope
Wavefront sensing (WFS) is one of the key elements for active alignment of the new Wide-Field Corrector (WFC), as it
tracks sidereal motion, with respect to the fixed Hobby-Eberly Telescope (HET) primary mirror. During a track, part of
the 10m-pupil of the WFC can lie outside the primary periphery and be clipped off. An additional field-dependent
central obscuration by the holes and baffles of the WFC leads to complex pupil geometries. The combination of these is
a complicated, dynamically varying, non-circular telescope pupil. This problem, unique to the WFS on the HET, needs to be dealt with by choosing an appropriate set of orthonormal aberration polynomials during wavefront reconstruction. In this paper, three ways of computing orthonormal aberration polynomials and their coefficients are discussed. All are based on the Gram-Schmidt (GS) process, but they differ in how the key integrals in the GS process are computed. The first method computes the integrals analytically, using a computer algebra program. The second uses Gaussian quadrature over triangulated pupil geometries that approximate the true pupil shape. The last uses indirect numerical estimates of the integrals, which turn out to be natural by-products of the usual least-squares Zernike polynomial fit. It is shown that the first method is limited to cases of simple pupil shapes, while the second can be applied to more general pupil shapes. However, when dealing with complicated, dynamically varying, non-circular pupils, the last method can be vastly more efficient than the second and makes it possible to estimate orthonormal aberration coefficients on the fly. We also note that the last method naturally accounts for the pixelation of pupil geometries due to pixel-based imaging sensors (e.g. CCDs). With these benefits, the last method can be used as a viable tool for real-time wavefront analysis over dynamically changing pupils, as in the Hobby-Eberly Telescope, which would otherwise be vastly inefficient with the analytic methods used in past studies.
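The Gram-Schmidt construction discussed above can be sketched numerically: starting from low-order modes sampled on an arbitrary pupil mask, each mode is orthogonalized against its predecessors using inner products evaluated as pixel sums over the mask. This loosely corresponds to a numerical-integration variant of the paper's methods; the monomial basis and the clipped, obscured pupil below are illustrative assumptions.

```python
import numpy as np

def orthonormal_basis(modes, mask):
    """Gram-Schmidt orthonormalization of basis functions over an
    arbitrary (possibly non-circular, clipped) pupil mask. Inner
    products are pixel sums over the mask, normalized by its area."""
    area = mask.sum()
    inner = lambda f, g: (f * g * mask).sum() / area
    ortho = []
    for m in modes:
        for q in ortho:
            m = m - inner(m, q) * q   # remove projection onto earlier modes
        ortho.append(m / np.sqrt(inner(m, m)))
    return ortho

# A clipped, obscured pupil: disc minus a central hole, with one edge cut off,
# crudely mimicking a vignetted pupil with a central obscuration.
n = 128
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = x**2 + y**2
mask = ((r2 <= 1.0) & (r2 >= 0.1) & (x <= 0.7)).astype(float)

# Low-order monomials standing in for Zernike terms:
# piston, tilts, focus, astigmatism.
modes = [np.ones_like(x), x, y, 2 * r2 - 1, x**2 - y**2]
basis = orthonormal_basis(modes, mask)
```

Because the inner products are plain pixel sums over the sampled mask, this pixelated formulation naturally matches CCD-based pupil imagery, which is the property the abstract highlights for its third method.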
Computational fluid dynamic modeling of the summit of Mt. Hopkins for the MMT Observatory
Over the past three decades, the staff of the MMT Observatory has used a variety of techniques to predict the summit wind characteristics, including wind tunnel modeling and the release of smoke bombs. With the planned addition of a new instrument repair facility to be constructed on the summit of Mt. Hopkins, new computational fluid dynamic (CFD) models were made to determine the building's influence on the thermal environment around the telescope. The models compared the wind profiles and density contours above the telescope enclosure with and without the new building. The results show that the steeply-sided Mount Hopkins dominates the summit wind profiles. In typical winds, the telescope remains above the ground layer and is sufficiently separated from the new facility to ensure that heat from the new building does not interfere with the telescope. The results also confirmed that the observatory's waste-heat exhaust duct needs to be relocated to prevent heat from being trapped in the wind shadow of the new building and lofting above the telescope. These models provide many insights into the thermal environment of the summit.
Simulating the LSST system
A. J. Connolly,
John Peterson,
J. Garrett Jernigan,
et al.
Extracting science from the LSST data stream requires a detailed knowledge of the properties of the LSST catalogs and
images (from their detection limits to the accuracy of the calibration to how well galaxy shapes can be characterized).
These properties will depend on many of the LSST components including the design of the telescope, the conditions
under which the data are taken and the overall survey strategy. To understand how these components impact the nature
of the LSST data the simulations group is developing a framework for high fidelity simulations that scale to the volume
of data expected from the LSST. This framework comprises galaxy, stellar and solar system catalogs designed to match
the depths and properties of the LSST (to r=28), transient and moving sources, and image simulations that ray-trace the
photons from above the atmosphere through the optics and to the camera. We describe here the state of the current
simulation framework and its computational challenges.
A high-speed data acquisition system to measure telescope response to earthquake-induced ground motion
The Gemini Observatory operates two telescopes, both in geographical areas that pose a significant risk of earthquake
damage. To assess the potential for damage to telescope systems from earthquake-induced ground motion, a
system of accelerometers, data acquisition hardware, and data analysis software is being installed at each telescope.
Information from these sensors will be used to evaluate the response at various locations on the telescope including the
primary and secondary mirror support structures, instruments, telescope mount, pier and adjacent ground. A detailed
discussion of the design of this sensor system is presented. Real time applications and potential future upgrades are also
discussed, including provisions for automatic subsystem parking and shutdown, laser shuttering, alarms and future
structural modifications designed to reduce the dynamic response of the telescope and its subsystems to earthquake
induced ground motion.
Active dynamic isolation and pointing control system design for ACCESS
Current concepts for some future space-based astronomical observatories require extraordinary stability with respect
to pointing and jitter disturbances. Exoplanet-finding missions with internal coronagraphs require pointing stability of
<10 nrad 3σ (<2 mas 3σ). Closed-loop active dynamic isolation at the interface between the telescope and the spacecraft
(where reaction wheels are the primary jitter source) can attain these requirements when incorporated into a robust
overall pointing control architecture that uses information from IRUs, star trackers, and steering mirrors.
ITT has developed a high-TRL Active Isolation Mount System (AIMS) and, through analyses and hardware test-bed
work, demonstrated that these stringent pointing and dynamic stability requirements can be met for the Actively-Corrected
Coronagraph for Exoplanet System Studies (ACCESS) [1] observatory.
LSST Telescope guider loop requirements analysis and predicted performance
The LSST Telescope has critical requirements on tracking error to meet image quality specifications, and will require
closing a guiding loop, with the telescope servo control, to meet its mission. The guider subsystem consists of eight
guiding sensors located inside the science focal plane at the edge of the 3.5deg field of view. All eight sensors will be
read simultaneously at a high rate, and a centroid average will be fed to the telescope and rotator servo controls, for
tracking error correction. A detailed model was developed to estimate the sensors' centroid noise and the resulting
telescope tracking error for a given frame rate and telescope servo control system.
The centroid noise depends on the photo-electron flux, seeing conditions, and guide sensor specifications. The model for
the photo-electron flux takes into consideration the guide star availability at different galactic latitudes, the atmospheric
extinction, the optical losses at different filter bands, the detector quantum efficiency, the integration time and the
number of stars sampled. A 7-layer atmospheric model was also developed to estimate the atmospheric decorrelation
between the different guide sensors due to the 3.5deg field of view, to predict both correlated and decorrelated
atmospheric tip/tilt components, and to determine the trade-offs of the guider servo loop.
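The leading-order scaling in such a centroid-noise model can be sketched as follows. The FWHM/(2.355·√N) rule of thumb for a photon-noise-limited centroid, and the simple √N averaging over independent sensors and stars, are illustrative assumptions here, not the LSST team's actual model; the numerical inputs are likewise hypothetical.

```python
import math

def centroid_noise_mas(fwhm_arcsec, flux_e_per_s, t_int_s,
                       n_sensors=8, stars_per_sensor=1):
    """Photon-noise-limited 1-sigma centroid error (milliarcseconds) of the
    averaged guider signal.  A single-star centroid error scales roughly as
    FWHM / (2.355 * sqrt(N_e)); averaging independent estimates over the
    guide sensors reduces it by sqrt(number of stars)."""
    n_e = flux_e_per_s * t_int_s                        # photo-electrons per star
    sigma_one = fwhm_arcsec / (2.355 * math.sqrt(n_e))  # single-star error, arcsec
    n_stars = n_sensors * stars_per_sensor
    return 1000.0 * sigma_one / math.sqrt(n_stars)      # averaged error, mas
```

Doubling the integration time halves the variance of each centroid, so the averaged error falls by √2, which is the basic trade-off against guider frame rate that the model explores.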
Poster Session: Systems Engineering
A formal risk management process for instrumentation projects at the Anglo-Australian Observatory
David R. Orr,
Anthony Heng
Risk management is a dynamic activity that takes place throughout the development process from the concept phase to
the retirement phase of the project. The successful management of risk is a critical part of the instrumentation
development process at the AAO. The AAO has a risk management process based on the AS/ISO standard for risk
management. Brainstorming sessions are conducted with the project team. Potential project risks are identified by the
team and grouped into the categories of technical, political, operational, logistical, environmental, and safety. A risk
matrix is populated with details of each risk. The risk is then ranked based on the consequence and likelihood according
to the scale of Low, Moderate, Significant, and High. The level of risk is evaluated; mitigation control mechanisms are
identified, and assigned to a specific team member for resolution. Risk management is used as a management tool for the
HERMES project. The top five risks are identified, and management effort is then concentrated on reducing these risks.
Risk management is also used during the development process as a trade study tool to evaluate different design options
and assist senior management to make informed decisions.
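The consequence-by-likelihood ranking described above can be sketched as a small risk-matrix helper. The four-level scale follows the abstract, but the numeric 1-5 scales, score thresholds, and example risks below are illustrative assumptions, not the AAO's actual matrix.

```python
def risk_level(consequence, likelihood):
    """Map consequence and likelihood scores (each 1..5, assumed scales) to
    the Low/Moderate/Significant/High scale used in the abstract."""
    score = consequence * likelihood
    if score >= 15:
        return "High"
    if score >= 8:
        return "Significant"
    if score >= 4:
        return "Moderate"
    return "Low"

# Hypothetical risk register entries: (description, category, consequence, likelihood)
register = [
    ("cryostat vacuum leak", "technical", 4, 3),
    ("export licence delay", "political", 3, 2),
]

# Rank so management effort concentrates on the top risks first.
ranked = sorted(register, key=lambda r: r[2] * r[3], reverse=True)
```

Each ranked entry would then be assigned a mitigation control and a responsible team member, as the process describes.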
PORÍS: practical-oriented representation for instrument systems
Jacinto Javier Vaz-Cedillo
This article presents a toolkit for defining simple but powerful systems. The PORIS toolkit is an open-source, extensible
collaborative project that allows graph-based systems and their behavior to be described in a snapshot. It provides a
web editor for a visual domain-specific language (DSL) and transformation tools to generate software prototypes,
system configurations, specific user interfaces, and documentation. Different kinds of instruments, such as astronomical
ones, can be described and represented using PORIS specifications and models. A significant advantage of the PORIS
toolkit is that it makes it easy to provide instant feedback to domain experts during the dynamic process of defining
new instruments.
VISTA, a success story: from conceptual design to operation
This paper considers the development and progression of the VISTA telescope, from conception to the point where it is
now being operated by the scientific community (end user). It analyses and evaluates the value of effective project
management and systems engineering practices with practical examples. The practical application of systems
engineering is addressed throughout the requirement capture and management, design, manufacture, assembly, and
installation, integration, verification and acceptance phases, highlighting the value gained by appropriate application of
step-by-step procedures and tools. The special emphasis given to the importance of effective systems engineering during
on-site installation, verification and validation will be illustrated. Project management aspects are covered from
tendering and procurement through contractor management processes to final integration and commissioning, with great
emphasis placed on the importance of a "win-win" approach and the benefits of effective, constructive
customer/contractor liaison. Consideration is given to the details and practicalities of day-to-day site management,
safety, housekeeping, and the management and support of site personnel and services. Recommendations are made to
improve the effectiveness of UK ATC system engineering and project management so that future projects can benefit
from the lessons learned on VISTA.
Image quality verification analysis of the JWST
The JWST optical performance verification poses challenges not yet encountered in space-based telescopes. The
deployable, segmented Primary Mirror and the Secondary Mirror require re-alignment after launch, rendering ground
alignment states moot. The architecture of JWST was designed to accommodate these difficulties by including active
positioning of the Secondary Mirror and the Primary Mirror Segments. In fact, the requirements are written such that the
active control system shall be used to meet them. Many of the optical requirements are therefore necessarily
based on modeling and simulations of the post-launch re-alignment of the telescope. This paper provides an overview of
a computer simulation process, using an end-to-end integrated model, that is used to statistically evaluate the on-orbit
re-alignment performance based on the uncertainties in the integration and test program and deployments.
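Statistically evaluating performance against uncertain alignment states is typically done as a Monte Carlo over the uncertainty budget. The toy sketch below illustrates only the shape of such an evaluation; the linear sensitivity coefficients, uncertainty magnitudes, and the RSS error model are invented for illustration and have nothing to do with JWST's actual integrated model.

```python
import numpy as np

rng = np.random.default_rng(0)

def wavefront_error_nm(decenter_um, tilt_urad):
    """Toy sensitivity model: RSS of two assumed linear sensitivities
    (5 nm/um of decenter, 2 nm/urad of tilt -- purely illustrative)."""
    return np.hypot(5.0 * decenter_um, 2.0 * tilt_urad)

# Draw post-deployment alignment states from assumed 1-sigma uncertainties
# (3 um decenter, 4 urad tilt) and evaluate the error metric for each draw.
samples = wavefront_error_nm(rng.normal(0.0, 3.0, 10_000),
                             rng.normal(0.0, 4.0, 10_000))

# A requirement would then be checked against a statistical percentile
# of the ensemble rather than a single deterministic prediction.
p95 = np.percentile(samples, 95)
```

The point of the ensemble is that compliance is stated probabilistically (e.g. at the 95th percentile), which is what "statistically evaluate" means in the verification context above.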
Poster Session: Project Management
The VST telescope primary mirror safety system: simulation model and mechanical implementation
The VST telescope is a wide-field survey telescope being installed at Cerro Paranal (Chile). Owing to the geological nature
of the area, telescopes in Chile are subject to unpredictable and sometimes severe earthquake conditions. To
clarify some aspects of the VST telescope's seismic behavior not well represented by linear procedures such as Response
Spectrum Analysis, a transient nonlinear analysis of the whole telescope was undertaken. A mixed Finite
Element/MATLAB-Simulink approach was introduced: a linear FE model of the telescope was developed, with all
nonlinear devices modelled as linear elements, and exported to Simulink using a state-space
representation. In Simulink all nonlinearities are appropriately modeled, and a base excitation corresponding to
accelerograms compliant with the Paranal MLE response spectrum is applied. The resulting force-time histories are then applied
to a detailed finite element model of the mirror to compute the stress field. The paper describes both the Simulink and the mirror FE
analyses, and gives an overview of the actual safety-system mechanical implementation based on the analysis results.
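The standard conversion from a second-order FE model to the first-order state-space form used for such a Simulink export can be sketched as follows; this is the textbook construction, not the VST team's code, and the one-degree-of-freedom example values are assumptions.

```python
import numpy as np

def fe_to_state_space(M, C, K):
    """Convert a second-order structural model  M q'' + C q' + K q = f
    into first-order state-space form  x' = A x + B f,  with state
    x = [q; q'].  This is the usual companion-form construction."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ C]])
    B = np.vstack([np.zeros((n, n)), Minv])
    return A, B

# One-DOF check: m = 2, k = 8 gives natural frequency sqrt(k/m) = 2 rad/s.
A, B = fe_to_state_space(np.array([[2.0]]),
                         np.array([[0.0]]),
                         np.array([[8.0]]))
```

Once in this form, the linear plant can be driven by base-excitation accelerograms while the nonlinear devices are modeled separately in the block diagram, as the abstract describes.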
Virtual reality and project management for astronomy
Over the years, astronomical instrumentation projects have become increasingly complex, making it necessary to find
efficient ways to manage project communication. While all projects share the need to communicate project
information, the required information and the methods of distribution vary widely between projects and project staff. A
particular problem experienced on many projects, regardless of their size, concerns the amount of design and planning
information and how it is distributed among the project stakeholders. One way to improve project communications
management is to use a workflow that offers a predefined way to share information in a project. Virtual Reality (VR)
offers the possibility of getting visual feedback on designed components without the expense of prototype building, providing
an experience that mimics real-life situations using a computer. In this contribution we explore VR as a communication
technology that helps to manage instrumentation projects by means of a workflow implemented in a software package
called Discut, designed at the Universidad Nacional Autónoma de Mexico (UNAM). The workflow can integrate VR
environments generated from CAD models.
Collaborative engineering and design management for the Hobby-Eberly Telescope tracker upgrade
The engineering and design of systems as complex as the Hobby-Eberly Telescope's* new tracker require that multiple
tasks be executed in parallel and overlapping efforts. When the design of individual subsystems is distributed among
multiple organizations, teams, and individuals, challenges can arise with respect to managing design productivity and
coordinating successful collaborative exchanges. This paper focuses on design management issues and current practices
for the tracker design portion of the Hobby-Eberly Telescope Wide Field Upgrade project. The scope of the tracker
upgrade requires engineering contributions and input from numerous fields including optics, instrumentation, electromechanics,
software controls engineering, and site-operations. Successful system-level integration of tracker subsystems
and interfaces is critical to the telescope's ultimate performance in astronomical observation. Software and process
controls for design information and workflow management have been implemented to assist the collaborative transfer of
tracker design data. The tracker system architecture and the selection of subsystem interfaces have also proven to be
determining factors in design task formulation and team communication needs. Interface controls and requirements
change controls will be discussed, and critical team interactions are recounted (a group-participation Failure Modes and
Effects Analysis [FMEA] is one of special interest). This paper will be of interest to engineers, designers, and managers
engaging in multi-disciplinary and parallel engineering projects that require coordination among multiple individuals,
teams, and organizations.
A paradigm shift to enable more cost-effective space science telescope missions in the upcoming decades
Modern astronomy is dealing with an exciting but challenging dichotomy. On the one hand, there have been and
will continue to be countless advances in scientific discovery; on the other, the astronomical community is faced with
what many unfortunately consider to be an insurmountable budgetary impasse for the foreseeable future. The
National Academy of Sciences' Astro2010: Decadal Survey has been faced with the difficult challenge of prioritizing
sciences and missions for the upcoming decade while still allowing room for new, yet to be discovered opportunities to
receive funding. To this end, we propose the consideration of a paradigm shift to the astronomical community that may
enable more cost efficient space-based telescope missions to be funded and still provide a high science return per dollar
invested. The proposed paradigm shift has several aspects that make it worthy of consideration: 1) Telescopes would
leverage existing Commercial Remote Sensing Satellite (CRSS) Architectures such as the 1.1m NextView systems
developed by ITT, GeoEye-1, and WorldView-2, or the 0.7m IKONOS system (or perhaps other proprietary systems); 2)
By using large EELV class fairings, multiple telescopes with different science missions could be flown on a single
spacecraft bus sharing common features such as communications and telemetry (current Earth Science missions in early
development phases are considering this approach); 3) Multiple smaller observatories (with multiple spacecraft) could be
flown in a single launch vehicle for instances where the different science payloads had incompatible requirements; and
4) By leveraging CRSS architectures, vendors could supply telescopes at a fixed price. Here we discuss the implications
and risks that the proposed paradigm shift would carry.