Proceedings Volume 9150

Modeling, Systems Engineering, and Project Management for Astronomy VI


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 19 August 2014
Contents: 12 Sessions, 71 Papers, 0 Presentations
Conference: SPIE Astronomical Telescopes + Instrumentation 2014
Volume Number: 9150

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9150
  • Project Management I
  • Project Management II
  • System Integration, Verification, and Validation
  • System Designs and Architectures
  • Model Based Systems Engineering
  • System Modeling I
  • Systems Engineering I
  • Systems Engineering II
  • System Modeling II
  • Systems Engineering III
  • Poster Session
Front Matter: Volume 9150
Front Matter: Volume 9150
This PDF file contains the front matter associated with SPIE Proceedings Volume 9150, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Project Management I
Success in large high-technology projects: What really works?
Despite a plethora of tools, technologies and management systems, successful execution of big science and engineering projects remains problematic. The sheer scale of globally funded projects such as the Large Hadron Collider and the Square Kilometre Array telescope means that lack of project success can impact both national budgets and collaborative reputations. In this paper, I explore data from contemporary literature alongside field research from several current high-technology projects in Europe and Australia, and reveal common ‘pressure points’ that are shown to be key influencers of project control and success. I discuss how mega-science projects sit between being merely complicated, and chaotic, and explain the importance of understanding multiple dimensions of project complexity. Project manager/leader traits are briefly discussed, including capability to govern and control such enterprises. Project structures are examined, including the challenge of collaborations. I show that early attention to building project resilience, curbing optimism, and risk alertness can help prepare large high-tech projects against threats, and why project managers need to understand aspects of ‘the silent power of time’. Mission assurance is advanced as a critical success function, alongside the deployment of task forces and new combinations of contingency plans. I argue for increased project control through industrial-style project reviews, and show how post-project reviews are an under-used, yet invaluable avenue of personal and organisational improvement. Lastly, I discuss the avoidance of project amnesia through effective capture of project knowledge, and transfer of lessons-learned to subsequent programs and projects.
Generic documentation tree for science ground segments
F. Pérez-López, T. Lock, D. Texier
The competences of the Science Ground Segment, for an ESA science mission, include: science operations planning, science instrument handling, data reception and processing, and archiving, as well as providing science support. This paper presents a generic documentation structure applicable during the analysis, definition, implementation and operational phases of an ESA Science Ground Segment. It is the conclusion of the analysis performed in the scope of the current ESAC Science Ground Segment developments and is derived from the experience of previous ESA science missions and the ESA standardization efforts (ECSS Standards). It provides a guideline to support the Science Ground Segment documentation processes during all mission phases, representing a new approach for the development of future ESA science missions and providing an initial documentation structure that may be tailored to the specific scientific, engineering and managerial characteristics of each mission. This paper also describes the process followed to produce the generic documentation tree, and how development and operations experience feeds back into its updated versions.
Tackling five main problem areas found in science (ground segment) project developments
T. Lock, F. Pérez-López
Science projects which require a large software development may use many scientists alongside a few professional software engineers. Such projects tend to show extreme cases of the general problems associated with software developments. After introducing an example of a large software development in a science project, the importance of a development management plan is emphasised, sections of the plan are highlighted, and it is explained how these sections address and prepare for the expected problems throughout the life of the project. A positive, strongly proactive quality assurance (QA) approach is the common theme throughout. The role of QA is, therefore, more to guide, support and advise all members of the team than simply to detect and react to problems. The top five problem areas addressed are: 1. Vague, late and missing requirements. 2. Few professional software engineers in a large software development. 3. A lack of testers with an appropriate test mentality. 4. Quality Assurance people cannot be everywhere, nor have in-depth skills in every subject. 5. Scientists will want to start coding and see writing documents as a waste of their time.
Project Management II
The tail wags the dog: managing large telescope construction projects with lagging requirements and creeping scope
In a perfect world, large telescopes would be developed and built in logical, sequential order. First, scientific requirements would be agreed upon, vetted, and fully developed. From these, instrument designers would define their own subsystem requirements and specifications, and then flesh out preliminary designs. This in turn would allow optic designers to specify lens and mirror requirements, which would permit telescope mounts and drives to be designed. Finally, software and safety systems, enclosures and domes, buildings, foundations, and infrastructures would be specified and developed. Unfortunately, the order of most large telescope projects is the opposite of this sequence. We don’t live in a perfect world. Scientists usually don’t want to commit to operational requirements until late in the design process, instrument designers frequently change and update their designs due to improving filter and camera technologies, and mount and optics engineers seem to live by the words “more” and “better” throughout their own design processes. Amplifying this is the fact that site construction of buildings and domes is usually among the earliest critical path items on the schedule, and is often subject to lengthy permitting and environmental processes. These facility and support items must therefore get underway quickly, often before operational requirements are fully considered. Mirrors and mounts also have very long lead times for fabrication, which in turn necessitates that they are specified and purchased early. All of these factors can result in expensive and time-consuming change orders when requirements are finalized and/or shift late in the process. This paper discusses some of these issues encountered on large, multi-year construction projects. It also presents some techniques and ideas to minimize these effects on schedule and cost. Included is a discussion of the role of Interface Control Documents (ICDs), the importance (and danger) of making big-picture decisions early, and designing flexibility and adaptability into subsystems. In a perfect world, science would be the big dog in the room, wagging the engineering tail. In our non-perfect world, however, it’s often the tail that ends up wagging the dog instead.
Daniel K. Inouye Solar Telescope system safety
Robert P. Hubbard, Scott E. Bulau, Steve Shimko, et al.
System safety for the Daniel K. Inouye Solar Telescope (DKIST) is the joint responsibility of a Maui-based safety team and the Tucson-based systems engineering group. The DKIST project is committed to the philosophy of “Safety by Design”. To that end the project has implemented an aggressive hazard analysis, risk assessment, and mitigation system. It was initially based on MIL-STD-882D, but has since been augmented in a way that lends itself to direct application to the design of our Global Interlock System (GIS). This was accomplished by adopting the American National Standard for Industrial Robots and Robot Systems (ANSI/RIA R15.06) for all identified hazards that involve potential injury to personnel. In this paper we describe the details of our augmented hazard analysis system and its use by the project. Since most of the major hardware for the DKIST (e.g., the enclosure, and telescope mount assembly) has been designed and is being constructed by external contractors, the DKIST project has required our contractors to perform a uniform hazard analysis of their designs using our methods. This paper also describes the review and follow-up process implemented by the project that is applied to both internal and external subsystem designs. Our own weekly hazard analysis team meetings have now largely turned to system-level hazards and hazards related to specific tasks that will be encountered during integration, test, and commissioning and maintenance operations. Finally we discuss a few lessons learned, describing things we might do differently if we were starting over today.
Integrated Logistics Support approach: concept for the new big projects: E-ELT, SKA, CTA
G. Marchiori, F. Rampini, F. Formentin
Integrated Logistics Support (ILS) is a process that supports strategy and optimizes activities for sound project management and systems engineering development. From the design and engineering of complex technical systems, to erection on site, acceptance and after-sales service, EIE GROUP covers all aspects of the ILS process, which includes: a costing process centered on life cycle cost and Level of Repair Analyses; an engineering process which influences the design by means of reliability, modularization, etc.; a technical publishing process based on international specifications; and an ordering administration process for supply support. Through ILS, EIE GROUP plans and directs the identification and development of logistics support and system requirements for its products, with the goal of creating systems that last longer and require less support, thereby reducing costs and increasing return on investment. ILS therefore addresses these aspects of supportability not only during acquisition, but also throughout the operational life cycle of the system. The impact of ILS is often measured in terms of metrics such as reliability, availability, maintainability and testability (RAMT), and system safety (RAMS). Examples of the criteria and approach adopted by EIE GROUP during the design, manufacturing and test of the ALMA European antennas, and during the design phase of the E-ELT telescope and dome, are presented.
De-mystifying earned value management for ground based astronomy projects, large and small
Timothy Norton, Patricia Brennan, Mark Mueller
The scale and complexity of today’s ground based astronomy projects have justifiably required Principal Investigators and their project teams to adopt more disciplined management processes and tools in order to achieve timely and accurate quantification of the progress and relative health of their projects. Earned Value Management (EVM) is one such tool. Developed decades ago, used extensively in the defense and construction industries, and now a requirement for NASA projects greater than $20M, EVM has gained a foothold in ground-based astronomy projects. The intent of this paper is to de-mystify EVM by discussing the fundamentals of project management, explaining how EVM fits with existing principles, and describing key concepts every project can use to implement its own EVM system. This paper also discusses pitfalls to avoid during implementation and obstacles to its success. The authors report on their organization’s most recent experience implementing EVM for the GMT-Consortium Large Earth Finder (G-CLEF) project. G-CLEF is a fiber-fed, optical echelle spectrograph that has been selected as a first light instrument for the Giant Magellan Telescope (GMT), planned for construction at the Las Campanas Observatory in Chile’s Atacama Desert region.
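For readers new to EVM, the core quantities reduce to a few arithmetic identities. The sketch below is illustrative only (the figures are invented, not G-CLEF data) and shows the standard variances, performance indices, and estimate at completion.

```python
# Minimal sketch of standard Earned Value Management (EVM) quantities.
# Illustrative numbers only; not data from any actual project.

def evm_indices(pv, ev, ac):
    """Return the standard EVM variances and indices.

    pv: Planned Value (budgeted cost of work scheduled)
    ev: Earned Value (budgeted cost of work performed)
    ac: Actual Cost (actual cost of work performed)
    """
    sv = ev - pv    # Schedule Variance (negative = behind schedule)
    cv = ev - ac    # Cost Variance (negative = over budget)
    spi = ev / pv   # Schedule Performance Index (<1 = behind)
    cpi = ev / ac   # Cost Performance Index (<1 = over budget)
    return sv, cv, spi, cpi

bac = 20e6                       # Budget At Completion, e.g. $20M
sv, cv, spi, cpi = evm_indices(pv=5.0e6, ev=4.5e6, ac=5.2e6)
eac = bac / cpi                  # Estimate At Completion at current cost efficiency
print(f"SV={sv:.0f}, CV={cv:.0f}, SPI={spi:.2f}, CPI={cpi:.2f}, EAC=${eac/1e6:.1f}M")
```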
System Integration, Verification, and Validation
The commissioning of Gaia
With the successful launch of the next generation space astrometry mission Gaia in December 2013, this paper provides an overview and status of the mission after its first half year of operations in space. We provide a summary of the performed commissioning activities, the findings obtained, and how these first months of working on real data are impacting the DPAC operational concepts. The results also provide a first glimpse of what Gaia will deliver in its future catalog releases.
The ALMA assembly, integration, and verification project: a retrospective analysis
B. Lopez, L. B. G. Knee, H. Jager, et al.
The Atacama Large Millimeter/submillimeter Array (ALMA) is a joint project between astronomical organizations in Europe, North America, and East Asia, in collaboration with the Republic of Chile. ALMA consists of 54 twelve-meter antennas and 12 seven-meter antennas operating as an aperture synthesis array in the (sub)millimeter wavelength range. Assembly, Integration, and Verification (AIV) of the antennas was completed at the end of 2013, while the final optimization and the complete expansion to validate all planned observing modes will continue. This paper compares the results actually obtained in the period 2008-2013 with the baselines laid out in the early project-planning phase (2005-2007). The first plans made for ALMA AIV had already established a two-phased project life-cycle: phase 1 for setting up the necessary infrastructure and common facilities, and taking the first three antennas to the start of commissioning; and phase 2 focused on the steady-state processing of the remaining units. Throughout the execution of the project this life-cycle was refined and two additional phases were added, namely a transition phase between phases 1 and 2, and a closing phase to address the project ramp-down. A sub-project called Accelerated Commissioning and Science Verification (ACSV) was carried out during 2009 in order to provide focus to the whole ALMA organization and to accomplish the start-of-commissioning milestone. Early phases of CSV focused on validating the basic performance and calibration. Over time, additional observing modes have been validated as capabilities expanded in both hardware and software. This retrospective analysis describes the originally presented project staffing plans and schedules, the underlying assumptions, the identified risks and the operational models, among other aspects. For comparison, actual data on staffing levels, the resulting schedule, additional risks identified, and those that actually materialized are presented. The observed similarities and differences are then analyzed and explained, and the corresponding lessons learned are presented.
Daniel K. Inouye Solar Telescope: integration testing and commissioning planning
The Daniel K. Inouye Solar Telescope (DKIST), formerly the Advanced Technology Solar Telescope (ATST), has been in its construction phase since 2010, anticipating the onset of the integration, test, and commissioning (IT&C) phase late in 2016, and the commencement of science verification in early 2019. In this paper we describe the planning of the IT&C phase of the project.
MUSE dream conclusion: the sky verdict
P. Caillier, M. Accardo, L. Adjali, et al.
MUSE (Multi Unit Spectroscopic Explorer) is a second generation instrument built for ESO (European Southern Observatory). The MUSE project is supported by a European consortium of 7 institutes. After the finalisation of its integration in Europe, the MUSE instrument was partially dismounted and shipped to the VLT (Very Large Telescope) in Chile. From October 2013 till February 2014 it was then reassembled, tested and finally installed on the telescope, its final home. From there it collects its first photons coming from the outer limit of the visible universe. This critical moment, when the instrument finally meets its destiny, is the opportunity to look at the overall outcome of the project and the final performance of the instrument on the sky. The instrument we dreamt of has become reality. Are the dreamt-of performances there as well? These final instrumental performances are the result of a step by step process of design, manufacturing, assembly, test and integration. Now is also the time to review the path opened by the MUSE project. What challenges were faced during those last steps? What strategy, what choices paid off? What did not?
Planning and reality of the final verification and on-site assembly of KMOS for the VLT
P. Rees II, G. H. Davidson, A. E. Fairley, et al.
The K band Multi Object Spectrograph (KMOS) instrument underwent its final European verification in the early part of 2012. It was then shipped to the Paranal Observatory where it was re-assembled, retested and then installed on one of the European Southern Observatory’s (ESO) Very Large Telescope (VLT) unit telescopes ready for final commissioning tests at the end of 2012. The whole process required meticulous planning in order to minimise the risk of problems at Paranal and the associated disruption to the observatory operations. This paper discusses the planning process and compares what actually happened against the plans. The process was smooth and some reasons for this success are explored.
System Designs and Architectures
Feed array metrology and correction layer for large antenna systems in ASIC mixed signal technology
F. Centureli, G. Scotti, P. Tommasino, et al.
The paper deals with a possible use of the feed array present in a large antenna system as a layer for measuring the antenna performance with a self-test procedure, and with a possible way to correct residual errors of the antenna geometry and of the antenna distortions. Focus has been concentrated on a few key critical elements of a possible feed array metrology program. In particular, a preliminary contribution to the design and development of the feed array on one side, and the subsystem dedicated to antenna distortion monitoring and control on the other, have been chosen as the first areas of investigation. Scalability and flexibility principles and a synergic approach with other coexistent technologies have been assumed to be of paramount importance to ensure ease of integrated operation, in principle allowing increased performance and efficiency. The concept is based on the use of an existing feed array grid to measure antenna distortion with respect to the nominal configuration. Measured data are then processed to develop a multilayer strategy to control the mechanical movable devices (when existing) and to adjust the residual fine errors through a software-controlled phase adjustment of the existing phase shifters. The signal from the feed array is converted, passing through an FPGA/ASIC level, to digital data channels of the kind typically used for the scientific experiments. One additional channel is used for monitoring the antenna distortion status. These data are processed to define the best correction strategy, based on a software-managed control system capable of operating at three different levels of the antenna system: the reflector rotation layer; the sub-reflector rotation and translation layer (assuming the possibility of controlling a Stewart platform); and the phase shifters of the phased array layer. The project is at present in the design phase: a few elements necessary for a sound software design of the control subsystem have been developed at a technological demonstrator level, while the ASIC board for generating the digital data stream has been fully developed. A prototype for accurately controlling the position of a sub-reflector up to a diameter of 5 meters (similar to the sub-reflector size of a large antenna) using a Stewart mechanism is being planned. The selection strategy of the correction modes will depend on the dynamics of the phased array (i.e., the available bits of the A/D conversion) and on the reaction time allowed for the correction, which depends on the error type and the inertia of the subsystems. Typically, the compensation can be divided among all the adjusting elements.
Overview of the LSST active optics system
The LSST will utilize an Active Optics System to optimize the image quality by controlling the surface figures of the mirrors (M1M3 and M2) and by maintaining the relative positions of the three optical systems (the M1M3 mirror, the M2 mirror and the camera). The mirror surfaces are adjusted by means of figure control actuators that support the mirrors. The relative rigid body positions of M1M3, M2 and the camera are controlled through hexapods that support the M2 mirror cell assembly and the camera. The Active Optics System (AOS) is principally operated off a Look-Up Table (LUT), with corrections provided by wavefront sensors.
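A minimal sketch of a look-up-table-plus-correction command of the kind described (a generic illustration; the elevation grid, table values, and gain are assumptions, not LSST values):

```python
# Sketch: active-optics command = look-up-table baseline (interpolated
# in elevation) plus a slow correction driven by wavefront sensing.
import numpy as np

elevations = np.array([15., 30., 45., 60., 75., 90.])         # LUT grid [deg], assumed
lut_forces = np.vstack([e * np.ones(5) for e in elevations])  # placeholder table values

def aos_command(elevation, wfs_correction, gain=0.3):
    """Interpolate the LUT at the current elevation, then blend in the
    wavefront-sensor correction with a modest integrator gain."""
    baseline = np.array([np.interp(elevation, elevations, lut_forces[:, i])
                         for i in range(lut_forces.shape[1])])
    return baseline + gain * wfs_correction

cmd = aos_command(52.0, wfs_correction=np.array([0.1, -0.2, 0.0, 0.05, 0.0]))
print(cmd)
```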
Real time wavefront control system for the Large Synoptic Survey Telescope (LSST)
The LSST is an integrated, ground based survey system designed to conduct a decade-long time domain survey of the optical sky. It consists of an 8-meter class wide-field telescope, a 3.2 Gpixel camera, and an automated data processing system. In order to realize the scientific potential of the LSST, its optical system has to provide excellent and consistent image quality across the entire 3.5 degree Field of View. The purpose of the Active Optics System (AOS) is to optimize the image quality by controlling the surface figures of the telescope mirrors and maintaining the relative positions of the optical elements. The basic challenge of the wavefront sensor feedback loop for an LSST type 3-mirror telescope is the near degeneracy of the influence function linking optical degrees of freedom to the measured wavefront errors. Our approach to mitigate this problem is modal control, where a limited number of modes (combinations of optical degrees of freedom) are operated at the sampling rate of the wavefront sensing, while the control bandwidth for the barely observable modes is significantly lower. The paper presents a control strategy based on linear approximations to the system, and the verification of this strategy against system requirements by simulations using more complete, non-linear models for LSST optics and the curvature wavefront sensors.
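As a generic illustration of this modal-control idea (an assumption-laden sketch, not the actual LSST controller; the influence matrix here is random), a truncated-SVD split between well-observed and barely observable modes could look like this:

```python
# Sketch: modal control of a near-degenerate influence function via SVD.
# A (n_wavefront x n_dof) sensitivity matrix maps optical degrees of
# freedom to measured wavefront errors; weakly observable modes (small
# singular values) get a much lower control bandwidth.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
A[:, -3:] = A[:, :3] + 1e-6 * rng.normal(size=(50, 3))  # near-degenerate DOFs

U, s, Vt = np.linalg.svd(A, full_matrices=False)
well_observed = s > 1e-3 * s[0]        # bandwidth-split threshold (assumed)

def modal_correction(wfe, gain_fast=0.5, gain_slow=0.05):
    """Map a measured wavefront error to DOF commands, mode by mode."""
    gains = np.where(well_observed, gain_fast, gain_slow)
    # Project onto modes, apply per-mode gain, invert singular values.
    return Vt.T @ (gains * (U.T @ wfe) / s)

wfe = A @ rng.normal(size=20) + 0.01 * rng.normal(size=50)  # noisy measurement
print(modal_correction(wfe)[:5])
```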
NEAT breadboard system analysis and performance models
François Hénault, Antoine Crouzier, Fabien Malbet, et al.
NEAT (Nearby Earth Astrometric Telescope) is an astrometric space mission aiming at detecting Earth-like exoplanets located in the habitable zone of nearby solar-type stars. For that purpose, NEAT should be able to measure stellar centroids to an accuracy of 5×10⁻⁶ pixels. In order to fulfil such a stringent requirement, NEAT incorporates an interferometric metrology system measuring pixel gains and location errors. To validate this technology and assess the whole performance of the instrument, a dedicated test bench has been built at IPAG, in Grenoble (France). In this paper we summarize the main system engineering considerations used to define the sub-system specifications. We then describe the general architecture of the performance models (including photometric, interferometric, and final astrometric budgets) and confront their predictions with the experimental results obtained on the test bench. It is concluded that most of the error items are well understood, although some of them deserve further investigation.
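A minimal sketch of the basic measurement such a calibration serves, a flux-weighted centroid corrected by a measured per-pixel gain map (generic illustration; the NEAT analysis is far more elaborate):

```python
# Sketch: flux-weighted centroid with per-pixel gain correction, the
# basic quantity that pixel-gain metrology calibrates.
import numpy as np

def calibrated_centroid(image, gain_map):
    """image: raw counts; gain_map: measured per-pixel relative gains."""
    corrected = image / gain_map           # undo pixel-to-pixel gain errors
    y, x = np.indices(corrected.shape)
    total = corrected.sum()
    return (x * corrected).sum() / total, (y * corrected).sum() / total

rng = np.random.default_rng(3)
yy, xx = np.indices((9, 9))
star = np.exp(-((yy - 4.2) ** 2 + (xx - 3.8) ** 2) / 4.0)  # toy PSF at (3.8, 4.2)
gains = 1.0 + 0.01 * rng.normal(size=star.shape)           # 1% gain errors (assumed)
print(calibrated_centroid(star * gains, gains))            # recovers ~(3.8, 4.2)
```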
Relative performance of dispersive and non-dispersive far-infrared spectrometer instrument architectures
Bruce Sibthorpe, Willem Jellema
This paper presents an analysis of the relative performance of dispersive and non-dispersive spectrometer designs, built for astronomical observing at far infrared wavelengths. The analysis compares the relative point source and mapping capabilities of each configuration in both pure background limited and detector limited regimes. These results are assessed in terms of their application for future space-based astronomical facilities in which astronomical sky background limited performance is realistically achievable.
Running AIM: initial data treatment and micro-arcsec level calibration procedures for Gaia within the astrometric verification unit
D. Busonero, M. G. Lattanzi, M. Gai, et al.
The Gaia payload is a highly sophisticated system, and much of its instrumental behaviour is tested to proper accuracy during the Commissioning and Early Operations phase. The Astrometric Instrument Modelling (AIM) belongs to the Core Processing (CU3) software systems devoted to astrometric data processing, instrumental monitoring and calibration; it was developed in the context of a special unit of CU3 devoted to Astrometric Verification. While waiting for nominal scientific operations, we present the challenges faced in the Gaia initial data treatment and in real-time instrument health monitoring and diagnostics during the non-standard conditions of the Commissioning phase. We describe the dedicated diagnostic and correction procedures implemented for Commissioning and Early Operations, and we show some results obtained during the still ongoing Commissioning activities.
Model Based Systems Engineering
Model based systems engineering for astronomical projects
Model Based Systems Engineering (MBSE) is an emerging field of systems engineering for which the System Modeling Language (SysML) is a key enabler for descriptive, prescriptive and predictive models. This paper surveys some of the capabilities, expectations and peculiarities of tool-assisted MBSE experienced in real-life astronomical projects. The examples range in depth and scope across a wide spectrum of applications (for example documentation, requirements, analysis, trade studies) and purposes (addressing a particular development need, or accompanying a project throughout many - if not all - of its lifecycle phases, fostering reuse and minimizing ambiguity). From the beginnings of the Active Phasing Experiment, through VLT instrumentation, VLTI infrastructure, and the Telescope Control System for the E-ELT, to Wavefront Control for the E-ELT, we show how stepwise refinements of tools, processes and methods have provided tangible benefits to customary systems engineering activities such as requirement flow-down, design trade studies, interface definition, and validation, by means of a variety of approaches (like Model Checking, Simulation, and Model Transformation) and methodologies (like OOSEM and State Analysis).
Systems engineering in the Large Synoptic Survey Telescope project: an application of model based systems engineering
The Large Synoptic Survey Telescope project was an early adopter of SysML and Model Based Systems Engineering practices. The LSST project began using MBSE for requirements engineering beginning in 2006 shortly after the initial release of the first SysML standard. Out of this early work the LSST’s MBSE effort has grown to include system requirements, operational use cases, physical system definition, interfaces, and system states along with behavior sequences and activities. In this paper we describe our approach and methodology for cross-linking these system elements over the three classical systems engineering domains – requirement, functional and physical - into the LSST System Architecture model. We also show how this model is used as the central element to the overall project systems engineering effort. More recently we have begun to use the cross-linked modeled system architecture to develop and plan the system verification and test process. In presenting this work we also describe “lessons learned” from several missteps the project has had with MBSE. Lastly, we conclude by summarizing the overall status of the LSST’s System Architecture model and our plans for the future as the LSST heads toward construction.
Using SysML for verification and validation planning on the Large Synoptic Survey Telescope (LSST)
This paper provides an overview of the tool, language, and methodology used for Verification and Validation Planning on the Large Synoptic Survey Telescope (LSST) Project. LSST has implemented a Model Based Systems Engineering (MBSE) approach as a means of defining all systems engineering planning and definition activities that have historically been captured in paper documents. Specifically, LSST has adopted the Systems Modeling Language (SysML) standard and is utilizing a software tool called Enterprise Architect, developed by Sparx Systems. Much of the historical use of SysML has focused on the early phases of the project life cycle. Our approach is to extend the advantages of MBSE into later stages of the construction project. This paper details the methodology employed to use the tool to document the verification planning phases, including the extension of the language to accommodate the project’s needs. The process includes defining the Verification Plan for each requirement, which in turn consists of a Verification Requirement, Success Criteria, Verification Method(s), Verification Level, and Verification Owner. Each Verification Method for each Requirement is defined as a Verification Activity and mapped into Verification Events, which are collections of activities that can be executed concurrently in an efficient and complementary way. Verification Event dependency and sequences are modeled using Activity Diagrams. The methodology employed also ties in to the Project Management Control System (PMCS), which utilizes Primavera P6 software, mapping each Verification Activity as a step in a planned activity. This approach leads to full traceability from initial Requirement to scheduled, costed, and resource loaded PMCS task-based activities, ensuring all requirements will be verified.
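A minimal sketch of that verification data model, paraphrased from the structure described above into Python dataclasses for illustration (the real model lives in SysML inside Enterprise Architect, not in code):

```python
# Sketch of the verification-planning structure described above: each
# Requirement carries a Verification Plan; Verification Activities are
# grouped into Verification Events that can be executed concurrently.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VerificationPlan:
    verification_requirement: str
    success_criteria: str
    methods: List[str]   # e.g. ["Test", "Analysis", "Inspection", "Demonstration"]
    level: str           # e.g. "Subsystem" or "System"
    owner: str

@dataclass
class Requirement:
    req_id: str
    text: str
    plan: VerificationPlan

@dataclass
class VerificationEvent:
    name: str
    activities: List[str] = field(default_factory=list)  # concurrent activities

req = Requirement(
    "REQ-0001", "Example requirement text (hypothetical)",
    VerificationPlan("Verify X under condition Y", "Measured X within tolerance",
                     ["Test"], "System", "Systems Engineering"))
event = VerificationEvent("Dome integration test",
                          activities=[req.plan.verification_requirement])
```

Each Verification Activity would additionally map to a scheduled task in the project management system, which is what gives the end-to-end traceability the abstract describes.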
System Modeling I
Transient aero-thermal simulations for TMT
Aero-thermal simulations are an integral part of the design process for the Thirty Meter Telescope (TMT). These simulations utilize Computational Solid-Fluid Dynamics (CSFD) to estimate wind jitter and blur, dome and mirror seeing, telescope pointing error due to thermal drift, and to predict thermal effects on performance of components such as the primary mirror segments. Design guidance obtained from these simulations is provided to the Telescope, Enclosure, Facilities and Adaptive Optics groups. Computational advances allow for model enhancements and inclusion of phenomena not previously resolved, such as transient effects on wind loading and thermal seeing due to vent operation while observing or long exposure effects, with potentially different flow patterns corresponding to the beginning and end of observation. Accurate knowledge of the Observatory aero-thermal environment will result in developing reliable look-up tables for effective open loop correction of key active optics system elements, and cost efficient operation of the Observatory.
Unsteady wind loads for TMT: replacing parametric models with CFD
Unsteady wind loads due to turbulence inside the telescope enclosure result in image jitter and higher-order image degradation due to M1 segment motion. Advances in computational fluid dynamics (CFD) allow unsteady simulations of the flow around realistic telescope geometry, in order to compute the unsteady forces due to wind turbulence. These simulations can then be used to understand the characteristics of the wind loads. Previous estimates used a parametric model based on a number of assumptions about the wind characteristics, such as a von Karman spectrum and frozen-flow turbulence across M1, and relied on CFD only to estimate parameters such as mean wind speed and turbulent kinetic energy. Using the CFD-computed forces avoids the need for assumptions regarding the flow. We discuss here both the loads on the telescope that lead to image jitter, and the spatially-varying force distribution across the primary mirror, using simulations with the Thirty Meter Telescope (TMT) geometry. The amplitude, temporal spectrum, and spatial distribution of wind disturbances are all estimated; these are then used to compute the resulting image motion and degradation. There are several key differences relative to our earlier parametric model. First, the TMT enclosure provides sufficient wind reduction at the top end (near M2) to render the larger cross-sectional structural areas further inside the enclosure (including M1) significant in determining the overall image jitter. Second, the temporal spectrum is not von Karman as the turbulence is not fully developed; this applies both in predicting image jitter and M1 segment motion. And third, for loads on M1, the spatial characteristics are not consistent with propagating a frozen-flow turbulence screen across the mirror: Frozen flow would result in a relationship between temporal frequency content and spatial frequency content that does not hold in the CFD predictions. Incorporating the new estimates of wind load characteristics into TMT response predictions leads to revised estimates of the response of TMT to wind turbulence, and validates the aerodynamic design of the enclosure.
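For context, the sketch below gives one widely used form of the von Karman velocity spectrum that such parametric models assume (the exact normalization used in the earlier TMT model is an assumption here); a CFD-derived spectrum can be compared against this curve to test whether the turbulence is fully developed.

```python
# Sketch: von Karman longitudinal velocity spectrum (a common
# normalization), the parametric assumption the CFD approach replaces.
import numpy as np

def von_karman_psd(f, sigma_u, L, U):
    """Power spectral density of wind-speed fluctuations.

    f: frequency [Hz]; sigma_u: turbulence std dev [m/s];
    L: turbulence length scale [m]; U: mean wind speed [m/s].
    """
    x = f * L / U
    return 4.0 * sigma_u**2 * (L / U) / (1.0 + 70.8 * x**2) ** (5.0 / 6.0)

f = np.logspace(-3, 1, 200)                            # frequency grid [Hz]
psd = von_karman_psd(f, sigma_u=1.0, L=30.0, U=5.0)    # illustrative values
# A CFD-computed force or velocity spectrum that departs from this curve
# indicates turbulence that is not fully developed, as the paper finds.
```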
Estimating dome seeing for LSST
Dome seeing is a critical effect influencing the optical performance of ground based telescopes. A previously reported combination of Computational Fluid Dynamics (CFD) and optical simulations to model dome seeing was implemented for the latest LSST enclosure geometry. To this end, high spatial resolution thermal unsteady CFD simulations were performed for three different telescope zenith angles and four azimuth angles. These simulations generate time records of refractive index values along the optical path, which are post-processed to estimate the image degradation due to dome seeing. This method allows us to derive the distribution of the seeing contribution along the different optical path segments that compose the overall light path from the entrance of the dome to the LSST science camera. These results are used to recognize potential problems and to guide the observatory design. In this paper, the modeling estimates are reviewed and assessed relative to the corresponding performance allocation, and combined with other simulator outputs to model the dome seeing impact during LSST operations.
Wavefront sensing and control performance modeling of the Thirty Meter Telescope for systematic trade analyses
We have developed an integrated optical model of the semi-static performance of the Thirty Meter Telescope. The model includes surface and rigid body errors of all telescope optics as well as a model of the Alignment and Phasing System Shack-Hartmann wavefront sensors and control algorithms. This integrated model allows for simulation of the correction of the telescope wavefront, including optical errors on the secondary and tertiary mirrors, using the primary mirror segment active degrees of freedom. This model provides the estimate of the predicted telescope performance for system engineering and error budget development. In this paper we present updated performance values for the TMT static optical errors in terms of Normalized Point Source Sensitivity and RMS wavefront error after Adaptive Optics correction. As an example of a system level trade, we present the results from an analysis optimizing the number of Shack-Hartmann lenslets per segment. We trade the number of lenslet rings over each primary mirror segment against the telescope performance metrics of PSSN and RMS wavefront error.
TOAD: a numerical model for the 4MOST instrument
Roland Winkler, Dionne M. Haynes, Olga Bellido-Tirado, et al.
TOAD, the “Top Of the Atmosphere to Detector” simulator, is a primary engineering tool that accompanies the development of the 4MOST instrument. The ultimate goal is to provide a detailed, end-to-end performance model of 4MOST by providing the detector image for an artificial target field with less than 5% error. TOAD will be able to create a realistic output for any reasonable input. The input can be anything from point sources to extended sources, calibration lamps or stray light, entering the system at virtually any point in the optical path. During the development of the 4MOST facility, the TOAD simulator will give invaluable insight into the interaction of the various parts of the instrument and the impact of engineering design decisions on system performance.
Systems Engineering I
Systems engineering plan for the construction phase of the E-ELT
J. C. Gonzalez, H. Kurlandczyk, D. Schneller
Having completed phase B (front-end design) of the several subsystems, the E-ELT project is entering the construction phase. The subsystem specifications, interface control documents and accompanying technical documentation resulting from these design activities are being drafted, along with the statements of work needed for the tendering processes. This paper presents an overview of the Systems Engineering Plan for the construction phase, focusing on the specific systems engineering processes. The goal is to ensure that this phase is developed following an efficient systems engineering approach based on the lessons learned during phase B. The ultimate objective is that the E-ELT meets the science requirements defined by the users while minimizing the risk of overruns in cost or schedule that might otherwise originate from the lack of a system perspective.
Systems engineering of the Thirty Meter Telescope for the construction phase
Scott Roberts, John Rogers, Hugh Thompson, et al.
This paper provides an overview of the system design, architecture, and construction phase system engineering processes of the Thirty Meter Telescope project. We summarize the key challenges and our solutions for managing TMT systems engineering during the construction phase. We provide an overview of system budgets, requirements and interfaces, and the management thereof. The requirements engineering processes, including verification and plans for collection of technical data and testing during the assembly and integration phases, are described. We present configuration, change control and technical review processes, covering all aspects of the system design including performance models, requirements, and CAD databases.
The Paving Stones: initial feed-back on an attempt to apply the AGILE principles for the development of a CubeSat space mission to Mars
Boris Segret, Alain Semery, Jordan Vannitsen, et al.
The AGILE principles of the software industry seem well adapted to the paradigm of CubeSat missions, which involve students in the development of space missions. Some well-known engineering and programmatic processes are revisited using the example of an interplanetary CubeSat mission profile that has been developed by several teams of students in various countries and at various educational levels since February 2013. The lessons learned in adapting traditional space mission methods are emphasized, and they produce a metaphoric image of paving stones.
Automatic performance budget: towards a risk reduction
Philippe Laporte, Simon Blake, Jürgen Schmoll, et al.
In this paper, we discuss the performance matrix of the SST-GATE telescope, developed to allow us to partition and allocate the important characteristics to the various subsystems, and we describe the process used to verify that the current design will deliver the required performance. Due to the integrated nature of the telescope, a large number of parameters have to be controlled, and effective calculation tools such as an automatic performance budget must be developed. Its main advantages are alleviating the work of the system engineer when changes occur in the design, avoiding errors during any re-allocation process, and automatically recalculating the scientific performance of the instrument. We explain in this paper the method for mapping the ensquared energy (EE) and the signal-to-noise ratio (SNR) required by the science cases onto the “as designed” instrument. To ensure successful design, integration and verification of the next generation of instruments, it is of the utmost importance to have methods to control and manage an instrument’s critical performance characteristics from its very early design steps, to limit technical and cost risks in the project development. Such a performance budget is a tool towards this goal.
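As a hedged sketch of the kind of conversion such a budget automates, the snippet below propagates ensquared energy into a point-source SNR with the standard CCD noise equation; all parameter values are illustrative, not SST-GATE allocations.

```python
# Sketch: converting ensquared energy (EE) into a point-source SNR with
# the standard CCD signal-to-noise equation; illustrative values only.
import math

def point_source_snr(signal_e, ee, n_pix, sky_e_per_pix,
                     dark_e_per_pix, read_noise_e):
    """signal_e: total source electrons; ee: ensquared energy inside the
    n_pix aperture; remaining terms are per-pixel backgrounds/noise."""
    s = signal_e * ee
    noise_var = s + n_pix * (sky_e_per_pix + dark_e_per_pix + read_noise_e**2)
    return s / math.sqrt(noise_var)

snr = point_source_snr(signal_e=1e4, ee=0.8, n_pix=9,
                       sky_e_per_pix=50.0, dark_e_per_pix=2.0, read_noise_e=5.0)
print(f"SNR = {snr:.1f}")
```

An automatic budget would re-evaluate such expressions whenever a subsystem allocation (e.g. EE) changes, which is exactly the re-allocation bookkeeping the paper argues should not be done by hand.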
Project management for complex ground-based instruments: MEGARA plan
María Luisa García-Vargas, Ana Pérez-Calpena, Armando Gil de Paz, et al.
The project management of complex instruments for ground-based large telescopes is a challenge in itself. Good management is key to project success in terms of performance, schedule and budget. Being on time has become a strict requirement for two reasons: to assure arrival at the telescope, given the pressure of demand for new instrumentation at these first world-class telescopes, and to avoid cost overruns. The budget and cash flow are not always as expected and have to be properly handled across the administrative departments of funding centers distributed worldwide. The complexity of the organizations, the technological and scientific return to the Consortium partners, and the participation in the project of all kinds of professional centers working in astronomical instrumentation (universities, research centers, small and large private companies, workshops and providers, etc.) make the project management strategy, and the tools and procedures tuned to the project needs, crucial for success. MEGARA (Multi-Espectrógrafo en GTC de Alta Resolución para Astronomía) is a facility instrument for the 10.4m GTC (La Palma, Spain) working at optical wavelengths that provides both Integral-Field Unit (IFU) and Multi-Object Spectrograph (MOS) capabilities at resolutions in the range R=6,000-20,000. The project is an initiative led by Universidad Complutense de Madrid (Spain) in collaboration with INAOE (Mexico), IAA-CSIC (Spain) and Universidad Politécnica de Madrid (Spain). MEGARA is being developed under contract with GRANTECAN.
Systems Engineering II
System modeling of the Thirty Meter Telescope alignment and phasing system
We have developed a system model using the System Modeling Language (SysML) for the Alignment and Phasing System (APS) on the Thirty Meter Telescope (TMT). APS is a Shack-Hartmann wave-front sensor that will be used to measure the alignment and phasing of the primary mirror segments, and the alignment of the secondary and tertiary mirrors. The APS system model contains the flow-down of the Level 1 TMT requirements to APS (Level 2) requirements, and from there to the APS sub-systems (Level 3) requirements. The model also contains the operating modes and scenarios for various activities, such as maintenance alignment, post-segment exchange alignment, and calibration activities. The requirements flow-down is captured in SysML requirements diagrams, and we describe the process of maintaining the DOORS database as the single-source-of-truth for requirements, while using the SysML model to capture the logic and notes associated with the flow-down. We also use the system model to capture any needed communications from APS to other TMT systems, and between the APS sub-systems. The operations are modeled using SysML activity diagrams, and will be used to specify the APS interface documents. The modeling tool can simulate the top level activities to produce sequence diagrams, which contain all the communications between the system and subsystem needed for that activity. By adding time estimates for the lowest level APS activities, a robust estimate for the total time on-sky that APS requires to align and phase the telescope can be obtained. This estimate will be used to verify that the time APS requires on-sky meets the Level 1 TMT requirements.
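The on-sky time estimate mentioned at the end amounts to rolling leaf-activity duration estimates up the activity hierarchy; a minimal sketch with invented activity names and durations:

```python
# Sketch: rolling up time estimates from leaf-level activities to a
# per-scenario on-sky time. Names and durations are illustrative only.
activity_tree = {
    "post-segment-exchange alignment": {
        "acquire reference star": 120,   # seconds (assumed)
        "coarse tilt alignment": 300,
        "phasing measurement": 600,
    },
    "maintenance alignment": {
        "acquire reference star": 120,
        "fine alignment iteration": 450,
    },
}

def total_time(tree):
    """Sum leaf durations for each top-level activity."""
    return {name: sum(steps.values()) for name, steps in tree.items()}

print(total_time(activity_tree))  # per-scenario on-sky time estimates [s]
```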
Systems and context modeling approach to requirements analysis
Amrit Ahuja, G. Muralikrishna, Puneet Patwari, et al.
Ensuring completeness and correctness of the requirements for a complex system such as the SKA is challenging. Current system engineering practice includes developing a stakeholder needs definition, a concept of operations, and defining system requirements in terms of use cases and requirements statements. We present a method that enhances this current practice into a collection of system models with mutual consistency relationships. These include stakeholder goals, needs definition and system-of-interest models, together with a context model that participates in the consistency relationships among these models. We illustrate this approach by using it to analyze the SKA system requirements.
TMT telescope structure thermal model
The thermal behavior of the Thirty Meter Telescope (TMT) Telescope Structure (STR) and the STR mounted subsystems depends on the heat load of the System, the thermal properties of component materials and the environment as well as their interactions through convection, conduction and radiation. In this paper the thermal environment is described and the latest three-dimensional Computational Solid Dynamics (CSD) model is presented. The model tracks the diurnal temperature variation of the STR and the corresponding deformations. The resulting displacements are fed into the TMT Merit Function Routine (MFR), which converts them into translations and rotations of the optical surfaces. They, in turn, are multiplied by the TMT optical sensitivity matrix that delivers the corresponding pointing error. Thus the thermal performance of the structure can be assessed for requirement compliance, thermal drift correction strategies and look-up tables can be developed and design guidance can be provided. Results for a representative diurnal cycle based on measured temperature data from the TMT site on Mauna Kea and CFD simulations are presented and conclusions are drawn.
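At its core, the chain described above (displacements into the merit function routine, then into the optical sensitivity matrix) is a sequence of linear maps. The sketch below uses placeholder random matrices, not TMT data, purely to show the shape of the computation.

```python
# Sketch of the thermal-deformation -> pointing-error pipeline:
# node displacements -> rigid-body motions of the optics (merit-function
# fit) -> pointing error (via an optical sensitivity matrix).
import numpy as np

rng = np.random.default_rng(1)
node_displacements = rng.normal(scale=1e-6, size=300)   # [m], placeholder

# Merit-function stand-in: map node displacements to 6-DOF rigid-body
# motions per optic (3 optics x 6 DOF = 18 outputs; matrix is placeholder).
mfr_matrix = rng.normal(size=(18, 300))
rigid_body_motions = mfr_matrix @ node_displacements

# Optical sensitivity matrix: rigid-body motions -> image motion on sky.
sensitivity = rng.normal(size=(2, 18))                  # placeholder values
pointing_error = sensitivity @ rigid_body_motions
print("Pointing error (x, y):", pointing_error)
```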
Heat balance and thermal management of the TMT Observatory
Hugh Thompson, Konstantinos Vogiatzis
An extensive campaign of aero-thermal modeling of the Thirty Meter Telescope (TMT) has been carried out and presented in other papers. This paper presents a summary view of the overall heat balance of the TMT observatory. A key component of this heat balance that can be managed is the internal sources of heat dissipation to the ambient air inside the enclosure. An engineering budget for both daytime and nighttime sources is presented. This budget is used to ensure that the overall effects on daytime cooling and nighttime seeing are tracked and fall within the modeled results that demonstrate that the observatory meets its performance requirements. In the daytime, heat fluxes from air-conditioning, solar loading, infiltration, and deliberate venting through the enclosure top vent are included along with equipment heat sources. In the nighttime, convective heat fluxes through the open aperture and vent doors, as well as radiation to the sky, are tracked along with the residual heat dissipation, after cooling, from equipment in the observatory. The diurnal variation of the thermal inertia of large masses, such as the telescope structure, is also included. Model results as well as the overall heat balance and thermal management strategy of the observatory are presented.
Polarimetric analysis of the Thirty Meter Telescope (TMT) for modeling instrumental polarization characteristics
Jenny Atwood, Warren Skidmore, G. C. Anupama, et al.
The Thirty Meter Telescope (TMT) will be called upon to support a polarimetric observing capability. Many different observing programs covering a range of different science areas are being considered for the TMT and a model of the overall polarization characteristics is being developed. The instrument development program will provide a means for polarimetric instruments to be developed, however the telescope itself and the AO system must be able to support polarimetric instruments. As a first step to defining the necessary polarimetric technical requirements we have created an international working group to carry out a study in which technical and cost implications will be balanced with scientific impact; new requirements will be generated with supporting science cases. We present here initial results of the instrumental polarization sensitivity of TMT with NFIRAOS, the first-light adaptive optics system.
System Modeling II
An end-to-end simulation framework for the Large Synoptic Survey Telescope
Andrew J. Connolly, George Z. Angeli, Srinivasan Chandrasekharan, et al.
The LSST will, over a 10-year period, produce a multi-color, multi-epoch survey of more than 18000 square degrees of the southern sky. It will generate a multi-petabyte archive of images and catalogs of astrophysical sources from which a wide variety of high-precision statistical studies can be undertaken. To accomplish these goals, the LSST project has developed a suite of modeling and simulation tools for use in validating that the design and the as-delivered components of the LSST system will yield data products with the required statistical properties. In this paper we describe the development, and use of the LSST simulation framework, including the generation of simulated catalogs and images for targeted trade studies, simulations of the observing cadence of the LSST, the creation of large-scale simulations that test the procedures for data calibration, and use of end-to-end image simulations to evaluate the performance of the system as a whole.
The LSST operations simulator
The Operations Simulator for the Large Synoptic Survey Telescope (LSST; http://www.lsst.org) allows the planning of LSST observations that obey explicit science driven observing specifications, patterns, schema, and priorities, while optimizing against the constraints placed by design-specific opto-mechanical system performance of the telescope facility, site specific conditions as well as additional scheduled and unscheduled downtime. It has a detailed model to simulate the external conditions with real weather history data from the site, a fully parameterized kinematic model for the internal conditions of the telescope, camera and dome, and serves as a prototype for an automatic scheduler for the real time survey operations with LSST. The Simulator is a critical tool that has been key since very early in the project, to help validate the design parameters of the observatory against the science requirements and the goals from specific science programs. A simulation run records the characteristics of all observations (e.g., epoch, sky position, seeing, sky brightness) in a MySQL database, which can be queried for any desired purpose. Derivative information digests of the observing history are made with an analysis package called Simulation Survey Tools for Analysis and Reporting (SSTAR). Merit functions and metrics have been designed to examine how suitable a specific simulation run is for several different science applications. Software to efficiently compare the efficacy of different survey strategies for a wide variety of science applications using such a growing set of metrics is under development. A recent restructuring of the code allows us to a) use "look-ahead" strategies that avoid cadence sequences that cannot be completed due to observing constraints; and b) examine alternate optimization strategies, so that the most efficient scheduling algorithm(s) can be identified and used: even few-percent efficiency gains will create substantive scientific opportunity. The enhanced simulator is being used to assess the feasibility of desired observing cadences, study the impact of changing science program priorities and assist with performance margin investigations of the LSST system.
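As an illustration of such a derivative metric, the sketch below computes a simple merit function over an observing history; the record fields are hypothetical stand-ins for the actual database schema.

```python
# Sketch: a simple merit function over a simulated observing history.
# Field names below are hypothetical, not the actual MySQL schema.
from statistics import median

observations = [
    {"filter": "r", "seeing": 0.7, "sky_brightness": 21.2, "airmass": 1.1},
    {"filter": "g", "seeing": 0.9, "sky_brightness": 21.9, "airmass": 1.4},
    {"filter": "r", "seeing": 0.8, "sky_brightness": 21.0, "airmass": 1.2},
]

def median_seeing_by_filter(obs):
    """Group delivered seeing by filter and report the median per band."""
    by_filter = {}
    for o in obs:
        by_filter.setdefault(o["filter"], []).append(o["seeing"])
    return {f: median(v) for f, v in by_filter.items()}

print(median_seeing_by_filter(observations))  # e.g. {'r': 0.75, 'g': 0.9}
```

Real merit functions are science-case specific (cadence uniformity, depth, parallax factors, and so on), but they all reduce to queries and statistics over this kind of recorded history.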
DPAC operations simulation to take up the Gaia challenge
Gaia is Europe's astrometry satellite, currently entering its operational phase. The Gaia mission will determine the astrometric and photometric properties, as well as the radial velocities, of over one billion stellar objects. The observations collected by Gaia over 24 hours will consist of several tens of millions, up to more than one hundred million, image data files and low and high resolution spectra. This avalanche of data will be handled by the Gaia Data Processing and Analysis Consortium (DPAC), which is tasked with processing the collected data and ultimately compiling the Gaia catalogue. In order to prepare itself for taking up this challenge, DPAC has conducted a number of campaigns simulating its daily operations. Here we describe these operation rehearsals: their preparation, their conduct, and the experience gained. The positive experiences from these campaigns are now being used to conduct similar campaigns for DPAC's long term processing, based on real data.
A framework for modeling the detailed optical response of thick, multiple segment, large format sensors for precision astronomy applications
Andrew Rasmussen, Pierre Antilogus, Pierre Astier, et al.
Near-future astronomical survey experiments, such as LSST, possess system requirements of unprecedented fidelity that span photometry, astrometry and shape transfer. Some of these requirements flow directly to the array of science imaging sensors at the focal plane. Availability of high quality characterization data acquired in the course of our sensor development program has given us an opportunity to develop and test a framework for simulation and modeling that is based on a limited set of physical and geometric effects. In this paper we describe those models, provide quantitative comparisons between data and modeled response, and extrapolate the response model to predict imaging array response to astronomical exposure. The emergent picture departs from the notion of a fixed, rectilinear grid that maps photo-conversions to the potential well of the channel. In place of that, we have a situation where structures from device fabrication, local silicon bulk resistivity variations and photo-converted carrier patterns still accumulating at the channel, together influence and distort positions within the photosensitive volume that map to pixel boundaries. Strategies for efficient extraction of modeling parameters from routinely acquired characterization data are described. Methods for high fidelity illumination/image distribution parameter retrieval, in the presence of such distortions, are also discussed.
Have confidence in your coronagraph: statistical analysis of high-contrast coronagraph dynamics error budgets
We have combined our Excel-based coronagraph dynamics error budget spreadsheets with DAKOTA scripts to perform statistical analyses of the predicted dark-hole contrast. Whereas in the past we have reported the expected contrast level for an input set of allocated parameters, we now generate confidence intervals for the predicted contrast. Further, we explore the sensitivity to individual or groups of parameters and model uncertainty factors through aleatory-epistemic simulations based on a surrogate model fitted to the error budget. We show example results for a generic high-contrast coronagraph.
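A minimal sketch of the statistical step described above, with a toy quadratic surrogate and parameter distributions standing in for the actual error budget and the DAKOTA machinery:

```python
# Sketch: Monte Carlo confidence interval on predicted contrast from a
# surrogate error-budget model. The surrogate form and the parameter
# distributions below are placeholders, not the actual budget.
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_terms = 100_000, 5

# Aleatory sampling of error-budget terms about their allocations
# (20% relative scatter is an assumed, illustrative uncertainty).
allocations = np.array([1.0, 0.5, 0.8, 0.3, 0.6])
samples = allocations * (1.0 + 0.2 * rng.normal(size=(n_samples, n_terms)))

def surrogate_contrast(x):
    """Placeholder surrogate: contrast terms add in quadrature."""
    return 1e-10 * np.sum(x**2, axis=1)

contrast = surrogate_contrast(samples)
lo, med, hi = np.percentile(contrast, [5, 50, 95])
print(f"Contrast 90% interval: [{lo:.2e}, {hi:.2e}], median {med:.2e}")
```

An epistemic outer loop (resampling model uncertainty factors and repeating the aleatory analysis) yields a family of such intervals, which is the aleatory-epistemic structure the abstract refers to.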
Systems Engineering III
Daniel K. Inouye Solar Telescope systems engineering update
The Daniel K. Inouye Solar Telescope (DKIST), formerly the Advanced Technology Solar Telescope (ATST), has been in its construction phase since 2010, anticipating the onset of the integration, test, and commissioning (IT&C) phase late in 2016, and the commencement of science verification in early 2019. In this paper we describe the role of Systems Engineering during these final phases of the project, and present some of the tools, techniques, and methods in use for these purposes. The paper concludes with a brief discussion of lessons learned so far, including things we might do differently next time.
Managing the system validation of the DKIST enclosure
Javier Ariño, Celia Gómez, Gaizka Murga, et al.
The size of the DKIST (formerly ATST) Enclosure, comparable to that of 8-10 meter class telescopes such as VLT or GTC, together with the strict and demanding requirement of using the azimuth and shutter movement for accurate positioning of the entrance aperture stop while tracking, makes it probably the most complex enclosure built to date. Managing the system validation therefore becomes a challenging task, in which the singularity of this system has to be addressed by applying customized tools and processes in addition to the usual procedures. This paper describes the management process followed towards DKIST Enclosure validation, focused on ensuring final on-site assembly and performance. During the design phase, system verification was carried out by means of modeling tools and detailed simulations and calculations. Furthermore, an overall BIM (Building Information Model) was built to integrate all the design work and detect potential problems from the design phase onward; it was used to check interfaces between subsystems, verify accessibility for maintenance, and study the construction process. The recently completed Factory Assembly and Testing (FA&T) campaign was oriented towards final system validation by testing: 1) the overall system integration; 2) the performance; 3) the simulation of the final on-site assembly. The importance of guaranteeing correct on-site assembly also drove the decision to install modular prefabricated cladding, which reduces the risk inherent in the site's remoteness and the on-site installation of the outer water-cooled skin. The validation process also included early prototyping and testing of critical subsystems. Together, these measures significantly reduce risk during the final on-site assembly and commissioning through the replication of already-validated procedures.
Systems engineering implementation in the conceptual design phase of 4MOST
Olga Bellido-Tirado, Roger Haynes, Roelof S. de Jong, et al.
The 4MOST Facility is a very high-multiplex, wide-field, fibre-fed spectrograph system for the VISTA telescope. Its aim is to create a world-class spectroscopic survey facility that is unique in its combination of wide-field multiplex, spectral resolution and coverage, and sensitivity. In such a complex instrumentation project, in which design and development activities are geographically distributed, a formal system engineering approach is essential for the success of the project. We present an overview of the systems engineering principles, and associated tools, implemented during the conceptual design phase, as well as the systems engineering activities planned for the preliminary design phase.
From space to specs: requirements for 4MOST
Olivier Schnurr, C. Jakob Walcher, Cristina Chiappini, et al.
4MOST, the 4m Multi-Object Spectrographic Survey Telescope, is an upcoming optical, fiber-fed MOS facility for the VISTA telescope at ESO's Cerro Paranal Observatory (Chile). The preliminary design of 4MOST features 2,400 fibers split into a low-resolution channel (1,600 fibers, 390-900 nm, R > 5,000) and a high-resolution channel (800 fibers, three arms, ~20-25 nm coverage each, R > 18,000) with an Echidna-style positioner, covering a hexagonal field of view of ~4.1 sqdeg. 4MOST's main science goals encompass massive (tens of millions of spectra), all-Southern-sky (> 18,000 sqdeg) surveys following up both the Gaia (optical) and eROSITA (X-ray) space missions, plus cosmological science that complements missions such as Euclid. In a novel approach, observations of these science cases, which are very different from one another, are to be carried out in parallel (i.e., simultaneously); thus, from the very different science requirements, key user requirements have to be identified, stringently formulated, and condensed into a coherent set of system specifications. Clearly, identifying common ground, and thereby significantly reducing complexity in both the formulated requirements and the final 4MOST facility, is a very challenging task. In this paper we present the science and user requirements, how the latter flow down from the former, and how they flow further down to the system-specification level. Special emphasis is put on the identification of key requirements and their validation and verification protocols, so that significant trade-offs can be made as early in the design phase as possible, with as little impact as possible on the science capabilities upstream.
Complexity in the MATISSE cold optics: a risk or a tool?
MATISSE (Multi AperTure mid-Infrared SpectroScopic Experiment) will be a mid-infrared spectro-interferometer combining the beams of up to four telescopes of the European Southern Observatory Very Large Telescope Interferometer (ESO VLTI), providing phase closure and image reconstruction. MATISSE will produce interferometric spectra in the L, M and N bands (2.8 to 13 micron). Building the cryogenic interferometer section of an instrument like MATISSE is inherently complex. During the preliminary design phase it became clear that this inherent complexity should be seen not as a hurdle but as a tool; to keep project risks low it is vital first to comprehend the complexity and second to distribute these complexities to areas of expertise, i.e. fields of low risk. With this approach one prevents the typical reaction of either steering away from complexity or digging narrow and deep to find only a local solution. Complexity can be used to achieve the project goals with a reduced overall project risk. Consider, for example, two alternative options: either a complex single structure with limited interfaces, or an assembly of many simpler parts with, in total, many more interfaces. Although simpler in approach, the latter would burden the overall tolerance chain, assembly procedures, logistics, and overall cost, culminating in a higher overall risk to the project: the unintended shift of complexity and risk to a later project phase. In addition, this fragmentation would reduce the overall grip on the project and make it more difficult to identify showstoppers early on; solving these becomes exponentially more difficult in later project stages. The integral multidisciplinary approach, discussed earlier in "MATISSE cold optics opto-mechanical design", Proc. SPIE 7734, 77341S (2010), enables optimal distribution of complexity and lowering of overall project risk. This paper presents the way in which the high level of opto-mechanical complexity and the associated risks were distributed and dealt with during the development of the MATISSE Cold Optics Bench.
Poster Session
Novel technique for tracking manpower and work packages: a useful tool for the team and management
R. Gill, G. Gracia, R. H. Lupton, et al.
In these times of austerity it is becoming more and more important to justify the need for manpower to management. Additionally, with the fast pace of today's projects, tools that help teams not only plan but also track their work are essential. The practice of planning work packages and the associated manpower has been around for a while, but little is done to really cross-check that planning against reality. In this paper these elements are brought together through a number of tools that make up the end-to-end process of planning, tracking, and reporting work package progress and manpower usage.
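The core of such a cross-check can be sketched as comparing booked effort against effort "earned" by reported progress. The fields and work packages below are invented examples under assumed conventions, not the authors' tooling.

```python
# Minimal sketch of a planned-vs-actual work package cross-check.
from dataclasses import dataclass

@dataclass
class WorkPackage:
    name: str
    planned_fte_months: float
    booked_fte_months: float = 0.0
    percent_complete: float = 0.0   # self-reported progress, 0-100

    def earned(self) -> float:
        """Effort 'earned' in FTE-months, given reported progress."""
        return self.planned_fte_months * self.percent_complete / 100.0

    def variance(self) -> float:
        """Positive = under-spent relative to progress; negative = overrun."""
        return self.earned() - self.booked_fte_months

wps = [
    WorkPackage("detector software", 6.0, booked_fte_months=4.5, percent_complete=60),
    WorkPackage("pipeline testing", 3.0, booked_fte_months=3.5, percent_complete=90),
]
for wp in wps:
    print(f"{wp.name}: earned {wp.earned():.1f}, variance {wp.variance():+.1f} FTE-months")
```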
Space radiation parameters for EUI and the Sun Sensor of Solar Orbiter, ESIO, and JUDE instruments
Laurence Rossi, Lionel Jacques, Jean-Philippe Halain, et al.
This paper presents predictions of space radiation parameters, performed by the Centre Spatial de Liège (ULg, Belgium), for four space instruments: EUI, the Extreme Ultra-violet Instrument, on board the Solar Orbiter platform; ESIO, the Extreme-UV Solar Imager for Operations; and JUDE, the Jupiter system Ultraviolet Dynamics Experiment, which was proposed for the JUICE platform. For the Solar Orbiter platform, the radiation environment is defined by the ESA environmental specification, and the parameters are determined through ray-trace analyses inside the EUI instrument. For the ESIO instrument, the radiation environment of the geostationary orbit is defined through simulations of the trapped-particle flux, the energetic solar proton flux, and the galactic cosmic ray flux, taking the ECSS standard for the space environment as a guideline; ray-trace analyses inside the instrument are then performed to predict the particle fluxes at the level of its most radiation-sensitive elements. For JUICE, the spacecraft trajectory is built from ephemeris files provided by ESA and the radiation environment is modeled through simulations with JOSE (Jovian Specification Environment model); ray-trace analyses inside the instrument are then performed to predict the particle fluxes at the level of its most radiation-sensitive elements.
GAME/ISAS development status
The Gravitation Astrometric Measurement Experiment (GAME) is a space mission for Fundamental Physics tests in the Solar System, using coronagraphy and Fizeau interferometry for differential astrometry. The precision goals on the γ and β General Relativity PPN parameters are in the 10^-8 and 10^-6 ranges, respectively. The design is focused on systematic error control through simultaneous observation of multiple fields and calibration. The GAME instrument concept is based on multiple-aperture Fizeau interferometry, observing simultaneously regions close to the Solar limb (requiring the adoption of coronagraphic techniques) and others away from the Sun. The diluted-optics approach is selected to achieve efficient rejection of the scattered solar radiation while retaining an acceptable angular resolution on the science targets. The Interferometric Stratospheric Astrometry for the Solar system (ISAS) project is a GAME technology demonstrator, providing milli-arcsec level astrometry on the main planets of the Solar System. The ISAS technical goal is the validation of basic concepts for GAME, in particular the integration of Fizeau interferometry and coronagraphic techniques by means of pierced silicon carbide (SiC) mirrors, intermediate-angle dual-field astrometry, and smart focal plane management for increased dynamic range and pointing correction. The ISAS instrument concept is a dual-field, multiple-aperture Fizeau interferometer, using coronagraphy for observation of Solar System planets also close to the Sun. A prototype SiC multi-aperture mirror was manufactured by Boostec (F) and has been investigated by thermo-elastic analysis to define its applicability to both the GAME and ISAS designs. We describe the development status of both the stratospheric and space options, as well as the current extrapolation of the SiC prototype characteristics to the GAME and ISAS optical configurations.
Thermal design and performance of the REgolith x-ray imaging spectrometer (REXIS) instrument
Kevin D. Stout, Rebecca A. Masterson
The REgolith X-ray Imaging Spectrometer (REXIS) instrument is a student collaboration instrument on the OSIRIS-REx asteroid sample return mission scheduled for launch in September 2016. The REXIS science mission is to characterize the elemental abundances of the asteroid Bennu on a global scale and to search for regions of enhanced elemental abundance. The thermal design of the REXIS instrument is challenging due to both the science requirements and the thermal environment in which it will operate. The REXIS instrument consists of two assemblies: the spectrometer and the solar X-ray monitor (SXM). The spectrometer houses a 2x2 array of back illuminated CCDs that are protected from the radiation environment by a one-time deployable cover and a collimator assembly with coded aperture mask. Cooling the CCDs during operation is the driving thermal design challenge on the spectrometer. The CCDs operate in the vicinity of the electronics box, but a 130 °C thermal gradient is required between the two components to cool the CCDs to -60 °C in order to reduce noise and obtain science data. This large thermal gradient is achieved passively through the use of a copper thermal strap, a large radiator facing deep space, and a two-stage thermal isolation layer between the electronics box and the DAM. The SXM is mechanically mounted to the sun-facing side of the spacecraft separately from the spectrometer and characterizes the highly variable solar X-ray spectrum to properly interpret the data from the asteroid. The driving thermal design challenge on the SXM is cooling the silicon drift detector (SDD) to below -30 °C when operating. A two-stage thermoelectric cooler (TEC) is located directly beneath the detector to provide active cooling, and spacecraft MLI blankets cover all of the SXM except the detector aperture to radiatively decouple the SXM from the flight thermal environment. This paper describes the REXIS thermal system requirements, thermal design, and analyses, with a focus on the driving thermal design challenges for the instrument. It is shown through both analysis and early testing that the REXIS instrument can perform successfully through all phases of its mission.
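The passive cooling chain described above comes down to a radiative heat balance: the radiator must reject the CCD dissipation plus parasitic leaks at the target temperature. The sketch below sizes such a radiator from first principles; the dissipation, parasitic load, and emissivity are assumed values for illustration, not REXIS design numbers.

```python
# Minimal sketch of passive radiator sizing (background flux neglected).
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
EPSILON = 0.85        # assumed radiator emissivity
T_RAD = 213.0         # radiator temperature for -60 C detectors, K

q_ccd = 1.0           # assumed CCD dissipation, W
q_parasitic = 2.0     # assumed conductive + radiative parasitics, W
q_total = q_ccd + q_parasitic

# Required area for radiation to deep space: Q = eps * sigma * A * T^4.
area = q_total / (EPSILON * SIGMA * T_RAD**4)
print(f"required radiator area ~ {area:.3f} m^2")
```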
Design of one large telescope direct drive control system based on TMS320F28xx
Xiao-li Song, Da-xing Wang, Chao Zhang, et al.
Mount drive control is a key technique that strongly affects an astronomical telescope's pointing resolution and slewing speed. However, the ultra-low speeds and the giant moment of inertia make the mount very difficult to control. In this paper, a segmented permanent-magnet synchronous motor (PMSM) of 4 m diameter is proposed for the mount drive. A method is presented to drive the motor directly, based on a TMS320F28XX DSP and Insulated Gate Bipolar Transistors (IGBTs); a HEIDENHAIN tape encoder, together with Hall sensors, is used to detect the absolute position of the motor. With this drive system the segmented PMSM works stably and the mount drive achieves good tracking performance at ultra-low speed.
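To make the control problem concrete, here is a minimal discrete PI velocity loop tracking a sidereal rate on a rigid-body plant; the inertia, gains, and loop rate are assumed toy values, not the paper's design, and a real drive would wrap this inside field-oriented current control of the PMSM.

```python
# Minimal sketch: discrete PI velocity loop for a direct-drive mount axis.
J = 5.0e4          # assumed mount inertia about the axis, kg m^2
KP, KI = 8.0e4, 4.0e4   # assumed PI gains
DT = 1e-3          # 1 kHz control rate
TARGET = 7.27e-5   # sidereal rate, rad/s

omega, integ = 0.0, 0.0
for _ in range(5000):                 # simulate 5 seconds
    err = TARGET - omega
    integ += err * DT
    torque = KP * err + KI * integ    # PI law: torque command to the motor
    omega += (torque / J) * DT        # rigid-body plant update
print(f"velocity error after 5 s: {TARGET - omega:.2e} rad/s")
```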
BIRDY: an interplanetary CubeSat to collect radiation data on the way to Mars and back to prepare the future manned missions
Boris Segret, Jordan Vannitsen, Marco Agnan, et al.
BIRDY is a 3-Unit CubeSat that is piggy-backed on a host mission to Mars and jettisoned at the beginning of the journey. It then operates in full autonomy: no assistance and no communication apart from a beacon signal. The mission profile is a new contribution to Space Weather monitoring and an opportunity to assess the radiation risks of manned missions to Mars. It counts energetic particles in the maximum range of 1 MeV/nucleon to 1 GeV/nucleon. The ground segment prepares a fine-tuned trajectory to be stored on board, based on the planned trajectory of the host mission, which provides the main delta-V but not the ideal path. This makes the CubeSat compatible with almost all missions going to Mars. During the cruise, the CubeSat relies on an optical planet-tracking system to locate itself and on small electrical thrusters to adapt its trajectory and perform the exact Mars flyby that permits a return to Earth. The science data are collected all along the journey and uploaded only twice: once in Mars' vicinity, to one of the existing Martian orbiters or rovers, and once on arrival back at Earth. More broadly than its own scientific mission, BIRDY demonstrates a new way to gather data from distant locations in the Solar System. The project is an educational space mission, largely led and designed by students at different educational levels in France and Taiwan.
Chilean virtual observatory and integration with ALMA
Mauricio Solar, Walter Fariña, Diego Mardones, et al.
The Virtual Observatories (VOs) strive to interoperate, exchange data, and share services as if they were one big VO. In this work, the state of the art of VOs is presented and summarized in a schematic diagram with the frequency range of the observed data that every VO publishes. Chile, currently a member of the IVOA, collaborates with the Atacama Large Millimeter/submillimeter Array (ALMA) to study and propose ways to adapt the data generated by ALMA to the different data models proposed by the IVOA.
Plate coil thermal test bench for the Daniel K. Inouye Solar Telescope (DKIST) carousel cooling system
LeEllen Phelps, Gaizka Murga, Guillermo Montijo Jr., et al.
Analyses have shown that even a white-painted enclosure requires active exterior skin-cooling systems to mitigate dome seeing, which is driven by thermal nonuniformities that change the refractive index of the air. For the Daniel K. Inouye Solar Telescope (DKIST) Enclosure, this active surface temperature control will take the form of a system of water-cooled plate coils integrated into the enclosure cladding system. The main objective of this system is to maintain the surface temperature of the enclosure as close as possible to, but always below, the local ambient temperature in order to mitigate this effect. The results of analyses using a multi-layer cladding temperature model were applied to predict the behavior of the plate coil cladding system and ultimately, with safety margins incorporated into the resulting design thermal loads, to produce the detailed designs. Construction drawings and specifications have been produced. Based on these designs, and prior to procurement of the system components, a test system was constructed in order to measure actual system behavior. The data collected during seasonal test runs at the DKIST construction site on Haleakalā are used to validate and refine the design models and construction documents as appropriate. The test fixture was also used to compare competing hardware, software, components, control strategies, and configurations. This paper outlines the design, construction, test protocols, and results obtained from the plate coil thermal test bench for the DKIST carousel cooling system.
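The driving energy balance is simple to state: the coil must remove the absorbed solar load plus the convective gain from warmer ambient air, per unit skin area. The sketch below works that balance through with assumed values; none of the coefficients are DKIST design numbers.

```python
# Minimal sketch of the steady-state skin energy balance behind
# plate coil load sizing (per square meter of cladding).
alpha = 0.25      # assumed solar absorptivity of white paint
g_solar = 1000.0  # assumed solar irradiance on the panel, W/m^2
h_conv = 15.0     # assumed exterior convective film coefficient, W/m^2/K
t_amb = 10.0      # ambient air temperature, C
t_skin = 9.5      # target skin temperature: slightly below ambient, C

q_absorbed = alpha * g_solar              # solar load in, W/m^2
q_convected = h_conv * (t_skin - t_amb)   # negative: the air heats the cold skin
q_coil = q_absorbed - q_convected         # heat the coolant must remove
print(f"plate coil load ~ {q_coil:.0f} W/m^2 "
      f"to hold the skin {t_amb - t_skin:.1f} K below ambient")
```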
FRIDA diffraction limited NIR instrument: the challenges of its verification processes
Be. Sánchez, C. Keiman, C. Espejo, et al.
FRIDA (inFRared Imager and Dissector for the Adaptive optics system of the Gran Telescopio Canarias, GTC) is designed as a diffraction-limited instrument that will offer broad- and narrow-band imaging and integral field spectroscopy capabilities with low, intermediate, and high (R ~ 30,000) spectral resolutions, operating in the wavelength range 0.9-2.5 μm. The integral field unit is based on a monolithic image slicer, and the imaging and IFS observing modes will use the same Teledyne 2K×2K detector. FRIDA will be mounted at the Nasmyth B focus of GTC, behind the adaptive optics (AO) system. The key scientific objectives of the instrument include studies of solar system bodies, low-mass objects, circumstellar outflow phenomena in advanced stages of stellar evolution, active galactic nuclei, high-redshift galaxies including resolved stellar populations, semi-detached binary systems, young stellar objects, and star-forming environments. FRIDA subsystems are presently being manufactured and tested. In this paper we present the challenges of verifying some critical specifications of a cryogenic, diffraction-limited NIR instrument such as FRIDA. FRIDA is a collaborative project between the main GTC partners, namely Spain, México, and Florida.
DKIST enclosure modeling and verification during factory assembly and testing
Ibon Larrakoetxea, William McBride, Heather K. Marshall, et al.
The Daniel K. Inouye Solar Telescope (DKIST, formerly the Advanced Technology Solar Telescope, ATST) is unique as, apart from protecting the telescope and its instrumentation from the weather, it holds the entrance aperture stop and is required to position it with millimeter-level accuracy. The compliance of the Enclosure design with the requirements, as of Final Design Review in January 2012, was supported by mathematical models and other analyses which included structural and mechanical analyses (FEA), control models, ventilation analysis (CFD), thermal models, reliability analysis, etc. During the Enclosure Factory Assembly and Testing the compliance with the requirements has been verified using the real hardware and the models created during the design phase have been revisited. The tests performed during shutter mechanism subsystem (crawler test stand) functional and endurance testing (completed summer 2013) and two comprehensive system-level factory acceptance testing campaigns (FAT#1 in December 2013 and FAT#2 in March 2014) included functional and performance tests on all mechanisms, off-normal mode tests, mechanism wobble tests, creation of the Enclosure pointing map, control system tests, and vibration tests. The comparison of the assumptions used during the design phase with the properties measured during the test campaign provides an interesting reference for future projects.
Structural influences on intensity correlation interferometry
The use of a single-focus parabolic reflector in an intensity interferometer (II) system is simulated, and the extent to which the focal properties of a parabolic reflector can change the statistics of the light at a detector is analyzed. Recent technological advances have increased the speed and sensitivity of photon detectors, developed large-scale precision optics, and incorporated multi-spectral imaging techniques, which have led the way to re-examining the usefulness of II for scientific measurements. A ray-tracing algorithm is used to examine how the statistical variations of simulated monochromatic stellar light change from the source to the detector. By changing the position of the detector relative to the focal plane and changing the surface profile of the mirror, a metric is developed to understand how the varying scenarios affect the statistics of the detected light. Photon streams are evaluated for light distribution, time of flight, and statistical changes at a detector. This research and analysis is used as a tool to develop a metric quantifying how structural perturbations affect the statistics of the photon stream detections inherent in II instrumentation and science.
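The underlying observable in intensity interferometry is the normalized correlation of photon counts between detectors. The sketch below shows that reduction on simulated counts, using an exponential (single-mode thermal) intensity distribution shared by two detectors; it is purely illustrative and not the paper's ray-tracing pipeline.

```python
# Minimal sketch: g2(0) from two simulated photon-count streams.
import numpy as np

rng = np.random.default_rng(1)
n_bins = 200_000
mean_rate = 0.2                 # mean photons per time bin

# Thermal-light intensity fluctuations common to both detectors,
# modeled as a single-mode (exponential) intensity distribution.
intensity = rng.exponential(mean_rate, size=n_bins)
n1 = rng.poisson(intensity)     # detector 1 counts per bin
n2 = rng.poisson(intensity)     # detector 2 counts per bin

g2 = (n1 * n2).mean() / (n1.mean() * n2.mean())
print(f"g2(0) ~ {g2:.3f}  (ideal single-mode thermal light gives 2)")
```

Structural perturbations of the optics enter such a simulation by decorrelating or delaying the intensity seen by each detector, which pulls the measured g2 toward the Poisson value of 1.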
System model of an image stabilization system
The Polarimetric and Helioseismic Imager (PHI) instrument is part of the remote-sensing payload of the ESA Solar Orbiter (SO), which is scheduled to launch in 2017. PHI captures polarimetric images of the Sun to better understand our nearest star. A set of images is acquired with different polarizations and afterwards processed to extract the Stokes parameters. As the Stokes parameters require subtracting image values from one another, obtaining the desired quality requires good contrast in the images and very small displacements between them. As a result, an Image Stabilization System (ISS) is required. This paper focuses on the behavior and the main characteristics of this system. The ISS is composed of a camera, a tip-tilt mirror, and a control system. The camera is based on a STAR1000 sensor that includes a 10-bit high-speed Analog-to-Digital Converter (ADC). The control system includes a Correlation Tracking (CT) algorithm that determines the necessary corrections. The tip-tilt mirror is moved based on these corrections to minimize the effects of spacecraft (S/C) drift and jitter with respect to the Sun. Due to its stringent requirements, a system model has been developed to verify that the required parameters can be satisfied. The results show that the ISS is feasible, although the margins are very small.
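The essence of a correlation tracker is estimating the image shift against a reference frame and feeding that shift to the tip-tilt mirror. A minimal sketch of the shift-estimation step via FFT cross-correlation follows; a flight CT algorithm would add windowing, subpixel interpolation, and fixed-point arithmetic, none of which are shown here.

```python
# Minimal sketch: integer-pixel shift estimation by FFT cross-correlation.
import numpy as np

def estimate_shift(ref, img):
    """Shift of img relative to ref, via the cross-correlation peak."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap the circular shifts into the signed range.
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
ref = rng.normal(size=(64, 64))                   # stand-in for solar granulation
img = np.roll(ref, shift=(3, -2), axis=(0, 1))    # known displacement
print(estimate_shift(ref, img))                   # -> (3, -2)
```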
Cosmic non-TEM radiation and synthetic feed array sensor system in ASIC mixed signal technology
F. Centureli, G. Scotti, P. Tommasino, et al.
The paper deals with the opportunity to introduce a "Not strictly TEM waves" Synthetic detection Method (NTSM), consisting of Three-Axis Digital Beam Processing (3ADBP), to enhance the performance of radio telescope and sensor systems. Current radio telescopes generally use the classic 3D "TEM waves" approximation detection method, which consists of a linear tomography process (single- or dual-axis beam-forming processing) that neglects the small z component. The Synthetic Feed Array three-axis sensor system is an innovative technique using synthetic detection of the generic "not strictly TEM" wave radiation coming from the cosmos, which also processes the longitudinal component of the angular momentum. The simultaneous extraction of both the linear and quadratic information components from the radiation may then reduce the complexity of reconstructing the Early Universe at the different requested scales. This next-order approximation in detecting the observed cosmological processes may improve the efficacy of the statistical numerical models used to process the acquired information. The present work focuses on the detection of such waves at carrier frequencies in the bands ranging from LF to MMW. The work shows in further detail the new generation of online programmable and reconfigurable mixed-signal ASIC technology that made the innovative synthetic sensor possible. Furthermore, the paper shows the ability of this technique to increase the performance of radio telescope array antennas.
The OTP-model applied to the Aklim site database
Within the framework of site prospection for the future European Extremely Large Telescope (E-ELT), an extensive site characterization was carried out. The Aklim site, located at an altitude of 2350 m at geographical coordinates lat. = 30°07'38" N, long. = 8°18'31" W in the Moroccan Middle Atlas Mountains, was one of the candidate sites chosen by the Framework Programme VI (FP6) of the European Union. To complement the completed study ([19]; [21]), we have used the ModelOTP (model of optical turbulence profiles) established by [15] and improved by [6]. This model provides built-in profiles of the optical turbulence under various conditions. In this paper, we present an overview of the Aklim database results, in the boundary layer and in the free atmosphere separately, and we make a comparison with the Cerro Pachon results [15].
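For readers unfamiliar with optical turbulence profiles, the link from a Cn² profile to site seeing goes through the Fried parameter, r0 = [0.423 (2π/λ)² ∫Cn²(h) dh]^(-3/5), with FWHM seeing ≈ 0.98 λ/r0. The sketch below applies these standard formulas to an illustrative two-regime profile; the profile values are invented and are not the Aklim data.

```python
# Minimal sketch: integrated seeing from a Cn^2 profile via the Fried parameter.
import numpy as np

LAM = 500e-9                                     # reference wavelength, m
h   = np.array([0, 50, 200, 1e3, 5e3, 10e3, 16e3])                 # altitude, m
cn2 = np.array([5e-15, 2e-15, 8e-16, 2e-16, 1e-16, 5e-17, 1e-17])  # m^-2/3

# Trapezoidal integral of Cn^2 over altitude.
j = np.sum(0.5 * (cn2[1:] + cn2[:-1]) * np.diff(h))

r0 = (0.423 * (2 * np.pi / LAM) ** 2 * j) ** (-3 / 5)   # Fried parameter, m
seeing = np.degrees(0.98 * LAM / r0) * 3600             # FWHM seeing, arcsec
print(f"r0 = {100 * r0:.1f} cm, seeing = {seeing:.2f} arcsec")
```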
One of align metrologies for Antarctic telescopes
Preliminary site testing performed since the beginning of 2008 shows that Antarctic Dome A is an excellent astronomical site. The Chinese Antarctic optical telescope CSTAR and the first Antarctic Survey Telescope, AST3-1, have been in operation at Dome A, and several more Antarctic telescopes are being developed and proposed. However, the harsh environment and manpower shortage make the in-situ alignment task difficult. This study introduces the completed alignment work for AST3 and discusses an improved alignment metrology based on previous treatments of the field-dependent optical aberrations, as well as its application to the Antarctic Bright Star Survey Telescope (BSST).
The Basic Angle Monitoring (BAM) software tool in the context of Gaia's astrometric verification
Alberto Riva, Mario G. Lattanzi, Ronald Drimmel, et al.
The goal of the Gaia mission is to achieve micro-arcsecond astrometry, making Gaia the most important astrometric space mission of the 21st century. To achieve this performance, several innovative technological solutions have been realized as part of the satellite's scientific payload. A critical component of the Gaia scientific payload is the Basic Angle Monitoring device (BAM), an interferometric metrology instrument with the task of monitoring, to a few picometers, the variation of the Basic Angle between Gaia's two telescopes. In this paper we provide an overview of the AVU/BAM software, running at the Italian Data Processing Center (DPCT), which analyzes the BAM data and recovers the basic angle variations at the micro-arcsecond level. Outputs based on preliminary data from Gaia's Commissioning phase are shown as an example.
Error reduction and modeling for hexapod positioners of secondary mirrors for large ground-based telescopes
The positioning requirements for secondary mirrors and instruments for large ground-based telescopes are becoming increasingly challenging. Modern telescope designs, such as LSST and TMT, are specifying repeatability and/or absolute accuracy limits below 10 μm and 10 μrad for the hexapod positioning systems generally used for these applications. Hexapod error sources, including lead screw pitch variations, windup, backlash, friction, thermal expansion, compliance, sensing, and joint node location uncertainties, are examined along with methods for reducing or eliminating these errors by mechanical means or through calibration. Alternative sensing approaches are discussed and their relative benefits are evaluated. Finally, a model-based design approach is presented for conducting initial design trade studies, assessing technical risk, predicting achievable performance, establishing subsystem and component requirements, and tracking positioning error budgets through the entire development process. A parametric actuator model and its initial results are described, and testing approaches are outlined to identify key model parameters and verify subsystem and component performance.
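A model-based treatment of hexapod errors typically starts from the inverse kinematics (strut lengths from platform pose) and its Jacobian, which maps actuator-level errors to pose errors to first order. The sketch below does exactly that for a generic symmetric 6-6 layout; the joint geometry and home pose are invented, not any specific hexapod design.

```python
# Minimal sketch: hexapod inverse kinematics and first-order error mapping.
import numpy as np

def joints(radius, angles_deg):
    """Six joint positions on a circle in the z = 0 plane."""
    a = np.radians(angles_deg)
    return np.column_stack([radius * np.cos(a), radius * np.sin(a), np.zeros(6)])

BASE = joints(1.0, [5, 115, 125, 235, 245, 355])     # assumed base joints
PLAT = joints(0.5, [55, 65, 175, 185, 295, 305])     # assumed platform joints
HOME = np.array([0, 0, 1.0, 0, 0, 0])                # pose: x, y, z, rx, ry, rz

def leg_lengths(pose):
    """Strut lengths for a given platform pose (inverse kinematics)."""
    x, y, z, rx, ry, rz = pose
    cx, sx, cy, sy, cz, sz = (np.cos(rx), np.sin(rx), np.cos(ry),
                              np.sin(ry), np.cos(rz), np.sin(rz))
    R = (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) @
         np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @
         np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))
    tips = PLAT @ R.T + np.array([x, y, z])
    return np.linalg.norm(tips - BASE, axis=1)

# Numerical Jacobian dL/dpose at the home position.
J, eps, L0 = np.empty((6, 6)), 1e-6, leg_lengths(HOME)
for i in range(6):
    dp = np.zeros(6); dp[i] = eps
    J[:, i] = (leg_lengths(HOME + dp) - L0) / eps

# First-order pose error produced by 1 um of error on strut 0.
dL = np.zeros(6); dL[0] = 1e-6
print(np.linalg.solve(J, dL))    # [dx, dy, dz, drx, dry, drz] (m, rad)
```

Calibration reverses the same relation: measured pose residuals over many commanded poses constrain the geometric parameters (joint node locations, strut offsets) by least squares.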
Integrated modeling for parametric evaluation of smart x-ray optics
This work is developed in the framework of the AXYOM project, which studies the application of systems of piezoelectric actuators to grazing-incidence X-ray telescope optic prototypes (thin glass or plastic foils) in order to increase their angular resolution. An integrated opto-mechanical model has been set up to evaluate the performance of X-ray optics under deformations induced by piezo actuators. A parametric evaluation has been performed for different numbers and positions of actuators in order to optimize the outcome. Different actuator types have also been evaluated, considering flexible piezoceramic, Multi Fiber Composite, and PVDF actuators.
Wind responses of Giant Magellan telescope
Benjamin Irarrazaval, Christine Buleri, Matt Johns
The Giant Magellan Telescope (GMT) is a 25 meter diameter extremely large ground-based infrared/optical telescope being built by an international consortium of universities and research institutions. It will be located at the Las Campanas Observatory in Chile. The GMT primary mirror consists of seven 8.4 meter diameter borosilicate mirror segments. Two seven-segment Gregorian secondary mirror systems will be built: an Adaptive Secondary Mirror (ASM) to support adaptive optics modes, and a Fast-steering Secondary Mirror (FSM) with monolithic segments to support natural seeing modes when the ASM is being serviced. Wind excitation results in static deformation and vibration of the telescope structure that affect alignment and image jitter performance. The telescope mount will reject static and lower-frequency wind-shake, while each of the FSM segments will be used to compensate for the higher-frequency wind-shake, up to 20 Hz. Using a finite element model of the GMT, along with CFD modeling of the wind loading on the telescope structure, wind excitation scenarios were created to study the performance of the FSM and telescope against wind-induced jitter. A description of the models, methodology, and results of the analyses is presented.
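The budgeting logic behind such an analysis can be shown in miniature: integrate an assumed wind-jitter power spectral density with and without rejection below the FSM bandwidth to get rms jitter. The PSD shape and levels below are purely illustrative, and a real analysis would use the FEM/CFD-derived spectra and the actual control transfer function rather than ideal rejection.

```python
# Minimal sketch: rms image jitter from a PSD, before and after FSM rejection.
import numpy as np

f = np.linspace(0.01, 50.0, 5000)                 # frequency grid, Hz
psd = 1e-2 / (1.0 + (f / 0.5) ** (8.0 / 3.0))     # assumed jitter PSD, mas^2/Hz

def band_rms(psd_sel):
    """RMS from trapezoidal integration of a (masked) PSD."""
    return np.sqrt(np.sum(0.5 * (psd_sel[1:] + psd_sel[:-1]) * np.diff(f)))

total = band_rms(psd)
residual = band_rms(np.where(f > 20.0, psd, 0.0))  # ideal rejection below 20 Hz
print(f"uncorrected {total:.3f} mas rms, residual {residual:.4f} mas rms")
```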
Target allocation yields for massively multiplexed spectroscopic surveys with fibers
Will Saunders, Scott Smedley, Peter Gillingham, et al.
We present Simulated Annealing fiber-to-target allocation simulations for the proposed DESI and 4MOST massively multiplexed spectroscopic surveys. We simulate various survey strategies, for both Poisson and realistically clustered mock target samples. We simulate both Echidna and theta-phi actuator designs, including the restrictions caused by the physical actuator characteristics during repositioning. For DESI, with theta-phi actuators, used in 5 passes over the sky for a mock ELG/LRG/QSO sample, with matched fiber and target densities, a total target allocation yield of 89.3% was achieved, but only 83.7% for the high-priority Ly-alpha QSOs. If Echidna actuators are used with the same pitch and number of passes, the yield increases to 94.4% and 97.2% respectively, representing fractional gains of 5.7% and 16% respectively. Echidna also allows a factor-of-two increase in the number of close Ly-alpha QSO pairs that can be observed. Echidna spine tilt causes a variable loss of throughput, with average loss being the same as the loss at the rms tilt. The simulated annealing allows spine tilt minimization to be included in the optimization, at some small cost to the yield. With a natural minimization scheme, we find an rms tilt always close to 0.58 x maximum. There is an additional but much smaller defocus loss, equivalent to an average defocus of 30 μm. These tilt losses offset the gains in yield for Echidna, but because the survey strategy is driven by the higher priority targets, a clear survey speed advantage remains. For 4MOST, high and low latitude sample mock catalogs were supplied by the 4MOST team, and allocations were carried out with the proposed Echidna-based positioner geometry. At high latitudes, the resulting target completeness was 85.3% for LR targets and 78.9% for HR targets. At low latitude, the target completeness was 93.9% for LR targets and 71.2% for HR targets.
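To illustrate the allocation technique itself, here is a stripped-down simulated-annealing fiber-to-target assignment on a toy field; the patrol geometry, densities, and anneal schedule are toy values, and the real simulations additionally handle target priorities, actuator collision constraints, and tilt penalties.

```python
# Minimal sketch: simulated-annealing allocation of fibers to targets.
import math, random
random.seed(3)

PATROL = 0.12                                   # assumed patrol radius
fibers  = [(0.1 * i, 0.1 * j) for i in range(10) for j in range(10)]
targets = [(random.random(), random.random()) for _ in range(120)]

# Targets reachable by each fiber.
reach = [[t for t, (tx, ty) in enumerate(targets)
          if math.hypot(tx - fx, ty - fy) < PATROL] for fx, fy in fibers]

assign = [None] * len(fibers)                   # fiber index -> target index
taken = set()
temp = 5.0

for _ in range(200_000):
    f = random.randrange(len(fibers))
    if not reach[f]:
        continue
    new = random.choice(reach[f] + [None])      # propose a new target (or park)
    if new is not None and new in taken and new != assign[f]:
        continue                                # target owned by another fiber
    old = assign[f]
    delta = (new is not None) - (old is not None)   # change in yield
    if delta >= 0 or random.random() < math.exp(delta / temp):
        if old is not None: taken.discard(old)
        if new is not None: taken.add(new)
        assign[f] = new
    temp = max(0.01, temp * 0.99997)            # cooling schedule

print(f"allocated {sum(a is not None for a in assign)} of {len(targets)} targets")
```

In the real problem the objective is a weighted sum over target priorities (and, for Echidna, spine-tilt penalties), which is where annealing earns its keep over greedy assignment.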
The ASTRI SST-2M prototype for the next generation of Cherenkov telescopes: a single framework approach from requirement analysis to integration and verification strategy definition
Mauro Fiorini, Nicola La Palombara, Luca Stringhetti, et al.
ASTRI is a flagship project of the Italian Ministry of Education, University and Research, which aims to develop an end-to-end prototype of one of the three types of telescopes to be part of the Cherenkov Telescope Array (CTA), an observatory that will be the main representative of the next generation of Imaging Atmospheric Cherenkov Telescopes. The ASTRI project, led by the Italian National Institute of Astrophysics (INAF), has proposed an original design for the Small Size Telescope, which is aimed at exploring the uppermost end of the Very High Energy domain, up to a few hundred TeV, with unprecedented sensitivity, angular resolution, and imaging quality. It is characterized by challenging and innovative technological solutions adopted for the first time in a Cherenkov telescope: a dual-mirror Schwarzschild-Couder configuration; a modular, light, and compact camera based on silicon photomultipliers; and front-end electronics based on a specifically designed ASIC. The end-to-end project also includes all the data-analysis software and the data archive. In this paper we describe the process followed to derive the ASTRI specifications from the CTA general requirements, a process which had to take into proper account the impact on the telescope design of the different types of CTA requirements (performance, environment, reliability-availability-maintainability, etc.). We also describe the strategy adopted for specification verification, which will be based on different methods (inspection, analysis, certification, and test) in order to demonstrate the telescope's compliance with the CTA requirements. Finally, we describe the integration planning of the prototype assemblies (structure, mirrors, camera, control software, auxiliary items) and the test planning of the end-to-end telescope. The approach followed by the ASTRI project is to keep all the information needed to report on the verification process, at all project stages, in a single layer; from this unique layer it is possible to generate, in a semi-automatic way, updated project documentation and progress reports.
E-ELT requirements management
The E-ELT has completed its design phase and is now entering construction. ESO is acting as prime contractor and usually procures subsystems, including their design, from industry. This, in turn, leads to a large number of requirements whose validity, consistency, and conformity with user needs require extensive management. E-ELT Systems Engineering has therefore chosen to follow a systematic approach, based on a reasoned requirement architecture that follows the product breakdown structure of the observatory. The challenge ahead is the controlled flow-down of science user needs into engineering requirements, requirement specifications, and system design documents. This paper shows how the E-ELT project manages this. The project has adopted IBM DOORS as a supporting requirements management tool. This paper discusses emerging problems and pictures potential solutions. It shows the trade-offs made to reach a proper balance between the effort put into this activity, the potential overheads, and the benefit to the project.
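The kind of consistency check a requirements database enables can be sketched simply: represent the flow-down as parent links and scan for breaks. The IDs and fields below are invented examples under an assumed schema, not E-ELT or DOORS content.

```python
# Minimal sketch: flow-down consistency check over a requirements tree.
requirements = {
    "SCI-001": {"parent": None,      "text": "Sky coverage shall exceed ..."},
    "SYS-010": {"parent": "SCI-001", "text": "The telescope shall ..."},
    "SUB-105": {"parent": "SYS-010", "text": "The M1 cell shall ..."},
    "SUB-999": {"parent": "SYS-042", "text": "Dangling requirement ..."},
}

# Requirements whose parent does not exist: breaks in the flow-down.
orphans = [rid for rid, r in requirements.items()
           if r["parent"] is not None and r["parent"] not in requirements]
# Top-level entries: these should all be science/user needs.
roots = [rid for rid, r in requirements.items() if r["parent"] is None]

print("orphans:", orphans)
print("roots:", roots)
```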
System engineering at the MEGARA project
MEGARA (Multi-Espectrógrafo en GTC de Alta Resolución para Astronomía) is a facility instrument of the 10.4m GTC (La Palma, Spain) working at optical wavelengths that provides both Integral-Field Unit (IFU) and Multi-Object Spectrograph (MOS) capabilities at resolutions in the range R = 6,000-20,000. The MEGARA focal plane subsystems are located at one of the GTC focal stations, while the MEGARA refractive VPH-based spectrograph is located on one of the Nasmyth platforms. The fiber bundles conduct the light from the focal plane subsystems to the pseudo-slits at the entrance of the spectrograph. The project is an initiative led by Universidad Complutense de Madrid (Spain) in collaboration with INAOE (Mexico), IAA-CSIC (Spain) and Universidad Politécnica de Madrid (Spain), and is developed under contract with GRANTECAN. The project is carried out by a multidisciplinary and geographically distributed team, which includes the in-kind contributions of the project partners and personnel from several private companies. The MEGARA systems-engineering plan has been tailored to the project and is being applied to ensure the technical control of the project in order to finally meet the high-level science requirements and GTC constraints.
Introducing questionnaire technique to interface with multi-instrument teams for science operations
F. Pérez-López, S. de la Fuente
BepiColombo is an interdisciplinary ESA mission to explore the planet Mercury, in cooperation with JAXA. The mission consists of two separate orbiters: ESA's Mercury Planetary Orbiter (MPO) and JAXA's Mercury Magnetospheric Orbiter (MMO), which are dedicated to the detailed study of the planet and its magnetosphere. The MPO scientific payload comprises eleven instrument packages covering different disciplines, developed by several European teams. This paper describes the questionnaire technique followed by the Science Ground Segment (SGS) to get feedback from each individual instrument team about issues important to SGS systems engineering, in support of SGS development and operations. The conclusions of the questionnaire process allowed the SGS development processes and resources to be optimized in response to the real expectations of the instrument teams, the interfaces between the SGS and each individual team to be defined, and science operations, data handling, and archiving concepts compatible with the teams' needs to be established.