The ELT prefocal stations provide wavefront sensing to support the active control of the telescope during observations; they also include mirrors that distribute the telescope optical beam to the scientific instrument or diagnostic tool that ultimately uses the light collected by the ELT. Built into the prefocal stations is a hosted metrology positioning system on which measuring instruments, including a laser tracker and an alignment telescope, will be installed. This metrology will be used during coarse alignment of the ELT, to maintain the internal alignment of the prefocal stations, and to locate them with respect to their surroundings. The detailed design and application of these instruments is described here, together with a first set of test results.
The Extremely Large Telescope (ELT) is a 39-meter optical telescope under construction in the Chilean Atacama Desert. The control software is under advanced development and the system is steadily taking shape for first light in 2028. ESO is directly responsible for coordination functions and control strategies requiring astronomical domain knowledge, while industrial contractors develop the low-level control of individual subsystems. We are now implementing the coordination recipes and integrating the local control systems being delivered by contractors. System tests are performed on the ELT Control Model (ECM) in Garching while waiting for the availability of individual subsystems at the telescope. This paper describes the status of development of the individual subsystems, of the high-level coordination software and of the system integration on the ECM, focusing on testing and integration challenges.
The Extremely Large Telescope (ELT) is a 39-meter optical telescope under construction in the Chilean Atacama Desert. The optical design is based on a five-mirror scheme and incorporates adaptive optics. The primary mirror consists of 798 segments. Scientific first light is planned by the end of 2027. The status of the project is described in [1]. The major challenges for the control of the telescope and the instruments lie in the number of sensors (~25,000) and actuators (~15,000) to be controlled in a coordinated fashion, in the computing performance and low-latency requirements for phasing the primary mirror and performing adaptive optics, and in coordinating all sub-systems in the optical path. Industrial contractors are responsible for the low-level control of individual subsystems, and ESO for the development of coordination functions and control strategies requiring astronomical domain knowledge. In this paper we focus on the architecture and design of the High-Level Coordination and Control (HLCC), the component of the control software responsible for coordinating all telescope subsystems to properly perform the activities required by scientific and technical operations. We first identify the HLCC context by introducing the global architecture of the telescope control system and by discussing the role of HLCC and its interfaces with the other components of the control system. We then analyze the internal architecture of the HLCC and the primary design patterns adopted. We also discuss how the features identified from the requirements and the use cases are mapped into the design. Finally, the timeline and the current status of the development activities are presented.
The Extremely Large Telescope (ELT) is a 39-meter optical telescope under construction at an altitude of about 3000 m in the Chilean Atacama Desert. The optical design is based on a novel five-mirror scheme and incorporates adaptive optics mirrors. The primary mirror consists of 798 segments, each 1.4 meters wide [1]. The control of this telescope and of the instruments that will be mounted on it is very challenging because of its size, the number of sensors and actuators, the computing performance required for the phasing of the primary mirror, the adaptive optics, and the coordination of all the elements in the optical path. In this paper we describe the control system architecture, emerging from scientific and technical requirements. We also describe how the procurement strategy (centered on industrial contracts at subsystem level) affects the definition of the architecture and the technological choices. We first introduce the global architecture of the system, with Local Control Systems and a Supervisory Control layer. The Local Control Systems are astronomy-agnostic and isolate the control of the subsystems procured through industrial contracts. The Supervisory Control layer is instead responsible for coordinating the operation of the different subsystems to realize the observation cases identified for the operation of the telescope. The control systems of the instruments interface with the telescope using a well-defined and standardized interface. To facilitate the work of the Consortia responsible for the construction of the instruments, we provide an Instrumentation Control Software Framework. This will ensure uniformity in the design of the control systems across instruments, making maintenance easier. This approach was successfully adopted for the instrumentation of the Very Large Telescope facility.
We will analyze the process followed to define the architecture from the requirements and use cases, and to produce a design that addresses the technical challenges.
The ALMA Common Software (ACS) provides the infrastructure of the distributed software system of ALMA and other projects. ACS, built on top of CORBA and Data Distribution Service (DDS) middleware, is based on a Component-Container paradigm and hides the complexity of the middleware, allowing the developer to focus on domain-specific issues. With the transition of the ALMA observatory from construction to operations, the ACS effort now focuses primarily on scalability, stability and robustness rather than on new features. The transition came together with a shorter release cycle and more extensive testing. For scalability, the most problematic area has been the CORBA Notification Service, used to implement the publisher-subscriber pattern because of the asynchronous nature of the paradigm: a lot of effort has been spent to improve its stability and recovery from run-time errors. The original bulk data mechanism, implemented using the CORBA Audio/Video Streaming Service, showed its limitations and has been replaced with a more performant and scalable DDS implementation. Operational needs soon showed the difference between release cycles for Online software (i.e., used during observations) and Offline software, which requires much more frequent releases. This paper describes the impact the transition from construction to operations has had on ACS, the solutions adopted so far, and a look at future evolution.
Monitoring and prediction of astronomical observing conditions are essential for planning and optimizing observations. For this purpose, ESO, in the 90s, developed the concept of an Astronomical Site Monitor (ASM), as a facility fully integrated in the operations of the VLT observatory[1]. Identical systems were installed at Paranal and La Silla, providing comprehensive local weather information. By now, we had very good reasons for a major upgrade:
• The need to introduce new features to satisfy the requirements of observing with the Adaptive Optics Facility and to benefit other Adaptive Optics systems.
• Managing hardware and software obsolescence.
• Making the system more maintainable and expandable by integrating off-the-shelf hardware solutions.
The new ASM integrates:
• A new Differential Image Motion Monitor (DIMM) paired with a Multi Aperture Scintillation Sensor (MASS) to measure the vertical distribution of turbulence in the high atmosphere and its characteristic velocity.
• A new SLOpe Detection And Ranging (SLODAR) telescope, for measuring the altitude and intensity of turbulent layers in the low atmosphere.
• A water vapour radiometer to monitor the water vapour content of the atmosphere.
• The old weather tower, which is being refurbished with new sensors.

The telescopes and the devices integrated are commercial products, and we have used the vendors' control systems as much as possible. The existing external interfaces, based on the VLT standards, have been maintained for full backward compatibility. All data produced by the system are fed directly in real time into a relational database. A completely new web-based display replaces the obsolete plots based on HP-UX RTAP. We analyse here the architectural and technological choices and discuss the motivations and trade-offs.
KEYWORDS: Systems modeling, Systems engineering, Control systems, Telescopes, Control systems design, Astronomy, Instrument modeling, Interfaces, Wavefronts, Visual process modeling
Model Based Systems Engineering (MBSE) is an emerging field of systems engineering for which the System Modeling Language (SysML) is a key enabler for descriptive, prescriptive and predictive models. This paper surveys some of the capabilities, expectations and peculiarities of tool-assisted MBSE experienced in real-life astronomical projects. The examples range in depth and scope across a wide spectrum of applications (for example documentation, requirements, analysis, trade studies) and purposes (addressing a particular development need, or accompanying a project throughout many - if not all - of its lifecycle phases, fostering reuse and minimizing ambiguity). From the beginnings of the Active Phasing Experiment, through VLT instrumentation, VLTI infrastructure, and the Telescope Control System for the E-ELT, to Wavefront Control for the E-ELT, we show how stepwise refinements of tools, processes and methods have provided tangible benefits to customary systems engineering activities like requirement flow-down, design trade studies, interface definition, and validation, by means of a variety of approaches (like Model Checking, Simulation, Model Transformation) and methodologies (like OOSEM, State Analysis).
The ALMA Observatory is a challenging project in many ways. The hardware and software pieces were often designed specifically for ALMA, based on overall scientific requirements. The observatory is still in its construction phase, but already started Early Science observations with 16 antennas in September 2011, and currently (June 2012) has 39 accepted antennas, with 1 or 2 new antennas delivered every month. The finished array will integrate up to 66 antennas in 2014.

The on-line software is a critical part of the operations: it controls everything from the low-level real-time hardware and data processing up to the observation scheduler and data storage. Many pieces of the software are eventually affected by a growing number of antennas, as more processes are integrated into the distributed system and more data flows to the Correlator and Database. Although some early scalability tests were performed in a simulated environment, the system proved to be very dependent on real deployment conditions, and several unforeseen scalability issues have been found in the last year, starting at a critical number of about 15 antennas. Processes that grow with the number of antennas tend to quickly demand more powerful machines, unless alternatives are implemented.

This paper describes the practical experience of dealing with (and hopefully preventing) blocking scalability issues during the construction phase, while expectant users push the system to its limits. This may also be a useful example for other upcoming radio telescopes with a large number of receivers.
Code generation helps in smoothing the learning curve of a complex application framework and in reducing the number of Lines Of Code (LOC) that a developer needs to craft. The ALMA Common Software (ACS) has adopted code generation in specific areas, but we are now exploiting the more comprehensive approach of Model Driven code generation to transform a UML model directly into a full implementation in the ACS framework. This approach makes it easier for newcomers to grasp the principles of the framework. Moreover, fewer handcrafted LOC reduce the error rate. Additional benefits achieved by model driven code generation are: software reuse, implicit application of design patterns and automatic test generation. A model driven approach to design also makes it possible to use the same model with different frameworks, by generating for different targets.

The generation framework presented in this paper uses openArchitectureWare as the model-to-text translator. OpenArchitectureWare provides a powerful functional language that makes it easier to implement the correct mapping of data types, the main difficulty encountered in the translation process. The output is an ACS application readily usable by the developer, including the necessary deployment configuration, thus minimizing any configuration burden during testing. The specific application code is implemented by extending generated classes. Therefore, generated and manually crafted code are kept apart, simplifying the code generation process and aiding the developers by keeping a clean logical separation between the two.

Our first results show that code generation dramatically improves code productivity.
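The pattern described above - generated base classes kept strictly apart from handcrafted extensions - can be illustrated with a toy model-to-text generator. This is a minimal sketch, not openArchitectureWare and not the ACS generator; the model, templates and class names are all hypothetical, and Python stands in for the functional template language.

```python
# Toy model-driven code generation: a declarative "model" is transformed
# into source text for a generated base class; application code then
# extends the generated class, keeping handwritten code separate.
# All names (Mount, MountBase, park) are illustrative assumptions.

MODEL = {  # hypothetical UML-like component model
    "name": "Mount",
    "properties": [("azimuth", "float"), ("elevation", "float")],
}

CLASS_TEMPLATE = '''class {name}Base:
    """Generated code -- do not edit by hand."""
{props}
'''

PROP_TEMPLATE = '''    def get_{prop}(self) -> {ptype}:
        return self._{prop}

    def set_{prop}(self, value: {ptype}) -> None:
        self._{prop} = value
'''

def generate(model: dict) -> str:
    """Model-to-text transformation: emit Python source for the model."""
    props = "\n".join(
        PROP_TEMPLATE.format(prop=p, ptype=t) for p, t in model["properties"]
    )
    return CLASS_TEMPLATE.format(name=model["name"], props=props)

source = generate(MODEL)
namespace: dict = {}
exec(source, namespace)  # compile the generated base class

class Mount(namespace["MountBase"]):
    """Handcrafted application code extends the generated class."""
    def park(self) -> None:
        self.set_azimuth(0.0)
        self.set_elevation(90.0)

m = Mount()
m.park()
print(m.get_elevation())  # prints 90.0
```

In a real model-driven setup the generated text would go to files and be rebuilt whenever the model changes, while the subclass files are never overwritten - which is exactly why the separation between generated and manual code simplifies regeneration.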
The ALMA Common Software (ACS) is a software framework that provides the infrastructure for the Atacama Large Millimeter Array and other projects. ACS, based on CORBA, offers basic services and common design patterns for distributed software.

Every properly built system needs to be able to log status and error information. Logging in a single-computer scenario can be as easy as using fprintf statements. However, a distributed system must provide a way to centralize all logging data in a single place without overloading the network or complicating the applications. ACS provides a complete logging service infrastructure in which every log has an associated priority and timestamp, allowing filtering at different levels of the system (application, service and clients). Currently the ACS logging service uses an implementation of the CORBA Telecom Log Service in a customized way, using only a minimal subset of the features provided by the standard.

The most relevant feature used by ACS is the ability to treat the logs as event data that get distributed over the network in a publisher-subscriber paradigm. For this purpose the CORBA Notification Service, which is resource intensive, is used. The Data Distribution Service (DDS), on the other hand, provides an alternative standard for publisher-subscriber communication in real-time systems, offering better performance and featuring decentralized message processing.

This document describes how the new high-performance logging service of ACS has been modeled and developed using DDS, replacing the Telecom Log Service. Benefits and drawbacks are analyzed, and a benchmark comparing the two implementations is presented.
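The publisher-subscriber treatment of logs described above can be sketched in a few lines. This is an in-process illustration only: a plain callback list stands in for the middleware (Notification Service or DDS), and all names and priority values are hypothetical; real ACS logs travel over the network with schema-defined topics.

```python
# Sketch of priority-tagged, timestamped logs distributed to
# subscribers, with per-subscriber filtering. The LogChannel class
# stands in for the pub-sub middleware; names are illustrative.
import time
from dataclasses import dataclass, field
from typing import Callable, List

DEBUG, INFO, WARNING, ERROR = 10, 20, 30, 40  # assumed priority scale

@dataclass
class LogRecord:
    priority: int
    message: str
    timestamp: float = field(default_factory=time.time)

class LogChannel:
    """In-process stand-in for the Notification Service / DDS channel."""
    def __init__(self) -> None:
        self._subscribers: List[Callable[[LogRecord], None]] = []

    def subscribe(self, callback, min_priority=DEBUG):
        # Filtering happens per subscriber, so a central archiver and a
        # selective alarm display can share one channel.
        def filtered(record: LogRecord) -> None:
            if record.priority >= min_priority:
                callback(record)
        self._subscribers.append(filtered)

    def publish(self, record: LogRecord) -> None:
        for deliver in self._subscribers:
            deliver(record)

channel = LogChannel()
central_store: List[LogRecord] = []   # archives everything
alarms: List[LogRecord] = []          # only high-priority logs
channel.subscribe(central_store.append, min_priority=DEBUG)
channel.subscribe(alarms.append, min_priority=ERROR)

channel.publish(LogRecord(INFO, "antenna 12 on source"))
channel.publish(LogRecord(ERROR, "correlator link lost"))
print(len(central_store), len(alarms))  # prints: 2 1
```

The design point this illustrates is decoupling: publishers never know who consumes a log, so moving from the Notification Service to DDS changes only the channel implementation, not the application code.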
The ALMA Common Software (ACS) provides both an application framework and CORBA-based middleware for the distributed software system of the Atacama Large Millimeter Array. Building upon open-source tools such as the JacORB, TAO and omniORB ORBs, ACS supports the development of component-based software in any of three languages: Java, C++ and Python. Now in its seventh major release, ACS has matured, both in its feature set and in its reliability and performance. However, it is only recently that the ALMA observatory's hardware and application software have reached a level at which they can exploit and challenge the infrastructure that ACS provides. In particular, the availability of an Antenna Test Facility (ATF) at the site of the Very Large Array in New Mexico has enabled us to exercise and test the still-evolving end-to-end ALMA software under realistic conditions. The major focus of ACS, consequently, has shifted from the development of new features to consideration of how best to use those that already exist. Configuration details that could be neglected for the purpose of running unit tests or skeletal end-to-end simulations have turned out to be sensitive levers for achieving satisfactory performance in a real-world environment. Surprising behavior in some open-source tools has required us to choose between patching code that we did not write or addressing its deficiencies by implementing workarounds in our own software. We will discuss these and other aspects of our recent experience at the ATF and in simulation.
The ALMA Common Software (ACS) provides the software infrastructure used by ALMA and by several other telescope projects, thanks also to the choice of adopting the LGPL public license. ACS is a set of application frameworks providing the basic services needed for object-oriented distributed computing. Among these are transparent remote object invocation, object deployment and location based on a container/component model, distributed error and alarm handling, logging and events. ACS is based on CORBA and built on top of free CORBA implementations; free software is extensively used wherever possible. The general architecture of ACS was presented at SPIE 2002. ACS has been under development for 6 years and is midway through its development life. Many applications have been written using ACS; the ALMA test facility, APEX and other telescopes are running systems based on ACS. This is therefore a good time to look back and see what the strong and weak points of ACS have been so far, in terms of architecture and implementation. In this perspective, it is very important to analyze the applications based on ACS, the feedback received from the users, and the impact that this feedback has had on the development of ACS itself, by favoring the development of some features over others. The purpose of this paper is to describe the results of this analysis and to discuss what we would like to do to extend and improve ACS in the coming years, in particular to make application development easier and more efficient.
A number of tools exist to aid in the preparation of proposals and observations for large ground and space-based observatories (VLT, Gemini, HST being examples). These tools have transformed the way in which astronomers use large telescopes. The ALMA telescope has a strong need for such a tool, but its scientific and technical requirements, and the nature of the telescope, provide some novel challenges. In addition to the common Phase I (Proposal) and Phase II (Observing) preparation the tool must support the needs of the novice alongside the needs of those who are expert in millimetre/sub-millimetre aperture synthesis astronomy. We must also provide support for the reviewing process, and must interface with and use the technical architecture underpinning the design of the ALMA Software System. In this paper we describe our approach to meeting these challenges.
The ALMA Common Software (ACS) is a set of application frameworks built on top of CORBA. It provides a common software infrastructure to all partners in the ALMA collaboration. The usage of ACS extends from high-level applications such as the Observation Preparation Tool [7] that will run on the desks of astronomers, down to the Control Software [6] domain. The purpose of ACS is twofold: from a system perspective, it provides the implementation of a coherent set of design patterns and services that make the whole ALMA software [1] uniform and maintainable; from the perspective of an ALMA developer, it provides a friendly programming environment in which the complexity of the CORBA middleware and other libraries is hidden and coding is drastically reduced. The evolution of ACS is driven by a long-term development plan; however, on the 6-month release cycle the plan is adjusted based on incoming requests from ALMA subsystem development teams. ACS was presented at SPIE 2002 [2]. In the two years since then, the core services provided by ACS have been extended, while the coverage of the application framework has been increased to satisfy the needs of high-level and data flow applications. ACS is available under the LGPL public license. The patterns implemented and the services provided can also be of use outside the astronomical community; several projects have already shown their interest in ACS. This paper presents the status of ACS and the progress over the last two years. Emphasis is placed on showing how requests from ACS users have driven the selection of new features.
ALMA software, from high-level data flow applications down to instrument control, is built using the ACS framework. To meet the challenges of developing distributed software in distributed teams, ACS offers a container/component model that integrates the use of XML transfer objects. ACS containers are built on top of CORBA and are available for C++, Java, and Python, so that ALMA software can be written as components in any of these languages. The containers perform technical aspects of the software system, while components can focus on the implementation of functional requirements.
Like Web services, components can use XML to exchange structured data by value. For Java components, the container seamlessly integrates the use of XML binding classes, which are Java classes that encapsulate access to XML data through type-safe methods. Binding classes are generated from XML schemas, allowing the Java compiler to enforce compliance of application code with the XML schemas.
This presentation will explain the capabilities of the ACS container/component model, and how it relates to other middleware technologies that are popular in industry.
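The "XML transfer object" idea above can be sketched briefly. This is a hand-written Python stand-in for the Java binding classes (which ACS generates from XML schemas, enforcing compliance at compile time); the entity name, fields and XML layout here are illustrative assumptions, not the real ALMA schemas.

```python
# Sketch: a typed wrapper around an XML entity that components exchange
# by value, mimicking the role of generated XML binding classes.
# SchedBlock and its fields are hypothetical, not an ALMA schema.
import xml.etree.ElementTree as ET

class SchedBlock:
    """Binding-class stand-in: type-safe access to an XML entity."""
    def __init__(self, name: str, exposure_s: float) -> None:
        self.name = name
        self.exposure_s = exposure_s

    def to_xml(self) -> str:
        # Serialize the entity to XML for transfer by value.
        root = ET.Element("SchedBlock", name=self.name)
        ET.SubElement(root, "exposure").text = str(self.exposure_s)
        return ET.tostring(root, encoding="unicode")

    @classmethod
    def from_xml(cls, payload: str) -> "SchedBlock":
        # Rebuild a typed object from the XML payload.
        root = ET.fromstring(payload)
        return cls(root.get("name"), float(root.findtext("exposure")))

# One component serializes the entity to XML ...
wire = SchedBlock("calibration-scan", 2.5).to_xml()
# ... and the receiving component gets typed access again, instead of
# navigating the raw XML tree by hand.
sb = SchedBlock.from_xml(wire)
print(sb.name, sb.exposure_s)  # prints: calibration-scan 2.5
```

What generated binding classes add over this sketch is that the accessors come from the schema itself, so the compiler - rather than a runtime parse - catches code that drifts out of step with the agreed XML structure.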
KEYWORDS: Data modeling, Data archive systems, Software development, Computer architecture, Calibration, Optical correlators, Data storage, Observatories, Data acquisition, Telescopes
The software for the Atacama Large Millimeter Array (ALMA) is being developed by many institutes on two continents. The software itself will function in a distributed environment, from the 0.5-14 km baselines that separate antennas to the larger distances that separate the array site at the Llano de Chajnantor in Chile from the operations and user support facilities in Chile, North America and Europe. Distributed development demands 1) interfaces that allow separated groups to work with minimal dependence on their counterparts at other locations; and 2) a common architecture to minimize duplication and ensure that developers can always perform similar tasks in a similar way. The Container/Component model provides a blueprint for the separation of functional from technical concerns: application developers concentrate on implementing functionality in Components, which depend on Containers to provide them with services such as access to remote resources, transparent serialization of entity objects to XML, logging, error handling and security. Early system integrations have verified that this architecture is sound and that developers can successfully exploit its features. The Containers and their services are provided by a system-oriented development team as part of the ALMA Common Software (ACS), middleware that is based on CORBA.