The Cherenkov Telescope Array (CTA) will be the next-generation ground-based observatory for gamma-ray astronomy at very high energies. CTA will consist of two large arrays, deployed in the northern and southern hemispheres, with 118 Cherenkov telescopes in total. The Observation Execution System (OES) provides the means to execute observations and to handle the acquisition of scientific data in CTA. The Manager and Central Control (MCC) system is a core element of the OES that implements the execution of observation requests received from the scheduler sub-system. This contribution summarizes the main MCC design features and the plans for prototyping.
KEYWORDS: Atmospheric Cherenkov telescopes, Telescopes, Control systems, Observatories, Data acquisition, Imaging systems, Cameras, Prototyping, Systems modeling, Software development
The Cherenkov Telescope Array (CTA) will be the next-generation ground-based observatory using the atmospheric Cherenkov technique. The CTA instrument will allow researchers to explore the gamma-ray sky in the energy range from 20 GeV to 300 TeV. CTA will comprise two arrays of telescopes, one with about 100 telescopes in the Southern hemisphere and another smaller array in the North. CTA poses novel challenges in the field of ground-based Cherenkov astronomy, due to the demands of operating a large and distributed system with the robustness and reliability expected of an observatory. The array control and data acquisition system of CTA (ACTL) provides the means to control, read out and monitor the telescopes and equipment of the CTA arrays. The ACTL system must be flexible and reliable enough to permit the simultaneous and automatic control of multiple sub-arrays of telescopes with minimum effort from the personnel on site. In addition, the system must be able to react to external factors such as changing weather conditions and the loss of telescopes and, on short timescales, to incoming scientific alerts from time-critical transient phenomena. The ACTL system provides the means to time-stamp, read out, filter and store the scientific data at aggregated rates of a few GB/s. Monitoring information from tens of thousands of hardware elements needs to be channeled to high-performance database systems and will be used to identify potential problems in the instrumentation. This contribution provides an overview of the ACTL system and a status report of the ACTL project within CTA.
KEYWORDS: Atmospheric Cherenkov telescopes, Telescopes, Systems modeling, Chemical elements, Computer architecture, Data acquisition, Control systems, Data modeling, Calibration, Software development
The Cherenkov Telescope Array (CTA) project is an initiative to build two large arrays of Cherenkov gamma-ray telescopes. CTA will be deployed as two installations, one in the northern and the other in the southern hemisphere, containing dozens of telescopes of different sizes. CTA is a big step forward in the field of ground-based gamma-ray astronomy, not only because of the expected scientific return, but also due to the order-of-magnitude larger scale of the instrument to be controlled. The performance requirements associated with such a large and distributed astronomical installation require a thoughtful analysis to determine the best software solutions. The array control and data acquisition (ACTL) work-package within the CTA initiative will deliver the software to control and acquire the data from the CTA instrumentation. In this contribution we present the current status of the formal ACTL system decomposition into software building blocks and the relationships among them. The system is modelled via the Systems Modeling Language (SysML) formalism. To cope with the complexity of the system, the architecture model is sub-divided into different perspectives. The relationships with the stakeholders and external systems form the first perspective, the context of the ACTL software system. Use cases describe the interaction of those external elements with the ACTL system and are traced to a hierarchy of functionalities (abstract system functions) describing the internal structure of the ACTL system. These functions are in turn traced to fully specified logical elements (software components), whose deployment as technical elements is also described. This modelling approach allows us to decompose the ACTL software into the elements to be created and the flow of information within the system, providing a clear way to identify sub-system interdependencies.
This architectural approach allows us to build the ACTL system model and trace requirements to deliverables (source code, documentation, etc.), and permits a flexible, use-case-driven software development approach thanks to the traceability from use cases to the logical software elements. The ALMA Common Software (ACS) container/component framework, used for the control of the Atacama Large Millimeter/submillimeter Array (ALMA), is the basis for the ACTL software and as such is considered an integral part of the software architecture.
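The layered traceability described above (use cases traced to abstract system functions, which are in turn traced to logical software components) can be sketched as a small data model. The element names below are illustrative placeholders, not taken from the actual ACTL SysML model:

```python
# Minimal sketch of use-case -> function -> component traceability,
# in the spirit of the SysML decomposition described above.
# All element names are illustrative, not from the real ACTL model.

from dataclasses import dataclass, field

@dataclass
class LogicalComponent:
    name: str  # a software component, e.g. a central control process

@dataclass
class SystemFunction:
    name: str
    realized_by: list = field(default_factory=list)  # LogicalComponent list

@dataclass
class UseCase:
    name: str
    traced_to: list = field(default_factory=list)  # SystemFunction list

def components_for(use_case):
    """Follow the trace links from a use case down to its components."""
    seen = []
    for func in use_case.traced_to:
        for comp in func.realized_by:
            if comp not in seen:
                seen.append(comp)
    return seen

# Example trace from one hypothetical use case down to components.
ctrl = LogicalComponent("CentralControl")
daq = LogicalComponent("DataAcquisition")
execute = SystemFunction("ExecuteObservation", realized_by=[ctrl, daq])
uc = UseCase("PerformScheduledObservation", traced_to=[execute])

print([c.name for c in components_for(uc)])  # ['CentralControl', 'DataAcquisition']
```

Keeping the trace links navigable in this way is what lets a requirement attached to a use case be followed down to the deliverables that satisfy it.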
The Cherenkov Telescope Array (CTA) is an international initiative to build the next-generation ground-based gamma-ray instrument. CTA will allow the study of the Universe in the very-high-energy gamma-ray domain, with energies ranging from a few tens of GeV to more than a hundred TeV. It will extend the currently accessible energy band while increasing the sensitivity by a factor of 10 with respect to existing Cherenkov facilities. Furthermore, CTA will enhance other important aspects such as angular and energy resolution. CTA will comprise two arrays, one in the Northern hemisphere and one in the Southern hemisphere, with more than one hundred telescopes of three different sizes in total. The CTA performance requirements and the increased complexity in operation, control and monitoring of such a large distributed multi-telescope array lead to new challenges in designing and developing the CTA control software system. Indeed, the control software system must be flexible enough to allow for the simultaneous operation of multiple sub-arrays of different types of telescopes; to react on short timescales to changing weather conditions or to automatic alerts for transient phenomena; to operate the observatory with minimum personnel effort on site; to cope with the malfunctioning of single telescopes or sub-arrays of telescopes; and to read out and control a large and heterogeneous set of devices. This report describes the preliminary architectural design concept for the CTA control software system that will be responsible for managing all the functionality of the CTA array, thereby enabling CTA to reach its scientific goals.
KEYWORDS: Prototyping, Optical proximity correction, Telescopes, Atmospheric Cherenkov telescopes, Control systems, Java, CCD cameras, Cameras, Databases, OLE for process control
The Cherenkov Telescope Array (CTA) will be the next-generation ground-based very-high-energy gamma-ray observatory. CTA will consist of two arrays: one in the Northern hemisphere composed of about 20 telescopes, and one in the Southern hemisphere composed of about 100 telescopes, both containing telescopes of different sizes and types along with numerous auxiliary devices. In order to provide a test-ground for the CTA array control, the steering software of the 12-m medium-size telescope (MST) prototype deployed in Berlin has been implemented using the tools and design concepts under consideration for the control of the CTA array. The prototype control system is based on the Atacama Large Millimeter/submillimeter Array (ALMA) Common Software (ACS) control middleware, with components implemented in Java, C++ and Python. The interfacing to the hardware is standardized via the Object Linking and Embedding for Process Control Unified Architecture (OPC UA). In order to access the OPC UA servers from the ACS framework in a common way, a library has been developed that ties OPC UA server nodes, methods and events to their equivalents in ACS components. The front-end of the archive system is able to identify the deployed components and to sample the monitoring points of each component following time and value-change triggers according to the selected configuration. The back-end of the prototype's archive system is composed of two databases: MySQL and MongoDB. MySQL has been selected for storing the system configurations, while MongoDB provides efficient storage of device monitoring data, CCD images, logging and alarm information. In this contribution, the details and conclusions of the implementation of the control software of the MST prototype are presented.
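The time and value-change triggers mentioned for the archive front-end can be illustrated with a minimal sketch: a monitoring point is archived whenever a configured period has elapsed or the value has moved by more than a configured threshold. The class, thresholds and in-memory sample list below are illustrative stand-ins, not the actual MST prototype code or its MongoDB back-end:

```python
import time

class MonitorPoint:
    """Samples a device property on a periodic timer and on value changes.

    Simplified sketch of the time/value-change trigger logic described
    for the MST prototype archive front-end; all names are illustrative.
    """

    def __init__(self, read_value, period_s=60.0, min_delta=0.0):
        self.read_value = read_value  # callable returning the current value
        self.period_s = period_s      # time trigger: archive at least this often
        self.min_delta = min_delta    # value trigger: archive on change > delta
        self.last_value = None        # value at the last archived sample
        self.last_time = None         # time of the last archived sample
        self.samples = []             # in-memory stand-in for the archive back-end

    def poll(self, now=None):
        """Read the value and archive it if either trigger fires."""
        now = time.time() if now is None else now
        value = self.read_value()
        time_due = self.last_time is None or (now - self.last_time) >= self.period_s
        value_changed = (self.last_value is not None
                         and abs(value - self.last_value) > self.min_delta)
        if time_due or value_changed:
            self.samples.append((now, value))
            self.last_value = value
            self.last_time = now

# Example: a temperature point archived every 60 s or on changes > 0.5 deg.
readings = iter([20.0, 20.1, 21.0, 21.05])
mp = MonitorPoint(lambda: next(readings), period_s=60.0, min_delta=0.5)
for t in (0.0, 10.0, 20.0, 70.0):
    mp.poll(now=t)

print(mp.samples)  # [(0.0, 20.0), (20.0, 21.0)]
```

In this run the first poll archives because nothing has been stored yet, the 20.0→21.0 jump fires the value trigger, and the small drifts in between are suppressed, which is the point of combining both triggers.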
The Cherenkov Telescope Array (CTA) is the next-generation Very High Energy (VHE, defined as >50 GeV to several 100 TeV) telescope facility, currently in the design and prototyping phase and expected to come on-line around 2016. The array will have both a Northern and a Southern hemisphere site, together delivering nearly complete sky coverage. The CTA array is planned to have ~100 telescopes of several different sizes to fulfill the sensitivity and energy coverage needs. Each telescope has a number of subsystems with varied hardware and control mechanisms: a drive system that receives commands and inputs via OPC UA (OPC Unified Architecture), mirror alignment systems based on the XBee/ZigBee protocol and/or CAN bus, a weather monitor accessed via serial/Ethernet ports, CCD cameras for calibration, the Cherenkov camera, the data read-out electronics, etc. Integrating the control and data acquisition of such a distributed heterogeneous system calls for a framework that can handle this multi-platform, multi-protocol scenario. The CORBA-based ALMA Common Software (ACS) satisfies these needs very well and is currently being evaluated as the base software for developing the control system for CTA.
A prototype for a Medium Size Telescope (MST, ~12 m) is being developed and will be deployed in Berlin by the end of 2012. We present the development being carried out to integrate and control the various hardware subsystems of this MST prototype using ACS.
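The integration pattern running through these abstracts, exposing OPC UA server nodes as named properties of control-system components, can be sketched with a mock client. The `MockOpcUaClient` class, the node identifiers and the component name below are all hypothetical placeholders; they are not a real OPC UA library API nor the actual ACS bridging library:

```python
# Sketch of bridging OPC UA nodes to control-system component properties,
# in the spirit of the MST prototype design. MockOpcUaClient stands in
# for a real OPC UA client; node ids and names are illustrative.

class MockOpcUaClient:
    """Placeholder for an OPC UA client; holds a flat node-id -> value map."""

    def __init__(self, nodes):
        self._nodes = dict(nodes)

    def read(self, node_id):
        return self._nodes[node_id]

    def write(self, node_id, value):
        self._nodes[node_id] = value

class DriveComponent:
    """Control-system component exposing OPC UA nodes as named properties."""

    # property name -> OPC UA node id (illustrative identifiers)
    NODE_MAP = {
        "azimuth_deg": "ns=2;s=Drive.Azimuth",
        "elevation_deg": "ns=2;s=Drive.Elevation",
    }

    def __init__(self, client):
        self._client = client

    def get(self, prop):
        return self._client.read(self.NODE_MAP[prop])

    def set(self, prop, value):
        self._client.write(self.NODE_MAP[prop], value)

client = MockOpcUaClient({
    "ns=2;s=Drive.Azimuth": 180.0,
    "ns=2;s=Drive.Elevation": 45.0,
})
drive = DriveComponent(client)
drive.set("elevation_deg", 60.0)
print(drive.get("azimuth_deg"), drive.get("elevation_deg"))  # 180.0 60.0
```

The value of this indirection is that higher-level software addresses stable property names while the node-id mapping, and the transport behind it, can change per telescope subsystem without touching the callers.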