The end-to-end operations of the ESO VLT have now seen three full years of service to the ESO community. During that time its capabilities have grown to four 8.2m unit telescopes with a complement of four optical and IR multimode instruments, operated in a mixed Service Mode and Visitor Mode environment. The input and output of programs and data to the system over this period are summarized, together with the growth in operations manpower. We review the difficulties of working in a mixed operations and development environment and the ways in which the success of the end-to-end approach may be measured. Finally, we summarize the operational lessons learned and the challenges posed by future developments of VLT instruments and facilities such as interferometry and survey telescopes.
Subaru Telescope started its "Conditional Open Use" operation in December 2000. The condition is that general users cannot claim compensation for telescope time lost to telescope or instrument problems.
We favored this mode of operation because we anticipated it would benefit both users and the observatory. Users gain access to the telescope for which they would otherwise have to wait at least one more year, and the observatory gets feedback from users that helps us complete the system.
I will show what we have learned and how much we have improved the overall efficiency of the system - the telescope and instruments - through this unique approach since December 2000. I will also describe how we are working to improve the observation support system, from proposal submission to science output feedback.
We are about to start building a remote observation system that will enable users to access the telescope from our facility in Hilo, Hawaii, and eventually from NAOJ in Japan or even from the user's own institute. I will present our goal and the system being developed for remote observation.
Representatives of the HST user community urged that a sizeable portion (10-30%) of HST's observing program be dedicated to large observing projects, each of 100 orbits or more of telescope time. In the first 10 Cycles of HST observing, this goal was not fully realized. In HST's Cycle 11, large programs make up nearly 40% of the HST General Observer time allocation. We describe the advances in proposal processing that have made this possible, and give examples of the scientific and mission goals that these programs are designed to meet.
On August 8, 2001, Melipal became the fourth Unit Telescope of ESO's VLT to start regular scientific operations. Accordingly, the Paranal Science Operations team is now providing support for the execution of observation programmes of the astronomical community on all four individual 8 m telescopes of the VLT. The operational model developed and applied by this team is based on the concept that optimal exploitation of the unique potential of the VLT and of its instrumentation requires support by dedicated, qualified and experienced astronomers. This applies to observing both in visitor mode and in service (queue) mode, between which VLT operations are shared in approximately a 50/50 proportion. The Paranal Science Operations team has been staffed to implement the above-mentioned operational concept in collaboration with a mountain-based engineering team for technical support, and with groups based at ESO's headquarters in Germany for front- and back-end contacts with the astronomical community. Together with these teams, and based on the experience acquired since the start of operations of the first UT in April 1999, operational procedures have been refined and new operational tools have been implemented. In this process, the aspects that are particularly relevant for on-site operations include the short-term scheduling of service mode operations, and the reporting and tracking of the service mode programme execution status.
This paper presents miscellaneous activities related to instrumentation taking place at Paranal Observatory. The number of instruments and/or facilities that will eventually equip the Observatory (VLT, VLTI, VST, VISTA) is about 20. An adequate organization (human and technical) is required to ensure configuration control and efficient preventive and corrective maintenance (hardware and software). Monitoring instrument performance is a key feature to guarantee the success of operations and minimize technical downtime. Some observational projects are carried out with the aim of characterizing the Paranal sky conditions in the visible and the IR, in emission and absorption. Efforts are under way to monitor, characterize and archive the transparency conditions at night.
The Apache Point Observatory 3.5-meter telescope is a working model of a modern mid-sized telescope used primarily on a shared-night, remote-observing basis. After a decade of successful remote operation and scientific accomplishments, the Astrophysical Research Consortium, builder and owner of the telescope, is examining the role by which this university-owned instrument can best serve its constituency and astronomy at large in the coming years. Various "niche" scientific capabilities are described for the telescope, including fast-response observations of transient phenomena, synoptic observing programs, reactive queue-scheduled observations, and temporal study programs, as well as service as a test bed for new instruments. While specialized uses of the telescope offer potential for major scientific discoveries, traditional observing capabilities need to be sustained for the ongoing and future research programs of the majority of the consortium astronomers and students, a large and diverse community. Finding an appropriate balance between the "unique and specialized" and the "bread-and-butter" observing models is discussed, as is the role hands-on remote observing can play in supporting the various operational models.
The following summarises changes in the scientific and technical operation of the Spanish-German astronomical center on Calar Alto which aim to maximise the scientific return. Most importantly, we have introduced service observations, whose rationale is to complete the most highly ranked scientific programmes whenever possible, and to carry out each programme under the meteorological conditions best suited to it. We have started to monitor all instruments using specific calibration plans, and we carry out an optical engineering programme consisting of CO2 cleaning of the telescope mirrors at 1-2 week intervals in order to maintain high reflectivity and low scattering at all times. Technical modifications of the 3.5m dome are discussed which now enable fast and efficient ventilation of the dome before and during observations.
The Italian Galileo telescope (TNG) is part of the Roque de Los Muchachos astronomical complex, also referred to as ENO, the European Northern Observatory. Astronomical sites must be carefully selected in order to maximize the scientific return from the fairly large investment they require, both in money and in human resources. This also means maximizing (and/or optimizing) the amount of time available for observations; the telescope's ability to perform well at both optical and NIR wavelengths depends strongly on meteorological conditions (e.g. the air temperature difference between the inside and outside of the telescope dome, the presence of atmospheric dust, etc.). The TNG site is monitored on a continuous basis by an automatic weather station, which provides on-line measurements of several local meteorological parameters, e.g. temperature and relative humidity. A few months ago we added a multichannel dust monitor to the set of meteorological sensors. This four-channel facility measures the size distribution of atmospheric dust, detecting and discriminating among four particle sizes: 0.3, 0.5, 1 and 5 micron. This contribution presents the first preliminary data collected at the Roque site close to the TNG dome, analyzed in order to explore the (possible) relationship between the dust data and trends in the meteorological parameters.
This paper describes how the Sloan Digital Sky Survey telescopes are operated. A brief introduction to the survey science goals, hardware, and software systems is provided. Operational issues such as staffing, observation planning, real-time quality assurance, and data handling are discussed, with an emphasis on how we maximize operational efficiency.
In order to maximize the scientific productivity of the CFH12K mosaic wide-field imager (and soon MegaCam), the Queued Service Observing (QSO) mode was implemented at the Canada-France-Hawaii Telescope at the beginning of 2001. The QSO system consists of an ensemble of software components allowing for the submission of programs, the preparation of queues, and finally the execution and evaluation of observations. The QSO project is part of a broader system known as the New Observing Process (NOP). This system includes data acquisition, data reduction and analysis through a pipeline named Elixir, and a data archiving and distribution component (DADS). In this paper, we review several technical and operational aspects of the QSO project. In particular, we present our strategy, technical architecture, program submission system, and the tools developed for the preparation and execution of the queues. Our successful experience of over 150 nights of QSO operations is also discussed along with the future plans for queue observing with MegaCam and other instruments at CFHT.
Ground-based submillimetre astronomy is beset by high extinction caused by water vapour. To ensure maximum scientific return and efficiency of operation it is critical to ensure that the scientific requirements are matched to the prevailing atmospheric conditions; this makes flexible observing a requirement. The James Clerk Maxwell Telescope (JCMT) has been undertaking scientifically prioritised, queue-based flexible observing for the past four years, and this paper describes the experience and lists the lessons learned. It is absolutely clear that the JCMT and its user community have benefited enormously from the experience. The recent introduction of the Observing Management Project (OMP) will bring fully automated software solutions to bear that will ensure maximum efficiency is brought to the process for both the facility and the users.
The execution of observations in Service Mode is an option at the European Southern Observatory Very Large Telescope. In this operations mode, observations are not scheduled for specific nights; they are scheduled flexibly. Each night, observations are selected from a pool of possible observations based on Observing Programme Committee (OPC) priority and the current observing conditions. Ideally, the pool of possible observations contains a range of observations that exactly match the real range of conditions and the real number of available hours, so that all observations are completed in a timely manner. Since this ideal case never occurs, the pool of observations must be constructed carefully, with the goals of maximizing scientific return and operational efficiency. In this paper, basic ESO Service Mode scheduling concepts are presented. A specific VLT focus is maintained for most of this article, but the general principles apply to all ESO facilities executing Service Mode runs.
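As a minimal sketch of what such nightly selection implies (in Python, with illustrative field names; this is not ESO's actual scheduling code), each observation block in the pool carries its OPC rank and its required conditions, and the scheduler picks the best-ranked block whose constraints are satisfied by the current conditions:

```python
# Hypothetical sketch of condition-matched, priority-ranked selection
# from a Service Mode pool; all names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class ObsBlock:
    name: str
    opc_rank: int          # lower number = higher OPC priority
    max_seeing: float      # worst acceptable seeing, arcsec
    needs_photometric: bool
    hours: float

def select_next(pool, seeing_now, photometric_now, hours_left):
    """Return the best-ranked block executable under current conditions."""
    feasible = [
        ob for ob in pool
        if seeing_now <= ob.max_seeing
        and (photometric_now or not ob.needs_photometric)
        and ob.hours <= hours_left
    ]
    return min(feasible, key=lambda ob: ob.opc_rank, default=None)

pool = [
    ObsBlock("deep-imaging", 1, 0.6, True, 1.5),
    ObsBlock("spectroscopy", 2, 1.2, False, 1.0),
]
# Seeing is too poor for the top-ranked block, so the scheduler falls
# through to the next-ranked feasible one.
print(select_next(pool, seeing_now=0.9, photometric_now=False, hours_left=2.0))
```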
From 1991 until 1997, the 3.8m UK Infrared Telescope (UKIRT) underwent a programme of upgrades aimed at improving its intrinsic optical performance. This resulted in images with a FWHM of 0.17″ at 2.2 μm in September 1998. To understand and maintain the improvements to the delivered image quality since the completion of the upgrades programme, we have regularly monitored the overall atmospheric seeing, as measured by radial displacements of subaperture images (i.e. seeing-generated focus fluctuations), and the delivered image diameters. The latter have been measured and recorded automatically since the beginning of 2001 whenever the facility imager UFTI (UKIRT Fast Track Imager) has been in use.
In this paper we report the results of these measurements. We investigate the relation between the delivered image diameter and the RMS atmospheric seeing (as measured by the focus fluctuations mentioned above). We find that the best seeing occurs in the second half of the night, generally after 2am HST, and in the summer months of July through September. We also find that the relationship between Zrms and delivered image diameter is uncertain; as a result, Zrms frequently predicts a larger FWHM than that measured in the images.
Finally, we show that there is no correlation between near-infrared seeing measured at UKIRT and sub-mm seeing measured at the Caltech Submillimetre Observatory (CSO).
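As a rough illustration of the comparison involved (our sketch, not the authors' analysis; the sample values are invented), one can measure how tightly the Zrms-predicted seeing tracks the measured FWHM and quantify the systematic over-prediction:

```python
# Illustrative comparison of seeing predicted from focus fluctuations
# (Zrms) against measured image FWHM; all numbers are made up.
import numpy as np

zrms_predicted_fwhm = np.array([0.45, 0.60, 0.52, 0.80, 0.70])  # arcsec
measured_fwhm       = np.array([0.38, 0.50, 0.47, 0.65, 0.58])  # arcsec

# Pearson correlation: how tightly the two quantities track each other.
r = np.corrcoef(zrms_predicted_fwhm, measured_fwhm)[0, 1]

# Mean over-prediction: the kind of systematic offset reported above.
bias = np.mean(zrms_predicted_fwhm - measured_fwhm)

print(f"correlation r = {r:.2f}, mean over-prediction = {bias:.2f} arcsec")
```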
Since going electronic in 1994, NOAO has continued to refine and enhance its observing proposal handling system. Virtually all related processes are now handled electronically. Members of the astronomical community can submit proposals through email, web form or via Gemini's downloadable Phase-I Tool. NOAO staff can use online interfaces for administrative tasks, technical reviews, telescope scheduling, and compilation of various statistics. In addition, all information relevant to the TAC process is made available online.
The system, now known as ANDES, is designed as a thin-client architecture (web pages are now used for almost all database functions) built using open source tools (FreeBSD, Apache, MySQL, Perl, PHP) to process descriptively-marked (LaTeX, XML) proposal documents.
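As a schematic example of handling a descriptively marked proposal document (the element names below are invented, not the actual ANDES schema), a thin server-side script might pull structured fields out of an XML submission like so:

```python
# Toy extraction of fields from a descriptively marked (XML) proposal;
# the element names are hypothetical, not ANDES's real schema.
import xml.etree.ElementTree as ET

submission = """
<proposal>
  <title>Synoptic imaging of variable AGN</title>
  <pi>J. Astronomer</pi>
  <run telescope="KPNO-4m" nights="3"/>
  <run telescope="CTIO-1.5m" nights="2"/>
</proposal>
"""

root = ET.fromstring(submission)
print("Title:", root.findtext("title"))
print("PI:   ", root.findtext("pi"))
for run in root.findall("run"):
    print(f"Requested: {run.get('telescope')} for {run.get('nights')} nights")
```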
The development and tracking of Hubble Space Telescope science operations metrics will be described. In order for such metrics to be meaningful, they must be clearly linked to well-defined scientific contributions the observatory staff makes to the overall mission. The process for defining these contributions for HST, and then developing the appropriate metrics will be discussed. The process of developing and using metrics must take into account the fact that some may be more quantifiable than others. The fact that a metric is not easy to quantify does not necessarily detract from its importance or usefulness. Examples from the HST suite of metrics will be used to illustrate these situations. Operational metrics and data are also important at the subsystem level, to provide guidance in the process of trying to improve performance against the high-level science metrics. To the extent possible, the development of a system for capturing metric information should provide information useful at both these levels. These points will also be discussed in the context of examples from the HST suite of metrics. Our experiences to date with the collection and presentation of metric information will also be discussed.
Over the last twelve years, the Space Telescope Science Institute (STScI) planning and scheduling teams have reduced the lead time to schedule the Hubble Space Telescope (HST) five-fold while doubling the overall observing efficiency. After the launch of HST, a one-week flight calendar took 56 days to prepare, schedule, and convert to flight products; the process now begins 11 days before execution. Early observing efficiency was in the 25% range; it is now typically 50%. In this paper, the process improvements that allowed these advancements are summarized. We also discuss the most recent scheduling advancement, which allows interruption of an executing flight calendar for fast-turnaround science observations or telescope anomaly resolution within 24 hours of activation.
Telescope performance can be characterised by two kinds of metric: those which reflect scientific productivity (e.g. citation impact) and those which monitor technical aspects of performance (e.g. shutter open time and instrument throughput) assumed to impinge on eventual scientific productivity. These metrics can be used to guide an observatory's investment of limited operational resources in such a way as to maximise long-term scientific productivity.
We review metrics used at the 4.2-m William Herschel Telescope (WHT) on La Palma, and identify key performance indicators.
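For instance (an illustrative calculation only, not the WHT's published definition), a shutter-open efficiency indicator might be computed as the fraction of usable dark time during which the shutter was actually open:

```python
# Illustrative key-performance-indicator arithmetic; the definition
# and the numbers are assumptions, not WHT's published figures.
def shutter_open_efficiency(shutter_open_hours, dark_hours, weather_lost_hours):
    """Open-shutter time as a fraction of usable (non-weather) dark time."""
    usable = dark_hours - weather_lost_hours
    return shutter_open_hours / usable if usable > 0 else 0.0

eff = shutter_open_efficiency(shutter_open_hours=7.2,
                              dark_hours=10.0,
                              weather_lost_hours=1.5)
print(f"open-shutter efficiency: {eff:.0%}")  # -> 85%
```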
Accurate and consistent time tracking is essential for evaluating the efficiency of survey observing operations and identifying areas that need improvement. Off-the-shelf time tracking software, which requires users to enter activities by hand, proved tedious to use and insufficiently flexible. In this paper, we present an alternate time tracking system developed specifically for Sloan Digital Sky Survey observing. This system uses an existing logging system, murmur, to log the beginning and ending times of tracked circumstances, including activities, weather, and problems which affect observing. Operations software automatically generates most entries for routine observing activities; in a night of routine observing, time tracking requires little or no attention from the observing staff. A graphical user interface allows observers to make entries marking time lost to weather and equipment, and to correct inaccurate entries made by the observing software. The last is necessary when the change in activity is not marked by a change in the state of the software or instruments, or when the time is used for engineering or other observing not part of routine survey data collection.
A second utility generates reports of time usage from these logs. These reports include totals for the time spent for each observing task, time lost to weather and problems, efficiency statistics for comparison with the survey baseline, and a detailed listing of what activities and problems were present in any covered time period.
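A minimal sketch of the underlying bookkeeping (hypothetical category names; the real system logs through murmur): tracked circumstances are stored as begin/end timestamps, and the report generator sums durations per category:

```python
# Toy interval-based time accounting in the spirit described above;
# categories and times are invented for illustration.
from collections import defaultdict
from datetime import datetime

# (category, begin, end) entries as a logger might record them.
log = [
    ("imaging",      "2002-04-01 20:00", "2002-04-01 23:30"),
    ("weather_lost", "2002-04-01 23:30", "2002-04-02 01:00"),
    ("imaging",      "2002-04-02 01:00", "2002-04-02 04:00"),
    ("problem",      "2002-04-02 04:00", "2002-04-02 04:20"),
]

fmt = "%Y-%m-%d %H:%M"
totals = defaultdict(float)
for category, begin, end in log:
    delta = datetime.strptime(end, fmt) - datetime.strptime(begin, fmt)
    totals[category] += delta.total_seconds() / 3600.0

for category, hours in sorted(totals.items()):
    print(f"{category:>12}: {hours:4.1f} h")
```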
Currently four instruments are operational at the four 8.2m telescopes of the European Southern Observatory Very Large Telescope: FORS1, FORS2, UVES, and ISAAC. Their data products are processed by the Data Flow Operations Group (also known as QC Garching) using dedicated pipelines. Calibration data are processed in order to provide instrument health checks, monitor instrument performance, and detect problems in time. The Quality Control (QC) system has been developed during the past three years. It has the following general components: procedures (pipeline and post-pipeline) to measure QC parameters; a database for storage; a calibration archive hosting master calibration data; web pages and interfaces. This system is part of a larger control system which also has a branch on Paranal where quick-look data are immediately checked for instrument health. The VLT QC system has a critical impact on instrument performance. Some examples are given where careful quality checks have discovered instrument failures or non-optimal performance. Results and documentation of the VLT QC system are accessible under http://www.eso.org/qc/.
The Subaru Quality Control Trinity consists of SOSS (Subaru Observation Software System), STARS (Subaru Telescope ARchive System), and DASH (Distributed Analysis System Hierarchy), each of which can be operated independently as well as cooperatively through the Observation Dataset. To evaluate the trinity, test observations were made in June 2001 with the SuprimeCam instrument mounted at the prime focus of the Subaru Telescope. We confirmed that the trinity works successfully and that the concept of our Observation Dataset is applicable for quality control purposes.
UVES is the UV-Visual high-resolution echelle spectrograph mounted at the 8.2m Kueyen (UT2) telescope of the ESO Very Large Telescope. Its data products are pipeline-processed and quality checked by the Data Flow Operations Group (often known as QC Garching). Calibration data are processed to create calibration products and to extract Quality Control (QC) parameters. These parameters provide instrument health checks and monitor instrument performance. Typical UVES QC parameters are: bias level, read-out noise, and dark current of the three CCD detectors used in the instrument; rms of the dispersion solution; resolving power; CCD pixel-to-pixel gain structure; and instrument efficiency. The measured data are fed into a database, compared to earlier data, trended over time, and published on the web (http://www.eso.org/qc/index_uves.html). The QC system has evolved with time and proven to be extremely useful. Some examples are given which highlight the impact of careful QC on instrument performance.
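A sketch of the basic health-check logic such a system implies (the reference values and tolerances below are invented, not UVES calibration constants): each newly measured QC parameter is compared against its reference, and excursions are flagged for trending and follow-up:

```python
# Illustrative QC-parameter check against reference values; parameter
# names echo the abstract but the numbers are made up.
REFERENCE = {
    # parameter: (nominal value, allowed deviation)
    "bias_level_adu":  (200.0, 5.0),
    "read_noise_e":    (3.9,   0.4),
    "resolving_power": (80000, 2000),
}

def check_qc(measured):
    """Yield (parameter, value, nominal) for each out-of-tolerance value."""
    for name, value in measured.items():
        nominal, tolerance = REFERENCE[name]
        if abs(value - nominal) > tolerance:
            yield name, value, nominal

tonight = {"bias_level_adu": 207.3, "read_noise_e": 4.0, "resolving_power": 79500}
for name, value, nominal in check_qc(tonight):
    print(f"ALERT: {name} = {value} (nominal {nominal})")
```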
The Hubble Space Telescope (HST) was designed for periodic servicing by Space Shuttle astronauts. These servicing missions enable state-of-the-art upgrades to the Observatory's scientific capabilities, engineering upgrades and refurbishments, and, when needed, repairs. Since its launch and deployment in 1990, there have been four Space Shuttle missions to service the HST. (A fifth is currently scheduled for March 2004.) In each case, upon completion of a servicing mission and the astronauts' release of the telescope, HST undergoes a period of intense and highly coordinated verification activities designed to commission the Observatory's new capabilities and components for normal operations. The commissioning program following the 1990 deployment mission was known as OV/SV (orbital verification/science verification), while each of those following the subsequent Shuttle servicings has become known as servicing mission observatory verification, or SMOV. The 1990 OV/SV activities were hampered and greatly complicated by the spherical aberration of the primary optics. The first servicing mission, SM1, in December 1993, is still remembered as the Hubble repair mission, having restored HST's optics to within the original mission specifications. SMOV1 was important not only for confirming the optical fixes with spectacular early images, but also for demonstrating the effectiveness of "success-oriented" scheduling as a technique for orbital verification. The second servicing mission, SM2, in February 1997, greatly enhanced the scientific capabilities of HST, but did so at the cost of greatly increased mechanical and operational complexity. The resulting SMOV2 program was accordingly the most complicated and ambitious to that date and, as it turned out, the most responsive and resilient, as the newly installed instruments presented serious, unforeseen on-orbit problems. The third servicing mission, SM3a, carried out in December 1999, was essentially an emergency mission to replace failed gyros, and SMOV3a was correspondingly relatively simple. SM3b, scheduled for March 2002, will feature further significant scientific upgrades in the form of a new wide-field camera and the revival of the prematurely defunct infrared instrument. In addition to describing the highlights of these verification programs, this paper presents the general principles, guidelines, and lessons learned in the process of commissioning the HST Observatory.
A major milestone in an effort to update the aging Hubble Space Telescope (HST) ground system was completed when HST operations were switched to a new ground system, a project called the "Vision 2000 Control Center System (CCS)", at the time of the third Servicing Mission in December 1999.
A major CCS subsystem is the Space Telescope Engineering Data Store, the design of which is based on modern Data Warehousing technology. In fact, the Data Warehouse (DW) as implemented in the CCS ground system that operates and monitors the Hubble Space Telescope represents the first use of a commercial Data Warehouse to manage engineering data. By the end of February 2002, the process of populating the Data Warehouse with HST historical telemetry data had been completed, providing access to HST engineering data for a period of over 12 years with a current data volume of 2.8 Terabytes.
This paper describes hands-on experience from an end user perspective, using the CCS system capabilities, including the Data Warehouse as an HST engineering telemetry archive. The Engineering Team at the Space Telescope Science Institute is using HST telemetry extensively for monitoring the Scientific Instruments, in particular for
· Spacecraft anomaly resolutions
· Scientific Instrument trending
· Improvements of Instrument operational efficiency
The overall idea is to maximize the science output of the space observatory. Furthermore, the CCS provides a powerful feature to build, save, and recall real-time display pages customized to specific subsystems and operational scenarios. Engineering teams use the real-time monitoring capabilities intensively during Servicing Missions and during real-time commanding to handle anomaly situations, while the Flight Operations Team (FOT) monitors the spacecraft around the clock.
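As a generic illustration of the kind of trending query such an engineering archive supports (the table and column names are hypothetical, not the CCS schema, and SQLite stands in for the warehouse's actual engine):

```python
# Hypothetical daily-mean trending query against a telemetry store;
# schema and mnemonics are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE telemetry (mnemonic TEXT, day TEXT, value REAL)")
db.executemany("INSERT INTO telemetry VALUES (?, ?, ?)", [
    ("DETECTOR_TEMP", "2002-02-25", -82.1),
    ("DETECTOR_TEMP", "2002-02-25", -82.3),
    ("DETECTOR_TEMP", "2002-02-26", -81.7),
])

# Daily mean of one mnemonic -- the basic building block of a trend plot.
query = ("SELECT day, AVG(value) FROM telemetry "
         "WHERE mnemonic = 'DETECTOR_TEMP' GROUP BY day ORDER BY day")
for day, mean in db.execute(query):
    print(day, round(mean, 2))
```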
For three years the Infrared Spectrometer And Array Camera (ISAAC) has been operating at the 8m Antu (UT1) telescope of the European Southern Observatory Very Large Telescope (ESO VLT). As part of ESO data flow operations, ISAAC data are processed and quality-control checked by the Data Flow Operations group (often known as QC Garching) at ESO headquarters in Garching. The status of the instrument is checked in terms of QC parameters, which are derived from raw and processed data and compared against reference values. Low-level parameters include detector temperature and zero-level offset; other parameters include image quality and spectrum curvature. Complicated instrumental behaviors like the odd-even column effect and the appearance of pupil ghosts require more sophisticated QC tools. Instrumental interventions on cryogenic instruments like ISAAC include a defrost and re-freeze sequence which can be traced in trending plots of the QC1 parameters, which are published regularly (see http://www.eso.org/qc). We present recent highlights of the ISAAC QC process and their role as feedback to the observatory to maintain the performance of the instrument.
The Keck Interferometer is entering a regular limited observational phase. A restricted number of observers are expected to use the instrument over the course of the next few years in a shared-risk capacity. To facilitate this, the W. M. Keck Observatory and the Jet Propulsion Laboratory are following a Handover procedure consisting of a number of stages related to the science modes of the instrument as they reach completion. The first of these is the Visibility Science mode that involves only the two Keck telescopes. Other modes to follow are Nulling, Differential Phase, Astrometry, and Imaging. The process includes defining a reasonable level of functionality of each mode, training observatory staff to maintain and schedule tasks related to the upkeep of each mode, and defining and documenting each of the subsystems related to each mode. Here we discuss the outline of the Handover plan and report on its progress to date.
A review of operational procedures and requirements evolving at the Navy Prototype Optical Interferometer (NPOI) provides some useful insights for the automation, maintenance and operation of large optical interferometers even as construction and instrument development continue. Automation is essential for efficient, single-operator observing. It is important to integrate ease of operation and maintenance into the instrument design from the start. In its final form, the NPOI will use six portable siderostats for imaging stars and narrow-angle astrometry of multiple stars, as well as four fixed siderostats designed for all-sky astrometry. Currently all four astrometric siderostats and two transportable siderostats are operational. All six beams from the siderostats now in use have been combined coherently to form images of multiple stars at milliarcsecond resolution.
The Interferometry Science Center (ISC) at the California Institute of Technology (Caltech) is chartered with providing science operations, data analysis support, and data archiving support for the suite of interferometry projects within the NASA Origins theme. Beginning with the Science Operations System (SOS) for the Keck Interferometer (KI), the ISC will design, implement, and operate a multi-mission facility to provide operations and support functions for NASA Origins interferometers and the scientists and engineers that use them. Future Origins interferometry projects such as the Space Interferometry Mission (SIM) will further use and extend the functionality of the ISC's multi-mission base. In this talk I introduce the functional elements in the KI SOS, describe the common SOS core elements that KI and SIM will share, and provide prospective users of these facilities an introduction to the user support model that the ISC is implementing.
It is inevitable that the International Space Station (ISS) will play a significant role in the conduct of science in space. However, in order to provide this service to a wide and broad community and to perform it cost effectively, alternative concepts must be considered to complement NASA’s Institutional capability. Currently science payload forward and return data services must compete for higher priority ISS infrastructure support requirements. Furthermore, initial astronaut crews will be limited to a single shift. Much of their time and activities will be required to meet their physical needs (exercise, recreation, etc.), station maintenance, and station operations, leaving precious little time to actively conduct science payload operations. ISS construction plans include the provisioning of several truss mounted, space-hardened pallets, both zenith and nadir facing. The ISS pallets will provide a platform to conduct both earth and space sciences. Additionally, the same pallets can be used for life and material sciences, as astronauts could place and retrieve sealed canisters for long-term micro-gravity exposure. Thus the pallets provide great potential for enhancing ISS science return.
This significant addition to ISS payload capacity has the potential to exacerbate priorities and service contention factors within the existing institution. In order to have it all, i.e., more science and less contention, the pallets must be data-smart and operate autonomously so that NASA institutional services are not additionally taxed.
Specifically, the “Enhanced Science Capability on the International Space Station” concept involves placing data handling and spread spectrum X-band communications capabilities directly on ISS pallets. Spread spectrum techniques are considered as a means of discriminating between different pallets as well as to eliminate RFI. The data and RF systems, similar to that of “free flyers”, include a fully functional command and data handling system, providing, in part, science solid state recorders and instrument command management sub-systems. This, together with just one direct-to-ground based X-Band station co-located with a science payload operations center provides for a direct data path to ground, bypassing NASA institutions. The science center exists to receive user service requests, perform required constraint checks necessary for safe instrument operations, and to disseminate user science data. Payload commands can be up-linked directly or, if required, relayed through the existing NASA institution. The concept is modular for the downlink Earth terminals; in that multiple downlink X-band ground stations can be utilized throughout the world. This has applications for Earth science data direct to regional centers similar to those services provided by the EOS Terra spacecraft. However, for the purposes of this concept, just one downlink site was selected in order to define the worst-case data acquisition scenario necessary to ascertain concept feasibility.
The paper demonstrates that the concept is feasible and can lead to a design that significantly reduces operational dependency on the NASA institutions and astronauts while significantly increasing ISS science operational efficiency and access.
Older spacecraft missions, especially those in low Earth orbit with telemetry intensive requirements, required round-the-clock control center staffing. The state of technology relied on control center personnel to continually examine data, make decisions, resolve anomalies, and file reports. Hubble Space Telescope (HST) is a prime example of this description. Technological advancements in hardware and software over the last decade have yielded increases in productivity and operational efficiency, which result in lower cost. The re-engineering effort of HST, which has recently concluded, utilized emerging technology to reduce cost and increase productivity. New missions, of which NASA's Transition Region and Coronal Explorer Satellite (TRACE) is an example, have benefited from recent technological advancements and are more cost-effective than when HST was first launched.
During its launch (1998) and early-orbit phase, the TRACE Flight Operations Team (FOT) employed continually staffed operations. Yet once the mission entered its nominal phase, the FOT reduced staffing to standard weekday business hours. Operations were still conducted at night and during the weekends, but these operations occurred autonomously without compromising the team's high standards for data collection. For the HST, which launched in 1990, reduced-cost operations will employ a different operational concept when the spacecraft enters its low-cost phase after its final servicing mission in 2004. Primarily due to the spacecraft's design, the HST Project has determined that single-shift operations would introduce unacceptable risks for the amount of dollars saved. More importantly, significant cost savings can still be achieved by changing the operational concept for the FOT while still maintaining round-the-clock staffing. It is important to note that the low-cost solutions obtained for one satellite may not be applicable to other satellites. This paper will contrast the differences between low-cost operational concepts for a satellite launched in 1998 versus a satellite launched in 1990.
An important aspect of the Hubble Space Telescope (HST) operations is the ability to quickly disseminate and coordinate spacecraft commanding and ground system information for both routine spacecraft operations and Space Shuttle Servicing Missions. When deviating from preplanned activities all new spacecraft commanding, ground system and space system configurations must be reviewed, authorized and executed in an efficient manner. The information describing the changes must be disseminated to and coordinated by a large group of users.
In the early years of the HST mission a paper-based Operational Request System was used. The system worked, but it was cumbersome to coordinate efficiently and in a timely manner across a large, geographically dispersed group of users. As network and server technology matured, the HST Project developed an on-line interactive Operations Request System. This Operations Request System is a server-based system (accessed via HST Net) that provides immediate access to command and ground system information for both locally and remotely based Instrument Engineers, Flight Operations Team Controllers, Subsystem Engineers and Project Management.
This paper describes the various aspects of the system's submission, review, authorization and implementation processes. Also described is the methodology used to arrive at the current system design and the Graphical User Interface (GUI). This system has been used successfully for all routine and special HST operations for the last five years. This approach to operations coordination is adaptable to spacecraft of any complexity.
The Chandra X-ray Observatory (CXO), launched in July of 1999, contains two focal-plane imaging detectors and two gratings spectrometers. Keeping these instruments operating at an optimal performance level is the responsibility of the Chandra X-ray Center, located in Cambridge, MA. Each week a new set of command loads is generated to be uploaded to the spacecraft for use in the following week. The command loads contain all of the necessary instructions for the observatory to execute a week's worth of science observations and spacecraft maintenance activities. Ensuring that these loads do not compromise the performance of the observatory or its health and safety in any way is a complex procedure. It requires a coordinated review and subsequent approval of the loads from a team of scientists and engineers representing each instrument on the spacecraft. Reviewing the command loads can be quite a daunting task; but with the help of automated scripts and command load interpretation into "human-readable" form, we have been able to streamline the command load review process as well as improve our ability to identify errors in commanding. We present here a detailed review of those scripts utilized in the inspection of command loads for the ACIS instrument.
This work was supported by NASA contract NAS8-39073.
The Chandra X-ray Observatory (CXO), NASA's latest "Great Observatory", was launched on July 23, 1999 and reached its final orbit on August 7, 1999. The CXO is in a highly elliptical orbit, approximately 140,000 km × 10,000 km, and has a period of approximately 63.5 hours (≈2.65 days). Communication with the CXO nominally consists of 1-hour contacts spaced 8-hours apart. Thus, once a communication link has been established, it is very important that the health and safety status of the scientific instruments as well as the Observatory itself be determined as quickly as possible.
In this paper, we focus exclusively on the automated health and safety monitoring scripts developed for the Advanced CCD Imaging Spectrometer (ACIS) during those 1-hour contacts. ACIS is one of the two focal plane instruments on-board the CXO. We present an overview of the real-time ACIS Engineering Data Web Page and the alert schemes developed for monitoring the instrument status during each communication contact. A suite of HTML and PERL scripts monitors the instrument hardware house-keeping electronics (i.e., voltages and currents) and temperatures during each contact. If a particular instrument component is performing either above or below pre-established operating parameters, a sequence of email and alert pages is spawned to the Science Operations Team of the Chandra X-ray Observatory Center so that the anomaly can be quickly investigated and corrective actions taken if necessary. We also briefly discuss the tools used to monitor the real-time science telemetry reported by the ACIS flight software.
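A schematic of the limit-checking logic described above (rendered in Python rather than the team's Perl; the mnemonics and limits are invented):

```python
# Toy housekeeping limit check in the spirit of the ACIS monitoring
# scripts; mnemonics, limits, and values are illustrative only.
LIMITS = {
    # mnemonic: (low, high)
    "DPA_A_TEMP_C":  (-60.0, -35.0),
    "BUS_VOLTAGE_V": ( 27.0,  33.0),
}

def out_of_limits(sample):
    """Return [(mnemonic, value, low, high)] for each violated limit."""
    alerts = []
    for name, value in sample.items():
        low, high = LIMITS[name]
        if not (low <= value <= high):
            alerts.append((name, value, low, high))
    return alerts

contact_sample = {"DPA_A_TEMP_C": -33.2, "BUS_VOLTAGE_V": 28.4}
for name, value, low, high in out_of_limits(contact_sample):
    # In the real system this step would spawn email and alert pages.
    print(f"ALERT: {name} = {value} outside [{low}, {high}]")
```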
The authors acknowledge support for this research from NASA contract NAS8-39073.
The Chandra X-ray Observatory was launched in July 1999 and has yielded extraordinary scientific results. Behind the scenes, our Monitoring and Trends Analysis (MTA) system has proven to be a valuable resource. With three years' worth of on-orbit data, we have available a vast array of both telescope diagnostic information and analysis of scientific data with which to assess Observatory performance. As part of Chandra's Science Operations Team (SOT), the primary goal of MTA is to provide tools for effective decision making leading to the most efficient production of quality science output from the Observatory. We occupy a middle ground between flight operations, chiefly concerned with the health and safety of the spacecraft, and validation and verification, concerned with the scientific validity of the data taken and whether or not they fulfill the observer's requirements. In that role we provide and receive support from systems engineers, instrument experts, operations managers, and scientific users. MTA tools, products, and services include real-time monitoring and alert generation for the most mission-critical components, long-term trending of all spacecraft systems, detailed analysis of various subsystems for life expectancy or anomaly resolution, and creating and maintaining a large SQL database of relevant information. This is accomplished through the use of a wide variety of input data sources and flexible, accessible programming and analysis techniques. This paper will discuss the overall design of the system, its evolution, and the resources available.
The Chandra Data Archive plays a central role in the Chandra X-ray Center (CXC) that manages the operations of the Chandra X-ray Observatory. We shall give an overview of two salient aspects of the CDA's operations, as they are pertinent to the operation of any large observatory.
First, in the database design it was decided to have a single observation catalog database that controls the entire life cycle of Chandra observations (as opposed to separate databases for uplink and downlink, as is common for many scientific space missions). We will discuss the pros and cons of this design choice and present some lessons learnt.
Second, we shall review the complicated network that consists of Automated (pipeline) Processing, archive ingest, Verification & Validation, reprocessing, data distribution, and public release of observations. The CXC is required to deliver high-level products to its users. This is achieved through a sophisticated system of processing pipelines. However, occasional failures as well as the need to reprocess observations complicate this seemingly simple series of actions. In addition, we need to keep track of allotted and used observing time and of proprietary periods. Central to the solution is the Processing Status Database which is described in more detail in a related poster presentation.
The Large-aperture Synoptic Survey Telescope will repeatedly image a large fraction of the visible sky in multiple optical passbands in a way that will sample temporal phenomena over a large range of time scales. This will enable a suite of synoptic investigations that range in temporal sampling requirements from the detection of near Earth asteroids (minutes), through discovery and followup of supernovae to long period monitoring of QSOs, AGN and LPVs (years). Additionally, the data must be obtained in a way to support programs aimed at building up deep static images of part or all of the sky.
Here we examine some of the issues involved in crafting an observing scheme that serves these goals. The problem has several parts: a) what is the optimal time sampling strategy that best serves the desired temporal range? b) how can a chosen time sampling sequence be packed into an observing scheme that accommodates all pointings and 'whiteout' windows (daytime, lunation period)? c) how vulnerable is such an observing plan to realistic models of disruption by poor observing conditions and weather? d) how does one build in the most economical contingency/redundancy to i) mitigate against such disruption and ii) reserve time for recovery and followup of transient phenomena (e.g. gamma-ray bursts, supernovae)?
In this article we touch upon several of these issues, and come to an understanding of some of the limitations, as well as areas in which scientific priorities and trade-offs will have to be made.
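To make the packing question concrete, the toy sketch below (our illustration, not an LSST scheduler) lays out logarithmically spaced revisit targets and snaps each one to the next available night window:

```python
# Toy cadence experiment: log-spaced revisit times packed into nightly
# visibility windows; all numbers are purely illustrative.
import math

# Desired revisit intervals from ~15 minutes to ~1 year, log-spaced.
lo, hi, n = math.log10(0.01), math.log10(365.0), 10
targets_days = [10 ** (lo + i * (hi - lo) / (n - 1)) for i in range(n)]

NIGHT_START = 14 / 24  # model 'night' crudely as the last 10 h of each day

def next_night_slot(t_days):
    """Snap a requested time to the start of the next night window."""
    return t_days if t_days % 1.0 >= NIGHT_START else math.floor(t_days) + NIGHT_START

for want in targets_days:
    got = next_night_slot(want)
    print(f"wanted +{want:8.3f} d -> scheduled +{got:8.3f} d")
```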
The first generation of STN, with a 150TB tape library, has been used by Subaru Telescope for the past several years of operation. In March 2002 we are upgrading the storage system to 600TB of capacity based on Sony's Digital Tape Format 2; the upgraded system is called STN-II. The compute engine is being changed from Fujitsu's VPP700, in which twenty-two vector processors are connected by a crossbar network, to a cluster of Fujitsu PrimePower2000 machines, each consisting of 128 processors with 384GB of quasi-shared memory in total. Data management servers and graphical workstations are connected using Storage Area Network technology. There are two dedicated clusters of workstations, one for daily development of software for the archive system, STARS, and one as the platform for the data analysis pipeline, DASH. These two software components are combined with the observation control system, SOSS, into the Subaru Software Trinity. The STN-II system is the platform that supports the observation data flow of the Subaru Telescope with the Subaru Software Trinity. By adopting powerful computation and a fast network, a system for real-time quality measurement of observations is planned, and quick feedback to the observation parameters will be possible on the system.
The NOAO Data Products Program (DPP) is a new program aimed at identifying scientifically interesting datasets from ground-based O/IR telescopes and making them available to the astronomical community, together with the tools for exploring them. The program coordinates NOAO projects that are data intensive, including the handling, pipeline processing, analysis, and archiving of data. These datasets, and the facilities for mining them, will form a significant component of the resources of the National Virtual Observatory, and will be an important part of NOAO’s participation in that endeavor. In the longer term, this activity will lead to a data management role in the Large-aperture Synoptic Survey Telescope, a facility that will produce one petabyte of imaging data per year.
Upgrades of the science instrument complement on the Hubble Space Telescope (HST) and observing strategy innovations have combined to greatly increase the number of observations and the volume of data during the first decade of HST operations. At the same time, the data processing component of HST operations has undergone a parallel evolution in strategy and implementation, partly in response to the increased volume of data from HST while reducing staffing requirements, and partly due to the phasing out of old technologies and the exploration of new ones. This paper describes the original HST data processing strategy and implementation, how it has evolved into the current design, and where it may be going for future space telescope missions (HST, NGST, et al.).
The Chandra Data Archive has been archiving and distributing data for the Chandra X-ray Observatory and keeping observers informed of the status of their observations since shortly after launch in July 1999. Due to the complicated processing history of Chandra data, it became apparent that a database was needed to track this history on an observation-by-observation basis. The result is the Processing Status Database and the Chandra Observations Processing Status tool. In this paper, a description of the database design is given, followed by details of the tools which populate and display the database.
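The paper describes the actual design; as a minimal sketch, with invented table and column names, per-observation processing history might be tracked along these lines:

    import sqlite3

    # Hypothetical minimal schema: one row per (observation, run, stage).
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE proc_status (
        obsid   INTEGER,  -- observation identifier
        run     INTEGER,  -- processing attempt (2+ = reprocessing)
        stage   TEXT,     -- e.g. 'pipeline', 'V&V', 'archive', 'distribution'
        status  TEXT,     -- e.g. 'pending', 'running', 'failed', 'complete'
        updated TEXT,     -- timestamp of last change
        PRIMARY KEY (obsid, run, stage))""")

    def set_status(obsid, run, stage, status):
        conn.execute("INSERT OR REPLACE INTO proc_status "
                     "VALUES (?,?,?,?,datetime('now'))",
                     (obsid, run, stage, status))

    set_status(12345, 1, "pipeline", "complete")
    set_status(12345, 1, "V&V", "failed")       # failure triggers reprocessing
    set_status(12345, 2, "pipeline", "running")

    # An observer-facing status tool reports the latest run of an observation.
    for row in conn.execute(
            """SELECT stage, status FROM proc_status
               WHERE obsid=? AND run=(SELECT MAX(run) FROM proc_status
                                      WHERE obsid=?)""", (12345, 12345)):
        print(row)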
The VLT Data Flow System (DFS) has been developed to maximize the scientific output from the operation of the ESO observatory facilities. From its original conception in the mid-1990s to the system now in production at Paranal, at La Silla, at the ESO HQ, and externally at the home institutes of astronomers, extensive effort, iteration, and retrofitting have been invested in the DFS to maintain a good level of performance and to keep it up to date. What has been obtained is a robust, efficient, and reliable 'science support engine', without which it would be difficult, if not impossible, to operate the VLT as efficiently and with as much success as is the case today. Of course, in the end it is the symbiosis between the VLT Control System (VCS) and the DFS, plus the hard work of dedicated development and operational staff, that has made the success of the VLT possible. Although the basic framework of the DFS can be considered 'complete' and the DFS has been in operation for approximately three years by now, the implementation of improvements and enhancements is an ongoing process, mostly driven by the appearance of new requirements. This article describes the origin of such new requirements on the DFS and discusses the challenges that have been faced in adapting the DFS to an ever-changing operational environment. Examples are given of recent new concepts designed and implemented to make the base part of the DFS more generic and flexible. The general adaptation of the DFS at the system level to reduce maintenance costs, to increase robustness and reliability, and, to some extent, to keep it in conformance with industry standards is also described. Finally, the general infrastructure needed to cope with a changing system is discussed in depth.
In the coming decade we will build on the foundations of current large-scale imaging surveys such as the SDSS, 2MASS and MACHO to develop deep, wide-field imaging surveys covering over 15,000 square degrees that are designed to probe the time domain. One such project is the Large Synoptic Survey Telescope (LSST). We describe here some of the data management challenges we face in moving from the current generation of surveys to an imaging program of the size of the LSST. Scaling from today's deep CCD imaging surveys and wide-field photometric surveys, we show that the computational challenge of analyzing the data from a three-Gigapixel camera with 10 s integrations should be manageable on the time frame on which the LSST is expected to be delivered. While encouraging, these hardware considerations are only a very small aspect of the software engineering and data management procedures that must be developed in order for the LSST to succeed.
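The scaling argument can be made concrete with rough numbers; the cadence, pixel depth, and observing time below are assumptions for illustration only, and the archived volume will depend on compression and survey strategy.

    PIXELS          = 3.0e9     # three-Gigapixel camera
    BYTES_PER_PIXEL = 2         # 16-bit raw pixels (assumed)
    CADENCE_S       = 15        # 10 s integration + ~5 s readout/slew (assumed)
    NIGHT_S         = 8 * 3600  # usable dark time per night (assumed)
    NIGHTS_PER_YEAR = 300       # weather-adjusted (assumed)

    per_exposure = PIXELS * BYTES_PER_PIXEL            # ~6 GB per frame
    per_night    = per_exposure * NIGHT_S / CADENCE_S  # ~10 TB per night
    per_year     = per_night * NIGHTS_PER_YEAR         # a few PB raw per year

    print(f"{per_exposure/1e9:.0f} GB/exposure, {per_night/1e12:.1f} TB/night, "
          f"{per_year/1e15:.1f} PB/year")

Any raw-data rate of this order must be reduced, catalogued, and served in near real time, which is where the software engineering challenge dominates the hardware one.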
Today's astronomers may use the telescopes and instruments of many observatories to execute their science observations. Discovering the distributed resources that are available is time-consuming and error-prone, because astronomers must manually collect facility information and match it to the needs of their science observations. While Phase 1 and Phase 2 of the proposal process are well supported by a wide variety of software tools, the initial phase of discovering what resources are available, Phase 0, suffers from a lack of software support. This paper proposes the creation of a Phase 0 Network to fill this void. The network is built upon peer-to-peer (P2P) technology, showing that this new approach to distributed computing has viable uses in astronomy.
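The core Phase 0 operation, matching an observation's needs against facility capability records published by peers on the network, might look schematically like this; the record fields and values are hypothetical.

    # Facility capability records, as peers on a Phase 0 network might publish.
    facilities = [
        {"name": "Telescope A", "aperture_m": 8.2, "bands": {"V", "R", "K"},
         "modes": {"imaging", "spectroscopy"}},
        {"name": "Telescope B", "aperture_m": 3.6, "bands": {"V", "R"},
         "modes": {"imaging"}},
    ]

    def match(need, records):
        """Return the facilities satisfying every criterion of a science need."""
        return [r["name"] for r in records
                if r["aperture_m"] >= need["min_aperture_m"]
                and need["band"] in r["bands"]
                and need["mode"] in r["modes"]]

    need = {"min_aperture_m": 4.0, "band": "K", "mode": "spectroscopy"}
    print(match(need, facilities))   # -> ['Telescope A']

In a P2P setting each peer evaluates the query against its own records and returns its matches, so no central registry of facilities needs to be maintained.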
Since the start of operations in December 1998, the Subaru Telescope ARchive System (STARS) has stored and managed the data taken at the summit of Mauna Kea, Hawaii; the total amount of data is currently about 3 TB, or ~600,000 files. The data production rate is increasing gradually as telescope and instrument operations become more stable.
STARS itself has been upgraded since our report at the last SPIE meeting, held in Munich in March 2000, and new features have been developed and established in Mitaka, Japan, for mirroring all data stored in STARS. We have also started releasing the data that have passed their proprietary terms to the world via a separate system.
We will discuss the concepts and current status of our distributed archive systems in detail, and their impact on the scientific and engineering return of the Subaru Telescope.
In order to operate a large telescope, it is crucial to have a good weather forecast, especially of the temperature at the time the telescope begins its preparations, i.e., opens the dome to introduce fresh air inside. For this purpose, the Mauna Kea Weather Center (MKWC) was established in July 1998 on the initiative of the Institute for Astronomy, University of Hawaii. Weather forecasting is not a simple matter; it is difficult in general and especially so in the quite unique environment of the summit of Mauna Kea. MKWC introduced a system of numerical forecasting based on the fifth-generation mesoscale model, MM5, which ran on the Subaru Telescope's vector-parallel supercomputer VPP700 for the past three years. With the introduction of the new supercomputer system at Subaru Telescope, we have prepared new programs for the new system. A long-term but coarse-grid forecast is available every day from the National Centers for Environmental Prediction (NCEP); the MKWC system takes the results of these coarse-grid simulations over the Pacific Ocean from NCEP and readjusts the data to a fine grid, down to 1 km spatial separation at the summit of Mauna Kea, i.e., the telescope sites of the Mauna Kea Observatories. Computation begins around 20:00 HST, and the 48-hour forecast completes around 01:00 HST the next morning. Conversion to WWW graphics finishes around 05:00 HST; the MKWC specialist then takes the results of the numerical forecast into account to issue a precise forecast for all the observatories at the summit of Mauna Kea at 10:00 HST. This is a collaboration among the observatories to make better use of the observing environment.
We have developed a system at the Canada-France-Hawaii Telescope (CFHT), SkyProbe, which allows for the direct measurement of the true attenuation by clouds once per minute, to within a percent, directly on the field at which the telescope is pointed. It has been possible to make this system relatively inexpensive thanks to low-cost CCD cameras from the amateur market. A crucial addition to this hardware is the quite recent availability of a full-sky photometric catalog at the appropriate depth: the Tycho catalog, from the Hipparcos mission. The central element is the automatic data analysis pipeline developed at CFHT, Elixir, for the improved operation of the CFHT wide-field imagers, CFH12K and MegaCam. SkyProbe's FITS images are processed in real time, and the pipeline output (a zero-point attenuation) provides the current sky transmission to the observers and helps immediate decision making. These measurements are also attached to the archived data, adding a key criterion for future use by other astronomers.
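The essence of the measurement, comparing instrumental magnitudes of Tycho stars detected on each frame with their catalog values to obtain a per-image zero point, and hence the attenuation relative to a photometric night, can be sketched as follows; the function names and the clear-sky reference value are illustrative.

    import math
    import statistics

    def zero_point(matches):
        """matches: (catalog_mag, instrumental_flux_adu) pairs for Tycho
        stars identified on one frame. Returns the photometric zero point."""
        zps = [mag + 2.5 * math.log10(flux) for mag, flux in matches if flux > 0]
        return statistics.median(zps)     # median is robust to mismatches

    CLEAR_SKY_ZP = 20.0   # illustrative zero point on a photometric night

    def attenuation_mag(matches):
        """Cloud attenuation in magnitudes relative to a photometric night."""
        return CLEAR_SKY_ZP - zero_point(matches)

    frame = [(8.1, 42000.0), (9.3, 14500.0), (10.0, 7600.0)]
    print(f"attenuation = {attenuation_mag(frame):+.2f} mag")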
Traditional methods of data collection typically rely on each instrument storing data locally during each data collection run, with the files relayed to a central storage location at a later time. For moderate-rate systems this is an acceptable paradigm. However, as ultra-high-bandwidth instruments become available, this approach presents two significant limitations. First, the bandwidth required for the transfers can become unrealistic, and the transfer times are prohibitive. Second, the increasing complexity, speed, and breadth of instruments presents significant challenges in combining the data into a coherent data set for analysis. The Starfire Optical Range is in the process of implementing a centralized data storage system that provides multi-gigabyte-per-second transfer rates and allows each instrument to store directly to the primary data store. Additionally, the architecture provides for absolute synchronization of every data sample throughout all sensors. The result is a single data set with data from all instruments synchronized frame by frame.
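A minimal sketch of the synchronization idea, merging per-instrument sample streams into frame-synchronized records, assuming every sample already carries a timestamp from a common absolute clock (the names and tick size are invented):

    from collections import defaultdict

    def synchronize(streams, tick_us=100):
        """streams: {instrument: [(timestamp_us, sample), ...]} with
        timestamps from a shared clock. Group samples into frames by clock
        tick; keep only ticks where every instrument contributed."""
        frames = defaultdict(dict)
        for instrument, samples in streams.items():
            for t_us, sample in samples:
                frames[t_us // tick_us][instrument] = sample
        return {tick: frame for tick, frame in sorted(frames.items())
                if len(frame) == len(streams)}

    streams = {"wfs":     [(100, "w0"), (200, "w1"), (300, "w2")],
               "tracker": [(105, "t0"), (210, "t1")]}
    print(synchronize(streams))   # tick 3 is dropped: no tracker sample

In the hardware system described here the synchronization is absolute by construction; the point of the sketch is only that a shared time base turns many instrument streams into one coherent data set.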
In the coming decade, the drive to increase the scientific returns on capital investment and to reduce costs will force automation to be implemented in many of the scientific tasks that have traditionally been manually overseen. Thus, spacecraft autonomy will become an even greater part of mission operations. While recent missions have made great strides in the ability to autonomously monitor and react to changing health and physical status of spacecraft, little progress has been made in responding quickly to science driven events. The new generation of space-based telescopes/observatories will see deeper, with greater clarity, and they will generate data at an unprecedented rate. Yet, while onboard data processing and storage capability will increase rapidly, bandwidth for downloading data will not increase as fast and can become a significant bottleneck and cost of a science program.
For observations of inherently variable targets and targets of opportunity, the ability to recognize early that an observation will not meet the science goals of variability or minimum brightness, and to react accordingly, can have a major positive impact on the overall scientific returns of an observatory and on its operational costs. If the observatory can reprioritize the schedule to focus on alternate targets, discard uninteresting observations prior to downloading, or download them at a reduced resolution, its overall efficiency will be dramatically increased.
We are investigating and developing tools for a science goal monitoring (SGM) system. The SGM will have an interface to help capture higher-level science goals from scientists and translate them into a flexible observing strategy that SGM can execute and monitor. SGM will then monitor the incoming data stream and interface with data processing systems to recognize significant events. When an event occurs, the system will use the science goals given it to reprioritize observations, and react appropriately and/or communicate with ground systems - both human and machine - for confirmation and/or further high priority analyses.
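A minimal sketch of the monitoring loop: science goals are expressed as predicates over incoming measurements, and a failed goal demotes the observation so alternate targets run first. All names, priorities, and thresholds are hypothetical.

    import heapq

    # Goals as predicates over measurements; smaller priority = more urgent.
    goals = {"obs-1": lambda m: m["brightness"] >= 15.0,  # minimum brightness
             "obs-2": lambda m: m["variability"] > 0.1}   # must be variable

    queue = [(1, "obs-1"), (2, "obs-2"), (3, "obs-3")]
    heapq.heapify(queue)

    def deprioritize(queue, obs_id, new_priority=999):
        """Push obs_id to the back of the plan."""
        rest = [(p, o) for (p, o) in queue if o != obs_id]
        rest.append((new_priority, obs_id))
        heapq.heapify(rest)
        return rest

    def on_measurement(queue, obs_id, measurement):
        """Called as data arrive: demote observations whose goals failed and
        flag them for reduced-resolution download or discard."""
        goal = goals.get(obs_id)
        if goal and not goal(measurement):
            print(f"{obs_id}: goal failed -> low-resolution download")
            return deprioritize(queue, obs_id)
        return queue

    queue = on_measurement(queue, "obs-1", {"brightness": 12.3})
    print(heapq.nsmallest(len(queue), queue))   # obs-2 now leads the plan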
We present a versatile scheduler for automated telescope observations and operations. The main objective is to optimize telescope use, while taking alerts (e.g., Gamma-Ray Bursts), weather conditions, and mechanical failures into account. Based on our previous experiment, we propose a two-step approach. First, a daily module develops plan schemes during the day that offer several possible scenarios for a night and provide alternatives to handle problems. Second, a nightly module uses a reactive technique, driven by events from different sensors, to select at any moment the "best" block of observations to launch from the current plan scheme. In addition to a classical scheduling problem under resource constraints, we also want to provide dynamic reconfiguration facilities. The proposed approach is general enough to be applied to any other type of telescope for which reactivity is important.
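The nightly, reactive half of such a scheme can be caricatured as follows: from the candidate blocks of the current plan scheme, pick at each decision point the highest-value block that remains feasible under the latest sensor events. The scoring and the event model here are invented for illustration.

    def best_block(blocks, conditions):
        """Return the highest-scoring feasible block, or None to signal a
        switch to an alternative plan scheme."""
        feasible = [b for b in blocks
                    if (not b["needs_photometric"] or conditions["photometric"])
                    and b["alt_deg"] >= b["min_alt_deg"]
                    and not conditions["dome_fault"]]
        return max(feasible, key=lambda b: b["score"], default=None)

    blocks = [
        {"name": "GRB follow-up", "score": 10, "needs_photometric": False,
         "min_alt_deg": 20, "alt_deg": 35},
        {"name": "photometric standard", "score": 7, "needs_photometric": True,
         "min_alt_deg": 30, "alt_deg": 50},
    ]
    conditions = {"photometric": False, "dome_fault": False}  # clouds rolled in
    chosen = best_block(blocks, conditions)
    print(chosen["name"] if chosen else "switch to alternative plan scheme")

The daily module's job is then to precompute plan schemes rich enough that this nightly selection rarely comes up empty.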
Fulfilling the promise of the era of great observatories, NASA now has more than three space-based astronomical telescopes operating in different wavebands. This situation provides astronomers with the unique opportunity of simultaneously observing a target in multiple wavebands with these observatories. Currently, scheduling multiple observatories simultaneously for coordinated observations is highly inefficient. Coordinated observations require painstaking manual collaboration among the staff at each observatory. Because coordinated observations are time-consuming and expensive to schedule, observatories often limit the number that can be conducted. In order to exploit new paradigms for observatory operation, the Advanced Architectures and Automation Branch of NASA's Goddard Space Flight Center has developed a tool called the Visual Observation Layout Tool (VOLT). The main objective of VOLT is to provide a visual tool to automate the planning of coordinated observations by multiple astronomical observatories. Four of NASA's space-based astronomical observatories - the Hubble Space Telescope (HST), Far Ultraviolet Spectroscopic Explorer (FUSE), Rossi X-ray Timing Explorer (RXTE) and Chandra - are enthusiastically pursuing the use of VOLT. This paper will focus on the purpose for developing VOLT, as well as the lessons learned during the infusion of VOLT into the planning and scheduling operations of these observatories.
In the continuing effort to streamline our systems and improve service to the science community, the Space Telescope Science Institute (STScI) is developing and releasing APT, the Astronomer's Proposal Tool, as the new interface for Hubble Space Telescope (HST) Phase I and Phase II proposal submissions for HST Cycle 12. APT was formerly called the Scientist's Expert Assistant (SEA), which started as a prototype effort to bring state-of-the-art technology and more visual tools and power into the hands of proposers, so that they can optimize the scientific return of their programs as well as that of HST.
Proposing for HST and other missions consists of requesting observing time and/or archival research funding. This step is called Phase I, in which the scientific merit of a proposal is considered by a community-based peer-review process. Accepted proposals then proceed through Phase II, in which the observations are specified in sufficient detail to enable scheduling on the telescope.
In this paper, we present our concept and implementation plans for our Phase I development and submission tool, APT. More importantly, we go behind the scenes and discuss why it is important for the Science Policies Division (SPD) and other groups at STScI to have a new submission tool and new submission output products. This paper is an update on the status of the HST Phase I Proposal Processing System that was described in the published paper “A New Era for HST Phase I Development and Submission.”
We present the current status and development plans for ASPRO, a software package for preparing observations with an optical interferometer. ASPRO enables the user to find which configurations of the interferometer are best adapted to studying the science object, which objects could be used to calibrate the interferometric measurements of the science object, and what accuracy could be expected for the measurements. ASPRO is developed by the Jean-Marie Mariotti Center for Expertise in Interferometry to help astronomers optimize the use of the facility to which they submit an observing proposal (in particular ESO's VLTI) and to optimize its scientific return.
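At the heart of such preparation is the (u, v) track that a baseline sweeps out as the Earth rotates, which determines how well a given interferometer configuration samples the object. A minimal sketch of the standard geometry follows, with an illustrative baseline and source; it is not the ASPRO implementation.

    import math

    def uv_track(b_east, b_north, b_up, lat_deg, dec_deg, hour_angles_h):
        """Projected baseline (u, v) in metres for a ground baseline given in
        local East/North/Up coordinates at site latitude lat_deg, for a
        source at declination dec_deg, over a list of hour angles (hours)."""
        lat, dec = math.radians(lat_deg), math.radians(dec_deg)
        # Baseline in the equatorial frame (X toward meridian/equator,
        # Y toward east, Z toward the pole):
        X = -b_north * math.sin(lat) + b_up * math.cos(lat)
        Y = b_east
        Z = b_north * math.cos(lat) + b_up * math.sin(lat)
        track = []
        for ha in hour_angles_h:
            H = math.radians(15.0 * ha)
            u = X * math.sin(H) + Y * math.cos(H)
            v = (-X * math.cos(H) + Y * math.sin(H)) * math.sin(dec) \
                + Z * math.cos(dec)
            track.append((u, v))
        return track

    # Illustrative 100 m east-west baseline at a Paranal-like latitude,
    # source at declination -40 deg, hour angles -3h..+3h:
    for u, v in uv_track(100.0, 0.0, 0.0, -24.6, -40.0, range(-3, 4)):
        print(f"u = {u:7.1f} m, v = {v:7.1f} m")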
In this article we present the Data Flow System (DFS) for the Very Large Telescope Interferometer (VLTI). The Data Flow System is the VLT end-to-end software system for handling astronomical observations from the initial observation proposal phase through the acquisition, processing and control of the astronomical data. The Data Flow system is now in the process of installation and adaptation for the VLT Interferometer. The DFS was first installed for VLTI first fringes utilising the siderostats together with the VINCI instrument and is constantly being upgraded in phase with the VLTI commissioning. When completed the VLT Interferometer will make it possible to coherently combine up to three beams coming from the four VLT 8.2m telescopes as well as from a set of initially three 1.8m Auxiliary Telescopes, using a Delay Line tunnel and four interferometry instruments. Observations of objects with some scientific interest are already being carried out in the framework of the VLTI commissioning using siderostats and the VLT Unit Telescopes, making it possible to test tools under realistic conditions. These tools comprise observation preparation, pipeline processing and further analysis systems. Work is in progress for the commissioning of other VLTI science instruments such as MIDI and AMBER. These are planned for the second half of 2002 and first half of 2003 respectively. The DFS will be especially useful for service observing. This is expected to be an important mode of observation for the VLTI, which is required to cope with numerous observation constraints and the need for observations spread over extended periods of time.
The Video Vector Magnetograph at Huairou Solar Observing Station in Beijing, China, is the primary instrument of its kind in the world designed to simultaneously measure the two-dimensional solar magnetic field and velocity field in different spectral lines. In order to satisfy the needs of various users, raw data received from the observing system are processed onto CD-ROMs for archiving and for distribution to the co-investigators, and summary data are generated for viewing at the HSOS Web site (http://sun.bao.ac.cn). For safety, the data archive is designed to be stored in two parts: one is located locally, the other at the headquarters of the National Astronomical Observatories. This paper presents a preliminary design and a preliminary implementation of the data archive system. The goal of this project is to provide highly efficient, fast, and extensible software, characterized by low cost and high performance, and to create a high-quality software system. The article encompasses a wide variety of experiments, from the inception and prototype stages to the current state of maturity of the database system, and covers the means and tools employed in a series of implementation steps for the operating system, database management system, server-side scripting language, etc. The solution offers significant performance improvements over some existing methods in similar systems. All of the experiments were carried out on PCs running Linux. Anyone who follows the steps described herein should be able to build a good online database server in a short time.
With the deployment of the new FLAMES facility at the Kueyen Unit Telescope of the VLT, multi-object, fibre-fed capability will be added to the UVES spectrograph.
The FLAMES-UVES Data Reduction Software is a C library embedded in the MIDAS environment. It is designed to extend the UVES pipeline functionalities to support operations and to monitor the nightly and long-term performance of FLAMES-UVES at the Kueyen telescope of the VLT.
The peculiar spectral format of FLAMES-UVES imposes very stringent constraints on instrument stability, and poses some major challenges. Some of them are common to any multi-fibre-fed echelle spectrograph, such as automatic order and fibre location and identification, deblending of spectra carried by neighboring fibres, flagging and removal of cosmic-ray hits, etc.; others are typical of FLAMES-UVES, such as the automatic measurement and correction of some spectrograph instabilities which, although irrelevant for slit-mode operation, would severely cripple the maximum achievable S/N ratio in the fibre-fed case if neglected.
Throughout the reduction, errors are thoroughly propagated from raw frames to the final data products, pixel by pixel, easing the assessment of the actual, physical significance of weak features.
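The kind of pixel-by-pixel propagation meant here can be illustrated on the simplest step, flat-field division, where the relative variances add to first order. This is a generic sketch, not the DRS code.

    import numpy as np

    def divide_with_variance(data, var_data, flat, var_flat):
        """r = data/flat with first-order error propagation:
        var_r = r**2 * (var_data/data**2 + var_flat/flat**2)."""
        r = data / flat
        var_r = r**2 * (var_data / data**2 + var_flat / flat**2)
        return r, var_r

    data = np.array([1000.0, 52.0, 8.0])   # raw counts
    var_data = data.copy()                 # Poisson: variance = counts
    flat = np.array([1.02, 0.98, 1.00])
    var_flat = np.full(3, 1e-4)

    r, var_r = divide_with_variance(data, var_data, flat, var_flat)
    print(r / np.sqrt(var_r))   # S/N per pixel: weak features score honestly low

Carrying the variance through every subsequent step gives each pixel of the final product a defensible uncertainty, which is what makes the physical significance of weak features assessable.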
We briefly discuss the performance of the pipeline and to what extent the DRS can be expected to recover the full information content without introducing artifacts, showing its results on test data.
UKIRT and JCMT, two highly heterogeneous telescopes, have embarked on several joint software projects covering all areas of observatory operations, such as observation preparation and scheduling, telescope control, and data reduction. In this paper we briefly explain the processes by which we have arrived at such a large body of shared code and discuss our experience with developing telescope-portable software and code re-use.
In the eyes of some, the Phase I proposal selection process is the most important activity handled by the Space Telescope Science Institute (STScI). Proposing for HST and other missions consists of requesting observing time and/or archival research funding. This step is called Phase I, in which the scientific merit of a proposal is considered by a community-based peer-review process. Accepted proposals then proceed through Phase II, in which the observations are specified in sufficient detail to enable scheduling on the telescope.
Each cycle, the Hubble Space Telescope (HST) Telescope Allocation Committee (TAC) reviews proposals and awards observing time that is valued at $0.5B when the total expenditures for HST over its lifetime are figured on an annual basis. This is in fact a very important endeavor that we continue to fine-tune. The process is open to the science community, and we constantly receive comments and praise for it.
Several cycles ago we instituted several significant changes to the process to address various concerns: fewer, broader panels, with redundancy to avoid conflicts of interest; redefinition of the TAC role to focus on larger programs; and incentives for the panels to award time to medium-sized proposals. In the last cycle, we offered new initiatives to enhance the scientific output of the telescope, among them the Hubble Treasury Program, the AR Legacy Program, and the AR Theory Program.
This paper outlines the current HST peer review process. We discuss why and how we made changes from our original system. We also discuss some ideas as to where we may go in the future to generate a stronger science program for HST and to reduce the burden on the science community. This paper is an update on the status of the HST Peer Review Process that was described in the published paper "Evolution of the HST Proposal Selection Process".
The Elixir system at CFHT provides automatic data quality assurance and calibration for the wide-field mosaic imaging camera CFH12K. Elixir consists of a variety of tools, including a real-time analysis suite which runs at the telescope to provide quick feedback to the observers, a detailed analysis of the calibration data, and an automated pipeline for processing data to be distributed to observers. To date, 2.4 × 10^12 night-time sky pixels from CFH12K have been processed by the Elixir system.
This paper presents an overview of the history and technology by which tools placed in the Hubble Space Telescope (HST) data processing pipeline were used to feed back information on observation execution to the scheduling system and observers.
Because the HST is in a relatively low orbit, which imposes a number of constraints upon its observations, it operates in a carefully planned, fully automated mode. To substitute for the direct observer involvement available at most ground-based observatories, and to provide rapid feedback on failures that might affect future visits, the Space Telescope Science Institute (STScI) gradually evolved a system for screening science and engineering products during pipeline processing. The highly flexible HST data processing system (OPUS) allows tools to be introduced that use the content of FITS keywords to alert production staff to potential telescope and instrument performance failures. Staff members review the flagged data and, if appropriate, notify the observer and the scheduling staff so that they can resolve the problems and possibly repeat the failed observations.
This kind of feedback loop represents a case study for other automated data collection systems where rapid response to certain quantifiable events in the data is required. Observatory operations staff can install processes to look for these events either in the production pipeline or in an associated pipeline into which the appropriate data are piped. That process can then be used to notify scientists to evaluate the data and decide upon a response or to automatically initiate a response.
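A screening step of this kind, evaluating FITS keywords against alert rules and flagging data for staff review, might look schematically like this; the keyword names and limits are invented.

    # Alert rules over FITS header keywords (names and limits are examples).
    RULES = [
        ("GUIDELCK", lambda v: v is True,     "guide star lock lost"),
        ("EXPTIME",  lambda v: v > 0.0,       "zero exposure time"),
        ("FGSLOCK",  lambda v: v != "COARSE", "coarse track only"),
    ]

    def screen(header):
        """Return alert messages for one FITS header (any keyword mapping)."""
        return [f"{key}={header[key]!r}: {msg}"
                for key, ok, msg in RULES
                if key in header and not ok(header[key])]

    header = {"GUIDELCK": False, "EXPTIME": 600.0, "FGSLOCK": "FINE"}
    for alert in screen(header):
        print("FLAG FOR REVIEW:", alert)   # staff then notify the observer

The same hook can instead trigger an automated response when the event is unambiguous, as noted above.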
Selections and tradeoffs made during mission concept development and ground system architecture definition determine the cost-effectiveness of spacecraft operations. The Next Generation Space Telescope (NGST) makes this difficult due to its unique mission requirements. Experience has shown that greater savings can be achieved at the ground station and its interfaces with the spacecraft. Since a majority of the bandwidth is used for science data, this is one of the major areas to explore.
This paper will address problems and experiences with the various approaches to accommodate the ground station interfaces with the spacecraft. As a team we have explored several approaches:
- Antenna size, frequency, and transmit power on the spacecraft are big drivers in determining the ground station cost;
- Data guarantee versus data-loss risk;
- Downlinking all data versus putting more logic for science processing on board, including guaranteed data delivery protocols and downlinking change-only data;
- Evaluation of recording the data at the ground station for reduced-rate playback later; and
- Transmitters at different frequencies for simultaneous downlinks.
Many of these topics, and how they are applied, change over the course of time as projects implement their requirements. To achieve the goal of 'low cost', innovative approaches have to be taken into consideration.