Neural Radiance Fields (NeRFs) have become a standard approach to 3D modeling. Despite their impressive capabilities, NeRF performance depends heavily on the quality of the input images. To address this, we propose integrating superresolution techniques with NeRFs to enhance 3D model fidelity. Our approach employs exposure correction to overcome model convergence failures resulting from geometric inconsistencies in Generative Adversarial Network (GAN) outputs. While previous studies have explored geometric consistency using refinement networks and inverse degradation pipelines, our solution seamlessly connects image restoration to the ultimate goal of 3D reconstruction. We report an average LPIPS improvement of 0.1065 across our degradation levels and models.
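The abstract does not specify the exposure-correction operator; as a minimal sketch, one common choice is a per-channel affine match of each super-resolved frame to its low-resolution reference, which suppresses the per-frame brightness drift of GAN outputs that can stall NeRF optimization. The function name and the mean/std matching scheme below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def exposure_correct(sr_img: np.ndarray, ref_img: np.ndarray) -> np.ndarray:
    """Hypothetical per-channel affine exposure match: rescale the
    super-resolved frame so each channel's mean and standard deviation
    agree with the low-resolution reference before NeRF training."""
    out = np.empty_like(sr_img)
    for c in range(sr_img.shape[-1]):
        sr_c, ref_c = sr_img[..., c], ref_img[..., c]
        gain = ref_c.std() / (sr_c.std() + 1e-8)                  # contrast match
        out[..., c] = (sr_c - sr_c.mean()) * gain + ref_c.mean()  # brightness match
    return np.clip(out, 0.0, 1.0)  # assumes float images normalized to [0, 1]
```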
Here we present a concept for a mobile, completely off-grid, robotic observatory for rapid deployment and observational support. This 1-meter aperture, 3-degree FOV telescope employs state-of-the-art commercial instrumentation such that it supports not only satellite orbit cataloging but also closely spaced object detection/characterization at atmospheric seeing limits, i.e., sub-arcsecond pixels versus more traditional cataloging systems' 2-3 arcsecond pixels. Its relatively large étendue, high throughput, and slew speeds of up to 50 deg/s provide high survey speeds, whether for lost space debris or astronomical transients. We detail the design and simulated performance of this Deployable, Attritable Optical (DAO) system. Furthermore, each system will employ US Space Force-developed observatory control software called SensorKit, completely open-source software that enables robotic operation and, if desired for SDA purposes, communication with the Unified Data Library. Scheduling, tasking, data processing, dissemination, and more are part of the US Space Force MACHINA program, presented separately in these proceedings.
Aperture photometry is a critical method for estimating the visual magnitudes of stars and satellites, essential in Space Domain Awareness (SDA) for tasks like collision avoidance. Traditional methods use fixed aperture shapes, limiting accuracy and adaptability. We introduce a novel approach that defines pixel-specific regions for the aperture and annulus, significantly improving accuracy. Even so, conventional aperture photometry remains constrained by predefined equations, leading to errors and sensitivity to image conditions. To overcome these limitations, we propose a learned photometry pipeline that combines aperture photometry with machine learning. Our approach is effective for both stars and satellites across diverse image conditions. We tested it on three datasets, including a custom synthetic dataset and real imagery, achieving a 1.44% error in star visual magnitude estimation and a 0.64% error in satellite visual magnitude estimation.
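For context, the conventional pipeline that the learned approach builds on reduces to annulus-based background subtraction plus the Pogson magnitude relation. The sketch below (with an assumed photometric zero point) shows how pixel-specific boolean masks can replace fixed circular apertures, e.g. to follow streaked satellites; it is an illustration, not the authors' pipeline.

```python
import numpy as np

def aperture_magnitude(image, aperture_mask, annulus_mask, zero_point=25.0):
    """Background-subtracted instrumental magnitude from per-pixel masks.

    aperture_mask / annulus_mask are boolean arrays the same shape as
    `image`, so the measurement regions can take arbitrary shapes rather
    than fixed circles. zero_point is an assumed calibration constant."""
    sky = np.median(image[annulus_mask])                  # robust per-pixel background
    flux = image[aperture_mask].sum() - sky * aperture_mask.sum()
    return zero_point - 2.5 * np.log10(max(flux, 1e-12))  # Pogson relation
```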
Recent work demonstrates recognition of artificial satellites in spatially unresolved observations by utilizing learned spectroscopic classification (SpectraNet [1]). That proof of concept exposes critical identifying information currently lacking in catalogs used by space domain awareness stakeholders. In this work we present experiments to increase the accessibility and efficiency of SpectraNet-enabled systems by probing the bandpass and resolution requirements for learned recognition of satellites. To enable affordable, off-the-shelf instrumentation, this work focuses on wavelength ranges accessible to silicon-based detectors (400-1000 nanometers). While the SpectraNet proof of concept utilized a medium-resolution spectrograph on a 3.6-meter telescope at 10,000 feet elevation, we show that the identifying spectral features relate to an object's overall spectral energy distribution and are accessible at significantly lower spectral resolution. This finding relaxes the need for large telescopes at high altitude. We further demonstrate that the technology can be utilized via simultaneous multi-band filter photometry, and we discuss design considerations for properly obtaining simultaneous photometry. Thus this work demonstrates that, in simulation, learned spectral recognition is an effective technology from high-resolution spectrographs through simultaneous multi-filter photometric instruments. We provide experiments to understand the minimum engineered system needed to perform effective learned recognition, such that the technology can be hardened and widely proliferated.
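One way to probe resolution requirements in simulation is to degrade a high-resolution spectrum into bins of constant resolving power R = λ/Δλ across the silicon band. The sketch below is such an assumed degradation operator (simple bin averaging over 400-1000 nm), not the authors' exact procedure.

```python
import numpy as np

def degrade_resolution(wl_nm, flux, r_out):
    """Average a spectrum into bins of constant resolving power
    R = lambda / delta_lambda over an assumed 400-1000 nm silicon band."""
    edges = [400.0]
    while edges[-1] < 1000.0:
        edges.append(edges[-1] * (1.0 + 1.0 / r_out))  # each bin spans lambda/R
    edges = np.asarray(edges)
    idx = np.digitize(wl_nm, edges) - 1                # bin index per input sample
    centers, binned = [], []
    for i in range(len(edges) - 1):
        sel = idx == i
        if sel.any():
            centers.append(0.5 * (edges[i] + edges[i + 1]))
            binned.append(flux[sel].mean())
    return np.asarray(centers), np.asarray(binned)
```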
Recent work demonstrates that convolutional neural networks can be trained to recognize artificial satellites from spatially unresolved ground-based observations (SpectraNet). SpectraNet enables space domain awareness (SDA) catalogs to be enriched with object identity, a critical source of information for space domain stakeholders. As learned spectral SDA matures, the conditions for training and deploying performant, calibrated neural network recognition algorithms must be measured. In this work we present a simulated three-year baseline of observations using a longslit spectrograph on a single telescope. We use this dataset to develop a framework for measuring baseline data requirements for performant SpectraNet models, and for testing the performance of those models after deployment. On this limited (single-telescope, longslit-spectrograph) setup, the presented framework returns a performant model after three weeks of collections. Further, we find that a model can be deployed for a full annual cycle after twenty-six weeks of data collection, and that the model reaches maximum sustained inference performance after a year. Thus a SpectraNet-powered longslit spectrograph can provide tactical inferences after a few weeks and be retrained to infer through seasonal variability during deployment. We find that the simulated system and dataset regularly exceed 82% classification accuracy, and discuss performance improvements with enhanced instrumentation and/or multi-telescope networks.
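A minimal version of such a data-requirements framework, assuming a scikit-learn-style classifier interface and a week index per observation (both illustrative assumptions), trains on the first k weeks and scores on all later observations, sweeping k to locate where held-out accuracy plateaus:

```python
import numpy as np

def accuracy_by_training_span(weeks, X, y, make_model, spans=(3, 13, 26, 52)):
    """Sweep the number of leading weeks used for training and report
    held-out accuracy on all subsequent observations, to find the data
    volume beyond which a deployed model's performance plateaus."""
    weeks = np.asarray(weeks)
    results = {}
    for span in spans:
        train = weeks < span          # first `span` weeks for training
        model = make_model()          # fresh classifier per trial
        model.fit(X[train], y[train])
        results[span] = (model.predict(X[~train]) == y[~train]).mean()
    return results
```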
We introduce two new tools for the application of polarimetry to space domain awareness (SDA): the LoVIS spectropolarimeter on the 3.6 m AEOS telescope and deep convolutional neural networks (CNNs). Using a dataset of 20,000 simulated satellite observations, we train a CNN to map distance-invariant spectropolarimetric data to object identity. We report the classification accuracy of this simulation for a 9-class satellite problem, comparing results against low-resolution spectra, for which prior success has been demonstrated, as well as against solar phase angle and satellite apparent magnitude. These initial experiments show potential for improved discrimination between nearly identical satellites on the basis of added polarimetric data.
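The abstract does not describe the network architecture; as a hedged sketch, a small 1-D CNN over stacked spectropolarimetric channels (e.g., intensity plus the distance-invariant normalized Stokes parameters q = Q/I and u = U/I) illustrates the mapping to a 9-class output. Layer sizes and channel choices below are assumptions.

```python
import torch
import torch.nn as nn

class SpectroPolCNN(nn.Module):
    """Illustrative 1-D CNN mapping spectropolarimetric channels
    (intensity, q = Q/I, u = U/I) to one of nine satellite classes."""
    def __init__(self, n_channels: int = 3, n_classes: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over wavelength bins
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, wavelength bins) -> (batch, n_classes) logits
        return self.head(self.features(x).squeeze(-1))
```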
The detection of closely spaced artificial satellites informs tactical decision making in high-risk scenarios in the space domain. In regimes where spatial information is lost (ground observations of small or distant satellites), spectroastrometry simulations have demonstrated the potential to detect the presence of multiple objects down to 0.05 arcseconds, or ten meters at geostationary orbit, using a medium-resolution optical spectrograph on a large-aperture telescope [1]. This technique falls into the growing field of learned space domain awareness: leveraging convolutional neural networks to rapidly infer tactical information from complex, non-intuitive data. In this work we present a field rotation nodding technique that removes the need for a priori knowledge of the closely spaced objects' on-sky orientation. We discuss modifications to an optical spectrograph necessary to perform this technique, and we present simulated bounds on the effectiveness of spectroastrometry for the detection of closely spaced objects.
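For reference, the ten-meter figure follows from the small-angle relation, taking the GEO range as the nadir distance d ≈ 35,786 km (the exact range depends on observer geometry):

$$ s = d\,\theta = \left(3.5786\times10^{7}\,\mathrm{m}\right)\times\frac{0.05''}{206{,}265''/\mathrm{rad}} \approx 8.7\ \mathrm{m}. $$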
Effective space domain awareness (SDA) requires accurate positions and identities of artificial satellites. These measurements, critical to effective decision making in the high-risk on-orbit environment, are daunting in the deep-space geosynchronous (GEO) regime, where distance precludes collection of spatially resolved measurements from ground-based telescopes. Neural networks designed for deep space object detection and spectroscopic positive identification have been shown to be effective tools for these mission-critical SDA measurements. In this work we demonstrate the potential of slitless field spectroscopy to provide simultaneous object detection and identification of on-orbit assets at GEO. Slitless spectrographs expose the reflection physics needed for spectroscopic positive identification without destroying the spatial information used for object detection. Such systems are compact and hardened in comparison to classic spectrographs, and may be deployed to small telescopes. We present a GPU-accelerated simulation environment for the production of realistic synthetic imagery to support the generation of large datasets for deep learning, and we establish a baseline for simultaneous detection and identification performance by training convolutional neural networks on synthetic datasets created with this tool. This work reduces risk for initial technology development and dataset collection, and provides constraints for the design and development of slitless spectrograph systems for space domain awareness.
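As an illustrative fragment of such a simulator (a sketch assuming linear dispersion along the x-axis, with no PSF convolution or noise, which a full GPU implementation would add), each source contributes an undispersed zeroth-order spot at its sky position plus a first-order trace encoding its spectrum:

```python
import numpy as np

def render_slitless_source(image, x0, y0, spectrum,
                           disp_px_per_bin=1.0, zeroth_frac=0.1):
    """Paint one point source onto a synthetic slitless-spectrograph frame:
    a zeroth-order spot preserving position (for detection) plus a dispersed
    first-order trace encoding the spectrum (for identification).
    Dispersion model and flux split between orders are assumptions."""
    y = int(round(y0))
    if not 0 <= y < image.shape[0]:
        return image
    x_zero = int(round(x0))
    if 0 <= x_zero < image.shape[1]:
        image[y, x_zero] += zeroth_frac * float(np.sum(spectrum))  # zeroth order
    for i, f in enumerate(spectrum):                               # first order
        x = int(round(x0 + i * disp_px_per_bin))
        if 0 <= x < image.shape[1]:
            image[y, x] += (1.0 - zeroth_frac) * f
    return image
```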