The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high energy density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced in an ignition capsule. The VISAR system utilizes three streak cameras, which are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, degrading the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
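To illustrate the TPS machinery described above (this is a minimal sketch, not the production VISAR implementation), a 2-D thin-plate spline can be fit by solving the standard bordered linear system for the radial-basis weights and affine terms; the function names and the control-point setup below are hypothetical stand-ins for the comb-derived fiducials:

```python
import numpy as np

def tps_kernel(r2):
    # TPS radial basis U(r) = r^2 * log(r^2), with U(0) defined as 0.
    out = np.zeros_like(r2)
    mask = r2 > 0
    out[mask] = r2[mask] * np.log(r2[mask])
    return out

def fit_tps(src, dst):
    """Fit a 2-D thin-plate spline mapping src control points to dst.

    src, dst: (n, 2) arrays, e.g. measured comb positions and their
    known (ideal) positions. Returns (n+3, 2) coefficients: n RBF
    weights followed by 3 affine terms, per output coordinate.
    """
    n = src.shape[0]
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), src])
    # Bordered system [[K, P], [P^T, 0]] [w; a] = [dst; 0]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)

def tps_transform(params, src, pts):
    # Evaluate the fitted warp at arbitrary points pts (m, 2).
    n = src.shape[0]
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])
    return K @ params[:n] + P @ params[n:]
```

In a warp-correction setting, the fitted transform would be evaluated on the image grid to resample the distorted streak record onto corrected coordinates; TPS reproduces affine maps exactly and interpolates the control points exactly, which is why it suits smooth nonlinear camera distortions.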
The National Ignition Facility (NIF) is producing experimental results for the study of Inertial Confinement Fusion (ICF). The Gamma Reaction History (GRH) diagnostic at NIF can detect gamma rays to measure fusion burn parameters such as fusion burn width, bang time, neutron yield, and areal density of the compressed ablator for cryogenic deuterium-tritium (DT) implosions. Gamma-ray signals detected with this diagnostic are inherently distorted by hardware impulse response functions (IRFs) and gains, and are composed of several components including gamma rays from laser-plasma interactions (LPI). One method for removing hardware distortions to approximate the gamma-ray reaction history is deconvolution. However, deconvolution of the distorted signal to obtain the gamma-ray reaction history and its associated parameters presents an ill-posed inverse problem and does not separate out the source components of the gamma-ray signal. A multi-dimensional parameter space model for the distorted gamma-ray signal has been developed in the literature. To complement a deconvolution, we develop a multi-objective optimization algorithm to determine the model parameters so that the error between the model and the collected gamma-ray data is minimized in the least-squares sense. The implementation of the optimization algorithm must be sufficiently robust to be used in automated production software. To achieve this level of robustness, impulse response signals must be carefully processed, and constraints on the parameter space based on theory and experimentation must be implemented to ensure proper convergence of the algorithm. In this paper, we focus on the optimization algorithm's theory and implementation.
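The forward-model fitting strategy described above can be sketched as follows. This is an illustrative toy, not the GRH production model: the Gaussian burn shape, the parameter names (amplitude, bang time, burn width), and the bounds are hypothetical, and `scipy.optimize.least_squares` stands in for whatever constrained solver the production code uses. The point is the structure: convolve a parametric source history with the measured IRF and minimize the residual against the data under parameter-space constraints:

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_burn(t, amp, t0, sigma):
    # Idealized reaction history: amplitude, bang time t0, burn width sigma.
    return amp * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

def forward_model(params, t, irf):
    # Distorted signal = source history convolved with the hardware IRF.
    amp, t0, sigma = params
    source = gaussian_burn(t, amp, t0, sigma)
    dt = t[1] - t[0]
    return np.convolve(source, irf, mode="same") * dt

def fit_burn(t, data, irf, p0, bounds):
    # Bounds encode physics-based constraints on the parameter space,
    # which is what keeps the fit well-posed where plain deconvolution is not.
    residual = lambda p: forward_model(p, t, irf) - data
    return least_squares(residual, p0, bounds=bounds)
```

Fitting a constrained parametric model avoids the noise amplification of direct deconvolution, at the cost of committing to a source shape; in practice one would also add model components for the separate gamma-ray sources (e.g. LPI) rather than a single term.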