KEYWORDS: Optical surfaces, Modulation transfer functions, Design, Diffraction, Wavefront aberrations, Ray tracing, Wavefronts, Point spread functions, Modeling, Diffractive optical elements
Hybrid diffractive lenses are an enabling technology that allows the shaping and control of wavefronts by precisely controlled zone structures, a coherent version of a standard Fresnel lens. They are extremely useful in the medium and long wave infrared spectral regions for performing colour correction, where traditional cemented doublets (used in the visible region) are not an option. These surface structures are often modelled not as an actual physical structure, but in a way that treats the surface fictitiously as a phase function on the surface. This makes some results dubious and creates substantial difficulty in assessing and specifying tolerances. In the current presentation, we review a more physical model based on the ideas of zone decomposition and then show how this may be applied to advantage for multi-order diffractive lenses (where the blaze height is an integer multiple of the basic step height). The zone decomposition view is ideal for understanding the diffractive structure on a lens surface. In particular, it allows one to view diffraction efficiency and colour correction in a different manner. From this standpoint, one sees how interpolation takes place from a standard diffractive surface all the way up to a purely refractive Fresnel lens. The multi-order diffractive surface sits between these, exhibiting coherence across different zones alongside the onset of incoherence, thereby tending back towards a surface with only refractive properties.
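The behaviour of a multi-order diffractive surface described above can be illustrated with the standard scalar-theory efficiency formula: a surface blazed for design order p at design wavelength λ₀ reaches unit efficiency not only at λ₀ but at every harmonic wavelength pλ₀/m. The function name, parameter choices and the neglect of substrate dispersion here are illustrative assumptions, not taken from the paper; this is a minimal sketch of the textbook scalar result.

```python
import numpy as np

def mod_efficiency(wavelength_um, m, p, lambda0_um=4.0):
    """Scalar diffraction efficiency of a multi-order diffractive (MOD)
    surface blazed for order p at design wavelength lambda0_um, evaluated
    in diffraction order m.  Substrate dispersion is neglected, so the
    detuning parameter is simply alpha = p * lambda0 / lambda."""
    alpha = p * lambda0_um / wavelength_um
    # np.sinc(x) = sin(pi x) / (pi x), so this is the usual sinc^2 law.
    return np.sinc(alpha - m) ** 2
```

For p = 1 this reduces to the ordinary single-order diffractive, while large p concentrates efficiency near the refractive (Fresnel-lens) limit, which is one way of seeing the interpolation between the diffractive and refractive regimes.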
I discuss how light propagation, in its dual wave and ray aspects, can be implemented and its origin within a Feynman path integral approach. This can be done for both scalar fields and the full vectorial field descriptions of classical electromagnetism as applied to imaging problems. A key part of this scheme is in generalising the standard optical path length integral from a scalar to a matrix quantity. Reparametrisation invariance along rays allows a covariant formulation where propagation can take place along a general curve. The current programme then gives a practical realisation of both gauge invariance and differential geometry concepts. As a specific example, a general gradient index (GRIN) rod fiber background is used to demonstrate the scheme. Calculations such as the evaluation of the Gouy phase, and parallel transport of states of polarisation, provide examples of the applicability of the scheme. As a particularly noteworthy example and application, I show how the current approach allows for the evaluation of observable effects in GRIN lens cascades where additionally there is a spatially varying birefringence. This is a prime candidate for a perturbative Feynman diagram evaluation since the birefringence is much smaller than the bulk refractive index.
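The GRIN rod background used in the abstract can be sketched at the paraxial level. For a parabolic index profile n(r) = n₀(1 − α²r²/2), rays obey x″ = −α²x and the propagation is captured by a simple ABCD matrix; the self-imaging pitch 2π/α is the standard result. The function and parameter names below are illustrative, not from the paper.

```python
import numpy as np

def grin_abcd(z, alpha):
    """Paraxial ABCD matrix for a parabolic GRIN rod with index profile
    n(r) = n0 * (1 - alpha**2 * r**2 / 2).  Rays satisfy x'' = -alpha**2 x,
    so position and slope evolve sinusoidally with spatial frequency alpha."""
    c, s = np.cos(alpha * z), np.sin(alpha * z)
    return np.array([[c,           s / alpha],
                     [-alpha * s,  c]])
```

After one full pitch z = 2π/α the matrix is the identity, i.e. every ray self-images; the determinant is 1 for all z, reflecting the symplectic (phase-space area preserving) nature of the propagation.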
Hybrid diffractive lenses are an enabling technology that allows the shaping and control of wavefronts by precisely controlled zone structures. In particular, they are extremely useful in the medium and long wave infrared spectral regions for performing colour correction, where the lens element count can be reduced (replacing two heavy and expensive infrared materials with a single element). Yet these surface structures are often modelled in a way that treats the surface purely as an attached phase function and not an actual physical structure. This makes some results dubious and provides a substantial difficulty in assessing and specifying tolerances. In the current presentation, we move to a more physical model based on the ideas of zone decomposition.
When vision is provided through thermal-imaging systems, the field of view is reduced; effectively, the soldier must operate with severe tunnel vision, so there is a requirement for a system that provides automated warning and immersive imaging. We present a computational multi-aperture thermal infrared (MA-TIR) imaging system with single-photon range imaging to provide enhanced video-rate detection of obscured biological signatures in clutter. Our multi-camera computational imaging system creates a 360° panoramic image, and we employ synthetic baseline integral imaging (SBII) for the construction of three-dimensional thermal scenes, including detection of occluded objects. We further fuse thermal imaging with covert time-correlated single-photon counting (TCSPC) LIDAR to provide the complementary capability of video-rate ranging with the ability to detect and classify targets through clutter, particularly based on movement signatures. Finally, we demonstrate the ability to discriminate between biological scene components and static clutter based on temporal modulations of picosecond-resolution TCSPC returns.
Phase space provides the natural formalism with which to formulate optical imaging problems as a system with constraints. We consider the general formulation of optical imaging problems and look at two examples. The first example is a completely asymmetric freeform prism that has tilted surfaces. The second example is a gradient index medium that is exactly solvable. This gives an alternative formalism to standard Seidel aberrations and nodal aberration theory, which can be used in the design process.
We describe how the use of multiple-camera imaging systems provides an interesting alternative imaging modality to conventional single-aperture imaging, but with a different challenge: to computationally integrate diverse images while demonstrating an overall system benefit. We report the use of super-resolution with arrays of nominally identical longwave infrared cameras to yield high-resolution imaging with reduced track length, while various architectures enable foveal imaging, 4π and 3D imaging through the exploitation of integral imaging techniques. Strikingly, spectral imaging with a camera array can uniquely combine video-rate operation, high performance and low cost.
We consider using phase space techniques and methods in analysing optical ray propagation in head mounted display systems. Two examples illustrate the concepts and methods: firstly, a shark-tooth freeform geometry; secondly, a waveguide geometry that replicates a pupil in one dimension. Classical optics, and imaging in particular, provide a natural stage on which to employ phase space techniques, albeit as a constrained system. We consider how phase space provides a global picture of the physical ray trace data. As such, this gives a complete optical world history of all of the rays propagating through the system. Using this data one can look at, for example, how aberrations arise on a surface by surface basis. These can be extracted numerically from phase space diagrams in the example of a freeform imaging prism. For the waveguide geometry, phase space diagrams provide a way of illustrating how replicated pupils behave and what these imply for design considerations such as tolerances.
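The phase-space picture described above can be made concrete with the two elementary area-preserving maps of paraxial optics: free-space propagation is a horizontal shear of the (position, direction) plane, and a thin lens is a vertical shear. The function names and the simple bundle used here are illustrative assumptions, not the paper's examples.

```python
import numpy as np

# Phase-space coordinates: transverse position x and direction variable
# u = n * sin(theta).  Each ray is one point; a bundle is a cloud of points.

def propagate(rays, z, n=1.0):
    """Free-space propagation by z: horizontal shear (x, u) -> (x + z*u/n, u)."""
    x, u = rays
    return np.array([x + z * u / n, u])

def thin_lens(rays, power):
    """Thin lens of power P = 1/f: vertical shear (x, u) -> (x, u - P*x)."""
    x, u = rays
    return np.array([x, u - power * x])
```

A collimated bundle (u = 0) passed through a lens of power P and then propagated by z = 1/P arrives with all positions at zero, i.e. the whole bundle collapses to a vertical line in phase space at the focus. Both maps are symplectic, which is why phase-space area is conserved surface by surface through a system.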
We consider using the Alvarez lens concept to perform focal length change in conventional optical systems. The Alvarez pair is a good example of freeform surfaces that are used to imprint a deformation into the propagating wavefront. In addition, we develop the paraxial theory of each freeform component in building up to a composite lens system. An example dual field of view system in the medium wave infrared is presented. An inherent feature of the Alvarez pair is the axial symmetry breaking due to both the finite air gap between the cubic surfaces and the transverse movement of the pair. This has implications for the wavefront at the image plane. Having developed the first order theory, one can better understand misalignment tolerances and how these produce certain wavefront aberrations. Most notably, misalignments lead to simple expressions in terms of the Zernike polynomials.
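The core mechanism of the Alvarez pair can be verified symbolically: two elements carrying equal and opposite cubic profiles, sheared by ±d along x, imprint a net thickness difference that is purely quadratic in (x, y), i.e. a defocus whose strength is linear in the shear. This is a minimal sketch of that textbook calculation; the symbol names and the specific cubic form a(x³/3 + xy²) are the standard choice, assumed here rather than quoted from the paper.

```python
import sympy as sp

x, y, a, d = sp.symbols('x y a d', real=True)

# Standard Alvarez cubic sag; the two elements carry +sag and -sag.
sag = a * (x**3 / 3 + x * y**2)

# Shear the pair by +d and -d along x and take the net imprinted profile.
net = sp.expand(sag.subs(x, x + d) - sag.subs(x, x - d))

# net = 2*a*d*(x**2 + y**2) + 2*a*d**3/3 : pure defocus plus a piston term.
```

The quadratic term 2ad(x² + y²) is the variable focal power, proportional to the lateral displacement d, while the d³ term is a constant piston with no imaging effect; with a finite air gap or misaligned shears, residual odd terms survive, which is the symmetry breaking discussed in the abstract.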