Hard X-ray fluorescence (XRF) microscopy offers unparalleled sensitivity for quantitative analysis of most trace elements in biological samples, such as Fe, Cu, and Zn, which play critical roles in many biological processes. With advanced nano-focusing optics, hard X-rays can now be focused to 30 nm or below, allowing trace elements to be probed within subcellular compartments. However, XRF imaging usually reveals little about ultrastructure, because the main constituents of biological materials, i.e. H, C, N, and O, have low fluorescence yields and little absorption contrast at multi-keV X-ray energies. An alternative technique for imaging ultrastructure is ptychography: one records far-field diffraction patterns from a coherently illuminated sample and then reconstructs the complex transmission function of the sample. In theory, the spatial resolution of ptychography is limited only by the wavelength. In this manuscript, we describe the implementation of ptychography at the Bionanoprobe (a recently developed hard XRF nanoprobe at the Advanced Photon Source) and demonstrate simultaneous ptychographic and XRF imaging of frozen-hydrated whole biological cells. This method allows trace elements to be located within the subcellular structures of biological samples with high spatial resolution. Additionally, both ptychographic and XRF imaging are compatible with tomographic approaches for 3D visualization.
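The abstract does not say which reconstruction algorithm the Bionanoprobe implementation uses; ePIE is a common solver for far-field ptychography of this kind, and the following is only a minimal single-position sketch of an ePIE-style object and probe update under that assumption, with all names hypothetical.

```python
import numpy as np

def epie_update(obj_patch, probe, measured_amps, alpha=1.0, beta=1.0):
    """One ePIE-style update at a single scan position (sketch).

    obj_patch     -- complex object transmission under the probe (2D array)
    probe         -- complex illumination function (2D array, same shape)
    measured_amps -- square root of the recorded far-field intensities
    """
    exit_wave = obj_patch * probe
    # Propagate to the detector plane, then replace the modulus with the data
    # while keeping the reconstructed phase (the Fourier magnitude constraint).
    far_field = np.fft.fft2(exit_wave)
    far_field = measured_amps * np.exp(1j * np.angle(far_field))
    revised = np.fft.ifft2(far_field)
    diff = revised - exit_wave
    # Gradient-like updates, each normalized by the other factor's peak power.
    new_obj = obj_patch + alpha * np.conj(probe) * diff / np.abs(probe).max() ** 2
    new_probe = probe + beta * np.conj(obj_patch) * diff / np.abs(obj_patch).max() ** 2
    return new_obj, new_probe
```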
X-ray fluorescence offers unparalleled sensitivity for imaging the nanoscale distribution of trace elements in micrometer-thick samples, while x-ray ptychography offers an approach to image weakly fluorescing lighter elements at a resolution beyond that of the x-ray lens used. These methods can be used in combination, and in continuous scan mode for rapid data acquisition when multiple-probe-mode reconstruction methods are used. We discuss here the opportunities and limitations of using the additional information provided by ptychography to improve x-ray fluorescence images in two ways: by using position-error-correction algorithms to correct for scan distortions in fluorescence scans, and by considering the signal-to-noise limits on previously demonstrated ptychographic probe deconvolution methods. This highlights the advantages of a combined approach.
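One standard way to realize the probe deconvolution discussed above is Wiener filtering of the fluorescence map with the probe intensity recovered by ptychography; the sketch below illustrates the idea, and why an assumed signal-to-noise ratio sets its practical limit. Function and parameter names are mine, not the authors'.

```python
import numpy as np

def wiener_deconvolve(xrf_map, probe_kernel, snr=100.0):
    """Wiener deconvolution of the beam profile from an XRF map (sketch).

    xrf_map      -- measured fluorescence counts on the scan grid (2D array)
    probe_kernel -- |probe|**2 from the ptychographic reconstruction,
                    resampled to the scan pixel size, padded to
                    xrf_map.shape, and centred on the (0, 0) corner
    snr          -- assumed signal-to-noise ratio; as the abstract notes,
                    noise sets the limit of how far this can be pushed
    """
    H = np.fft.fft2(probe_kernel / probe_kernel.sum())  # normalized transfer fn
    G = np.fft.fft2(xrf_map)
    # Wiener filter: damps frequencies where the probe transfers little signal.
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(G * W))
```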
Hybrid Reality Environments represent a new kind of visualization space that blurs the line between virtual environments and high-resolution tiled display walls. This paper outlines the design and implementation of the CAVE2™ Hybrid Reality Environment. CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it enables users to simultaneously view both 2D and 3D information, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axis-optimized passive stereo LCD panels, creating an approximately 320-degree panoramic environment for displaying information at 37 Megapixels (in stereoscopic 3D) or 74 Megapixels in 2D, with a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so that the images in the top and bottom rows of LCDs are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In the 2D mode, the room can operate like a traditional tiled display wall, enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be supported simultaneously. The ability to treat immersive work spaces in this hybrid way has never been achieved before, and it leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE), a system for supporting 2D tiled displays, with Omegalib, a virtual reality middleware supporting OpenGL, OpenSceneGraph, and VTK applications.
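The quoted 20/20 horizontal acuity can be sanity-checked against the abstract's own figures. 20/20 vision corresponds to resolving roughly one arcminute; the short computation below assumes, hypothetically, that the roughly 74 Megapixels come from 18 columns of 1366x768 panels (the panel resolution is not stated in the abstract).

```python
# Sanity check of the 20/20 acuity claim from the abstract's figures.
# Assumption (not stated in the abstract): 18 columns of 1366x768 panels.
columns, panel_width_px = 18, 1366
horizontal_pixels = columns * panel_width_px       # 24,588 pixels around the arc
arc_minutes = 320 * 60                             # the ~320-degree panorama
arcmin_per_pixel = arc_minutes / horizontal_pixels # ~0.78 arcmin per pixel
# 20/20 vision resolves about 1 arcminute, so ~0.78 arcmin/pixel seen from
# the center of the cylinder is consistent with the quoted 20/20 acuity.
print(f"{arcmin_per_pixel:.2f} arcmin per pixel")
```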
We present a novel immersive workstation environment that scientists can use for 3D data exploration and as their everyday
2D computer monitor. Our implementation is based on an autostereoscopic dynamic parallax barrier 2D/3D display, interactive
input devices, and a software infrastructure that allows client/server software modules to couple the workstation to
scientists' visualization applications. This paper describes the hardware construction and calibration, software components,
and a demonstration of our system in nanoscale materials science exploration.
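The abstract does not specify the client/server protocol, so the following is only a minimal sketch of how an input-device module might couple the workstation to a visualization application: a length-prefixed JSON message carrying the tracked head pose. All names and the wire format are hypothetical.

```python
import json
import socket  # used in the commented example below
import struct

def send_head_pose(sock, position, orientation):
    """Stream one tracked head pose to the render server (hypothetical format).

    position    -- (x, y, z) in meters
    orientation -- quaternion (w, x, y, z)
    """
    msg = json.dumps({"pos": position, "quat": orientation}).encode()
    # Length-prefixed framing so the server can split the byte stream.
    sock.sendall(struct.pack("!I", len(msg)) + msg)

# Example coupling of a tracker loop to a (hypothetical) visualization server:
# sock = socket.create_connection(("render-server.example", 7000))
# send_head_pose(sock, (0.0, 1.6, 0.5), (1.0, 0.0, 0.0, 0.0))
```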
Modern computational science poses two challenges for scientific visualization: managing the size of resulting
datasets and extracting maximum knowledge from them. While our team attacks the first problem by implementing
parallel visualization algorithms on supercomputing architectures at vast scale, we are experimenting
with autostereoscopic display technology to aid scientists in the second challenge. We are building a visualization
framework connecting parallel visualization algorithms running on one of the world's most powerful supercomputers
with high-quality autostereo display systems. This paper is a case study of the development of an end-to-end
solution that couples scalable volume rendering on thousands of supercomputer cores to the scientists' interaction
with autostereo volume rendering at their desktops and larger display spaces. We discuss modifications to our
volume rendering algorithm to produce perspective stereo images, their transport from supercomputer to display
system, and the scientists' 3D interactions. A lightweight display client software architecture supports a variety
of monoscopic and autostereoscopic display technologies through a flexible configuration framework. This case
study provides a foundation that future research can build upon in order to examine how autostereo immersion
in scientific data can improve understanding and perhaps enable new discoveries.
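The stereo modification mentioned above is conventionally done with off-axis (asymmetric) per-eye frusta computed from the tracked eye position and the physical screen corners. The sketch below follows the standard generalized-perspective-projection construction rather than the authors' actual code; all parameter names are assumptions.

```python
import numpy as np

def off_axis_frustum(eye, pa, pb, pc, near):
    """Asymmetric per-eye frustum for a planar screen (sketch).

    eye        -- tracked eye position in world space (3-vector)
    pa, pb, pc -- screen lower-left, lower-right, upper-left corners
    Returns (left, right, bottom, top) extents at the near plane, the
    arguments a glFrustum-style projection expects.
    """
    vr = (pb - pa) / np.linalg.norm(pb - pa)    # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)    # screen up axis
    vn = np.cross(vr, vu)                       # screen normal, toward viewer
    va, vb, vc = pa - eye, pb - eye, pc - eye   # eye-to-corner vectors
    d = -np.dot(va, vn)                         # eye-to-screen distance
    s = near / d
    return (np.dot(vr, va) * s, np.dot(vr, vb) * s,
            np.dot(vu, va) * s, np.dot(vu, vc) * s)

# A stereo pair comes from calling this twice, with the eye shifted by
# +/- half the interpupillary distance along the viewer's right vector.
```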
Autostereoscopy (AS) is an increasingly valuable virtual reality (VR) display technology; indeed, the IS&T/SPIE
Electronic Imaging Conference has seen rapid growth in the number and scope of AS papers in recent years. The first
Varrier paper appeared at SPIE in 2001, and much has changed since then. What began as a single-panel prototype has
grown to a full scale VR autostereo display system, with a variety of form factors, features, and options. Varrier is a
barrier strip AS display system that qualifies as a true VR display, offering a head-tracked ortho-stereo first-person
interactive VR experience without the need for glasses or other gear to be worn by the user.
Since Varrier's inception, new algorithmic and systemic developments have produced performance and quality
improvements. Visual acuity has increased by a factor of 1.4 with new fine-resolution barrier strip linescreens and
computational algorithms that support variable sub-pixel resolutions. Performance has improved by a factor of 3 using
a new GPU shader-based sub-pixel algorithm that accomplishes in one pass what previously required three passes. The
Varrier modulation algorithm that began as a computationally expensive task is now no more costly than conventional
stereoscopic rendering. Interactive rendering rates of 60 Hz are now possible in Varrier for complex scene geometry on
the order of 100K vertices, and performance is GPU bound, hence it is expected to continue improving with graphics
card enhancements.
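The single-pass modulation can be pictured as choosing, for each R, G, or B sub-pixel column, whichever eye's image the linescreen makes visible. The sketch below is a heavily simplified CPU version of that interleave; the real Varrier algorithm runs in a GPU fragment shader and derives the phase from the tracked eye positions and the tilted linescreen geometry, which this sketch replaces with a constant.

```python
import numpy as np

def interleave_subpixels(left_img, right_img, period=6.0, phase=0.0):
    """Barrier-strip interleave of a stereo pair at sub-pixel granularity.

    left_img, right_img -- (H, W, 3) image arrays
    period -- linescreen period measured in sub-pixel columns
    phase  -- offset the real system derives from the tracked eye positions;
              a constant here, which is the big simplification
    """
    w = left_img.shape[1]
    # Global index of every sub-pixel column: 3 sub-pixels (R, G, B) per pixel.
    sub = np.arange(w * 3, dtype=float)
    # First half of each barrier period shows the left eye, second half the right.
    use_left = ((sub + phase) % period) < (period / 2)
    use_left = use_left.reshape(w, 3)              # (pixel column, channel)
    return np.where(use_left[None, :, :], left_img, right_img)
```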
Head tracking is accomplished with a neural network camera-based tracking system developed at EVL for Varrier.
Multiple cameras capture subjects at 120 Hz and the neural network recognizes known faces from a database and tracks
them in 3D space. New faces are trained and added to the database in a matter of minutes, and accuracy is
comparable to that of commercially available tracking systems.
Varrier supports a variety of VR applications, including visualization of polygonal, ray traced, and volume rendered
data. Both AS movie playback of pre-rendered stereo frames and interactive manipulation of 3D models are supported.
Local as well as distributed computation is employed in various applications. Long-distance collaboration has been
demonstrated with AS teleconferencing in Varrier. A variety of application domains such as art, medicine, and science
have been exhibited, and Varrier exists in a variety of form factors from large tiled installations to smaller desktop
forms to fit a variety of space and budget constraints.
Newest developments include the use of a dynamic parallax barrier that affords features that were inconceivable with a
static barrier.