GPU Ray-tracing using Irregular Grids. Computer Graphics Forum
Abstract: We present a spatial index structure to accelerate ray tracing on GPUs. It is a flat, non-hierarchical spatial subdivision of the scene into axis-aligned cells of varying size. To construct it, we first nest an octree into each cell of a uniform grid. We then apply two optimization passes to increase ray traversal performance: First, we reduce the expected cost for ray traversal by merging cells. This adapts the structure to complex primitive distributions, solving the "teapot in a stadium" problem. Second, we decouple the cell boundaries used during traversal for rays entering and exiting a given cell. This allows us to extend the exiting boundaries over adjacent cells that are either empty or do not contain additional primitives. Exiting rays can thus skip empty space and avoid repeated intersection tests. Finally, we demonstrate that in addition to fast ray traversal, the structure can be rebuilt efficiently in parallel, allowing for ray tracing of dynamic scenes.
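The flat-grid traversal that the paper accelerates builds on the classic uniform-grid 3D DDA (Amanatides-Woo style) stepping. As a point of reference, a minimal sketch of that baseline traversal follows; it is not the paper's irregular structure, and the function name and parameters are illustrative:

```python
import math

def traverse_uniform_grid(origin, direction, grid_res, cell_size):
    """Enumerate the cells a ray visits in a uniform grid (3D DDA).

    This is the classic traversal that flat-grid ray tracers start
    from; the paper's irregular grid replaces the fixed cell size
    with merged cells of varying extent.
    """
    # Current cell index for each axis.
    cell = [int(origin[i] // cell_size) for i in range(3)]
    step, t_max, t_delta = [0] * 3, [math.inf] * 3, [math.inf] * 3
    for i in range(3):
        if direction[i] > 0:
            step[i] = 1
            t_max[i] = ((cell[i] + 1) * cell_size - origin[i]) / direction[i]
            t_delta[i] = cell_size / direction[i]
        elif direction[i] < 0:
            step[i] = -1
            t_max[i] = (cell[i] * cell_size - origin[i]) / direction[i]
            t_delta[i] = -cell_size / direction[i]
    visited = []
    while all(0 <= cell[i] < grid_res[i] for i in range(3)):
        visited.append(tuple(cell))
        # Advance along the axis whose next cell boundary is closest.
        axis = min(range(3), key=lambda i: t_max[i])
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return visited
```

The cell-merging optimization described above amounts to letting a single traversal step cover what would be many steps of this loop.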
Spherically symmetric volume elements as basis functions for image reconstructions in computed laminography. Journal of X-Ray Science and Technology, 1-14
Abstract: Spherically symmetric volume elements (blobs) were evaluated as basis functions for iterative tomographic reconstructions in computed laminography. We implemented an iterative algorithm for the computation of three-dimensional reconstructions from computed laminography projections based on the simultaneous algebraic reconstruction technique (SART). The discretization of the volume was realized by means of blobs based on generalized Kaiser-Bessel window functions. We found that the band-limiting properties of blob functions are beneficial compared to a voxel basis, particularly in the case of noisy projections and when only a limited number of projections is available. In this case, using blob basis functions leads to sharper 3D datasets with fewer artifacts, which improves the capability to detect small features in images, such as defects. The increased computational demand per iteration of the algorithm is compensated for by a faster convergence rate when using blobs, such that the overall performance of the tomographic reconstruction is approximately identical for blob and voxel basis functions. We conclude that despite the higher complexity, tomographic reconstruction from computed laminography data should be implemented using blob basis functions, especially if noisy data is expected.
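The generalized Kaiser-Bessel window that defines a blob has a simple closed form. As a point of reference, a sketch of the order-0 profile; the default parameter values `a` and `alpha` are illustrative, not those used in the study:

```python
import numpy as np

def kb_blob(r, a=2.0, alpha=10.4):
    """Order-0 generalized Kaiser-Bessel blob profile.

    b(r) = I0(alpha * sqrt(1 - (r/a)^2)) / I0(alpha) for |r| <= a,
    and 0 otherwise. The smooth, band-limited falloff is what makes
    blobs less noise-prone than sharp-edged voxels.
    """
    r = np.asarray(r, dtype=float)
    inside = np.abs(r) <= a
    s = np.sqrt(np.clip(1.0 - (r / a) ** 2, 0.0, None))
    # np.i0 is the modified Bessel function of the first kind, order 0
    return np.where(inside, np.i0(alpha * s) / np.i0(alpha), 0.0)
```

The profile peaks at 1 in the center and decays smoothly to a small value at the support radius `a`, unlike a voxel's hard cutoff.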
Perception-driven Accelerated Rendering. Computer Graphics Forum (Proceedings of Eurographics), 36
Abstract: Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc., especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
Modelling and characterization of ductile fracture surface in Al-Si alloys by means of Voronoi tessellation. Materials Characterization
Abstract: In this study, a new approach to model the system of dimples on the fracture surface of Al-Si alloys using the weighted Voronoi tessellation is proposed. The tessellation model is applied to metallographic images of the eutectic phase to simulate a fracture surface appearance (as projected on a fractograph) that would potentially exhibit this structure if it had been fractured under uniaxial tensile loading. It enables the determination of geometrical features of virtual fracture surface projections, such as the dimple density, the area and equivalent diameter distributions, and topographic features, such as the dimple depth, the surface area and the roughness, by means of geometrical approximations, empirical and analytical relations. A brief review of the fractographic observations on different Al-Si alloys is made to demonstrate preconditions and motivation for using mosaic methods and in particular their weighted version. The simulation results are confirmed by experimental measurements indicating the credibility and usefulness of the model. The routine for generating the weighted Voronoi diagram is implemented as a Java plugin for the Fiji interface and is easy to execute.
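As an illustration of the underlying geometric tool, a minimal sketch of rasterizing a weighted Voronoi (power) diagram on a pixel grid; the seeds, weights, and the paper's specific weighting scheme are not reproduced here:

```python
import numpy as np

def power_diagram_labels(shape, seeds, weights):
    """Rasterize a weighted Voronoi (power) diagram on a pixel grid.

    Each pixel is assigned to the seed minimizing the power distance
    |x - s|^2 - w_s. In the dimple model, seeds could correspond to
    eutectic-particle centroids and weights to their sizes; this only
    illustrates the idea, not the paper's exact weighting.
    """
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    # One (H, W) power-distance map per seed, stacked along axis 0.
    d = np.stack([(ys - sy) ** 2 + (xs - sx) ** 2 - w
                  for (sy, sx), w in zip(seeds, weights)])
    return np.argmin(d, axis=0)   # (H, W) array of seed indices
```

Increasing a seed's weight enlarges its cell, which is how the weighted version can reflect dimple-size variation that an unweighted Voronoi diagram cannot.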
Advanced recording schemes for electron tomography. MRS Bulletin, 41(7):537-541
Abstract: Three-dimensional (3D) scanning transmission electron microscopy (STEM) has become one of the primary tools for analytical characterization in materials science, and also finds increasing use in the life sciences. A number of different recording schemes exist for the acquisition of 3D data using STEM, each capturing different spatial frequencies and, thus, different information about the shape of a specimen. In this article, we present and compare different sampling approaches based on images with both large and small depth of field. We highlight the latest contribution to 3D data acquisition, the combined tilt and focal series. This recording scheme combines the advantages of tilt series-based tomography with 3D data acquisition using a focal series and is particularly beneficial for imaging specimens with a thickness of 1 µm or greater.
Feature Adaptive Sampling for Scanning Electron Microscopy. Scientific Reports, 6
Abstract: A new method for image acquisition in scanning electron microscopy (SEM) was introduced. The method used adaptively increased pixel-dwell times to improve the signal-to-noise ratio (SNR) in areas of high detail. In areas of low detail, the electron dose was reduced on a per-pixel basis, and a-posteriori image processing techniques were applied to remove the resulting noise. The technique was realized by scanning the sample twice. The first, quick scan used small pixel-dwell times to generate an initial, noisy image using a low electron dose. This image was analyzed automatically, and a software algorithm generated a sparse pattern of regions of the image that require additional sampling. A second scan generated a sparse image of only these regions, but using a greatly increased electron dose. By applying a selective low-pass filter and combining both datasets, a single image was generated. The resulting image exhibited a factor of ≈3 better SNR than an image acquired with uniform sampling on a Cartesian grid and the same total acquisition time. This result implies that the required electron dose (or acquisition time) for the adaptive scanning method is a factor of ten lower than for uniform scanning.
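The two-pass scheme can be sketched as follows; the gradient-based detail measure and the budget parameter are simplifying assumptions, stand-ins for the paper's actual analysis step:

```python
import numpy as np

def select_rescan_mask(quick_scan, budget_fraction=0.2):
    """Pick the pixels that get a second, high-dose scan.

    Detail is estimated from the local gradient magnitude of the
    noisy quick scan; the top `budget_fraction` of pixels are marked
    for re-scanning. This is a minimal stand-in for the paper's
    analysis step.
    """
    gy, gx = np.gradient(quick_scan.astype(float))
    detail = np.hypot(gy, gx)
    threshold = np.quantile(detail, 1.0 - budget_fraction)
    return detail >= threshold   # boolean mask of pixels to re-scan

def combine(quick_scan, rescan, mask):
    """Fuse the two passes: re-scanned pixels replace the (optionally
    denoised) quick-scan estimate."""
    out = quick_scan.astype(float).copy()
    out[mask] = rescan[mask]
    return out
```

On an image with a single sharp edge, the mask concentrates the second-pass dose along that edge while flat regions keep only the cheap first-pass samples.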
Combined Tilt- and Focal-Series Tomography for HAADF-STEM. Microscopy Today, 24(3):26-30
Abstract: Combined tilt- and focal-series (CTFS) is a new aid to tomography in the scanning transmission electron microscope (STEM). This software controls the recording of a tilt series in which an entire focal series is recorded at each specimen tilt. The approach is particularly useful for thick specimens, where the tilt range may be limited. Use of CTFS leads to a significant reduction of the missing wedge effect and a better representation of the 3D shapes of features in the specimen.
On geometric artifacts in cryo electron tomography. Ultramicroscopy, 163:48-61
Abstract: The single-tilt scheme is nowadays the prevalent acquisition geometry in electron tomography and subtomogram averaging experiments. Being an incomplete scheme that induces ill-posedness in the sense of the X-ray or Radon transform inverse problem, it introduces a number of artifacts that directly influence the quality of tomographic reconstructions. Though individually described by different authors before, a systematic study of these acquisition geometry-related artifacts, in one place and across a representative set of reconstruction methods, has not been, to our knowledge, performed before. Moreover, the effects of these artifacts on the reconstructed density are sometimes misinterpreted and attributed to the wrong cause, especially if their effects accumulate. In this work, we systematically study the major artifacts of single-tilt geometry known as the missing wedge (incomplete projection set problem), the missing information and the specimen-level interior problem (long-object problem). First, we illustratively describe, using a unified terminology, how and why these artifacts arise and when they can be avoided. Next, we describe the effects of these artifacts on the reconstructions across all major classes of reconstruction methods, including recently introduced methods like the Iterative Nonuniform fast Fourier transform based Reconstruction method (INFR) and the Progressive Stochastic Reconstruction Technique (PSRT). Finally, we draw conclusions and recommendations on numerous points, especially regarding the mutual influence of the geometric artifacts, the ability of different reconstruction methods to suppress them, and the implications for the interpretation of both electron tomography and subtomogram averaging experiments.
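To make the missing wedge concrete: for a tilt range of ±60° about a single axis, only spatial frequencies within 60° of the in-plane axis are measured, leaving an unmeasured double wedge around the beam direction. A minimal 2D sketch of the corresponding Fourier-space mask (x-z plane, tilt axis along y; illustrative, not taken from the paper):

```python
import numpy as np

def missing_wedge_mask(shape, max_tilt_deg=60.0):
    """Boolean Fourier-space mask for a single-tilt series (2D sketch).

    A tilt series over +/- max_tilt covers only those frequencies
    whose angle to the in-plane (kx) axis is within the tilt range;
    the remaining double wedge around the kz axis is unmeasured.
    """
    h, w = shape
    kz, kx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2].astype(float)
    angle = np.degrees(np.arctan2(np.abs(kz), np.abs(kx)))  # 0..90 degrees
    return angle <= max_tilt_deg   # True where the frequency is measured
```

The unmeasured wedge is what elongates reconstructed features along the beam direction, the hallmark artifact discussed above.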
Building Construction Sets by Tiling Grammar Simplification. Computer Graphics Forum, 35(2):13-25
Abstract: This paper poses the problem of fabricating physical construction sets from example geometry: A construction set provides a small number of different types of building blocks from which the example model as well as many similar variants can be reassembled. This process is formalized by tiling grammars. Our core contribution is an approach for simplifying tiling grammars such that we obtain physically manufacturable building blocks of controllable granularity while retaining variability, i.e., the ability to construct many different, related shapes. Simplification is performed by sequences of two types of elementary operations: non-local joint edge collapses in the tile graphs reduce the granularity of the decomposition, and approximate replacement operations reduce redundancy. We evaluate our method on abstract graph grammars in addition to computing several physical construction sets, which are manufactured using a commodity 3D printer.
An Analysis of Eye-Tracking Data in Foveated Ray Tracing. Proceedings of the 2016 Workshop on Eye Tracking and Visualization
Abstract: We present an analysis of eye-tracking data produced during a quality-focused user study of our own foveated ray tracing method. Generally, foveated rendering serves the purpose of adapting actual rendering methods to a user's gaze. This yields performance improvements that make otherwise too expensive methods such as ray tracing viable in fields like virtual reality (VR), where high rendering performance is important to achieve immersion, or in scientific and information visualization, where large amounts of data may hinder real-time rendering capabilities. We provide an overview of our rendering system itself as well as information about the data we collected during the user study, based on fixation tasks to be fulfilled during flights through virtual scenes displayed on a head-mounted display (HMD). We analyze the tracking data regarding its precision and take a closer look at the accuracy achieved by participants when focusing the fixation targets. This information is then put into context with the quality ratings given by the users, leading to a surprising relation between fixation accuracy and quality ratings.
The Ettention software package. Ultramicroscopy, 110-118
Keywords: electron tomography, tomographic reconstruction, software architecture, GPU, OpenCL, block iterative methods, high-performance computing
Abstract: We present a novel software package for the problem "reconstruction from projections" in electron microscopy. The Ettention framework consists of a set of modular building blocks for tomographic reconstruction algorithms. The well-known block-iterative reconstruction method based on the Kaczmarz algorithm is implemented using these building blocks, including adaptations specific to electron tomography. Ettention simultaneously features (1) a modular, object-oriented software design, (2) optimized access to high-performance computing (HPC) platforms such as graphics processing units (GPUs) or manycore architectures like Xeon Phi, and (3) accessibility to microscopy end-users via integration in the IMOD package and the eTomo user interface. We also provide developers with a clean and well-structured application programming interface (API) that allows for extending the software easily and thus makes it an ideal platform for algorithmic research while hiding most of the technical details of high-performance computing.
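The block-iterative Kaczmarz scheme at the core of the framework can be sketched as follows. This is a generic SART-style update on a dense system matrix, not Ettention's GPU implementation; the function name and defaults are illustrative:

```python
import numpy as np

def block_kaczmarz(A, b, n_blocks=2, n_iters=200, relax=1.0):
    """Block-iterative Kaczmarz update (the scheme behind SART).

    Rows of the system (projection rays) are processed one block at
    a time; within a block, corrections are applied simultaneously,
    normalized by row and column sums as in SART. Assumes a
    nonnegative system matrix, as in tomography.
    """
    m, n = A.shape
    x = np.zeros(n)
    blocks = np.array_split(np.arange(m), n_blocks)
    for _ in range(n_iters):
        for rows in blocks:
            Ab = A[rows]
            residual = b[rows] - Ab @ x
            row_sums = Ab.sum(axis=1)   # forward-projection weights
            col_sums = Ab.sum(axis=0)   # back-projection weights
            # Guard against division by zero on empty rows/columns.
            corr = Ab.T @ (residual / np.where(row_sums != 0, row_sums, 1.0))
            x += relax * corr / np.where(col_sums != 0, col_sums, 1.0)
    return x
```

Choosing one block per projection image recovers SART; one block containing all rows recovers SIRT. The block size is the main trade-off between convergence speed and parallelism, which is why a GPU framework exposes it.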
Marker Detection in Electron Tomography: A Comparative Study. Microscopy and Microanalysis, 21(6):1591-1601
Keywords: marker detection, particle detection, electron tomography, tomographic reconstruction
Abstract: We conducted a comparative study of three widely used algorithms for the detection of fiducial markers in electron microscopy images. The algorithms were applied to four datasets from different sources. To obtain comparable results, we introduced figures of merit and implemented all three algorithms in a unified code base to exclude software-specific differences. The application of the algorithms revealed that none of the three is superior to the others in all cases. This leads to the conclusion that the choice of a marker detection algorithm depends highly on the properties of the dataset to be analyzed, even within the narrow domain of electron tomography.
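The abstract does not name the three algorithms, so none is reproduced here. As background, a minimal sketch of one common family of fiducial detectors: template matching by normalized cross-correlation against a disk template, with greedy non-maximum suppression (all parameters illustrative):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def detect_markers(image, radius=2, threshold=0.8):
    """Detect disk-shaped markers by normalized cross-correlation.

    Correlates every image window with a zero-mean disk template and
    greedily reports local score maxima above `threshold` as marker
    centers (in image coordinates). Illustrative sketch, not one of
    the algorithms from the study.
    """
    size = 2 * radius + 1
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    template = (yy ** 2 + xx ** 2 <= radius ** 2).astype(float)
    template -= template.mean()
    windows = sliding_window_view(image.astype(float), (size, size))
    w = windows - windows.mean(axis=(2, 3), keepdims=True)
    # Normalized cross-correlation score for every window position.
    num = (w * template).sum(axis=(2, 3))
    den = np.sqrt((w ** 2).sum(axis=(2, 3)) * (template ** 2).sum()) + 1e-12
    score = num / den
    peaks, s = [], score.copy()
    while s.max() > threshold:
        i, j = np.unravel_index(np.argmax(s), s.shape)
        peaks.append((i + radius, j + radius))  # back to image coords
        # Suppress the neighborhood so each marker is reported once.
        s[max(0, i - size):i + size, max(0, j - size):j + size] = -1.0
    return peaks
```

Real gold fiducials appear dark on a bright background, so in practice the image would be inverted first; the dataset-dependence observed in the study shows why such a simple detector is not always the right choice.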