Dual Motor-Cognitive Virtual Reality Training Impacts Dual-Task Performance in Freezing of Gait. IEEE Journal of Biomedical and Health Informatics, PP(99):1-1
Keywords: Accuracy; Biomedical measurement; Diseases; Foot; Legged locomotion; Time measurement; Training
An automated workflow for the biomechanical simulation of a tibia with implant using computed tomography and the finite element method. Computers and Mathematics with Applications,
Abstract: In this study, a fully automated workflow is presented for the biomechanical simulation of bone-implant systems using the example of a fractured tibia. The workflow is based on routinely acquired tomographic data and consists of an automatic segmentation and material assignment, followed by a mesh generation step and, finally, a mechanical simulation using the finite element method (FEM). Because of the high computational costs of the FEM simulations, an adaptive mesh refinement scheme was developed that limits the highest resolution to materials that can take large amounts of mechanical stress. The scheme was analyzed and it was shown that it has no relevant impact on the simulation precision. Thus, a fully automatic, reliable and computationally feasible method to simulate mechanical properties of bone-implant systems was presented, which can be used for numerous applications, ranging from the design of patient-specific implants to surgery preparation and post-surgery implant verification.
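The adaptive refinement rule described above — spend the finest mesh resolution only on materials that can take large mechanical stress — can be sketched as a simple per-element decision. The thresholds and material examples below are illustrative assumptions, not the paper's actual pipeline:

```python
def refinement_level(youngs_modulus_gpa, max_level=3):
    """Map material stiffness to a mesh refinement level.

    Stiffer materials (implant metal, cortical bone) carry large
    mechanical stress, so they get the finest mesh; soft tissue is
    left coarse to keep the FEM simulation computationally feasible.
    Thresholds are illustrative only.
    """
    if youngs_modulus_gpa >= 100.0:   # e.g. a titanium implant
        return max_level
    if youngs_modulus_gpa >= 10.0:    # e.g. cortical bone
        return max_level - 1
    if youngs_modulus_gpa >= 0.1:     # e.g. trabecular bone
        return 1
    return 0                          # soft tissue / background

# One refinement decision per segmented material region.
levels = [refinement_level(e) for e in (110.0, 18.0, 0.5, 0.01)]
```

In a real pipeline this decision would be driven by the material assignment from the CT segmentation step rather than a hand-picked stiffness table.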
Progressive Stochastic Reconstruction Technique (PSRT) for Cryo Electron Tomography. Journal of Structural Biology, 189(3):195-206
Abstract: Cryo Electron Tomography (cryoET) plays an essential role in Structural Biology, as it is the only technique that allows the structure of large macromolecular complexes to be studied in situ, close to their native environment. The reconstruction methods currently in use, such as Weighted Back Projection (WBP) or the Simultaneous Iterative Reconstruction Technique (SIRT), deliver noisy and low-contrast reconstructions, which complicates the application of high-resolution protocols such as Subtomogram Averaging (SA). We propose the Progressive Stochastic Reconstruction Technique (PSRT) – a novel iterative approach to tomographic reconstruction in cryoET based on Monte Carlo random walks guided by a Metropolis–Hastings sampling strategy. We design a progressive reconstruction scheme to suit the conditions present in cryoET and apply it successfully to reconstructions of macromolecular complexes from both synthetic and experimental datasets. We show how to integrate PSRT into SA, where it provides an elegant solution to the region-of-interest problem and delivers high-contrast reconstructions that significantly improve template-based localization without any loss of high-resolution structural information. Furthermore, the locality of SA is exploited to design an importance sampling scheme which significantly speeds up the otherwise slow Monte Carlo approach. Finally, we design a new memory-efficient solution for the specimen-level interior problem of cryoET, removing all associated artifacts.
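PSRT itself is specific to cryoET, but the Metropolis–Hastings random-walk idea it builds on is generic. A minimal sketch, sampling a toy one-dimensional density rather than anything from the paper:

```python
import math
import random

def metropolis_hastings(log_density, x0, n_steps, step_size, rng):
    """Random walk guided by Metropolis-Hastings acceptance.

    Proposes symmetric Gaussian moves and accepts each proposal with
    probability min(1, p(x') / p(x)), compared in log space.
    """
    samples = []
    x = x0
    log_p = log_density(x)
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step_size)
        log_p_new = log_density(x_new)
        if math.log(rng.random()) < log_p_new - log_p:
            x, log_p = x_new, log_p_new   # accept the move
        samples.append(x)                 # rejected moves repeat x
    return samples

# Toy target: a standard normal, via its unnormalized log-density.
rng = random.Random(42)
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000, 1.0, rng)
mean = sum(samples) / len(samples)
```

Only the ratio of densities is needed, so the target never has to be normalized — one reason the approach suits reconstruction settings where the density is known only up to scale.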
Matched Backprojection Operator for Combined Scanning Transmission Electron Microscopy Tilt- and Focal Series. Microscopy and Microanalysis,
Abstract: Combined tilt- and focal series scanning transmission electron microscopy (STEM) is a recently developed method to obtain nanoscale three-dimensional (3D) information of thin specimens. In this study, we formulate the forward projection in this acquisition scheme as a linear operator and prove that it is a generalization of the Ray transform for parallel illumination. We analytically derive the corresponding backprojection operator as the adjoint of the forward projection. We further demonstrate that the matched backprojection operator drastically improves the convergence rate of iterative 3D reconstruction compared to the case where a backprojection based on heuristic weighting is used. In addition, we show that the 3D reconstruction is of better quality.
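The underlying principle — that the backprojection in an iterative scheme should be the exact adjoint (transpose) of the forward projection — can be illustrated with a tiny Landweber iteration on a made-up 2x2 forward operator, not the STEM operator from the paper:

```python
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def landweber(A, b, n_iters, step):
    """Iterative reconstruction x_{k+1} = x_k + step * A^T (b - A x_k).

    Using the exact adjoint A^T as the backprojection operator
    guarantees convergence for a small enough step size; a mismatched
    heuristic backprojection loses that guarantee.
    """
    At = transpose(A)
    x = [0.0] * len(A[0])
    for _ in range(n_iters):
        residual = [bi - yi for bi, yi in zip(b, matvec(A, x))]
        update = matvec(At, residual)
        x = [xi + step * ui for xi, ui in zip(x, update)]
    return x

# Toy forward "projection" with known ground truth x = (1, 2).
A = [[2.0, 1.0], [1.0, 3.0]]
b = matvec(A, [1.0, 2.0])          # synthetic measurements
x_rec = landweber(A, b, 500, 0.1)  # recovers (1, 2)
```

The paper's contribution is analytically deriving the true adjoint for the combined tilt- and focal-series operator, which plays the role of `At` above.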
Towards a Performance-portable Description of Geometric Multigrid Algorithms using a Domain-specific Language. Journal of Parallel and Distributed Computing (JPDC), 24(12):3191-3201
Keywords: multigrid; multiresolution; image pyramid; domain-specific language; stencil codes; code generation; GPU; CUDA; OpenCL
Abstract: High Performance Computing (HPC) systems are becoming increasingly heterogeneous. Different processor types can be found on a single node, including accelerators such as Graphics Processing Units (GPUs). To cope with the challenge of programming such complex systems, this work presents a domain-specific approach to automatically generate code tailored to different processor types. Instead of writing hand-tuned code for GPU accelerators, low-level CUDA and OpenCL code is generated from a high-level description of an algorithm specified in a Domain-Specific Language (DSL). The DSL is part of the Heterogeneous Image Processing Acceleration (HIPAcc) framework and was extended in this work to handle grid hierarchies in order to model different cycle types. Language constructs are introduced to process and represent data at different resolutions. This makes it possible to describe image processing algorithms that work on image pyramids as well as multigrid methods in the stencil domain. By decoupling the algorithm from its schedule, the proposed approach can generate efficient stencil code implementations. Our results show that performance similar to hand-tuned code can be achieved.
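The grid hierarchies such a DSL must model correspond to classic image pyramids. A minimal plain-Python sketch of building one by 2x2 averaging — a stand-in for what the generated code would compute, not HIPAcc's actual API:

```python
def downsample(img):
    """Halve the resolution by averaging each 2x2 block."""
    h, w = len(img), len(img[0])
    return [
        [
            (img[2*y][2*x] + img[2*y][2*x+1]
             + img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
            for x in range(w // 2)
        ]
        for y in range(h // 2)
    ]

def build_pyramid(img, levels):
    """Return the image together with `levels - 1` coarser versions."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    return pyramid

# A 4x4 test image reduces to 2x2 and then to 1x1.
base = [[float(4 * y + x) for x in range(4)] for y in range(4)]
pyramid = build_pyramid(base, 3)
```

In a multigrid setting the same hierarchy carries residuals and corrections between levels, which is why one language construct can cover both the image-pyramid and the stencil/multigrid use cases.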
shade.js: Adaptive Material Descriptions. Computer Graphics Forum, 33(7):51-60
Code Refinement of Stencil Codes. Parallel Processing Letters (PPL), 24(3):1-16
Keywords: stencil codes; partial evaluation; domain-specific language
Abstract: A straightforward implementation of an algorithm in a general-purpose programming language usually does not deliver peak performance: compilers often fail to automatically tune the code for hardware peculiarities such as the memory hierarchy or vector execution units. Manually tuning the code is both error-prone and time-consuming, and it taints the code by exposing those peculiarities to the implementation. A popular way to circumvent these problems is to implement the algorithm in a Domain-Specific Language (DSL); a DSL compiler can then automatically tune the code for the target platform. In this paper we show how to embed a DSL for stencil codes in another language. In contrast to prior approaches, we use only a single language for this task. Furthermore, we offer explicit control over code refinement in the language itself, which is used to specialize stencils for particular scenarios. Our first results show that our specialized programs achieve performance competitive with hand-tuned CUDA programs.
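The idea of specializing a generic stencil for a particular scenario can be mimicked in plain Python with closures — a toy stand-in for the partial evaluation described above, not the embedded DSL from the paper:

```python
def make_stencil(weights):
    """Specialize a 1D 3-point stencil for fixed weights.

    Binding the coefficients up front plays the role of partial
    evaluation: the returned function is the generic stencil refined
    for one particular scenario, so per-point weight lookups vanish.
    """
    wl, wc, wr = weights
    def apply(grid):
        # Interior points only; boundary values are kept unchanged.
        out = list(grid)
        for i in range(1, len(grid) - 1):
            out[i] = wl * grid[i-1] + wc * grid[i] + wr * grid[i+1]
        return out
    return apply

# A 1D Jacobi smoothing step is one specialization of the stencil.
jacobi = make_stencil((0.5, 0.0, 0.5))
result = jacobi([1.0, 5.0, 3.0, 7.0, 1.0])
```

A DSL compiler takes the same idea much further, folding the fixed coefficients into generated GPU kernels instead of a Python closure.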
Progressive Light Transport Simulation on the GPU: Survey and Improvements. ACM Trans. Graph., 33(3):29:1-29:19
Keywords: GPU; Global illumination; bidirectional path tracing; high performance; vertex connection and merging
Abstract: Graphics Processing Units (GPUs) recently became general enough to enable implementation of a variety of light transport algorithms. However, the efficiency of these GPU implementations has received relatively little attention in the research literature and no systematic study on the topic exists to date. The goal of our work is to fill this gap. Our main contribution is a comprehensive and in-depth investigation of the efficiency of the GPU implementation of a number of classic as well as more recent progressive light transport simulation algorithms. We present several improvements over the state-of-the-art. In particular, our Light Vertex Cache, a new approach to mapping connections of sub-path vertices in Bidirectional Path Tracing on the GPU, outperforms the existing implementations by 30-60%. We also describe a first GPU implementation of the recently introduced Vertex Connection and Merging algorithm [Georgiev et al. 2012], showing that even relatively complex light transport algorithms can be efficiently mapped on the GPU. With the implementation of many of the state-of-the-art algorithms within a single system at our disposal, we present a unique direct comparison and analysis of their relative performance.
A Collaborative Virtual Workspace for Factory Configuration and Evaluation. Collaborative Computing,
Combined Scanning Transmission Electron Microscopy Tilt- and Focal Series. Microscopy and Microanalysis, :1-13
Keywords: STEM, tomography, 3D, focal series, whole cell, nanoparticle, SART, 3D reconstruction, back projection
Abstract: In this study, a combined tilt- and focal series is proposed as a new recording scheme for high-angle annular dark-field scanning transmission electron microscopy (STEM) tomography. Three-dimensional (3D) data were acquired by mechanically tilting the specimen, and recording a through-focal series at each tilt direction. The sample was a whole-mount macrophage cell with embedded gold nanoparticles. The tilt–focal algebraic reconstruction technique (TF-ART) is introduced as a new algorithm to reconstruct tomograms from such combined tilt- and focal series. The feasibility of TF-ART was demonstrated by 3D reconstruction of the experimental 3D data. The results were compared with a conventional STEM tilt series of a similar sample. The combined tilt- and focal series led to smaller “missing wedge” artifacts, and a higher axial resolution than obtained for the STEM tilt series, thus improving on one of the main issues of tilt series-based electron tomography.
An Open Modular Middleware for Interoperable Virtual Environments. Proceedings of the 12th IEEE International Conference on Cyberworlds,
Grand Challenges: Material Models in Automotive. Workshop on Material Appearance Modeling (2013), Eds. H. Rushmeier and R. Klein,