Project description: Quantitative assessment of retinal microvasculature in optical coherence tomography angiography (OCTA) images is important for studying, diagnosing, monitoring, and guiding the treatment of ocular and systemic diseases. However, the OCTA user community lacks universal and transparent image analysis tools that can be applied to images from a range of OCTA instruments and provide reliable and consistent microvascular metrics from diverse datasets. We present a retinal extension to the OCTA Vascular Analyser (OCTAVA) that addresses the challenges of providing robust, easy-to-use, and transparent analysis of retinal OCTA images. OCTAVA is a user-friendly, open-source toolbox that can analyse retinal OCTA images from various instruments. The toolbox delivers seven microvascular metrics for the whole image or subregions and six metrics characterising the foveal avascular zone. We validate OCTAVA using images collected by four commercial OCTA instruments, demonstrating robust performance across datasets acquired by different instruments at different sites from different study cohorts. We show that OCTAVA delivers values for retinal microvascular metrics comparable to the literature and reduces their variation between studies compared to their commercial equivalents. By making OCTAVA publicly available, we aim to expand standardised analysis and thereby improve the reproducibility of quantitative retinal microvascular imaging research. Such improvements will help to identify more reliable and sensitive biomarkers of ocular and systemic diseases.
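To make the metric definitions concrete, here is a minimal Python sketch (illustrative only, with hypothetical names; this is not OCTAVA's code) of two widely used OCTA measures, vessel area density and vessel skeleton density, computed from a binarized vessel mask using scikit-image:

```python
# Hypothetical sketch (not OCTAVA's actual code): two common OCTA metrics
# computed from a binarized vessel mask, assuming scikit-image is installed.
import numpy as np
from skimage.morphology import skeletonize

def vessel_metrics(vessel_mask: np.ndarray, pixel_size_mm: float):
    """vessel_mask: 2D boolean array, True where a vessel was segmented."""
    mask = vessel_mask.astype(bool)
    # Vessel area density: fraction of the region occupied by vessel pixels.
    vad = mask.sum() / mask.size
    # Vessel skeleton density: centerline length per unit area (1/mm).
    skeleton = skeletonize(mask)
    vsd = (skeleton.sum() * pixel_size_mm) / (mask.size * pixel_size_mm**2)
    return vad, vsd

# Example on a synthetic mask:
demo = np.zeros((64, 64), dtype=bool)
demo[32, 10:54] = True  # a single horizontal "vessel"
print(vessel_metrics(demo, pixel_size_mm=0.01))
```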
Project description: Environment-sensitive probes are frequently used in spectral and multi-channel microscopy to study alterations in cell homeostasis. However, the few open-source packages available for processing spectral images are limited in scope. Here, we present VISION, a stand-alone Python-based software for spectral analysis with improved applicability. In addition to classical intensity-based analysis, our software can batch-process multidimensional images with an advanced single-cell segmentation capability and apply user-defined mathematical operations on spectra to calculate biophysical and metabolic parameters of single cells. VISION allows for 3D and temporal mapping of properties such as membrane fluidity and mitochondrial potential. We demonstrate the broad applicability of VISION by applying it to study the effect of various drugs on cellular biophysical properties, the correlation between membrane fluidity and mitochondrial potential, protein distribution in cell-cell contacts, and properties of nanodomains in cell-derived vesicles. Together with the code, we provide a graphical user interface for easy adoption.
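As an illustration of the kind of user-defined spectral operation described above (a sketch with hypothetical names, not VISION's API), the Laurdan generalized polarization (GP), a standard ratiometric readout of membrane fluidity, can be computed per pixel and then averaged per segmented cell:

```python
# Illustrative sketch (hypothetical names, not VISION's API): a pixel-wise
# ratiometric operation of the kind described, here Laurdan generalized
# polarization (GP) as a proxy for membrane fluidity.
import numpy as np

def generalized_polarization(i_ordered: np.ndarray, i_disordered: np.ndarray,
                             eps: float = 1e-9) -> np.ndarray:
    """GP = (I_440 - I_490) / (I_440 + I_490), computed per pixel."""
    i_ordered = i_ordered.astype(float)
    i_disordered = i_disordered.astype(float)
    return (i_ordered - i_disordered) / (i_ordered + i_disordered + eps)

def mean_gp_per_cell(gp: np.ndarray, labels: np.ndarray) -> dict:
    """Average GP inside each segmented cell label (0 = background)."""
    return {lab: float(gp[labels == lab].mean())
            for lab in np.unique(labels) if lab != 0}
```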
Project description: Over the past few years, the field of visual social cognition and face processing has been dramatically impacted by a series of data-driven studies employing computer-graphics tools to synthesize arbitrary meaningful facial expressions. In the auditory modality, reverse correlation is traditionally used to characterize sensory processing at the level of spectral or spectro-temporal stimulus properties, but not the higher-level cognitive processing of, e.g., words, sentences, or music, for lack of tools able to manipulate the stimulus dimensions relevant to these processes. Here, we present an open-source audio-transformation toolbox, called CLEESE, able to systematically randomize the prosody/melody of existing speech and music recordings. CLEESE works by cutting recordings into small successive time segments (e.g., every successive 100 milliseconds in a spoken utterance) and applying a random parametric transformation to each segment's pitch, duration, or amplitude, using a new Python-language implementation of the phase-vocoder digital audio technique. We present two applications of the tool to generate stimuli for studying the intonation processing of interrogative vs. declarative speech and the rhythm processing of sung melodies.
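The segment-wise randomization can be sketched as follows (illustrative Python with hypothetical names, not CLEESE's actual API): one random pitch offset is drawn per 100 ms breakpoint, and the resulting breakpoint function would then be applied by a phase vocoder such as the one CLEESE implements:

```python
# A minimal sketch of the randomization step described above (not CLEESE's
# actual API): draw one random pitch offset per 100 ms breakpoint.
import numpy as np

def random_pitch_bpf(duration_s: float, seg_s: float = 0.1,
                     sigma_cents: float = 100.0, seed: int = 0):
    """Return (times, cents): a random breakpoint function for pitch."""
    rng = np.random.default_rng(seed)
    times = np.arange(0.0, duration_s, seg_s)
    cents = rng.normal(0.0, sigma_cents, size=times.size)
    return times, cents

times, cents = random_pitch_bpf(duration_s=2.0)
# Convert cents to multiplicative pitch-shift ratios for a phase vocoder:
ratios = 2.0 ** (cents / 1200.0)
```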
Project description: Super-resolved structured illumination microscopy (SR-SIM) is an important tool for fluorescence microscopy. SR-SIM microscopes perform multiple image acquisitions with varying illumination patterns and reconstruct them into a super-resolved image. In its most common, linear implementation, SR-SIM doubles the spatial resolution. The reconstruction is performed numerically on the acquired wide-field image data and thus relies on a software implementation of specific SR-SIM image reconstruction algorithms. We present fairSIM, an easy-to-use plugin that provides SR-SIM reconstructions for a wide range of SR-SIM platforms directly within ImageJ. For research groups developing their own implementations of super-resolution structured illumination microscopy, fairSIM removes the hurdle of writing yet another implementation of the reconstruction algorithm. For users of commercial microscopes, it offers an additional, in-depth analysis option for their data, independent of specific operating systems. As a modular, open-source solution, fairSIM can easily be adapted, automated, and extended as the field of SR-SIM progresses.
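For readers unfamiliar with the numerics, the band-separation step at the core of linear SR-SIM reconstruction can be sketched as follows (illustrative NumPy, not fairSIM's Java implementation; a unit modulation depth and ideal pattern phases are assumed):

```python
# Illustrative sketch of linear SR-SIM band separation (not fairSIM's code):
# three raw images taken at pattern phases 0, 2pi/3, 4pi/3 are unmixed into
# the m = -1, 0, +1 frequency bands by inverting a 3x3 mixing matrix.
import numpy as np

def separate_bands(raw_images, phases=(0.0, 2*np.pi/3, 4*np.pi/3)):
    """raw_images: list of three 2D arrays for one pattern orientation."""
    spectra = np.stack([np.fft.fft2(img) for img in raw_images])  # (3, H, W)
    orders = np.array([-1, 0, 1])
    # Mixing matrix: M[p, m] = exp(i * order_m * phase_p), modulation depth 1.
    M = np.exp(1j * np.outer(phases, orders))
    Minv = np.linalg.inv(M)
    # Unmix per frequency pixel: bands[m] = sum_p Minv[m, p] * spectra[p].
    bands = np.tensordot(Minv, spectra, axes=(1, 0))
    return bands  # bands[0], bands[1], bands[2] -> m = -1, 0, +1

# The separated bands would then be shifted to their true positions in
# frequency space and recombined with a Wiener filter.
```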
Project description: Translating deep learning research from theory into clinical practice poses unique challenges, particularly in the field of neuroimaging. In this paper, we present DeepNeuro, a Python-based deep learning framework that puts deep neural networks for neuroimaging into practical use with minimal implementation friction. We show how this framework can be used to design deep learning pipelines that load and preprocess data, design and train various neural network architectures, and evaluate and visualize the results of trained networks on evaluation data. We present a way of reproducibly packaging the data pre- and postprocessing functions common in the neuroimaging community, which facilitates consistent performance of networks across variable users, institutions, and scanners. We show how deep learning pipelines created with DeepNeuro can be concisely packaged into shareable Docker and Singularity containers with user-friendly command-line interfaces.
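The pipeline pattern described above can be sketched generically (hypothetical names, not DeepNeuro's actual API): composable pre- and postprocessing steps are packaged with the model so the same chain runs identically across users, institutions, and scanners:

```python
# A schematic sketch of the pipeline pattern described (hypothetical names,
# not DeepNeuro's actual API): composable steps wrapped around a model.
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Pipeline:
    preprocess: List[Callable[[np.ndarray], np.ndarray]]
    model: Callable[[np.ndarray], np.ndarray]
    postprocess: List[Callable[[np.ndarray], np.ndarray]]

    def run(self, volume: np.ndarray) -> np.ndarray:
        for step in self.preprocess:
            volume = step(volume)
        prediction = self.model(volume)
        for step in self.postprocess:
            prediction = step(prediction)
        return prediction

# Example steps: intensity normalization in, binary thresholding out.
zscore = lambda v: (v - v.mean()) / (v.std() + 1e-8)
threshold = lambda p: (p > 0.5).astype(np.uint8)
pipe = Pipeline(preprocess=[zscore], model=lambda v: v, postprocess=[threshold])
```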
Project description: Purpose: To introduce Shimming Toolbox (https://shimming-toolbox.org), an open-source software package for prototyping new methods and performing static, dynamic, and real-time B0 shimming as well as B1 shimming experiments. Methods: Shimming Toolbox features various field mapping techniques, manual and automatic masking for the brain and spinal cord, and B0 and B1 shimming capabilities, all accessible through a user-friendly graphical user interface. Validation of Shimming Toolbox was demonstrated in three scenarios: (i) B0 dynamic shimming in the brain at 7T using custom AC/DC coils, (ii) B0 real-time shimming in the spinal cord at 3T, and (iii) B1 static shimming in the spinal cord at 7T. Results: B0 dynamic shimming of the brain at 7T took about 10 min to perform. It showed a 47% reduction in the standard deviation of the B0 field, associated with noticeable improvements in geometric distortions in EPI images. Real-time dynamic xyz-shimming in the spinal cord took about 5 min and showed a 30% reduction in the standard deviation of the signal distribution. B1 static shimming experiments in the spinal cord took about 10 min to perform and showed a 40% reduction in the coefficient of variation of the B1 field. Conclusion: Shimming Toolbox provides an open-source platform where researchers can collaborate, prototype, and conveniently test B0 and B1 shimming experiments. Future versions will include additional field map preprocessing techniques, optimization algorithms, and compatibility across multiple MRI manufacturers.
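The core of a static B0 shimming optimization like those performed here can be sketched in a few lines (illustrative, not Shimming Toolbox's implementation): find the coil currents that cancel the measured field offset within a mask in the least-squares sense:

```python
# A minimal sketch of static B0 shimming (illustrative, not Shimming
# Toolbox's code): solve for coil currents that cancel the field offset.
import numpy as np

def shim_currents(fieldmap_hz, coil_profiles_hz_per_amp, mask):
    """fieldmap_hz: (X,Y,Z); coil_profiles: (X,Y,Z,n_coils); mask: bool."""
    b = fieldmap_hz[mask]                    # (n_voxels,) field offsets
    A = coil_profiles_hz_per_amp[mask]       # (n_voxels, n_coils) coil basis
    # Minimize ||b + A x||^2, i.e. x = argmin ||A x - (-b)||^2.
    x, *_ = np.linalg.lstsq(A, -b, rcond=None)
    residual = b + A @ x
    return x, residual.std() / b.std()       # currents, residual std ratio
```

In practice, per-channel current limits would turn this into a bounded optimization, which is one reason a dedicated toolbox is useful.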
Project description: Functional assessment of in vitro neuronal networks, of relevance for disease modelling and drug testing, can be performed using multi-electrode array (MEA) technology. However, the handling and processing of the large amounts of data typically generated in MEA experiments remains a major hurdle for researchers. Various software packages have been developed to tackle this issue, but to date, most are either not accessible through the links provided by the authors or tackle only parts of the analysis. Here, we present "MEA-ToolBox", a free open-source general MEA analytical toolbox that uses a variety of literature-based algorithms to process the data, detect spikes from raw recordings, and extract information at both the single-channel and array-wide network level. MEA-ToolBox extracts information about spike trains, burst-related analysis, and connectivity metrics without the need for manual intervention. MEA-ToolBox is tailored for comparing different sets of measurements and analyzes data from multiple recorded files placed in the same folder sequentially, considerably streamlining the analysis pipeline. MEA-ToolBox comes with a graphical user interface (GUI), eliminating the need for any coding expertise while offering functionality to inspect, explore, and post-process the data. As proof of concept, MEA-ToolBox was tested on earlier-published MEA recordings from neuronal networks derived from human induced pluripotent stem cells (hiPSCs) obtained from healthy subjects and patients with neurodevelopmental disorders. Neuronal networks derived from patients' hiPSCs showed a clear phenotype compared to those from healthy subjects, demonstrating that the toolbox can extract useful parameters and assess differences between normal and diseased profiles.
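As an example of the literature-based algorithms involved, threshold-based spike detection with a robust noise estimate (median absolute deviation scaled by 0.6745, as in Quiroga et al. 2004) can be sketched as follows (illustrative, not MEA-ToolBox's exact routine):

```python
# A minimal sketch of threshold-based spike detection (illustrative, not
# MEA-ToolBox's exact routine): crossings at k times a robust noise estimate.
import numpy as np

def detect_spikes(trace, fs_hz, k=5.0, dead_time_s=1e-3):
    """Return spike sample indices for one channel of a raw MEA recording."""
    sigma = np.median(np.abs(trace)) / 0.6745   # robust noise estimate
    crossings = np.flatnonzero(trace < -k * sigma)  # negative-going spikes
    # Enforce a dead time so one spike is not counted more than once.
    dead = int(dead_time_s * fs_hz)
    spikes, last = [], -dead - 1
    for idx in crossings:
        if idx - last > dead:
            spikes.append(idx)
            last = idx
    return np.asarray(spikes)
```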
Project description: Proton irradiation is a well-established method to treat deep-seated tumors in radiation oncology. Usually, an X-ray computed tomography (CT) scan is used for treatment planning. Since proton therapy relies on precise knowledge of the stopping power, which describes the energy loss of protons in the patient's tissues, the Hounsfield units of the planning CT have to be converted. This conversion introduces range errors in the treatment plan, which could be reduced if the stopping power values were extracted directly from an image obtained using protons instead of X-rays. Since protons are affected by multiple Coulomb scattering, reconstruction of the 3D stopping power map yields limited image quality if the curved proton path is not taken into account. This work presents a substantial code extension of the open-source toolbox TIGRE for proton CT (pCT) image reconstruction based on proton radiographs, including a curved proton path estimate. The code extension and the reconstruction algorithms are GPU-based, allowing reconstruction results to be achieved within minutes. The performance of the pCT code extension was tested with Monte Carlo simulated data using three phantoms (Catphan® high resolution and sensitometry modules and a CIRS patient phantom). In the simulations, ideal and non-ideal conditions for a pCT setup were assumed. The obtained mean absolute percentage error was below 1%, and up to 8 lp/cm could be resolved using an idealized setup. These findings demonstrate that the presented code extension to the TIGRE toolbox offers other research groups a fast and accurate open-source pCT reconstruction.
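One simple curved path estimate of the kind referred to above is a cubic Hermite spline joining each proton's measured entry and exit positions and directions; the following sketch is illustrative only and does not reproduce the GPU code of the TIGRE extension:

```python
# Illustrative curved proton path estimate (not the TIGRE extension's GPU
# code): a cubic Hermite spline between measured entry/exit states, a
# common stand-in for the most-likely path.
import numpy as np

def hermite_path(p_in, d_in, p_out, d_out, n=100):
    """p_*: entry/exit positions (3,); d_*: unit direction vectors (3,)."""
    p_in, d_in = np.asarray(p_in, float), np.asarray(d_in, float)
    p_out, d_out = np.asarray(p_out, float), np.asarray(d_out, float)
    length = np.linalg.norm(p_out - p_in)   # scale tangents to path length
    t = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p_in + h10*length*d_in + h01*p_out + h11*length*d_out
```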
Project description: Fluorescence calcium imaging using a range of microscopy approaches, such as two-photon excitation or head-mounted "miniscopes," is one of the preferred methods to record neuronal activity and glial signals in various experimental settings, including acute brain slices, brain organoids, and behaving animals. Because changes in the fluorescence intensity of genetically encoded or chemical calcium indicators correlate with action potential firing in neurons, data analysis is based on inferring such spiking from changes in pixel intensity values across time within different regions of interest. However, the algorithms necessary to extract biologically relevant information from these fluorescent signals are complex and require significant programming expertise to develop into robust analysis pipelines. For decades, the only way to perform these analyses was for individual laboratories to write their own custom code. These routines were typically not well annotated and lacked intuitive graphical user interfaces (GUIs), which made it difficult for scientists in other laboratories to adopt them. Although the landscape is changing with recent tools like CaImAn, Suite2P, and others, there is still a barrier for many laboratories to adopt these packages, especially for potential users without sophisticated programming skills. As two-photon microscopes become increasingly affordable, the bottleneck is no longer the hardware but the software used to analyze calcium data optimally and consistently across different groups. We addressed this unmet need by incorporating recent software solutions, namely NoRMCorre and CaImAn, for motion correction, segmentation, signal extraction, and deconvolution of calcium imaging data into an open-source, easy-to-use, GUI-based, intuitive, and automated data analysis software package, which we named EZcalcium.
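As an example of one step in such a pipeline, the ΔF/F0 normalization that typically follows motion correction and signal extraction can be sketched with a running low-percentile baseline (illustrative, not EZcalcium's exact code):

```python
# A minimal sketch of dF/F0 normalization (illustrative, not EZcalcium's
# exact code): a running low-percentile baseline approximates F0 despite
# slow drift in the raw fluorescence.
import numpy as np

def delta_f_over_f(trace, fs_hz, win_s=30.0, pct=8.0):
    """trace: raw fluorescence of one ROI; returns dF/F0 of equal length."""
    half = int(win_s * fs_hz / 2)
    f0 = np.array([np.percentile(trace[max(0, i - half):i + half + 1], pct)
                   for i in range(trace.size)])
    return (trace - f0) / (f0 + 1e-9)
```

Spike inference (deconvolution) would then be run on the normalized trace, which is the step handled by CaImAn in the package described above.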
Project description: Bidirectional microscopy (BDM) combines simultaneous targeted optical perturbation and imaging of biophysical or biochemical signals (e.g. membrane voltage, Ca2+, or signaling molecules). A core challenge in BDM is precise spatial and temporal alignment of stimulation, imaging, and other experimental parameters. Here we present Luminos, an open-source MATLAB library for modular and precisely synchronized control of BDM experiments. The system supports hardware-triggered synchronization across stimulation, recording, and imaging channels with microsecond accuracy. Source code and documentation for Luminos are available online at https://www.luminosmicroscopy.com and https://github.com/adamcohenlab/luminos-microscopy. This library will facilitate the development of bidirectional microscopy methods across the biological sciences.
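The benefit of hardware-triggered synchronization can be illustrated with a short sketch (hypothetical helper, not the Luminos API): trigger trains for the camera and the stimulus are generated on one shared DAQ sample clock, so their relative timing is fixed by the clock period rather than by software latency:

```python
# Illustrative sketch of clock-locked trigger generation (hypothetical
# helper, not the Luminos API): all channels share one DAQ sample clock.
import numpy as np

def trigger_train(fs_hz, duration_s, period_s, width_s, delay_s=0.0):
    """One digital channel: TTL pulses of width_s every period_s."""
    t = np.arange(int(duration_s * fs_hz)) / fs_hz
    phase = (t - delay_s) % period_s
    return ((t >= delay_s) & (phase < width_s)).astype(np.uint8)

fs = 1_000_000                                   # 1 MS/s sample clock
camera = trigger_train(fs, 1.0, period_s=0.01, width_s=1e-3)
stim   = trigger_train(fs, 1.0, period_s=0.01, width_s=2e-3, delay_s=2e-3)
# Writing both arrays to the DAQ in a single clocked task keeps the 2 ms
# stimulus offset exact to within one sample (1 us at this rate).
```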