Bolus tracking with nanofilter-based multispectral videography for capturing microvasculature hemodynamics.
ABSTRACT: Multispectral imaging is a highly desirable modality for material-based analysis in diverse areas such as food production and processing, satellite-based reconnaissance, and biomedical imaging. Here, we present nanofilter-based multispectral videography (nMSV) in the 700 to 950 nm range, made possible by the tunable extraordinary-optical-transmission properties of 3D metallic nanostructures. Measurements made with nMSV during a bolus injection of an intravascular tracer in the ear of a piglet yielded spectral videos of the microvasculature. Analysis of the multispectral videos generated contrast measurements representative of arterial pulsation and the distribution of microvascular transit times, and enabled a separation of the venous and arterial signals arising from within the tissue. Therefore, nMSV is capable of acquiring serial multispectral images relevant to tissue hemodynamics, which may have application to the detection and identification of skin cancer.
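At its simplest, the bolus-tracking analysis described above reduces to timing the contrast curve at each pixel. A minimal sketch of per-pixel time-to-peak estimation, using synthetic gamma-variate-like bolus curves (an illustrative assumption, not the authors' actual nMSV pipeline):

```python
import numpy as np

def time_to_peak(video, frame_times):
    """Return the per-pixel time (s) at which bolus contrast peaks.

    `video` is an intensity time series of shape (frames, H, W);
    `frame_times` gives the acquisition time of each frame.
    """
    peak_idx = np.argmax(video, axis=0)   # (H, W) frame index of max contrast
    return frame_times[peak_idx]          # map frame index -> acquisition time

# Synthetic example: two pixels, the second with a 2 s later bolus arrival.
t = np.linspace(0.0, 10.0, 101)                       # 101 frames over 10 s
early = (t / 2.0) ** 2 * np.exp(-t)                   # peaks at t = 2 s
shift = np.clip(t - 2.0, 0.0, None)
late = (shift / 2.0) ** 2 * np.exp(-shift)            # peaks at t = 4 s
video = np.stack([early, late], axis=-1)[:, None, :]  # shape (101, 1, 2)
ttp = time_to_peak(video, t)                          # ~ [[2.0, 4.0]]
```

Differences in time-to-peak between pixels are one simple surrogate for the transit-time distribution; arterial regions peak before venous ones.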
Project description: Persistent surveillance and reconnaissance through various sophisticated multispectral detectors pose threats to military equipment and personnel. However, a combination of detectors operating in different wavelength bands (from hundreds of nanometers to centimeters) and based on different principles poses challenges for conventional single-band camouflage devices. In this paper, multispectral camouflage is demonstrated for the visible, mid-infrared (MIR, 3-5 and 8-14 μm), laser (1.55 and 10.6 μm), and microwave (8-12 GHz) bands with simultaneously efficient radiative cooling in the non-atmospheric window (5-8 μm). The device for multispectral camouflage consists of a ZnS/Ge multilayer for wavelength-selective emission and a Cu-ITO-Cu metasurface for microwave absorption. In comparison with a conventional broadband low-emittance material (Cr), the IR camouflage performance of this device shows an 8.4/5.9 °C reduction of inner/surface temperature and a 53.4/13.0% IR signal decrease in the mid/long-wavelength IR bands at 2500 W·m⁻² input power density. Furthermore, we reveal that natural convection in the atmosphere can be enhanced by radiation in the non-atmospheric window, which increases the total cooling power from 136 W·m⁻² to 252 W·m⁻² at 150 °C surface temperature. This work may open opportunities for multispectral manipulation, infrared signal processing, thermal management, and energy-efficient applications.
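The significance of the 5-8 μm non-atmospheric window can be illustrated with a back-of-the-envelope Planck integration: the fraction of blackbody emission falling in that band at a 150 °C surface. This is a generic radiometric bound, not the device's measured figures (which also include convection and the multilayer's selective emittance):

```python
import numpy as np

# Physical constants (SI) and the 150 C surface temperature from the abstract.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23
T = 150.0 + 273.15  # K

# Planck hemispherical spectral emissive power over the 5-8 um band.
lam = np.linspace(5e-6, 8e-6, 2000)  # wavelength grid, m
spectral = 2 * np.pi * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))

# Trapezoidal integration over the band (written out for NumPy-version safety).
band_power = float(np.sum(0.5 * (spectral[1:] + spectral[:-1]) * np.diff(lam)))
total_power = 5.670e-8 * T**4  # Stefan-Boltzmann total emissive power, W/m^2
```

Roughly a quarter of the total blackbody emission at 150 °C falls in the 5-8 μm band, which is why routing emission into this window yields a meaningful cooling channel without compromising the camouflaged 3-5 and 8-14 μm bands.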
Project description: Many important scientific questions in physics, chemistry, and biology require effective methodologies to spectroscopically probe ultrafast intra- and inter-atomic/molecular dynamics. However, current methods that extend into the femtosecond regime are capable of only point measurements or single-snapshot visualizations and thus lack the capability to perform ultrafast spectroscopic videography of dynamic single events. Here we present a laser-probe-based method that enables two-dimensional videography at ultrafast timescales (femtosecond and shorter) of single, non-repetitive events. The method is based on superimposing a structural code onto the illumination to encrypt a single event, which is then deciphered in a post-processing step. This coding strategy enables laser probing with arbitrary wavelengths/bandwidths to collect signals with indiscriminate spectral information, thus allowing for ultrafast videography with full spectroscopic capability. To demonstrate the high temporal resolution of our method, we present videography of light propagation with record-high 200-femtosecond temporal resolution. The method is widely applicable for studying a multitude of dynamical processes in physics, chemistry, and biology over a wide range of time scales. Because the minimum frame separation (temporal resolution) is dictated by only the laser pulse duration, attosecond-laser technology may further increase video rates by several orders of magnitude.
Project description: SUMMARY: This note describes nTracer, an ImageJ plug-in for user-guided, semi-automated tracing of multispectral fluorescent tissue samples. This approach allows for rapid and accurate reconstruction of whole-cell morphology of large neuronal populations in densely labeled brains. AVAILABILITY AND IMPLEMENTATION: nTracer was written as a plug-in for the open-source image processing software ImageJ. The software, instructional documentation, tutorial videos, sample images, and sample tracing results are available at https://www.cai-lab.org/ntracer-tutorial. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
Project description: Intravital video microscopy permits the observation of microcirculatory blood flow. This often requires fluorescent probes to visualize structures and dynamic processes that cannot be observed with conventional bright-field microscopy. Conventional light microscopes do not allow for simultaneous bright-field and fluorescent imaging. Moreover, in conventional microscopes, only one type of fluorescent label can be observed. This study introduces multispectral intravital video microscopy, which combines bright-field and fluorescence microscopy in a standard light microscope. The technique enables simultaneous real-time observation of fluorescently labeled structures in relation to their direct physical surroundings. The advancement provides context for the orientation, movement, and function of labeled structures in the microcirculation.
Project description: Sleep is indispensable for human health, with sleep disorders initiating a cascade of negative consequences. As our closest phylogenetic relatives, non-human primates (NHPs) are invaluable for comparative sleep studies and exhibit tremendous potential for improving our understanding of human sleep and related disorders. Previous work on measuring sleep in NHPs has mostly used electroencephalography or videography. In this study, simultaneous videography and actigraphy were applied to observe sleep patterns in 10 cynomolgus monkeys (Macaca fascicularis) over seven nights (12 h per night). The durations of wake, transitional sleep, and relaxed sleep were scored by analysis of animal behaviors from videography and actigraphy data, using the same behavioral criteria for each state, and the findings were then compared. Results indicated that actigraphy constituted a reliable approach for scoring sleep state in monkeys and showed a significant correlation with scoring by videography. Epoch-by-epoch analysis further indicated that actigraphy was most suitable for scoring the state of relaxed sleep, correctly identifying 97.57% of relaxed sleep epochs in comparison with video analysis. Only 34 epochs (0.13%) and 611 epochs (2.30%) were interpreted differently from videography as wake and transitional sleep, respectively. The present study validated the behavioral criteria and actigraphy methodology for scoring sleep, which can be considered a useful complementary technique to electroencephalography and/or videography analysis for sleep studies in NHPs.
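The epoch-by-epoch comparison described above amounts to computing, for each sleep state, the fraction of video-scored epochs that actigraphy labels the same way. A minimal sketch with made-up example epochs (the state labels and data here are illustrative, not the study's):

```python
import numpy as np

# Hypothetical state codes for the three behavioral states in the study.
WAKE, TRANSITIONAL, RELAXED = 0, 1, 2

def epoch_agreement(video_scores, acti_scores, state):
    """Fraction of epochs video-scored as `state` that actigraphy agrees on."""
    video_scores = np.asarray(video_scores)
    acti_scores = np.asarray(acti_scores)
    mask = video_scores == state          # epochs the video scorer calls `state`
    if not mask.any():
        return float("nan")               # state never occurs in video scoring
    return float((acti_scores[mask] == state).mean())

# Toy example: six epochs scored by both methods.
video = [RELAXED, RELAXED, WAKE, TRANSITIONAL, RELAXED, RELAXED]
acti  = [RELAXED, RELAXED, WAKE, RELAXED,      RELAXED, TRANSITIONAL]
agreement = epoch_agreement(video, acti, RELAXED)  # 3 of 4 relaxed epochs agree
```

Applied per state, this yields figures directly comparable to the 97.57% relaxed-sleep agreement reported above.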
Project description: Background: Active sensing is crucial for navigation. It is characterized by self-generated motor action controlling the accessibility and processing of sensory information. In rodents, active sensing is commonly studied in the whisker system. Rats and mice modulate their whisking contextually, employing frequency and amplitude modulation. Understanding the development, mechanisms, and plasticity of adaptive motor control will require precise behavioral measurements of whisker position. Findings: Advances in high-speed videography and analytical methods now permit collection and systematic analysis of large datasets. Here, we provide 6,642 videos of freely moving juvenile (third to fourth postnatal week) and adult rodents exploring a stationary object on the gap-crossing task. The dataset includes sensory exploration with single or multiple whiskers in wild-type animals, serotonin transporter knockout rats, and rats that received pharmacological interventions targeting serotonergic signaling. The dataset includes varying background illumination conditions and signal-to-noise ratios (SNRs), ranging from homogeneous/high contrast to non-homogeneous/low contrast. A subset of videos has been whisker- and nose-tracked and is provided as a reference for image processing algorithms. Conclusions: The recorded behavioral data can be directly used to study the development of sensorimotor computation, top-down mechanisms that control sensory navigation and whisker position, and cross-species comparison of active sensing. It could also help to address contextual modulation of active sensing during touch-induced whisking in head-fixed vs freely behaving animals. Finally, it provides the necessary data for machine learning approaches to automated analysis of sensory and motion parameters across a wide variety of signal-to-noise ratios, with accompanying human-observer-determined ground truth.
Project description: This study evaluated the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create the integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker, and a computer) was used to generate a three-dimensional overlay that was projected on the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of this system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the patient's and surgical instrument's positions. Thus, integral videography images of jawbones, teeth, and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. Change in the viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient, without special glasses. The difference in three-dimensional position between each measuring point on the solid model and the augmented reality navigation was almost negligible (<1 mm), indicating that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site with the naked eye.
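Positional registration of the kind described above is commonly solved by estimating a rigid transform from corresponding fiducial points, e.g. with the Kabsch/SVD method. The abstract does not specify the system's algorithm, so the following is a generic point-based registration sketch with synthetic points:

```python
import numpy as np

def rigid_register(src, dst):
    """Find rotation R and translation t minimizing ||R @ p + t - q|| over
    corresponding points p in `src` (e.g. CT space) and q in `dst`
    (e.g. tracker space), via the Kabsch/SVD method."""
    src_c = src - src.mean(axis=0)                 # center both point clouds
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)      # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: rotate/translate some points, then recover the transform.
rng = np.random.default_rng(1)
src = rng.normal(size=(6, 3))                      # six fiducial points
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true
R, t = rigid_register(src, dst)
```

With the transform in hand, the overlay is produced by mapping every CT-space vertex through `R @ p + t` before display.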
Project description: Multispectral imaging is a powerful tool that extends the capabilities of the human eye. However, multispectral imaging systems are generally expensive and bulky, and multiple exposures are needed. Here, we report the demonstration of a compact multispectral imaging system that uses vertical silicon nanowires to realize a filter array. Multiple filter functions covering visible to near-infrared (NIR) wavelengths are simultaneously defined in a single lithography step using a single material (silicon). Nanowires are then etched and embedded into polydimethylsiloxane (PDMS), thereby realizing a device with eight filter functions. By attaching it to a monochrome silicon image sensor, we successfully realize an all-silicon multispectral imaging system. We demonstrate visible and NIR imaging. We show that the latter is highly sensitive to vegetation and furthermore enables imaging through objects opaque to the eye.
Project description: We present a two-dimensional (2D) snapshot multispectral imager that utilizes the optical transmission characteristics of nanohole arrays (NHAs) in a gold film to resolve a mixture of input colors into multiple spectral bands. The multispectral device consists of blocks of NHAs, wherein each NHA has a unique periodicity that results in transmission resonances and minima in the visible and near-infrared regions. The multispectral device was illuminated over a wide spectral range, and the transmission was spectrally unmixed using a least-squares estimation algorithm. A NHA-based multispectral imaging system was built and tested in both reflection and transmission modes. The NHA-based multispectral imager was capable of extracting 2D multispectral images representative of four independent bands within the spectral range of 662 nm to 832 nm for a variety of targets. The multispectral device can potentially be integrated into a variety of imaging sensor systems.
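The least-squares unmixing step above models each NHA block's measured transmission as a filter-response matrix applied to the unknown band intensities, then inverts that system. A minimal sketch with a made-up 4×4 response matrix (the real device's calibrated responses would be used in practice):

```python
import numpy as np

# Illustrative 4x4 filter-response matrix: rows are NHA filter blocks,
# columns are the four spectral bands. Values here are random stand-ins
# for the device's calibrated transmission spectra.
rng = np.random.default_rng(0)
response = rng.uniform(0.1, 1.0, size=(4, 4))

# Ground-truth band intensities for one pixel (unknown in practice).
true_bands = np.array([0.8, 0.1, 0.5, 0.3])

# Forward model: what the four filter blocks would measure.
measured = response @ true_bands

# Least-squares estimate of the band intensities from the measurements
# (non-negativity constraints, often used in practice, are omitted here).
estimated, *_ = np.linalg.lstsq(response, measured, rcond=None)
```

Run per pixel across the sensor, this recovers a 2D image for each spectral band, which is how the four-band images in the 662-832 nm range would be assembled.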