Spatial resolutions in optical microscopy


I have read that different optical imaging techniques such as wide-field microscopy, confocal microscopy or STED microscopy can theoretically achieve different spatial resolutions.

However, I was only able to find information about STED microscopy's spatial resolution (5.8 nm) on Wikipedia.

Does anybody know of any references or recommended reading (preferably free) where I can learn more about different optical microscopy methods and how they cope with the diffraction limit?


The theoretical resolution for an optical microscope depends on the wavelength used, but is close to 0.22 µm (220 nm). Confocal microscopy does not greatly improve in-plane (x and y) resolution; its dramatic improvement is in optical sectioning, because the pinhole excludes out-of-focus light from above and below the focal plane. There's not a lot you can do to "cope" with the diffraction limit using those techniques. If you need more resolution, you can use the super-resolution techniques you mention, or the good old TEM.
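As a quick worked example of where that figure comes from, here is a minimal Python sketch of the Abbe diffraction limit, d = λ/(2·NA); the wavelength and numerical aperture are assumed, illustrative values, not numbers taken from the answer above.

```python
# Abbe diffraction limit: d = wavelength / (2 * NA)
wavelength_nm = 550        # mid-visible (green) light -- assumed example value
numerical_aperture = 1.25  # a typical oil-immersion objective -- assumed example value

d_nm = wavelength_nm / (2 * numerical_aperture)
print(f"Abbe limit: {d_nm:.0f} nm")  # -> 220 nm, i.e. the ~0.22 um quoted above
```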

There is some more information on the Nikon website: http://www.microscopyu.com/articles/formulas/formulasresolution.html

and on Olympus website: http://www.olympusconfocal.com/theory/resolutionintro.html


You could also have a look at Leica Microsystems' Science Lab. You will find lots of articles and tutorials on the different microscopy methods, ranging from basic microscopy knowledge to specific know-how, including the latest information on STED: www.leica-microsystems.com/science-lab

For example: gCW-STED Microscopy: When the Arrival Time of a Photon Matters http://www.leica-microsystems.com/science-lab/gcw-sted-microscopy-when-the-arrival-time-of-a-photon-matters/

From Molecules to Tissues - Optical Tools for Cancer Research http://www.leica-microsystems.com/science-lab/from-molecules-to-tissues-optical-tools-for-cancer-research/

(Disclosure: I have been involved in the launch of Leica Science Lab)


A couple of review articles you could read up on.

1) Leung BO, Chou KC. Review of super-resolution fluorescence microscopy for biology. Appl Spectrosc. 2011 Sep;65(9):967-80.

2) Huang B, Bates M, Zhuang X. Super-resolution fluorescence microscopy. Annu Rev Biochem. 2009;78:993-1016.


Spatial Frequency and Image Resolution

A method often utilized in defining the limiting resolution of an optical microscope is to observe the conoscopic image of periodic line gratings as a function of spatial frequency and numerical aperture. Conoscopic images of the condenser iris opening, which represent a diffraction pattern induced by periodic spacing in the specimen, are oriented at right angles to the long axis of lines forming the periodic grating. If the grating has very large spacings between adjacent lines, several images of the condenser iris appear within the objective lens aperture when the rear focal plane is viewed with a focusing telescope or Bertrand lens. In cases where the iris is not closed to its smallest size, these images can overlap with each other. This article explores the relationship between the distance separating these iris opening images and the periodic spacing (spatial frequency) of lines in the grating.

The schematic microscope drawing (on the left-hand side of the tutorial) depicts the zeroth and higher-order diffracted light waves that are focused at the objective rear focal plane. Both the condenser and objective are represented by a single lens element, and the condenser iris diaphragm opening is shown at the bottom of the figure. The specimen (a line grating) is depicted as a dashed line through which illuminating light rays pass from the condenser iris. The orientation of the diffracted rays is governed by the equation:

S/f ≈ λ/D = sin(φ) (1)

where f is the objective focal length, λ is the wavelength of light in the specimen plane, D is the period of the line grating, S is the spacing between adjacent diffraction orders (condenser iris images) at the objective rear focal plane, and φ is the angle between the lens axis and the diffracted wave. The conoscopic image period between focused diffraction orders, S, is proportional to the numerical aperture of the light rays entering the objective lens:

S/f ≈ n(sin(φ)) (2)

where n is the refractive index of the imaging medium. Ernst Abbe demonstrated that, in order for the diffraction grating image to be resolved, at least two diffraction orders (usually the zeroth and the first) must be captured by the objective lens and focused at the rear focal plane. As the numerical aperture increases, additional higher-order rays are included in the diffraction pattern, and the image of the specimen (line grating) is rendered with greater fidelity. When only the zeroth and first orders are captured, the specimen is barely resolved, having only a sinusoidal intensity distribution within the image.
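To make Abbe's argument concrete, the following Python sketch counts how many diffraction orders from a line grating fall within the objective aperture; the grating period, wavelength, numerical aperture, and immersion index are hypothetical example values, not ones taken from the article.

```python
def captured_orders(period_nm, wavelength_vac_nm, numerical_aperture, n_medium=1.0):
    """Return the non-negative diffraction orders m, with sin(phi_m) = m*lambda/period,
    that fit inside the objective aperture (condition: sin(phi_m) <= NA/n)."""
    wavelength_specimen = wavelength_vac_nm / n_medium   # wavelength in the specimen medium
    sin_phi_max = numerical_aperture / n_medium          # NA = n * sin(phi_max)
    orders = []
    m = 0
    while m * wavelength_specimen / period_nm <= sin_phi_max:
        orders.append(m)
        m += 1
    return orders

# Example: 500 nm grating, 550 nm (vacuum) light, NA 1.4 oil-immersion objective
orders = captured_orders(period_nm=500, wavelength_vac_nm=550,
                         numerical_aperture=1.4, n_medium=1.515)
print(orders)                                  # [0, 1]
print("resolved (Abbe):", len(orders) >= 2)    # zeroth plus first order -> barely resolved
```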

Contributing Authors

Christopher E. Steenerson and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.


Defining super resolution

The resolution limit of conventional light microscopy is ~250 nm in the x and y directions, and ~450–700 nm in the z direction. This limit, also called the point-spread function (PSF), is the fixed size of the spread of a single point of light that is diffracted through a microscope; it is also a measure of the minimum-size point source or object that can be resolved by a microscope. Objects that are smaller than the PSF appear to be the same size as the PSF in the microscope, and objects that are closer together than the width of the PSF cannot be distinguished as separate. A commonly used measure of the PSF width is the Rayleigh criterion: R = 0.61λ/NA, where λ is the wavelength of light and NA is the numerical aperture of the objective. Any microscopy technique that overcomes the resolution limit of conventional light microscopy by at least a factor of two is considered to be a super-resolution technique.
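As a worked example (not part of the original text), this small Python snippet evaluates the Rayleigh criterion for assumed values of wavelength and numerical aperture, along with the factor-of-two threshold mentioned above:

```python
# Rayleigh criterion: R = 0.61 * wavelength / NA
wavelength_nm = 525        # roughly GFP emission -- assumed example value
numerical_aperture = 1.4   # high-NA oil objective -- assumed example value

R_nm = 0.61 * wavelength_nm / numerical_aperture
print(f"Rayleigh limit R ~ {R_nm:.0f} nm")                 # ~229 nm, close to the ~250 nm quoted above
print(f"super-resolution threshold ~ {R_nm / 2:.0f} nm")   # at least a factor-of-two improvement
```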

Super-resolution techniques break the diffraction limit by temporally or spatially modulating the excitation or activation light. For example, structured illumination microscopy (SIM) illuminates the entire field with a striped pattern of light (Gustafsson, 2000). When this excitation pattern mixes with the spatial pattern of the sample, they produce an interference pattern (called a moiré fringe) that is much coarser than either pattern alone and is detectable by the microscope. The excitation pattern is translated and rotated to generate a series of images with different moiré fringes. As the illumination pattern is known, it can be mathematically removed from the moiré to gain access to the normally irresolvable higher-resolution information in the sample. SIM increases resolution to ~100 nm in the x-y direction and ~400 nm axially (Schermelleh et al., 2008). SIM is limited to this factor-of-two improvement because the periodicity of the illumination pattern is created by diffraction-limited optics and is, therefore, limited by the PSF of conventional microscopy (Gustafsson, 2000).
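The factor-of-two cap can be seen with a little frequency arithmetic; the sketch below uses assumed values for wavelength and NA and is only an illustration of the reasoning above.

```python
# SIM reasoning: moire mixing shifts sample frequencies by the illumination frequency,
# so the highest observable sample frequency becomes k_obs + k_ill. Because the stripe
# pattern is itself produced by diffraction-limited optics, k_ill <= k_obs, so the gain
# is capped at a factor of two.
wavelength_nm = 525        # assumed emission wavelength
numerical_aperture = 1.4   # assumed objective NA

k_obs = 2 * numerical_aperture / wavelength_nm   # conventional lateral cutoff (cycles per nm)
k_ill = k_obs                                    # best case: pattern right at the diffraction limit
k_sim = k_obs + k_ill

print(f"conventional limit: {1 / k_obs:.0f} nm")  # ~188 nm
print(f"SIM limit:          {1 / k_sim:.0f} nm")  # ~94 nm, consistent with the ~100 nm figure above
```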

Other super-resolution imaging techniques modulate the excitation light to exploit the ability to saturate the emission of fluorophores in order to break the diffraction barrier by a greater amount. Saturation can be achieved by using intense illumination to produce a photophysical transition of the fluorophore to a transient dark state that can lead to either a permanently dark state (bleaching) or the emission of light on a microsecond or millisecond time scale, which is much slower than the nanosecond time scale of fluorescence. Alternatively, super-resolution techniques can use light to induce photochemical reactions in photoswitchable or photoactivatable fluorophores, and either transition them between on and off states or change their color. As long as these transitions can be limited to a subset of fluorophores that are spatially separated by the distance of the microscope PSF, the molecules can be located with precision approaching 5 nm. Super-resolution techniques can be separated into two categories depending on whether these effects are exploited at the ensemble level or at the single-molecule level.


New microscopy under ambient conditions achieves less than 10 nm spatial resolution in surface potential measurement


A new nanomaterials microscopy approach, called Pulsed Force Kelvin Probe Force Microscopy (PF-KPFM), allows work function and surface potential to be measured with better than 10 nanometer resolution in a single-pass AFM scan. The findings have been published in two related articles in ACS Nano and Angewandte Chemie International Edition.

As technology shrinks, the need to characterize the properties of very small materials—measured in nanometers (1 nanometer = 1 billionth of a meter)—has become increasingly important. Nanomaterials measuring from 1 to 20 nanometers show promise for use in next-generation electronic devices, solar cells, laser technology, and chemical and biosensors, to name a few. For scale, the width of a human hair is about 75,000 nanometers.

To understand the surface potential of nanomaterials, the most commonly used nanoscience tool is Kelvin probe force microscopy (KPFM), an atomic force microscopy (AFM)-based technique that measures work function and surface potential. Unfortunately, KPFM has its limitations due to its use of AC voltage to charge the AFM probe.

"Every KPFM technique operates on the same measurement paradigm: AC voltage is used to completely charge an AFM probe, thus producing a detectable electrostatic force for image acquisition," explains Xiaoji Xu, assistant professor in Lehigh University's Department of Chemistry. "Overloading the probe with charges forces a limit on the spatial resolution, since the charges are not limited to the apex of the AFM probe. Instead, excess charges occupy the entire cantilever and contribute to the signal."

Now, Xu and his graduate student Devon S. Jakob have introduced an entirely new measurement paradigm based on the alignment of Fermi levels. While traditional KPFM methods produce images with a spatial resolution of 30 to 100 nanometers, the new Xu Research Group method, called Pulsed Force Kelvin Probe Force Microscopy (PF-KPFM), allows work function and surface potential to be measured with better than 10 nanometer resolution in a single-pass AFM scan. Their findings have been published in an article in ACS Nano: "Pulsed Force Kelvin Probe Force Microscopy."

"In Pulsed Force Kelvin Probe Force Microscopy, we removed the need for the AC voltage by implementing a custom circuit of a field effect transistor between the tip and the sample which acts as a binary switch," says Xu. "When the switch is on, the circuit acts as a simple wire, allowing charges to pass between tip and sample. A small amount of charges spontaneously migrates between tip and sample based on the relative difference in their intrinsic Fermi levels. When the switch is off, the circuit does not allow for charges to pass, and acts as a capacitor to re-absorb the charges from the tip and sample region."

PF-KPFM also operates exclusively in the pulsed force mode, according to Xu. By using the pulsed force mode, he says, PF-KPFM measurements can be accurately obtained at very small tip-sample distances, where the electrical force is large, allowing small sample heterogeneities to be revealed.

"The next logical step was to combine PF-KPFM with Peak Force Infrared (PFIR) microscopy, an infrared imaging technique invented in our lab, since both techniques use the pulsed force mode," says Xu. "The resulting technique, named PFIR-KPFM, provides topographical, mechanical, chemical, and electrical information at

Devon S. Jakob et al. Pulsed Force Kelvin Probe Force Microscopy, ACS Nano (2020). DOI: 10.1021/acsnano.0c00767


Far-Field Superresolution Imaging via Spatial Frequency Modulation

The diffraction limit substantially impedes the resolution of the conventional optical microscope. Under traditional illumination, the high-spatial-frequency light corresponding to the subwavelength information of objects is confined to the near field in the form of evanescent waves, and thus is not detectable by conventional far-field objectives. Recent advances in nanomaterials and metamaterials provide new approaches to break this limitation by utilizing large-wavevector evanescent waves. Here, a comprehensive review of this emerging and fast-growing field is presented. The current superresolution imaging techniques based on evanescent-wave-assisted spatial frequency modulation, including the hyperlens, microsphere lens, and evanescent-field-illuminated spatial frequency shift microscopy, are illustrated. They are promising for investigating previously unobserved details and processes in fields such as medicine, biology, and materials research. Some current challenges and future possibilities of these superresolution methods are also discussed.


What is the spatial resolution of TERS?

In conventional far-field imaging, where all points in space are constantly illuminated during imaging, the best theoretical lateral resolution is conventionally defined by the Rayleigh criterion: the distance between two point scatterers at which the central Airy disc of one overlaps the first dark ring of the other, producing a significant intensity dip (at least 26%) between the two: r = (0.61 × wavelength) / (numerical aperture) – say two thirds of a micron. The resolution in z (focus) is far worse, being on the order of tens of microns.

Confocal microscopy is a far-field technique designed to improve resolution by illuminating an isolated diffraction-limited volume and collecting the scattered light from the same volume. Resolution here is conventionally defined as the full-width-at-half-maximum of the measured scattering response: now r = (0.41 × wavelength) / (numerical aperture), perhaps a bit better than a third of a micron for lateral resolution and only about three times worse in z (focus). Near-field techniques aim to reduce the illuminated volume still further.
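For concreteness, the short Python sketch below plugs assumed values (a 633 nm laser and an NA 0.75 objective, chosen only for illustration) into the two criteria quoted above:

```python
# Far-field (Rayleigh) versus confocal (FWHM) lateral resolution, as defined above
wavelength_nm = 633        # assumed HeNe excitation wavelength
numerical_aperture = 0.75  # assumed objective NA

r_far_field = 0.61 * wavelength_nm / numerical_aperture
r_confocal = 0.41 * wavelength_nm / numerical_aperture

print(f"far-field Rayleigh: {r_far_field:.0f} nm")  # ~515 nm
print(f"confocal FWHM:      {r_confocal:.0f} nm")   # ~346 nm, about a third of a micron
```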

Near-field scanning optical microscopy (NSOM or SNOM, depending on whether you follow IBM's or Bell Labs' lead) employs a small aperture, at the very tip, in the metallization of a drawn optical fiber, which allows light to leak out a very short distance (an evanescent field), thus illuminating a volume the diameter of the aperture and only a couple of nanometers deep. Scattered light from this volume is then collected in the far-field.

Of course this process may be reversed with collection through the aperture, from the same volume that is illuminated in the far-field. It is even possible to both illuminate and collect through the fiber, although each pass attenuates the beam by some million times, so this can only be useful for very strong scattering.

Height feedback, to track surface topography and maintain a fixed tip/surface separation, has traditionally been provided by shear-force feedback: the tip, fixed to one tine of a tuning fork driven at resonance, is oscillated laterally, and the phase shift is held constant to maintain a constant interaction with the surface. This is slow, somewhat insensitive, and leads to aperture wear (enlargement) through collisions with any significant topography of the surface. Lately, other schemes have been devised that use a similar aperture at the apex of a hollow AFM tip to perform this type of NSOM while allowing somewhat more robust feedback.

Because smaller apertures that illuminate a smaller volume transmit fewer photons, this technique is limited, in practice, to resolutions of about 100 nm or, with more difficulty, perhaps half that. There are also significant artefacts associated with, for instance, variable throughput of the aperture when held close to a surface, variation in feedback height as the tip width (aperture diameter plus twice the metallization thickness) interacts with surface roughness, and polarization variation across the aperture. It should be noted that scanning such an aperture over an edge (even in the absence of topography) generates an artefact with a size of half the aperture width, which is sometimes confused with resolution.

Scattering-SNOM (sSNOM), of which TERS is an example, uses a physical tip to define the spectroscopy volume either passively, by its physical presence, or actively, by its photonic effects. For instance, a physical tip scanned within nanometers of the surface of a TIRF crystal, where a totally internally reflected excitation beam provides a shallow evanescent field, will generate light propagating to the far-field from the volume immediately surrounding the physical tip, which therefore defines the resolution.

TERS and associated TEOS techniques employ a metal coating whose photonic activity causes it to act as an antenna, mediating energy transfer from the far-field excitation beam to the near field. So-called "hot spots" at the tip are non-propagating fields of extremely high intensity, which effectively illuminate a volume only a couple of nanometers in diameter. Whilst it is still necessary to deconvolve the diffraction-limited far-field response from the nanometer-scale near-field response, this is possible, and in the case of strong scatterers it is not strictly necessary, since the tip enhancement can be on the order of 10^7 or more.

There is currently much discussion among practitioners on definitions of resolution for TEOS techniques and there remains theoretical work to be done to fully understand which of the contrast mechanisms seen are useful.

Fig. 7: TERS image, 100 nm × 100 nm (75 × 75 pixels), of a carbon nanotube showing an optical spatial resolution down to 8 nm; total map acquisition time < 9 minutes.

In the example presented in Fig. 7, the TERS image (100 × 100 nm scanning area, 1.3 nm pixel step size, total acquisition time < 9 min, 100 ms integration time per pixel) shows nanoscale chemical imaging of a single carbon nanotube with a spatial resolution of 8 nm, confirmed by section analysis of the TERS intensity.

As a first approximation, this resolution depends on the radius of curvature of the TERS tip itself; the "8 nanometers" result from convolution with the tip radius. Thus, the achievable TERS resolution can be approximated as half the radius of curvature of the tip.
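Taking the rule of thumb above at face value, a tip with an assumed 16 nm radius of curvature would be expected to give roughly the 8 nm resolution seen in Fig. 7; the snippet below is only an illustration of that estimate.

```python
# Rule of thumb from the text: TERS resolution ~ half the tip radius of curvature
tip_radius_nm = 16                       # assumed example value
print(f"expected TERS resolution: ~{tip_radius_nm / 2:.0f} nm")  # ~8 nm, as in Fig. 7
```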


Recommended Products for Cell Biology

ZEISS Primovert
ZEISS Primovert is a complete solution for your cell culture laboratory and is perfect for your cancer research requirements. Primovert has a compact design allowing it to be placed directly inside your culture hood. With a universal phase slider for all objectives, you can quickly and easily assess the condition of your cells without needing to adjust phase position when changing magnification. Primovert also comes with mounting frames for different culture vessels, and by simply removing the condenser, you can increase the working distance for culture flasks. Primovert iLED with integrated fluorescence allows quick and efficient contrast imaging of green fluorescent protein (GFP)-labelled cells by switching the contrast technique on the stand. Primovert HDcam, with integrated five-megapixel camera, lets you capture images, videos and produce reports. In combination with your iPad and the free imaging App, Labscope, you can share and discuss observations with colleagues.

ZEISS Axio Vert.A1
ZEISS Axio Vert.A1 is the only system in its class to provide all the standard contrast techniques, yet is compact enough to sit beside your incubator. With the new IVF contrast system, you can switch freely between DIC, PlasDIC and iHMC to investigate your samples. Axio Vert.A1 offers LED excitation with no UV component, enabling you to image fluorescently labelled cells or specify transfection rates without a negative impact on cell survival rates. With the ergonomic design of Axio Vert.A1, including tilting eyepieces, your routine tasks can be performed quicker and easier.

ZEISS Cell Observer
All your live-cell imaging requirements are met with the ZEISS Cell Observer, a fully integrated research platform, based on ZEISS Axio Observer 7, with convenient workflow for the most complex applications. Optional light sources include Colibri, the innovative, high-performance LED source for bright, fast and gentle imaging to greatly reduce phototoxicity. Without the reliance on mechanical shutters, required wavelengths can be switched in microseconds. Your application may require image capture at maximum speed, maximum resolution, or both. A range of monochrome cameras can be integrated in Cell Observer to meet the highest possible sensitivity and resolution requirements. To collect every single photon from a weakly fluorescent sample, Cell Observer supports the full range of ZEISS Axiocams and is also compatible with sCMOS and EM-CCD cameras from third-party manufacturers.
An environment that is as close to the living organism as possible ensures you produce physiologically relevant results. Incubator XL encases all beam path components ensuring environmental stability of your cells, and of Cell Observer. In addition, Definite Focus 2 keeps your cells in focus, hours and days after starting your experiment.
For optical sectioning and Z-stacks, the Apotome slider module transforms Cell Observer into a 3D workstation. Based on structured illumination microscopy (SIM), Apotome generates extremely high-resolution optical sections of your live cells.

ZEISS Celldiscoverer 7
For multiple live-cell imaging experiments with increased throughput, ZEISS Celldiscoverer 7 is your fully automated imaging platform, incorporating an inverted microscope, darkroom, and various incubation and detection options. For your most demanding long-term live-cell experiments with rapid time-lapse imaging, you can choose fast, sensitive sCMOS or EM-CCD cameras. With Celldiscoverer 7, you can undertake long-term observation of the physiological and morphological parameters of living tissue sections or spheroids during growth, motility and interaction. Celldiscoverer 7 excitation unit combines up to seven LED’s providing broad spectrum excitation, gentle illumination, low phototoxicity and fast switching times. Integrated incubation options ensure environmental stability for your cells, and with automatic recognition and adaptation, you are free to use any number of cell carriers. With Auto-immersion, hardware-based focus and Autocorr objectives, you can produce images of a quality unseen before. You can further enhance these images with the computational deconvolution option, which greatly increases signal-to-noise ratio (SNR) and produces crystal clear 3D images. For confocal imaging of 3D samples, simply add the LSM 900 with Airyscan 2 and profit from a system that seamlessly combines camera and confocal imaging like never before.

ZEISS LSM 900 with Airyscan 2
For observing dynamic processes in live cells, ZEISS LSM 900 with Airyscan 2 provides the highest sensitivity and frame rates for gentle imaging and is tailored precisely to your live-cell applications. The LSM 900 with Airyscan 2 offers a range of imaging solutions including fluorescence resonance energy transfer (FRET) and FRAP, as well as parallel acquisition and linear unmixing in samples with multiple overlapping fluorophores. The LSM 900 with Airyscan 2 has a genuinely small footprint and reduced complexity, saving valuable lab space and minimizing training time. With ZEN imaging software, setup and use is simple, enabling reproducible results in the shortest possible time to increase your productivity. You can analyse bleach events within a time series using the FRAP Efficiency Analysis Module for ZEN.

ZEISS LSM 980 with Airyscan 2
The ZEISS LSM 9 family of confocal microscopes with Airyscan 2 includes the LSM 900 and the LSM 980. In conventional confocal systems, the pinhole blocks out-of-focus light from reaching the detector. Although this achieves higher resolution, photons are lost for the final image. With the Airyscan 2 detector simply more photons are collected. Replacing the conventional pinhole/detector arrangement, the Airyscan 2 detection element array extracts more information from the fluorescence signal while simultaneously increasing resolution and SNR with improved optical sectioning. The new Multiplex mode for Airyscan 2 offers more flexibility for your live-cell experiments enabling you to rapidly image whole sample volumes and large fields of view in high resolution. In a single sweep, Airyscan 2 in Multiplex mode can acquire up to four high SNR superresolution image lines.

ZEISS Elyra 7
ZEISS Elyra 7 with Lattice SIM is the flexible platform for fast and gentle 3D SR microscopy that brings SIM to a new level. Whether you work with living cells, organoids or tissue, Elyra 7 with Lattice SIM offers a range of SMLM techniques including PALM. The Lattice SIM spot pattern has a sampling efficiency twice as high as classic SIM, an imaging speed of up to 255 fps, and a lower laser dose ensuring reduced phototoxicity. This fast, gentle and light-efficient imaging ensures you can reveal mechanistic and dynamic details in live cells for the first time. Applications such as vesicle trafficking and cytoskeleton and membrane reorganization are examples in cell biology where you can never have enough temporal and spatial resolution to discover new pathways and mechanisms. The Apotome mode of Elyra 7 gives you fast optical sectioning with high lateral and axial resolution for crisp contrast, minus out-of-focus and background signal. With a wealth of contrast and SR techniques, combined with optical sectioning, Elyra 7 is tailored precisely to your live-cell imaging applications. From molecular-level imaging such as focal adhesion dynamics of cancer cells, to whole-cell capture in a single image, Elyra 7 is an outstanding SR platform with the flexibility to perfectly match your applications.

ZEISS Sigma
The ZEISS Sigma field emission SEM (FE-SEM) family comprises the Sigma 300 and Sigma 500. The new generation of secondary electron detectors in the Sigma family produces images with 50% more signal and 85% greater contrast with the novel C2D (Cascade Current Detector) and VPSE (Variable Pressure SE) detectors in variable pressure mode. With a semi-automated, high-speed 4-step workflow, you can image across multiple regions of interest benefiting from fast time-to-image, as well as saving time on training, especially in a multi-user environment. The C2D is able to deliver sharp, ultrastructural images of cryo-fixed biological samples, and non-conductive biomaterials (such as teeth and bone) can be studied without coating using the Advanced VP mode or low voltage approaches.

ZEISS GeminiSEM
The ZEISS Gemini FE-SEM family includes the GeminiSEM 300, GeminiSEM 450 and GeminiSEM 500. The GeminiSEM 500 offers you more signal and more detail, with minimal sample damage. Perfect image quality and a resolution of 1.0 nm are obtained at low voltages without requiring beam deceleration. Applying beam deceleration with the Tandem decel option resolves to 0.8 nm at 1 kV. The imaging capabilities of the GeminiSEM 500, combined with the annular scanning transmission electron microscope (aSTEM) detector, provide ultrastructural details in biological specimens such as brain cell lipid bilayers.
Your specialist instrument for speed and surface sensitivity in imaging and analytics is the GeminiSEM 450. The optical design ensures there are no time-consuming realignments as you work. With the GeminiSEM 450, you can seamlessly switch between analytical modes at high beam currents and high-resolution imaging at low beam currents. Using the high beam current, you can capture large areas of high-resolution cellular ultrastructure.
For imaging flexibility, the GeminiSEM 300 provides high resolution and high contrast, even on extremely large fields of view. Even with non-conductive samples, the novel high gun resolution mode achieves excellent resolution and distortion-free images. When combined with the variable pressure mode, sputter coating is not required to analyse the topology of biological samples.

ZEISS Crossbeam
Based on the novel Gemini optics and the low voltage capabilities of Ion-sculptor FIB column, ZEISS Crossbeam 350 and Crossbeam 550 provide true sample information from high-resolution images, minimal sample damage and ultra-thin samples. For your most demanding characterisations, Crossbeam 550 features an optional large chamber with 22 configurable ports. The imaging capabilities of the Crossbeam family can be applied to FIB-tomography of biological samples. This allows you to investigate different cell compartments in single cells, or understand the 3D morphology of complete model organisms with the highest resolution and reliability.

ZEISS ZEN Connect
For greater insights into your cell biology research, you can utilize a connected microscopy approach with ZEISS ZEN Connect, which lets you combine and correlate images and data from different imaging modalities. For example, you can overlay a large area widefield scan of your specimen with detailed regions acquired with a confocal or superresolution microscope, or you can combine and correlate light and electron microscopic images with each other. Correlating information from different resolution and size scales will allow you to identify new cellular functions and mechanisms more precisely and faster than before.

ZEISS Lattice Lightsheet 7
Make light sheet fluorescence microscopy available for your live cell imaging at subcellular resolution – while also allowing you to use your standard sample carriers. With this automated, easy-to-use system, volumetric imaging of subcellular structures and dynamics over hours and days, with the best protection from photodamage, becomes available to everyone. Discover the dynamics of life in unprecedented depth of detail – with an ease you never imagined possible!

Not all products are available in every country. Use of products for medical diagnostic, therapeutic or treatment purposes may be limited by local regulations. Contact your local ZEISS provider for more information.


Intravital Imaging of Vasoactivity in the Uterine Arterial Vasculature Tree during Pregnancy

Chapter Summary

Intravital microscopy incorporates a large range of microscopy techniques and modifies and adapts them to facilitate visualization of biological processes in live, anesthetized animals. It involves the exteriorization of tissue but the tissue remains intact and attached to the anesthetized animal and thus allows tissue behavior and responses to be observed in real time under in vivo-like conditions. The microvasculature has been studied in this fashion for many years in many different tissues. Here, we have adapted these techniques to observe the endometrial microvasculature and its reactivity in the pregnant mouse. This model allows us to visualize the uterine, radial, arcuate, basal, and spiral arteries as a connected network and observe their reactivity in their native environment. This model will allow for exploration of the physiology of blood flow control to the developing conceptus and can be used with most of the widely available genetically altered strains of mice.


Optical Sectioning and Confocal Microscopy

01:00:11.00 So, I am Kurt Thorn, director of the Nikon
01:00:14.00 Imaging Center at UC San Francisco
01:00:16.17 and I'm gonna be talking about
01:00:17.20 optical sectioning and confocal microscopy
01:00:19.28 and how we can use that to make
01:00:20.28 3-dimensional images of biological samples.
01:00:24.18 So, our goal is to
01:00:25.00 build three-dimensional images of
01:00:26.00 biological samples using a microscope.
01:00:28.00 And the example I'm showing here is an
01:00:31.00 embryonic mouse lung, it's about 200
01:00:34.00 microns thick and as you can see here in
01:00:37.00 this animation, we've captured a
01:00:38.00 three-dimensional image here, where we
01:00:40.00 can see the complete structure of this
01:00:42.00 this mouse lung in 3D and rotate it
01:00:45.00 around and look inside it and see what's
01:00:47.00 going on. And I'm going to talk now about
01:00:49.00 how we do that.
01:00:51.20 So, in order to build a
01:00:52.00 3-dimensional image, we first take a
01:00:55.00 bunch of images in separate slices
01:00:57.00 through the sample, at different heights
01:00:58.00 in the sample.
01:01:00.01 And so, on the left here is a panel
01:01:01.20 showing 12 images taken at
01:01:05.00 different heights through a yeast cell.
01:01:06.00 And, we have this set of two-dimensional
01:01:10.00 images and what we can do now is move
01:01:11.00 through the different images,
01:01:13.08 stacking them up into a 3-dimensional stack.
01:01:15.00 What we call a z-stack.
01:01:17.13 And so, I'm taking here
01:01:18.12 the bottom image, then the next
01:01:19.00 image, putting that on top, the image
01:01:21.00 after that, putting that on top, and so on
01:01:23.00 until we have a complete
01:01:24.00 three-dimensional stack of these images.
01:01:25.00 We can now take this in a computer and
01:01:28.00 have the computer calculate what this
01:01:30.00 structure would look like if viewed from
01:01:31.00 different angles. And that's how we get
01:01:33.00 these kinds of movies and I'm showing
01:01:34.00 down here a rotation around the axis of
01:01:38.00 these two yeast cells here, so that you
01:01:40.00 can now, from the set of 2D images,
01:01:42.00 appreciate their 3-dimensional structure
01:01:44.00 and layout.
01:01:48.01 So, how do we do this? We need
01:01:49.00 a microscope where we can take images at
01:01:52.00 different heights. So, we have our
01:01:53.00 sample here, illustrated by this little cube.
01:01:57.24 We take a picture of it until we
01:01:59.00 get a slice through the sample, which
01:02:00.00 looks like this rectangle.
01:02:02.14 We then step the focus.
01:02:04.09 We physically move the sample up or down.
01:02:07.03 In this case, we move the
01:02:08.00 sample down away from the objective to
01:02:09.00 get a second slice through a different
01:02:12.00 portion of our little cube here, so we
01:02:13.00 get a smaller rectangle.
01:02:15.15 And we can keep doing this,
01:02:16.00 getting slices through different parts
01:02:19.00 of this object and then we get, as I
01:02:21.00 showed before, a stack of these images
01:02:22.00 where we can now in the computer
01:02:25.00 assemble these into a three-dimensional
01:02:27.00 structure. And now we can
01:02:29.00 look through that at different angles to
01:02:31.00 generate a three-dimensional
01:02:32.00 reconstruction as how this would look if
01:02:34.00 viewed from different angles.
01:02:38.04 Here's another example
01:02:39.01 This is a worm, C. elegans,
01:02:41.00 expressing two different sensory
01:02:43.00 neurons labeled in green and red and I'm
01:02:45.00 showing here 85 Z slices moving through
01:02:48.00 the full thickness of the head of this worm.
01:02:49.00 And you can see, there's a lot of
01:02:51.00 information there but it's hard to
01:02:52.00 interpret what the connectivity of these
01:02:56.00 neurons is, what the sort of biological
01:02:58.00 structure is, just from the Z slices alone.
01:03:02.12 So, when we render those in 3D now,
01:03:05.00 we go and compute these views looking
01:03:07.00 through different orientations of the
01:03:09.00 sample. So, we get a rotation series, it's
01:03:11.00 now much easier to appreciate what the
01:03:12.00 geometry of these neurons is and what
01:03:14.00 the structure looks like.
01:03:16.08 So that hopefully motivates why
01:03:17.27 we want to be able to do this kind
01:03:19.14 of three-dimensional reconstruction.
01:03:22.28 So, the challenge in doing this kind of work
01:03:26.18 is that conventional microscopes,
01:03:28.15 a conventional wide field microscope,
01:03:30.18 sees both the in-focus information that is
01:03:32.23 the information that's in focus on the
01:03:34.08 Z slice we're looking at, as well as light
01:03:37.14 that comes from above and below there
01:03:39.04 from other regions on the sample that
01:03:40.11 aren't in focus. And so, you can see in
01:03:41.00 this conventional image here. This is an
01:03:44.00 image just of a piece of mouse tissue.
01:03:46.00 You can see that there's a lot of blurry,
01:03:49.00 fuzzy out-of-focus light here in
01:03:51.00 addition to these, you know, sharp edges.
01:03:52.00 They're overlaid with this
01:03:54.00 blurry out-of-focus light and that
01:03:56.00 reduces our ability to see what's going on.
01:03:59.00 If we use something called a confocal
01:04:01.00 microscope, which blocks that
01:04:03.00 out-of-focus light, you see that this
01:04:05.00 image now gets much sharper and much
01:04:07.00 crisper because we're only seeing the
01:04:08.00 in-focus information. We're not seeing the
01:04:10.00 light that came from out of focus.
01:04:13.00 And so, this is what I'm going to talk about
01:04:14.00 for most of the rest of this talk is how one
01:04:17.00 of these confocal microscopes works,
01:04:18.22 how it rejects this out-of-focus light,
01:04:20.14 and how it enables us to get these nice
01:04:22.00 crisp in-focus slices, which we can then
01:04:24.00 use to do 3-dimensional reconstruction
01:04:26.00 of objects.
01:04:28.07 So, if we consider how a
01:04:29.00 conventional microscope works when
01:04:31.00 illuminating a single point in our
01:04:32.00 sample, we've got this blue excitation
01:04:34.00 light coming in here, it's brought to a
01:04:36.00 focus to a spot in our cyan sample, and
01:04:38.00 that spot emits light. That light is
01:04:41.00 collected by the objective lens and focus
01:04:43.00 to a corresponding point on our camera
01:04:45.00 by the tube lens.
01:04:46.26 So, we see this green spot in our sample
01:04:49.11 in focus.
01:04:50.29 However,
01:04:51.00 there is also light coming from other
01:04:53.00 regions in the sample, such as these
01:04:55.00 points out of focus and above our sample.
01:04:57.00 And that light will be collected by the
01:05:00.00 objective and the tube lens. But it
01:05:01.00 doesn't come to a focus on the camera, it
01:05:03.00 comes to a focus somewhere behind it.
01:05:05.00 And so, that will give rise to a disc of
01:05:06.00 fuzzy out of focus light overlaying our
01:05:09.00 in-focus image.
01:05:10.16 So, we see both the
01:05:12.00 out-of-focus light from this spot here,
01:05:13.00 as well as the in-focus light from the
01:05:15.00 spot we want to image.
01:05:18.04 So, the problem here
01:05:19.07 is that fluorescence is emitted along
01:05:21.03 the entire illuminated cone of this spot
01:05:23.01 in our microscope
01:05:23.29 not just at the focus.
01:05:27.09 So, the trick in a confocal microscope
01:05:29.17 is to physically block that out-of-focus light.
01:05:31.22 And the way we do that is by replacing
01:05:33.09 our camera here with a pinhole.
01:05:35.07 And that pinhole is placed such that it
01:05:38.11 exactly will pass the light that's in-focus,
01:05:41.08 that light will come to a focus at that spot,
01:05:42.25 make it through the pinhole and reach
01:05:43.18 the detector.
01:06:44.24 So, if we now draw the
01:05:45.19 emitted light from that in-focus spot,
01:05:47.17 you see it comes here and crosses out
01:05:48.27 of focus exactly where the pinhole is
01:05:50.24 placed, which means it will make it
01:05:52.17 through that pinhole, reach the detector,
01:05:54.24 and be recorded by our microscope.
01:05:56.26 If we now consider the corresponding
01:05:58.00 out-of-focus light, it will make this big
01:06:02.00 fuzzy disc on the pinhole, which will
01:06:04.00 block the majority of that light and
01:06:05.00 prevent it from reaching the detector.
01:06:07.00 So now, we've arranged our
01:06:10.00 microscopes such that we only see the
01:06:11.00 in-focus information.
01:06:12.22 The out-of-focus information never
01:06:14.04 makes it to the detector.
01:06:15.21 So, this gives us a way of just
01:06:16.26 recording the in-focus light.
01:06:19.16 The problem now is
01:06:20.24 we're only imaging a single spot.
01:06:22.27 And to get an image you need more
01:06:24.09 than a single point.
01:06:25.25 And the way we solve that problem
01:06:27.00 is by recording lots of
01:06:30.00 different points in our sample,
01:06:31.18 arrayed across it.
01:06:32.17 And so, the idea is if you have
01:06:33.00 your sample here, you build up a grid of
01:06:36.01 points you want to record,
01:06:37.29 and then you march the laser spot,
01:06:40.00 the focal spot
01:06:40.22 across the sample point by point.
01:06:43.06 And then use that to record the intensity
01:06:45.07 at each spot on the sample
01:06:46.10 and reconstruct that image
01:06:47.28 as a 2D grid of points.
01:06:53.08 So, to do this, we need a way to focus our
01:06:56.00 light to a single spot so that we can
01:06:57.00 detect the corresponding in-focus
01:07:00.00 information coming from that spot.
01:07:02.14 And to do that
01:07:03.19 we use a laser because it gives
01:07:05.00 highly collimated illumination and also
01:07:07.00 high power so that we can very quickly
01:07:09.00 record this fluorescence from the spot.
01:07:11.00 Since we need to record many many spots
01:07:13.00 to build up an image, we don't want to
01:07:14.00 spend a long time looking at each one or
01:07:16.00 it'll take us forever to get an image.
01:07:17.29 So, the idea is we bring in this this
01:07:19.00 collimated beam and as you can see here
01:07:21.00 a collimated beam gives us a
01:07:23.00 single focal point in the sample and we
01:07:25.00 can record that information.
01:07:27.05 How do we move that spot, right?
01:07:28.18 It doesn't do us any good
01:07:29.27 to just image a single spot in our sample.
01:07:31.14 We want to have this grid of spots.
01:07:33.14 And so, to do that, we can change the angle
01:07:36.02 of illumination as it enters the objective.
01:07:39.05 And so, see here, if we come
01:07:40.20 in with light that's coming
01:07:42.04 perpendicular to the plane of the
01:07:44.06 objective here, it comes to a focus
01:07:46.03 exactly in the center of the field of view.
01:07:48.10 If we now tilt that beam so it's
01:07:49.22 coming at an angle, we move the focal
01:07:52.07 spot off axis and so we can now record
01:07:54.23 from the left side of this object.
01:07:56.21 If we instead change the angle to the other
01:07:58.00 orientation we now record a spot from
01:08:01.00 the right side of the sample.
01:08:02.25 So, by changing the angle of illumination
01:08:05.18 as it enters the objective,
01:08:07.01 we can move the illumination spot
01:08:08.13 across the sample.
01:08:12.08 You might be wondering now
01:08:13.24 how we still detect the
01:08:15.22 or how we still block the out-of-focus light?
01:08:17.05 Because now our spot will no longer
01:08:19.25 necessarily be coincident with a pinhole.
01:08:22.00 Right, before we had the pinhole drawn
01:08:23.14 exactly on the center of this optical axis
01:08:26.11 of our microscope.
01:08:27.17 And now as we move the focal spot off axis,
01:08:29.27 how do we make sure that that light
01:08:31.17 still reaches the pinhole and goes
01:08:33.01 through it, and doesn't get blocked by
01:08:34.10 the pinhole.
01:08:35.15 So, here's the optical path of the
01:08:36.24 confocal microscope that allows
01:08:38.04 us to do the scanning.
01:08:40.07 And the heart of it is
01:08:41.05 this set of scanning mirrors here,
01:08:43.00 which allow us to rotate the angle at
01:08:44.23 which the laser beam enters the
01:08:46.03 objective, and thereby move the spot
01:08:48.18 across the field of view,
01:08:49.27 across our sample.
01:08:52.04 So, if you follow this here, we
01:08:53.01 have a laser that enters our microscope.
01:08:56.10 It's scanned by these scanning mirrors
01:08:57.29 here, that change its angle so that once
01:09:00.08 it's focused by the objective, the spot
01:09:02.01 it illuminates in our sample changes.
01:09:05.08 Now, if you imagine what's going on with
01:09:06.21 the light that's emitted by that spot,
01:09:08.15 it's going to come out, be collected by
01:09:10.06 the objective, and it's going to come out
01:09:11.19 at the same angle it entered.
01:09:13.10 Because the objective maps angles into
01:09:15.08 position.
01:09:16.08 And so, the light that comes
01:09:17.29 from that emitted spot, that excited spot,
01:09:20.22 is coming out of the same angle that
01:09:22.15 the laser beam entered.
01:09:23.26 And so if we don't move these mirrors
01:09:27.12 that emitted light will be reflected
01:09:28.25 by these mirrors along exactly the same
01:09:30.17 path that the laser came in along.
01:09:32.14 And so that means that the light
01:09:34.28 here is always coming in the same
01:09:36.08 direction regardless of what spot it
01:09:37.22 came from because the scanning mirrors
01:09:39.01 exactly undo the angle change that we
01:09:41.01 apply to the illumination light.
01:09:43.06 They exactly undo that angle change
01:09:44.22 on the emitted light.
01:09:46.11 So, then all we need is a dichroic mirror,
01:09:48.00 this 45-degree mirror here,
01:09:50.05 which separates that emitted light
01:09:51.15 from the laser. And now we can pass
01:09:53.12 that through our pinhole again and it
01:09:55.12 will reach the detector regardless of
01:09:58.07 where it came from on the sample.
01:10:00.09 So, by using the same pair
01:10:01.28 of scanning mirrors twice,
01:10:02.24 once to scan the illuminated light,
01:10:04.09 and a second time to de-scan the
01:10:05.25 emitted light, we can keep our pinhole
01:10:09.02 fixed because the emitted light will
01:10:11.02 always come back along the same path
01:10:12.14 regardless of where it came from in the
01:10:13.17 sample.
01:10:14.19 So that makes this construction
01:10:15.21 really simple.
01:10:16.09 We just have a single fixed pinhole,
01:10:17.20 we put a detector behind it
01:10:19.03 and now we can see the emitted light
01:10:20.26 regardless of where it came from in the
01:10:22.11 sample.
01:10:26.09 So, the detector required for a
01:10:29.04 confocal microscope is a little bit
01:10:31.07 different than a detector for an
01:10:32.17 ordinary microscope,
01:10:33.10 where we can just use a camera.
01:10:35.05 First off, we don't need a
01:10:36.07 camera because we're only imaging a
01:10:37.20 single spot at a time,
01:10:38.20 we're just imaging the total amount of light
01:10:40.10 that comes through the pinhole.
01:10:41.10 We don't need any ability to detect
01:10:42.18 where it came from.
01:10:44.18 But we want to be very fast.
01:10:46.02 Because again, we have to
01:10:46.28 scan the spot over the sample, and so if
01:10:48.09 it takes us, you know, a second to collect
01:10:50.15 an image it's going to take us hours to
01:10:51.29 build up an image.
01:10:53.10 So instead we use a
01:10:54.08 detector, a photomultiplier tube
01:10:57.07 principally because it's very fast.
01:10:58.22 And so, a photomultiplier tube can respond
01:11:00.11 in nanoseconds to light impinging on it.
01:11:05.00 And so, this allows us to collect a single
01:11:06.27 pixel in our confocal image in only a
01:11:08.17 few microseconds.
01:11:10.17 I don't want to say a whole lot
01:11:11.11 about photomultiplier tubes
01:11:12.20 and how they work,
01:11:14.23 but I'll just briefly describe their operation.
01:11:16.28 And so, the idea is we have
01:11:18.11 a light beam that enters the
01:11:20.00 photomultiplier tube and it hits an
01:11:24.00 object called a photo-cathode,
01:11:25.00 which converts it to electrons.
01:11:26.24 And those electrons are then excited by a
01:11:30.29 high voltage onto an electrode here,
01:11:33.16 and they're moving fast enough
01:11:35.05 that when they hit that electrode
01:11:36.03 they kick out additional electrons.
01:11:39.14 And by putting a series of these
01:11:41.02 electrodes in a row here,
01:11:43.24 we can multiply the signal by a
01:11:46.22 very large amount.
01:11:47.28 Most photomultiplier tubes
01:11:48.29 have multiplications on the order
01:11:50.12 of a million or ten million.
01:11:53.10 So that a single photon that enters the
01:11:55.00 photomultiplier tube will give rise to
01:11:57.00 say 10 million electrons at the anode
01:12:00.00 here at the end of this multiplication
01:12:01.00 chain, which will be very easy to detect.
01:12:06.00 So, that means that these are sensitive
01:12:08.00 to very low light levels and they're
01:12:10.00 very fast so, you know, in a few
01:12:12.00 microseconds we can record a bunch of
01:12:13.00 photons from a single pixel in our
01:12:15.00 sample, and then use that to record the
01:12:18.00 intensity there, and then move on to the
01:12:20.00 next pixel and repeat this process.
01:12:22.09 So, putting this all together, here's again
01:12:24.17 our non-confocal image on the right,
01:12:27.02 our confocal image on the left,
01:12:28.24 and you can now see how this works.
01:12:30.17 We've taken the out-of-focus light
01:12:34.00 that's in the non-confocal version here,
01:12:35.00 and completely eliminated it by scanning the
01:12:37.00 spot across the sample and all these pixels,
01:12:39.16 and only recording the light
01:12:40.26 that's in focus and makes it through our
01:12:42.00 pinhole.
01:12:45.17 And again, putting this all together
01:12:47.00 in a 3D context.
01:12:48.02 If we take many of these images
01:12:49.03 at different heights and then
01:12:52.06 reconstruct them in the computer,
01:12:53.10 we can get very beautiful images like this
01:12:56.00 high magnification view of the mouse
01:12:58.00 lung I showed you earlier, where we can
01:13:00.00 now see individual cells.
01:13:00.16 And so, if you look at this,
01:13:02.06 you can see the outlines of
01:13:04.00 these individual cells, here the black
01:13:05.00 areas in the center are the cytoplasm.
01:13:08.00 We've only labeled the cell membranes here.
01:13:11.16 And so, this is the real strength of
01:13:13.00 confocal imaging is getting
01:13:15.00 high-resolution 3D images of thick
01:13:18.00 biological samples
01:13:20.16 So, what's it not so good at.
01:13:22.19 So, there are a couple of drawbacks
01:13:24.00 to a confocal microscope as I've
01:13:26.00 described it here, which is called a
01:13:27.00 laser scanning confocal microscope
01:13:29.00 because we use a laser scanning across
01:13:31.00 the sample to build up the image.
01:13:32.20 The first is, as I've mentioned,
01:13:34.00 we're building up this image
01:13:35.26 point by point.
01:13:37.03 And even though we have these fast PMT's
01:13:39.07 if you have,
01:13:39.13 if you spend a microsecond at each pixel and
01:13:42.00 you want to record a 1-megapixel image,
01:13:44.00 that's still one second to record an image.
01:13:46.24 So, that means it's a slow
01:13:49.00 microscope generally. If you want to
01:13:50.02 follow things that are happening very
01:13:51.08 quickly this is not a great way to do it.
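The arithmetic behind that frame-time estimate, as a minimal Python sketch using the dwell time and pixel count mentioned in the talk:

```python
# Frame time for a point-scanning confocal: dwell time per pixel x number of pixels
dwell_time_s = 1e-6            # 1 microsecond per pixel, as mentioned in the talk
n_pixels = 1024 * 1024         # roughly a 1-megapixel image
frame_time_s = dwell_time_s * n_pixels
print(f"{frame_time_s:.2f} s per frame")   # ~1 s, so fast events are hard to follow
```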
01:13:54.00 Secondly, it tends to not be incredibly
01:13:58.00 light efficient, we don't,
01:13:58.29 we lose a lot of light in this process.
01:14:00.25 And partly that's because
01:14:02.03 the photomultiplier tube is
01:14:04.11 not a fantastic detector for light
01:14:06.09 sensitivity.
01:14:07.25 It's very, has very high gain, it's very
01:14:10.00 fast, but it misses a lot of the photons
01:14:12.00 that hit it.
01:14:12.22 So, it only records something like
01:14:13.29 25% of the photons that arrive there.
01:14:17.04 And if you put these two things
01:14:18.00 together, that means if you're trying to
01:14:20.00 go very fast and you have a not really
01:14:22.00 high efficiency detector, it means you're
01:14:23.00 not generally recording very many
01:14:24.00 photons from each pixel.
01:14:25.12 So, for dim signals this doesn't work very well.
01:14:29.17 And so, the two big drawbacks of a laser
01:14:31.00 scanning confocal microscope is that
01:14:33.00 it's slow, and it's not really good for
01:14:35.00 very low light, very dim samples.
01:14:38.00 Fortunately, there's a solution to get
01:14:41.00 around this, which is instead of using a
01:14:43.00 single pinhole and a PMT we use many
01:14:45.00 pinholes in a camera.
01:14:46.15 And this allows us to address
01:14:47.00 both of these issues.
01:14:49.26 There are a number of geometries
01:14:51.10 you can use to do this.
01:14:52.23 A number of companies have made
01:14:54.00 different microscopes that use multiple
01:14:57.00 pinholes and cameras to record images.
01:14:59.00 But the most common one, and the one
01:15:00.13 that's achieved probably the largest usage,
01:15:02.28 is called the spinning disk confocal.
01:15:05.00 And so, that's the one I'll describe here.
01:15:07.16 So, what a spinning disk confocal does is,
01:15:09.16 instead of using a
01:15:11.00 single pinhole to illuminate the sample,
01:15:12.00 we put a disc that's punched with many
01:15:15.00 pinholes in it, and they're arranged in
01:15:17.00 a pattern that if we rotate this disc
01:15:19.00 around its axis, those pinholes will
01:15:21.00 sweep out across every point in our
01:15:23.00 sample once and only once.
01:15:24.22 So that for every rotation of the disc will
01:15:27.00 illuminate every point in our sample
01:15:29.00 once and only once.
01:15:29.25 And so, you can see
01:15:31.00 here we've got these pinholes
01:15:31.25 they're placed exactly in the same focal
01:15:34.02 plane as the laser scanning confocal
01:15:36.00 microscope.
01:15:37.08 But you can see now instead
01:15:39.00 of illuminating a single spot in our
01:15:40.00 specimen, we illuminate many spots
01:15:41.13 on our specimen.
01:15:42.26 And then because we're doing this and
01:15:44.23 the disk spins very fast, such that it'll
01:15:47.00 illuminate every point in our sample,
01:15:48.14 say in a micro or in a millisecond,
01:15:52.15 we can then just record the emitted light
01:15:55.25 onto a camera because we're now building up
01:15:58.00 an image by sweeping these pinholes out
01:16:00.00 across the sample, rather than moving a
01:16:02.00 single spot pixel by pixel.
01:16:05.29 And otherwise, the optics are basically
01:16:08.04 exactly the same.
01:16:09.15 The pinholes, we use the same
01:16:12.00 pinhole for excitation and emission.
01:16:14.02 The disc spins fast but it's slow
01:16:16.00 compared to the speed of light so that
01:16:18.00 the emitted light makes it back to the
01:16:20.00 same pinhole that excited it.
01:16:22.07 And then the only other thing
01:16:23.03 we need to add is,
01:16:24.00 there's a technical refinement here,
01:16:26.00 which is to put microlenses,
01:16:27.22 a matched set of micro lenses,
01:16:30.21 in front of these pinholes,
01:16:31.15 such that we can focus our
01:16:33.00 laser beams tightly through the pinholes
01:16:36.00 in the spinning disk here to get
01:16:38.00 high light efficiency delivered to the
01:16:40.17 sample, so we can get most of our light
01:16:42.00 reaching the sample.
01:16:44.07 So, this solves our slow problem
01:16:45.00 because we're now illuminating
01:16:47.00 many pinholes. We can collect an image
01:16:49.00 in milliseconds of the whole sample.
01:16:51.04 And it also eliminates a lot of our poor
01:16:56.00 light efficiency issues because we can
01:16:57.00 now use a very high efficiency CCD,
01:16:59.20 like an electron multiplying CCD,
01:17:01.15 or a back-thinned CCD,
01:17:03.20 to record the emitted light.
01:17:06.11 And so, we now record almost every
01:17:07.22 photon that leaves our sample.
01:17:10.18 This is what the pinholes look like.
01:17:12.17 So, this is just an image I took
01:17:13.26 of our spinning disc, where
01:17:16.00 we stopped the disc rotation.
01:17:16.18 So, the disc is parked
01:17:18.03 and we just shine light through it.
01:17:19.29 And so, you can see there, here's the
01:17:22.00 pinhole grid of this little piece of the disc.
01:17:23.00 And it rotates such that these
01:17:26.00 pinholes will sweep out over every
01:17:28.00 point in the sample once and only once.
01:17:31.00 Here's an example of a movie acquired
01:17:33.00 with a spinning disk confocal.
01:17:33.22 So, this is a time-lapse movie
01:17:35.19 of a dividing Drosophila S2 cell.
01:17:40.13 This is about 15 minutes
01:17:43.16 compressed to you know
01:17:44.14 a few seconds here.
01:17:45.27 So, because they're fast and
01:17:48.00 high light efficiency,
01:17:48.24 spinning discs are very good for live samples.
01:17:51.12 Here's a 3-dimensional reconstruction
01:17:53.09 of a yeast cell expressing a
01:17:55.27 mitochondrially-targeted RFP.
01:17:57.11 These are actually the same
01:17:59.00 yeast cells I showed at the very
01:18:00.00 beginning of this talk.
01:18:01.07 So, it does a good job doing
01:18:02.01 3D reconstructions as well.
01:18:05.26 So, what's its downside?
01:18:08.25 So, imagine here
01:18:10.00 illuminating a single one of these
01:18:12.00 pinholes in the spinning disk.
01:18:13.08 In reality, they're all being illuminated.
01:18:15.02 But let's just consider one.
01:18:16.20 So, you can see here in the center,
01:18:18.03 I've marked this one pinhole here
01:18:19.20 with a little green dot.
01:18:20.19 That's the guy who's exciting the sample.
01:18:22.17 So now, if the sample
01:18:25.00 is in focus, we'll create a little spot
01:18:26.00 of green in our sample. It'll emit and
01:18:29.00 we'll collect that in-focus light back
01:18:31.00 through that same pinhole.
01:18:31.23 Now what happens if we start moving
01:18:32.28 that sample out of focus?
01:18:35.14 What's going to happen is that the disc
01:18:37.01 of out-of-focus light is going to start to grow.
01:18:39.04 As the sample gets
01:18:41.27 further and further out of focus,
01:18:42.29 that disc of out-of-focus light
01:18:44.05 gets bigger and bigger.
01:18:46.02 And eventually,
01:18:46.23 it'll be big enough that it's going to
01:18:48.15 overlap with adjacent pinholes in the disk.
01:18:50.29 And once that happens, we stop blocking
01:18:52.25 that out-of-focus light completely
01:18:54.06 because we're now picking it up through
01:18:55.05 these neighboring pinholes.
01:18:57.03 And so, this is the main limitation
01:18:58.21 of the spinning disc system:
01:19:00.06 it has limited out-of-focus rejection.
01:19:02.16 If your sample is far enough out of focus,
01:19:04.17 that light will make it through adjacent pinholes,
01:19:07.01 and that means that you're no longer
01:19:08.08 blocking all the out-of-focus light.
01:19:11.12 You can quantitate this.
01:19:14.00 And so, what I'm showing here
01:19:15.05 is a plot of basically how
01:19:17.04 much of the out of focus light is
01:19:18.14 transmitted as a function of how far out
01:19:21.02 of focus you are on the x axis.
01:19:23.04 And so, in this red curve here,
01:19:24.12 you can see the laser scanning confocal,
01:19:26.19 where no matter how far out of focus you get,
01:19:29.08 you continue to reject that light.
01:19:31.28 You, you know,
01:19:32.24 just attenuate it more and more
01:19:34.00 the further out of focus it is.
01:19:35.24 The spinning disk here, shown in blue,
01:19:38.02 for small amounts of defocus,
01:19:41.20 performs exactly the same
01:19:42.22 as a laser scanning confocal.
01:19:44.10 This is where that out-of-focus disc
01:19:47.00 isn't big enough to reach the adjacent
01:19:49.00 pinholes yet.
01:19:49.27 However, once it does reach
01:19:51.08 those adjacent pinholes you can see here
01:19:52.19 there's this plateau.
01:19:54.12 And once we get to that plateau,
01:19:55.16 basically we stop improving
01:19:56.27 our out of focus rejection.
01:19:58.23 And so, what this means is that
01:19:59.27 if your sample is really thick,
01:20:02.03 the out-of-focus light from
01:20:03.04 the far-away parts is going to
01:20:05.21 contribute significant background
01:20:06.26 to your image that would normally
01:20:08.19 be blocked by a laser scanning confocal.
01:20:10.11 That's the difference between the
01:20:11.10 spinning disk curve here
01:20:12.19 and the laser scanning confocal.
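The shape of those two curves can be reproduced with a very rough geometric-optics model. The sketch below is a hedged approximation, not a rigorous point-spread-function calculation: the objective NA, magnification, pinhole diameter, and pinhole pitch are assumed values chosen only to illustrate the plateau caused by pinhole crosstalk.

```python
# A rough geometric-optics sketch (not a rigorous PSF calculation) of the
# axial rejection curves discussed above: a single confocal pinhole keeps
# attenuating out-of-focus light as the blur disc grows, whereas a pinhole
# array stops improving once the blur reaches the neighbouring pinholes.
# All numbers are illustrative assumptions, not instrument specifications.
import numpy as np
import matplotlib.pyplot as plt

NA, M = 1.4, 100.0                     # objective NA and magnification (assumed)
pinhole_d = 50e-6                      # pinhole diameter at the disc, m (assumed)
pinhole_pitch = 250e-6                 # centre-to-centre pinhole spacing, m (assumed)

z = np.linspace(0.1e-6, 30e-6, 500)    # defocus in the sample, m
blur_d = 2 * NA * z * M                # blur-disc diameter at the pinhole plane (geometric)

# single pinhole: transmitted fraction ~ pinhole area / blur area
single = np.minimum(1.0, (pinhole_d / blur_d) ** 2)

# pinhole array: once the blur covers neighbours, the open-area fraction of
# the disc (pinhole area / pitch^2) leaks through and rejection plateaus
open_fraction = (np.pi / 4) * (pinhole_d / pinhole_pitch) ** 2
array = np.maximum(single, open_fraction)

plt.semilogy(z * 1e6, single, label="single pinhole (laser scanning)")
plt.semilogy(z * 1e6, array, label="pinhole array (spinning disc)")
plt.xlabel("defocus (um)")
plt.ylabel("fraction of out-of-focus light transmitted")
plt.legend()
plt.show()
```

In this toy model the plateau level is simply the open-area fraction of the disc, which is why, in the model at least, spacing the pinholes further apart pushes the crosstalk onset to larger defocus at the cost of slower coverage.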
01:20:15.08 And so, there's a crossover
01:20:19.00 point here, where for really thick
01:20:22.00 samples the laser scanning system will
01:20:23.00 be better than the spinning disk.
01:20:24.09 And that crossover is in the
01:20:25.24 20 to 30 micron thickness range.
01:20:28.11 That depends a little bit
01:20:29.17 on the spinning disk design
01:20:31.05 and the objective you're using
01:20:32.25 but for the commonly used 100x objective
01:20:35.00 and the Yokogawa spinning disk system,
01:20:37.00 it's around 20 or 30 microns.
01:20:38.25 And so, for thicker samples you're not
01:20:41.14 going to see as much benefit from the
01:20:42.25 spinning disk as you would expect.
01:20:44.23 Doesn't mean that you can't use it.
01:20:46.07 It just means that it won't perform
01:20:47.10 as well as you might like.
01:20:50.16 So, putting this all together we can
01:20:52.22 come up with some general guidelines as
01:20:54.07 to when to use confocal microscopy.
01:20:56.20 And I'm going to talk separately here about
01:20:57.27 fixed samples and live samples because
01:21:00.24 fixed samples are much less sensitive to
01:21:02.25 the amount of light you put on them.
01:21:03.28 For live cells if you shine too much light
01:21:06.02 on them they die or you get photo damage
01:21:08.05 to the cells, which prevents you
01:21:09.28 from getting the data you want.
01:21:11.21 So, for fixed samples, we can first consider
01:21:15.12 thin samples, or where you're working at
01:21:18.00 very low magnification, where the
01:21:20.00 thickness of your sample is not very
01:21:22.00 large compared to the focal volume of
01:21:24.00 your microscope, the region that your
01:21:26.00 microscope detects in focus.
01:21:27.05 So, if essentially your whole sample
01:21:31.00 is in focus there's no out-of-focus
01:21:32.00 light and so confocal doesn't give you
01:21:34.00 any advantages.
01:21:34.12 You can just use ordinary
01:21:36.00 conventional widefield microscopy.
01:21:39.00 However, once your samples start getting
01:21:41.00 thick with respect to the in-focus
01:21:43.00 volume your microscope can record,
01:21:46.00 say if you're using a 100x objective with,
01:21:46.26 you know, a 700 nanometer focal volume,
01:21:50.06 and your sample is 10 microns thick,
01:21:51.28 then you now have an advantage
01:21:53.03 for doing confocal.
01:21:54.16 And typically for fixed specimens you would
01:21:57.00 use laser scanning confocal because the
01:21:59.00 out of focus rejection is better than
01:22:02.00 for a spinning disc system.
01:22:03.10 The one exception may be if you're looking
01:22:06.00 at very dim fixed samples, where the higher
01:22:08.00 light sensitivity of the spinning disc
01:22:09.00 might be an advantage.
01:22:10.18 As your samples get thicker,
01:22:12.27 you know sort of in a 30 to 50 micron range,
01:22:14.27 the laser scanning confocal gets better and
01:22:16.17 better because in this range the spinning disc
01:22:19.15 system stops rejecting as much
01:22:21.00 out of focus light as the laser scanning
01:22:22.15 system.
01:22:24.00 So, this is probably the preferred way to go.
01:22:27.00 And then as you get to even thicker
01:22:29.00 specimens beyond 100 or 200 microns
01:22:32.00 the laser scanning confocal starts to
01:22:34.00 have problems dealing with the amount of
01:22:36.00 out of focus light and then you need to
01:22:37.00 go to two-photon microscopy or other
01:22:39.00 specialized techniques, which we'll cover
01:22:40.00 in another lecture here.
01:22:45.01 For live samples it's a bit of a different story.
01:22:49.19 Again, for thin
01:22:51.00 and low-magnification
01:22:53.00 samples, where your sample is thin
01:22:56.00 compared to the focal volume of your
01:22:57.00 microscope, generally you'd use wide
01:22:59.00 field systems here.
01:23:00.21 Because the confocal again
01:23:01.26 doesn't buy you any benefit.
01:23:05.00 That said, there may be some advantage to using the
01:23:06.00 spinning disc, because they seem to be
01:23:07.00 more live-cell friendly than widefield.
01:23:09.00 But this depends a lot on the details of
01:23:12.00 your instrumentation and it's hard to give
01:23:13.00 general guidelines.
01:23:14.08 If your samples again are,
01:23:15.18 you know, thin but thicker than the
01:23:18.00 focal volume of your objective,
01:23:20.00 so, say again, you know, a 10 micron
01:23:22.00 cell and you're using a 100x objective
01:23:23.00 with a 700 nanometer focal volume
01:23:25.22 and you need to do Z stacks,
01:23:27.01 you need to get slices through these guys,
01:23:28.14 then the spinning disc is really the microscope
01:23:30.27 of choice.
01:23:33.01 For thick specimens,
01:23:34.16 it becomes a bit more of a toss-up here.
01:23:35.28 So, once you get to this range,
01:23:37.02 where the spinning disk is
01:23:39.00 no longer rejecting as much out-of-focus
01:23:41.00 light as the laser scanning system,
01:23:44.00 it now starts to matter whether you really
01:23:47.00 need the extra out-of-focus rejection or
01:23:49.00 whether you need the better live-cell friendliness
01:23:51.00 and higher sensitivity of the spinning disk.
01:23:51.29 And so, it's hard to give general
01:23:55.00 recommendations in this range because
01:23:56.00 you may find that the better
01:23:58.00 out-of-focus rejection of the laser
01:24:00.00 scanning system is necessary to get
01:24:01.00 crisp images. Or you may find that your
01:24:04.00 sample is not really densely labeled,
01:24:05.00 there's only a small amount of
01:24:07.00 out-of-focus light, and so the higher
01:24:09.00 sensitivity and better live cell
01:24:10.00 friendliness of the spinning disk is an
01:24:12.00 advantage.
01:24:12.19 And again, when your samples
01:24:15.00 get really thick, two-photon microscopy
01:24:17.00 and other specialized techniques come into
01:24:19.00 play, because none of these systems is
01:24:21.00 really capable of handling it.
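As a compact summary of these guidelines, here is a small helper function that simply encodes the rules of thumb above. The thickness cut-offs are the approximate numbers quoted in this section, not hard limits, and real decisions will also depend on labeling density, light sensitivity, and the specific instruments available.

```python
# Encodes the lecture's rules of thumb only; cut-offs are approximate,
# not hard limits.
def suggest_microscope(thickness_um: float, live: bool) -> str:
    """Suggest an imaging modality from sample thickness and viability."""
    if thickness_um <= 1:                      # roughly within one focal volume
        return "widefield (confocal gives no benefit when everything is in focus)"
    if not live:                               # fixed samples tolerate more light
        if thickness_um <= 100:
            return "laser scanning confocal (best out-of-focus rejection)"
        return "two-photon or other specialized deep-imaging techniques"
    # live samples: light dose matters
    if thickness_um <= 20:
        return "spinning disc confocal (fast, light-efficient)"
    if thickness_um <= 100:
        return ("spinning disc if light dose dominates, laser scanning "
                "confocal if background rejection dominates")
    return "two-photon or other specialized deep-imaging techniques"

print(suggest_microscope(10, live=True))   # e.g. a 10-um cell on a 100x objective
```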


5. Experimental Section

Tissue sample preparation

Mouse brain and kidney tissues were purchased from BioLASCO Taiwan Co., Ltd.; the sex and age of the ICR mice were not specified. Metastatic lung tissues derived from the MMTV-PyMT breast cancer mouse model were collected from the laboratory of Dr. Tang-Long Shen (National Taiwan University), and the experimental details are described elsewhere. [57,58] All experimental procedures were handled in accordance with the protocols and ethical regulations approved by the Institutional Animal Care and Use Committee of National Taiwan University (IACUC approval No. NTU104-EL-00003). The intact organs were harvested immediately after euthanasia. The Luminal B human breast cancer tissue was collected from National Taiwan University Hospital. All procedures on human tissue were performed with the approval of the Research Ethics Committee B of National Taiwan University Hospital (NTUH-REC approval No. 201812125RINB). The intact organs and tissue samples were stored at −80°C prior to cryo-sectioning. For tissue cryo-sectioning, organs and tissue samples were flash-frozen in liquid nitrogen and kept at −20°C in the cryostat (Leica CM1900) to reach the optimal temperature before sectioning. The samples were cut into 14-μm-thick sections, thaw-mounted onto slides, and stored at −80°C without fixation prior to analysis. Slides used for nanoDESI MSI analysis were plain glass without any coating. For DESI, H&E staining, and immunostaining, tissues were thaw-mounted onto silane-coated slides, while the slides for MALDI-TOF analysis were coated with indium tin oxide. Before nanoDESI MS interrogation, tissue sections were dried in a desiccator for 1 hour and then rinsed with 50 ml of chloroform for 1 min. Pretreatment of the adjoining tissue sections for MALDI-TOF validation was as follows. Matrix application was achieved by the sublimation method described elsewhere, [59] followed by a recrystallization step. [60] The sublimation apparatus was purchased from Singlong (Taichung, Taiwan) and placed in a sand bath on a hot plate while applying matrix. The brain sections were adhered to ITO-coated glass slides with conductive tape and stored at −80°C before applying matrix. Sublimation was performed using 2,5-dihydroxyacetophenone (2,5-DHA) at 110°C under a 0.7 Torr vacuum for 10 minutes; the amount of applied matrix was determined by the exposure time. The matrix-coated samples were rehydrated with 50% TFA solution in an incubator at 37°C for 4 minutes. A sonication step was added to increase the signal for protein analysis. [61] Sonication was performed with an Elmasonic S 30 H ultrasonicator in continuous mode at a frequency of 37 kHz. For DESI MS interrogation, tissue sections were dried in a desiccator for 1 hour; no further sample pretreatment was conducted before analysis.

Ambient ionization mass spectrometry imaging

For DESI MSI, a commercial DESI source (Prosolia Inc., IN, USA) was mounted on an Orbitrap Elite to conduct the MSI measurements (mass range m/z 200 to 1000). The nitrogen gas pressure was set at 150 psi, the angle of the spray head was set to 55°, the flow rate of the solvent (DMF:ACN = 1:1) was 2 μl/min, and the voltage was 3.5 kV. The nanoDESI system was modified from the commercial DESI platform (mass range m/z 700 to 800), in which two flame-pulled fused-silica capillaries (O.D. 360 μm, I.D. 250 μm) were installed to replace the original DESI emitter. The solvent delivery (primary) capillary (65% ACN with 1% formic acid) was held at a high voltage (2.5 kV) to extract and ionize the compounds on the surface of the sample. The secondary capillary was used to deliver the extracts from the sample to the mass spectrometer. The MSI experiments were conducted using Prosolia's DESI 2D system. The scanning rates of the motor stage for DESI and nanoDESI MSI were set at approximately 150 μm s−1 and 30 μm s−1, respectively. The acquired raw data were imported into Firefly™ 2.2 for data conversion, and the converted data were then imported into BioMAP to obtain the final DESI and nanoDESI MS images. For H&E staining, the tissue slides were rinsed with 70% EtOH and then 100% EtOH for 30 sec each and allowed to dry under vacuum. Hematoxylin staining was applied at 60°C for 40 sec. After hematoxylin staining, the slides were rinsed with H2O, then with 0.3% acidified EtOH for 3 sec, and then with H2O again. Bluing was achieved with 1% NH4OH and finally H2O at room temperature. The slides were then dipped into the eosin stain for 20 sec and rinsed with H2O, then 80% EtOH, 90% EtOH, and finally 99% EtOH at room temperature.
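For readers estimating spatial sampling from these settings, the along-scan pixel width in a line-scanning DESI or nanoDESI acquisition is roughly the stage speed multiplied by the time needed to record one mass spectrum. The sketch below uses the stage speeds quoted above but assumes a hypothetical 1 s spectral acquisition time, since the scan time is not stated here.

```python
# Back-of-the-envelope pixel size along the scan direction for a
# line-scanning DESI / nanoDESI acquisition: pixel width is roughly the
# stage speed multiplied by the time to acquire one mass spectrum.
# The 1 s scan time below is an assumed value, not taken from the methods.
def lateral_pixel_um(stage_speed_um_per_s: float, scan_time_s: float) -> float:
    return stage_speed_um_per_s * scan_time_s

for name, speed in [("DESI", 150.0), ("nanoDESI", 30.0)]:
    print(f"{name}: ~{lateral_pixel_um(speed, scan_time_s=1.0):.0f} um per pixel")
```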

Data preprocessing for imaging fusion

The MSI data were converted into a *.img file by Firefly™ 2.2 and loaded into MSiReader, a MATLAB-based package. For each dataset, the H&E image and the MSI data were overlaid in MSiReader to determine the regions of interest. [62] The mass spectral signals were binned into 2,000 features (equivalent to 0.4 m/z each for DESI MSI and 0.05 m/z each for nanoDESI MSI) and summed before being exported as a uniformly spaced text file. The microscope images were collected with an optical microscope (WHITED INC., Taipei, Taiwan) equipped with a PSC600-05C digital camera (OPLENIC CORP., USA), and the image data were processed with AOR AJ-VERT. The microscopy images were then exported as a uniform data array by a home-built Matlab script. An affine transform matrix describing the spatial relationship between the MSI data and the microscopy data was calculated using a second home-built Matlab script; the best fit between the MSI data and the microscopy data could then be found to ensure optimal alignment. The processed data were exported by an in-house Matlab script and imported into the "Molecular image fusion system" under the command-line interface to generate the predictive MSI datasets. [38] The output high-resolution MSI with reconstruction scores >75% were exported for further ROC AUC analysis.
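As an illustration of the registration step, the following minimal sketch (not the authors' in-house Matlab script) estimates a 2D affine transform between MSI pixel coordinates and microscopy coordinates by least squares from a few hypothetical matched control points.

```python
# A minimal sketch (not the authors' in-house script) of how an affine
# transform between MSI pixel coordinates and microscopy coordinates can
# be estimated from a handful of manually matched control points.
import numpy as np

# hypothetical matched control points: (row, col) in MSI vs. in the microscopy image
msi_pts = np.array([[10, 12], [10, 80], [55, 12], [55, 80]], dtype=float)
micro_pts = np.array([[105, 130], [102, 910], [618, 140], [615, 920]], dtype=float)

# solve micro = [msi_row, msi_col, 1] @ A for the 3x2 affine matrix A (least squares)
ones = np.ones((len(msi_pts), 1))
X = np.hstack([msi_pts, ones])                       # N x 3
A, *_ = np.linalg.lstsq(X, micro_pts, rcond=None)    # 3 x 2

def msi_to_micro(points):
    """Map MSI pixel coordinates into microscopy image coordinates."""
    pts = np.atleast_2d(points).astype(float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A

print(msi_to_micro([32, 46]))   # one MSI pixel mapped into the microscopy frame
```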

Receiver operating characteristic curve

The ROC curves were constructed with our home-built MATLAB code. Each ROC curve is given by the corresponding values of the sensitivity and the (1 − specificity) at various ion intensity thresholds for an individual m/z, used to make a binary classification of tissue. If the intensity of a specific m/z peak in a particular pixel was higher than the intensity threshold, the pixel was considered "Cancer"; otherwise, the pixel was assigned as "Normal". The resulting sets of assignments were compared, pixel by pixel, with the labels of the H&E-stained metastatic lung tissue image evaluated by the pathologist to calculate the sensitivity and the specificity of each ion. The sensitivity (true positive rate) was defined as the ratio of "the number of Cancer pixels determined by both MSI and the pathologist" to "the number of Cancer pixels labeled by the pathologist". The (1 − specificity), or false positive rate, was defined as the ratio of "the number of Cancer pixels determined by MSI in the normal tissue region" to "the number of Normal pixels labeled by the pathologist". The ROC curve of an m/z peak was constructed by varying the intensity threshold from 0% to 100% of the highest intensity of that ion over all pixels.
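The per-ion ROC construction described above can be summarized in a short script. The following is a simplified Python analogue of the home-built MATLAB code (the data in the example are synthetic), sweeping the intensity threshold from 0% to 100% of an ion's maximum and comparing the resulting Cancer/Normal calls against the pathologist's labels.

```python
# A simplified Python analogue (not the authors' MATLAB code) of the
# per-ion ROC construction: sweep an intensity threshold from 0% to 100%
# of the ion's maximum, classify each pixel as Cancer or Normal, and
# compare against the pathologist's labels.
import numpy as np

def roc_curve_for_ion(intensity, cancer_label, n_thresholds=101):
    """intensity: 1D ion intensities per pixel; cancer_label: boolean per pixel."""
    intensity = np.asarray(intensity, dtype=float)
    cancer_label = np.asarray(cancer_label, dtype=bool)
    thresholds = np.linspace(0, intensity.max(), n_thresholds)
    tpr, fpr = [], []
    for t in thresholds:
        predicted_cancer = intensity > t
        tp = np.sum(predicted_cancer & cancer_label)
        fp = np.sum(predicted_cancer & ~cancer_label)
        tpr.append(tp / max(cancer_label.sum(), 1))       # sensitivity
        fpr.append(fp / max((~cancer_label).sum(), 1))    # 1 - specificity
    return np.array(fpr), np.array(tpr)

# toy example: 1000 pixels, cancer pixels slightly brighter for this m/z
rng = np.random.default_rng(0)
labels = rng.random(1000) < 0.3
ion = rng.normal(1.0, 0.3, 1000) + 0.5 * labels
fpr, tpr = roc_curve_for_ion(ion, labels)
auc = np.trapz(tpr[::-1], fpr[::-1])          # area under the ROC curve
print(f"AUC = {auc:.2f}")
```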

Immunofluorescence staining

All the procedures for immunostaining were based on the protocols of the commercial kit (TAHC03, BioTnA, Kaohsiung, Taiwan). After immunostaining, the slides were mounted and digitized with a Motic Easyscan Digital Slide Scanner (Motic Hong Kong Limited, Hong Kong, China) at ×40 (0.26 μm/pixel) with high-precision autofocus. Motic Easyscan whole-slide images were viewed with DSAssistant and EasyScanner software at Litzung Biotechnology INC (Kaohsiung, Taiwan).

Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry imaging

MALDI-TOF MSI results were acquired using a MALDI-TOF/TOF mass spectrometer (Autoflex Speed MALDI TOF/TOF system, Bruker Daltonics). The instrument was equipped with the third harmonic of an Nd:YAG SmartBeam™-II laser (355 nm). Imaging spectra were recorded and processed with FlexControl 3.4 and FlexImaging 3.0 (Bruker Daltonics). The spectra were acquired in positive polarity with a pixel-to-pixel resolution of 60 μm using the following parameters: laser attenuator offset at 80% of the maximum power; laser operating power at 90% under linear mode for protein analysis and 80% under reflectron mode for lipid analysis; smartbeam parameter set to 2_small; laser repetition rate at 1 kHz; and 1,000 accumulated shots per pixel for imaging analysis. The resulting imaging spectra were processed using TopHat baseline subtraction and normalized to the total ion count per pixel.
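Both post-processing steps named above are generic operations; the sketch below (not Bruker's implementation, and with an assumed structuring-element size and synthetic data) shows a top-hat style baseline subtraction via morphological opening followed by per-pixel TIC normalization.

```python
# A generic sketch (not Bruker's implementation) of the two post-processing
# steps named above: a "top-hat" baseline subtraction, i.e. subtracting a
# morphological opening of each spectrum, followed by normalization of each
# pixel's spectrum to its total ion count (TIC).
import numpy as np
from scipy.ndimage import grey_opening

def tophat_baseline(spectrum, structure_size=101):
    """Remove the slowly varying baseline with a morphological opening."""
    baseline = grey_opening(spectrum, size=structure_size)
    return spectrum - baseline

def tic_normalize(spectrum):
    """Scale a spectrum so its total ion count is 1 (avoids divide-by-zero)."""
    tic = spectrum.sum()
    return spectrum / tic if tic > 0 else spectrum

# toy imaging dataset: 50 pixels x 2000 m/z bins, with a drifting baseline
rng = np.random.default_rng(1)
pixels = rng.random((50, 2000)) + np.linspace(0, 5, 2000)
processed = np.array([tic_normalize(tophat_baseline(p)) for p in pixels])
print(processed.shape)
```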

Mass spectrometry-based molecular identifications

Protein ions generated by the nanoDESI source were directly introduced into the LTQ Orbitrap Elite for top-down tandem mass analysis. [45,46] Protein ions of interest were isolated with a 5-m/z isolation window centered on the precursor and fragmented with an activation Q of 0.25 and a normalized collision energy of 30%. The data were imported into ProSight PTM for identification. [63,64] Lipid species in the brain tissue were extracted with the MTBE method for further tandem mass analysis. [65] The HPLC-MS/MS analysis was performed using an HPLC (LC-20AD, Shimadzu, Tokyo, Japan) coupled with an Orbitrap Elite (Thermo Scientific). HPLC separations used a C18 column (100 × 2.1 mm, 3.5 μm, Agilent) with the following gradient elution: mobile phase A = water with 0.1% formic acid (v/v); mobile phase B = acetonitrile and isopropanol (10:90, v/v) with 0.1% formic acid (v/v); elution profile = 0.0-5.0 min (40% mobile phase B), 5.0-35.0 min (40-90% mobile phase B), 35.0-50.0 min (90% mobile phase B); column oven at 25°C; injection volume 10 μL; and flow rate 0.15 mL/min. Mass spectrometry acquisition parameters were as follows: positive ion mode, heater temperature 180°C, sheath gas flow rate 35 arb, auxiliary gas flow rate 10 arb, sweep gas flow rate 10 arb, spray voltage 3.5 kV, and capillary temperature 350°C. CID fragmentation was performed on the ion peaks targeted from the DESI analysis with a collision energy of 30% and an activation Q of 0.25. The mass spectra were collected in FT mode with 30,000 resolving power. The mass spectral analysis was processed in Xcalibur QualBrowser.

