Preparation of Electron Microscope Specimens


Materials to be viewed under an electron microscope may require processing to produce a suitable sample. The technique used varies depending on the specimen and the analysis required:

  • Cryofixation – freezing a specimen so rapidly, to liquid nitrogen or even liquid helium temperatures, that the water forms vitreous (non-crystalline) ice. This preserves the specimen in a snapshot of its solution state. An entire field called cryo-electron microscopy has branched from this technique. With the development of cryo-electron microscopy of vitreous sections (CEMOVIS), it is now possible to observe virtually any biological specimen close to its native state.
  • Dehydration – replacing water with organic solvents such as ethanol or acetone.
  • Embedding – infiltration of the tissue with a resin such as araldite or epoxy for sectioning. After embedding, the specimen may be ground and polished to a mirror-like finish using ultra-fine abrasives. The polishing must be performed carefully to minimise scratches and other polishing artefacts that degrade image quality.
  • Sectioning – produces thin slices of specimen, semitransparent to electrons. These can be cut on an ultramicrotome with a diamond knife to produce very thin slices. Glass knives are also used because they can be made in the lab and are much cheaper.
  • Staining – uses heavy metals such as lead, uranium or tungsten to scatter imaging electrons and thus give contrast between different structures, since many (especially biological) materials are nearly "transparent" to electrons (weak phase objects). In biology, specimens are usually stained "en bloc" before embedding and also later stained directly after sectioning by brief exposure to aqueous (or alcoholic) solutions of the heavy metal stains.
  • Freeze-fracture or freeze-etch – a preparation method particularly useful for examining lipid membranes and their incorporated proteins in "face on" view. The fresh tissue or cell suspension is frozen rapidly (cryofixed), then fractured by simply breaking or by using a microtome while maintained at liquid nitrogen temperature. The cold fractured surface (sometimes "etched" by increasing the temperature to about -100°C for several minutes to let some ice sublime) is then shadowed with evaporated platinum or gold at an average angle of 45° in a high vacuum evaporator. A second coat of carbon, evaporated perpendicular to the average surface plane, is often applied to improve the stability of the replica coating. The specimen is returned to room temperature and pressure, and the extremely fragile "pre-shadowed" metal replica of the fracture surface is released from the underlying biological material by careful chemical digestion with acids, hypochlorite solution or SDS detergent. The still-floating replica is washed free of residual chemicals, carefully fished up on EM grids, dried and then viewed in the TEM.
  • Ion Beam Milling – thins samples until they are transparent to electrons by firing ions (typically argon) at the surface from an angle and sputtering material from the surface. A subclass of this is Focused ion beam milling, where gallium ions are used to produce an electron transparent membrane in a specific region of the sample, for example through a device within a microprocessor. Ion beam milling may also be used for cross-section polishing prior to SEM analysis of materials that are difficult to prepare using mechanical polishing.
  • Conductive Coating – An ultrathin coating of electrically conducting material, deposited either by high vacuum evaporation or by low vacuum sputter coating of the sample. This is done to prevent the accumulation of static electric fields at the specimen due to the electron irradiation required during imaging. Such coatings include gold, gold/palladium, platinum, tungsten, graphite etc. and are especially important for the study of specimens with the scanning electron microscope. Another reason for coating, even when there is more than enough conductivity, is to improve contrast, a situation more common with the operation of a FESEM (field emission SEM). An osmium coater can deposit a layer far thinner than any of the sputtered coatings mentioned above.


Scanning Electron Microscope


Unlike the TEM, where electrons of the high voltage beam form the image of the specimen, the Scanning Electron Microscope (SEM) produces images by detecting low energy secondary electrons which are emitted from the surface of the specimen due to excitation by the primary electron beam. In the SEM, the electron beam is rastered across the sample, with detectors building up an image by mapping the detected signals with beam position.

Generally, TEM resolution is about an order of magnitude better than SEM resolution; however, because the SEM image relies on surface processes rather than transmission, it can image bulk samples and has a much greater depth of field, and so can produce images that are a good representation of the 3D structure of the sample.

Transmission Electron Microscope

The original form of electron microscopy, transmission electron microscopy (TEM), involves a high voltage electron beam emitted by a cathode, usually a tungsten filament, and focused by electrostatic and electromagnetic lenses. The beam transmitted through a specimen that is partly transparent to electrons carries information about the specimen's inner structure to the imaging system of the microscope. The spatial variation in this information (the "image") is then magnified by a series of electromagnetic lenses until it is recorded by hitting a fluorescent screen, photographic plate, or light-sensitive sensor such as a CCD (charge-coupled device) camera. The image detected by the CCD may be displayed in real time on a monitor or computer.

Resolution of the TEM is limited primarily by spherical aberration, but a new generation of aberration correctors has been able to partially overcome spherical aberration to increase resolution. Software correction of spherical aberration for the high resolution TEM (HRTEM) has allowed the production of images with sufficient resolution to show carbon atoms in diamond separated by only 0.89 ångström (89 picometers) and atoms in silicon at 0.78 ångström (78 picometers), at magnifications of 50 million times. The ability to determine the positions of atoms within materials has made the HRTEM an important tool for nanotechnology research and development.


Electron Microscope

An electron microscope is a type of microscope that uses electrons to illuminate a specimen and create an enlarged image. Electron microscopes have much greater resolving power than light microscopes and can obtain much higher magnifications. Some electron microscopes can magnify specimens up to 2 million times, while the best light microscopes are limited to magnifications of about 2000 times. In both instruments, resolving power and magnification are limited by the wavelength of the radiation used to form the image; the greater resolution and magnification of the electron microscope are due to the de Broglie wavelength of an electron being much smaller than the wavelength of visible light.
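To make the wavelength comparison concrete, here is a minimal Python sketch (the chosen voltages and the 550 nm figure for green light are illustrative assumptions, not values from the text) that computes the relativistic de Broglie wavelength of an electron accelerated through a potential:

    # De Broglie wavelength of an electron accelerated through V volts,
    # including the relativistic correction term. Illustrative values only.
    import math

    H = 6.626e-34    # Planck constant (J*s)
    M_E = 9.109e-31  # electron rest mass (kg)
    Q_E = 1.602e-19  # elementary charge (C)
    C = 2.998e8      # speed of light (m/s)

    def electron_wavelength_m(volts):
        e_v = Q_E * volts  # kinetic energy in joules
        return H / math.sqrt(2 * M_E * e_v * (1 + e_v / (2 * M_E * C ** 2)))

    for v in (100, 10_000, 100_000):
        print(f"{v:>7} V: {electron_wavelength_m(v) * 1e12:.2f} pm")
    # At 100 kV the wavelength is about 3.7 pm, roughly five orders of
    # magnitude shorter than ~550 nm green light.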

The electron microscope uses electrostatic and electromagnetic lenses to form the image, controlling the electron beam to focus it at a specific plane relative to the specimen, much as a light microscope uses glass lenses to focus light on or through a specimen to form an image.



History of Electron Microscopy

The first electron microscope prototype was built in 1931 by the German engineers Ernst Ruska and Max Knoll. It was based on the ideas and discoveries of French physicist Louis de Broglie. Although it was primitive and not fit for practical use, the instrument was still capable of magnifying objects by four hundred times.

Reinhold Rudenberg, the research director of Siemens, had patented the electron microscope in 1931, although Siemens was doing no research on electron microscopes at that time. In 1937 Siemens began funding Ruska and Bodo von Borries to develop an electron microscope. Siemens also employed Ruska's brother Helmut to work on applications, particularly with biological specimens.

In the same decade, Manfred von Ardenne pioneered the scanning electron microscope and his universal electron microscope.

Siemens produced the first commercial TEM in 1939, but the first practical electron microscope had been built at the University of Toronto in 1938, by Eli Franklin Burton and students Cecil Hall, James Hillier and Albert Prebus.

Although modern electron microscopes can magnify objects up to two million times, they are still based upon Ruska's prototype. The electron microscope is an integral part of many laboratories. Researchers use it to examine biological materials (such as microorganisms and cells), a variety of large molecules, medical biopsy samples, metals and crystalline structures, and the characteristics of various surfaces. The electron microscope is also used extensively for inspection, quality assurance and failure analysis applications in industry, including, in particular, semiconductor device fabrication.


Ultrasonic Range Finding

A common use of ultrasound is in range finding; this use is also called SONAR, (sound navigation and ranging). This works similarly to RADAR (radio detection and ranging): An ultrasonic pulse is generated in a particular direction. If there is an object in the path of this pulse, part or all of the pulse will be reflected back to the transmitter as an echo and can be detected through the receiver path. By measuring the difference in time between the pulse being transmitted and the echo being received, it is possible to determine how far away the object is.
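A minimal sketch of that time-of-flight arithmetic, in Python (the sound speeds are nominal assumed values; real systems compensate for temperature and, underwater, salinity):

    # Pulse-echo ranging: the pulse travels out and back, so the
    # one-way distance is speed * round_trip_time / 2.
    SPEED_OF_SOUND = {"air": 343.0, "seawater": 1500.0}  # m/s, nominal

    def echo_range_m(round_trip_s, medium="air"):
        """Distance to the reflector, in meters."""
        return SPEED_OF_SOUND[medium] * round_trip_s / 2.0

    print(echo_range_m(0.012))             # ~2.06 m in air
    print(echo_range_m(0.40, "seawater"))  # ~300 m under water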

The measured travel time of SONAR pulses in water is strongly dependent on the temperature and the salinity of the water. Ultrasonic ranging is also applied for measurement in air and for short distances; such methods can easily and rapidly measure the layout of rooms.

Although range finding underwater is performed at both sub-audible and audible frequencies for great distances (one to several tens of kilometers), ultrasonic range finding is used when distances are shorter and finer accuracy is required. Ultrasonic measurements may be limited by barrier layers with large salinity, temperature or vortex differentials. Ranging in water varies from hundreds to thousands of meters, but can be performed with an accuracy of centimeters to meters.

Ultrasonic Disintegration

Some sorts of ultrasound can disintegrate biological cells, including bacteria. This has uses in biological science and in killing bacteria in sewage. High power ultrasound at a frequency of around 20 kHz produces cavitation that facilitates particle disintegration. Dr. Samir Khanal of Iowa State University employed high power ultrasound to disintegrate corn slurry to enhance liquefaction and saccharification for higher ethanol yield in dry corn milling plants.

See examples:

  • Ultrasound pre-treatment of waste activated sludge
  • Retooling ethanol industries: integrating ultrasonics into dry corn milling to enhance ethanol yield
  • Enhancement of anaerobic sludge digestion by ultrasonic disintegration

Sonochemistry

Power ultrasound in the 20-100 kHz range is used in chemistry. The ultrasound does not interact directly with molecules to induce the chemical change, as its typical wavelength (in the millimeter range) is far longer than molecular dimensions. Instead:

  • It causes cavitation which causes local extremes of temperature and pressure in the liquid where the reaction happens.
  • It breaks up solids and removes passivating layers of inert material to give a larger surface area for the reaction to occur over.

Both of these make the reaction faster.
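The wavelength claim above is easy to check with the relation lambda = c / f; this small Python sketch assumes a nominal 1482 m/s for the speed of sound in water:

    # Wavelength of power ultrasound in water: lambda = c / f.
    C_WATER = 1482.0  # m/s, nominal speed of sound in water

    for f_hz in (20e3, 100e3):
        wavelength_mm = C_WATER / f_hz * 1e3
        print(f"{f_hz / 1e3:.0f} kHz -> {wavelength_mm:.1f} mm")
    # 20 kHz -> ~74 mm, 100 kHz -> ~15 mm: vastly larger than molecular
    # dimensions, so the chemical effects come from cavitation instead.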

Ultrasound and Animals

Bats

Bats use a variety of ultrasonic ranging (echolocation) techniques to detect their prey. They can detect frequencies as high as 100 kHz, although there is some disagreement on the upper limit.

Dogs

Dogs can hear sound at higher frequencies than humans can. A dog whistle exploits this by emitting a high frequency sound to call to a dog. Many dog whistles emit sound in the upper audible range, but some, such as the silent whistle, emit ultrasound at a frequency in the range of 18 kHz to 22 kHz.

Dolphins and Whales

It is well known that some whales can hear ultrasound and have their own natural sonar system. Some whales use ultrasound as a hunting tool, both to detect prey and to attack it.

Fish

Several types of fish can detect ultrasound. Within the order Clupeiformes, members of the subfamily Alosinae (shad) have been shown to detect sounds up to 180 kHz, while the other subfamilies (e.g. herrings) can hear only up to 4 kHz.

Moths

There is evidence that ultrasound in the range emitted by bats causes flying moths to make evasive manoeuvres, because bats eat moths. Ultrasonic frequencies trigger a reflex action in the noctuid moth that causes it to drop a few inches in its flight to evade attack.

Rodents/Insects

Ultrasound generator/speaker systems are sold with claims that they frighten away rodents and insects, but there is no conclusive scientific evidence that the devices work; controlled tests on some of the systems have shown that rodents quickly learn that the speakers are harmless. Laboratory tests conducted by Kansas State University did show positive results for products from specific manufacturers, but these were limited to units which use constantly modulating frequencies. The frequencies used, however, are often within the range that most children can hear, and can cause headaches.

Mosquitoes

There is a theory that ultrasound of certain frequencies, while not audible to humans, can repel mosquitoes, and computer programs available on the internet claim to use this phenomenon for pest control. There have been mixed reports about the effectiveness of this method of mosquito control. These claims are also made questionable by the fact that most if not all computer speakers, and the soundcards that drive them, are incapable of producing sound much beyond the upper and lower limits of human hearing.

Industrial Ultrasound


Non-destructive testing of a swing shaft showing spline cracking.

Ultrasonic testing is a type of nondestructive testing commonly used to find flaws in materials and to measure the thickness of objects. Frequencies of 2 to 10 MHz are common, but other frequencies are used for special purposes. Inspection may be manual or automated, and is an essential part of modern manufacturing processes. Most metals can be inspected, as well as plastics and aerospace composites. Lower frequency ultrasound (50 kHz to 500 kHz) can also be used to inspect less dense materials such as wood, concrete and cement.
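As a hedged sketch of how pulse-echo thickness measurement works (the longitudinal-wave velocities are nominal textbook values, not from the text):

    # Wall thickness from an ultrasonic pulse-echo round trip:
    # thickness = velocity * time / 2.
    VELOCITY_M_S = {"steel": 5900.0, "aluminium": 6320.0, "perspex": 2730.0}

    def thickness_mm(round_trip_us, material):
        """Thickness in mm from a round-trip time in microseconds."""
        return VELOCITY_M_S[material] * round_trip_us * 1e-6 / 2.0 * 1e3

    print(thickness_mm(3.4, "steel"))  # ~10 mm steel plate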

Ultrasound can also be used to enhance heat transfer in liquids.

Researchers have recently employed ultrasound in dry corn milling plants to enhance ethanol production.


Biomedical Ultrasonic Applications

Ultrasound also has therapeutic applications, which can be highly beneficial when used with dosage precautions.

  • According to RadiologyInfo, ultrasound is useful in the detection of pelvic abnormalities and can involve techniques known as abdominal (transabdominal) ultrasound, vaginal (transvaginal or endovaginal) ultrasound in women, and rectal (transrectal) ultrasound in men.
  • Treating benign and malignant tumors and other disorders, via a process known as Focused Ultrasound Surgery (FUS) or HIFU, High Intensity Focused Ultrasound. These procedures generally use lower frequencies than medical diagnostic ultrasound (from 250 kHz to 2000 kHz), but significantly higher time-averaged intensities. The treatment is often guided by MRI, as in Magnetic Resonance guided Focused Ultrasound.
  • Therapeutic ultrasound, a technique that uses more powerful ultrasound sources to generate local heating in biological tissue, e.g. in occupational therapy, physical therapy and cancer treatment.
  • Cleaning teeth in dental hygiene.
  • Focused ultrasound sources may be used for cataract treatment by phacoemulsification.
  • Additional physiological effects of low-intensity ultrasound have recently been discovered, e.g. the ability to stimulate bone-growth and its potential to disrupt the blood-brain barrier for drug delivery.
  • Ultrasound is used in UAL (= ultrasound-assisted lipectomy), or liposuction.
  • Doppler ultrasound is being tested for use in aiding tissue plasminogen activator treatment in stroke sufferers. This procedure is called Ultrasound-Enhanced Systemic Thrombolysis.
  • Low intensity pulsed ultrasound is used for therapeutic tooth and bone regeneration.
  • Ultrasound can also be used for elastography. This can be useful in medical diagnoses, as elasticity can distinguish healthy from unhealthy tissue for specific organs/growths. In some cases unhealthy tissue may have a lower system Q, meaning that the system acts more like a large heavy spring, compared to the higher system Q of healthy tissue, which responds to higher forcing frequencies. Ultrasonic elastography differs from conventional ultrasound in that a transceiver (pair) and a transmitter are used instead of only a transceiver. One transducer (a single element or an array of elements) acts as both transmitter and receiver to image the region of interest over time. The extra transmitter is a very low frequency transmitter that perturbs the system so that the unhealthy tissue oscillates at a low frequency while the healthy tissue does not. The transceiver, which operates at a high frequency (typically MHz), then measures the displacement of the unhealthy tissue as it oscillates at the much lower frequency. The movement of the slowly oscillating tissue is used to determine the elasticity of the material, which can then be used to distinguish healthy from unhealthy tissue.
  • Ultrasound has been shown to act synergistically with antibiotics in bacterial cell killing.
  • Ultrasound has been postulated to allow thicker eukaryotic cell tissue cultures by promoting nutrient penetration.
  • Ultrasound in the low MHz range in the form of standing waves is an emerging tool for contactless separation, concentration and manipulation of microparticles and biological cells. The basis is the acoustic radiation force, a non-linear effect which causes particles to be attracted to either the nodes or anti-nodes of the standing wave depending on the acoustic contrast factor, which is a function of the sound velocities and densities of the particle and of the medium in which the particle is immersed.
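The acoustic contrast factor mentioned in the last item above can be illustrated numerically. This Python sketch uses one common convention (the Yosioka-Kawasima form); the material values are nominal assumptions:

    # Acoustic contrast factor phi for a small compressible sphere in a
    # standing wave: phi > 0 drives the particle to pressure nodes,
    # phi < 0 to antinodes. beta = 1 / (rho * c^2) is compressibility.
    def compressibility(rho, c):
        return 1.0 / (rho * c * c)

    def contrast_factor(rho_p, c_p, rho_f, c_f):
        beta_ratio = compressibility(rho_p, c_p) / compressibility(rho_f, c_f)
        return (5 * rho_p - 2 * rho_f) / (2 * rho_p + rho_f) - beta_ratio

    # Polystyrene bead (1050 kg/m^3, 2350 m/s) in water (998 kg/m^3, 1482 m/s):
    print(contrast_factor(1050.0, 2350.0, 998.0, 1482.0))  # ~ +0.67 -> nodes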

Diagnostic Sonography

Medical sonography (ultrasonography) is an ultrasound-based diagnostic medical imaging technique used to visualize muscles, tendons, and many internal organs, their size, structure and any pathological lesions with real time tomographic images. It is also used to visualize a fetus during routine and emergency prenatal care. Ultrasound scans are performed by medical health care professionals called sonographers, and obstetric sonography is commonly used during pregnancy. Ultrasound has been used to image the human body for at least 50 years and is one of the most widely used diagnostic tools in modern medicine. The technology is relatively inexpensive and portable, especially when compared with modalities such as magnetic resonance imaging (MRI) and computed tomography (CT).

As currently applied in the medical environment, ultrasound poses no known risks to the patient. Sonography is generally described as a "safe test" because it does not use ionizing radiation, which imposes hazards such as cancer production and chromosome breakage. However, ultrasonic energy has two potential physiological effects: it enhances inflammatory response, and it can heat soft tissue. Ultrasound energy produces a mechanical pressure wave through soft tissue. This pressure wave may cause microscopic bubbles in living tissues and distortion of the cell membrane, influencing ion fluxes and intracellular activity. When ultrasound enters the body, it causes molecular friction and heats the tissues slightly. This effect is very minor, as normal tissue perfusion dissipates heat. With high intensity, it can also cause small pockets of gas in body fluids or tissues to expand and contract/collapse in a phenomenon called cavitation (this is not known to occur at the power levels used by modern diagnostic ultrasound units). The long-term effects of tissue heating and cavitation are not known.

Several studies indicate harmful side effects on animal fetuses associated with the use of sonography on pregnant mammals. A noteworthy study in 2006 suggests exposure to ultrasound can affect fetal brain development in mice. This misplacement of brain cells during development is linked to disorders ranging "from mental retardation and childhood epilepsy to developmental dyslexia, autism spectrum disorders and schizophrenia", the researchers said. No link has yet been made between these animal results and possible outcomes in humans, and comparable controlled studies on humans have not been performed for ethical reasons. The possibility exists that biological effects may be identified in the future; currently, most doctors feel that, based on available information, the benefits to patients outweigh the risks. Obstetric ultrasound can be used to identify many conditions that would be harmful to the mother and the baby, and for this reason many health care professionals consider that the risk of leaving these conditions undiagnosed is much greater than the very small risk, if any, associated with undergoing the scan. According to a Cochrane review, routine ultrasound in early pregnancy (less than 24 weeks) appears to enable better gestational age assessment, earlier detection of multiple pregnancies and earlier detection of clinically unsuspected fetal malformation at a time when termination of pregnancy is possible.

Sonography is used routinely in obstetric appointments during pregnancy, but the FDA discourages its use for non-medical purposes such as fetal keepsake videos and photos, even though the technology is the same as that used in hospitals. The demand for keepsake ultrasound products in medical environments has prompted one commercial solution: PhotOBaby, self-serve software for a patient-operated kiosk that allows the patient to create a "keepsake" out of the ultrasound imagery recorded during a medical ultrasound procedure (source: www.myphotobaby.com).

Ability to Hear Ultrasound

The upper frequency limit in humans (approximately 20 kHz) is caused by the middle ear, which acts as a low-pass filter. If ultrasound is fed directly into the skull bone and reaches the cochlea without passing through the middle ear, much higher frequencies can be heard. This effect is discussed under ultrasonic hearing. Carefully designed scientific studies have confirmed what researchers call the hypersonic effect: even without consciously hearing it, high-frequency sound can have a measurable effect on the mind.

It is a well-established finding in psychoacoustics that children can hear some high-pitched sounds that older adults cannot, because in humans the upper limit of hearing tends to become lower with age. A cell phone company has used this to create ring signals supposedly audible only to younger people, but many older people claim to be able to hear them, which is plausible given the considerable variation in age-related deterioration of the upper hearing threshold.

Some animals – such as dogs, cats, dolphins, bats, and mice – have an upper frequency limit that is greater than that of the human ear and thus can hear ultrasound.

Ultrasound

Ultrasound is cyclic sound pressure with a frequency greater than the upper limit of human hearing. Although this limit varies from person to person, it is approximately 20 kilohertz (20,000 hertz) in healthy young adults, and thus 20 kHz serves as a useful lower limit in describing ultrasound.

Ultrasound is deliberately generated in many different fields, typically to penetrate a medium and measure its reflection signature or to supply focused energy. The reflection signature can reveal details about the inner structure of the medium. The best known application of this technique is its use in sonography to produce pictures of fetuses in the human womb, but there are a vast number of other applications as well.

Approximate frequency ranges corresponding to ultrasound, with rough guide of some applications.
A fetus in its mother's womb, viewed in a sonogram (brightness scan).



Types of Tomography

  • Atom probe tomography (APT)
  • Computed tomography (CT)
  • Confocal laser scanning microscopy (LSCM)
  • Cryo-electron tomography (Cryo-ET)
  • Electrical capacitance tomography (ECT)
  • Electrical resistivity tomography (ERT)
  • Electrical impedance tomography (EIT)
  • Functional magnetic resonance imaging (fMRI)
  • Magnetic induction tomography (MIT)
  • Magnetic resonance imaging (MRI), formerly known as magnetic resonance tomography (MRT) or nuclear magnetic resonance tomography
  • Neutron tomography
  • Optical coherence tomography (OCT)
  • Optical projection tomography (OPT)
  • Process tomography (PT)
  • Positron emission tomography (PET)
  • Positron emission tomography - computed tomography (PET-CT)
  • Quantum tomography
  • Single photon emission computed tomography (SPECT)
  • Seismic tomography
  • Ultrasound assisted optical tomography (UAOT)
  • Ultrasound transmission tomography
  • X-ray tomography (CT, CATScan)
  • Photoacoustic Tomography (PAT), also known as Optoacoustic Tomography (OAT) or Thermoacoustic Tomography (TAT)
  • Zeeman-Doppler imaging, used to reconstruct the magnetic geometry of rotating stars.


Tomography

Tomography is imaging by sections or sectioning. A device used in tomography is called a tomograph, while the image produced is a tomogram. The method is used in medicine, archaeology, biology, geophysics, oceanography, materials science, astrophysics and other sciences. In most cases it is based on the mathematical procedure called tomographic reconstruction. The word derives from the Greek word tomos which means "a section" or "a cutting". A tomography of several sections of the body is known as a polytomography.


Description of Positron Emission Tomography

Operation

To conduct the scan, a short-lived radioactive tracer isotope that decays by emitting a positron, and that has been chemically incorporated into a metabolically active molecule, is injected into the living subject (usually into blood circulation). There is a waiting period while the metabolically active molecule becomes concentrated in tissues of interest; then the research subject or patient is placed in the imaging scanner. The molecule most commonly used for this purpose is fluorodeoxyglucose (FDG), a sugar, for which the waiting period is typically an hour.

Schema of a PET acquisition process.

As the radioisotope undergoes positron emission decay (also known as positive beta decay), it emits a positron, the antimatter counterpart of an electron. After travelling up to a few millimeters the positron encounters and annihilates with an electron, producing a pair of annihilation (gamma) photons moving in opposite directions. These are detected when they reach a scintillator material in the scanning device, creating a burst of light which is detected by photomultiplier tubes or silicon avalanche photodiodes (Si APD). The technique depends on simultaneous or coincident detection of the pair of photons; photons which do not arrive in pairs (i.e., within a few nanoseconds) are ignored.

The most significant fraction of electron-positron annihilations results in two 511 keV gamma photons being emitted at almost 180 degrees to each other; hence it is possible to localize their source along a straight line of coincidence (formally called the "line of response", or LOR). In practice the LOR has a finite width, as the emitted photons are not exactly 180 degrees apart. If the recovery time of the detectors is in the picosecond range rather than the tens-of-nanoseconds range, it is possible to calculate the single point on the LOR at which the annihilation event originated by measuring the "time of flight" of the two photons. This technology is not yet common, but it is available on some new systems. More commonly, a technique much like the reconstruction of computed tomography (CT) and single photon emission computed tomography (SPECT) data is used, although the data set collected in PET is much poorer than in CT, so reconstruction techniques are more difficult (see the section below on image reconstruction of PET). Using statistics collected from tens of thousands of coincidence events, a set of simultaneous equations for the total activity of each parcel of tissue along many LORs can be solved by a number of techniques, and a map of radioactivity as a function of location for parcels or bits of tissue ("voxels") may thus be constructed and plotted. The resulting map shows the tissues in which the molecular probe has become concentrated, and can be interpreted by a nuclear medicine physician or radiologist in the context of the patient's diagnosis and treatment plan.
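The time-of-flight localization mentioned above is simple arithmetic: the difference in photon arrival times, multiplied by the speed of light and halved, gives the annihilation point's offset from the midpoint of the LOR. A small sketch (the timing values are illustrative assumptions):

    # Position along the line of response from photon time of flight.
    C = 2.998e8  # speed of light (m/s)

    def lor_offset_mm(dt_ps):
        """Offset of the annihilation from the LOR midpoint, in mm,
        for an arrival-time difference of dt_ps picoseconds."""
        return C * dt_ps * 1e-12 / 2.0 * 1e3

    print(lor_offset_mm(500))  # 500 ps timing resolution -> ~75 mm
    print(lor_offset_mm(20))   # 20 ps -> ~3 mm localization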

PET scans are increasingly read alongside CT or magnetic resonance imaging (MRI) scans, the combination ("co-registration") giving both anatomic and metabolic information (i.e., what the structure is, and what it is doing biochemically). Because PET imaging is most useful in combination with anatomical imaging, such as CT, modern PET scanners are now available with integrated high-end multi-detector-row CT scanners. Because the two scans can be performed in immediate sequence during the same session, with the patient not changing position between the two types of scans, the two sets of images are more-precisely registered, so that areas of abnormality on the PET imaging can be more perfectly correlated with anatomy on the CT images. This is very useful in showing detailed views of moving organs or structures with higher amounts of anatomical variation, such as are more likely to occur outside the brain.

Radioisotopes

Radionuclides used in PET scanning are typically isotopes with short half-lives, such as carbon-11 (~20 min), nitrogen-13 (~10 min), oxygen-15 (~2 min), and fluorine-18 (~110 min). Because of their short half-lives, the radionuclides must be produced in a cyclotron close enough (in delivery time) to the PET scanner. These radionuclides are incorporated into compounds normally used by the body, such as glucose, water or ammonia, and then injected into the body to trace where they become distributed. Such labelled compounds are known as radiotracers.

Limitations

The minimization of radiation dose to the subject is an attractive feature of the use of short-lived radionuclides. Besides its established role as a diagnostic technique, PET has an expanding role as a method to assess the response to therapy, in particular, cancer therapy (e.g. Young et al. 1999), where the risk to the patient from lack of knowledge about disease progress is much greater than the risk from the test radiation.

Limitations to the widespread use of PET arise from the high costs of cyclotrons needed to produce the short-lived radionuclides for PET scanning and the need for specially adapted on-site chemical synthesis apparatus to produce the radiopharmaceuticals. Few hospitals and universities are capable of maintaining such systems, and most clinical PET is supported by third-party suppliers of radiotracers which can supply many sites simultaneously. This limitation restricts clinical PET primarily to the use of tracers labelled with F-18, which has a half-life of 110 minutes and can be transported a reasonable distance before use, or to rubidium-82, which can be created in a portable generator and is used for myocardial perfusion studies. Nevertheless, in recent years a few on-site cyclotrons with integrated shielding and hot labs have begun to accompany PET units at remote hospitals, and small on-site cyclotrons are expected to become more common as cyclotrons shrink in response to the high cost of transporting isotopes to remote PET machines. Because the half-life of F-18 is about two hours, the prepared dose of a radiopharmaceutical bearing this radionuclide will undergo multiple half-lives of decay during the working day. This necessitates frequent recalibration of the remaining dose (determination of activity per unit volume) and careful planning with respect to patient scheduling.
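The recalibration mentioned above follows from simple exponential decay. A minimal sketch, assuming the 110-minute F-18 half-life and an illustrative starting activity:

    # Remaining activity after t minutes: A(t) = A0 * 2**(-t / T_half).
    def remaining_activity(a0_mbq, elapsed_min, half_life_min=110.0):
        return a0_mbq * 2.0 ** (-elapsed_min / half_life_min)

    # A 400 MBq F-18 dose prepared at 08:00 has decayed heavily by noon:
    print(remaining_activity(400.0, 240.0))  # ~88 MBq after 4 hours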

Image reconstruction

The raw data collected by a PET scanner are a list of 'coincidence events' representing near-simultaneous detection of annihilation photons by a pair of detectors. Each coincidence event represents a line in space connecting the two detectors along which the positron emission occurred.

Coincidence events can be grouped into projection images, called sinograms. The sinograms are sorted by the angle of each view and, in the 3D case, by tilt. The sinogram images are analogous to the projections captured by computed tomography (CT) scanners, and can be reconstructed in a similar way. However, the statistics of the data are much worse than those obtained through transmission tomography: a normal PET data set has millions of counts for the whole acquisition, while a CT scan can reach a few billion counts. As such, PET data suffer from scatter and random events much more dramatically than CT data do.
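As a hedged illustration of sinogram binning in the 2D case (the detector coordinates are made-up values), each event's LOR is reduced to an angle and a signed radial offset:

    # Convert a coincidence event (two detector positions) into the
    # sinogram coordinates of its line of response (LOR).
    import math

    def lor_to_sinogram(x1, y1, x2, y2):
        """Return (angle in radians, signed radial offset) of the LOR."""
        angle = math.atan2(y2 - y1, x2 - x1) % math.pi  # line orientation
        # Signed perpendicular distance of the line from the scanner axis:
        offset = x1 * math.sin(angle) - y1 * math.cos(angle)
        return angle, offset

    # Two opposed detectors, 50 mm off-axis:
    print(lor_to_sinogram(-400.0, 50.0, 400.0, 50.0))  # (0.0, -50.0)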


Positron Emission Tomography


Positron emission tomography (PET) is a nuclear medicine medical imaging technique which produces a three-dimensional image or map of functional processes in the body. The system detects pairs of gamma rays emitted indirectly by a positron-emitting radioisotope, which is introduced into the body on a metabolically active molecule. Images of metabolic activity in space are then reconstructed by computer analysis, often in modern scanners aided by results from a CT X-ray scan performed on the patient at the same time, in the same machine.



Nuclear Medicine

Nuclear medicine is a branch of medicine and medical imaging that uses the nuclear properties of matter in diagnosis and therapy. More specifically, nuclear medicine is a part of molecular imaging because it produces images that reflect biological processes that take place at the cellular and subcellular level. Nuclear medicine procedures use pharmaceuticals that have been labeled with radionuclides (radiopharmaceuticals). In diagnosis, radioactive substances are administered to patients and the radiation emitted is detected. The diagnostic tests involve the formation of an image using a gamma camera or positron emission tomography. Imaging may also be referred to as radionuclide imaging or nuclear scintigraphy. Other diagnostic tests use probes to acquire measurements from parts of the body, or counters for the measurement of samples taken from the patient. In therapy, radionuclides are administered to treat disease or provide palliative pain relief. For example, administration of Iodine-131 is often used for the treatment of thyrotoxicosis and thyroid cancer.

Nuclear medicine differs from most other imaging modalities in that the tests primarily show the physiological function of the system being investigated as opposed to traditional anatomical imaging such as CT or MRI. In some centers, the nuclear medicine images can be superimposed, using software or hybrid cameras, on images from modalities such as CT or MRI to highlight which part of the body the radiopharmaceutical is concentrated in. This practice is often referred to as image fusion or co-registration.

Nuclear medicine diagnostic tests are usually provided by a dedicated department within a hospital and may include facilities for the preparation of radiopharmaceuticals. The specific name of a department can vary from hospital to hospital, with the most common names being the nuclear medicine department and the radioisotope department.

A majority of the world's supply of medical isotopes is produced at the Chalk River Laboratories in Chalk River, Ontario, Canada. The reactor there was shut down for repairs that took longer than expected, and in late 2007 a critical shortage of these isotopes occurred. As of 11 December 2007, the Canadian government was proposing legislation to re-open the reactor and allow the production of more isotopes.

Principle of Magnetic Resonance Imaging

Principle

Modern 3 tesla clinical MRI scanner.

Magnetism

Elementary subatomic particles such as protons have the quantum mechanical property of spin. Nuclei such as 1H or 31P, with an odd number of nucleons, always have a non-zero spin and therefore a magnetic moment. Some other isotopes, such as 12C, have no unpaired neutrons or protons, and no net spin.

When these spins are placed in an external magnetic field, they precess around the direction of that field. The field also creates two energy states that the protons can occupy, separated by a quantum of energy. The thermal energy of the sample causes the molecules to tumble, leaving only a very small excess of protons to produce a net magnetic polarization.

Resonance

The energy difference between the proton energy states corresponds to electromagnetic radiation at radio frequency wavelengths. Resonant absorption of energy by the protons due to an external oscillating magnetic field (radio wave) will occur at the Larmor frequency.
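Concretely, the Larmor relation is f = gamma * B0 / (2 pi). A minimal Python sketch (the gyromagnetic ratios are standard nominal values; the field strengths are illustrative):

    # Larmor frequency: f = gamma_bar * B0, with gamma_bar = gamma / (2*pi).
    GAMMA_BAR_MHZ_PER_T = {"1H": 42.58, "31P": 17.24}

    def larmor_mhz(b0_tesla, nucleus="1H"):
        return GAMMA_BAR_MHZ_PER_T[nucleus] * b0_tesla

    for b0 in (0.2, 1.5, 3.0):
        print(f"{b0} T -> {larmor_mhz(b0):.1f} MHz")  # radio-frequency range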

The net magnetization vector has two components. The longitudinal magnetization is due to an excess of protons in the lower energy state, giving a net polarization parallel to the external field. The transverse magnetization is due to coherences forming between the two proton energy states, giving a net polarization perpendicular to the external field in the transverse plane. The recovery of longitudinal magnetization is called T1 relaxation, and the loss of phase coherence in the transverse plane is called T2 relaxation.

When the radio frequency pulse is turned off, the transverse vector component produces an oscillating magnetic field which induces a small current in the receiver coil. This free induction decay (FID) lasts only a few milliseconds before the thermal equilibrium of the spins is restored. The actual signal measured by the scanner is formed by a refocusing gradient or radio frequency pulse, creating a gradient echo or spin echo.

Imaging

Slice selection is achieved by applying a magnetic gradient in addition to the external magnetic field during the radio frequency pulse. Only one plane within the object will have protons that are on–resonance and contribute to the signal.

A real image can be considered as being composed of a number of spatial frequencies at different orientations. A two–dimensional Fourier transformation of a real image will express these waves as a matrix of spatial frequencies known as k–space. Low spatial frequencies are represented at the center of k–space and high spatial frequencies at the periphery. Frequency and phase encoding are used to measure the amplitudes of a range of spatial frequencies within the object being imaged. The frequency encoding gradient is applied during readout of the signal and is orthogonal to the slice selection gradient. During application of the gradient the frequency differences in the readout direction progressively change. At the midpoint of the readout these differences are small and the low spatial frequencies in the image are sampled filling the center of k-space. Higher spatial frequencies will be sampled towards the beginning and end of the readout filling the periphery of k-space.

Phase encoding is applied in the remaining orthogonal plane and uses the same principle of sampling the object for different spatial frequencies. However, it is applied for a brief period before the readout, and the strength of the gradient is changed incrementally between each radio frequency pulse. For each phase encoding step, a line of k-space is filled.
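The image/k-space relationship described above is a two-dimensional Fourier-transform pair, which the following self-contained NumPy sketch demonstrates on a synthetic object (the 64x64 square phantom is an assumption for illustration):

    # An image and its k-space are a 2D Fourier pair: filling k-space
    # line by line and inverse-transforming recovers the image.
    import numpy as np

    image = np.zeros((64, 64))
    image[24:40, 24:40] = 1.0                      # simple square "object"

    k_space = np.fft.fftshift(np.fft.fft2(image))  # low frequencies at center
    recon = np.abs(np.fft.ifft2(np.fft.ifftshift(k_space)))

    print(np.allclose(recon, image))               # True
    # Zeroing the periphery of k_space (high spatial frequencies) blurs
    # edges; zeroing the center removes contrast and leaves edge detail.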

Magnetic Resonance Imaging

Magnetic resonance image showing a median sagittal cross section through a human head.

Magnetic resonance imaging (MRI) is primarily used in medical imaging to visualize the structure and function of the body. It provides detailed images of the body in any plane. MRI has much greater soft tissue contrast than CT, making it especially useful in neurological, musculoskeletal, cardiovascular and oncological disease. Unlike CT, it uses no ionizing radiation. The scanner creates a powerful magnetic field which aligns the magnetization of hydrogen atoms in the body. Radio waves are used to alter the alignment of this magnetization, causing the hydrogen atoms to emit a weak radio signal which is amplified by the scanner. This signal can be manipulated by additional magnetic fields to build up enough information to reconstruct an image of the body.

Magnetic resonance spectroscopy is used to measure the levels of different metabolites in body tissues. The MR signal produces a spectrum of different resonances that correspond to different molecular arrangements of the isotope being "excited". This signature is used to diagnose certain metabolic disorders, especially those affecting the brain, as well as to provide information on tumor metabolism. The scanners used in medicine have a typical magnetic field strength of 0.2 to 3 teslas. Construction costs are approximately US$1 million per tesla, and maintenance an additional several hundred thousand dollars per year. Research using MRI scanners operating at ultra-high field strength (up to 21.1 tesla) can produce images of the mouse brain with a resolution of 18 micrometres.


Common procedures using fluoroscopy

  • Investigations of the gastrointestinal tract, including barium enemas, barium meals and barium swallows, and enteroclysis.
  • Orthopaedic surgery to guide fracture reduction and the placement of metalwork.
  • Angiography of the leg, heart and cerebral vessels.
  • Placement of a PICC (peripherally inserted central catheter)
  • Placement of a weighted feeding tube (e.g. Dobhoff) into the duodenum after previous attempts without fluoroscopy have failed.
  • Urological surgery – particularly in retrograde pyelography.
  • Implantation of cardiac rhythm management devices (pacemakers, implantable cardioverter defibrillators and cardiac resynchronization devices)

Another common procedure is the modified barium swallow study during which barium-impregnated liquids and solids are ingested by the patient. A radiologist records and, with a speech pathologist, interprets the resulting images to diagnose oral and pharyngeal swallowing dysfunction. Modified barium swallow studies are also used in studying normal swallow function.

Fluoroscopy design

The first fluoroscopes consisted of an x-ray source and fluorescent screen between which the patient would be placed. As the x rays pass through the patient, they are attenuated by varying amounts as they interact with the different internal structures of the body, casting a shadow of the structures on the fluorescent screen. Images on the screen are produced as the unattenuated x rays interact with atoms in the screen through the photoelectric effect, giving their energy to the electrons. While much of the energy given to the electrons is dissipated as heat, a fraction of it is given off as visible light, producing the images. Early radiologists would adapt their eyes to view the dim fluoroscopic images by sitting in darkened rooms, or by wearing red adaptation goggles.

X-ray Image Intensifiers

The invention of X-ray image intensifiers in the 1950s allowed the image on the screen to be visible under normal lighting conditions, as well as providing the option of recording the images with a conventional camera. Subsequent improvements included the coupling of, at first, video cameras and, later, CCD cameras to permit recording of moving images and electronic storage of still images.

Modern image intensifiers no longer use a separate fluorescent screen. Instead, a cesium iodide phosphor is deposited directly on the photocathode of the intensifier tube. On a typical general purpose system, the output image is approximately 10^5 times brighter than the input image. This brightness gain comprises a flux gain (amplification of photon number) and a minification gain (concentration of photons from a large input screen onto a small output screen), each of approximately 100. This level of gain is sufficient that quantum noise, due to the limited number of x-ray photons, is a significant factor limiting image quality.
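The gain figures above multiply together; a short sketch of the arithmetic (the screen diameters are illustrative assumptions):

    # Total brightness gain = flux gain * minification gain, where the
    # minification gain is the ratio of input to output screen areas.
    def minification_gain(d_in_mm, d_out_mm):
        return (d_in_mm / d_out_mm) ** 2

    flux_gain = 100.0                        # photon-number amplification
    m_gain = minification_gain(250.0, 25.0)  # 25 cm input -> 2.5 cm output
    print(flux_gain * m_gain)                # 1e4; higher flux gains give ~1e5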

Image intensifiers are available with input diameters of up to 45 cm, and a resolution of approximately 2-3 line pairs per mm.

Flat-panel detectors

The introduction of flat-panel detectors allows for the replacement of the image intensifier in fluoroscope design. Flat panel detectors offer increased sensitivity to X-rays, and therefore have the potential to reduce patient radiation dose. Temporal resolution is also improved over image intensifiers, reducing motion blurring. Contrast ratio is also improved over image intensifiers: flat-panel detectors are linear over a very wide latitude, whereas image intensifiers have a maximum contrast ratio of about 35:1. Spatial resolution is approximately equal, although an image intensifier operating in 'magnification' mode may be slightly better than a flat panel.

Flat panel detectors are considerably more expensive to purchase and repair than image intensifiers, so their uptake is primarily in specialties that require high-speed imaging, e.g., vascular imaging and cardiac catheterization.

Fluoroscopy

Fluoroscopy is an imaging technique commonly used by physicians to obtain real-time images of the internal structures of a patient through the use of a fluoroscope. In its simplest form, a fluoroscope consists of an x-ray source and fluorescent screen between which a patient is placed. However, modern fluoroscopes couple the screen to an x-ray image intensifier and CCD video camera allowing the images to be played and recorded on a monitor. The use of x-rays, a form of ionizing radiation, requires that the potential risks from a procedure be carefully balanced with the benefits of the procedure to the patient. While physicians always try to use low dose rates during fluoroscopy procedures, the length of a typical procedure often results in a relatively high absorbed dose to the patient. Recent advances include the digitization of the images captured and flat-panel detector systems which reduce the radiation dose to the patient still further.

Biomedical Engineering Training

Education

A prosthetic eye, an example of a biomedical engineering application of mechanical engineering and biocompatible materials to ophthalmology.

Biomedical engineers combine sound knowledge of engineering and biological science, and therefore tend to hold a Bachelor of Science and advanced degrees from major universities, many of which are now improving their biomedical engineering curricula because interest in the field is increasing. Many colleges of engineering now have a biomedical engineering program or department, from the undergraduate to the doctoral level. Traditionally, biomedical engineering has been an interdisciplinary field to specialize in after completing an undergraduate degree in a more traditional discipline of engineering or science, the reason being the requirement for biomedical engineers to be equally knowledgeable in engineering and the biological sciences. However, undergraduate programs of study combining these two fields of knowledge are becoming more widespread, including programs for a Bachelor of Science in Biomedical Engineering. Many students also pursue an undergraduate degree in biomedical engineering as a foundation for a continuing education in medical school. Though the number of biomedical engineers is currently low (as of 2004, under 10,000 in the U.S.), the number is expected to rise as modern medicine and technology improve.

In the U.S., an increasing number of undergraduate programs are also becoming recognized by ABET as accredited bioengineering/biomedical engineering programs. Over 40 programs are currently accredited by ABET; the first was at Duke University, originally accredited by the Engineers' Council for Professional Development (now ABET) in September 1972. As with many degrees, the reputation and ranking of a program may factor into the desirability of a degree holder for either employment or graduate admission. The reputation of many undergraduate degrees is also linked to the institution's graduate or research programs, which have some tangible factors for rating, such as research funding and volume, publications and citations.

Graduate education is also an important aspect of BME. Although many engineering professions do not require graduate-level training, BME professions often recommend or require it. Since many BME professions involve scientific research, such as in the pharmaceutical and medical device industries, graduate education may be highly desirable, as undergraduate degrees typically do not provide substantial research training and experience.

Graduate programs in BME, like in other scientific fields, are highly varied and particular programs may emphasize certain aspects within the field. They may also feature extensive collaborative efforts with programs in other fields, owing again to the interdisciplinary nature of BME.

Education in BME also varies greatly around the world. By virtue of its extensive biotechnology sector, numerous major universities, and few internal barriers, the U.S. has progressed a great deal in the development of BME education and training. Europe, which also has a large biotechnology sector and an impressive education system, has encountered trouble in creating uniform standards as the European community attempts to bring down some of the national barriers that exist. Recently, initiatives such as BIOMEDEA have sprung up to develop BME-related education and professional standards.[8] Other countries, such as Australia, are recognizing and moving to correct deficiencies in their BME education.[9] Also, as high technology endeavors are usually marks of developed nations, some areas of the world are prone to slower development in education, including in BME.

In Malaysia, the University of Malaya was the first institution to offer an undergraduate biomedical engineering program, having commenced intake in 1997, and was accredited by the Engineering Accreditation Council (EAC) of Malaysia. Other public universities which offer similar programs are Universiti Teknologi Malaysia (2005) and Universiti Malaysia Perlis (2006).

Professional certification


Engineers typically obtain a form of professional certification, for example by satisfying certain education requirements and passing an examination to become a professional engineer. These certifications are usually nationally regulated and registered, but there are also cases of self-governing bodies, such as the Canadian Association of Professional Engineers. In many cases, carrying the title of "Professional Engineer" is legally protected.

As BME is an emerging field, professional certifications are not as standard and uniform as they are for other engineering fields. For example, the Fundamentals of Engineering exam in the U.S. does not include a biomedical engineering section, though it does cover biology. Biomedical engineers often simply possess a university degree as their qualification. Some countries, such as Australia, do regulate biomedical engineers, but registration is typically only recommended and not required.

Regulatory Issues

Regulatory issues are never far from the mind of a biomedical engineer. To satisfy safety regulations, most biomedical systems must have documentation to show that they were managed, designed, built, tested, delivered, and used according to a planned, approved process. This is thought to increase the quality and safety of diagnostics and therapies by reducing the likelihood that needed steps are accidentally omitted.

In the United States, biomedical engineers may operate under two different regulatory frameworks. Clinical devices and technologies are generally governed by the Food and Drug Administration (FDA) in a similar fashion to pharmaceuticals. Biomedical engineers may also develop devices and technologies for consumer use, such as physical therapy devices, which may be governed by the Consumer Product Safety Commission. See US FDA 510(k) documentation process for the US government registry of biomedical devices.

Implants, such as artificial hip joints, are generally extensively regulated due to the invasive nature of such devices.

Other countries typically have their own mechanisms for regulation. In Europe, for example, the actual decision about whether a device is suitable is made by the prescribing doctor, and the regulations are to assure that the device operates as expected. Thus in Europe, the governments license certifying agencies, which are for-profit. Technical committees of leading engineers write recommendations which incorporate public comments and are adopted as regulations by the European Union. These recommendations vary by the type of device, and specify tests for safety and efficacy. Once a prototype has passed the tests at a certification lab, and that model is being constructed under the control of a certified quality system, the device is entitled to bear a CE mark, indicating that the device is believed to be safe and reliable when used as directed.

The different regulatory arrangements sometimes result in technologies being developed first for either the U.S. or in Europe depending on the more favorable form of regulation. Most safety-certification systems give equivalent results when applied diligently. Frequently, once one such system is satisfied, satisfying the other requires only paperwork.

Biomedical Devices

A medical device is intended for use in:

  • the diagnosis of disease or other conditions,
  • the cure, mitigation, treatment, or prevention of disease, or
  • affecting the structure or any function of the body of man or other animals,
and it does not achieve any of its primary intended purposes through chemical action and is not dependent upon being metabolized for the achievement of any of its primary intended purposes.
A pump for continuous subcutaneous insulin infusion, an example of a biomedical engineering application of electrical engineering to medical equipment.

Some examples include pacemakers, infusion pumps, the heart-lung machine, dialysis machines, artificial organs, implants, artificial limbs, corrective lenses, cochlear implants, ocular prosthetics, facial prosthetics, somato prosthetics, and dental implants.

Stereolithography is a practical example of how medical modeling can be used to create physical objects. Beyond modeling organs and the human body, emerging engineering techniques are also currently used in the research and development of new devices for innovative therapies, treatments, patient monitoring, and early diagnosis of complex diseases.

Medical devices can be regulated and classified (in the US) as shown below:

  1. Class I devices present minimal potential for harm to the user and are often simpler in design than Class II or Class III devices. Devices in this category include tongue depressors, bedpans, elastic bandages, examination gloves, and hand-held surgical instruments and other similar types of common equipment.
  2. Class II devices are subject to special controls in addition to the general controls of Class I devices. Special controls may include special labeling requirements, mandatory performance standards, and postmarket surveillance. Devices in this class are typically non-invasive and include x-ray machines, PACS, powered wheelchairs, infusion pumps, and surgical drapes.
  3. Class III devices require premarket approval, a scientific review to ensure the device's safety and effectiveness, in addition to the general controls of Class I. Examples include replacement heart valves, silicone gel-filled breast implants, implanted cerebellar stimulators, implantable pacemaker pulse generators and endosseous (intra-bone) implants.

Clinical engineering

Clinical engineering is a branch of biomedical engineering for professionals responsible for the management of medical equipment in a hospital. The tasks of a clinical engineer are typically the acquisition and management of medical device inventory, supervising biomedical engineering technicians (BMETs), ensuring that safety and regulatory issues are taken into consideration and serving as a technological consultant for any issues in a hospital where medical devices are concerned. Clinical engineers work closely with the IT department and medical physicists.

Schematic representation of a normal ECG trace showing sinus rhythm, an example of a biomedical engineering application of electronic engineering to electrophysiology and medical diagnosis.

A typical biomedical engineering department does the corrective and preventive maintenance on the medical devices used by the hospital, except for those covered by a warranty or maintenance agreement with an external company. All newly acquired equipment is also fully tested: that is, every line of software is executed, or every possible setting is exercised and verified. Most devices are intentionally simplified in some way to make the testing process less expensive, yet accurate. Many biomedical devices need to be sterilized, which creates a unique set of problems, since most sterilization techniques can damage machinery and materials. Most medical devices are either inherently safe, or have added devices and systems so that they can sense their failure and shut down into an unusable, and thus very safe, state. A typical, basic requirement is that no single failure should cause the therapy to become unsafe at any point during its life-cycle. See safety engineering for a discussion of the procedures used to design safe systems.

Discipline in Biomedical engineering

Biomedical engineering is widely considered an interdisciplinary field, resulting in a broad spectrum of disciplines that draw influence from various fields and sources. Due to this diversity, it is not atypical for a biomedical engineer to focus on a particular aspect. There are many different taxonomic breakdowns of BME; one such listing defines the aspects of the field as follows:[1]

  • Bioelectrical and neural engineering
  • Biomedical imaging and biomedical optics
  • Biomaterials
  • Biomechanics and biotransport
  • Biomedical devices and instrumentation
  • Molecular, cellular and tissue engineering
  • Systems and integrative engineering

In other cases, disciplines within BME are broken down based on the closest association to another, more established engineering field, which typically include:

Breast implants, an example of a biomedical engineering application of biocompatible materials to cosmetic surgery.
  • Chemical engineering - often associated with biochemical, cellular, molecular and tissue engineering, biomaterials, and biotransport.
  • Electrical engineering - often associated with bioelectrical and neural engineering, bioinstrumentation, biomedical imaging, and medical devices.
  • Mechanical engineering - often associated with biomechanics, biotransport, medical devices, and modeling of biological systems.
  • Optics and Optical engineering - biomedical optics, imaging and medical devices

Biomedical engineering

Biomedical engineering (BME) is the application of engineering principles and techniques to the medical field. It combines the design and problem-solving skills of engineering with the medical and biological sciences to help improve patient health care and the quality of life of healthy individuals.

As a relatively new discipline, much of the work in biomedical engineering consists of research and development, covering an array of fields: bioinformatics, medical imaging, image processing, physiological signal processing, biomechanics, biomaterials and bioengineering, systems analysis, 3-D modeling, etc. Examples of concrete applications of biomedical engineering are the development and manufacture of biocompatible prostheses, medical devices, diagnostic devices and imaging equipment such as MRIs and EEGs, and pharmaceutical drugs.