Fusion of Volume Data from Multiple Imaging Modalities
RAHD Oncology Products, St. Louis, MO
Department of Radiation Oncology, University of Utah Medical Center,
Salt Lake City, UT
Note: This whitepaper was produced for RAHD Oncology Products and its
commercial version of this tool. The tool was, however, jointly developed
with:
Marilyn E. Noz, Ph.D. -- Department of Radiology, New York University
Medical Center, New York, NY
Gerald Q. Maguire, Ph.D. -- Institute for Microelectronics and
Information Technology, Royal Institute of Technology, Kista, Sweden
The Need for Volume Fusion
Advances in the imaging sciences offer new opportunities to improve the
diagnosis, staging, and treatment delivery for cancer patients. These new
opportunities are dependent on the integration of diagnostic modalities
into the treatment planning process, including the need to fuse multiple
volumetric image sets into a common coordinate system. Implementation of
these technical advances will be impaired if practical considerations are
not accounted for. A complete analysis evaluating the merits of a fusion
tool should consider the comprehensive use of today's technology; the
changes that are certain to occur as technology advances; and the very
real financial constraints of clinical practice. Many fusion tools
limit the modalities that may be used, or the tools may only be effective
in fusing CT and MR studies of the head. Some fusion tools impose
significant constraints for acquiring image data. Obviously, limitations
in the fusion tool will decrease its clinical value when the clinician
cannot incorporate newly developed imaging techniques into the treatment
planning process. Similarly, cost-per-use will increase if a tool requires
extensive time for acquiring images or for managing the fusion process. The
RAHD Volume Fusion tool provides the ability to fuse all modalities without
requiring specific acquisition techniques.
Volume Fusion Basics
Volume fusion is the process of spatially aligning multiple 3-dimensional
data sets by transforming them to use a single coordinate system. This
process is also referred to as image fusion, or volume/image
registration. In general, one set of volumetric data is designated as the
reference volume and defines the reference coordinate system to which other
volumes will be aligned. The other volumes are then each transformed into
alignment with this reference coordinate system. Depending on the data and
the application, this transformation can range from simple translation,
rotation, or scaling, to the more complex nonlinear warping. The
transformation must be based either on externally added information, such
as fiducial markers or a stereotactic frame, or on the image data
itself. The simplest method of utilizing the image data is to consider the
correlation of data between the two volumes. The correlation can be based
on a wide range of information: known or extracted surfaces, contours, or
points; volume techniques based on voxel intensities; or statistical
measures derived from voxel intensities, such as mutual information. Thus,
the accuracy of the fusion process depends on three issues, which are
discussed below:
- The method used to correlate the data between the volumes.
- The type of transformation employed.
- How well the correlation and transformation match the
application-specific needs.
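Before turning to those three issues, the sketch below gives a rough
illustration of the alignment step itself: resampling a secondary volume
onto the reference grid given a known rigid transformation. It is not a
description of any particular product's implementation; the use of Python
with NumPy/SciPy, and the specific rotation and shift, are assumptions made
for the example.

```python
import numpy as np
from scipy.ndimage import affine_transform

def resample_to_reference(secondary, inv_matrix, offset, output_shape):
    """Resample a secondary volume onto the reference grid.

    affine_transform maps each output (reference) voxel coordinate o to an
    input (secondary) coordinate inv_matrix @ o + offset, so the arguments
    describe the inverse of the alignment transform.
    """
    return affine_transform(secondary, inv_matrix, offset=offset,
                            output_shape=output_shape, order=1)

# Hypothetical example: the secondary volume is rotated 10 degrees in-plane
# (about the slice axis) and shifted a few voxels relative to the reference.
theta = np.deg2rad(10.0)
rot = np.array([[1.0, 0.0,            0.0          ],
                [0.0, np.cos(theta), -np.sin(theta)],
                [0.0, np.sin(theta),  np.cos(theta)]])
shift = np.array([0.0, 3.5, -2.0])

secondary = np.random.rand(40, 128, 128)        # stand-in for real data
aligned = resample_to_reference(secondary,
                                rot.T,           # inverse rotation
                                offset=-rot.T @ shift,
                                output_shape=(40, 128, 128))
```

In practice, of course, the rotation and shift are not known in advance;
recovering them is the job of the correlation step discussed next.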
Correlation methods fall into two broad categories: automated and
manual. Automated registration between modalities which display anatomy
clearly, such as computed tomography (CT) and magnetic resonance imaging
(MR), has become commonplace. However, outside of a few narrow
applications, these "automated" methods require operator intervention in
almost all cases. Although they generally work very well in the brain, they tend
to be less successful in the rest of the body. The situation becomes even
more complex when one of the anatomic modalities is replaced with a
molecular imaging study (such as PET or SPECT) since the anatomic clues and
the information required for successful automated registration are
generally not available. Scintigraphic images have long been used
diagnostically to evaluate treatment outcome and recurrence and to aid in
planning patient management. New advances in scintigraphic imaging,
particularly positron emission tomography (PET) and single photon emission
computed tomography (SPECT), as well as advances in radiopharmaceutical technology
are making the use of scintigraphy important even for radiation treatment
planning. As external beam placement and brachytherapy loading have become
increasingly accurate, the ability to treat the cancer precisely and to
limit damage to healthy tissue has led to growing interest in using
molecular imaging techniques to define active tumor tissue more
precisely. Because of the properties of these images, they are not well
suited to current automated correlation methods.
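For concreteness, the following minimal sketch shows how mutual information
can be estimated from a joint intensity histogram of two already-resampled
volumes. It is a generic, textbook-style illustration, not the algorithm
used by any specific fusion tool, and the bin count is an arbitrary choice.

```python
import numpy as np

def mutual_information(vol_a, vol_b, bins=64):
    """Estimate mutual information between two aligned volumes from a
    joint intensity histogram (higher values suggest better alignment)."""
    hist, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                # joint probability
    px = pxy.sum(axis=1, keepdims=True)    # marginal of volume A
    py = pxy.sum(axis=0, keepdims=True)    # marginal of volume B
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```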
The appropriate transformation depends on both the imaging modality and the
anatomical region contained in the volumes. For example, fusion of two CT
head studies (e.g., pre and post treatment) may require only rotation,
translation, and uniform scaling, since typically the shape of the head
does not change between scans. But in cases where it does change, for
example, following reconstructive surgery, the "automated" methods may need
increased operator intervention. Unlike the case of CT head studies, fusion
of two CT abdominal studies will require nonlinear warping since the shape
of the body, as well as the relative locations of internal structures, may
change, even if due only to patient positioning. This becomes even more
complex as other modalities are utilized, such as MR, PET, and SPECT.
Spatial distortions in MR data can require non-uniform scaling or non-linear
warping to register accurately with CT data, even for head scans. With
SPECT, in the absence of a "transmission" image, there may be no easily
visible anatomic structure to register with the CT data. With both SPECT
and PET, due to the time taken to complete the scan, patient movement
(voluntary or involuntary) can result in a need to warp, especially in
areas below the head.
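To make the simplest case above concrete (rotation, translation, and
uniform scaling between two head studies), the sketch below estimates a
similarity transform from paired landmark points with a standard
least-squares (Procrustes-style) solution. The function name and the
assumption of matched 3D landmark arrays are illustrative, not taken from
the whitepaper.

```python
import numpy as np

def fit_similarity(ref_pts, sec_pts):
    """Least-squares rotation R, uniform scale s, and translation t such
    that  s * R @ sec_pts[i] + t  approximates  ref_pts[i].
    Both inputs are (n, 3) arrays of matched landmark coordinates."""
    ref_c = ref_pts.mean(axis=0)
    sec_c = sec_pts.mean(axis=0)
    A = ref_pts - ref_c                         # centered reference points
    B = sec_pts - sec_c                         # centered secondary points
    U, S, Vt = np.linalg.svd(B.T @ A)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = (U @ D @ Vt).T
    s = np.trace(np.diag(S) @ D) / np.sum(B ** 2)
    t = ref_c - s * R @ sec_c
    return R, s, t
```

Applying the result as s * (R @ p) + t maps a secondary point p into the
reference coordinate system; when the voxel dimensions are already known to
match, the scale can simply be fixed at 1 for a rigid-body fit.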
RAHD Volume Fusion
By building upon the research results from NYU Medical Center, RAHD has
taken a farsighted approach in creating a new Volume Fusion tool. The
primary goal was to provide a tool that is accurate and effective for all
imaging modalities and applications, with few constraints on the original
volume data. For example, only orthogonal axial slices are required, the
patient can be oriented arbitrarily within each volume (e.g., prone and
supine), the modalities need not have strong data correlation (e.g., SPECT
and CT), and external fiducials are not required. We achieved our goal by
employing landmark-based correlation and by supporting a full range of
transformations, from simple affine to non-linear warping.
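As a small illustration of one of these points, arbitrary patient
orientation, a prone acquisition can be brought into the same gross
orientation as a supine reference with simple axis flips before landmarks
are chosen. The axis ordering assumed below is hypothetical and would
depend on the actual acquisition.

```python
import numpy as np

def prone_to_supine(volume):
    """Flip a prone acquisition into supine orientation, assuming array
    axes are ordered (slice, anterior-posterior, right-left).  A prone
    patient appears rotated 180 degrees about the slice axis, which is
    equivalent to flipping both in-plane axes."""
    return np.flip(np.flip(volume, axis=1), axis=2)
```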
By employing a landmark-based approach, RAHD's Volume Fusion tool
successfully fuses nearly all types of data volumes. This is in stark
contrast with other tools that work with only strictly constrained data
volumes. Such constraints in other tools include: (1) applicability to only
the small subset of situations where automated techniques can work, and
further, only to those which do not require warping, (2) requiring external
fiducials, or (3) requiring that slices be well-aligned. The first
limitation eliminates most non-head studies, and most molecular imaging
studies of any area. The latter two are difficult (or costly) to achieve in
practice since they require either extra time and effort at the time of
image acquisition (or possibly reacquiring the image later) or additional
equipment and materials.
In order to provide accurate fusion of arbitrary volumes, RAHD provides
three levels of fusion transformation: affine, 1st order warp, and 2nd
order warp, in addition to manual affine controls. This flexibility allows
the operator to maintain fusion accuracy with arbitrary volume data
sets. For example, a fusion tool that provides only rigid-body or affine
transformations cannot accurately fuse volumes that require 2nd order
warping (e.g., abdominal or pelvic studies, or SPECT/PET with
CT/MR). Fully automated fusion tools based on mutual information are
generally limited to rigid body or affine transformations (especially with
molecular imaging studies), as are tools which require manual rotation,
translation, and scaling of the volume. Manual volume manipulation is
nearly impossible to do effectively on anything except already well-aligned
volumes. RAHD recognizes the value of automated techniques such as mutual
information (MI) for use in the specific situations where they are known to
be effective (e.g., CT/MR head, or CT/CT head). As improvements in computer
resources make such techniques practically feasible, these options will be
added to augment our tool.
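To make the difference between these transformation levels concrete, the
sketch below fits a 2nd order (quadratic) polynomial warp to paired
landmarks by ordinary least squares; a 1st order warp is the same
construction without the quadratic terms. This is a generic illustration
under assumed landmark arrays, not RAHD's algorithm.

```python
import numpy as np

def quadratic_basis(pts):
    """Second-order polynomial basis for 3D points (x, y, z):
    1, x, y, z, x^2, y^2, z^2, xy, xz, yz  -> 10 terms per point."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    return np.column_stack([np.ones_like(x), x, y, z,
                            x*x, y*y, z*z, x*y, x*z, y*z])

def fit_second_order_warp(sec_pts, ref_pts):
    """Least-squares coefficients C (10 x 3) so that
    quadratic_basis(sec_pts) @ C approximates ref_pts.
    Requires at least 10 well-distributed landmark pairs."""
    A = quadratic_basis(sec_pts)
    C, *_ = np.linalg.lstsq(A, ref_pts, rcond=None)
    return C

def apply_warp(C, pts):
    """Map points from the secondary to the reference coordinate system."""
    return quadratic_basis(pts) @ C
```

Such a warp should not be extrapolated far outside the region spanned by
the landmarks, which is one reason a well-distributed landmark set matters.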
During development, simulations of the fusion process were performed on
purely mathematical data to verify the accuracy of the transformation
algorithm. Tests were then performed on simulated data sets to ensure that
the algorithm behaved as expected on medical data. Extensive clinical tests
with real patient data volumes were performed at NYU Medical Center, the
Karolinska Institute in Sweden, and the University of Utah Medical
Center. These studies included brain, thorax, and abdomen volumes. Matches
were performed not only between MR-MR, CT-CT, and CT-MR studies but also
between CT/MR and molecular imaging modalities such as PET and
SPECT. Results were evaluated visually and by measurement of the difference
between reference and transformed landmark positions. A third evaluation
technique was recently developed: random sets of 3D points are generated in
each volume, the points falling inside and outside areas of interest are
calculated before and after the transformation, and the results are
statistically evaluated.
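A minimal sketch of the landmark-residual evaluation described above,
measuring the distance between each reference landmark and its transformed
counterpart, might look like the following; the particular summary
statistics reported are illustrative assumptions.

```python
import numpy as np

def landmark_residuals(ref_pts, transformed_pts):
    """Per-landmark distances plus summary statistics, useful for spotting
    errant landmarks after a fit."""
    d = np.linalg.norm(ref_pts - transformed_pts, axis=1)
    return {"per_landmark": d,
            "mean": float(d.mean()),
            "rms": float(np.sqrt(np.mean(d ** 2))),
            "max": float(d.max()),
            "worst_index": int(d.argmax())}
```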
For the operator, quantitative analysis of the RAHD Fusion results is
achieved through error statistics on the difference between reference and
transformed landmark positions. Based on this information, the user can
quickly find errant landmarks and correct them. Additional analysis of
results is possible through (1) precision overlay displays with split
views, view ports, or color blending with a variety of color scales, (2)
side-by-side displays with overlaid isolines and landmarks, and (3)
simultaneous display of 3D isosurfaces and landmarks.
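As a rough sketch of the color-blending style of overlay mentioned in point
(1), the code below alpha-blends an anatomic slice (grayscale) with an
aligned functional slice drawn in a "hot" color scale using matplotlib; the
color map and blend weight are arbitrary choices for illustration.

```python
import matplotlib.pyplot as plt

def overlay_slices(ct_slice, pet_slice, alpha=0.4):
    """Display a CT slice in grayscale with an aligned PET slice
    color-blended on top."""
    plt.imshow(ct_slice, cmap="gray")
    plt.imshow(pet_slice, cmap="hot", alpha=alpha)
    plt.axis("off")
    plt.show()
```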
The RAHD Fusion tool is based on more than a decade of research and
development. Our colleagues at NYU Medical Center have been actively
researching, using, and refining 2D and 3D warping techniques for over 13
years. Coupled with RAHD's experience in visualization, product
development, and customer support, the resulting fusion tool is an accurate
and reliable resource with the power to fuse all modalities. Furthermore,
RAHD will continue to develop and extend this tool to remain at the
vanguard of fusion-based clinical methods for radiation oncology.