Murphy Lab -
Automated Classification of 3-Dimensional Protein Location Patterns from Fluorescence
Microscope Images
Meel Velliste, graduate student in Biomedical Engineering
Aaron C. Rising, undergraduate student in Biological Sciences
Introduction
The goal of this work is to develop methods that allow the numerical
description and subsequent classification of the patterns characteristic of
subcellular structures in fluorescence microscope images of eukaryotic
cells. We have previously described classifiers capable of recognizing 2D
patterns of all major subcellular structures with high accuracy. This was
done with HeLa cells, which are fairly flat in the sense that their
morphology can be compared to an "egg on a frying pan". However, there are
many cell types, such as columnar epithelial cells, that have a more
pronounced 3D structure. A single 2D optical section of such cells would
hardly be
representative of the whole cell. For example, a slice through the middle
of the cell would completely miss proteins that localize to either the
apical or basal membrane. Therefore, if the methods we are developing for
systematic analysis of protein location patterns are to be generally useful
for all cell types, they will have to be based on full 3D images rather
than mere 2D slices. The goal of this project is to extend our methods to
work with 3D images.
Approach
We first acquired a set of 3D images of HeLa cells using a confocal laser
scanning microscope. Seven different fluorescent markers were used to label
some of the major subcellular structures, and 50 3D image stacks were
collected for each class.
We then adapted our previously used features for use with 3D images. In
our 2D classification work we had used three kinds of features: texture
features, moment features, and morphological features. The morphological
features had been found to be the most useful single subset of features
for 2D images.
Many of these features describe relationships between objects in the image
in terms of sizes of objects and distances between them. Therefore, as a
starting point, we extended a subset of these morphological features by
changing the way distances and sizes were calculated. Object size was
measured as volume instead of area. For each feature that described the
pattern in terms of distances between objects, two new features were
created: one that considers "horizontal" distances (Euclidean distance in
the x,y-plane) and another that describes the "vertical" distance, i.e.,
the z-component. These features were calculated for all of the
images in the 3D set and then a backpropagation neural network classifier
was trained to recognize the seven different classes of patterns.
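As a rough illustration of these 3D features, the sketch below shows one way
the object volumes and the horizontal/vertical distance components could be
computed from a thresholded image stack. The function names, the simple
thresholding, and the use of the overall center of fluorescence as the
reference point are illustrative assumptions rather than the exact feature
implementation used here.

    import numpy as np
    from scipy import ndimage

    def object_features_3d(stack, threshold):
        """Per-object volume and horizontal/vertical distances to the image's
        center of fluorescence, for a 3D stack indexed as (z, y, x)."""
        # Label connected above-threshold regions as objects.
        labels, n_objects = ndimage.label(stack > threshold)

        # Intensity-weighted center of fluorescence of the whole image.
        cof_z, cof_y, cof_x = ndimage.center_of_mass(stack)

        features = []
        for obj in range(1, n_objects + 1):
            mask = labels == obj
            volume = int(mask.sum())              # size as volume, not area
            oz, oy, ox = ndimage.center_of_mass(stack * mask)

            # "Horizontal" distance: Euclidean distance in the x,y-plane.
            horizontal = np.hypot(ox - cof_x, oy - cof_y)
            # "Vertical" distance: magnitude of the z-component.
            vertical = abs(oz - cof_z)

            features.append((volume, horizontal, vertical))
        return features

Per-object measurements such as these would then be summarized into a
fixed-length feature vector for each image before training the classifier.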
Results
The neural network classifier was found to be capable of recognizing the 3D
subcellular location patterns with an average accuracy of 92%. In order to
see if this 3D classification approach has any advantage over 2D
classification, we created a comparable 2D image set by taking a single
optical section from each of the 3D stacks. The section was chosen to intersect
the center of fluorescence of each image, because we found that this
provided the best classification accuracy. These comparable 2D images were
recognized correctly only 87% of the time.
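For reference, the slice-selection rule used to build this comparable 2D set
could be implemented as in the minimal sketch below, assuming the stack is
indexed (z, y, x); this illustrates the rule described above rather than the
exact code used.

    from scipy import ndimage

    def comparable_2d_section(stack):
        """Return the optical section closest (in z) to the intensity-weighted
        center of fluorescence of a 3D stack indexed as (z, y, x)."""
        cof_z = ndimage.center_of_mass(stack)[0]
        return stack[int(round(cof_z))]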
Conclusions
These results demonstrate the feasibility of recognizing protein location
patterns in 3D. Furthermore, it is clear that there is a substantial advantage
in using 3D images instead of 2D images (92% vs. 87% accuracy). If 3D
classification is 5% more accurate than 2D classification even for relatively
flat cells such as the HeLa cells used here, the difference should be even
greater for other cell types in which a single 2D slice would be far less
representative of the cell. Therefore, the 3D features developed here will be
an invaluable tool when
generalizing the automated image interpretation methods for use with
different cell types.