My Research Projects
Learning unsupervised feature representations for single cell microscopy images with paired cell inpainting (2018 - Preprint)
Lu AX, Kraus OZ, Cooper S, Moses AM. Learning unsupervised feature representations for single cell microscopy images with paired cell inpainting. bioRxiv 2018 Aug 20.
One of the ways biologists extract information from microscopy images is by using features, or measurements taken on the image. For example, if you're interested in
studying protein subcellular localization, or what part of a cell a protein is located in (e.g. the nucleus or the cytoplasm), you might measure the distance of a protein (the green blobs in the pictures below) from the edge of the cell (in red).
Or you might come up with other measures, like the size of these blobs, or how bright they are, or how many there are. Most of the time,
humans have to manually design these measurements. However, designing features by hand is not only time-consuming, but also not guaranteed to capture the interesting biology in the images, because humans don't always define complex, robust, and informative metrics.
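To make the idea of hand-designed features concrete, here is a minimal sketch of what a few such measurements might look like in code. The feature names, the crude blob threshold, and the synthetic image are all illustrative assumptions, not the measurements used in any particular study:

```python
import numpy as np

def handcrafted_features(protein, cell_mask):
    """Toy examples of hand-designed features for a single cell.

    protein:   2D array of fluorescence intensities (the green channel)
    cell_mask: 2D boolean array marking pixels inside the cell
    (Illustrative only; real pipelines use many more measurements.)
    """
    inside = protein[cell_mask]
    threshold = inside.mean() + 2 * inside.std()   # crude "blob" cutoff
    blob_pixels = (protein > threshold) & cell_mask
    return {
        "mean_brightness": float(inside.mean()),
        "max_brightness": float(inside.max()),
        "blob_area_fraction": float(blob_pixels.sum() / cell_mask.sum()),
    }

# Tiny synthetic cell: dim background with one bright protein "blob"
rng = np.random.default_rng(0)
img = rng.normal(10, 1, (32, 32))
img[10:14, 10:14] += 50                 # the bright blob
mask = np.ones((32, 32), dtype=bool)
feats = handcrafted_features(img, mask)
```

Even this toy version shows the problem described above: every measurement has to be invented, coded, and validated by a person.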
In this study, we asked, what if a computer could automatically learn how to design features for extracting insights about protein subcellular localization from images of cells? To
"teach" a computer how to do this, we asked it to learn how to solve a simple problem: we gave the computer an image of a cell, and another image of the shape of a different cell, and asked the
computer to predict what the protein in the first cell would look like if it were expressed in the second cell. By learning to solve this problem, the computer learns features that are much
better at categorizing biology in images than human-designed features.
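The paired-cell training task described above can be sketched as data assembly: given several imaged cells that all express the same protein, pick two, show the network the first cell plus only the shape of the second, and ask it to reproduce the second cell's protein signal. The function name and array layout here are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def make_inpainting_pair(cells):
    """Assemble one training example for the paired-cell inpainting task.

    `cells` is a list of (protein_img, shape_img) arrays for DIFFERENT
    cells that all express the SAME tagged protein.
    """
    rng = np.random.default_rng()
    i, j = rng.choice(len(cells), size=2, replace=False)
    src_protein, src_shape = cells[i]
    tgt_protein, tgt_shape = cells[j]
    # Input: everything about the source cell, but only the target's shape.
    model_input = (src_protein, src_shape, tgt_shape)
    # Supervision: the network must "paint in" the target's protein signal.
    label = tgt_protein
    return model_input, label
```

Because the label comes from the data itself (the second cell's real image), no human annotation is ever needed to generate these training pairs.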
Our method isn't the only one that can teach computers how to design features. Another way is to have a human show the computer images and tell the computer what each image is
(e.g. whether it's a cat, a dog, or an airplane). Compared to this approach, ours doesn't learn features that are quite as good. But these methods require the human to manually label every image the computer
is given in its learning process - usually, a computer will need anywhere between tens of thousands and millions of images to learn good features. In contrast, our approach is fully automated,
so the computer teaches itself without any human guidance. This reflects massive time savings for humans - imagine having to look at a million images and tell the computer what each of them contains!
An Unsupervised kNN Method to Systematically Detect Changes in Protein Localization in High-Throughput Microscopy Images (2016)
Lu AX, Moses AM. An Unsupervised kNN Method to Systematically Detect Changes in Protein Localization in High-Throughput Microscopy Images. PLoS One 2016 Jul 21;11(7):e0158712.
The localization of a protein within a cell is often very important to its function. For example, transcription factors are active when they're in the nucleus of the cell, but
not when they're in the cytoplasm. We can study protein localization changes in response to drug treatments, environmental stresses, or genetic mutations, using high-throughput microscopy screens.
In these screens, every single protein has been individually attached to a fluorescent marker, and imaged under a microscope. Then, given a screen where cells have been grown in "normal" conditions,
and another screen where cells have been treated with a drug, we can look at the images to see if the protein has changed localization or not. Doing this lets us understand how a cell reconfigures
its entire set of proteins in response to different conditions, giving us insight on how cells adapt to different environments.
Because there are so many proteins (~6,000 in the yeast proteome), looking at all of these images individually is time-consuming. We want to use computers to do this for us.
However, sometimes different screens will have different effects on cells - for example, drugs might cause the shape of cells to change - and these effects can confuse the computer
into reporting changes when there aren't really any. We developed a machine learning algorithm to correct for these systematic effects, so that the changes found by the computer are mostly real localization changes.
Importantly, our method is unsupervised. This means that the user does not need to tell the computer what the systematic differences between screens are - the computer automatically learns these
from the data itself. This makes our method easier for biologists to apply to their own data without a lot of additional work compared to other methods.
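One simple way to turn the kNN idea into an unsupervised change score is sketched below: represent each single cell by a feature vector, then ask how far each treated cell is from its nearest neighbors among control cells. This is a simplified illustration of the general approach, not the exact algorithm from the paper (which also corrects for systematic screen-to-screen effects):

```python
import numpy as np

def knn_change_score(control, treated, k=3):
    """Score how different treated cells look from control cells.

    control, treated: arrays of shape (n_cells, n_features), one feature
    vector per single cell. For each treated cell, find its k nearest
    control cells; a large average distance suggests the protein looks
    different under treatment. Illustrative sketch only.
    """
    # Pairwise Euclidean distances: treated (rows) vs control (columns)
    d = np.linalg.norm(treated[:, None, :] - control[None, :, :], axis=-1)
    knn_d = np.sort(d, axis=1)[:, :k]    # distance to k nearest controls
    return float(knn_d.mean())
```

Nothing here requires a human to label which differences matter: proteins whose treated cells sit far from all control cells simply score high.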
Integrating images from multiple microscopy screens reveals diverse patterns of change in the subcellular localization of proteins (2018)
Lu AX, Chong YT, Hsu IS, Strome B, Handfield LF, Kraus O, Andrews BJ, Moses AM. Integrating images from multiple microscopy screens reveals diverse patterns of change in the subcellular localization of proteins. eLife 2018;7:e31872.
Protocol and Data: https://bio-protocol.org/e3022
In the project above, I showed how we could automatically discover protein localization changes between different screens. The next question was then to ask - what would happen if we had
a whole bunch of different screens, reflecting different stresses and drug treatments? We reasoned that we might discover interesting things about proteins that would not be obvious if we looked at
any single screen alone. For example, proteins that react to many different stresses in the same way might be responsible for general responses to stress. In contrast,
a protein that reacts to only one stress might be a specialized protein that only responds in one type of environment. Finding these distinctions deepens our knowledge about how proteins work, but also
leads to important clues in developing drug treatments. For example, if you wanted to come up with an anti-cancer drug, you'd want it to target a protein that only responds in cancer cells specifically,
but not in the normal, diverse conditions that a cell might encounter.
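The general-versus-specific distinction above can be sketched as a simple counting rule: tally how many screens each protein changes in, and bin it accordingly. The cutoff and category names below are illustrative choices, not the paper's definitions:

```python
def classify_responders(change_calls, n_screens, general_cutoff=0.5):
    """Bin proteins by how many screens they show a change in.

    change_calls: dict mapping protein name -> set of screens where a
    localization change was detected. Illustrative sketch only.
    """
    categories = {}
    for protein, screens in change_calls.items():
        frac = len(screens) / n_screens
        if frac >= general_cutoff:
            categories[protein] = "general stress responder"
        elif screens:
            categories[protein] = "specific responder"
        else:
            categories[protein] = "no change detected"
    return categories
```

In practice the interesting cases are often the ones this simple rule misses, such as proteins that change in different directions under different stresses.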
We analyzed the entire yeast proteome in 24 different screens - integrating over 600,000 different images in this study. We found a lot of unexpected patterns in the ways different proteins
responded - for example, in addition to finding specific and general responses to stress, we also found proteins that behaved in very different ways under different stresses,
or unexpected changes in proteins that were thought to do one thing, but showed a change implying that they function in other ways as well. These analyses help us come up with new biological
hypotheses, highlighting biology that biologists wouldn't have previously noticed without these types of big, systematic analyses.