Below is a smattering of research problems, why I think they're important, and my contribution toward solving them.
Graduate studies @ UW
I recently finished my graduate degree at the University of Washington, where I worked on brain-computer interfaces (BCIs). BCIs interpret brain activity, allowing a person to control devices (like video games or robotic arms) by thinking. BCIs can also help people recover after neural damage (e.g., stroke, spinal cord injury, or ALS). So why don't we see BCIs infiltrating every facet of healthcare? Researchers were asking the exact same question over two decades ago. Healthcare applications are in the cards, but we have a whole battery of challenges to overcome first: BCI systems don't build on modern neuroscience findings, aren't accurate enough, are notoriously unstable, require a team of experts on hand, and have lengthy calibration periods.
It's fair to ask if we should expect major improvements -- the kind that will vault BCIs into mainstream rehabilitation and augmentative applications -- by continuing on the same decades-old lines of research. Go to any major review paper or book on BCIs and you'll find that all the paradigms discussed (i.e., BCIs using steady state visually evoked potentials, motor imagery, or evoked responses like the P300) are 25+ years old, but they still comprise the majority of today's BCI research.
I'm working to change that. My lab believes that the tools and knowledge from neuroscience offer a path forward. Our work revolves around a cool technique called "source imaging." Source imaging is attractive because it takes electrical signals recorded from the scalp (i.e., outside the head) and uses them to estimate activity on the surface of the brain (i.e., inside the head). Working with brain activity in this format brings a number of advantages, three of which I've explored so far:
Targeting brain regions of interest (Status: published in 2016)
As I mentioned, there are only a few types of BCI systems today, and the three mainstream ones are 25+ years old. Yet humans are capable of so many behaviors and types of thinking; presumably, there are hundreds more types of brain signals we could use in a BCI context.
We've demonstrated that basic neuroscience research can offer clues as to which of these behaviors are worth exploring. Functional neuroscience is a mainstay of brain research that often links brain activity in certain regions of the cortex with particular behaviors. By using source imaging to project data to the surface of the brain, we can focus on activity only within those cortical regions of interest. In other words, we're targeting the most useful signal instead of trying to wade through the whole pool of brain activity.
To show this, we used recent research that identified a brain region involved in switching auditory attention. By selectively targeting this region's activity, we could predict when a person switched who they were listening to significantly better than a naive approach could. Practically, this may be useful for developing hearing aids that tune dynamically based on what the user intends to listen to, even in crowded auditory environments (like a bustling bar). If the system knows you switched attention, it could alter its tuning to selectively amplify the sound source you're trying to hear and reject the others.
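To make the idea concrete, here's a minimal sketch with synthetic data and made-up dimensions (nothing from our actual study): the informative signal lives in a small set of cortical vertices, and time-averaged features from just those vertices separate the classes better than features pooled from the whole cortex.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical source-imaged data: 100 trials x 500 cortical vertices x 50 time
# samples, labeled by whether the listener switched auditory attention.
X = rng.standard_normal((100, 500, 50))
y = rng.integers(0, 2, size=100)

# Plant a weak class-dependent signal in a 20-vertex region of interest,
# mimicking activity localized to an attention-switching area.
roi = np.arange(40, 60)
X[np.ix_(np.flatnonzero(y == 1), roi)] += 0.3

def cv_accuracy(vertex_idx):
    feats = X[:, vertex_idx, :].mean(axis=2)  # time-average each vertex
    return cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=5).mean()

naive_acc = cv_accuracy(np.arange(500))  # all vertices: signal diluted in noise
roi_acc = cv_accuracy(roi)               # anatomically targeted vertices
print(f"naive: {naive_acc:.2f}  ROI-targeted: {roi_acc:.2f}")
```

The targeted classifier wins not because it has more information, but because it has fewer nuisance dimensions to overfit.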
Tracking network activity with deep learning (Status: in progress)
The bulk of human neuroscience has been focused on activation paradigms -- this assumes that when a human does some task X, some brain region Y becomes more active, thereby indicating its importance for that behavior. We can leverage this X <-> Y link in a BCI to record from brain region Y and predict if a person is trying to do behavior X.
Recently, however, the trendy topic in human neuroscience has become network activity. Instead of asking, "Is brain area Y more active during behavior X?" the question has evolved into, "Do brain areas Y and Z communicate during behavior X?" The focused effort of many neuroscientists has begun to uncover a number of networks (i.e., collections of several brain regions) that tend to synchronize their activity during certain behaviors. Broadly, our research is trying to exploit this newfound neuroscience knowledge in a BCI context.
To understand the importance, imagine I had a BCI that predicted whether you were trying to move a cursor left or right on a computer screen. However, this BCI couldn't answer the broader question of whether you were trying to use the computer in the first place. Without someone else to manually turn off the BCI system, it would naively continue to move the cursor even if you stopped paying attention or fell asleep. Network activity might provide clues about the user's state (e.g., daydreaming vs. intently paying attention), which is useful for changing how the BCI interprets brain signals. Right now, we're working to classify (on a second-by-second basis) whether a subject is listening to a talker or resting quietly, solely from their brain's network activity. We've had some success with traditional machine learning techniques, and we're also enthusiastically testing deep learning approaches (convolutional nets built in Keras).
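As a toy illustration of the traditional machine-learning side of this (entirely synthetic data; the real pipeline, features, and recordings differ), one common approach is to compute a region-by-region connectivity matrix for each one-second window and feed its upper triangle to a linear classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs, n_regions, n_windows = 100, 8, 120  # 100 Hz, 8 regions, 120 one-second windows

X_feat, y = [], []
for w in range(n_windows):
    listening = w % 2  # alternate "listening" and "rest" windows
    ts = rng.standard_normal((n_regions, fs))
    if listening:
        # During "listening", a shared drive synchronizes regions 0-3:
        # a toy stand-in for task-related network coupling.
        ts[:4] += 1.5 * rng.standard_normal(fs)
    # Feature vector: upper triangle of the region-by-region correlation matrix.
    corr = np.corrcoef(ts)
    X_feat.append(corr[np.triu_indices(n_regions, k=1)])
    y.append(listening)

X_feat, y = np.array(X_feat), np.array(y)
acc = cross_val_score(LogisticRegression(max_iter=1000), X_feat, y, cv=5).mean()
print(f"second-by-second listening vs. rest accuracy: {acc:.2f}")
```

A convolutional net would instead consume the full connectivity matrix (or raw time series) and learn its own features; this linear baseline is the benchmark it has to beat.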
Transfer Learning (Status: published in 2015)
Most BCIs require a 20-30 minute calibration period before they're ready to interpret brain activity. This is an awfully inconvenient practical limitation -- imagine if your car required a 20-30 minute period to calibrate the steering wheel before you could drive it. That's simply not acceptable in the real world.
BCI researchers are interested in ways to reduce this calibration by recycling training data. Often, we have data from a dozen or more subjects doing the exact same task, so it would make sense to leverage that existing data for an upcoming BCI session. The problem is that there's so much variability between people (e.g., differences in head shape, brain folding, and electrode placement) that it's hard to compare EEG activity across individuals. We found that source imaging helped normalize many of these structural differences. We were able to train a BCI using data solely from other subjects and beat the control condition (the standard method of training and testing a classifier for each individual subject). More work is needed, but it's a promising path toward eliminating calibration periods.
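A sketch of the leave-one-subject-out idea (synthetic data; it assumes source imaging has already aligned subjects into a shared feature space, with only residual subject-specific offsets remaining):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_subjects, trials, n_feats = 10, 60, 12

# Toy data: after source imaging, all subjects share the same discriminative
# direction, but each retains a subject-specific bias (residual variability).
w_shared = rng.standard_normal(n_feats)
w_shared /= np.linalg.norm(w_shared)
data = []
for s in range(n_subjects):
    offset = 0.5 * rng.standard_normal(n_feats)
    y = rng.integers(0, 2, trials)
    X = rng.standard_normal((trials, n_feats)) + offset
    X[y == 1] += 1.5 * w_shared  # class separation along the shared axis
    data.append((X, y))

# Leave-one-subject-out: calibrate only on *other* subjects' recordings.
accs = []
for held_out in range(n_subjects):
    X_tr = np.vstack([X for i, (X, y) in enumerate(data) if i != held_out])
    y_tr = np.hstack([y for i, (X, y) in enumerate(data) if i != held_out])
    X_te, y_te = data[held_out]
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    accs.append(clf.score(X_te, y_te))
print(f"zero-calibration accuracy: {np.mean(accs):.2f}")
```

The held-out subject contributes no training data at all, which is exactly the "walk up and use it" scenario a calibration-free BCI needs.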
Deep learning based inverse solution (Status: in progress)
My lab regularly uses a technique called source imaging, which estimates activity at the surface of the brain (inside the head) from electroencephalography and magnetoencephalography signals (EEG and MEG; recorded from outside the head). Remapping neural activity to the brain's surface is important because a great deal of neuroscience research interrogates the brain at the cortical or subcortical level (think fMRI, PET, ECoG, and single-unit recordings). With our data remapped into the same reference frame, we can draw better parallels and more easily exploit previous findings from these other neuroimaging modalities. During the '80s and '90s, a great deal of effort went toward developing different flavors of source imaging, yet modern research still relies almost entirely on regularized linear techniques published during that period. In its current state, remapped EEG and MEG activity remains spatially blurry compared to invasive (i.e., surgery-requiring) electrophysiologic methods like ECoG and single-unit recordings. Currently, I'm exploring whether big data and deep learning can remap brain activity more accurately -- an accomplishment that could make it safer and cheaper to interrogate the brain for scientific and medical purposes.
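For reference, the decades-old linear baseline is a single matrix expression: given a lead field that maps source activity to sensor readings, the Tikhonov-regularized minimum-norm estimate recovers the sources. (Toy dimensions and a random lead field here; a real lead field comes from a head model.)

```python
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_sources = 64, 2000  # e.g., 64 EEG channels, 2000 cortical sources

# Hypothetical lead field L: maps source activity to sensors (y = L @ x + noise).
L = rng.standard_normal((n_sensors, n_sources))

# Ground-truth activity: a single focal source.
x_true = np.zeros(n_sources)
x_true[1234] = 1.0
y = L @ x_true + 0.01 * rng.standard_normal(n_sensors)

# Classic regularized linear inverse (minimum-norm estimate):
#   x_hat = L.T @ (L @ L.T + lam * I)^-1 @ y
# This is the kind of fixed linear operator a learned inverse would aim to beat.
lam = 1.0
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

# The estimate is spatially blurry: energy is smeared over many sources,
# even though the peak tends to land on or near the true one.
print("estimated peak source:", int(np.argmax(np.abs(x_hat))))
```

A deep network would replace that fixed linear operator with a nonlinear mapping trained on many simulated (source, sensor) pairs, which is where the "big data" comes in.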
The brain's wiring architecture (Status: published in 2016)
In neuroscience, one big question concerns the brain's physical architecture. In other words, how are the regions of the brain wired? Recently, the Allen Institute for Brain Science used their industrial research machine to construct a map of the connections between 200+ brain regions spanning the entire mouse brain at a finer scale than ever before. This whole-brain wiring diagram is often called a "connectome" -- the name comes from splicing together "connection" and "genome". To understand its importance, imagine how excited frequent travelers were when the first country-wide road atlas hit the shelves.
In 2014, I was fortunate enough to attend the Allen Institute for Brain Science's 2-week summer workshop. There, two other graduate students and I used this dataset to characterize the major connectivity patterns present across the mouse brain. We found that two biologically plausible rules seemed to underlie the properties we observed. By applying these rules when simulating the growth of networks, we could recreate the newly uncovered properties of the experimental data. Because the brain's physical wiring acts as a scaffold for its electrical activity, this is an important step forward for research into how information propagates and is processed across brain regions. You can read the actual paper and digest here and the news story here.
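The specific rules are in the paper; as a generic illustration of the simulate-the-growth approach, here's a toy generative model using one commonly assumed wiring rule (distance-dependent connection probability -- an illustrative stand-in, not necessarily one of our two rules):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200  # toy "brain regions" placed randomly in a unit cube

# Illustrative wiring rule: connection probability decays
# exponentially with the distance between two regions.
pos = rng.random((n, 3))
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)

decay = 3.0  # assumed length constant of the exponential rule
p_connect = np.exp(-decay * dist)
np.fill_diagonal(p_connect, 0)          # no self-connections
adj = rng.random((n, n)) < p_connect    # sample a directed network

# Networks grown this way reproduce a hallmark of real connectomes:
# nearby regions connect far more often than distant ones.
near = adj[dist < 0.3].mean()
far = adj[dist > 1.0].mean()
print(f"connection rate  near: {near:.2f}  far: {far:.2f}")
```

The research question is then whether a small set of such local rules suffices to reproduce the global statistics measured in the experimental connectome.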
Automatic skin cancer detection (Status: publications in 2009 and 2010)
Melanoma is the deadliest form of skin cancer. It originates in melanocytes -- the cells in our skin that produce pigment -- and is typically triggered by excessive UV exposure. When caught early, dermatologists can remove melanoma during a simple outpatient procedure and recurrence is extremely low (at ~2%). If melanoma spreads to other parts of the body (i.e., metastasizes), however, the survival rate drops dramatically. Therefore, it's crucial to detect melanoma early, and any technique that quickens the process or improves detection accuracy is likely to save lives.
For my first high school job, I worked on this problem with a group led by Dr. Stoecker, a dermatologist in Rolla, MO. Our goal was to leverage machine learning techniques to recognize melanoma in digital images. Within the research group, I wrote C++ code to detect various features specific to melanoma, based on the characteristics dermatologists look for. We then fed the output of many feature detectors into a large neural network. I've been away from this field for a while, but this image recognition task is definitely worth exploring with convolutional neural networks (as a few groups have started to do). When I learn to successfully run on 5 hours of sleep a night, I plan on picking this project back up.
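To give a flavor of what those feature detectors computed (a toy Python re-imagining, not the original C++), here's one classic dermoscopy cue -- lesion asymmetry, the "A" in the ABCD rule -- measured by comparing a lesion mask against its mirror image:

```python
import numpy as np

def asymmetry_score(mask: np.ndarray) -> float:
    """Fraction of lesion pixels that do not overlap the left-right
    mirrored lesion; melanomas tend to score higher than benign,
    roughly symmetric lesions."""
    mirrored = mask[:, ::-1]
    overlap = np.logical_and(mask, mirrored).sum()
    return float(1.0 - overlap / mask.sum())

yy, xx = np.mgrid[:64, :64]

# A roughly symmetric "lesion": a centered disk.
disk = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2

# An asymmetric lesion: the same disk with one quadrant removed.
notched = disk & ~((yy < 32) & (xx < 32))

print(asymmetry_score(disk), asymmetry_score(notched))
```

In the real system, many hand-built scores like this one became the inputs to the neural network; a convolutional net collapses that pipeline by learning the features directly from pixels.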
IpsiHand: a BCI orthosis for stroke rehabilitation (Status: publications in 2011 and 2012)
One of the earliest BCI projects I was fortunate enough to be involved with was the IpsiHand system. The scientific backing came from a paper out of the Leuthardt lab (where I was doing undergraduate research) showing that there are detectable brain signals in the ipsilateral (same-side) hemisphere reflecting hand movement. In other words, moving your right hand is associated with signals in the right hemisphere of the brain. Note that this is contrary to neuroscience canon, which dictates that each side of the body is controlled by the opposite hemisphere (i.e., the right hand is controlled by the left brain).
This finding is very relevant for certain stroke victims. People who have a stroke in the motor cortex, which is heavily involved in generating movement, typically suffer a serious degree of paralysis. Furthermore, since that area of the brain is partially destroyed, we can't use a typical BCI that relies on detecting signals from that (now destroyed) region. I joined an undergraduate team working to exploit these ipsilateral motor signals for these specific stroke patients. IpsiHand recorded ipsilateral brain signals to drive an orthosis that should eventually allow stroke victims to control their hands again of their own volition. This research was the foundation for the next-generation device currently being tested in actual patients.