Below is a smattering of research problems, why I think they're important, and my contribution toward solving them.

Graduate studies @ UW


I recently finished my graduate degree at the University of Washington, where I worked on brain-computer interfaces (BCIs). BCIs interpret brain activity, allowing a person to control objects (like video games or robotic arms) by thinking. BCIs can also help people after neural damage (e.g., stroke, spinal cord injury, ALS, etc.). So why don't we see BCIs infiltrating every facet of healthcare? Researchers were asking the exact same question over two decades ago. Healthcare applications are in the cards, but we have a whole battery of challenges to overcome first. BCI systems don't build on modern neuroscience findings, aren't accurate enough, are notoriously unstable, require a team of experts on hand, and have lengthy calibration periods.

It's fair to ask whether we should expect major improvements -- the kind that will vault BCIs into mainstream rehabilitation and augmentative applications -- by continuing along the same decades-old lines of research. Go to any major review paper or book on BCIs and you'll find that all the paradigms discussed (i.e., BCIs using steady-state visually evoked potentials, motor imagery, or evoked responses like the P300) are 25+ years old, yet they still comprise the majority of today's BCI research.

I'm working to change that. My lab believes that the tools and knowledge from neuroscience offer a path forward. Our work revolves around a cool technique called "source imaging." Source imaging is attractive because it takes electrical signals recorded from the scalp (i.e., outside the head) and uses them to estimate activity on the surface of the brain (i.e., inside the head). Working with brain activity in this format brings a number of advantages, three of which I've explored so far:

  1. Targeting brain regions of interest (Status: published in 2016)

    As I mentioned, there are only a few types of BCI systems today, and the 3 mainstream ones are 25+ years old. However, there are so many behaviors and types of thinking humans are capable of; presumably, there are hundreds more types of brain signals we could use in a BCI context.

    We've demonstrated that basic neuroscience research can offer clues as to which of these behaviors are worth exploring. Functional neuroscience is a mainstay of brain research that often links brain activity in certain regions of the cortex with particular behaviors. By using source imaging to project data to the surface of the brain, we can focus on activity only within those cortical regions of interest. In other words, we're targeting the most useful signal instead of trying to wade through the whole pool of brain activity.

    To show this, we used recent research that identified a region of the brain involved in switching auditory attention. By selectively targeting this region's activity, we could predict when a person switched who they were listening to significantly better than with a naive approach. Practically, this may be useful for developing hearing aids that tune themselves based on what the user intends to listen to, even in crowded auditory environments (like a bustling bar). If the system knows you switched attention, it could dynamically alter its tuning to amplify the sound source you're trying to listen to and reject the others.
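
    To give a flavor of how region targeting works in practice, here is a minimal sketch using MNE-Python (a library we use). The file names, the dSPM inverse method, and the superior temporal label are placeholders for illustration, not the exact pipeline from the paper.

      import mne
      from mne.minimum_norm import make_inverse_operator, apply_inverse_epochs

      # Hypothetical preprocessed inputs: epoched EEG/MEG data, a forward model,
      # and a noise covariance estimate
      epochs = mne.read_epochs('subject01-epo.fif')
      fwd = mne.read_forward_solution('subject01-fwd.fif')
      noise_cov = mne.read_cov('subject01-cov.fif')

      # Build the inverse operator (maps sensor recordings to cortical sources)
      inv = make_inverse_operator(epochs.info, fwd, noise_cov)

      # Estimate source activity for each trial (dSPM is one common choice)
      stcs = apply_inverse_epochs(epochs, inv, lambda2=1. / 9., method='dSPM')

      # Pull an anatomical region of interest from the FreeSurfer parcellation
      labels = mne.read_labels_from_annot('subject01', parc='aparc',
                                          subjects_dir='subjects_dir')
      roi = [lab for lab in labels if lab.name == 'superiortemporal-rh'][0]

      # Keep only activity within the targeted region; these time courses
      # become the features for the attention-switch classifier
      roi_tcs = [stc.extract_label_time_course(roi, inv['src'], mode='mean')
                 for stc in stcs]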

  2. Tracking network activity with deep learning (Status: in progress)

    The bulk of human neuroscience has been focused on activation paradigms -- the assumption that when a human does some task X, some brain region Y becomes more active, indicating its importance for that behavior. We can leverage this X <-> Y link in a BCI: record from brain region Y and predict whether a person is trying to do behavior X.

    Recently, however, the trendy topic in human neuroscience has become network activity. Instead of asking, "Is brain area Y more active during behavior X?" the question has evolved into, "Do brain areas Y and Z communicate during behavior X?" The focused effort of many neuroscientists has begun to uncover a number of networks (i.e., collections of several brain regions) that tend to synchronize their activity during certain behaviors. Broadly, our research is trying to exploit this newfound neuroscience knowledge in a BCI context.

    To understand the importance, imagine I had a BCI that predicted whether you were trying to move a cursor left or right on a computer screen. However, this BCI couldn't answer the broader question of whether or not you were trying to use the computer in the first place. Without someone else to manually turn off the BCI system, it would naively continue to move the cursor even if you stopped paying attention or fell asleep. Network activity might provide clues about the user's state (e.g., daydreaming vs. intently paying attention), which is useful for changing how the BCI interprets brain signals. Right now, we're working to classify, on a second-by-second basis, whether a subject is listening to a talker or resting quietly, solely from their brain's network activity. We've had some success with traditional machine learning techniques, and we're also testing deep learning approaches (convolutional nets built in Keras).
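
    As a rough illustration of the deep learning side, the sketch below trains a small convolutional net in Keras on per-second connectivity matrices (region x region). The data here are random stand-ins and the architecture is a toy choice, not our actual model.

      import numpy as np
      from keras.models import Sequential
      from keras.layers import Conv2D, Flatten, Dropout, Dense

      # Toy stand-in data: one connectivity matrix (region x region) per
      # 1-second window, labeled 0 = resting quietly, 1 = listening to a talker
      n_windows, n_regions = 1000, 68
      X = np.random.rand(n_windows, n_regions, n_regions, 1).astype('float32')
      y = np.random.randint(0, 2, n_windows)

      # Small convolutional net over the connectivity "images"
      model = Sequential([
          Conv2D(16, (3, 3), activation='relu',
                 input_shape=(n_regions, n_regions, 1)),
          Conv2D(32, (3, 3), activation='relu'),
          Flatten(),
          Dropout(0.5),
          Dense(32, activation='relu'),
          Dense(1, activation='sigmoid'),  # probability the window is "listening"
      ])
      model.compile(optimizer='adam', loss='binary_crossentropy',
                    metrics=['accuracy'])
      model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)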

  3. Transfer Learning (Status: published in 2015)

    Most BCIs require a 20-30 minute calibration period before they're ready to interpret brain activity. This is an awfully inconvenient practical limitation -- imagine if your car required a 20-30 minute period to calibrate the steering wheel before you could drive it. That's simply not acceptable in the real world.

    BCI researchers are interested in ways to reduce this calibration time by recycling training data. Often, we have data from a dozen or more subjects doing the exact same task, so it makes sense to leverage this existing data for an upcoming BCI session. The problem is that there is so much variability between people (e.g., differences in head shape, brain folding, electrode placement, etc.) that it's hard to compare EEG activity across individuals. We found that using source imaging helped normalize many of these structural differences. We were able to train a BCI using data solely from other subjects and beat the control condition (the standard method of training and testing a classifier on each individual subject). More work is needed, but it's a promising path toward eliminating calibration periods.
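
    In spirit, the setup looks something like the sketch below: pool source-space features from previously recorded subjects (after morphing everyone onto a common cortical template) and classify the new user's data without any calibration. The file names and the logistic regression classifier are placeholders, not the exact analysis from the paper.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical per-subject feature matrices (trials x source-space features),
      # computed after morphing each subject onto a common cortical template
      X_pool = np.vstack([np.load('sub%02d_X.npy' % i) for i in range(1, 11)])
      y_pool = np.concatenate([np.load('sub%02d_y.npy' % i) for i in range(1, 11)])

      # The new user contributes no calibration data at all
      X_new = np.load('sub11_X.npy')
      y_new = np.load('sub11_y.npy')

      # Train only on the other subjects' data, then test on the new user
      clf = LogisticRegression()
      clf.fit(X_pool, y_pool)
      print('Zero-calibration accuracy: %.2f' % clf.score(X_new, y_new))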

Independent research


  1. Deep learning based inverse solution (Status: in progress)

    My lab regularly uses a technique called source imaging, which estimates activity at the surface of the brain (inside the head) from electroencephalography and magnetoencephalography signals (EEG and MEG; recorded from outside the head). Remapping neural activity to the brain's surface is important because a great deal of neuroscience research interrogates the brain at the cortical or subcortical level (think fMRI, PET, ECoG, and single-unit recordings). By using source imaging, we can draw better parallels and more easily exploit previous findings from these other neuroimaging modalities because our data sits in the same reference frame. During the 80s and 90s, a great deal of effort was put toward developing different flavors of source imaging. Modern research, however, still relies almost entirely on regularized linear techniques published during that time period. In its current state, remapped EEG and MEG activity remains spatially blurry compared to invasive (i.e., surgery-requiring) electrophysiologic methods like ECoG and single-unit recordings. Currently, I'm exploring whether big data and deep learning can remap brain activity more accurately -- an accomplishment that could make it safer and cheaper to interrogate brain activity for scientific and medical purposes.
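
    A minimal sketch of the idea, assuming we can simulate training pairs with a simple linear forward model (a stand-in for a proper biophysical simulation): train a network to map sensor topographies back to source amplitudes. The architecture and dimensions below are toy choices for illustration, not my actual approach.

      import numpy as np
      from keras.models import Sequential
      from keras.layers import Dense

      n_sensors, n_sources = 64, 1024  # e.g., EEG channels and cortical source points

      # Simulated training pairs: random source patterns and the sensor data they
      # would produce through a (random, toy) leadfield matrix
      leadfield = np.random.randn(n_sensors, n_sources).astype('float32')
      sources = np.abs(np.random.randn(5000, n_sources)).astype('float32')
      sensors = sources.dot(leadfield.T)

      # Fully connected net that learns the sensor -> source mapping
      model = Sequential([
          Dense(512, activation='relu', input_shape=(n_sensors,)),
          Dense(512, activation='relu'),
          Dense(n_sources, activation='relu'),  # non-negative source amplitudes
      ])
      model.compile(optimizer='adam', loss='mse')
      model.fit(sensors, sources, epochs=5, batch_size=128)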

  2. The brain's wiring architecture (Status: published in 2016)

    In neuroscience, one big question concerns the brain's physical architecture. In other words, how are the regions of the brain wired? Recently, the Allen Institute for Brain Science used their industrial research machine to construct a map of the connections between 200+ brain regions spanning the entire mouse brain at a finer scale than ever before. This whole-brain wiring diagram is often called a "connectome" (the name comes from splicing together "connection" and "genome"). To understand its importance, imagine how excited frequent travelers were when the first country-wide road atlas hit the shelves.

    In 2014, I was fortunate enough to attend the Allen Institute for Brain Science's 2-week summer workshop. There, two other graduate students and I used this dataset to characterize the major connectivity patterns present across the mouse brain. We found that two biologically plausible rules seemed to underlie the properties we observed. By applying these rules when simulating the growth of networks, we could recreate the newly uncovered properties of the experimental data. Because the brain's physical wiring acts as a scaffold for its electrical activity, this is an important step forward for research into how information propagates and is processed across brain regions. You can read the actual paper and digest here and the news story here.
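
    For a feel of what a generative connectome model looks like, here is a toy sketch that grows a network with a simple distance-dependent connection rule and then computes a summary statistic. The rule here is a generic placeholder for illustration, not necessarily the specific rules from our paper.

      import numpy as np

      np.random.seed(0)

      # Place "brain regions" at random positions in 3D space
      n_regions, decay_length = 200, 1.0
      positions = np.random.uniform(0, 10, size=(n_regions, 3))

      # Pairwise Euclidean distances between regions
      dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)

      # Connect region pairs with probability that decays exponentially with distance
      p_connect = np.exp(-dists / decay_length)
      np.fill_diagonal(p_connect, 0)
      adjacency = np.random.rand(n_regions, n_regions) < p_connect

      # Summary statistics like this get compared against the measured connectome
      print('Mean out-degree: %.1f' % adjacency.sum(axis=1).mean())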

Past research


  1. Automatic skin cancer detection (Status: publications in 2009 and 2010)

    Melanoma is the deadliest form of skin cancer. It originates in melanocytes -- the cells in our skin that produce pigment -- and is typically triggered by excessive UV exposure. When melanoma is caught early, dermatologists can remove it during a simple outpatient procedure, and recurrence is extremely low (~2%). If melanoma spreads to other parts of the body (i.e., metastasizes), however, the survival rate drops dramatically. Therefore, it's crucial to detect melanoma early, and any technique that quickens the process or improves detection accuracy is likely to save lives.

    For my first high school job, I worked with a group led by Dr. Stoecker, a dermatologist in Rolla, MO, on this problem. Our goal was to leverage machine learning techniques to recognize melanoma in digital images. Within the research group, I wrote C++ code to recognize various features specific to melanoma based on characteristics that dermatologists look for. We then fed the output of many feature detectors into a large neural network. I've been away from this field for a while, but this image recognition task is definitely worth exploring with convolutional neural networks (as a few groups have started to do). When I learn to successfully run on 5 hours of sleep a night, I plan on picking this project back up.
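
    Conceptually, the pipeline resembled the sketch below, with my old C++ feature detectors and neural network swapped for modern Python stand-ins; the file names and classifier settings are hypothetical.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      # Hypothetical pre-computed lesion features (color, texture, border measures),
      # one row per dermoscopy image; labels are 1 = melanoma, 0 = benign
      X = np.load('lesion_features.npy')
      y = np.load('lesion_labels.npy')

      # Small feed-forward network standing in for the original classifier
      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
      print(cross_val_score(clf, X, y, cv=5, scoring='roc_auc'))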

  2. IpsiHand: a BCI orthosis for stroke rehabilitation (Status: publications in 2011 and 2012)

    One of the earliest BCI projects I was fortunate enough to be involved with was the IpsiHand system. The scientific backing came from a paper out of the Leuthardt lab (where I was doing undergraduate research). They showed that there are detectable brain signals in the ipsilateral (same-side) hemisphere reflecting hand movement. In other words, the movement of your right hand is associated with signals in the right hemisphere of the brain. Note that this is contrary to neuroscience canon dictating that each side of the body is controlled by the opposite hemisphere of the brain (i.e., the right hand is controlled by the left brain).

    This is a very relevant finding for certain stroke survivors. People who have a stroke in the motor cortex, which is heavily involved in generating movement, typically suffer a serious degree of paralysis. Furthermore, since that area of the brain is partially destroyed, we can't use a typical BCI that relies on detecting signals from that same region. I joined an undergraduate team working to exploit these ipsilateral motor signals for this group of stroke patients. IpsiHand recorded ipsilateral brain signals to drive an orthosis that could eventually allow stroke survivors to control their hands again under their own volition. This research was the foundation for the next-generation device currently being tested in actual patients.


...

2017: "Incorporating additional modern neuroscience into non-invasive brain-computer interfaces."

Wronkiewicz, M; Larson, E; Lee, AK

Note: This book chapter is under review.

...

2016: "Incorporating modern neuroscience findings to improve brain-computer interfaces: tracking auditory attention."

Wronkiewicz, M; Larson, E; Lee, AK
J Neural Eng, DOI: 10.1088/1741-2560/13/5/056017

Note: The code is openly available

...

2016: "A simple generative model of the mouse mesoscale connectome."

Henriksen, S; Pang, R; Wronkiewicz, M
Elife, DOI: 10.7554/eLife.12366

Note: The Allen Institute for Brain Science wrote about our project here and the code is openly available. All authors are co-first authors.

...

2015: "Leveraging anatomical information to improve transfer learning in brain-computer interfaces."

Wronkiewicz, M; Larson, E; Lee, AK
J Neural Eng, DOI: 10.1088/1741-2560/12/4/046027

Note: The Institute for Learning & Brain Sciences wrote about our story here and the code is openly available

...

2013: "Towards a next-generation hearing aid through brain state classification and modeling."

Wronkiewicz, M; Larson, E; Lee, AK
Conf Proc IEEE Eng Med Biol Soc, DOI: 10.1109/EMBC.2013.6610124

...

2013: "Automatic Pill Identification from Pillbox Images"

Madsen, D; Payne, K; Hagerty, J; Szanto, N; Wronkiewicz, M; Moss, R; Stoecker, W
VISAPP, DOI: 10.5220/0004303603780384

...

2012: "IpsiHand Bravo: an improved EEG-based brain-computer interface for hand motor control rehabilitation."

Holmes, CD; Wronkiewicz, M; Somers, T; Liu, J; Russell, E; Kim, D; Rhoades, C; Dunkley, J; Bundy, D; Galboa, E; Leuthardt, E
Conf Proc IEEE Eng Med Biol Soc, DOI: 10.1109/EMBC.2012.6346287

...

2012: "Using ipsilateral motor signals in the unaffected cerebral hemisphere as a signal platform for brain-computer interfaces in hemiplegic stroke survivors."

Bundy, DT; Wronkiewicz, M; Sharma, M; Moran, DW; Corbetta, M; Leuthardt, EC
J Neural Eng, DOI: 10.1088/1741-2560/9/3/036011

...

2011: "Real-time naive learning of neural correlates in ECoG Electrophysiology."

Freudenburg, Z; Ramsey, N; Wronkiewicz, M; Smart, W; Pless, R; Leuthardt, E
IJMLC, DOI: 10.7763/IJMLC.2011.V1.40

...

2011: "An EEG-based brain computer interface for rehabilitation and restoration of hand control following stroke using ipsilateral cortical physiology."

Fok, S; Schwartz, R; Wronkiewicz, M; Holmes, C; Zhang, J; Somers, T; Bundy, D; Leuthardt, E
Conf Proc IEEE Eng Med Biol Soc, DOI: 10.1109/IEMBS.2011.6091549

Note: This work helped shape the next generation of the IpsiHand that is used to help stroke patients

...

2010: "Detection of granularity in dermoscopy images of malignant melanoma using color and texture features."

Stoecker, WV; Wronkiewiecz, M; Chowdhury, R; Stanley, RJ; Xu, J; Bangert, A; Shrestha, B; Calcara, DA; Rabinovitz, HS; Oliviero, M; Ahmed, F; Perry, LA; Drugge, R
Comput Med Imaging Graph, DOI: 10.1016/j.compmedimag.2010.09.005

...

2009: "Detection of basal cell carcinoma using color and histogram measures of semitranslucent areas."

Stoecker, WV; Gupta, K; Shrestha, B; Wronkiewiecz, M; Chowdhury, R; Stanley, RJ; Xu, J; Moss, RH; Celebi, ME; Rabinovitz, HS; Oliviero, M; Malters, JM; Kolm, I
Skin Res Technol, DOI: 10.1111/j.1600-0846.2009.00354.x


Mark Wronkiewicz

Curriculum Vitae (PDF)
Below is my CV. See the PDF for a downloadable version.


Current Position

Postdoctoral Researcher; March 2017-Present

  • University of Washington, Seattle
  • Advisor: Adrian KC Lee
  • Currently, I'm continuing my neural engineering research to detect resting-state activity for the development of asynchronous BCI systems. My project is focused on developing methods that will allow BCIs to automatically pause their operation based on the user's brain activity. I'm also working on a second project (supported by the AI Grant) using deep learning techniques aimed at generating synthetic neural activity containing the temporal richness and network dynamics of actual brain recordings.

Formal Education

PhD, Neuroscience Graduate Program; 2012-2017

  • University of Washington, Seattle
  • Graduated March 2017 (4.5 years)
  • Dissertation: Facilitating the incorporation of neuroscience methods and knowledge into brain-computer interfaces
  • Advisor: Adrian KC Lee
  • Research addresses current limitations in BCIs, which may soon help rehabilitate people with neural damage (like strokes and spinal cord injuries). Projects I worked on included neuroscience-based targeting of relevant brain regions for control, transfer learning of brain activity (via the source space), detection of resting-state activity to help develop asynchronous BCI systems (i.e., those without external cues), and determining the relationship between cortical activity and EEG recordings using deep learning.


BS, Biomedical Engineering; 2008-2012

  • Washington University in St. Louis
  • Minor: Electrical Engineering
  • Extracurricular research on BCIs with Dr. Eric Leuthardt. Projects included a brain-controlled hand orthosis, an EEG biofeedback program, and a brain-controlled iPhone game.
  • Leader and co-founding member of Washington University in St. Louis Brain-Computer Interface Club (WUSTL-BCI) student group (2011-2012)
  • Selected as a fellow for the Center for Innovation in Neuroscience and Technology (CINT) 3-month Innovation Fellowship in medical device design (2011) and was invited back as student leader (2012)

Publications

See Publications tab

Programming

  • Python; 4 years, use daily; Experience with time series data (EEG, MEG, ECoG), scikit-learn, TensorFlow, Keras, MNE-Python, and software development
  • Matlab; 2 years, used recently; Experience with time series data (single-unit recordings), biophysical modeling, and signal processing
  • C++; 4 years, used recently; Experience with BCI2000 as well as image processing and feature engineering
  • Objective-C; 2 years

Other skills

  • Github; handle: wronk; Software development on MNE-Python
  • Linux, OSX, and Windows environments
  • Unix command line
  • 3D modeling with Blender and AutoCAD

Achievements

  • Awarded the AI Grant in 2017 for proposal to generate neural data using Generative Adversarial Networks (GANs, a deep learning framework). Top 10 of 450 applicants
  • Awarded NVIDIA GPU Grant in 2017 for proposal to explore methods using deep learning in neural engineering
  • Figure from 2016 Journal of Neural Engineering publication selected for cover art of journal’s October issue (2016)
  • Presented art piece at Art Neureau 2015 in the Fremont Abbey Arts Center
  • Awarded best team project at 2014 Summer Workshop for the Dynamic Brain
  • Chosen for the Allen Institute for Brain Science 2014 Summer Workshop for the Dynamic Brain
  • Received the NSF Graduate Research Fellowship (GRFP) in 2012
  • Awarded best project in undergraduate senior design class along with team members Edward Poyo and Chavelle Patterson. The project was titled ExoFlex: Exoskeleton for Restored Hand Function and consisted of a customizable, 3D printed hand orthosis for BCI control.
  • Participant with my team in the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA) 2012 Student Design Competition
  • Semi-finalist with my team in the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA) 2011 Student Design Competition
  • Awarded undergraduate scholarships from the American Society of Czech Engineers, Bright Flight, Missouri Boys State, and DuPage County Marines

Volunteering, teaching, and outreach

  • Neuroscience Seminar Coordinator for internal and external academic speakers (2013, 2014, 2015)
  • TA for Neuro 302 (the undergraduate neuroscience wet lab course)
  • Volunteer TA for Undergraduate Computational Neuroscience Course Fall 2014
  • Volunteer TA at the 2-week Allen Institute for Brain Science 2015 Summer Workshop for the Dynamic Brain (taught Python and Git)
  • Volunteer at Brain Awareness Week outreach event for middle school students 2014, 2015, 2016
  • Volunteer at Pacific Science Center Life Sciences Research Weekend 2013, 2014
  • Volunteer for Peaks of Life climbing organization raising money for uncompensated health care at Seattle Children's Hospital

  • Gave a talk called "A Bit About Bitcoin" for the Nerd Nite science/tech outreach event; See my Bitcoin page or the talk on vimeo.

iPhone Applications

  • BrainCopter; On app store since 2012 and covered by WUSTL here with a few pictures here
  • Drug DB; On app store since 2010
  • Pill DB; Automatic medical pill identifier