@emily-wasserman
Last active September 16, 2024 22:32
Representational similarity analysis

I spent over a year working on the project that became this paper, together with Alek Chakroff, Liane Young, and Rebecca Saxe. As a result, when friends and family asked me what I was doing at work, my answer often involved attempting to explain what exactly 'representational similarity analysis' is. There are plenty of scientific references out there now for neuroscientists interested in this technique - here's one - but not much for those outside the field. Here's my attempt at making this concept accessible!

Neural representations

Words on a page encode meaning. DNA encodes proteins. And neurons encode thoughts. That's the basic assumption on which I'll premise this entire analysis: that patterns of neural activation contain information about cognitive states. Understanding exactly how to extract that information, such that we could infer any cognitive state given its neural pattern - its representation - is one of cognitive neuroscience's holiest grails.

Neural representations

For some reason, Google Images thinks brains are glowing and blue.

But the mapping between patterns and thoughts is far from straightforward. To give you a sense of the difficulty: visual perception is one of the most studied areas of cognitive neuroscience. The deep learning models that are now revolutionizing your Netflix movie recommendations have their roots in models of visual processing. Some models (like this one) can now generate patterns that closely match the true neural patterns in the brains of people viewing objects, but even this is considered an impressive achievement, and the problem is far from solved.

By comparison, my work was in the neuroscience of moral cognition. I could probably spend decades trying to build a model for moral representations, and still not even come close to the performance of the models currently being used to decode visual perception. Yet in the lab, we still had all these brains lying about (on our servers - not Young Frankenstein-style), and we were itching to unlock their secrets. How could I decode the information hidden in these brains' patterns without building a complicated model?

Abby Normal

Similarity: a 'common currency'

Representational similarity analysis sidesteps this problem by modeling the similarity between neural patterns, instead of the patterns themselves. If a pattern really does contain information about some aspect of a cognitive state, we'd expect patterns to be more similar when the cognitive states are more similar.

For example, let's say I put you in a scanner and scan your brain while you look at faces of different people. If I show you the same image of the same face twice, I'd expect your neural patterns for those two images to be very similar. If I show you two different images of the same face, I'd expect your neural patterns to be a little less similar. If I show you two faces that are different in every way - gender, ethnicity, age - I'd expect your neural patterns to be even less similar.

Pattern vectors and faces

Representational similarity analysis works by transforming those neural patterns into similarity matrices, so that each entry M(i,j) of the matrix contains a measure of how similar patterns i and j are. Now I don't need a model of how each individual pattern corresponds to a given cognitive state. Instead, I'll create a model to capture how patterns converge and diverge, depending on how their associated cognitive states are related.
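To make the matrix concrete, here's a minimal sketch in Python with toy data (the sizes and random patterns are invented for illustration, and this is not the study's actual pipeline). One common choice of similarity measure in RSA is the Pearson correlation between pattern vectors:

```python
import numpy as np

# Toy data: each row of `patterns` is one "neural pattern" - a vector
# of voxel activations recorded for one stimulus.
rng = np.random.default_rng(0)
patterns = rng.standard_normal((4, 50))   # 4 stimuli x 50 voxels

# M[i, j] = Pearson correlation between patterns i and j.
M = np.corrcoef(patterns)

print(M.shape)   # one row and column per stimulus: (4, 4)
```

The diagonal of M is all 1s (every pattern is perfectly similar to itself), and the matrix is symmetric, since the similarity of i to j is the similarity of j to i.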

Locating information

But remember, we're not just talking about abstract patterns here - we're talking about patterns in actual brains. And different parts of the brain care about different kinds of information. To go back to the example of faces: while patterns in the brain's visual cortex may carry information about faces, patterns in the motor cortex may not. So if we looked at similarity matrices in the motor cortex, we might not see high pattern similarity between similar faces, because no 'face representation' is reflected in the neural patterns there.

Functional map of the brain

A recent fine-grained functional map of the brain from the Human Connectome Project (Glasser et al., 2016). Image (c) Matthew Glasser & David Van Essen.

Instead of looking at patterns over the whole brain, I'll draw a circle around one little chunk of brain, and create a similarity matrix there. That matrix will capture how patterns resemble each other within just this small area of brain. After modeling the similarities there, I'll shift the circle slightly and model again. It's called a searchlight approach - like a searchlight, I illuminate one spot at a time, looking for information there. Ultimately, I'll end up with a similarity map that shows me which parts of the brain encode information I care about.
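A hedged sketch of that searchlight loop, with made-up voxel coordinates, an illustrative radius, and invented function names (none of this is the study's actual code):

```python
import numpy as np

# Fake data for illustration: voxel positions in space, and one pattern
# (activation value per voxel) for each of 8 stimuli.
rng = np.random.default_rng(1)
n_vox = 200
coords = rng.uniform(0, 10, size=(n_vox, 3))   # fake voxel coordinates
data = rng.standard_normal((8, n_vox))         # 8 stimuli x 200 voxels

def searchlight_similarity(center, radius=3.0):
    """Similarity matrix of patterns restricted to voxels near `center`."""
    inside = np.linalg.norm(coords - center, axis=1) <= radius
    if inside.sum() < 2:                       # too few voxels for a pattern
        return None
    return np.corrcoef(data[:, inside])

# Illuminate one spot at a time: here, centering the sphere on each voxel.
maps = [searchlight_similarity(c) for c in coords]
maps = [m for m in maps if m is not None]
```

Each surviving entry in `maps` is one local similarity matrix - the raw material for modeling, one small chunk of brain at a time.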

Modeling similarity

So, what information do I care about? In my case, the data I was analyzing was produced from a set of 48 stories about moral violations, grouped into four categories: Psychological Harm, Physical Harm, Incest, and Pathogen. (Yes, these are exactly what they sound like - we made people read stories about incestuous encounters, or about drinking a smoothie filled with Grandpa's toenail clippings.) I wanted to know where neural patterns for stories from the same category were similar - in other words, where the category was represented. I also wanted to know where neural patterns were similar across categories - where, for example, were both kinds of Harm represented?

In my linear model, each regressor captured one hypothesis about the similarities between neural patterns based on category information. For example, the Physical-Physical Similarity regressor represented the hypothesis that patterns for Physical Harm stories resembled patterns for other stories in that same category. Since the data being modeled came in the shape of a similarity matrix, the regressors were also matrices. Each entry in a regressor matrix was either a 1, to hypothesize similarity between two patterns, or a 0, to hypothesize no relationship.

Example regressor matrix

When I modeled my neural data with these regressors, I got one parameter estimate for each regressor. This represented the degree to which that particular hypothesis about neural pattern similarities was supported by the data.
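The last two paragraphs can be put together in a toy sketch (the labels, sizes, and noise levels are all invented for illustration, and this is not the study's exact model): build one binary regressor matrix per category, then regress the observed similarities on them to get one parameter estimate each.

```python
import numpy as np

# 12 toy stories: 3 per category, with categories labeled 0..3.
labels = np.repeat(np.arange(4), 3)
n = len(labels)

def within_category_regressor(cat):
    """1 where both stories belong to `cat`, 0 elsewhere (diagonal zeroed)."""
    r = ((labels[:, None] == cat) & (labels[None, :] == cat)).astype(float)
    np.fill_diagonal(r, 0.0)
    return r

regressors = [within_category_regressor(c) for c in range(4)]

# Simulate an observed similarity matrix driven by category 0 alone,
# plus noise, kept symmetric like a real similarity matrix.
rng = np.random.default_rng(2)
observed = 0.5 * regressors[0] + 0.05 * rng.standard_normal((n, n))
observed = (observed + observed.T) / 2

# Vectorize the unique (upper-triangle) pairs and fit least squares:
# one parameter estimate per regressor, plus an intercept.
iu = np.triu_indices(n, k=1)
X = np.column_stack([r[iu] for r in regressors] + [np.ones(iu[0].size)])
betas, *_ = np.linalg.lstsq(X, observed[iu], rcond=None)

# The estimate for category 0 should dominate the other three.
print(np.round(betas[:4], 2))
```

Only the upper triangle is modeled because the matrix is symmetric: each pair of patterns should count once, not twice.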

Putting it all together

Moving my searchlight across the brain, I ran that model in every small chunk of brain, and labeled that chunk with the parameter estimates for each regressor. Then, for each regressor, I wrote its values back onto an empty map of the brain. This gave me a 'heat map', with hot spots showing regions of the brain where patterns did contain information about a certain category.

Depiction of searchlight RSA

After doing this in all 39 of my participants' brains and aggregating their individual maps, I then had a group-level heat map for each model regressor. Across the whole brain, I could see where patterns of activity encoded information about a given category of moral story.
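The aggregation step might look something like this toy sketch (five simulated participants standing in for the 39; the one-sample t-statistic shown is one common way to test group-level maps, not necessarily the study's exact procedure):

```python
import numpy as np

# Fake data: one parameter-estimate map per participant, one value
# per searchlight location, with a small true effect added everywhere.
rng = np.random.default_rng(3)
n_sub, n_vox = 5, 100
subject_maps = 0.2 + rng.standard_normal((n_sub, n_vox))

# Group map: the average estimate at each location...
group_map = subject_maps.mean(axis=0)

# ...and a voxelwise one-sample t-statistic against zero, asking where
# the effect is reliable across participants.
t_map = group_map / (subject_maps.std(axis=0, ddof=1) / np.sqrt(n_sub))

print(group_map.shape, t_map.shape)
```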

This revealed a clear distinction between two broader categories of moral story: Harm and Purity. Patterns within the categories of Physical and Psychological Harm tended to be similar in the same areas of the brain, and patterns across those two categories did as well. But patterns within Incest and Pathogen, two subcategories of Purity violation, were similar in different regions. Simply put, your brain represents the act of insulting someone as pretty similar to kicking them, but drinking that toenail smoothie and sleeping with your cousin are (while both usually considered pretty gross) represented quite differently.

I still don't have an explicit model that actually tells me how a brain represents information about immoral acts. If I showed you one of our moral stories - for example, the fun one about the girl who drinks a glass of human blood - I couldn't make a very accurate prediction about what neural patterns you'd produce. All those stock photos of glowing brains are misleading; information-wise, a single brain is mostly dark, the meaning of its patterns hidden. But by comparing the similarity of multiple brain patterns - the relationships of relationships - I was able to shine a spotlight on areas of the brain that encoded information about the type of an immoral act.

The (rather messy) Matlab code I wrote for this study lives here. A Python/nipype interface will be forthcoming.

You can find the full set of moral stories at the end of the paper I linked at the top. Be warned: it's a wild ride.
