RSA with nltools

Hi everyone,

I’m new to RSA and have a question about testing a specific hypothesis about the representational structure of my data. I followed the steps on the RSA page: I first created an adjacency matrix, then used the similarity() method from the Adjacency class to compare that matrix to my data across four ROIs. However, the similarity score was identical for every ROI and I could not figure out why; I’ve confirmed the masks are different. I created three models (see script below). For the first two models, the similarity scores differed across ROIs, but for the third model the score was the same for every ROI, which seemed weird to me. I also checked the data, which looked fine to me as well. You can find my script here: ru-highres/RSA_post.ipynb at main · KarenShen21/ru-highres · GitHub


Hi @Karen,

Thanks for posting your question with code. Let me make sure I fully understand what you are trying to do.

  1. You have constructed a hypothesized representational structure of how you expect the different conditions to relate to each other, such that feedback conditions 1, 2, 3, and 4 will be more similar to one another, while feedback conditions D, K, X, and Z will be more similar to one another.
  2. You are interested in examining whether there is a statistically significant association between this proposed representational structure and the brain patterns within 4 different regions of interest, and you want to test this hypothesis across the subjects in your study.
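As a sketch of what that hypothesized structure looks like as a matrix (the block layout and condition ordering here are assumptions based on your description; adjust to match your actual model):

```python
import numpy as np

# Hypothetical 8-condition model: the numeric feedback conditions (1-4)
# are expected to be similar to each other, as are the letter conditions (D,K,X,Z).
labels = ["1", "2", "3", "4", "D", "K", "X", "Z"]

model = np.zeros((8, 8))
model[:4, :4] = 1  # within-group similarity for the numeric feedback block
model[4:, 4:] = 1  # within-group similarity for the letter feedback block
np.fill_diagonal(model, 1)

# To use this with nltools, wrap it as an Adjacency object, e.g.:
# from nltools.data import Adjacency
# M = Adjacency(model, matrix_type='similarity', labels=labels)
```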

If this is correct, then the issue with your code is on the second point. You will need to loop over subjects to get, for each subject, a correlation value between the brain similarity across conditions and your proposed representational matrix. Then you will perform inference over subjects separately for each ROI. I personally tend to do this using sign permutation tests, but you could also apply a Fisher r-to-z transformation and then use a one-sample t-test.
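The Fisher r-to-z alternative would look roughly like this (the per-subject correlations here are simulated just to show the shape of the test):

```python
import numpy as np
from scipy.stats import ttest_1samp

# Simulated per-subject model-brain Spearman correlations for one ROI
rng = np.random.default_rng(0)
subject_correlations = rng.uniform(0.1, 0.5, size=20)

# Fisher r-to-z transform, then a one-sample t-test against zero
z = np.arctanh(subject_correlations)
t_stat, p_value = ttest_1samp(z, 0)
print(f"t={t_stat:.2f}, p={p_value:.4f}")
```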

Here is a suggested modification to your code. I can’t test it myself, so you may need to tweak it.

import os

from nltools.data import Brain_Data
from nltools.stats import one_sample_permutation

datapath = '/data/projects/ru-highres/derivatives/fsl/'

rsa_subject_correlations = {}
for i, m in enumerate([mask1, mask2, mask3, mask4]):

    subject_mask_correlations = {}
    for sub in subjects:
        # create a list of beta maps, one per condition
        file_list9 = [os.path.join(datapath, "sub-" + str(sub), "L2_task-aff_model-01.gfeat", "cope1.feat", "stats", "zstat1.nii.gz"),
                      ]  # ... append the remaining condition maps here

        # put all the beta maps into a Brain_Data object
        beta = Brain_Data(file_list9)

        # compute pairwise correlations between the beta maps within the masked area
        sub_pattern_similarity = 1 - beta.apply_mask(m).distance(metric='correlation')
        sub_pattern_similarity.labels = ["aff_1", "aff_2", "aff_3", "aff_4", "inf_1", "inf_2", "inf_3", "inf_4"]

        # correlate this subject's similarity matrix with the model - use M3 based on the cells defined above
        subject_mask_correlations[sub] = sub_pattern_similarity.similarity(M3, metric='spearman', n_permute=0, ignore_diagonal=True)['correlation']
    rsa_subject_correlations[f'Mask{i+1}'] = subject_mask_correlations

    # Run a sign permutation test on each ROI and print the results
    permutation_stats = one_sample_permutation(list(subject_mask_correlations.values()), n_permute=5000, tail=2, n_jobs=-1)

    print(f"ROI: Mask{i+1}: correlation={permutation_stats['mean']:.2f}, p={permutation_stats['p']:.3f}")
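If you want to see what the sign permutation test is doing under the hood, a minimal numpy version (independent of nltools, with simulated subject correlations) looks something like this:

```python
import numpy as np

def sign_permutation_test(values, n_permute=5000, seed=0):
    """Two-tailed one-sample test by randomly flipping the sign of each value."""
    values = np.asarray(values)
    rng = np.random.default_rng(seed)
    observed = values.mean()
    # Build a null distribution of means under random sign flips
    signs = rng.choice([-1, 1], size=(n_permute, len(values)))
    null_means = (signs * values).mean(axis=1)
    p = np.mean(np.abs(null_means) >= np.abs(observed))
    return observed, p

# Example with simulated per-subject correlations for one ROI
rng = np.random.default_rng(1)
r_values = rng.uniform(0.05, 0.4, size=20)
mean_r, p = sign_permutation_test(r_values)
print(f"mean r={mean_r:.2f}, p={p:.4f}")
```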