neuroscience-ai-reading-course

Objective

Data

Models

Mapping Model

Word Embedding Models

Experiential word representations

Distributional word embedding models

  1. Word2Vec

  2. fastText

  3. Dependency-based Word2Vec

  4. GloVe
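The models listed above all build on the distributional hypothesis: words that occur in similar contexts get similar vectors. A toy stand-in for that shared idea (not an implementation of any of the listed models) is to count co-occurrences within a window and factorize the count matrix:

```python
import numpy as np

def toy_distributional_embeddings(sentences, dim=2, window=1):
    """Toy distributional word vectors: count co-occurrences within
    a window, then factorize the count matrix with truncated SVD.
    Illustrative only; Word2Vec/fastText/GloVe each use different,
    more sophisticated objectives."""
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    for s in sentences:
        for i, w in enumerate(s):
            lo, hi = max(0, i - window), min(len(s), i + window + 1)
            for j in range(lo, hi):
                if i != j:
                    counts[idx[w], idx[s[j]]] += 1
    # Truncated SVD turns sparse counts into dense low-dim vectors.
    U, S, _ = np.linalg.svd(counts, full_matrices=False)
    return {w: U[idx[w], :dim] * S[:dim] for w in vocab}

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
vecs = toy_distributional_embeddings(corpus)
```

In practice one would load pretrained vectors rather than train from scratch; this sketch only makes the counting-then-compressing intuition concrete.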

25 verb features

Non-distributional word vector representation

Evaluation

Task 1: Predicting neural activation patterns using word embeddings and vice versa
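A common way to set up this task (assumed here, since the outline does not specify the mapping model) is a linear encoding model: ridge regression from embedding dimensions to voxel activations. A minimal sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 60 stimulus words, 300-d embeddings, 500 voxels.
n_words, emb_dim, n_voxels = 60, 300, 500
X = rng.standard_normal((n_words, emb_dim))           # word embeddings
W_true = rng.standard_normal((emb_dim, n_voxels))
Y = X @ W_true + 0.1 * rng.standard_normal((n_words, n_voxels))  # activations

# Ridge regression, closed form: W = (X'X + lam*I)^{-1} X'Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(emb_dim), X.T @ Y)
Y_hat = X @ W   # predicted activation patterns
```

The reverse direction (activations to embeddings) is the same regression with X and Y swapped; regularization matters because fMRI studies typically have far fewer stimulus words than voxels.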

Results

Task 2: Predicting word representations from brain activations
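For the decoding direction, a typical evaluation (assumed here as an illustration) is to regress from activation patterns to embeddings, then identify the held-out word by cosine similarity against the vocabulary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: decode a word's embedding from its activation
# pattern, then identify the word by nearest cosine similarity.
n_words, n_voxels, emb_dim = 50, 200, 20
E = rng.standard_normal((n_words, emb_dim))          # true embeddings
B = E @ rng.standard_normal((emb_dim, n_voxels))     # synthetic brain data
B += 0.05 * rng.standard_normal(B.shape)

# Hold out the last word; fit ridge regression on the rest.
W = np.linalg.solve(B[:-1].T @ B[:-1] + np.eye(n_voxels), B[:-1].T @ E[:-1])
e_hat = B[-1] @ W                                    # decoded embedding

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sims = np.array([cosine(e_hat, e) for e in E])
predicted = int(np.argmax(sims))                     # index of decoded word
```

Ranking all candidate words by similarity also yields the rank-accuracy and 2-vs-2 style metrics often reported for this task.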

Results

Task 3: Comparing the performance of the models for different classes of nouns

Results

Further Comparison (for additional insight)

GloVe vs Dependency-based Word2Vec

Task 4: Comparing most predictable voxels in the brain for each word embedding model
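One way to find "most predictable" voxels (assumed here, as the outline does not fix a metric) is to score each voxel by the correlation between its predicted and observed activation across held-out words, then rank:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical: rank voxels by per-voxel correlation between a model's
# predicted activations and the observed ones on held-out words.
n_test, n_voxels = 30, 100
Y_true = rng.standard_normal((n_test, n_voxels))
Y_pred = Y_true + rng.standard_normal((n_test, n_voxels))  # noisy predictions
Y_pred[:, :5] = Y_true[:, :5]          # make voxels 0-4 perfectly predicted

def per_voxel_corr(a, b):
    """Pearson correlation computed independently for each voxel column."""
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

r = per_voxel_corr(Y_true, Y_pred)
top_voxels = np.argsort(r)[::-1][:5]   # indices of most predictable voxels
```

Repeating this per embedding model and comparing the resulting voxel maps shows where in the brain each model's predictions are most reliable.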

Results

Mixed Model Experiment

Conclusion and Remarks

References