---
license: apache-2.0
datasets:
- Sleep-EDF
- TUAB
- MOABB
language:
- en
tags:
- eeg
- brain
- timeseries
- self-supervised
- transformer
- biomedical
- neuroscience
---
# BENDR: BErt-inspired Neural Data Representations
Pretrained BENDR model for EEG classification tasks. This is the official Braindecode implementation
of BENDR from Kostas et al. (2021).
## Model Details
- **Model Type**: Transformer-based EEG encoder
- **Pretraining**: Contrastive self-supervised learning on masked EEG sequences
- **Architecture**:
- Convolutional Encoder: 6 blocks with 512 hidden units
- Transformer Contextualizer: 8 layers, 8 attention heads
- Total Parameters: ~157M (see the sanity-check sketch below)
- **Input**: Raw EEG signals (20 channels, variable length)
- **Output**: Contextualized representations or class predictions
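
The quoted parameter count can be sanity-checked directly from the public constructor; the following is a minimal sketch using only the `BENDR` class shown in the usage example below:

```python
from braindecode.models import BENDR

model = BENDR(n_chans=20, n_outputs=2)

# Count parameters; this should be on the order of ~157M
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")
```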
## Usage
```python
import torch
from huggingface_hub import hf_hub_download

from braindecode.models import BENDR

# Instantiate the model (20 EEG channels, 2 output classes)
model = BENDR(n_chans=20, n_outputs=2)

# Download the pretrained weights from the Hugging Face Hub and load them
checkpoint_path = hf_hub_download(
    repo_id="braindecode/bendr-pretrained-v1",
    filename="pytorch_model.bin",
)
checkpoint = torch.load(checkpoint_path, map_location="cpu")
model.load_state_dict(checkpoint["model_state_dict"], strict=False)

# Run inference on a dummy EEG window
model.eval()
with torch.no_grad():
    eeg_data = torch.randn(1, 20, 600)  # (batch, channels, time)
    predictions = model(eeg_data)
```
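
Assuming the default classification head returns raw logits, class probabilities and the predicted label can be obtained with a softmax:

```python
probs = torch.softmax(predictions, dim=-1)  # (batch, n_outputs) class probabilities
predicted_class = probs.argmax(dim=-1)      # (batch,) predicted label indices
```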
## Fine-tuning
```python
from torch.optim import Adam

# Freeze the convolutional encoder for transfer learning
for param in model.encoder.parameters():
    param.requires_grad = False

# Fine-tune only the remaining (unfrozen) parameters on the downstream task
optimizer = Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
```
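
A minimal fine-tuning loop might then look like the sketch below; `train_loader` is a placeholder `DataLoader` yielding `(eeg, label)` batches and is not part of this repository:

```python
import torch.nn.functional as F

model.train()
for epoch in range(10):  # number of epochs is task-dependent
    for eeg, label in train_loader:  # hypothetical loader of (batch, 20, time) windows
        optimizer.zero_grad()
        logits = model(eeg)
        loss = F.cross_entropy(logits, label)
        loss.backward()
        optimizer.step()
```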
## Paper
[BENDR: Using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data](https://doi.org/10.3389/fnhum.2021.653659)
Kostas, D., Aroca-Ouellette, S., & Rudzicz, F. (2021).
Frontiers in Human Neuroscience, 15, 653659.
## Citation
```bibtex
@article{kostas2021bendr,
  title={BENDR: Using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data},
  author={Kostas, Demetres and Aroca-Ouellette, St{\'e}phane and Rudzicz, Frank},
  journal={Frontiers in Human Neuroscience},
  volume={15},
  pages={653659},
  year={2021},
  publisher={Frontiers}
}
```
## Implementation Notes
- The start-token representation is read at index 0, following the BERT [CLS] convention
- Uses T-Fixup weight initialization for stability
- Includes LayerDrop for regularization
- All architectural improvements from the original paper are maintained
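
As an illustration of the LayerDrop idea (a generic sketch, not braindecode's actual implementation), each transformer layer is skipped with some probability during training:

```python
import torch
import torch.nn as nn

class LayerDropStack(nn.Module):
    """Illustrative only: run a stack of layers, randomly skipping each one
    with probability `p` during training (LayerDrop regularization)."""

    def __init__(self, layers, p=0.1):
        super().__init__()
        self.layers = nn.ModuleList(layers)
        self.p = p

    def forward(self, x):
        for layer in self.layers:
            if self.training and torch.rand(1).item() < self.p:
                continue  # skip this layer for this forward pass
            x = layer(x)
        return x
```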
## License
Apache 2.0
## Authors
- Braindecode Team
- Original paper: Kostas et al. (2021)