# Model card for boldgpt_small_patch10.kmq
A Vision Transformer (ViT) model trained on BOLD activation maps from NSD-Flat. Patches were quantized to discrete tokens using k-means (`KMeansTokenizer`). The model was trained to autoregressively predict the next patch token, with shuffled patch order, using a cross-entropy loss.
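For intuition, k-means tokenization amounts to assigning each flattened patch to its nearest centroid in a learned codebook. The following is a minimal sketch of that idea, not the actual `KMeansTokenizer` implementation; the function name, shapes, and the 1024-entry codebook (matching `--vs 1024` below) are illustrative assumptions.

```python
import torch

def quantize_patches(patches: torch.Tensor, centroids: torch.Tensor) -> torch.Tensor:
    """Map flattened patches to discrete token ids by nearest centroid.

    patches:   (B, N, D) flattened activation patches
    centroids: (K, D) k-means codebook
    returns:   (B, N) integer token ids in [0, K)
    """
    # Squared Euclidean distance from every patch to every centroid: (B, N, K)
    dists = (patches.unsqueeze(-2) - centroids).pow(2).sum(dim=-1)
    # Each patch takes the id of its closest centroid
    return dists.argmin(dim=-1)

# Example with made-up sizes: 100-dim patches (10x10), 1024-token vocabulary
tokens = quantize_patches(torch.randn(1, 64, 100), torch.randn(1024, 100))
```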
## Dependencies
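At a minimum, the usage example below requires the `boldgpt` package (which provides `create_model` and `ActivityTransform`), Hugging Face `datasets`, and PyTorch.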
## Usage

```python
from boldgpt.data import ActivityTransform
from boldgpt.models import create_model
from datasets import load_dataset

# Load the pretrained model
model = create_model("boldgpt_small_patch10.kmq", pretrained=True)

# Load NSD-Flat and return PyTorch tensors
dataset = load_dataset("clane9/NSD-Flat", split="train")
dataset.set_format("torch")

# Prepare a batch of one example, applying the activity transform
transform = ActivityTransform()
batch = dataset[:1]
batch["activity"] = transform(batch["activity"])

# output: (B, N + 1, K) predicted next-token logits
output, state = model(batch)
```
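If `output` is the `(B, N + 1, K)` logits tensor noted in the comment above, a greedy prediction for each position can be read off with an argmax over the vocabulary dimension. This is an illustrative sketch, not a documented decoding API:

```python
# Greedy next-token prediction per position, assuming output is (B, N + 1, K) logits
pred_tokens = output.argmax(dim=-1)  # (B, N + 1) token ids
```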
## Reproducing

Training command:

```bash
torchrun --standalone --nproc_per_node=4 \
    scripts/train_gpt.py --out_dir results \
    --model boldgpt_small \
    --ps 10 --vs 1024 --vocab_state checkpoints/ps-10_vs-1024_vss-4000_seed-42/tok_state.pt \
    --shuffle --epochs 1000 --bs 512 \
    --workers 0 --amp --compile --wandb
```
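A few of these flags can be read off against the model name: `--ps 10` matches the patch size in `boldgpt_small_patch10`, `--vs 1024` matches the k-means vocabulary size, and `--shuffle` enables the shuffled patch order described above. The remaining flags (`--bs`, `--amp`, `--compile`, `--wandb`) appear to be standard batch size, mixed precision, `torch.compile`, and Weights & Biases logging options, though `scripts/train_gpt.py` is the authoritative reference.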
Commit: `f9720ca52d6fa6b3eb47a34cf95f8e18a8683e4c`