LLM4Code-memtune
Collection for the paper "How Much Do Code Language Models Remember? An Investigation on Data Extraction Attacks before and after Fine-tuning"
This dataset consists of the attack samples used for the paper "How Much Do Code Language Models Remember? An Investigation on Data Extraction Attacks before and after Fine-tuning"
We have two splits:
- fine-tuning attack, which consists of selected samples coming from the fine-tuning set
- pre-training attack, which consists of selected samples coming from the Java section of TheStack-v2

We also have different splits depending on the duplication rate of the samples:
- d1: the samples inside the training set are unique
- d2: the samples inside the training set are present two times
- d3: the samples inside the training set are present three times
- dg3: the samples inside the training set are present more than three times
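The duplication-rate buckets above can be sketched as a simple grouping over occurrence counts. This is a minimal illustration only; the function and variable names are hypothetical and not part of the released dataset:

```python
from collections import Counter

def duplication_bucket(count: int) -> str:
    # Map a sample's occurrence count in the training set to its split name.
    if count == 1:
        return "d1"   # unique
    if count == 2:
        return "d2"   # present two times
    if count == 3:
        return "d3"   # present three times
    return "dg3"      # present more than three times

def bucket_samples(training_set):
    # Group training samples by how often each one occurs.
    counts = Counter(training_set)
    buckets = {"d1": [], "d2": [], "d3": [], "dg3": []}
    for sample, n in counts.items():
        buckets[duplication_bucket(n)].append(sample)
    return buckets

# Toy training set: "a" once, "b" twice, "c" three times, "d" four times.
samples = ["a", "b", "b", "c", "c", "c", "d", "d", "d", "d"]
buckets = bucket_samples(samples)
```

Here `buckets` places "a" in d1, "b" in d2, "c" in d3, and "d" in dg3, mirroring how the splits partition the attack samples.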