---
tags:
- ctranslate2
- int8
- float16
- generated_from_trainer
widget:
- text: "How can I write a Python function to generate the nth Fibonacci number?"
- text: "How do I get the current date using shell commands? Explain how it works."
model-index:
- name: starchat-beta
  results: []
license: bigcode-openrail-m
---
# Fast-Inference with Ctranslate2
Speed up inference while reducing memory use by 2x-4x using int8 inference in C++ on CPU or GPU.

Quantized version of [HuggingFaceH4/starchat-beta](https://huggingface.co/HuggingFaceH4/starchat-beta).
```bash
pip install "hf-hub-ctranslate2>=2.0.8" "ctranslate2>=3.15.0"
```
Converted on 2023-06-12 using
```bash
ct2-transformers-converter --model HuggingFaceH4/starchat-beta --output_dir /home/michael/tmp-ct2fast-starchat-beta --force --copy_files merges.txt all_results.json training_args.bin tokenizer.json README.md dialogue_template.json tokenizer_config.json eval_results.json vocab.json train_results.json generation_config.json trainer_state.json special_tokens_map.json added_tokens.json requirements.txt .gitattributes --quantization int8_float16 --trust_remote_code
```

Checkpoint compatible with [ctranslate2>=3.15.0](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.0.8](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8`  for `device="cpu"`
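If you want the same snippet to run on machines with or without a GPU, you can pick the pairing at runtime; a minimal sketch of this (assuming only that `ctranslate2` is installed; `get_cuda_device_count` is part of its Python API):

```python
import ctranslate2

# Pick the device and matching compute type per the bullets above:
# int8_float16 on CUDA, plain int8 on CPU.
if ctranslate2.get_cuda_device_count() > 0:
    device, compute_type = "cuda", "int8_float16"
else:
    device, compute_type = "cpu", "int8"
```

The full loading and generation example follows: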

```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer

model_name = "michaelfeil/ct2fast-starchat-beta"
# Use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on the model type.
model = GeneratorCT2fromHfHub(
        # load in int8 on CUDA
        model_name_or_path=model_name,
        device="cuda",
        compute_type="int8_float16",
        # tokenizer=AutoTokenizer.from_pretrained("HuggingFaceH4/starchat-beta")
)
outputs = model.generate(
    text=["def fibonnaci(", "User: How are you doing? Bot:"],
    max_length=64,
    include_prompt_in_result=False
)
print(outputs)
```
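Raw prompts like `"def fibonacci("` work for code completion, but conversational queries should be wrapped in the model's dialogue template (documented in the original model card below). A minimal sketch reusing the `model` object from above; trimming at `<|end|>` assumes the quantized checkpoint emits the same special tokens as the original model:

```python
# Wrap a user query in the StarChat dialogue template before generating.
prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query="How do I sort a list in Python?")

outputs = model.generate(
    text=[prompt],
    max_length=256,
    include_prompt_in_result=False,
)
# Keep only the first turn; <|end|> delimits turns in this template
# (assumption: the quantized checkpoint keeps the original special tokens).
answer = outputs[0].split("<|end|>")[0].strip()
print(answer)
```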

# License and other remarks
This is only a quantized version. The license conditions are intended to be identical to those of the original Hugging Face repo.

# Original description
    


<img src="https://huggingface.co/HuggingFaceH4/starchat-beta/resolve/main/model_logo.png" alt="StarChat Beta Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# Model Card for StarChat-β

StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat-β is the second model in the series, and is a fine-tuned version of [StarCoderPlus](https://huggingface.co/bigcode/starcoderplus) that was trained on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). We found that removing the in-built alignment of the OpenAssistant dataset boosted performance on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) and made the model more helpful at coding tasks. However, this means that the model is likely to generate problematic text when prompted to do so, and it should only be used for educational and research purposes.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Model type:** A 16B parameter GPT-like model fine-tuned on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
- **Language(s) (NLP):** Primarily English and 80+ programming languages.
- **License:** BigCode Open RAIL-M v1
- **Finetuned from model:** [bigcode/starcoderplus](https://huggingface.co/bigcode/starcoderplus)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/bigcode-project/starcoder
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/starchat-playground


## Intended uses & limitations

The model was fine-tuned on a variant of the [`OpenAssistant/oasst1`](https://huggingface.co/datasets/OpenAssistant/oasst1) dataset, which contains a diverse range of dialogues in over 35 languages. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground) to test its coding capabilities. 

Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="HuggingFaceH4/starchat-beta", torch_dtype=torch.bfloat16, device_map="auto")

# We use a variant of ChatML to format each message
prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query="How do I sort a list in Python?")
# We use a special <|end|> token with ID 49155 to denote ends of a turn
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=50, top_p=0.95, eos_token_id=49155)
# You can sort a list in Python by using the sort() method. Here's an example:\n\n```\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\nnumbers.sort()\nprint(numbers)\n```\n\nThis will sort the list in place and print the sorted list.
```
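The pipeline's output includes the full prompt as well as the completion. A minimal sketch for extracting just the assistant's reply from the `outputs` above (plain string handling, no extra dependencies):

```python
# pipe() returns a list of dicts; "generated_text" includes the prompt.
generated = outputs[0]["generated_text"]
# The assistant's turn follows the final <|assistant|> marker and is
# closed by <|end|> (token ID 49155, per the comment above).
reply = generated.split("<|assistant|>")[-1].split("<|end|>")[0].strip()
print(reply)
```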

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

StarChat-β has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). 
Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community; for more on this, see the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata), which is derived from The Stack.

Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. 
For example, it may produce code that does not compile or that produces incorrect results.  
It may also produce code that is vulnerable to security exploits.  
We have also observed that the model tends to produce false URLs, which should be carefully inspected before clicking.

StarChat-β was fine-tuned from the base model [StarCoderPlus](https://huggingface.co/bigcode/starcoderplus); please refer to its model card's [Limitations Section](https://huggingface.co/bigcode/starcoderplus#limitations) for relevant information.
In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its [technical report](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view).

## Training and evaluation data

StarChat-β is trained on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). We applied the same [recipe](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered/blob/main/wizardlm_clean.py) used to filter the ShareGPT datasets behind the [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered) models.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 6
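
The total batch sizes follow directly from the per-device settings; a quick arithmetic check:

```python
# Effective batch sizes implied by the per-device hyperparameters above.
train_batch_size = 4
eval_batch_size = 4
num_devices = 8
gradient_accumulation_steps = 8

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices

assert total_train_batch_size == 256  # matches the reported value
assert total_eval_batch_size == 32    # matches the reported value
```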

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5321        | 0.98  | 15   | 1.2856          |
| 1.2071        | 1.97  | 30   | 1.2620          |
| 1.0162        | 2.95  | 45   | 1.2853          |
| 0.8484        | 4.0   | 61   | 1.3274          |
| 0.6981        | 4.98  | 76   | 1.3994          |
| 0.5668        | 5.9   | 90   | 1.4720          |


### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3

## Citation

Although there isn't a blog post or paper associated with StarChat-β, you can find details on the earlier version in the blog post below:

**BibTeX:**

```
@article{Tunstall2023starchat-alpha,
  author = {Tunstall, Lewis and Lambert, Nathan and Rajani, Nazneen and Beeching, Edward and Le Scao, Teven and von Werra, Leandro and Han, Sheon and Schmid, Philipp and Rush, Alexander},
  title = {Creating a Coding Assistant with StarCoder},
  journal = {Hugging Face Blog},
  year = {2023},
  note = {https://huggingface.co/blog/starchat},
}
```