AtakanTekparmak committed
feat: Updated README

README.md
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- chat
- qwen
- qwen-coder
- agent
---

# Dria-Agent-α-3B

## Introduction

***Dria-Agent-α*** is a series of large language models trained on top of the [Qwen2.5-Coder](https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f) series, specifically on top of the [Qwen/Qwen2.5-Coder-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) and [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) models, to be used in agentic applications. These models are the first instalment of our agent-focused LLMs (hence the **α** in the name), which we hope to improve with better and more elaborate techniques in subsequent releases.

Dria-Agent-α employs ***Pythonic function calling***: the LLM uses blocks of Python code to interact with the provided tools and output its actions. This approach was inspired by much previous work, including but not limited to [DynaSaur](https://arxiv.org/pdf/2411.01747), [RLEF](https://arxiv.org/pdf/2410.02089), [ADAS](https://arxiv.org/pdf/2408.08435) and [CAMEL](https://arxiv.org/pdf/2303.17760). This way of function calling has a few advantages over traditional JSON-based function calling methods:

1. **One-shot Parallel Multiple Function Calls:** The model can utilise many synchronous processes in one chat turn to arrive at a solution, something that would take other function calling models multiple turns of conversation.
2. **Free-form Reasoning and Actions:** The model provides its reasoning traces freely in natural language, and its actions inside \`\`\`python \`\`\` blocks, as it already tends to do without special prompting or tuning. This tries to mitigate the possible performance loss caused by imposing specific formats on LLM outputs, as discussed in [Let Me Speak Freely?](https://arxiv.org/pdf/2408.02442)
3. **On-the-fly Complex Solution Generation:** The solution provided by the model is essentially a Python program, excluding some "risky" builtins like `exec`, `eval` and `compile` (see the full list in **Quickstart** below). This enables the model to implement custom complex logic with conditionals and synchronous pipelines (using the output of one function in the next function's arguments), which would not be possible with current JSON-based function calling methods (as far as we know). A minimal illustration follows this list.
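
As a minimal illustration of points 1 and 3, here is the kind of single turn this enables. The tools `get_flight_status` and `rebook_flight` are hypothetical stand-ins we made up and stubbed out so the snippet runs; in practice the tools come from the functions schema given to the model:

```python
# Hypothetical tools, stubbed so the example is runnable; a real agent
# receives these through the functions schema in the system prompt.
def get_flight_status(flight_no: str) -> str:
    return "delayed"

def rebook_flight(flight_no: str, passenger: str) -> str:
    return f"Rebooked {passenger} onto the next flight after {flight_no}"

# One Pythonic turn: a call, a conditional, and a dependent follow-up call,
# which JSON-based function calling would typically spread over several turns.
status = get_flight_status("XY123")
action = (
    rebook_flight("XY123", "Jane Doe")
    if status == "delayed"
    else "no action needed"
)
```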
## Quickstart

````python
import json
from typing import Any, Dict, List
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "driaforall/function-calling-3B-30k-2e"
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Please use our provided prompt for best performance
SYSTEM_PROMPT = """
You are an expert AI assistant that specializes in providing Python code to solve the task/problem at hand provided by the user.

You can use Python code freely, including the following available functions:

<|functions_schema|>
{{functions_schema}}
<|end_functions_schema|>

The following dangerous builtins are restricted for security:
- exec
- eval
- execfile
- compile
- importlib
- __import__
- input
- exit

Think step by step and provide your reasoning, outside of the function calls.
You can write Python code and use the available functions. Provide all your python code in a SINGLE markdown code block like the following:

```python
result = example_function(arg1, "string")
result2 = example_function2(result, arg2)
```

DO NOT use print() statements AT ALL. Avoid mutating variables whenever possible.
""".strip()

# Example tool schema, passed to the model as plain Python stubs with docstrings
get_sample_data = """
def get_current_date():
    \"\"\" Get the current date.

    Returns:
    - Current date in YYYY-MM-DD format
    \"\"\"
    pass

def check_availability(day: str, start_time: str, end_time: str):
    \"\"\" Check if a time slot is available on a given day.

    Args:
    - day: The day to check in YYYY-MM-DD format
    - start_time: Start time in HH:MM format
    - end_time: End time in HH:MM format

    Returns:
    - True if slot is available, False otherwise
    \"\"\"
    pass

def make_appointment(day: str, start_time: str, end_time: str, title: str):
    \"\"\" Make an appointment for a given time slot.

    Args:
    - day: The day to make appointment in YYYY-MM-DD format
    - start_time: Start time in HH:MM format
    - end_time: End time in HH:MM format
    - title: The title of the appointment

    Returns:
    - True if appointment was made successfully, False otherwise
    \"\"\"
    pass
"""

# Helper function to inject a tool schema into the system prompt
def format_prompt(tools: str) -> str:
    return SYSTEM_PROMPT.replace("{{functions_schema}}", tools)

system_prompt = format_prompt(get_sample_data)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Can you check if I have tomorrow 10:00-12:00 available and make an appointment for a meeting with my thesis supervisor if so?"},
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
````

The output should be something like:

````
To accomplish this task, we need to follow these steps:

1. Determine the current date and add one day to it to find tomorrow's date.
2. Check the availability of the time slot from 10:00 to 12:00 on that day.
3. If the slot is available, make an appointment for the meeting.

Here's the Python code to achieve this:

```python
from datetime import datetime, timedelta

# Step 1: Determine the current date and add one day to get tomorrow's date
current_date = get_current_date()
tomorrow_date = (datetime.strptime(current_date, '%Y-%m-%d') + timedelta(days=1)).strftime('%Y-%m-%d')

# Step 2: Check the availability of the time slot from 10:00 to 12:00 on that day
slot_start_time = '10:00'
slot_end_time = '12:00'
is_slot_available = check_availability(tomorrow_date, slot_start_time, slot_end_time)

# Step 3: If the slot is available, make an appointment for the meeting
appointment_title = "Meeting with thesis supervisor"
appointment_made = False

if is_slot_available:
    appointment_made = make_appointment(tomorrow_date, slot_start_time, slot_end_time, appointment_title)
```

This code first calculates tomorrow's date, then checks if the time slot from 10:00 to 12:00 is available, and finally makes an appointment if the slot is available.
````
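
To act on such a response, the host program has to extract the fenced Python block and execute it against real tool implementations. Below is a minimal sketch of one way to do this under the builtin restrictions listed in the system prompt; it is an illustrative example, not an official executor, and `extract_code` / `run_restricted` are names we made up:

```python
import builtins
import re
from typing import Any, Callable, Dict

def extract_code(response: str) -> str:
    """Pull the first ```python ... ``` block out of a model response."""
    match = re.search(r"```python\n(.*?)```", response, re.DOTALL)
    return match.group(1) if match else ""

def run_restricted(code: str, tools: Dict[str, Callable]) -> Dict[str, Any]:
    """Run model-written code with the restricted builtins removed.

    Caveat: removing __import__ also blocks `import` statements, so a real
    sandbox would additionally whitelist safe modules such as datetime.
    """
    blocked = {"exec", "eval", "execfile", "compile",
               "importlib", "__import__", "input", "exit"}
    safe_builtins = {name: obj for name, obj in vars(builtins).items()
                     if name not in blocked}
    namespace: Dict[str, Any] = {"__builtins__": safe_builtins, **tools}
    exec(code, namespace)  # the host uses exec; the sandboxed code cannot
    # Hand back whatever variables the model created, e.g. appointment_made
    return {k: v for k, v in namespace.items()
            if k not in tools and k != "__builtins__"}
```

Note that the sample response above does `from datetime import datetime, timedelta`, so in practice the blocklist has to be reconciled with a whitelist of importable modules.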

## Evaluation & Performance

We evaluate the model on the following benchmarks:

1. Benchmark 1
2. Benchmark 2
3. ...
4. **Dria-Pythonic-Agent-Benchmark (DPAB):** The benchmark we curated through synthetic data generation, model-based validation, filtering and manual selection to evaluate LLMs on their Pythonic function calling ability, spanning multiple scenarios and tasks. More detailed information about the benchmark can be found in the [GitHub repo](https://github.com/firstbatchxyz/function-calling-eval) and in our [blog post](blog-link).

Below are the evaluation results for Qwen2.5-Coder-3B-Instruct and Dria-Agent-α-3B:

| Benchmark Name | Qwen2.5-Coder-3B-Instruct | Dria-Agent-α-3B |
|----------------|---------------------------|-----------------|
| BFCL | TBD | TBD |
| MMLU-Pro | 35.2 ([Self Reported](https://arxiv.org/pdf/2409.12186)) | 29.8* |
| DPAB | TBD | TBD |

**\*Note:** The model tends to use Pythonic function calling for many of the test cases in STEM-related fields (math, physics, chemistry, etc.) in the MMLU-Pro benchmark, which isn't captured by the evaluation framework and scripts provided in its [GitHub repository](https://github.com/TIGER-AI-Lab/MMLU-Pro/tree/main). We haven't modified the evaluation script, and leave this for future iterations of the model. Based on a qualitative analysis of the model's responses, however, we suspect its score would increase rather than suffer a ~6% decrease.