abhiAI777 committed on
Commit f11d61b · verified · 1 Parent(s): 0bcec24

Update README.md

Files changed (1):
  1. README.md +38 -7
README.md CHANGED
@@ -1,5 +1,5 @@
  ---
- base_model: unsloth/phi-4-unsloth-bnb-4bit
  tags:
  - text-generation-inference
  - transformers
@@ -11,12 +11,43 @@ language:
  - en
  ---

- # Uploaded model
- - **Developed by:** aixonlab
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit

- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
  ---
+ base_model: microsoft/phi-4
  tags:
  - text-generation-inference
  - transformers

  - en
  ---

+ <img src="https://cdn-uploads.huggingface.co/production/uploads/652c2a63d78452c4742cd3d3/E3FHirSLCZPF8I_K-yKF0.png" width="800"/>
+
+ # Valkyyrie-14b v1
+
+ Valkyyrie-14b v1 is a fine-tuned large language model based on Microsoft's Phi-4, trained further to improve its conversational abilities.
+
+ ## Model Details 📊
+ - Developed by: AIXON Lab
+ - Model type: Causal language model
+ - Language(s): English (primarily); may support other languages
+ - License: apache-2.0
+ - Repository: https://huggingface.co/aixonlab/Valkyyrie-14b-v1
+
+ ## Model Architecture 🏗️
+ - Base model: microsoft/phi-4
+ - Parameter count: ~14 billion
+ - Architecture: decoder-only transformer language model
+
+ ## Open LLM Leaderboard Evaluation Results
+ Coming soon!
+
+ ## Training & Fine-tuning 🔄
+ Valkyyrie-14b was fine-tuned to achieve:
+
+ 1. Better conversational skills
+ 2. Greater creativity in writing and conversation
+ 3. Broader knowledge across various topics
+ 4. Improved performance on specific tasks such as writing, analysis, and problem-solving
+ 5. Better contextual understanding and response generation
+
+ ## Intended Use 🎯
+ Intended for use as a general assistant or as a role-specific chat bot.
+
+ ## Ethical Considerations 🤔
+ As a fine-tune of phi-4, this model may inherit biases and limitations from its parent model and from the fine-tuning dataset. Users should be aware of potential biases in generated content and use the model responsibly.
+
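A quick-start sketch for loading the card's model with the 🤗 Transformers causal-LM API. It assumes the tokenizer ships a chat template (Phi-4 derivatives typically do); the `chat` helper is illustrative, not part of the released model, and generating requires enough GPU memory for ~14B parameters.

```python
# Usage sketch for aixonlab/Valkyyrie-14b-v1 (illustrative; not an official example).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "aixonlab/Valkyyrie-14b-v1"

def chat(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and answer a single user prompt."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # halves memory vs fp32; device_map spreads layers across GPUs
        device_map="auto",
    )
    messages = [{"role": "user", "content": prompt}]
    # Format the prompt with the model's chat template (assumed present in the tokenizer).
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(
        inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

# Example (downloads the full ~14B-parameter checkpoint on first run):
# print(chat("Write a short poem about the northern lights."))
```

The same checkpoint can also be served with text-generation-inference (per the card's tags) or loaded in 4-bit via `bitsandbytes` if GPU memory is tight.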