---
base_model: microsoft/phi-4
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Valkyyrie-14b v1

Valkyyrie-14b-v1 is a fine-tuned large language model based on Microsoft's Phi-4, further trained to improve its conversational capabilities.

## Model Details 📊

- Developed by: AIXON Lab
- Model type: Causal Language Model
- Language(s): English (primarily); may support other languages
- License: apache-2.0
- Repository: https://huggingface.co/aixonlab/Valkyyrie-14b-v1

## Model Architecture 🏗️

- Base model: phi-4
- Parameter count: ~14 billion
- Architecture specifics: Transformer-based language model

## Open LLM Leaderboard Evaluation Results

Coming soon!

## Training & Fine-tuning 🔄

Valkyyrie-14b-v1 was fine-tuned to achieve:

1. Better conversational skills
2. Greater creativity in writing and conversation
3. Broader knowledge across various topics
4. Improved performance on specific tasks such as writing, analysis, and problem-solving
5. Better contextual understanding and response generation

## Intended Use 🎯

As a general assistant or a role-specific chat bot.

## Ethical Considerations 🤔

As a fine-tuned model based on phi-4, this model may inherit biases and limitations from its parent model and from the fine-tuning dataset. Users should be aware of potential biases in generated content and use the model responsibly.
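
## Example Usage 💬

A minimal sketch of loading the model with Hugging Face Transformers, based on the repository listed above. It assumes the tokenizer ships a chat template and that bf16 weights fit on your hardware; adjust the prompt format and dtype if not.

```python
# Minimal sketch: load Valkyyrie-14b-v1 with Transformers and run one chat turn.
# Assumptions: the repo's tokenizer provides a chat template; bf16 fits your GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aixonlab/Valkyyrie-14b-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: adjust for your hardware
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Outline a short mystery story in three sentences."},
]

# Build the prompt from the chat template and generate a response.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Sampling parameters (temperature, max_new_tokens) are illustrative defaults, not recommendations from the model authors.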