
Model Card for flock-io/move-llm

We developed a Large Language Model (LLM) on top of DeepSeek that delivers GPT-4-level performance specifically for the Move programming language. The model offers advanced code generation, error handling, and context-aware assistance tuned to Move’s unique requirements. By combining DeepSeek’s foundation with a Move focus, it provides reliable, high-performance support for smart contract and blockchain development in the Move ecosystem.

Model Details

Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

  • Developed by: FLock.io
  • Funded by [optional]: [More Information Needed]
  • Shared by [optional]: [More Information Needed]
  • Model type: [More Information Needed]
  • Language(s) (NLP): [More Information Needed]
  • License: [More Information Needed]
  • Finetuned from model [optional]: a DeepSeek base model (exact variant not specified)

Model Sources [optional]

  • Repository: [More Information Needed]
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: [More Information Needed]

Uses

Load the model with 🤗 Transformers and prepend the Move system prompt to your request:

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hub
tokenizer = AutoTokenizer.from_pretrained("flock-io/move-llm")
model = AutoModelForCausalLM.from_pretrained("flock-io/move-llm")

# Build the prompt: system instruction followed by your Move question
sys_prompt = "You are an expert in Aptos Move programming language."
input_text = sys_prompt + "\n" + "Your input text here"

# Tokenize, generate, and decode the response
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
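
For example, here is a minimal sketch of requesting a small Aptos Move module; the user prompt, token budget, and sampling settings below are illustrative assumptions, not part of the original card:

# Hypothetical user request; replace with your own Move task
user_prompt = "Write a Move module with a counter resource and an increment function."
prompt = sys_prompt + "\n" + user_prompt
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling settings are illustrative; adjust for your use case
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.2,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))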
Model size: 6.74B parameters (Safetensors, F32 tensors)