[LLM] How to use stablelm-2-12b-chat

codens 2024. 4. 16. 19:31

Stable LM 2 12B is an LLM (Large Language Model) from Stability AI, the company behind Stable Diffusion.

 

Introducing Stable LM 2 12B

https://stability.ai/news/introducing-stable-lm-2-12b


https://huggingface.co/stabilityai/stablelm-2-12b-chat

    - Install the required modules
pip install transformers accelerate

    - Install PyTorch
        - Latest-version install instructions: https://pytorch.org/get-started/locally/
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
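Before loading a 12B model, it is worth confirming that the install worked and that PyTorch actually sees a CUDA GPU (a quick sanity check, not part of the original post):

```python
import torch

# Confirm the install and whether a CUDA GPU is visible;
# without one, a 12B model will run on CPU and be extremely slow.
print("torch", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```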

//-------------------------------------
from transformers import AutoModelForCausalLM, AutoTokenizer
from datetime import datetime

model_path = "stabilityai/stablelm-2-12b-chat"
# model_path = r"D:\AI\text-generation-webui\models\stabilityai_stablelm-2-12b-chat"

print(datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "loading model...")
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    trust_remote_code=True,
)
print("Model loaded successfully")
prompt = [{"role": "user", "content": "Implement snake game using pygame"}]
inputs = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, return_tensors="pt")
print(inputs)
print(datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "generating...")
tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
    eos_token_id=100278,  # <|im_end|>
)
# slice off the prompt tokens so only the generated continuation is decoded
output = tokenizer.decode(tokens[:, inputs.shape[-1] :][0], skip_special_tokens=False)

print(datetime.now().strftime("%Y-%m-%d %H:%M:%S"), "output", output)

//-------------------------------------
Note: generation took about 8 minutes.
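Generation time depends heavily on hardware and the dtype the weights are loaded in. A common way to cut memory use and often speed things up is to pass `torch_dtype` to `from_pretrained` so the weights load in bfloat16 instead of the float32 default. A sketch of the changed load options (assumes a reasonably recent CUDA GPU with enough VRAM; the actual `from_pretrained` call is commented out to avoid the large download here):

```python
import torch

# Load-time options to try if generation is slow (assumption: a CUDA GPU
# that supports bfloat16; on older cards, torch.float16 is the usual fallback).
load_kwargs = {
    "device_map": "auto",
    "torch_dtype": torch.bfloat16,  # half the memory of the float32 default
}
# model = AutoModelForCausalLM.from_pretrained(model_path, **load_kwargs)
print(load_kwargs["torch_dtype"])
```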

Loading this model in text-generation-webui produced an error.
