I followed the code from this article:
Langchain - Mistral 7B 模型實作 day-1 ("Langchain - Mistral 7B model implementation, day 1"), a Colab walkthrough by CWChang on Medium.
However, on a machine without NVIDIA CUDA I always get the error below. Is this a GPU problem, or is there another way to work around it? (If I switch to loading the model on CPU instead, memory usage becomes very high.)
Traceback (most recent call last):
  File "c:\Users\user\OneDrive\桌面\Zu_bot\AI\mistral.py", line 17, in <module>
    model_4bit = AutoModelForCausalLM.from_pretrained(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\transformers\models\auto\auto_factory.py", line 563, in from_pretrained
    return model_class.from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\transformers\modeling_utils.py", line 3049, in from_pretrained
    hf_quantizer.validate_environment(
  File "C:\Python312\Lib\site-packages\transformers\quantizers\quantizer_bnb_4bit.py", line 62, in validate_environment
    raise ImportError(
ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`
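For context, one way I imagine working around this (a sketch only, assuming the `mistralai/Mistral-7B-v0.1` model id from the article) is to request `bitsandbytes` 4-bit quantization only when CUDA is actually available, and otherwise fall back to a plain CPU load. The helper function name `build_load_kwargs` is mine, not from the article:

```python
import torch

def build_load_kwargs() -> dict:
    """Return from_pretrained kwargs: 4-bit on CUDA, plain CPU load otherwise."""
    if torch.cuda.is_available():
        # bitsandbytes 4-bit quantization needs a CUDA GPU (plus accelerate).
        from transformers import BitsAndBytesConfig
        return {
            "quantization_config": BitsAndBytesConfig(load_in_4bit=True),
            "device_map": "auto",
        }
    # CPU fallback: no bitsandbytes involved, so the ImportError never triggers.
    # low_cpu_mem_usage streams weights in instead of materializing two copies,
    # which trims peak RAM, but a full-precision 7B model still needs a lot.
    return {
        "low_cpu_mem_usage": True,
        "torch_dtype": torch.float32,
    }

# Usage (assumed model id from the article):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "mistralai/Mistral-7B-v0.1", **build_load_kwargs()
# )
```

This avoids the crash, but it does not solve the underlying memory problem on CPU; that is what I am asking about.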