Replies: 2 comments
-
Another alternative would be to convert the model to the current format. However, things have progressed quite a bit since then, and I'd recommend the newer models anyway. Maybe try the current version of the Python bindings with, e.g., the Phi 3 Mini model? It would be easier to recommend something with more details about the problem.
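Something along these lines should be enough to get started. This is a minimal sketch with the current bindings; the model filename here comes from the built-in download list and may change between releases:

```python
# Minimal sketch with the current gpt4all Python bindings
# (pip install gpt4all). The model filename is taken from the
# built-in model list and may change between releases.
from gpt4all import GPT4All

# The first run downloads the weights into the default model directory.
model = GPT4All("Phi-3-mini-4k-instruct.Q4_0.gguf")

with model.chat_session():
    print(model.generate("What is a GGUF file?", max_tokens=128))
```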
-
We are dropping support for the GPT-J model architecture in the next release. There was never a script to convert GGML-format GPT-J models (.bin) to GGUF. There is a script to convert the original model weights (pytorch_model.bin) to GGUF here, but not for long. In practice, these models are obsolete and there are better models available now.
-
Last year, I had an early version of gpt4all installed on my Linux PC. I had it working pretty darn well through Python, using the gpt4all-lora-unfiltered-quantized.bin file, which I still have in my .nomic folder. Recently, however, I lost my gpt4all directory; it was an old version that easily let me run the model file through Python. My .nomic folder still has: gpt4all, gpt4all-lora-quantized.bin, gpt4all-lora-quantized-linux-x86, gpt4all-lora-quantized-OSX-intel, gpt4all-lora-quantized-OSX-m1, and gpt4all-lora-unfiltered-quantized.bin.
I've tried installing the newer versions of gpt4all, but they refuse to launch correctly on this outdated Linux PC.
Is there a way I could still use these files with Python? I can still launch the model manually from the terminal using
./gpt4all-lora-quantized-linux-x86 -m gpt4all-lora-unfiltered-quantized.bin
I'm just not sure how to use it with Python without the version of gpt4all that I had before.
Edit: I tried installing older versions of gpt4all and nomic, from last spring, and that gets me closer, but it doesn't quite load right, darn it.
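For reference, what I'm trying to reproduce looks roughly like this. It's a sketch from memory of the old GGML-era bindings (roughly gpt4all < 1.0 on PyPI); the exact constructor parameters varied between early releases, so treat the keyword arguments and the pinned version range as assumptions:

```python
# Sketch of the old GGML-era gpt4all bindings (pre-GGUF, roughly
# installed with: pip install "gpt4all<1.0"). The exact API varied
# between early releases, so these parameters are assumptions.
from gpt4all import GPT4All

model = GPT4All(
    model_name="gpt4all-lora-unfiltered-quantized.bin",
    model_path="/home/me/.nomic",  # folder that holds the .bin file
    allow_download=False,          # use the local file only
)
print(model.generate("Hello, are you working?"))
```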