November 8th, 2023
Comparing Local GPT Solutions: AutoGPT, AnythingLLM, GPT4All, PrivateGPT
Author: David, Creata AI
What's the fuss about GPT solutions outside of OpenAI?
Many local GPT solutions have popped up since 2023, as companies and startups jumped on the LLaMA and GPT bandwagon. The author has tested some of the open-source packages and would like to share his experience installing and running each of them.
Here is the list of packages described in this humble article:
- AutoGPT
- AnythingLLM
- GPT4All
- PrivateGPT
- llama.cpp
- llama-lab
There are a lot of open-source LLM/GPT-related software packages, and the author does not have an exhaustive list of them. If you would like a package/app included here, please contact us at support@creataai.com or creataai.com@gmail.com.
I have personally tried to install and test the following packages as of October 31, 2023:
AutoGPT
- Building - Lay the foundation for something amazing.
- Testing - Fine-tune your agent to perfection.
- Viewing - See your progress come to life.

It's basically a framework that lets you build an AI agent or AI assistant. You will need to understand how it works before you can use it to build an agent. The learning curve can be steep, so it's not suggested for the general public.
Available from: https://github.com/Significant-Gravitas/AutoGPT
AnythingLLM
It's a full stack application that allows you to chat with your local documents. According to its creator: "AnythingLLM aims to be a full-stack application where you can use commercial off-the-shelf LLMs or popular open source LLMs and vectorDB solutions. Anything LLM is a full-stack product that you can run locally as well as host remotely and be able to chat intelligently with any documents you provide."
The package still seems to be in its early stage. It's probably suited for someone who is familiar with Python and the relevant tools. Here is what's involved:

- Have a vector DB set up, either with Pinecone (https://pinecone.io) or a local DB
- Run a server component from the command line
- Set up a location that AnythingLLM will load documents from
- Run a collector that monitors new documents, processes them, and turns them into vectors for GPT
- Finally, run a client Python app which shows a nice user interface (Gradio based?)

I was able to go through the whole process and get it up and running, although I ran into quite a few issues along the way. In the end, I was able to see GPT answering questions based on my local test documents. Note that even after you have added local documents to GPT's "knowledge base" (a process called embedding), GPT still uses its training data when answering questions, not solely your local documents, so you will sometimes find its answers not very relevant.

The package is available from: https://github.com/Mintplex-Labs/anything-llm

Again, you need to be familiar with Python and have good programming knowledge so you can debug issues when you run into them.
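The collector-plus-vector-DB flow described above can be sketched in a few lines of plain Python. This is a generic illustration, not AnythingLLM's actual code: the bag-of-words "embedding" and the in-memory list standing in for a vector DB are stand-ins for a real embedding model and a store like Pinecone.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in "embedding": a bag-of-words count vector. A real
    # collector would call an embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in "vector DB": documents stored alongside their vectors.
docs = [
    "llama.cpp runs LLM models locally on a MacBook",
    "GPT4All provides precompiled binaries for Windows and Linux",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str) -> str:
    # Return the stored document most similar to the query; a chat
    # app would feed this document to the LLM as context.
    qv = embed(query)
    return max(index, key=lambda pair: cosine(qv, pair[1]))[0]

print(retrieve("which package runs models locally?"))
```

A real pipeline swaps `embed` for a model-based embedding and `index` for a persistent vector store, but the retrieve-by-similarity step works the same way.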
GPT4All
This package also allows you to chat with your local documents, and it provides precompiled versions for:

- MacOS
- Linux
- Windows

Using the tool is the same as running traditional software on your computer: you download it and run it. This app is the easiest in terms of setting up and running.
Upon startup, you need to download LLM models from within the UI, since the app can run entirely on the local machine without access to OpenAI. This could be a huge plus for some. However, the models' accuracy may not be as good as OpenAI's. But it's free!
GPT4All seems to have a limitation on the length of embeddings (for your local documents).
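A common workaround for embedding-length limits, in GPT4All or any similar tool, is to split long documents into overlapping fixed-size chunks and embed each chunk separately. A minimal sketch (the sizes here are illustrative, not GPT4All's actual limits):

```python
def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    # Split text into fixed-size character chunks with a small overlap,
    # so a sentence cut at a boundary still appears intact in one chunk.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "word " * 300              # a ~1500-character stand-in document
pieces = chunk(doc)
print(len(pieces), max(len(p) for p in pieces))   # 4 pieces, each <= 500 chars
```

Each chunk is then embedded and indexed on its own, so retrieval returns the relevant chunk rather than the whole oversized document.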
You can download it from: https://gpt4all.io/index.html
PrivateGPT
The package can be downloaded from: https://github.com/imartinez/privateGPT
llama.cpp
This is mainly a C/C++ library that allows you to run LLM models locally on your PC or MacBook. Many of the previously listed packages actually use this package's Python bindings under the hood. So even though it's NOT a ready-to-use app, it's a very important package, and because of its significance in powering many of the open-source packages and enabling them to run LLM models locally, I do want to include it here.
I was able to build it (yes, you need to be a software engineer) and run it on an M1 MacBook at a decent speed: ~34 ms/token.
Assuming for simplicity that each token is a word, generating a 1000-word article with no embedding takes only 34 seconds on an M1 MacBook.
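The arithmetic behind that estimate is simple enough to put in a helper, using the ~34 ms/token figure measured above:

```python
MS_PER_TOKEN = 34  # measured on an M1 MacBook (figure from the text)

def generation_time_s(n_tokens: int, ms_per_token: float = MS_PER_TOKEN) -> float:
    # Total wall-clock time to generate n_tokens, in seconds.
    return n_tokens * ms_per_token / 1000

tokens_per_second = 1000 / MS_PER_TOKEN

print(f"{generation_time_s(1000):.0f} s for 1000 tokens")  # 34 s
print(f"~{tokens_per_second:.1f} tokens/s")                # ~29.4 tokens/s
```

Note that in practice a word often maps to more than one token, so real wall-clock time for a 1000-word article would be somewhat longer.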
Creata AI is building a framework (library) for macOS and iOS (yes) that lets developers build LLM/GPT apps. Stay tuned.
You can find out more info about llama.cpp from its github repo: https://github.com/ggerganov/llama.cpp
llama-lab
I was not able to successfully install and run this package; I got a runtime error when trying to launch the main Python script. You can find out more info from: https://github.com/run-llama/llama-lab
Conclusion
Based on my tests, I would recommend GPT4All. It's the one that really worked out of the box; all the other solutions are still works in progress. Even GPT4All has a limitation on the length of local documents, and it requires a more powerful computer with 8 GB of RAM or more. But it truly runs on your local machine, with no OpenAI access required. I assume the other solutions are improving daily. Check back with us for updates. Stay tuned! Happy LLMing!
Copyright © Creata AI LLC
Check out Creata AI's generative AI iOS and Android apps: [App Store](https://apps.apple.com/us/app/creata-ai-art-artist/id1659088194) | [Play Store](https://play.google.com/store/apps/details?id=com.creataai.creata&hl=en_US&gl=US)