Introduction
In the rapidly evolving world of artificial intelligence, large language models have become powerful tools for a wide range of tasks. However, many popular models like GPT-4 and Gemini come with limitations: they are not only restricted in what they are allowed to say, but also closed source, preventing developers from making necessary modifications. Fortunately, an open-source foundation model called Mistral 7B, combined with the brain of a dolphin ;) (the Dolphin fine-tune), offers a solution. In this blog post, we will explore how you can run uncensored large language models on your mobile phone, empowering you to tap into the potential of AI like never before.
Unleashing the Power of Dolphin Mistral 7B
Mistral 7B, an open-source foundation model developed by Mistral AI, has gained significant attention in the AI community. While it may not be at the level of GPT-4, it outperforms models like GPT-3.5 and Llama 2 on various benchmarks. Its most significant advantage is its truly open-source license, Apache 2.0, which allows users to modify and even monetize it with minimal restrictions.
Running Dolphin Mistral 7B on an iPhone
Install TestFlight app: Open the App Store and search for "TestFlight." Install the TestFlight app developed by Apple.
Visit the LLM Farm TestFlight page: Open https://testflight.apple.com/join/6SpPLIVM in your browser.
Install with TestFlight: On the LLM Farm website, click on the "Install with TestFlight" button. Accept the installation prompt and wait for the app to install. Once installed, open the app and click "Next" to proceed with the setup.
Configure settings: On the left-hand side of the app, you'll see a panel with various settings. We'll come back to this later.
Download the desired model: Open the Hugging Face website and navigate to https://huggingface.co/TheBloke/dolphin-2.0-mistral-7B-GGUF/tree/main, then click the model file to start the download. Note that you'll need at least 4 GB of free storage and 8 GB of RAM on your device for this model. I advise downloading the Q3 or Q4 quantized version, depending on the capabilities of your device.
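If you prefer to fetch the file from a computer first and transfer it to your phone, here is a minimal sketch using the huggingface_hub Python package. The filename shown is an assumption (a Q4 quantization); check the repo's file list for the exact name of the Q3 or Q4 file you want.

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Download one quantized GGUF file from TheBloke's Dolphin Mistral repo.
# The filename below is an assumption -- pick the actual Q3/Q4 file listed at
# https://huggingface.co/TheBloke/dolphin-2.0-mistral-7B-GGUF/tree/main
path = hf_hub_download(
    repo_id="TheBloke/dolphin-2.0-mistral-7B-GGUF",
    filename="dolphin-2.0-mistral-7b.Q4_K_M.gguf",
)
print(f"Model saved to: {path}")
```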
Add the model to LLM Farm: Go back to the LLM Farm app and open the dashboard by clicking on the left-hand side panel. Navigate to "Settings" > "Models" and click on "Add a Model." Select the model file you downloaded in the previous step and install it onto the LLM Farm app. Note that this step duplicates the file, so you can delete the original file from your device.
Start a chat: Click on "Chats" in the LLM Farm app, then click the "+" button to start a new chat. Select the "ml-7B" prompt format. Replace the section in double curly brackets with the prompt details provided in the tutorial (you can copy and paste it); a rough sketch of that prompt format is shown below. Leave the other settings as they are, but enable "Metal" and "Mlock" under prediction options. Finally, click "Add" at the top to create the chat. You can also select the Llama 7B iPhone chat template, which works reasonably well with the model we want to use.
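For reference, Dolphin fine-tunes are typically trained on a ChatML-style prompt format. The sketch below shows how the placeholders get filled in; the system and user text are illustrative, and the exact template LLM Farm expects may differ, so follow the tutorial's prompt details.

```python
# A ChatML-style template commonly used by Dolphin fine-tunes.
# The system/user messages here are illustrative, not the app's defaults.
system_message = "You are Dolphin, a helpful AI assistant."
user_message = "Write a haiku about mountains."

prompt = (
    "<|im_start|>system\n"
    f"{system_message}<|im_end|>\n"
    "<|im_start|>user\n"
    f"{user_message}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(prompt)
```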
Run the model offline: At this point, you can turn off Wi-Fi and put your device in airplane mode to disconnect from the internet. Return to the chat interface and type a message to interact with the model. Initially, you may encounter a "failed to load model" error. If this happens, close the app and reopen it. Once the app is reopened, return to the chat and it should work properly. The initial load may be slower, but subsequent interactions should be faster.
Interact with the model: You can now use the chat interface to interact with the LLM running locally on your device. For example, you can ask it to create a strategic plan for a nonprofit organization, and the model will respond based on the prompt you provide. Please note that this works only on iPhone 11 or newer.
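If you want to sanity-check the same GGUF file on a laptop before (or instead of) loading it on your phone, a minimal sketch with the llama-cpp-python package looks like this. The model path, context size, and generation settings are assumptions; adjust them for your setup.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Point this at the GGUF file you downloaded earlier (path is an assumption).
llm = Llama(model_path="dolphin-2.0-mistral-7b.Q4_K_M.gguf", n_ctx=2048)

# Same ChatML-style prompt sketched above.
prompt = (
    "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nDraft a three-point strategic plan for a small nonprofit.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```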
Running Dolphin Mistral 7B on Android
For Android users, you can try MLC LLM, a universal solution that allows any language model to be deployed natively on a diverse set of hardware backends and native applications. Sorry, I couldn't demo it, as I don't have an Android device powerful enough to run this.
Fine-tuning Models with Your Own Data
If you want to take your AI experience a step further, you can fine-tune models with your own data. The process may seem complex, but tools like Hugging Face AutoTrain make it easily achievable. By creating a new Space on Hugging Face and selecting the Docker image for AutoTrain, you get a user-friendly UI. AutoTrain can handle not only large language models but also image models like Stable Diffusion. Choosing a base model from a well-known model trainer is recommended. While running AutoTrain locally requires substantial GPU power, cloud services like AWS Bedrock and Google Vertex AI can be leveraged to overcome this limitation.
Uploading Training Data and Creating Custom Models
The final step is to upload your training data in a formatted prompt-response structure; a rough sketch of what that can look like follows below. To ensure uncensored behavior, you may choose to incorporate esoteric content from banned books or the dark web. Once the training data is uploaded, clicking "Start training" kicks off the process. After a few days, you will have your own custom and highly obedient model, ready to be used on your mobile phone.
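As a rough illustration of a prompt-response structure, here is a sketch that writes a small CSV with a single text column. The column name "text", the ChatML wrapping, and the sample pairs are all assumptions; check AutoTrain's current documentation for the exact schema it expects.

```python
import csv

# Hypothetical prompt/response pairs -- replace with your own data.
pairs = [
    ("Summarise our Q3 fundraising results.",
     "Donations rose 12% quarter over quarter, driven by recurring donors..."),
    ("Draft a volunteer onboarding checklist.",
     "1. Sign the volunteer agreement. 2. Complete orientation. 3. Shadow a lead..."),
]

# AutoTrain's LLM fine-tuning typically consumes a single text column;
# the column name "text" and the ChatML formatting are assumptions here.
with open("train.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text"])
    for prompt, response in pairs:
        writer.writerow([
            f"<|im_start|>user\n{prompt}<|im_end|>\n"
            f"<|im_start|>assistant\n{response}<|im_end|>"
        ])
```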
Conclusion
With the availability of open-source models like Mistral 7B and tools like LLM Farm and Hugging Face Auto Train, running uncensored large language models on your mobile phone has become a reality. By leveraging the power of AI on your device, you can explore new possibilities and break free from the limitations imposed by closed-source models. Embrace the potential of uncensored AI and become a beacon of hope in the fight for freedom of expression.
Remember, the future is in your hands, and with the right tools, your mobile phone can become a gateway to a world of uncensored AI possibilities.