• 0 Posts
  • 7 Comments
Joined 2 months ago
Cake day: February 19th, 2025

  • The smallest models that I run on my PC take about 6-8 GB of VRAM and would be very slow if I ran them purely on my CPU. So it is unlikely that your phone has enough RAM and enough cores to run a decent LLM smoothly.

    If you still want to use self-hosted AI with your phone, self-host the model on your PC:

    • Install Ollama and Open WebUI in Docker containers (guides can be found on the internet; a minimal docker-compose sketch follows this list)
    • Make sure they use your GPU (some AMD cards need the HSA_OVERRIDE_GFX_VERSION environment variable set to work)
    • Make sure the Docker containers are secure (blocking the port for communication from outside your network should work fine as long as you only use the AI model at home)
    • Get yourself an open-weight model (I recommend Llama 3.1 for 8 GB of VRAM and Phi-4 if you have more VRAM or enough RAM)
    • Type the IP address and port into the browser on your phone.
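
    For reference, here is a minimal docker-compose sketch of such a stack. It assumes an AMD GPU with ROCm; the image tags, the HSA override value, and the 3000:8080 port mapping are just common examples and may need adjusting for your hardware:

    ```yaml
    # Sketch only: an Ollama + Open WebUI stack for an AMD GPU (ROCm build).
    services:
      ollama:
        image: ollama/ollama:rocm           # use ollama/ollama for NVIDIA/CPU instead
        devices:
          - /dev/kfd                        # AMD compute device
          - /dev/dri                        # AMD render device
        environment:
          # Example HSA override for cards ROCm doesn't officially support;
          # the right value depends on your GPU generation.
          - HSA_OVERRIDE_GFX_VERSION=10.3.0
        volumes:
          - ollama:/root/.ollama            # keep downloaded models between restarts
        # no "ports:" entry, so the Ollama API is only reachable by other containers

      open-webui:
        image: ghcr.io/open-webui/open-webui:main
        environment:
          - OLLAMA_BASE_URL=http://ollama:11434
        ports:
          - "3000:8080"                     # web UI on port 3000; don't forward this on your router
        volumes:
          - open-webui:/app/backend/data
        depends_on:
          - ollama

    volumes:
      ollama:
      open-webui:
    ```

    Once the containers are running, you can pull a model either from Open WebUI's model settings or from the command line, for example `docker compose exec ollama ollama pull llama3.1:8b` (the default quantized 8B build fits in 8 GB of VRAM). Then open http://<your PC's LAN IP>:3000 in your phone's browser, e.g. http://192.168.1.50:3000, and pick the model in Open WebUI.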

    You can now use self-hosted AI with your phone whenever it can reach your PC over the network.