Time: 12 minute read

Created: February 7, 2025

Author: Lina Lam

Top Open WebUI Alternatives for Running LLMs Locally

Open WebUI is a web interface for running local large language models (LLMs), providing an easy way to interact with AI models. It's a popular choice for privacy-conscious users who want a lightweight, self-hosted solution without incurring cloud costs or sending data to external services.

Open WebUI Alternatives

While Open WebUI is a great option for some users, others may need more customization or better integrations. We have curated a list of top Open WebUI alternatives and instructions for quickly setting them up.

Let's get into it!

Start monitoring your LLM app today ⚡️

Get powerful insights and take control of your LLM apps with Helicone's enterprise-grade monitoring platform. Integrate in seconds.

Overview of Open WebUI Alternatives

  1. HuggingChat
  2. AnythingLLM
  3. LibreChat
  4. Lobe Chat
  5. Chatbot UI
  6. Text Generation WebUI
  7. Msty
  8. Hollama
  9. Chatbox
  10. Ollama UI

Prerequisites

  • Docker installed and running on your machine.
  • Node.js and npm or yarn installed on your machine.
  • Access to an LLM running locally (see the sketch below for one way to set this up).
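
If you don't yet have a local model running, Ollama is one common way to cover that last prerequisite. A minimal sketch, assuming Ollama is already installed (the model name is just an example):

ollama serve &         # start the Ollama API at http://localhost:11434 (if not already running)
ollama pull llama3.2   # download a small model (example name)

Several of the tools below (Lobe Chat, Hollama, Chatbox, Ollama UI) expect an Ollama server on this default port.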

1. HuggingChat

HuggingChat is an open-source chat interface developed by Hugging Face. It provides seamless access to a variety of language models and integrates well with Hugging Face's ecosystem.

If you love tinkering with models on Hugging Face, you might be interested in this one due to its native integrations with Hugging Face’s Model Hub and API endpoints.

Strengths:

  • Built into Hugging Face's ecosystem, making integration seamless
  • Supports a variety of models from Hugging Face's Model Hub
  • Web-based with an easy-to-use interface

Limitations:

  • Fairly involved local setup process.
  • Limited customization options compared to some alternatives.

Getting Started with HuggingChat

HuggingChat can be quickly deployed using Docker.

First, obtain your Hugging Face token.

docker run -p 3000:3000 -e HF_TOKEN=hf_*** -v db:/data ghcr.io/huggingface/chat-ui-db:latest   # UI at http://localhost:3000

HuggingChat supports endpoints for OpenAI API-compatible local services as well as third-party providers like Anthropic, Cloudflare, and Google Vertex AI.
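
For example, a local OpenAI API-compatible server can be wired up through the MODELS variable in chat-ui's .env.local. A minimal sketch, where the model name and base URL are placeholders for your own setup:

MODELS=`[
  {
    "name": "local-model",
    "endpoints": [{ "type": "openai", "baseURL": "http://localhost:11434/v1" }]
  }
]`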

For the most up-to-date setup instructions, please visit HuggingChat's official documentation.


2. AnythingLLM

AnythingLLM is an open-source framework designed for users who want to do more than just chat with an LLM locally. It provides local AI agents that can interact with files, databases, and web data—all while running privately on your machine.

Unlike simple chat interfaces, AnythingLLM brings agentic workflows to local models, enabling them to process structured data, search documents, generate reports, and even automate tasks.

Strengths:

  • Supports the use of local AI agents
  • Comes with prebuilt Agent Skills ready for use
  • Well-optimized for consumer GPUs
  • Fully self-hosted and configurable
  • Supports various LLMs and vector databases
  • Includes built-in observability and logging for tracking AI interactions
  • Features multi-user support and role-based access control for team use

Limitations:

  • Chat/query modes can be overly restrictive and sometimes ineffective
  • Supports various file formats (e.g., PDFs, DOCX, spreadsheets), but data retrieval isn't always accurate
  • Some design choices prioritize aesthetics over functionality

Getting Started with AnythingLLM

We recommend using the Dockerized version for a faster setup.

  1. Pull the latest image:
docker pull mintplexlabs/anythingllm
  2. Run with Docker:

For Mac/Linux:

export STORAGE_LOCATION=$HOME/anythingllm && \
mkdir -p $STORAGE_LOCATION && \
touch "$STORAGE_LOCATION/.env" && \
docker run -d -p 3001:3001 \
--cap-add SYS_ADMIN \
-v ${STORAGE_LOCATION}:/app/server/storage \
-v ${STORAGE_LOCATION}/.env:/app/server/.env \
-e STORAGE_DIR="/app/server/storage" \
mintplexlabs/anythingllm

For Windows (PowerShell):

$env:STORAGE_LOCATION="$HOME\Documents\anythingllm"; `
If(!(Test-Path $env:STORAGE_LOCATION)) {New-Item $env:STORAGE_LOCATION -ItemType Directory}; `
If(!(Test-Path "$env:STORAGE_LOCATION\.env")) {New-Item "$env:STORAGE_LOCATION\.env" -ItemType File}; `
docker run -d -p 3001:3001 `
--cap-add SYS_ADMIN `
-v "$env:STORAGE_LOCATION`:/app/server/storage" `
-v "$env:STORAGE_LOCATION\.env:/app/server/.env" `
-e STORAGE_DIR="/app/server/storage" `
mintplexlabs/anythingllm;
  3. Go to http://localhost:3001 to access the interface.

For the most up-to-date setup instructions, please visit AnythingLLM's official documentation.


3. LibreChat

LibreChat is an open-source AI chatbot platform that integrates multiple LLMs, plugins, and AI tools into a single, free interface. It mimics ChatGPT's design and functionality while adding powerful features such as multimodal support, plugin integration, and conversation customization.

It also offers Code Artifacts, an experimental feature that renders React code, raw HTML, and Mermaid diagrams directly in the browser.

Strengths:

  • Supports various models (OpenAI, Anthropic, AWS Bedrock, and custom endpoints).
  • Offers AI Agents, multimodal support, code execution, custom presets, and more.
  • Allows the rendering of React code and HTML in the browser.
  • Offers multilingual UI with extensive customization options.
  • Seamless file and conversation management with export/import capabilities.

Limitations:

  • Can be resource-intensive when running multiple models simultaneously.
  • Advanced features may have a learning curve for new users.

Getting Started with LibreChat

  1. Download the Project

    • Manual: Visit the GitHub Repo, download the ZIP, and extract it.
    • Using Git: Run:
      git clone https://github.com/danny-avila/LibreChat.git
      
  2. Run the App

    • Navigate to the project directory.
    • Create a .env file by copying .env.example and configuring values as needed.
    • Start the app with:
      docker compose up -d
      

LibreChat should now be running locally, typically at http://localhost:3080. See detailed documentation here.


4. Lobe Chat

Lobe Chat is a lightweight and extensible UI framework designed for interacting with various LLMs, both locally and remotely. It prioritizes user experience with a modern design and offers progressive web app (PWA) support.

It integrates multiple AI models and supports features like image generation, text-to-speech, speech-to-text, as well as plugins to extend its functionality.

Strengths:

  • Built-in TTS and STT voice conversation capabilities.
  • Comes with prebuilt AI agents and allows users to download agents.
  • Supports text-to-image generation using DALL-E 3 and other tools.
  • Plugin system for extended functionality and integration with external services.
  • Clean, intuitive and mobile-friendly UI, with custom themes.

Limitations:

  • Smaller community compared to larger open-source projects.
  • Some advanced features require additional configuration.

Getting Started with Lobe Chat

Run Lobe Chat with Docker:

docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://host.docker.internal:11434 lobehub/lobe-chat

Assuming you have already started an Ollama service locally on port 11434, the above command will run Lobe Chat and connect to your local service.
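
If Lobe Chat cannot reach Ollama, the request may be getting blocked by Ollama's cross-origin policy. One common fix (an assumption about the cause; see Ollama's documentation for your platform) is to restart Ollama with permissive origins:

OLLAMA_ORIGINS="*" ollama serve   # allow cross-origin requests to the Ollama API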

For the most up-to-date setup instructions, please visit Lobe Chat's official documentation.


5. Chatbot UI

Chatbot UI 2 is an open-source, self-hosted chatbot interface that allows users to run local or cloud-based AI models with a database-backed storage system.

It provides a sleek and intuitive user experience, making it accessible for both casual users and developers looking to integrate AI into their workflows.

Strengths:

  • User-friendly and modern UI across desktop and mobile.
  • Database-backed, which prevents API key leaks and supports larger storage.
  • Handles text, images, files, audio, and video.
  • Easy deployment via Docker, Supabase, and Vercel.
  • Built-in authentication and storage support using Supabase.
  • Offers flexibility with environment variables for custom configurations.
  • Active development and community support.

Limitations:

  • Requires additional setup for local LLMs like Ollama.
  • Depends on third-party services like Supabase.
  • Customization may require minor code modifications.

Getting Started with Chatbot UI

To quickly deploy Chatbot UI, follow these steps:

1. Clone the Repository & Install Dependencies

git clone https://github.com/mckaywrigley/chatbot-ui.git
cd chatbot-ui
npm install

2. Install Supabase & Start Locally

Ensure Docker is installed, then run the following to install the Supabase CLI:

  • macOS/Linux: brew install supabase/tap/supabase
  • Windows:
    scoop bucket add supabase https://github.com/supabase/scoop-bucket.git
    scoop install supabase
    

Start Supabase and set environment variables

In your terminal at the root of your local Chatbot UI repository, run:

supabase start
cp .env.local.example .env.local
supabase status

This creates a .env.local file in your project directory. Populate it with your running Supabase server details.
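
The values come from the supabase status output you just ran. A sketch of the relevant entries, assuming the standard keys from .env.local.example (copy the real values from your own output):

NEXT_PUBLIC_SUPABASE_URL=http://localhost:54321   # "API URL" from supabase status
NEXT_PUBLIC_SUPABASE_ANON_KEY=<anon key>          # "anon key" from supabase status
SUPABASE_SERVICE_ROLE_KEY=<service_role key>      # "service_role key" from supabase status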

3. Run Chatbot UI

npm run chat

Access the UI at http://localhost:3000.
Admin panel: http://localhost:54323/project/default/editor

For more detailed instructions, visit the official GitHub repository.


6. Text Generation WebUI

Text Generation WebUI is a versatile web-based interface for running large language models (LLMs). It supports multiple backends, including Transformers, llama.cpp, ExLlamaV2, and TensorRT-LLM.

Inspired by AUTOMATIC1111's Stable Diffusion Web UI, it aims to provide an easy-to-use and feature-rich experience for text generation.

Strengths:

  • Supports a wide range of local models.
  • Offers extensive model customization.
  • GPU acceleration for high performance, though it runs on CPU too.
  • Supports multimodal functionality.

Limitations:

  • Requires a capable local machine.
  • Setup can be complex for beginners.

Getting Started with Text Generation WebUI

1. Clone & Install

git clone https://github.com/oobabooga/text-generation-webui.git
cd text-generation-webui

Run the installation script for your OS:

  • Linux/macOS: ./start_linux.sh or ./start_macos.sh
  • Windows: start_windows.bat
  • WSL: start_wsl.bat

Select your GPU vendor when prompted.

2. Start the Web UI

Once installation is complete, start the UI by running:

./start_linux.sh  # (or the script for your OS)

Access the interface at http://localhost:7860.
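
If you want to reach the UI from another machine or script against it, the start scripts pass flags through to the server. A sketch with two commonly used flags (availability can vary by version; run the script with --help to confirm):

./start_linux.sh --listen --api   # expose the UI on the network and enable the API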

3. Download Models

Place models in the text-generation-webui/models folder or download them via the UI. You can also fetch models from Hugging Face using:

python download-model.py organization/model

For more detailed instructions, visit the official GitHub repository.


7. Msty

Msty is an intuitive, offline-first AI assistant designed to simplify local and online AI model interactions. Unlike other AI tools that require complex setups, Docker, or command-line expertise, Msty provides a streamlined, one-click installation with a user-friendly interface.

It supports a wide range of models and offers advanced features such as "knowledge stacks," split chats, chat branching, and Deep Research Mode.

Strengths:

  • Does not require Docker, command-line tools, or complex configurations.
  • Knowledge Stacks (RAG) allow users to train AI with custom datasets (PDFs, docs, YouTube links).
  • Split Chat & Model Comparisons let users run multiple AI models side-by-side.
  • Dwell (Deep Research Mode) lets users select text and ask follow-up questions instantly.
  • Offers Chat Branching to let users explore different response paths without starting a new chat.
  • Use models from Hugging Face, Ollama, OpenRouter, and more.

Limitations:

  • Limited Customization: While setup is easy, certain customizations may be impossible or require significant additional effort.

Getting Started with Msty

  • Download Msty from the official site.
  • Run the installer and follow the setup instructions. No terminal or Docker required.
  • Start chatting with local models or enter an API key to chat with online models.

For more detailed instructions, visit the official documentation.


8. Hollama

Hollama is a minimal yet powerful web UI designed for interacting with Ollama servers. It provides a clean, user-friendly interface that supports both local and remote AI models, including OpenAI and reasoning models.

With a focus on simplicity, customization, and performance, Hollama is an excellent choice for those looking to run AI chat models with minimal setup.

Strengths:

  • Designed for speed and efficiency.
  • Supports Ollama & OpenAI Models.
  • Can connect to multiple Ollama servers.
  • Set custom system prompts.
  • Create multiple chat sessions, edit messages, and adjust responses.
  • Modify temperature, context size, and system prompts directly from the UI to tune models to your taste.
  • Available in English, Spanish, Japanese, and Turkish.

Limitations:

  • Cannot function standalone without an Ollama instance.
  • Lacks advanced features like web search, RAG, or plugins.

Getting Started with Hollama

1. Download & Install Locally

  • Download the Hollama installer for macOS, Windows, or Linux from the GitHub page.

2. Self-Host with Docker (Recommended for Servers)

To deploy your own Hollama server, run:

docker run --rm -d -p 4173:4173 --name hollama ghcr.io/fmaclen/hollama:latest

Then open http://localhost:4173 in your browser, where you can connect to your Ollama server. You can even download Ollama via the UI.
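
Because Hollama talks to Ollama from your browser, Ollama may reject the requests as cross-origin. If the connection fails, one thing to try (an assumption about the cause; see Ollama's CORS documentation) is restarting Ollama with the Hollama origin allowed:

OLLAMA_ORIGINS="http://localhost:4173" ollama serve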

For more detailed instructions, visit the official GitHub repository.


9. Chatbox

Chatbox is a powerful and user-friendly AI chat client that supports multiple language models online and offline, including ChatGPT, Claude, Gemini Pro, Ollama, and DeepSeek.

It is available as a desktop application for Windows, macOS, and Linux, as well as mobile apps for iOS and Android.

Strengths:

  • Works with OpenAI, Azure, Claude, Gemini, and local models such as DeepSeek via Ollama.
  • Ensures your data remains private and secure on your device.
  • Runs on Windows, macOS, Linux, iOS, and Android.
  • Simple installation with no technical setup required.
  • Allows for model tuning within UI.
  • Comes with pre-built Prompt Templates.

Limitations:

  • While free, the Official Edition is not open-source.
  • Compared to some self-hosted LLM solutions, out-of-the-box customization is limited.

Getting Started with Chatbox

1. Download Chatbox Installer

  • Windows: Installer
  • macOS (Intel): Installer
  • macOS (M1/M2): Installer
  • Linux: AppImage
  • Android: Installer
  • iOS: Installer

2. Build from Source (Community Edition)

To run the open-source Community Edition locally, follow these steps:

git clone https://github.com/Bin-Huang/chatbox.git # clone the repository
cd chatbox
npm install # install dependencies
npm start # run the application

For more detailed instructions, visit the official GitHub repository.


10. Ollama UI

Ollama UI is a powerful, open-source frontend that provides a ChatGPT-like experience for local AI models. It works seamlessly with Ollama, allowing users to run LLMs on their own machines with zero cloud dependencies.

It offers a minimal and intuitive design for users who prefer a straightforward, no-frills experience.

Strengths:

  • No unnecessary bloat—just a clean UI for seamless interactions.
  • Can be set up with a few simple commands—no complex configurations.
  • Available as a Chrome extension.
  • Can load several AI models simultaneously.
  • Allows users to upload PDFs or text files for AI-powered search.

Limitations:

  • Focused mainly on basic chat interactions; lacks some advanced customization options.
  • Designed primarily for Ollama usage.

Getting Started with Ollama UI

Follow these steps to set up Ollama UI on your local machine:

git clone https://github.com/ollama-ui/ollama-ui
cd ollama-ui
make

open http://localhost:8000  # Open in your browser

You can also use the Chrome Extension for a hassle-free experience.

Once installed, you can start chatting with your Ollama models right from your browser.

For more detailed instructions, visit the official GitHub repository.

Get powerful insights on your LLM app today ⚡️

Helicone is the leading open-source monitoring platform for LLM apps. Get started in seconds.

Bottom Line

While Open WebUI is a solid choice for running local LLMs, there are plenty of alternatives that offer different strengths. Whether you need an easy-to-configure tool like Chatbox, a full-featured UI like Text Generation WebUI, or a Hugging Face-integrated solution like HuggingChat, there's an option that will fit your needs.

We recommend exploring these alternatives to find the best match for your workflow and computing environment.


Questions or feedback?

Is the information out of date? Please raise an issue or contact us; we'd love to hear from you!