
Building Low‑Cost AI Clusters with Old Phones Using Exo and Open WebUI

This article introduces Exo, an open‑source platform that lets you turn idle smartphones, tablets, and laptops into a distributed AI cluster capable of running large language models, and shows how Open WebUI provides a user‑friendly interface for deploying private AI assistants.


Ever wondered how to repurpose idle iPhones, Android phones, iPads, or laptops into a powerful AI cluster that can run large language models like DeepSeek, LLaMA, or Mistral? The open‑source project Exo makes this possible without expensive NVIDIA GPUs, using everyday devices to build a distributed AI compute network.

What is Exo?

Exo, developed by the exo labs team, integrates heterogeneous devices (phones, tablets, computers) into a distributed AI cluster through dynamic model partitioning and automatic device discovery, enabling execution of models that exceed the capacity of any single device.

Key characteristics:

Low cost – no specialized GPU required.

Dynamic partitioning – automatically allocates model layers across devices based on memory and network topology.

Decentralized P2P architecture – eliminates single‑point‑of‑failure in traditional master‑slave setups.

Ease of use – provides a ChatGPT‑compatible API and WebUI for simple interaction.
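As a minimal sketch of how a cluster comes together (commands follow the exo project README; your install path may differ): you install exo from source on each device, then run the same command everywhere on the LAN and let discovery do the rest.

```shell
# Install exo from source on each device (per the project README).
git clone https://github.com/exo-explore/exo
cd exo
pip install -e .

# Run the same command on every device on the LAN. Nodes discover each
# other automatically and split model layers between them based on
# available memory -- no manual master/worker configuration.
exo
```

No device is special here: each node runs the identical command, which is what makes the P2P architecture free of a single point of failure.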

Five highlighted features:

Broad model support – LLaMA, Mistral, LLaVA, Qwen, DeepSeek, etc. Example: exo run llama-3.2-3b launches a model on a single device; larger models can be spread across a multi-device cluster.

Automatic device discovery and dynamic partitioning – run the exo command on each device; the system discovers peers and distributes model layers using a ring‑memory weighted strategy.

Developer‑friendly – a ChatGPT‑compatible API at http://localhost:52415 for curl requests, environment‑variable debugging (e.g., DEBUG=9 exo), and log analysis.

Cross‑platform compatibility – supports iOS, Android, macOS, Linux, and can connect via Bluetooth or Wi‑Fi (iOS version currently limited due to Python compatibility).

Open source – code available at https://github.com/exo-explore/exo .
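Because the API is ChatGPT-compatible, any OpenAI-style client can talk to the cluster. The sketch below builds such a request against the default port 52415 from the article; the actual POST is commented out since it assumes a cluster is already running, and the model name simply reuses the article's llama-3.2-3b example.

```python
import json
from urllib import request

# OpenAI-style chat payload; "llama-3.2-3b" matches the article's example model.
payload = {
    "model": "llama-3.2-3b",
    "messages": [{"role": "user", "content": "Summarize what exo does."}],
    "temperature": 0.7,
}

# exo serves its ChatGPT-compatible API on port 52415 by default.
req = request.Request(
    "http://localhost:52415/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With a cluster running, uncomment to send the request and print the reply:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(req.full_url)
```

The same request works with curl by POSTing the JSON body to the endpoint above, which is what makes existing ChatGPT tooling reusable against the cluster.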

Open WebUI offers a user‑friendly interface for interacting with locally deployed large language models, supporting various runtimes such as Ollama and OpenAI‑compatible APIs. With 66.6k stars on GitHub, the project makes it straightforward to build a private AI assistant.
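A common way to deploy Open WebUI is its official Docker image; the command below follows the project's README, with the OPENAI_API_BASE_URL variable pointed (as an illustration) at an exo node's ChatGPT-compatible endpoint so the two projects can be combined.

```shell
# Run Open WebUI (per its README), persisting data in a named volume.
# OPENAI_API_BASE_URL here targets exo's API on the host machine;
# host.docker.internal works on Docker Desktop -- on Linux, use the
# host's LAN IP instead.
docker run -d -p 3000:8080 \
  -e OPENAI_API_BASE_URL=http://host.docker.internal:52415/v1 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

# Then open http://localhost:3000 in a browser.
```

Everything stays on your own hardware: the UI runs in the container, the model runs on the exo cluster, and no prompts leave the local network.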

Project advantages:

Local deployment keeps data under your control, addressing privacy concerns.

Flexibility and customizability – choose any supported LLM, customize UI and functionality.

Ease of use – simple interface suitable for non‑technical users.

Application scenarios:

Private AI assistants for text generation, Q&A, translation, etc.

Knowledge bases and artifacts – use LLMs for search and query.

AI‑enhanced search integration.

Real‑time custom voice chat applications.

Open WebUI source code is hosted at https://github.com/open-webui/open-webui .

Tags: Distributed Inference, large language models, Open-WebUI, AI clustering, Exo, low-cost AI
Written by

IT Services Circle

Delivering cutting-edge internet insights and practical learning resources. We're a passionate and principled IT media platform.
