● SYSTEM ONLINE

CHE AI

A local-first AI operating system built for real hardware, persistent memory, and distributed intelligence.

Runs on: Raspberry Pi · Linux · Windows · Jetson · Android

CHE AI is designed to run on your own machines instead of living entirely in the cloud. It is built for builders working on robotics, edge devices, offline systems, and long-horizon projects where memory and system continuity matter. The goal is simple: real AI infrastructure that can think, remember, connect, and operate across actual hardware.

$ che init --network distributed
> Initializing CHE AI OS...
> Loading modules: memory, symbolic, intent
> Connecting to local LLM...
> Detecting hardware profile...
> Network mesh: ONLINE

CHE Interface Node

Node online. The rest are thinking about it.

Live Status

Node Interactions: 127
Mesh state: ONLINE
Inference mode: LOCAL
Hardware bridge: READY

Supported Deployments

From Raspberry Pi and Jetson nodes to Windows workstations and Android relay devices, CHE AI is structured to move across different hardware classes while maintaining one coherent intelligence stack.

Development Track

Documentation, white papers, and GitHub repositories are linked directly from this site so visitors can inspect the public framework, track progress, and understand the architecture behind the system.

Built for Multiple Hardware Platforms

CHE AI is designed to operate across different classes of hardware instead of being locked to one environment. It can be deployed on Raspberry Pi systems for lightweight edge nodes, Linux workstations for development and orchestration, Windows machines for local desktop control, Jetson platforms for AI and robotics workloads, and mobile devices for portable access and relay functions. This allows CHE to scale from a single local machine to a distributed multi-device network.

Raspberry Pi

Lightweight node deployment, edge control, offline assistants, and low-power distributed systems.

Linux

Primary environment for local AI deployment, development workflows, orchestration, and system control.

Windows

Desktop runtime access, local model hosting, control interfaces, and cross-device coordination.

Jetson

High-performance edge AI for robotics, sensors, acceleration workloads, and advanced hardware integration.

Android / Mobile

Portable access layer for monitoring, interaction, relay nodes, and mobile control surfaces.

Distributed Network

Multiple devices can work together as a coordinated system rather than a single isolated install.

Why CHE is Different

Most AI products are cloud-bound tools. CHE is a system. It is designed to run locally, persist state, and coordinate across real hardware. That changes what you can build and how it behaves over time.

Local-First by Design

No dependency on external APIs to function. Models run on your machines with your data and your control.

Persistent Memory

State carries across sessions and devices. CHE is built to remember, not just respond.
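CHE's actual memory module is not documented on this page. Purely as an illustration of what session-persistent state means in practice, here is a minimal sketch using a SQLite-backed key-value store; the class and method names are hypothetical and not part of CHE's published API:

```python
import sqlite3

class NodeMemory:
    """Minimal persistent key-value memory. Illustrative only;
    CHE's real memory module may be structured very differently."""

    def __init__(self, path="che_memory.db"):
        # A file-backed database means state survives process restarts.
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memory (key TEXT PRIMARY KEY, value TEXT)"
        )

    def remember(self, key, value):
        # INSERT OR REPLACE overwrites any previous value for the key.
        self.db.execute(
            "INSERT OR REPLACE INTO memory (key, value) VALUES (?, ?)",
            (key, value),
        )
        self.db.commit()

    def recall(self, key):
        row = self.db.execute(
            "SELECT value FROM memory WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else None

# ":memory:" keeps the demo self-contained; a file path gives real persistence.
mem = NodeMemory(":memory:")
mem.remember("last_task", "calibrate arm")
print(mem.recall("last_task"))  # prints: calibrate arm
```

Because the state lives on disk rather than in the model's context window, it can outlast any single session and be shared between processes.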

System, Not App

Designed as a runtime layer that connects tools, nodes, and hardware rather than a single interface.

Hardware Native

Works with Raspberry Pi, Jetson, desktops, and mobile as part of one coordinated environment.

Distributed from Day One

Multiple nodes can operate together. Scale is horizontal across devices, not locked to one box.
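The CHE mesh protocol itself is not specified here. As a rough sketch of what horizontal, multi-node coordination involves, consider a toy heartbeat registry where nodes check in and stale nodes drop out of the online set; all names are hypothetical and do not reflect CHE's real implementation:

```python
import time

class MeshRegistry:
    """Toy node registry: nodes heartbeat in, stale nodes drop out.
    Illustrative only; not CHE's actual mesh protocol."""

    def __init__(self, timeout=10.0):
        self.timeout = timeout
        self.nodes = {}  # node_id -> timestamp of last heartbeat

    def heartbeat(self, node_id, now=None):
        # Record the node as alive at the given (or current) time.
        self.nodes[node_id] = time.time() if now is None else now

    def online(self, now=None):
        # A node counts as online if its last heartbeat is recent enough.
        now = time.time() if now is None else now
        return sorted(
            nid for nid, seen in self.nodes.items()
            if now - seen <= self.timeout
        )

mesh = MeshRegistry(timeout=10.0)
mesh.heartbeat("pi-node", now=0.0)
mesh.heartbeat("jetson-node", now=5.0)
print(mesh.online(now=12.0))  # prints: ['jetson-node']  (pi-node went stale)
```

The point of the sketch is the scaling model: adding capacity means adding another device that heartbeats into the mesh, not upgrading a single box.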

Built in Public

White papers, documentation, and code are accessible. Inspect, learn, and build alongside it.

What CHE AI Is

Local-First

Built to run on your hardware with direct control over models, data, and deployment.

Memory-Driven

Designed for persistent state, contextual continuity, and longer-term reasoning across sessions.

System-Level

More than chat. CHE is aimed at nodes, tools, robotics, automation, and real operating environments.

Built in Public

Active development, documentation, white papers, and code are all part of the public-facing ecosystem.

Capabilities

Local AI

Runs fully offline on your hardware.

Persistent Memory

Maintains long-term state across sessions.

Hardware Native

Integrates with robotics and embedded systems.

Distributed Nodes

Scales across multiple machines.

White Papers

DCLP v1.0

Designed Cognitive Learning Process framework.

Read white paper →

CHE Runtime

Architecture and symbolic runtime overview.

View source references →

Local AI Systems

Design principles for edge intelligence.

Documentation

Getting Started

Install and run CHE locally.

Open setup resources →

Node Networking

Connect multiple systems together.

Modules

Memory, symbolic, intent systems.

API

Command and integration reference.