v0 · Open source · In active development

A personal AI assistant that runs on your own hardware.

Juno is a local-first, voice-driven assistant built to understand context, take action on your behalf, and keep every byte of your data on devices you own.

License
MIT
Runtime
TypeScript / Node
Models
Local-first
Hardware
Edge AI · Mac
Overview

Most assistants are reactive — you open an app, ask a question, get a reply. Juno is designed to be resident: running in the background, gathering context from the signals you already generate, and ready to respond the moment you need it.

01

Local by default

Inference, storage, and state stay on your hardware. Cloud models are an opt-in escape hatch, never a requirement.

02

Voice-driven

Wake word detection, local speech-to-text, and local TTS — with no button press and no latency round-trip to a remote API.

03

Context-aware

A persistent background layer reads your inbox, calendar, and feeds so every interaction begins with relevant state.

04

Agentic, not conversational

Juno completes tasks — drafting email, browsing the web, controlling your Mac — rather than returning text for you to copy and paste.

Architecture

Three layers, clearly separated.

Each layer has a well-defined responsibility and hands structured state to the next. The result is a system that's straightforward to reason about and extend.

Layer 01

Interactive

Voice in, intent out.

Listens for the wake word, transcribes speech locally, classifies intent, and either answers directly or dispatches structured tasks to the agentic layer.

openWakeWord · Whisper · Kokoro / Piper
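The flow above — transcript in, either a direct answer or a structured task out — can be sketched as a discriminated union. All names here (`Intent`, `classifyIntent`) are illustrative, not Juno's actual API; the keyword matcher stands in for the local intent model.

```typescript
// Hedged sketch: how the interactive layer might turn a transcript into
// either a direct answer or a structured task for the agentic layer.

type Intent =
  | { kind: "answer"; question: string }            // answered in place
  | { kind: "task"; skill: string; args: string };  // dispatched downstream

// Toy keyword classifier standing in for the local intent model.
function classifyIntent(transcript: string): Intent {
  const t = transcript.toLowerCase().trim();
  if (t.startsWith("open ") || t.startsWith("draft ")) {
    const [verb, ...rest] = t.split(" ");
    return { kind: "task", skill: verb, args: rest.join(" ") };
  }
  return { kind: "answer", question: transcript };
}

const intent = classifyIntent("draft an email to Sam");
// → { kind: "task", skill: "draft", args: "an email to sam" }
```

The discriminated union is the useful part: every downstream layer can exhaustively switch on `kind` and the compiler guarantees no intent shape is silently dropped.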
Layer 02

Agentic

Tasks executed through skills.

Receives structured tasks and runs them against a skill library: browsing, email, calendar, system control. Local models handle most work; cloud models are invoked only when needed.

Ollama · Playwright · Claude / Gemini (opt-in)
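A skill library keyed by name is one natural shape for this layer. The sketch below is an assumption about the structure, not Juno's real registry; the skill names are placeholders.

```typescript
// Hedged sketch of a skill registry: async functions keyed by name,
// invoked with the args from a structured task.

type Skill = (args: string) => Promise<string>;

const skills = new Map<string, Skill>([
  ["email.draft", async (args) => `Draft ready: ${args}`],
  ["browser.open", async (args) => `Opened ${args}`],
]);

async function runTask(skill: string, args: string): Promise<string> {
  const fn = skills.get(skill);
  if (!fn) throw new Error(`Unknown skill: ${skill}`);
  return fn(args);
}
```

Keeping skills behind a uniform async signature means the dispatcher never needs to know whether a skill runs a local model, drives a browser, or calls out to a cloud API.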
Layer 03

Background

Always running, always building context.

Watches RSS, inbox, and calendar. Produces structured context reports that give later interactions grounded, up-to-date state without another round-trip to an API.

ChromaDB / SQLite-vec · Context reports · Continuous summarization
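A "context report" could be as simple as a typed record per source. The field names below are assumptions for illustration, not Juno's actual schema.

```typescript
// Hedged sketch of a context report the background layer might emit
// after watching a source. Schema is illustrative only.

interface ContextReport {
  source: "rss" | "inbox" | "calendar";
  generatedAt: string;                      // ISO timestamp
  summary: string;                          // model-produced digest
  items: { title: string; when?: string }[];
}

function buildReport(
  source: ContextReport["source"],
  items: ContextReport["items"],
): ContextReport {
  return {
    source,
    generatedAt: new Date().toISOString(),
    summary: `${items.length} item(s) from ${source}`, // placeholder for a model summary
    items,
  };
}

const report = buildReport("calendar", [{ title: "Standup", when: "09:30" }]);
```

Because reports are plain structured data, the interactive layer can load the latest one at wake time and ground a response without any network call.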
Capabilities

What it does, in plain terms.

01

Local inference

Qwen, Phi-4, Whisper, and other models run on-device. No data leaves your network unless you opt in to a cloud model for a specific task.
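Running a local model typically means talking to an Ollama server over its HTTP API (`POST /api/generate` on the default port 11434). A minimal sketch, assuming Ollama's defaults; the model tag is a placeholder for whatever you have pulled:

```typescript
// Hedged sketch: querying a local Ollama server. Everything stays on
// localhost unless you point OLLAMA_URL elsewhere.

const OLLAMA_URL = "http://localhost:11434/api/generate";

function buildGenerateRequest(model: string, prompt: string) {
  // stream: false asks Ollama for a single JSON response instead of chunks.
  return { url: OLLAMA_URL, body: JSON.stringify({ model, prompt, stream: false }) };
}

async function generate(model: string, prompt: string): Promise<string> {
  const { url, body } = buildGenerateRequest(model, prompt);
  const res = await fetch(url, { method: "POST", body });
  const json = (await res.json()) as { response: string };
  return json.response;
}

// e.g. await generate("qwen2.5:7b", "Summarize today's calendar")
```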

02

Wake-word voice interface

Speak naturally — Juno detects its wake word, transcribes with Whisper, and responds in natural speech via a local text-to-speech model.

03

Juno Browser

A Playwright-based Chromium agent that can fill forms, click through flows, extract clean content, or display pages in a frameless window.

04

macOS companion

A SwiftUI menu bar app with a hard on/off switch. Handles messages, file operations, app control, and screen reading on your Mac.

05

Multi-model routing

Local models handle most work. Claude or Gemini are available for tasks that genuinely need them — configured per-skill, not per-request.
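"Configured per-skill, not per-request" can be expressed as a static routing table. The skill and model names below are illustrative; Juno's actual configuration format may differ.

```typescript
// Hedged sketch of per-skill model routing: each skill declares its
// model once, and unknown skills fall back to a local default.

type ModelTarget = { provider: "ollama" | "claude" | "gemini"; model: string };

const LOCAL_DEFAULT: ModelTarget = { provider: "ollama", model: "qwen2.5:7b" };

const routing: Record<string, ModelTarget> = {
  "summarize.feed": { provider: "ollama", model: "qwen2.5:7b" },
  "draft.email":    { provider: "ollama", model: "phi4" },
  "research.deep":  { provider: "claude", model: "claude-sonnet" }, // opt-in cloud
};

function resolveModel(skill: string): ModelTarget {
  return routing[skill] ?? LOCAL_DEFAULT;
}
```

Declaring the route once per skill keeps the privacy boundary auditable: you can read the table and know exactly which skills can ever reach a cloud provider.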

06

MIT licensed

No telemetry, no analytics, no accounts, no marketplace. Every line of the codebase is readable, modifiable, and yours to run.

Hardware

Runs on consumer hardware.

Juno is designed around two devices: an always-on edge accelerator and a Mac. Total additional cost is roughly $250–350.

01 / EDGE
Jetson Orin Nano Super
Always-on brain. Runs wake word, speech-to-text, text-to-speech, intent classification, and background summarization continuously at low power.
~ $249
02 / COMPUTE
Mac Mini M4 / MacBook Air M4
Handles larger inference (Ollama 7B–14B), executes system actions, and hosts the SwiftUI companion app.
Existing hardware

Additional hardware cost: approximately $250–350 depending on configuration.

Open source

Read the code. Run it. Fork it.

Juno is MIT licensed and built in the open. Issues, discussions, and pull requests are welcome.

License
MIT
Language
TypeScript
Status
Active
Star on GitHub