Ambient AI for Mac: Tauri + Swift + Ollama. Built It, Tested It, Killed It

What is ambient AI?

Ambient AI is a category of software that passively observes user behavior in the background — tracking active applications, session patterns, and workflow transitions — and surfaces insights or automations without requiring explicit user input. Unlike assistant-style AI, it operates continuously and invisibly, using a local LLM to analyze patterns while keeping all data on-device.

TL;DR

  • Stack: Swift daemon (NSWorkspace + Accessibility API) → Tauri 2.0 Rust backend → React popup; SQLite WAL + Ollama llama3.2:3b, zero cloud
  • Tauri uses 30-40 MB RAM vs Electron's 150-300 MB — critical for a background process running all day
  • In 5 days of real usage: 3,445 sessions collected and a critical bug found in the app's own pattern detection
  • Both ChatGPT and Gemini, asked to evaluate the project as VCs, said kill it: privacy-first ambient AI has no clear monetization path in 2025
  • Project is open source on GitHub (spyrae/ambientghost) — architecture and postmortem fully documented

Ghost is a macOS menu bar app. It quietly watches what you do, stores everything locally, and analyzes work patterns through a local LLM. Swift daemon, Rust backend on Tauri 2.0, React popup, SQLite, Ollama. No cloud. No telemetry.

Over 5 days of real usage, Ghost collected 3,445 sessions and found a critical bug in its own code. Then I fed the project to ChatGPT and Gemini, asking them to evaluate it as a VC would. Both said kill it. I agreed.

This is a technical breakdown of the architecture and an honest postmortem. Code is on GitHub.


The Idea: An Invisible AI Assistant

The pitch sounded great. An app that lives in the menu bar, watches which apps you use, finds repeating patterns, and suggests automations. No manual setup, no timers, no dashboards you have to remember to open.

Four levels:

  1. Awareness - live pulse: which app is active, how many switches per day
  2. Nudges - “you’ve been in Telegram for 25 minutes, switch back to Warp?”, break reminders, evening wrap-up
  3. Insights - week-over-week comparison, peak hours, which apps get used together
  4. Automations - the LLM spots a pattern like “every evening you open Telegram then Perplexity” and offers to automate it

Everything local. SQLite, Ollama, zero network requests (except an optional Claude API fallback).


Stack and Architecture

Swift Daemon (GhostDaemon)
  NSWorkspace + Accessibility API
  NWListener WebSocket :9876

        │ JSON events

Rust Backend (Tauri 2.0)
  observer.rs  → sessions, transitions
  db.rs        → SQLite (WAL, 30-day rolling)
  patterns.rs  → n-gram aggregation
  llm.rs       → Ollama → Claude fallback
  nudges.rs    → focus drift, break, wrap-up
  runner.rs    → open app / URL / Shortcut

        │ Tauri IPC

React 19 + Tailwind (popup 320×480)

Why This Stack

Tauri 2.0 over Electron - Ghost has to be invisible. Electron in the menu bar eats 150-300 MB of RAM. Tauri takes 30-40 MB. For a background process running all day, that difference matters. Tauri has 104,000 stars on GitHub; ChatGPT Desktop (54,400 stars), Jan (41,100), and GitButler (19,900) are all built on it.

Swift daemon separate from Tauri - a platform constraint. NSWorkspace and Accessibility API require a native macOS process. Tauri 2.0 supports Swift plugins, but continuous observation needs a standalone daemon. Communication happens over WebSocket on localhost:9876.

Ollama (llama3.2:3b) for pattern analysis - privacy. Data about which apps you use should not leave your machine. Ollama has 166,000 stars and 52 million monthly downloads. llama3.2:3b handles classification of 5-10 transition patterns just fine.

Swift Daemon: Observing macOS

The core is NSWorkspace.didActivateApplicationNotification. Every time you switch apps, macOS sends a notification. The daemon catches it, grabs the window title via Accessibility API, and sends JSON over WebSocket:

@objc private func appDidActivate(_ notification: Notification) {
    guard let app = notification.userInfo?[
        NSWorkspace.applicationUserInfoKey
    ] as? NSRunningApplication else { return }
    sendEvent(app: app)
}

private func sendEvent(app: NSRunningApplication) {
    let bundleId = app.bundleIdentifier ?? "unknown"
    guard bundleId != lastBundleId else { return } // dedup
    lastBundleId = bundleId

    let windowTitle = getWindowTitle(for: app) // AXUIElement
    let event: [String: Any] = [
        "app_bundle_id": bundleId,
        "app_name": app.localizedName ?? "Unknown",
        "window_title": windowTitle,
        "timestamp": isoFormatter.string(from: Date()),
    ]
    // Serialize to JSON before broadcasting to connected clients
    guard let data = try? JSONSerialization.data(withJSONObject: event),
          let jsonString = String(data: data, encoding: .utf8) else { return }
    server.broadcast(jsonString)
}

Window title comes from AXUIElement. Without Accessibility permission it returns an empty string, but app tracking keeps working:

func getWindowTitle(for app: NSRunningApplication) -> String {
    guard AXIsProcessTrusted() else { return "" }
    let axApp = AXUIElementCreateApplication(app.processIdentifier)
    var focusedWindow: AnyObject?
    AXUIElementCopyAttributeValue(
        axApp, kAXFocusedWindowAttribute as CFString, &focusedWindow
    )
    guard let window = focusedWindow else { return "" }
    var title: AnyObject?
    AXUIElementCopyAttributeValue(
        window as! AXUIElement, kAXTitleAttribute as CFString, &title
    )
    return (title as? String) ?? ""
}

The WebSocket server uses Network.framework (NWListener). No external dependencies - pure Foundation + AppKit.

Rust Backend: Sessions and Patterns

The Rust side connects to the daemon as a WebSocket client with auto-reconnect every 5 seconds. Each event becomes a new session in SQLite:

fn handle_app_event(event: AppEvent) {
    // Close previous session (if > 30 sec)
    if let (Some(sid), Some(start_ts)) = (...) {
        let duration = (now_ts - start_ts).max(0) as u64;
        if duration >= 30 {
            db::end_session(sid, &now_iso, duration).ok();
        }
    }
    // Record transition
    db::insert_transition(prev_sid, &event.app_bundle_id, ...);
    // Start new session
    db::insert_session(&new_id, &now_iso, &event.app_bundle_id, ...);
}

Idle detection runs every 30 seconds - if no event in the last 2 minutes, the session gets closed.
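The idle check itself is simple enough to sketch in a few lines. This is a minimal illustration with made-up names (not Ghost's actual API), assuming sessions carry Unix timestamps:

```rust
// Sketch of the idle check: runs on a 30-second tick; if no event has
// arrived in the last 2 minutes, the open session is closed at the time
// of the last event. Names are illustrative, not Ghost's actual API.

const IDLE_TIMEOUT_SECS: i64 = 120; // 2 minutes without events = idle

struct OpenSession {
    id: u64,
    last_event_ts: i64, // Unix seconds of the most recent daemon event
}

/// Returns the session id to close (and the timestamp to close it at),
/// or None if the user is still active.
fn check_idle(session: &OpenSession, now_ts: i64) -> Option<(u64, i64)> {
    if now_ts - session.last_event_ts >= IDLE_TIMEOUT_SECS {
        Some((session.id, session.last_event_ts))
    } else {
        None
    }
}

fn main() {
    let s = OpenSession { id: 7, last_event_ts: 1_000 };
    assert_eq!(check_idle(&s, 1_060), None); // 1 min: still active
    assert_eq!(check_idle(&s, 1_130), Some((7, 1_000))); // >2 min: close
    println!("idle check ok");
}
```

Closing at the last event's timestamp (rather than "now") keeps idle time out of the recorded durations.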

Patterns: N-gram Aggregation + LLM

Once per hour, patterns.rs takes transitions from the last 7 days, groups them by time window (morning/midday/afternoon/evening/night), extracts app pairs, and counts how many distinct days each pair appeared. Filter: minimum 2 days.
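The aggregation reduces to a distinct-day count per (window, from-app, to-app) triple. A std-only sketch — the window boundaries and field names are my assumptions, not Ghost's schema:

```rust
use std::collections::{HashMap, HashSet};

// Sketch of the hourly aggregation: bucket each transition into a time
// window, then count on how many distinct days each app pair occurred.
// Window boundaries and field names are assumptions, not Ghost's schema.

struct Transition {
    day: String, // e.g. "2025-03-18"
    hour: u8,    // 0-23, local time
    from_app: String,
    to_app: String,
}

fn time_window(hour: u8) -> &'static str {
    match hour {
        5..=10 => "morning",
        11..=13 => "midday",
        14..=17 => "afternoon",
        18..=22 => "evening",
        _ => "night",
    }
}

/// Pairs that appeared on at least `min_days` distinct days, per window.
fn candidates(
    transitions: &[Transition],
    min_days: usize,
) -> Vec<((String, String, String), usize)> {
    let mut days: HashMap<(String, String, String), HashSet<&str>> = HashMap::new();
    for t in transitions {
        let key = (
            time_window(t.hour).to_string(),
            t.from_app.clone(),
            t.to_app.clone(),
        );
        days.entry(key).or_default().insert(t.day.as_str());
    }
    let mut out: Vec<_> = days
        .into_iter()
        .map(|(k, d)| (k, d.len()))
        .filter(|(_, n)| *n >= min_days)
        .collect();
    out.sort_by(|a, b| b.1.cmp(&a.1)); // most-repeated first
    out
}

fn main() {
    let ts = vec![
        Transition { day: "03-18".into(), hour: 22, from_app: "Telegram".into(), to_app: "Perplexity".into() },
        Transition { day: "03-19".into(), hour: 22, from_app: "Telegram".into(), to_app: "Perplexity".into() },
        Transition { day: "03-19".into(), hour: 9, from_app: "Warp".into(), to_app: "Dia".into() },
    ];
    let c = candidates(&ts, 2);
    assert_eq!(c.len(), 1); // only the evening pair repeats across days
    println!("{:?}", c[0]);
}
```

Counting distinct days (not raw occurrences) is what keeps a one-evening burst of alt-tabbing from registering as a habit.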

Candidates go to Ollama:

Observed pattern (repeated 4 times, typically at 22:00 evening):
1. Telegram (ru.keepcoder.Telegram)
2. Perplexity (ai.perplexity.comet)

Respond ONLY in JSON: { "title": "...", "confidence": 0.0-1.0, ... }

Trust model: maximum 1 suggestion per day, confidence threshold 0.75. Accepted, snoozed, or dismissed - the decision gets recorded, and Ghost won’t show it again.
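That gate can be expressed as a few lines of state. A sketch under the stated rules (the struct and field names are hypothetical):

```rust
use std::collections::HashSet;

// Sketch of the suggestion gate: at most one suggestion per day, only
// above the confidence threshold, and never re-show a pattern the user
// has already accepted, snoozed, or dismissed. Names are hypothetical.

const CONFIDENCE_THRESHOLD: f64 = 0.75;

struct Suggestion {
    pattern_id: String,
    confidence: f64,
}

struct TrustModel {
    decided: HashSet<String>, // pattern ids already acted on
    shown_today: bool,        // reset at midnight
}

impl TrustModel {
    fn should_show(&mut self, s: &Suggestion) -> bool {
        if self.shown_today
            || s.confidence < CONFIDENCE_THRESHOLD
            || self.decided.contains(&s.pattern_id)
        {
            return false;
        }
        self.shown_today = true;
        true
    }
}

fn main() {
    let mut tm = TrustModel { decided: HashSet::new(), shown_today: false };
    let good = Suggestion { pattern_id: "tg-pplx".into(), confidence: 0.95 };
    let weak = Suggestion { pattern_id: "warp-dia".into(), confidence: 0.6 };
    assert!(!tm.should_show(&weak)); // below 0.75
    assert!(tm.should_show(&good));  // first strong suggestion today
    assert!(!tm.should_show(&good)); // daily budget spent
    println!("trust gate ok");
}
```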


Data: 5 Days of Real Usage

Ghost ran from March 18 to 22. Raw numbers:

Metric           Value
Sessions         3,445
Unique apps      53
Transitions      3,299
Patterns (LLM)   3

Top by time spent:

App                Time   Avg Session
Warp (terminal)    4h     15 sec
Perplexity Comet   2.3h   12 sec
Dia (browser)      1.3h   13 sec
Telegram           1h     8 sec

Peak activity at 13-15h and 21-23h. Dead zone from 3-8 AM. Typical remote dev rhythm.

The three detected patterns were all “evening research”: Telegram to Perplexity or Warp to Perplexity around 22-23h. Confidence 0.95 with 3-4 repetitions. The LLM found them correctly. Value - zero. I already know I browse Perplexity in the evening.

The Bug That Broke 80% of the Data

Out of 3,445 sessions, 2,778 (80.6%) had duration_seconds = 0 and ended_at = NULL.

The cause was in observer.rs. A session only closes when duration >= 30:

if duration >= 30 {
    db::end_session(sid, &now_iso, duration).ok();
}

Switch apps in 15 seconds - the session gets created but never closed. It sits there with zero duration forever. The irony: the real average session is 8-15 seconds. People don’t work in 30-second blocks. They alt-tab hundreds of times a day.
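The failure mode in miniature (a stand-in for the observer.rs close path, with a plain function in place of db::end_session):

```rust
// Sessions shorter than the 30-second threshold are created but never
// closed, so duration_seconds stays 0 and ended_at stays NULL.
// `closed` stands in for db::end_session.

fn close_if_long_enough(duration: u64, closed: &mut Vec<u64>) {
    if duration >= 30 {
        closed.push(duration);
    }
    // else: the session row is simply abandoned
}

fn main() {
    let mut closed = Vec::new();
    // Real average session was 8-15 seconds; alt-tabbing dominates.
    for d in [8, 12, 15, 45, 9, 120] {
        close_if_long_enough(d, &mut closed);
    }
    assert_eq!(closed, vec![45, 120]); // 4 of 6 sessions orphaned
    println!("{} of 6 sessions never closed", 6 - closed.len());
}
```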

Day      Total   Broken        Valid
Mar 18   316     268 (85%)     48
Mar 19   1,313   1,015 (77%)   298
Mar 20   339     289 (85%)     50
Mar 21   645     545 (85%)     100
Mar 22   832     661 (79%)     171

The fix is trivial - remove the threshold and close every session. But I didn’t make it. Here’s why.


Why I Killed It

After 5 days, Ghost was showing me: “Warp 4h, Telegram 1h, peak at 13-15h.” That’s Screen Time, which is free and built into macOS.

The pattern “you open Telegram then Perplexity at 22:00” - yeah, I know. The automation would be “open Perplexity when you open Telegram at 22:00.” Who would pay $7/month for that?

I fed Ghost to two AI models and asked them to evaluate it as a VC doing due diligence. Both returned the same verdict: technology exists, product does not.

Three arguments that are hard to counter.

Ghost sells observation, not outcomes. Users pay for pain that disappears. “It notices my patterns” is not a pain point. “It sometimes opens apps for me” is not a pain point. “It warns me I’m stuck doomscrolling” - for most people, also not a pain point.

Delayed value. The product needs days of observation before it shows anything useful. Install today, and the “aha moment” is promised next week. Most utilities die right here.

The market doesn’t work. Rewind.ai raised $33M from a16z, pivoted from software to a $99 hardware pendant, and was acquired by Meta in December 2025. The Mac app was shut down, data deleted. Humane AI Pin - $699 plus subscription, returns exceeded sales, HP bought the assets for $116M (asking price was $750M). RescueTime - 15+ years, $2.6M revenue in 2024, stable niche but plateaued. No ambient observer has become a big business.

And the strongest signal of all. I wasn’t using my own product. In 5 days I never once opened the popup and thought “oh, that’s useful.” For an ambient utility, that’s a death sentence. If the founder doesn’t open the app, users definitely won’t.

Both AIs Suggested the Same Pivot

Automatic time tracking for freelancers. Ghost quietly records sessions, groups them by project via LLM (using window titles), and on Friday delivers a ready-made timesheet. The pain is real - freelancers lose 2-3 hours a week reconstructing their workday from memory.
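The core of that pivot is just an aggregation over data Ghost already collects. A rough sketch, assuming each session has already been given a project label by the LLM (all names hypothetical):

```rust
use std::collections::BTreeMap;

// Sketch of the timesheet pivot: sessions carry a duration and, after
// LLM classification of window titles, a project label. Friday's job
// is a sum per project. All names here are hypothetical.

struct ClassifiedSession {
    project: String, // assigned by the LLM from the window title
    duration_secs: u64,
}

/// Hours per project, rounded to one decimal, sorted by project name.
fn weekly_timesheet(sessions: &[ClassifiedSession]) -> BTreeMap<String, f64> {
    let mut secs: BTreeMap<String, u64> = BTreeMap::new();
    for s in sessions {
        *secs.entry(s.project.clone()).or_default() += s.duration_secs;
    }
    secs.into_iter()
        .map(|(p, s)| (p, (s as f64 / 3600.0 * 10.0).round() / 10.0))
        .collect()
}

fn main() {
    let sessions = vec![
        ClassifiedSession { project: "client-a".into(), duration_secs: 5400 },
        ClassifiedSession { project: "client-a".into(), duration_secs: 1800 },
        ClassifiedSession { project: "client-b".into(), duration_secs: 3600 },
    ];
    let sheet = weekly_timesheet(&sessions);
    assert_eq!(sheet["client-a"], 2.0); // 1.5h + 0.5h
    assert_eq!(sheet["client-b"], 1.0);
    println!("{:?}", sheet);
}
```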

A valid direction. But I’m not a freelancer, I don’t bill clients by the hour, and I don’t fill out timesheets. You can build a product for someone else’s pain - but that’s a different mode, with customer development, interviews, and iterations. Not my situation.


The Ambient AI Graveyard

Ambient AI / desktop observer projects, 2024-2026:

Project              Raised        Outcome
Rewind / Limitless   $33M (a16z)   Pivoted to hardware, acquired by Meta (Dec 2025)
Humane AI Pin        $230M+        Failed ($699 + subscription), acquired by HP for $116M (Feb 2025)
Microsoft Recall     -             Privacy scandal, delayed, shipped as opt-in
RescueTime           -             15+ years, ~$2.6M/yr (2024), plateaued
Timing               -             Niche, $108-192/yr, stable

The common thread: users don’t trust apps that “record everything.” Microsoft got a privacy backlash over Recall - data was stored in unencrypted SQLite, accessible to any process. Rewind promised local-only storage but never found PMF and escaped into hardware.

The survivors are narrow tools with a specific job (time tracking), not the ones that tried to “observe everything.”


What’s Worth Salvaging

Ghost didn’t find a pain point as a product. As an engineering experiment, it left three things I’ll carry into other projects.

Tauri 2.0 + Swift Daemon

There are almost no open-source examples of Tauri paired with a native Swift process. Most Tauri apps are web wrappers. This is a different pattern: a standalone Swift daemon for system APIs, a WebSocket bridge, and a Rust backend for logic and storage. It transfers to any macOS app that needs Accessibility API, Screen Capture, or other privileged APIs.

Local LLM Pipeline

Ollama + structured JSON output + confidence scoring + trust model (max 1 suggestion/day). This scaffolding works for anything that needs an LLM without a cloud. Prompt, JSON, validation, decision queue, action.

Privacy-First Storage

SQLite + WAL + 30-day rolling cleanup + “Delete All Data” with VACUUM. A ready template for apps that handle sensitive data with full user control.


Code

The project is MIT-licensed: github.com/spyrae/ambientghost

Tauri 2.0, Rust, Swift, React 19, SQLite, Ollama. macOS 13+.

If you’re building a macOS app with native APIs, local LLM, or ambient observation - fork it, adapt it, use it as a starting point.