sandolab.xyz/ar

Tracking & Overlay Systems

Public surface for the repos that actually exist: attention tracking, world models, overlay experiments, proof/study tooling, and desktop attention services.

This is no longer framed as a single AR product. It is a set of connected systems: sando-tracker, learning-overlay, QEDViz, and personal-usability.

CURRENT SURFACE
$ ./run_chord_focus.sh --survey monitor_survey_results.json
/api/world-model -> room layout editor live
learning-overlay -> serve | x11-overlay | both
qedviz -> /proof/:id, /shared/:encoded, /activity
personal-focus-daemon.service -> local attention hooks

What Exists

Four separate codebases, each with a concrete role, rather than one implied headset product.

Attention Tracking

sando-tracker

Monitor-focus service, world-model editor, and room-layout experiments for understanding where attention is directed across a multi-monitor setup.

Monitor focus service can use camera pose, WT901 IMU, or fallback paths

World-model editor serves room and monitor geometry through local APIs

Chord-focus launcher loads survey results and dual-camera defaults

Learning report tooling summarizes monitor prediction logs
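The camera-pose → IMU → fallback chain above can be sketched as a priority list of providers. Provider names, availability checks, and return types here are illustrative assumptions, not sando-tracker's actual API.

```python
# Hypothetical sketch of the monitor-focus fallback chain: try camera pose
# first, then the WT901 IMU, then a static-layout fallback. All names are
# illustrative, not the real service's interfaces.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class FocusProvider:
    name: str
    available: Callable[[], bool]   # e.g. "is the camera / IMU reachable?"
    read_focus: Callable[[], str]   # returns the monitor id under attention


def pick_provider(chain: list[FocusProvider]) -> Optional[FocusProvider]:
    """Return the first available provider in priority order."""
    for provider in chain:
        if provider.available():
            return provider
    return None


# Example: the camera is down, so the chain falls through to the IMU.
chain = [
    FocusProvider("camera-pose", lambda: False, lambda: "monitor-1"),
    FocusProvider("wt901-imu", lambda: True, lambda: "monitor-2"),
    FocusProvider("static-layout", lambda: True, lambda: "monitor-0"),
]
active = pick_provider(chain)
print(active.name)  # wt901-imu
```

The point of the pattern is that downstream consumers only see "a monitor id", never which sensor produced it.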

Overlay Systems

learning-overlay

Standalone consumer of control-plane APIs with browser and native overlay surfaces.

Runs as HTTP server, X11 overlay, or both

Exposes study-session, proof-capture, activity, and target-run endpoints

Defines a canonical learning schema shared across the surface

Tests cover focus, images, Emacs context, and activity/event reads
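A "canonical learning schema shared across the surface" implies one record type that every surface can serialize and read back. This is a hypothetical sketch of that shape; the field names are assumptions, not learning-overlay's actual schema.

```python
# Hypothetical sketch of a shared learning-event record. Field names
# ("session_id", "kind", "source") are illustrative assumptions.

import json
from dataclasses import dataclass, asdict


@dataclass
class StudyEvent:
    session_id: str
    kind: str        # e.g. "focus", "proof-capture", "target-run"
    source: str      # emitting surface: "x11-overlay", "browser", ...
    payload: dict


def encode(event: StudyEvent) -> str:
    return json.dumps(asdict(event), sort_keys=True)


def decode(raw: str) -> StudyEvent:
    return StudyEvent(**json.loads(raw))


event = StudyEvent("s-42", "focus", "x11-overlay", {"window": "emacs"})
assert decode(encode(event)) == event  # round-trips between surfaces
print(encode(event))
```

The value of a canonical schema is exactly this round-trip: the X11 overlay and the browser routes can exchange events without private agreements.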

Proof Tooling

QEDViz

Proof visualization and study workspace with browser routes, compute services, and deployment machinery.

Routes include /proof/:id, /shared/:encoded, /read, /lib, and /activity

Server covers validation, compute, verification, Drive, and NotebookLM bridge work

Cloud Run deployment is documented as split api-core, compute-service, and lean-lsp services

MCP surface includes visualize_proof, get_proof, and export_proof

Desktop Utilities

personal-usability

Local desktop helpers and operator-attention services that make the workstation itself part of the loop.

Defines stable provider contracts for screen focus and operator attention

Ships installable user services for focus daemon and operator attention

Includes status tooling for checking the active desktop stack

Keeps the desktop boundary explicit instead of hiding it inside a browser app
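A "stable provider contract" means consumers depend on a small interface, not on how focus is detected. This sketch uses a structural Protocol; the method names are assumptions, not personal-usability's actual contract.

```python
# Sketch of a provider contract for screen focus. The Protocol name and
# its methods are illustrative assumptions.

from typing import Protocol


class ScreenFocusProvider(Protocol):
    def current_window(self) -> str: ...
    def is_idle(self) -> bool: ...


def should_surface_overlay(provider: ScreenFocusProvider) -> bool:
    """Only surface state while the operator is active at the desktop."""
    return not provider.is_idle()


class StubX11Provider:
    """Stand-in implementation; a real one would query the X11 stack."""
    def current_window(self) -> str:
        return "emacs"
    def is_idle(self) -> bool:
        return False


print(should_surface_overlay(StubX11Provider()))  # True
```

Keeping the contract this small is what lets the desktop stack be swapped (X11 today, something else later) without touching browser-side consumers.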

How The Pieces Fit

The common thread is not "AR." It is attention, context, and surfaced state across desktop, browser, and study workflows.

CAPTURE     sando-tracker        monitor focus · room model
INTERPRET   learning-overlay     schema · target runs
ROUTE       personal-usability   focus daemon · operator attention
SURFACE     QEDViz               proof routes · study workspace

↓ shared context and event flow ↓

monitor layout · focus windows · prediction logs · activity reads ·
study sessions · proof capture · browser UI · X11 overlay · desktop services
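The capture → interpret → route → surface flow can be sketched as four stage functions over a shared event record. The stage behavior here is illustrative only; it shows the composition, not any repo's actual logic.

```python
# Illustrative sketch of the four-stage flow. Each function stands in for
# one repo's role; the field names and decisions are assumptions.

def capture(raw: dict) -> dict:        # sando-tracker: observe attention
    return {**raw, "stage": "captured"}

def interpret(event: dict) -> dict:    # learning-overlay: classify it
    return {**event, "kind": "focus" if "monitor" in event else "other"}

def route(event: dict) -> dict:        # personal-usability: pick a target
    target = "x11-overlay" if event["kind"] == "focus" else "log"
    return {**event, "target": target}

def surface(event: dict) -> str:       # QEDViz / overlay: render it
    return f"{event['target']}: {event['kind']} on {event.get('monitor', '?')}"

event = {"monitor": "monitor-1"}
print(surface(route(interpret(capture(event)))))
# x11-overlay: focus on monitor-1
```

Each stage only adds fields, so any stage can be replaced or inspected without the others knowing.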

Attention First

The tracking work asks where attention is actually directed before deciding what to surface.

Overlay As Surface

The overlay work is a consumer of existing context, not an excuse to pretend the whole stack is a headset product.

Proof And Study Loop

QEDViz turns some of that context into explicit study and proof surfaces rather than keeping it buried in logs.

Public Boundary

This page is intentionally narrower than the old pitch.

What is real: monitor-focus experiments, room/world models, X11 overlay code, study/proof browser routes, and local desktop attention services.

What is not claimed: a single shipped AR headset product, a field deployment system, or a unified commercial suite.

The useful idea here is the interface boundary between attention tracking, surfaced context, and proof/study work. The repos are early, but they are real.

Current Questions

The next work is about deciding which boundaries are worth hardening.

Q1
active

Attention Model

  • How far can monitor layout, webcam pose, and IMU signals be pushed before they become noisy?
  • What kinds of world models are actually worth maintaining by hand?
  • Which attention transitions matter enough to route onward?
Q2
active

Overlay Boundary

  • What belongs in an X11 HUD versus a browser route versus a status log?
  • How little information can be surfaced while still changing behavior?
  • When does an overlay help, and when is it just another distraction surface?
Q3
active

Study And Proof Surfaces

  • How should proof routes, reading activity, and study sessions line up?
  • What event history is actually useful inside a proof workspace?
  • How much of the proof loop belongs in compute versus UI?
Q4
open

Desktop Agency

  • How should personal-usability stay separate from the browser surfaces?
  • Which desktop hooks are durable enough to be treated as provider contracts?
  • Where should operator attention live as the control plane grows?

Relevant Background

Relevant context for this slice of the work: University of Manitoba psychology and HCI research; software work across consulting, SkipTheDishes, Datomar Labs, and ReLease / Cios; and the current private control-plane stack.

U of M -> Psychology + Computer Science
APGV / PacificVis -> Verified HCI publication venues
Tracker / Overlay / Proof -> Current repo surfaces

Talk About The Interface Boundary

If you want to talk about attention tracking, overlay surfaces, study tooling, or adjacent control-plane design, reach out.

[email protected]