Public surface for the repos that actually exist: attention tracking, world models, overlay experiments, proof/study tooling, and desktop attention services.
This is no longer framed as a single AR product. It is a set of connected systems: sando-tracker, learning-overlay, QEDViz, and personal-usability.
Four separate codebases, each with a concrete role, rather than one implied headset product.
sando-tracker: a monitor-focus service, world-model editor, and room-layout experiments for understanding where attention is directed across a multi-monitor setup.
Monitor-focus service can use camera pose, a WT901 IMU, or fallback paths
World-model editor serves room and monitor geometry through local APIs
Chord-focus launcher loads survey results and dual-camera defaults
Learning report tooling summarizes monitor prediction logs
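To make the camera/IMU/fallback idea concrete, here is a minimal sketch of a provider fallback chain. Every class, field, and method name here is an assumption for illustration; none of it is sando-tracker's actual API.

```python
# Hypothetical sketch of a focus-provider fallback chain.
# Names (FocusEstimate, CameraPoseProvider, etc.) are illustrative,
# not taken from sando-tracker.
from dataclasses import dataclass
from typing import Optional, Sequence


@dataclass
class FocusEstimate:
    monitor_id: str
    confidence: float


class PoseProvider:
    """Assumed contract: a focus source either produces an estimate or None."""
    def estimate(self) -> Optional[FocusEstimate]:
        raise NotImplementedError


class CameraPoseProvider(PoseProvider):
    def __init__(self, connected: bool = False):
        self.connected = connected

    def estimate(self) -> Optional[FocusEstimate]:
        if not self.connected:
            return None  # camera unavailable: fall through to the next provider
        return FocusEstimate("monitor-1", 0.9)


class Wt901ImuProvider(PoseProvider):
    def __init__(self, streaming: bool = False):
        self.streaming = streaming

    def estimate(self) -> Optional[FocusEstimate]:
        if not self.streaming:
            return None  # IMU not streaming: fall through
        return FocusEstimate("monitor-2", 0.6)


class FallbackProvider(PoseProvider):
    def estimate(self) -> Optional[FocusEstimate]:
        # Last resort: assume the primary monitor, with low confidence.
        return FocusEstimate("monitor-0", 0.2)


def resolve_focus(providers: Sequence[PoseProvider]) -> FocusEstimate:
    """Return the first estimate any provider in the chain can produce."""
    for provider in providers:
        estimate = provider.estimate()
        if estimate is not None:
            return estimate
    raise RuntimeError("no provider produced an estimate")
```

A chain such as `[CameraPoseProvider(), Wt901ImuProvider(), FallbackProvider()]` then degrades gracefully: the low-confidence fallback only answers when neither sensor path is live.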
learning-overlay: a standalone consumer of control-plane APIs with browser and native overlay surfaces.
Runs as an HTTP server, an X11 overlay, or both
Exposes study-session, proof-capture, activity, and target-run endpoints
Defines a canonical learning schema shared across the surface
Tests cover focus, images, Emacs context, and activity/event reads
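The "canonical learning schema" point is easiest to see as a record type shared by all surfaces. The field names below are assumptions for illustration, not learning-overlay's actual schema.

```python
# Hypothetical sketch of a canonical learning-event record.
# Field names are illustrative assumptions, not the real schema.
from dataclasses import dataclass, asdict
import json


@dataclass
class LearningEvent:
    session_id: str
    kind: str         # e.g. "study-session", "proof-capture", "activity"
    timestamp: float
    payload: dict

    def to_json(self) -> str:
        # One serialization shared by the HTTP server and the X11 overlay,
        # so both surfaces read and write the same shape.
        return json.dumps(asdict(self), sort_keys=True)

    @staticmethod
    def from_json(raw: str) -> "LearningEvent":
        return LearningEvent(**json.loads(raw))
```

The design point is the single schema, not the serialization: every endpoint listed above would accept and emit the same record shape instead of each surface inventing its own.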
QEDViz: proof visualization and a study workspace with browser routes, compute services, and deployment machinery.
Routes include /proof/:id, /shared/:encoded, /read, /lib, and /activity
Server covers validation, compute, verification, Drive, and NotebookLM bridge work
Cloud Run deployment is documented as a split across api-core, compute-service, and lean-lsp services
MCP surface includes visualize_proof, get_proof, and export_proof
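The routes above use Express-style `:param` placeholders. As a minimal sketch of how such patterns resolve to parameters (the matcher here is illustrative; QEDViz's real router is not shown):

```python
# Illustrative matcher for Express-style route patterns like "/proof/:id".
import re
from typing import Optional


def compile_route(pattern: str) -> re.Pattern:
    """Turn '/proof/:id' into a regex with a named group per parameter."""
    regex = re.sub(r":(\w+)", r"(?P<\1>[^/]+)", pattern)
    return re.compile(f"^{regex}$")


def match(pattern: str, path: str) -> Optional[dict]:
    """Return captured parameters if the path matches, else None."""
    m = compile_route(pattern).match(path)
    return m.groupdict() if m else None
```

So `/proof/:id` against `/proof/42` yields `{"id": "42"}`, while static routes like `/read` and `/lib` match with no parameters.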
personal-usability: local desktop helpers and operator-attention services that make the workstation itself part of the loop.
Defines stable provider contracts for screen focus and operator attention
Ships installable user services for focus daemon and operator attention
Includes status tooling for checking the active desktop stack
Keeps the desktop boundary explicit instead of hiding it inside a browser app
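A "stable provider contract" can be sketched as a structural interface that both the focus daemon and the status tooling depend on. The names below are assumptions in the spirit of the description; personal-usability's real interfaces are not reproduced here.

```python
# Hypothetical provider contract for operator attention.
# AttentionProvider, X11FocusProvider, and stack_status are illustrative names.
from typing import Protocol, Sequence, runtime_checkable


@runtime_checkable
class AttentionProvider(Protocol):
    """Assumed contract: report the current focus target and liveness."""
    def current_focus(self) -> str: ...
    def healthy(self) -> bool: ...


class X11FocusProvider:
    """Toy implementation satisfying the contract (assumed behavior)."""
    def current_focus(self) -> str:
        return "monitor-1"

    def healthy(self) -> bool:
        return True


def stack_status(providers: Sequence[AttentionProvider]) -> dict:
    """Mimic status tooling: report which providers in the stack are live."""
    return {type(p).__name__: p.healthy() for p in providers}
```

Keeping the contract explicit is what lets the desktop boundary stay visible: a browser app consumes the provider, it does not become it.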
The common thread is not "AR." It is attention, context, and surfaced state across desktop, browser, and study workflows.
The tracking work asks where attention is actually directed before deciding what to surface.
The overlay work is a consumer of existing context, not an excuse to pretend the whole stack is a headset product.
QEDViz turns some of that context into explicit study and proof surfaces rather than keeping it buried in logs.
This page is intentionally narrower than the old pitch.
What is real: monitor-focus experiments, room/world models, X11 overlay code, study/proof browser routes, and local desktop attention services.
What is not claimed: a single shipped AR headset product, a field deployment system, or a unified commercial suite.
The useful idea here is the interface boundary between attention tracking, surfaced context, and proof/study work. The repos are early, but they are real.
The next work is about deciding which boundaries are worth hardening.
Relevant context for this slice of the work: University of Manitoba psychology and HCI research; software work across consulting, SkipTheDishes, Datomar Labs, and ReLease / Cios; and the current private control-plane stack.
If you want to talk about attention tracking, overlay surfaces, study tooling, or adjacent control-plane design, reach out.
[email protected]