Lace reads your screen by combining a visual capture with the macOS accessibility tree. Pixels provide layout; accessibility data provides text, labels, roles, and interactive state. Together they let Lace work across any macOS application, not just browsers.

## Documentation Index
Fetch the complete documentation index at: https://docs.inlace.co/llms.txt
Use this file to discover all available pages before exploring further.
## How capture works
Capture is tied to the active chat thread. Draft chats can start capture automatically; existing chats start capture when you use the recorder control.

### Screenshot and accessibility scan
Lace captures a screenshot and reads the accessibility tree: text labels, button names, input values, and element roles.
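The kind of record an accessibility scan yields can be sketched as below. This is an illustrative data shape only; the field names and `AX`-prefixed role strings follow macOS accessibility conventions, but the actual schema Lace uses is an assumption here.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical record for one accessibility-tree node.
# Field names are illustrative, not Lace's actual schema.
@dataclass
class AXNode:
    role: str                         # e.g. "AXButton", "AXTextField"
    label: Optional[str]              # accessibility label, if any
    value: Optional[str]              # current input value, if any
    frame: Tuple[int, int, int, int]  # (x, y, width, height) on screen

# A button as the scan might represent it.
submit = AXNode(role="AXButton", label="Submit", value=None,
                frame=(640, 420, 96, 32))
```

Each node pairs semantic data (role, label, value) with an on-screen frame, which is what later makes fusion with pixel-level detections possible.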
### Vision analysis
A vision model detects UI elements and identifies spatial relationships between components.
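The spatial relationships the vision pass identifies can be sketched with a simple box-to-box classifier. This is a minimal illustration of the idea, not Lace's actual model output; the function name and relation labels are made up for the example.

```python
# Hypothetical sketch: given two bounding boxes (x, y, w, h),
# name the spatial relationship between them.
def spatial_relation(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    if ax + aw <= bx:
        return "left-of"
    if bx + bw <= ax:
        return "right-of"
    if ay + ah <= by:
        return "above"
    if by + bh <= ay:
        return "below"
    return "overlapping"

# A label at x=10 sits entirely left of an input starting at x=120.
relation = spatial_relation((10, 10, 100, 24), (120, 10, 200, 24))  # "left-of"
```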
### Fusion
Vision results and accessibility data are merged into elements with both visual position and semantic meaning.
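One way this merge can work is to pair each vision detection with the accessibility node whose bounding box overlaps it most. The sketch below uses intersection-over-union as the matching heuristic; the function names, the 0.5 threshold, and the dict shapes are assumptions for illustration, not Lace's actual implementation.

```python
# Hedged sketch of the fusion step: match vision boxes to accessibility
# nodes by bounding-box overlap, producing elements that carry both
# pixel position and semantic meaning.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def fuse(vision_boxes, ax_nodes):
    """vision_boxes: list of (x, y, w, h) detections.
    ax_nodes: list of dicts with 'frame', 'role', 'label'.
    Returns fused elements combining position and semantics."""
    fused = []
    for box in vision_boxes:
        best = max(ax_nodes, key=lambda n: iou(box, n["frame"]), default=None)
        # Only accept confident matches (threshold is an assumption).
        if best and iou(box, best["frame"]) > 0.5:
            fused.append({"box": box, "role": best["role"],
                          "label": best["label"]})
    return fused

nodes = [{"frame": (640, 420, 96, 32), "role": "AXButton", "label": "Submit"}]
elements = fuse([(642, 421, 94, 30)], nodes)  # one fused "Submit" button
```

The fused element answers both kinds of question at once: where the control is on screen (from vision) and what it is and says (from accessibility).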
## What Lace can see
| Data type | Source |
|---|---|
| Text content | Accessibility tree |
| Element roles | Accessibility tree |
| Visual layout | Screenshot + vision |
| Page structure | Both |
| Interactive state | Accessibility tree |