Lace reads your screen by combining a visual capture with the macOS accessibility tree. Pixels provide layout. Accessibility data provides text, labels, roles, and interactive state. Together they let Lace work across any macOS application, not just browsers.

How capture works

Capture is tied to the active chat thread. Draft chats can start capture automatically. Existing chats start capture when you use the recorder control.
1. Screenshot and accessibility scan

Lace captures a screenshot and reads the accessibility tree: text labels, button names, input values, element roles.
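A minimal sketch of what such a scan might collect, using a toy tree. The `AXNode` shape and `scan` helper are illustrative, not Lace's actual API; on macOS, real accessibility nodes expose attributes such as AXRole, AXTitle, and AXValue.

```python
from dataclasses import dataclass, field

@dataclass
class AXNode:
    role: str                # e.g. "AXButton", "AXTextField"
    label: str = ""          # accessible name / button title
    value: str = ""          # current input value, if any
    children: list["AXNode"] = field(default_factory=list)

def scan(node: AXNode) -> list[dict]:
    """Flatten the tree into records of roles, labels, and values."""
    records = [{"role": node.role, "label": node.label, "value": node.value}]
    for child in node.children:
        records.extend(scan(child))
    return records

window = AXNode("AXWindow", "Login", children=[
    AXNode("AXTextField", "Email", "user@example.com"),
    AXNode("AXButton", "Sign In"),
])
records = scan(window)
print(records)
```

The flattened records carry the semantic half of the capture; the screenshot supplies the pixels the next step analyzes.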
2. Vision analysis

A vision model detects UI elements and identifies spatial relationships between components.
3. Fusion

Vision results and accessibility data are merged into elements with both visual position and semantic meaning.
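One plausible way to merge the two sources is spatial matching: pair each vision detection with the accessibility element whose bounding box overlaps it most. The sketch below uses intersection-over-union for that pairing; the record fields and threshold are assumptions, not Lace's actual algorithm.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def fuse(vision_boxes, ax_elements, threshold=0.5):
    """Pair each vision detection with the best-overlapping accessibility
    element, yielding items with both pixel position and semantic meaning."""
    fused = []
    for box in vision_boxes:
        best = max(ax_elements, key=lambda el: iou(box["bbox"], el["bbox"]),
                   default=None)
        if best and iou(box["bbox"], best["bbox"]) >= threshold:
            fused.append({"bbox": box["bbox"], "kind": box["kind"],
                          "role": best["role"], "label": best["label"]})
    return fused

vision = [{"bbox": (10, 10, 80, 24), "kind": "button"}]
ax = [{"bbox": (12, 11, 76, 22), "role": "AXButton", "label": "Sign In"}]
fused = fuse(vision, ax)
print(fused)
```

Each fused element knows both where it is on screen (from vision) and what it means (from accessibility).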
4. Context injection

The parsed screen is attached to the active chat thread as structured context.
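A sketch of what "structured context" could look like when attached to a chat message. The field names (`screen_context`, `attachments`, and so on) are illustrative, not Lace's actual schema.

```python
import json

def build_context(app_name, fused_elements):
    """Package the parsed screen as a structured attachment."""
    return {
        "type": "screen_context",
        "app": app_name,
        "elements": [
            {"role": e["role"], "label": e["label"], "bbox": list(e["bbox"])}
            for e in fused_elements
        ],
    }

ctx = build_context("Safari", [
    {"role": "AXButton", "label": "Sign In", "bbox": (10, 10, 80, 24)},
])
message = {"role": "user", "text": "Click the sign-in button",
           "attachments": [ctx]}
print(json.dumps(message, indent=2))
```

Because the context is structured rather than a raw image, the model can reference elements by role and label instead of guessing from pixels alone.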

What Lace can see

| Data type | Source |
| --- | --- |
| Text content | Accessibility tree |
| Element roles | Accessibility tree |
| Visual layout | Screenshot + vision |
| Page structure | Both |
| Interactive state | Accessibility tree |

The overlay

Press ⌘O to toggle the screen context overlay. Detected elements are highlighted so you can see exactly what Lace has parsed.

Caching

Lace caches parsed screens. If your active window hasn’t changed, the cached context is reused. When the content changes, Lace re-captures automatically.
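This reuse-until-changed behavior can be sketched as a cache keyed on a fingerprint of the window's content; re-parsing happens only when the fingerprint differs. The hashing scheme and `parse_fn` hook here are assumptions for illustration, not Lace's implementation.

```python
import hashlib

class ScreenCache:
    """Reuse parsed context while the active window's content is
    unchanged; re-parse when its fingerprint differs."""
    def __init__(self, parse_fn):
        self.parse_fn = parse_fn
        self._key = None
        self._context = None

    def get(self, window_id, content_bytes):
        key = (window_id, hashlib.sha256(content_bytes).hexdigest())
        if key != self._key:                 # content changed: re-capture
            self._context = self.parse_fn(content_bytes)
            self._key = key
        return self._context

calls = []
cache = ScreenCache(lambda b: calls.append(b) or {"size": len(b)})
cache.get("win-1", b"frame-a")
cache.get("win-1", b"frame-a")   # unchanged: cached context reused
cache.get("win-1", b"frame-b")   # changed: parsed again
print(len(calls))                # → 2
```

Hashing the content rather than tracking timestamps means any visible change, however it was caused, invalidates the cache.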