The last three tenets of Antigravity — trust, feedback, and self-improvement — come together in a set of features that distinguishes the platform from simpler AI coding tools. This lesson covers how each works in practice.
## Artifacts: The Trust Layer
When an agent completes work in Antigravity, it doesn't just produce a code diff. It produces Artifacts — structured deliverables designed specifically for human review.
Artifact types include:
| Artifact | What it contains |
|---|---|
| Task list | The plan the agent made before starting |
| Implementation plan | Architectural decisions and approach |
| Walkthrough | Step-by-step explanation of what was done |
| Screenshots | Visual state of the UI at key points |
| Browser recordings | Recorded browser interactions for front-end tasks |
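Antigravity doesn't publish a formal artifact schema, but the types in the table above can be modeled with a small sketch like the following. All names here (`ArtifactKind`, `Artifact`, the field names) are hypothetical illustrations, not the platform's API:

```python
from dataclasses import dataclass, field
from enum import Enum

class ArtifactKind(Enum):
    # One variant per row of the table above
    TASK_LIST = "task_list"
    IMPLEMENTATION_PLAN = "implementation_plan"
    WALKTHROUGH = "walkthrough"
    SCREENSHOT = "screenshot"
    BROWSER_RECORDING = "browser_recording"

@dataclass
class Artifact:
    kind: ArtifactKind
    title: str
    body: str = ""  # text content, or a file path for media artifacts
    comments: list = field(default_factory=list)  # reviewer feedback attaches here

# A completed run yields a bundle of artifacts for human review,
# not just a code diff.
run_artifacts = [
    Artifact(ArtifactKind.IMPLEMENTATION_PLAN, "Auth refactor plan"),
    Artifact(ArtifactKind.WALKTHROUGH, "What was done and how it was verified"),
]
```

The point of the model is that review targets structured deliverables, each of which can carry its own comments, rather than a single undifferentiated diff.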
The agent is expected to "thoroughly think through verification of its work, not just the work itself." Artifacts are how it communicates that verification — showing you evidence that it understood the task and checked its own output.
This sits between the two extremes Google identified as failing: raw tool-call logs (too much) and just a code diff (too little). Artifacts give you the right level of detail to make an informed review decision.
## Reading a Task Walkthrough
The walkthrough Artifact is the most important one to review. It describes, in plain language:
- What the agent understood the task to be
- What approach it chose and why
- What edge cases it considered
- What it verified before marking the task complete
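One way to make that review systematic is to check the walkthrough against the four points above as a checklist. This is an illustrative sketch of a reviewer's habit, not an Antigravity feature:

```python
# The four points a good walkthrough should address (from the list above).
REQUIRED_SECTIONS = ["understanding", "approach", "edge cases", "verification"]

def review_gaps(walkthrough_sections):
    """Return the required points the walkthrough fails to cover."""
    covered = {s.strip().lower() for s in walkthrough_sections}
    return [s for s in REQUIRED_SECTIONS if s not in covered]

# A walkthrough that never mentions edge cases is a prompt for feedback.
gaps = review_gaps(["Understanding", "Approach", "Verification"])
```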
If the walkthrough describes a decision you disagree with, that's exactly what feedback is for — and you don't need to wait for the task to finish.
## Giving Feedback Mid-Task
One of Antigravity's design goals was to solve the "core failing of a remote-only form factor" — the inability to iterate with an agent that's running asynchronously.
The solution is multi-modal, mid-task feedback:
- On text Artifacts — use Google Docs-style inline comments. Click any section of a plan or walkthrough and add a note; the agent picks it up without stopping.
- On screenshots and visual Artifacts — select a region and comment directly on it: "This modal is missing the close button," attached to the exact part of the screenshot.
- On the task itself — type feedback in the task panel. The agent incorporates it "into its execution without requiring you to stop the agent's process."
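The three channels share one property: the agent consumes feedback between steps instead of being interrupted. A hypothetical sketch of that mechanism (the class and field names are illustrative, not Antigravity's actual API):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Feedback:
    """One piece of mid-task feedback; the optional fields pick the channel."""
    text: str
    artifact_id: Optional[str] = None  # inline comment on a text artifact
    region: Optional[Tuple[int, int, int, int]] = None  # (x, y, w, h) on a screenshot
    # neither field set: plain feedback typed into the task panel

class FeedbackQueue:
    """The agent drains this between execution steps; posting never stops it."""
    def __init__(self):
        self._pending = []

    def post(self, fb: Feedback):
        self._pending.append(fb)

    def drain(self):
        pending, self._pending = self._pending, []
        return pending

queue = FeedbackQueue()
queue.post(Feedback("This modal is missing the close button",
                    artifact_id="screenshot-3", region=(120, 40, 200, 150)))
incorporated = queue.drain()
```

Because posting only appends to a queue, the reviewer and the running agent never block each other — which is the whole point of mid-task feedback.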
This means the 80/20 problem Google identified — agents completing 80% of the work but making the last 20% harder — is addressed by making feedback frictionless rather than disruptive.
## The Knowledge System
Antigravity treats learning as a core primitive, not an afterthought.
Every agent run contributes to a shared knowledge base. The system captures two types of knowledge:
- Explicit: useful code snippets, architecture patterns derived from your project
- Abstract: the sequence of steps that successfully completed a particular subtask
Future tasks retrieve from this knowledge base before starting work. If the agent successfully set up authentication in your stack last month, it knows the steps and applies them when auth comes up again.
All accumulated knowledge is visible from the Agent Manager. You can inspect what the system has learned, understand why it's making certain decisions, and remove knowledge that's outdated or wrong.
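A hypothetical sketch of how such a knowledge base could behave — capture after a run, retrieval before the next one, and removal of stale entries. The names are illustrative, not Antigravity's API:

```python
from dataclasses import dataclass

@dataclass
class KnowledgeEntry:
    kind: str     # "explicit" (snippet/pattern) or "abstract" (step sequence)
    topic: str    # e.g. "authentication"
    content: str

class KnowledgeBase:
    def __init__(self):
        self.entries = []  # inspectable, like the Agent Manager's knowledge view

    def capture(self, entry: KnowledgeEntry):
        self.entries.append(entry)

    def retrieve(self, topic: str):
        """Called before a new task starts, priming the agent with prior learnings."""
        return [e for e in self.entries if e.topic == topic]

    def forget(self, topic: str):
        """Remove knowledge that has become outdated or wrong."""
        self.entries = [e for e in self.entries if e.topic != topic]

kb = KnowledgeBase()
kb.capture(KnowledgeEntry("abstract", "authentication",
                          "1. install lib  2. add middleware  3. protect routes"))
prior = kb.retrieve("authentication")
```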
## Putting It Together
The full workflow looks like this:
1. Spawn a task in Manager View with a clear objective
2. Agent produces Artifacts — a plan, a walkthrough, verification screenshots
3. You review Artifacts and add inline comments where the approach needs adjusting
4. Agent incorporates feedback and continues without stopping
5. Task completes — you review the final diff alongside the walkthrough
6. Knowledge is captured — the successful approach is stored for future tasks
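The steps above can be sketched as one end-to-end loop. The stub classes below stand in for the real agent and reviewer and are purely illustrative:

```python
class StubAgent:
    """Stands in for the agent: plans, executes, and folds in feedback."""
    def plan_and_execute(self, objective, prior_knowledge):
        return {"walkthrough": f"Did {objective!r} with {len(prior_knowledge)} prior notes",
                "revisions": 0}

    def incorporate(self, artifacts, feedback):
        artifacts["revisions"] += len(feedback)  # continues without stopping
        return artifacts

class StubReviewer:
    """Approves on the second look, after one round of inline comments."""
    def __init__(self):
        self.rounds = 0

    def approves(self, artifacts):
        self.rounds += 1
        return self.rounds > 1

    def comments(self, artifacts):
        return ["tighten error handling"]

def run_task(objective, agent, reviewer, knowledge):
    prior = knowledge.get(objective, [])                # reuse past learnings
    artifacts = agent.plan_and_execute(objective, prior)
    while not reviewer.approves(artifacts):             # review + mid-task feedback
        artifacts = agent.incorporate(artifacts, reviewer.comments(artifacts))
    knowledge.setdefault(objective, []).append(artifacts["walkthrough"])
    return artifacts

knowledge = {}
result = run_task("add auth", StubAgent(), StubReviewer(), knowledge)
```

The knowledge dict persists across calls to `run_task`, so a second task on the same topic would start with the first task's walkthrough already retrieved.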
Over time, the platform gets better at your specific codebase, your team's conventions, and your preferred patterns. The more you use it, the less you need to re-explain context.