06 / 25
Who Writes Code?
Core Questions
- When are agents allowed to author code?
- How is authorship recorded?
- Who owns the result?
Code has always had authors. Names in git logs, signatures on commits, blame annotations pointing to who wrote what. This matters for debugging, for accountability, for credit. Now agents are writing code too. The question isn't whether to let them — they already are. The question is how to record it, attribute it, and decide who owns the result.
When are agents allowed to author code?
The honest answer: agents are already authoring code in most engineering organizations, whether leadership knows it or not. Developers use Claude, Codex, Cursor, and other AI coding tools, and paste the suggestions into their editors. The code ships. Nobody tracks that an AI wrote it.
This isn't inherently bad; it's just invisible. The real question is: when should agents author code autonomously, without a human actively reviewing each line before it's committed?
The Autonomy Spectrum
Level 0: Suggestion
Agent suggests code. Human reviews, edits, and commits. Human is the author. This is Copilot-style autocomplete.
Level 1: Draft
Agent writes a complete change. Human reviews the diff before merge. Agent is the author and committer; human is the reviewer/approver.
Level 2: Autonomous (gated)
Agent writes and commits code. Merge requires human approval or CI gate. Agent is the author; human is the approver.
Level 3: Autonomous (ungated)
Agent writes, commits, and merges without human review. Full autonomy. Only appropriate for low-stakes, highly constrained tasks.
Most teams should operate at Level 1 or Level 2. Level 0 is where we've been; it doesn't capture the productivity gains of agentic workflows. Level 3 is where we're headed eventually, but only for narrow, well-defined tasks with strong guardrails.
The key insight: autonomy should be proportional to reversibility. An agent can autonomously update a README because the worst case is a bad commit that's easy to revert. An agent should not autonomously modify database migration logic because the worst case is production data loss.
What agents should (and shouldn't) write autonomously
Good candidates for autonomy
- ✓ Documentation updates
- ✓ Test additions (not modifications)
- ✓ Dependency version bumps
- ✓ Linting and formatting fixes
- ✓ Boilerplate generation
- ✓ Config file updates (non-security)
Requires human review
- ✗ Business logic changes
- ✗ Security-sensitive code
- ✗ Database migrations
- ✗ Authentication / authorization
- ✗ Financial calculations
- ✗ API contract changes
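One way to enforce a split like the one above is a pre-merge gate that checks which paths an agent's change touches. A minimal sketch, with illustrative path patterns; in CI the file list would come from `git diff --name-only`, but here it's hardcoded so the example is self-contained:

```shell
# Illustrative allowlist of low-risk paths an agent may change autonomously.
allowed='^(docs/|.*\.md$|tests/.*_test\.py$)'

# In CI this would be: changed=$(git diff --name-only origin/main...HEAD)
changed="docs/setup.md
README.md
src/billing/invoice.py"

# Any changed file outside the allowlist forces human review.
blocked=$(printf '%s\n' "$changed" | grep -Ev "$allowed" || true)
if [ -z "$blocked" ]; then
  echo "autonomous merge: allowed"
else
  echo "autonomous merge: blocked, human review required for:"
  printf '%s\n' "$blocked"
fi
```

With the sample file list, the billing change trips the gate and the script demands review.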
How is authorship recorded?
Git has two relevant fields: author and committer. By convention, the author is who wrote the code; the committer is who added it to the repository. These are usually the same person, but they don't have to be.
For agent-authored code, you have choices. If you buy the principle from Identity, Secrets & Trust, there is also a clear default: don’t let the agent impersonate the developer.
Attribution Models
Default: Agent as author and committer
If the agent wrote the code, the agent should be both the author and the committer of the commits it produced. Humans provide review and approval via the PR process.
Avoid: Human as author, agent in metadata
This is effectively impersonation. It makes history say a human wrote code they didn’t. If you need a human on the hook, require human review and record approval explicitly.
Avoid: Co-authored-by as a substitute for identity
Co-authored-by can be useful for human-human collaboration, but it’s not a clean model for autonomous agent work. Don’t use it to paper over the question of “who actually wrote this code?”
Practical rule: don’t let the agent impersonate a developer in git history. If the agent authored the change, its commits should say so. Accountability comes from review and approval, not fake authorship.
Whatever you choose, be consistent. If you can't query your git history to find all agent-authored code, you've lost traceability. Use a dedicated email domain or naming convention for agent identities so they're always distinguishable.
Practical implementation
Set up dedicated Git identities for your agents (and keep them separate from humans):
```shell
# In your agent's environment
git config user.name "Claude (Agent)"
git config user.email "[email protected]"
```
Add trailers for traceability and review linkage:
```shell
git commit -m "Fix null check in user service

Agent-task-id: task-abc123
Agent-model: claude-sonnet-4-20250514
Agent-charter: github.com/org/repo/issues/42
Requested-by: [email protected]
Reviewed-by: [email protected]"
```
Git blame, review, and long-term trust
Months from now, someone will run git blame to answer “why is this like this?” If every line points to an agent identity with no human sign-off, you’ll create a trust problem and a debugging problem.
The fix isn’t to attribute agent code to humans. The fix is to attach humans to the history in the right place: review and approval.
- Require PR review for agent-authored changes (CODEOWNERS + required approvals).
- Link every change to a durable spec/issue (the charter) so “why” is recoverable.
- Record reviewer identity in the PR system of record, and optionally in commit trailers (e.g. Reviewed-by).
Signing and provenance
Attribution tells you who claims to have written the code. Signing tells you that claim is cryptographically verified. In a world where anyone can set git config user.email to anything, signing is what gives attribution teeth.
For agent-authored code, signing answers a critical question: did this code actually come from the agent system you trust, or did someone spoof it?
Signing Strategies
GPG signing with agent keys
Each agent has its own GPG key. Commits are signed. You can verify that a commit claiming to be from "agent-claude" was actually made by a system holding that agent's private key.
SSH signing (Git 2.34+)
Simpler than GPG. The agent's SSH key is used for signing. GitHub, GitLab, and others can display verification status. Easier to manage than GPG in most setups.
Sigstore / Gitsign
Keyless signing tied to identity (OIDC). The agent authenticates via your identity provider, and the commit is signed with an ephemeral key. No long-lived secrets to manage. Modern, but less universally supported.
Beyond commit signing, consider provenance attestations — structured metadata about how code was produced. This is the direction the industry is heading with SLSA and in-toto.
A provenance attestation for agent code might include:
- Which agent system produced the code
- Which model and version was used
- The task specification or charter that prompted it
- The environment where it ran (for reproducibility)
- Cryptographic signature over all of the above
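Rendered as a concrete shape, such an attestation might look like the JSON below. This is purely illustrative, not the actual SLSA or in-toto schema, and every field value is a made-up example:

```json
{
  "subject": { "repo": "github.com/org/repo", "commit": "9fceb02" },
  "agent": { "system": "agent-claude", "model": "claude-sonnet-4-20250514" },
  "charter": "github.com/org/repo/issues/42",
  "environment": { "runner": "ci-sandbox-v3", "image_digest": "sha256:..." },
  "signature": "MEUCIQ..."
}
```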
This is more than most teams need today, but it's where compliance requirements are heading — especially in regulated industries.
Who owns the result?
Ownership has multiple meanings: legal ownership (IP), operational ownership (who maintains it), and accountability (who's responsible when it breaks). Agent-authored code complicates all three.
Legal ownership (IP)
This is genuinely unsettled law. In most jurisdictions, copyright requires a human author. Code generated purely by an AI may not be copyrightable at all. Code generated by an AI at human direction probably is, with the human as author.
For practical purposes in a corporate context:
- Your employment or contractor agreements likely assign all work product to the company, regardless of how it was created.
- The company owns the code, whether a human or an agent wrote it.
- The open question is whether that code is protectable — but that's a problem for your legal team, not your dev workflow.
If you're building open source, or your agents are producing code for external customers, consult a lawyer. The rules are evolving.
Operational ownership
When the code breaks at 3am, who gets paged? The agent can't answer a PagerDuty alert. A human has to be responsible.
Common patterns:
- The requester owns it: Whoever asked the agent to write the code is responsible for its behavior. This is like hiring a contractor — you own what they produce.
- The reviewer owns it: Whoever approved the PR takes ownership. They signed off; they're accountable.
- The team owns it: Code ownership follows existing CODEOWNERS rules. The team that owns that part of the codebase owns agent-authored changes to it too.
The third model scales best. Agents become just another contributor to a team's codebase, subject to the same ownership and review rules as human contributors.
Accountability
When agent code causes a security incident or a customer-facing bug, who's accountable? This isn't about blame — it's about having a clear answer for your leadership, your customers, and potentially regulators.
The answer should be: the human who approved the agent's work. This is why Level 3 autonomy (ungated) is risky — without a human in the loop, accountability is diffuse.
Build your audit trail to answer these questions:
- Who requested this work from the agent?
- What spec or charter defined the task?
- Who reviewed and approved the change?
- What tests verified it before deploy?
If you can answer all four, you have a defensible accountability chain regardless of whether a human or agent wrote the code.
What goes wrong
Ghost authorship
Developers paste agent suggestions into their commits without attribution. Six months later, nobody knows which code was human-written vs agent-written. You can't assess risk or quality because the provenance is lost.
Accountability vacuum
A bug ships. The commit was agent-authored. No human reviewed it (or the review was rubber-stamped). When leadership asks "who's responsible?" nobody has a good answer. This erodes trust in the entire agent workflow.
Unsigned agent commits
Anyone can claim to be "[email protected]" in a commit. Without signing, you can't distinguish real agent commits from spoofed ones. An attacker could inject malicious code and blame it on the agent.
Licensing landmines
The agent regurgitates code it saw during training — code that was GPL-licensed. You ship it as proprietary. A year later, someone notices. Now you have a legal problem. Provenance tracking helps, but doesn't fully solve this.
Tools that help
Sigstore / Gitsign
Keyless commit signing using Sigstore. Ties signatures to OIDC identities. No GPG key management. Works well for agent identities via service accounts.
SLSA
Supply-chain Levels for Software Artifacts. A framework for provenance attestations. Defines levels of assurance for where code came from.
GitHub / GitLab commit trailers
Both platforms recognize common trailers (e.g. Reviewed-by, Signed-off-by) and display or preserve them in the UI. Use custom trailers for agent metadata and link to the task charter/spec.
CODEOWNERS
GitHub's built-in ownership system. Applies equally to agent-authored changes — the team that owns the path reviews and owns the code.
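A minimal CODEOWNERS sketch (teams and paths are illustrative); the same rules route agent PRs to the owning team for review:

```
# Hypothetical CODEOWNERS: owners review all changes to their paths,
# whether the author is a human or an agent.
*                 @org/platform-team
/docs/            @org/docs-team
/src/payments/    @org/payments-team @org/security-team
```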
Summary
- → Agent autonomy should be proportional to reversibility. Low-risk tasks can be autonomous; high-risk tasks need human review.
- → Agents should be author and committer. Don't use Co-authored-by as a substitute for agent identity.
- → Sign agent commits. Without signatures, attribution is just a claim anyone can forge.
- → Make review part of history: require approvals and link changes to a charter so git blame has a "why."