NOTE: this is still a WIP. I've spent the past few months avoiding AI tools as an educational exercise; I've recently started experimenting with coding agents, and this is the workflow I've found myself turning to so far.
Repo setup:
- `docs/` contains `.md` files that describe the current state of the codebase.
- `docs/proposals/` contains `.md` files that describe ideas for future work.
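Concretely, the layout looks something like this (file names are illustrative):

```
docs/
├── architecture.md        # current state of the codebase
├── api.md
└── proposals/
    └── caching-layer.md   # idea for future work, pending review
```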
Workflow:
- Plan phase: LLMs generate proposals. I review and leave comments. Repeat until I'm satisfied with the spec.
- Build phase: LLM takes the proposal as input, implements it, and updates the docs.
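A rough sketch of one iteration, assuming Claude Code's CLI (`claude -p` runs a one-shot prompt; the proposal filename is made up):

```
# Plan phase: generate a proposal, then review it by hand
claude -p "Write a proposal for adding a caching layer. Put it in docs/proposals/."
# ...leave comments inline in docs/proposals/caching-layer.md, then:
claude -p "Address my review comments in docs/proposals/caching-layer.md."

# Build phase: implement the agreed-on spec and keep docs/ in sync
claude -p "Implement docs/proposals/caching-layer.md and update docs/ to match."
```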
--------
CLAUDE.md:
Instructions for AI agents working in this codebase.
## Documentation Structure
- `docs/` — Markdown files describing the current state of the codebase
- `docs/proposals/` — Markdown files describing ideas for future work
## Guidelines
- When asked to write a proposal, put it in `docs/proposals/`. I may leave comments for you to address.
- Keep `docs/` in sync with actual code — update docs when making changes.
--------
beads has some interesting agent orchestration ideas.
- beads is a git-backed issue tracker built for agents to use (rough usage sketch after this list)
- there also seems to be a way for agents to message each other? (look into gt)
- the philosophy is to trust the agent as much as possible and run as many agents as possible
- Steve Yegge claims in his blog that the project is fully vibe-coded and that he doesn't even look at the code
- to maintain code quality, he spends about 40% of his time on code health (asking agents to suggest improvements and refactors, find code smells, add tests, etc.)
- this seems reasonable if you have a very clear idea of what you want to build. Just let a bunch of agents generate hella code and tests. The issue tracker serves as the memory and communication layer between agents.
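From a quick skim, usage looks roughly like this (the `bd` command names are from memory of the README, so treat them as assumptions):

```
bd create "Add retry logic to the sync worker"   # file an issue an agent can pick up
bd ready                                         # list issues with no blockers
bd list                                          # everything, tracked as files in git
```

Because the issues live in the repo itself, agents can read and update them with ordinary file and git operations, which is presumably what makes it work as a shared memory layer.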