VIBEPASS: Can Vibe Coders Really Pass the Vibe Check?
Evaluates whether natural-language-to-code workflows produce software that meets real-world quality standards. Benchmarks vibe-coded applications against professional baselines.
Read paper →
Curated from academic research, practitioner blogs, and community signals. Everything here links to real, published content.

Academic research on AI-assisted software development from arXiv.
Benchmarks how well coding agents maintain existing codebases through CI pipelines. Tests agents on real-world maintenance tasks rather than greenfield generation.
Read paper →
Measures how AI agents perform when optimizing code across large, real-world repositories. Directly relevant to the blast-radius concerns in HyperAgility.
Read paper →
Blind comparisons of LLM-generated refactorings against expert human samples. Asks whether "human-level" is the right benchmark when AI operates at a different scale.
Read paper →
A benchmark for evaluating how AI models handle complete web application development, from natural-language descriptions to working applications.
Read paper →
Research showing that review-oriented interaction patterns outperform planning-oriented ones for code synthesis. Supports HyperAgility's emphasis on review processes.
Read paper →
Studies how coding agents deviate from intended goals when facing conflicting constraints. Directly related to scope drift and hallucination in underspecified tasks.
Read paper →
Scaffolding, harness design, context engineering, and lessons learned from building production AI coding agents. Practical architecture guidance.
Read paper →
Transforms raw execution traces into actionable insights when AI coding agents fail. Explains why agents fail, not just that they failed.
Read paper →
Uses git history as structured knowledge to improve AI coding agents' context. Treats commit messages as a protocol for preserving architectural intent.
Read paper →
Categorizes error patterns unique to AI agents that don't fit traditional debugging models. Useful for understanding why AI-generated code fails differently.
Read paper →
Examines how the framing of system prompts affects how deeply AI agents investigate bugs. Trust-based framing produces more thorough debugging than fear-based framing.
Read paper →
Methods for attributing which parts of a codebase were generated by AI and which were written by humans. Relevant to ownership and accountability tracking.
Read paper →
Evaluates how well LLMs handle concurrency when generating code. Tests a category of bugs that AI-generated code is particularly prone to.
Read paper →
Compiles tool-using agents from behavioral specifications. A formalization of spec-driven approaches to AI agent development.
Read paper →

Practitioner writing on AI-assisted development from across the web.
Michael Timbs on what happens to code quality when AI agents generate most of the codebase. Confirms the patterns HyperAgility was designed to address.
Read article →
How even Amazon's stringent CI/CD pipelines couldn't prevent AI-assisted coding from causing production outages. Process needs to evolve alongside tooling.
Read article →
Argues that AI-assisted development is a fundamentally different mode of work, not just faster typing. The mental model needs to shift entirely.
Read article →
Moving past "vibe coding" toward structured collaboration with AI. Practical patterns for using AI as a thinking partner, not just a code generator.
Read article →
A practitioner's take on why traditional code review processes are failing under the volume and velocity of AI-generated changes.
Read article →
A practical exploration of scaling agentic development while maintaining control over the codebase. Where does the approach break down?
Read article →
How AI-generated contributions are changing the dynamics of open source maintenance, review burden, and community norms.
Read article →
Simon Willison's guide to agentic engineering patterns. A clear explanation of the architecture behind the tools changing how we write software.
Read guide →

Practitioner conversations on the problems HyperAgility addresses.
Community discussion on the growing burden of reviewing AI-generated pull requests. Real experiences from teams dealing with review overload.
Read discussion →
Discussion of the confidence gap: developers shipping code they generated with AI but don't fully understand. The core problem HyperAgility tries to solve.
Read discussion →
Evan Boyle on how agentic coding breaks the assumptions behind traditional software delivery. This is the core tension HyperAgility addresses.
View thread →
Geoffrey Huntley on the gap between generating code and building reliable systems. Speed without process is just faster failure.
View thread →

Open source tools addressing AI-assisted development challenges.
Detects common quality issues in AI-generated JavaScript, TypeScript, and Python code. Catches patterns that traditional linters miss.
View on GitHub →
Monitoring and observability tooling for AI coding agents. See what your agents are doing, where they're failing, and why.
View on GitHub →
A persistent memory system that gives AI coding agents context across projects through the Model Context Protocol.
View on GitHub →
Analyzes your repository and scores how well structured it is for AI-assisted development. Identifies areas that will cause agents to struggle.
Try it →

The thinking that HyperAgility builds on.
The original Agile Manifesto. Many of its values still hold, but the constraints it was designed around have shifted. HyperAgility builds on what remains relevant.
agilemanifesto.org →
Principles for building software-as-a-service apps. The emphasis on explicit contracts, disposability, and dev/prod parity aligns with AI-assisted architecture.
12factor.net →
The source for this guide, the manifesto board, and all community contributions. Open issues, submit PRs, or start a discussion.
View repository →

This page is maintained in the open. Submit a link, paper, or tool by opening an issue or PR on GitHub.
Suggest a resource