We Built a Nine-Agent RFP Engine. It Took Four Weeks and Costs $6 to Run.


April 28, 2026

Our team responds to RFPs (Requests for Proposal): government contracts, enterprise software bids, consulting engagements. The process is time-consuming. Read the document, decide if we should even pursue it, research the issuing org, architect the response, pull language from prior proposals, write, review, repeat. A serious bid can consume 40–80+ hours before you even get to talk with a human on the other side.

So we built a system to do most of it.

01 — Intake & Classification. Ingests the RFP document, normalizes structure, extracts key metadata: deadline, agency, scope, evaluation criteria, submission requirements, and red flags.

02 — Go / No-Go Analysis. Scores the opportunity objectively and describes the qualities, capabilities, and capacity that would give a bidder the advantage. Surfaces the honest case for and against pursuing it.

03-04 — Discovery Research / Win Strategy. Goes beyond the RFP itself. Maps requirements to evaluation criteria, identifies gaps, flags landmines, and notes where competitors likely have advantages or a stronger right to win.

05-07 — Response Architecture. Designs the document structure section by section, maps evaluation criteria to answer locations, and assigns ownership. Includes a Q&A strategy: identifying the questions to raise during the RFP’s formal Q&A period and adjusting the requirements and response based on the answers.

08 — Proposal Copywriter. Writes the narrative sections: executive summary, company overview, past performance, management approach. Draws from our proposal library and case study archive, adapting language to the specific buyer and context rather than copying it wholesale.

09 — QA & Compliance Check. Reviews the draft against the original requirements. Flags anything unanswered, contradicted, or missing.
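The mechanics are simpler than they sound. Below is a heavily stripped-down sketch of the chaining pattern in Python. The agent logic is stubbed out and every field name is illustrative, not our production code, but the shape (each stage consuming and producing structured data) is the point.

```python
# Hypothetical sketch of the pipeline: each agent is a function that
# consumes and returns plain dicts, so every hand-off is inspectable.

def intake(rfp_text: str) -> dict:
    # Agent 01: extract metadata from the raw document (stubbed here).
    return {"deadline": "2026-06-01", "agency": "GSA",
            "criteria": ["price", "past performance"]}

def go_no_go(meta: dict) -> dict:
    # Agent 02: a structured decision, not prose.
    return {"pursue": True, "score": 0.72,
            "rationale": "strong past-performance fit"}

def run_pipeline(rfp_text: str) -> dict:
    meta = intake(rfp_text)
    decision = go_no_go(meta)
    if not decision["pursue"]:
        return {"status": "no-bid", "decision": decision}
    # ...agents 03-09 would chain here, each reading the structured
    # outputs of earlier stages rather than a raw transcript.
    return {"status": "drafting", "meta": meta, "decision": decision}

result = run_pipeline("(raw RFP text)")
```

Because every hand-off is a plain data structure, a bad stage fails loudly and locally instead of quietly poisoning the draft three agents later.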

From intake to a reviewable first draft takes under ten minutes. The engineering investment was about four weeks. The cost to run it on an actual RFP is under $6.

What This Shows You About Agentic AI

Many people think about AI as a chatbot you talk to. That’s a reasonable starting point yet a misleading endpoint. What we built is something different in kind, not just degree. Here’s what’s actually happening under the hood:

Agents are specialists, not generalists. Each of the nine agents has one job, one prompt architecture, and one output contract. The Go/No-Go agent isn’t writing prose; it’s producing a structured decision with supporting rationale. The Draft agent isn’t analyzing requirements; it’s consuming a structured brief and generating section-by-section content against it. Specialization is what makes the system reliable. A single “write me a proposal” prompt fails where a nine-stage pipeline succeeds.
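To make “output contract” concrete, here’s a hypothetical sketch of what a Go/No-Go agent might be required to return. The schema and field names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class GoNoGoDecision:
    # Contract for the Go/No-Go agent's output: a typed verdict, never free text.
    pursue: bool
    score: float                 # 0.0-1.0 weighted fit against evaluation criteria
    strengths: list = field(default_factory=list)
    risks: list = field(default_factory=list)
    rationale: str = ""

def validate(decision: GoNoGoDecision) -> GoNoGoDecision:
    # Reject malformed agent output before it propagates down the chain.
    if not 0.0 <= decision.score <= 1.0:
        raise ValueError("score out of range")
    return decision

d = validate(GoNoGoDecision(pursue=True, score=0.72,
                            rationale="incumbent is weak on criterion 3"))
```

Downstream agents read `d.pursue` and `d.score` directly; nobody has to parse prose to find out whether the firm is bidding.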

Context management is the hard part. Sometimes you want every agent in the chain to know what the agents before it produced. Other times you want isolation. Designing how information flows (what gets passed forward, in what format, at what level of compression) is where most agentic systems break down. This is the core tension of AI development: give an agent enough information to maintain coherence across the chain, but compress and filter it well enough to avoid context rot and hallucinations.
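A toy example of that filtering, with hypothetical field names: the copywriter stage gets the win themes and buyer profile, while the bulky research notes are deliberately withheld.

```python
# Illustrative only: a "compression" step that decides what a downstream
# agent sees. Field names are assumptions, not our production schema.

def brief_for_copywriter(upstream: dict) -> dict:
    # The copywriter needs win themes and criteria, not the full research dump.
    keep = ("evaluation_criteria", "win_themes", "buyer_profile")
    return {k: upstream[k] for k in keep if k in upstream}

upstream = {
    "evaluation_criteria": ["technical approach", "price"],
    "win_themes": ["fastest deployment"],
    "buyer_profile": "state agency, risk-averse",
    "raw_research_notes": "...thousands of tokens of scraped pages...",
}
brief = brief_for_copywriter(upstream)
# raw_research_notes is dropped on purpose: coherence without context rot.
```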

Prompt engineering is systems design. The prompts that drive each agent aren’t instructions you type into a chat window. They’re specifications. They define input format, output schema, edge case handling, persona, and constraints. Writing a good agent prompt is closer to writing an API spec than to writing a sentence. Most companies that are struggling with AI are struggling because they’re treating this work as an afterthought. This is an articulation and language challenge.
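Here’s what “prompt as specification” can look like in practice. The role, schema, constraints, and edge cases below are invented for illustration; the point is that the contract is written down, not implied.

```python
# A prompt written as a specification rather than a chat message.
# Everything below is a hypothetical example of the pattern.

GO_NO_GO_PROMPT = """\
ROLE: Bid strategist for a professional-services firm.
INPUT: JSON with keys: deadline, agency, scope, evaluation_criteria.
OUTPUT: JSON only, matching this schema exactly:
  {"pursue": bool, "score": float in [0, 1], "rationale": str}
CONSTRAINTS:
- If any mandatory qualification is unmet, pursue must be false.
- Never invent past performance; cite only provided records.
EDGE CASES:
- Missing deadline: set score to 0 and explain in rationale.
"""
```

Note how much of this reads like an API spec: input format, output schema, error behavior. That is the articulation work most teams skip.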

Document orchestration is a first-class problem. The RFP lives in one place. Prior proposals live in another. The evaluation criteria extracted by Agent 1 need to reach Agent 8 intact. The compliance check in Agent 9 needs access to the original document, not a summary of it. Building the scaffolding that handles documents as structured objects, not just blocks of text, is a real engineering challenge.
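One way to treat the document as a structured object, sketched minimally here under assumed field names (the section numbering loosely follows the L/M convention of federal RFPs):

```python
from dataclasses import dataclass

@dataclass
class RFPSection:
    number: str
    title: str
    text: str

@dataclass
class RFPDocument:
    # The RFP as a structured object: a compliance agent can cite section
    # numbers from the original, not a lossy summary. Fields are illustrative.
    sections: list

    def find(self, number: str) -> RFPSection:
        for s in self.sections:
            if s.number == number:
                return s
        raise KeyError(number)

doc = RFPDocument(sections=[
    RFPSection("L.4", "Submission Requirements", "Three copies, bound..."),
    RFPSection("M.2", "Evaluation Criteria", "Technical approach weighted 40%..."),
])
section = doc.find("M.2")
```

With this shape, "did we answer section L.4?" becomes a lookup against the source document rather than a guess against a summary.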

What took us four weeks to build would have taken us four months or more two years ago. That’s if we could have built it at all. The models are good enough now that the constraint is imagination, design, and mindset, not the technology.

Why This Is a Leadership Problem, Not an IT Problem

BCG’s 2026 AI Radar survey found that 72% of CEOs now describe themselves as the primary AI decision-maker in their organization. This is double the share from the prior year. Half of them believe their job is on the line if they don’t get it right. I don’t think that’s anxiety talking. I think it’s accurate.

The leaders I’m watching make real progress aren’t delegating AI to a team and waiting for a report. They’re close to the work, or building it themselves. They understand what their workflows actually look like. They know which forty-hour process in their company is the right candidate for a four-week build. That judgment doesn’t live just in IT or HR. It lives with the person who runs the business.

You don’t need to know how to design or create agents. You need to know what’s worth building.

Trip Bodley
CEO

Ready to talk?


Get in touch and let's get started!
