Every AI vendor wants to sell you agents. Autonomous systems that think, remember, plan, and act. The future of AI, they say.
Sometimes they're right. Sometimes they're selling complexity you don't need.
The question isn't whether agents are powerful. They are. The question is whether your problem actually requires that power, or whether a well-crafted prompt would do the job in a fraction of the time, at a fraction of the cost, with a fraction of the risk.
The Complexity Spectrum
AI implementations fall along a spectrum:
Prompts: Single-turn interactions. Ask a question, get an answer. No memory, no planning, no tool use.
API calls: Structured, deterministic operations. Call a service, get a predictable result. Rules-based, not reasoning-based.
Agents: Autonomous systems that maintain state, make decisions, use tools, and complete multi-step goals without explicit instruction for each step.
Each level adds capability. Each level also adds complexity, cost, and risk.
When Prompts Are Enough
Prompts excel at tasks that are:
Single-step: The entire task can be completed in one response. Summarize this document. Classify this email. Generate this description.
Stateless: No need to remember previous interactions. Each request stands alone.
Low-risk: Errors are inconvenient, not catastrophic. A mediocre summary doesn't break anything.
Fast-feedback: Users see output immediately and can correct or retry easily.
Examples:
- Text summarization
- Content generation
- Classification and categorization
- Translation
- Code explanation
Prompts are simple to implement, cheap to run, easy to debug. If your problem fits, don't complicate it.
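To see how little machinery this tier needs, here's the whole thing as code. A minimal sketch, assuming the OpenAI Python SDK and a placeholder model name; any provider's single-call API has the same shape.

```python
# A single-turn, stateless prompt call: one request, one response.
# No memory, no tools, no planning.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(document: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, not a recommendation
        messages=[
            {"role": "system",
             "content": "Summarize the user's document in three sentences."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content
```

Each call stands alone. If the summary is mediocre, the user retries. Nothing to debug but one prompt and one response.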
When API Calls Are Enough
API calls work when you need:
Structured operations: Fetch this data. Run this calculation. Update this record.
Predictability: Same input should produce same output. No creativity or judgment required.
System integration: Connecting services, not generating content.
Auditability: Clear input/output mapping for compliance and debugging.
Examples:
- Database queries
- Service orchestration
- Data transformation
- Rule-based automation
APIs are boring in the best way. They do exactly what you tell them.
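For contrast with the prompt tier, here's the same idea as code. A sketch with a hypothetical order-status endpoint; the point is the clean input/output mapping that makes this tier easy to audit.

```python
# The API-call tier: deterministic, auditable, no model in the loop.
# The service and endpoint here are hypothetical stand-ins.
import json
import logging

import requests

logging.basicConfig(level=logging.INFO)

def get_order_status(order_id: str) -> dict:
    """Same input, same output. Log both for a clear audit trail."""
    response = requests.get(f"https://api.example.com/orders/{order_id}")
    response.raise_for_status()
    result = response.json()
    logging.info("order_status input=%s output=%s", order_id, json.dumps(result))
    return result
```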
When You Actually Need Agents
Agents become necessary when:
Multi-step reasoning: The task requires planning, breaking down into subtasks, and sequencing actions that depend on each other.
Dynamic decision-making: Next steps depend on results of previous steps. You can't script the workflow in advance.
Tool orchestration: Multiple tools must be selected and used based on context. Not just "call API X" but "decide which APIs to call and in what order."
Memory matters: The system needs to remember context across interactions or retain state through a complex workflow.
Autonomy is valuable: Having a human in the loop for every decision is impractical or too slow.
Examples:
- Research tasks spanning multiple sources
- Customer support resolving multi-intent queries
- Complex data analysis requiring hypothesis generation and testing
- Workflow automation that adapts to changing conditions
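The difference from a prompt is the loop. Here's the skeleton, stripped to its control flow; the model call and tools are scripted stand-ins for this sketch, not a real framework.

```python
# Skeleton of an agent loop: decide, act, observe, repeat.
# call_llm is scripted so the sketch runs end to end; in practice
# it is a real model call returning the agent's next action.

def call_llm(prompt: str) -> str:
    # Stand-in for the model deciding its next action.
    if "search(" not in prompt:
        return "search:recent papers on agent governance"
    return "FINISH:summary of what the search turned up"

TOOLS = {
    "search": lambda query: f"results for {query!r}",  # stub tool
    "fetch": lambda url: f"contents of {url}",         # stub tool
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):  # hard step cap: a basic guardrail
        decision = call_llm(
            "\n".join(history) + "\nNext action as tool:arg, or FINISH:answer"
        )
        kind, _, payload = decision.partition(":")
        if kind == "FINISH":
            return payload  # the agent decided it is done
        tool = TOOLS.get(kind)
        if tool is None:
            history.append(f"Unknown tool {kind!r}; options: {sorted(TOOLS)}")
            continue
        history.append(f"{kind}({payload!r}) -> {tool(payload)}")  # observe result
    return "step budget exhausted"  # refuse to run forever

print(run_agent("survey approaches to agent oversight"))
```

Every piece of that loop is a liability you don't carry at the prompt tier: state to maintain, decisions to audit, a budget to enforce.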
The Overengineering Trap
The biggest risk isn't underengineering. It's overengineering.
Agents introduce:
Unpredictability: Multi-step reasoning can produce unexpected paths. Testing all possible behaviors is nearly impossible.
Debugging complexity: When something goes wrong in a 15-step agent workflow, finding the root cause is hard.
Latency: Autonomous planning and multi-step execution take time. Sometimes a lot of time.
Cost: More LLM calls, more tokens, more compute. Agent tasks can cost 10-100x as much as a simple prompt.
Safety risks: Autonomous systems can take actions you didn't anticipate. Guardrails become essential, not optional.
If a prompt solves your problem, an agent adds complexity without benefit.
The Decision Framework
Ask these questions, in order:
Can this be done in one step?
- Yes → Use a prompt
Is the workflow deterministic?
- Yes → Use API calls
Does the system need to make autonomous decisions?
- No → Use prompts or APIs
Is memory across interactions essential?
- No → Use stateless approaches
Is the cost of agent complexity justified by the value?
- No → Simplify
Can you accept the unpredictability of autonomous behavior?
- No → Don't use agents
Only when every one of these questions pushes you past prompts and APIs should you reach for agentic architectures.
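One way to keep yourself honest is to encode the cascade directly. A sketch; the booleans are answers you supply per use case, and the return values are labels, not an API.

```python
# The decision framework as code: six questions, applied in order.

def choose_approach(
    one_step: bool,
    deterministic_workflow: bool,
    needs_autonomous_decisions: bool,
    needs_cross_interaction_memory: bool,
    agent_complexity_justified: bool,
    unpredictability_acceptable: bool,
) -> str:
    if one_step:
        return "prompt"
    if deterministic_workflow:
        return "api calls"
    if not needs_autonomous_decisions:
        return "prompts or apis"
    if not needs_cross_interaction_memory:
        return "stateless approach"
    if not agent_complexity_justified:
        return "simplify"
    if not unpredictability_acceptable:
        return "no agent"
    return "agent"

# A stateless document summarizer never gets past the first question.
assert choose_approach(True, False, False, False, False, False) == "prompt"
```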
The Hybrid Approach
Real systems often combine approaches:
- Prompt-first fallback: Try a simple prompt first. Escalate to an agent only for complex queries (sketched after this list).
- Agent coordination: Agents for planning, prompts/APIs for execution.
- Human checkpoints: Agents propose, humans approve, then execute.
You don't have to choose one approach for everything. Match the tool to the task.
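The prompt-first fallback, for example, is a few lines of routing. A sketch; `simple_prompt`, `is_confident`, and `run_agent` are hypothetical stand-ins for your cheap path, your escalation test, and your agent pipeline.

```python
# Prompt-first fallback: cheap path by default, expensive path on demand.
# All three helpers are stand-ins so the sketch runs on its own.

def simple_prompt(query: str) -> str:
    # Stand-in for a single cheap LLM call.
    if len(query.split()) > 8:  # pretend long queries stump the prompt
        return "[unsure]"
    return f"quick answer to {query!r}"

def is_confident(answer: str) -> bool:
    # Stand-in for a real confidence check: a classifier,
    # a self-rating, or heuristics on the answer itself.
    return "[unsure]" not in answer

def run_agent(query: str) -> str:
    # Stand-in for the full agent pipeline sketched earlier.
    return f"agent-produced answer to {query!r}"

def handle_query(query: str) -> str:
    """Escalate only when the prompt falls short."""
    answer = simple_prompt(query)
    if is_confident(answer):
        return answer
    return run_agent(query)

print(handle_query("what's our refund window?"))           # prompt path
print(handle_query("compare refund policy across regions, "
                   "flag conflicts, and draft an update"))  # agent path
```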
What This Means for Governance
Different approaches require different oversight:
Prompts: Monitor for quality, hallucinations, policy violations. Relatively simple.
API calls: Traditional software monitoring. Check errors, latency, correctness.
Agents: Full observability into reasoning chains. Track tool usage, decision points, safety boundaries. Monitor for drift, goal divergence, resource consumption.
Agent governance is harder because agent behavior is harder to predict. If you can avoid agents, your governance burden is lighter.
When you do use agents, AI supervision becomes essential, not optional. Supervision provides the real-time enforcement and constraint layer that turns autonomous agents into controllable systems.
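In miniature, that constraint layer is a checkpoint in front of every action. A sketch; the allowlist, call budget, and log format here are illustrative assumptions, not a prescription.

```python
# A miniature supervision layer: check every agent action before it runs,
# and record every decision point. Policy details are illustrative.
import logging

logging.basicConfig(level=logging.INFO)

ALLOWED_TOOLS = {"search", "fetch"}  # safety boundary: explicit allowlist
MAX_CALLS_PER_TASK = 20              # resource-consumption guardrail

class PolicyViolation(Exception):
    pass

class Supervisor:
    def __init__(self) -> None:
        self.calls = 0

    def approve(self, tool: str, arg: str) -> None:
        """Enforce constraints in real time; log the decision either way."""
        self.calls += 1
        if tool not in ALLOWED_TOOLS:
            logging.warning("BLOCKED tool=%s arg=%s", tool, arg)
            raise PolicyViolation(f"tool {tool!r} is not on the allowlist")
        if self.calls > MAX_CALLS_PER_TASK:
            logging.warning("BLOCKED budget exceeded at call %d", self.calls)
            raise PolicyViolation("call budget exhausted")
        logging.info("APPROVED tool=%s arg=%s call=%d", tool, arg, self.calls)

sup = Supervisor()
sup.approve("search", "agent oversight")  # logged and allowed
try:
    sup.approve("delete_db", "prod")      # blocked: not on the allowlist
except PolicyViolation as e:
    print(e)
```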
The agent hype is real. Agents can do remarkable things.
But "can" isn't "should."
The best engineers reach for the simplest solution that works. Sometimes that's an agent. Often it's a prompt.
Don't let vendor marketing convince you to overcomplicate your AI. Solve the problem in front of you with the minimum complexity required. And when agents are the right answer, build supervision into the stack from day one.
