
The Pragmatic Summit in February 2026 (https://www.pragmaticsummit.com/) reports that 93% of software developers use AI coding tools, saving an average of 4 hours a week; 27% of code is AI-authored; and more than 50% of developers in advanced companies use AI coding agents every day. (https://www.youtube.com/watch?v=LOHgRw43fFk) AI agent coding has become the most efficient way to build business applications.


The U.S. Bureau of Labor Statistics reports that software developer roles declined only 0.3% over 2024–25 and will grow 15% from 2024 to 2034, which is much faster than the average for all occupations. (https://www.bls.gov/ooh/computer-and-information-technology/software-developers.htm)


Why do we need more developers when AI coding tools can make them so much more productive?


First, the demand for AI-powered applications developed with AI coding agents is exploding, offsetting productivity gains and requiring many developers capable of coding with AI agents. Gartner predicts that by 2026, more than 80% of enterprises will have tested or deployed GenAI-enabled applications — up from less than 5% in 2023. (https://www.gartner.com/en/experts/top-tech-trends-unpacked-series/harness-democratized-generative-ai-transform-your-business)


Second, software developer roles involve stakeholder communication, requirements analysis, architectural design, quality assurance, incident management, performance management, capacity planning, and much more—all of which require an understanding of environments, collaboration, and judgment that an AI coding agent cannot handle on its own.


So, no matter how far AI agent coding advances, software developers need to understand the business strategy and requirements and be capable of software engineering to be involved effectively with AI coding agents. AI coding agents can amplify developers' capabilities, but cannot replace them.


A pattern is a reusable, named solution to a commonly recurring problem within a given context. To build a successful business application via agent coding, we should apply well-known patterns for IT-enabled business transformation. To build high-quality software via agent coding, we should stick to proven software engineering patterns. Since an AI coding agent is one kind of AI agent, it should inherit the growing set of patterns for AI agents, especially those for generative AI agents.


I. Digital Transformation Patterns


There are well-known best practices for IT-enabled business transformation, or digital transformation (DX), that utilize new technologies such as cloud computing, big data, IoT, AI models, and AI agents. These DX patterns can result in significant improvement in business performance. Developers should apply the DX patterns when using agent coding to develop business applications, including AI agents for business management and operations.


1. Plan-Do-Check-Act (PDCA) Cycle: As with all business transformation efforts, DX enabled by agent coding should follow the Deming cycle. Similar patterns that developers can apply include the BPM Cycle (Design, Model, Execute, Monitor, and Optimize), Six Sigma DMADOV (Define, Measure, Analyze, Design, Optimize, and Verify), Design Thinking (Empathize, Define, Ideate, Prototype, Test, and Implement), and Lean Startup (Build, Measure, and Learn). (https://www.kosta-online.com/challenge-page/requirement-analysis; https://www.kosta-online.com/post/ai-agent-sucess-factors)


The following AI agent coding patterns (explained below) support this DX pattern: Explore-Plan-Code, ReAct, Iterative Human-in-the-Loop, Autonomous Monitoring-Evaluation-Learning Closed-Loop Feedback, and Analytics and Monitoring.


2. Use Cases with Clear Value Proposition: Find use cases of AI agent coding that create a great value proposition for the stakeholders. (https://www.kosta-online.com/challenge-page/use-case-analysis-and-realization)


Low-variance, high-standardization workflows, such as mainstream operational processes, tend to be tightly governed and follow predictable logic. In these cases, agents based on nondeterministic LLMs (Large Language Models) could add more complexity and uncertainty than value. (https://www.kosta-online.com/post/ai-agent-hype-and-reality) By contrast, high-variance, low-standardization workflows could benefit significantly from LLM-based agents. For example, tasks demanding information aggregation, verification checks, and compliance analysis are where agents can be effective. (https://www.kosta-online.com/post/ai-agent-sucess-factors)


AI agent coding platforms create clear value for mechanical boilerplate coding. They also create value for high-level, abstract tasks, such as requirements specification and architectural design, although humans must still intervene in and own those tasks. The following AI agent coding patterns (explained below) support this DX pattern: Macro Prompts, Specification-Driven Development, Image as Spec, Spec-to-Scaffold Automation, and Explore-Plan-Code.


3. Enterprise Architecture (EA)-based Strategy Plan: Since the early 2000s, leading companies and governments have adopted Enterprise Architecture (EA) as the method for business and IT strategy planning. EA encompasses business process management (BPM), metadata management (MDM), and enterprise service-oriented architecture (SOA), which we explain below. (J. W. Ross et al., Enterprise Architecture As Strategy: Creating a Foundation for Business Execution, Harvard Business School Press, 2006; https://www.kosta-online.com/challenge-page/enterprise-architecture-design)


The Open Group Architecture Framework (TOGAF)’s EA Content Framework (https://digital-portfolio.opengroup.org/togaf-standard-architecture-content/latest/01-doc/chap01.html)

EA governance should be the mechanism for classifying which processes are safe for LLM agent automation, standardizing verification checkpoints and quality gates, and auditing agent behavior against enterprise compliance requirements. Without EA governance, the risk of nondeterminism is unmanaged at the enterprise level. DX enabled by agent coding, if planned outside an EA framework, risks creating shadow IT at scale — powerful, fast-moving, but ungoverned and strategically incoherent.


DX enabled by agent coding should be planned and implemented within the EA framework. (https://www.kosta-online.com/post/ai-agent-sucess-factors) The following AI agent coding patterns (explained below) support this DX pattern: Context Perception and State Management, Policy and Guardrails, and Context Engineering.


4. Business Process Management (BPM)-based Process Innovation: Agentic AI efforts that fundamentally reimagine entire workflows are likely to deliver better results. Understanding how agents can help with each step in the workflow is the path to value. People will still be central to getting the work done, but now with different agents, tools, and automations to support them. (https://www.mckinsey.com/capabilities/quantumblack/our-insights/one-year-of-agentic-ai-six-lessons-from-the-people-doing-the-work#/)



DX enabled by agent coding should start with identifying the end-to-end process the company wants to reengineer. (https://www.kosta-online.com/challenge-page/bpmn-based-business-process-design-and-implementation) At the same time, agent coding fundamentally changes the software development lifecycle, requiring new roles, skills, and techniques at each step of the development process. The following AI agent coding patterns (explained below) support this DX pattern: Context Perception and State Management, Policy and Guardrails, Context Engineering, Role Switching, and Agent Teams.


5. Metadata Management (MDM)-based Semantic Model: Semantic models are crucial for software application development. Firms should establish company-wide metadata management to maintain high-quality semantic models. (https://www.kosta-online.com/challenge-page/data-engineering)


Semantic models are essential for contextual understanding, relationship mapping, intent inference, intelligent code generation, and advanced debugging in agent coding. Three layers of semantic models should align: Domain Semantic Models, Process Semantic Models, and Codebase Semantic Models.


Domain/Process Semantic Model and Codebase Semantic Models (https://www.kosta-online.com/challenge-page/data-engineering)

The following AI agent coding patterns (explained below) support this DX pattern: Domain-Driven Design (DDD), Context Perception and State Management, and Context Engineering.


6. Enterprise Service-Oriented Architecture (SOA): Since the middle of the 2000s, leading companies and governments have adopted SOA across their software applications. (https://learning.dell.com/content/dam/dell-emc/documents/en-us/KS2009_Hariharan-Service_Oriented_Architecture_(SOA)_and_Enterprise_Architecture_(EA).pdf)


DX applications developed using agent coding should also follow SOA. Agent coding platforms are themselves internally SOA and interact with external systems via SOA APIs. (https://www.kosta-online.com/post/ai-first-and-api-first-strategies; https://www.kosta-online.com/post/ai-agent-system-is-soa) The following AI agent coding patterns (explained below) support this DX pattern: Agent in SOA, Bounded Context Injection, and API Integration.


II. Software Engineering Patterns


It is important to apply software engineering patterns when developing high-quality software using an agent coding platform. Important software engineering patterns include UX design patterns, behavior-driven development (BDD) patterns, SOLID principles, Gang of Four object design patterns, domain-driven design (DDD) patterns, refactoring patterns, clean code patterns, service-oriented architecture (SOA) patterns, test-driven development (TDD) patterns, and so on.


7. Requirement Specification: The software's behavior (functionalities, features, use cases) and structure (semantics, business objects, data) should be specified. The business analyst can specify the behavior using BPMN-based business process models, use cases, BDD acceptance criteria, use case scenarios, etc., and the structure using a UML conceptual-level class diagram, entity-relationship diagram, ontology, etc. (https://www.kosta-online.com/post/the-complete-guide-to-business-analysis)



As in Design Thinking, the Lean Startup, and Agile Development, the software should be built incrementally, ensuring that the features added at each increment are well received by users.


The level of specification rigor should be calibrated to the increment size and risk: BPMN + UML + BDD for high-risk, core domain logic; use cases + BDD acceptance criteria for medium-complexity features; and brief story + acceptance criteria for low-risk boilerplate/scaffolding. (https://www.kosta-online.com/post/agent-coding-gold-standard)


The following AI agent coding patterns (explained below) support this SE pattern: Macro Prompts, Specification-Driven Development (SDD), Image as Spec, Explore-Plan-Code, Role Switching, Learn by Example, Agent Teams, Parallel Features Development, Feature Flagging, and Best of N.


8. User-Centric Design: Build interfaces that are intuitive and user-friendly. Conduct usability tests to gather feedback and make improvements. Ensure the app is accessible to all users, including those with disabilities.


Agents are strong UI implementers but weak UI designers — reinforcing the Design Thinking principle that human empathy and user research must precede and govern agent implementation. This pattern is most valuable when explicitly integrated into the Requirement Specification and Agile Development Process rather than treated as a standalone principle.


The following AI agent coding patterns (explained below) support this SE pattern: Image as Spec, Explore-Plan-Code, and Learn by Example.


9. Behavior-Driven Development (BDD): Acceptance criteria in Gherkin syntax are one of the most effective structured prompts for Spec-Driven Development (SDD) in agent coding. Gherkin specifications can serve as a living semantic model — simultaneously a business document, a technical specification, and an executable test suite.
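To make the "executable test suite" aspect concrete, here is a minimal sketch of a Gherkin acceptance criterion mapped to a plain Python test without any BDD framework. The domain (order discounts), the scenario, and all names are hypothetical illustrations, not taken from a real project:

```python
# A hypothetical Gherkin acceptance criterion kept alongside the test that
# executes it. The Given/When/Then comments mirror the scenario steps.

SCENARIO = """
Feature: Order discount
  Scenario: Loyal customer gets 10% off
    Given a customer with 5 or more past orders
    When the customer places an order of $100
    Then the order total is $90
"""

def apply_discount(past_orders: int, amount: float) -> float:
    """Business rule under test: 10% off for customers with >= 5 past orders."""
    return amount * 0.9 if past_orders >= 5 else amount

def test_loyal_customer_gets_discount():
    # Given a customer with 5 or more past orders
    past_orders = 5
    # When the customer places an order of $100
    total = apply_discount(past_orders, 100.0)
    # Then the order total is $90
    assert total == 90.0

test_loyal_customer_gets_discount()
```

The same scenario text can be shown to business stakeholders, handed to a coding agent as a structured prompt, and run as a regression test, which is what makes it a living semantic model.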


The following AI agent coding patterns (explained below) support this pattern: Specification-Driven Development (SDD), Explore-Plan-Code, Self-Verification, and Domain Modeling.


10. Object Design and Domain-Driven Design (DDD): A domain model should be developed from the semantic model and use cases for the application built with agent coding. (https://www.kosta-online.com/post/the-complete-guide-to-business-analysis) Object design principles such as SOLID, object design patterns such as the Gang of Four patterns, and refactoring patterns should be applied to the domain model and the resultant code. (https://www.kosta-online.com/challenge-page/object-design)



An agent coding session should focus on a bounded context in the domain model so that the entire codebase does not overload the context window. The application developed via agent coding should be built on a service-oriented architecture, where each service corresponds to a bounded context. Each service can be implemented using the Hexagonal Architecture. (https://www.kosta-online.com/post/the-complete-guide-to-soa-msa-and-modulith)


The following AI agent coding patterns (explained below) support this SE pattern: Policy and Guardrails, ReAct, Explore-Plan-Code, Domain Modeling, Bounded Context Injection, Constraint-Driven Coding, and Refactor-As-Transformation.


11. SOA: Break down the application into smaller, manageable services to enhance functional cohesion, loose coupling, implementation independence, maintainability, reusability, and easier testing and debugging. Use well-defined APIs to interact with other services and external systems. Maintain clear documentation for API endpoints. Use API versioning to manage changes and updates without breaking existing functionality.


Choose from SOAP-based SOA, Microservice Architecture (MSA), and Modular Monolith (Modulith) architecture styles and patterns, depending on which architecturally significant requirements (ASRs) (e.g., maintainability, data consistency, reusability, operational simplicity, agility, scalability) are most important. (https://www.kosta-online.com/post/the-complete-guide-to-soa-msa-and-modulith; https://www.kosta-online.com/challenge-page/deep-understanding-of-microservice-architecture)


Modulith architecture is often relevant for applications built using agent coding. Compared with MSA, it allows simpler deployment, lower DevOps overhead, an easier path to refactoring as the domain model evolves, and lower operational complexity, allowing teams to focus on agent coding quality. When a bounded context within the application requires a microservices architecture, you can apply the Strangler Fig pattern to migrate only that part from Modulith to MSA. (https://learn.microsoft.com/en-us/azure/architecture/patterns/strangler-fig)


SOAP-based SOA, Microservice Architecture, and Modulith Architecture Patterns

The following AI agent coding patterns (explained below) support this SE pattern: Agent in SOA, Domain Modeling, Bounded Context Injection, and API Integration.


12. Test-Driven Development (TDD): Run the test (red), code (green), refactor loops. TDD can produce a well-structured test pyramid. (https://www.kosta-online.com/challenge-page/tdd-based-test-automation) Having agents write tests before implementation forces clearer thinking and fewer hallucinations, and TDD has emerged as one of the strongest patterns for obtaining succinct, reliable code from coding agents.
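The red-green-refactor loop can be sketched in a few lines. The slugify feature and all names below are hypothetical; in agent coding, the test would be written (or approved) first and handed to the agent, which then iterates until it passes:

```python
import re

# RED: the test exists before the implementation and initially fails.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  AI  Agent  Coding ") == "ai-agent-coding"

# GREEN: the simplest implementation that makes the test pass.
def slugify(text: str) -> str:
    text = text.lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # non-alphanumeric runs -> hyphen
    return text.strip("-")

# REFACTOR: with the test green, the code can be restructured safely,
# rerunning the test after every change.
test_slugify()
```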


BDD is an optimal TDD variant for agent coding because Gherkin scenarios are already structured agent prompts; the outside-in approach aligns with use case → domain model derivation; acceptance tests double as PRD story completion verification; and business stakeholders can validate test specifications before agent implementation.



The following AI agent coding patterns (explained below) support this SE pattern: Specification-Driven Development, Reflection and Self-Correction, Refactor-As-Transformation, Error-Driven Refinement, and Self-Verification.


13. Static Analysis: There should be quality gates at which the agent performs linting, type checking, and security scanning on its own output before submitting.
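A quality gate of this kind can be sketched as a small driver that runs each tool and blocks submission unless all of them pass. The specific tools named here (ruff for linting, mypy for type checking, bandit for security scanning) are common choices, not requirements; the runner is injectable so the gate logic can be tested without the tools installed:

```python
import subprocess

# Hypothetical check suite; swap in whatever tools your project uses.
CHECKS = [
    ("lint", ["ruff", "check", "."]),
    ("types", ["mypy", "."]),
    ("security", ["bandit", "-r", "."]),
]

def quality_gate(runner=subprocess.run) -> dict:
    """Run every check and return a per-check pass/fail map."""
    results = {}
    for name, cmd in CHECKS:
        proc = runner(cmd, capture_output=True)
        results[name] = (proc.returncode == 0)  # exit code 0 means pass
    return results

def may_submit(results: dict) -> bool:
    """The agent may submit its output only if every gate is green."""
    return all(results.values())
```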


The following AI agent coding patterns (explained below) support this SE pattern: Reflection and Self-Correction, Constraint-Driven Coding, Error-Driven Refinement, and Self-Verification.


14. Documentation and Training: Maintain thorough documentation for both developers and users. Use comments in your code to explain complex logic. Create user-friendly guides to help users navigate the application.


Agent coding amplifies the importance of documentation in three ways: AGENTS.md / CLAUDE.md as living documentation, code comments as agent context, and agent-generated documentation such as API docs, code comments, and user guides.


The following AI agent coding patterns (explained below) support this SE pattern: Persistent Memory, Repeatable Commands, and Community Contributions.


15. Agile Development Process: Develop software iteratively and incrementally in AI agent coding, ensuring each increment is viable through thorough testing and user review. (https://www.kosta-online.com/challenge-page/cloud-native-computing-adoption-roadmap)


An XP (Extreme Programming)-based Scrum or Kanban development process within a Lean Startup loop, followed by a Design Thinking loop, is widely used. An extended agile process, such as the Scaled Agile Framework (SAFe), can be used for a large project to implement within an Enterprise Architecture framework.


DevOps, based on XP and CD (Continuous Delivery), is an appropriate development process for a very large project with many decentralized teams that requires extreme agility and independent scaling across services. DevOps enables continuous delivery based on a microservices architecture (MSA), allowing each service to be deployed independently. In agent coding, DevOps (with SAFe providing a cross-team coordination layer) can be chosen when MSA is chosen as the SOA style.



The following AI agent coding patterns (explained below) support this SE pattern: Incremental Development, Autonomous Version Control, Parallel Features Development, Feature Flagging, and Best of N.


III. Generative AI Agent Patterns


AI coding agents are a kind of generative AI agent, which is, in turn, a kind of AI agent. The following are generative AI agent patterns that AI coding agents should inherit.


16. Foundation Model Wrapper: Generative AI agents use foundation models as the inferencing engine. While foundation models (like Claude Opus and Sonnet) are passive generators of text or images, AI agents (like Claude Code) wrap these models with additional components — such as planning, memory, and tool-use capabilities — to make them autonomous actors that can execute multi-step tasks.



The following AI agent coding patterns (explained below) support this AI agent pattern: Context Engineering, Environment Configuration, Persistent Memory, State Management, Agent Teams, and Parallel Subagents.


17. Prompt Engineering: Structuring, designing, and refining input text (prompts) to guide generative AI models toward producing accurate, relevant, and high-quality outputs.


The following AI agent coding patterns (explained below) support this AI agent pattern: Macro Prompts, Specification-Driven Development, and Image as Spec.


18. Prompt Chaining: Break complex tasks into smaller, sequential steps, where the output of one prompt serves as the input for the next. It improves accuracy, allows for complex reasoning, and enhances controllability by tackling problems in a structured, modular way.
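The pattern can be sketched in a few lines: each step's output becomes the next step's input. The `call_llm` function below is a stand-in for a real model API call and simply echoes its prompt; the step templates are illustrative:

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (e.g., an HTTP request to an LLM API).
    return f"[model output for: {prompt}]"

def chain(task: str) -> str:
    # Each template wraps the previous step's output, forming the chain.
    steps = [
        "Extract the key requirements from: {}",
        "Draft an outline that satisfies: {}",
        "Write the final answer based on: {}",
    ]
    output = task
    for template in steps:
        output = call_llm(template.format(output))  # output feeds next prompt
    return output
```

Because each step is small and inspectable, a failure can be localized to one link of the chain instead of one monolithic prompt.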


Incremental Development is an AI agent coding pattern (explained below) supporting this AI agent pattern.


19. Agent in SOA: Agent coding platforms themselves are internally SOA and interact with external systems via SOA APIs. (https://www.kosta-online.com/post/ai-first-and-api-first-strategies; https://www.kosta-online.com/post/ai-agent-system-is-soa; https://www.kosta-online.com/post/ai-agent-hype-and-reality)


The inner SOA architecture of an agent coding platform composes SOA services either through orchestration via a BPMS or through choreography via a pub/sub event bus, as shown in the Generative AI Agent Architecture diagram above.


AI Agent Orchestration Using Camunda (https://camunda.com/solutions/agentic-orchestration/)

Bounded Context Injection and API Integration are AI agent coding patterns (explained below) that support this AI agent pattern.


20. AI Agent Loop: AI agent loops are the foundational, iterative process that enables autonomous AI agents to perform complex, multi-step tasks by breaking them down into manageable cycles of reasoning, acting, and evaluating. Unlike traditional “one-shot” AI prompts, an agent loop runs continuously — often in a while loop — to refine work, use external tools, and handle unexpected, dynamic environments until a final goal is met.
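The loop's skeleton is simple: reason about the next action, act through a tool, and evaluate progress until the goal is met or an iteration budget runs out. Everything below (the goal, the single `increment` tool) is a deliberately trivial stand-in for real reasoning and tool use:

```python
def agent_loop(goal: int, tools: dict, max_iters: int = 10) -> int:
    state = 0
    for _ in range(max_iters):       # budget instead of an unbounded while
        if state >= goal:            # evaluate: is the goal met?
            break
        action = "increment"         # reason: pick the next action (stubbed)
        state = tools[action](state) # act: invoke the chosen tool
    return state

# A one-tool toolbox for illustration.
tools = {"increment": lambda s: s + 3}
```

A real coding agent replaces the stubbed reasoning with a model call and the toolbox with file, terminal, and API tools, but the cycle of reason, act, evaluate is the same.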



The following AI agent coding patterns (explained below) support this AI agent pattern: Ralph Loop, Explore-Plan-Code, Context Window Optimization, Bounded Context Injection, Role Switching, Iterative Human-in-the-Loop, Persistent Memory, State Management, Reusable Saved Prompts, Incremental Development, Autonomous Version Control, Best of N, and Analytics and Monitoring.


21. Context Perception and State Management: Context perception and state management are foundational components of generative AI agents, enabling them to transition from passive, stateless Large Language Models (LLMs) into autonomous, context-aware systems. See this component as an SOA service within the Generative AI Agent Architecture above. (https://www.kosta-online.com/post/ai-agent-hype-and-reality)


While perception allows the agent to interpret its environment and gather data, state management ensures continuity by maintaining a record of past actions, decisions, and environmental changes, enabling the agent to operate effectively over long-term, multi-turn interactions.


In AI agent loops, each iteration resets the in-context state but relies on external state for continuity, which is precisely the architectural workaround for the context window limitation.


The following AI agent coding patterns (explained below) support this AI agent pattern: Context Engineering, Context Window Optimization, Environment Configuration, Persistent Memory, and State Management.


22. Policy and Guardrails: Policy and guardrails in AI agents are essential, proactive safety mechanisms — technical, operational, and ethical constraints — designed to ensure autonomous systems operate within defined security, legal, and organizational standards. See this component as an SOA service within the Generative AI Agent Architecture above. (https://www.kosta-online.com/post/ai-agent-hype-and-reality) They prevent harmful content, mitigate risks such as prompt injection and data leakage, and maintain real-time reliability and compliance in AI behavior.


Constraint-Driven Coding is an AI agent coding pattern (explained below) supporting this AI agent pattern.


23. ReAct: Execute the reasoning-acting loop step-wise. In each loop, ask the agent to produce a structured execution plan, then review the plan and approve it for execution. This pattern enforces deterministic tool invocation, reduces cascading hallucination, and clarifies dependency ordering.
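One step of this plan-review-execute cycle can be sketched as follows. The plan format, tool names, and the allowlist-based reviewer are all hypothetical; in practice the plan comes from the model and the review may be a human approval:

```python
def propose_plan(task: str) -> list[dict]:
    # In a real agent this structured plan comes from the model;
    # here it is hard-coded for illustration.
    return [
        {"step": 1, "tool": "read_file", "arg": "spec.md"},
        {"step": 2, "tool": "write_code", "arg": "service.py"},
    ]

def review(plan: list[dict], allowed_tools: set[str]) -> bool:
    # Deterministic gate: every tool invocation must be on the allowlist.
    return all(step["tool"] in allowed_tools for step in plan)

def react_step(task: str, allowed_tools: set[str]) -> str:
    plan = propose_plan(task)
    if not review(plan, allowed_tools):
        return "rejected"            # nothing executes without approval
    return f"executed {len(plan)} steps"
```

Because the plan is structured data rather than free text, it can be checked mechanically before any tool runs, which is what makes tool invocation deterministic.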


Explore-Plan-Code and Role Switching are AI agent coding patterns (explained below) supporting this AI agent pattern.


24. Reflection and Self-Correction: An agent evaluates its output, checks for accuracy and gaps (the reflection phase), and improves its work before presenting it, thereby enhancing reliability (the self-correction phase).
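The generate-critique-revise loop can be sketched like this. The draft generator and the single critique rule are trivial stand-ins; a real agent would use the model for both phases:

```python
def generate(task: str) -> str:
    return task.upper()                      # stand-in for a model draft

def critique(draft: str) -> list[str]:
    # Reflection phase: list concrete issues (here, one illustrative check).
    issues = []
    if not draft.endswith("."):
        issues.append("missing final period")
    return issues

def self_correct(task: str, max_rounds: int = 3) -> str:
    draft = generate(task)
    for _ in range(max_rounds):
        issues = critique(draft)
        if not issues:                       # reflection found no gaps
            break
        draft = draft + "."                  # self-correction for the issue
    return draft
```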


Self-Verification and Error-Driven Refinement are AI agent coding patterns (explained below) that support this AI agent pattern.


ReAct (Reasoning/Planning + Action) and Reflection (https://huggingface.co/blog/Kseniase/reflection)

25. Long-Term Memory: Companies can develop agent components that can be reused across different workflows. That includes developing a centralized set of validated services (such as LLM observability or preapproved prompts) and assets (for example, application patterns, reusable code, and training materials) that are easy to locate and use. Integrating these capabilities into a single platform is critical. (https://www.mckinsey.com/capabilities/quantumblack/our-insights/one-year-of-agentic-ai-six-lessons-from-the-people-doing-the-work#/)


In agent coding, developers can store shareable artifacts such as project guidelines, best practices, reusable prompts, and example code in long-term memory, such as CLAUDE.md, Skills, and Knowledge Repository. The following AI agent coding patterns (explained below) support this AI agent pattern: Project Structure, Context Window Optimization, Environment Configuration, Memory Snapshot, Persistent Memory, Repeatable Commands, State Management, and Reusable Saved Prompts.


26. Multi‑Agent Collaboration: Multiple autonomous AI agents, each with specialized roles and skillsets, interact to solve complex tasks that exceed the capabilities of a single monolithic model. This distributed approach enhances accuracy, adaptability, and scalability by allowing agents to divide responsibilities and share information. (https://www.kosta-online.com/post/ai-agent-hype-and-reality)



Agent Teams and Parallel Subagents are AI agent coding patterns (explained below) that support this AI agent pattern.


27. Autonomous Monitoring-Evaluation-Learning Closed-Loop Feedback: The agent can verify its performance at each step of the workflow. Building monitoring and evaluation into the workflow can enable teams to catch mistakes early, refine the logic, and continually improve performance. (https://www.mckinsey.com/capabilities/quantumblack/our-insights/one-year-of-agentic-ai-six-lessons-from-the-people-doing-the-work#/; https://www.kosta-online.com/post/ai-agent-hype-and-reality)


In agent coding, the software development workflow may include multiple checkpoints where the agent autonomously verifies intermediate results against pre-specified rules. Unlike other types of AI agents, AI coding agents don’t have a native, autonomous Learning Closed-Loop Feedback facility. Iterative Human-in-the-Loop and Analytics and Monitoring are AI agent coding patterns (explained below) that support the Autonomous Monitoring-Evaluation-Learning Closed-Loop Feedback AI agent pattern.


28. Human-in-the-Loop: A loss in trust or a decline in quality can easily offset any efficiency gains achieved through automation. Developers should give agents clear job descriptions, onboard them, and provide ongoing feedback so they become more effective and improve over time. Developing effective agents is challenging work that requires harnessing individual expertise to create evaluations and codifying best practices with sufficient granularity for given tasks. This codification serves as both the training manual and performance test for the agent, ensuring that it performs as expected. (https://www.mckinsey.com/capabilities/quantumblack/our-insights/one-year-of-agentic-ai-six-lessons-from-the-people-doing-the-work#/; https://www.kosta-online.com/post/ai-agent-hype-and-reality)


Centaur: Human + AI Coding Agent

Agent coding cannot produce production-grade applications without human guidance and interventions. The AI agent coding patterns supporting this AI agent pattern are Iterative Human-in-the-Loop, Two-Way Communication, Community Contributions, and Best of N, as explained below.


IV. AI Coding Agent Patterns


When developers use coding agents (with a large language model, tools, file systems, a terminal, and a planning loop), effectiveness depends less on prompt tricks and more on architectural interaction patterns among the human, the agent, and the codebase. Below are high-leverage agent coding patterns, framed for production app development rather than toy examples.



29. Macro Prompts: The first, simplest, and most important step in agent coding is to step back from tiny, micromanaging prompts that keep us tightly in the loop on every single change. Instead, we should think of ourselves as innovators and entrepreneurs with big ideas who leverage AI labor, scale it up, and use it in ways that deliver a thousandfold improvement in software engineering productivity. (https://www.coursera.org/learn/claude-code/lecture/zEjfi/1000x-improvement-in-software-engineering-productivity-with-big-prompts)


30. Specification-Driven Development (SDD): Treat the prompt not as prose but as a formal specification artifact. Specify, for example, a Product Requirements Document (PRD) written in Markdown, use cases for business flow, a semantic model for domain entities and invariants, and acceptance criteria in Gherkin style for executable constraints. That reduces reasoning entropy, enables deterministic planning, and converts a creative LLM into a constrained planner. (https://www.thoughtworks.com/radar/techniques/spec-driven-development)



AI coding agent platforms (like Claude Code, Cursor, Windsurf) produce superior, production-ready code when guided by more rigorous, structured requirement specs. By specifying precise, machine-readable specifications — e.g., a UML semantic model, BDD Gherkin-style acceptance criteria, use cases in TypeScript, and API contracts in OpenAPI — agents can generate complex, multi-file components, significantly reducing technical debt and improving reliability.


31. Image as Spec (Show, Don't Tell): When working with Claude Code, certain concepts are dramatically easier to communicate through images than through text descriptions: colorful UI wireframe designs, screenshots, example documents, data visualization charts, dashboard mockups, business process model diagrams, decision trees, UML class diagrams, entity-relationship diagrams, architecture diagrams, videos or animated GIFs, performance data graphs, network topology diagrams, etc.


You can submit UML and BPMN models as image prompts for backend development. However, converting those diagrams to text with diagram-as-code tools such as PlantUML and Mermaid can improve accuracy and enable round-trip engineering.


Figma Design to Code and Vice Versa with Claude Code

32. Spec-to-Scaffold Automation: Let the agent generate the project structure, set up the Docker file, and configure CI. Then freeze the structure before business logic begins.


33. Project Structure: The project structure, naming, and directory layout are important context for Claude Code when locating the code to change. Applying de facto standards for naming conventions, layouts, frameworks, and libraries makes it easier (i.e., more token-efficient) to figure out where things are and what they do from the names alone.


34. Ralph Loop: A bash-based orchestration loop that wraps an AI coding tool and runs it repeatedly and autonomously until all requirement spec items are complete. The agent restarts with fresh context for each iteration to prevent context degradation and corruption, using Git and file systems as persistent external memory to maintain continuity across iterations.
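The original Ralph loop is a bash while-loop around an agent CLI; the sketch below expresses the same idea in Python. `run_agent_once` is a stand-in for shelling out to the agent with a fresh context, and a checklist file plays the role of the Git/file-system external memory:

```python
import json
import pathlib

def run_agent_once(todo: list[str]) -> str:
    # Stand-in: a real loop would invoke the agent CLI here with a fresh
    # context and let it pick one spec item to complete.
    return todo[0]

def ralph_loop(checklist: pathlib.Path, max_iters: int = 50) -> list[str]:
    done = []
    for _ in range(max_iters):
        todo = json.loads(checklist.read_text())  # re-read external state
        if not todo:                              # all spec items complete
            break
        finished = run_agent_once(todo)           # fresh context each run
        todo.remove(finished)
        checklist.write_text(json.dumps(todo))    # persist progress on disk
        done.append(finished)
    return done
```

Because all state lives in the checklist file (and, in practice, in Git), each iteration starts clean, which is exactly how the pattern avoids context degradation across runs.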



35. Explore-Plan-Code: Separate research and planning from implementation to avoid solving the wrong problem. Spend more time designing and innovating.


Rapidly prototype with personas to explore requirements and options. Do rigorous requirement modeling, such as UML semantic modeling, BPMN business process modeling, and use case modeling, to specify requirements that maximize the 4 C's (correctness, consistency, completeness, and comprehensibility).


Then craft constraints and prompts. Let Claude Code generate code and then commit with a descriptive message and create a PR. (https://code.claude.com/docs/en/best-practices)


36. Context Engineering: Curating what the model sees to improve results. Claude Code can infer intent, but it can't read your mind. Reference specific files, mention constraints, and point to example patterns. Claude Code's context configuration features include CLAUDE.md, Rules, Skills, MCP servers, Subagents, Hooks, Plugins, and Slash commands.



You can provide rich data to Claude in several ways: Reference files with @; Copy and paste images directly into the prompt; Give URLs for documentation and API references; Use /permissions to manage tool allowlists; Pipe in data by running cat error.log | claude to send file contents directly; Tell Claude to pull context itself using Bash commands, MCP tools, or by reading files.


37. Context Window Optimization: Do not feed the whole repo into the context window; provide only the directory map, relevant files, and architectural constraints. During long sessions, the context window can fill with irrelevant conversation, file contents, and commands, reducing performance and sometimes distracting Claude. Use /clear between tasks to reset the context window. Use /compact to preserve key decisions while freeing space.


38. Domain Modeling: Agent coding platforms like Claude Code design workflows, classes, and SOA services based on requirement models, such as BPMN process models, UML semantic models, use case scenarios, and BDD acceptance criteria. (https://www.kosta-online.com/post/agent-coding-gold-standard)


The agent coding platform first classifies verbs (i.e., activities and tasks) in behavioral models into process activities (more abstract activities) and class operations (unit tasks). It then connects process activities into a workflow, while assigning class operations to conceptual-level classes and design-level classes as their methods. (https://www.kosta-online.com/post/agent-coding-gold-standard)


The latter assignment method is called Class Responsibility Assignment, often using Class-Responsibility-Collaboration (CRC) cards, and applying the GRASP method. (C. Larman, Applying UML and Patterns, 2001) Class Responsibility Assignment produces the domain model and CRC cards for the application being built, which you can ask Claude Code to present to you for review, correction, and approval. (https://www.kosta-online.com/post/agent-coding-gold-standard)


39. API Integration: Both the inner and outer architecture of agent coding platforms like Claude Code are SOA. So, you can use Claude Code’s API to send requests and receive responses, providing a clear, structured way to interact with the AI coding agent. There are several ways the agent integrates with systems via APIs: direct API calls, tool invocation, the MCP (Model Context Protocol) gateway, the A2A (Agent-to-Agent) protocol, and unified API platforms.



The application built using an agent coding platform should also be in SOA. The agent maps bounded contexts in the domain model to SOA services, producing a Context Map in DDD. The workflow designed by the agent using process activities is used to compose SOA services into an application. The agent implements service composition via centralized orchestration using a BPMS or via decentralized choreography using a pub/sub event bus, depending on the application's architecturally significant quality attributes. (https://www.kosta-online.com/post/the-complete-guide-to-soa-msa-and-modulith)


The agent generates the service APIs, test code, and source code based on behavioral requirement specifications, the domain model, and the SOA architecture design.


40. Bounded Context Injection: To avoid a large codebase overloading the context window, focus on a single bounded context in the domain model per session. To optimize the context window, it is important to design software that is modular within a service-oriented architecture.


41. Environment Configuration: Write an effective CLAUDE.md. Configure permissions. Configure CLI flags and environment variables. Install plugins. Connect the MCP servers. Set up hooks. Create skills. Create custom subagents. Configure Agent Teams. (https://code.claude.com/docs/en/best-practices)


42. Constraint-Driven Coding: The agent uses TDD to ensure that the generated code satisfies the requirements. You may embed explicit constraints, such as “Perform unit testing for every basis path in each class” and “No ORM, use raw SQL.” Agents optimize strongly around explicit constraints.


43. Refactor-As-Transformation: The agent performs far better with measurable transformation goals: for example, "Reduce cyclomatic complexity to less than 10", and "Conform to SOLID principles."


44. Role Switching: Have the agent switch roles sequentially: for example, "Act as an architect, a security reviewer, a performance engineer, and then a QA." This pattern creates structured critique loops without changing context.



45. Memory Snapshot: Before a large change, take a snapshot of memory: for example, "Summarize the current architecture in 10 bullet points."


46. Deterministic Output Contracts: Require strict formats to prevent narrative drift and allow programmatic parsing. For example, "Return JSON with files modified, migrations, and new dependencies."
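A hypothetical response satisfying that contract might look like this (the field names are illustrative, not a Claude Code standard):

```json
{
  "files_modified": ["src/billing/invoice.ts"],
  "migrations": ["20240101_add_invoices_table.sql"],
  "new_dependencies": ["decimal.js"]
}
```

Because the shape is fixed, a CI script can parse the output directly instead of scraping prose.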


47. Error-Driven Refinement: When a test fails, let the agent explain the root cause, propose a fix, and show the patch only (i.e., do not allow a full rewrite).


48. Diff Edit: Require the agent to "output unified diff only" and "not rewrite unchanged code." Otherwise, the agent rewrites entire files. Benefits of diff edit include safe incremental edits, Git-friendly history, and easier review. Claude-like coding agents respond well when constrained to patch semantics.
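To see why patch semantics are review-friendly, here is a self-contained sketch that applies an agent-style unified diff with the standard patch tool (file name and contents are made up):

```shell
workdir=$(mktemp -d) && cd "$workdir"
printf 'def add(a, b):\n    return a - b\n' > calc.py   # seeded bug

# The kind of output a diff-constrained agent emits: only the changed hunk.
cat > fix.patch <<'EOF'
--- calc.py
+++ calc.py
@@ -1,2 +1,2 @@
 def add(a, b):
-    return a - b
+    return a + b
EOF

patch -s -p0 < fix.patch   # in a Git repo, "git apply fix.patch" works the same way
```

Because untouched lines never appear in the output, the change is cheap to review and reversible with patch -R.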


49. Iterative Human-in-the-Loop (HITL): Even highly autonomous systems benefit from structured human oversight. Humans should approve risky changes and review architectural decisions. These ensure accountability and prevent the silent accumulation of technical debt. (https://www.kosta-online.com/post/vibe-coding-benefits-and-limitations) In Claude Code, use Plan mode, Diff review, Permission prompts, and /rewind for HITL iterative review.


Google requires human code review of every code change by at least one person (e.g., a readability specialist, the code owner, or a peer developer), even though Google developers use its own agent coding platform, Gemini Code Assist, as an intelligent coding partner. (https://www.michaelagreiler.com/code-reviews-at-google/)



When Andrej Karpathy, a founding researcher at OpenAI and former director of AI at Tesla, was asked if a few prompters would replace large development teams, he explicitly pushed back, arguing that "top-tier, deep technical expertise may be even more of a multiplier than before." (https://www.implicator.ai/karpathy-says-ai-coding-agents-made-programming-unrecognizable-since-december/)


50. Two-Way Communication: Agents proactively ask humans questions when uncertainty arises during instruction execution, preventing unsafe actions or hallucinations. Conversely, you can ask the agent questions as you would ask another engineer, especially when you are onboarding to a new codebase. For larger features, have the agent interview you using the AskUserQuestion tool. Ask the agent to correct course as soon as you notice it going off track. For example, you can copy and paste an error message from running the agent-generated app into the prompt and ask the agent to fix the error.


51. Self-Verification: Provide Claude Code with tests, screenshots, or expected outputs so Claude can check itself. For example, write the following in CLAUDE.md so it loops: "When you build new code, write tests for it (including unit, integration, and user acceptance tests), and before you check in, compile the code and make sure it passes all the tests."


52. Learn by Example: Provide Claude Code with example code from which to learn core design, style, conventions, and principles.


53. Persistent Memory: CLAUDE.md is the project team's persistent memory: essential context that provides institutional knowledge and is available in every prompt. It should include clear, concise instructions; operational processes; naming and standards; testing and quality gates; examples and references; expectations and boundaries; and tools and dependencies.
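A minimal skeleton along those lines (all section names and rules are illustrative):

```markdown
# CLAUDE.md
## Operational process
- Work in small increments; commit after each green test run.
## Naming and standards
- TypeScript strict mode; services named <domain>-service.
## Testing and quality gates
- Before check-in, build the code and run the full test suite.
## Examples and references
- Follow the patterns in src/billing/ for new services.
## Boundaries
- Never edit files under migrations/ without asking first.
```

Keep each rule short and always-applicable; long procedural workflows belong in Skills or commands instead.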


You can also ask Claude to first write codebase research in research.md, which you review and correct before any planning begins. Then ask Claude to save its implementation plan as FEATURE_PLAN.md or INTEGRATION_DESIGN.md. Annotate the plan file directly and ask Claude to update it. Repeat this annotation loop several times until the plan fits your system precisely.



54. Repeatable Commands: Claude Code commands (slash commands) deliver targeted context and process for specific, repeatable tasks — think of them as specialized instruction sets that give Claude Code exactly what it needs for particular workflows without overwhelming it with irrelevant information.


Commands are stored as Markdown files in .claude/commands/ and ~/.claude/commands/. Command files support YAML frontmatter (allowed-tools, model, description, argument-hint), dynamic bash execution (!), file references (@), and $ARGUMENTS for parameterization. MCP servers can also expose prompts that automatically become slash commands.
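For illustration, a hypothetical command file at .claude/commands/review-pr.md combining frontmatter, a tool allowlist, and $ARGUMENTS:

```markdown
---
description: Review a pull request for bugs and style issues
argument-hint: <pr-number>
allowed-tools: Bash(gh pr view:*), Bash(gh pr diff:*)
---
Review PR #$ARGUMENTS. Check for logic errors, missing tests,
and violations of the conventions in CLAUDE.md. Output findings
as a prioritized list.
```

Invoking /review-pr 123 substitutes 123 for $ARGUMENTS.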


55. Community Contributions: If you contribute a CLAUDE.md file, Claude Code commands, innovative process documentation, and creative solutions to common development challenges to a community, you will help other developers skip the trial-and-error phase and jump straight to productive AI-assisted development. Every shared command becomes a building block for the next developer's breakthrough.


56. State Management: Keep track of conversation state and application context within a session by leveraging Claude’s ability to maintain context across multi-turn exchanges in the active context window. This pattern improves user experience in ongoing conversations by allowing Claude to dynamically adjust responses based on previous interactions — useful for context-aware replies and iterative refinement.


To maintain context between sessions, write persistent instructions and project conventions to CLAUDE.md; rely on Auto Memory and Session Memory, which automatically extract and save structured summaries of past sessions; explicitly save written artifacts like research.md or PLAN.md for Claude to reference in future sessions; or resume a specific past session directly.



57. Reusable Saved Prompts: Turn repeated instructions into persistent components. Put short, always-applicable rules, such as naming conventions and architecture standards, into CLAUDE.md; Put complex, procedural, or domain-specific workflows like testing procedures or deployment steps into Skills; and use slash commands for explicit, user-initiated repeatable actions.


58. Agent Teams: Let a Manager Agent coordinate a Frontend Agent (React components, UI state), a Backend Agent (API routes, server logic), and an Infrastructure Agent (schema, migrations, deployments), all running in parallel and communicating with each other. The Frontend and Backend Agents need to keep an API contract in sync while building simultaneously.


Multi-Agent Collaboration vs. Hierarchical Subagents (https://code.claude.com/docs/en/agent-teams)

59. Parallel Subagents: Let a Product Agent create requirements and specs, a UX/Design Agent design UI flows and wireframes, and Claude Code as the Coding Agent implement the result, with humans serving as the supervisor of the three subagents and the quality gate between each phase. Let a Planner Agent break a big task into tiny, ordered steps and let an Executor Agent (Claude Code) receive and implement each task independently. Let a Reviewer Agent critique and improve another agent’s output before it reaches a human. Each subagent has a single responsibility, with scoped tool permissions.


60. Non-Interactive Mode: You can run Claude non-interactively without a session. Claude Code scales horizontally with parallel non-interactive batch operations. Non-interactive mode is how you integrate Claude into CI pipelines, pre-commit hooks, or any automated workflow. You can distribute work across many parallel batch operations.
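A sketch of fanning out one-shot runs, where AGENT is a stand-in for the real non-interactive invocation (claude -p, Claude Code's print mode):

```shell
# Fan out independent one-shot agent runs in parallel.
# AGENT is an assumption; in real use it would be "claude -p".
AGENT="${AGENT:-claude -p}"

batch_review() {
  for f in "$@"; do
    # Each file gets its own non-interactive run; & makes them parallel.
    $AGENT "Review $f for obvious bugs" > "review-$(basename "$f").txt" &
  done
  wait  # block until every parallel run finishes
}
```

The same shape works in a CI step or pre-commit hook, where no interactive session is available.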


61. Autonomous Mode: You can have Claude Code bypass all permission checks and work fully unattended. This pattern works well for well-scoped, non-critical workflows like fixing lint errors or generating boilerplate, but only with mandatory safety prerequisites in place: always git commit first to create a rollback point; scope tasks tightly with a precise prompt; and set max-turns to prevent infinite loops.


62. Incremental Development: Implement features in small, verified increments rather than attempting to “one-shot” everything in a single session. Slice each feature into increments small enough that each fits comfortably within a fresh context window and does not cause context rot.


Leave structured artifacts between sessions so that context can be aggressively cleared without losing the project roadmap. The current recommended artifacts are: Tasks stored persistently at ~/.claude/tasks/; Git commit as a rollback point and a progress marker; and PLAN.md or SPEC.md to be referenced at the start of each session.


63. Autonomous Version Control: Let Claude Code create and manage feature branches following naming conventions defined in CLAUDE.md. For maximum context isolation, use branch-scoped CLAUDE.md files that automatically swap context when switching branches via a git hook.
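A minimal sketch of that hook logic, assuming branch-scoped files live under a hypothetical .claude-md/ directory; in a real repo this function body would go in .git/hooks/post-checkout:

```shell
# Copy a branch-scoped context file over CLAUDE.md after checkout.
# Assumes branch-specific files live in .claude-md/<branch>.md.
swap_claude_md() {
  local branch scoped
  branch=$(git rev-parse --abbrev-ref HEAD)
  scoped=".claude-md/${branch}.md"
  if [ -f "$scoped" ]; then
    cp "$scoped" CLAUDE.md   # branch-specific context replaces the default
  fi
}
```

Switching branches then automatically swaps in the matching context, so each Claude Code session sees only its branch's instructions.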


Claude Code handles the full Git lifecycle autonomously: reading GitHub issues → creating feature branches → implementing changes → committing with conventional commit messages → pushing branches → creating PRs with auto-generated descriptions → deploying to staging via GitHub Actions → waiting for deploy confirmation before proceeding.


Feature branches may be abandoned (worktree removed, branch discarded) or later merged into main to cherry-pick the best features.



64. Parallel Features Development: For simultaneous autonomous development across multiple features, use Git Worktrees. Git worktrees combined with Claude Code create a powerful workflow for parallel development. Instead of juggling multiple branches on a single working directory, Git worktrees give each Claude Code instance its own isolated directory, branch, and file state, all sharing the same underlying Git history and remote connections.
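The setup can be sketched in a few commands (repository and branch names are illustrative):

```shell
# Two isolated worktrees, each with its own branch and working directory,
# sharing one Git history; one Claude Code instance can run in each.
tmp=$(mktemp -d) && cd "$tmp"
git init -q app && cd app
git config user.email dev@example.com
git config user.name dev
echo '# app' > README.md && git add README.md && git commit -qm init

git worktree add ../feature-auth -b feature-auth
git worktree add ../feature-billing -b feature-billing
```

Each directory now has independent file state, so parallel agents never clobber each other's uncommitted changes.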


Note that parallel is not always faster. Tasks that share files can produce merge conflicts that take longer than sequential development.


65. Feature Flagging: A feature flag gate keeps merged code behind a flag in the live environment, releasing it only after explicit human approval. Feature flags also enable incremental rollout (1% → 5% → 25% → 100%) with automatic rollback if error rates exceed thresholds, providing a safety net for teams. Combined with Git Worktrees, flags allow Claude Code to run parallel feature experiments simultaneously, each behind its own flag, with no branch interfering with production until a human approves the rollout.
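A sketch of the percentage gate (a hypothetical helper, not a specific feature-flag product): hash the user id into a stable bucket from 0 to 99 and compare it with the rollout percentage.

```shell
# Deterministic percentage rollout: the same user id always lands in the
# same bucket, so ramping 1% -> 5% -> 25% -> 100% only widens the gate.
flag_enabled() {
  local user="$1" percent="$2" bucket
  bucket=$(( $(printf '%s' "$user" | cksum | cut -d' ' -f1) % 100 ))
  [ "$bucket" -lt "$percent" ]
}

if flag_enabled "user-42" 100; then
  echo "new checkout flow"   # at 100%, every user passes the gate
fi
```

Because bucketing is deterministic, a user who saw the new code at 5% keeps seeing it at 25%, which keeps the rollout experience consistent.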



For AI-generated code scanned by Claude Code Security, the feature flag serves as the deployment gate after the security multi-stage verification pipeline — holding flagged code until the security team approves patches in the Claude Code Security dashboard. In regulated environments (HIPAA, SOC 2), feature flag gates are mandatory compliance requirements, not optional workflow choices.


66. Best of N: When you start deploying AI labor to solve problems, don't have it solve the problem once. Have it solve the problem three, five, ten times, and then give you all of them back so that you can go, evaluate, judge, and decide what you like best and why, or how you want to combine them. (https://www.coursera.org/learn/claude-code/lecture/4EVrV/the-best-of-n-pattern-leverage-ai-labor-cost-advantages)


67. Performance Optimization: Claude Code’s response speed depends on four factors you control — model selection (among Haiku, Sonnet, and Opus), model routing strategy, context window size, and prompt specificity.


Claude Code automatically enables Prompt Caching, which dramatically reduces input costs.


Use Streaming for real-time partial responses to keep the UI responsive; Message Batches API for bulk async operations; and parallel sessions for independent concurrent tasks — frontend in one session, backend in another.


For application-level optimization, Claude Code identifies and fixes N+1 database query problems, implements connection pooling, adds composite indexes, and suggests Redis caching layers, provided that the developer activates them through deliberate task specification in the PRD or prompt.


68. Analytics and Monitoring: Three native monitoring layers are available: Console Analytics Dashboard, the Claude Code Analytics API (daily aggregate, organization-wide, low setup effort), and OpenTelemetry integration.


The OpenTelemetry integration exports real-time per-event metrics to any OTel-compatible backend, such as Prometheus/Grafana, Datadog, and ClickHouse. It is, however, by far the most complex of the three to set up.


The Analytics API provides daily aggregated organization-wide productivity metrics, tool acceptance/rejection rates, and cost breakdown by model. This is relatively easy to set up. Cost analytics are the most critical enterprise metric — track spending by user, model, and team to measure ROI and justify adoption.


The native Console Analytics Dashboard offers a quick, high-level view of accepted lines of code and active users, but lacks business context and downstream impact, making it limited for a complete ROI picture.



 
 
 
