Tech
AI

How I Code in 2026

10 Jan 2026

9 min read

Over the past year, the way I write code has changed a lot. I used to code mostly by hand, and I spent a great deal of time fighting syntax: reading documentation, trying to understand how an API worked, and then applying it to my code. Which is fine, actually, because I learned a lot from it and I like to learn.

Now AI helps me write code faster and more efficiently. And it's not just about writing code; it's about planning, executing, and reviewing it.

AI is no longer just something I occasionally use to help me write a function. It's now part of my daily development loop: planning, coding, reviewing, fixing, all the way to deployment.

The learning has changed too. I'm still learning about the code, the technologies, the frameworks, the patterns, and the best practices. But now I'm also learning from the AI. It's not just a tool that helps me write code; it's a tool that helps me learn.

So I thought I'd write this down, partly for myself, partly for anyone curious how engineers actually use AI in real work.

This is basically my current loop:

----------------------------------------------------------
|                        Plan                            |
----------------------------------------------------------
                          |
                          v
----------------------------------------------------------
|  Execute (Code + Review + Fix + Run + Typecheck/Lint)  |
----------------------------------------------------------
                          |
                          v
----------------------------------------------------------
|                     Push/Deploy                        |
----------------------------------------------------------

But one important note:

This is not a straight line. It loops.


1. Planning

Before I write code, I spend time breaking down the problem and building a solid understanding of how the feature should work, from the high-level flow down to the low-level details. I also think from the user's perspective: how the feature will be used, how it helps them, what they'll expect, and how we can make it easier for them to use.

What I usually do:

  • break down requirements and make them clearer and more concrete
  • design data structures (DTO, Zod schema, API payload)
  • think about UI structure
  • think about server vs client components
  • list edge cases

To help me, I often use the most capable AI models available (GPT-5-class, Claude Opus, and so on), because if the plan is bad, everything downstream becomes messy. I go through the planning process several times until I'm satisfied with the result.

I use Cursor for my daily work, so I can give the AI agent both the prompt and the context.

Here's an example of the kind of prompt I'd write (simplified, not the exact wording):

I'm building a CPF contribution step in a loan application flow.

Details:

- user inputs CPF contribution history
- multiple entries allowed (min 1, max 12)
- each entry has employer name, contribution amount, start and end date
- must follow CPF rules
- data will be submitted to the backend API

Help me:

- suggest Zod schema + TypeScript types
- suggest component structure and form state
- define API payload shape
- list edge cases + validation rules
- suggest server vs client component split

AI gives me a first draft of the plan. Then I refine it based on:

  • domain knowledge (e.g. CPF, mortgage rules, etc)
  • performance considerations
  • our internal design system
  • uncovered cases or security concerns
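To make this concrete, here's roughly what a first draft for the CPF step above might look like. This is a plain-TypeScript sketch (in the real plan it would be a Zod schema with refinements), and all names and rules here are illustrative, not the actual CPF logic:

```typescript
// Hypothetical types for the CPF step above; field names are illustrative.
interface CpfContributionEntry {
  employerName: string;
  contributionAmount: number; // in dollars
  startDate: string; // ISO date, e.g. "2025-01-01"
  endDate: string;   // ISO date
}

interface CpfContributionPayload {
  entries: CpfContributionEntry[]; // min 1, max 12 per the requirements
}

// A minimal validation pass covering a few of the listed edge cases.
function validatePayload(payload: CpfContributionPayload): string[] {
  const errors: string[] = [];
  if (payload.entries.length < 1 || payload.entries.length > 12) {
    errors.push("entries must contain between 1 and 12 items");
  }
  payload.entries.forEach((entry, i) => {
    if (entry.employerName.trim() === "") {
      errors.push(`entry ${i}: employer name is required`);
    }
    if (entry.contributionAmount <= 0) {
      errors.push(`entry ${i}: contribution amount must be positive`);
    }
    // ISO-format dates compare correctly as strings.
    if (entry.startDate > entry.endDate) {
      errors.push(`entry ${i}: start date must not be after end date`);
    }
  });
  return errors;
}
```

The point of a draft like this isn't correctness yet; it's that the shapes and edge cases become explicit, so my review of the plan has something concrete to push against.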

And very important:

Planning is not one-time. I revisit and refine it multiple times during the whole development cycle.

AI really helps here. It sometimes surfaces things I hadn't thought of before. It's like having a second brain that helps you think better.

But it's also important to review the plan thoroughly, because the AI doesn't always get it right and can 'hallucinate' (generate something that isn't real). A human in the loop is still essential.


2. Execution

After I have the plan, I start coding, with the AI agent as my pair programmer. Depending on the feature's complexity, I might give the AI a big chunk of the plan and let it scaffold the entire feature vertically, from the data layer to the UI; if the feature is small, I might give it the whole plan at once.

It writes the first version of the code for me. Normally it produces working code that's good enough to build on, but not perfect. And that's fine, because the goal at this stage is momentum, not perfection. I will review and refine the code afterwards.

The "Living Context" Document

The secret to making large-scale execution work is a Living Context Document. I keep a markdown file that acts as the "Source of Truth."

It contains:

  • Requirements & Decisions: Why we chose specific logic (this is the plan).
  • API Contracts: Exactly how the data should look.
  • The Progress Ledger: What's done and what's next.
  • and other relevant information.

Instead of re-explaining my life story to the AI every five minutes, I simply point it to the file.
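For illustration, a stripped-down version of such a file might look like this (the feature, endpoint, and checklist items are made up for the example):

```markdown
# Context: CPF Contribution Step

## Requirements & Decisions
- Min 1 / max 12 contribution entries (product decision).
- Dates must satisfy CPF rules before submission.

## API Contract
- `POST /loan-applications/:id/cpf-contributions` (illustrative endpoint)
- Body: `{ entries: [{ employerName, contributionAmount, startDate, endDate }] }`

## Progress Ledger
- [x] Zod schema + TypeScript types
- [x] Form state + component split
- [ ] Validation for overlapping date ranges
```

Because it lives in the repo, the file evolves alongside the code, and every new AI session can be pointed at it instead of starting from zero.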

The Refinement Loop

This is where I spend most of my energy. Even though the AI wrote the code, I own the result. I'm involved in every single step of the process:

  • Deep Architectural Review: I don't just check whether it runs. I review the overall architecture: Is the component split correct? Is the naming semantic? Does it follow our monorepo's specific patterns? Does it follow the plan? Does it use the correct design system?
  • Manual Refactoring: If the AI produces "clever" but unreadable code, or worse, hallucinates, I step in. I often refactor manually because I have historical context of the codebase that the AI might miss. I'll simplify logic, improve readability, or extract reusable hooks myself to ensure they meet our standards.
  • The "Break and Fix" Cycle: I run the code, test it, and try to break the UI. When I find a bug, I track down the root cause and work out how to fix it. Depending on the bug's complexity, I either ask the AI to fix it or fix it myself.
  • The Helpfulness of Tooling: I never take the AI's word that the code is correct. I rely on my local environment to verify that it works as expected:
    • TypeScript: Enforces strict type safety.
    • Biome: Handles linting and formatting instantly.
    • Manual Testing: I personally click through every edge case. Does the interaction feel right? AI can't feel the UX; only I can.
  • Performance & Optimization: I manually review the execution path. Are we triggering unnecessary re-renders in this list? Any unnecessary data fetching? etc. I'll prompt the AI to optimize the code, or I'll manually wrap components in memo where my domain knowledge says it's necessary.
  • Security: I review the code for security concerns: how inputs are handled, what data is exposed, and whether we're following our security best practices.
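On the tooling point above: the strict type safety comes from the TypeScript compiler itself. A minimal sketch of the relevant `tsconfig.json` options (these are standard compiler flags, not our exact config):

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitOverride": true,
    "noFallthroughCasesInSwitch": true
  }
}
```

With flags like these, a whole class of AI-introduced mistakes (wrong nullability, missing cases, unsafe indexing) fails the typecheck before I ever run the code.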

This process is not linear. It's a loop. I might go back and forth between the planning, execution, and refinement stages several times until I'm satisfied with the result. One thing is certain: I own the context, the decisions, the code, and the result.

3. Push & Deploy

Once everything is good, I push the code to the repository. The "Pull Request" isn't just a request for a human to look at my code, it's the final gate where my manual work meets automated intelligence.

The CI/CD & AI-Check

As soon as the code hits GitHub, the CI pipeline kicks off. While the standard build and test suites run, an AI agent performs a pre-review. It checks for security, consistency, and documentation.
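As a rough sketch, the pipeline looks something like this generic GitHub Actions workflow (the script names are placeholders, and the AI pre-review is shown only as a comment because that part depends entirely on the tool you use):

```yaml
name: CI
on: pull_request

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm ci
      - run: npm run typecheck   # e.g. tsc --noEmit
      - run: npm run lint        # e.g. biome check .
      - run: npm test
  # The AI pre-review runs alongside these checks; in our setup it's a
  # separate bot that comments on the PR, so it isn't shown here.
```

The useful property is that both kinds of review, deterministic checks and AI review, run on every PR without anyone having to remember to trigger them.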

The Human Element

Despite all the AI assistance, manual review from my team is still mandatory. Because the AI handled the boilerplate, my teammates can focus their energy on the high-level logic.

Deploying with Confidence

Once merged, the code is deployed. I have more confidence in the deployment because I stayed in the loop and I know exactly what was shipped.


Final thoughts

Looking back at how I used to code just a few years ago, the difference is striking. I used to spend most of my time fighting syntax. Today, I spend more of it thinking about the problem and the solution.

Some people worry that AI will make us "lazy" or that we'll lose our edge. I've found the opposite to be true. To use AI effectively, you have to be more rigorous. You have to be a better architect, a better reviewer, and a more critical thinker. You aren't just a "coder" anymore; you're an orchestrator.

I still hold the wheel and I still know every turn in the road, but I have a co-pilot that helps me drive faster and further than I ever could alone. And I learn a lot from the process itself.

The loop never truly ends; it just gets tighter and more efficient. It's not about AI replacing the human, it's about the human using AI to code faster and better.

Thank you for reading!
