Opinion

How I use AI in web development (no hype)

By Flávio Emanuel · 8 min read

The real picture

I use AI tools in my daily development workflow. Cursor, Claude, GitHub Copilot. Not because it’s trendy. Because it saves time on specific tasks.

The “AI will replace programmers” talk is just as overblown as “WordPress will kill devs” was in 2015. It didn’t happen. It won’t happen. What actually happened with WordPress: devs who only installed themes lost ground. Devs who did real work kept working. AI will play out the same way.

What did change: the productivity bar went up. Those who use it well produce more. Those who don’t are slower by comparison. But “using it well” is the part nobody explains.

Where AI actually speeds things up

Boilerplate and repetitive code

Creating a React component with TypeScript, setting up a Supabase client, scaffolding an API route structure, building a custom hook with correct types. Things I can do with my eyes closed but that take 10-15 minutes of typing. AI does it in seconds.

Concrete example: on AutoPars, each integration (Asaas, Melhor Envio, Zapi, Resend, Cloudflare) needed a client with header config, token, base URL, and type definitions. That’s 5 files nearly identical in structure. AI generated all 5 in minutes. I adjusted the specifics of each API.
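The shared shape those clients follow can be sketched as a small factory. Everything here is illustrative — the names, config fields, and error handling are assumptions, not the actual AutoPars code:

```typescript
// Hypothetical sketch of the shared integration-client structure.
// Field names and endpoints are invented for illustration.
type ClientConfig = {
  baseUrl: string;
  token: string;
  extraHeaders?: Record<string, string>;
};

function createApiClient({ baseUrl, token, extraHeaders = {} }: ClientConfig) {
  const headers: Record<string, string> = {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
    ...extraHeaders,
  };

  return {
    async get<T>(path: string): Promise<T> {
      const res = await fetch(`${baseUrl}${path}`, { headers });
      if (!res.ok) throw new Error(`GET ${path} failed: ${res.status}`);
      return res.json() as Promise<T>;
    },
    headers, // exposed so each integration can inspect or extend its config
  };
}
```

Each integration then only overrides what actually differs: base URL, auth header format, and response types.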

Tests

Writing unit tests for existing functions is where AI saves me the most time. It reads the function, understands what it does, and generates edge cases. I review, fix what doesn’t make sense, and add scenarios it missed.

The real gain isn’t AI writing better tests than me. It’s removing the inertia barrier. Writing tests is the task every dev puts off. With AI generating the skeleton, I adjust in 5 minutes what would take 20 to write from scratch.

A pattern that works well: I write the most important test manually, to set the style and structure. Then I ask AI to generate the next 5-10 scenarios following that same pattern. It keeps consistency and covers the edge cases I’d leave for later (and “later” usually means “never”).

Documentation

Generating JSDoc, function comments, project READMEs, changelogs. The part of the work every dev postpones until delivery time, then rushes through and does poorly. AI produces decent documentation without complaining and without procrastinating.
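For a sense of what that looks like in practice, here is the kind of JSDoc an assistant drafts and I then review. The function and its wording are invented for illustration:

```typescript
// Illustrative example of AI-drafted JSDoc on a small utility.
/**
 * Converts a product title into a URL-safe slug.
 *
 * @param title - Raw product title, e.g. "Farol Dianteiro LED 12V"
 * @returns Lowercase, hyphen-separated slug with accents stripped
 */
function slugify(title: string): string {
  return title
    .normalize("NFD")                 // split accented chars into base + mark
    .replace(/[\u0300-\u036f]/g, "")  // drop the combining marks
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")      // collapse non-alphanumerics into hyphens
    .replace(/(^-|-$)/g, "");         // trim leading/trailing hyphens
}
```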

Research and debugging

Instead of opening 5 Stack Overflow tabs, I ask directly. “This query is slow on Supabase, what could it be?” and I get hypotheses to investigate: missing index, N+1, heavy RLS policy. It doesn’t replace understanding the problem, but it speeds up diagnosis.

This is especially useful with APIs I don’t know. When I integrated Melhor Envio for the first time on AutoPars, I used AI to understand the API structure, the required endpoints, and the authentication flow. What would have been an afternoon of reading docs took 30 minutes of conversation with AI + validation against the official documentation.

Migration and refactoring

Converting JS files to TypeScript, changing import patterns, updating deprecated APIs, renaming variables across 30 files. Mechanical work that AI does well and fast. On a recent project, I migrated 40 files from plain JavaScript to TypeScript with strict typing. AI inferred the correct types in about 80% of cases. The remaining 20% needed manual adjustment, but it still took a third of the time compared to doing everything by hand.
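A tiny before/after gives the flavor of that migration. The function is hypothetical, not taken from the actual project:

```typescript
// Hypothetical plain-JS original, kept as a comment:
// function totalWeight(items) {
//   return items.reduce((sum, i) => sum + i.weight * i.qty, 0);
// }

// The strictly typed version: AI infers the item shape from usage,
// and I verify it matches the real data model.
type CartItem = { weight: number; qty: number };

function totalWeight(items: CartItem[]): number {
  return items.reduce((sum, i) => sum + i.weight * i.qty, 0);
}
```

The 20% that needed manual work was mostly cases like this where the inferred type was plausible but narrower or wider than the real data.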

CSS and styling

Generating Tailwind classes for a layout I describe, turning a mental design into CSS code. AI is particularly good at this because CSS is repetitive and predictable. “Card with padding 24, subtle border, border-radius 12, dark background with hover lifting 2px” — AI translates that to code immediately.

Responsiveness too. Asking “now make this work on mobile with vertical stack and smaller font” gives the right result 90% of the time. Fine-tuning breakpoints and spacing is still manual, but the bulk of the work comes out ready.
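The translation from that prose description to utility classes looks roughly like this. The utility names are real Tailwind; the component context is invented:

```typescript
// "Card with padding 24, subtle border, border-radius 12, dark background,
// hover lifting 2px" becomes roughly:
const cardClasses = [
  "p-6",                        // padding: 24px
  "border", "border-zinc-800",  // subtle border on a dark theme
  "rounded-xl",                 // border-radius: 12px
  "bg-zinc-900",                // dark background
  "transition-transform",
  "hover:-translate-y-0.5",     // lift 2px on hover
].join(" ");

// And the "mobile: vertical stack, smaller font" follow-up:
const responsiveClasses = "flex flex-col gap-4 text-sm md:flex-row md:text-base";
```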

Where AI gets in the way

System architecture

AI doesn’t know your business context, expected scale, real constraints, infrastructure budget, or the skill set of the team that will maintain the code.

On AutoPars, the decision to separate the React marketplace from the Astro landing page was mine. AI would have suggested doing everything in Next.js with SSR. Would it work? Technically, yes. Would it be the best decision? No. The marketplace needs a SPA for interactivity, and the landing page needs SSG for SEO. Two stacks, two deploys, each optimized for what it does. That decision came from experience with the trade-offs of each approach, not from a prompt.

Code that looks right but isn’t

This is the most dangerous risk. AI generates code that compiles, passes lint, and looks correct. But the logic can be wrong in subtle ways.

I once accepted an AI suggestion that implemented a product filter with inverted logic — it showed what should be hidden and hid what should be shown. The code was clean, well-typed, no syntax errors. The bug only showed up when I tested with real data.
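A simplified reconstruction of that kind of inversion, with invented field names. Both versions compile, both are well-typed, and only one is correct:

```typescript
type Product = { active: boolean; hidden: boolean };

// What the suggestion effectively did: the condition is negated, so it
// returns exactly the products that should have been filtered out.
function visibleProductsBuggy(products: Product[]): Product[] {
  return products.filter((p) => !p.active || p.hidden);
}

// The corrected logic: show only active, non-hidden products.
function visibleProducts(products: Product[]): Product[] {
  return products.filter((p) => p.active && !p.hidden);
}
```

Nothing short of running both against real data distinguishes them.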

The problem: when you stop paying attention and trust the output without reviewing line by line, the bug goes to production. AI doesn’t know if the result is correct. It knows if it looks correct. The difference between those two is where the risk lives.

Complex business rules

“If the seller has a premium plan and the buyer is a company, apply volume-tiered discounts but only if the shipping type is X and the delivery region is Y.” This is logic that requires conversation with the client, not a prompt.

AI can implement the rule if you describe it precisely. But the hard part isn’t implementing — it’s figuring out what the rule is. That comes from meetings, uncomfortable questions, and understanding the business that no AI has.
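Once the rule is pinned down, the implementation itself is the easy part. A sketch of the rule above, where every field name, shipping type, region, and tier threshold is invented for illustration:

```typescript
// All names and thresholds here are made up; the point is that the code
// is trivial once the rule itself has been extracted from the business.
type Order = {
  sellerPlan: "free" | "premium";
  buyerType: "person" | "company";
  shippingType: string;
  region: string;
  quantity: number;
};

function volumeDiscount(order: Order): number {
  const eligible =
    order.sellerPlan === "premium" &&
    order.buyerType === "company" &&
    order.shippingType === "freight" &&  // "shipping type X", assumed
    order.region === "southeast";        // "region Y", assumed

  if (!eligible) return 0;
  if (order.quantity >= 100) return 0.15; // tiers are invented
  if (order.quantity >= 50) return 0.1;
  if (order.quantity >= 10) return 0.05;
  return 0;
}
```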

Security

AI doesn’t think about SQL injection, XSS, CSRF, or RLS. It generates code that works, not code that’s secure. In Supabase, RLS (Row Level Security) is the most important security layer — and AI consistently generates RLS policies that are either too permissive or too restrictive. Security responsibility stays with the dev.

I’ve caught AI suggestions exposing endpoints without authentication, returning other users’ data through a poorly filtered query. The code ran, the types matched, lint passed. But any authenticated person could see data from any account. This kind of flaw doesn’t show up in unit tests — it shows up when someone with bad intentions tests the API manually.
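A simplified model of that flaw, with the database reduced to an in-memory array and all names invented. The vulnerable version filters by record id but never scopes by the authenticated user:

```typescript
type OrderRow = { id: string; userId: string; total: number };

// Vulnerable: any authenticated caller can fetch any order by id.
function getOrderInsecure(rows: OrderRow[], orderId: string) {
  return rows.find((r) => r.id === orderId) ?? null;
}

// Fixed: the lookup is also scoped to the session's user, mirroring what a
// correct Supabase RLS policy (user_id = auth.uid()) would enforce server-side.
function getOrder(rows: OrderRow[], orderId: string, sessionUserId: string) {
  return rows.find((r) => r.id === orderId && r.userId === sessionUserId) ?? null;
}
```

In a real Supabase setup, the RLS policy is the layer that saves you even when the application code forgets the filter — which is exactly why it must be reviewed by hand.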

Code nobody can understand

AI can generate “creative” solutions that work but are impossible to maintain. 200-character regexes, one-liners with 4 chained ternaries, unnecessary abstractions. Code that solves the problem today and creates three new ones tomorrow.
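An invented example of the problem and its fix: both versions below behave identically, but only one can be skimmed in a code review:

```typescript
// The kind of output that works but nobody wants to maintain:
const shippingLabelClever = (d: number) =>
  d <= 0 ? "invalid" : d <= 2 ? "express" : d <= 7 ? "standard" : d <= 30 ? "economy" : "freight";

// The same logic, written to be read:
function shippingLabel(days: number): string {
  if (days <= 0) return "invalid";
  if (days <= 2) return "express";
  if (days <= 7) return "standard";
  if (days <= 30) return "economy";
  return "freight";
}
```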

How I use it in practice

My workflow with AI has 5 rules:

  1. I think about the solution before asking for anything. If I don’t know how to solve the problem, AI will generate something I can’t evaluate. First I understand what needs to be done, then I use AI to speed up execution.
  2. I use AI to implement parts I already know how to solve. The “how” I already know — I want AI to do the typing.
  3. I review everything AI generates before committing. Line by line. If I don’t understand why a line is there, I rewrite it.
  4. I never use AI for architecture decisions or critical business rules. Those decisions need human context.
  5. I use it heavily for research and exploring APIs I don’t know. It’s like having a pair programmer who already read the docs.

AI speeds up execution. The reasoning is still mine.

What this means for the client

Projects get done faster. What used to take 6 weeks might take 4. Not because quality dropped, but because time spent on mechanical tasks decreased.

The Mariah app (order management) shipped in under 4 hours. Part of that is experience with the stack (React + Supabase + Tailwind). Part is AI speeding up the boilerplate. Both together.

But watch out: a junior dev with AI doesn’t become a senior dev. They become a faster junior dev, which is dangerous — they produce more code with less judgment. The real gain is a senior dev using AI, because experience and judgment are still what separates code that works from code that survives in production. For the client, this means hiring an experienced dev who uses AI is the best scenario — it combines speed with quality decision-making.

The future (no hype)

AI will get better. Models will understand more context, generate fewer bugs, and handle complexity better. But it will still need someone who understands the business problem, makes technical decisions with real trade-offs, and takes responsibility for the outcome.

The tool changes. The shape of the work changes. The responsibility doesn’t.

The dev who will do well in the coming years is the one who treats AI as a productivity tool, not as a substitute for knowledge. Knowing how to ask AI is useful. Knowing how to evaluate what it delivers is what separates a professional from an amateur. And knowing when not to use it is what prevents the most expensive problems.

Next step

Need a dev who truly delivers?

Whether you need a one-time project, team reinforcement, or a long-term partnership, let's talk.

Chat on WhatsApp

I reply within 2 hours during business hours.