Vibe Coding: Thoughts

Last week I did some vibe coding & developed a small application to help me view my team’s progress on our GitHub repo. Here are some thoughts on the vibe coding process.

TL;DR: Treat the AI like a junior developer

  • Give explicit instructions
  • Instruct it to make small changes at a time & verify each one
  • Ensure it has understood you (read the acknowledgement & summary)
  • Trust but verify (Review the generated code)
  • Remind it to focus on its task & to use automation already available in the IDE
  • AI is great at code generation. It is not as good at maintaining existing code (today)

Best practices

  • Add tests early in the vibe coding process. Ideally, follow TDD
  • Include instructions in the prompt to check relevant documentation websites. This makes it more likely the AI actually follows current best practices
  • When you encounter a bug the AI introduced, ‘ask’ it why it introduced the error. This may lead you to better prompting practices.
  • Don’t use it only to generate code as per your instructions. Use it to shore up your flaws. Ex: ask the AI for suggestions to improve the code. The more open-ended the question, the better.

Details

  • The AI is a junior developer who is trying to be a “Yes-man”.

It will implement what you ask without stopping to consider whether it is the right approach. In a script that handled the creation, execution, failures, retries & termination of some background processes, I asked it if replacing the code with the concurrently library would help fix a problem I was encountering. It enthusiastically replaced all the code with the library, fixing one problem & removing all the intended features of the program.
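The kind of hand-rolled logic that got wiped out looked roughly like this — a simplified, hypothetical sketch, not the actual script (the real one managed OS background processes): a bounded retry wrapper around a failing task, which is exactly the sort of intended behaviour a wholesale library swap silently discards.

```typescript
// Simplified, hypothetical sketch of the script's retry behaviour:
// run a task, retry on failure up to a budget, then terminate by rethrowing.
function runWithRetries<T>(task: () => T, maxAttempts: number): T {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return task(); // execution
    } catch (err) {
      lastError = err; // failure: record it & retry
    }
  }
  // termination: give up once the retry budget is exhausted
  throw lastError;
}

// A flaky task that fails twice before succeeding on the third attempt.
let calls = 0;
const result = runWithRetries(() => {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "done";
}, 5);
```

None of this is exotic, but it is *intended* behaviour — and a "yes-man" replacement with a generic library drops it without flagging the loss.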

  • The AI is like an over-optimistic junior developer who jumps to new & complicated code where simple code will do the job.

Prompts like “Fix the compilation issues in this TypeScript code” cause it to go into an optimisation loop. I’d start with 12 compilation issues & end up with 20, with the Agent stuck in a loop before giving up.

Fix: explicitly ask it to “Fix one TypeScript compilation issue at a time. Don’t do any optimisations. After each fix, run the build command to verify that no new issues have been introduced”

  • The AI is like a junior developer with a very short, “Memento”-like memory.

Sometimes the AI does not do what it explicitly says it will do, or says it did something without actually doing it.

  • Example 1. In one prompt, I asked it to add a ‘kill_script’ only as a fallback. It acknowledged the instruction but then replaced the original code with the kill_script instead. Takeaway: read the summaries.

  • Example 2. I asked it to identify code duplication in a couple of files. It found the right duplicated code & fixed it. I then asked it to go ahead & make the same fix in all the files I’d identified. It claimed it had made the change everywhere. However, it had not updated several files even though they were in the original context, and in a few places it only partially implemented the change (it did not remove the redundant code).
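To illustrate what the deduplication fix looks like when done fully (names here are hypothetical, not from the real repo): the copy-pasted snippet is lifted into one shared helper, and every call site — in every file — must be reduced to a call to it. The failure mode above is the AI doing the first half but only some of the second.

```typescript
// Before: this date-formatting snippet was copy-pasted into several files.
// After: one shared helper (hypothetical name) that each file imports instead.
function formatCommitDate(iso: string): string {
  const d = new Date(iso);
  // YYYY-MM-DD, the format the progress view displays
  return d.toISOString().slice(0, 10);
}

// Each previously-duplicated call site should now reduce to a single call:
const label = formatCommitDate("2024-05-01T12:34:56Z");
```

Verifying this kind of change means checking every file the AI claims it touched, not just the first one or two.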

  • Minor quibbles:

    • The Agent adds useful comments when generating code but arbitrarily removes existing comments in code it is merely replicating as is. This is likely a bug introduced when it first parses the code & dismisses comments as irrelevant.
    • It makes unnecessary changes (like linting fixes) that it could delegate to the IDE, consuming tokens on tasks I didn’t ask it to perform
