Up to 90% of my code is now generated by AI
How programming is changing with the development of generative artificial intelligence
The field of AI only fully caught my attention after the release of ChatGPT, although as a senior full-stack developer I had been using GitHub Copilot and Tabnine since 2021, which helped me write code faster. Today, with the help of large language models, I generate up to 90% of the code for my projects, and the way I create software has changed.
Let me explain what this means.
What can LLMs do today?
Today's LLMs have limited reasoning capabilities, restricted knowledge, lack access (by default) to up-to-date information, and often cannot handle tasks that are obvious to us, like telling whether 9.11 or 9.9 is the larger number.
I personally don't know who's right — Geoffrey Hinton, who claims that LLMs are intelligent, or Yann LeCun, who says that LLMs possess only primitive reasoning capabilities. In practice, it doesn't matter much to me, and I don't spend much time thinking about it.
What I care about and focus on is whether I can take advantage of the opportunities offered by the current GenAI and the potential of next-generation models and tools.
I spoke with Greg about AI in general, and he concluded that it's really difficult to find creative, often simple use cases that make a difference. The interesting part is that this isn't unique to AI: we've faced the same issue with programming, no-code tools, and automations.
Creativity comes from leaving ego behind
Claims like "AI will take our jobs" or "LLMs are useless" may be correct in some sense, but they share a common trait: they represent an attitude that prevents us from exploring available possibilities.
I don't know if AI will take our jobs, if LLMs are a "scam", or if AI is a bubble in general. Maybe programmers, designers, and writers (skills I have) will be entirely replaced by AI. Whichever scenario ultimately comes true, I have no influence over it.
At the same time, I do have influence over how I use the opportunities available today and to what extent I explore them. So instead of speculating or worrying about the future and things I can't control, I act fully in the area I can.
Creativity comes from understanding
It's not difficult to notice that LLMs have recently been occupying a large part of my attention. Despite my enthusiasm for technology in general, which has fascinated me since my earliest years, I try to look at it from various perspectives, to the best of my intellectual abilities. That means learning techniques for working with LLMs, but also taking a critical look at their weaknesses.
My sources of knowledge about LLMs include:
Anthropic Research, materials published by the team behind Claude 3.5 Sonnet, which is, at the time of writing these words, the best available LLM
OpenAI Research, materials published by the creators of ChatGPT, who are probably the furthest along in the development and understanding of large language models
Stanford Online, a YouTube channel (among other things) with recordings of lectures and presentations that allow for a deep understanding of the mechanics and architecture of large language models
Yann LeCun, head of Meta AI, who speaks openly about the current problems of large language models and the long road still ahead of us
Andrej Karpathy, former head of Tesla's Autopilot, involved with OpenAI in recent years and currently focused on his own ventures
Georgi Gerganov, creator of llama.cpp and whisper.cpp, exploring the possibilities of open language models
Awni Hannun, an Apple researcher involved in the development of MLX and in running open models directly on device
Pliny the Prompter, who breaks the safeguards of large language models and the tools that use them
3Blue1Brown, a YouTube channel featuring high-quality videos, including content in the area of generative AI
Kyrtin Atreides, who openly criticizes LLMs, describing them as the biggest scam in history, yet still finds some narrow use cases for them
Even though each of the mentioned sources and the people behind them provide me with a wealth of valuable knowledge, my own experience has undoubtedly taught me the most.
Creativity comes from experience
Building tools and applying LLMs in applications, automations, and direct conversations with the model have shown me their capabilities. Combining this with knowledge from the "source" has helped me grasp many of the principles underlying the technology I work with.
Some examples include:
Bypassing limitations of current LLMs and expanding their capabilities
Extending their base knowledge and tackling challenges related to it
Connecting them with real-world scenarios, which exposes both their value and their problems (see the sketch after this list)
Preparing content, tools, and your environment for the presence of AI
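To make the third point more concrete, here is a minimal sketch of what connecting an LLM to a real-world scenario can look like, using OpenAI's function-calling API. The get_weather tool, the model name, and wttr.in as a data source are illustrative choices for this example, not what I run in my own projects.

```typescript
// A hypothetical "get_weather" tool exposed to the model.
const tools = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Get the current weather for a given city",
      parameters: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    },
  },
];

// One round-trip to the OpenAI chat completions API.
async function complete(messages: any[]) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "gpt-4o", messages, tools }),
  });
  const data: any = await res.json();
  return data.choices[0].message;
}

async function run() {
  const messages: any[] = [
    { role: "user", content: "Is it warm in Krakow right now?" },
  ];
  let message = await complete(messages);

  // If the model decided to call our tool, execute it and hand the result back.
  if (message.tool_calls) {
    messages.push(message);
    for (const call of message.tool_calls) {
      const { city } = JSON.parse(call.function.arguments);
      // wttr.in is used here only as an easy, keyless weather source.
      const report = await fetch(`https://wttr.in/${city}?format=3`).then((r) => r.text());
      messages.push({ role: "tool", tool_call_id: call.id, content: report });
    }
    message = await complete(messages);
  }

  console.log(message.content);
}

run().catch(console.error);
```

The point isn't the weather; it's that the model stops being a text box and starts acting on real data, which is exactly where both its value and its problems become visible.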
I've already written a few words about some of these experiences, and it appears I'll be writing about more of them here, so if you'd like to learn about them, subscribe to our newsletter.
Practice
So now you know my context, and we can go back to the title of this article and how it came about that almost all of the code in my apps is now generated.
Rule #1: Availability
From the beginning, it was clear to me that LLMs need to be available to me all the time. I'm not talking about using ChatGPT in a browser or GitHub Copilot in an IDE, but about a scenario where an LLM is integrated with my laptop and phone, and executes tasks through automation workflows or a custom back-end app I've developed.
One example you can personally experience is the Alice app. This interface allows you to chat with an LLM, customize it with Snippets, or connect to external services using custom Remote Snippets you can create on your own — and it makes the LLM available across your Mac or PC.
To create this project, I used technologies such as Rust and Node.js, along with the frameworks Tauri and Svelte. When I started it, I only knew Node.js well, plus a bit of Svelte. The other tools were entirely new to me. So how was it possible that, as a solo developer, I was able to create such an app?
Well, you might guess that LLMs helped me with that. I can reach out for their help whenever I need it. Not only do I receive assistance, but I've also learned a lot about their behavior, capabilities, and limitations.
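To show what this kind of availability looks like under the hood, here's a minimal sketch of the pattern behind such a back-end: a tiny local endpoint that any automation, shortcut, or script can call. This is not Alice's actual code; the port, model name, and overall shape are placeholder choices for the example.

```typescript
import http from "node:http";

// One call to the Anthropic Messages API (model name is a placeholder).
async function askClaude(prompt: string): Promise<string> {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": process.env.ANTHROPIC_API_KEY!,
      "anthropic-version": "2023-06-01",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-20240620",
      max_tokens: 1024,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data: any = await res.json();
  return data.content[0].text;
}

// Any POST body becomes a prompt; the answer comes back as plain text.
http
  .createServer(async (req, res) => {
    let body = "";
    for await (const chunk of req) body += chunk;
    res.writeHead(200, { "content-type": "text/plain" }).end(await askClaude(body));
  })
  .listen(3000);

// Once this runs in the background, `curl localhost:3000 -d "your prompt"`
// makes the model reachable from shell scripts, keyboard-shortcut apps,
// and phone automations pointed at the machine.
```

Something this small is enough to stop thinking of the LLM as a website you visit and start treating it as a service your whole environment can use.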
Rule #2: Customization
Content generated by an LLM out of the box is sometimes useful, but usually won't meet our needs. That's why it's worth spending time customizing the system instruction or, better yet, using options that let us create at least a few of them, each tailored to our needs.
For example, one of the tools I use is promptfoo.dev, which allows me to automatically test prompts that I use for my AI agents. Promptfoo is a relatively new tool that is developing rapidly. That's why LLMs either don't have knowledge about it, or their knowledge doesn't include the latest features.
In my tests, the LLM generated a valid configuration file that used my preferred model: I created a snippet that modified the model's behavior with my own rules and provided the Promptfoo documentation as context.
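To give you an idea of what such a snippet boils down to under the hood, here's a minimal sketch: a reusable system instruction carrying my rules, with pasted documentation injected as context. The rule text, the {{docs}} placeholder, and the function name are made up for this example.

```typescript
// A reusable "snippet": my rules plus a slot for pasted documentation.
const SNIPPET = `You are a code generator. Follow these rules strictly:
- respond with a complete configuration file and nothing else
- prefer the latest syntax shown in the documentation below

<documentation>
{{docs}}
</documentation>`;

async function generateConfig(docs: string, task: string): Promise<string> {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": process.env.ANTHROPIC_API_KEY!,
      "anthropic-version": "2023-06-01",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-20240620",
      max_tokens: 2048,
      system: SNIPPET.replace("{{docs}}", docs), // the customization lives here
      messages: [{ role: "user", content: task }],
    }),
  });
  const data: any = await res.json();
  return data.content[0].text;
}

// Usage: paste a fragment of the Promptfoo docs and describe the config you need.
// generateConfig(promptfooDocsExcerpt, "Write a promptfooconfig.yaml that tests my agent prompt");
```

Providing the documentation as context is what makes this work for tools like Promptfoo that are newer than the model's training data.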
Rule #3: Tools
I mentioned that when it comes to availability, I'm not talking about GitHub Copilot, which is, in fact, a good tool. Still, today it's much better to use Cursor as your IDE: it has built-in AI features like Copilot++, inline generation, and chat.
Cursor allows me to select code and then write using natural language to specify the changes needed. The best part is that I can reference multiple files, directories, the entire codebase, or even external documentation to provide context for the completion.
Other IDEs, such as JetBrains' IntelliJ, are following a similar path to Cursor, but right now there's not much of a comparison, and I hope that will change soon.
Meanwhile, there is one more tool that deserves special attention: Aider. In this case, it's enough to describe the change we want to make in the project, and Aider will independently identify the files that require editing and then make the changes itself, asking for our confirmation at each stage.
Aider works great in practice; even its early versions were described by users as "suitable for production applications". However, I think everyone should evaluate this for themselves, especially since getting the tool running is simple.
Rule #4: Use Creativity
Not without reason, I previously spoke about creativity resulting from experience and quality sources of knowledge. The value derived from LLMs is not directly related to the model itself, the prompt, or the tools you use, but rather to the way you work with them. Sometimes it's challenging to come up with your own ideas for using new tools. What I do is try to connect them with something I already do or know.
Start by setting up your social media and newsletter feeds with the best sources of knowledge, ideas, and inspiration related to generative AI. Focus on the people and companies behind the technologies, or on creators who are actually doing the work. And then... just do your own thing, but explore paths you've never walked before.
90%
If you look at the content of this post so far, you can clearly see that my entire professional environment is focused on generative AI, and my attention is concentrated on blazing trails and seeking new opportunities, either by drawing inspiration from others or through my own experiments.
Some key points:
I don't delegate my responsibility to AI
I'm constantly updating my knowledge with the latest information about LLMs and techniques for working with them
I work with the best models available via API, including Claude 3.5 Sonnet at the time of writing this
I use the best available tools on the market and constantly scan for new solutions using ProductHunt and X
I learn new technologies and tools with an LLM, spending time chatting as if I were speaking with a teacher
I generate code that is within my understanding or slightly exceeds my current knowledge or skills
I have a habit of reaching for AI as the primary source of information, and I'm using Perplexity, Google, or StackOverflow less and less frequently
When the LLM lacks knowledge on a given topic, I provide it by pasting fragments of documentation, code, or examples from Issues on GitHub as context for the query
It's obvious to me that LLMs have limited contextual knowledge about me, my project, and the features I need to implement, and that their effectiveness largely depends on the way I describe my problem
It's clear to me that LLMs have limited reasoning abilities. For the hardest problems, I break them into smaller parts or use LLMs to guide me through them rather than solve them directly (a short sketch of this pattern follows the list)
I don't use LLMs daily; I use them all the time. As a result, up to 90% of my code is now generated. My focus has shifted from typing code and hunting typos to actually shaping the software.
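To illustrate the decomposition habit from the list above, here's a minimal sketch of the pattern: ask the model for a plan first, then solve each step separately with the plan as context. It assumes an askClaude helper like the one in the availability sketch earlier; the prompt wording is just one way to phrase it.

```typescript
// Assumes an askClaude(prompt) helper like the one in the availability sketch.
async function solveHardProblem(problem: string): Promise<string[]> {
  // Step 1: ask the model for a plan instead of a solution.
  const plan = await askClaude(
    `Break this problem into 3-5 small, independent steps, one per line:\n\n${problem}`
  );

  // Step 2: solve each step separately, with the overall plan as context.
  // Every call stays well within the model's reasoning comfort zone.
  const solutions: string[] = [];
  for (const step of plan.split("\n").filter(Boolean)) {
    solutions.push(
      await askClaude(`Overall plan:\n${plan}\n\nSolve only this step:\n${step}`)
    );
  }
  return solutions;
}
```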
I hope you find value in this article,
Adam