Jack Rosen

Skeptic to Believer: My Journey Through AI

Published Feb 8, 2026

#engineering #ai #agentic-coding

9 min read


When I first started hearing about AI coding tools, I didn’t believe they would help me; if anything, I expected them to hurt my output. Over time, I came to understand how much they can accelerate productivity when given the right guardrails.

This is not a guide on how to use AI “correctly”, but a record of how my view of AI changed as I explored different tools while they evolved. Here is my journey.

ChatGPT

Like the rest of the world, I started really hearing about how AI was going to change everything in early 2023 with the release of ChatGPT. Everyone was telling me it was the best thing ever: you could ask it any question and it would answer. It was also very confident in its answers, which made people really believe it. As with most new-technology announcements, I viewed it with healthy skepticism. People were saying it would take over all of our jobs and that it was the biggest existential threat to society. My first question was: “Can it really take over my job?”

To test this out, I started asking it to write me some code. My first question: “Please write me a client that makes a network request to https://www.google.com in Swift and return it back as a String”. While I can’t find the exact code, it looked something like this:

func performRequest() -> String? {
    guard let url = URL(string: "https://www.google.com") else {
        return nil
    }

    let task = URLSession.shared.dataTask(with: url) { data, response, error in
        if let error = error {
            return nil
        }

        guard let data = data else {
            return nil
        }

        if let responseString = String(data: data, encoding: .utf8) {
            return responseString
        } else {
            return nil
        }
    }

    task.resume()
}

On the surface this code looks plausible, but an experienced Swift engineer can likely spot the issue: the function promises to return a String synchronously while making the request asynchronously (the return statements inside the completion handler won’t even compile). I said “that clearly won’t compile”, it apologized, and it spit out roughly the following code:

func performRequest() -> String? {
    guard let url = URL(string: "https://www.google.com") else {
        return nil
    }

    let dispatchGroup = DispatchGroup()
    var returnString: String? = nil
    dispatchGroup.enter()
    let task = URLSession.shared.dataTask(with: url) { data, response, error in
        defer { dispatchGroup.leave() }
        if let error = error {
            return
        }

        guard let data = data else {
            return
        }

        if let responseString = String(data: data, encoding: .utf8) {
            returnString = responseString
        } else {
            return
        }
    }
    task.resume()
    dispatchGroup.wait()
    return returnString
}

Well, now we had working code, but you would rarely want to block on an asynchronous network request; that is an easy foot-gun in an iOS app, since blocking the main thread this way freezes the UI. After this, I felt the technology wasn’t ready, and I definitely wasn’t ready to trust it. While I would use it for some basic research, I didn’t trust it to write code.
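For contrast, here is a sketch of how an experienced Swift engineer would write this today without blocking: Swift concurrency (async/await, Swift 5.5+) suspends instead of blocking a thread. This is my illustration, not anything ChatGPT produced at the time.

```swift
import Foundation

// Non-blocking version using async/await. The caller awaits the result
// instead of the function blocking a thread with DispatchGroup.wait().
func performRequest() async throws -> String? {
    guard let url = URL(string: "https://www.google.com") else {
        return nil
    }
    // Suspends this task while the request is in flight; no thread is blocked.
    let (data, _) = try await URLSession.shared.data(from: url)
    return String(data: data, encoding: .utf8)
}
```

The signature change is the whole point: by marking the function `async`, the asynchrony is visible to the compiler and the caller, instead of being hidden behind a synchronous return type.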

GitHub Copilot

About a year later, in early 2024, friends started telling me about GitHub Copilot. This was very appealing, since it fixed the biggest issue I had found when coding with ChatGPT: needing to context switch between apps. ChatGPT didn’t have enough context on what I was building, so I consistently spent more time explaining what I was doing than it would have taken to do the work myself. GitHub Copilot was integrated directly into my IDE, so there was no context switching, and it finally understood what I wanted. I found that I could add a comment describing what I wanted a function to do, and after a few seconds of waiting, a function would appear.

Was the code perfect? No.

Was it able to get the concept of what I wanted and take a lot of the boilerplate out of what I needed to do (especially when writing Java)? Yes!

This was a revolutionary leap for me. I could break my work into separate chunks and AI could write about 25% of my code. I started trusting AI to build the structure of what I wanted, though I still needed to fill in lots of gaps.
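To make the comment-to-function workflow concrete, this is the kind of exchange I’m describing. The comment is what you type; the rest is the sort of boilerplate Copilot would draft (an illustrative reconstruction, not its actual output; the User type is invented for the example).

```swift
import Foundation

struct User: Codable {
    let id: Int
    let name: String
}

// Decode an array of User values from JSON data, returning an empty
// array if decoding fails.
// ^ Typing a comment like the one above was often enough for Copilot
//   to suggest a function body like the one below.
func decodeUsers(from data: Data) -> [User] {
    (try? JSONDecoder().decode([User].self, from: data)) ?? []
}
```

It’s mundane code, which is exactly why having it autocompleted saved so much time.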

Cursor

After some time working with GitHub Copilot, the concept of “Agentic Coding” really started to take off, with Codex, Claude Code, and Cursor arriving around the same time. I started with Cursor. This was another leap forward, with the entire IDE integrated directly with the agent. At first I gave it very minor tasks, such as “Add a very simple function”. And honestly, it went very poorly. The agent wanted to work on things I hadn’t asked for and started making my work much harder. The first step I took was adding a Cursor rule: “Before doing ANYTHING, you need to ask me. I understand what I want much more than you do, so let’s go step by step together”. While it did not always listen, Cursor started to feel much more like a pair programmer than a rogue agent designed to ruin my workflow.

Cursor Rules

While Cursor was able to write code pretty well at this point, I had a few different issues: it would write code that skipped our code generation library, ignored our architecture, and drifted from our coding standards.

To work through those problems, my team and I started writing Cursor Rules. These simple .mdc files unlocked even more potential for Cursor. We no longer needed to explain that we should use our specific code generation library, or explain our architecture over and over. Instead, we could focus our efforts on the task at hand, not on teaching the AI how to code every time.
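For reference, a rule file looks roughly like this. The frontmatter fields shown (description, globs, alwaysApply) are the ones Cursor documents for .mdc rules, but the description, glob pattern, and rule text here are invented for illustration; they are not my team’s actual rules.

```markdown
---
description: Project conventions for Swift code
globs: ["Sources/**/*.swift"]
alwaysApply: false
---

- Before doing ANYTHING, ask me. Let's go step by step together.
- Use our code generation library for API clients; never hand-write them.
- Follow the existing module architecture; do not introduce new layers.
```

Because the rules attach automatically to matching files, the agent gets this context on every task without us pasting it into each prompt.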

At this point, I was able to trust AI to build large swaths of code, though I still needed to validate some assumptions it made, even when I had been explicit. I believed we had reached the pinnacle of Agentic Coding. I was very wrong.

Claude Code

After Cursor, I started transitioning to Claude Code in the CLI. At first, I felt like I was working much slower. The CLI UI/UX was awful for me, making reviewing code much harder and slower. I struggled to understand the changes the agent was making, and my experience with Cursor had taught me how important reviewing the code was. After a few weeks of struggling with Claude Code, I decided to go back to Cursor. Then something interesting happened: I realized I was fighting Cursor to follow the coding standards we had set. I had to be much more explicit in my requests or it would make simple mistakes.

Even though the UI/UX of Claude Code was more difficult to work with, I had gotten used to it asking clarifying questions and having a better understanding of how the systems of our backend and app work together. With Claude Code, I could give rather vague directions and it would scan the code, ask questions, create a strong plan, and implement it rather well. While I couldn’t review each subsection of code as easily, I realized I didn’t need to, since it was much better at following our standards. After I finished a task (having Claude Code write tests and make sure they all passed), I would read its summary to confirm its approach made sense, push the code to GitHub, and then review it as if I were a PR reviewer.

Once I started trusting the code Claude Code could write, my productivity roughly tripled. One example was a code generation tool I was working on. Before AI tools, a comparable PoC took me roughly 3 days, plus 2 weeks to get it ready for production. With Claude Code, I had a PoC in a few hours and the tool fully in production after 2 more days of work. I could give Claude Code a task, have it look at the tech plan we had written, and it would understand how its part fit into the whole system and implement it quickly.

This was the big shift for me:

My mental model when working with AI suddenly changed. Instead of supervising it to fix mistakes, I was reviewing it to validate its decisions.

Where We Go From Here

Hopefully, as you have seen from this post, AI tools and Agentic Coding have come a long way in roughly 3 years. We started with a separate app you had to copy and paste code and context into to get any help, and we are now at a place where you can hand relatively complex tasks to your CLI and it will write functional code. Right now, the whole process is still designed with a human in the loop as the orchestrator of tasks. But do we always need that? The bet that tools such as Gas Town are making is that we do not. In 2023, if you had told me these tools could build a relatively functional C compiler, I probably would have laughed in your face. Now that it is happening, I believe we are in the early stages of AI revolutionizing the software engineering industry. My mindset around AI has changed extensively, going from inherent skepticism to trust that still needs validation. As these tools improve, that validation will probably matter less and less.