When I first tried ChatGPT, it was a lot of fun to play with. But when I tried to do something meaningful, it let me down. I started to wonder: was there too much hype around ChatGPT?

Side note: I wonder if all the “bros” who were driving the excessive hype for NFTs and crypto have moved to the AI space.

I didn’t give up on ChatGPT, though. Turns out I wasn’t a very good prompt writer. After many attempts and plenty of experimentation, I think I’ve got the recipe down.

Now, it’s my personal assistant.

First, what was my big mistake? I wanted ChatGPT to “think” for me. Instead, I should have focused on having it “do” things for me.

I was feeding it poorly written prompts that were often too short, and I expected magical results. But poor instructions mean poor results.

Instead of a second brain, ChatGPT became my intern. And as my prompts got better, my intern’s skills got better and better. Like any intern, if you don’t train them properly, they’ll come across as incompetent.

I learned to avoid a few things along the way. My intern is terrible at coming up with ideas from scratch. Asking it to write an email or draft a performance review with no other context forced it to make too many decisions on its own, and too many open decisions led to poor output.

It took me a while to figure this out, but I realized that ChatGPT needs three constraints in every prompt. My “intern” needs a specific objective, a specific format for the output, and a list of things to avoid.

Here’s the prompt template:

Your objective: [write objective here]
The format of the output: [example template]
List of things to avoid:
* [thing to avoid]
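
For instance, a filled-in version might look like this (the task and details here are hypothetical, just to show the three constraints in action):

Your objective: Summarize the meeting notes below into a status update for my manager.
The format of the output: Three one-sentence bullet points, followed by a single “Next steps” line.
List of things to avoid:
* Jargon and buzzwords
* Anything longer than 100 words
* Details that aren’t in the notes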

I also realized that prompt writing is an iterative process. You’re not going to get it right the first time. I know I didn’t. But it’s an investment in both you (you’ll become a better prompt writer) and your “intern”.

Once you have a working, reliable prompt, you can get fantastic results every time.