
From the beginning, I had high expectations.
Not just that it would be a good tool—
I really thought it would be like hiring someone.
“This is it,” I thought.
“The moment where I can finally just say things,
and things get done.”
Efficiency matters to me.
Because I’m lazy.
That’s why I’ve always relied on systems.
Structures that run without me,
routines that cut repetition,
rules that replace human involvement.
GPT felt like it could design all of that for me.
I had it summarize planning documents,
write emails to clients,
reformat reports,
and even generate new ideas.
At that moment, it didn’t just feel like a tool to reduce work—
it felt like a tool that could replace a person.
That wasn’t just a hope.
It was a small conviction.
But it didn’t last.
Yes, it followed instructions.
But nothing beyond that.
It couldn’t hold context,
forgot what it had just said,
and dropped the thread whenever I needed consistency.
It understood the words,
but it didn’t understand me.
It wasn’t that my expectations were shattered—
I simply realized I had overestimated it.
AI can return calculated outputs,
but it doesn’t follow my internal logic.
Still, in that shortcoming,
I started to see how it could actually be used.
It wasn’t all-powerful.
But when I gave it a single, specific task,
it performed reliably.
It worked like a manual—
not adaptive, but repeatable.
That’s when I stopped looking for infinite potential,
and started using it in clearly defined ways.
That was my first real lesson from AI:
“Don’t expect it to do everything.
Give it a clear frame, and make it repeat.”
My expectations dropped,
but in return, my direction became clearer.