
🧪 Quicker failures lead to better questions.

That is probably the biggest thing AI tools changed in my research workflow.

Recently, while working on the concept-encoding part of MrCogito, I noticed something important:

AI did not give me the answer. But with Cursor, my custom skills, and agents, it helped me ask much better questions, much faster.

Instead of getting stuck in the usual loop of “try another variant, run another experiment, hope for a better metric”, I could move much faster between code → experiment logs → notes → papers → implementation ideas.

That changed the quality of my thinking.

I stopped asking: “Which tweak should I try next?”

And started asking: “What is the model actually learning?” “Why does this result look better on paper but not in meaning?” “Which shortcut is the architecture exploiting?”

That deeper understanding is what actually moved the project forward.

It pushed me away from cosmetic fixes and toward better directions, like rethinking the objective, changing how the bottleneck is used, and exploring new solutions that I would probably have reached much later otherwise.

For me, this is the real value of AI in research: not replacing judgment, but helping me reach better questions sooner.

And better questions often move a project forward more than one more clever trick.

I wrote a fuller breakdown here: https://ai.ksopyla.com/posts/better-failures-better-questions/

🧠 Has AI changed the way you think about your work, or just the speed at which you do it?

#AIResearch #MachineLearning #Cursor #OpenScience #DeepLearning