AI and Conformity
Will AI lead to the (relative) success of conformists versus idiosyncratic people?
I’ve been using Cursor for coding at work for the last three months or so. I find it useful, but also insanely frustrating. Not a day goes by without my swearing at it. I find that it just doesn’t understand me.
I mean - there is a reason you have formal languages, such as computer code, where everything is precisely defined without any ambiguity. The whole hope of vibe coding is that you can dispense with this formality and communicate in imprecise natural language, which the LLM takes and magically converts into precise code.
Now, notice that the LLM is creating information in this process. It is taking imprecise instructions and converting them into precise instructions. The precision is being added based on assumptions it is making. These assumptions are based on its training, which is “all the data on the internet”.
Putting it another way, the bridge between your imprecise instructions to the LLM and the precise instructions the LLM gives the computer is the “average of how the internet would build this bridge”.
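Loosely, you can formalise this. What the LLM effectively returns is something like

    c* = argmax over candidate code c of P(c | your prompt)

with the probability estimated from its training data. If the code you actually wanted sits at or near that mode, the machine reads your mind. If it sits out in the tail, it doesn’t.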
Now, there is a problem here. If you are a conformist, then there is a good chance that the way you would build this bridge would exactly tally with how the “average of the internet” (or “wisdom of crowds”, if you like it that way) would build the same bridge. However, if, for whatever reason, you think differently, then it can make for an entirely frustrating experience.

I suspect this is one of the reasons for my constant frustrations with Cursor. When it comes to Data Science, which is my primary domain, I’m entirely self-taught. I taught myself through random jobs in management and finance, and by reading blogs (rather than books or MOOCs). This means the way I solve problems, or code, is fairly different from that of the large mass of data scientists.
So when I do something halfway, decide to take the LLM’s help, and give it imprecise instructions on how to finish it, it interprets those instructions the wrong way. The information it creates is NOT the information that I want it to create.
People tell me that I should prompt it better. I try my damned hardest, but remember, again, that the prompt is an imprecise instruction - if I wanted to be precise I’d be giving instructions in R (or Python, or C++), not in English. In some cases the delta between what the LLM does and what I want it to do is so large that it is impossible for the “resultant” of its training and my prompt to achieve what I want.
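To make that delta concrete, here is a hypothetical illustration (the task, the data and both versions are made up for this post, not anything Cursor actually produced). The same imprecise instruction - “summarise sales by region” - can be filled in the “average of the internet” way or in a more idiosyncratic way, and nothing in the English sentence tells the LLM which one you meant.

    # Imprecise instruction: "summarise sales by region"
    # (hypothetical data and implementations, purely illustrative)
    import pandas as pd

    sales = pd.DataFrame({
        "region": ["N", "S", "N", "E", "S"],
        "amount": [120, 80, 200, 50, 90],
    })

    # The "average of the internet" reading: a standard pandas groupby
    conformist = sales.groupby("region")["amount"].sum().reset_index()

    # A plausible idiosyncratic reading: accumulate into a plain dict,
    # the kind of thing a self-taught coder might reach for instead
    idiosyncratic = {}
    for region, amount in zip(sales["region"], sales["amount"]):
        idiosyncratic[region] = idiosyncratic.get(region, 0) + amount

Both are legitimate answers to the same sentence. An LLM trained on the internet will almost always give you the first; if your half-finished code is shaped like the second, the “precise” code it generates won’t slot in.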
On a broader note, I’m starting to wonder if LLMs will make us all conformists. If you look at general LinkedIn gyaan nowadays, one thing people keep saying is that your success in your career now depends on how well you are able to use LLMs. From my arguments above, it is clear that it will be easier for you to use LLMs if your thinking is like that of the “average of the internet”. Which means that you are likely to be able to use LLMs much better if you are a conformist!
I’ve written about this once before, about how LLMs are leading to a kind of monoculture.
I feel like things might get worse than that - the cost of diversity is no longer just social disapproval (which, to be honest, is largely going away). It is that if you are not like everyone else, your usage of important tools (such as LLMs) will not be as efficient. And so you have an incentive to work like everyone else, so that LLMs can easily take your imprecise instructions and convert them into precise ones.
This can also be self-fulfilling. The more people write “conformist code”, the more conformist the average code becomes, and the more LLMs will struggle to deal with idiosyncratic code. That will push even more people to shift to writing conformist code. And so on.
On a final serious note, when people say “soon everyone will be coding in English”, it pretty much refers to “average”, “boilerplate” code. For that kind of code, there is so much data on the internet, and the requirements are so common, that the LLM can easily extend your imprecise instructions and make them precise.
For anything idiosyncratic, though, you will have to use your own brain.
What do you think?
I want to comment on the “LinkedIn gyaanis” stating that one needs to be proficient at communicating with LLMs (prompt engineering) to secure their job. As you pointed out, this is not how it’s going to be. If I want an LLM to make something technical for me, I’ll have to add technical details (terms, etc.) to make it the way I want, or it’ll just slide towards making the average thing. Now, if I have to add technical details (to give high-level instructions with sufficient detail), then it just creates a new class of technical expertise, which I doubt anyone can acquire except the people who are already good at these things, as they are the ones who know their models inside out, so they can simply name a thing and point the LLM to fix it.
Another thing is that LLMs’ understanding of prompts is likely to improve, so they can understand what one is trying to convey without the user being too precise. But this can only take us so far on technical projects unless we have something like AGI, in which case everyone is unemployed no matter how good they are at prompting.
Summarising: prompting is not going to save anyone. If there’s anything that can save us, it will be a deep understanding of complex systems, but even that will be of no value in a post-AGI world.