Discussion about this post

Lokesh Parihar:

I want to comment on the "LinkedIn gyaanis" (self-styled pundits) claiming that you need to be proficient at communicating with LLMs (prompt engineering) to secure your job. As you pointed out, this is not how it's going to play out. If I want an LLM to build something technical for me, I have to supply technical details (terminology, constraints, and so on) to get it the way I want; otherwise it will just default to producing something average. But if supplying those details is required to give high-level instructions with sufficient precision, then prompting simply becomes a new class of technical expertise, and I doubt anyone can acquire it except the people who are already good at these things. They are the ones who know their systems inside-out, so they can simply name a thing and point the LLM at what to fix.

Another point: prompt understanding by LLMs is likely to improve, so models will grasp what a user is trying to convey without the user being overly precise. But that can only take us so far on technical projects, short of something like AGI, and in that case everyone is unemployed no matter how good they are at prompting.

Summarising: prompting alone is not going to save anyone. If anything can save us, it is deep understanding of complex systems, though even that will be of no value in a post-AGI world.

