i trust LLMs too much. i made the mistake of not checking how i computed AUROC for the decoder models.
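for reference, the check i skipped is small. here's a sketch of AUROC from first principles (the probability that a random positive outscores a random negative, ties counting half), plus the classic way to get it wrong: feeding thresholded predictions instead of raw scores. the labels and scores below are made up for illustration, not from my actual experiment:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney pairwise formulation:
    fraction of (positive, negative) pairs where the positive
    gets the higher score; ties count as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

labels = [0, 1, 0, 1]
scores = [0.2, 0.45, 0.55, 0.9]   # raw model scores (continuous)
hard   = [0, 0, 1, 1]             # same scores thresholded at 0.5

print(auroc(labels, scores))  # 0.75 with raw scores
print(auroc(labels, hard))    # 0.5 with thresholded predictions
```

the point: AUROC is a ranking metric, so collapsing scores to hard 0/1 predictions before computing it silently changes the answer. a quick hand-check like this against whatever the LLM wrote would have caught my mistake.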
recurse has good thoughts on using LLMs for coding.
I think a useful analogy would be to compare LLMs to e-bikes. If your goal is to get somewhere quickly, an e-bike is definitely advantageous compared to a bike with no e-assist. If your goal is to become a better / stronger cyclist, an e-bike isn’t really going to help you with that.
[It’s similar] with AI tools / LLMs. They can let you explore more at a faster pace and get you places faster. This can be really helpful for familiarizing yourself with topics or concepts or doing exploratory research. And if your goal is just to produce code, they will definitely speed that up. But if your goal is to engage with the work of programming on a deeper level, which I think is essential to become a better programmer, the more you rely on them, the more you rob yourself of the benefits accrued through engaging with the process.
I doubt very much if it is possible to teach anyone to understand anything, that is to say, to see how various parts of it relate to all the other parts, to have a model of the structure in one’s mind. We can give other people names, and lists, but we cannot give them our mental structures; they must build their own. — John Holt