

I’m a bit torn on this. On one hand, LLMs obviously do this: they’re essentially just huge pattern recognition and prediction machines, and anyone probing them with genuinely new, complex problems has made that exact observation already. On the other hand, a lot of everyday things we humans do aren’t that dissimilar from recognizing a pattern and remembering a solution, and doing that step well feels like a reasonable intermediate step toward AGI, not as hugely far off as this article makes it out to be.
A few years back, my department at work went to a cabin for a Christmas celebration. It was very nice of them to organize (and pay for) something like that. But after the mandatory team-building events, I was stressed out and wanted to retreat somewhere less crowded. I ended up spending the rest of the night in the breakfast room with two cats, one of which even fell asleep on my lap. All three of us were just hiding from the commotion. Cat tax:
Her name was “Blume”, which translates to “Flower” :)