I'll believe it when I see it
Jack Clark's recent essay on RSI (recursive self-improvement) for AI R&D being realistic by '26 reminded me of a funny, and I'd claim very relevant, story:
Back in ye olde '17 to '18, when rumors of these large language models were filling the halls, a co-worker pulled me over to show me an early model. He got my attention with the claim that "...soon everyone will have a tutor!".
A BOLD claim, indeed!
I scurried to his desk, excited, and watched him paste in the most convoluted CLI call I had ever seen. He hit enter, and after what felt like an awkward eternity, the response came back … exhibiting, of course, that classic LLM artifact where the model just repeats the same sentence over and over. His terminal filled up quickly and he had to Ctrl+C out.
"I'm not sure what happened, it just worked before! WTH?"
I of course gave him some corporate words of assurance, but I quipped privately to myself: a personal tutor for everyone?! I'll believe it when I see it!
Alas, here we are…
That moment now feels like a fitting metaphor for where we are with RSI and intelligence. Regardless of anyone's personal p(RSI) or exact timeline, it's clearly going to become reality soon, and it will impact all of our work and how we each provide value to society and our communities. Even with how unhelpful current models can still be at times :-).
However, I'm not sure anyone can really guarantee anything of substance about whether it will be good or bad, or what the effects will look like, even if their own efforts are pointed in a certain direction. I think the best we can do is honestly point to the trend line itself, and that's about it?
I think that's at the heart of why I find it all quite unnerving.