"AI alignment" really is just about working out their shared world sci fi collaboration (www.lesswrong.com)
an entirely vibes-based literary treatment of an amateur philosophy scary campfire story, continuing in the comments
I somehow missed this one until now. Apparently it was once mentioned in the comments on the old sneerclub, but I don’t think it got a proper post, and I think it deserves one.
Epistemic status: Speculation. An unholy union of evo psych, introspection, random stuff I happen to observe & hear about, and thinking. Done on a highly charged topic. Caveat emptor!...
archive: archive.is/8NW7e
archive: archive.is/KdzMM
Found this because an article on Helen Toner popped up in my feed and I wanted to find out more, and boy did I find out more.
(whatever the poster looks like and wherever they live, their personality is a scrawny nerd in a basement)
Choice quote:...
This is a classic Sequences post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI box experiment, and links to other Sequences posts. It is also especially ironic given Eliezer’s recent switch to doomerism, with his new phrases of “shut it all down” and “AI alignment is too hard” and...
yes really, that’s literally the title of the post. (archive copy, older archive copy) LessWrong goes full Motte....
Video games also have potential legal advantages over IQ tests for companies. You could argue that “we only hire people good at video games to get people who fit our corporate culture of liking video games,” but that argument doesn’t work as well for IQ tests....