No, they’re able to grasp the near term risks, they just don’t want that to get in the way of making money because they know they’re unlikely to be affected.
I somehow missed this one until now. Apparently it was once mentioned in the comments on the old sneerclub but I don’t think it got a proper post, and I think it deserves one.
Epistemic status: Speculation. An unholy union of evo psych, introspection, random stuff I happen to observe & hear about, and thinking. Done on a highly charged topic. Caveat emptor!...
Since they brought up Kathy Forth I’d just like to remind everyone within a few weeks of Kathy’s death it was revealed that they had in fact known about the accusations against Brent Dill.
What’s “that whole eudaimonia thing from a while back”? (I’m familiar with the concept of eudaimonia in general, but I’m not sure what you’re referring to)
Weirdly rationalists also sometimes read this book and take all the wrong lessons from it.
Scott Alexander is a crypto-reactionary, and I think he reviewed it as a way to expose his readers to neoreactionary ideas under the guise of superficial skepticism, in the same manner as the anti-reactionary FAQ. The book’s author might be an anarchist, but a lot of the arguments could easily work in a libertarian context.
The review is mostly positive, but then it also has passages like this:
Well, for one thing, [James C.] Scott basically admits to stacking the dice against High Modernism and legibility. He admits that the organic livable cities of old had life expectancies in the forties because nobody got any light or fresh air and they were all packed together with no sewers and so everyone just died of cholera. He admits that at some point agricultural productivity multiplied by like a thousand times and the Green Revolution saved millions of lives and all that, and probably that has something to do with scientific farming methods and rectangular grids. He admits that it’s pretty convenient having a unit of measurement that local lords can’t change whenever they feel like it. Even modern timber farms seem pretty successful. After all those admissions, it’s kind of hard to see what’s left of his case.
and
Professors of social science think [check cashing] shops are evil because they charge the poor higher rates, so they should be regulated away so that poor people don’t foolishly shoot themselves in the foot by going to them. But on closer inspection, they offer a better deal for the poor than banks do, for complicated reasons that aren’t visible just by comparing the raw numbers. Poor people’s understanding of this seems a lot like the metis that helps them understand local agriculture. And progressives’ desire to shift control to the big banks seems a lot like the High Modernists’ desire to shift everything to a few big farms. Maybe this is a point in favor of something like libertarianism?
the cell’s ribosomes will translate mRNA into a protein. It’s a little bit like an executable file for biology.
Also, because mRNA basically has root level access to your cells, your body doesn’t just shuttle it around and deliver it like the postal service. That would be a major security hazard.
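To make the “executable file” analogy concrete, here’s a toy Python sketch of what the ribosome does: it reads mRNA three bases (one codon) at a time and maps each codon to an amino acid. The codon table below is a tiny, hand-picked subset of the real genetic code, purely for illustration.

```python
# Toy model of translation: ribosomes "execute" mRNA by reading codons
# (three-base words) and emitting the corresponding amino acids.
# This table is a tiny subset of the real 64-codon genetic code.
CODON_TABLE = {
    "AUG": "Met",   # methionine, also the start codon
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "UAA": "STOP",  # stop codon: translation halts here
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA string codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE[mrna[i:i + 3]]
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```

Real translation of course involves tRNAs, initiation factors, and a lot more machinery, but the core input-to-output mapping really is this mechanical, which is what makes the analogy work.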
I am not saying pleiotropy doesn’t exist. I’m saying it’s not as big of a deal as most people in the field assume it is.
Genes determine a brain’s architectural prior just as a small amount of python code determines an ANN’s architectural prior, but the capabilities come only from scaling with compute and data (quantity and quality).
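For what it’s worth, the “small amount of Python code” framing can be made literal: an ANN’s architectural prior really is just a few lines specifying layer shapes and nonlinearities, and a freshly initialized network has that prior but no capability until it’s trained. A minimal NumPy sketch (my own illustration, not any particular lab’s code):

```python
import numpy as np

# The entire "architectural prior": layer sizes plus a choice of nonlinearity.
LAYER_SIZES = [4, 16, 16, 2]

def init_params(rng):
    # Fresh weights are random noise: the architecture exists,
    # but any capability has to come from training on data.
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]

def forward(params, x):
    # tanh on hidden layers, linear output layer
    for w, b in params[:-1]:
        x = np.tanh(x @ w + b)
    w, b = params[-1]
    return x @ w + b

params = init_params(np.random.default_rng(0))
print(forward(params, np.ones(4)).shape)  # (2,)
```

The analogy’s point is that everything interesting about a trained network lives in the learned weights, not in this specification, just as (the claim goes) capabilities of a brain come from experience scaling over its genetic prior.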
When you’re entirely shameless about your Engineer’s Disease
I intensely dislike him. I’d say his views rhyme with those of the rationalists, so to speak, which is why I think it’s noteworthy that even a guy like him is criticizing EA now.
I suppose when talking about science to a popular audience it can be hard not to make generalizations and oversimplifications, and if it’s done poorly, that oversimplification can cross over into plain old inaccuracy (if I were to be charitable to Yud, I would say that this is what happened here).
To wit: even the “K’nex connector with 4 ports” model of carbon doesn’t really explain the bonding of aromatic molecules like benzene or carbon nanotubes; I’ve likewise seen people confidently make the generalization “noble gases don’t react”, apparently unaware of the existence of noble gas compounds.
His argument, as I understand it, is that he knew about the covalent bonds between proteins but didn’t mention them because he was simplifying things for a lay audience, and that those covalent bonds don’t matter because they aren’t the “load bearing” elements in flesh.
There are two problems I see:

1. His earlier statements suggest he actually had no knowledge of that whatsoever.

2. I think his revised explanation is still wrong, because the extracellular matrix that holds cells together, along with connective tissue, is composed largely of proteins that have these covalent crosslinks and rely on them for strength. When you tear a ligament it’s not just van der Waals and hydrogen bonds being broken; those alone would be far too weak.
The so called “experts” say that spider silk is stronger than steel, but steel beams can hold up bridges while I can break a spider web with my little finger. Looks like the “experts” are wrong and spider silk isn’t very strong after all - probably because it’s made of proteins held together by weak van der Waals forces instead of covalent bonds.
Thank you, that link is exactly what I was looking for (and also sated my curiosity about how Yudkowsky got involved with Bostrom and Hanson, I had heard they met on the extropian listserv but I had never seen any proof).
At various points, on Twitter, Jezos has defined effective accelerationism as “a memetic optimism virus,” “a meta-religion,” “a hypercognitive biohack,” “a form of spirituality,” and “not a cult.” …...
I highly suspect the voice analysis thing was just to confirm what they already knew, otherwise it would have been like looking for a needle in a haystack.
People on twitter have been speculating that someone who knew him simply ratted him out.
Yeah, the fact that they said she was committed to “AI safety” (instead of “AI ethics”) was a misstep. Wired has published her take on “AI safety” before, but maybe they haven’t realized how contentious those two terms have become (or just forgot).
I hate to say it, but even sneerclub can get a bit biased and tribal sometimes. He who fights with monsters and so on
I suspect watching the rationalists as they bloviate and hype themselves up and repeatedly fail for years on end has lulled people into thinking that they can’t do anything right, but I think that’s clearly not the case anymore. Despite all the cringe and questionable ethics, OpenAI has made a real and important accomplishment.
They’re in the big leagues now. We should not underestimate the enemy.
The accomplishment I’m referring to is creating GPT/DALL-E. Yes, it’s overhyped, unreliable, arguably unethical and probably financially unsustainable, but when I do my best to ignore the narratives and drama surrounding it and just try out the damn thing for myself I find that I’m still impressed with it as a technical feat. At the very, very least I think it’s a plausible competitor to google translate for the languages I’ve tried, and I have to admit I’ve found it to be actually useful when writing regular expressions and a few other minor programming tasks.
In all my years of sneering at Yud and his minions, I didn’t think their fascination with AI would amount to anything more than verbose blogposts and self-published research papers. I simply did not expect that the rationalists would build an actual, usable AI instead of merely talking about hypothetical AIs and pocketing the donor money, and it is in this context that I say I underestimated the enemy.
With regard to “mocking the promptfans and calling them names”: I do think that ridicule can be a powerful weapon, but I don’t think it will work well if we overestimate the actual shortcomings of the technology. And frankly, sneerclub as it exists today is more about entertainment than about actually serving as a counter to the rationalist movement.
I suppose the goalpost shifting is my fault: the original comment was about Sutskever, but I shifted to talking about OpenAI in general, in part because I don’t really know to what extent Sutskever is individually responsible for OpenAI’s tech.
also mock anyone who says it’s OK for them to act this way because they have a gigantic IQ.
I think people are missing the irony in that comment.
The problem here is that “AI” is a moving target, and what “building an actual, usable AI” looks like is too. Back when OpenAI was demoing DOTA-playing bots, they were also building actual, usable AIs.
For some context: prior to the release of chatGPT I didn’t realize that OpenAI had personnel affiliated with the rationalist movement (Altman, Sutskever, maybe others?), so I didn’t make the association, and I didn’t really know about anything OpenAI did prior to GPT-2 or so.
So, prior to chatGPT the only “rationalist” AI research I was aware of was the non-peer-reviewed (and often self-published) theoretical papers that Yud and MIRI put out, plus the work of a few ancillary startups that seemed to go nowhere.
The rationalists seemed to be all talk and no action, so really I was surprised that a rationalist-affiliated organization had any marketable software product at all, “AI” or not.
And FWIW, I was taught a different definition of AI when I was in college, but it seems like it’s one of those terms that gets defined in different ways by different people.
I think there’s a non-ironic element too. Sutskever can be both genuinely smart and a weird cultist; just because someone is smart in one domain doesn’t mean they aren’t immensely foolish in others.
The current TPOT implosion
Is uh, anyone else watching? This dude (chaos) was/is friends with Brent Dill.
Reply guy EY attempts incredibly convoluted offer to meet him half-way by implying AI body pillows are a vanguard threat that will lead to human extinction... (nitter.net)
… while at the same time not really worth worrying about so we should be concentrating on unnamed alleged mid term risks....
"As always, pedophilia is not the same as ephebophilia." - Eliezer Yudkowsky, actual quote (www.lesswrong.com)
The SSC subreddit ponders the difference between Bayesianism and plain old bias (old.reddit.com)
"Successful people create companies. More successful people create countries. The most successful people create religions." (blog.samaltman.com)
From Sam Altman’s blog, pre-OpenAI
this week's LW chud who is the sort of anti-wokeist who says he isn't right wing writes about human interaction from first principles. 100 upvotes. (www.lesswrong.com)
Nonlinear seem to think this post replying to the accusations about them will make them look like the heroes (forum.effectivealtruism.org)
warning: seriously nasty narcissism at length...
LW: [Request]: Use "Epilogenics" instead of "Eugenics" in most circumstances - people just don't like the word itself yes that must be it. Coined by Aella. (www.lesswrong.com)
archive archive.is/8NW7e
LW: CRISPR Will Make Me A Genius - "I don’t have a formal background in biology. And though I learn fairly quickly and have great resources like SciHub and GPT4," (www.lesswrong.com)
archive: archive.is/KdzMM
Effective Altruism Funded the “AI Existential Risk” Ecosystem with Half a Billion Dollars (www.aipanic.news)
"it’s like the stages of a rocket ship and racism was the first stage" (awful.systems)
Image taken from this tweet: twitter.com/softminus/status/1732597516594462840...
18+ Why Yudkowsky is wrong about "covalently bonded equivalents of biology" (titotal.substack.com)
This is my article on one of the dumbest and most obviously false claims Yudkowsky has ever made, about biology not using covalent bonds.
To what extent did Eliezer Yudkowsky invent the Effective Altruist movement? (forum.effectivealtruism.org)
I was wondering if someone here has a better idea of how EA developed in its early days than I do....
Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement? (www.forbes.com)
The Inside Story of Microsoft’s Partnership with OpenAI (archive.is)
Most of the article is well-trodden ground if you’ve been following OpenAI at all, but I thought this part was noteworthy:...
Prominent Women in Tech Say They Don't Want to Join OpenAI's All-Male Board (www.wired.com)
non-paywall archived version here: archive.is/ztech
OpenAI Employees Say Firm's Chief Scientist Has Been Making Strange Spiritual Claims (futurism.com)