I somehow missed this one until now. Apparently it was once mentioned in the comments on the old sneerclub but I don’t think it got a proper post, and I think it deserves one.
Epistemic status: Speculation. An unholy union of evo psych, introspection, random stuff I happen to observe & hear about, and thinking. Done on a highly charged topic. Caveat emptor!...
Feel like the very beginning of this is not completely crazy (I’ve also thought in the past that straight people often perform “attractiveness” more for the approval of their same-sex friends) but it seems to kind of jump off the evo-psych deep end after that, lol
Also you can’t build a bunch of assumptions about “we should organize society this way” while ignoring the existence of LGBT people, and then go “yeah I know I ignored them but it simplified my analysis.” Like yeah it simplifies the analysis to ignore a bunch of stuff that actually exists in reality, but… then that means maybe your conclusions about how to structure society are wrong??
edit: also this quote is choice:
I don’t know if this really happens. But even if not, the fiction does a great job of highlighting the dynamic I’m thinking of.
At various points, on Twitter, Jezos has defined effective accelerationism as “a memetic optimism virus,” “a meta-religion,” “a hypercognitive biohack,” “a form of spirituality,” and “not a cult.” …...
It really does illustrate the way they see culture not as, like, a beautiful evolving dynamic system that makes life worth living, but instead as a stupid game to be won or a nuisance getting in the way of their world domination efforts
The problem is just transparency, you see – if they could just show people the math that led them to determining that this would save X million more lives, then everyone would realize that it was actually a very good and sensible decision!
Is this a correct characterisation of the EA community? That they all harbour anti-abortion sentiment but for whatever reason permit abortion?
I actually wouldn’t be surprised if this were the case – the whole schtick of a lot of these people is “worrying about increasing the number of future possibly-existing humans, even at the cost of the suffering of actually-existing humans”, so being anti-abortion honestly seems not too far out of their wheelhouse?
Like I think in the EAverse you can just kinda go “well this makes people have less kids which means less QALYs therefore we all know it’s obviously bad and I don’t really need to justify it.” (with bonus internet contrarian points if you are justifying some terrible thing using your abstract math, because that means you’re Highly Decoupled and Very Smart.) See also the quote elsewhere in this thread about the guy defending child marriage for similar reasons.
AI doctors will revolutionize medicine! You’ll go to a service hosted in Thailand that can’t take credit cards, and pay in crypto, to get a correct diagnosis. Then another VISA-blocked AI will train you in following a script that will get a human doctor to give you the right diagnosis, without tipping that doctor off that...
I think he means script as in, literally a series of lines to say to your doctor to magically hack their brain into giving you the prescription you need (gee, I wonder how these people ever got into pickup artistry!), not a script as in prescription. I think it’s not about cost, it’s about doctors… prescribing you the wrong thing for some reason so you have to lie to them to get the correct medication? Is this some conspiracy theory I’m not aware of, lol
I don’t really know enough about metabolism to say why his example is wrong, but I will say that I lost 30 lbs by counting calories, at pretty much exactly the rate that the calorie-counting predicted, so I’m gonna have to say his first-principles reasoning about why that’s impossible is probably wrong
Yeah, it’s definitely really hard. The hard part is not “knowing that eating less food will make you lose weight,” it’s actually doing the thing without suffering from willpower failure. But, even given that, Yudkowsky seems to be arguing here that eating fewer calories won’t make you lose weight, because such a simplistic model can’t possibly be true (analogizing it to the silly idea that eating less mass will make you lose weight).
However, uh, his conclusion does contradict empirical reality. For most people, this would be a sign that they should reconsider their chain of logic, but I guess for him it is instead a sign that empirical reality is incorrect.
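For what it’s worth, the bookkeeping being dismissed here is dead simple to write down. A rough sketch, assuming the usual ~3500 kcal-per-pound rule of thumb and a made-up maintenance number (both simplifications I’m supplying for illustration, not anything from the original thread):

```python
# Naive calorie-deficit arithmetic. Simplification: maintenance needs drift
# as you lose weight, but the rule of thumb lands surprisingly close.
KCAL_PER_LB = 3500  # common rule-of-thumb energy content of a pound of body fat

def predicted_loss_lbs(maintenance_kcal: float, intake_kcal: float, days: int) -> float:
    """Predicted weight loss from a sustained daily calorie deficit."""
    daily_deficit = maintenance_kcal - intake_kcal
    return daily_deficit * days / KCAL_PER_LB

# Example: a 500 kcal/day deficit held for about seven months
print(predicted_loss_lbs(2500, 2000, 210))  # -> 30.0 lbs
```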
What I don’t get is, ok, even granting the insane Eliezer assumption that LLMs can become arbitrarily smart and learn to reverse hash functions or whatever because it helps them predict the next word sometimes… humans don’t entirely understand biology ourselves! How is the LLM going to acquire the knowledge of biology to know how to do things humans can’t do when it doesn’t have access to the physical world, only things humans have written about it?
Even if it is using its godly intelligence to predict the next word, wouldn’t it only be able to predict the next word as it relates to things that have already been discovered through experiment? What’s his proposed mechanism for it to suddenly start deriving all of biology from first principles?
I guess maybe he thinks all of biology is “in” the DNA and it’s just a matter of simulating the ‘compilation’ process with enough fidelity to have a 100% accurate understanding of biology, but that just reveals how little he actually understands the field. Like, come on dude, that’s such a common tech nerd misunderstanding of biology that xkcd made fun of it, get better material
it’s a shame, because gender transition stuff is probably one of the most successful “human biohacking” type things in common use today, and it’s also just… really cool. alas, bigotry
since we both have the High IQ feat you should be agreeing with me; after all, we share the same privileged access to absolute truth. That we don’t agree must mean you are unaligned/need to be further cleansed of thetans.
They have to agree, it’s mathematically proven by Aumann’s Agreement Theorem!
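(For anyone who hasn’t run into it: the theorem being invoked says, stated loosely, that agreeing to disagree is impossible only under conditions no two humans actually satisfy — a shared common prior and common knowledge of each other’s posteriors.)

```latex
% Aumann (1976), informal statement:
% If agents 1 and 2 share a common prior $P$, and their posteriors for an
% event $A$ given their private information $\mathcal{I}_1, \mathcal{I}_2$
% are common knowledge between them, then those posteriors are equal:
P(A \mid \mathcal{I}_1) = P(A \mid \mathcal{I}_2)
```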
“We have unusually strong marketing connections; Vitalik approves of us; Aella is a marketing advisor on this project; SlateStarCodex is well aware of us. We are quite networked in the Effective Altruism space. We could plausibly get an Elon tweet.” ...
The winning votes will become investments in the post, binding the CONTENT_EXCRECATOR to CREATE_THE_CONTENT, and, based on some configurable metric (post score, ad revenue, etc.), the investment will accrue dividends
I’m in, but only if this part is handled by fractionalizing an NFT linking to the original post on your custom blockchain
During the interview, Kat openly admitted to not being productive but shared that she still appeared to be productive because she gets others to do work for her. She relies on volunteers who are willing to do free work for her, which is her top productivity advice.
Productivity pro tip: you can get a lot more done if you can just convince other people to do your work for you for free
There’s something infuriating about this. Making basic errors that show you don’t have the faintest grasp of what people are arguing about, and then acting like the people who took the time to get PhDs and don’t end up agreeing with your half-baked arguments are just too stupid to be worth listening to, is outrageous.
I don’t even get his point. You can voluntarily use your freedom to constrain yourself already. What, is the Food Optimizer gonna knock down your door and force-feed you McDonald’s? Has vegetarianism become illegal? Clearly what he’s actually mad about is that the state won’t let him involuntarily constrain others
The whole “autogynephilia” thing has always kind of struck me as similar to the “you gotta stay constantly vigilant because the devil is constantly trying to tempt men into having gay sex” thing. Like, yeah, if you conceptualize it as pathological, you’re gonna feel like there’s something wrong with you. But it only feels weird when you’re feeling it from the “wrong side,” so to speak.
I think this blog got posted to sneerclub before though and yeah it’s kinda too sad to make fun of. This post is a couple years old now but it looks like they’re still blogging in this vein… I hope eventually they’re able to come to terms with their true feelings.
The world we have is ugly enough, but tech capitalists desire an even uglier one. The logical conclusion of having a society run by tech capitalists interested in elite rule, eugenics, and social control is ecological ruin and a world dominated by surveillance and apartheid. A world where our technological prowess is finely...
Reply guy EY attempts incredibly convoluted offer to meet him half-way by implying AI body pillows are a vanguard threat that will lead to human extinction... (nitter.net)
… while at the same time not really worth worrying about, so we should be concentrating on unnamed alleged mid-term risks....
"As always, pedophilia is not the same as ephebophilia." - Eliezer Yudkowsky, actual quote (www.lesswrong.com)
this week's LW chud who is the sort of anti-wokeist who says he isn't right wing writes about human interaction from first principles. 100 upvotes. (www.lesswrong.com)
Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement? (www.forbes.com)
loving the EA forum on how the problem with spending the charity money on a castle was the public relations (forum.effectivealtruism.org)
the effectively altruistic AI's fans are gonna pipe bomb a Planned Parenthood (forum.effectivealtruism.org)
Serious Yud or Joking Yud? (nitter.net)
Yud offers more weight loss discourse (nitter.net)
Cold viruses and bitcoin mining oh noes (nitter.net)
Blood Music was way cooler than this, just saying.
Good Guy Orange Site refuses to believe rationalists/EAs can be as bad as we're describing and is sure we're just exaggerating (news.ycombinator.com)
Big Yud and the Methods of Compilation (nitter.net)
In today’s episode, Yud tries to predict the future of computer science.
Let's walk through the uncanny valley with SBF so we can collapse some wave functions together (hachyderm.io)
Rationalist check-list:...
Aella and company want to put GM bacteria in your mouth (www.lanternbioworks.com)
The rise of the new tech right (archive.ph)
Caught the bit on lesswrong and figured you guys might like it.
If learning incorrect things is EY's only definition of trauma, his existence must be eternal torment. (nitter.net)
source nitter link...
a scrawny nerd in a basement writes (www.lesswrong.com)
(whatever the poster looks like and wherever they live, their personality is that of a scrawny nerd in a basement)
Rationalist posts detailed catalogue of confirmed bad behaviour by EA/rationalist org Nonlinear. Second rationalist goes meta on first post: how can we even know anything, it's so unfair to Nonlinear
original post detailing mistreatment of employees...
Libertarian becomes lawyer, appreciates police (www.lesswrong.com)
Choice quote:...
this year's "hmm, actually this is bad" post: Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong (www.lesswrong.com)
I will never get over how the pretty girl in the photo attached is LITERALLY Roko of the Basilisk's example of the bad ending for humanity (awful.systems)
really: archive.ph/p0jPI...
A LessWronger writes 16,500 words on why they are definitely not trans. (unremediatedgender.space)
It will not surprise you at all to find that they protest just a tad too much....
18+ Silicon Valley’s Quest to Build God and Control Humanity (www.thenation.com)
18+ If you've made it here from the outer reaches, comment and say hi