It was impossible for computers to beat chess and Go masters when the computers were trying to play like humans - trying to model high-level understanding of strategy and abstract values. The computers started winning when they got fast enough to brute-force games - to calculate all of the possible outcomes from all of the possible moves, and to choose the best one.
This is basically the same difference between LLMs and ‘true’ general AI. The LLMs are brute-forcing the next line of a screenplay, with no way to incorporate abstract concepts like truth or logic. If you confuse an LLM for an AI, then you’re going to be disappointed in its performance. If you accept that an LLM is a way to average past communications, and accept that a lot of its training set was fiction, then it’s an amazing tool for generating consensus text (given that the consensus includes fantasies and lies). It’s not going to write new code, but it will give you an approximation of all the existing examples of some algorithm. An approximation that may introduce errors, like copy-pasting sequential lines from every Stack Exchange answer.
Computer graphics and computer game opponents are still doing the same things they were doing decades ago; the improvements are just doing it all faster. General AI needs to do something different from LLMs and most other ML algorithms.
Numpy won’t tell me what ln(74000000000000006.7/74000000000000000) is. Ran into exactly this problem for an individual calculation.
Trouble is that 74000000000000006.7/74000000000000000 ~ 1.000 000 000 000 000 1, and double-precision floats can only resolve steps of about 0.000 000 000 000 000 2 near 1, so the ratio rounds to exactly 1 and the log comes out 0. Needs a 96- or 128-bit float. The whole topic of estimating one’s personal contribution to global phenomena is loaded with computer-precision risks, which is part of what makes me skeptical of the final result, without looking far more closely than my interest motivates. Like calculating the sea level rise from spitting in the ocean - I believe it happens, but I’m not sure I believe any numerical result.
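For what it’s worth, a wider float isn’t the only way out: `log1p` computes ln(1+x) directly from the small difference x, so the tiny ratio never has to survive being added to 1. A minimal sketch with the numbers above (6.7 extra on 7.4e16):

```python
import math

delta = 6.7 / 74000000000000000.0  # ~9.1e-17, below double spacing near 1.0

# Naive form: 1 + delta rounds to exactly 1.0, so the log is 0
naive = math.log(1 + delta)

# log1p(x) evaluates ln(1 + x) without ever forming 1 + x
accurate = math.log1p(delta)

print(naive)     # 0.0
print(accurate)  # ~9.05e-17, i.e. essentially delta itself
```

For x this small, ln(1+x) ≈ x anyway, which is why the linear-looking result further down the thread isn’t a coincidence.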
I won’t comment on the final accuracy, but I will note that this is an extremely roundabout path to your final answer, and some of the intermediate steps are…weird. Most notably, the speculation that every man, woman, and child on the planet might run a 1 kW appliance 24/7/365. This is 7e13 kWh or 70k TWh, about 3x current global energy use (not just electricity) before accounting for efficiency. The equation you cite for radiative forcing, specifically its ln(new/old) term, is very non-linear, so you should get a much lower marginal effect from 70k TWh than from 1 kWh.
A simpler approach is to calculate the CO2 required for your 1 kWh AC, i.e.: 1 kWh * 3600 kJ/kWh / 0.6 efficiency / 890 kJ/mol = 6.7 mol CO2. Current atmospheric CO2 is about 74 Pmol. From that, I get radiative forcing of ln((7.4e16 + 6.7)/7.4e16)/ln(2) * 3.7 * 4*pi*(6.4e6)^2. Numpy won’t tell me what ln(74000000000000006.7/74000000000000000) is. It will tell me the forcing from 10 kWh is ~2.5 W, or the same 0.25 W/kWh you got. I guess ln is not that nonlinear in the 1+1e-16 to 1+1e-4 range, after all.
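As a sanity check on that arithmetic, here’s the same chain of numbers in numpy, with `np.log1p` standing in for the ln that underflows at 1 kWh. All constants are taken straight from the comment above (6.7 mol CO2 per kWh, ~7.4e16 mol CO2 in the atmosphere, 3.7 W/m^2 per doubling, Earth radius ~6.4e6 m):

```python
import numpy as np

added_mol = 6.7        # mol CO2 emitted per 1 kWh (from the estimate above)
total_mol = 7.4e16     # mol CO2 currently in the atmosphere
per_doubling = 3.7     # W/m^2 of radiative forcing per CO2 doubling
earth_area = 4 * np.pi * 6.4e6**2  # Earth surface area, m^2

# np.log1p(x) computes ln(1+x) accurately even for x ~ 1e-16,
# where np.log(1 + x) would round 1+x to 1.0 and return 0
forcing = np.log1p(added_mol / total_mol) / np.log(2) * per_doubling * earth_area
print(forcing)  # ~0.25 W per kWh
```

This reproduces the ~0.25 W/kWh figure directly, without needing to jump to 10 kWh to escape the rounding.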
0.25 W/kWh seems improbably high. 1 kWh spread over a year is about 0.1 W of continuous power. At 60% efficiency, that’s burning about 0.2 W of natural gas, which implies that the radiative forcing from the CO2 is much greater than the energy used to produce the CO2 in the first place. I get that the energy source for heating is different from the energy source for electricity, but it feels wrong, even without the 1000-year persistence. I don’t know where the radiative forcing equation came from nor its constraints, so I’m suspicious of its application in this context. There are a lot of obscenely large numbers interacting with obscenely small numbers, and I don’t know enough to say whether those numbers are accurate enough for the results to be reasonable. Then there’s the question of converting the energy input to temperature change.
I was just googling around, and it looks to me like a private rail car costs something like a 2nd home, storage fees similar to property tax, $4/mile to have Amtrak haul you around. Basically a vacation home, but mobile. Definitely a 1% thing, but not billionaires-only. Probably way more prestige in saying you’ve got a private rail car than a beach house. At least among a certain segment.
Honestly, I feel like he’s incomprehensibly wealthy even relative to single-digit billionaires. Google says there are about 2,600 of them on the planet, so I’m putting Musk at incomprehensibly wealthy compared to 99.99997% of us.
How a single senator holds up hundreds of such individuals over something completely unrelated to the job performance of these flag officers is bewildering.
There’s a Senate rule that all general officer promotions require unanimous consent. Like the cloture rule for judicial appointments, they could change it tomorrow, but a lot of Senate rules are there to allow Senators to feel powerful.
Many of the journals I’ve published in do require a link, usually a PMID or DOI, but they’re not usually part of the review process. That is, one doesn’t expect academic content reviewers to validate each of the citations, but it’s not unreasonable to imagine a journal having an automated validator. The review process really isn’t structured to detect fraud. It looks like the article in question was in the preprint stage - i.e.: not even reviewed yet - and I didn’t notice mention of where they were submitted.
Message here should be that the process works and the fake article never got published. Very different from the periodic stories about someone who submits a blatantly fake, but hand-written, article to a bullshit journal and gets published.
Stationary rowing, 5 days/week. It’s a good whole-body exercise, heavy on cardiovascular & low impact, but not particularly strengthening. Can sit in front of a movie and just go. Got a tracker to record performance & heart rate, and I really like seeing new bouts appear in the graph. That may be more motivating than the nebulous protection from future cardiovascular disease.
Gotta admit, I only went looking for the dragon because everyone in game said it’d be super helpful, and there’s a quest called “Gather your allies.” My talker had like 20 charisma and expertise in all the charisma skills…I resolved a lot of conflicts without violence. Disappointed to be forced into combat with the dragon by our guardian angel.
Kind of disappointed with all the interactions with our ‘guardian angel’ once their true nature was revealed. Maybe I made wrong choices, but their guidance just seemed…off. Not wrong. Not evil. Just somehow not quite right. Maybe somehow inconsistent with their revealed nature, and pushing towards ex machina, like a number of things I don’t see how I’d have discovered if they hadn’t outright told me. The dragon interaction is part of that not-quite-rightness.
I definitely found the ending to be the least satisfying part of the game. I went straight from the dragon to the final battle, and I think that sequence intensified the less-than-satisfying feeling.
The absence of a running karma total is a surprisingly powerful difference. I do still look back at old posts, and it’s nice when there’s votes, but without the little number next to a name or when I mouse-over a profile, there’s no motivation to be the first in a thread to repost a cliche joke or to ragebait for fake internet points.
Own 8/10 - assuming you count Phantom Liberty separately from CP; finished 7/10 (likewise PL), mostly before this decade. Some of them before 2010. I wonder if I can still find my Baldur’s Gate CDs…
Estimate bills for the month & transfer the rest to mutual funds
Expenses all paid by credit card, so I’m always ‘budgeting’ for the previous month and there’s no guesswork. Emergency expenses larger than a paycheck might require selling some mutual funds, but in 20 years that never happened.
Now that I’m not working, budgeting is basically the same, except that interest and dividends appear at random intervals in the brokerage account and are no longer automatically reinvested, but transferred to checking to cover bills, usually around the same time the paycheck used to appear.
I don’t automate anything, because I want to notice if a bill is larger than expected and address whatever caused that to happen.