The OG metric system (from the 18th century) had no prefix for 10⁶. “Mega-” would only be formally acknowledged by the SI in 1960.
The ton units (yup, plural) trace all the way back to a volume unit from the Middle Ages: the amount of liquid that you’d be able to put in a big arse cask*
Based on those two things, I think that the ton was standardised to 10⁶g considerably before the name “megagram” had the chance to appear, to the point that it became the default name across languages.
*I don’t know the English name for the cask, but in Portuguese it’s “tonel”. From that “tonelada” (the unit). It used to be 800kg before the metric system though.
Yup - at least in Europe this goes all the way back to the Middle Ages. And it was actually a big deal, because the units were similar, neither completely identical nor completely different, so people could argue about which of those units they meant, especially when buying/selling stuff. (For example, let’s say that some Portuguese merchant agrees to sell “five tons of fish” to a random Englishman. Now you get:
the merchant arguing “five Portuguese tons”, expecting to sell 793*5=3965kg of fish
the buyer arguing “five English tons”, expecting to buy 1088*5=5440kg of fish
even if both were in good faith, they’d feel cheated on the deal.
To make it worse sometimes the units changed inside the same realm, over time.
And you might confuse MB, megabytes, with MiB, mebibytes. MB is typically used to measure storage, and MiB is typically used to measure data. There are 1000 bytes in a kilobyte, and 1024 bytes in a kibibyte.
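That 1000-vs-1024 gap is why an advertised drive “shrinks” in your OS. A quick Python sketch (the 5 GB figure is just an example I picked):

```python
# Converting the same raw byte count to MB (decimal) vs MiB (binary).
size = 5_000_000_000  # 5 billion bytes, e.g. a drive advertised as "5 GB"

mb = size / 1000**2   # megabytes: SI prefix, powers of 1000
mib = size / 1024**2  # mebibytes: IEC prefix, powers of 1024

print(f"{mb:,.0f} MB vs {mib:,.0f} MiB")  # same bytes, different-looking numbers
```

The gap grows with each prefix step: ~2% at kilo, ~5% at mega, ~7% at giga.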
No good reason, just historical inertia and resistance to change. People stick to what they’re familiar with, either the imperial system or to common metric units. Making a “metric ton” similar in size to an “imperial ton” arguably helped make it easier for some people to transition to metric.
Megagram is a perfectly cromulent unit, just like “cromulent” is a perfectly cromulent word, but people still don’t use it very often. That’s just how language works. People use the words they prefer, and those words become common. Maybe if you start describing things in megagrams other people will also start doing it and it will become a common part of the language. Language is organic like that, there isn’t anyone making decisions on its behalf, although some people and organizations try.
Similarly, large volumes of water should be given in kl, Ml, Gl etc. instead of m^3. Which one is bigger, 2500000 m^3 or 790000 m^3? Count the zeros if you want, and then tell me if using appropriate prefixes would have made it easier to tell the difference.
If you used scientific notation or commas (or periods, depending on region) to format those numbers for human consumption, that would also make it easier.
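Using the two volumes from above, a small sketch of both fixes (separators and prefixes), relying on 1 m³ = 1 kl, so 10⁶ m³ = 1 Gl:

```python
# The same two volumes, three ways.
a, b = 2_500_000, 790_000  # cubic metres

print(f"{a} m^3 vs {b} m^3")      # raw digits: count the zeros
print(f"{a:,} m^3 vs {b:,} m^3")  # thousands separators for human eyes
print(f"{a / 1e6} Gl vs {b / 1e6} Gl")  # litre prefixes: 1 m^3 = 1 kl
```

“2.5 Gl vs 0.79 Gl” is readable at a glance; the raw digit strings are not.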
The sort of person that insists on calling a ton a megagram is probably going to be the same sort of insufferable Jimmy Neutron arsehole that insists on calling salt “sodium chloride”. Yes, you’re technically correct, but people experience food as salty, and no one is going to say “this food is very sodium chloridy!”
Never heard anyone use megameters either. They either stick with kilometers, or switch to miles. And miles mean different things from one place to the next.
Huh? Why would you switch to miles from kilometers?
And IMHO megameters aren’t used that often because there is rarely anything useful to measure with them. Using a different unit makes you lose your sense of scale (e.g. the Earth has a radius of ~6400 km, not 6.4 Mm) and for astronomy megameters aren’t big enough most of the time (and you might as well use lightseconds/years, because gigameters give no real intuition of scale).
It looks a bit less cluttered - compare e.g. “40.0 Mm” with “40.0 × 10⁶ m” or “4.00 × 10⁷ m”. Plus I think that he took into account that he wasn’t lecturing future physicists but future chemists - in chemistry you rely on those prefixes all the time, and for most stuff you won’t be changing the order of magnitude too much. (Major exception: pK-whatever.)
Kilo translates to thousand, mega to a million. So in your example, kilometer fits perfectly. A megameter would be a million meters, or a thousand kilometers, which is annoying to say on the scale we humans use on a day-to-day basis. And when it comes to space, megameters are way too small.
I think it’s written ‘tonne’. And you should call it metric tonne if it’s not clear from the context.
Wikipedia says:
The tonne is a unit of mass equal to 1000 kilograms. It is a non-SI unit accepted for use with SI. It is also referred to as a metric ton to distinguish it from the non-metric units of the short ton (United States customary units) and the long ton (British imperial units). The official SI unit is the megagram (symbol: Mg), a less common way to express the same amount.
So yes, you can call it a megagramme and you’d be right. But we European people also sometimes do silly stuff and colloquially use the wrong things. For example, we also say it’s 20 degrees Celsius outside, and that’s not the proper SI unit either. But that’s kinda another topic.
I’m not so sure. But maybe you’re right. I think I was confusing that with tonnage of a ship. But that’s a whole other concept and you can’t really confuse the two.
With the 1000 t, that’s only because kg is a stupid SI base unit and leads to this whole debacle. If there wasn’t a prefix in the unit name itself, I think people would have started to use the SI prefixes correctly at some point, instead of inventing or omitting other names to compensate.
I think I’ve heard things like megatonne. For example, you can say your nuclear bomb has an X megaton TNT equivalent.
A mass of a million kg should be 1 gigagram or 1 kilotonne. Not 1000t. (Edit: And not a kilotonne either, rather a mega-kilogram.)
But it literally is a kiloton? It’s mostly used for explosives, but it is used:
kiloton /ˈkɪlə(ʊ)tʌn/
noun: kiloton; plural noun: kilotons; noun: kilotonne; plural noun: kilotonnes
a unit of explosive power equivalent to 1,000 tons of TNT.
The reason megagram isn’t used much is because it would be shortened to mg, which is usually milligram. Sure, you could go the “Mg” route as opposed to “mg”, but that sucks. So “t” for ton works well. It’s just another name though, it doesn’t matter.
Yeah, I know. But you have the problem with the letter ‘m’ every time. You just have to pay attention and write it correctly. And there is also ‘micro-’ in addition to the ‘milli-’ and ‘mega-’ you mentioned. However, most of the time you’re unlikely to be off by a factor of a billion without noticing. Just do it right: ‘µ’, ‘m’, ‘M’.
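Just to make that factor-of-a-billion point concrete, a tiny sketch (the `to_grams` helper is made up for illustration):

```python
# Case matters: the three prefixes that look like 'm' differ enormously.
prefixes = {"µ": 1e-6, "m": 1e-3, "k": 1e3, "M": 1e6}

def to_grams(value, prefix):
    """Convert e.g. (5, 'M') -> 5 Mg expressed in grams."""
    return value * prefixes[prefix]

# Mg vs mg: mixing them up is a factor of a billion.
print(to_grams(1, "M") / to_grams(1, "m"))
```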
If you listen to my school teacher, you’re not supposed to use SI prefixes with non-SI units. I think that’s not true, but it would apply to the ‘kiloton’. People wouldn’t like me talking about a ‘kilo-foot’ or a ‘milli-yard’, or saying I’ve had 3 deca-spoons of soup, or that there were 2.5 kilo-people at the concert.
The official SI symbol for 1,000 kg is Mg, but it’s not very frequently used in practice, mostly because use of the metric tonne was already widespread.
Keep in mind that more than just SI units were used in Europe in the past. For example, if you read through an old thermodynamics textbook in Italian, it is likely to use calories a lot, and often the CGS system (centimeter–gram–second, plus calories).
CGS system (centimeter grams second and calories).
For the pleasure of being pedantic: the proper CGS energy unit is the erg, not the calorie.
But indeed, even in France, home of the metric system, you’ll find people using some customary units (calories, or pounds) and even some US units, like inches for computer screens and feet for airplane altitude, and then a shit ton of approximation.
A mass of a million kg should be 1 gigagram or 1 kilotonne. Not 1000t. (Edit: And not a kilotonne either, rather a mega-kilogram.)
The good thing: all of them are correct. The SI system actually does not care if you throw around extra zeros, so 1000 t is fine. It is actually better to stay with the same SI prefix and just use larger numbers, to make list entries easier to compare. Just imagine some ship shop that listed its smaller offerings in Mg and then switched to Gg for larger ships.
Not as many as you think: a water heater only has about 3 times the electrical capacity of a standard wall outlet. So probably fewer than 6-10 Keurigs, but you’d need them on 4-5 (or more) separate circuits, otherwise you’d blow the breakers.
There’s another comment saying 34, equating it to a tankless heater.
The original question is too vague - there’s no one-to-one mapping between Keurigs and water heaters. If you’re just trying to heat your house’s hot water, any of those answers are valid. So is “1”. It’s just a question of what you REALLY want and what your constraints are.
If we’re looking at their heating capacity, a Keurig should be able to heat approximately 7.5 gallons of water an hour. A lower-end water heater can supply about 85 gallons per hour, so you’d need about 11 of them to meet a small house’s capacity.
If we’re looking at their water-holding capacity and power consumption: the average house has a 40-60 gallon water heater, and a Keurig has a 48 oz reservoir. 10 Keurigs would give you around 4 gallons of hot water; you would need 107 to get to a 40 gallon capacity. When heating, they use 1500 watts according to the Internet, so you’d need 160,500 watts (or 1,337.5 amps on 120 V circuits) of Keurigs to be the equivalent of a low-end water heater for a house. The average 40 gallon heater uses between 4500 and 5500 watts.
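The arithmetic above can be sketched out (assuming US gallons of 128 fl oz and 120 V circuits; the wattage figure is the one quoted above):

```python
# Keurigs-as-water-heater, back of the envelope.
reservoir_oz = 48    # one Keurig reservoir
target_gal = 40      # low-end home water heater capacity
watts_each = 1500    # quoted heating draw per Keurig

per_keurig_gal = reservoir_oz / 128       # 0.375 gal per machine
count = -(-target_gal // per_keurig_gal)  # ceiling division: machines needed
total_watts = count * watts_each
amps = total_watts / 120                  # assuming 120 V household circuits

print(count, total_watts, amps)
```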
This is how I got started 20 years ago when I got my first apartment. Cookbook with “easy” or “quick” recipes and you’ll eventually get good at it. It’s still the best way to learn.
The PS3 had a 128-bit CPU. Sort of. “Altivec” vector processing could split each 128-bit word into several values and operate on them simultaneously. So for example if you wanted to do 3D transformations using 32-bit numbers, you could do four of them at once, as easily as one. It doesn’t make doing one any faster.
Vector processing is present in nearly every modern CPU, though. Intel’s had it since the late 90s with MMX and SSE. Those just had to load registers 32 bits at a time before performing each same-instruction-multiple-data operation.
The benefit of increasing bit depth is that you can move that data in parallel.
The downside of increasing bit depth is that you have to move that data in parallel.
To move a 32-bit number between places in a single clock cycle, you need 32 wires between two places. And you need them between any two places that will directly move a number. Routing all those wires takes up precious space inside a microchip. Indirect movement can simplify that diagram, but then each step requires a separate clock cycle. Which is fine - this is a tradeoff every CPU has made for thirty-plus years, as “pipelining.” Instead of doing a whole operation all-at-once, or holding back the program while each instruction is being cranked out over several cycles, instructions get broken down into stages according to which internal components they need. The processor becomes a chain of steps: decode instruction, fetch data, do math, write result. CPUs can often “retire” one instruction per cycle, even if instructions take many cycles from beginning to end.
To move a 128-bit number between places in a single clock cycle, you need an obscene amount of space. Each lane is four times as wide and still has to go between all the same places. This is why 1990s consoles and graphics cards might advertise 256-bit interconnects between specific components, even for mundane 32-bit machines. They were speeding up one particular spot where a whole bunch of data went a very short distance between a few specific places.
Modern video cards no doubt have similar shortcuts, but that’s no longer the primary way they perform ridiculous quantities of work. Mostly, they wait.
CPUs are linear. CPU design has sunk eleventeen hojillion dollars into getting instructions into and out of the processor, as soon as possible. They’ll pre-emptively read from slow memory into layers of progressively faster memory deeper inside the microchip. Having to fetch some random address means delaying things for agonizing microseconds with nothing to do. That focus on straight-line speed was synonymous with performance, long after clock rates hit the gigahertz barrier. There’s this Computer Science 101 concept called Amdahl’s Law that was taught wrong as a result of this - people insisted ‘more processors won’t work faster,’ when what it said was, ‘more processors do more work.’
Video cards wait better. They have wide lanes where they can afford to, especially in one fat pipe to the processor, but to my knowledge they’re fairly conservative on the inside. They don’t have hideously-complex processors with layers of exotic cache memory. If they need something that’ll take an entire millionth of a second to go fetch, they’ll start that, and then do something else. When another task stalls, they’ll get back to the other one, and hey look the fetch completed. 3D rendering is fast because it barely matters what order things happen in. Each pixel tends to be independent, at least within groups of a couple hundred to a couple million, for any part of a scene. So instead of one ultra-wide high-speed data-shredder, ready to handle one continuous thread of whatever the hell a program needs next, there’s a bunch of mundane grinders being fed by hoppers full of largely-similar tasks. It’ll all get done eventually. Adding more hardware won’t do any single thing faster, but it’ll distribute the workload.
Video cards have recently been pushing the ability to go back to 16-bit operations. It lets them do more things per second. Parallelism has finally won, and increased bit depth is mostly an obstacle to that.
So what 128-bit computing would look like is probably one core on a many-core chip. Like how Intel does mobile designs, with one fat full-featured dual-thread linear shredder, and a whole bunch of dinky little power-efficient task-grinders. Or… like a Sony console with a boring PowerPC chip glued to some wild multi-phase vector processor. A CPU that they advertised as a private supercomputer. A machine I wrote code for during a college course on machine vision. And it also plays Uncharted.
The PS3 was originally intended to ship without a GPU. That’s part of its infamous launch price. They wanted a software-rendering beast, built on the Altivec unit’s impressive-sounding parallelism. This would have been a great idea back when TVs were all 480p and games came out on one platform. As HDTVs and middleware engines took off… it probably would have killed the PlayStation brand. But in context, it was a goofy path toward exactly what we’re doing now - with video cards you can program to work however you like. They’re just parallel devices pretending to act linear, rather than the other way around.
There’s this Computer Science 101 concept called Amdahl’s Law that was taught wrong as a result of this - people insisted ‘more processors won’t work faster,’ when what it said was, ‘more processors do more work.’
You massacred my boy there. It doesn’t say that at all. Amdahl’s law is actually a formula for how much speedup you can get by using more cores, which boils down to: how many parts of your program can’t be run in parallel? You can throw a billion cores at something, but if you have a step in your algorithm that can’t run in parallel… that’s going to be the part everything waits on.
Or copied:
Amdahl’s law is a principle that states that the maximum potential improvement to the performance of a system is limited by the portion of the system that cannot be improved. In other words, the performance improvement of a system as a whole is limited by its bottlenecks.
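The formula itself is short. With p the parallelisable fraction of the work and n the number of cores, a sketch:

```python
# Amdahl's law: overall speedup from running the parallel fraction p on n cores.
def speedup(p, n):
    return 1 / ((1 - p) + p / n)

print(speedup(0.95, 10))  # ~6.9x on 10 cores
print(speedup(0.95, 10**9))  # just under 20x: a billion cores, capped by the 5% serial part
```

Even with a billion cores, a 5% serial fraction caps you at 20x - which is exactly the “bottleneck” reading of the law.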
Gene Amdahl himself was arguing hardware. It was never about writing better software - that’s the lesson we’ve clawed out of it, after generations of reinforcing harmful biases against parallelism.
Telling people a billion cores won’t solve their problem is bad, actually.
Human beings by default think going faster means making each step faster. How you explain that that’s wrong is so much more important than merely stating that it’s wrong. This approach inevitably leads to saying ‘see, parallelism is a bottleneck.’ If all they hear is that another ten slow cores won’t help but one faster core would - they’re lost.
That’s how we got needless decades of doggedly linear hardware and software. Operating systems that struggled to count to two whole cores. Games that monopolized one core, did audio on another, and left your other six untouched. We still lionize cycle-juggling maniacs like John Carmack and every Atari programmer. The trap people fall into is seeing a modern GPU and wondering how they can sort their flat-shaded triangles sooner.
What you need to teach them, what they need to learn, is that the purpose of having a billion cores isn’t to do one thing faster, it’s to do everything at once. Talking about the linear speed of the whole program is the whole problem.
I am unsure about the historical reasons for moving from 32-bit to 64-bit, but wouldn’t the address space be a significantly larger factor? Like you said, CPUs have had vector instructions for a long time, and we wouldn’t move to 128-bit architectures just to be able to compute with numbers of that size. Memory bandwidth is, also as you say, limited by bus widths and not by the processor architecture. IMO, the most important reason we transitioned to 64-bit is primarily the larger address space, without having to use stupidly complex memory-mapping schemes. There are also some types of numbers, like timestamps and counters, that profit from 64-bit, but even here I am not sure whether the more complex architecture would yield a net slowdown or speedup.
To answer the original question: 128 bits would have no helpful benefit for the address space (already massive) and probably just slow everyday calculations down.
8-bit machines didn’t stop dead at 256 bytes of memory. Address length and bus width are completely independent. 1970s machines were often built with bit-slice memory, with however many bits of addressing, and one-bit output. If you wanted 8-bit memory then you’d wire eight chips in parallel - with the same address lines. Each chip would deliver a different part of the same logical byte.
64-bit math doesn’t need 64-bit hardware, either. Turing completeness says any computer can run the same code - memory and time allowing. As a concrete example, Javascript exclusively used 64-bit double floats, even though it was defined in the late 1990s and ran exclusively on 32-bit machines.
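Here’s a toy sketch of the idea: 64-bit addition built from nothing but 32-bit pieces, the way a compiler might lower it on a 32-bit machine (Python integers are arbitrary-precision, so the masking here is what simulates the narrow registers):

```python
# 64-bit addition using only 32-bit halves: add the low words,
# propagate the carry into the high words, mask everything to width.
MASK32 = 0xFFFFFFFF

def add64(a, b):
    lo = (a & MASK32) + (b & MASK32)
    carry = lo >> 32
    hi = ((a >> 32) + (b >> 32) + carry) & MASK32
    return (hi << 32) | (lo & MASK32)

print(hex(add64(0xFFFFFFFF, 1)))  # carry crosses the 32-bit boundary
```

Slower than native 64-bit hardware, but exactly as correct.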
Slight correction: vector processing is available on almost no common architectures. What most architectures have is SIMD instructions, which means that code that was written for SSE2 cannot and will not ever make use of the wider AVX-512 registers.
The RISC-V ISA is going down the vector processing route: the same code works on machines with wide vector registers, or on ones with no real parallel ability, which will simply loop in hardware.
SIMD code running on a newer CPU with better SIMD capabilities will not run any faster. Unmodified vector code on a better vector processor will run faster.
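A toy illustration of the difference (pure Python standing in for machine code; the “widths” are simulated, not real registers): SIMD-style code bakes the lane count into the program, while vector-style code asks the hardware for its width at runtime, so the same code does fewer iterations on wider machines.

```python
# SIMD-style: the lane width is fixed when the code is written.
def simd_add(a, b):
    WIDTH = 4  # baked in, like SSE2's four 32-bit lanes
    out, iters = [], 0
    for i in range(0, len(a), WIDTH):
        out += [x + y for x, y in zip(a[i:i + WIDTH], b[i:i + WIDTH])]
        iters += 1
    return out, iters

# Vector-style: the same code adapts to whatever width the hardware reports,
# like RISC-V's vector length register.
def vector_add(a, b, hw_width):
    out, iters = [], 0
    for i in range(0, len(a), hw_width):
        out += [x + y for x, y in zip(a[i:i + hw_width], b[i:i + hw_width])]
        iters += 1
    return out, iters

a = b = list(range(16))
print(simd_add(a, b)[1])        # 4 passes, no matter the hardware
print(vector_add(a, b, 16)[1])  # 1 pass on 16-lane hardware, unmodified
```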
Can you follow directions? Congratulations, you can cook! It’s really not that difficult, cooking is just simple chemistry.
When I was young my mum bought me a cookbook and once a week, usually Sundays, we would make a recipe or two that were in it. Sometimes full meals, sometimes just desserts, etc. You’ll learn by doing, so get yourself a cookbook or find a cooking show to watch if you’re a more visual learner. Just put yourself out there and try. I believe in you.
Well obviously OP can’t go back in time to when they were a child, but there’s nothing to stop them getting a cook book once a week and trying out a recipe or two.
I mean, yeah, obviously. But claiming it’s really easy because you were lucky to have normal parents and have been doing it since you were kid, especially on a question that implies someone didn’t have the luxury, is not helping.
When we look at the sky, there is a line where there are way more stars than usual. This line goes all the way around the sky. It was called the Milky Way by the Greeks because it looked like a road sprinkled with milk drops. At some point, we deduced that we were in a group of stars arranged in a flat disk. Later, we realized that some weird space clouds (nebulae) were much further away than we thought, and were actually other huge groups of stars like our own, which we named galaxies, still after milk.
There are more details, of course. Even along the line in the sky drawn by the Milky Way, there is one side where there are many more stars and dust than the other. We deduced that we were at the edge of the disk and that the bright region was the center of our galaxy. Also, the amount of gas and dust that blocks certain types of light teaches us that our galaxy has arms.
That’s definitely the more PG-13 version of why the ancient Greeks called it the Milky Way lol. Alternate version from Wikipedia:
In Greek mythology, Zeus places his son born by a mortal woman, the infant Heracles, on Hera’s breast while she is asleep so the baby will drink her divine milk and thus become immortal. Hera wakes up while breastfeeding and then realizes she is nursing an unknown baby: she pushes the baby away, some of her milk spills, and it produces the band of light known as the Milky Way.
Great explanation, although I want to clarify that not all nebulae are galaxies. Nebulas are massive clouds of dust and gas that are found within galaxies. Other galaxies were previously thought to be nebulas in space outside of the Milky Way, called extragalactic nebulae. However, in the early 1900s it was proven that these were actually other galaxies and not nebulas, so the term is no longer used.
While there are nebulae in other galaxies, they are not easily visible to us, so the word nebula generally refers to those contained within the Milky Way.
We just finished. I hope there will be a spinoff, because I’m not ready to be done with them all. There are still so many things for them to do. Roy Kent is the best.
I love S01 and S02, but S03 was a huge letdown for me compared with the first two. Too many side stories, unnecessary new characters, and important things happening off screen rather than on screen.
Most of my posts get zero engagement, and it seems appropriate. Maybe it’s hashtags. The posts where I bitch about Twitter or Reddit or Google are usually the ones that get attention. I’m just assuming people are looking for follow-backs, but 🤷🏻♂️
I’m trying to keep my feed manageable at this point…not looking for 1,000 mutual followers that just follow for numbers.
Unless you have a lot of active folks who follow you, posts generally only get engagement from hashtags, groups, or local timelines (if you’re on a server with active local watchers).