BarryZuckerkorn

@[email protected]

He’s very good.


BarryZuckerkorn,

Ad-riddled blogspam, probably written by some AI.

There’s literally nothing in this post that isn’t better covered by a more reputable site.

BarryZuckerkorn,

Do we know that for sure? What if the triggering event was secret dealings between Altman and Nadella?

I have no idea how likely that is, but the original messaging around Altman’s firing seemed to leave open the possibility that this was about conflicts of interest. The messaging since seems to have contradicted that, though, so who knows.

BarryZuckerkorn,

Can you run that outside of a virtual box?

It’s not virtualization. It actually boots and runs on bare metal, the same way Windows runs on a normal Windows computer: proprietary, closed UEFI firmware handles the boot process but boots an OS from the “hard drive” portion of non-volatile storage (usually an SSD on Windows machines). Whether you run Linux or Windows, that boot process starts the same.

Asahi Linux is configured so that Apple’s firmware loads a Linux bootloader instead of booting MacOS.

And wouldn’t it be a lot cheaper to just build your own PC rather than pay the premium for the apple logo?

Apple’s base configurations are generally cheaper than similarly specced competitors, because their CPU/GPUs are so much cheaper than similar Intel/AMD/Nvidia chips. The expense comes from exorbitant prices for additional memory or storage, and the fact that they simply refuse to use cheaper display tech even in their cheapest laptops. The entry level laptop has a 13 inch 2560x1600 screen, which compares favorably to the highest end displays available on Thinkpads and Dells.

If you’re already going to buy a laptop with a high quality HiDPI display, and are looking for high performance from your CPU/GPU, it takes a decent amount of storage/memory for a Macbook to overtake a similarly specced competitor in price.

BarryZuckerkorn,

Except the boot process on a non apple PC is open software.

For the most part, it isn’t. The typical laptops you buy from the major manufacturers (Lenovo, HP, Dell) have closed-source firmware. They all support the open UEFI standard, but the implementation is usually closed source. Flashing replacement firmware that is mostly open source with closed-source binary blobs (like coreboot), or fully open source (like libreboot), gets you closer to the hardware at startup, but even those still sit on top of proprietary components.

There’s some movement to open source more and more of this process, but it’s not quite there yet. AMD has the OpenSIL project and has publicly committed to open sourcing a functional firmware for those chips by 2026.

Asahi uses the open source m1n1 bootloader to load U-Boot, which in turn loads desktop Linux bootloaders like GRUB (which generally expect UEFI compatibility), as described here:

  • The SecureROM inside the M1 SoC starts up on cold boot, and loads iBoot1 from NOR flash
  • iBoot1 reads the boot configuration in the internal SSD, validates the system boot policy, and chooses an “OS” to boot – for us, Asahi Linux / m1n1 will look like an OS partition to iBoot1.
  • iBoot2, which is the “OS loader” and needs to reside in the OS partition being booted to, loads firmware for internal devices, sets up the Apple Device Tree, and boots a Mach-O kernel (or in our case, m1n1).
  • m1n1 parses the ADT, sets up more devices and makes things Linux-like, sets up an FDT (Flattened Device Tree, the binary devicetree format), then boots U-Boot.
  • U-Boot, which will have drivers for the internal SSD, reads its configuration and the next stage, and provides UEFI services – including forwarding the devicetree from m1n1.
  • GRUB, booting as a standard UEFI application from a disk partition, works like GRUB on any PC. This is what allows distributions to manage kernels the way we are used to, with grub-mkconfig and /etc/default/grub and friends.
  • Finally, the Linux kernel is booted, with the devicetree that was passed all the way from m1n1 providing it with the information it needs to work.

If you compare the role of iBoot (proprietary Apple code) to the closed-source firmware in a typical Dell/HP/Acer/Asus/Lenovo machine booting Linux, you’ll see that it’s basically just drawing the line at a slightly later stage, where closed-source code hands off to open-source code. No matter how you slice it, it’s not virtualization, unless you want to take the position that most laptops can only run virtualized OSes.

I think you mean that Apple uses its own memory more effectively than a Windows PC does.

No, I mean that when you spec out a base model MacBook Air at $1,199 and compare it to similarly specced Windows laptops, whose CPUs/GPUs can deliver comparable performance on benchmarks and whose built-in displays are of similar quality, the MacBook Air is usually cheaper. The Windows laptops tend to become cheaper at higher memory and storage configurations (roughly 16GB/1TB), but the base model MacBooks do compare favorably on price.

BarryZuckerkorn,

This is a £1400 laptop from Scan vs a £1500 MacBook Air currently.

Ah, I see where some of the disconnect is. I’m comparing U.S. prices, where identical Apple hardware is significantly cheaper (that 15" Macbook Air starts at $1300 in the U.S., or £1058).

And I can’t help but notice you’ve chosen a laptop with a worse screen (larger panel with lower resolution). Like I said, once you actually start looking at HiDPI screens on laptops you’ll find that Apple’s prices are pretty cheap. 15 inch laptops with at least 2600 pixels of horizontal resolution generally start at higher prices. It’s fair to say you don’t need that kind of screen resolution, but the price for a device with those specs is going to be higher.

That laptop’s CPU also benchmarks slightly behind the 15" MacBook Air’s, even though the Air is held back by having no fans to manage thermals.

There’s a huge market for new computers that have lower prices and lower performance than Apple’s cheapest models. That doesn’t mean that Apple’s cheapest models are a bad price for what they are, as Dell and Lenovo have plenty of models that are roughly around Apple’s price range, unless and until you start adding memory and storage. Thus, the reverse-engineered pricing formula is that it’s a pretty low price for the CPU/GPU, and a very high price for the storage/memory.

All of the PC components can be upgraded at the cost of the part + labour.

Well, that’s becoming less common. Lots of motherboards are now relying on soldered RAM, and a few have started relying on soldered SSDs, too.

BarryZuckerkorn,

I would choose a larger screen over that marginal difference in dpi every day of the week.

Yes, but you’re not addressing my point that the price for the hardware isn’t actually bad, and that people who complain would often just prefer to buy hardware with lower specs for a lower price.

The simple fact is that if you were to try to build a MacBook killer and try to compete on Apple’s own turf by matching specs, you’d find that the entry level Apple devices are basically the same price as other laptops you could configure with similar specs, because Apple’s baseline/entry level has a pretty powerful CPU/GPU and high resolution displays. So the appropriate response is not that they overcharge for what they give, but that they make choices that are more expensive for the consumer, which is a subtle difference that I’ve been trying to explain throughout this thread.

You cannot compare an app that runs on two different OS.

Why not? Half of the software I use is available on both Linux and MacOS, and frankly a substantial amount of what most people do is in the browser anyway. If the software runs better on one device than another, that’s a real world difference that can be measured. If you’d prefer to use Passmark or whatever other benchmark you’d like to use, you’ll still be able to compare specific CPUs.

BarryZuckerkorn,

If the equity is worth $19 billion, and the debt is worth $13 billion, that’s a drop of $44 billion to $32 billion. Still hilarious, although not as dramatic.

BarryZuckerkorn,

The equity is merely an estimate; it’s no longer a traded company so a public valuation is not applicable.

Even for private companies, though, the valuation matters for all sorts of events that might happen in the meantime: employees with equity still might be forced to sell if they quit their job, so that value ends up actually supporting real transactions trading equity for cash, and income tax will look to the fair value at the time of vesting (or grant, in some cases).

And the debt is secured by the $19B valuation, so it’s not in addition to the equity; the company is “worth” $19B but carries a debt burden of $13B, making its liquidation value $6B

I don’t think this is right. In a typical leveraged buyout, the debt is secured by the assets of the company itself, not by the equity in the company. In other words, the money is owed by Twitter Inc. (and secured by what Twitter owns), not by Twitter’s shareholders (and not secured by the shares themselves).

The old owners got $44 billion. $13 billion came from lenders, not new shareholders. New shareholders agreed to the deal because it allowed them to pony up less money for 100% ownership of the corporation, but the corporation itself is now more burdened with debt. The enterprise value, however, is shareholder equity plus debt, so the enterprise value itself doesn’t change with the debt. That’s why I added the total debt to the total valuation of the equity.
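
To make that arithmetic concrete, here’s a minimal sketch using the thread’s numbers (a simplified view that ignores cash and other balance-sheet adjustments):

```python
# Simplified enterprise-value arithmetic using the numbers from this thread.
# Enterprise value = equity value + debt (ignoring cash and other adjustments).

purchase_price = 44e9  # what the old owners were paid at the buyout
debt_financed = 13e9   # portion that came from lenders, owed by the company
equity_now = 19e9      # estimated fair value of the equity today

enterprise_value_now = equity_now + debt_financed  # 32e9
decline = purchase_price - enterprise_value_now    # 12e9

print(f"Enterprise value now: ${enterprise_value_now / 1e9:.0f}B")
print(f"Decline from the $44B deal: ${decline / 1e9:.0f}B")
```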

BarryZuckerkorn,

That’s a driver for Apple hardware running in a non-Apple OS. That’s different from tricking an Apple OS into running non-Apple hardware.

BarryZuckerkorn,

As long as no one is getting hurt I don’t really see the problem.

It’d be hard to actually meet that premise, though. People are getting hurt.

Child abuse imagery is used as a currency within those circles to incentivize additional distribution, which means there is demand for ongoing and new abuse of actual victims. Extending that financial/economic analogy, seeding that economy with liquidity might or might not incentivize the creation of new authentic child abuse imagery (which requires a child victim to create). That’s not as clear, but what is clear is that it would reduce the transaction costs of distributing existing child abuse imagery, which is a form of re-victimizing those who have already been abused.

Child abuse imagery is also used as a grooming technique. Normalization of child sexual activity is how a lot of abusers persuade children to engage in sexual acts. Providing victimless “seed” material might still result in actual abuse happening down the line.

If the creation of AI-generated child abuse imagery begins to give actual abusers and users of real child abuse imagery cover, to where it becomes more difficult to investigate the crime or secure convictions against child rapists, then the proliferation of this technology would make it easier to victimize additional children without consequences.

I’m not sure what the latest research is on the extent to which viewing and consuming child porn leads to harmful behavior down the line (on the one hand, maybe it’s a less harmful outlet for unhealthy urges, but on the other hand, it may feed an addictive cycle that results in net additional harm to society).

I’m sure there are a lot of other considerations and social forces at play, too.

BarryZuckerkorn,

@penguincoder is a pretty active dev for Beehaw, and has been very open about his view that the lemmy software is built on very shaky foundations: the programming language and architecture choices underpinning the whole thing make moderation unnecessarily difficult, make it hard to comply with the legal requirements of hosting such a service, and severely limit scale. It might make more sense to build up a new forum from the ground up, compatible with ActivityPub, than to try to fork Lemmy or persuade Lemmy’s existing maintainers to start accepting big patches.

BarryZuckerkorn,

The best conversations happen among small groups of people selected out of a huge, huge pool of people.

Niche interests and discussions need to be able to advertise their existence to millions in order to persuade that dozen people to actually participate.

BarryZuckerkorn,

I mean Rust is a godsend as a decision for the language to use.

I have no dog in the fight, but Penguincoder has been pretty vocal about Rust being the wrong choice for a web service: slow to develop and modify, and easy to make mistakes in that take much more work to fix later (he blames this for the state of the lemmy codebase). Its greatest strength is speed of execution, but that doesn’t really matter for web servers, which are basically never CPU limited.

I can’t imagine anything even major needing changing, let alone a full rewrite.

I think the moderation tool examples given sound pretty broken, and it isn’t just Beehaw admins complaining about them. Lemmy.world and a few others have instance admins complaining about how hard it is to remove images from the server (deleting posts/users/comments just orphans the associated image without deleting the underlying file), and how all the moderation functions seem not to contemplate federation (removing an abusive comment or banning a user on one instance does nothing to address the same problematic content already federated to other instances).

BarryZuckerkorn,

It’s not even swipes. It’s an overlay showing which potential swipes are recommended by your chosen recommenders (who can’t message or interact with any users). The actual choice to swipe left or right remains with the user.

BarryZuckerkorn,

Most of the normal apps on the phone are using AI on the edges.

Image processing has come a long way using algorithms trained through those AI techniques. Not just the postprocessing of pictures already taken, like unblurring faces, removing unwanted background people, choosing a better frame of a moving picture, white balance/color profile or noise reduction, but also in the initial capture of the image: setting the physical focus/exposure on recognizable subjects, using software-based image stabilization in longer exposed shots or in video, etc. Most of these functions are on-device AI using the AI-optimized hardware on the phones themselves.

On-device speech recognition, speech generation, image recognition, and music recognition have come a long way in the last 5 years, too. A lot of that came from training models on big, robust servers, but once trained, executing the model on device only requires the AI/ML chip on the phone itself.

In other words, a lot of these apps were already doing these things before on-device AI chips started showing up in 2017 or so. But the on-device chips have made all these things much, much better, especially in the last 5 years when almost all phones started coming with dedicated hardware for these tasks.

BarryZuckerkorn,

Google has a great track record at fulfilling its promises of support, but a terrible track record of giving un-promised support. So when they promise support, that should go into the “good track record” column.

BarryZuckerkorn,

Training AI models takes a lot of development on the software side, and is computationally intense on the hardware side. Loading a shitload of data into the process, and letting the training algorithms dig down on how to value each of billions or even trillions of parameters is going to take a lot of storage space, memory, and actual computation through ASICs dedicated to that task.

Using pre-trained models, though, is a less computationally intensive task. Once the parameters are defined on that huge training set, the model can be applied by software that just takes the parameters already fixed in training and applies them to smaller data sets.
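
As a toy sketch of that asymmetry (a made-up one-layer “model,” not any real phone’s stack): training has to loop gradient updates over a huge dataset, while inference is a single forward pass through parameters that are already frozen.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 10))  # parameters; in reality, billions of them

def train_step(W, x_batch, y_batch, lr=0.01):
    # One gradient update: forward pass, error, weight adjustment.
    # Training repeats this over enormous datasets on dedicated hardware.
    logits = x_batch @ W
    grad = x_batch.T @ (logits - y_batch) / len(x_batch)
    return W - lr * grad

def infer(W, x):
    # Inference: a single forward pass through frozen parameters.
    # This is the cheap part that a phone's AI/ML chip accelerates.
    return (x @ W).argmax()

print(infer(W, rng.normal(size=(256,))))
```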

So I would expect the AI/ML chips in actual phones would continue to benefit from AI development, including models developed many chip generations later.

Desalination system could produce freshwater that is cheaper than tap water (news.mit.edu)

The configuration of the device allows water to circulate in swirling eddies, in a manner similar to the much larger “thermohaline” circulation of the ocean. This circulation, combined with the sun’s heat, drives water to evaporate, leaving salt behind. The resulting water vapor can then be condensed and collected as pure,...

BarryZuckerkorn,

Because this instance doesn’t use downvotes. The default interface doesn’t show downvoting, and attempts to downvote through another interface are literally discarded by the server.

BarryZuckerkorn,

Commissioner Anna Gomez was sworn in yesterday. Up until then, the FCC had been deadlocked 2-2 between Democrats and Republicans, so it was unable to push net neutrality.

They just announced that with their 3-2 majority, one of their top priorities is to get Net Neutrality regs passed. This is an important step, announced like literally the first day they’ve had control of the commission.

BarryZuckerkorn,

That might be, but the tug back and forth at least gives the ISPs pause before going full bore into engineering (or contracting for) non-neutral arrangements. Why invest the time, money, and effort into something that is only sometimes legal?

BarryZuckerkorn,

I’m gonna push back against that defeatist attitude that things aren’t worth doing if success can’t be guaranteed. First off, as a general matter, it’s still worth doing, because we don’t want a one-way ratchet where one side only occasionally tries while the other always brings its A game, pulls off a few upsets, and ends up with an overall winning record. I think that most progressives/liberals are unnecessarily handicapping themselves by not showing up for every fight.

Second, specific to this type of regulation, the “cost of doing business” issue doesn’t even really apply. If the punishment for violating a regulation is a fine, then maybe you pay a few fines and it works out. But that’s not generally how the FCC works, because although they do have the power to issue fines, the big thing they have is the power to actually order compliance with their rules.

If the punishment for not building a house to code is a thousand dollars in fines, that’s not going to stop home builders when they’re making hundreds of thousands in profit per building. But if the punishment for not building a house to code is you’re not allowed to sell it until you tear it down and do it right, well, then we’re talking about a punishment that cuts hundreds of thousands of dollars into lost profits/revenue.

The FCC’s regulations are more like that. If the FCC orders the ISPs that “oh that contract where you’re accepting money in exchange for fast lane access is illegal, so you can’t do that anymore,” that now-illegal contract between two big businesses turns into a more complicated effort of lawyers figuring out what they’re supposed to do. Does the other side still pay, if they’re not getting anything in return? Or if the FCC says that a particular QoS rule on their routers needs to be removed, do the network engineers go back to the drawing board to implement their own traffic shaping stuff that does comply with the regs?

BarryZuckerkorn,

I’m sympathetic to the idea that an individual user should be able to override their instance admins’ preferences on access for content-related reasons, but I don’t think it would be workable from an administrative viewpoint to allow users to allowlist instances that were blocklisted for administrative reasons.

Lemmy.world dealt with (and is probably still dealing with) a series of malicious actions designed to actually bring down the service or otherwise tie up its resources (including moderator/admin attention and effort, and exposure to literal criminal charges), using maliciously crafted requests to bring down servers, literally illegal content posted to their servers, etc. Defederation in response to these types of attacks would be defeated if a user could let the content come through anyway.

I imagine most instances are dealing with similar issues.

So ideally we’d need to be able to create 4 categories of relationships with other instances:

  1. Blocked no matter what
  2. Blocked by default for users, can be user overridden
  3. Allowed by default for users, can be user overridden
  4. Allowed no matter what (not sure what the use case for this status would be, but it seems trivial to implement since it already exists as the default).

But I think you’d find that the typical scenario that justifies blocking would actually put the typical block into category 1, not category 2.
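
For concreteness, here’s a minimal sketch of how those four categories might resolve (hypothetical names and logic, not Lemmy’s actual schema):

```python
from enum import Enum

class FederationPolicy(Enum):
    BLOCKED_ALWAYS = 1   # category 1: blocked no matter what
    BLOCKED_DEFAULT = 2  # category 2: blocked by default, user may override
    ALLOWED_DEFAULT = 3  # category 3: allowed by default, user may override
    ALLOWED_ALWAYS = 4   # category 4: allowed no matter what

def instance_visible(policy, user_override=None):
    """Decide whether a user sees content from a remote instance.

    user_override is True (allow), False (block), or None (no preference);
    it only applies to the two 'default' categories.
    """
    if policy is FederationPolicy.BLOCKED_ALWAYS:
        return False
    if policy is FederationPolicy.ALLOWED_ALWAYS:
        return True
    if user_override is not None:
        return user_override
    return policy is FederationPolicy.ALLOWED_DEFAULT
```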

Beans Are a Vegetable: an Overanalysis (lemmy.ca)

“Of course beans count as a vegetable!” I said to my wife. We have this house rule that it’s okay to eat mac and cheese for dinner so long as you add a vegetable. If I may ‘spill the beans’, I’m obsessed with them. Each one is tastier than the last: garbanzo bean, black bean, kidney. The butter bean, which is just...

BarryZuckerkorn,

“Well technically this isn’t a 100% plant based meal because the mushrooms are not part of the plant kingdom”

BarryZuckerkorn,

Where I am, electricity is pretty cheap, but natural gas is tremendously cheaper per joule… so we can actually pay less by using the “inefficient” fuel for our home.

Most of the push towards rapid adoption of heat pumps is happening in Europe, where geopolitical developments (to put it mildly) caused gas prices to spike last winter. The nature of the natural gas logistics means that different continents can have wildly different prices (unlike petroleum, where you can always throw it on a ship and send it from where it’s cheap to where it’s expensive), so a lot of European countries are seeing these debates play out against the backdrop of their own energy markets. Germany passed a law this year that would phase out new gas furnace installations, so that’s why a lot of the debate is happening with a focus on German markets.

Whether (or how quickly) a transition to heat pumps pays for itself in euros will depend a lot on what happens in the future to gas and electricity prices.

BarryZuckerkorn,

a specific amount of energy (watts?)

Energy is measured in joules. Watts are joules per second: a measure of how quickly the energy is being used. For example, a 1,000-watt heater running for one hour uses 3,600,000 joules (1 kWh).

And from there to absolute zero (0K) would be “available energy” in my perception.

No, it’s not available. The only way to use heat energy is to find something colder that the heat can be transferred to, and use that heat transfer to drive some other process that puts the energy into another form: a chemical bond, an electrical charge, a moving object, something heavy lifted higher, etc.

Once everything in the universe completely evens out in heat, where none of the heat can go into anywhere else (because everything else is just as hot), that’s known as the heat death of the universe.

So if you’re starting with stuff that’s all the same temperature, and you want to make one part of that system colder by pumping heat out from the place to be cooled and dumping that heat into an already hot place, it’ll always cost more energy than you can capture again when you try to use that heat for other stuff. That’s because the only way to use that heat energy is to take advantage of the differential between the hot zone and the cold zone, by letting the temperatures equalize again. And if you’re going to do that, why did you spend energy cooling the cold zone in the first place? You’ll recover less energy as the heat returns to the cold zone than you spent creating the cold zone, so it would’ve been more efficient to just let the two zones stay at equal temperature.
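
The Carnot limits make this concrete: even with ideal machines, the minimum work to pump heat out of the cold zone exceeds the maximum work you can later extract from the resulting temperature difference. A rough illustrative sketch (arbitrary temperatures, treating the heat flows loosely):

```python
# Ideal (Carnot) limits: even perfect machines lose on the round trip.
T_hot, T_cold = 300.0, 250.0  # kelvin; arbitrary illustrative values
Q = 1000.0                    # joules of heat pumped out of the cold zone

# Minimum work an ideal heat pump needs to move Q out of the cold zone:
work_to_pump = Q * (T_hot - T_cold) / T_cold   # 200 J

# Maximum work an ideal heat engine can extract letting heat flow back:
work_recovered = Q * (T_hot - T_cold) / T_hot  # ~167 J

print(f"Spent at least {work_to_pump:.0f} J, "
      f"recovered at most {work_recovered:.0f} J")
```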

BarryZuckerkorn,

Bundling things together is good when it reduces friction for the consumer, but bad when it reduces choice for the consumer. Every decision about bundling needs to be understood from that perspective, and evaluated on a case by case basis against that tradeoff.

That loss of choice is especially anti-consumer when a provider leverages a dominant market position in one product to push their own inferior version of a totally different product. For example, right now there’s a competition for consumer cloud storage. But none of the providers are actually competing on cloud storage features or pricing. All of them are competing based on bundling with the other totally unrelated products provided by that competitor:

  • Apple pushes iCloud by giving it first party advantage on all Apple devices, with system and OS integration that the other cloud providers aren’t allowed to match.
  • Google pushes Google Drive by using that storage space as part of the quota for Gmail, Google Photos, and Google Workspace.
  • Microsoft pushes OneDrive as an add-on to its dominant position in Microsoft Office and Exchange, and gives it first party integration into Windows.
  • Adobe pushes Creative Cloud as an add-on to its dominant position in its suite of creative apps.
  • Amazon gives cloud storage to people who subscribe to, like, 2-day shipping and a TV streaming service and discounts at Whole Foods, in what is probably the most absurd bundle of them all.

And you see it everywhere. YouTube tries to protect its inferior Music service by bundling it with ad-free videos, Samsung put the inferior Bixby assistant on its phones, Google uses its dominance in browser, search, and maps to protect its advertising business, Apple gives its credit card preferential treatment in its payment app, etc.

So when a service protects its own affiliated service through unfair/preferential treatment, it harms the consumer by making the entire bundle less useful than a bunch of independent services, each competing to be the best at that one specific thing.

Beehaw on Lemmy: The long-term conundrum of staying here

Yesterday, you probably saw this informal post by one of our head admins (Chris Remington). This post lamented some of the difficulties we’re running into with the site at this point, and what the future might hold for us. This is a more formal post about those difficulties and the way we currently see things....

BarryZuckerkorn,

I’m pretty protective of my online privacy, so I have a tendency to make alts rather than allow disparate interests to be correlated to the same user (I’d rather have 3 accounts than a single account that shows I’m a person with my hobby, with my career, living in my city), and I’ve scattered my lemmy alts all across the lemmyverse (less beholden to instance downtime or an admin trying to correlate users).

There’s a complete dearth of content for niche communities like individual games or special interest hobbies, because the userbase is simply too small to support a healthy special interest community.

At this point, lemmy doesn’t even have much in the way of communities on some mainstream topics: sports, lifestyle/advice, food, cars, fashion, television, film, music, local issues in major population centers, etc. I mean, back in the 2000’s, these were topics that were mainstream enough that they were able to publish printed magazines or even newspapers for newsstands, but we can’t even get a critical mass of commenters for many of these topics on lemmy.

Yes, linux/FOSS and video games and tech are relatively niche interests that do have robust discussion here on lemmy, but that’s mainly a function of who tended to adopt use of the platform. Is lemmy going to be like Hacker News or Slashdot in that it never makes the jump to the mainstream?

BarryZuckerkorn,

Because some more microseconds later, it’s the difference between being able to serve 1k requests per second and dropping connections, vs. 100k requests per second and working smoothly.

Doesn’t this assume that the bottleneck is that particular function? If the service as a whole chokes on something else at 500 requests per second, then making that particular function capable of handling 100k requests isn’t going to make a difference. For web apps, the bottleneck is often some kind of storage I/O or the limits of the network infrastructure.
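
A toy model of that point (made-up stage capacities): end-to-end throughput is capped by the slowest stage, so a 2x speedup anywhere else doesn’t move the number.

```python
# Toy pipeline: end-to-end throughput is bounded by the slowest stage.
stages = {
    "request parsing": 100_000,  # requests/sec each stage could do alone
    "business logic":   50_000,
    "database I/O":        500,  # the actual bottleneck
}

def max_throughput(stages):
    return min(stages.values())

print(max_throughput(stages))       # 500 rps

stages["business logic"] = 100_000  # heroic optimization elsewhere...
print(max_throughput(stages))       # ...still 500 rps
```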

BarryZuckerkorn,

I feel like a bid/ask function wouldn’t be that technically difficult to implement.

A driver can punch in minimums: Maybe there’s a driver who is only willing to drive for $10 per ride, $0.50 per minute, and $0.25 per mile at a particular moment in time, and maybe a multiplier or premium for certain routes that involve tolls or larger passenger groups, etc., or a discount for pre-booking at least certain number of hours or days in advance. Maybe the pricing could take into consideration more variables (idle time versus driving time, pickup distance, minimum rider rating, etc.). Potential riders punch in their desired routes and they get real-time pricing information on the available drivers and the quoted price according to each driver’s formula.

The formulas shouldn’t be that hard for the driver or the passenger to work with from their interface, as long as the service has access to good route data.
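
Here’s a minimal sketch of what a driver’s formula could look like (hypothetical fields and numbers, just to show the quoting logic):

```python
from dataclasses import dataclass

@dataclass
class DriverAsk:
    base: float                    # flat minimum per ride, e.g. $10
    per_minute: float              # e.g. $0.50
    per_mile: float                # e.g. $0.25
    toll_multiplier: float = 1.0   # premium for routes with tolls
    prebook_discount: float = 0.0  # fraction off for booking in advance

    def quote(self, minutes, miles, has_tolls=False, prebooked=False):
        # Real-time quote from the driver's posted minimums.
        price = self.base + self.per_minute * minutes + self.per_mile * miles
        if has_tolls:
            price *= self.toll_multiplier
        if prebooked:
            price *= 1.0 - self.prebook_discount
        return round(price, 2)

driver = DriverAsk(base=10.0, per_minute=0.50, per_mile=0.25,
                   toll_multiplier=1.15, prebook_discount=0.10)
print(driver.quote(minutes=20, miles=8, has_tolls=True))  # 25.3
```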

BarryZuckerkorn,

Google has run a fiber ISP for a little over 10 years now. It was one of the first U.S. ISPs to offer gigabit speeds to residential customers, and has provided steady competitive pressure to other providers to provide faster speeds in those markets, as well.

Google also operates a mobile service called Google Fi as an MVNO. They handle the billing, but lease the capacity the way other MVNOs do.

A decentralized, blockchain-based messaging network for safer communications (techxplore.com)

Researchers from several institutes worldwide recently developed Quarks, a new, decentralized messaging network based on blockchain technology. Their proposed system could overcome the limitations of most commonly used messaging platforms, allowing users to retain control over their personal data and other information they share...

BarryZuckerkorn,

I disagree here. With p2p/federated you have to worry about whether your microprovider goes out.

This Quarks protocol still seems to require reliance on “nodes,” which is the same thing as a federated service, with extra steps. It’s more overhead without any of the portability you want.

BarryZuckerkorn,

Vim is a text editor that works in a command line and therefore doesn’t require a graphical interface or windowing system, or anything like a mouse or trackpad or touch interface. It has a whole system of using the keyboard to do a bunch of things really efficiently, but the user has to actively go and learn those keyboard shortcuts, and almost an entire language of how to move the cursor around and edit stuff. It’s great once you learn it, so it creates a certain type of evangelist who tries to spread the word.

This meme template is perfect, because the vim user really did learn a bunch of stuff, and then wants to try to convince other people to do the same, using a pretty unpersuasive rationale (not using a mouse while programming).

BarryZuckerkorn,

Young people tend to be more persuadable before 30, and tend to bake in their political views around that age. So big events in one’s 20’s tend to lead to lasting partisan affiliations for life after that.

FDR’s presidency won over a lot of people to the Democrats in the 30’s and 40’s. Eisenhower’s presidency shifted people over to Republicans in the 50’s. Nixon pushed people away from Republicans. But by the 70’s Democrats were losing a lot of voters, and then Reagan won a bunch of people over to the GOP. Then 9/11 won people over to Republicans, while the Iraq war pushed them away.

But each of these things had an outsized effect on those under 30. So Boomers who remember getting fed up with Democrats in the 70s and crossing over for Reagan (and then voting Republican in every election since) just thought it was the effect of age, rather than the effect of that particular political moment in 1980.

And even though this data and analysis are mainly about Americans, it’s probably reflective of how people shape their own political beliefs everywhere.

BarryZuckerkorn,

But I do miss having the “fucking [insert slur here]” “kill yourself” “only a basement-dwelling loser would have this opinion” comments auto-hidden because the average passing user disapproved of them and decided to express their disapproval via downvote, instead of coming across them myself semi-frequently and reporting them.

This is why I think downvotes are an important element of the UI and ranking algorithms. No matter how many members, comments, or votes there are, there are still going to be 24 hours in a day and 168 hours in a week. So naturally, smaller communities actually tend to have larger gaps in mod coverage: longer stretches between an item going onto the mod queue and a human mod resolving it.

So I’m in favor of mechanisms being built in for removing content from easy view, without mods. Downvotes seems like the easiest way to implement that kind of mechanism.

BarryZuckerkorn,

Each subreddit had its own atmosphere and culture and environment. I would expect the same to happen here, only with an opportunity for different instances to also foster their own dynamics, in addition to each community within each instance.

We’re too small to have niche thriving communities

The same was largely true of reddit when I joined (in about 2008 or 2009). There were a lot of technology/science/engineering/programming people in the mix, so there was good content for that, but most of what it was just kinda grew out of ideas that had come from other forums (lolcats-style content, advice animals memes) and from trends organically bubbling up within the community (the concepts of the AMA, TIL, ELI5, AITA, narwhal fandom, grumpy cat, the reddit switcheroo), and then weird turns of phrase that people started repeating elsewhere like a cargo cult (the overuse of the word “obligatory,” accidentally a whole word, ಠ_ಠ, playing with movie titles by adding or removing or switching letters). We saw the rise and fall of some content creators and power users, and the rise and fall of communities (/r/fffffffuuuuuuuuuuuu, inglip, space dicks, all sorts of communities that eventually got banned).

Trends don’t stop trending. Any community, large or small, ends up developing its own cultural touchstones and a shared history. Eventually we’ll see things turn from innovative to an inside joke to overdone within different lemmy communities, too.

BarryZuckerkorn,

I really can’t understand why an instance with the .world domain is so US-centric.

Honestly, I think it has a lot to do with a lot of the other popular lemmy instances being specifically oriented around a specific non-US country, so that those of us who are in the US felt deterred from joining the ones that explicitly included “.de” or “.ca” or “.ch” in their domain, with German/Canadian/Swiss stuff in the sidebar.

BarryZuckerkorn,

It doesn’t even need to be for marginalized communities, either (even if the benefit is most pronounced for those who don’t feel comfortable being themselves in the broader public sphere). Large organizations have always seen the benefit of smaller subgroups for like-minded people of similar experience/background to have a narrower discussion, even if some of those subgroups have quite a bit of social power out in the broader world.

For example, I am active in a few online communities (and in-person social circles) consisting of lawyers. As a profession and as a group, we have plenty of power and influence, so the benefit of having a gated space, even if we feel “safe” elsewhere, is still to foster discussion and community.

Churches and religious student groups will run Bible studies and the like, and they don’t tolerate people coming in and trying to derail the conversation by questioning the premises of their religions, either. Even if (or perhaps especially if) it is the dominant religion in their area.

BarryZuckerkorn,

Restricted membership groups are still valuable, no matter what you want to call it.

Shared experiences are often a good foundation for a group: residents of a particular neighborhood, alumni of a particular school, members of a particular family, etc. You can see lively discussion there that opens up in a way that might not happen in a general open group.

Common beliefs also form a good foundation for group membership. Almost every religion has meetings of other members of that religion, where discussion can happen within that framework of that religion’s views. A Baptist bible study group wouldn’t tolerate a new member coming in and just insisting every meeting that the Bible is fake and that Christianity is a lie. Does it create an “echo chamber” of only people who believe in a specific religion? Well, yes, because that’s the point, and why those members choose to congregate there.

Hell, I’m in a sibling chat thread where specific members of my family feel safe talking about their struggles with their significant others, roommates, jobs, neighbors, etc., because we like being able to bounce ideas off of people raised like us, by the same parents, in the same household. I don’t think we’d be able to have that productive conversation if we didn’t have that specific thread that we knew was just for us, and not for the other people in our lives to read and comment on.

Unless you’re taking the radical view that people shouldn’t be allowed to congregate in smaller groups that restrict membership, safe spaces are a natural consequence of how people associate with one another.

BarryZuckerkorn,

but because of the fossil fuels generated by the companies they invest their money in.

Lemme go ahead and roll my eyes here. Yes, American Airlines produces a significant percentage of the world’s greenhouse emissions. But they burn that fuel for the passengers, not just for the benefit of shareholders. Same with ExxonMobil, BP, etc.

Consumption is what drives pollution. Investments to profit off of that consumption are secondary.

BarryZuckerkorn,

A human brain is just the summation of all the content it’s ever witnessed, though, both paid and unpaid.

But copyright is entirely artificial. The deal is that the law says you have to pay when you copy a bunch of copyrighted text and reprint it into new pages of a newly bound book. The law also says you don’t have to pay when you are giving commentary on a copyrighted work, or parodying a copyrighted work, or drawing inspiration from a copyrighted work to create something new but still influenced by that copyrighted work. The question for these lawsuits is whether using copyrighted works to train these models and generate new text (or art or music) is infringement of those artificial, human-made, legal rights.

As an example, sound recording copyrights only protect the literal copying of a sound recording. Someone who mimics that copyrighted recording, no matter how perfectly, doesn’t actually infringe on the recording copyright (even if they might infringe on the composition copyright, a separate and distinct copyright). But a literal duplication process of some kind would be infringement.

We can have a debate whether the law draws the line in the correct places, or whether the copyright regime could be improved, and other normative discussions about what the rules should be in the modern world, especially about whether the rules in one area (e.g., the human brain) are consistent with the rules in another area (e.g., a generative AI model). But that’s a separate discussion from what the rules currently are. Under current law, the human brain is allowed to perform some types of copying and processing and remixing that some computer programs are not.

BarryZuckerkorn,

Your second paragraph about sound mimickry, as far as I’m aware, is not accurate.

It is. The recording copyright is separate from the musical composition copyright. Here’s the statute governing the rights to use a recording:

The exclusive rights of the owner of copyright in a sound recording under clauses (1) and (2) of section 106 do not extend to the making or duplication of another sound recording that consists entirely of an independent fixation of other sounds, even though such sounds imitate or simulate those in the copyrighted sound recording.

So if I want to go record a version of “I Will Always Love You” that mimics and is inspired by Whitney Houston’s performance, I actually only owe compensation to the owner of the musical composition copyright, Dolly Parton. Even if I manage to make it sound just like Whitney Houston, her estate doesn’t hold any rights to anything other than the actual sounds actually captured in that recording.

BarryZuckerkorn,

It’s not meaningless.

I know that it’s pretty easy to pick the lock on my front door. Or to break the window and get in. But still, there are a non-zero number of burglars who would be stopped by that lock. Same with my bike lock, which is a bit harder to pick but still possible. Nevertheless, the lock itself does deter and prevent some non-zero number of opportunistic thefts.

There are a non-zero number of law enforcement agencies that would be stopped by full disk encryption, even if the device is powered on and the encrypted media is mounted. There are a non-zero number of law enforcement agencies that would be stopped by all sorts of security and encryption strategies. And I’d argue that simple best practices would stop quite a few more than you’re seeming to assume: encrypt any data at rest on any devices you control, and then use e2e encryption for any data stored elsewhere.

You don’t even have to be that technically sophisticated. For Apple devices, turn on FileVault (as it is by default if you log into an Apple account when you set up the device), turn off iCloud. For Windows devices, use Bitlocker. For Android, turn on the “Encrypt Phone” setting, which is on by default. If you’re messing around with your own Linux devices, using LUKS isn’t significantly more difficult than the rest of system administration.

BarryZuckerkorn,

Dietary cholesterol isn’t well correlated with serum cholesterol, which is what the paper you’ve linked is about. It even veers off into the natural conclusion if you believe that serum cholesterol is the only thing that matters: statin prescriptions for everyone!

The TV streaming apps broke their promises, and now they’re jacking up prices (arstechnica.com)

For a moment, it seemed like the streaming apps were the things that could save us from the hegemony of cable TV—a system where you had to pay for a ton of stuff you didn’t want to watch so you could see the handful of things you were actually interested in....

BarryZuckerkorn,

Going back to cable isn’t the answer. It’s a failed model and needs to die.

Defined narrowly enough, yes, that old model is dead.

But more broadly, as an economic matter there will always be a business model for having a basket of content, with some portion of historical content (classic movies and tv shows from decades past) on demand, some ongoing/current on-demand content (last week’s episode of some scripted show), and live broadcast (sporting events happening right now). Build up enough of a catalog, charge a single price to subscribers for access to that content, and people will pay for the entire bundle. And because each subscriber is interested in a different portion of that bundle, the mass of subscribers essentially cross-subsidizes the fat tail of niche content: I don’t mind paying for your niche if it means my niche gets to survive.

The technological and cultural changes have deemphasized the importance of cable’s live delivery mechanism of 100+ “channels” each with programming on a specific schedule, but the core business model still will be there: subscribe to content and you can get some combination of live channels and a catalog of on-demand content.

The content owners, through either carriage fees with the cable/IPTV providers, or through the streaming services, or everything in between, are trying to jack up the price to see what the market will bear for those bundles. They might miscalculate to the point where the subscriber count drops so much that their overall revenue decreases even with a higher revenue per subscriber (and I actually think this is about to happen). And then instead of a market equilibrium where almost everyone pays a little bit to where there’s a huge bundle of content available, the little niche interests just can’t get a subscriber base and aren’t made available, even if the content is already made.
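
That miscalculation risk is simple arithmetic; a sketch with made-up numbers:

```python
# Made-up numbers: a price hike lowers total revenue if enough subscribers leave.
subs_before, price_before = 1_000_000, 10.00
subs_after,  price_after  =   600_000, 15.00  # 40% churn after a 50% hike

print(f"Before: ${subs_before * price_before:,.0f}/mo")  # $10,000,000/mo
print(f"After:  ${subs_after * price_after:,.0f}/mo")    # $9,000,000/mo
```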

BarryZuckerkorn,

I mean, you just defined YouTube.

Well, I was trying to give a broad enough description to cover literally every video service, so mission accomplished!

My point is that every service will have different items in each category, and that splitting up the world’s catalog of content into many different services ends up breaking down the economic benefit of bundling. The YouTube bundle is different from the Netflix bundle, which is different from the Apple TV+ bundle, which is different from Disney+ and Hulu, which is different from Max (formerly HBO Max). YouTube has live content, but if you want to watch a specific basketball game live, you’ll have to subscribe to the service with that (and you’ll have to endure ads and product placement as part of that game). And maybe that’s not the $15/month YouTube Premium, but is instead the $73/month YouTube TV.

BarryZuckerkorn,

Sorry, in Linux everything is a file, so there is no “everything else.”

USB-C confirmed for the iPhone 15 in new leaked images - Macworld (www.macworld.com)

We’ve known that the iPhone is switching to USB-C for a while now, but there was always a possibility that Apple would stick with Lightning for one more year. Based on the latest leaked images, however, Apple is all-in on USB-C for the iPhone 15 and iPhone 15 Pro models, with USB-C parts for the iPhone 15, iPhone 15 Plus, and...

BarryZuckerkorn,

You’re 100% right.

The 12" MacBook had a great form factor right at the time that Intel CPUs really started to struggle with performance at lower power consumption, so the design turned into a huge weakness for thermal management. If they had similar performance per watt as the base M1 later showed off, that device would’ve been perfect for an ultraportable laptop, the spiritual successor to the discontinued 11" MacBook Air.
