@luis_in_brief@b0rk When it does change it takes years, like the multi-year battle between the GCC steering committee and RMS over allowing plugins and finding a way that would keep evildoers from stealing the precious bodily fluids, while everyone RMS was worried about was already building on top of LLVM instead.
@luis_in_brief@b0rk I was once on the GCC steering committee. I saw our main role at the time as managing RMS and keeping him from driving the developers away. Burned out eventually, have been out of GCC development for a long time.
@Teri_Kanefield He's appealing and suing the judge, right? I guess that will delay any consequences for a while.
Though perhaps his creditors might start calling loans immediately while there are still assets to grab. I don't think appeals would hold them off, but I don't know.
"I do not make these decisions lightly," says House Speaker McCarthy, announcing a full #impeachment inquiry into President Biden. "We will go wherever the evidence takes us."
It's worth keeping in mind that in ARPANET days (the direct ancestor of the Internet), we were largely ignored by AT&T et al. because they had their own plans for public networking that would follow the telephone model -- pay per minute of connect time and per kilopacket sent or received, etc.
That the ARPANET model would succeed was not at all guaranteed, and in fact seemed extraordinarily unlikely at times. Just a minor change here or there in the timeline and most of what we take for granted now on the Net would not exist in any kind of recognizable form.
The rise of advertiser-supported services is largely what made most of the modern Internet possible, and the persons and organizations who routinely block or rant against advertising have yet to offer an alternative funding model to keep this stuff going -- one that wouldn't make the current digital divide look like a pimple next to the Mt. Everest of user charges that would be necessary if the advertising model collapses.
@lauren The problem isn't that there are ads. The problem is the tracking and the vast amounts of code from many parties attached to every page. It is a security and surveillance nightmare. The alternative is the model that supported newspapers and magazines up through the 1990s, very successfully: if a reader is interested in a particular publication, or a section of a newspaper, it is likely that they will be interested in related products or services. Or if a publication is widely read, advertisers who want to reach almost everyone advertise there. No need for cross-site cookies placed by 25 different ad networks on every page. Publications control their own ads, instead of giving all the money to Google and Facebook.
Could this have been rolled out just... IDK maybe a little too fast? And by people who didn't understand the capabilities of the software all that well?
There are consequences for degradation of search quality -- for example, before this started I might try to get an answer directly from a search engine. The answers seemed to come from Wikipedia, and that's not so bad -- now? I wear a bad-data hazmat suit when searching. I wouldn't ask for "the nearest grocery store to [address]"
@futurebird I think that this is a bad interaction between Google's "page rank" approach (pages with lots of links to them from "reputable" sites are assumed to be best), AI pollution (someone generated this page from an LLM producing this stupidity and posted it), and lots of people linking to the page as an example of AI stupidity.
To fix this, they'll have to figure out a way to rank AI-generated results worse, or to recognize when a link means "look at this stupid thing" instead of "look at this highly accurate thing".
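The failure mode described above can be seen in a minimal sketch of the PageRank idea (power iteration over a tiny hand-made link graph; the page names, damping factor, and iteration count are illustrative assumptions, not Google's actual implementation):

```python
# Minimal PageRank sketch: a page's rank is the chance a random surfer
# lands on it, following links with probability `damping` and jumping
# to a random page otherwise. The algorithm only counts links -- it
# cannot tell a derisive link from an approving one.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical graph: "llm-junk" collects links from everyone saying
# "look at this stupid thing", so it ends up ranked highest anyway.
graph = {
    "wiki": ["llm-junk"],
    "blog": ["llm-junk", "wiki"],
    "forum": ["llm-junk"],
    "llm-junk": [],
}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # → llm-junk
```

The point of the sketch: every incoming link raises the target's rank regardless of the linker's intent, which is exactly why mocking links promote the page they mock.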