Most Americans favor restrictions on false information, violent content online

65% of Americans support tech companies moderating false information online and 55% support the U.S. government taking these steps. These shares have increased since 2018. Americans are even more supportive of tech companies (71%) and the U.S. government (60%) restricting extremely violent content online.

PostmodernPythia,

The key is defining terms like “false” and “violent.”

tias,

In other news: most Americans disagree on what information is false

OneRedFox,
@OneRedFox@beehaw.org

Checks out. I wouldn’t want the US government doing it, but deplatforming bullshit is the correct approach. It takes more effort to reject a belief than to accept it and if the topic is unimportant to the person reading about it, then they’re more apt to fall victim to misinformation.

Although suspension of belief is possible (Hasson, Simmons, & Todorov, 2005; Schul, Mayo, & Burnstein, 2008), it seems to require a high degree of attention, considerable implausibility of the message, or high levels of distrust at the time the message is received. So, in most situations, the deck is stacked in favor of accepting information rather than rejecting it, provided there are no salient markers that call the speaker’s intention of cooperative conversation into question. Going beyond this default of acceptance requires additional motivation and cognitive resources: If the topic is not very important to you, or you have other things on your mind, misinformation will likely slip in.

Additionally, repeated exposure to a statement increases the likelihood that it will be accepted as true.

Repeated exposure to a statement is known to increase its acceptance as true (e.g., Begg, Anas, & Farinacci, 1992; Hasher, Goldstein, & Toppino, 1977). In a classic study of rumor transmission, Allport and Lepkin (1945) observed that the strongest predictor of belief in wartime rumors was simple repetition. Repetition effects may create a perceived social consensus even when no consensus exists. Festinger (1954) referred to social consensus as a “secondary reality test”: If many people believe a piece of information, there’s probably something to it. Because people are more frequently exposed to widely shared beliefs than to highly idiosyncratic ones, the familiarity of a belief is often a valid indicator of social consensus.

Even providing corrections next to misinformation leads to the misinformation spreading.

A common format for such campaigns is a “myth versus fact” approach that juxtaposes a given piece of false information with a pertinent fact. For example, the U.S. Centers for Disease Control and Prevention offer patient handouts that counter an erroneous health-related belief (e.g., “The side effects of flu vaccination are worse than the flu”) with relevant facts (e.g., “Side effects of flu vaccination are rare and mild”). When recipients are tested immediately after reading such handouts, they correctly distinguish between myths and facts, and report behavioral intentions that are consistent with the information provided (e.g., an intention to get vaccinated). However, a short delay is sufficient to reverse this effect: After a mere 30 minutes, readers of the handouts identify more “myths” as “facts” than do people who never received a handout to begin with (Schwarz et al., 2007). Moreover, people’s behavioral intentions are consistent with this confusion: They report fewer vaccination intentions than people who were not exposed to the handout.

The ideal solution is to cut off the flow of misinformation and reinforce the facts instead.

beejjorgensen,
@beejjorgensen@lemmy.sdf.org

I have no problem with Twitter moderating content. The First Amendment says they can.

But the government moderating it? The First Amendment says they can’t.

cygnus,
@cygnus@lemmy.ca

Most Americans agree that false information should be moderated, but they disagree wildly on what’s false or not.

HairHeel,
@HairHeel@programming.dev

Y’all gonna regret this when Ron DeSantis gets put in charge of deciding which information is false enough to be deleted.

wxboss,

Most Americans don’t want to think for themselves. They would rather someone else do that heavy lifting for them.

However, it’s important that people have the freedom to reason for themselves and make choices accordingly without some governmental entity mandating a certain thought trajectory. People shouldn’t surrender such fundamental human freedoms to their government.

“If liberty means anything at all it means the right to tell people what they do not want to hear.” ― George Orwell, “The Freedom of the Press” (proposed preface to Animal Farm)

knokelmaat,

I personally like transparent enforcement of false-information moderation. What I mean by that is something similar to Beehaw, where you have public mod logs. A quick check is enough to get a vibe of what is being filtered, and in Beehaw’s case they’re doing an amazing job.

Mod logs also allow for a clear record of what happened, useful in case a person does not agree with the action a moderator took.

In that case it doesn’t really matter if the moderators work directly for big tech: misuse would be very clearly visible, and discontented people could raise awareness or just leave the platform.

mrmanager,
@mrmanager@lemmy.today

Americans are generally quite stupid. Imagine asking private big tech to moderate what you can see online. :)

But it’s the same in every country. The large masses are clueless. If you ask Europeans, you would get the same response, even though it would be Americans doing the moderating, which is even worse.

You know Microsoft is planning to put the next Windows online and just let people access it? Same pattern here: people trusting big tech with their own privacy and integrity. So weird.

sub_,

65% of Americans support tech companies moderating false information online

Aren’t those tech companies the ones who kept boosting false information in the first place to get ad revenue? FB did it, YouTube did it, Twitter did it, Google did it.

How about breaking them up into smaller companies first?

I thought the labels on potential COVID or election disinformation were pretty good, until companies stopped using them.

Why not do that again? Those who are going to claim it’s censorship will always do so. But what needs to be done is to prevent those who are not well informed from falling into the antivax / far-right rabbit hole.

Also, force content creators / websites to prominently show who is funding / paying them.

Steeve,

Aren’t those tech companies the ones who kept boosting false information in the first place to get ad revenue?

Not really, or at least not intentionally. They push content for engagement, and that’s engaging content. It works the same for vote-based systems like Reddit and Lemmy too; people upvote ragebait and misinformation all the time. We like to blame “the algorithm” because it’s a mysterious, evil black box, but it’s really just human nature.

I don’t see how breaking them up would stop misinformation, because no tech company (or government, frankly) actually wants to be the one to decide what’s misinformation. Facebook and Google have actually lobbied for governments to start regulating social media content, but nobody will touch it, because as soon as you start regulating content you’ll have everyone screaming about “muh free speech” immediately.

manpacket,

Do they agree on the definition of false information?

argv_minus_one,

But how do you implement such a thing without horrible side effects?

rodbiren,

It’s all fun and games till well-intentioned laws get abused by a new administration. Be careful what you wish for. My personal take is that any organization that is even reasonably similar to a news site must conform to fairness-in-reporting standards, much like broadcast TV once had. If you don’t, but an argument could be made that you present as a news site, you just slap a sizeable banner on every page saying that you are an entertainment site. Drawing distinctions between what is news and what is entertainment would theoretically work better than an outright ban on misleading content.

At the end of the day it won’t matter what is written unless the regulations have actual teeth. “Fines” mean so little given that the billion-dollar backers couldn’t care less, and retractions are too little, too late. I want these wannabe-Nazi “news infotainment” people to go to jail for speech that causes harm to people and the nation. Destroying democracy should be painful for the agitators.

invno1, (edited)

I don’t think this is really about censorship. You can say and advertise whatever you want, but after this, if it can be proven false, you have to pay the price. All it does is make people double-check their facts and figures before they go shooting off random falsehoods.

Nuuskis9,

This is just another fake poll used to justify the biometrics requirement for internet connection.

dingus, (edited)
@dingus@lemmy.ml

This statement reads like “I’m angry people don’t want me using botnets to push my agenda.”

Biometrics? Comcast didn’t even ask me for a drivers license. They asked me for a credit card to make a payment.

Also, frankly, last I checked, Pew Research is pretty much unrivaled in social science polling data, so I’m not sure why you’re calling it “fake.”

Nuuskis9,

I don’t know how things work in the US, but here in Europe it’s already pretty much standard that the average joe holds the exact opposite opinion of what the media claims the result is.

And when multiple other polls with many times more respondents show the opposite result, the media is always silent.

Our politicians started pushing biometrics / strong identification last year or this year for everyone who connects to the internet. That’s just a step closer to that “you have no privacy” conspiracy theory created by conspiracy-theorist politicians.

dingus,
@dingus@lemmy.ml

Our politicians started pushing biometrics / strong identification last year or this year for everyone who connects to the internet. That’s just a step closer to that “you have no privacy” conspiracy theory created by conspiracy-theorist politicians.

That’s unfortunate to hear, and I am not here to diminish your lived experience, but I can tell you that they’re not pushing for biometric ID in the US to log on to the internet. And this poll is from a well-respected research group, widely known to be unbiased, that has done polling on US households and US issues for decades.

So in respect to this thread, your feelings on that have next to nothing to do with this discussion.

Nuuskis9,

Well, that’s good to hear then. Hopefully it stays on that track. The EU and US are both moving to ban encrypted messaging at the moment, though. And India has already outlawed it, but Indian people say that so far it hasn’t affected anybody in any way.

alyaza,
@alyaza@beehaw.org

This is just another fake poll used to justify the biometrics requirement for internet connection.

quelle surprise that your comment history is full of right-wing, crank, Great Reset (((globalist))) stuff:

Haha it is funny that you oppose only with the arguments you read from fact checkers instead of going to YouTube and seeing what those globalists say themselves. In every country the top politicians regularly go to their meetings, where harsh things are said directly, and all you know is that everything they talked about is a conspiracy theory. You always repeat the sentences that nobody actually said, except fact checkers. Lol

Year 2030 is a global target for renovations in every aspect of societies and countries.

Let’s hope the successor of Rutte isn’t a WEF muppet and will stop the closure of farms.

Wow, I didn’t realize you hollandaises love WEF puppets, 15-min cities and Rutte’s lies even after he got nailed by Gideon van Meijeren in the parliament. Well, have fun with your 11,200 closed farmlands then. Luckily the puppets haven’t planned that here, but most likely it’ll happen here too.

15-min cities as a concept were invented in the Soviet Union by Stalin.

Nuuskis9,

So what? You can’t even prove any of those “conspiracy” claims wrong. All you can do is restrict other people’s freedom of speech.

If you’ve got different opinions, then debate with arguments. Currently all you’re capable of is very low-IQ monkey business. But naturally that’s your choice how you want to play. Don’t expect me to give you any credit for that.

ConsciousCode,

What worries me is who defines what the truth is? Reality itself became political decades ago, probably starting with the existence of global warming and now such basic foundational facts as who won an election. If the government can punish “falsehood”, what do you do if the GOP is in charge and they determine that “Biden won 2020” is such a falsehood?

mojo,

Big tech should be doing so, but they already get pressure from advertisers to moderate that away. As for the smaller ones, like the insane right-winger sites: let them spew nonsense, since it would set a dangerous precedent for everyone else if they weren’t allowed to shit out misinformation and bigotry.

That’s where the internet should very much be made free. There are too many cases of legitimate websites that can be shut down through these means. We need to correct misinformation with correct information.

alyaza,
@alyaza@beehaw.org

We need to correct misinformation with correct information.

my counterpoint to that would be: hasn’t COVID demonstrated how ineffective this actually is versus deplatforming incorrect information to begin with? people did a lot of that during the pandemic, and yet there are still tens, possibly hundreds of millions of people who were misled and now believe things contrary to even basic science. i think the anti-vaxx movement has, to a lesser extent, demonstrated similar limits to this approach compared with deplatforming.

mojo,

There are a lot more reasons that account for that. It became a very big social and political issue, which is a separate matter. I ask you then: how would we enforce this if non-megacorps were forced to comply? Should decentralized media like Lemmy be forced to comply as well? Would the US be forced to defederate from other countries’ instances if it couldn’t govern their speech? Would encrypted communication that cannot be governed get banned so we can carefully make sure misinformation doesn’t get spread? As soon as you think about how this would be enforced, you quickly realize how terrible an idea it is.

Of course it’s easy to say misinformation is bad; nobody is disagreeing with you. But actually coming up with practical solutions that maintain freedom and privacy is the hard part.
