
mal099, (edited)

Here's the study, for anyone who wants to read it. It's surprisingly short and open access.
A few things:
The participants took wasabi supplements once a day for 12 weeks, not just normal wasabi. Each pill contained 0.8 mg of the active compound (6-MSITC). Quick googling gave me the following for the level of that compound in actual wasabi:

Another study determined ~550-556 μg/g of 6-MSITC in wet weight of wasabi root [10]. The present study observed a concentration of 120-150 μg/g wet weight of 6-MSITC in stem and rhizome blend.
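
A rough back-of-the-envelope conversion (my own arithmetic, using the 0.8 mg supplement dose and the concentrations quoted above, so treat it as an estimate):

```python
# How many grams of wasabi would match the 0.8 mg/day 6-MSITC supplement dose?
# Concentrations are the ones quoted above; real products will vary.
dose_ug = 0.8 * 1000  # 0.8 mg of 6-MSITC, in micrograms

for label, conc_ug_per_g in [("root, ~550 ug/g", 550), ("stem/rhizome blend, ~135 ug/g", 135)]:
    grams_per_day = dose_ug / conc_ug_per_g
    print(f"{label}: ~{grams_per_day:.1f} g of wasabi per day")

# root, ~550 ug/g: ~1.5 g of wasabi per day
# stem/rhizome blend, ~135 ug/g: ~5.9 g of wasabi per day
```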

In other words, you could get roughly the same amount of 6-MSITC as in the supplements by eating a few grams of regular wasabi each day, assuming the processed stuff still has similar levels. The abstract gives a reasonable summary of the study, and of the fact that it largely agrees with what little previous research exists on the subject (just two studies in small journals):

Cognitive functions decline with age. Declined cognitive functions negatively affect daily behaviors. Previous studies showed the positive effect of spices and herbs on cognition. In this study, we investigated the positive impact of wasabi, which is a traditional Japanese spice, on cognitive functions. The main bioactive compound of wasabi is 6-MSITC (6 methylsulfinyl hexyl isothiocyanate), which has anti-oxidant and anti-inflammatory functions. Anti-oxidants and anti-inflammatories have an important role in cognitive health. Therefore, 6-MSITC is expected to have positive effects on cognitive function. Previous studies showed the beneficial effects on cognitive functions in middle-aged adults. However, it is unclear that 6-MSITC has a positive effect on cognitive functions in healthy older adults aged 60 years and over. Here, we investigated whether 12 weeks’ 6-MSITC intervention enhances cognitive performance in older adults using a double-blinded randomized controlled trial (RCT). Methods: Seventy-two older adults were randomly assigned to 6-MSITC or placebo groups. Participants were asked to take a supplement (6-MSITC or a placebo) for 12 weeks. We checked a wide range of cognitive performances (e.g., executive function, episodic memory, processing speed, working memory, and attention) at the pre- and post-intervention periods. Results: The 6-MSITC group showed a significant improvement in working and episodic memory performances compared to the placebo group. However, we did not find any significant improvements in other cognitive domains. Discussion: This study firstly demonstrates scientific evidence that 6-MSITC may enhance working memory and episodic memory in older adults. We discuss the potential mechanism for improving cognitive functions after 6-MSITC intake.

They tested the study participants once before and once after the 12 weeks of daily wasabi supplements. The participants were not tested for any long-term cognitive effects.

As someone else has pointed out here in the comments, the study does list a wasabi company as one of its sources of funding:

Funding
This study was founded[sic] by KINJIRUSHI Co., Ltd. and the Japan Society for the Promotion of Science (19H01760, 22H01088).
Conflicts of Interest
This study was supported by KINJIRUSHI Co., Ltd. The funding body had no role in the design of the study, collection, analyses, or interpretation of the data, writing of the manuscript, or the decision to publish the results.

Also, I don't want to poison the well too much, but I feel like I should mention that the editorial board of the journal resigned in 2018 because the publishers "pressured them to accept manuscripts of mediocre quality and importance". Doesn't mean it's all bad, but it's a very early study and more research should be done.

mal099, (edited)

The headline is pretty misleading. Reading the headline, I was imagining Nigerian Prince scams. But in the article, they state "Compared to older generations, younger generations have reported higher rates of victimization in phishing, identity theft, romance scams, and cyberbullying."
Teens get bullied more than the elderly? Say it ain't so!
While Gen Z is, according to their source, also the generation with the highest percentage of victims of phishing scams, it's actually millennials who fall for identity theft and romance scams the most.

The article also states that the "cost of falling for those scams may also be surging for younger people: Social Catfish’s 2023 report on online scams found that online scam victims under 20 years old lost an estimated $8.2 million in 2017. In 2022, they lost $210 million."
The source for Social Catfish's claim is data released in 2023 by the FBI's Internet Crime Complaint Center. According to that data, in 2022 there were 15,782 internet crime complaints from victims under 20, totaling $210.5 million in losses. In the same year, there were 88,262 complaints from victims over 60, totaling $3.1 billion in losses.
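
Quick arithmetic on those IC3 figures (my own calculation, just dividing the numbers above):

```python
# Average reported loss per complaint, from the 2022 IC3 figures cited above
under_20 = 210.5e6 / 15_782  # victims under 20
over_60 = 3.1e9 / 88_262     # victims over 60
print(f"under 20: ${under_20:,.0f} per complaint")  # ~ $13,338
print(f"over 60:  ${over_60:,.0f} per complaint")   # ~ $35,123
```

So even per complaint, not just in total, the older victims reported losing more.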

Every generation since the beginning of time has claimed that the following generation is rude, stupid, and has stopped doing things the "right way" like we used to in the good old days. It has always been bullshit, and it will always be bullshit. Stop stressing, the kids are alright.

mal099,

Got a point there, but it's what the sources say. One possibility might be that it's the teenagers who got scammed (or who even just filed the complaint?), but their parents' accounts that got emptied. This part of the report is unfortunately really lacking in detailed descriptions of the data.

Bought the Elegoo Mars 4 Max. Test print went great, but my next print failed. What went wrong? (kbin.social)

Hello everyone! As it says in the title, I very recently bought my first 3D printer, the Elegoo Mars 4 Max. Two days ago, I did my first test print, and it came out pretty well (maybe a bit too hard, but the details looked sharp). The only problem was that I somehow got an error message at the end ("error printing file data...

mal099,

It completed, but was stuck to the resin tank. No errors this time.

mal099,

No worries, I had no idea what TSMC even was and was confused by these settings having two different numbers, but now that you've mentioned it, I was able to Google it and I think I understand now.
Anyway, thanks for the suggestion and for helping me learn, I might try that!

mal099,

Thanks for the suggestion! Just trying to understand, why would reducing the exposure time for the bottom layer help with adhesion?

mal099,

Thank you for the comment!
You can see the parts in the pictures in the post. Yes, as far as I can tell, only a few layers were printed before I lost adhesion. The build stuck to the tank's floor, and I had to scrape it off. The machine kept going until "finished", but didn't actually print anything for more than 90% of the printing process.
I will try angling the parts next time; I didn't think it was necessary since the first test print, which wasn't angled, went so well.

mal099,

Damn, you're right. The study has not been peer-reviewed yet, according to the article, and in my opinion it really shows. For anyone who doesn't want to actually read the study:

They took the set of questions from a different study (which is fine). The original study had a set of 500 randomly chosen prime numbers, asked ChatGPT whether they were prime, and asked it to support its reasoning. They did this to see whether, in the cases where ChatGPT got the question wrong, it would try to support its wrong answer with more faulty reasoning - a dataset with only prime numbers is perfectly fine for this initial question.

The study in the article appears to be trying to answer two questions: is there significant drift in the answers ChatGPT gives, and is ChatGPT getting better or worse at answering questions? The dataset is perfectly fine for answering the first question, but completely inadequate for answering the second, since an AI that simply thinks all numbers are prime would be judged as having perfect accuracy! Good peer review would never let that kind of thing slide.
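
To illustrate (a toy example of my own, not from either paper): on a test set made up entirely of primes, a "model" that blindly calls every number prime gets a perfect score.

```python
def is_prime(n):
    """Simple trial-division primality check (the ground truth)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# A test set like the one described: primes only, so the correct answer is always "yes"
test_set = [n for n in range(1_000, 20_000) if is_prime(n)][:500]

def always_says_prime(n):
    """A 'model' that has learned nothing: it answers 'prime' for everything."""
    return True

accuracy = sum(always_says_prime(n) == is_prime(n) for n in test_set) / len(test_set)
print(accuracy)  # 1.0 -- perfect accuracy, despite knowing nothing about primality
```

A test set that mixed primes and composites would catch this immediately.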

mal099,

@rastilin is making some unproven assumptions here. But it is true that the "math question" dataset consists only of prime numbers, so if the first version thought every number was prime and the second thought no numbers were prime, we would see this exact behavior. Source:

For this dataset, we query the primality of 500 randomly chosen primes between 1,000 and 20,000; the correct answer is always Yes.

From Zhang et al. (2023), the paper they took the dataset from.

mal099,

True, GPT does not return a "yes" or "no" 100% of the time in either case, but that's not the point. The point is that it's impossible to say if GPT has actually gotten better or worse at predicting prime numbers with their test set. Since the test set is composed of only prime numbers, we do not know if GPT is more likely to call a number "prime" when it actually is a prime number than when it isn't. All we know is that it was very likely to answer "yes" to the question "is this number prime?" in March, and very likely to answer "no" in July. We do not know if the number makes a difference.

mal099,

I would steal this argument, but if it can be reposted here for free, then I don't think anybody really owns it. 🤔
