On the #enshittification of #academic #publishing, where scientists burn the candle at both ends, paying both to read and to publish their work, in what is the ultimate grift.
Sometimes I'm called upon to teach a writing-intensive capstone class where the main assignment has been a review paper. Given #GenerativeAI, I've been wondering what to do differently. Helping students improve their writing is totally different now... that's all I know.
I found this article:
The role of ChatGPT in scientific communication: writing better scientific review articles
@dogzilla @eilonwy @rspfau @academicchatter to be fair, I don't dislike that - teaching is not just about passing on knowledge, it's also about preparing students for the next steps in life.
Whether we like it or not - #generativeAI will be with us going forward, so we might as well teach how to extract its benefits - e.g. examples of how to build and use #ChatGPT bots, etc... #academicchatter #ai
Yesterday I played around with the new GPTs from OpenAI and ended up creating three actually helpful chatbots for my current projects.
Say hello to “Linux Server Admin Assistant”, “Bricks Builder Assistant” and “Kirby CMS Advisor”. Currently freely available to anyone who needs it and has a ChatGPT subscription.
“This is not the future, but you can see it from here” (DXHR)
The ancient Jewish mysticism of Kabbalah, which finds deep meaning in sequences of letters and numbers, resonates in generative #AI like #ChatGPT, robots and DNA coding.
“Like the golem, robots, androids and even AI are powered with recombinations of elemental units. Instead of Hebrew letters, the units are ones and zeros. In both instances, the specific permutation makes all the difference.”
I'm always a bit skeptical of presentations from tech company CEOs on
how their product areas are necessary in the mental health field.
That said, this article has a few good points:
/"Umar Nizamani, CEO, International, at NiceDay, emphasised that AI will
inevitably become an essential tool in mental health care: 'I am very
confident AI will not replace therapists – but therapists using AI will
replace therapists not using AI.'"//
/
I am beginning to think this too -- for better or worse. I took a VERY fast 60-second look at NiceDay and it appears to be another all-encompassing EHR, but with a strong emphasis on data. Lots of tools and questionnaires and attractive graphs for therapists to monitor symptoms. (I need to take a longer look later.) So the data-driven approach could be very good, if it does not crowd out the human touch.
/"Nizamani said there had been suicides caused by AI, citing the case of
a person in Belgium who died by suicide after downloading an anxiety
app. The individual was anxious about climate change. The app suggested
'if you did not exist' it would help the planet, said Nizamani."//
/
YIKES... So, yes, his point that care in implementation is needed is critical. I worry about the speed of the gold rush.
/"He [//Nizamni] //called on the industry to come together to ensure
that mental health systems using AI and data are 'explainable’,
'transparent', and 'accountable'." //
/
This has been my biggest focus so far, coming from an Internet security
background when I was younger.
/"Arden Tomison, CEO and founder of Thalamos"/ spoke on how his company
automates and streamlines complex bureaucracy and paperwork to both
speed patients getting help and extract the useful data from the forms
for clinicians to use. More at: https://www.thalamos.co.uk/
/"Dr Stefano Goria, co-founder and CTO at Thymia, gave an example of
'frontier AI': 'mental health biomarkers' which are 'driving towards
precision medicine' in mental health. Goria said thymia’s biomarkers
(e.g. how someone sounds, or how they appear in a video) could help
clinicians be aware of symptoms and diagnose conditions that are often
missed."//
/
Now THIS is how I'd like to receive my AI augmentation. Give me improved diagnostic tools rather than replacing me with chatbots or over-crowding the therapy process with too much automated data collection (some is good). I just want this to remain in the hands of the solo practitioner rather than becoming a performance monitor imposed on us by insurance companies. I want to see empowered clinicians.
Apparently some people have to be told that using AI services in the
cloud to compose medical letters is a violation of HIPAA.
Now what I would like to see with all the AI-assisted EHR systems
currently being developed (EPIC, Oracle, Amazon, etc.) is not only BAA
contracts in place with the tech companies, but also:
a) Separate AI systems that don't share data with the main AI system.
(So the Hospital AI database would be separate from the general AI
database), or
b) Much better: Separate AI software and databases that are held
internal to the Hospital's own computer servers with restricted Internet
access to the outside.
This is wholly feasible, yet somehow I have little confidence it will actually happen.
For any private-practice people out there playing with AI on a small office scale: I'm not a lawyer, but what I would recommend is a) AI systems that can be run on a desktop (not in the cloud), and b) cutting them off from the Internet, or severely restricting where those desktops can call out to, since you likely don't know what's in the code of the AI you downloaded!
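For point (b), a minimal sketch of what "severe restrictions" could look like on a Linux desktop using ufw (Uncomplicated Firewall). The IP address and port for the office EHR server are hypothetical placeholders; the local model server URL is an assumption about a typical local-only AI setup, not a specific product.

```shell
# Assumption: Linux workstation with ufw installed; run as root/sudo.
# Default-deny ALL traffic, then allow only what the local AI stack needs.

sudo ufw default deny outgoing        # block every outbound connection by default
sudo ufw default deny incoming        # block every inbound connection by default

sudo ufw allow out on lo              # loopback only: a local model server
                                      # (e.g. listening on http://127.0.0.1:8080)

# Hypothetical office EHR host on the LAN -- replace with your own address:
sudo ufw allow out to 192.168.1.50 port 443 proto tcp

sudo ufw enable
sudo ufw status verbose               # verify the rules before trusting them
```

With a default-deny outbound policy, even a downloaded AI tool that tries to "phone home" with patient data is blocked at the firewall rather than relying on the tool's own settings.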
*Iowa health system warns against using ChatGPT to draft patient letters*
<https://www.beckershospitalreview.com/cybersecurity/iowa-health-system-warns-against-using-chatgpt-to-draft-patient-letters.html>
Iowa City-based University of Iowa Health Care is warning employees against the use of ChatGPT for its potential to violate HIPAA...
--
#AI #CollaborativeHumanAISystems #HumanAwareAI #chatbotgpt #chatgpt
#artificialintelligence #psychology #counseling #socialwork
#psychotherapy #EHR #medicalnotes #progressnotes
@[email protected] @[email protected]
@[email protected] @[email protected] @[email protected]
@[email protected] #mentalhealth #technology #psychiatry #healthcare
#patientportal
#HIPAA #dataprotection #infosec @[email protected] #doctors #hospitals
#BAA #businessassociateagreement
.
.
NYU Information for Practice puts out 400-500 good-quality health-related research posts per week, but it's too much for many people, so that bot is limited to just subscribers. You can read it or subscribe at @[email protected]
.
Since 1991 The National Psychologist has focused on keeping practicing psychologists current with news, information and items of interest. Check them out for more free articles, resources, and subscription information: <https://www.nationalpsychologist.com>
.
EMAIL DAILY DIGEST OF RSS FEEDS -- SUBSCRIBE:
<http://subscribe-article-digests.clinicians-exchange.org>
.
READ ONLINE: <http://read-the-rss-mega-archive.clinicians-exchange.org>
It's primitive... but it works... mostly...
How do these #DisruptiveTechnologies affect #ediasporas?
Do they hinder or exacerbate the impacts of networks predominantly driven by #data-centric activities?
Hey, hacker fam. Quick update on what's going to be a big week.
Tomorrow I'm flying out to Bellevue and Wednesday I'm speaking at #BlueHat about the work @SophosXOps has done helping #Microsoft protect all Windows users from a very devious attack.
After I return, I'm in full-swing campaign mode running for the #BVSD #SchoolBoard. I've been doing door-knocking and meet-and-greets for days. Yesterday I spent hours giving out water to marathon runners here in #boulder
Next week though - I'll be participating in a candidate forum hosted by BVSD and you will be able to watch it live from anywhere because it will be broadcast by #livestream on BVSD's Youtube channel (https://www.youtube.com/@bouldervalleyschooldistric5781/streams). October 18 from 6pm-7:30pm MDT (UTC -6)
You can read up on the forum now, and **you can even submit questions.**
'#eDiasporas are networks driven by human agency, referring to communities of individuals who maintain connections with their home countries and diasporic fellows through digital tools.'
'In contrast, Hyperconnected Diasporas (HD) are networks of data-driven activities that heavily rely on social media extractivist data-opolies or Big Tech platforms, potentially posing threats to institutional trust and the data privacy of diasporic citizens.'
How do they affect e-#diasporas (networks of human-driven agency), either hindering or exacerbating the impacts of #HyperconnectedDiasporas (networks of data-driven activities)?
AI is a problem for editors and authors – and it's serious.
There is a dark side to this technology, with major long-term consequences for authorship and editorial work that we're only just beginning to discover – not least copyright theft.
As an editor, I'm supporting authors against AI scraping of their work without consent.
I'm publishing a no-holds-barred blog post next week on why AI is a serious problem for editors, and why authors have every right to be concerned about AI use in publishing.
Do look out for it – will post a link here on Monday 🔗