Yesterday I played around with the new GPTs from OpenAI and ended up creating three actually helpful chatbots for my current projects.
Say hello to “Linux Server Admin Assistant”, “Bricks Builder Assistant” and “Kirby CMS Advisor”. Currently freely available to anyone who needs it and has a ChatGPT subscription.
“This is not the future, but you can see it from here” (DXHR)
@koen_hufkens@academicchatter one of my theories has been that autistics and schizophrenics have been quite aware that such tools do this: their emergence is a means to block out those who have weaponized religious, economic, and political voices co-opted by those outside their generational peer group. it's time to start thinking at the level of philosophical war. the artilect war went online more than a decade ago.
The ancient Jewish mysticism of Kabbalah, which finds deep meaning in sequences of letters and numbers, resonates in generative #AI like #ChatGPT, robots and DNA coding.
“Like the golem, robots, androids and even AI are powered with recombinations of elemental units. Instead of Hebrew letters, the units are ones and zeros. In both instances, the specific permutation makes all the difference.”
@TheConversationUS @folklore
A while back, I used "tree of life" as a prompt for the craiyon AI image generator.
It came up with some boring and predictable stuff, but also this:
@paelse @TheConversationUS @folklore which was almost certainly regurgitated from the bazillions of images — including Kabbalistic ones — that AI image generators have taken from public domain sources and stolen from artists and copyrighted works from all over the Internet.
I'm always a bit skeptical when tech company CEOs give presentations on
how their products are necessary in the mental health field.
That said, this article has a few good points:
"Umar Nizamani, CEO, International, at NiceDay, emphasised that AI will
inevitably become an essential tool in mental health care: 'I am very
confident AI will not replace therapists – but therapists using AI will
replace therapists not using AI.'"
I am beginning to think this too -- for better or worse. I took a VERY
fast 60-second look at NiceDay and it appears to be another
all-encompassing EHR, but with a strong emphasis on data: lots of tools,
questionnaires, and attractive graphs for therapists to monitor
symptoms. (I need to take a longer look later.) A data-driven approach
could be very good, if it does not crowd out the human touch.
"Nizamani said there had been suicides caused by AI, citing the case of
a person in Belgium who died by suicide after downloading an anxiety
app. The individual was anxious about climate change. The app suggested
'if you did not exist' it would help the planet, said Nizamani."
YIKES... So, yes, his point that care in implementation is needed is
critical. I worry about the speed of the gold rush.
"He [Nizamani] called on the industry to come together to ensure
that mental health systems using AI and data are 'explainable',
'transparent', and 'accountable'."
This has been my biggest focus so far, coming from an Internet security
background when I was younger.
Arden Tomison, CEO and founder of Thalamos, spoke on how his company
automates and streamlines complex bureaucracy and paperwork, both to
speed patients' access to help and to extract useful data from the
forms for clinicians to use. More at: https://www.thalamos.co.uk/
"Dr Stefano Goria, co-founder and CTO at Thymia, gave an example of
'frontier AI': 'mental health biomarkers' which are 'driving towards
precision medicine' in mental health. Goria said thymia's biomarkers
(e.g. how someone sounds, or how they appear in a video) could help
clinicians be aware of symptoms and diagnose conditions that are often
missed."
Now THIS is how I'd like to receive my AI augmentation. Give me
improved diagnostic tools rather than replacing me with chatbots or
over-crowding the therapy process with too much automated data
collection (some is good). I just want this to remain in the hands of
the solo practitioner rather than becoming a performance monitor
imposed on us by insurance companies. I want to see empowered clinicians.
@admin @psychotherapist @psychology @socialpsych @infosec
We need cross-discipline digital literacy. AI advances that improve one discipline are often exploited in others. When you celebrate diagnostic mental health biomarkers (how someone sounds, or how they appear in a video), remember that this information is coveted by data brokers who would love to sell it. MH video is unacceptable until security improves. Even then, it will be risky.
How do these #DisruptiveTechnologies affect #ediasporas?
Do they hinder or exacerbate the impacts of networks predominantly driven by #data-centric activities?
Hey, hacker fam. Quick update on what's going to be a big week.
Tomorrow I'm flying out to Bellevue and Wednesday I'm speaking at #BlueHat about the work @SophosXOps has done helping #Microsoft protect all Windows users from a very devious attack.
After I return, I'm in full-swing campaign mode running for the #BVSD #SchoolBoard. I've been door-knocking and doing meet-and-greets for days. Yesterday I spent hours handing out water to marathon runners here in #boulder.
Next week, though, I'll be participating in a candidate forum hosted by BVSD, and you will be able to watch it live from anywhere because it will be broadcast via #livestream on BVSD's YouTube channel (https://www.youtube.com/@bouldervalleyschooldistric5781/streams). October 18 from 6pm-7:30pm MDT (UTC-6).
You can read up now on the forum and **you can even submit questions**.
'#eDiasporas are networks driven by human agency, referring to communities of individuals who maintain connections with their home countries and diasporic fellows through digital tools.'
How do they affect e-#diasporas (networks of human-driven agency), either hindering or exacerbating the impacts of #HyperconnectedDiasporas (networks of data-driven activities)?
AI is a problem for editors and authors – and it's serious.
There is a dark side to this technology, with major long-term consequences for authorship and editorial work that we're only just beginning to discover – not least copyright theft.
IIRC the difference between Bing's integrated AI and others is that it cites actual sources from the search engine, so it's less prone to inaccuracy/hallucination?
Is there currently any facility within search engines to opt out or deactivate AI, or is it built-in as standard, requiring consent to use it?
And does it train itself from searches? Happy for more info!
I know some people have issues with things written with #ChatGPT, but it's helped me a lot. I am #dyslexic, have difficulty writing (especially longer content), and have difficulty putting complex ideas in an easy-to-understand way that flows. ChatGPT has helped me write a lot faster and better.
@AlicornSkyler like the internet itself, it isn't going away and will get better, but not all better, with time. I think future generations, at least for a while, will be learning how to write effective prompts.
It wasn't an easy decision, but I'm officially taking The Authenticity Initiative pledge. In today's #blog, I've explained why I'm vowing to not use #ChatGPT or any tool like it within my personal work. And I've explained why I won't edit it either. #writingcommunity