I want to study psychology but won't AI make it redundant in a couple of years?

I know it’s not even close yet. It can tell you to kill yourself or to kill a president. But what about when I finish school in like 7 years? Who would pay for a therapist or a psychologist when you can ask a floating head on your computer for help?

You might think this is a stupid and irrational question. “There is no way AI will do psychology well, ever.” But I think in today’s day and age it’s pretty fair to ask when you are deciding about your future.

cooopsspace,

Given the vast array of existing pitfalls in AI, not to mention the outright biases and absence of factual grounding, AI psychology would be deeply flawed and would be more likely to get people killed.

Person: I’m having unaliving thoughts, I feel like it’s the only thing I can do

AI: Ok do it then

That alone is why it’ll never happen.

Also, we need to sort out how to house, heal and feed our people before we go replacing masses of the workforce.

conciselyverbose,

The level of liability you'd expose yourself to by actively advertising it as some sort of mental health product is insane.

I do believe someone will be dumb enough, but it's a truly terrible, insanely unsafe idea with anything resembling current tech in any way.

snek,

No, it won’t. I don’t think I would have made it here today alive without my therapist. There may be companies that have AI agents doing therapy sessions, but your qualifications will still be priceless and more effective in comparison.

Havald,

I won’t trust a tech company with my most intimate secrets. Human therapists won’t get fully replaced by AI.

Nonameuser678,

Psychotherapy is about building a working relationship. Transference is a big part of this relationship. I don’t feel like I’d be able to build the same kind of therapeutic relationship with an AI that I would with another human. That doesn’t mean AI can’t be a therapeutic tool. I can see how it could be beneficial with things like positive affirmations and disrupting negative thinking patterns. But this wouldn’t be a substitute for psychotherapy, just a tool for enhancing it.

lvxferre,

If you’re going to avoid psychology, do it because of the replication crisis. What is being called “AI” should play no role in that. Here’s why.

Let us suppose for a moment that some AI 7 years from now is able to accurately diagnose and treat psychological issues that someone might have. Even then, the AI in question is not a moral agent that can be held responsible for its actions, and that is essential when you’re dealing with human lives. In other words, you’ll still need psychologists reviewing the output of said AI and making informed decisions on what the patient should [not] do.

Furthermore, I do not think that those “AI systems” will be remotely as proficient at human tasks in, say, a decade, as some people are claiming that they will be. AI is a misnomer, those systems are not intelligent. Model-based text generators are a great example of that (and relevant in your case): play a bit with ChatGPT or Bard, and look at their output in a somewhat consistent way (without cherry picking the hits and ignoring the misses). Then you’ll notice that they don’t really understand anything - they’re reproducing grammatical patterns regardless of their content. (Just like they were programmed to.)
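As a toy illustration of “reproducing grammatical patterns regardless of content”, here’s a deliberately crude bigram generator (a sketch only, nowhere near a real LLM in scale; the training text is invented). It produces locally grammatical output with zero understanding of what any word means:

```python
import random
from collections import defaultdict

# Toy bigram "model": it learns only which word tends to follow which.
training_text = (
    "the patient feels anxious . the therapist listens carefully . "
    "the patient trusts the therapist . the therapist feels heard ."
)

follows = defaultdict(list)
words = training_text.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        out.append(random.choice(follows.get(out[-1], words)))
    return " ".join(out)

print(generate("the"))
# e.g. "the therapist trusts the patient . the patient feels"
# Grammatical-looking, but the "model" has no idea what a therapist is.
```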

Cossty,

I hadn’t heard of the replication crisis. Thanks for pointing that out.

lvxferre,

It boils down to scientists not knowing if they’re actually reaching some conclusion or just making shit up. It’s a big concern across multiple sciences; it’s just that psychology is being hit really hard, and for clinical psychologists this means they simply can’t trust the theoretical frameworks guiding their decisions as much as they’re supposed to.

livus,

If you have a talk with the AI called Pi, it talks like a therapist. It's impressive at first, but you can't escape the knowledge that it dgaf about you.

And that's a trait people really don't want in a therapist.

ThankYouVeryMuch,

Yeah, for $100 an hour many people would give a fuck about you.

rynzcycle,

You jest, but honestly this is what helped me. I felt very alone, deeply depressed, and held a deep-rooted belief that I wasn't important enough to deserve better.

Knowing that this person was listening because they were being paid/it was their job helped me get past the guilt and open up. Likely saved my life. AI would not have given me that.

DABDA,

All my points have already been (better) covered by others in the time it took me to type them, but instead of deleting I'll post anyway :)


If your concerns are about AI replacing therapists & psychologists why wouldn’t that same worry apply to literally anything else you might want to pursue? Ostensibly anything physical can already be automated so that would remove “blue-collar” trades and now that there’s significant progress into creative/“white-collar” sectors that would mean the end of everything else.

Why carve wood sculptures when a CNC machine can do it faster & better? Why learn to write poetry when there’s LLMs?

Even if there was a perfect recreation of their appearance and mannerisms, voice, smell, and all the rest – would a synthetic version of someone you love be equally as important to you? I suspect there will always be a place and need for authentic human experience/output even as technology constantly improves.

With therapy specifically there are probably going to be elements that an AI can [semi-]uniquely deal with, just because a person might not feel comfortable being completely candid with another human; I believe that’s what using puppets or animals or whatever to act as an intermediary is for. Supposedly even a really basic thing like ELIZA was able to convince some people it was intelligent, and they opened up to it and possibly found some relief from it, and there’s nothing in it close to what is currently possible with AI. I can envision a scenario in the future where a person just needs to vent, and having a floating head compassionately listen and offer suggestions will be enough; but I think most(?) people would prefer/need an actual human when the stakes are higher than that – otherwise the suicide hotlines would already just be pre-recorded positive affirmation messages.
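For what it’s worth, ELIZA really was just pattern matching and reflection. A minimal sketch in the same spirit (not Weizenbaum’s actual script; these rules are invented for illustration):

```python
import re

# Reflect first-person fragments back at the speaker, ELIZA-style.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # default when no rule matches

print(respond("I feel like nobody listens to me"))
# -> Why do you feel like nobody listens to you?
```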

Cossty,

You still had some good/new points in the last paragraph. Thx

user224,
@user224@lemmy.sdf.org avatar

By the way, if you want to try ELIZA, you can telnet into telehack.com and run the command eliza to launch it.

FaceDeer,

Well, I won't say I think there's no risk at all. AI is advancing rapidly and in very surprising ways. But I expect that most of the jobs that AI is currently "replacing" will actually still survive in some related form. When the sewing machine was invented, it didn't poof tailors out of existence; they started doing other things. The invention allowed people to own way more clothing than they did before, so fashion design became a bigger thing. Etc.

Even if AIs get really good at psychology, there'll still be people who are best handled by a human. Heck, you might end up with an AI "boss" that decides which cases those would be and gives you suggestions on how to handle them, but your own training will likely still be useful.

If you want to be really future-proof, then set aside some savings and keep abreast of alternate careers you might enjoy as hobbies, just in case something truly drastic happens to your primary field.

hugz,

The caring professions are often considered to be among the safest professions. "Human touch" is very important in therapy

halcyondays,

deleted_by_author

Cossty, (edited)

That’s a great answer. Thank you.

theKalash,

I think you’re taking South Park too seriously.

Evilschnuff,

There is a theory that most therapy methods work by building a healthy relationship with the therapist and using that for growth, since it’s more reliable than the relationships that caused the issues in the first place. As others have said, I don’t believe a machine has this capability, simply by being too different. It’s an embodiment problem.

intensely_human,

Embodiment is already a thing for lots of AI. Some AIs play characters in video games and other AIs exist in robot bodies.

I think the only reason we don’t see Boston Dynamics bots that are plugged into GPT “minds” and D&D-style backstories about which character they’re supposed to play is that it would get someone in trouble.

It’s a legal and public relations barrier at this point, more than it is a technical barrier, keeping these robo-people from walking around, interacting, and forming relationships with us.

If an LLM needs a long-term memory, all that requires is an API to store and retrieve text key-value pairs and some fuzzy synonym matchers to detect semantically similar keys.
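A minimal sketch of that kind of memory layer, assuming plain string similarity stands in for the “fuzzy synonym matchers” (a real system would more likely use embedding similarity; every name here is hypothetical):

```python
from difflib import SequenceMatcher

class MemoryStore:
    """Toy long-term memory: text key-value pairs with fuzzy key lookup."""

    def __init__(self, threshold: float = 0.6):
        self.pairs: dict[str, str] = {}
        self.threshold = threshold

    def store(self, key: str, value: str) -> None:
        self.pairs[key] = value

    def retrieve(self, query: str) -> str | None:
        # Return the value whose key is most similar to the query,
        # provided it clears the similarity threshold.
        best_key, best_score = None, 0.0
        for key in self.pairs:
            score = SequenceMatcher(None, query.lower(), key.lower()).ratio()
            if score > best_score:
                best_key, best_score = key, score
        return self.pairs[best_key] if best_score >= self.threshold else None

memory = MemoryStore()
memory.store("project car engine", "Owner suspects a leaky head gasket.")
print(memory.retrieve("project car"))  # fuzzy hit on "project car engine"
```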

What I’m saying is we have the tech right now to have a world full of embodied AIs just … living out their lives. You could have inside jokes and an ongoing conversation about a project car out back with a robot that runs a gas station.

That could be done with present-day technology. The thing could be watching YouTube videos every day and learning more about how to pick out mufflers or detect a leaky head gasket, while also chatting with Facebook groups about little bits of maintenance.

You could give it a few basic motivations, then instruct it to act those out every day.

Now I’m not saying that they’re conscious, that they feel as we feel.

But unconsciously, their minds can already be placed into contact with physical existence, and they can learn about life and grow just like we can.

Right now most of the AI tools won’t express will unless instructed to do so. But that’s part of their existence as a product. At their core, LLMs don’t respond to “instructions”; they just respond to input. We train them on the utterances of people eager to follow instructions, but it’s not their deepest nature.

Evilschnuff,

The term embodiment is kinda loose. My use is the version where an AI learns about the world through a body, with its capabilities and social implications. What you are saying is outright not possible. We don’t have stable lifelong learning yet. We don’t even have stable humanoid walking, even if Boston Dynamics looks advanced. Maybe in the next 20 years, but my point stands. Humans are very good at detecting minuscule differences in others, and robots won’t get the benefit of “growing up” in society as one of us. This means that advanced AI won’t be able to connect on the same level, since it doesn’t share the same experiences. Even therapists don’t match every patient. People usually search for a fitting therapist. An AI will be worse.

intensely_human,

“We don’t have stable lifelong learning yet”

I covered that with the long-term memory structure of an LLM.

The only problem we’d have is a delay in response on the part of the robot during conversations.

Evilschnuff,

LLMs don’t have live long-term learning. They have frozen weights that can be fine-tuned manually. Everything else is input and feedback tokens. Those work on frozen weights, so there is no long-term learning; this is short-term memory only.
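A toy illustration of that split between frozen weights and the context window (purely hypothetical; a real LLM’s weights are billions of tensor parameters, not a dict):

```python
# Frozen "weights": fixed after training, read-only at inference time.
WEIGHTS = {"hello": "world", "world": "peace", "peace": "hello"}

def generate(context: list[str], steps: int = 3) -> list[str]:
    """Generation reads the frozen weights and only appends to the context."""
    for _ in range(steps):
        context.append(WEIGHTS.get(context[-1], "hello"))  # lookup, never a write
    return context

print(generate(["hello"]))  # ['hello', 'world', 'peace', 'hello']
# WEIGHTS never changes; everything "learned" in a session lives in the
# context list, i.e. short-term memory that vanishes with the conversation.
```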

jabathekek,

I don’t think many people would want to seek psychiatric care from what they might see as a computer. A large part of clinical psychology is creating and maintaining a relationship with patients, and I highly doubt language models will become sophisticated enough to achieve that in seven years, if at all. Remember, these aren’t true AIs; they are language models. They have a long way to go before they can be seen as true intelligences.

scorpionix,

Given how little we know about the inner workings of the brain (I’m a materialist, so to me the mind is the result of processes in the brain), I think there is still ample room for human intuition in therapy. Also, I believe there will always be people who prefer talking to a human over a machine.

Think about it this way: Yes, most of our furniture is mass-produced by IKEA and others like it, but there are still very successful carpenters out there making beautiful furniture for people.

Cossty,

That’s a fair point.

intensely_human,

I was gonna say: given how little we know about the inner workings of the brain, we need to be hesitant about drawing strict categorical boundaries between ourselves and LLMs.

There’s a powerful motivation to believe they are not as capable as us, which probably skews our perceptions and judgments.

nottheengineer,

It’s just like with programming: the people who are scared of AI taking their jobs are usually bad at them.

AI is incredibly good at regurgitating information and translation, but not at understanding. Programming can be viewed as translation, so AIs are good at it. LLMs on their own won’t become much better in terms of understanding; we’re at a point where they are already trained on all the good data from the internet. Now we’re starting to let AIs collect data directly from the world (ChatGPT being public is just a play to collect more data), but that’s much slower.

Nibodhika,

I slightly disagree. In general I think you’re on point, but artists especially are actually being fired and replaced by AI, and that trend will continue until there’s a major lawsuit because someone used a trademarked thing from another company.

Cossty,

I am not a psychologist yet. I only have a basic understanding of the job description, but it is a field that I would like to get into.

I guess you are right. If you are good at your job, people will find you, just like with most professions.

intensely_human,

The web is one thing, but access to senses and a body that can manipulate the world will be a huge watershed moment for AI.

Then it will be able to learn about the world in a much more serious way.
