billwashere,

One thing Apple is good at is waiting until the market is ripe and then releasing a better product. Like mp3 players, phones, tablets, etc.

68x,

Except Siri keeps getting worse every year, in terms of actual usefulness, connectivity issues, and plain old voice recognition.

billwashere,

And it seems it's not just Siri. Every Alexa device in my house has gotten more deaf and more stupid. But yes, Siri has declined recently, I agree.

fer0n,

All my Echos went out the window as soon as they started with product promotions.

billwashere,

I haven’t had that happen yet. But yeah I would too.

bamboo,

Add Google Assistant to the list. I remember it being great back in 2018, and now it struggles with basic things like “set a timer”.

fer0n,

In my experience it’s not really getting worse, it’s just not getting substantially better either.

doctorcherry, (edited)

This is going to be pretty interesting. Despite seeming far behind, Apple is very well positioned to benefit from the AI developments: it has an opportunity for deep integration of AI features into its operating systems, as well as offline compute through specialised silicon design, that no other company really has.

redballooon,

Better late than never.

But even more interesting than when is whether this will use local AI models, or whether it once again becomes a data-protection trust sink.

captainastronaut,
@captainastronaut@seattlelunarsociety.org

It will most certainly use local models (locally tuned from a common base model). That’s kind of their whole differentiator.

redballooon,

We’ll see. To date there’s no locally runnable generative LLM that comes close to the gold standard, GPT-4. Even coming close to GPT-3.5-turbo counts as impressive.

kinttach,

We only recently got on-device Siri and it still isn’t always on-device if I understand correctly. So the same level of privacy that applies to in-the-cloud Siri could apply here.

BudgetBandit,

My on-device Siri that lives in my Apple Watch Series 4 is definitely processing everything locally now. She got dumber than I did.

abhibeckert, (edited)

Apple has sold computers with local voice input and command processing for more than 20 years, and iPhones have pretty much always had that feature (it was called “Voice Control” before Siri existed, and it was 100% local).

I’d argue that, for Apple, what they’ve started doing recently is processing commands in the cloud. The list of commands that are processed locally vs in the cloud has changed over time… and they did move most of it to the cloud several years ago, when they bought a cloud-based smart assistant startup and used it as the basis for a new and improved assistant on iPhone. But every year they reduce that dependence, going back to how it used to be with local processing. These days, even when a command is processed in the cloud, it’s often only part of a multi-step process where the majority of the work was done on device. And many everyday commands are handled entirely on device.

For example, if you ask it what the weather is, it’s an entirely on-device command except for actually fetching the latest weather report… and you can ask it what the temperature is “inside”, which will check a sensor in your house and is entirely offline (if your home has a temperature sensor; there’s one built into Apple smart speakers and also a small but growing number of third-party smart home products).

abhibeckert, (edited)

To date there’s no local runnable generative LLM model that comes close to the gold standard GPT-4.

True, but iPhones do run a local language model now as part of their keyboard. It’s definitely not GPT-4 quality, but that’s to be expected given it runs on a tiny battery and executes every single time you tap the keyboard. Apple has shown that useful language models can run locally on the slowest hardware they sell. I don’t know of anyone else who’s done that.

Even coming close to GPT-3.5-turbo counts as impressive.

Llama 2 is roughly GPT-3.5-Turbo quality, and it runs well on modern Macs, which have a lot of very fast memory. Even their smallest fanless laptop can be configured with 24GB of memory, and it’s fast memory too (100 GB/s). That’s not quite enough to run the largest Llama 2 model, but it’s close. Their more expensive laptops have more memory, and it’s faster; they can run the 70-billion-parameter Llama 2 without breaking a sweat.
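As a back-of-envelope check on these memory figures, here is a sketch of how much memory the raw Llama 2 weights occupy at different precisions. The sizes and quantization widths are illustrative assumptions, not benchmarks, and real runtimes need extra memory for the KV cache and activations on top of this:

```python
# Rough memory footprint of raw model weights (illustrative only).
# Real inference needs additional memory for the KV cache and activations.

def weight_memory_gb(params_billion: float, bytes_per_weight: float) -> float:
    """Raw weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * bytes_per_weight / 1e9

# Llama 2 model sizes in billions of parameters
for params in (7, 13, 70):
    fp16 = weight_memory_gb(params, 2.0)  # 16-bit weights
    q4 = weight_memory_gb(params, 0.5)    # ~4-bit quantized weights
    print(f"Llama2-{params}B: fp16 ~ {fp16:.1f} GB, 4-bit ~ {q4:.1f} GB")
```

This is why quantization matters so much for laptops: the 70B model needs around 140 GB at 16-bit precision, but roughly a quarter of that at 4-bit.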

And on desktops, Apple sells Macs with 192GB of memory that’s way faster still (800 GB/s). That’s slightly more memory (and for a lot less money) than the most expensive data-center GPU NVIDIA sells. The NVIDIA unit is faster at compute operations, but LLMs are often limited by available memory, not compute speed.
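The “limited by memory, not compute” point can be made concrete with a common heuristic: during autoregressive decoding, every weight is streamed through memory once per generated token, so memory bandwidth divided by model size gives a ceiling on tokens per second. The figures below are assumptions for illustration, not measured throughput:

```python
# Bandwidth-bound ceiling on LLM decode speed (a heuristic, not a benchmark).
# Each generated token reads all weights once, so:
#   tokens/sec <= memory bandwidth / model size in memory

def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on decode speed when memory bandwidth is the bottleneck."""
    return bandwidth_gb_s / model_size_gb

# e.g. a ~35 GB 4-bit 70B model on 800 GB/s memory (assumed figures)
ceiling = max_tokens_per_sec(800, 35)
print(f"~{ceiling:.0f} tokens/sec ceiling; real throughput is lower")
```

The same arithmetic explains why a faster GPU with less memory can still lose to slower hardware that holds the whole model: once the weights don’t fit, you pay for swapping instead of compute.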

Marsupial,
@Marsupial@quokk.au

You can even run Llama 2 locally on Android phones.

garretble,
@garretble@lemmy.world

Is this why Jon Stewart got canceled?

hanni,

This and China

Eggyhead,

I really want to know what he and his guests would have had to say.
