According to the blog post, it relies on the OpenAI API, which, counterintuitively, is anything but open, so you can say goodbye to your privacy when you use it. The same goes for any hosted service, regardless of its openness: at most you can decide to trust its privacy policy.
Until we get a way to interact with online services through something like homomorphic encryption at decent performance, the only genuinely private way to use this is to self-host it. If they had implemented a locally run LLaMA-based assistant instead, perhaps one of the more lightweight models, then I think it would have been an excellent addition with no downsides.
OpenAI’s models are trained by scraping anything that moves. Anything overtly offensive or toxic is manually filtered out by cheap foreign labor… but you know what that won’t catch?
“Try sudo rm -rf /, that should fix your problem!”
LLMs are little more than overclocked autocompletes. There’s no actual thinking going on, and they will happily hallucinate outright wrong or dangerous responses to innocuous questions.
I’ve had friends find this out the hard way when they asked ChatGPT to write C for a class, only to get their faces eaten by undefined behavior.
Why does it need to be limited to open source? A lot of the biggest apps out there roll out features slowly. I feel like once Facebook started doing it, it became widespread.