I keep trying them because they really feel like they ought to be useful, but I'm reminded of many graphical HTML tools that end up requiring so many tweaks that it can be easier to hand-code.
The only thing that might be missing from this is the issue of private or proprietary information possibly flowing back into the LLMs. That concerns me with the cloud-based ones.
Data leakage/exfiltration is one concern; then there's the significant environmental footprint, such as through water usage: https://arxiv.org/pdf/2304.03271.pdf
LLMs also pose a cybersecurity risk, since one can "poison" the model during fine-tuning, especially if user input is used for training: https://softwarecrisis.dev/letters/the-poisoning-of-chatgpt/ Internet-enabled LLMs have additional vulnerabilities.