Data leakage/exfiltration is one, then there's the significant environmental footprint, such as through water usage: https://arxiv.org/pdf/2304.03271.pdf
LLMs also pose a cyber security risk, since one can "poison" the model during fine-tuning, esp. if you use user input for training: https://softwarecrisis.dev/letters/the-poisoning-of-chatgpt/ Internet-enabled LLMs have additional vulnerabilities.