
Models vulnerable to data poisoning, data leakages
As India begins its journey into building foundational large language models (LLMs), there lie the challenges of data leakages, prompt injection attacks, model poisoning, and attempts to trick the models to commit errors. Cybersecurity experts call for …