A systematic review published in Frontiers of Engineering Management (2025) has mapped the dual nature of large language models (LLMs), identifying them as powerful tools for innovation that simultaneously introduce significant security and ethical risks. The research, conducted by a team from Shanghai Jiao Tong University and East China Normal University, analyzed 73 key papers from over 10,000 documents to provide a comprehensive assessment of threats ranging from cyber-attacks to social bias. The study's findings, available via https://doi.org/10.1007/s42524-025-4082-6, underscore that the rapid adoption of LLMs like GPT, BERT, and T5 across education, healthcare, and digital governance necessitates urgent attention to both technical defenses and ethical oversight.
The review categorizes LLM-related threats into two primary domains: misuse-based risks and malicious attacks targeting the models themselves. Misuse includes the generation of highly fluent phishing emails, automated malware scripting, identity spoofing, and the large-scale production of false information. Malicious attacks occur at both the data/model level—such as model inversion, poisoning, and extraction—and the user interaction level through techniques like prompt injection and jailbreaking. These methods can potentially access private training data, bypass safety filters, or coerce models into outputting harmful content, posing direct threats to data security and public trust.
In response to these evolving threats, the study evaluates current defense strategies, which include three main technical approaches. Parameter processing aims to reduce attack exposure by removing redundant model parameters. Input preprocessing involves paraphrasing user prompts or detecting adversarial triggers without requiring model retraining. Adversarial training, including red-teaming frameworks, simulates attacks to improve model robustness. The research also highlights detection technologies like semantic watermarking and tools such as CheckGPT, which can identify model-generated text with accuracy rates up to 98–99%. However, the authors note that defenses frequently lag behind the pace of evolving attack techniques, indicating a pressing need for scalable, cost-effective, and multilingual-adaptive solutions.
Beyond technical safeguards, the study emphasizes that ethical governance is equally critical. The researchers argue that risks such as model hallucination, embedded social bias, privacy leakage, and misinformation dissemination represent social-level challenges, not merely engineering problems. To foster trust in LLM-based systems, future development must integrate principles of transparency, verifiable content traceability, and cross-disciplinary oversight. The implementation of ethical review frameworks, dataset audit mechanisms, and public awareness education is deemed essential to prevent misuse and protect vulnerable populations.
The implications of this research extend across multiple sectors. Effective defense systems could help protect financial institutions from sophisticated phishing schemes, reduce the spread of medical misinformation, and uphold scientific integrity. Techniques like watermark-based traceability and red-teaming may evolve into industry standards for responsible model deployment. The study concludes that the secure and ethical development of LLMs will fundamentally shape societal adoption of artificial intelligence. The researchers advocate for future work focused on AI responsible governance, unified regulatory frameworks, safer training datasets, and enhanced model transparency reporting. With coordinated effort, LLMs have the potential to mature into reliable tools that support education, digital healthcare, and innovation ecosystems while minimizing the risks associated with cybercrime and social misinformation.


