Pentesting GenAI LLM models: Securing Large Language Models
https://WebToolTip.com
Published 4/2025
Created by Start-Tech Trainings
MP4 | Video: h264, 1280x720 | Audio: AAC, 44.1 KHz, 2 Ch
Level: All | Genre: eLearning | Language: English | Duration: 51 Lectures ( 3h 16m ) | Size: 1.6 GB
Master LLM Security: Penetration Testing, Red Teaming & MITRE ATT&CK for Secure Large Language Models
What you'll learn
Understand the unique vulnerabilities of large language models (LLMs) in real-world applications.
Explore key penetration testing concepts and how they apply to generative AI systems.
Master the red teaming process for LLMs using hands-on techniques and real attack simulations.
Analyze why traditional benchmarks fall short in GenAI security and learn better evaluation methods.
Dive into core vulnerabilities such as prompt injection, hallucinations, biased responses, and more.
Use the MITRE ATT&CK framework to map out adversarial tactics targeting LLMs.
Identify and mitigate model-specific threats like excessive agency, model theft, and insecure output handling.
Conduct and report on exploitation findings for LLM-based applications.
Requirements
Basic understanding of IT or cybersecurity Curiosity about AI systems and their real-world impact No prior knowledge of penetration testing or LLMs required