Cover Image for VAM! AI Reading Group: "Poisoning Attacks on LLMs Require a Near-Constant Number of Poison Samples"
Cover Image for VAM! AI Reading Group: "Poisoning Attacks on LLMs Require a Near-Constant Number of Poison Samples"
Avatar for Vancouver AI Meetup (VAM!)
16 Going

VAM! AI Reading Group: "Poisoning Attacks on LLMs Require a Near-Constant Number of Poison Samples"

Registration
Event Full
If you’d like, you can join the waitlist.
Please click on the button below to join the waitlist. You will be notified if additional spots become available.
About Event

Thomas will present the paper Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples.

📄 Paper: Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples

https://arxiv.org/abs/2510.07192

To maximize engagement, please try to read the paper in advance.

Paper 3-line Summary: This research challenges the assumption that the feasibility of poisoning attacks on large language models (LLMs) decreases with model scale, demonstrating instead that injecting backdoors requires a near-constant absolute number of documents regardless of the dataset size or model scale. The largest pretraining poisoning experiments conducted to date showed that as few as 250 poisoned documents could successfully compromise models ranging from 600 million to 13 billion parameters, even though the largest models trained on over 20 times more clean data. Ultimately, these findings suggest that poisoning attacks become significantly more practical for large models because the adversary’s requirements remain nearly constant while the training dataset size expands, emphasizing the need for robust defenses.

Want to present?

The list of papers will be available here: https://docs.google.com/spreadsheets/d/1HET5sjnHjwiF3IaCTipR_ZWspfgglqwdBFRWAfKBhp8/edit?usp=sharing

To connect with the group, join the Discord: https://discord.gg/teJvEejs94

Timeline:
🕠 6:30 PMArrival & Networking.

🗣️ 6:45 PM ~ 7:15 – Paper Presentation

🗣️ 7:15 PM - Discussions

About the Facilitator

Issam Laradji is a Research Scientist at ServiceNow and an Adjunct Professor at University of British Columbia. He holds a PhD in Computer Science and a PhD from the University of British Columbia, and his research interests include natural language processing, computer vision, and large-scale optimization.

Looking forward to discussing the latest AI Papers!

Location
Waves Coffee House - Howe
900 Howe St #100, Vancouver, BC V6Z 2M4, Canada
Avatar for Vancouver AI Meetup (VAM!)
16 Going