

VAM! AI Reading Group: Paper "LLMs can hide text in other text of the same length"
Yunus will present the paper "LLMs can hide text in other text of the same length."
Paper: LLMs can hide text in other text of the same length
https://arxiv.org/pdf/2510.20075
To maximize engagement, please try to read the paper in advance.
Paper 3-line Summary: The paper introduces a rank-based steganography method that lets a large language model hide an arbitrary text perfectly inside another fluent text of the same length. By recording and reusing token ranks in the model's probability distribution, it achieves full-capacity encoding while keeping the stegotext plausible and human-readable. Experiments on Reddit posts show these stegotexts remain within the normal plausibility range of real text, raising concerns for AI safety, censorship, and trust in language outputs.
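To give attendees a concrete feel for the rank-based idea before the session, here is a minimal toy sketch. It is an assumption-laden illustration, not the paper's implementation: a deterministic hash-based ranking (`ranked_vocab`, a hypothetical helper) stands in for a real LLM's next-token probability ranking, and the tiny `VOCAB` is invented. The key idea it does capture is that each secret token's rank under one context is replayed as a pick at the same rank under another context, which makes the mapping invertible and length-preserving.

```python
import hashlib

# Hypothetical toy vocabulary; the paper works over a real LLM tokenizer.
VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran", "fast"]

def ranked_vocab(context):
    # Toy stand-in for an LLM: deterministically "rank" the vocabulary
    # given the context. The paper instead ranks tokens by the model's
    # next-token probabilities conditioned on the context.
    return sorted(VOCAB, key=lambda tok: hashlib.sha256(
        (context + "|" + tok).encode()).hexdigest())

def encode(secret_tokens, secret_prompt, cover_prompt):
    # Record each secret token's rank on the secret side, then emit the
    # token holding that same rank on the cover side.
    stego, sctx, cctx = [], secret_prompt, cover_prompt
    for tok in secret_tokens:
        rank = ranked_vocab(sctx).index(tok)
        cover_tok = ranked_vocab(cctx)[rank]
        stego.append(cover_tok)
        sctx += " " + tok
        cctx += " " + cover_tok
    return stego

def decode(stego_tokens, secret_prompt, cover_prompt):
    # Reverse the mapping: read off ranks on the cover side and replay
    # them on the secret side.
    secret, sctx, cctx = [], secret_prompt, cover_prompt
    for tok in stego_tokens:
        rank = ranked_vocab(cctx).index(tok)
        secret_tok = ranked_vocab(sctx)[rank]
        secret.append(secret_tok)
        sctx += " " + secret_tok
        cctx += " " + tok
    return secret

secret = ["the", "cat", "sat"]
stego = encode(secret, "secret prompt", "cover prompt")
assert len(stego) == len(secret)            # same length, full capacity
assert decode(stego, "secret prompt", "cover prompt") == secret
```

Because both sides share the same deterministic ranking function and prompts, decoding recovers the secret exactly, and the stegotext has exactly as many tokens as the secret, mirroring the "same length" property in the title.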
Want to present?
The list of papers will be available here: https://docs.google.com/spreadsheets/d/1HET5sjnHjwiF3IaCTipR_ZWspfgglqwdBFRWAfKBhp8/edit?usp=sharing
To connect with the group, join the Discord: https://discord.gg/teJvEejs94
Timeline:
6:30 PM - Arrival & Networking
6:45 PM - 7:15 PM - Paper Presentation
7:15 PM - Discussions
About the Facilitator
Issam Laradji is a Research Scientist at ServiceNow and an Adjunct Professor at the University of British Columbia. He holds a PhD in Computer Science from the University of British Columbia, and his research interests include natural language processing, computer vision, and large-scale optimization.
Looking forward to discussing the latest AI papers!