Saturday Aug 05, 2023

ICML 2023 - A Watermark for Large Language Models

In this episode we discuss A Watermark for Large Language Models by John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, and Tom Goldstein. The paper presents a watermarking framework for large language models (LLMs) that embeds a hidden signal in generated text: the signal is invisible to human readers yet algorithmically detectable from a short span of tokens. Before each token is generated, a pseudo-random "green" subset of the vocabulary is selected and its use is softly promoted during sampling. The watermark is evaluated on a multi-billion-parameter LLM, and its robustness and security are analyzed, underscoring the need to detect and audit machine-generated text to mitigate potential malicious use.
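The scheme described above can be sketched in a few lines. This is a minimal, hedged illustration of the green-list idea, not the authors' implementation: the hash-based seeding, the green-list fraction `gamma`, and the logit bias `delta` are illustrative choices, and a toy logit vector stands in for a real LLM's output.

```python
import hashlib
import math
import random


def green_list(prev_token: int, vocab_size: int, gamma: float = 0.5) -> set:
    """Pseudo-randomly partition the vocabulary, seeded on the previous token.

    Both generator and detector can recompute this set, so no model access
    is needed at detection time. (Seeding scheme is an illustrative choice.)
    """
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])


def watermark_logits(logits: list, prev_token: int,
                     delta: float = 2.0, gamma: float = 0.5) -> list:
    """'Soft' watermark: add a bias delta to green-list logits before sampling."""
    green = green_list(prev_token, len(logits), gamma)
    return [l + delta if i in green else l for i, l in enumerate(logits)]


def detection_z_score(tokens: list, vocab_size: int, gamma: float = 0.5) -> float:
    """Count green-list hits; without a watermark, hits ~ Binomial(T, gamma),
    so a large z-score indicates watermarked text."""
    T = len(tokens) - 1
    hits = sum(
        tokens[t + 1] in green_list(tokens[t], vocab_size, gamma)
        for t in range(T)
    )
    return (hits - gamma * T) / math.sqrt(T * gamma * (1 - gamma))
```

A watermarked generator samples from the biased logits, so green tokens appear more often than chance; the detector then recomputes each step's green list and flags text whose z-score exceeds a threshold.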

