Language generation models have become increasingly powerful enablers of many applications. Many such models offer free or affordable API access, which
makes them potentially vulnerable to model extraction attacks through
distillation. To protect intellectual property (IP) and ensure fair use of
these models, various techniques such as lexical watermarking and synonym
replacement have been proposed. However, these methods can be nullified by
obvious countermeasures such as “synonym randomization”. To address this issue,
we propose GINSEW, a novel method to protect text generation models from being
stolen through distillation. The key idea of our method is to inject a secret signal into the probability vector of the decoding step for each target token. We can then detect this signal by probing a suspect model to tell
if it is distilled from the protected one. Experimental results show that
GINSEW can effectively identify instances of IP infringement with minimal
impact on the generation quality of protected APIs. Against watermark removal attacks, our method achieves an absolute improvement of 19 to 29 points in mean average precision (mAP) for detecting suspect models over previous methods.
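
To make the injection idea concrete, the sketch below shows one way a secret periodic signal could be folded into the probability vector at a decoding step. The two-group vocabulary split, the sinusoidal form, and the names SECRET_FREQ, SIGNAL_EPS, and inject_signal are illustrative assumptions based only on the description above, not the exact GINSEW construction.

```python
import numpy as np

# Illustrative assumption: a secret-keyed sinusoid moves a tiny amount of
# probability mass between two fixed vocabulary groups at each decoding step.
SECRET_FREQ = 7.3   # hypothetical secret frequency known only to the API owner
SIGNAL_EPS = 1e-3   # hypothetical perturbation strength, kept small to preserve quality

def inject_signal(probs: np.ndarray, step: int) -> np.ndarray:
    """Perturb one decoding step's probability vector with a hidden periodic signal."""
    vocab_size = probs.shape[0]
    group_a = (np.arange(vocab_size) % 2 == 0)       # fixed secret split of the vocabulary
    delta = SIGNAL_EPS * np.sin(SECRET_FREQ * step)  # secret periodic signal for this step
    perturbed = probs.copy()
    perturbed[group_a] *= (1.0 + delta)               # slightly boost one group...
    perturbed[~group_a] *= (1.0 - delta)              # ...and slightly suppress the other
    return perturbed / perturbed.sum()                # renormalize to a valid distribution

# Example: apply to a softmax output before sampling the next token.
logits = np.random.randn(32000)
probs = np.exp(logits - logits.max())
probs /= probs.sum()
watermarked = inject_signal(probs, step=5)
```

Under these assumptions, detection would probe a suspect model repeatedly and test whether the probability mass assigned to group_a oscillates at SECRET_FREQ, which would indicate distillation from the protected API.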