Deep learning models require large amounts of time and data to build, making them costly assets. Consequently, techniques that protect the rights of model owners by embedding digital watermarks into trained models have attracted attention. In this work, we target recurrent neural networks (RNNs) and embed watermarks during model training. Few studies have demonstrated that watermarks can be embedded into trained RNN models. In our previous research, we therefore showed that a watermark can be embedded into, and detected from, models trained with LSTM networks, a type of RNN. In this paper, we investigate how embedding a watermark into an RNN training model affects the model. In particular, we conduct experiments and discuss the results regarding the watermark's impact on the original task and the effect of increasing the number of embedded watermark bits.
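As an illustration of the general idea, a common way to embed a watermark during training (following the weight-regularizer approach of Uchida et al.; the paper's exact method is not shown here, so all names and parameters below are illustrative assumptions) is to add a loss term that drives a secret projection of the model's weights toward a chosen bit string. A minimal NumPy sketch, using gradient descent on the embedding loss alone:

```python
import numpy as np

# Hypothetical sketch of weight-based watermark embedding via a
# regularization loss (Uchida-style); not the paper's actual method.
rng = np.random.default_rng(0)

n_bits = 8                                  # watermark length
w = rng.normal(size=64)                     # flattened model weights (stand-in)
X = rng.normal(size=(n_bits, w.size))       # secret projection matrix (the key)
b = rng.integers(0, 2, size=n_bits)         # watermark bits to embed

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def embedding_loss_and_grad(w):
    """Binary cross-entropy between projected weights and target bits."""
    p = sigmoid(X @ w)
    loss = -np.mean(b * np.log(p + 1e-12) + (1 - b) * np.log(1 - p + 1e-12))
    grad = X.T @ (p - b) / n_bits           # d(loss)/dw
    return loss, grad

# In real training this term would be added to the task loss; here we
# minimize it alone to show that the bits become recoverable.
for _ in range(500):
    loss, grad = embedding_loss_and_grad(w)
    w -= 1.0 * grad

# Detection: threshold the projected weights and compare with b.
extracted = (sigmoid(X @ w) > 0.5).astype(int)
print(bool((extracted == b).all()))
```

Increasing `n_bits` relative to the number of weights makes the constraint system harder to satisfy, which is one intuition for why a larger embedded payload can degrade task performance, as the paper's experiments examine.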