Recently, transformer-based models have been widely used in sequence-to-sequence (seq2seq) tasks, especially neural machine translation (NMT). In the original Transformer, the number of layers in the encoder is equal to the number of layers in the decoder. However, the decoder has a more complex structure and a harder task than the encoder, so the two layer counts need not be the same. To determine how the numbers of encoder and decoder layers should be set, we modify the Transformer accordingly and conduct four experiments on four translation tasks from IWSLT2017. The experimental results show that the decoder should have more layers than the encoder, which yields better translation performance.
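As a minimal sketch of the idea of an asymmetric encoder/decoder depth, the snippet below builds a Transformer with more decoder layers than encoder layers using PyTorch's `nn.Transformer`. This is not the authors' implementation, and the specific layer counts (4 encoder, 8 decoder) and dimensions are hypothetical illustration values, not the configuration reported in the paper.

```python
# Illustrative only: an asymmetric-depth Transformer, assuming PyTorch.
import torch
import torch.nn as nn

model = nn.Transformer(
    d_model=512,
    nhead=8,
    num_encoder_layers=4,   # shallower encoder (hypothetical value)
    num_decoder_layers=8,   # deeper decoder (hypothetical value)
    dim_feedforward=2048,
)

# Dummy batches with the default (seq_len, batch, d_model) layout.
src = torch.rand(10, 32, 512)   # source sequence
tgt = torch.rand(12, 32, 512)   # target sequence
out = model(src, tgt)           # shape: (12, 32, 512)
```

Because `num_encoder_layers` and `num_decoder_layers` are independent constructor arguments, the encoder/decoder depth ratio can be varied freely when reproducing this kind of experiment.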