In this paper, a Light Emitting Diode (LED) fingerprinting method based on the Transformer self-attention mechanism is proposed to address the challenge of trusted authentication of terminal identity in visible light communication. The method leverages the unique individual characteristics of LED emission spectra for identity recognition. To extract fingerprint features from the spectral data, a parallel network structure is proposed: the one-dimensional spectral signal is input into a Convolutional Neural Network (CNN) to extract local features, while simultaneously being fed into a Transformer branch, whose self-attention computation captures global correlation features. The encoded global features from the Transformer are fused with the local features extracted by the CNN, and a feature interaction module generates the final feature vector for classification. Experimental results demonstrate that on a spectral dataset from eight LEDs of the same model and batch, the proposed method achieves a classification accuracy of 98.33%, providing a new approach for secure access in visible light communication.
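The parallel two-branch structure described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation; all shapes (a 64-point spectrum split into 8 tokens), the single-head attention, and the simple concatenation-plus-linear "interaction" stage are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    # Local-feature branch: valid-mode 1-D convolutions over the spectrum,
    # pooled to one scalar per kernel (a stand-in for the CNN branch).
    return np.stack([np.convolve(x, k, mode="valid") for k in kernels])

def self_attention(x, wq, wk, wv):
    # Global-feature branch: single-head scaled dot-product self-attention
    # over spectral tokens (a stand-in for the Transformer branch).
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Hypothetical input: a 64-point emission spectrum, reshaped into
# 8 tokens of 8 samples each for the attention branch.
spectrum = rng.standard_normal(64)
tokens = spectrum.reshape(8, 8)

d = 8
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))

local = conv1d(spectrum, rng.standard_normal((4, 5))).mean(axis=1)  # 4 local features
global_ = self_attention(tokens, wq, wk, wv).mean(axis=0)           # 8 global features

# Feature fusion + interaction: concatenate both branches, then a single
# linear layer with softmax over the 8 LED identity classes.
fused = np.concatenate([local, global_])
w_cls = rng.standard_normal((fused.size, 8))
logits = fused @ w_cls
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print("predicted LED class:", int(probs.argmax()))
```

In the paper the fusion stage is a learned feature interaction module and the whole pipeline is trained end-to-end; the sketch only shows how the two feature streams are computed in parallel and merged before classification.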