This PDF file contains the front matter associated with SPIE Proceedings Volume 12285, including the Title Page, Copyright information and Table of Contents.
International Conference on Advanced Algorithms and Neural Networks (AANN 2022)
This paper studies the estimation of wind power generation for wind power demand analysis at a dry bulk terminal and puts forward a calculation method for wind power generation based on wind speed monitoring data. Through the analysis and statistics of the wind speed monitoring data, the wind duration at different wind speed levels is estimated. On this basis, and using the basic output power data of the wind turbine provided by the manufacturer, the wind power generation can be estimated by month and year, which can support the feasibility analysis of a terminal enterprise when building a wind power generation system.
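A minimal sketch of this kind of estimate, assuming hypothetical wind-speed bins, bin durations derived from monitoring data, and an illustrative manufacturer power curve (none of the numbers below are from the paper):

```python
import numpy as np

# Hypothetical hourly wind-speed measurements for one month (m/s).
rng = np.random.default_rng(0)
wind_speed = rng.weibull(2.0, size=30 * 24) * 7.0

# Wind-speed bins (m/s) and illustrative turbine output power (kW) per bin,
# as would be read off a manufacturer power curve.
bin_edges = np.array([0, 3, 5, 7, 9, 11, 13, 25])      # cut-in ~3 m/s, cut-out 25 m/s
power_kw = np.array([0, 15, 60, 150, 300, 480, 600])   # mean output within each bin

# "Wind duration": hours spent in each speed bin during the month.
hours_per_bin, _ = np.histogram(wind_speed, bins=bin_edges)

# Estimated monthly generation (kWh) = sum over bins of power x duration.
energy_kwh = float(np.sum(power_kw * hours_per_bin))
print(f"estimated monthly generation: {energy_kwh:.0f} kWh")
```

Summing the power-curve output weighted by the hours observed in each speed bin gives the monthly figure; repeating this per month and summing gives the annual estimate.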
This paper selects Earthquake Cases in China as the sample set and proposes a neural network model based on rough sets: a rough set attribute reduction algorithm automatically selects the core abnormal indicators, and a neural network with strong generalization ability is then built to model the uncertain relationship between anomalies and earthquakes. The results show that the magnitude difference is between -0.5 and 0.5, which meets the accuracy requirement of earthquake prediction, indicating that the model is effective for earthquake prediction research.
This article draws on deep learning theory and big data technology to build a model for analysing massive amounts of audio data and using it to provide better services. Firstly, spectrograms and waveforms are visualised to give an initial analysis of the audio features. Then, the MFCC and Chroma features of the audio are extracted, and an MLP model is built and trained separately on each feature set. To make audio recognition more efficient, this paper also adopts non-negative matrix factorization (NMF) to enhance the audio data, which makes the differences between audio samples more significant; the accuracy of the MLP model built on the reconstructed audio data finally reaches 89.12%.
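A sketch of the MFCC-plus-MLP part of such a pipeline using librosa and scikit-learn; the synthetic tone/noise clips below merely stand in for a real audio corpus, and a Chroma branch would be analogous via librosa.feature.chroma_stft:

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

SR = 22050  # sample rate of the synthetic clips

def mfcc_vector(y, sr=SR, n_mfcc=13):
    """Time-averaged MFCC vector of one audio clip."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

# Two synthetic "classes" of 1-second clips stand in for the real audio data.
rng = np.random.default_rng(0)
tones = [np.sin(2 * np.pi * rng.uniform(200, 400) * np.arange(SR) / SR) for _ in range(40)]
noises = [rng.normal(0.0, 0.3, SR) for _ in range(40)]

X = np.stack([mfcc_vector(y) for y in tones + noises])
labels = np.array([0] * 40 + [1] * 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("MFCC + MLP accuracy:", clf.score(X_te, y_te))
```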
With the popularization and application of network technology, computer network security has become an increasingly prominent problem, and intrusion detection systems are therefore increasingly important. Firstly, this paper studies the application of clustering algorithms from data mining in network security detection. Then, on the basis of the literature, the relevant knowledge of network security detection is reviewed, and a network security detection system based on a clustering algorithm is designed and analyzed. Finally, the algorithm applied in the system is verified. The test results show that the improved ant colony clustering algorithm proposed in this paper outperforms the basic ant colony clustering algorithm in detection time.
As an important component of Sigma-Delta (Σ-Δ) ADCs, the digital filter has a direct and important impact on accuracy. In this paper, a 17-bit precision Σ-Δ ADC digital filter is designed as a cascade of a cascaded integrator-comb (CIC) filter and half-band filters, with graded downsampling performed by a two-stage CIC filter and a two-stage half-band filter. The order of the half-band filters is optimized and CSD encoding is used to reduce hardware resource consumption and improve computing performance. The filter is simulated on an FPGA; its maximum operating speed is 43.19 MHz, which meets the real-time requirement at the input sample rate. The output SNR is 105.86 dB and the ENOB is 17.29 bits. Finally, a full-custom digital circuit layout for the filter was implemented in a 0.18 μm CMOS technology, with a layout area of 41878 μm² and a total power of 378 μW.
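The integrator-comb stage of such a decimation chain can be prototyped directly; this is a generic N-stage CIC decimator in NumPy (the order and decimation ratio below are illustrative, not the values used in the paper):

```python
import numpy as np

def cic_decimate(x, R=16, N=4, M=1):
    """Generic N-stage CIC decimator with decimation ratio R and differential delay M."""
    y = np.asarray(x, dtype=np.float64)
    for _ in range(N):                              # N cascaded integrators at the input rate
        y = np.cumsum(y)
    y = y[::R]                                      # downsample by R
    for _ in range(N):                              # N comb (differencing) stages at the output rate
        y = y - np.concatenate((np.zeros(M), y[:-M]))
    return y / (R * M) ** N                         # normalize the DC gain of (R*M)**N

# Example: decimate a 1-bit sigma-delta style bitstream.
bitstream = np.random.randint(0, 2, 4096)
decimated = cic_decimate(bitstream, R=16, N=4)
print(decimated.shape)
```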
To further study the reliability of human factors in dam break accidents, an index system of human influencing factors is established, and human errors are divided into four categories and twelve sub-categories covering "people, technology, organization, and environment". Combined with BP neural network technology, more than 60 historical dam failure records are used as input to obtain the weight of human error under different dam failure causes.
The on-board visual multi-object tracking task is a basic function of intelligent vehicle driving and plays a connecting role in applications and research fields such as traffic control, automatic driving and human-computer interaction. However, neither the approach of first locating targets with a detector and then generating trajectories with data association, nor the approach of combining detection and tracking, solves the problem that tracking performance (i.e. MOTA, MOTP, IDSW and other indicators) and tracking speed cannot coexist. This also makes it difficult for existing multi-object tracking algorithms to be widely used in smart vehicles. To this end, we design an end-to-end deep learning multi-object tracking method that can be used on the vehicle, namely the self-query tracker (SQT). Specifically, the input of the algorithm consists of two parts: the current frame and the tracking result of the previous frame. Firstly, the current frame is fed to the backbone network to obtain feature map A. Feature map A is input into the detection branch, and the detections of the current frame are quickly obtained through regression between the heat map and the bounding boxes. Then feature map A is flattened and fed into a Transformer encoder-decoder network; the tracking result of the previous frame is used as the query vector to obtain the position map of the previously tracked objects in the current feature map. The final tracking result is obtained by matching the two results (the detections and the position mapping of the tracked objects in the current feature map). Training and validation on the MOT20 dataset show that the inference time per frame is about 44 ms and the multi-object tracking accuracy is 58.9%. The model was integrated into an intelligent vehicle ROS platform for testing, and the results show that the proposed algorithm achieves real-time multi-object tracking in complex traffic scenarios and has good practical application value. On a platform using an RTX 2080Ti, the proposed method reaches 15+ FPS with a MOTA score of 58.9 on the MOT20 dataset.
Reinforcement learning has made great progress on single-agent problems in recent years. However, the development of multi-agent reinforcement learning has been much slower, and many existing algorithms do not perform well in this field. To train a multi-agent reinforcement learning model efficiently, the MADDPG algorithm based on deep neural networks is adopted in this paper. The networks of MADDPG follow the Actor-Critic framework, with centralized critic networks and decentralized actor networks. The results show that the three agents in the experimental environment learn to cooperate and compete well in just 50 thousand episodes. Although the MADDPG model has high computational complexity when the number of agents is large, it can still perform well in a multi-agent environment.
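A structural sketch of the network arrangement the abstract describes, with decentralized actors and a centralized critic (layer sizes and agent counts are illustrative, and the training loop with replay buffer and target networks is omitted):

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Decentralized actor: maps one agent's own observation to its action."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),
        )

    def forward(self, obs):
        return self.net(obs)

class CentralizedCritic(nn.Module):
    """Centralized critic: scores the joint observations and actions of all agents."""
    def __init__(self, obs_dims, act_dims, hidden=64):
        super().__init__()
        in_dim = sum(obs_dims) + sum(act_dims)
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, all_obs, all_acts):
        return self.net(torch.cat(list(all_obs) + list(all_acts), dim=-1))

obs_dims, act_dims = [8, 8, 8], [2, 2, 2]            # three agents (illustrative sizes)
actors = [Actor(o, a) for o, a in zip(obs_dims, act_dims)]
critic = CentralizedCritic(obs_dims, act_dims)
obs = [torch.randn(1, d) for d in obs_dims]
acts = [actor(o) for actor, o in zip(actors, obs)]
q_value = critic(obs, acts)                          # one Q estimate for the joint state-action
print(q_value.shape)
```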
In the production of printed circuit boards (PCBs), defect detection is a very important step. Due to the structural complexity of printed circuit boards, traditional detection methods suffer from poor accuracy and low speed. In this paper, a deep learning-based object detection method is proposed for PCB defect detection. The baseline model is PP-YOLOv2, with ResNet50 as the backbone for feature extraction and the Mish activation function, which provides better smoothing. The model is initialized from COCO pre-trained weights. Finally, tests were performed on publicly available PCB datasets, and the experimental results show that the method achieves high detection accuracy and fast detection speed, making it more suitable for production use than other PCB defect detection methods.
Using deep learning to colorize grayscale images usually requires a large number of training images and very high computing power. In view of this, this paper designs a fast colorization method for grayscale images that combines traditional feature extraction with a simple neural network. The method has three main steps. Firstly, a reference color image is selected and the network is trained; the target gray image is then input into the network to generate the first-stage color image. Next, the color image obtained in the previous step is transformed into HSV space, only the V component is retained, and the gray values of the target gray image are used for color synthesis to obtain a second-stage color image with clear texture. Finally, the Reinhard algorithm is used for color migration, and the reference color image is used to color the target image accurately, giving the third-stage color image. Experiments demonstrate that the proposed algorithm is fast, efficient, and robust.
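A rough sketch of the second and third stages under one reading of the abstract, assuming the target gray values are written into the V channel and that the Reinhard transfer is done by per-channel mean/std matching in Lab space; the synthetic images only stand in for the network output and the reference image:

```python
import cv2
import numpy as np

def replace_v_channel(color_img, gray_img):
    """Keep the hue/saturation of the colorized result but restore the target gray values."""
    hsv = cv2.cvtColor(color_img, cv2.COLOR_BGR2HSV)
    hsv[:, :, 2] = gray_img                          # V component taken from the target gray image
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

def reinhard_transfer(source, target):
    """Reinhard-style color transfer: match per-channel mean/std in Lab space."""
    src = cv2.cvtColor(source, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(target, cv2.COLOR_BGR2LAB).astype(np.float32)
    s_mean, s_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    t_mean, t_std = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1)) + 1e-6
    out = (tgt - t_mean) * (s_std / t_std) + s_mean
    return cv2.cvtColor(np.clip(out, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)

gray = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
colorized = cv2.applyColorMap(gray, cv2.COLORMAP_JET)        # stand-in for the network output
reference = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
stage2 = replace_v_channel(colorized, gray)                  # second-stage image with clear texture
stage3 = reinhard_transfer(reference, stage2)                # third-stage image after color migration
```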
With the development of network technology, group key management has gradually become an important research topic. In general, group key management protocols fall into three types, and centralized group key management protocols have been widely used in fields such as multicast communication and intelligent Internet information systems. The basic idea is that a key generation centre controls the whole group. In order to construct a secure and efficient group key protocol, this paper proposes a group key management algorithm based on polynomial construction. The analysis of the proposed scheme shows that any authorized member can obtain the group key from its private key pair and, furthermore, can check the validity of the broadcast message.
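The abstract does not give the exact construction, but a textbook polynomial-based group key scheme of this flavour looks roughly like the sketch below: the key generation centre broadcasts the coefficients of a polynomial whose roots are derived from members' private keys and whose constant term hides the group key, and each authorized member recovers the key by evaluating the polynomial at its own point. This is a generic illustration, not the paper's protocol, and it omits the broadcast-validity check mentioned in the abstract.

```python
import hashlib
import secrets

P = 2**127 - 1  # public prime defining the field (illustrative size)

def h(key: bytes) -> int:
    """Map a member's private key material to a field element."""
    return int.from_bytes(hashlib.sha256(key).digest(), "big") % P

def build_broadcast(member_keys, group_key):
    """KGC side: coefficients of f(x) = prod(x - h(k_i)) + GK over GF(P), highest degree first."""
    coeffs = [1]
    for k in member_keys:
        root = (-h(k)) % P
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):              # multiply running polynomial by (x - h(k))
            new[i] = (new[i] + c) % P
            new[i + 1] = (new[i + 1] + c * root) % P
        coeffs = new
    coeffs[-1] = (coeffs[-1] + group_key) % P       # hide GK in the constant term
    return coeffs

def recover_group_key(coeffs, my_key):
    """Member side: f(h(k_i)) = GK because h(k_i) is a root of the product term."""
    x, acc = h(my_key), 0
    for c in coeffs:                                # Horner evaluation
        acc = (acc * x + c) % P
    return acc

keys = [secrets.token_bytes(16) for _ in range(3)]
gk = secrets.randbelow(P)
assert all(recover_group_key(build_broadcast(keys, gk), k) == gk for k in keys)
```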
Re-ID (person re-identification) is a technology for judging whether a specific pedestrian appears in an image or video. Attention mechanisms are applied to Re-ID to optimize the feature representation and improve the discriminative power of features. This paper studies a Dual Joint Attention (DJA) mechanism, which optimizes the feature representation by attending to locally important features and capturing global context in the spatial and channel domains respectively, and builds a model with DJA. For classification, A-softmax is used as the loss function; it clusters features in the angle space by imposing a multiplicative angular margin constraint, so that metric learning is integrated directly into the classification. Experiments show that mAP and Rank-1 are significantly improved.
Visual object tracking is one of the most important topics in computer vision and is widely used in many industries, such as security and self-driving. However, existing tracking algorithms do not perform well in some difficult scenarios. One of the most difficult challenges for current trackers is that when there are distractors around the object, the tracker often suffers from tracking drift. To alleviate this problem, we propose a visual-attention-based tracking algorithm. Experiments on the OTB2013, OTB2015 and GOT-10k benchmarks show that our algorithm achieves good tracking performance.
In this paper, a new CMOS sixth-order Gm-C complex filter with tunable center frequency, bandwidth and image rejection ratio for low intermediate frequency receivers is presented. The innovations of this paper are as follows: 1. a complex filter with low center frequency and high image rejection ratio is proposed; 2. to meet the requirements of image rejection and narrowband frequency selection, a tunable transconductance control circuit is designed; 3. the complex filter provides adjustable parameters such as center frequency and is suitable for narrow-band communication networks. The simulation results show that the control voltage range of the filter is 0.6 V to 1.1 V, the corresponding center frequency adjustment range is 108 to 359 kHz, the bandwidth adjustment range is 181 to 268 kHz, the image rejection ratio ranges from 42 to 72 dB, the current consumption is 2.5 to 4.4 mA, and the operating voltage is 1.8 V.
Travel time prediction is a fundamental part of traffic analysis. It is affected by spatial correlations, temporal dependencies, and external conditions (e.g. weather, metadata, traffic conditions). In this paper, we propose a deep learning framework that integrates a CNN and a Bi-LSTM to learn spatial-temporal feature representations for travel time prediction. Short-term (5-minute interval) historical traffic data are fully utilized to capture the patterns and trends of travel time. The features are sorted into two categories: time-varying attributes and non-time-varying attributes. The proposed model, called MV-FCL, was evaluated on a network in the city of Zhangzhou, China. The results demonstrate that the proposed MV-FCL model outperforms state-of-the-art baselines.
As a classical clustering algorithm, the K-medoids algorithm requires the number of clusters to be input manually when the program runs, so it is difficult to realize adaptive calculation of the cluster number. Therefore, an improved K-medoids algorithm considering distance and weight is proposed in this paper. The algorithm uses a dimension-weighted Euclidean distance to measure the distance between samples, and then obtains the density and weight from the sample distances. The point with the highest density is taken as the first cluster center, and all samples in its cluster are removed; the next cluster center is found from the weight of the previous cluster center and the remaining sample points in the data set. This process is repeated until all samples have been screened, at which point multiple cluster centers have been obtained automatically. Simulation experiments on UCI real-world datasets and artificial simulated datasets show that the proposed algorithm has high accuracy and good stability.
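A simplified sketch of the adaptive center-selection idea, using a dimension-weighted Euclidean distance and a density count within a neighborhood radius; the weighting scheme and radius here are placeholders rather than the paper's exact formulas:

```python
import numpy as np

def weighted_dist(X, w):
    """Pairwise dimension-weighted Euclidean distances."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((w * diff ** 2).sum(axis=-1))

def select_centers(X, w, radius):
    """Repeatedly take the densest remaining point as a cluster center and
    drop the samples inside its neighborhood, until all samples are screened."""
    D = weighted_dist(X, w)
    remaining = np.arange(len(X))
    centers = []
    while remaining.size:
        sub = D[np.ix_(remaining, remaining)]
        density = (sub < radius).sum(axis=1)         # neighbors within `radius`
        c = remaining[np.argmax(density)]
        centers.append(c)
        remaining = remaining[D[c, remaining] >= radius]  # remove the new center's cluster
    return np.array(centers)

X = np.random.rand(200, 4)
w = np.array([1.0, 1.0, 0.5, 0.5])                   # illustrative dimension weights
centers = select_centers(X, w, radius=0.3)
print("adaptively selected number of clusters:", len(centers))
```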
In order to overcome the large positioning errors in indoor garages caused by fuzzy scene texture, an indoor garage localization method for weak texture scenes is proposed. The method is divided into an off-line stage and an on-line stage. The off-line stage aims to establish a database of parking space numbers and the world coordinates of marked corners. In the on-line stage, the image is preprocessed, and an improved Hough transform algorithm is then used to extract the marked corners. The feature points are matched according to the recognized parking space number and combined with the world coordinate information of the marked corners stored in the database, and the current positioning information is finally calculated with the help of the PnP algorithm. Taking an indoor parking garage as an example, actual tests show that the accuracy of the marked corners extracted by the improved Hough transform algorithm proposed in this paper reaches 92.5%, higher than the 84.5% of the traditional method. The average positioning error is about 0.45 m, which meets the positioning requirements of indoor garages. Simulation results show that this method is effective in weak texture scenes.
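The final pose computation can be illustrated with OpenCV's PnP solver, assuming the parking-space corner correspondences and camera intrinsics below (all values are made up for illustration):

```python
import cv2
import numpy as np

# World coordinates of the marked corners of one parking space, looked up in the
# offline database by the recognised space number (illustrative values, metres).
object_points = np.array([[0.0, 0.0, 0.0],
                          [2.5, 0.0, 0.0],
                          [2.5, 5.3, 0.0],
                          [0.0, 5.3, 0.0]], dtype=np.float64)

# Pixel coordinates of the same corners extracted by the (improved) Hough transform.
image_points = np.array([[412.0, 633.0],
                         [901.0, 641.0],
                         [873.0, 310.0],
                         [448.0, 302.0]], dtype=np.float64)

K = np.array([[800.0, 0.0, 640.0],       # camera intrinsics (assumed calibrated)
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)
camera_position = (-R.T @ tvec).ravel()   # camera (vehicle) position in world coordinates
print("estimated position:", camera_position)
```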
Landslide geological disasters seriously threaten human life and property. This paper takes the Wenchuan earthquake landslides as the research object and uses five deep learning models (LeNet5, AlexNet, VGG16, ResNet152V2 and DenseNet201) to explore landslide detection based on high-resolution Google Earth images, selecting the optimal model for landslide detection. The images are cropped into samples of four different sizes (60×60, 120×120, 180×180 and 240×240 pixels), and the datasets are then trained, validated and tested with the optimal model. The results show that: (1) among the five deep learning models, the DenseNet201 model is the best, with the largest F1-Score (0.8878) and the smallest RMSE (0.2524); (2) across the Google Earth image sample datasets of four different sizes, using the DenseNet201 model for landslide detection, the F1-Score reaches 0.8995, the RMSE reaches 0.2486, and the Accuracy reaches 0.9308. It can be seen that, based on high-resolution Google Earth images, deep learning can quickly and accurately detect landslide information, providing a methodological reference for the prevention and control of landslide geological hazards.
To address the problem that the MPPT strategy of a PV system based on a BP neural network has large errors when the light intensity changes suddenly, an improved particle swarm optimization algorithm is proposed to optimize the weights and thresholds of the BP neural network, and a simulation model of MPPT control of a PV system based on the PSO-BP neural network algorithm is established. The test and simulation results show that the optimized BP neural network converges faster and its prediction accuracy is improved.
With the increasing number of passengers and the continuous expansion of airports, airport security has gradually attracted attention. Aiming at the security needs of airports in foggy weather, an improved object tracking method based on SiamRPN is proposed, which addresses the drift and occlusion problems that arise during object tracking in fog. First, a video pre-processing module processes the input video, improving the visibility of the target to be tracked. Then, a global attention module is proposed and introduced into the feature extraction network, integrating global context information into the backbone. The Region Proposal Network (RPN) performs classification and regression and finally produces the tracking results. The method is tested on a private dataset containing 20 airport videos. The results show that, compared with SiamRPN, the improved method performs more competitively: the AUC of the success rate and the precision increase by 6.5% and 3.1% respectively.
In order to solve the problems of the BP neural network falling into local minima and converging slowly, a genetic algorithm is used to optimize the weights and thresholds of the network, and the learning rate is dynamically adjusted according to the change in the network's total output error. The improved BP neural network model is used to predict and analyze the Shanghai Composite Index. The empirical study shows that the improved BP neural network prediction model accelerates the convergence of the algorithm and effectively improves the prediction accuracy.
In order to solve the problem of artifacts and image distortion when repairing images with complex backgrounds and large, irregular damaged regions, a region normalization image inpainting algorithm based on a dual-channel generative adversarial network is proposed. Firstly, the images to be repaired are fed into the parallel-channel generative adversarial network, where the damaged and undamaged areas are normalized separately through basic region normalization (RN-B) to correct the shift in mean and variance. Thereafter, a residual block containing hybrid dilated convolution and learnable region normalization (RN-L) is applied to capture multi-scale, multi-receptive-field image information and to automatically detect potentially damaged and undamaged regions for separate normalization. Subsequently, a global affine transformation is performed to enhance the fusion of damaged and undamaged regions, and the result is passed through a self-attention module and then input to the decoding layer. Finally, the outputs are sent to a Patch-GAN discriminator for adversarial optimization. Simulation experiments are carried out on the public datasets CelebA-HQ and Paris StreetView. Experimental results show that the proposed algorithm enhances the accuracy of texture detail and effectively avoids image distortion, and is superior to state-of-the-art inpainting algorithms in terms of visual effect, PSNR, SSIM and L1 loss.
Nowadays, people are increasingly dependent on electronic communication tools. Email is one of the most important means of communication, but spam seriously affects users. This paper focuses on the spam classification problem in a practical context. Real email messages are collected and classified using a Dynamic_LSTM model. Comparison with traditional machine learning algorithms as well as an ordinary RNN shows that the accuracy of Dynamic_LSTM is increased by 8%; in addition, it is not affected by the max-feature setting. The experimental results show that the Dynamic_LSTM model performs better in classification accuracy.
Legal compliance inspection (LCI) plays an essential role in the judicial application of laws and rules but has been neglected in the development of legal artificial intelligence (LegalAI). Currently, few methods in LegalAI can be used to solve this task. In this work, we propose to use a natural language inference (NLI) framework to solve LCI problems, based on the fundamental fact that legal judgments should follow judicial syllogisms. Specifically, we present LegalNLI, a specially constructed dataset reformatted from Chinese legal datasets built for other problems. LegalNLI is a document-level NLI dataset in the legal domain whose premises and hypotheses range from hundreds to thousands of words in length. In addition, LegalNLI contains few artifacts, i.e. clues that would allow the label to be identified by looking only at the hypothesis without observing the premise. Therefore, it is more effective to solve the LCI task by adopting the NLI framework than by direct text classification methods. Finally, we provide experiments evaluating some existing state-of-the-art sentence-level NLI systems on the LegalNLI dataset and find that it is challenging.
Question answering over knowledge graphs (KGQA) has attracted extensive attention. Graph neural networks can represent the dependency information of a KG, so they are well suited to KGQA. However, most KGQA approaches based on graph neural networks model question sentences and candidate answer entities separately, and the influence among questions, relations, and graph structure is not fully utilized when learning entity representations. To solve these problems, a question answering method based on a graph attention network with edge weights is proposed to enhance the question relevance of entity representations. For each relation in the extracted candidate answer subgraph, RoBERTa is used to calculate its semantic similarity to the question, which serves as the edge weight. A graph attention network then fuses the pre-trained entity embeddings and the edge weight information during node updates to obtain candidate answer representations. The experimental results show that our proposed model has certain advantages compared with several benchmark methods.
The traditional grey prediction model suffers from low prediction accuracy in wind power generation forecasting. In this paper, an improved grey prediction model is obtained by smoothing the original data, and a combination forecasting model is constructed by combining it with a BP neural network. The example shows that the accuracy of the improved optimal combination forecasting model is higher than that of the single forecasting models and better than that of the traditional optimal combination forecasting model.
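For reference, the core GM(1,1) grey model that such a combination builds on can be written in a few lines of NumPy; the data smoothing step and the BP-network component of the combination model are not shown, and the input series is illustrative:

```python
import numpy as np

def gm11_forecast(x0, steps=3):
    """Classical GM(1,1) grey forecasting model (no smoothing or combination shown)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # mean generating sequence
    B = np.column_stack((-z1, np.ones_like(z1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]          # development coefficient, grey input
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time-response function
    return np.concatenate(([x1_hat[0]], np.diff(x1_hat)))  # fitted values + `steps` forecasts

monthly_output = [310, 325, 318, 342, 360, 371]          # illustrative wind-power series
print(gm11_forecast(monthly_output, steps=2))
```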
The junction box is the key subsystem responsible for power conversion, distribution and data transmission in a seabed observation network system. An on-line monitoring and intelligent diagnosis device for the power conversion equipment, the core component of the junction box, is used as the edge agent to carry out local analysis and processing of equipment operation, alarm, action and other information, so as to analyze the operating status of the equipment and dynamically diagnose abnormal conditions. A system architecture for dynamic diagnosis of junction box equipment based on edge computing is proposed. This paper expounds the implementation of dynamic diagnosis by the edge agent for junction box communication status, secondary circuits, voltage/current, pilot channels, device self-test and other data, and explores the key technologies of edge computing self-tuning and scalability. The approach enables dynamic diagnosis of junction box equipment status, real-time acquisition of working conditions, early warning of abnormal status and accurate localization of defects, comprehensively promoting the transformation of junction box operation and maintenance toward data-driven, intelligent analysis and improving equipment reliability.
Sign language is the main way for hearing-impaired people, a large special group, to communicate with others in society. The use of new information technology in sign language recognition and translation helps hearing-impaired and healthy people communicate smoothly. With the development of the Transformer network and attention mechanisms in machine translation, this research has entered a new stage. Aiming at the problem of long-term dependency, we propose, based on the Transformer, a continuous sign language translation model that incorporates relative sequence positions into the attention mechanism, replacing the original absolute position encoding. Combining this with movement characteristics, we use image difference technology to dynamically calculate a difference threshold and use image blur detection to adaptively extract key frames. Experimental results on the RWTH-PHOENIX-Weather 2014T dataset verify the effectiveness of the proposed model.
Neural networks have been widely applied in various disciplines, but they have not been applied to econometric forecasting in environmental economics. This paper takes as its research topic the relationship between corporate environmental policy regulation and corporate performance. To analyze the advantages of neural network forecasting methods in environmental economics, we compare traditional methods with neural network forecasting methods, using linear regression and an MLP feed-forward neural network to predict and fit data on Chinese listed companies. By comparing the implementation processes and results, we conclude that the neural network algorithm has higher prediction accuracy than the traditional linear regression algorithm for analyzing the actual situation.
Capsule networks have a new network structure that combines unsupervised learning methods with convolutional neural networks. Compared with convolutional neural networks, their advantage lies in their ability to generalize to novel viewpoints. In this paper, we combine the Dirichlet process mixture model with the capsule network and use a coordinate ascent algorithm to realize the information transfer between capsule layers. We use a shallow network to verify the model's generalization to different viewpoints on the SmallNORB and BUAA-SID datasets. The comparison shows that our method has a lower test error than the Gaussian mixture model inferred directly by approximation.
At present, as the amount of data in various devices and networks increases, structural redundancy information also occupies more bandwidth, which hinders further development of the network. How to effectively eliminate this structural redundancy has become a focus of attention. To this end, this paper addresses the problem by building the DYNATABLE algorithm and the 256-LSA cache replacement algorithm, which improve performance by more than 22% and effectively eliminate the structurally redundant part of the database. This indicates that the standardized design model is highly practical and is expected to be applied effectively in subsequent work.
Based on the input-output model, this paper calculates the direct input coefficient, total input coefficient, influence coefficient and sensitivity coefficient to analyze the current sectoral linkages of agriculture in Fujian Province, and establishes a simultaneous equation model, estimated with Eviews, to study the co-movement between sectoral linkage and agricultural economic growth. The results show that most of the sectors with close forward and backward linkages to agriculture in Fujian Province are labor-intensive, and the pulling effect of agriculture on other sectors is very limited. The backward linkage of agriculture and agricultural economic growth promote each other, and the economic effect of backward linkage is greater than that of forward linkage in Fujian Province.
Session-based recommendation predicts a user's next-click preference from anonymous temporal sessions, and faces considerable challenges in scarce-item recommendation or on newly-established e-commerce platforms. These practical limitations severely constrain the performance of existing session-based recommendation models and define the few-shot session-based recommendation task. This paper proposes a Graph Attentive Transfer Learning (GATL) approach that uses a source domain with sufficient data to distill useful knowledge into a target domain with limited sessions. Concretely, GATL contains an intra-session attentive feature learning module that explores the correlations among the items in each session, and a cross-domain inter-session interactive feature learning module with an adversarial transfer learning strategy that addresses few-shot learning in target session-based recommendation. These modules allow GATL to extract intra- and inter-session graph feature vectors and feed them into an improved prediction layer for overall item prediction. Experimental results on two datasets (Diginetica and Retailrocket) demonstrate the effectiveness of the proposed GATL model on the few-shot session-based recommendation task.
At present, deep convolutional neural networks (CNNs) have been successfully applied to synthetic aperture radar (SAR) target recognition and achieve good recognition results, with performance significantly improved over traditional methods. However, in practical applications the resources of the data processing platform are very limited, while the computation and memory cost of deep convolutional neural networks are high; these two factors hinder smooth deployment on embedded devices. This paper proposes a lightweight network design strategy combined with knowledge distillation for target recognition. First, a convolutional network is designed based on an improved inverted residual structure, yielding a lightweight network used as the student. Then a teacher network (a well-trained deep network model) performs knowledge distillation to guide the training of the student network and improve its recognition accuracy. Finally, the trained student network performs 10-class target recognition on the MSTAR dataset.
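The distillation step can be sketched with the standard soft-target loss, a temperature-scaled KL term toward the teacher plus a cross-entropy term toward the labels; the temperature and weighting below are illustrative, not necessarily the paper's settings:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Soft-target distillation: KL to the teacher's softened outputs
    plus ordinary cross-entropy to the ground-truth labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: a batch of 8 samples over 10 MSTAR-style classes.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```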
Aiming at the low accuracy of traditional algorithms in predicting enhancer-promoter interactions (EPIs) in the human genome, an EPI prediction network based on ResNeXt and an attention mechanism is proposed. In the data processing stage, the gene sequence data of the small number of positive samples in the dataset are augmented to match the number of negative samples. An EPIRNX model is then constructed for feature selection and extraction from a given gene sequence, mining long-distance features for within-cell-line prediction; a transfer learning model, EPIRNX-Transfer, is also trained for cross-cell-line prediction. Using AUROC and AUPRC as evaluation indicators, EPIRNX predicts EPIs within a cell line better than traditional models, and EPIRNX-Transfer predicts EPIs better across cell lines.
The salp swarm algorithm is used to optimize the initial weights and thresholds of a BP neural network, to speed up the tuning of the BP-neural-network PID controller parameters and finally obtain the optimal parameters. A variable weight is integrated into the iterative process to expand the early search range and improve the late search accuracy. In a Matlab 2019 simulation environment, the prediction performance of the BP neural network and the salp-optimized BP neural network is compared. The results show that the optimized BP neural network has higher prediction accuracy than the traditional BP neural network.
Hyperparameter optimization is a challenging problem in developing deep neural networks. Deciding which layers to transfer and which to train is a major task in the design of transfer convolutional neural networks (CNNs). Conventional transfer CNN models are usually designed manually based on intuition. In this paper, a genetic algorithm is applied to select the trainable layers of the transfer model. The selection criterion is constructed from the accuracy and the number of trainable layers. The results show that the method is competent for this task: the system converges to a precision of 97% on the Cats and Dogs classification dataset in no more than 15 generations. Moreover, backward inference from the results of the genetic algorithm shows that our method can capture the gradient features in network layers, which helps in understanding transfer AI models.
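A compact sketch of such a layer-selection GA: each chromosome is a 0/1 mask over the backbone layers, and the fitness trades off accuracy against the number of trainable layers. The evaluation function here is a stand-in; in practice it would freeze or unfreeze layers according to the mask, fine-tune briefly, and return the validation accuracy:

```python
import random

NUM_LAYERS = 16          # number of transferable backbone layers (illustrative)
POP_SIZE, GENERATIONS = 20, 15
LAMBDA = 0.01            # penalty per trainable layer, balancing accuracy vs. cost

def evaluate(mask):
    """Placeholder fitness: replace with a short fine-tuning run that returns
    validation accuracy for the given trainable-layer mask."""
    return random.random() - LAMBDA * sum(mask)

def crossover(a, b):
    cut = random.randrange(1, NUM_LAYERS)            # single-point crossover
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.1):
    return [bit ^ (random.random() < rate) for bit in mask]  # flip bits with probability `rate`

population = [[random.randint(0, 1) for _ in range(NUM_LAYERS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    scored = sorted(population, key=evaluate, reverse=True)
    parents = scored[: POP_SIZE // 2]                # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=evaluate)
print("trainable-layer mask:", best)
```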
Classification of electroencephalography (EEG) signals based on motor imagery (MI) is a critical issue in brain-computer interfaces (BCI). With the development of deep learning (DL), many DL algorithms have been applied to MI classification. However, most algorithms classify signals based on features from a single domain. This study proposes an ensemble approach that uses a factorization machine to combine models built on features from multiple domains. We first train three individual models and then concatenate their predictions as the input of the factorization machine. The performance of the proposed approach is evaluated by accuracy on dataset III from BCI Competition II. The highest accuracy and mean accuracy are 94.6% and 85.0%, comparable to state-of-the-art approaches. The experiment demonstrates that our ensemble method is effective and stable, even with weak classifiers.
In very humid and unstable air, convective clouds can grow to great heights and can produce weather such as torrential rain, hail, and thunder and lightning during their strongest development stage. These strong storms are generated either individually or, more often, in groups associated with synoptic-scale fronts and mesoscale convergence zones, and can cause large losses of life and property. This paper uses 2019 Guangdong S-band dual-polarization radar base data and ground hail observation records, extracting hail (positive samples) and non-hail data (negative samples) from the echo structure characteristics as label data to construct a training dataset. Using a multi-layer neural network algorithm, a hail recognition network is designed that takes the reflectivity factor (Z), differential reflectivity (ZDR), specific differential phase (KDP), correlation coefficient (CC), etc. as input, and a 5-layer neural network is used to model and predict the hail area. A typical hail case is used to compare the recognition results of the WSR-88D hail recognition algorithm and the multilayer neural network. The results show that both methods can predict hail clouds fairly accurately; the hail area predicted by the network is larger, and its recognition at lower levels is more reliable. Using the multi-layer neural network method can improve the effect of hail recognition.
To realize rapid detection of missing items in home appliances, we propose a new method based on an improved YOLOv4-Tiny model. Mosaic and Mixup data augmentation are used to enrich the image dataset, and an SE-Block module applies an attention mechanism to the channels of the feature layers. Experiments show that the mAP and Recall of the improved YOLOv4-Tiny model are 97.55% and 95.31% respectively, 3.64% and 5.17% higher than the original YOLOv4-Tiny model, while the FPS reaches 181. Model accuracy is improved without losing detection speed. The proposed method provides technical support for detecting missing items in household appliances.
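The SE-Block channel attention referred to here has a standard form; a PyTorch version (reduction ratio illustrative) is:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention, as inserted on the feature layers."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                     # squeeze: global average pooling
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)   # excitation: per-channel weights
        return x * w                                            # reweight the feature channels

feat = torch.randn(1, 128, 13, 13)
print(SEBlock(128)(feat).shape)   # torch.Size([1, 128, 13, 13])
```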
In order to overcome the limitations of existing abstractive text summarization algorithms on Chinese text and improve the feature extraction ability of traditional deep learning models, an abstractive Chinese text summarization model based on RoBERTa-Seq2Seq is proposed. The pre-trained RoBERTa model learns the dynamic meaning of the current word in its specific context, improving the semantic representation of words. On top of the Seq2Seq model, Luong attention is used to further enhance global information. The experimental results show that our model's ROUGE score is higher than that of several traditional Seq2Seq models, indicating that the RoBERTa-Seq2Seq based model can effectively improve the semantic representation of generated summaries for Chinese text and improve the feature extraction ability of the traditional deep learning model.
In the context of the rapid development of global information technology, big data, with its huge value, is receiving wide attention. From a big-data-driven perspective, this research selects data on A-share listed companies in the new energy automobile industry as the research object and explores the mechanisms by which policy combination tools and enterprises' own characteristics affect patent quality, by constructing a moderated mediation model. The main conclusions are as follows: (1) R&D investment plays a mediating role between government subsidies and patent quality; (2) tax reduction plays a moderating role between government subsidies and R&D investment; (3) the nature of the enterprise plays a moderating role between R&D investment and patent quality; (4) enterprise classification plays a moderating role between R&D investment and patent quality. Accordingly, the government should pay attention to the effect of policy combination tools, promote enterprise system reform, and improve the enterprise R&D environment.
With the advancement of science and technology, especially the advent of the information age, concepts such as cloud computing, big data, the Internet of Things and the mobile Internet have become deeply embedded in people's daily lives. Combining big data technology with the characteristics of higher vocational education and the requirements of teachers' professional development makes clear that higher vocational colleges urgently need an effective teaching quality (TQ) evaluation system (ES). The purpose of this paper is to study a TQ ES and algorithm for higher vocational teachers based on big data. An association rules algorithm is used to overcome the irrationality and subjectivity of current TQ evaluation, and a TQ evaluation algorithm based on association rules is proposed to make the TQ ES of higher vocational institutions rational and efficient and to provide scientific, objective decision-making material for their managers. The experimental study shows that, for teachers with doctoral degrees, the rule that the evaluation level is good has 14% support and 70% confidence. On the whole, the higher vocational teacher ES proposed in this paper is in line with current teaching practice in colleges and universities; the evaluation results better reflect the current teaching status, and the reliability of the data is high.
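The support and confidence figures quoted above are the usual association-rule measures; a toy computation (with made-up records, not the paper's data) looks like this:

```python
# Each record is the set of attributes of one teaching-quality evaluation entry (illustrative data).
records = [
    {"doctoral_degree", "good_evaluation"},
    {"doctoral_degree", "good_evaluation"},
    {"master_degree", "average_evaluation"},
    {"doctoral_degree", "average_evaluation"},
    {"master_degree", "good_evaluation"},
]

def support(itemset):
    """Fraction of records containing every item in the itemset."""
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) estimated from the records."""
    return support(antecedent | consequent) / support(antecedent)

rule_a, rule_c = {"doctoral_degree"}, {"good_evaluation"}
print("support:", support(rule_a | rule_c))
print("confidence:", confidence(rule_a, rule_c))
```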
Stocks are gaining more and more attention as a form of investment, and a good trading strategy can help investors earn considerable returns in a rapidly changing stock market. In this paper, we use a deep reinforcement learning model that combines an LSTM deep neural network with the A2C reinforcement learning algorithm: we construct time-series data, extract its temporal features with the LSTM, and let the agent learn by trial and error in the trading environment, finally obtaining an end-to-end quantitative trading model adapted to the market. The results show that the model achieves returns above 30% on both the Dow Jones 30 stocks and a set of 30 A-share stocks. The LSTM-A2C model performs better on 10-day time-series windows than on single-day data, reaching a return of up to 91% on the 30 A-share stocks in 2020 with a Sharpe ratio of 2.18.
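The 10-day windows and the Sharpe ratio mentioned above are straightforward to reproduce; the sketch below shows how such windows can be built for an LSTM state extractor and how an annualized Sharpe ratio is conventionally computed (the function names and the 252-trading-day convention are our assumptions, not the paper's code).

    import numpy as np

    def make_windows(features, window=10):
        """Stack the past `window` trading days of features for every day,
        the kind of input the LSTM state extractor consumes."""
        return np.stack([features[i - window:i] for i in range(window, len(features))])

    def sharpe_ratio(daily_returns, periods_per_year=252):
        """Annualized Sharpe ratio of a daily return series (risk-free rate taken as 0)."""
        daily_returns = np.asarray(daily_returns)
        return np.sqrt(periods_per_year) * daily_returns.mean() / daily_returns.std()

    # e.g. equity curve -> daily returns -> Sharpe ratio of the strategy
    equity = np.array([1.00, 1.01, 1.03, 1.02, 1.05])
    print(sharpe_ratio(np.diff(equity) / equity[:-1]))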
At present, nuisance (spam) SMS messages are becoming increasingly prevalent, and their forms are diversified, covert and malicious. They not only waste mobile phone resources and interfere with people's normal use of their phones, but in severe cases can also disrupt normal social order. In this paper, we use the Word2Vec algorithm and the LightGBM algorithm to select feature words and build a nuisance SMS identification model. Training on a classical nuisance SMS training set, our model achieves an accuracy of over 99% and outperforms popular classification models.
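A minimal sketch of this pipeline, assuming the gensim and LightGBM packages, might look as follows; the toy messages and hyperparameters are placeholders rather than the paper's actual training set or settings.

    import numpy as np
    from gensim.models import Word2Vec
    from lightgbm import LGBMClassifier

    # Tokenized SMS corpus and labels (1 = nuisance, 0 = normal); the data here
    # is a stand-in for the classical training set mentioned in the abstract.
    texts = [["win", "free", "prize", "now"], ["see", "you", "at", "dinner"],
             ["claim", "your", "free", "bonus"], ["meeting", "moved", "to", "3pm"]]
    labels = [1, 0, 1, 0]

    # Learn word vectors, then represent each message as the mean of its word vectors.
    w2v = Word2Vec(sentences=texts, vector_size=100, window=5, min_count=1, epochs=20)

    def embed(tokens):
        vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

    X = np.vstack([embed(t) for t in texts])
    clf = LGBMClassifier(n_estimators=100, learning_rate=0.1).fit(X, labels)
    print(clf.predict(X))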
Depth estimation and semantic segmentation are important for applications such as autonomous driving and minimally invasive surgery. However, imperfect predictions make even the best-performing networks difficult to deploy in these high-safety-demand domains. In this paper, using a variational representation method, we therefore propose two uncertainty losses that enable a multi-task learning network to predict uncertainties for its depth-value and semantic-label predictions, respectively. Experimental results on the NYU-Depth-v2 and SUN-RGBD datasets demonstrate the novelty and effectiveness of the proposed uncertainty losses.
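The paper's exact variational losses are not reproduced here, but the following sketch shows one common heteroscedastic (aleatoric) uncertainty loss for the depth-regression branch, written in PyTorch, to illustrate the general idea of letting the network predict its own uncertainty.

    import torch

    def heteroscedastic_depth_loss(pred_depth, log_var, target_depth):
        """Depth regression loss in which the network also predicts a per-pixel
        log-variance: uncertain pixels are down-weighted through exp(-log_var),
        while large predicted variances are penalized by the +log_var term."""
        precision = torch.exp(-log_var)
        return (precision * (pred_depth - target_depth) ** 2 + log_var).mean()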
Weather prediction is a problem of great importance in social life. To predict future temperature from past data without computationally intensive physical modeling, this project proposes a hybrid machine-learning prediction model that combines empirical mode decomposition (EMD), linear regression, and two different neural networks. Since temperature is time-series data, its periodic patterns are extracted into intrinsic mode functions (IMFs) with complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), and the future values of the individual IMFs are forecast by learning past patterns with a long short-term memory (LSTM) model and a multilayer perceptron (MLP) model for each IMF. Linear regression is used to predict the change in the non-periodic trend, and the component predictions are summed to form the final result. Comparison with the actual test set and with the errors of other models shows that the proposed model performs well in temperature prediction.
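A simplified, end-to-end sketch of this decompose-forecast-recombine pipeline is given below, assuming the PyEMD package for CEEMDAN; a linear autoregression stands in for the paper's per-IMF LSTM and MLP forecasters, and a toy series replaces the real temperature data.

    import numpy as np
    from PyEMD import CEEMDAN                     # from the "EMD-signal" package
    from sklearn.linear_model import LinearRegression

    # A noisy seasonal toy series standing in for the historical temperature data.
    t = np.arange(730, dtype=float)
    temps = 10 + 12 * np.sin(2 * np.pi * t / 365) + 0.01 * t + np.random.randn(730)

    # 1) Decompose into IMFs; what is left over is the non-periodic trend.
    imfs = CEEMDAN(trials=20)(temps)              # (num_imfs, len(temps)); fewer trials for speed
    trend = temps - imfs.sum(axis=0)

    # 2) Forecast each IMF one step ahead from its last `lag` values. A linear
    #    autoregression stands in here for the paper's per-IMF LSTM/MLP models.
    lag = 30
    def forecast_component(series):
        X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
        y = series[lag:]
        model = LinearRegression().fit(X, y)
        return model.predict(series[-lag:].reshape(1, -1))[0]

    # 3) Sum the component forecasts and a linear extrapolation of the trend.
    trend_model = LinearRegression().fit(t.reshape(-1, 1), trend)
    prediction = sum(forecast_component(imf) for imf in imfs) \
                 + trend_model.predict([[len(t)]])[0]
    print(prediction)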
Based on the development and practical application of transformer maintenance technology, this paper addresses the core technical problems of online monitoring and fault diagnosis of transformer operating conditions. Through an integrated sound, vibration and temperature sensor, the transformer's working conditions and operating state parameters such as sound, vibration and temperature are collected. The data are transmitted over the network to a cloud platform for storage, and multi-information-fusion artificial intelligence modeling is performed on the collected operating condition and operating status data to obtain an anomaly detection model. This model is deployed on the cloud platform to analyse the online-collected data in real time, and an abnormality warning is issued whenever an abnormal state is detected. The short-time Fourier transform (STFT) and a deep neural network are used to establish a relationship model between the monitoring data and transformer faults, and this model is used to realize fault diagnosis.
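As an illustration of the STFT front end, the sketch below converts a synthetic vibration signal into a time-frequency feature map with scipy; the sampling rate and window sizes are assumptions, and the downstream deep network is not reproduced.

    import numpy as np
    from scipy.signal import stft

    fs = 10_000                                   # assumed sensor sampling rate (Hz)
    vibration = np.random.randn(2 * fs)           # 2 s of synthetic vibration data

    # Short-time Fourier transform turns the signal into a time-frequency map.
    freqs, frames, Z = stft(vibration, fs=fs, nperseg=512, noverlap=256)
    spectrogram = np.abs(Z)                       # (freq_bins, time_frames)

    # Each time frame (or the whole map, treated as an image) becomes the input
    # of the deep network that maps time-frequency patterns to fault classes.
    features = spectrogram.T                      # one feature vector per frame
    print(features.shape)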
The development of methods and tools for 3D body reconstruction has become an important research area in computer animation. Previous research shows that 3D reconstruction directly from monocular video data requires a large amount of computation, so existing methods cannot meet the requirements of fast, real-time 3D body joint computation. With the help of deep learning and pattern recognition, a large amount of continuous two-dimensional joint data can easily be obtained from video. In this paper, we propose a fast 3D body reconstruction method that works from continuous two-dimensional human joint data. We use the expected position to resolve the ambiguity inherent in recovering 3D information from monocular data. Based on a predefined body pose, continuous 3D joint data can be obtained quickly, and our results show that the method performs well even when motion amplitude and speed change rapidly.
To address the difficulty of detecting cutting tool wear during the machining process, this paper discusses a tool wear detection method based on the fusion of wavelet packet decomposition and the extreme learning machine (ELM). It first analyses the time-frequency characteristics of the cutting tool's sound signal, then explores a wavelet-packet-based method for extracting statistical features of the status-sensitive spectral energy, and finally constructs a fast ELM detection model based on sound feature recognition. Taking the identification of cutting-wear sound signals at an operation site as an example, actual measurement data verified that the proposed method achieves faster response and higher detection accuracy than traditionally used methods. The experimental and simulation results show that the discussed method is effective and reasonable.
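A compact sketch of the two ingredients, wavelet-packet energy features and an extreme learning machine, is given below using the pywt package and NumPy; the wavelet choice, decomposition level and hidden-layer size are illustrative assumptions rather than the paper's settings.

    import numpy as np
    import pywt

    def wavelet_packet_energies(signal, wavelet="db4", level=3):
        """Relative energy of each terminal wavelet-packet node: the
        status-sensitive spectral-energy features used for wear detection."""
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
        nodes = wp.get_level(level, order="natural")
        energy = np.array([np.sum(np.square(n.data)) for n in nodes])
        return energy / energy.sum()

    class ELM:
        """Minimal extreme learning machine: random hidden layer, least-squares output."""
        def __init__(self, n_hidden=64, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)
            self.beta = np.linalg.pinv(H) @ np.asarray(y)   # closed-form output weights
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta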
Expert stock comments are an important basis for accurately predicting stock trends, and effectively capturing the theme of an expert stock review is an important issue in text classification. The Bidirectional Encoder Representations from Transformers (BERT) model has been widely used for text classification, but it has some limitations: (1) information in stock comments that exceed the model's fixed input length is extracted incompletely; (2) the features extracted by the model are not comprehensive enough; (3) the large number of parameters makes the BERT model inefficient to apply. To tackle these issues, we propose a Student Bidirectional Encoder Representations from Transformers (Stu-BERT) model for accurately identifying the topics of stock comments. Specifically, we first truncate overlong stock comments by keeping both their beginning and end, improving access to information; secondly, we fuse all features of the last hidden layer to improve topic recognition accuracy; in addition, we distill the BERT model into the Stu-BERT model, which enhances its practicality for topic identification. Experimental results on real data demonstrate the effectiveness of the proposed method.
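The head-and-tail truncation step can be sketched in a few lines; the 128/384 split below is an illustrative choice, not the paper's exact setting.

    def head_tail_truncate(token_ids, max_len=512, head=128):
        """Keep the first `head` and the last `max_len - head` tokens of an
        overlong stock comment, so that information at both ends survives the
        model's fixed input length (the split sizes are illustrative)."""
        if len(token_ids) <= max_len:
            return token_ids
        return token_ids[:head] + token_ids[-(max_len - head):]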
Maximizing returns is the constant pursuit of the financial market. In this paper, we present the models we designed to realize the optimal allocation and combination of gold, bitcoin and cash, since advanced forecasting technology can help transactions mature. Our model introduces a rolling-learning prediction method for a feedforward multilayer perceptron (MLP), which offers an alternative way of accurately predicting market fluctuations: we modified the original algorithm, replacing one-off training with rolling learning on all data available before each day. Our experiments show that the technique predicts better for assets with lower market volatility. For gold, a safe-haven asset, the prediction accuracy is 99.5%; for speculative assets with large market fluctuations such as bitcoin, the prediction accuracy is lower but still reaches 97%.
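A minimal sketch of such walk-forward (rolling) learning with a scikit-learn MLP is shown below; the lag length, warm-up period and network size are illustrative assumptions, not the paper's configuration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def rolling_forecast(prices, lag=30, warmup=100):
        """Walk-forward prediction: before each day, refit an MLP on every
        window of past prices available up to that day and predict the next price."""
        prices = np.asarray(prices, dtype=float)
        preds = []
        for t in range(warmup, len(prices)):
            X = np.stack([prices[i:i + lag] for i in range(t - lag)])
            y = prices[lag:t]
            model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500)
            model.fit(X, y)
            preds.append(model.predict(prices[t - lag:t].reshape(1, -1))[0])
        return np.array(preds)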
From the point of view of queuing theory, this paper studies the operating efficiency of the coil logistics system in the intermediate warehouse of a steel plant and puts forward a configuration decision model for that system. Based on a queuing network that reflects the system's task modes, with the storage and retrieval task as the object, the sub (child) vehicle as the secondary server and the parent vehicle as the primary server, the corresponding mathematical decision model is established, and an optimal demand-analysis objective function is designed to solve and verify the system's queuing network model. The analysis yields, for different task modes and quantity configurations, the numbers of child and parent vehicles that maximize system throughput. This paper provides a practical theoretical basis for optimizing the design of vehicle logistics systems, improving equipment utilization and saving system time and cost, especially for the logistics optimization design of heavy-haul areas in metallurgy.
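The paper's two-stage child/parent-vehicle queuing network is not reproduced here, but the basic building block of such a configuration decision, the mean queueing delay of a single M/M/c station evaluated for different vehicle counts, can be sketched as follows; the arrival and service rates are invented for illustration.

    from math import factorial

    def mmc_mean_wait(lam, mu, c):
        """Mean time a task waits in queue at an M/M/c station (Erlang C formula).
        lam: task arrival rate, mu: service rate per vehicle, c: number of vehicles."""
        a = lam / mu                                   # offered load
        rho = a / c
        if rho >= 1:
            raise ValueError("unstable configuration: need lam < c * mu")
        tail = a**c / (factorial(c) * (1 - rho))
        p_wait = tail / (sum(a**k / factorial(k) for k in range(c)) + tail)
        return p_wait / (c * mu - lam)

    # e.g. 12 coil tasks/hour, each child vehicle serving 5 tasks/hour:
    for c in (3, 4, 5):
        print(c, "vehicles ->", round(mmc_mean_wait(lam=12, mu=5, c=c), 3), "h mean wait")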
To address the problems that existing garment classification is strongly affected by background noise and that network features are not sufficiently expressive, a multi-scale deep network model (MCA-Inception) based on convolutional networks and attention mechanisms is proposed. The model uses a modified Inception V3 as the backbone network and expands the receptive field by adding convolutional kernels of different scales to enrich the contextual detail of the garment content. At the same time, the CBAM attention module is embedded in the improved backbone network to suppress the interference of noisy information such as cluttered backgrounds and enhance the representation of useful feature information. Average classification accuracies of 81.63% and 77.80% were obtained on the publicly available clothing datasets DeepFashion and ACS, respectively. Experimental comparisons with other methods show that the proposed network model performs better on the clothing classification task.
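For reference, a standard CBAM block of the kind embedded in the backbone can be sketched in PyTorch as follows; this is the generic module from the CBAM literature, with illustrative hyperparameters, rather than the authors' exact implementation.

    import torch
    import torch.nn as nn

    class CBAM(nn.Module):
        """Convolutional Block Attention Module: channel attention followed by
        spatial attention, used to suppress cluttered-background responses."""
        def __init__(self, channels, reduction=16, kernel_size=7):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels))
            self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

        def forward(self, x):                              # x: (B, C, H, W)
            b, c, _, _ = x.shape
            chan = torch.sigmoid(self.mlp(x.mean(dim=(2, 3)))     # avg-pooled descriptor
                                 + self.mlp(x.amax(dim=(2, 3))))  # max-pooled descriptor
            x = x * chan.view(b, c, 1, 1)                  # re-weight channels
            sp = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
            return x * torch.sigmoid(self.spatial(sp))     # re-weight spatial positions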
With the steady development of new-generation information technology and rapid socio-economic growth, green intelligent buildings based on advanced algorithms and control engineering technology have become an inevitable trend. Starting from the concept of the green intelligent building and centring on the core requirements of green building, this paper combines BIM and BIM-Bayes technology to study a green intelligent building method based on these technologies, and finally judges whether a green building designed on this basis can achieve the goal of intelligent green control.
In the history of Chinese ceramics, Minnan (southern Fujian) ceramics have become representative of southern ceramics through their distinctive decorative styles such as celadon carving, black glaze, painting and printing, and Minnan architectural ceramics are living carriers, samples and memories of Chinese cultural symbols and aesthetic habits. Based on digital image processing technology, this paper analyses the decorative patterns of southern Fujian architectural ceramics, introduces the digital image calibration principle and the sampling and quantization methods applied to these patterns, and then uses image segmentation and binarization, together with colour digital image processing and analysis techniques, to reveal the abstract beauty of Minnan ceramic ornamentation.
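A minimal example of the segmentation and binarization step, assuming OpenCV and a hypothetical sample image, might look like this:

    import cv2

    # Binarize a ceramic decorative pattern so its structure can be analysed;
    # the file name below is a hypothetical sample image.
    img = cv2.imread("minnan_ceramic_tile.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)               # suppress glaze texture noise
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imwrite("pattern_binary.png", binary)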
This paper proposes an anti-synchronization model for multi-valued logical drive-response coupled networks and discusses the dynamic characteristics of multi-valued logical coupled networks. The necessary and sufficient conditions for the anti-synchronization model are proved by an algebraic method. Then, 3-valued logical networks are chosen as an example to demonstrate the effectiveness of the conclusions.