Hong-Ming Yin,1 Ke Chen,2 Romeo Meštrović,3 Teresa A. Oliveira,4 Nan Lin5
1 Washington State Univ. (United States); 2 Hefei Univ. of Technology (China); 3 Univ. of Montenegro (Montenegro); 4 Univ. Aberta (Portugal); 5 The Affiliated Hospital of Putian Univ. (China)
This PDF file contains the front matter associated with SPIE Proceedings Volume 12163, including the Title Page, Copyright information and Table of Contents.
Statistical Numerical Analysis and Multivariate Statistical Decision
Cardiovascular disease (CVD) has long been one of the leading causes of death. To investigate the pathogenesis of coronary heart disease, the U.S. Public Health Service established a prospective heart study, which soon became widely known as the Framingham Heart Study. Over more than half a century, the Framingham Heart Study has made major contributions to heart disease research, including introducing the definition of risk factors and bringing statistical data analysis methods into the field. Although the statistical methods employed have been continually updated and refined, some problems in heart disease research remain unsolved. This review sorts out the statistical analysis methods used in the Framingham Heart Study and puts forward a guideline to give direction to future Framingham Heart Study work using the latest related statistical analysis methods.
Employees are essential to a company, and suitable employees greatly promote the company's development. Employee turnover brings large economic and time losses. To help avoid such losses, this paper establishes a logistic regression model to analyze the important reasons for employee attrition. Using the IBM employee attrition dataset, we import the data into RStudio, remove null and other unusable values, transform categorical columns into numerical columns, perform feature selection with a logistic regression algorithm, build the model, and use the ROC curve to measure test accuracy. We find that many factors influence employees to leave the company, but the five most important are marital status, business travel, age, years at the company, and the number of companies previously worked for. Companies can reduce attrition rates by selecting older, married employees who have worked for fewer companies. For employees already on the job, the frequency of business travel should be reduced. The longer employees stay, the less likely they are to leave.
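A minimal sketch of this workflow, written in Python with scikit-learn rather than the paper's RStudio setup; the file name and column names are assumptions, not the paper's code:

```python
# Logistic regression + ROC AUC on the IBM attrition data (hypothetical columns).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("ibm_attrition.csv").dropna()        # drop null values
X = pd.get_dummies(df.drop(columns=["Attrition"]))    # categorical -> numerical
y = (df["Attrition"] == "Yes").astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Area under the ROC curve serves as the accuracy measure described above.
print("ROC AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```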
At present, racial discrimination is a serious problem in American society. Movements such as "Black Lives Matter" prompt people to ask whether police violence in law enforcement is related to racial discrimination. This article takes an objective position and rationally analyzes the causes of police shootings. Based on data published by the Washington Post, this paper analyzes the data from 2015 to 2020 and discusses results that challenge common assumptions.
In the context of upgrading the agricultural industry and sustained economic growth, agriculture and tourism have become significant pillars of regional economic development. In order to quantitatively measure the development level of agriculture and tourism in the 16 prefecture-level municipalities of Anhui province, this paper first constructs a comprehensive index of that development level for these cities using factor analysis. Based on the empirical results, the cities can be divided into three categories by comprehensive index: top-level, medium-level, and low-level. Hefei has the highest comprehensive index and Tongling the lowest. Overall, the development of agriculture and tourism in Anhui province is unbalanced, with certain development gaps between the cities.
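A short sketch of how such a comprehensive index can be assembled; PCA is used here as a simple stand-in for the paper's factor analysis, and the indicator matrix is hypothetical:

```python
# Comprehensive development index for 16 cities from standardized indicators.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.rand(16, 8)              # 16 cities x 8 agriculture/tourism indicators
Z = StandardScaler().fit_transform(X)  # standardize each indicator

pca = PCA(n_components=3).fit(Z)
scores = pca.transform(Z)              # component scores per city
# Weight component scores by their share of explained variance.
w = pca.explained_variance_ratio_ / pca.explained_variance_ratio_.sum()
index = scores @ w
print(np.argsort(-index))              # city ranking, highest index first
```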
With the growth of biostatistics, the amount of data that analysts must process has become much larger, so the choice of auxiliary platforms for data processing has become more critical. This article makes a preliminary comparison of SAS, the R language, and SPSS and concludes which data processing platform is most suitable for students or scholars before they enter enterprises or large-scale projects.
This paper constructs a statistical model to predict housing prices in Beijing. With the fast development of Chinese society, the real estate market has shown an incredible growth rate, and as the capital, Beijing's housing market is a representative example for research. We therefore gathered about 20 years of housing data from the official site to construct a statistical model for predicting housing prices, and compared the predicted data with real data. Since China has strong central control, the market is strongly affected by policies, which we cannot quantify. However, we chose several factors that could be put into the model, including income, GDP, population, consumption level, and GDP from construction. These are considered the most important elements affecting housing prices and are the factors we use in our prediction model. In the end, we make predictions for 2019 and 2020, and the predicted prices differ little from the actual data, which demonstrates our model's accuracy. However, our model is built under idealized assumptions; real-world prediction needs to consider more unpredictable factors, such as central policies, social change, and so on.
In order to carry out the comprehensive evaluation of the techno-economic benefit of a pumped-storage hydropower station more scientifically and reasonably, an index system composed of operational effect, functional benefit, financial benefit, and environmental benefit is selected according to the station's techno-economic characteristics. An improved sequence relation analysis method is combined with the entropy weight method: the subjective weights are determined by sequence relation analysis and the objective weights by the entropy weight method, while the evaluation grade of the station's techno-economic benefit is calculated through a comprehensive evaluation model built with the matter-element analysis method. Based on the data of an actual pumped-storage hydropower station, the evaluation indexes are quantified with a combined qualitative and quantitative method, the relevant evaluation criteria are established, and the comprehensive evaluation grade for the station's techno-economic benefit is obtained. The results show that most of the station's techno-economic benefit indexes reach their expected targets, and the applicability and effectiveness of the evaluation model are verified. The analysis identifies both the indexes that need strengthening and those that already meet the relevant criteria, which makes it convenient to take targeted measures to improve the station's techno-economic benefit and provides a reference for its future techno-economic development.
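A minimal sketch of the entropy weight step, which supplies the objective weights; the indicator matrix below is hypothetical (rows are evaluation objects, columns are benefit indexes):

```python
# Entropy weight method: lower-entropy (more discriminating) indexes get more weight.
import numpy as np

X = np.array([[0.8, 0.6, 0.9],
              [0.7, 0.9, 0.5],
              [0.6, 0.7, 0.8]], dtype=float)   # hypothetical benefit indexes

P = X / X.sum(axis=0)                          # share of each object per index
n = X.shape[0]
E = -(P * np.log(P)).sum(axis=0) / np.log(n)   # information entropy per index
d = 1.0 - E                                    # degree of divergence
w = d / d.sum()                                # objective entropy weights
print(w)
```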
Irrigation and water conservancy are the foundation and guarantee of national food security and modern agricultural development, and improving investment efficiency is an important way to promote the construction and development of farmland water conservancy infrastructure in China. This paper selects input-output data on farmland water conservancy in 31 provinces from 2002 to 2019, objectively evaluates the investment performance of farmland water conservancy infrastructure using the SFA method, and then empirically analyzes the influencing factors with a fixed effect model. The results show that from 2002 to 2019 the overall investment performance of farmland water conservancy infrastructure improved across regions, with higher performance in the central and eastern regions than in the western region. Investment performance is negatively correlated with per capita GDP, the illiteracy rate of the rural labor force, and the share of farmland water conservancy spending in total financial expenditure, and positively correlated with the special agricultural subsidy policy, "small-scale agricultural water" construction, and farmland water conservancy management system reform.
Using a STIRPAT model combined with statistical data from 1988 to 2013, this study empirically analyzes the effect of diverse factors on China's carbon balance: economic level, urbanization level, population size, industrial structure, energy intensity, and energy consumption structure. We find that economic growth, population increase, and advancing urbanization increase net carbon emissions, whereas industrial restructuring, energy efficiency improvement, and energy consumption restructuring reduce them. Specifically, economic growth and energy consumption restructuring are the primary causes of the increase and decrease in net carbon emissions, respectively.
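For reference, STIRPAT is conventionally written as a stochastic, log-linear extension of the IPAT identity; a standard formulation (the paper's exact specification, with its additional structural variables, may differ) is:

```latex
\ln I = a + b\,\ln P + c\,\ln A + d\,\ln T + e
```

where $I$ is environmental impact (here, net carbon emissions), $P$ population size, $A$ affluence (e.g., per capita GDP), $T$ technology (e.g., energy intensity), and $e$ the error term.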
The earth rotates from west to east at a certain angular velocity, so it is not a strict inertial reference frame. For most engineering problems, taking the earth as an inertial frame yields sufficiently accurate results. However, for problems with long operating times, large ranges of activity, and high precision requirements, such as aerospace, astronomy, and outer space exploration, unacceptable errors arise if the influence of the earth's rotation is omitted, seriously affecting the results. Solving for particle motion in a non-inertial frame is therefore of great significance for analyzing the motion of each part of such a system. Taking projectile motion in a non-inertial frame as an example, this paper proposes a solution method based on numerical calculation.
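A compact sketch of such a numerical solution, keeping the Coriolis term of the rotating Earth frame; the launch conditions and latitude are illustrative assumptions:

```python
# Projectile motion in the rotating (non-inertial) Earth frame via numerical ODE solving.
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81                     # gravitational acceleration, m/s^2
omega = 7.292e-5             # Earth's angular speed, rad/s
lat = np.deg2rad(45.0)       # latitude (assumed)
# Earth's angular velocity in local ENU (east, north, up) coordinates.
Omega = omega * np.array([0.0, np.cos(lat), np.sin(lat)])

def rhs(t, s):
    r, v = s[:3], s[3:]
    a = np.array([0.0, 0.0, -g]) - 2.0 * np.cross(Omega, v)  # gravity + Coriolis
    return np.concatenate([v, a])

s0 = np.array([0.0, 0.0, 0.0, 0.0, 300.0, 300.0])  # launched to the north and up
sol = solve_ivp(rhs, (0.0, 60.0), s0, max_step=0.01)
print("eastward Coriolis deflection at t = 60 s [m]:", sol.y[0, -1])
```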
If students get high grades, they can apply to better schools or find better jobs, so it is important to explore which factors affect academic performance. Intuition and experience suggest that learning time should be positively correlated with achievement, so the purpose of this study is to find the correlation between learning time and academic performance. We choose learning time as the independent variable and final score as the dependent variable. This study mainly uses the simple linear regression model to determine the correlation between the variables. We use RStudio to fit this model, computing the least-squares parameters β0 and β1 and obtaining the fitted line. Finally, several methods, including a histogram of residuals, a normal Q-Q plot, and a plot of residuals vs. fitted values, are used to assess whether the fitted model is adequate. The conclusion is that there is a slight negative relationship between learning time and final scores, and that the simple linear regression model is appropriate. Therefore, the data collected in this study do not support additional learning time as a way to improve test scores.
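A minimal sketch of the same fit in Python rather than RStudio; the data values are hypothetical:

```python
# Simple linear regression via the closed-form least-squares solution.
import numpy as np
import matplotlib.pyplot as plt

study_time = np.array([1, 2, 2, 3, 4, 5, 6, 7], dtype=float)          # hours (hypothetical)
final_score = np.array([72, 70, 75, 68, 66, 69, 64, 63], dtype=float)

x, y = study_time, final_score
beta1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
beta0 = y.mean() - beta1 * x.mean()
print(f"fitted line: score = {beta0:.2f} + {beta1:.2f} * hours")

resid = y - (beta0 + beta1 * x)   # residuals for the diagnostic checks
plt.hist(resid)                   # histogram of residuals, as in the paper
plt.show()
```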
Education has always been a key factor in a country's development, and improving student performance is a common goal of students, parents, and teachers. Studying the factors that influence student performance can help students focus on their weak points to improve their final grades more effectively. In this paper, a random forest algorithm is used to extract the four most important independent variables from a dataset on student performance: second-period grade (G2), first-period grade (G1), number of school absences (absences), and number of past class failures (failures). A multiple linear regression model is then established to study the relationship between the dependent variable, final grade (G3), and these variables. After evaluating the model's fitting accuracy and residuals, a linear model (y = -1.76483 + 0.97847 X1 + 0.14374 X2 + 0.03759 X3 - 0.25720 X4) is obtained. It simplifies the model and can predict student performance with good accuracy.
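The fitted equation can be applied directly; the coefficients below come from the abstract, while the sample student values are hypothetical:

```python
# Predict final grade G3 from the paper's fitted multiple regression.
def predict_g3(g2, g1, absences, failures):
    """X1 = G2, X2 = G1, X3 = absences, X4 = failures."""
    return (-1.76483 + 0.97847 * g2 + 0.14374 * g1
            + 0.03759 * absences - 0.25720 * failures)

print(predict_g3(g2=14, g1=13, absences=2, failures=0))  # ~13.9 for this student
```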
Due to the convenience of visualization at the mesoscale, numerical study of the permeability and mechanical properties of pervious concrete has advantages over physical experiments for analyzing the mechanisms of performance change. Therefore, the permeability and mechanical properties of pervious concrete were studied numerically at the mesostructure level. First, finite element models of the pores and the pervious concrete were generated by image reconstruction. Then, the permeability and mechanical properties were simulated using CFD (computational fluid dynamics) analysis and the CDP (concrete damaged plasticity) model. Finally, the permeability was predicted well and the mechanical failure mechanism was revealed intuitively. The results indicate that numerical methods are significant for a deep understanding of the properties of pervious concrete at the mesostructure level.
Mobile phones and other portable electronic devices play an important role in daily life, and people's expectations of such devices are constantly changing. For the mobile application marketplace to meet customer requirements successfully, app developers must understand market trends and users' interests. One way to evaluate an app's success is its number of installations. Most existing models forecast app installs as a time series of past installation counts. This article instead analyzes known features of applications, such as category, rating, content rating, and genre, with linear regression and extreme gradient boosting to extract the relationship between app features and installations. The dataset used for training is 'Google Play Store Apps' from the world's largest data science community, Kaggle. The performance of each model is demonstrated and compared using predictions on a testing set, and the article describes the details of data processing, model training, and prediction. The results exhibit a strong relationship between several features, including date of last update, genre, and number of reviews, and app installations, and consequently provide a reference for app developers seeking to understand the factors that affect install counts.
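A sketch of this comparison; scikit-learn's gradient boosting stands in for XGBoost, the dataset is assumed to be pre-cleaned to numeric columns, and the column names follow the Kaggle file rather than the paper's code:

```python
# Compare a linear model with gradient boosting on app features.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("googleplaystore.csv")        # Kaggle dataset, assumed cleaned
X = pd.get_dummies(df[["Category", "Rating", "Reviews", "Content Rating", "Genres"]])
y = np.log1p(df["Installs"])                   # log scale tames the heavy tail

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for model in (LinearRegression(), GradientBoostingRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "test R^2:", r2_score(y_te, model.predict(X_te)))
```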
This paper discusses transportation between China and America. Trade today involves many kinds of constraints, from basic quantity requirements for goods to delivery-time requirements for different types of products. This work applies a linear programming model to the situation and provides possible solutions. Based on the data collected, an analysis of the results is included, providing a complete strategy analysis. We present the results of our optimization and provide factories with a possible way to constrain carbon emissions and limit cost when transporting goods, an approach that can also be applied to other situations.
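A minimal transportation-style linear program of the kind described; the costs, supplies, and demands are hypothetical illustration values:

```python
# Two origins, three destinations; decision variables x_ij flattened row-major.
import numpy as np
from scipy.optimize import linprog

cost = np.array([4, 6, 9,       # unit shipping cost from origin 0
                 5, 3, 7])      # unit shipping cost from origin 1
A_eq = np.array([
    [1, 1, 1, 0, 0, 0],         # ship exactly the supply at origin 0
    [0, 0, 0, 1, 1, 1],         # ship exactly the supply at origin 1
    [1, 0, 0, 1, 0, 0],         # meet demand at destination 0
    [0, 1, 0, 0, 1, 0],         # meet demand at destination 1
    [0, 0, 1, 0, 0, 1],         # meet demand at destination 2
])
b_eq = np.array([50, 70, 40, 30, 50])   # balanced: total supply = total demand = 120

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print(res.x.reshape(2, 3))      # optimal shipment plan
print("minimum cost:", res.fun)
```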
This paper reviews Brouwer's fixed-point theorem through the lens of constructive mathematics. Specifically, it presents a proof of the following statement: there is no analog of Brouwer's fixed-point theorem in one dimension in constructive mathematics. The paper starts by introducing basic concepts of constructive mathematics and Brouwer's fixed-point theorem in classical topology. It then gives a counterexample to Brouwer's fixed-point theorem on the interval [0,1], using the idea that there exists a non-extendable computable function in constructive mathematics. The result provides further insight into the characteristics of constructive mathematics.
Several factors have been associated with life expectancy. This research paper seeks to evaluate the factors that affect life expectancy and to establish whether there is a significant difference in life expectancy between developing and industrialized countries. The results indicate that factors such as gross domestic product (GDP), schooling, income composition of resources, body mass index (BMI), and alcohol intake significantly affect life expectancy, while total expenditure does not. GDP, education level, income composition of resources, and BMI affect life expectancy positively, whereas alcohol intake affects it negatively. There is also a significant difference in life expectancy between developing and developed countries, with developed countries having higher life expectancy.
In supply chain management, it is very important for supply, production, and commercial distribution to adopt an optimal method for predicting the sales of downstream distributors. An MFTIWPSO-SVR model is proposed to predict distributor sales at all levels of the supply chain. In this study, the support vector regression (SVR) algorithm is used to build the model; a multi-information-fusion "triple variables with iteration" inertia weight PSO algorithm (MFTIWPSO) optimizes the SVR parameters; and the Pearson correlation coefficient method removes strongly correlated features from the supply chain dataset in order to determine an appropriate number of features. Experimental results show that the proposed model achieves a better fit and higher prediction accuracy than the traditional PSO algorithm.
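A sketch of the Pearson-filter and SVR stages of this pipeline; the custom MFTIWPSO optimizer is omitted, default SVR hyperparameters stand in for the PSO-tuned ones, and the dataset and column names are hypothetical:

```python
# Drop one feature from every highly correlated pair, then fit an SVR.
import numpy as np
import pandas as pd
from sklearn.svm import SVR

df = pd.read_csv("supply_chain.csv")                    # hypothetical dataset
corr = df.drop(columns=["sales"]).corr().abs()

upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
drop = [c for c in upper.columns if (upper[c] > 0.9).any()]

X = df.drop(columns=["sales"] + drop).to_numpy()
y = df["sales"].to_numpy()
model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X, y)  # C, epsilon: what the PSO would tune
print("kept features:", [c for c in df.columns if c not in drop and c != "sales"])
```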
Investment risk assessment refers to the scientific prediction and assessment of the risk factors that may exist in a project, providing necessary decision support for the project's implementation. Because risk prediction matters so much during project implementation, how to carry out investment risk assessment more accurately has become a hot research topic. The main research object of this paper is the highway. At present, risk assessment methods for highways mainly include the questionnaire survey method and the expert evaluation method; however, these methods carry large subjective errors and cannot handle large amounts of data. In view of these problems, this paper uses the extreme learning machine (ELM) to establish an investment risk assessment model and optimizes it with a kernel function and the PSO algorithm. Compared with traditional risk assessment, as well as SVM, BP neural networks, and other existing methods, the proposed model has certain advantages in training time and other respects. The basic ELM, the ELM combined with a kernel function, and the ELM combined with the PSO algorithm are also compared for risk prediction. The experimental results show that adding the kernel function and the PSO algorithm alleviates the ELM model's overfitting problem and improves the accuracy of risk prediction. Therefore, the ELM-based investment risk assessment method proposed in this paper can provide decision support for highway investment projects in China.
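A minimal sketch of the basic ELM regressor at the core of the model: random, untrained hidden weights with an output layer solved by pseudo-inverse; the data and network size are hypothetical:

```python
# Basic extreme learning machine (ELM) for regression.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 10))                         # 200 projects x 10 risk indicators
y = X @ rng.random(10) + 0.1 * rng.standard_normal(200)

n_hidden = 50
W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights, never trained
b = rng.standard_normal(n_hidden)

H = np.tanh(X @ W + b)                            # hidden-layer activations
beta = np.linalg.pinv(H) @ y                      # output weights via Moore-Penrose inverse

y_hat = H @ beta
print("train RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```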
This cross-sectional study examines the association between workplace social support, job strain, and cardiovascular disease prevalence ratio and rate in a random sample of 13,779 male and female Swedish workers. Self-reported psychological work control, job demands, and workplace social support in combination relate to cardiovascular disease prevalence more than multiplicatively. A modified demand-control-support model is used to calculate the prevalence ratio and prevalence rate. In comparison with the reference group, employees with high demands, little control, and limited social support show an age-adjusted prevalence ratio of 2.17, with a 95% confidence interval from 1.32 to 3.56. After controlling for age and 11 other potential confounding factors, the PRs for this group were around 2.00. In every demand-control combination, cardiovascular disease prevalence rate and ratio rise as social support falls. Limitations of the data accuracy and methodological weaknesses of the cross-sectional design are discussed.
In order to comprehensively evaluate the health service situation in Jiangxi, five indicators were selected for evaluation with the rank-sum ratio (RSR) method: the number of health institutions per 1,000 population, the number of beds per 1,000 population, the number of practicing (assistant) physicians per 1,000 population, the number of registered nurses per 1,000 population, and the doctor-to-nurse ratio. These indicators were used to assess the 2019 health resource allocation of 11 cities in Jiangxi. In the RSR evaluation, Xinyu ranked first and Fuzhou ranked eleventh. The results show that the allocation of health resources in Jiangxi in 2019 was uneven, and that the allocation needs to be optimized from the supply side to improve residents' access to medical treatment.
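A minimal sketch of the rank-sum ratio calculation; the indicator matrix for the 11 cities is hypothetical, and all five indicators are treated as benefit-type (higher is better):

```python
# Rank-sum ratio (RSR): rank each indicator across cities, then average the ranks.
import numpy as np
from scipy.stats import rankdata

X = np.random.rand(11, 5)                 # 11 cities x 5 health-resource indicators
R = np.apply_along_axis(rankdata, 0, X)   # rank each indicator column across cities

n, m = X.shape
rsr = R.sum(axis=1) / (m * n)             # rank-sum ratio per city, in (0, 1]
print(np.argsort(-rsr))                   # ranking, best-resourced city first
```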
This essay focuses on the properties of the random walk, especially the simple random walk. Among these properties, recurrence and transience are the most important, and they are covered with a comprehensive deduction from Pólya's random walk theorem. Moreover, simulations in one and two dimensions in Python are used to verify it, and the essay finally arrives at a more straightforward statement of the theorem.
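A sketch of such a simulation: estimate the frequency with which a simple random walk returns to the origin within a finite horizon, in 1D and 2D (Pólya's theorem says the true return probability is 1 in both cases; the horizon and walk counts here are arbitrary choices):

```python
# Empirical return-to-origin frequency for simple random walks in d dimensions.
import numpy as np

rng = np.random.default_rng(0)

def return_frequency(dim, n_steps=10_000, n_walks=2_000):
    count = 0
    for _ in range(n_walks):
        # Each step moves +1 or -1 along one coordinate axis chosen at random.
        axes = rng.integers(0, dim, size=n_steps)
        signs = rng.choice([-1, 1], size=n_steps)
        steps = np.zeros((n_steps, dim), dtype=int)
        steps[np.arange(n_steps), axes] = signs
        path = steps.cumsum(axis=0)
        if (path == 0).all(axis=1).any():   # walk revisited the origin
            count += 1
    return count / n_walks

for d in (1, 2):
    print(f"{d}D return frequency within 10,000 steps:", return_frequency(d))
```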
With the development of computer vision, visual servo control has been widely used in various fields. This article analyzed the stability of the image-based visual servo system in application and provided theoretical and practical data support for engineering researchers using the technology. The image Jacobian matrix based on point features was derived in detail, and a simulation model in an ideal environment was built to show the convergence process. A 6-DOF manipulator was used for practical experiments, and several measures to improve stability were described for the observed divergence problems, improving the reliability of the technology in engineering applications.
In this paper, we investigate the convergence behavior of Leland's strategy in approximate hedging. We make a thorough inquiry into how fast the price of the call option derived from Leland's strategy converges to the buy-and-hold price, compared with the speed at which the hedging error vanishes. For different strategy parameters, we present numerical results that expose different convergence behavior.
Screw theory is a useful tool for calculating the degrees of freedom of mechanisms. This paper focuses on two parallel mechanisms, 3SRR and SP+SPS+SPR. By establishing a proper coordinate system, the motion-screw system of each limb can be constructed; the constraint-screw system is then given by a duality relationship. Finally, the mechanism motion-screw system is obtained from the maximal linearly independent constraint-screw subsystem of each limb. The degree of freedom (DOF) of a mechanism can be determined from the dimension of the mechanism motion-screw system.
We test the hypothesis that air pollution may increase the risk of negative outcomes at birth. The study looked at live births recorded in the Czech Republic in 1991 (n = 108,173). Maternal exposure to sulfur dioxide (SO2), total suspended particulates (TSP), and nitrogen oxides (NOx) was measured as arithmetic averages over each trimester of pregnancy using all monitors in each district. Low birth weight, prematurity, and intrauterine growth retardation (IUGR) were examined to determine the relationship between air pollution and birth risk. We mainly use the odds ratio (OR) to test the relationship between each of the three pollutants and the several birth risks; the ORs for the three pollutants were tested in each of the three trimesters. We show that low birth weight and prematurity were associated with SO2, with slightly weaker associations for TSP, while NOx showed no relationship. We also show that air pollution has more effect during the first trimester than in the other two, and that the associations varied with gestational age. This paper can benefit future exploration of the relationship between air pollution and birth risk.
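For reference, a minimal sketch of an odds ratio with a 95% confidence interval from a 2x2 exposure-outcome table; the counts are hypothetical, not the study's data:

```python
# Odds ratio and Woolf (log-based) 95% confidence interval from a 2x2 table.
import numpy as np

#                outcome present   outcome absent
a, b = 120, 880       # exposed (e.g., high SO2)
c, d = 80, 920        # unexposed

or_ = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
ci = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {or_:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```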
Limited studies have been conducted on the survival analysis of breast cancer patients, and none has investigated cancer datasets from UK and Canadian patients. This study aims to quantify the factors contributing to survival time for female breast cancer patients, including patients' age, tumor size, tumor stage, mutation counts, and positive lymph nodes. The hypothesis is that these factors are all associated with increased mortality risk for breast cancer patients. The dataset comes from a study of 2,510 female breast cancer patients from the UK and Canada, collected through long-term clinical follow-up. A Cox model is applied to each factor to explore its relationship with patient survival, and all results are tested using Schoenfeld residuals. The coefficients between the explanatory variables and survival time are 0.033863 for age, 0.064274 for lymph nodes, 0.007031 for tumor size, 0.010202 for mutation count, and 0.243451 for tumor stage; the C-index of the model is 0.65653558. Our study suggests that, given some clinical symptoms, the Cox model can be used to predict the survival time of breast cancer patients, and the approach has reference value given its convenient procedure and reasonable accuracy. According to the Cox regression, the most pivotal explanatory variables are age, positive lymph nodes examined, tumor size, and tumor stage; as these variables increase, the expected survival time of the patients decreases.
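A sketch of this workflow with the lifelines library; the file name and column names are hypothetical stand-ins for the clinical dataset:

```python
# Cox proportional hazards model with a Schoenfeld-residual assumption check.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("breast_cancer_followup.csv")    # hypothetical file
cols = ["survival_months", "death_event", "age", "tumor_size",
        "tumor_stage", "mutation_count", "lymph_nodes_positive"]

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="survival_months", event_col="death_event")
cph.print_summary()                                # coefficients and hazard ratios
print("C-index:", cph.concordance_index_)

# Proportional-hazards check based on scaled Schoenfeld residuals.
cph.check_assumptions(df[cols], p_value_threshold=0.05)
```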
Anshan, the third largest city in Liaoning Province, plays a pivotal role in the revitalization of Liaoning in 2021, the beginning of the "14th Five-Year Plan" and the transition between the two centenary goals. Analyzing economic, population, and innovation data for Anshan over the past ten years, the paper establishes, from the perspective of economic statistics, a Cobb-Douglas supply-side economic model of Anshan and draws the following conclusions: Anshan's economic returns to scale are increasing, and the local government should increase investment promotion and strengthen scientific research and talent reserves in order to speed up the transformation and upgrading of the local economic and industrial structure, thereby promoting the rapid, high-quality development of Anshan's economy.
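A short sketch of how such a model is typically estimated: the Cobb-Douglas form Y = A·K^α·L^β is made linear by taking logarithms, and increasing returns to scale correspond to α + β > 1. The data arrays below are hypothetical, not Anshan's statistics:

```python
# Log-linear least-squares estimate of a Cobb-Douglas production function.
import numpy as np

Y = np.array([100, 120, 150, 185, 230], dtype=float)  # output (e.g., GDP)
K = np.array([ 50,  60,  80, 100, 130], dtype=float)  # capital input
L = np.array([ 40,  45,  50,  56,  60], dtype=float)  # labor input

X = np.column_stack([np.ones_like(Y), np.log(K), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
lnA, alpha, beta = coef
print(f"alpha + beta = {alpha + beta:.2f}")
if alpha + beta > 1:
    print("increasing returns to scale")
```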
Agricultural modernization requires changing the traditional production mode, and the diffusion of green agricultural technology is an important way to realize that change. This paper constructs an asymmetric evolutionary game model between government and farmers, analyzes the strategy evolution path of each player and the factors affecting it using the stability theory of differential equations, and discusses the system's stable strategies using the Jacobian matrix. In the numerical simulation, the interactive evolution path between government and farmers in agricultural green technology extension is analyzed, and the influence of government penalties, green technology purchase costs, gains from green products, and other variables on the system's stable strategy is simulated for the case where government penalties exceed the sum of supervision fees and agricultural subsidies. The research shows that increasing government subsidies for green production and government penalties for non-green production can encourage farmers to adopt green production methods, which is conducive to the diffusion of green technology. Promotion is worthwhile only if the profit farmers obtain with green production technology exceeds a certain threshold; otherwise the technology will be difficult to promote.
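A minimal sketch of the replicator dynamics underlying such an asymmetric game, where x is the share of governments choosing to regulate and subsidize, and y is the share of farmers choosing green production; the payoff-difference expressions and parameter values are purely illustrative placeholders, not the paper's model:

```python
# Two-population replicator dynamics: x' = x(1-x)*du_gov, y' = y(1-y)*du_farm.
import numpy as np
from scipy.integrate import solve_ivp

S, F, C, G = 2.0, 3.0, 4.0, 5.0   # subsidy, penalty, green cost, green gain (illustrative)

def replicator(t, z):
    x, y = z
    du_gov = (1 - y) * F - y * S      # payoff gain of regulating vs. not (placeholder form)
    du_farm = G - C + x * S + x * F   # payoff gain of going green vs. not (placeholder form)
    return [x * (1 - x) * du_gov, y * (1 - y) * du_farm]

sol = solve_ivp(replicator, (0.0, 50.0), [0.3, 0.2], max_step=0.1)
print("long-run strategy shares (x, y):", sol.y[:, -1])
```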
Background: limited Covid-19 research has investigated the relationship between the number of vaccinated people and confirmed cases. We investigate the hypothesis that the number of confirmed cases is negatively correlated with the number of people fully vaccinated. Methods: we analyze the number of Covid-19 confirmed cases versus the cumulative number of vaccinated people in the U.S., collected from the CDC's official website. The data were updated daily from 13 December 2020 (the date the Covid-19 vaccine first became available in the U.S.) to the date of collection, 16 June 2021. Conclusion: our study indicates that the number of Covid-19 confirmed cases decreases as the number of fully vaccinated people increases. The results provide reasonable reassurance to people who are currently uncertain about the safety and effectiveness of the vaccines and carry implications for other countries working to contain the Covid-19 outbreak.
Acute liver failure (ALF) is a rare but severe liver dysfunction that is deadly without immediate treatment. It can be associated with numerous habits and diseases, such as drug abuse, excessive alcohol consumption, viral hepatitis, and diabetes. Most studies focus on a single potential cause of ALF; our study instead considers these potential causes jointly and investigates the risk of developing ALF. Analyzing data consisting of 8,785 samples with measurements of physical condition, blood tests, and diseases, retrieved from the open source Kaggle.com, we fit a logistic model to narrow down the independent variables related to the dependent variable, development of ALF. We built a model using dyslipidemia, poor vision, family diabetes, family hepatitis, and hepatitis to predict the risk of ALF; having any of these five conditions would increase the risk of acute liver failure.
Taking Xiamen as the research object and based on monitoring data from 2014 to 2016 (3 years), this study observes changes in atmospheric particulate matter (PM) concentration and analyzes the influence of meteorological factors (wind speed, temperature, dew point temperature, lowest cloud height, sea level pressure, relative humidity, etc.). The results show that: (1) The concentration of atmospheric particulate matter in Xiamen is generally at a relatively low level, and air quality is relatively good. By season, particulate concentrations are lowest in summer and highest in winter; within the day, the highest values occur late at night and in the early morning and the lowest at noon. Diurnal variation shows different characteristics across seasons, weather conditions, and urban gradients. (2) PM concentration has a significant positive correlation with the concentrations of other pollutants, humidity, and sea level pressure, and a significant negative correlation with temperature, wind speed, lowest cloud height (cloud cover ≥ 70%), and dew point temperature. The influence of meteorological factors on the variation of particulate concentration differs by season.
The Air Pollutant Index (API) is an indicator for monitoring and evaluating air quality and a legal standard for measuring the degree of air pollution. We determined the survey content based on the academic literature and data released by the Guangzhou Ecological Environment Bureau, capturing the index values of six major pollutants in Nansha District from 2016 to 2020: PM2.5, PM10, SO2, NO2, O3, and CO. Using SPSS and Excel, exploratory analysis, and visual comparison, we found that: (1) The API of Nansha District shows a high frequency of good air quality (84.60%) over the five-year period, with no record of serious pollution. (2) Over the annual cycle, some air pollutant indicators in Nansha District are relatively high in April and in autumn. We further analyzed these results and put forward two constructive suggestions.
In order to reduce environmental pollution and the waste of resources, it is worth discussing how to increase consumers' willingness to purchase sustainable clothing and their satisfaction with it, as well as ideas for product design optimization and marketing strategies of environmentally friendly clothing enterprises. This study puts forward an important criterion framework for sustainable clothing through a literature review and the Delphi method, and uses the DANP method to determine the importance rankings of the factors in purchasing sustainable clothing and the causal relationships among them. The results show that "sustainable material" is the most important factor in consumers' decisions to buy sustainable clothing; "consumer habit", "value expression", "environmental protection brand image", and "aesthetics" are also key factors. The evaluation model can be widely used in clothing marketing and design, questionnaire construction, evaluation indicators, and enterprise development.
Based on monitoring of air pollutants in Xi'an from 2016 to 2020, and on statistical analysis of annual and monthly average concentrations, Spearman's rank correlation coefficients, the comprehensive ambient air quality index, and pollutant load factors over the five years, the air quality of Xi'an was further characterized. The results show that: (1) Except for ozone, the annual average pollutant concentrations in Xi'an showed a downward trend, and the monthly average concentrations showed a "U" shape. Concentrations increased markedly in autumn and winter, when pollution was more serious, and air quality improved markedly in spring and summer. (2) TSP was the main air pollutant in Xi'an over the five years; although its concentration is decreasing, it still needs improvement. (3) Correlation analysis of the air pollutants shows that PM10 and PM2.5 are significantly positively correlated, and PM2.5 and PM10 have high homology with SO2, CO, and NO2.
Plastic, which is difficult to degrade, causes a series of ecological problems in the marine environment, yet plastic pollution flux is difficult to estimate. The human development index (HDI) is potentially related to microplastic generation. In this study, we collected mismanaged plastic waste (MMPW) and HDI data for most countries in the world with rivers flowing into the sea. The results show that MMPW values in high-income (HIC) countries (median = 0.0056 kg/cap/day) are significantly lower than those of low-income (LI; median = 0.032 kg/cap/day), lower-middle-income (LMI; median = 0.067 kg/cap/day), and upper-middle-income (UMI; median = 0.046 kg/cap/day) countries. MMPW increases with HDI in relatively low-income countries (LI, LMI) but decreases with HDI in high-income countries (HIC), indicating the impact different economic development patterns have on microplastic generation. This study provides insight into the relationship between economic development and microplastic pollution and hence a more accurate basis for global plastic estimation.
Technological transformation is an effective means of ensuring the safe and reliable operation of the equipment assets of power grid enterprises. Because grid production technological transformation projects involve a wide variety of equipment and large capital investment, the completion final account often exceeds the initial budget estimate during project implementation. To address this, based on linear regression analysis, this article uses a sample of 300 technological transformation projects collected by a provincial power grid company and divides project cost by cost type into construction costs, installation costs, equipment purchase costs, other costs, and static costs. Along the investment and other dimensions, it analyzes and predicts the current state of project cost control in terms of regularity and volatility, examining completion final accounts, initial estimates, total investment, and the levels and changes of the various cost sub-items. The regression models established give good cost predictions for different grid technological transformation projects, effectively enabling reasonable control of project costs, reducing cases of settlements exceeding budgets, and strengthening the lean management of technological transformation project costs.
With the development of modern society, electrical energy remains one of the most important and effective energy sources. Electric energy can be converted into other forms of energy, such as light, heat, mechanical, and chemical energy; power plants, substations, and transmission lines that deliver electricity to users together form a complete power system. Through the application of cloud computing technology, the rational operation, scheduling, and maintenance of power systems can improve their internal networks. Cloud computing integrates data and information resources, giving the power system strong computing capability while ensuring the security of power system data. This article discusses the characteristics of the power system and directions for its improvement, studies the characteristics of cloud computing technology and its application in actual power systems, and aims to develop power system services and improve system efficiency. The article is composed of three sections: the first introduces the background of cloud computing and the power system; the second describes in detail the application of cloud computing in the power system, improvements to the application environment, and its advantages and disadvantages; the third summarizes the future development direction of cloud computing platforms and how they can be improved within the power system.
This study of 13,779 randomly selected Swedish employees explores how one's working environment affects cardiovascular disease (CVD) prevalence. CVD prevalence is found to be greatly influenced by self-reported work control, job demands, and social support from colleagues in combination. The age-adjusted prevalence ratio (PR) is 2.17 (95% CI 1.32 to 3.56) for those working under high demands, low control, and low social support, compared with workers reporting low demands, high control, and high social support. With age and 11 other possible confounders considered and successively controlled, the PRs in this group remain about 2.0. Notably, the age-adjusted PRs were higher for blue-collar men. No causal inferences can be made because of the cross-sectional study design. The methodology used and the weaknesses of the work-stress field are discussed, along with the limitations of this design and what could be done today to improve the accuracy of such a study, which would help more patients suffering from CVD. The study is a major application of effect modification.
Ecological products are a major aspect of the concept of ecological civilization construction in China, providing a practical handle and material carrier for the "Two Mountains" theory. The mechanism for realizing the value of ecological products is the key path to the transformation of "Two Mountains". As the main consumption subject, consumers' ecological consumption behavior directly determines the degree to which the value of ecological products is realized. Against this background, based on the theory of planned behavior, a model of the influence mechanism of consumers' ecological consumption behavior is constructed, in which ecological consumption values are the antecedent variable, ecological consumption attitude the mediating variable, the reference group the moderating variable, and ecological consumption behavior the outcome variable. The relationships among them were tested by partial least squares structural equation modeling (PLS-SEM). The results show that ecological consumption values positively predict ecological consumption attitude, which in turn positively predicts ecological consumption behavior. Moreover, the relationship between ecological consumption values and attitude is positively moderated by the reference group.
This article applies a numerical model of reservoir water temperature to the prediction of the water temperature distribution of large-scale reservoirs, and compares and analyzes the difference between the natural water temperature and the discharged water temperature under different water intake methods. The results show that, compared with the traditional bottom intake method, layered water intake with a controlled intake depth can effectively reduce the difference between the discharged water temperature and the natural water temperature by 1.3 °C to 2.3 °C. It can also effectively shorten the recovery distance of low-temperature water, which is more conducive to the survival of fish and protected animals in downstream rivers. The research results provide a reference for reservoir ecological regulation measures.
The rapid increase in emissions of greenhouse gases such as carbon dioxide since the first Industrial Revolution has been identified as the leading cause of global climate deterioration. Global temperatures have risen by 0.6 degrees Celsius in the past 100 years and, on this trajectory, are projected to increase by 1.5 to 4.5 degrees Celsius by the middle of the 21st century. At the same time, sea levels are rising because of increased carbon dioxide. These changes are devastating for wildlife and negatively affect the human environment. The alarm bell for reducing carbon emissions has been sounded, and the transition to low carbon emissions is imminent. The following explains the current situation of global carbon emissions from the perspective of sustainable development, combining the structure of global energy consumption with the factors affecting carbon emissions. Seven prediction models are established, and additional emission reduction measures are proposed.
Applied Mathematics and Algorithm Model Prediction Optimization
The novel coronavirus that causes COVID-19 is highly infectious, and airborne droplets are its main transmission medium. This paper studies the survival time of the virus by focusing on the existence time of droplets. It presents a physical analysis based on a mathematical model and a numerical calculation of the existence time of droplets in air, and discusses the relationship between droplet existence time and environmental factors such as relative humidity and temperature. It concludes that, for a given initial radius, the droplet existence time increases with increasing ambient relative humidity and decreases with increasing temperature, and that the effect of temperature on the existence time of water droplets is smaller than the effect of relative humidity; on this basis, prospects for further study are outlined and some precautions can be proposed.
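For orientation only (this is a classical reference point, not necessarily the model used in the paper), the d-squared law for a slowly evaporating droplet gives an existence-time estimate of the form

    t_{\mathrm{evap}} \approx \frac{d_0^{2}}{K(T,\mathrm{RH})}, \qquad K \propto D_v(T)\,\bigl(1-\mathrm{RH}\bigr),

where d_0 is the initial diameter, D_v the vapor diffusivity (which grows with temperature T), and RH the ambient relative humidity. The scaling is consistent with the conclusions above: the lifetime grows as RH increases and shrinks as T increases.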
In this article, we focus on Bernoulli percolation and mainly investigate bounds on the probability that 0 is connected to distance n. We first give a rough bound on this probability, and then refine the result using the high-dimensional RSW theory, which gives a nontrivial bound for a short crossing in ℤ^d, together with the renormalization method; the last step of this part is completed by a coupling argument. Next, we give a sharper bound in ℤ^2 using the dual graph. Finally, we investigate the behavior of some subgraphs of ℤ^2. The work and conclusions in this article are fundamental in percolation theory. The purpose of this article is to introduce Bernoulli percolation in a relatively thorough way to beginners in percolation theory and to provide some insight into different proofs of basic conclusions.
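As background for readers new to the area, the classical sharpness-of-the-phase-transition result that such bounds quantify states that the one-arm probability decays exponentially throughout the subcritical regime:

    \mathbb{P}_p\bigl(0 \leftrightarrow \partial B_n\bigr) \le e^{-c(p)\,n} \quad \text{for all } p < p_c,

where B_n = [-n, n]^d and c(p) > 0. The bounds discussed in the article refine this behavior in ℤ^d and ℤ^2.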
This paper proposes a track-tunnel-unsaturated soil model for predicting the vibrations induced by urban underground rail transit. The track structure subsystem and the tunnel-unsaturated soil subsystem are coupled through the force between the slab and the tunnel invert. Firstly, the radial displacement frequency response function (FRF) at the tunnel invert is calculated. Then, the coupled equations of rail, slab, and tunnel are used to obtain the interaction force between the slab and the tunnel. Finally, other dynamic responses can be calculated by multiplying the interaction force by the corresponding FRF. After being validated by comparison with an existing saturated soil model, the present model is applied to investigate the effects of water saturation and of the speed and frequency of the train load on the dynamic response of the system. The results show that the radial displacement at the tunnel invert increases with water saturation, while the matric suction amplitude first decreases and then increases with increasing water saturation.
Nowadays, against the background of the global pandemic, people are required to wear face masks, making it harder to identify a person because features such as the mouth and nose are covered. Finding a way to recognize identity rapidly is regarded as a top priority. Fortunately, thanks to the rapid development of various network architectures, the problem is on its way to being solved. These network architectures have a positive impact not only on facial recognition but also on many aspects of daily life, such as image semantic segmentation and object detection, which can be used to separate targets from background information. This paper mainly analyzes the differences between the network architectures LeNet, AlexNet, and VGG, including their advantages and drawbacks, and describes possible applications of the models.
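To give a sense of the scale gap between these architectures, a minimal LeNet-style network (a sketch assuming PyTorch and 32×32 grayscale inputs; layer sizes follow the classic LeNet-5 layout) can be written as:

    import torch
    import torch.nn as nn

    class LeNet5(nn.Module):
        """LeNet-style CNN: two conv/pool stages followed by three fully connected layers."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),   # 32x32 -> 28x28 -> 14x14
                nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),  # 14x14 -> 10x10 -> 5x5
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
                nn.Linear(120, 84), nn.Tanh(),
                nn.Linear(84, num_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    # Roughly 61k parameters, versus on the order of 138M for VGG-16.
    print(sum(p.numel() for p in LeNet5().parameters()))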
This research applies simple statistical models to study stock market dependence in the United States, to obtain an overview of how the stock prices of companies in this country behave. The study examines SPY data and implements a regression model and a random walk model to project prices over selected periods of the historical data. The predictions from the two models are compared with the subsequent actual data (that is, "future" data relative to the fitting window), and the regression model is found to make predictions closer to the future stock market price. The results show that the future stock market price is related to the behaviour of earlier prices; the stock market price is dependent. Based on the stock price behaviour of four typical sports companies from 2015 to August 2021, the deviation of individual stock price trends from SPY is discussed. This leads to an exploratory proposal: an individual stock price changes steadily, following the normal trend, if the company maintains a good income, which requires consistently high-quality products, good marketing strategies, and so on.
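A minimal version of this backtest comparison (a sketch assuming a sequence `spy_close` of daily SPY closing prices; the holdout horizon is illustrative) could look like:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def compare_models(spy_close, horizon=30):
        """Hold out the last `horizon` days; forecast them with a linear trend and a random walk."""
        prices = np.asarray(spy_close, dtype=float)
        train, actual = prices[:-horizon], prices[-horizon:]
        t = np.arange(len(train)).reshape(-1, 1)
        reg = LinearRegression().fit(t, train)
        t_future = np.arange(len(train), len(prices)).reshape(-1, 1)
        reg_pred = reg.predict(t_future)          # extrapolated linear trend
        rw_pred = np.full(horizon, train[-1])     # random walk: best guess is the last observed price
        rmse = lambda pred: np.sqrt(np.mean((actual - pred) ** 2))
        return rmse(reg_pred), rmse(rw_pred)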
This paper briefly introduces the theory of site percolation, together with a study of the percolation threshold. First, the basic model is implemented in Java, with the field of the model assumed to be a bounded square lattice. Each square is open, allowing liquid to pass through, with a fixed site vacancy probability p; conversely, closed sites block the liquid. Each experiment uses a unique seed to guarantee the reproducibility of the models. A percolation graph with prescribed N and probability p is obtained by passing these parameters to the code, and the pathways that allow liquid to percolate from the top of the lattice are highlighted in the plots. Second, the relationship between the site vacancy probability p and the percolation probability is plotted to find the percolation threshold. The percolation threshold is a critical probability: if the vacancy probability exceeds the threshold in an infinite system, the system percolates. A percolation is successful if there is at least one pathway from the top of the lattice to the bottom. The probability of percolation is the rate of successful percolation in replicated experiments with the same site vacancy probability but different seeds. The accuracy of the threshold improves as the number of squares and tests increases; the efficiency of the program is also taken into consideration.
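A compact Python analogue of the Java procedure described above (a sketch using connected-component labelling on an N×N lattice with vacancy probability p and a fixed seed) might be:

    import numpy as np
    from scipy.ndimage import label

    def percolates(N, p, seed=0):
        """Open each site independently with probability p and test for a top-to-bottom path."""
        rng = np.random.default_rng(seed)
        grid = rng.random((N, N)) < p                 # True = open site
        labels, _ = label(grid)                       # 4-connected clusters of open sites
        top, bottom = set(labels[0]) - {0}, set(labels[-1]) - {0}
        return bool(top & bottom)                     # a shared cluster label spans the lattice

    # Estimate the percolation probability at a given p by replicating over seeds.
    est = np.mean([percolates(100, 0.59, seed=s) for s in range(200)])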
In recent years, the Bhalekar-Gejji system has been widely studied for its rich dynamic behavior and application background; however, the synchronization of fractional-order Bhalekar-Gejji systems has received very little study. In this paper, the synchronization problem of the fractional-order Bhalekar-Gejji system is studied, and the controller and parameter-updating method are designed based on fractional calculus theory, the backstepping method, and adaptive sliding mode control. Finally, the effectiveness and robustness of the scheme are verified by numerical simulation.
Machine learning is a subfield of artificial intelligence that teaches a machine how to learn. It has drawn research interest in many areas, including computer science, engineering technology, and statistics, and it has a growing impact on our daily life: our lives are affected by algorithms whether we realize it or not. For example, the history of your Internet searches is shared with companies. When I searched for "football kits" on Google, it suggested the 10 or 20 most relevant links; after I clicked on one of them, the Internet recorded this as a piece of data, and the computer learned from it to provide improved suggestions next time. The decision tree is one of the most important machine learning models. It uses a tree-like model of decisions and consequences to help classify experimental sets of data. This article summarises the decision tree algorithm by investigating its basic theories, algorithms, and implementations.
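As a minimal illustration of the model surveyed here (a sketch using scikit-learn and its built-in iris data, which are not the article's own experiments), a decision tree can be trained and its learned rules inspected as follows:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Greedy recursive partitioning; max_depth limits overfitting.
    tree = DecisionTreeClassifier(criterion="gini", max_depth=3).fit(X_tr, y_tr)
    print(tree.score(X_te, y_te))          # held-out accuracy
    print(export_text(tree))               # the learned if/else decision rules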
Because of unreasonable layout schemes for underground garages, parking space planning results often have deficiencies. Therefore, a parking space layout method for community underground garages based on regional division is proposed. After analyzing the main factors influencing the parking space design of an underground garage, the complex design area is divided into a standard parking space area and a non-standard parking space area, and the layout schemes of the different areas are optimized by a genetic algorithm. Under an adaptive strategy, all schemes are traversed, and a layout that combines the advantages of all schemes is finally obtained. The experimental results show that the design method can increase the number of parking spaces in the same area.
In this work we analyze the BG14 digital signature scheme and show that its security proof is incomplete, in the sense that a necessary security check is missing from its signing algorithm. Moreover, we offer an alternative approach to the phase-2 security proof of the BG14 scheme. These two contributions are summarized in our digital signature scheme Π.
In recent years, speech synthesis based on machine learning has become more and more popular. There are now many kinds of neural network models that can generate synthetic audio that closely imitates the human voice; the quality of the generated audio is usually evaluated by the mean opinion score (MOS). The voiceprint is an important characteristic that distinguishes a speaker's speech features, and generating speech with specific voiceprint features is of great significance for broadening the applications of speech synthesis. However, existing speech synthesis models seldom consider the preservation of specific voiceprint features. In this paper, we propose D-MelGAN, a speech synthesis model targeting high-quality speech with the voiceprint features of a specific speaker. The model is based on the non-autoregressive feedforward convolutional neural network of GANs. By embedding the d-vector technique, used to identify specific voiceprints, into the GAN, raw audio waveforms with the characteristics of a specific speaker's voiceprint are generated. The experimental results show that the new model strengthens the voiceprint features of the generated audio while maintaining the quality of the synthesized speech, giving the generated speech the specific style of a speaker, so that text-to-speech technology can be applied in more fields.
Ambient air pollutants have a direct influence on the human body and the environment, and accurate prediction of PM2.5 concentration is crucial for air quality control. This paper aims to provide a reasonable model for predicting and analyzing PM2.5 concentration data for Hangzhou. Forty-five samples from ten meteorological observation points in Hangzhou in June and July 2021 were selected, with factors including PM2.5, PM10, and daily temperature as inputs. The BP neural network model is often used to predict PM2.5 concentration; however, the high dimensionality of the inputs often reduces its efficiency. To handle this problem, principal component analysis (PCA) is applied at the input layer to achieve dimensionality reduction. The network structure is determined by training on the training set, and the prediction accuracy is then tested. In conclusion, the PCA-BP neural network performs as well as the plain BP neural network while significantly saving computation time and simplifying the network structure.
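A sketch of such a PCA-plus-network pipeline (using scikit-learn on synthetic stand-in data, since the Hangzhou measurements are not reproduced here; sizes mirror the stated 45 samples) might read:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(45, 10))           # stand-in for 45 samples of meteorological inputs
    y = 3 * X[:, 0] + rng.normal(size=45)   # synthetic PM2.5 target for illustration
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # PCA reduces the input dimension before the BP (back-propagation) network.
    pca_bp = make_pipeline(
        StandardScaler(),
        PCA(n_components=0.95),             # keep components explaining 95% of the variance
        MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
    )
    pca_bp.fit(X_tr, y_tr)
    print(pca_bp.score(X_te, y_te))         # R^2 on the held-out split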
China is a large producer of apples, whose yield is influenced by climatic, socioeconomic, and other factors. It is therefore of great practical significance to thoroughly analyse the developmental trend of the apple industry. In this study, a linear regression model for 2005–2012, and a logistic regression model for 2012–2020, were established based on apple yield data from the National Bureau of Statistics. The numerical results showed that the prediction results were valid.
This paper introduces spline function approximation theory into the multiscale representation of map curves. To handle the complex shapes, large data volumes, and multiscale characteristics of map curves, we first use the DP algorithm to obtain feature points under different thresholds, integrate them with the minimum distance visible to the human eye, and, according to the average distances between adjacent feature points, select a suitable linear threshold progression, so that multiscale representations with clear differences are formed. The experiments show that, unlike the traditional Fourier series and wavelet transform, which shift points at the feature points of the simplified curves, the PIA method balances the overall morphology of the simplified curves with the accuracy of local feature points, and that it is feasible to establish a continuous multiscale representation model of map curves with this method through the selection of continuous thresholds.
This paper focuses on stochastic infectious disease models in the context of biological problems. Stochastic infectious disease models with mean-reverting processes are studied; specifically, the model considered is a stochastic SIS infectious disease model with mean-reverting birth and death rates [1-7]. The persistence and extinction of the disease are discussed in the context of this infectious disease model [8-11], giving a threshold such that if the threshold is less than 1, the disease becomes extinct with probability 1, and if the threshold is greater than 1, the disease persists with probability 1 in the mean sense. From this analysis, we conclude that the greater the intensity of the fluctuations, the faster the disease goes extinct, while the smaller the intensity of the fluctuations, the larger the number of infections.
Hepatitis C is a widespread liver disease that can lead to serious symptoms if not diagnosed in time. Several methods are already available for specific screening of Hepatitis C; however, their expense makes broad use difficult in countries with poor conditions. Here, by constructing a mathematical model, we introduce a new method for Hepatitis C diagnosis. Our method is based on the results of liver function tests and is therefore relatively inexpensive to apply. A study was conducted on a dataset obtained from the UCI Machine Learning Repository on June 10, 2020, containing laboratory values of blood donors and Hepatitis C patients together with demographic values such as age. χ² and ANOVA tests were used to find the correlation between Hepatitis C and the parameters of the liver function test, and logistic regression was used to build the model for the prediction of Hepatitis C. The results show a significant increase in the likelihood of Hepatitis C with increasing AST (β = 0.09, p < 0.001) and BIL (β = 0.057, p < 0.01), and a significant decrease in the likelihood of Hepatitis C with increasing ALT (β = -0.026, p < 0.001) and CHOL (β
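The kind of coefficient-level output quoted above can be reproduced with a logistic regression in statsmodels; a sketch follows, with the file name and column layout as assumptions mirroring the abstract rather than the paper's actual data handling:

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical layout: one row per subject, liver-panel columns and a 0/1 Hepatitis C label.
    df = pd.read_csv("hcv_liver_panel.csv")
    X = sm.add_constant(df[["AST", "BIL", "ALT", "CHOL"]])

    model = sm.Logit(df["hcv"], X).fit()
    print(model.summary())   # per-predictor beta estimates and p-values, as quoted above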
Since a wide range of fake news concerning the COVID-19 virus spreads fast and without restraint, many negative effects come along with it, disturbing people's daily lives and interfering with the distribution of real news. To improve this situation, our study puts forward an approach to limiting the spread of fake news by detecting, identifying, and classifying it. This objective is pursued using the COVID-19 Fake News Dataset from the Mendeley Data website, released in early 2021, and an LSTM network is applied to build the detection model. Our performance measures include accuracy, precision, and related metrics, and we use the loss curve and confusion matrix to analyze and discuss the results. In conclusion, our research provides strategic references based on the LSTM model for problems connected with detecting fake news about the COVID-19 virus.
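A minimal LSTM text classifier of the kind described (a sketch in TensorFlow/Keras; the vocabulary size and architecture are illustrative, not the paper's configuration) could be:

    import tensorflow as tf

    VOCAB = 20000   # illustrative vocabulary size for the tokenized headlines

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB, 64),             # token ids -> dense vectors
        tf.keras.layers.LSTM(64),                         # sequence -> fixed-size representation
        tf.keras.layers.Dense(1, activation="sigmoid"),   # fake (1) vs. real (0)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.Precision()])
    # model.fit(X_train, y_train, validation_split=0.2, epochs=5) on tokenized, padded texts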
In the classical Multi-Armed Bandit (MAB) problem, a player selects one arm out of a set of arms to play at each time, without knowing the reward models, aiming to maximize the total expected reward over T plays. The regret of an algorithm is the expected total loss after T steps compared with the ideal scenario of knowing the reward models. When the arm reward distributions are heavy-tailed, it is difficult to learn which arm has the best reward. In this paper, we introduce an algorithm based on the idea of the Upper Confidence Bound (UCB) and prove that it achieves sublinear growth of regret for heavy-tailed reward distributions. Furthermore, we consider MAB with gap periods as a dynamic model in which an arm enters a gap period, offering no reward, immediately after being played; this model has broad applications in Internet advertising. Clearly, the player should avoid choosing an arm until it exits its gap period. We extend the algorithmic framework of Deterministic Sequencing of Exploration and Exploitation (DSEE) to the MAB model with gap periods, with regret reaching the optimal order O(log T) for light-tailed distributions and sublinear growth for heavy-tailed ones.
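For reference, the standard UCB1 index that heavy-tailed variants generalize (using empirical means, which a heavy-tailed algorithm would replace with a robust estimator such as a truncated mean) looks like this sketch:

    import numpy as np

    def ucb1(pull, n_arms, T):
        """Play each arm once, then pick the arm maximizing empirical mean + exploration bonus."""
        counts = np.zeros(n_arms)
        sums = np.zeros(n_arms)
        for t in range(T):
            if t < n_arms:
                arm = t                                    # initial round-robin
            else:
                bonus = np.sqrt(2 * np.log(t) / counts)    # shrinks as an arm is sampled more
                arm = int(np.argmax(sums / counts + bonus))
            counts[arm] += 1
            sums[arm] += pull(arm)
        return sums.sum()

    # Toy environment: three Gaussian arms with means 0.1, 0.5, 0.9.
    total = ucb1(lambda a: np.random.normal([0.1, 0.5, 0.9][a], 1.0), 3, 10_000)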
The Schrödinger equation is a basic and key equation in quantum mechanics. In this thesis, we focus on the one-dimensional cubic-quintic nonlinear Schrödinger equation (CQNLSE). First, we introduce the general formulation of Schrödinger equation as well as some dynamic properties of the general nonlinear Schrödinger equation, including the mass and energy conservation. Second, we present four types of analytical solutions to the normalized CQNLSE and prove the conservation of energy of the CQNLSE. Third, we provide and analyze numerical methods, mainly finite difference methods and pseudo-spectral methods, for the CQNLSE under the zero far-field condition. Finally, we simulate the interaction of two bright solitons and dissect their condition and behavior before and after the collision.
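In one common normalized form (sign conventions for the nonlinear coefficients vary), the one-dimensional CQNLSE reads

    i\,\partial_t \psi + \tfrac{1}{2}\,\partial_{xx}\psi + \alpha\,|\psi|^2\psi + \beta\,|\psi|^4\psi = 0,

with conserved mass N = \int |\psi|^2\,dx and energy

    E[\psi] = \int \Bigl( \tfrac{1}{2}\,|\partial_x\psi|^2 - \tfrac{\alpha}{2}\,|\psi|^4 - \tfrac{\beta}{3}\,|\psi|^6 \Bigr)\,dx,

the two conservation laws referred to in the abstract.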
Nowadays, machine learning has become one of the most popular subjects. The general classification problem is similar to the medical diagnosis problem: measurements are made on some case or object, and based on these measurements we want to predict which class the case belongs to. One may think that machine learning sounds too advanced for ordinary people; however, it is applied everywhere in our lives. The decision tree is one of the important machine learning models. It uses a tree-like model of decisions and consequences to help classify experimental sets of data. In this article, we summarize the decision tree algorithm by investigating its basic theories, algorithms, and implementations.
The COVID-19 epidemic is raging around the world, and in Tokyo, Japan, the situation after the Olympics is not optimistic. Owing to the emergence of more unstable factors, the numbers of newly confirmed cases and deaths are becoming increasingly difficult to predict, posing great challenges for epidemic prevention and control; an effective forecasting method is urgently needed. To deal with the unpredictable Tokyo coronavirus epidemic, this article analyzes the existing confirmed-case and death data and predicts the future trend of the epidemic. The article first uses the ARIMA-GARCH model to make predictions and obtains fairly accurate results; it then uses the SIR model for fitting and prediction, and finally provides guidance for Tokyo's future anti-epidemic policy.
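A bare-bones version of the ARIMA forecasting step (a statsmodels sketch; the file name is hypothetical and the order (1,1,1) is illustrative, with the GARCH stage then fitted to the residuals, e.g. with the `arch` package) is:

    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    cases = pd.read_csv("tokyo_daily_cases.csv",       # hypothetical file of daily confirmed cases
                        index_col=0, parse_dates=True).squeeze()

    fit = ARIMA(cases, order=(1, 1, 1)).fit()          # order would be chosen by AIC in practice
    print(fit.forecast(steps=14))                      # two-week-ahead point forecasts
    resid = fit.resid                                  # residuals feed a GARCH volatility model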
Vaccine production scheduling is a specific application of the classical job sequencing problem in sequencing theory. Under certain production constraints, production efficiency can be improved by solving several types of vaccine scheduling problems with fixed processing times. This article first uses an improved profile fitting algorithm to reduce the blocking time in production and determine the production sequence of adjacent vaccines; it then uses a tabu search algorithm for global optimization to improve convergence; finally, the "two-exchange method" is used as a test tool. The experimental results show that this method can minimize the total waiting time of vaccine processing and achieve the best production schedule.
To address the medium- and long-term water level prediction of Hongze Lake, which involves small samples, strong fluctuations, and nonlinearity, a statistical model is proposed to predict the water level. A weighted Markov chain is introduced to predict the residual sequence generated by the ARIMA model. The predicted state is converted into a specific value using the state feature value combined with linear interpolation, compensating the prediction of the ARIMA model. Taking into account the directionality of the ARIMA model with respect to the trend of water level changes, and improving the conversion process for the residual state, the experimental results show that the improved combined model reduces the water level prediction error by 7.02% and 3.66% compared with the single model and the unimproved combined model, respectively.
Over the past two decades, prediction markets have become an increasingly important tool both for practical applications and for theoretical economic studies. Despite their outstanding predictive accuracy, little is known about the stationary properties of prediction markets. In this work we build a dynamic trading model of prediction markets that includes behavioral traders. In our model, a single market maker is responsible for the issuance, trading, and liquidation of a binary Arrow-Debreu security, and the utilities of behavioral traders are characterized by a prospect theory-based model; these utility functions assume a peculiar 'S' shape instead of the usual concave curvature. We prove that the individual trade between a behavioral trader and the market maker is well-posed, and simulations show that the existence of these behavioral traders can result in price distortion.
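The 'S'-shaped utility referred to is typically the Kahneman-Tversky value function, concave over gains and convex over losses with loss aversion λ > 1:

    v(x) = \begin{cases} x^{\alpha}, & x \ge 0,\\ -\lambda\,(-x)^{\beta}, & x < 0, \end{cases} \qquad 0 < \alpha, \beta \le 1,\ \lambda > 1.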
In this article, we first introduce classic PCA, including its ideas, efficient algorithms such as the SVD, and how PCA can be viewed as a linear regression problem. We then introduce SPCA with an L1 penalty and provide an algorithm for computing its loading matrix. After that, the article focuses on the differences between PCA and SPCA by applying them to several simple cases. Finally, we apply them to two gene expression datasets. It turns out that on the first dataset, which has more samples than features, PCA performs better than SPCA, while on the second dataset, which has more features than samples, SPCA performs better than PCA. Consequently, the sparsity of SPCA may be useful when a dataset contains tens of thousands of features.
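The PCA/SPCA contrast can be seen directly in scikit-learn; a sketch on synthetic data (dimensions illustrative, echoing the more-features-than-samples regime of the second dataset) follows:

    import numpy as np
    from sklearn.decomposition import PCA, SparsePCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 500))                       # more features than samples

    pca = PCA(n_components=5).fit(X)
    spca = SparsePCA(n_components=5, alpha=1.0).fit(X)   # alpha controls the L1 penalty

    # SPCA loadings are mostly exactly zero, easing interpretation of gene signatures.
    print(np.mean(pca.components_ == 0), np.mean(spca.components_ == 0))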
This paper focuses on the effects of different temperature and catalyst combinations on ethanol conversion, C4 olefin selectivity, and C4 olefin yield, providing theoretical references for practical production and experiments. To study more intuitively the relationship between ethanol conversion and C4 olefin selectivity under each catalyst combination and temperature, a BP neural network prediction model was established to preprocess the experimental data; the resulting function plots show that, for the same catalyst combination, both quantities are positively correlated with temperature. On this basis, the results of the second experiment were briefly analyzed. Using this model, the steepness of the function segments corresponding to different catalyst combinations was compared; the steeper the segment, the more effective the catalyst combination. The analysis showed that when the loading ratio of Co/SiO2 to HAP was close to 1:1 and the Co loading and ethanol concentration were close to 1.68 ml/min, it was more favorable for improving ethanol conversion and C4 olefin selectivity. In analyzing the effect of temperature on the reaction, all predicted data curves were superimposed, and it was found that the higher the temperature, the better the ethanol conversion and C4 olefin selectivity. Using a genetic algorithm model, the best solution was determined: a charge ratio of 1.1403 for Co/SiO2 to HAP, a Co loading of 1.9276, an ethanol concentration of 1.9472, and a temperature of 322.21 °C, giving a C4 olefin yield of about 53.04%. To use less raw material in production and ultimately maximize profit, the search for the optimal solution was continued on the basis of the previous study: the neural network prediction model was used to search around this small region and then break out of it, and five sets of experiments were designed.
Probability and statistics are widely applied in the modern financial market, and they contribute to and prompt studies in stock market analysis. Brownian motion in particular plays a key role in predicting stock volatility. This paper discusses the fundamental mathematical and statistical theories applied in the simulation of stock volatility, including geometric Brownian motion, stochastic processes, random walks, and some important statistical theorems. The most frequently used model for stock prices is Geometric Brownian Motion (GBM). GBM posits random shocks around a continuous drift: while single-period returns are normally distributed under GBM, multiperiod price levels (for example, over ten days) are lognormally distributed. Forecasting stock prices remains an important challenge in present-day stock market decision-making.
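Concretely, GBM models the price S_t by the stochastic differential equation

    dS_t = \mu S_t\,dt + \sigma S_t\,dW_t, \qquad S_t = S_0 \exp\!\Bigl(\bigl(\mu - \tfrac{\sigma^2}{2}\bigr)t + \sigma W_t\Bigr),

where W_t is standard Brownian motion; log-returns over a fixed period are therefore normal, while the multiperiod price level S_t is lognormal, as noted above.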
We review percolation theory and classical percolation problems to identify properties of clusters. A basic two-dimensional percolation model is built with the Monte Carlo method. The model has N×N blocks in total, and each site is open with the same probability p. After running the model, some open blocks connect to form clusters, and it is possible for opposite sides of the square to be connected. We colour the clusters in distinct colours to make them easier to observe, which allows an experiment on the critical point of the phase transition, a notion from physics. This experiment considers the maximum cluster size at different probabilities: as the total number of blocks increases, the critical point approaches 0.59. The percolation model has many applications in different fields, including forest-fire models, bank models, and information dissemination models; all share the feature that the observed value jumps sharply after a certain point.
The output gap is an important index for analyzing the macroeconomic situation; in the macroeconomic policy framework, the formulation of many policies depends on an evaluation of the output gap. Given the impact of COVID-19 on China's economic growth in 2020, this paper aims to explore the future behavior of China's output gap. Firstly, China's real GDP growth rate is calculated from the original GDP data. Secondly, potential output and the output gap are estimated by the H-P filtering method. Finally, the output gap series is fitted and forecast with an ARMA model. In summary, under the influence of COVID-19, China's actual economic growth in 2020 was significantly lower than its potential growth, forming a large negative output gap. The epidemic's impact on China's actual economic growth is expected to last for four years, with the output gap returning to a stable state slightly below zero in 2025.
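The H-P decomposition step has a direct statsmodels implementation; a sketch follows, with a hypothetical file name, and with the conventional smoothing parameter λ = 1600 for quarterly data (annual series typically use a smaller λ, such as 100):

    import pandas as pd
    from statsmodels.tsa.filters.hp_filter import hpfilter

    gdp = pd.read_csv("china_real_gdp.csv",            # hypothetical series of real GDP levels
                      index_col=0, parse_dates=True).squeeze()

    cycle, trend = hpfilter(gdp, lamb=1600)            # trend = potential output estimate
    output_gap = 100 * cycle / trend                   # percentage gap fed into the ARMA model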
The global outbreak of COVID-19 over the last two years has brought the issue of epidemic transmission to the forefront of everyone's mind. Epidemic transmission is closely related to the virus itself, to epidemic prevention measures, and to population density. The focus of our study is therefore to investigate the effects of population density, migration rate, infection rate, and recurrence rate on the transmission of infectious diseases using a Markov random walk model developed from the traditional SIR and VSIR models. We first present the history of epidemic research and current advances in epidemic modeling. Secondly, we build a random walk model (based on SIR) to obtain baseline simulated transmission charts, and then vary population density, mobility, infection rate, and recovery rate to investigate how much various epidemic prevention measures can change the transmission rate of epidemic diseases. Finally, we introduce a vaccination process into the random walk model and analyze the effectiveness of the vaccine against the spread of disease.
The process of creating clusters by the irreversible aggregation of tiny particles is a central topic in scientific research; during aggregation, the rate is limited by the diffusion of particles to the surface. Diffusion-limited aggregation (DLA) was first proposed by T. A. Witten and L. M. Sander in 1981. In recent years, scaling research in nonequilibrium statistical physics based on the DLA model has progressed substantially, partly because of the use of renormalization group approaches. In this paper, the mechanism of a three-dimensional random walk on a lattice is introduced first, and the DLA process in three-dimensional space is then reviewed. A brief introduction to the basic idea of the 'fractal' and to the two-point correlation function approach, which is used to determine the fractal dimension of the DLA model, is also given. Finally, the fibril formation of a specific rod-like protein, type I collagen (a process known as fibrillogenesis), is discussed on the basis of a two-dimensional DLA model.
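A minimal two-dimensional on-lattice DLA of the kind used in the fibrillogenesis discussion can be sketched as follows (simplified: walkers are launched on a circle around the cluster and discarded when they wander too far; parameters are illustrative):

    import numpy as np

    def dla(n_particles=500, size=201, seed=0):
        """Grow a DLA cluster on a square lattice from a central seed particle."""
        rng = np.random.default_rng(seed)
        grid = np.zeros((size, size), dtype=bool)
        c = size // 2
        grid[c, c] = True                               # the seed
        steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
        r_max = 2
        for _ in range(n_particles):
            ang = rng.uniform(0, 2 * np.pi)             # launch just outside the cluster radius
            x = c + int((r_max + 5) * np.cos(ang))
            y = c + int((r_max + 5) * np.sin(ang))
            while True:
                dx, dy = steps[rng.integers(4)]         # unbiased lattice random walk
                x, y = x + dx, y + dy
                r = np.hypot(x - c, y - c)
                if r > r_max + 20 or not (0 < x < size - 1 and 0 < y < size - 1):
                    break                               # walker escaped; discard it
                if grid[x - 1:x + 2, y - 1:y + 2].any():  # touched the cluster: stick here
                    grid[x, y] = True
                    r_max = max(r_max, r)
                    break
        return grid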
The total retail sales of consumer goods is an important index of consumption levels, and predicting its development trend helps in grasping the economic situation. Because the factors affecting total retail sales of consumer goods are very complex, it is difficult for a single forecasting model to make full use of the information contained in the data. This paper therefore proposes a new forecasting method for total retail sales of consumer goods, corrected by a combined model and case-based reasoning (CBR). First, a new dataset is constructed by using complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) to eliminate high-frequency noise. Autoregressive integrated moving average (ARIMA), long short-term memory (LSTM), extreme gradient boosting (XGBoost), and support vector regression (SVR) models are established on the new dataset, and the predictions of the individual models are then integrated by Gaussian process regression (GPR) to obtain an initial prediction value and an error sequence. In addition, to address the difficulty of regularizing and quantifying the implicit knowledge in the error sequence with a mathematical model, this paper proposes a new error correction method, namely CBR, to improve prediction accuracy. Experimental results show that, compared with single models, the proposed method has a better prediction effect and can effectively improve the prediction accuracy of total retail sales of consumer goods.
Using Bayes' theorem and conditional probability, this paper provides a method to predict a student's GRE score range based on his or her TOEFL score. To locate the specific GRE range, a dataset of 500 undergraduate students who have taken both the GRE and TOEFL exams is used as the sample, and statistics from the official websites are borrowed as prior probabilities. The results show not only a positive relationship between the scores of the two exams but also a polarization of the two standardized exam scores. However, since the dataset was retrieved from a single site, its reliability and universality are yet to be tested; combining more datasets obtained from different sites may help improve prediction accuracy.
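The underlying computation is a direct application of Bayes' theorem: with G a GRE score range and T an observed TOEFL band,

    P(G \mid T) = \frac{P(T \mid G)\,P(G)}{\sum_{G'} P(T \mid G')\,P(G')},

where the priors P(G) come from the official score distributions and the likelihoods P(T | G) are estimated from the 500-student sample.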
Insufficient physical activity causes a large number of health problems, and schools now play an important role in increasing their students' exercise. We examine the factors that influence the physical activity time provided by schools in England, applying t-tests and linear regression to national data. The data show that school type is the biggest factor: students in independent schools have more PA time than those in state schools (p < 0.001, 95% CI). Age also affects school-provided physical activity time, with a decline between ages 10 and 13, while other factors such as sex, location, and socioeconomic status have a small effect. This leads to the conclusion that many factors are related to school-provided physical activity.
Since the outbreak of the COVID-19 pandemic in 2020, most countries are still suffering from the virus, and human society has changed greatly. As the new virus is highly contagious, many people are still infected every day and in serious cases even die. However, many people still do not realize the harm posed by the virus. To help people perceive the spread of the virus over a given period more intuitively, this paper uses two classic epidemiological mathematical models based on Markov chains, the SEIR and SEIRS models, to simulate the spread of the virus in New York City over 180 days. In both models there are four states: Susceptible, Exposed, Infected, and Recovered. First, a Markov chain was used to randomly generate a large population in which only one person was infected, and the changes in the numbers of people in the four states were observed over time. In addition, by combining certain model coefficients in a formula, an index for measuring infectiousness, the reproduction number (R0), is obtained. The R0 of COVID-19 in New York City is about 5.93, much greater than 1, indicating that on average one person infects about six others; the disease is highly contagious, so measures need to be taken to reduce this number. Finally, a comparison of the two models shows that the SEIRS model is more suitable, since people do get re-infected over time.
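A compact discrete-time stochastic SEIR of the kind described can be sketched as below (a chain-binomial formulation with per-day transition probabilities; all parameter values are placeholders, not the paper's estimates):

    import numpy as np

    def seir(N=1_000_000, days=180, beta=0.6, sigma=1/5, gamma=1/10, seed=0):
        """Chain-binomial SEIR: each day, individuals move S->E->I->R at random."""
        rng = np.random.default_rng(seed)
        S, E, I, R = N - 1, 0, 1, 0                           # one initial infection
        history = []
        for _ in range(days):
            new_E = rng.binomial(S, 1 - np.exp(-beta * I / N))   # new infections
            new_I = rng.binomial(E, sigma)                        # end of incubation
            new_R = rng.binomial(I, gamma)                        # recoveries
            S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
            history.append((S, E, I, R))
        return history

    # For this parameterization, R0 = beta / gamma = 6.0, close to the 5.93 quoted above.
    print(0.6 / 0.1)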
As the contradiction between power supply and load sharpens in cities, heavy overload coexists with equipment redundancy, a phenomenon evidently caused by the lack of synchronization between power grid development and load growth, and the demand for increasing distribution network capacity keeps growing. A natural choice for increasing distribution network capacity is a 0-1 integer linear programming model. Accordingly, based on the capacity increase demand of the current distribution network, this paper proposes an optimization algorithm for power capacity increase built on a 0-1 integer linear programming model and applies it to an industrial park in specific experiments. The experimental results show that this method can meet the demand for power capacity increase and optimize the operation mode of the distribution network, playing a significant role in improving the intelligence level of the distribution network.
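The flavor of such a 0-1 model can be seen in a small PuLP sketch (hypothetical data: choose which capacity-upgrade projects to fund under a budget; this is not the paper's formulation):

    from pulp import LpProblem, LpVariable, LpMaximize, lpSum, value

    capacity = [400, 250, 300, 150]   # kVA added by each candidate upgrade (hypothetical)
    cost     = [90,  55,  70,  40]    # cost of each upgrade (hypothetical)
    budget   = 150

    prob = LpProblem("capacity_increase", LpMaximize)
    x = [LpVariable(f"x{i}", cat="Binary") for i in range(len(cost))]   # 1 = build upgrade i

    prob += lpSum(cap * xi for cap, xi in zip(capacity, x))             # maximize added capacity
    prob += lpSum(c * xi for c, xi in zip(cost, x)) <= budget           # budget constraint
    prob.solve()
    print([int(value(xi)) for xi in x], value(prob.objective))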
For the manufacturer's decision problem of selecting suppliers and carriers, this paper uses the grey comprehensive evaluation method to evaluate the importance of suppliers. A simulated annealing algorithm is then used to optimize supplier selection and determine the selection weights. In addition, a BP neural network is used to build a model predicting the enterprise's production capacity, and a nonlinear programming model is built to handle multiple objectives, such as large supply, a low transportation loss rate, a high delivery fulfillment rate, and small production volume. The problem is solved in MATLAB according to the differences in the supply chain, and the company's future strategy for selecting suppliers and carriers is finally given.
Prediction of depression with respect to the effect of safety behaviour during COVID-19 has seldom been investigated, and the effect of balancing the data with re-sampling methods is rarely discussed. In this paper, prediction performance is investigated with data collected from 26 countries worldwide, taking into account the effect of potential influencing variables. Specifically, the data were retrieved from the open-source dataset compiled by the IGHI at Imperial College London, containing 384,250 valid individuals with measurements of age, gender, country, COVID status, employment status, and behaviour score. Five machine-learning methods, namely logistic regression, MLR, RF, SVM, and k-NN, were compared using several statistical performance measures. Based on the six chosen latent factors, RF is evaluated as the optimal model, with the highest F1 score (0.787) and G-mean (0.503), without re-sampling; linear SVM, on the other hand, has the highest specificity (0.998) on the original data. Furthermore, although sensitivity increases, oversampling and undersampling procedures reduce the prediction accuracy to a nearly random value (0.5). Overall, RF without re-sampling is considered the best of the five algorithms for its highest sensitivity, precision, F1 score, and G-mean, especially for minimizing the false negative rate so that patients with depression are successfully identified. These results shed light on the choice of models for predicting depression status under different scenarios.
Animal shelters work around the clock to help pets be rescued across the country. In most cases, people think that animal adoption is driven mainly by intuition and emotion: a family comes in, falls in love, and then welcomes a new pet into their home. Through analysis of the data, we identified the features that best predict whether an animal will be adopted. After digging into the data and examining the physical characteristics of each animal, it is clear that the data are interesting and call for some simple feature engineering. Overall, our proposed mathematical model can effectively help improve adoption performance in practice, helping animal shelters improve their efficiency and reach a higher adoption rate. The proposed method was evaluated using standard quantitative machine learning measurements, and we conclude that the established framework can automatically classify small animals into multiple categories so that they can be cared for with less human labor, ultimately contributing to effective and efficient small-animal protection.
Nowadays, more and more people have realised that optimisation problems constrained by Differential and Algebraic Equations (DAEs) or Partial Differential and Algebraic Equations (PDAEs) are extremely important to our daily lives, as applications of this type of problem are very widespread. Unfortunately, these problems are hard to solve, so standard optimisation tools are needed to make them tractable. Many algebraic modelling tools are built on high-level programming languages such as Python and MATLAB; these tools allow additional freedom in incorporating new elements, language features, and workflows. The purpose of this article is to help modellers choose the most suitable tool for each situation by summarising the performance of existing tools for solving differential equations, including their advantages and disadvantages. According to the results, Pyomo.dae is stable and flexible, is useful for model transformations, and supports three different classes of differential equations. Pymanopt provides automatic differentiation when applied to optimisation on manifolds. PyDEns is flexible enough for convenient experimentation and is well known for its neural-network applications. Finally, Pandapower is good at handling parameters.
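As an illustration of the Pyomo.dae style discussed here, a minimal sketch follows: it declares a continuous time domain, an ODE constraint, and a finite-difference discretization transformation. This is a generic toy model (dx/dt = -x), not an example from the paper; solving it would additionally require an installed NLP solver such as ipopt.

```python
# Discretise the ODE dx/dt = -x, x(0) = 1, on t in [0, 1] with Pyomo.dae.
from pyomo.environ import ConcreteModel, Var, Constraint, TransformationFactory
from pyomo.dae import ContinuousSet, DerivativeVar

m = ConcreteModel()
m.t = ContinuousSet(bounds=(0, 1))        # continuous time domain
m.x = Var(m.t)                            # state variable x(t)
m.dxdt = DerivativeVar(m.x, wrt=m.t)      # its time derivative

m.ode = Constraint(m.t, rule=lambda m, t: m.dxdt[t] == -m.x[t])
m.x[0].fix(1.0)                           # initial condition

# Model transformation: convert the continuous model to algebraic form.
TransformationFactory('dae.finite_difference').apply_to(m, nfe=20, scheme='BACKWARD')
```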
In this work, a combination of models is used to forecast the GDP of China and analyze the parameters that may affect it. The combined model includes a linear regression model, to relate GDP to the significant parameters that affect it, and an Autoregressive Integrated Moving Average (ARIMA) model, to estimate and forecast those significant parameters. The predicted parameters were then substituted into the linear regression model, establishing a new model to forecast GDP. This new model was compared with a plain ARIMA model, and the more suitable of the two was used to predict China's future GDP. The result of this work is that the multiple linear regression model combined with the ARIMA model is more suitable for predicting China's GDP. The significance of this work is that it presents a new method combining two models, which is probably more suitable than previous methods; other studies of China's GDP can consider this new method.
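A minimal sketch of this combination on synthetic data (the series, ARIMA order and horizon below are hypothetical, not the paper's choices):

```python
# Fit ARIMA to each explanatory series, forecast it, then feed the forecasts
# into a linear regression of GDP on those series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 40                                                      # 40 years of annual data
X = np.cumsum(rng.normal(1.0, 0.3, size=(n, 2)), axis=0)    # two toy predictors
gdp = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, n)

reg = LinearRegression().fit(X, gdp)                        # GDP ~ predictors

# Forecast each predictor 5 steps ahead with ARIMA(1,1,0).
horizon = 5
X_future = np.column_stack([
    ARIMA(X[:, j], order=(1, 1, 0)).fit().forecast(horizon)
    for j in range(X.shape[1])
])
gdp_forecast = reg.predict(X_future)                        # combined-model GDP forecast
print(gdp_forecast)
```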
The depletion of energy and the environmental pollution caused by burning fuels are serious problems all over the world. There is still no new energy source that can take the place of combustible sources, so the best option at present is an energy transition using existing renewable energy sources. In this paper, the author analyzed total energy consumption (TEC) over the past 40 years and the shares of coal, petroleum, gas, and primary energy and other energy (PEE) in TEC. The purpose of this study is to determine the trend for each energy source, giving a clearer perspective on the change of the energy structure and the growth rate of PEE consumption. Two regression models, one linear and one nonlinear, are then constructed to determine the future of PEE by analyzing its predicted share of TEC. The models were analyzed through their regression equations, regression coefficients, p-values, goodness of fit (multiple R-squared) and confidence intervals. The author also drew figures for the residual fitting test, residual normality test, residual-variance equality test and outlier test. The goal of testing the models is to find the better model for estimating the proportion of PEE precisely, so as to observe the growth of sustainable energy and give a clearer perspective on the transformation of the energy structure. The multiple R-squared of the linear model on the full dataset is 0.87, and the testing accuracy improves to 0.9 after eliminating outliers; the multiple R-squared of the nonlinear model is higher, at 0.94. In conclusion, the future proportion of sustainable energy and its rate of increase can be predicted by these two models, and PEE can take a larger part in China's energy structure, helping to solve the current problems.
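A small sketch of this fit-and-diagnose workflow on toy data (the series below is synthetic; statsmodels reports the R-squared, p-values and confidence intervals named above, and SciPy supplies one residual normality test):

```python
# Fit OLS, read off R-squared, p-values and confidence intervals, and check
# residual normality, as in the diagnostics described in the abstract.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
year = np.arange(1980, 2020)
share = 0.05 + 0.004 * (year - 1980) + rng.normal(0, 0.01, year.size)  # toy PEE share

X = sm.add_constant(year)                  # intercept + linear trend
fit = sm.OLS(share, X).fit()
print(fit.rsquared)                        # multiple R-squared
print(fit.pvalues)                         # coefficient p-values
print(fit.conf_int())                      # confidence intervals

print(stats.shapiro(fit.resid))            # residual normality test
```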
The contradiction between the world's energy shortage and environmental pollution has seriously hindered the sustainable development of China's circular economy. The outline of the 13th Five-Year Plan shows that the comprehensive utilization of energy should be addressed from the strategic height of China's economic development. The energy consumption of papermaking enterprises has led the state to monitor them, and municipal governments at all levels should actively cooperate with the state to standardize the emission behavior of papermaking enterprises. Taking cooperation and decision-making with the environmental protection department as an example, and focusing on the department's supervision of paper enterprises, this paper uses a bargaining game to explore the cooperation costs between the environmental protection department and the paper enterprises, analyzes the results, and puts forward corresponding practical suggestions.
One large landfill in Beijing was selected for this study, and the changing volumes of landfill gas (LFG) were estimated from 18 years of buried-rubbish data, from 2003 to 2020. Two typical prediction models, Scholl-Canyon and LandGEM, were used to estimate the landfill gas production rate from 2004 to 2020. In the Scholl-Canyon model, the maximum volume of landfill gas appeared in 2009, followed by a slow decreasing trend. In the LandGEM model, gas production increased over the whole study period, reaching an inflection point in 2009 and then increasing slowly. The amount of rubbish has an important effect on landfill gas production, and annual analysis of the monthly LFG production showed an obvious seasonal variation, with peak values in both summer and autumn.
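Both models belong to the first-order-decay family; a minimal sketch of that common form (in the spirit of LandGEM, with hypothetical parameter values rather than the paper's data) is:

```python
# First-order-decay LFG estimate: each year's buried mass contributes
# k * L0 * mass * exp(-k * age) to the gas generated in year t.
import numpy as np

k = 0.05        # decay constant (1/yr), hypothetical
L0 = 100.0      # gas generation potential (m^3/Mg), hypothetical
years = np.arange(2003, 2021)
mass = np.full(years.size, 1.0e6)   # tonnes buried each year, hypothetical

def lfg(t):
    """Gas generated in year t by all waste buried in earlier years."""
    buried = years < t
    age = t - years[buried]
    return float(np.sum(k * L0 * mass[buried] * np.exp(-k * age)))

for t in range(2004, 2021):
    print(t, lfg(t))
```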
In recent years, energy consumption in Qinhuangdao has continued to increase, especially during the winter heating period, causing frequent haze weather. Based on the 2019 air pollutant emission inventory of Qinhuangdao, this study simulated the concentrations of the primary pollutants SO2, NO2 and PM10 with the WRF/CALPUFF model and analyzed the contributions of seven major industries to pollutants under haze weather. The manufacturing industry had the greatest influence on SO2, NO2 and PM10 concentrations, with contribution rates of 11.9%, 15.5% and 7.7%, respectively; the second was the energy supply industry, with contribution rates of 5.11%, 7.68% and 5.16%; shipping emissions had the least influence on SO2, NO2 and PM10 concentrations, with contribution rates of 1.42%, 1.56% and 0.98%.
Prediction technology can overcome the randomness and intermittency of photovoltaic power, which is important for large-scale PV integration and grid scheduling. Firstly, a mathematical description of the output fluctuation of photovoltaic power stations is obtained from satellite data; then a cluster analysis of the photovoltaic power stations in a provincial power grid is carried out, and the spatial correlation characteristics of the stations are summarized. Finally, a regional photovoltaic power forecasting model based on K-means clustering and a long short-term memory (LSTM) network is proposed.
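A minimal sketch of the cluster-then-forecast idea (illustrative only: the station profiles are random stand-ins and the network shape is not the paper's configuration):

```python
# Cluster stations by their output profiles, then fit an LSTM forecaster.
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
profiles = rng.random((30, 96))              # 30 stations x 96 samples (toy data)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(profiles)
print(np.bincount(labels))                   # cluster sizes

# One small LSTM forecaster per cluster: 24 lagged values -> next value.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(24, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# Training would use windows sliced from the aggregated output of each cluster.
```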
Gravity field modelling is an important research topic in geophysics, as well as a basic digital resource in natural resource exploration; it plays an important role in navigation, airborne geophysical exploration, and geodetic research. In this paper, based on the least-squares collocation (LSC) method and the inverse distance weighted (IDW) method, and considering the limitations of the single IDW method in local gravity field modelling, a combined method named IDW-LSC is proposed. A set of estimated gravity anomalies is first calculated by IDW; an error sequence is then computed from the estimated and the original data. By fitting the error sequence with the LSC algorithm, an error model of the survey region is established, which is used to correct the gravity anomalies estimated by IDW, yielding a new, optimized gravity model. In this study, a gravity anomaly dataset from an airborne gravimetry test over an area of China is used to verify the new method. The results show that the new method is more reliable than the single method and is promising for improving the precision of the gravity model.
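A minimal sketch of the IDW stage, the first step of the scheme (coordinates and anomaly values below are toy numbers):

```python
# Inverse-distance-weighted interpolation: estimate values at query points as
# distance-weighted means of the known points.
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

known = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
anomaly = np.array([10.0, 12.0, 11.0, 15.0])   # gravity anomalies (mGal), toy values
query = np.array([[0.5, 0.5]])
print(idw(known, anomaly, query))
# The residuals (IDW estimate minus observation) at known points would then be
# fitted by least-squares collocation to build the error model.
```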
The global climate problem has always been urgent. This article uses linear programming to establish a carbon emission control optimization model based on the theory of a low-carbon economy. Guangdong is the most economically developed province in China, its GDP having ranked first in China for 28 consecutive years, so the secondary industry of Guangdong Province is the research object of the model. The minimum total carbon emission is taken as the objective function; the emission-reduction targets China promised at the Copenhagen conference, the energy consumption of the various industries, and the economic growth of the industries are used as constraints. The optimization results show that, after applying the low-carbon economic optimization program of structural emission reduction, the carbon emissions of the secondary industry in Guangdong Province in 2020 will be significantly reduced compared with 2005. The development of a low-carbon economy can eliminate outdated production capacity with high energy consumption and high emissions, and increase investment in energy conservation, emission reduction and environmental protection technologies. It can prompt Chinese enterprises to optimize the economic structure and promote the upgrading of the industrial structure, thereby changing the mode of economic development and accelerating the establishment of a resource-saving society. At the same time, the development and utilization of new energy projects, environmental protection projects, and green energy-saving industries can alleviate the adverse effects of climate change, create jobs, and ease employment pressure.
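A toy sketch of such an emission-minimising linear program (all coefficients below are hypothetical, not the paper's data):

```python
# Minimise total carbon emissions subject to a required level of economic
# output and sectoral capacity limits.
from scipy.optimize import linprog

c = [0.9, 0.6, 0.3]            # carbon emitted per unit output of three sectors
A_ub = [[-1.0, -1.0, -1.0]]    # -(total output) <= -G  <=>  total output >= G
b_ub = [-100.0]                # required total economic output G
bounds = [(10, 60), (10, 60), (10, 60)]   # sectoral capacity limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)          # optimal sector outputs and minimum total emissions
```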
In order to study the development efficiency of China's coastal ports, this paper combines a slacks-based data envelopment analysis model (DEA-SBM) with the green total factor productivity (GTFP) index to systematically evaluate the green development efficiency of China's coastal ports from 2014 to 2019. The results show that the production efficiency of China's main coastal ports has improved to varying degrees, and that technological progress is the main driving force behind green port development efficiency. Based on these results, corresponding countermeasures are put forward to promote the development of green ports in China.
In recent years, with the continuous development of global electricity market reforms, electricity has become a freely tradable commodity. Because this special commodity cannot be stored, its price is affected by many complex factors and changes in real time, such as time, load and weather. It is precisely because of this particularity that the electricity price reflects the operating conditions of the electricity market and is a core indicator for evaluating the efficiency of market competition; effective forecasting of electricity prices is therefore necessary. Thanks to the development of computing power and large-scale data storage technology, machine learning is highly effective on time-series tasks with nonlinearity and volatility clustering. This paper proposes a real-time-market electricity price forecasting method based on CNN-BiLSTM multi-feature fusion, which mines the factors that affect electricity price changes and incorporates external factors such as weather into the forecasting model to effectively improve forecasting accuracy. The proposed method was evaluated experimentally on a Spanish electricity dataset, and the results show that it performs well.
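A minimal CNN-BiLSTM sketch of the architecture named above (illustrative shapes, not the paper's configuration): a 1-D convolution extracts local patterns from the multi-feature window, a bidirectional LSTM models the sequence, and a dense layer predicts the price.

```python
import tensorflow as tf

timesteps, features = 24, 8          # e.g. 24 hours of 8 fused features (hypothetical)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(timesteps, features)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1),        # next-hour electricity price
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```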
This paper uses the simulated meteorological data of the SSP126, SSP245 and SSP585 scenarios under the BCC-CSM2-MR model for Shaanxi, Gansu and Ningxia from 2025 to 2100 to project future climate change. The results show that from 2025 to 2100 the climate of this region warms as a whole under all pathways. Under SSP126, 50.41% of the Shaanxi-Gansu-Ningxia region will gradually become drier. Under SSP245 and SSP585, precipitation over more than 94% of the region increases gradually, with the increasing trend most obvious in northern and southern Shaanxi. In the future, the temperature and humidity of the Gansu Hexi Corridor, north-central Ningxia, and Yulin in Shaanxi Province will be significantly higher than the historical mean. Compared with the historical average, southeastern Gansu will become warmer and drier under SSP126 and SSP245.
Over the past decade, the volume of automated communication and posts on social media platforms has made it much easier to generate and spread hate speech, together with its social consequences. Social media companies have come under intense pressure to address the issue and help minimize incidences of hate speech on their platforms. In this regard, machine learning techniques such as natural language processing can help detect online hate speech. Natural language processing is a branch of machine learning that enables computers to understand, analyze and manipulate human language. Other deep learning techniques that could help explore this subject and improve hate speech detection, such as convolutional neural networks, recurrent neural networks, and graph neural networks, are also explored in this paper.
The use of resource-constrained devices is on the rise, and they are largely used to process sensitive data. Data security is increasingly important for both producers and users, and the main problem that renders these devices vulnerable is their lack of resources; attackers can exploit these flaws to gain access to sensitive information. To preserve the speed of the devices and avoid data loss, a carefully chosen and well-verified mathematical encryption algorithm should be used. Because of its simplicity, the RSA (Rivest-Shamir-Adleman) algorithm is a widely used, well-known public-key cryptosystem. The Digital Signature Algorithm (DSA) is the industry standard for digital signatures. Elliptic curve cryptography (ECC) performs public-key cryptography using elliptic curves over finite fields. This paper presents a comparison of these cryptographic algorithms to determine which has the better prospects.
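As a concrete reminder of how the RSA scheme under comparison works, here is a textbook toy example (tiny primes for illustration only; real deployments use keys of 2048 bits or more):

```python
# Toy RSA key generation, encryption and decryption.
p, q = 61, 53                 # two primes (toy sizes)
n = p * q                     # modulus, part of the public key
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

m = 42                        # message encoded as an integer < n
c = pow(m, e, n)              # encryption: c = m^e mod n
assert pow(c, d, n) == m      # decryption recovers the message
print(n, e, d, c)
```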
Computational Mathematics and Mobile Computing Visualization
Integrals along contours in the complex plane are difficult to evaluate when special cases arise. In this paper, we introduce a way to evaluate the integral of a function that has higher-order poles; the residue theorem and its limit formula are the effective tools to apply. In contrast to the simple-pole case, the higher-order case requires more attention to the contour graph, and understanding how the graph arises is beneficial for the deduction. From the figure of the contour we can write an equation: the left-hand side is the residue-theorem calculation, the graphical method, and the right-hand side is the sum of four integrals, the algebraic method. By separating the real and imaginary parts, we complete the proof. With the proof finished, it becomes easier for readers to solve higher-order problems, not just second-order ones.
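For reference, the limit formula alluded to here, the standard expression for the residue at a pole of order m, reads:

```latex
% Residue at a pole z_0 of order m of a function f:
\operatorname{Res}_{z=z_0} f(z)
  = \frac{1}{(m-1)!}\,
    \lim_{z \to z_0} \frac{d^{m-1}}{dz^{m-1}}
    \Bigl[(z - z_0)^m f(z)\Bigr].
```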
Stochastic processes are developing vigorously in both theory and application. Their basic knowledge and methods are necessary not only for mathematics and statistics majors but also for applications and research in fields such as communication, control, biology, social science, engineering and economics. For example, the splitting of atoms and radioactive decay, the processing of audio and video signals, the ups and downs of the stock market, the growth of biological populations, and the spread of infectious diseases are all closely related to stochastic processes. A stochastic process studies how random variables change with a time parameter. In the study of stochastic processes, one describes the internal laws of necessity through the appearance of contingency, expressing these laws in the language of probability so as to recover necessity from contingency. Probability is the study of uncertain phenomena; it reveals the internal laws contained in accidental phenomena and plays an important role in our understanding of natural and social phenomena. This paper mainly introduces the definitions and theorem proofs for Gaussian variables and Gaussian vectors, in preparation for a future investigation of Brownian motion.
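For reference, the standard characterization behind these definitions: a random vector X in R^n with mean vector mu and covariance matrix Sigma is Gaussian exactly when its characteristic function has the form

```latex
% Characteristic function of a Gaussian vector X with mean \mu and covariance \Sigma:
\mathbb{E}\bigl[e^{\,i\langle t, X\rangle}\bigr]
  = \exp\!\Bigl(i\langle t, \mu\rangle - \tfrac{1}{2}\, t^{\mathsf T}\Sigma\, t\Bigr),
  \qquad t \in \mathbb{R}^n .
```

Equivalently, X is Gaussian when every linear functional of X is a one-dimensional Gaussian variable.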
One of the research aims of numerical linear algebra is to find approximate solutions to mathematical problems posed in a continuous setting; such problems arise in engineering and natural science. As a fundamental computational tool, numerical linear algebra is frequently used in image and signal processing, data mining, computational finance, bioinformatics, telecommunication, fluid dynamics, and material science simulation. Matrices, one of its fundamental concepts, are ubiquitous in natural science and engineering. In this paper, we first investigate essential concepts for matrices: the definitions of the product of a matrix and a vector and of the product of two matrices are introduced, and the fundamental invariants of a matrix, such as range, null space and rank, are discussed. We also present several results on orthogonality and on different matrix norms. Finally, the Householder transformation is presented.
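For reference, the Householder transformation named at the end is the standard reflector determined by a nonzero vector v; it is symmetric and orthogonal, and it reflects vectors across the hyperplane orthogonal to v:

```latex
H = I - 2\,\frac{v v^{\mathsf T}}{v^{\mathsf T} v},
\qquad
H = H^{\mathsf T}, \quad H^2 = I, \quad Hv = -v .
```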
For some simple differential equations, it is possible to calculate exact expressions for a solution; for most differential equations, however, closed-form solutions cannot be obtained. Our purpose is to approximate solutions of differential equations, i.e., to look for a function (or some discrete approximation to this function) satisfying a given relationship between various of its derivatives on some given domain of space and/or time, with some boundary conditions. Generally, only rarely can an analytic formula be found for the solution. A finite difference method proceeds by replacing the derivatives in the differential equation with finite difference approximations. In this paper, we focus on the forward Euler method and its implementation, and we show how to solve ordinary differential equations with the forward Euler algorithm through two instances.
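A minimal sketch of the forward Euler scheme (a generic illustration on a test ODE, not necessarily one of the paper's two instances):

```python
# Forward Euler for y' = f(t, y), y(t0) = y0: replace the derivative with the
# forward difference (y_{n+1} - y_n) / h, giving y_{n+1} = y_n + h f(t_n, y_n).
import numpy as np

def forward_euler(f, t0, y0, h, n_steps):
    t = t0 + h * np.arange(n_steps + 1)
    y = np.empty(n_steps + 1)
    y[0] = y0
    for n in range(n_steps):
        y[n + 1] = y[n] + h * f(t[n], y[n])
    return t, y

# Example: y' = -2y, y(0) = 1, whose exact solution is exp(-2t).
t, y = forward_euler(lambda t, y: -2.0 * y, 0.0, 1.0, 0.01, 100)
print(y[-1], np.exp(-2.0 * t[-1]))        # numerical vs exact value at t = 1
```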
It is feasible to calculate exact expressions for the solutions of some basic differential equations; the closed-form solutions of the majority of differential equations, however, are impossible to find. Our aim is to build approximations to differential equation solutions, i.e., to find a function (or a discrete approximation to this function) that satisfies specified relationships between several of its derivatives over a certain domain of space and/or time, as well as some boundary conditions. Generally, only rarely can an analytic formula be found for the solution. In a finite difference approach, the derivatives in the differential equations are replaced by finite difference approximations. The forward Euler method and its implementation are the major topics of this article. This approach finds numerical solutions of differential equations that, under suitable conditions, are nearly equal to the exact solutions. Through several examples, we demonstrate how to solve ordinary differential equations using the forward Euler technique.
In this paper, we present a quasi-optimal sampling strategy for ordinary least squares (LSQ) regression. The quasi-optimal sampling strategy allows one to determine efficient sampling positions of the dependent variable when only a few samples of the independent variables are available. We also present a greedy algorithm for its implementation and demonstrate its high efficiency and fast convergence via numerical experiments.
In this paper, we prove that the classical Tietze-Urysohn theorem of analysis fails in constructive metric spaces. We find two examples of constructive metric spaces. In each of them, based on the existence of an un-extendable computable function, we create closed sets A and B such that there does not exist a constructive function f : X (or Y) → [0, 1] satisfying f⁻¹(0) = A and f⁻¹(1) = B.
A high-order accurate finite-difference method is used to simulate the hypersonic flow field around a blunt wedge with a wall roughness element. The influence of the position of an isolated wall roughness element on the interaction between the free stream and the wall is analyzed, and the effects of the element's position on wall pressure, wall friction and wall heat flux are studied. The results show that the roughness element forms two compression waves and one expansion wave in the flow field, and the wave intensity increases as the element moves forward. When the center of the roughness element is at xp ≥ 1.5, a vortex forms at its leading edge. Moving the roughness element backward lengthens the vortex, inhibits the change of the flow parameters over the front half of the element and promotes it over the rear half, which reduces the wall friction and inhibits heating of the front half of the element by the incoming flow.
The first paradox in the world was proposed in ancient Greece, and only thousands of years later was it tentatively resolved. Afterwards, with the continuous development of human thought, several new paradoxes were proposed, and people were confused by them for a long time. Some of them, however, can now be gradually explained thanks to the development of science. This paper analyses several classical paradoxes, namely Murphy's law, the waiting-bus paradox, the disease paradox, and the three-prisoners paradox. It is found that probability theory is an indispensable tool for analyzing them.
In a metric space, for any fixed point outside a compact set, we can always find a point within the compact set that minimizes the distance between the two points. Whether the same property holds in constructive metric spaces is the focus of this paper. Based on constructive real numbers and constructive metric spaces, we discuss this property and explore the distance function between a point and a compact set in a constructive metric space. We then construct an example of a constructive metric space X such that no algorithm can always choose a point c of a compact set C in X that is closest to a given point x ∈ X. The conclusion in constructive metric spaces thus differs from the classical one, which shows the difference between metric spaces and constructive metric spaces.
Fractal theory is a significant research subject in which diffusion-limited aggregation (DLA) is a valuable model. DLA is a simple model that can reflect a wide range of natural phenomena: through simple kinematic and dynamic processes, it produces a self-similar fractal structure with scale invariance. The growth process is dynamic and far from equilibrium, but the cluster structure has a stable, definite fractal dimension. In this paper, we study the fractal dimension of DLA, try to control its growth, and simulate growth processes that may occur in natural environments under various factors. We build a DLA growth model with simple Java code, using as stopping conditions either a fixed number of particles or growth of the aggregate until it touches a set boundary. We simulate the aggregation of randomly moving particles into DLA clusters, generate images to observe the results, and compute the fractal dimension of DLA with the density method. To control the growth of DLA, we modify parameters of the particles' random motion, including the step size, the probability of moving in different directions, and the initial generation region of the particles. By analyzing the fractal dimension, we find that it is strongly affected by changing the generation region of the particles. In studying growth control, we mainly examine how changing the probability of particles moving in different directions influences DLA growth (specific factors can cause this effect in natural environments); through these changes we can make DLA grow in a fixed direction. These straightforward simulations, computer programs based on DLA principles, let us analyze the growth of fractal systems in realistic situations.
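A minimal sketch of the density (mass-radius) method used here to estimate the fractal dimension: count the cluster mass M(r) within growing radii r and fit the slope of log M(r) against log r. The sketch is in Python rather than the authors' Java, and the point cloud is a random stand-in for a real DLA cluster:

```python
# Mass-radius estimate of the fractal dimension: M(r) ~ r^D, so D is the slope
# of log M(r) versus log r.
import numpy as np

rng = np.random.default_rng(3)
points = rng.normal(0, 20, size=(5000, 2))    # placeholder cluster coordinates

center = points.mean(axis=0)
dist = np.linalg.norm(points - center, axis=1)
radii = np.logspace(0.3, 1.5, 12)             # radii at which to count mass
mass = np.array([(dist < r).sum() for r in radii])

D, _ = np.polyfit(np.log(radii), np.log(mass), 1)
print("estimated fractal dimension:", D)      # ~2 for this non-fractal stand-in
```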
This work presents Minkowski's theory of the geometry of numbers. Theorem 1: let M ⊂ ℝⁿ be a lattice and C ⊂ ℝⁿ a subset; if Vol(C) > Covol(M), then there exist x ≠ y ∈ C such that x − y ∈ M. We use it to give congruence criteria for the representability of prime numbers by the forms a² + db², and to prove Lagrange's four-square theorem. Theorem 2: every positive integer is the sum of four squares.
Complex numbers are important in a variety of engineering areas; they are widely used in thermodynamics, mechanics, fluid mechanics, solid mechanics, and similar fields. They have a history of over 1200 years: since about 780, when they first entered mathematics, the theory has been developed by more than ten mathematicians. Today complex numbers form a rich subject that touches many branches of higher mathematics. This paper mainly discusses several quintessential aspects of complex numbers, including the construction of and the relevant operations on the set ℂ, and further explains the complex conjugation involved. The paper covers the algebraic construction of complex numbers and discusses in detail arithmetic operations on complex numbers, to reveal the intrinsic beauty of the world of complex numbers. We also work through two examples and their solutions to make the point more specific.
By using the method of residues, we give a hypergeometric representation for the generalized harmonic numbers; sums involving harmonic numbers then fall within the scope of classical hypergeometric algorithms. With this method, we give computer-assisted proofs of many known identities. Moreover, we establish several new combinatorial identities involving the generalized harmonic numbers.
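For reference, the generalized harmonic numbers in question are standardly defined by

```latex
% Generalized harmonic numbers of order m (m = 1 gives the ordinary ones):
H_n^{(m)} = \sum_{k=1}^{n} \frac{1}{k^m},
\qquad
H_n = H_n^{(1)} = \sum_{k=1}^{n} \frac{1}{k} .
```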
The geometric Brownian motion (GBM) model is overwhelmingly advantageous as a model for simulating stock prices. GBM is a Markov process and can be a martingale under certain conditions, a feature consistent with stock-price volatility; additionally, the GBM model emphasizes the percentage rather than the absolute amount of change in the stock price, another feature consistent with the stock market. Accordingly, "over all time horizons the chances of a stock price simulated using GBM moving in the same direction as real stock prices was a little greater than 50 percent." In practice, the GBM model has been widely applied in many areas. This research combines mathematics, finance, and computer science, applying the GBM model in Python with relatively basic methods to forecast certain stock-price changes.
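A minimal sketch of such a Python GBM simulation (the drift, volatility and horizon below are hypothetical, not the study's calibration):

```python
# Simulate GBM paths with the exact log-normal update:
# S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma * sqrt(dt) * Z), Z ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(4)
S0, mu, sigma = 100.0, 0.08, 0.2         # initial price, drift, volatility (toy)
dt, n_steps, n_paths = 1 / 252, 252, 5   # daily steps over one trading year

Z = rng.standard_normal((n_paths, n_steps))
log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
paths = S0 * np.exp(np.cumsum(log_returns, axis=1))   # simulated price paths
print(paths[:, -1])                      # price after one year on each path
```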
Pell’s equation (y² − Dx² = 1) has challenged mathematicians for hundreds of years, from its study in India in 628 until it was finally solved in 1657. It also has very wide applications, from rational approximation of square roots to finding solutions of polynomials, and to cryptography when generalized. It is therefore interesting and necessary to learn more about it, especially to look for a pattern that most or all solutions follow, since this is seldom researched. This paper attempts to gain a deeper understanding of the solutions of Pell’s equation by first finding the number of solutions and then finding all solutions for D = 2, 3 and 5. It is found that all solutions form a geometric-sequence-like pattern in two variables; however, if the two variables are separated, they are geometric sequences with the same common ratio only when the values of D and the number of terms are small.
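A classical fact behind the pattern described, stated here in the abstract's variable convention (a sketch of the standard result, with (y₁, x₁) the smallest positive solution): all positive solutions arise from powers of the fundamental solution.

```latex
% All positive solutions (y_k, x_k) of y^2 - D x^2 = 1 come from powers of the
% fundamental solution (y_1, x_1):
y_k + x_k\sqrt{D} = \bigl(y_1 + x_1\sqrt{D}\bigr)^{k},
\qquad
\begin{aligned}
y_{k+1} &= y_1 y_k + D\,x_1 x_k,\\
x_{k+1} &= y_1 x_k + x_1 y_k.
\end{aligned}
```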
Because of the scarcity of raw materials for essential purposes, using them more efficiently has become a necessary question, and we use optimization to find the best way to allocate resources. In this essay, the definition is covered, several ways of solving such problems are presented, and some ideas about combining these solutions are given. We also suggest how to make the solution process more efficient and cost-effective. We explain and introduce the methods people usually used before, and use data to show the advantages and disadvantages of our method. Ultimately, it is shown that among the five methods presented in this essay, the UCM method reaches the optimal value within a given precision fastest.
Ridge regression and Lasso are two regression methods that, through regularization, make up for defects of OLS, such as the OLS estimator not existing uniquely when XᵀX is singular. This paper reviews and compares the ideas of OLS, ridge regression and Lasso, and summarizes their main developments and previous scholars' results. Ridge regression adds the ℓ2 penalty term as a constraint, while Lasso adds the ℓ1 penalty and thus gains a variable-selection function: the coefficients of some weakly related variables are shrunk directly to zero. Compared with OLS, ridge regression and Lasso are biased, so choosing among the three requires considering the specific situation. Facing the problem of short-term lag in calculating the GDP deflator, this paper applies ridge regression and Lasso to GDP deflator analysis through a regression model of the GDP deflator with 14 variables, and compares the results with OLS's. The post-Lasso estimator is also included to address the bias, and the conclusion shows that core inflation is the most economically significant variable for the GDP deflator in current-period estimation.
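For reference, the two penalized objectives being compared are the standard ones:

```latex
% Ridge and Lasso estimators as penalized least squares:
\hat\beta_{\mathrm{ridge}}
  = \arg\min_{\beta}\; \|y - X\beta\|_2^2 + \lambda \|\beta\|_2^2,
\qquad
\hat\beta_{\mathrm{lasso}}
  = \arg\min_{\beta}\; \|y - X\beta\|_2^2 + \lambda \|\beta\|_1 .
```

The ℓ1 penalty is what produces exact zeros in the Lasso coefficients, hence its variable-selection behaviour.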
Nonlinear differential equations are widely used in many fields such as physics, electromagnetics and mechanics, and their solution is always an important research topic. In Chapter 2, we take solving higher-order nonlinear evolution equations as an example and use the trial equation method. In Chapter 3, the following fifth-order nonlinear equation is solved by the trial equation method:
$u_t + \bigl[\alpha u_x^2 + \beta u u_{xx} + \mu u_{xx} + u_{xxxx} + P(u)\bigr]_x = 0$,
where $\alpha$, $\beta$, $\mu$ are constant parameters and $P(u)$ is a cubic polynomial of $u$: $P(u) = pu + qu^2 + ru^3$, with $p$, $q$ and $r$ constants. The specific method is to apply a traveling-wave transformation to the equation, simplify the transformed equation, and integrate it; secondly, the trial equation is constructed and transformed into elementary integral form; finally, the traveling-wave solutions of the partial differential equation are obtained by the complete discriminant system of a polynomial.
Systems of nonlinear monotone equations have applications in many fields, such as engineering, economics, management science, probability theory, differential equations and other applied sciences. In this paper, a multivariate spectral projection method for solving nonlinear monotone problems is presented. The proposed method, which combines a derivative-free spectral algorithm with a projection method, is in fact a multivariate version of the spectral algorithm. The method can also be regarded as a quasi-Newton-type method that uses a non-scalar diagonal matrix as the approximation of the Jacobian. Under the conditions that the nonlinear equations are monotone and Lipschitz continuous, we show that the method converges globally to a solution of the system. Numerical experiments are also given to show that the method is efficient for nonlinear monotone problems.
Complex analysis is the main branch of the theory of complex functions. It is widely utilized in applied mathematics and physics, including algebra, combinatorics, nuclear engineering, aerospace engineering, and fluid mechanics; we therefore need to study complex variables in depth to take advantage of them, which requires learning and applying a number of theorems. In this paper in particular, we aim to evaluate a complicated integral in which f(z) is defined as (az³ + bz² + cz + d)/(z⁴ − 1) with a = 10, b = 1 + i, c = −4, d = 1 − i. Where the integrand is analytic, we rely on the Cauchy-Goursat theorem; at points where the function fails to be analytic, however, the Cauchy-Goursat theorem no longer applies, and Cauchy's residue theorem becomes necessary, since it is especially useful at such points. Afterwards, we use computation, combination and simplification of equations to secure the final results, and we finally place all the residues on the graphs.
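Since z⁴ − 1 has four simple zeros at ±1 and ±i, the residues of f can be read off with the standard g/h′ rule (a sketch of the computation the abstract describes):

```latex
% Residue of f(z) = g(z)/h(z) at a simple zero z_0 of h (here h(z) = z^4 - 1):
\operatorname{Res}_{z=z_0} \frac{g(z)}{h(z)} = \frac{g(z_0)}{h'(z_0)}
  = \frac{a z_0^3 + b z_0^2 + c z_0 + d}{4 z_0^3},
\qquad z_0 \in \{1,\, -1,\, i,\, -i\}.
```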
Since linear systems of equations contribute greatly to the study of numerical PDEs, they have been studied for ages. In this paper, I present a survey of three methods for solving linear systems, together with their implementations; after that, methods for finding the inverse of a specific sparse matrix are shown. The three methods are Gaussian elimination, LU decomposition and Cholesky decomposition. These algorithms can be implemented by invoking the related built-in functions in MATLAB: LU decomposition, which factors a matrix into a lower and an upper triangular matrix, can be performed by calling lu(), and Cholesky factorization can be achieved by calling chol().
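An equivalent sketch in Python, using SciPy's analogues of MATLAB's lu() and chol() (an illustration, not the paper's MATLAB code):

```python
# Solve A x = b via LU and, for symmetric positive definite A, via Cholesky.
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve, cholesky, cho_solve

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite example
b = np.array([1.0, 2.0])

P, L, U = lu(A)                          # A = P @ L @ U, explicit factors
x_lu = lu_solve(lu_factor(A), b)         # solve using the LU factorization

C = cholesky(A, lower=True)              # A = C @ C.T, C lower triangular
x_chol = cho_solve((C, True), b)         # solve using the Cholesky factor

print(x_lu, x_chol, np.linalg.solve(A, b))   # all three should agree
```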
Xinlei An first proposed the An system in 2010, and it has since been studied by many scholars, but there are few control studies of the fractional-order An system. In this paper, using fractional calculus theory and an adaptive finite-time control method, the dynamics of the fractional-order An system are studied. To eliminate the chaotic behavior of the An system, an adaptive finite-time controller is designed, and its soundness is proved with chaos-control theory. Finally, numerical simulation verifies the effectiveness, convergence and robustness of the controller.
In this article, we prove the following two basic results of arithmetic dynamics for a rational function f : P¹(K) → P¹(K) of degree d > 1, where the result depends on the field K. If K is an algebraically closed field of characteristic 0, then #Fix(fⁿ) = dⁿ + O(1). If K is a number field, then f has only finitely many preperiodic points.
At the beginning of 2020, COVID-19 broke out in Wuhan and quickly swept the world, and global epidemic prevention and control still faces severe challenges; scientific and effective modelling of the epidemic is crucial to its prevention and control. In this paper, a COVID-19 diffusion prediction model is established based on impulsive partial differential equations and traditional infectious-disease models. It can describe the spatial diffusion of the virus, which other models lack. The model divides the total population into seven groups (susceptible, quarantined, exposed, asymptomatic, infected, diagnosed and recovered) while considering the influence of time and space on the spread of the virus. To test the model, we take Jiangsu Province in China as an example, compare the calculated results with the actual data, and verify the effectiveness of the model through numerical calculation.
Linear regression can be used to find the influence of independent variables on a dependent variable in a linear way. First, the definition, the use of ordinary least squares to estimate the regression coefficients, some properties of the least-squares estimates, and some test methods for the simple linear regression model are introduced. Then the number of independent variables is extended from one to several: the multiple linear regression model is illustrated from the same aspects as the simple one, and the situation where ordinary least squares has multiple solutions, together with collinearity tests for the independent variables, is elaborated. Finally, to avoid the inaccuracy of least-squares estimates caused by collinearity of the independent variables, the definition of ridge regression and the properties of the ridge estimate are stated.
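For reference, the ordinary least squares estimate discussed here has the standard closed form (when XᵀX is invertible, which is exactly what collinearity can destroy):

```latex
% OLS coefficient estimate for the linear model y = X\beta + \varepsilon:
\hat\beta = \arg\min_{\beta} \|y - X\beta\|_2^2
          = (X^{\mathsf T} X)^{-1} X^{\mathsf T} y .
```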
This paper mainly explores the relationship between the fractal dimension of diffusion-limited aggregation (DLA) models and traps on the field that may consume particles. Diffusion-limited aggregation has been applied widely in fields such as biology, physics, mathematics and engineering. Nevertheless, one widespread problem in research on diffusion-limited aggregation is the frequent neglect of particle loss during Brownian motion; how the structure of DLA models changes under such environmental variations has seldom been investigated. After simulations in Java, by observing the generated paths, data tables and scatter plots of each DLA model, it is concluded that, within a certain range of trap probabilities, the fractal dimension of DLA models and the probability of traps on sites of the field are negatively correlated: increasing the trap probability decreases the fractal dimension. The research supports the conjecture that a number of studies of diffusion-limited aggregation may have had their results influenced by ignoring the particle-loss problem.
In this paper, we study the existence of solutions of fractional Laplace equations with logarithmic nonlinearity. Using the fractional logarithmic Sobolev inequality and the linking theorem, we present an existence theorem for the ground-state solutions of this nonlocal problem.
The rapid development of mathematics is due in large part to the creation of complex numbers. Although complex numbers are strange and seemingly meaningless notations to most people, they play important roles in engineering. A better understanding of the geometric and algebraic structure of complex numbers enables us to study complex analysis better, and complex numbers convince scientists that our world is magical, full of wonderful insights, even miraculous. In this paper, I first review several basic properties of complex numbers: the set of complex numbers is a group under addition, and the nonzero complex numbers form a group under multiplication. Then I visualize complex numbers by building a bijection between complex numbers and points of the complex plane. I also give several alternative forms of complex numbers, namely the trigonometric and general forms. Finally, by invoking the arithmetic properties of complex numbers, I prove two trigonometric identities.
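As an illustration of how such arithmetic yields trigonometric identities (a standard example, not necessarily the paper's two): squaring the trigonometric form and comparing real and imaginary parts gives the double-angle formulas.

```latex
(\cos\theta + i\sin\theta)^2
  = \cos^2\theta - \sin^2\theta + 2i\,\sin\theta\cos\theta
  = \cos 2\theta + i\,\sin 2\theta ,
```

so that cos 2θ = cos²θ − sin²θ and sin 2θ = 2 sin θ cos θ.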
The complex numbers are a relatively recent concept in the history of algebra: they were first applied to actual problem solving no earlier than the 16th century. As a kind of number not accepted by most mathematicians a few hundred years ago, they formed an abstruse and intricate field of study at that time. Nonetheless, famous mathematicians such as Leonhard Euler, Abraham de Moivre and Augustin-Louis Cauchy gradually developed complex analysis into a mature subject. This paper systematically goes over basic properties and crucial facts about complex numbers, analyzes some higher-level properties and theories, and introduces important concepts and theorems such as the Cauchy sequence and de Moivre's theorem. Moreover, it visualizes complex numbers beyond algebra, from a more geometric perspective. Eventually, combining these concepts, we are able to state and prove a useful theorem involving the application of complex numbers.
In this paper, the existence of two nonnegative solutions for a class of fractional differential equations with Kirchhoff terms is studied. First, the Nehari manifold is introduced through the first-order derivative of the energy functional and its dual pairing, and the corresponding fiber mapping is given. Then the Nehari manifold is divided into three regions using the sign of the second derivative of the fiber mapping. Thus, it is proved that the equation studied in this paper has two different nonnegative solutions.
When remarking on results in this field of mathematics, mathematicians and engineers are inclined to use terms like magical or even miraculous, and in exploring the topic both continue to be amazed by the beauty and sweep of the conclusions. The complex number field is an algebraically closed number system that contains all the real numbers. This article focuses on the geometric and algebraic structures of complex numbers: their definition, arithmetic operations, field structure and polar form. To elaborate on the connections between complex numbers and trigonometry, de Moivre's formula, one of the most important results in the theory of complex numbers, is discussed. Topological properties of the complex plane are also discussed.
Number theory is an important cornerstone of cryptography, and modern public-key cryptography has been applied in many fields due to its security and flexibility. This paper focuses on elliptic curve cryptography. It discusses the derivation of the group-law formulas and the encryption method of the elliptic curve cryptosystem, implements the cryptosystem in code, simulates the whole process of encryption and decryption, and discusses the security of the elliptic curve cryptosystem.
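To make the group law concrete, here is a toy Python sketch of elliptic-curve point addition and double-and-add scalar multiplication over a small prime field. The curve parameters, base point and keys below are illustrative assumptions, far too small for real security, and this is a sketch rather than the paper's implementation.

```python
# Toy elliptic curve y^2 = x^3 + a*x + b over F_p, with the standard group law.
p, a, b = 97, 2, 3           # small illustrative parameters
O = None                     # point at infinity (group identity)

def add(P, Q):
    """Group law on the curve; O is the identity."""
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                           # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    """Scalar multiplication by double-and-add, the core of ECC key generation."""
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

# Diffie-Hellman-style exchange: both sides derive the same shared point.
G = (3, 6)                   # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
ka, kb = 11, 19              # toy private keys
assert mul(ka, mul(kb, G)) == mul(kb, mul(ka, G))
print("shared point:", mul(ka, mul(kb, G)))
```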
Complex numbers help engineers and scientists better understand the practical world. Systems with sinusoidal inputs, such as electric circuits, are especially easy to analyze and model using complex numbers. In this paper, we summarize the fundamental arithmetic properties of complex numbers. First, we review the introduction and definition of complex numbers. Next, we visualize every complex number as a point on the plane.
The four-colour problem came from the practice of drawing maps; it was called the four-colour conjecture before it was proved. This paper attempts to simplify the problem to some extent and gives a simplified proof, obtaining the expected results. In the process, two theorems that can be used to simplify vertex-colouring problems are also obtained.
BCK/BCI algebras are two classes of logic algebras, and previous studies have enriched their theory. Some researchers have solved the counting problem for BCI algebras with fewer than six elements, but the counting problem for general BCI algebras remains unsolved. In this paper, the coset of a BCI-algebra is defined, and some properties of cosets on certain sub-algebras are studied. Finally, the cosets of right-eliminable BCI-algebras are studied, and a theorem for counting BCI-algebras is given.
The inverse matrix problem is an active research topic in computational mathematics [1]. It has broad applications in engineering and scientific computation, and has a strong physical background and practical significance [2]. This paper explores the inverse eigenvalue problem for a bordered anti-tridiagonal matrix. It first establishes the existence and uniqueness of the solution, then elaborates on a recursive expression for the solution, uses a numerical example to show the effectiveness of the algorithm, and finally summarizes the significance of this work and points out directions for further study.
The complex numbers exist as an extension of the real numbers, providing mathematicians with insights and practical tools for problems whose solutions were beyond what mathematicians and physicists knew prior to the 18th century. Complex analysis deals with the basic notions of mathematical analysis, differentiation and integration, going back to Newton, in the context of functions that are defined on and take values in the complex plane. This essay summarizes the geometric and algebraic properties of complex numbers to help readers understand integrals of complex functions. We discuss, first, the basic definition and algebraic properties of complex numbers; second, operations on and analysis of functions on the complex plane; third, integrals along curves in the complex plane. Finally, we prove some theorems about complex series.
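For orientation, the standard definition of the integral along a curve (a fact of the subject, recorded here as illustration) is: for a smooth curve \( \gamma : [a, b] \to \mathbb{C} \),

\[ \int_\gamma f(z)\,dz = \int_a^b f(\gamma(t))\,\gamma'(t)\,dt. \]

For example, with \( f(z) = 1/z \) and \( \gamma(t) = e^{it} \), \( t \in [0, 2\pi] \), the unit circle, one gets \( \int_\gamma \frac{dz}{z} = \int_0^{2\pi} e^{-it}\, i e^{it}\,dt = 2\pi i \).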
Differential equations have a wide range of practical applications. This paper first reviews the basic theory of differential equations and then studies their application to electric circuits. First- and second-order differential equations are simulated in Multisim, and their solutions are studied by observing the response of the circuit. Because analogous systems share the same governing equations, Multisim circuit simulation can also be used to model other systems and obtain intuitive conclusions; simulating differential equations through Multisim circuits is therefore of real value for the study of similar systems.
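As a minimal numerical counterpart to the circuit simulation, the sketch below integrates the first-order RC charging equation dV/dt = (Vs − V)/(RC) with Euler's method and compares it against the exact response Vs(1 − e^(−t/RC)). The component values are illustrative, not taken from the paper's Multisim schematics.

```python
import math

R, C, Vs = 1e3, 1e-6, 5.0     # 1 kOhm, 1 uF, 5 V step input
tau = R * C                   # circuit time constant
dt, V, t = tau / 100, 0.0, 0.0
for _ in range(300):          # simulate three time constants
    V += dt * (Vs - V) / tau  # Euler step of dV/dt = (Vs - V) / tau
    t += dt
exact = Vs * (1 - math.exp(-t / tau))
print(f"t = {t/tau:.1f} tau: Euler V = {V:.4f} V, exact V = {exact:.4f} V")
```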
Functions in the Clifford algebra Cl_(0, n+1) are important candidates for studying solutions of complex equations in elastic mechanics, fluid mechanics and Maxwell's equations. In this paper, the solutions of a series of complex partial differential equations in several complex variables are connected with the hyperbolic harmonic functions in Cl_(0, n+1), and some properties of hyperbolic harmonic functions are studied in detail. Furthermore, necessary and sufficient conditions for hyperbolic harmonic functions are obtained.
Complex numbers play a fundamental role in a variety of research fields. In this paper, we first review the basic theory of complex numbers, introducing them as ordered pairs of coordinates. Then we show that no order compatible with the field operations can be defined on the set of complex numbers. We elaborate on both the algebraic and geometric structures of complex numbers, give two elegant results proved by invoking properties of complex numbers, and finally give a detailed proof of a beautiful inequality.
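The impossibility of ordering the complex numbers admits a one-line argument worth recording here (a standard proof, sketched as illustration): in any ordered field every nonzero square is positive, so an order on \( \mathbb{C} \) compatible with the field operations would force \( -1 = i^2 > 0 \) and \( 1 = 1^2 > 0 \), whence \( 0 = (-1) + 1 > 0 \), a contradiction.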
To increase accuracy beyond Euler's method, it is natural to seek higher-order approximations to ordinary differential equations. Euler's method is, intuitively, a linear Taylor polynomial approximation, so it is reasonable to design higher-order Taylor approximations; as a result, a family of high-order methods emerges. Deriving a Taylor method requires higher regularity of the solution: higher-order expressions are obtained by differentiating the differential equation along the solution itself, which is usually time-consuming. The idea behind Runge-Kutta methods is to approximate these derivative terms by combining compositions of the right-hand-side function of the differential equation. In this paper, we review the Taylor method and Runge-Kutta methods, and give a detailed proof that the discrete Galerkin method is equivalent to an implicit Runge-Kutta method. These methods are stable and easy to implement.
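For concreteness, here is a minimal Python sketch of one step of the classical fourth-order Runge-Kutta method for y' = f(t, y), tested on y' = y, y(0) = 1, whose exact solution is e^t. This is the standard explicit RK4 scheme, not the implicit Runge-Kutta method the paper relates to the Galerkin construction.

```python
import math

def rk4_step(f, t, y, h):
    """One classical RK4 step: four slope evaluations, weighted 1-2-2-1."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda t, y: y            # test problem y' = y
t, y, h = 0.0, 1.0, 0.1
while t < 1.0 - 1e-12:        # integrate to t = 1
    y = rk4_step(f, t, y, h)
    t += h
print(f"RK4: {y:.8f}   exact: {math.e:.8f}")   # agreement to ~1e-6 with h = 0.1
```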
The Pythagorean equation x^2 + y^2 = z^2 has infinitely many solutions (x, y, z). These solutions have a geometric meaning and are known as Pythagorean triples, such as (3n, 4n, 5n) and (7n, 24n, 25n). Starting from this point, the subject of this paper is the concept of a Diophantine equation and one of its most famous examples, Fermat's last theorem. Geometric methods and the method of infinite descent are the main tools considered. Although the geometric method fails to settle the equation, it builds a bridge between triangle theory and algebraic equations, and infinite descent proves genuinely useful for this problem. Some related conclusions for other exponents n, as well as the ideas of several famous mathematicians, are presented as extended content, and throughout the paper the simplest and most understandable methods are sought, even when an approach is eventually disproved. The main results are that the cases n = 3 and n = 4 can be settled by infinite descent, which produces smaller and smaller solutions until a contradiction is reached, whereas the geometric approach via the cosine rule handles only part of the case n = 3: it succeeds when cos(c) ≤ 0 but runs into difficulty when cos(c) lies in (0, 1/2). A triangle-based method via the cosine rule therefore cannot, at present, deal with Fermat's last theorem in full, and further study is needed.
Previously, it was observed that for some regions in ℝ^2 there is a derivative relationship between area and perimeter (in ℝ^3, between volume and surface area) satisfying dA(a)/da = λP(a) (dV(a)/da = λS(a) in ℝ^3), where λ is a constant; such an a is called a linear dimension of the region. In this paper, we further explore the corresponding partial-derivative relationship for some two- and three-dimensional regions, that is, the situation where A and P (or V and S) depend on several variables. We show that multi-linear dimensions, namely the distances from the circumcenter to each side of the polygon, exist for a class of irregular polygons. The necessary conditions are that the polygon is inscribed in a circle and that the circumcenter does not lie on any of its sides.
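A worked instance of the single-variable relation (standard, included for concreteness): for a circle of radius \( r \), \( A(r) = \pi r^2 \) and \( P(r) = 2\pi r \), so \( dA/dr = 2\pi r = P(r) \) with \( \lambda = 1 \). Likewise, parametrizing a square by its apothem \( a \) (the distance from the center to each side, i.e. half the side length), \( A(a) = 4a^2 \) and \( P(a) = 8a \), so \( dA/da = 8a = P(a) \). Measuring from the circumcenter's distances to the sides is what the paper generalizes to irregular inscribed polygons.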
This paper is a survey of research on polygonal numbers by other mathematicians, together with summaries of the corresponding proofs. It mainly reviews triangular numbers, square numbers, Gauss's triangular number theorem, Lagrange's four-square theorem, Cauchy's polygonal number theorem and Fermat's polygonal number theorem, and briefly reviews Waring's problem and the sum of three cubes. The conclusion is that any positive integer can be decomposed into a fixed number of "two-dimensional" polygonal numbers, such as triangular, square or pentagonal numbers. Later studies of polygonal numbers will focus on "higher-dimensional" analogues, such as cubic numbers.
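For reference, the nth s-gonal number has the closed form (a standard formula, not original to the survey)

\[ P_s(n) = \frac{(s-2)\,n^2 - (s-4)\,n}{2}, \]

which gives the triangular numbers \( n(n+1)/2 \) for \( s = 3 \) and the squares \( n^2 \) for \( s = 4 \); Fermat's polygonal number theorem asserts that every positive integer is a sum of at most \( s \) s-gonal numbers.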
The rapid generation of high-quality Gaussian random numbers (GRNs) is an important capability for modeling across a wide range of areas. Advances in computation have given us the ability to run simulations with very large quantities of random numbers, but they have also set ever-higher standards for the quality of GRN generators. In this paper, the importance of high-quality generation of Gaussian random variables (GRVs) is shown through detailed proofs of properties of Brownian motion (BM), and several important properties of geometric BM are also discussed. The Box-Muller transformation for generating Gaussian variables is then presented.
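A minimal Python sketch of the Box-Muller transform: two independent uniforms on (0, 1) are mapped to two independent standard Gaussians. The sample-size and the moment check below are illustrative.

```python
import math, random

def box_muller():
    """Map two uniforms to two independent N(0, 1) samples."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(-2.0 * math.log(u1 if u1 > 0 else 1e-12))  # guard log(0)
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

samples = [z for _ in range(50_000) for z in box_muller()]
mean = sum(samples) / len(samples)
var = sum((z - mean) ** 2 for z in samples) / len(samples)
print(f"mean = {mean:.3f} (expect 0), variance = {var:.3f} (expect 1)")
```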
Complex numbers are an important part of modern mathematics. Using complex numbers we can solve many geometry problems, and they also apply to physics; for example, they play an important role in analyzing alternating current. In this paper, we introduce the definition and several properties of complex numbers, including some point-set topology. We prove important theorems about the complex numbers, for instance de Moivre's formula and results on connected and path-connected sets, as well as the fact that there is no order on the complex numbers compatible with the field operations. To achieve these goals, we use induction, Euler's formula, basic concepts of topology, and proof by contradiction.
We construct a 1 × 2 matrix (a_n, b_n) whose entries are constructive real numbers. We demonstrate that the values of the minmax and maximin of any such matrix can be algorithmically determined. In the second part of the paper, we show by contradiction that no program can find the coordinates of the maximin or the minmax. Together, these results prove that every matrix with constructive entries has a maximin, but that no program can, given an arbitrary matrix, output the coordinates of the maximin.
A program equipped with a second program that measures the convergence rate of its Cauchy sequence produces a constructive real number. One cannot determine the exact value of the limit, but only estimate it. In this paper, we first propose the concept of Left (L) numbers, which arise when this convergence regulator may be absent, and then prove two theorems about them. By definition, L numbers can be compared using the symbols >, ≥, < and ≤. The first result is that there is no algorithm which, for any L numbers a, b and a rational number q, determines whether a ≤ b + q or a ≥ b − q. The second is that if ¬(a > b), where a and b are Left numbers, then a ≤ b. The methods we use are proof by contradiction, supported by figures.
With the development of natural language processing and artificial intelligence, more and more industrial applications have landed, and our lives have been improved by intelligent voice assistants: we can control home appliances by talking to a voice assistant and can also chat with it. Ultimately, these activities are extensions of the dialogue system, and within a dialogue system historical dialogue information plays a vital role. The contribution of this article is to use cosine similarity to compare the current question with the preceding chat history: if the similarity is high, the earlier turn is added to the sentence embedding that forms the final model input. The advantage is improved accuracy when the machine answers questions over multiple rounds of conversation.
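A minimal sketch of the retrieval step described above: score each past turn by cosine similarity against the current question and keep the most similar ones. The vectors here are toy stand-ins; the paper's sentence embeddings would come from a trained encoder, and the threshold is illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors (0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

history = {"turn on the living room light": [1, 1, 0, 2],
           "what's the weather tomorrow":   [0, 2, 1, 0]}
query_vec = [1, 1, 0, 1]                      # embedding of the current question
scores = {turn: cosine(vec, query_vec) for turn, vec in history.items()}
relevant = [t for t, s in scores.items() if s > 0.7]   # keep similar turns
print(scores)
print("added to model input:", relevant)
```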
With rapid changes in the financial market and increasingly strict financial supervision, banks need to further optimize their balance sheets and promote the steady development of their business. This paper takes the Bank of Shanghai as its object of analysis and adopts a multi-objective programming method to establish an optimal allocation model for the whole asset-liability portfolio (both stock and increment) based on five objectives: growth, liquidity, interest-rate risk control, profitability and security. The empirical results show that the optimal solutions corresponding to different priorities take all five objectives into account simultaneously. Comparing predicted values with their upper limits over the forecast period reveals room for improvement in assets such as lending funds, loans and advances, and in equity items such as the surplus reserve.
At present, China's construction industry is developing continuously, and as supply chains mature, accurate supplier evaluation and selection methods are of great strategic significance. Based on Problem C of the 2021 national mathematical modeling competition, this paper studies the selection of important building-material suppliers. The analytic hierarchy process assigns the weight of each evaluation index, and an evaluation model is established using the technique for order preference by similarity to ideal solution (TOPSIS) and the grey comprehensive evaluation method. By calculating the importance of each supplier, the top 50 building-material suppliers, headed by S229 and S140, are selected and analyzed. Based on the model and results, the paper gives suggestions for improving strategic decision-making.
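A compact sketch of the TOPSIS step with made-up numbers (the paper's actual indicators, AHP weights and grey-evaluation step are not reproduced): normalize the decision matrix, weight it, and rank alternatives by closeness to the ideal solution.

```python
import numpy as np

# Rows: suppliers; columns: benefit-type indicators (all "larger is better").
X = np.array([[0.8, 120.0, 0.95],
              [0.6, 200.0, 0.90],
              [0.9,  80.0, 0.99]])
w = np.array([0.5, 0.2, 0.3])                 # illustrative AHP-style weights

V = w * X / np.linalg.norm(X, axis=0)         # vector-normalize, then weight
best, worst = V.max(axis=0), V.min(axis=0)    # ideal and anti-ideal solutions
d_best = np.linalg.norm(V - best, axis=1)     # distance to the ideal
d_worst = np.linalg.norm(V - worst, axis=1)   # distance to the anti-ideal
closeness = d_worst / (d_best + d_worst)      # 1 = ideal supplier
print("ranking (best first):", np.argsort(-closeness))
```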
With the rapid growth of the Internet, more and more people use online social media, where hate speech has become rampant; it is important to detect hate speech and control it before it spreads. With the introduction and development of deep learning, hate speech detection has become practical. Many studies use data from social platforms such as Twitter and Facebook together with machine learning or deep learning to detect and recognize hate speech, yet there are few reviews of this area. This paper therefore aims to provide a review of machine learning and deep learning approaches to hate speech detection.
As COVID-19 has spread worldwide, detecting COVID-19 patients and taking effective action have gained increasing importance, and deep learning frameworks have been applied to medical image analysis for years. This paper trains three networks, AlexNet, VGG and ResNet, on a large number of CT images of patients and healthy people. Built on PyTorch, the networks are evaluated on test and validation datasets. Our experiments demonstrate that ResNet performs best at detecting COVID-19 in CT images, reaching an accuracy of 99.5%, which indicates strong fitting ability on our dataset, which is not very large. However, when applying models pre-trained on a bigger dataset to a smaller one, the accuracy of AlexNet and VGGNet increases while that of ResNet decreases. Although we offer several hypotheses about this phenomenon, more experiments are needed.
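A minimal sketch of the transfer-learning setup described above, assuming a recent torchvision (0.13+) for the weights API: a ResNet-18 backbone with the final layer replaced for the two classes (COVID / normal). The data loading and the exact architectures and hyperparameters of the paper are omitted; the batch here is a random stand-in for CT images.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone; swap the classification head for two output classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)            # stand-in batch of CT slices
labels = torch.randint(0, 2, (8,))              # stand-in labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step, loss = {loss.item():.4f}")
```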
Microfluidic systems have a wide range of applications in bioengineering and microelectromechanical systems (MEMS), and the flow, mass transfer and heat transfer of fluid in microchannels have become research frontiers for MEMS. In this paper, we systematically survey progress on microchannels from two perspectives: microchannel heat exchangers and micromixers. The features and corresponding applications of each category are discussed, with theoretical analysis for illustration. The microchannel heat exchanger is increasingly applied in the HVAC&R (heating, ventilation, air conditioning and refrigeration) field due to its higher heat-transfer rate, more compact structure and lower cost, while micromixers are among the essential components of integrated microfluidic systems for chemical, biological and medical applications. These results shed light on the future development of microchannel structures.
From 2019 to 2021, the coronavirus disease 2019 (COVID-19) epidemic has been a global issue that has attracted widespread attention, and an increasing number of scientists are researching it. This study constructs a dynamic model of COVID-19 and conducts a sensitivity analysis of it: a Susceptible-Exposed-Infected-Recovered (SEIR) model, refined by adding vaccination and quarantine to the basic model. Julia, a programming language, is used to test sensitivity and generate the figures, which show the sensitivity of all tested parameters and starting times. The starting times of vaccination and quarantine are essential to the peak and the trend of the epidemic, and different parameters have different effects on the epidemic trend, especially those that enter R0. The findings provide more information for monitoring and controlling COVID-19 or new epidemics in the future, and bring the roles of vaccination and quarantine to the fore.
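For readers who want to experiment, here is a minimal SEIR sketch in Python (the paper's implementation is in Julia, and its vaccination and quarantine refinements are omitted here); parameter values are illustrative, with beta the transmission rate, sigma the inverse incubation period, gamma the recovery rate, and R0 = beta/gamma.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, N):
    """Basic SEIR right-hand side."""
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return dS, dE, dI, dR

N = 1_000_000
y0 = (N - 10, 0, 10, 0)                      # ten initial infectious cases
t = np.linspace(0, 300, 301)                 # days
sol = odeint(seir, y0, t, args=(0.3, 1 / 5.2, 1 / 10, N))
peak_day = int(t[sol[:, 2].argmax()])
print(f"epidemic peaks around day {peak_day} "
      f"with {sol[:, 2].max():.0f} infectious")
```

Sensitivity analysis in the paper's spirit amounts to rerunning this with perturbed parameters or delayed intervention start times and comparing the peaks.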
Recent years have witnessed growing interest in solving partial differential equations with deep neural networks, especially in high dimensions. For the Deep Ritz method (DRM) proposed by Weinan E, how to optimize the neural network for better accuracy has become a problem worth attention. In this work, we conduct a comparative study of network structures and optimization methods. For the network structure, we introduce the RBF activation function, combine it with a ResNet architecture, and propose the Combined-Ritz method (CRM). Comparing it with DRM and RRM (a network based purely on RBF functions), we find that in low dimensions DRM converges slowly and has many parameters but higher accuracy, while RRM converges fast with fewer parameters but lower accuracy; CRM combines the advantages of both, with fewer parameters and higher accuracy. In addition, in the two-dimensional setting we propose a CNN architecture for solving partial differential equations and achieve good results.
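A sketch of one building block in the spirit of the RBF-plus-ResNet combination: a residual block whose activation is a Gaussian radial basis function. The width and the exact CRM architecture are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class RBF(nn.Module):
    """Gaussian radial-basis activation: exp(-x^2)."""
    def forward(self, x):
        return torch.exp(-x ** 2)

class RBFResBlock(nn.Module):
    """Two RBF-activated linear layers wrapped in a ResNet skip connection."""
    def __init__(self, width):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(width, width), RBF(),
                                 nn.Linear(width, width), RBF())
    def forward(self, x):
        return x + self.net(x)

block = RBFResBlock(16)
print(block(torch.randn(4, 16)).shape)        # torch.Size([4, 16])
```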
The formation and conformation of linear polymer chains is a complicated problem owing to the rotation of chemical bonds and fluctuations in bond length between atoms. To simulate polymer chains properly, many researchers adopt the Monte Carlo method. In general, chains can be generated by on-lattice and off-lattice self-avoiding walks (SAWs). The on-lattice walk, though broadly similar to real polymer chains and simple to implement, has clear drawbacks: it differs from real chains microscopically and is constrained by the lattice. The off-lattice walk performs much better in representing idealized polymer chains, as it can embody different bond angles and fluctuations in bond length. This paper implements the formation of linear polymer chains with both on-lattice and off-lattice SAW modules in Java and visualizes the chains in both 2D and 3D. The mean square end-to-end distance is used to verify the reasonableness of each method and to assess how well it captures the conformation of real polymer chains. These results enable further applications of on-lattice and off-lattice walk models in realistic problems and prospective research.
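The paper's implementation is in Java; here is a minimal Python sketch of the on-lattice module with simple rejection sampling (restart on dead ends), which is known to bias long chains but suffices to illustrate the mean square end-to-end distance measurement. Chain length and trial count are illustrative.

```python
import random

def saw(n_steps):
    """Grow a 2D on-lattice self-avoiding walk; return None on a dead end."""
    path = [(0, 0)]
    visited = {(0, 0)}
    for _ in range(n_steps):
        x, y = path[-1]
        moves = [(x + dx, y + dy)
                 for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                 if (x + dx, y + dy) not in visited]
        if not moves:
            return None                # trapped: discard this chain
        site = random.choice(moves)
        path.append(site)
        visited.add(site)
    return path

n, trials, r2_sum, ok = 20, 2000, 0.0, 0
for _ in range(trials):
    p = saw(n)
    if p:
        x, y = p[-1]
        r2_sum += x * x + y * y        # squared end-to-end distance
        ok += 1
print(f"<R^2> over {ok} chains of {n} steps: {r2_sum / ok:.2f}")
```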
The theory of complex functions deals with calculus over complex variables. In this paper, we review the theory of complex numbers systematically from a higher vantage point. Algebraically, the set of all complex numbers forms a field; geometrically, the complex numbers form a complete metric space with an elegant topological structure, which lets mathematicians handle technical problems very concisely. We summarize the computational and algebraic properties of complex numbers and discuss their geometric representation. Then we discuss the group structure of the roots of unity in detail and give detailed proofs of several elegant results.
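To fix notation for the roots-of-unity discussion (standard facts, recorded for orientation): the nth roots of unity are \( \zeta_k = e^{2\pi i k/n} \) for \( k = 0, 1, \dots, n-1 \); under multiplication they form a cyclic group of order \( n \), generated by \( \zeta_1 \). For \( n = 4 \) this group is \( \{1, i, -1, -i\} \).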
In the present era, the study of stochastic processes is increasingly significant, since almost nothing can be examined from a static viewpoint: most phenomena are evolving processes. In this article, we focus on one of the most common and frequently used models, the symmetric simple random walk, and discuss its properties on ℝ. First, we review the basic terminology of probability theory and stochastic processes, clarifying concepts such as distribution function, expectation, variance and independence, and introduce two 0-1 laws. We then summarize the laws of large numbers and the central limit theorem, in the order: definitions of convergence, the weak law, the strong laws, characteristic functions, the concept of independent and identical distribution, and the Lévy central limit theorem. These parts prepare for the main conclusion: a limiting process of the symmetric simple random walk is a Brownian motion. We start from the construction of the symmetric simple random walk, build a path to the limiting process, prove the conclusion using the concepts introduced previously, and finally discuss several properties of Brownian motion, the limiting process of the random walk.
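The scaling behind the main conclusion can be stated in one line (Donsker's invariance principle, recorded here for orientation): if \( X_1, X_2, \dots \) are i.i.d. with \( P(X_i = \pm 1) = 1/2 \) and \( S_n = X_1 + \cdots + X_n \), then the rescaled process

\[ W^{(n)}_t = \frac{S_{\lfloor nt \rfloor}}{\sqrt{n}}, \qquad t \in [0, 1], \]

converges in distribution to standard Brownian motion as \( n \to \infty \); at a fixed \( t \) this is just the central limit theorem, giving \( W^{(n)}_t \Rightarrow N(0, t) \).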
With the application of artificial intelligence in finance, quantitative trading is considered profitable. Existing quantitative trading models ignore the impact of irrational investor behavior on the market, so they perform poorly in China's inefficient stock market; this paper therefore proposes an improved deep recurrent DRQN-ARBR model. By changing the fully connected layer in the original model to an LSTM layer and using the sentiment indicator ARBR to construct the trading strategy, the model addresses the traditional DQN model's limited memory for storing experience and the performance loss caused by partially observable Markov dynamics. This paper also remedies the original model's small number of stock states by choosing more technical indicators as model inputs. The experimental results show that the proposed DRQN-ARBR algorithm can significantly improve the performance of reinforcement learning in stock trading.
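For readers unfamiliar with the sentiment indicator, here is a hedged sketch of the AR/BR pair as commonly defined for Chinese equities (a 26-day window is conventional); the paper may use a different variant, so treat the formulas below as an assumption. AR compares intraday strength to the open, BR to the prior close; applied to a pandas DataFrame of daily OHLC bars it looks like this:

```python
import pandas as pd

def arbr(df: pd.DataFrame, n: int = 26):
    """Common AR/BR sentiment indicators over an n-day rolling window.
    Expects columns: open, high, low, close.  Definitions are the usual
    convention for these indicators, assumed rather than taken from the paper."""
    prev_close = df["close"].shift(1)
    ar = ((df["high"] - df["open"]).rolling(n).sum()
          / (df["open"] - df["low"]).rolling(n).sum() * 100)
    br = ((df["high"] - prev_close).rolling(n).sum()
          / (prev_close - df["low"]).rolling(n).sum() * 100)
    return ar, br
```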
Nowadays, people use credit cards more and more frequently, especially for online shopping, and credit card fraud harms individuals, merchants and financial institutions. This paper focuses on credit card fraud detection using three supervised machine learning methods: decision tree, random forest and the AdaBoost algorithm, all widely used in medicine, chemistry, visual identification and finance. We use the confusion matrix and the receiver operating characteristic curve to interpret our models. The random forest has the highest accuracy, 99.6%, in predicting non-fraud. AdaBoost has the highest accuracy, 77%, in predicting fraudulent transactions, but the lowest accuracy, 95%, in predicting non-fraud. Overall, the random forest algorithm is the best model to apply to credit card fraud detection.
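A minimal sketch of the comparison described above, run on a synthetic imbalanced dataset standing in for the real credit-card transactions (about 2% positives); the models use scikit-learn defaults rather than the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, roc_auc_score

# Synthetic stand-in: ~2% "fraud" class, mirroring the heavy class imbalance.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.98],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for model in (DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(random_state=0),
              AdaBoostClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    print(type(model).__name__,
          "AUC =", round(roc_auc_score(y_te, proba), 3),
          "confusion =", confusion_matrix(y_te, model.predict(X_te)).tolist())
```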
In the classic Buffon's needle problem, a needle is thrown onto a plane ruled with parallel lines, and the researcher asks whether adding more families of parallel lines would increase the accuracy of the estimate. Building on Buffon's needle problem, mathematical laws and results are obtained by casting the needle not only onto parallel lines but also onto a grid of regular triangles, which represents three families of parallel lines. The derived laws are applied to estimate the value of π. Comparing the variances of the results shows that the more families of parallel lines fill the plane, the more accurate the experimental estimate. Moreover, the article analyzes the scope of application of Monte Carlo simulation, which descends from Buffon's needle problem.
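A minimal Monte Carlo sketch of the baseline single-family experiment (the triangular-grid variant of the paper is not reproduced): with needle length l no greater than the line spacing d, the crossing probability is 2l/(πd), so π ≈ 2ln/(d · hits).

```python
import math, random

def buffon(n, l=1.0, d=1.0):
    """Estimate pi by dropping n needles of length l on lines spaced d apart."""
    hits = 0
    for _ in range(n):
        center = random.uniform(0, d / 2)        # distance to the nearest line
        theta = random.uniform(0, math.pi / 2)   # needle angle
        if center <= (l / 2) * math.sin(theta):  # needle crosses a line
            hits += 1
    return 2 * l * n / (d * hits)

print(buffon(1_000_000))   # typically within ~0.01 of pi
```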
Deep learning is widely used today for training on, classifying and predicting from large datasets. One advantage of deep learning is that it can take many examples as input to train a model; the trained model can then quickly make predictions or classifications at a scale humans cannot manage. Deep learning and computer vision are used widely in medicine, especially in image-based diagnosis, where this approach has sometimes outperformed human doctors. Although there is much research on classifying lung cancers with deep learning, it remains unclear how different models perform on lung cancer image classification and how their accuracy can be improved. In this paper, we focus on image classification, review the performance of the VGG16 model, and investigate whether adding attention layers makes the model perform better. We evaluate the models on lung cancer data from the Lung Nodule Analysis 2016 (LUNA16) challenge.
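One concrete way to "add attention to VGG16" is a squeeze-and-excitation style channel-attention layer appended after the convolutional features; whether this matches the paper's attention placement is an assumption. A sketch (torchvision 0.13+ API):

```python
import torch
import torch.nn as nn
from torchvision import models

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style re-weighting of channel feature maps."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction),
                                nn.ReLU(),
                                nn.Linear(channels // reduction, channels),
                                nn.Sigmoid())
    def forward(self, x):                        # x: (N, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze to per-channel weights
        return x * w[:, :, None, None]           # excite: re-weight feature maps

vgg = models.vgg16(weights=None)                 # pretrained weights optional
vgg.features = nn.Sequential(vgg.features, ChannelAttention(512))
out = vgg(torch.randn(1, 3, 224, 224))
print(out.shape)                                 # torch.Size([1, 1000])
```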
In this paper, we investigate the neighbor sum distinguishing edge coloring of join graphs and consider the corresponding neighbor sum distinguishing chromatic number. Let G be a complete graph of order n and let H be a path or a cycle; we obtain the exact value of the neighbor sum distinguishing chromatic number of the join of G and H, which equals 2n.
In this paper, we discuss the projective synchronization of fractional-order chaotic systems with different orders, together with parameter identification. We design a new adaptive projective synchronization controller and a parameter identification law. Using the J-function criterion, we prove that synchronization can be achieved. Finally, numerical simulation shows that the proposed controller and parameter identification law are highly effective.
Ant colony optimization (ACO) is an attractive research hot spot and a highlight among major metaheuristic algorithms. Enormous advances have been realized in ACO, such as self-adaptive variants, techniques for boosting population diversity, and improved local search. ACO is a probabilistic algorithm for finding optimal routes, presented by Marco Dorigo in 1992 and based on the foraging behavior of ants; it is a type of simulated evolutionary algorithm that, according to prior studies, has a number of advantages, and researchers in numerous sectors have recently paid increasing attention to its development. However, with advances in multicore computing theory and technology, implementing ACO efficiently and in parallel in a multicore environment has emerged as a key challenge in this field. This article sets out exactly how ACO works and how it can be applied to travel problems to save time and money and provide the most efficient and safe trip. The results can serve as a reference, especially during holidays when traffic is heaviest.
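For readers who want the mechanics, the two standard ACO rules (in the form given by Dorigo; the article's exact parameter choices are not assumed) are the transition probability and the pheromone update:

\[ p_{ij} = \frac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}{\sum_{k \in \text{allowed}} \tau_{ik}^{\alpha}\,\eta_{ik}^{\beta}}, \qquad \tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_m \Delta\tau_{ij}^{m}, \]

where \( \tau_{ij} \) is the pheromone on edge \( (i, j) \), \( \eta_{ij} \) is a heuristic desirability such as inverse distance, \( \rho \) is the evaporation rate, and \( \Delta\tau_{ij}^{m} = Q/L_m \) if ant \( m \) used edge \( (i, j) \) on a tour of length \( L_m \) (and 0 otherwise).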
In this paper, constant-stress partially accelerated life tests under progressive type-I interval censoring are considered for competing-risks products whose lifetimes follow Burr(c, k) distributions. The estimates of the unknown parameters for the different failure causes are obtained by maximum likelihood, and the Fisher information matrix is also derived. Asymptotic and bootstrap confidence intervals are given using asymptotic normality theory and the parametric bootstrap method. Based on the Fisher information matrix, we also discuss the problem of optimal sample-size allocation at each stress level. Finally, a simulation study illustrates the performance of the proposed methods.
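Assuming the Burr(c, k) here is the usual Burr XII form, its density and distribution function are

\[ f(x) = c\,k\,x^{c-1}\,(1 + x^{c})^{-(k+1)}, \qquad F(x) = 1 - (1 + x^{c})^{-k}, \qquad x > 0, \]

and under type-I interval censoring the likelihood contribution of a failure observed in an interval \( (t_{j-1}, t_j] \) is \( F(t_j) - F(t_{j-1}) \), which is what the maximum likelihood step maximizes over \( c \) and \( k \).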
This paper studies the operational efficiency of China's provincial power grid enterprises based on a nonparametric meta-frontier approach. We divide 25 Chinese provincial power grid enterprises into six groups: North, East and Central China, and the Northeast, Northwest and Southwest regions. The meta-frontier data envelopment analysis (DEA) method is applied to measure efficiency and decompose the inefficiency of the 25 provincial grid enterprises from 2016 to 2019. We find that: (1) the average efficiency values of the 25 enterprises under the meta-frontier and the group frontiers during the observation period are 0.6900 and 0.9106, respectively, and no enterprise's efficiency under the meta-frontier exceeds its efficiency under the group frontier; (2) technology gap inefficiency (TGI) varies significantly across regions, with the technology gap ratio (TGR) of the Northwest exceeding that of the other regions throughout the observation period; (3) the operational efficiency losses of provincial grid enterprises in Northeast, Central and Southwest China stem mainly from technology gaps, most of the losses in East China and the Northwest stem from corporate mismanagement, and enterprises in North China are inefficient due to both technology gaps and ineffective management. Finally, we propose suggestions based on these findings.
To study the pattern of river water loss, predict the amount of water available in the future and protect water resources, this paper takes the Hotan River Basin, a typical arid desert area, as an example, using big-data collection and analysis together with the principle of water balance and statistical methods. The article also puts forward a prerequisite that must be met to establish a formula for river water loss. The results show that the monthly water loss and inflow at the lower section of the rivers in the region are highly correlated with the inflow at the upper section, which yields a practical method for forecasting river water loss.
The navigation environment near an oilfield development platform is of great significance for the safe construction of work ships, the safe navigation of nearby vessels and the safety management of the oilfield. Comprehensive evaluation of the navigation environment in waters near an oilfield development platform involves three parts: selecting the navigation-environment evaluation indices, calculating the index weights and determining the evaluation method. Taking the Weizhou 12-1 PUQB oilfield development platform as the research object, this paper analyzes the findings of domestic and foreign scholars on navigation-environment evaluation and determines the evaluation indices in consultation with experts in the field. A combined subjective-objective weighting method is used to analyze the influence of the oilfield development project on the navigation environment of nearby waters, so that the selected indices reflect the safety of ships navigating these waters.
Taking the Northwest River Delta as a typical tidal region of the Pearl River Basin, and based on big-data statistics, the Archimedean copula function is used to construct the joint distribution of the annual maximum flood and the corresponding maximum tide within 48 hours, together with that of the annual maximum tide and the corresponding maximum flood within 48 hours. Through the joint risk probability model, the joint risk probability of floods encountering tides is calculated. The results show a greater risk that a flood with a higher return period encounters a tide with a lower return period, and that a tide with a higher return period encounters a flood with a lower return period. The copula-based joint distribution of flood and tide fits well, and the combined risk analysis is reliable, providing a theoretical reference for design-risk calculations for flood control projects in the tidal reaches of the Pearl River Basin.
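As one representative member of the Archimedean family (the paper's specific generator is not assumed here), the Gumbel-Hougaard copula is

\[ C(u, v) = \exp\!\left(-\left[(-\ln u)^{\theta} + (-\ln v)^{\theta}\right]^{1/\theta}\right), \qquad \theta \ge 1, \]

and the joint risk of a flood exceeding \( x \) or a tide exceeding \( y \) follows from any copula as \( P(X > x \ \text{or}\ Y > y) = 1 - C(F_X(x), F_Y(y)) \), which is the quantity tabulated in a joint risk probability model.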
Climate change is an international problem of universal concern and the most serious environmental problem facing human society so far. Greenhouse gases such as CO2, CH4 and N2O produced by human activities are the main causes, and more than 20% of global greenhouse gases come from farming; low agricultural production efficiency and incomplete utilization of resources increasingly affect the rural ecological environment. In this paper, Jiangsu is selected as the research object. Based on agricultural data for eight of its cities from 2015 to 2019, the DEA method is used to evaluate the agricultural production efficiency of Jiangsu, and countermeasures and suggestions for agricultural development are provided according to the results. We should give priority to environmental protection, follow the concepts of sustainable development and ecological protection, and, according to local agricultural conditions, pursue the least investment, the least pollution and the highest resource utilization. At the same time, we should cultivate a new generation of professional farmers, guide the development of green, circular, high-quality and efficient characteristic agriculture, and improve agricultural production efficiency.
With the rapid development of the economy, surface ozone (O3) has become one of the most critical pollutants hindering urban air-quality improvement and standard management, so clarifying the variation characteristics of urban surface O3 concentration and its influencing factors is important for scientifically formulating early warnings of O3 pollution and prevention-and-control schemes. In this paper, based on surface O3 concentration and meteorological data for Xinyang City during 2015-2019, the variation of surface O3 concentration and the influence of meteorological factors on it were analysed using statistical analysis and path analysis. The results show that the annual mean O3 concentration of Xinyang City followed a declining-rising-declining fluctuation from 2015 to 2019, and the annual average probabilities of exceeding the Chinese grade I and grade II ambient air-quality standards were 41.7% and 10.7%, respectively. The monthly variation of O3 concentration showed a pronounced M-shaped fluctuation, with peaks in May and September. The meteorological factors influenced O3 concentration to different degrees, in the order air temperature < sunshine duration < atmospheric pressure < relative humidity < precipitation < wind speed. The effects of air temperature and relative humidity on O3 were mainly direct, while those of sunshine duration, precipitation and atmospheric pressure were mainly indirect.
Stock is a transferable, tradable long-term credit instrument of the capital market, and the stock market that has grown up around its issuance and trading plays an important role in today's world economy. As more people pay attention to the stock market and invest in it, stock price prediction has become a widespread concern, and various prediction methods have been developed to meet market demand. Market movements are related to national macroeconomic development, the formulation of laws and regulations, company operations, shareholder confidence, and so on, yet prediction rests only on assumed factors and established preconditions, so accurate stock forecasting is difficult. Researchers therefore focus on developing systems that make stock price prediction more accurate and faster. Although there have been many studies on predicting stock prices, there is no systematic review of the state-of-the-art research on stock market prediction. This paper therefore offers a new review of stock price prediction methods: a systematic summary, review, and comparative analysis of the methods proposed in recent years.
Machine learning is the study of how to make computers learn from historical data so as to produce a model that improves the performance of a system. It is widely used to solve complex problems in practical engineering applications, business analysis, and other fields. With the development of technology, statistics-based machine learning has attracted attention and been successfully applied in health care, technology, commerce and other areas. Machine learning estimates the dependence relationships in data from known samples in order to predict and judge unknown or unmeasured data; it can provide decision-makers with reference opinions and help them make better decisions. In economics, for example, it helps merchants set prices and explore the impact of price changes on sales. This paper describes various supervised machine learning classification techniques, compares several supervised learning algorithms, and determines the most efficient classification algorithm as a function of the data set and its number of instances and variables (features). A simple linear regression model is used to study the relationship between salary and years of experience, and Python is used to estimate the linear equation.
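The paper's code is not included in the abstract; a minimal Python sketch of the salary-versus-experience regression it describes might look as follows, with illustrative data standing in for the actual dataset.

import numpy as np

# Illustrative data only -- the paper's actual salary dataset is not given here.
years = np.array([1.1, 2.0, 3.2, 4.5, 5.1, 6.8, 8.2, 9.5, 10.3])
salary = np.array([39.3, 43.5, 54.4, 61.1, 66.0, 83.1, 91.7, 105.6, 112.6])  # $1000s

# Least-squares fit of salary = b0 + b1 * years
b1, b0 = np.polyfit(years, salary, 1)      # polyfit returns [slope, intercept]
pred = b0 + b1 * years
r2 = 1 - np.sum((salary - pred) ** 2) / np.sum((salary - salary.mean()) ** 2)
print(f"salary ~ {b0:.1f} + {b1:.2f} * years, R^2 = {r2:.3f}")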
Based on Baidu index data and the timeline of COVID-19 in China, this study analyzes the spatial and temporal distribution of online attention to Xi'an under epidemic prevention and control. The results show that: 1) In 2020, online attention to Xi'an was depressed by the epidemic. Monthly attention over the year tracked the course of the domestic epidemic, showing a “double peak, double valley” pattern, high in summer and autumn and low in winter and spring. Around holidays, attention rose before each festival, peaking one day before the “May 1” (May Day) holiday and on the third day of the “Eleventh” (National Day) holiday, a clear “blowout” trend. 2) The spatial distribution of online attention to Xi'an is scattered, with high attention in Henan, Sichuan and other surrounding provinces and in Guangdong, Jiangsu, Zhejiang and other economically developed coastal areas.
Using trend analysis and mutation (abrupt change) analysis of the MODIS NDVI data set from 2000 to 2019, the temporal and spatial changes of vegetation coverage in the five northwestern provinces of China are analyzed. The results show that: (1) Affected by climate, topography and geomorphology, vegetation coverage in the five northwestern provinces has obvious spatial differences, increasing overall from northwest to southeast; seasonal NDVI, from largest to smallest, is summer, autumn, spring and winter. (2) Vegetation coverage in the five northwestern provinces has increased steadily over the past 20 years, but with obvious differences within the year, summer vegetation growing fastest. (3) The area improved over the past 20 years (69.2%) is much larger than the area degraded (4.00%), with obvious regional differences.
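The abstract names trend analysis and mutation analysis without specifying the algorithms; a common pairing for NDVI time series is a per-pixel least-squares slope for the trend and the Mann-Kendall test for its significance or abrupt change. A sketch under that assumption:

import numpy as np

def ndvi_trend(series):
    """Least-squares slope of an annual NDVI series (trend analysis)."""
    t = np.arange(len(series))
    return np.polyfit(t, series, 1)[0]

def mann_kendall_z(series):
    """Mann-Kendall Z statistic (ties ignored for brevity); |Z| > 1.96
    indicates a significant monotonic trend at the 5% level."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / np.sqrt(var_s)
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

# Applied per pixel: a positive slope with |Z| > 1.96 marks significant
# improvement; a negative slope with |Z| > 1.96 marks significant degradation.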
The full-width rigid barrier impact test was introduced by Euro NCAP in 2015, with a Hybrid III 5th percentile female dummy seated in the front driver's seat. Statistics from Euro NCAP frontal impact tests show that injuries to the front-row female dummy are concentrated in the chest, and data analysis indicates a correlation between the force limit level of the seat belt and the Chest Compression and Viscous Criterion responses. Following the Euro NCAP and C-NCAP test protocols, models of the Hybrid III 5th percentile female dummy and the Hybrid III 50th percentile male dummy in the front driver's seat are established, and the injuries of the two dummies are compared. The influence of restraint system adjustments on the injury of the female dummy is obtained, and the protection performance for the front female dummy is improved by adjusting the restraint system parameters.
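The Viscous Criterion mentioned here is conventionally defined as VC(t) = V(t)·C(t), the product of the chest deflection rate and the deflection normalized by the initial chest depth, with the peak value reported. A minimal sketch follows; the 0.187 m chest-depth constant assumed for the Hybrid III 5th percentile female should be checked against the test protocol, and the input pulse below is synthetic.

import numpy as np

def viscous_criterion(deflection_m, dt, chest_depth_m=0.187):
    """Peak Viscous Criterion, max over time of V(t) * C(t).
    deflection_m: sampled chest deflection history [m]; dt: sample step [s];
    chest_depth_m: initial chest depth -- 0.187 m is an assumed value for the
    Hybrid III 5th percentile female, to be verified against the protocol."""
    d = np.asarray(deflection_m, dtype=float)
    v = np.gradient(d, dt)        # deflection rate V(t) [m/s]
    c = d / chest_depth_m         # compression ratio C(t) [-]
    return np.max(v * c)          # peak VC [m/s]

# e.g. a synthetic half-sine deflection pulse, 40 mm peak over 60 ms:
t = np.arange(0, 0.06, 1e-4)
d = 0.040 * np.sin(np.pi * t / 0.06)
print(viscous_criterion(d, 1e-4))   # peak VC in m/s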
A tube-in-tube (casing) evaporator with R134a as the refrigerant was numerically simulated. The influence of the refrigerant-side convective heat transfer coefficient on the heat transfer area of the evaporator, on the refrigerant-side parameters and on the COP of the refrigeration system was studied. The simulated data show that, for each increase of 100 W·m⁻²·°C⁻¹ in the refrigerant-side convective heat transfer coefficient, the heat transfer area of the evaporator decreases, quickly at first and then slowly, by between 0.58% and 2.67%; the refrigerant mass flow rate increases, quickly at first and then slowly, by between 0.18% and 1.88%; and the refrigerant-side heat load increases, quickly at first and then slowly, by between 0.11% and 0.98%. As the refrigerant-side convective heat transfer coefficient increases, the COP of the refrigeration system also increases, which is beneficial to the operating performance of the system.
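The "quickly at first, then slowly" decrease in heat transfer area follows from the series thermal resistances in A = Q / (U · LMTD), with 1/U = 1/h_ref + R_wall + 1/h_water: once h_ref is large, the other resistances dominate. A sketch with illustrative numbers (assumptions, not the paper's values) makes the diminishing returns visible.

import numpy as np

# All values below are illustrative assumptions, not the paper's data.
Q = 5000.0          # heat duty [W]
lmtd = 8.0          # log-mean temperature difference [K]
r_wall = 2e-5       # tube-wall resistance [m^2*K/W]
h_water = 2500.0    # water-side coefficient [W/(m^2*K)]

for h_ref in range(1000, 3001, 500):   # refrigerant-side coefficient
    U = 1.0 / (1.0 / h_ref + r_wall + 1.0 / h_water)   # overall coefficient
    A = Q / (U * lmtd)                                  # required area
    print(f"h_ref = {h_ref:5d} W/(m^2*K)  ->  A = {A:.3f} m^2")

# Each successive 500 W/(m^2*K) step shrinks A by less than the previous one:
# the water-side and wall resistances dominate once h_ref is large.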
The present research carried out a numerical investigation of the collection characteristics of supercooled droplets on the surface of a cylinder with different microstructures, in order to explore the impact of the microstructure, and of surface tension, on supercooled droplet collection during aircraft flight. A numerical model is established to calculate the collection coefficient and the impingement limit on cylinder surfaces with microstructure heights of 10 μm, 20 μm and 30 μm, and the effect and mechanism of the surface microstructure on the impact characteristics of supercooled water droplets are analyzed. It was found that the surface microstructure gives the droplet collection coefficient of the cylindrical surface a secondary peak near the stagnation point. For the same droplet diameter, the collection coefficient of the smooth surface reached its maximum near Mach 0.3, while the peak collection coefficient of the hydrophobic surface decreased significantly as the microstructure height increased.
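The collection coefficient in such studies is usually obtained by tracking droplet trajectories through the flow field and recording where they strike the surface. The sketch below is a deliberately simplified version of that idea: potential flow past a smooth cylinder, Stokes drag only, no microstructure or surface tension model; all parameter values are illustrative assumptions.

import numpy as np

U, R = 100.0, 0.05                       # freestream speed [m/s], cylinder radius [m]
d, rho_d, mu = 20e-6, 1000.0, 1.8e-5     # droplet diameter, density; air viscosity
tau = rho_d * d**2 / (18 * mu)           # droplet relaxation time (Stokes) [s]

def air_velocity(x, y):
    """Potential flow past a cylinder of radius R centered at the origin."""
    r2 = x * x + y * y
    u = U * (1 - R**2 * (x * x - y * y) / r2**2)
    v = -U * 2 * R**2 * x * y / r2**2
    return u, v

def impinges(y0, dt=1e-6):
    """Integrate one droplet released upstream at offset y0; True if it
    strikes the cylinder (explicit Euler, adequate for a sketch)."""
    x, y, vx, vy = -10 * R, y0, U, 0.0
    while x < 2 * R:
        if x * x + y * y <= R * R:
            return True
        u, v = air_velocity(x, y)
        vx += (u - vx) / tau * dt        # Stokes drag relaxes droplet to air
        vy += (v - vy) / tau * dt
        x += vx * dt
        y += vy * dt
    return False

# Overall collection efficiency ~ (largest impinging offset) / R:
offsets = np.linspace(0, R, 51)
hits = [y0 for y0 in offsets if impinges(y0)]
print("E ~", max(hits) / R if hits else 0.0)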
An oblique impact damage model of hybrid fiber metal laminates is established, and the process of a bullet impacting aluminum alloy-carbon/glass hybrid fiber laminates at high speed and multiple angles is numerically simulated to explore the influence of the impact angle on the energy absorption, contact force and interlaminar failure area of the laminates. The results show that the kinetic energy consumed decreases as the impact angle increases; the impact angle directly affects the energy absorption characteristics of the hybrid fiber metal laminate. On the whole, the failure areas of the metal and fiber layers decrease with the impact angle. The failure area of the carbon fiber layers is smaller than that of the glass fiber layers; the failure area of the glass fiber layers gradually increases from the top layer to the bottom, while that of the carbon fiber layers shows no obvious change from top to bottom. The impact angle therefore has significant effects on the kinetic energy dissipation, the location of maximum fiber stress and the failure area of hybrid fiber metal laminates.