Matrix completion, which attempts to recover the missing entries of an incomplete matrix, is one of the main approaches to low-rank problems. It has been employed in many real-world applications such as recommender systems and image recovery. When the data are highly sparse, however, many practical problems in these domains become hard to solve. We therefore use fully connected neural networks for matrix completion to avoid the difficulties caused by sparse matrices. In this paper, we utilize a local density loss function to measure the quality of the completion results, where the trainable parameters are updated by computing derivatives according to the influence function. The local density loss function effectively measures the deviation between the predicted and true values, and the convergence of the model is guaranteed. To validate the effectiveness of the proposed method, we conduct extensive experiments on image recovery and recommender systems. In addition, we employ three metrics, root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM), to measure the recovery accuracy for the missing entries. Experimental results demonstrate that the proposed framework is superior to other state-of-the-art methods in both running time and learning performance.
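The abstract does not give the details of the local density loss or the network architecture, but the three evaluation metrics it names have standard definitions. The following is a minimal sketch of how RMSE, PSNR, and a simplified single-window SSIM could be computed between a recovered matrix and its ground truth, assuming entries are scaled to a known peak value (the full SSIM typically averages over local sliding windows):

```python
import numpy as np

def rmse(x, y):
    """Root mean square error between recovered matrix x and ground truth y."""
    return np.sqrt(np.mean((x - y) ** 2))

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio; assumes entries lie in [0, peak]."""
    return 20.0 * np.log10(peak / rmse(x, y))

def ssim(x, y, peak=1.0, k1=0.01, k2=0.03):
    """Simplified global (single-window) structural similarity index."""
    c1, c2 = (k1 * peak) ** 2, (k2 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For example, if every entry of the recovered matrix deviates from a ground truth of ones by 0.1, the RMSE is 0.1 and the PSNR is 20 dB, while a perfect recovery yields an SSIM of 1.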