KEYWORDS: Servomechanisms, Holography, Objectives, Detection and tracking algorithms, Design, Actuators, Logic, Control systems, Digital video discs, Spindles
In this paper, we first analyze the challenges of adopting a rotating disc mechanism in a collinear holographic data storage system to reduce system complexity and improve data throughput, and then propose a technical design to address them. The implementation gives the details of the major control procedures and the necessary optimizations of the control algorithm. Finally, the evaluation results show the effectiveness of the design.
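The abstract does not specify the control algorithm itself; as a rough illustration only, the sketch below shows a discrete PID loop of the kind typically used to servo a spindle toward a target rotation speed. The gains, time step, and toy plant model are all assumptions, not the paper's design.

```python
# Minimal sketch (assumed gains and plant; not the paper's control procedure) of a
# discrete PID loop that drives a spindle toward a target angular velocity.
def pid_step(setpoint, measured, state, kp=0.8, ki=0.2, kd=0.05, dt=0.001):
    """One PID update; `state` carries the integral and the previous error."""
    error = setpoint - measured
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

if __name__ == "__main__":
    state = {"integral": 0.0, "prev_error": 0.0}
    speed = 0.0                       # rad/s; toy plant: command nudges speed each tick
    for _ in range(2000):
        speed += pid_step(setpoint=120.0, measured=speed, state=state) * 0.001
    print(f"spindle speed after run: {speed:.1f} rad/s (target 120)")
```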
In the digital age, the volume of data is expanding exponentially, making it a fundamental asset. Consequently, the need for storage systems with substantial capacity, affordability, high performance, and strong reliability has become increasingly pressing. In response to this demand, storage systems have widely adopted erasure codes, particularly wide-stripe erasure codes, owing to their remarkable storage efficiency and reliability. However, this paper finds that the encoding and decoding performance of wide-stripe erasure codes deteriorates significantly compared to narrow-stripe erasure codes when using mainstream erasure code acceleration libraries. To gain a comprehensive understanding of this phenomenon, extensive experiments were conducted to measure the encoding and decoding performance of erasure codes, and hardware events during erasure code computation were analyzed. The results show that the root cause of the performance degradation is the increase in L3 cache misses, which rose by 240% for a fixed amount of encoded data.
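As a rough, hedged illustration of why stripe width matters for cache behavior, the sketch below encodes a single XOR parity block over k data blocks; the working set touched per encode pass grows linearly with k. This is not the paper's benchmark or acceleration library, and the block size and stripe widths are assumptions.

```python
# Minimal sketch (single-parity XOR, assumed block size and widths): a wider stripe
# (larger k) enlarges the data touched per encode pass, the access pattern behind
# the reported L3 cache pressure.
import os

def xor_encode(data_blocks):
    """Compute one parity block as the XOR of k equal-sized data blocks."""
    parity = 0
    for block in data_blocks:
        parity ^= int.from_bytes(block, "little")
    return parity.to_bytes(len(data_blocks[0]), "little")

if __name__ == "__main__":
    block_size = 1 << 20              # 1 MiB per block (illustrative)
    for k in (4, 32, 128):            # narrow vs. wide stripes (hypothetical widths)
        blocks = [os.urandom(block_size) for _ in range(k)]
        parity = xor_encode(blocks)
        # Working set per encode pass grows linearly with stripe width k.
        print(f"k={k:4d}  working set \u2248 {k * block_size / 2**20:.0f} MiB")
```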
Phase-modulated collinear holographic storage promises high storage density at the cost of a high raw bit error rate. We first performed a simulation to analyze the bit-error-rate characteristics of phase-modulated collinear holographic storage under different noise intensities. To ensure high storage capacity with an acceptable user bit error rate, LDPC (Low-Density Parity-Check) codes are introduced to ensure data reliability. We further analyze the LDPC error-correction performance under different factors and determine the appropriate hardware parameters for the LDPC decoder. Finally, we use High-Level Synthesis (HLS) to rapidly implement and optimize an FPGA-based LDPC hardware decoder, named HDecoder. HDecoder achieves 204 Mbps decoding throughput, 150x and 4850x higher than a CPU-based software decoder and the HLS-based vanilla hardware decoder, respectively. Compared to the HLS-based vanilla LDPC decoder, HDecoder consumes 55x less hardware resource per Mbps.
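For readers unfamiliar with LDPC decoding, the following sketch shows a hard-decision bit-flipping decoder on a toy parity-check matrix. It only illustrates the iterative check-and-flip structure that a hardware decoder parallelizes; the paper's actual code construction, decoding algorithm, and HDecoder parameters are not reproduced here.

```python
# Minimal sketch (toy parity-check matrix, hard-decision input; not HDecoder's
# algorithm) of iterative bit-flipping LDPC decoding.
import numpy as np

def bit_flip_decode(H, r, max_iters=20):
    """Flip the bits involved in the most unsatisfied parity checks until the
    syndrome is zero or the iteration budget runs out."""
    c = r.copy()
    for _ in range(max_iters):
        syndrome = H.dot(c) % 2                 # which parity checks fail
        if not syndrome.any():
            return c, True                      # valid codeword found
        failures = H.T.dot(syndrome)            # failed checks touching each bit
        c[failures == failures.max()] ^= 1      # flip the most suspicious bits
    return c, False

if __name__ == "__main__":
    # Toy parity-check matrix for illustration only.
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)
    received = np.zeros(7, dtype=np.uint8)      # all-zero word is always a codeword
    received[2] ^= 1                            # inject a single bit error
    decoded, ok = bit_flip_decode(H, received)
    print(decoded, ok)                          # recovers the all-zero codeword
```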
KEYWORDS: Data backup, Data storage, Distributed computing, Data processing, Data storage servers, Optoelectronics, Reliability, Databases, Image compression, Image storage
An object-based storage system, integrating the advantages of both NAS and SAN, can be applied in large-capacity, low-cost, and large-scale storage systems built from commodity disk devices. Continuous data protection (CDP) is a well-known technique that continuously captures or tracks data modifications and stores the changes independently of the primary data, enabling data recovery from any point in the past. An efficient file system optimized for CDP plays an important role in object-based storage systems. In this paper, concurrent processes during data backup and data recovery operations are discussed in detail. To take full advantage of distributed system architectures, we make data operations as concurrent as possible during the read, write, and recovery processes. A new backup data object placement strategy is presented to work in coordination with the replica strategy in object-based distributed file systems. A backup data object can be placed on an object storage server (OSS) other than the OSS where the original data resides, when the backup data object meets certain conditions. For data recovery, we have the related OSSes perform data object movement concurrently. All these strategies efficiently reduce system response times.
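A minimal sketch of the placement idea described above, under assumptions of our own (a boolean qualifying condition and hash-based server selection; the paper's actual condition and selection rule are not reproduced): a backup data object that meets the condition is placed on an OSS other than the one holding the original, so backup and recovery I/O can be served by different servers.

```python
# Minimal sketch (hypothetical names and selection rule) of placing a backup data
# object on an OSS other than the one that stores the primary copy.
import hashlib

def place_backup(object_id: str, primary_oss: int, num_oss: int,
                 qualifies: bool) -> int:
    """Pick the OSS that will store the backup copy of `object_id`."""
    if not qualifies or num_oss < 2:
        return primary_oss                      # fall back to the primary's OSS
    # Deterministic hash-based choice over the remaining servers (an assumption).
    h = int(hashlib.md5(object_id.encode()).hexdigest(), 16)
    candidates = [oss for oss in range(num_oss) if oss != primary_oss]
    return candidates[h % len(candidates)]

if __name__ == "__main__":
    print(place_backup("file42.obj7", primary_oss=3, num_oss=8, qualifies=True))
```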
KEYWORDS: Data storage, Data backup, Distributed computing, System integration, Optoelectronics, Space operations, Analytical research, Data communications, Data processing, Data storage servers
An object-based storage system integrates the advantages of both NAS and SAN and can be applied in large-capacity, low-cost, and large-scale storage systems built from commodity devices. Continuous data protection (CDP) is a methodology that continuously captures or tracks data modifications and stores the changes independently of the primary data, enabling recovery from any point in the past. An efficient file system optimized for CDP is needed to provide the CDP feature in an object-based storage system. In this thesis, a new metadata management method is presented. First, all necessary metadata is recorded whenever changes happen to the file system, and a journal-like data placement algorithm stores this metadata. Secondly, this metadata management method provides both the CDP feature and the object-based feature: two types of write operations are analyzed to reduce storage space consumption, and the object-based data allocation algorithm takes advantage of the distributed file system to process CDP operations concurrently across storage nodes. Thirdly, history revisions and recovery operations are discussed. Finally, the experimental results are presented and analyzed.
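A minimal sketch, with a hypothetical record layout, of the journal-like metadata placement described above: each change appends a versioned, timestamped record rather than updating metadata in place, and a point-in-time view is reconstructed by replaying records up to the requested timestamp.

```python
# Minimal sketch (hypothetical record layout; not the thesis's on-disk format) of a
# journal-like metadata log for CDP.
import time
from dataclasses import dataclass, field

@dataclass
class MetaRecord:
    path: str
    offset: int
    length: int
    version: int
    timestamp: float = field(default_factory=time.time)

class MetadataJournal:
    def __init__(self):
        self._log: list[MetaRecord] = []
        self._versions: dict[str, int] = {}

    def record_write(self, path: str, offset: int, length: int) -> MetaRecord:
        """Append one record per modification instead of updating metadata in place."""
        version = self._versions.get(path, 0) + 1
        self._versions[path] = version
        rec = MetaRecord(path, offset, length, version)
        self._log.append(rec)
        return rec

    def view_at(self, when: float) -> dict[str, MetaRecord]:
        """Latest metadata for each file as of the requested point in time."""
        view: dict[str, MetaRecord] = {}
        for rec in self._log:
            if rec.timestamp <= when:
                view[rec.path] = rec
        return view
```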
KEYWORDS: Data storage, Data backup, System integration, Reliability, Data storage servers, Distributed computing, Data processing, Lithium, Optoelectronics, Logic
Recent advances in large-capacity, low-cost storage devices have led to active research on the design of large-scale storage systems built from commodity devices. These storage systems are composed of thousands of storage devices and require an efficient file system to provide high system bandwidth and petabyte-scale data storage. An object-based file system integrates the advantages of both NAS and SAN and can be applied in such an environment. Continuous data protection (CDP) is a methodology that continuously captures or tracks data modifications and stores the changes independently of the primary data, enabling recovery from any point in the past. All changes to files and file metadata are stored and managed. A CDP method for an object-based file system is presented in this thesis to improve system reliability. Firstly, because data protection operates at the file system level, every write request can be captured at byte-level detail, which consumes less storage space. Secondly, every object storage server can compute its recovery strip data object independently, which decreases the recovery time. Thirdly, a journal-like metadata management scheme is introduced to optimize metadata handling for CDP.
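A minimal sketch, assuming simple XOR-based redundancy and a hypothetical strip layout, of the second point above: each object storage server combines its locally stored strips into a recovery strip on its own, so servers compute in parallel rather than funneling all data through one node.

```python
# Minimal sketch (assumed XOR redundancy and strip layout; not the thesis's exact
# scheme) of per-OSS, independent recovery strip computation.
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
import os

def local_recovery_strip(strips: list[bytes]) -> bytes:
    """XOR all strips held by one OSS into a single recovery strip."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), strips)

if __name__ == "__main__":
    strip_size, strips_per_oss, num_oss = 4096, 8, 4
    per_oss = [[os.urandom(strip_size) for _ in range(strips_per_oss)]
               for _ in range(num_oss)]
    # Each worker stands in for one OSS computing its strip independently.
    with ThreadPoolExecutor(max_workers=num_oss) as pool:
        recovery_strips = list(pool.map(local_recovery_strip, per_oss))
    print(len(recovery_strips), len(recovery_strips[0]))    # 4 4096
```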