Avoiding the Disk Bottleneck in the Data Domain Deduplication File System (FAST '08)


The specific deduplication approach varies among system vendors. Certainly the different approaches vary in how effectively they reduce data. But the goal of this paper is not to investigate how to get the greatest data reduction, but rather how to do deduplication at high speed in order to meet the performance requirement for secondary storage used for data protection.

The most widely used deduplication method for secondary storage, which we call Identical Segment Deduplication, breaks a data file or stream into contiguous segments and eliminates duplicate copies of identical segments. Several emerging commercial systems have used this approach.

The focus of this paper is to show how to implement a high-throughput Identical Segment Deduplication storage system at low system cost. The key performance challenge is finding duplicate segments. Given a segment size of 8 KB and a performance target of 100 MB/sec, a deduplication system must process approximately 12,000 segments per second.

An in-memory index of all segment fingerprints could easily achieve this performance, but the size of the index would limit system size and increase system cost. Consider a segment size of 8 KB and a segment fingerprint size of 20 bytes. Supporting 8 TB worth of unique segments would require 20 GB just to store the fingerprints.

An alternative approach is to maintain an on-disk index of segment fingerprints and use a cache to accelerate segment index accesses. Unfortunately, a traditional cache would not be effective for this workload. Since fingerprint values are random, there is no spatial locality in the segment index accesses. Moreover, because the backup workload streams large data sets through the system, there is very little temporal locality: most segments are referenced just once every week, during the full backup of one particular system. Reference-based caching algorithms such as LRU do not work well for such workloads. The Venti system, for example, implemented such a cache [QD02]. Its combination of index and block caches improves its write throughput by only about 16% (from 5.6 MB/sec to 6.5 MB/sec), even with 8 parallel disk index lookups. The primary reason is its low cache hit ratio; with a low hit ratio, most index lookups require disk operations. If each index lookup requires a disk access taking about 10 msec and 8 disks are used for index lookups in parallel, the write throughput will be about 6.4 MB/sec, roughly corresponding to Venti's throughput of less than 6.5 MB/sec with 8 drives. While Venti's performance may be adequate for the archival usage of a small workgroup, it is a far cry from the throughput goal of deduplicating at 100 MB/sec to compete with high-end tape libraries. Achieving 100 MB/sec would require 125 disks doing index lookups in parallel, which would increase the system cost of deduplication storage to an unattainable level.

Our key idea is to use a combination of three methods to reduce the need for on-disk index lookups during the deduplication process. We present in detail each of the three techniques used in the production Data Domain deduplication file system. The first is to use a Bloom filter, which we call a Summary Vector, as the summary data structure to test whether a data segment is new to the system; it avoids wasted lookups for segments that do not exist in the index. The second is to store data segments and their fingerprints in the same order that they occur in a data file or stream. Such Stream-Informed Segment Layout (SISL) creates spatial locality for segment and fingerprint accesses. The third, called Locality Preserved Caching, takes advantage of the segment layout to fetch and cache groups of segment fingerprints that are likely to be accessed together. A single disk access can thus result in many cache hits and avoid many on-disk index lookups.

Our evaluation shows that these techniques are effective in removing the disk bottleneck in an Identical Segment Deduplication storage system. For a system running on a server with two dual-core CPUs and one shelf of 15 drives, these techniques can eliminate about 99% of index lookups for variable-length segments with an average size of about 8 KB. We show that the system indeed delivers high throughput: over 100 MB/sec for single-stream write and read performance, and over 210 MB/sec for multi-stream write performance. This is an order-of-magnitude throughput improvement over the parallel indexing techniques presented in the Venti system.

The rest of the paper is organized as follows. Section 2 presents challenges and observations in designing a deduplication storage system for data protection. Section 3 describes the software architecture of the production Data Domain deduplication file system. Section 4 presents our methods for avoiding the disk bottleneck. Section 5 shows our experimental results. Section 6 gives an overview of the related work, and Section 7 draws conclusions.
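The back-of-envelope figures above (roughly 12,000 segments/sec at 100 MB/sec, 20 GB of fingerprints for 8 TB of unique data, and about 6.4 MB/sec from 8 disk-bound index spindles) can be checked in a few lines. This is a sketch of the arithmetic only, assuming decimal units throughout, not code from the paper:

```python
# Back-of-envelope arithmetic from the introduction (sketch; decimal units assumed).

SEGMENT_SIZE = 8_000            # 8 KB average segment
TARGET_BPS = 100_000_000        # 100 MB/sec throughput target
FINGERPRINT_SIZE = 20           # bytes per SHA-1 segment fingerprint

# Segments that must be processed per second at the target rate: ~12,500.
segments_per_sec = TARGET_BPS // SEGMENT_SIZE

# In-memory index for 8 TB of unique segments: 20 GB of fingerprints alone.
unique_segments = 8_000_000_000_000 // SEGMENT_SIZE
index_bytes = unique_segments * FINGERPRINT_SIZE

# Disk-bound lookups: one 10 msec access per fingerprint, 8 disks in parallel
# gives 800 lookups/sec, i.e. about 6.4 MB/sec of write throughput.
lookups_per_sec = 8 / 0.010
write_bps = lookups_per_sec * SEGMENT_SIZE
```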
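To make the first technique concrete: a Bloom filter answers "definitely not present" exactly and "possibly present" with a small false-positive rate, so a deduplication pipeline can skip the on-disk index entirely for most new segments. The following is a minimal sketch under assumed parameters; the bit-array size, hash derivation, and class name are illustrative, not the production Summary Vector:

```python
import hashlib

class SummaryVector:
    """Minimal Bloom filter sketch: set k bits per inserted fingerprint;
    any zero bit on lookup proves the fingerprint was never inserted."""

    def __init__(self, num_bits=1 << 20, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, fingerprint: bytes):
        # Derive k bit positions from the fingerprint (illustrative scheme).
        for i in range(self.num_hashes):
            h = hashlib.sha1(i.to_bytes(2, "big") + fingerprint).digest()
            yield int.from_bytes(h[:8], "big") % self.num_bits

    def add(self, fingerprint: bytes):
        for pos in self._positions(fingerprint):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def may_contain(self, fingerprint: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(fingerprint))

sv = SummaryVector()
fp_old = hashlib.sha1(b"segment A").digest()
sv.add(fp_old)
assert sv.may_contain(fp_old)   # no false negatives for inserted segments
# A False result for some other fingerprint means "definitely new":
# the expensive on-disk index lookup can be skipped.
```

A `True` answer may still be a false positive, so it must be confirmed against the index; only the `False` path saves the disk access.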
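The interplay of SISL and Locality Preserved Caching can be sketched as a cache whose miss path loads an entire group of fingerprints that were written together, so one disk read prefetches the fingerprints most likely to be probed next. The container structure, eviction policy, and the assumption that the container id is already known are simplifications for illustration:

```python
from collections import OrderedDict

class LocalityPreservedCache:
    """Sketch: on a miss, fetch the whole group of fingerprints stored
    together (one simulated container read) rather than a single entry."""

    def __init__(self, containers, max_groups=2):
        self.containers = containers      # container_id -> list of fingerprints on disk
        self.max_groups = max_groups      # cache whole groups; evict oldest first
        self.cached = OrderedDict()       # container_id -> set of cached fingerprints
        self.container_reads = 0

    def lookup(self, fingerprint, container_id):
        for group in self.cached.values():
            if fingerprint in group:
                return True               # hit: no disk access needed
        # Miss: one "disk read" brings in the entire container's fingerprints.
        # (In the real system the container id would come from the index lookup.)
        self.container_reads += 1
        self.cached[container_id] = set(self.containers[container_id])
        if len(self.cached) > self.max_groups:
            self.cached.popitem(last=False)   # evict the oldest whole group
        return fingerprint in self.cached[container_id]

# A backup stream tends to revisit fingerprints in the order they were stored,
# so caching whole groups turns one read into many subsequent hits.
containers = {0: [f"fp{i}" for i in range(4)]}
cache = LocalityPreservedCache(containers)
hits = [cache.lookup(f"fp{i}", 0) for i in range(4)]
# One container read serves all four lookups.
```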

2 Challenges and Observations

2.1 Variable vs. Fixed Length Segments

An Identical Segment Deduplication system could choose to use either fixed-length segments or variable-length segments created in a content-dependent manner. Fixed-length segments are the same as the fixed-size blocks of many non-deduplication file systems. For the purposes of this discussion, extents that are multiples of


FAST ’08: 6th USENIX Conference on File and Storage Technologies, USENIX Association
