author     Linus Torvalds <torvalds@linux-foundation.org>  2011-08-02 20:49:21 -1000
committer  Linus Torvalds <torvalds@linux-foundation.org>  2011-08-02 20:49:21 -1000
commit     f3406816bb2486fc44558bec77179cd9bcbd4450 (patch)
tree       718db1ef45e55314b5e7290f77e70e6328d855a4 /Documentation
parent     4400478ba3d939b680810aa004f1e954b4f8ba16 (diff)
parent     ed8b752bccf2560e305e25125721d2f0ac759e88 (diff)
Merge git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-2.6-dm
* git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-2.6-dm: (34 commits)
dm table: set flush capability based on underlying devices
dm crypt: optionally support discard requests
dm raid: add md raid1 support
dm raid: support metadata devices
dm raid: add write_mostly parameter
dm raid: add region_size parameter
dm raid: improve table parameters documentation
dm ioctl: forbid multiple device specifiers
dm ioctl: introduce __get_dev_cell
dm ioctl: fill in device parameters in more ioctls
dm flakey: add corrupt_bio_byte feature
dm flakey: add drop_writes
dm flakey: support feature args
dm flakey: use dm_target_offset and support discards
dm table: share target argument parsing functions
dm snapshot: skip reading origin when overwriting complete chunk
dm: ignore merge_bvec for snapshots when safe
dm table: clean dm_get_device and move exports
dm raid: tidy includes
dm ioctl: prevent empty message
...
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/device-mapper/dm-crypt.txt  |  21
-rw-r--r--  Documentation/device-mapper/dm-flakey.txt |  48
-rw-r--r--  Documentation/device-mapper/dm-raid.txt   | 138
3 files changed, 150 insertions(+), 57 deletions(-)
diff --git a/Documentation/device-mapper/dm-crypt.txt b/Documentation/device-mapper/dm-crypt.txt
index 6b5c42dbbe84..2c656ae43ba7 100644
--- a/Documentation/device-mapper/dm-crypt.txt
+++ b/Documentation/device-mapper/dm-crypt.txt
@@ -4,7 +4,8 @@ dm-crypt
 Device-Mapper's "crypt" target provides transparent encryption of block
 devices using the kernel crypto API.
 
-Parameters: <cipher> <key> <iv_offset> <device path> <offset>
+Parameters: <cipher> <key> <iv_offset> <device path> \
+	      <offset> [<#opt_params> <opt_params>]
 
 <cipher>
     Encryption cipher and an optional IV generation mode.
@@ -37,6 +38,24 @@ Parameters: <cipher> <key> <iv_offset> <device path> <offset>
 <offset>
     Starting sector within the device where the encrypted data begins.
 
+<#opt_params>
+    Number of optional parameters.  If there are no optional parameters,
+    the optional parameters section can be skipped or #opt_params can be zero.
+    Otherwise #opt_params is the number of following arguments.
+
+    Example of optional parameters section:
+        1 allow_discards
+
+allow_discards
+    Block discard requests (a.k.a. TRIM) are passed through the crypt device.
+    The default is to ignore discard requests.
+
+    WARNING: Assess the specific security risks carefully before enabling this
+    option.  For example, allowing discards on encrypted devices may lead to
+    the leak of information about the ciphertext device (filesystem type,
+    used space etc.) if the discarded blocks can be located easily on the
+    device later.
+
 Example scripts
 ===============
 LUKS (Linux Unified Key Setup) is now the preferred way to set up disk
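For reference, a crypt table using the optional-parameter syntax documented
above might be loaded as in the sketch below; the device path, key, and
sector counts are illustrative placeholders, not values taken from the patch:

  # Hypothetical example: 128-bit AES-CBC-ESSIV mapping over /dev/sdb that
  # passes discard requests through to the underlying device (one optional
  # parameter follows the count).
  dmsetup create crypt1 --table "0 2097152 crypt aes-cbc-essiv:sha256 babebabebabebabebabebabebabebabe 0 /dev/sdb 0 1 allow_discards"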
diff --git a/Documentation/device-mapper/dm-flakey.txt b/Documentation/device-mapper/dm-flakey.txt
index c8efdfd19a65..6ff5c2327227 100644
--- a/Documentation/device-mapper/dm-flakey.txt
+++ b/Documentation/device-mapper/dm-flakey.txt
@@ -1,17 +1,53 @@
 dm-flakey
 =========
 
-This target is the same as the linear target except that it returns I/O
-errors periodically.  It's been found useful in simulating failing
-devices for testing purposes.
+This target is the same as the linear target except that it exhibits
+unreliable behaviour periodically.  It's been found useful in simulating
+failing devices for testing purposes.
 
 Starting from the time the table is loaded, the device is available for
-<up interval> seconds, then returns errors for <down interval> seconds,
-and then this cycle repeats.
+<up interval> seconds, then exhibits unreliable behaviour for <down
+interval> seconds, and then this cycle repeats.
 
-Parameters: <dev path> <offset> <up interval> <down interval>
+Also, consider using this in combination with the dm-delay target too,
+which can delay reads and writes and/or send them to different
+underlying devices.
+
+Table parameters
+----------------
+  <dev path> <offset> <up interval> <down interval> \
+    [<num_features> [<feature arguments>]]
+
+Mandatory parameters:
     <dev path>: Full pathname to the underlying block-device, or a
                 "major:minor" device-number.
     <offset>: Starting sector within the device.
     <up interval>: Number of seconds device is available.
     <down interval>: Number of seconds device returns errors.
+
+Optional feature parameters:
+  If no feature parameters are present, during the periods of
+  unreliability, all I/O returns errors.
+
+  drop_writes:
+	All write I/O is silently ignored.
+	Read I/O is handled correctly.
+
+  corrupt_bio_byte <Nth_byte> <direction> <value> <flags>:
+	During <down interval>, replace <Nth_byte> of the data of
+	each matching bio with <value>.
+
+    <Nth_byte>: The offset of the byte to replace.
+		Counting starts at 1, to replace the first byte.
+    <direction>: Either 'r' to corrupt reads or 'w' to corrupt writes.
+		 'w' is incompatible with drop_writes.
+    <value>: The value (from 0-255) to write.
+    <flags>: Perform the replacement only if bio->bi_rw has all the
+	     selected flags set.
+
+Examples:
+  corrupt_bio_byte 32 r 1 0
+	- replaces the 32nd byte of READ bios with the value 1
+
+  corrupt_bio_byte 224 w 0 32
+	- replaces the 224th byte of REQ_META (=32) bios with the value 0
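As a concrete illustration of the feature arguments documented above, two
hedged sketches of flakey tables follow; the device, sizes and intervals are
placeholders, and the leading feature count is assumed to include the keyword
together with its values:

  # Hypothetical example: behave normally for 50s, then silently drop all
  # writes for 10s, and repeat (a single feature argument).
  dmsetup create flakey1 --table "0 409600 flakey /dev/sdc1 0 50 10 1 drop_writes"

  # Hypothetical example: same cycle, but during the down interval corrupt
  # the 32nd byte of READ bios with the value 1 (five feature arguments:
  # the keyword plus its four values).
  dmsetup create flakey2 --table "0 409600 flakey /dev/sdc1 0 50 10 5 corrupt_bio_byte 32 r 1 0"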
diff --git a/Documentation/device-mapper/dm-raid.txt b/Documentation/device-mapper/dm-raid.txt
index 33b6b7071ac8..2a8c11331d2d 100644
--- a/Documentation/device-mapper/dm-raid.txt
+++ b/Documentation/device-mapper/dm-raid.txt
@@ -1,70 +1,108 @@
-Device-mapper RAID (dm-raid) is a bridge from DM to MD.  It
-provides a way to use device-mapper interfaces to access the MD RAID
-drivers.
+dm-raid
+-------
 
-As with all device-mapper targets, the nominal public interfaces are the
-constructor (CTR) tables and the status outputs (both STATUSTYPE_INFO
-and STATUSTYPE_TABLE).  The CTR table looks like the following:
+The device-mapper RAID (dm-raid) target provides a bridge from DM to MD.
+It allows the MD RAID drivers to be accessed using a device-mapper
+interface.
 
-1: <s> <l> raid \
-2:	<raid_type> <#raid_params> <raid_params> \
-3:	<#raid_devs> <meta_dev1> <dev1> .. <meta_devN> <devN>
-
-Line 1 contains the standard first three arguments to any device-mapper
-target - the start, length, and target type fields.  The target type in
-this case is "raid".
-
-Line 2 contains the arguments that define the particular raid
-type/personality/level, the required arguments for that raid type, and
-any optional arguments.  Possible raid types include: raid4, raid5_la,
-raid5_ls, raid5_rs, raid6_zr, raid6_nr, and raid6_nc.  (raid1 is
-planned for the future.)  The list of required and optional parameters
-is the same for all the current raid types.  The required parameters are
-positional, while the optional parameters are given as key/value pairs.
-The possible parameters are as follows:
-	<chunk_size>			Chunk size in sectors.
-	[[no]sync]			Force/Prevent RAID initialization
-	[rebuild <idx>]			Rebuild the drive indicated by the index
-	[daemon_sleep <ms>]		Time between bitmap daemon work to clear bits
-	[min_recovery_rate <kB/sec/disk>]	Throttle RAID initialization
-	[max_recovery_rate <kB/sec/disk>]	Throttle RAID initialization
-	[max_write_behind <sectors>]	See '-write-behind=' (man mdadm)
-	[stripe_cache <sectors>]	Stripe cache size for higher RAIDs
-
-Line 3 contains the list of devices that compose the array in
-metadata/data device pairs.  If the metadata is stored separately, a '-'
-is given for the metadata device position.  If a drive has failed or is
-missing at creation time, a '-' can be given for both the metadata and
-data drives for a given position.
-
-NB. Currently all metadata devices must be specified as '-'.
-
-Examples:
-# RAID4 - 4 data drives, 1 parity
+The target is named "raid" and it accepts the following parameters:
+
+  <raid_type> <#raid_params> <raid_params> \
+    <#raid_devs> <metadata_dev0> <dev0> [.. <metadata_devN> <devN>]
+
+<raid_type>:
+  raid1		RAID1 mirroring
+  raid4		RAID4 dedicated parity disk
+  raid5_la	RAID5 left asymmetric
+		- rotating parity 0 with data continuation
+  raid5_ra	RAID5 right asymmetric
+		- rotating parity N with data continuation
+  raid5_ls	RAID5 left symmetric
+		- rotating parity 0 with data restart
+  raid5_rs	RAID5 right symmetric
+		- rotating parity N with data restart
+  raid6_zr	RAID6 zero restart
+		- rotating parity zero (left-to-right) with data restart
+  raid6_nr	RAID6 N restart
+		- rotating parity N (right-to-left) with data restart
+  raid6_nc	RAID6 N continue
+		- rotating parity N (right-to-left) with data continuation
+
+  Reference: Chapter 4 of
+  http://www.snia.org/sites/default/files/SNIA_DDF_Technical_Position_v2.0.pdf
+
+<#raid_params>: The number of parameters that follow.
+
+<raid_params> consists of
+    Mandatory parameters:
+	<chunk_size>: Chunk size in sectors.  This parameter is often known as
+		      "stripe size".  It is the only mandatory parameter and
+		      is placed first.
+
+    followed by optional parameters (in any order):
+	[sync|nosync]	Force or prevent RAID initialization.
+
+	[rebuild <idx>]	Rebuild drive number idx (first drive is 0).
+
+	[daemon_sleep <ms>]
+		Interval between runs of the bitmap daemon that
+		clear bits.  A longer interval means less bitmap I/O but
+		resyncing after a failure is likely to take longer.
+
+	[min_recovery_rate <kB/sec/disk>]  Throttle RAID initialization
+	[max_recovery_rate <kB/sec/disk>]  Throttle RAID initialization
+	[write_mostly <idx>]		   Drive index is write-mostly
+	[max_write_behind <sectors>]	   See '-write-behind=' (man mdadm)
+	[stripe_cache <sectors>]	   Stripe cache size (higher RAIDs only)
+	[region_size <sectors>]
+		The region_size multiplied by the number of regions is the
+		logical size of the array.  The bitmap records the device
+		synchronisation state for each region.
+
+<#raid_devs>: The number of devices composing the array.
+	Each device consists of two entries.  The first is the device
+	containing the metadata (if any); the second is the one containing the
+	data.
+
+	If a drive has failed or is missing at creation time, a '-' can be
+	given for both the metadata and data drives for a given position.
+
+
+Example tables
+--------------
+# RAID4 - 4 data drives, 1 parity (no metadata devices)
 # No metadata devices specified to hold superblock/bitmap info
 # Chunk size of 1MiB
 # (Lines separated for easy reading)
+
 0 1960893648 raid \
         raid4 1 2048 \
         5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81
 
-# RAID4 - 4 data drives, 1 parity (no metadata devices)
+# RAID4 - 4 data drives, 1 parity (with metadata devices)
 # Chunk size of 1MiB, force RAID initialization,
 #	min recovery rate at 20 kiB/sec/disk
+
 0 1960893648 raid \
-        raid4 4 2048 min_recovery_rate 20 sync\
-        5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81
+        raid4 4 2048 sync min_recovery_rate 20 \
+        5 8:17 8:18 8:33 8:34 8:49 8:50 8:65 8:66 8:81 8:82
 
-Performing a 'dmsetup table' should display the CTR table used to
-construct the mapping (with possible reordering of optional
-parameters).
+'dmsetup table' displays the table used to construct the mapping.
+The optional parameters are always printed in the order listed
+above with "sync" or "nosync" always output ahead of the other
+arguments, regardless of the order used when originally loading the table.
+Arguments that can be repeated are ordered by value.
 
-Performing a 'dmsetup status' will yield information on the state and
-health of the array.  The output is as follows:
+'dmsetup status' yields information on the state and health of the
+array.
+The output is as follows:
 1: <s> <l> raid \
 2:	<raid_type> <#devices> <1 health char for each dev> <resync_ratio>
 
-Line 1 is standard DM output.  Line 2 is best shown by example:
+Line 1 is the standard output produced by device-mapper.
+Line 2 is produced by the raid target, and best explained by example:
 	0 1960893648 raid raid4 5 AAAAA 2/490221568
 Here we can see the RAID type is raid4, there are 5 devices - all of
 which are 'A'live, and the array is 2/490221568 complete with recovery.
+Faulty or missing devices are marked 'D'.  Devices that are out-of-sync
+are marked 'a'.
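To make the raid1 support added by this series concrete, a hedged sketch of a
table in the style of the example tables above; the device numbers, array
size and parameter choices are illustrative assumptions rather than values
taken from the patch:

# Hypothetical example: 2-device RAID1 with metadata devices, a region
# size of 8192 sectors and the second drive marked write-mostly.  The
# chunk-size slot still comes first; 0 is used here on the assumption
# that striping does not apply to RAID1.
0 1960893648 raid \
        raid1 5 0 region_size 8192 write_mostly 1 \
        2 8:17 8:18 8:33 8:34

Such a table would be loaded the same way as the existing examples, for
instance via 'dmsetup create'.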