Notes on Setting up RAID and Related Issues

Russell Bateman
June 2014
last update:

Table of Contents

Introduction
ZFS on Linux
Some documents about ZFS
RAID notes
mdadm
ZFS practical
Server installation
Pre-launch maintenance
And now, for zfs
Steps
zfs terminology
Beginning the serious work...
We get down to it...
Starting over...
Appendices
Disk table
Disk-related status

Introduction

I decided to lose my VMware ESXi server set-up as, with my latest employment, I simply don't need the technology and I don't have time to use it at home for now. So, I've got this machine (see here and here) freed up on which to set up my new Plex Media server and want to do a RAID 1 thingy.

First, taking into account the notes at the bottom on (just plain) RAID, my assortment of disks isn't too promising for mirroring:

  1. 2Tb WD—empty
  2. 2Tb WD—empty
  3. 4Tb Seagate—empty
  4. 1Tb Seagate—already holding all (circa 300Gb) of my present, Plex-entrusted media*

(* would be nice not to have to back this up and restore it.)

Other details on the machine I'm using:

I'll have to see what I can make of it all.

ZFS on Linux

This was some early work on documenting ZFS. Don't pay too much attention to it before the "practical" section, which illustrates some horsing around I did before settling on a final approach, documented in Setting up tol-eressëa.

60,000' overview

https://en.wikipedia.org/wiki/ZFS

Third-party demonstration of installation on Debian

Really, really good treatment. I may just follow this guy's write-up and skip the head-scratching.

http://arstechnica.com/information-technology/2014/02/ars-walkthrough-using-the-zfs-next-gen-filesystem-on-linux/

FAQ

http://zfsonlinux.org/faq.html

Formal documentation for Debian

https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/




mdadm

mdadm is at least as old as 2005. Not sure this is the road I want to walk. Anyway, even if it did a good job and wasn't hard to manage, my first disks were all of different sizes and useless for RAID 1.



RAID notes in general

RAID 0

With RAID 0, the RAID controller tries to evenly distribute data across all disks in the RAID set.

Envision a disk as if it were a plate, and think of the data as a cake. You have four cakes—chocolate, vanilla, cherry and strawberry—and four plates. The initialization process of RAID 0 divides the cakes and distributes the slices across all the plates. The RAID 0 drivers make it appear to the operating system that the cakes are intact and placed on one large plate. For example, four 9Gb hard disks configured in a RAID 0 set are seen by the operating system as one 36Gb disk.

RAID 0 can accommodate disks of unequal sizes.

The advantage of RAID 0 is data access speed and viewing several disks as one filesystem. A file that is spread over four disks can be read four times as fast. RAID 0 is often called striping.
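The cake-and-plates picture can be sketched in a few lines of Python (purely illustrative; a real controller stripes fixed-size blocks at the device level, and the function names here are my own):

```python
# Round-robin striping: chunk i of the data stream lands on disk i % N,
# so reads and writes can proceed on all disks in parallel.
def stripe(data, num_disks, chunk_size):
    disks = [[] for _ in range(num_disks)]
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for i, chunk in enumerate(chunks):
        disks[i % num_disks].append(chunk)
    return disks

def unstripe(disks):
    # Read the chunks back in round-robin order to rebuild the stream.
    total = sum(len(d) for d in disks)
    return b"".join(disks[i % len(disks)][i // len(disks)] for i in range(total))

striped = stripe(b"chocolate vanilla cherry strawberry", num_disks=4, chunk_size=4)
```

The operating system only ever sees `unstripe`'s view: one large, contiguous volume.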

RAID 1

Data is cloned on a duplicate disk. This RAID method is therefore frequently called disk mirroring. Think of telling two people the same story so that if one forgets some of the details you can ask the other one to remind you.

A limitation of RAID 1 is that the total RAID size in gigabytes is equal to that of the smallest disk in the RAID set. Unlike RAID 0, the extra space on the larger device isn't used.
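That limitation is easy to make concrete (a trivial sketch, just to show the arithmetic behind my disk-pairing worries above):

```python
# RAID 1 usable capacity is bounded by the smallest member of the mirror;
# any extra space on the larger disk goes unused.
def mirror_capacity_tb(disk_sizes_tb):
    return min(disk_sizes_tb)

# Pairing a 2 Tb WD with the 4 Tb Seagate would waste half the Seagate:
mirror_capacity_tb([2.0, 4.0])
```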

RAID 4

Operates like RAID 0 but inserts a special error-correcting or parity chunk on an additional disk dedicated to this purpose.

RAID 4 requires at least three disks in the RAID set and can survive the loss of only a single drive. When a drive fails, the data on it can be recreated on the fly with the aid of the information on the RAID set's parity disk. When the failed disk is replaced, it is repopulated with the lost data with the help of the parity disk's information.

RAID 4 combines the high speed provided by RAID 0 with the redundancy of RAID 1. Its major disadvantage is that the data is striped, but the parity information is not. In other words, any data written to any section of the data portion of the RAID set must be followed by an update of the parity disk. The parity disk can therefore act as a bottleneck. For this reason, RAID 4 isn't used very frequently.
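The parity trick underlying RAID 4 (and RAID 5) is plain XOR: the parity chunk is the XOR of the corresponding data chunks, so any single missing chunk can be recomputed from the survivors. A toy sketch (my own illustration, not any real driver's code):

```python
from functools import reduce

def parity(chunks):
    # XOR corresponding bytes of every chunk to produce the parity chunk.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe across three data disks
p = parity(data)                     # what the dedicated parity disk stores

# Disk 1 fails: XOR the surviving data chunks with the parity chunk
# to rebuild the lost chunk on the fly.
rebuilt = parity([data[0], data[2], p])
```

Note that every write touches the parity disk (recompute `p`), which is exactly the bottleneck described above.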

RAID 5

Improves on RAID 4 by striping the parity data between all the disks in the RAID set. This avoids the parity disk bottleneck, while maintaining many of the speed features of RAID 0 and the redundancy of RAID 1. Like RAID 4, RAID 5 can survive the loss of a single disk only.



ZFS practical

Here are some resources I resorted to through this process.

zfs is a highly scalable, future-proof filesystem and logical volume manager designed around the concepts that a) data integrity is paramount, b) storage administration should be simple and c) everything can be done on-line.

We're setting up zfs, the special filesystem that will allow me to mirror each of the two drive pairs (2 Tb and 4 Tb), then group them together as if they were one big 6 Tb drive.

The idea is that if a drive goes bad, its mate will still have all the data on it and the system doesn't skip a beat. I will just need to buy a new drive of the same size and replace the bad one before the surviving one ever goes bad.

Here is the Ubuntu stable release of zfs. However, that page is not especially helpful. Please follow these instructions instead.

Steps

  1. An article I read suggested doing this:
    # apt-get install python-software-properties
    
  2. Add the ppa:zfs-native/stable personal package archive (PPA) to the system and update the list of software sources with it:
    # add-apt-repository ppa:zfs-native/stable
        (fetches keys)
    The native ZFS filesystem for Linux. Install the ubuntu-zfs package.
    This PPA contains the latest stable release.
    Please join the Launchpad user group if you want to show support for ZoL:
      https://launchpad.net/~zfs-native-users
    The ZoL project home page is: http://zfsonlinux.org/
    Send feedback or requests for help to this email list: <email address hidden>
    A searchable email list history is available at:
      http://groups.google.com/a/zfsonlinux.org/group/zfs-discuss/
    Report bugs at: https://github.com/zfsonlinux/zfs/issues
    Get recent daily beta builds at: https://launchpad.net/~zfs-native/+archive/daily
    Press [ENTER] to continue or ctrl-c to cancel adding it
    gpg: keyring `/tmp/tmpyi8f5o2m/secring.gpg' created
    gpg: keyring `/tmp/tmpyi8f5o2m/pubring.gpg' created
    gpg: requesting key F6B0FC61 from hkp server keyserver.ubuntu.com
    gpg: /tmp/tmpyi8f5o2m/trustdb.gpg: trustdb created
    gpg: key F6B0FC61: public key "Launchpad PPA for Native ZFS for Linux" imported
    gpg: Total number processed: 1
    gpg:               imported: 1  (RSA: 1)
    OK
    # apt-get update
    
  3. As I'll want to upgrade zfs when necessary, I elected to create /etc/apt/sources.list.d/zfs.list with these contents.
    # vim /etc/apt/sources.list.d/zfs.list
        deb     http://ppa.launchpad.net/zfs-native/stable/ubuntu trusty main
        deb-src http://ppa.launchpad.net/zfs-native/stable/ubuntu trusty main
    

    If you get something like the following, it's because you failed to add the PPA to the repository successfully in the previous step:

    W: GPG error: http://ppa.launchpad.net trusty Release: The following signatures couldn't be verified \
            because the public key is not available: NO_PUBKEY 1196BA81F6B0FC61
    
  4. The command to install zfs follows; it takes a very long time (in excess of 10 minutes), with lots of installing of prerequisites, linking, etc.
    # apt-get install ubuntu-zfs
    

Initial tuning

This is listed as a step for Linux installations by an article I read. Apparently, we must limit zfs' adaptive replacement cache (ARC) to an appropriate value, otherwise it's too slow to release RAM back to the rest of the system and will starve it of memory. The article advises setting it at one-half of the total system RAM; in our case, this is 8 Gb. As the value has to be expressed in bytes, there's a comment in the file below that gives several useful values.

This is done in /etc/modprobe.d/zfs.conf. Here's mine; this file didn't exist before I created it.

# /etc/modprobe.d/zfs.conf
#
# yes you really DO have to specify zfs_arc_max IN BYTES ONLY!
# 16GB=17179869184, 8GB=8589934592, 4GB=4294967296, 2GB=2147483648, 1GB=1073741824
#
options zfs zfs_arc_max=8589934592
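The byte values in that comment are just powers of two; a quick sanity check (assuming 1 Gb = 2^30 bytes, which is what the comment uses; the function name is my own):

```python
# zfs_arc_max must be given in bytes; this reproduces the comment's table.
def gb_to_bytes(gigabytes):
    return gigabytes * 2**30

for gb in (1, 2, 4, 8, 16):
    print(f"{gb}GB = {gb_to_bytes(gb)}")
```

Half of a 16 Gb machine is gb_to_bytes(8), i.e. the 8589934592 used above.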

I plan on rebooting to make this value take effect. After the reboot, I see this, confirming that I successfully set the maximum number of bytes the ARC can monopolize at any given time to 8 Gb.

root@tol-eressea:/# grep c_max /proc/spl/kstat/zfs/arcstats
c_max                           4    8589934592

zfs terminology

In zfs, data is stored on one or more vdevs (virtual devices), which may themselves populate one or more higher-level vdevs. In my case, I predict something like the illustration below; we'll see how well I understood as we go.

A vdev can be a single physical drive or even a partition. In my case, the lowest granularity will be a physical drive. Or, a vdev can be a higher-level concept consisting of multiple drives or partitions.

We'll use something called a zpool to subsume the 2 mirrored vdevs into a RAID10 array logically mapping the 2 Tb and 4 Tb vdevs into one 6 Tb entity.

vdevs are immutable: once created, they cannot be added to. zpools are not: I can add more mirrored vdevs to my pool as needed using zpool add.

What's important in the behavior of a zpool is that the system will try to fill up the vdevs in a balanced way as much as possible. To see the zpool status, do this.

# zpool status
      pool: aquilonde
     state: ONLINE
     scrub: none requested
    config:

            NAME                        STATE     READ WRITE CKSUM
            aquilonde                   ONLINE       0     0     0
              mirror-0                  ONLINE       0     0     0
                wwn-0x50014ee2080259c8  ONLINE       0     0     0
                wwn-0x50014ee2080268b2  ONLINE       0     0     0
              mirror-1                  ONLINE       0     0     0
                wwn-0x50014ee25d4cdecd  ONLINE       0     0     0
                wwn-0x50014ee25d4ce711  ONLINE       0     0     0

(The wwn-... stuff is fictional for now.) As implied, each vdev will contain the same percentage of data as the other. In my case, for 4 Gb of data written, zfs will try to put roughly 1.3 Gb on (2 Tb) mirror-0 (the Western Digital pair) and 2.7 Gb on (4 Tb) mirror-1 (the Seagate pair).
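That balancing arithmetic can be sketched as follows (a simplification: zfs actually weights allocation by free space among other factors, but on empty vdevs it amounts to this; the function name is my own):

```python
def split_write(total_gb, vdev_sizes_tb):
    # Spread a write across vdevs in proportion to their capacities.
    whole = sum(vdev_sizes_tb)
    return [total_gb * size / whole for size in vdev_sizes_tb]

split_write(4, [2, 4])  # ~1.33 Gb to mirror-0, ~2.67 Gb to mirror-1
```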

Beginning the serious work...

Before attempting a zpool, let's get a list of the drives the system thinks it knows about.

root@tol-eressea:/# ls -l /dev/disk/by-id
total 0
lrwxrwxrwx 1 root root  9 Jul 25 19:59 ata-ASUS_DRW-24B1ST_c_CAD0CL247932 -> ../../sr0
lrwxrwxrwx 1 root root  9 Jul 25 19:59 ata-ST3320613AS_6SZ27HJQ -> ../../sdc
lrwxrwxrwx 1 root root 10 Jul 25 19:59 ata-ST3320613AS_6SZ27HJQ-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jul 25 19:59 ata-ST3320613AS_6SZ27HJQ-part2 -> ../../sdc2
lrwxrwxrwx 1 root root 10 Jul 25 19:59 ata-ST3320613AS_6SZ27HJQ-part5 -> ../../sdc5
lrwxrwxrwx 1 root root  9 Jul 25 19:59 ata-ST4000DM000-1F2168_S300ELBZ -> ../../sde
lrwxrwxrwx 1 root root  9 Jul 25 19:59 ata-ST4000DM000-1F2168_S300MZ7G -> ../../sdd
lrwxrwxrwx 1 root root  9 Jul 25 19:59 ata-WDC_WD20EARX-00PASB0_WD-WCAZAJ004702 -> ../../sda
lrwxrwxrwx 1 root root  9 Jul 25 19:59 ata-WDC_WD20EARX-00PASB0_WD-WCAZAJ069805 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jul 25 19:59 dm-name-tol--eressea--vg-root -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jul 25 19:59 dm-name-tol--eressea--vg-swap_1 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Jul 25 19:59 dm-uuid-LVM-bZfC1RqhLUtA1WXEM9ZDp0OrYWL7UDA90DCf3JdnKoqszQLFuQ8UYdhEIqipYTbe -> ../../dm-1
lrwxrwxrwx 1 root root 10 Jul 25 19:59 dm-uuid-LVM-bZfC1RqhLUtA1WXEM9ZDp0OrYWL7UDA9JnUvrNm3NhEWfiRBdOcmwD8xzmr3Uhi3 -> ../../dm-0
lrwxrwxrwx 1 root root  9 Jul 25 19:59 wwn-0x5000c50013f7f83d -> ../../sdc
lrwxrwxrwx 1 root root 10 Jul 25 19:59 wwn-0x5000c50013f7f83d-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jul 25 19:59 wwn-0x5000c50013f7f83d-part2 -> ../../sdc2
lrwxrwxrwx 1 root root 10 Jul 25 19:59 wwn-0x5000c50013f7f83d-part5 -> ../../sdc5
lrwxrwxrwx 1 root root  9 Jul 25 19:59 wwn-0x5000c5006d9b71cb -> ../../sde
lrwxrwxrwx 1 root root  9 Jul 25 19:59 wwn-0x5000c5007490cd1d -> ../../sdd
lrwxrwxrwx 1 root root  9 Jul 25 19:59 wwn-0x50014ee2076548d3 -> ../../sdb
lrwxrwxrwx 1 root root  9 Jul 25 19:59 wwn-0x50014ee2b21019d9 -> ../../sda

Above, I recognize some names. To make this more visible, I've associated a color with each set of drives and will keep this color association throughout this document:

ASUS_DRW-24B1ST_c_CAD0CL247932        the system DVD-RW drive (we won't touch this drive)
ST3320613AS_6SZ27HJQ                  the system drive (320 Gb) (we won't touch these partitions)
WDC_WD20EARX-00PASB0_WD-WCAZAJ etc.   the original 2 Tb drives (mirror these two drives)
ST4000DM000-1F2168_S300 etc.          the new 4 Tb drives (mirror these two drives)

The other stuff looks similar to the wwn- designations in the sample zpool status I copied from the article. I also recognize sda, sdb, ... sde. The article tells us that drives can be referred to in multiple ways: a) by wwn id, b) by model and serial number as connected to the ATA bus, and c) ditto as connected to the virtual SCSI bus. The article says that the best designation to pick is one reproduced on a label on the physical drive: when one goes south, I'll want to be able to identify it, pull it out, then replace it.

I'll have to decide this soon. On my 320 Gb drive, the wwn designation is clearly visible, but the shortest designation is the model plus serial number. Maybe that's best to use.

We get down to it

For this, I had to resort to different resources because the first article, although a really fine one, only mentioned mirroring and did not show how to set up even a simple case. That article was really good at explaining, useless for showing.

Overall, it took a while to get the commands right; the tutorials are not perfectly helpful, since they're written by people who get it and who don't get not getting it, so they skip explaining or showing. I ended up creating two mirrored vdevs and then wanting to put them into a pool,

root@tol-eressea:/# zpool create vol1 mirror /dev/sda /dev/sdb
root@tol-eressea:/# zpool create vol2 mirror /dev/sdd /dev/sde

Then I thought to combine those two vdevs into a pool.

root@tol-eressea:/# zpool create -m /plex-server aquilonde vol1 vol2
cannot open 'vol1': no such device in /dev
must be a full path or shorthand device name
root@tol-eressea:/# zpool create aquilonde vol1 vol2
cannot open 'vol1': no such device in /dev
must be a full path or shorthand device name

...but that's not how it works. The pool is created at the same time as its vdevs, not separately afterward. So, I had to remove the two pools I created during my trials:

It appears that instead of building from the ground-up, zfs was designed to build from halfway up the ladder down, then off to the side. Clearly object orientation wasn't in the minds of the conceivers. I'll have to deconstruct everything I've done and start over.

root@tol-eressea:~# zpool destroy vol2
root@tol-eressea:~# zpool destroy vol1
root@tol-eressea:~# zpool status
no pools available

Starting over...

Now the right commands seem to be these. Notice how, instead of creating both bottom-level vdevs first, we create the pool with unnamed vdevs that we declare and mirror at the same time. We could do this all in one command, but I've used the add subcommand to allow me to do it in two. Finally, we make zfs tell the filesystem about it.

First, let's create pool aquilonde with our first pair of mirrored drives.

root@tol-eressea:~# zpool create aquilonde mirror /dev/sda /dev/sdb
root@tol-eressea:~# zpool status
  pool: aquilonde
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	aquilonde   ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda     ONLINE       0     0     0
	    sdb     ONLINE       0     0     0

errors: No known data errors

So far, so good. Next, we'll add the second pair of mirrored drives.

root@tol-eressea:~# zpool add aquilonde mirror /dev/sdd /dev/sde
root@tol-eressea:~# zpool status
  pool: aquilonde
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	aquilonde   ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda     ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	  mirror-1  ONLINE       0     0     0
	    sdd     ONLINE       0     0     0
	    sde     ONLINE       0     0     0

errors: No known data errors

It's suggested to give better names than /dev/sdN, etc., since when a disk dies, it might be challenging to figure out which physical disk corresponds to the dead one. Only, it appears you can't create vdevs using arbitrary names in the first place, since zpool has to know which disk(s) you are making vdevs of. Therefore, first you create the pool using /dev/sdN names, then you export it and import it again, telling zpool to take its naming from /dev/disk/by-id. (I'm keeping the coloring throughout this document to remind me of which disk set is the one in question.)

The last step in setting up a zfs pool is to fix up the naming so that when a drive goes bad, it's not too hard to correlate the observation that a drive is bad with the physical piece of equipment that is the drive.

root@tol-eressea:~# zpool export aquilonde
root@tol-eressea:~# zpool import -d /dev/disk/by-id aquilonde
root@tol-eressea:~# zpool status
  pool: aquilonde
 state: ONLINE
  scan: none requested
config:

	NAME                                          STATE     READ WRITE CKSUM
	aquilonde                                     ONLINE       0     0     0
	  mirror-0                                    ONLINE       0     0     0
	    ata-WDC_WD20EARX-00PASB0_WD-WCAZAJ004702  ONLINE       0     0     0
	    ata-WDC_WD20EARX-00PASB0_WD-WCAZAJ069805  ONLINE       0     0     0
	  mirror-1                                    ONLINE       0     0     0
	    ata-ST4000DM000-1F2168_S300MZ7G           ONLINE       0     0     0
	    ata-ST4000DM000-1F2168_S300ELBZ           ONLINE       0     0     0

errors: No known data errors

The last practical step is to instantiate the pool in the host's filesystem. Check out the command, the result and the diskspace now available to Plex. This is mirrored space!

root@tol-eressea:~# zfs set mountpoint=/plex-server aquilonde
root@tol-eressea:~# ll /plex-server
total 13
drwxr-xr-x  2 root root    2 Jul 27 08:48 ./
drwxr-xr-x 23 root root 4096 Jul 27 09:00 ../
root@tol-eressea:/# /home/russ/diskspace.sh
Filesystem on disk: / (device: /dev/mapper/tol--eressea--vg-root)
   Total disk size: 278Gb
        Used space: 1.9Gb (1%)
        Free space: 262Gb
Filesystem on disk: /sys/fs/cgroup (device: none)
   Total disk size: 4.0Kb
        Used space: 0b (0%)
        Free space: 4.0Kb
Filesystem on disk: /dev (device: udev)
   Total disk size: 7.7Gb
        Used space: 4.0Kb (1%)
        Free space: 7.7Gb
Filesystem on disk: /run (device: tmpfs)
   Total disk size: 1.6Gb
        Used space: 672Kb (1%)
        Free space: 1.6Gb
Filesystem on disk: /sys/fs/cgroup (device: none)
   Total disk size: 4.0Kb
        Used space: 0b (0%)
        Free space: 4.0Kb
Filesystem on disk: /sys/fs/cgroup (device: none)
   Total disk size: 4.0Kb
        Used space: 0b (0%)
        Free space: 4.0Kb
Filesystem on disk: /sys/fs/cgroup (device: none)
   Total disk size: 4.0Kb
        Used space: 0b (0%)
        Free space: 4.0Kb
Filesystem on disk: /boot (device: /dev/sdc1)
   Total disk size: 236Mb
        Used space: 66Mb (30%)
        Free space: 158Mb
Filesystem on disk: /plex-server (device: aquilonde)
   Total disk size: 5.4Tb
        Used space: 128Kb (1%)
        Free space: 5.4Tb

I need to test out the mirroring, but so far, it's the sweet smell of success.

Changing direction a bit...

Despite the extreme convenience of having both mirrors united together as one volume in my host filesystem, it occurs to me that copying a movie to that volume would result in zfs striping it across both disks (and their mirrors) robbing my disks of their autonomy. So, I'm destroying the pool and making two pools again, the larger for television episodes and the smaller for movies.

I choose the bigger volume for television series because they seem to take up more space more quickly. So, it's still brown for the smaller disks and, therefore, movies, while green denotes television shows.

root@tol-eressea:/# zpool destroy aquilonde
root@tol-eressea:/# zpool status
no pools available
root@tol-eressea:/# zpool create movies mirror /dev/sda /dev/sdb
root@tol-eressea:/# zpool create television mirror /dev/sdd /dev/sde
root@tol-eressea:/# zpool status
  pool: movies
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	movies      ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda     ONLINE       0     0     0
	    sdb     ONLINE       0     0     0

errors: No known data errors

  pool: television
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	television  ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sdd     ONLINE       0     0     0
	    sde     ONLINE       0     0     0

errors: No known data errors
root@tol-eressea:/# zpool export movies
root@tol-eressea:/# zpool import -d /dev/disk/by-id movies
root@tol-eressea:/# zpool export television
root@tol-eressea:/# zpool import -d /dev/disk/by-id television
root@tol-eressea:/# zpool status
  pool: movies
 state: ONLINE
  scan: none requested
config:

	NAME                                          STATE     READ WRITE CKSUM
	movies                                        ONLINE       0     0     0
	  mirror-0                                    ONLINE       0     0     0
	    ata-WDC_WD20EARX-00PASB0_WD-WCAZAJ004702  ONLINE       0     0     0
	    ata-WDC_WD20EARX-00PASB0_WD-WCAZAJ069805  ONLINE       0     0     0

errors: No known data errors

  pool: television
 state: ONLINE
  scan: none requested
config:

	NAME                                 STATE     READ WRITE CKSUM
	television                           ONLINE       0     0     0
	  mirror-0                           ONLINE       0     0     0
	    ata-ST4000DM000-1F2168_S300MZ7G  ONLINE       0     0     0
	    ata-ST4000DM000-1F2168_S300ELBZ  ONLINE       0     0     0

errors: No known data errors
root@tol-eressea:/# zfs set mountpoint=/plex-movies movies
root@tol-eressea:/# zfs set mountpoint=/plex-television television
root@tol-eressea:/# ll /plex-movies
total 13
drwxr-xr-x  2 root root    2 Jul 30 20:55 ./
drwxr-xr-x 25 root root 4096 Jul 30 20:57 ../
root@tol-eressea:/# ll /plex-television
total 13
drwxr-xr-x  2 root root    2 Jul 30 20:55 ./
drwxr-xr-x 25 root root 4096 Jul 30 20:57 ../
root@tol-eressea:/plex-movies# /home/russ/diskspace.sh
.
.
.
Filesystem on disk: /plex-movies (device: movies)
   Total disk size: 1.8Tb
        Used space: 256Kb (1%)
        Free space: 1.8Tb
Filesystem on disk: /plex-television (device: television)
   Total disk size: 3.6Tb
        Used space: 256Kb (1%)
        Free space: 3.6Tb




Appendices

Disk table

Here's the map I made of the disks after system installation and before zfs. The order of these is logical and also the order you see, top to bottom, in the photo above.

Disk                                               Device     Volume  Size     Use/Comments
Seagate 320 Gb, ATA ST3320613AS (scsi)             /dev/sdc1  boot    255 Mb   GRUB (still has some MSDOS thing on it? harmless? still investigating)
                                                   /dev/sdc2  swap    16.8 Gb  swap space
                                                   /dev/sdc5  boot    303 Gb   /
Western Digital 2 Tb, ATA WDC WD20EARX-00P (scsi)  /dev/sda   X       1.8 Tb   X
Western Digital 2 Tb, ATA WDC WD20EARX-00P (scsi)  /dev/sdb   X       1.8 Tb   X (still has VMware ESXi on it—will be removed by zfs)
Seagate 4 Tb, ST4000DM000 (scsi)                   /dev/sdd   X       3.6 Tb   X
Seagate 4 Tb, ST4000DM000 (scsi)                   /dev/sde   X       3.6 Tb   X

Disk-related status

Now, before I used zfs, yet with all the disks connected and turning, here was the output of a number of commands. There are errors here associated with the two brand-new 4 Tb disks that have never had anything on them.

$ fdisk -lu
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000d048e

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1  3907029167  1953514583+  ee  GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00018ad7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1  3907029167  1953514583+  ee  GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/sdc: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000146f0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *        2048      499711      248832   83  Linux
/dev/sdc2          501758   625141759   312320001    5  Extended
/dev/sdc5          501760   625141759   312320000   8e  Linux LVM

Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000


Disk /dev/sde: 4000.8 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/tol--eressea--vg-root: 302.9 GB, 302946189312 bytes
255 heads, 63 sectors/track, 36831 cylinders, total 591691776 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/tol--eressea--vg-swap_1: 16.8 GB, 16848519168 bytes
255 heads, 63 sectors/track, 2048 cylinders, total 32907264 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

$ blkid
/dev/sda1: UUID="195fdacc-4891-4250-a901-241b655387b1" TYPE="ext2"
/dev/sda5: UUID="5JtzN6-u6QS-Nju1-uDs5-N1Ml-WGIi-PJgOcX" TYPE="LVM2_member"
/dev/mapper/tol--eressea--vg-root: UUID="359c69e9-fe56-4403-94e0-0ef569518263" TYPE="ext4"
/dev/mapper/tol--eressea--vg-swap_1: UUID="87631cc8-248e-424b-a9dd-d93ee2ec74ca" TYPE="swap"


$ parted -l
Model: ATA WDC WD20EARX-00P (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Model: ATA WDC WD20EARX-00P (scsi)
Disk /dev/sdb: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Model: ATA ST3320613AS (scsi)
Disk /dev/sdc: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type      File system  Flags
 1      1049kB  256MB  255MB  primary   ext2         boot
 2      257MB   320GB  320GB  extended
 5      257MB   320GB  320GB  logical                lvm

Error: /dev/sdd: unrecognised disk label

Error: /dev/sde: unrecognised disk label

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/tol--eressea--vg-swap_1: 16.8GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system     Flags
 1      0.00B  16.8GB  16.8GB  linux-swap(v1)


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/tol--eressea--vg-root: 303GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End    Size   File system  Flags
 1      0.00B  303GB  303GB  ext4


$ gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): DDFA998F-8F8A-45F7-8BAA-A9146B1FB61C
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 3907029101 sectors (1.8 TiB)

$ gdisk -l /dev/sdb
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 8B3E3422-A0CE-4293-9EE2-75C0FA58C285
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 3907029101 sectors (1.8 TiB)

$ gdisk -l /dev/sdc
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: not present


***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format
in memory.
***************************************************************

Disk /dev/sdc: 625142448 sectors, 298.1 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): C52FC97E-3717-481A-B815-70D4446C2F65
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 625142414
Partitions will be aligned on 2048-sector boundaries
Total free space is 4717 sectors (2.3 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          499711   243.0 MiB   8300  Linux filesystem
   5          501760       625141759   297.9 GiB   8E00  Linux LVM

$ gdisk -l /dev/sdd
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries.
Disk /dev/sdd: 7814037168 sectors, 3.6 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 26A0FEB1-9BC7-420F-9763-E85084AC2E25
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7814037134
Partitions will be aligned on 2048-sector boundaries
Total free space is 7814037101 sectors (3.6 TiB)

$ gdisk -l /dev/sde
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries.
Disk /dev/sde: 7814037168 sectors, 3.6 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 14288C92-D349-42A2-B789-3659F8888C96
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7814037134
Partitions will be aligned on 2048-sector boundaries
Total free space is 7814037101 sectors (3.6 TiB)