Resizing a Live RAID 0 Array

Ahh.. we need more space!!

It has been a solid week since I had to do a live resize of a RAID 0 array on two HP DL360 Gen8 boxes (with a Smart Array P420i controller) which we use to run graphite. To say the least, it was a pain in the ass trying to find information on how to do it.


During my time as a sysadmin I've never found myself in a position of needing to resize an active RAID array, and my initial thought was that it couldn't be all that bad. Which, it wasn't. The difficult part was getting all the information together to figure it out.



One of my first major hardware projects since starting at Gawker Media was to get 8 new app servers up and running (4 at each DC). This went by quite smoothly and they were provisioned in no time. It was time to get them into traffic, and alas, graphite was just not willing to deal with the additional storage requirements of adding 4 new boxes to each graphite server.


Graphite has somewhat of a torrid history here at Gawker, due in part to just how much data we collect, and there's a constant struggle to make sure it's operating as it should be. We tend to average about 450-600K metrics/sec, which is right at the point where Carbon needs some tweaking to really keep on top of it. We were anticipating an increase of ~100K metrics/sec, which looked to be at the cusp of what we can currently handle without any additional tweaking.

Hopefully, I'll be able to address the optimization of carbon in the next few weeks and get it to a point where it's more stable. Until then, I had to at least get it working so we could get this additional capacity live and make everyone's lives a bit happier :)



To put it simply, holy crap. HP does a terribly good job of making sure they don't really answer your questions when you're talking to their support folks, and unless you know the exact keywords you're somewhat up a creek without a paddle sifting through their documentation.


Initially, I figured I'd just hit up their live chat support and get some info on how to take care of it. I was lovingly instructed to do it via a Windows application I'd need to install. This was a big no-go since we don't have any Windows boxes in our infrastructure. I was told they had no other methods, so it was time to whip out the Boot Camp Win7 install I keep around just in case.


*insert some time figuring out that the HP Windows client sucks and requires me to have an app installed on the server itself called hpacucli.. which.. get this.. lets me do everything I need to do from the command line*

Yay! A CLI!

Wooo.. I have a CLI I can work with. This is the best ever! Well.. now I had something I could Google the hell out of (what did we do before Google? And please don't say Lycos).


After some awesome Google skills, the very first link I came across was an awesome cheat sheet for hpacucli.…

What can I say, but this took me down the appropriate path.. Here are the commands after I actually went through everything.. unfortunately, I didn't save my console output as I worked through it.


Let's launch the console and work in there instead of launching it for each command..

sudo hpacucli

Show the controller config so we can figure out which array we want to add drives to..

=> ctrl all show config

Smart Array P420i in Slot 0 (Embedded) (sn: 5001438020CF1EA0)

array A (Solid State SATA, Unused Space: 0 MB)

logicaldrive 1 (93.1 GB, RAID 0, OK)

physicaldrive 1I:1:1 (port 1I:box 1:bay 1, Solid State SATA, 100 GB, OK)

array B (Solid State SATA, Unused Space: 0 MB)

logicaldrive 2 (1.5 TB, RAID 0, OK)

physicaldrive 1I:1:2 (port 1I:box 1:bay 2, Solid State SATA, 400 GB, OK)

physicaldrive 1I:1:3 (port 1I:box 1:bay 3, Solid State SATA, 400 GB, OK)

physicaldrive 1I:1:4 (port 1I:box 1:bay 4, Solid State SATA, 400 GB, OK)

physicaldrive 2I:1:5 (port 2I:box 1:bay 5, Solid State SATA, 400 GB, OK)

I left the output of our controller config so you have an idea of what it's meant to look like after the fact. As you can tell, we needed to add the drives to Slot 0, Array B.

=> ctrl slot=0 array B add drives=allunassigned

The command above will add all unassigned drives on the controller in Slot 0 to Array B. This is handy if you need to add multiple drives at once.
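While the array is transforming you can keep an eye on progress without sitting in the interactive console. This is a sketch, not verbatim output: hpacucli also accepts its console commands one-shot from the shell, and `ld 2` here assumes the logicaldrive number from the config output above.

```shell
# Poll logical drive 2 once a minute while the expansion runs; the
# status line reports a percent-complete while the array transforms.
while true; do
    sudo hpacucli ctrl slot=0 ld 2 show | grep -i 'status'
    sleep 60
done
```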


So now you'll find that your logicaldrive will have a percentage next to it instead of an "OK" in the output from `ctrl all show config`. You're going to have to wait a bit for this part to finish. In my case, it took about 4-5 hours for the resilvering process to complete. Once you see that your logicaldrive is back to being OK, you can move on to the next part.. extending the logical drive!

=> ctrl slot=0 logicaldrive 2 modify size=max

Woo! (Yes, I say woo a lot).. we're ready for the easier part! And that's resizing the LVM volume to the new size..




Our /dev/sdb isn't configured with LVM in the OS..


Well, lucky for you! It seems scary to fix this, but it's not! Until you hit corrupt superblocks, and then it's just not funny and you may as well step away and grab some caffeine, as it'll help calm you down. Here's where our awesome story continues...


Downtime? Pfft.. wait.. nooo... downtime!

Well, looks like I had to bring graphite down for a short while to fix this.. which was not the plan from the beginning.. that's my fault for not looking into the OS side of this from the get-go. Much sadness.


Now, all is not lost! This shouldn't take long.. to summarize how this is done: we first stop the services that are writing to the partition. We have /dev/sdb1 mounted at /opt/graphite, so this meant I had to..

/opt/graphite/bin/ stop

stop statsd

service httpd stop

Then I waited.. and waited... until I had no more open files on the partition:

lsof /dev/sdb1
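If you'd rather not keep re-running lsof by hand, a tiny poll loop does the waiting for you (a sketch; the 10-second interval is arbitrary):

```shell
# Poll until no process has the given path open, then return.
# lsof exits non-zero once nothing holds the file, which ends the loop.
wait_for_no_open_files() {
    while lsof "$1" >/dev/null 2>&1; do
        echo "still busy: $1"
        sleep 10
    done
}

wait_for_no_open_files /dev/sdb1
```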

Once I was at 0 open files.. it was time to get cracking on this... essentially, you have to delete the partition and recreate it so that the newly extended space falls inside the partition. Once that's done, you just use resize2fs to grow the filesystem before mounting it again. Simple enough!

fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to

switch off the mode (command 'c') and change display units to

sectors (command 'u').

The warning above is an easy one to fix.. just type "c" then hit return/enter, then type "u" then hit return/enter.


Here's what my partition looked like beforehand ("p" will display your partitions):

Command (m for help): p

Disk /dev/sdb1: 800.1 GB, 800105854464 bytes

255 heads, 63 sectors/track, 97273 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 262144 bytes / 1048576 bytes

Alignment offset: 229888 bytes

Disk identifier: 0x0febf5c4
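A quick sanity check on that geometry line: cylinders times the 8225280 bytes-per-cylinder figure fdisk prints should land near the reported byte size (CHS geometry rounds down, so it won't match exactly):

```shell
# 97273 cylinders * 16065 sectors/cylinder * 512 bytes/sector
echo $(( 97273 * 16065 * 512 ))   # close to the 800105854464 bytes reported
```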

Now the 'scary' part.. Delete your partition!

Command (m for help): d

Partition number (1-4): 1

Woo.. partition deleted.. time to create a new one immediately afterwards.. and accept all the defaults

Command (m for help): n

Partition type:

p primary

e extended

Select (default p): p

Partition number (1-4, default 1): 1

Since I didn't save my output, and didn't want to mess with graphite by going through this process again, I can't show the remaining prompts. Just accept the default values when prompted for them.


Well.. now! You're ready to write your partition!

Command (m for help): w

Success! You wrote your new partition.. well, don't count your chickens just yet.. we need to make sure we run a filesystem check on the new partition...

fsck.ext4 -f /dev/sdb1

If all goes well.. you won't get this.. darn..

[root@graph.bfc /home/kgill]# fsck.ext4 -f /dev/sdb1

e2fsck 1.41.12 (17-May-2010)

fsck.ext4: Superblock invalid, trying backup blocks...

fsck.ext4: Bad magic number in super-block while trying to open /dev/sdb1

The superblock could not be read or does not describe a correct ext2

filesystem. If the device is valid and it really contains an ext2

filesystem (and not swap or ufs or something else), then the superblock

is corrupt, and you might try running e2fsck with an alternate superblock:

e2fsck -b 8193 <device>

Well, this sucks.. But hey! That's why we have superblock backups! Let's find them and try to restore from there... Let's look up the backup superblock locations with mke2fs (the -n flag makes it print what it would do without actually creating a filesystem)

[root@graph.bfc /home/kgill]# mke2fs -n /dev/sdb1

mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=64 blocks, Stripe width=256 blocks

97673216 inodes, 390678294 blocks

19533914 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

11923 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,

102400000, 214990848

The blocks listed at the bottom are where our superblock backups are stored.. looks like I'm in business... and once again.. let's not count our chickens before they hatch..


At this point I went through every superblock backup and realized.. shit.. this isn't working..

e2fsck -b 32768 /dev/sdb1
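One way to work through the whole list instead of typing each block number by hand (a sketch; the numbers are the backups `mke2fs -n` printed above, and -n keeps e2fsck read-only so nothing is modified while you hunt for a usable superblock):

```shell
# Try each backup superblock until one opens cleanly.
for sb in 32768 98304 163840 229376 294912 819200 884736 1605632; do
    if e2fsck -n -b "$sb" /dev/sdb1; then
        echo "usable backup superblock at block $sb"
        break
    fi
done
```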

Well.. shit. At this point, I hit up @jimbartus and we came up with the idea of recreating the original partition at its original block locations. Hopefully we could get it back to how it was, run a filesystem check, and see if we could rebuild everything.


At this point, I fdisk'd /dev/sdb, deleted the new partition, and recreated the original one (thank goodness I don't close console windows often!! Having the original partition boundaries in my scrollback was important after screwing up like this!).

Upon finally being back at the original partition, an fsck actually found problems and fixed them. This was gratifying to see, and I made a note to always run an fsck before making any partition changes in the future.


Well.. after the fsck finished.. I went through the process of deleting the partition and creating the new one again, and this time the fsck.ext4 -f /dev/sdb1 completed without a hiccup. Yay! Time to do the almost-last step!

resize2fs /dev/sdb1
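For the curious (or the cautious), this whole grow-then-resize sequence can be rehearsed on a throwaway file-backed image before touching a real device. The paths and sizes here are purely illustrative:

```shell
# Rehearse the resize on a file-backed image instead of /dev/sdb1.
truncate -s 64M /tmp/fs.img                 # stand-in "partition"
mke2fs -q -t ext4 -b 4096 -F /tmp/fs.img    # make an ext4 fs on it
truncate -s 128M /tmp/fs.img                # the "partition" just grew
e2fsck -f -p /tmp/fs.img                    # always fsck before resizing
resize2fs /tmp/fs.img                       # grow the fs to fill the space
dumpe2fs -h /tmp/fs.img | grep 'Block count'   # 32768 x 4k blocks = 128M
```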

Upon completion of the resize.. fdisk looked much better!

Disk /dev/sdb: 1600.2 GB, 1600219340800 bytes

255 heads, 63 sectors/track, 194548 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 262144 bytes / 1048576 bytes

Disk identifier: 0x00067968

Device Boot Start End Blocks Id System

/dev/sdb1 1 97274 781353373+ 83 Linux

Partition 1 does not start on physical sector boundary.

Mount it up! If you don't have your partition in fstab.. I suggest adding it!

mount -a

Woo.. progress! An `ls /opt/graphite` showed my data was still intact and I was ready to rumble!


Finally... home stretch.. ready to almost call it done..

service httpd start

/opt/graphite/bin/ start

start statsd

Wooo... all up and running!

At last! 1 hour later... we were back!

The positives...


The good news is.. I didn't have any issues with superblocks when I went to do the upgrade on the other graphite server!

I learned, the hard way, that an fsck should be done every time before you mess with a partition.



How do we look now?

It has been a few days and the app servers have been dumping data into graphite.. the purple peaks in the graphs below are from when the apps started sending to graphite.. SO MANY CREATES!!!

[Graphs: graphite metrics and creates per second; the purple peaks mark when the new app servers came online]

So.. as you can see.. we're doing about 600K metrics/sec now.. next up is figuring out how to get carbon/graphite to be more responsive!
