Disk Replication with DRBD

From Alpine Linux
This material is obsolete: the drbd package is no longer available for any supported version of Alpine Linux.

This tutorial shows how to configure a Distributed Replicated Block Device (DRBD) device on Alpine Linux. It assumes you are familiar with what DRBD is and why you would want to use it. If this is not the case, see the DRBD home page.

You should also be familiar with creating disk partitions, file systems, and logical volumes using command-line tools.

Unless otherwise noted, all procedures will be carried out on both hosts.

About the Configuration

The examples in this tutorial use two hosts: alpine1.domain and alpine2.domain. Both are virtual machines. Because the VMs need to communicate with each other, it's important to use a bridged adapter or internal network rather than Network Address Translation (NAT).

Virtual Hardware

A single CPU with 512M of RAM is sufficient. The example uses a single 8G disk.

Details of virtual machine hypervisor configuration vary and are not in scope for this document.

Alpine O.S.

The hosts are standard sys-mode installs onto /dev/sda. The ROOT_SIZE environment variable was used to constrain the root partition size to 4G (e.g. export ROOT_SIZE=4096 ; setup-alpine) and leave some free space on the disk. The free space will be used to create a logical volume for DRBD.

Creating a Device for DRBD

A DRBD device needs a block device at its foundation. In this tutorial, I'm using a logical volume (/dev/vg0/drbd0). You can also use a partition on a regular disk, such as /dev/sda4, if you want.

The process to create the device requires you to perform these steps:

  1. Install necessary packages
  2. Create a partition for LVM
  3. Create the physical volume, volume group, and logical volume
  4. Configure LVM to start when the system boots

The commands you'll run to install packages and create the partition look like this:

apk add cfdisk lvm2
cfdisk /dev/sda

You'll need to create a partition in the free space area of the drive, change the type to Linux LVM, write, and quit.

Once that's done, you can go ahead with the logical volume. Assuming /dev/sda4 is the partition for LVM, the commands will look like this:

pvcreate /dev/sda4
vgcreate vg0 /dev/sda4
lvcreate -n drbd0 -L 1G vg0
rc-update add lvm boot

The logical volume /dev/vg0/drbd0 should be ready for use. Run ls /dev/vg0/drbd0 to verify before moving on.
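If you want a slightly more explicit check than ls, a short sketch like the following confirms the path is actually a block device (the path matches the volume created above):

```shell
#!/bin/sh
# Sketch: confirm the logical volume exists as a block device.
# /dev/vg0/drbd0 is the volume created in the steps above.
check_lv() {
    [ -b "$1" ]    # succeeds only for block devices
}

if check_lv /dev/vg0/drbd0; then
    echo "logical volume ready"
else
    echo "logical volume missing; re-check the lvcreate step" >&2
fi
```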

DRBD Packages and Config Files

Before the DRBD device can be created, you'll need to get some things set up.

  1. Install necessary packages
  2. Gather network details for your hosts
  3. Create config files based on network and logical disk details

First, install the packages.

apk add drbd lsblk

Next, you'll need the hostname and IP address for both hosts.

uname -n
ifconfig eth0

Finally, you can create the DRBD resource configuration file. In this example it's called drbd0.res and it resides in the /etc/drbd.d/ directory. This file needs to be created on both hosts.

The contents will look similar to this example:

Contents of /etc/drbd.d/drbd0.res

 resource drbd0 {
     device minor 0;
     disk /dev/vg0/drbd0;
     meta-disk internal;
     protocol C;
     on alpine1.domain {
         address 192.168.0.11:7789;
     }
     on alpine2.domain {
         address 192.168.0.12:7789;
     }
 }

The line disk /dev/vg0/drbd0; should reflect the name of the device you're using. If you're not using a logical volume, it might be something like disk /dev/sda4;

The on alpine1.domain { lines should reflect the names of your hosts. It should match the output of uname -n exactly.

The address 192.168.0.11:7789; line should reflect the IP address of host alpine1. address 192.168.0.12:7789; should reflect host alpine2. The port numbers do not need to be changed.
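Since the same file has to exist on both hosts, you can generate it from a heredoc instead of typing it twice. This is a sketch; the hostnames, addresses, and disk below are this tutorial's example values and must be replaced with your own.

```shell
#!/bin/sh
# Sketch: generate drbd0.res from a heredoc instead of typing it
# on each host. Hostnames, addresses, and the disk path are the
# tutorial's example values; substitute your own.
gen_res() {
    # $1 = output path for the resource file
    cat > "$1" <<EOF
resource drbd0 {
    device minor 0;
    disk /dev/vg0/drbd0;
    meta-disk internal;
    protocol C;
    on alpine1.domain {
        address 192.168.0.11:7789;
    }
    on alpine2.domain {
        address 192.168.0.12:7789;
    }
}
EOF
}

# Write the file only if the DRBD config directory exists.
if [ -d /etc/drbd.d ]; then
    gen_res /etc/drbd.d/drbd0.res
fi
```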

The DRBD device

Now that the config files are in place, drbdadm is used to create the DRBD device and make it available for use. It only takes two commands.

drbdadm create-md drbd0
drbdadm up drbd0

You can use lsblk and drbdadm status to verify your success.

From the lsblk command, you should see something like the following example:

 # lsblk
 NAME          MAJ:MIN RM    SIZE RO TYPE MOUNTPOINTS
 sda             8:0    0      8G  0 disk
 ├─sda1          8:1    0    100M  0 part /boot/efi
 ├─sda2          8:2    0    920M  0 part [SWAP]
 ├─sda3          8:3    0      4G  0 part /
 └─sda4          8:4    0      3G  0 part
   └─vg0-drbd0 253:0    0      1G  0 lvm
     └─drbd0   147:0    0 1023.9M  0 disk

The output of drbdadm status will look similar to what's shown below. 'Inconsistent' and 'Connecting' are not a problem at this point.

 # drbdadm status
 drbd0 role:Primary
   disk:Inconsistent
   peer role:Secondary
     replication:Established peer-disk:Connecting

Primary and Secondary Nodes

In this example, we'll designate the alpine1 host as the primary node and alpine2 as the secondary.

The following command should be run from the primary node (alpine1 host) only.

 alpine1:~# drbdadm primary --force drbd0

Now, on the secondary host (alpine2) run this:

 alpine2:~# drbdadm secondary drbd0

Check the status again with drbdadm status; once the initial sync finishes, it will show UpToDate for both the local disk and the peer disk.
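If you are scripting the setup, you can poll drbdadm status until the initial sync completes. The following is a sketch; the resource name and the five-second poll interval are arbitrary choices, not anything mandated by DRBD.

```shell
#!/bin/sh
# Sketch: block until `drbdadm status` reports the local disk UpToDate.
# The 5-second poll interval is an arbitrary choice.
wait_uptodate() {
    res="$1"
    while ! drbdadm status "$res" | grep -q 'disk:UpToDate'; do
        sleep 5
    done
    echo "$res is UpToDate"
}

# Example (run on either node once the resource is up):
# wait_uptodate drbd0
```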

Using the Device

Now that everything is set up, you can use /dev/drbd0 just like any other disk partition. Create a file system, mount it, copy files, etc. From here on out, all commands should be executed on the primary node.

Here are some examples:

 alpine1:~# mkfs.ext4 /dev/drbd0
 alpine1:~# e2fsck /dev/drbd0
 alpine1:~# mount -t ext4 /dev/drbd0 /mnt

None of these commands will work on the secondary node. That is by design of the DRBD system. If you try to use the device, you'll see an error like the one below.

 alpine2:~# e2fsck /dev/drbd0
 e2fsck 1.46.4 (18-Aug-2021)
 e2fsck: Read-only file system while trying to open /dev/drbd0
 Disk write-protected; use the -n option to do a read-only
 check of the device.

Again, this is by design. While it is possible to configure a dual primary system, it is beyond the scope of this document. See https://kb.linbit.com/drbd-9-and-dual-primary for more information.

When Things Go Wrong

The whole reason for using DRBD is to protect your data in case the primary node fails. So it's a good idea to look at that situation and how you can recover from it.

Here's the procedure in brief:

  1. Shut down alpine1 to simulate primary node failure.
  2. Promote alpine2 to primary.
  3. Mount drbd0's file system on alpine2.

But before that, create a test file under /mnt on alpine1. Any file will do; it's just to verify that the data was replicated.

Then, when alpine1 is powered off, run the drbdadm command to make alpine2 the primary:

 alpine2:~# drbdadm primary drbd0

Check the status. You should see UpToDate for the disk, but Connecting for the peer. This is expected, because alpine1 is powered off.

 alpine2:~# drbdadm status
 drbd0 role:Primary
   disk:UpToDate
   peer connection:Connecting

The real proof is in the file system and the test file you created. To verify, mount alpine2's /dev/drbd0 on /mnt and see what it contains.

 alpine2:~# e2fsck /dev/drbd0
 e2fsck 1.46.4 (18-Aug-2021)
 /dev/drbd0: clean, 12/65536 files, 8859/262127 blocks
 alpine2:~# mount -t ext4 /dev/drbd0 /mnt
 alpine2:~# ls /mnt
 lost+found  test.txt
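The manual failover steps above can be collected into a small helper. This is a sketch; the resource, device, and mount point are the tutorial's example names, and it assumes the failed peer is already down.

```shell
#!/bin/sh
# Sketch: promote this node to primary and mount the replicated
# file system. Names are the tutorial's examples, not defaults.
failover() {
    res="$1" dev="$2" mnt="$3"
    drbdadm primary "$res" || return 1
    mount -t ext4 "$dev" "$mnt" || return 1
    echo "$res promoted; $dev mounted on $mnt"
}

# Example (on the surviving node):
# failover drbd0 /dev/drbd0 /mnt
```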

Now that alpine2 is primary, alpine1 must remain powered off. If you want to bring alpine1 back up, reconfigure alpine2 as secondary first.

 alpine2:~# drbdadm secondary drbd0

See the LinBit DRBD web site for more information.

Important Reminder

DRBD (or any other replicated storage) is not a substitute for a good backup and recovery plan. If your data is important, be sure to back it up!