Disk Replication with DRBD

From Alpine Linux
Revision as of 04:50, 6 September 2023

This material is obsolete ...

drbd is no longer available for any supported versions of Alpine Linux

This tutorial shows how to configure a Distributed Replicated Block Device (DRBD) device on Alpine Linux. It assumes you are familiar with what DRBD is and why you would want to use it. If this is not the case, see the DRBD home page.

You should also be familiar with creating disk partitions, file systems, and logical volumes using command-line tools.

Unless otherwise noted, all procedures will be carried out on both hosts.

About the Configuration

The examples in this tutorial use two hosts: alpine1.domain and alpine2.domain. Both are virtual machines. Because the VMs need to communicate with each other, it's important to use a bridged adapter or internal network rather than Network Address Translation (NAT).

Virtual Hardware

A single CPU with 512M RAM is sufficient. The example uses a single disk, 8G in size.

Details of virtual machine hypervisor configuration vary and are not in scope for this document.

Alpine O.S.

The hosts are sys installs onto /dev/sda. The ROOT_SIZE environment variable was used to constrain the partition size to 4G (e.g. export ROOT_SIZE=4096 ; setup-alpine) and leave some free space on the disk. The free space will be used to create a logical volume for DRBD.

Creating a Device for DRBD

A DRBD device needs a block device at its foundation. In this tutorial, I'm using a logical volume (/dev/vg0/drbd0). You can also use a partition on a regular disk such as /dev/sda4 if you want.

The process to create the device requires you to perform these steps:

  1. Install necessary packages
  2. Create a partition for LVM
  3. Create the physical volume, volume group, and logical volume
  4. Configure LVM to start when the system boots.

The commands you'll run to install packages and create the partition look like this:

 apk add cfdisk lvm2
 cfdisk /dev/sda

You'll need to create a partition in the free space area of the drive, change the type to Linux LVM, write, and quit.

Once that's done, you can go ahead with the logical volume. Assuming /dev/sda4 is the partition for LVM, the commands will look like this:

 pvcreate /dev/sda4
 vgcreate vg0 /dev/sda4
 lvcreate -n drbd0 -L 1G vg0
 rc-update add lvm boot

The logical volume /dev/vg0/drbd0 should be ready for use. Run ls /dev/vg0/drbd0 to verify before moving on.

DRBD Packages and Config Files

Before the DRBD device can be created, you'll need to get some things set up.

  1. Install necessary packages
  2. Gather network details for your hosts
  3. Create config files based on network and logical disk details

First, install the packages.

 apk add drbd lsblk

Next, you'll need the hostname and IP address for both hosts.

 uname -n
 ifconfig eth0

Finally, you can create the DRBD resource configuration file. In this example it's called drbd0.res and it resides in the /etc/drbd.d/ directory. This file needs to be created on both hosts.

The contents will look similar to this example:

Contents of /etc/drbd.d/drbd0.res

 resource drbd0 {
   device minor 0;
   disk /dev/vg0/drbd0;
   meta-disk internal;
   protocol C;
   on alpine1.domain {
     address 192.168.0.11:7789;
   }
   on alpine2.domain {
     address 192.168.0.12:7789;
   }
 }

The line disk /dev/vg0/drbd0; should reflect the name of the device you're using. If you're not using a logical volume, it might be something like disk /dev/sda4;

The on alpine1.domain { and on alpine2.domain { lines should reflect the names of your hosts. Each must match the output of uname -n exactly.

The address 192.168.0.11:7789; line should reflect the IP address of host alpine1, and the address 192.168.0.12:7789; line should reflect host alpine2. The port numbers do not need to be changed.

The DRBD device

Now that the config files are in place, drbdadm is used to create the DRBD device and make it available for use. It only takes two commands.

 drbdadm create-md drbd0
 drbdadm up drbd0

You can use lsblk and drbdadm status to verify your success.

From the lsblk command, you should see something like the following example:

 # lsblk
 NAME          MAJ:MIN RM    SIZE RO TYPE MOUNTPOINTS
 sda             8:0    0      8G  0 disk
 ├─sda1          8:1    0    100M  0 part /boot/efi
 ├─sda2          8:2    0    920M  0 part [SWAP]
 ├─sda3          8:3    0      4G  0 part /
 └─sda4          8:4    0      3G  0 part
   └─vg0-drbd0 253:0    0      1G  0 lvm
     └─drbd0   147:0    0 1023.9M  0 disk

The output of drbdadm status will look similar to what's shown below. 'Inconsistent' and 'Connecting' are not a problem at this point.

 # drbdadm status
 drbd0 role:Primary
   disk:Inconsistent
   peer role:Secondary
     replication:Established peer-disk:Connecting

Primary and Secondary Nodes

In this example, we'll designate the alpine1 host as the primary node and alpine2 as the secondary.

The following command should be run from the primary node (alpine1 host) only.

 alpine1:~# drbdadm primary --force drbd0

Now, on the secondary host (alpine2) run this:

 alpine2:~# drbdadm secondary drbd0

Check the status again with drbdadm status and eventually it will show UpToDate for both disk and replication status.
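If you'd rather script the wait than re-run the command by hand, a small polling loop works. This is only a sketch: wait_in_sync is a hypothetical helper, the DRBDADM variable exists solely so the command can be swapped out or stubbed, and the peer-disk:UpToDate token matches the status output format shown in this article.

```shell
# Poll `drbdadm status` until the peer's disk reports UpToDate.
# Checks every 5 seconds, giving up after 120 attempts (10 minutes).
wait_in_sync() {
    res="$1"
    tries=0
    while [ "$tries" -lt 120 ]; do
        if ${DRBDADM:-drbdadm} status "$res" | grep -q 'peer-disk:UpToDate'; then
            echo "$res is in sync"
            return 0
        fi
        sleep 5
        tries=$((tries + 1))
    done
    echo "$res is still syncing; giving up" >&2
    return 1
}
```

Run wait_in_sync drbd0 on the primary once both nodes are up; the initial sync of a 1G volume should finish well within the timeout on a local network.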

Using the Device

Now that everything is set up, you can use /dev/drbd0 just like any other disk partition. Create a file system, mount it, copy files, etc. From here on out, all commands should be executed on the primary node.

Here are some examples:

 alpine1:~# mkfs.ext4 /dev/drbd0
 alpine1:~# e2fsck /dev/drbd0
 alpine1:~# mount -t ext4 /dev/drbd0 /mnt

None of these commands will work on the secondary node. That is by design of the DRBD system. If you try to use the device, you'll see an error like the one below.

 alpine2:~# e2fsck /dev/drbd0
 e2fsck 1.46.4 (18-Aug-2021)
 e2fsck: Read-only file system while trying to open /dev/drbd0
 Disk write-protected; use the -n option to do a read-only
 check of the device.

Again, this is by design. While it is possible to configure a dual primary system, it is beyond the scope of this document. See https://kb.linbit.com/drbd-9-and-dual-primary for more information.

When Things Go Wrong

The whole reason for using DRBD is to protect your data in case the primary node fails. So it's a good idea to look at that situation and how you can recover from it.

Here's the procedure in brief:

  1. Shut down alpine1 to simulate primary node failure.
  2. Promote alpine2 to primary
  3. Mount drbd0's file system on alpine2.

But before that, create a test file under /mnt on alpine1. Anything is fine; it's just there to verify that the process works.

Then, when alpine1 is powered off, run the drbdadm command to make alpine2 the primary:

 alpine2:~# drbdadm primary drbd0

Check the status. You should see UpToDate for the disk, but Connecting for the peer. This is expected, because alpine1 is powered off.

 alpine2:~# drbdadm status
 drbd0 role:Primary
   disk:UpToDate
   peer connection:Connecting

The real proof is in the file system and the test file you created. To verify, mount alpine2's /dev/drbd0 on /mnt and see what it contains.

 alpine2:~# e2fsck /dev/drbd0
 e2fsck 1.46.4 (18-Aug-2021)
 /dev/drbd0: clean, 12/65536 files, 8859/262127 blocks
 alpine2:~# mount -t ext4 /dev/drbd0 /mnt
 alpine2:~# ls /mnt
 lost+found  test.txt

Now that alpine2 is primary, alpine1 must remain powered off. If you want to bring alpine1 back up, reconfigure alpine2 as secondary first.

 alpine2:~# drbdadm secondary drbd0

See the LinBit DRBD web site for more information.

Important Reminder

DRBD (or any other replicated storage) is not a substitute for a good backup and recovery plan. If your data is important, be sure to back it up!