ISCSI Raid and Clustered File Systems

This material is obsolete: SCST is deprecated since Alpine 2.6 in favor of TCM (Linux iSCSI Target), and OCFS2 isn't available in Alpine 3.6.

This document describes how to create a RAID-backed filesystem that is exported to multiple hosts via iSCSI.

RAID Configuration

This is very similar to Setting up a software RAID array.

apk add mdadm
mdadm --create --level=5 --raid-devices=3 /dev/md0 /dev/hda /dev/hdb /dev/hdc

To see the status of the array creation:

cat /proc/mdstat

You don't have to wait for the array to finish building before using the disk.
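For a more detailed view of the array while it builds, mdadm can report the state of each member device:

mdadm --detail /dev/md0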

iSCSI Target Config

SCST is recommended over IET due to bug fixes, performance, and RFC compliance. iSCSI Target and Initiator Configuration has a detailed description of setting up the target; just make sure to use /dev/md0 as your device.
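As a rough illustration, an SCST target exporting the array might be defined along these lines. This is a sketch assuming the scstadmin-style /etc/scst.conf format; disk0 is an arbitrary device name and the IQN is a placeholder:

# /etc/scst.conf (sketch): export /dev/md0 as LUN 0 of one iSCSI target
HANDLER vdisk_blockio {
        DEVICE disk0 {
                filename /dev/md0
        }
}

TARGET_DRIVER iscsi {
        enabled 1
        TARGET iqn.2006-01.com.example:disk2.vol1 {
                LUN 0 disk0
                enabled 1
        }
}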

Initiator Config

iscsiadm --mode node --targetname NAME_OF_TARGET --portal IP_OF_TARGET --login

This should then give you a device such as /dev/sda; check dmesg to confirm.
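If the target name isn't known in advance, open-iscsi can discover what a portal exports before logging in:

iscsiadm --mode discovery --type sendtargets --portal IP_OF_TARGET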

fdisk /dev/sda

Create a partition to use: sda1, partition type 83 (Linux).
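A sketch of the interactive fdisk session (prompts vary a little between fdisk versions):

n      # new partition
p      # primary
1      # partition number 1
       # accept the default first and last sectors
t      # change the partition type
83     # Linux
w      # write the table and exit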

Add the OCFS2 tools (available in Alpine 2.3 or later):

apk add ocfs2-tools

These tools take care of starting and stopping the cluster services, creating the filesystem, and mounting it; the cluster.conf itself has to be copied between nodes by hand (see below).

You need to create /etc/ocfs2/cluster.conf.

This configuration file must be identical on all nodes in the cluster. It should look similar to the following; stanza names start in column one and the attribute lines must be indented:

node:
        ip_port = 7777
        ip_address = 192.168.1.202
        number = 0
        name = bubba
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.102
        number = 1
        name = bobo
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
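Since the file must be identical everywhere, copy it to the other node. A sketch assuming root SSH access and the hostnames from the example above:

ssh root@bobo mkdir -p /etc/ocfs2
scp /etc/ocfs2/cluster.conf root@bobo:/etc/ocfs2/cluster.conf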

Load modules:

echo ocfs2 >> /etc/modules-load.d/ocfs2.conf
echo dlm >> /etc/modules-load.d/dlm.conf
modprobe ocfs2
modprobe dlm
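To confirm both modules loaded:

lsmod | grep -E 'ocfs2|dlm'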

Mount the OCFS2 meta-filesystems:

echo "none /sys/kernel/config configfs defaults 0 0" >> /etc/fstab
echo "none /sys/kernel/dlm ocfs2_dlmfs defaults 0 0" >> /etc/fstab
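The fstab entries only take effect at boot; to mount both filesystems immediately:

mount -a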

Start the OCFS2 cluster:

rc-service o2cb start
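To have o2cb start automatically at boot (standard OpenRC usage):

rc-update add o2cb default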

Run the following command only on one node.

mkfs.ocfs2 -L LABELNAME /dev/sda1
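mkfs.ocfs2 fixes the number of node slots at format time; for the two-node cluster above you can set it explicitly with the standard -N option:

mkfs.ocfs2 -L LABELNAME -N 2 /dev/sda1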

Run the following commands on both nodes.

rc-service o2cb enable
mount /dev/sda1 /media/iscsi1
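The mount point must exist on each node first, and with a mount that understands LABEL= (util-linux does) you can also mount by the label given at mkfs time:

mkdir -p /media/iscsi1
mount LABEL=LABELNAME /media/iscsi1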

Both machines can now create, read, write, and change files on the same drive at the same time.
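A quick sanity check, using the hostnames from the example cluster.conf:

# on bubba
echo hello > /media/iscsi1/test.txt

# on bobo
cat /media/iscsi1/test.txt   # should print "hello"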