ISCSI Raid and Clustered File Systems

This material is obsolete: SCST is deprecated since Alpine 2.6 in favor of TCM, and OCFS2 is not available in Alpine 3.6.

This document describes how to create a RAID-backed filesystem that is exported to multiple hosts via iSCSI.

RAID Configuration

Very similar to Setting up a software RAID array.

apk add mdadm
mdadm --create --level=5 --raid-devices=3 /dev/md0 /dev/hda /dev/hdb /dev/hdc

To see the status of the array creation:

cat /proc/mdstat

You don't have to wait for the sync to finish before continuing to use the disk.
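If you want the array assembled automatically at boot, it is common practice to record it in mdadm's configuration file. A minimal sketch, assuming the default /etc/mdadm.conf location:

mdadm --detail --scan >> /etc/mdadm.conf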

iSCSI Target Config

SCST is recommended over IET due to bugfixes, performance, and RFC compliance. For a detailed configuration how-to, see High_performance_SCST_iSCSI_Target_on_Linux_software_Raid.
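As a rough illustration only (treat the linked page as the authoritative reference), an /etc/scst.conf exporting the RAID device created above might look something like the following; the IQN and the device name disk01 are placeholders:

# export /dev/md0 as a single iSCSI LUN
HANDLER vdisk_blockio {
        DEVICE disk01 {
                filename /dev/md0
        }
}

TARGET_DRIVER iscsi {
        enabled 1

        TARGET iqn.2017-06.example.com:raid {
                LUN 0 disk01
                enabled 1
        }
}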

Initiator Config
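Before logging in, the initiator tools have to be installed and the target discovered. A minimal sketch, assuming the initiator is also an Alpine box using open-iscsi (IP_OF_TARGET is a placeholder):

apk add open-iscsi
/etc/init.d/iscsid start
iscsiadm --mode discovery --type sendtargets --portal IP_OF_TARGET

Then log in to the discovered target: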

iscsiadm --mode node --targetname NAME_OF_TARGET --portal IP_OF_TARGET --login

This should then give you a device such as /dev/sda. Check with dmesg.

fdisk /dev/sda

Create a partition to use, e.g. /dev/sda1 with partition type 83 (Linux).
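If you prefer to script this step, the same single type-83 partition can be created non-interactively, for example with sfdisk (a sketch; adapt it to your disk layout):

echo ',,83' | sfdisk /dev/sda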

Add ocfs2 tools (available in Alpine 2.3 or greater)

apk add ocfs2-tools

The OCFS2 tools can take care of starting and stopping services, copying the cluster.conf between nodes, creating the filesystem, and mounting it.

You need to create /etc/ocfs2/cluster.conf.

This configuration file should be the same on all nodes in the cluster. It should look similar to the following:

node:
        ip_port = 7777
        ip_address = 192.168.1.202
        number = 0
        name = bubba
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.102
        number = 1
        name = bobo
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
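
One simple way to get an identical copy onto the second node, assuming the hostnames from the example above:

ssh bobo mkdir -p /etc/ocfs2
scp /etc/ocfs2/cluster.conf bobo:/etc/ocfs2/cluster.conf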

Load modules:

echo ocfs2 >> /etc/modules
echo dlm >> /etc/modules
modprobe ocfs2
modprobe dlm

Mount the OCFS2 meta-filesystems:

echo none /sys/kernel/config configfs defaults 0 0 >> /etc/fstab
echo none /sys/kernel/dlm ocfs2_dlmfs defaults 0 0 >> /etc/fstab
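
These fstab entries only take effect at the next boot (or with mount -a). If the filesystems are not mounted yet, mounting them by hand should look roughly like this:

mount -t configfs none /sys/kernel/config
mount -t ocfs2_dlmfs none /sys/kernel/dlm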

Start the OCFS2 cluster:

/etc/init.d/o2cb start

Run the following command only on one node.

mkfs.ocfs2 -L LABELNAME /dev/sda1
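
mkfs.ocfs2 also accepts a -N option to set the number of node slots. For the two-node cluster described in cluster.conf above, something like this should work (LABELNAME is still a placeholder):

mkfs.ocfs2 -L LABELNAME -N 2 /dev/sda1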

Run the following commands on both nodes.

/etc/init.d/o2cb enable
mount /dev/sda1 /media/iscsi1
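
The mount point must exist on both nodes first (mkdir -p /media/iscsi1). If you also want the filesystem mounted at boot, an fstab entry marked _netdev is a common approach, so the mount waits for the network and the iSCSI session. A sketch, assuming your init scripts honour _netdev:

echo /dev/sda1 /media/iscsi1 ocfs2 _netdev 0 0 >> /etc/fstab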

Now you can create, read, write, and change files on the same drive from both machines at the same time.