ISCSI Raid and Clustered File Systems
Latest revision as of 10:20, 17 November 2023
This material is obsolete: SCST is deprecated since Alpine 2.6 in favor of TCM, and OCFS2 isn't available in Alpine 3.6.
This document describes how to create a RAID-backed file system that is exported to multiple hosts via iSCSI.
RAID Configuration
This is very similar to setting up a software RAID array.
apk add mdadm
mdadm --create --level=5 --raid-devices=3 /dev/md0 /dev/hda /dev/hdb /dev/hdc
To see the status of the array build:
cat /proc/mdstat
You don't have to wait for the initial sync to finish before using the disk.
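If you want to keep an eye on the initial resync, the progress line in /proc/mdstat can be extracted with standard tools. The sketch below runs against a saved sample of that file (the numbers are made up for illustration); on a live system, point the grep at /proc/mdstat itself.

```shell
# Save a hypothetical snapshot of /proc/mdstat (illustrative numbers only);
# on a real system you would read /proc/mdstat directly.
cat <<'EOF' > /tmp/mdstat.sample
Personalities : [raid5]
md0 : active raid5 hdc[2] hdb[1] hda[0]
      1953260544 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      [==>..................]  resync = 12.6% (123456/976630272) finish=90.1min speed=15432K/sec
EOF

# Pull out just the resync progress figure:
grep -o 'resync = [0-9.]*%' /tmp/mdstat.sample
```

On a live array, `watch cat /proc/mdstat` shows the same information continuously.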
iSCSI Target Config
SCST (https://scst.sourceforge.net/target_iscsi.html) is recommended over IET (https://iscsitarget.sourceforge.net/) due to bug fixes, performance, and RFC compliance.
Initiator Config
iscsiadm --mode node --targetname NAME_OF_TARGET --portal IP_OF_TARGET --login
This should then give you a device, /dev/sda (check dmesg to confirm).
fdisk /dev/sda
Create a partition to use: sda1, with partition type 83 (Linux).
Add ocfs2 tools (available in Alpine 2.3 or greater)
apk add ocfs2-tools
These tools can take care of starting and stopping services, copying the cluster.conf between nodes, creating the filesystem, and mounting it.
Next, create /etc/ocfs2/cluster.conf. This configuration file must be identical on all nodes in the cluster and should look similar to the following:
node:
        ip_port = 7777
        ip_address = 192.168.1.202
        number = 0
        name = bubba
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.102
        number = 1
        name = bobo
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
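Because the file must be identical on every node, it is convenient to generate it from a here-doc and copy it around. A sketch, written to /tmp here for illustration (move it to /etc/ocfs2/cluster.conf on each node); note that the attribute lines must be indented under their stanza headers:

```shell
# Generate the cluster.conf shown above (attributes indented under each stanza).
cat > /tmp/cluster.conf <<'EOF'
node:
        ip_port = 7777
        ip_address = 192.168.1.202
        number = 0
        name = bubba
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.102
        number = 1
        name = bobo
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
EOF

# Sanity check: the number of node stanzas should match node_count.
nodes=$(grep -c '^node:' /tmp/cluster.conf)
declared=$(awk '/node_count/ {print $3}' /tmp/cluster.conf)
[ "$nodes" = "$declared" ] && echo OK || echo "mismatch: $nodes vs $declared"
```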
Load modules:
echo ocfs2 >> /etc/modules-load.d/ocfs2.conf
echo dlm >> /etc/modules-load.d/dlm.conf
modprobe ocfs2
modprobe dlm
Mount ocfs2 metafilesystems
echo "none /sys/kernel/config configfs defaults 0 0" >> /etc/fstab
echo "none /sys/kernel/dlm ocfs2_dlmfs defaults 0 0" >> /etc/fstab
mount -a
Start ocfs2 cluster
rc-service o2cb start
Run the following command only on one node.
mkfs.ocfs2 -L LABELNAME /dev/sda1
Run the following command on both nodes.
rc-service o2cb enable
mount /dev/sda1 /media/iscsi1
Both machines can now create, read, write, and modify files on the shared volume at the same time.