ISCSI Raid and Clustered File Systems
This document describes how to create a RAID volume that is exported to multiple hosts via iSCSI and shared between them with a clustered filesystem.
RAID Configuration
This is very similar to Setting_up_a_software_raid1_array.
apk add mdadm
mdadm --create --level=5 --raid-devices=3 /dev/md0 /dev/hda /dev/hdb /dev/hdc
To see the status of the array build:
cat /proc/mdstat
You don't have to wait for the initial sync to finish before using the array.
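To have the array assembled automatically at boot, you can record it in mdadm's configuration file (a sketch; this assumes /etc/mdadm.conf is the path used on your system):
mdadm --detail --scan >> /etc/mdadm.conf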
iSCSI Target Config
SCST is recommended over IET due to bugfixes, performance, and RFC compliance. For a detailed configuration how-to, see High_performance_SCST_iSCSI_Target_on_Linux_software_Raid.
Initiator Config
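Before logging in, the initiator tools need to be installed, the iSCSI daemon started, and the target discovered. A minimal sketch, assuming Alpine's open-iscsi package and its iscsid service (names may differ on other distributions):
apk add open-iscsi
/etc/init.d/iscsid start
iscsiadm --mode discovery --type sendtargets --portal IP_OF_TARGET
Then log in to the discovered target: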
iscsiadm --mode node --targetname NAME_OF_TARGET --portal IP_OF_TARGET --login
This should give you a new device such as /dev/sda; check dmesg to confirm the device name.
fdisk /dev/sda
Create a partition to use, e.g. /dev/sda1 with partition type 83 (Linux).
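If you prefer to script this step instead of using fdisk interactively, sfdisk can create a single type-83 partition spanning the whole disk (a sketch; assumes an empty disk with an MBR label):
echo ',,83' | sfdisk /dev/sda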
Add the OCFS2 tools (currently in edge/main):
apk add ocfs2-tools
The ocfs2-tools package can take care of starting and stopping the services, copying the cluster.conf between nodes, creating the filesystem, and mounting it.
You need to create /etc/ocfs2/cluster.conf.
This configuration file should be identical on all nodes in the cluster. It should look similar to the following (the o2cb tools are strict about the layout: stanza headers start in the first column and each parameter line is indented):
node:
    ip_port = 7777
    ip_address = 192.168.1.202
    number = 0
    name = bubba
    cluster = ocfs2

node:
    ip_port = 7777
    ip_address = 192.168.1.102
    number = 1
    name = bobo
    cluster = ocfs2

cluster:
    node_count = 2
    name = ocfs2
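The same file then has to be placed on every node. One simple way, assuming SSH access between the nodes and using the hostname bobo from the example above:
scp /etc/ocfs2/cluster.conf bobo:/etc/ocfs2/cluster.conf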
Load the modules (now, and at every boot via /etc/modules):
echo ocfs2 >> /etc/modules
echo dlm >> /etc/modules
modprobe ocfs2
modprobe dlm
Mount the ocfs2 meta-filesystems:
echo none /sys/kernel/config configfs defaults 0 0 >> /etc/fstab
echo none /sys/kernel/dlm ocfs2_dlmfs defaults 0 0 >> /etc/fstab
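To mount them immediately without rebooting (the o2cb init script may also take care of this, so this step could be redundant on your system):
mount /sys/kernel/config
mount /sys/kernel/dlm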
Start the ocfs2 cluster:
/etc/init.d/o2cb start
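To bring the cluster stack up automatically at boot as well (assuming OpenRC, as used on Alpine):
rc-update add o2cb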
Run the following command only on one node.
mkfs.ocfs2 -L LABELNAME /dev/sda1
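mkfs.ocfs2 chooses a default number of node slots; for a two-node cluster you can set it explicitly with -N, which limits how many nodes may mount the filesystem concurrently (a sketch reusing the placeholder label):
mkfs.ocfs2 -N 2 -L LABELNAME /dev/sda1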
Run the following commands on both nodes.
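If the mount point does not exist yet, create it first:
mkdir -p /media/iscsi1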
/etc/init.d/o2cb enable
mount /dev/sda1 /media/iscsi1
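Optionally, add an fstab entry so the filesystem is mounted at boot. The _netdev option defers the mount until the network is up, and mounting by label avoids problems if the iSCSI disk gets a different device name after a reboot (a sketch; check that your init scripts honor _netdev):
echo LABEL=LABELNAME /media/iscsi1 ocfs2 _netdev 0 0 >> /etc/fstab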
Both machines can now create, read, write, and change files on the same volume at the same time.