Sunday, 22 March 2026

OCFS2 setup for a shareable (read/write) block volume across multiple cluster compute instances

Attach an OCI block volume as shareable iSCSI
Step 1: Create the block volume in OCI
In the OCI console, create a block volume in the same availability domain as your compute instances. Pick a size and performance level to match your workload.

Step 2: Attach the volume to each cluster node

  1. Open the volume (or the instance) and choose Attach block volume.
  2. Set Attachment type to iSCSI (not paravirtualized for this flow).
  3. Set Attachment access to Read/write – shareable.

Attach the same volume to every node in the cluster, with the same settings each time.
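If you script your infrastructure, the same attachment can be made with the OCI CLI. This is a sketch only; the OCIDs below are placeholders for your own instance and volume IDs:

```shell
# Sketch only: the OCIDs are placeholders -- substitute your own values.
# Run once per instance, pointing at the same --volume-id each time.
oci compute volume-attachment attach \
    --instance-id "ocid1.instance.oc1..exampleinstance" \
    --volume-id "ocid1.volume.oc1..examplevolume" \
    --type iscsi \
    --is-shareable true
```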

Step 3: Run the iSCSI commands on each node
After each attachment, OCI shows the iSCSI commands and connection details for that attachment.
Open it and copy the full set of `iscsiadm` commands (discover, login, and any optional rescan steps OCI lists).

1. SSH to the node
2. Paste and run those commands as root or with `sudo`, exactly as OCI documents for your image (Oracle Linux / RHEL-style hosts usually use the `iscsiadm` sequence from the console).
Repeat on every node so each host has an active iSCSI session to the same volume.

Check: On each node run `lsblk` (or `fdisk -l`). You should see a new disk (often `/dev/sdb` or similar).
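The console's sequence usually looks like the following; the IQN and portal address here are placeholders, so always paste the exact values OCI displays for your attachment:

```shell
# Placeholder target IQN and portal IP -- copy the real values from the
# attachment's iSCSI information in the OCI console.
IQN="iqn.2015-12.com.oracleiaas:exampleuniqueid"
PORTAL="169.254.2.2:3260"

# Register the target, make the login persistent across reboots, then log in.
sudo iscsiadm -m node -o new -T "$IQN" -p "$PORTAL"
sudo iscsiadm -m node -o update -T "$IQN" -n node.startup -v automatic
sudo iscsiadm -m node -T "$IQN" -p "$PORTAL" -l
```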

OCFS2 needs a small cluster layout file that lists every node in the cluster; `node_count` must match the number of `node:` blocks you define.
Step 4: Create the config directory
On each node:
sudo mkdir -p /etc/ocfs2

Step 5: Edit `cluster.conf`
sudo vi /etc/ocfs2/cluster.conf
cluster:
    node_count = 2
    name = ocfs2

node:
    number = 0
    cluster = ocfs2
    ip_port = 7777
    ip_address = 10.0.0.94
    name = jay-db-node01

node:
    number = 1
    cluster = ocfs2
    ip_port = 7777
    ip_address = 10.0.0.95
    name = jay-db-node02


Use one cluster name throughout (here `ocfs2`).
`ip_address` should be the private IP the nodes use to talk to each other (usually the VCN private address).
`ip_port` is commonly 7777 for OCFS2.
`number` must be unique per node (0, 1, 2, …).

Copy the same `cluster.conf` to every node: the full list of all nodes and their IPs must match on each machine.
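One way to keep the copies identical is to generate the file once and distribute it. A minimal sketch using the two example nodes from this post (written to a local file first, since /etc/ocfs2 needs root):

```shell
# Generate the layout locally, then place it at /etc/ocfs2/cluster.conf
# on every node. Node names and IPs match the example above.
cat > cluster.conf <<'EOF'
cluster:
    node_count = 2
    name = ocfs2

node:
    number = 0
    cluster = ocfs2
    ip_port = 7777
    ip_address = 10.0.0.94
    name = jay-db-node01

node:
    number = 1
    cluster = ocfs2
    ip_port = 7777
    ip_address = 10.0.0.95
    name = jay-db-node02
EOF

# Hypothetical distribution step -- adjust host names to your environment:
# for h in jay-db-node01 jay-db-node02; do
#     scp cluster.conf "$h:/tmp/" && ssh "$h" "sudo mv /tmp/cluster.conf /etc/ocfs2/"
# done
```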

Register and configure O2CB
Step 6: Register the cluster
sudo o2cb register-cluster ocfs2

That tells the system which cluster this node belongs to.

Step 7: Configure the driver (one time per node)
[root@jay-db-node01 ~]# sudo /sbin/o2cb.init configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Specify heartbeat dead threshold (>=7) [31]: 31
Specify network idle timeout in ms (>=5000) [30000]: 5000
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
checking debugfs...
Loading stack plugin "o2cb": OK
Loading filesystem "ocfs2_dlmfs": OK
Creating directory '/dlm': OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Setting cluster stack "o2cb": OK
Registering O2CB cluster "ocfs2": OK
Setting O2CB cluster timeouts : OK


Step 8: Start O2CB and check status
[root@jay-db-node01 ~]# sudo o2cb register-cluster ocfs2
[root@jay-db-node01 ~]# sudo systemctl start o2cb
[root@jay-db-node01 ~]# sudo o2cb cluster-status ocfs2
Cluster 'ocfs2' is online
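To have the stack come back after a reboot as well, the ocfs2-tools package on Oracle Linux ships systemd units you can enable (unit names may differ on other distributions; check with `systemctl list-unit-files`):

```shell
# Enable the O2CB cluster stack and OCFS2 services at boot.
sudo systemctl enable o2cb ocfs2
sudo systemctl status o2cb --no-pager
```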

Create the mount point and format the volume
Step 9: Create the mount directory (all nodes)
sudo mkdir /Oradb_data

Step 10: Format the shared disk with OCFS2 (run once, on one node only)
[root@jay-db-node01 ~]# sudo mkfs.ocfs2 -L Oradb_data /dev/sdb -N 8
mkfs.ocfs2 1.8.6
Cluster stack: classic o2cb
Label: Oradb_data
Features: sparse extended-slotmap backup-super unwritten inline-data strict-journal-super xattr indexed-dirs refcount discontig-bg
Block size: 4096 (12 bits)
Cluster size: 4096 (12 bits)
Volume size: 2199023255552 (536870912 clusters) (536870912 blocks)
Cluster groups: 16645 (tail covers 2048 clusters, rest cover 32256 clusters)
Extent allocator size: 276824064 (66 groups)
Journal size: 268435456
Node slots: 8
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 6 block(s)
Formatting Journals: done
Growing extent allocator: done
Formatting slot map: done
Formatting quota files: done
Writing lost+found: done
mkfs.ocfs2 successful

When you see `mkfs.ocfs2 successful`, the volume is ready. Do not run `mkfs` again on the other nodes.

fstab and mount on every node
Step 11: Add the fstab entry on all nodes
sudo vi /etc/fstab
/dev/sdb /Oradb_data ocfs2     _netdev,defaults   0 0

If the shared disk shows up as a different device name on another node, use a stable name (UUID or `/dev/disk/by-id/...`) so every node points at the same LUN.
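For example, here is a sketch of building a UUID-based entry; the UUID below is a placeholder, and on a real node you would read the actual value with `blkid` after formatting:

```shell
# Placeholder UUID -- on a real node, read the actual value with:
#   sudo blkid -s UUID -o value /dev/sdb
UUID="c1f2e3d4-5678-90ab-cdef-112233445566"

# Build the fstab line in a scratch file so you can review it before
# appending it to /etc/fstab on each node.
echo "UUID=${UUID} /Oradb_data ocfs2 _netdev,defaults 0 0" > fstab.snippet
cat fstab.snippet
```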

Step 12: Mount on all nodes
sudo mount -a
Check with `df -h /Oradb_data` or `mount | grep Oradb_data`.

[root@jay-db-node01 ~]# df -h /Oradb_data
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        2.0T  4.2G  2.0T   1% /Oradb_data
