GPFS ships a systemd unit file (/usr/lpp/mmfs/lib/systemd/gpfs.service), which is installed with the GPFS RPMs by default.
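To confirm which copy of the unit file systemd has actually loaded on a node, a quick check along these lines can help (the paths are the defaults seen in this environment; adjust them if your installation differs):

# Show the unit file definition systemd is using for GPFS
systemctl cat gpfs.service
# Compare it with the copy shipped under the GPFS installation tree
ls -l /usr/lpp/mmfs/lib/systemd/gpfs.service /usr/lib/systemd/system/gpfs.service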
systemctl status gpfs.service
[root@IBMPOWER_HOST1 ~]# mmgetstate -a
Node number Node name GPFS state
-------------------------------------------
1 IBMPOWER_HOST1 active
2 IBMPOWER_HOST2 active
3 IBMPOWER_HOST3 active
[root@IBMPOWER_HOST1 ~]# systemctl status gpfs.service
● gpfs.service - General Parallel File System
Loaded: loaded (/usr/lib/systemd/system/gpfs.service; disabled; vendor preset: disabled)
Active: active (running) since Sun 2020-04-19 00:39:51 EDT; 5 days ago
Process: 65927 ExecStart=/usr/lpp/mmfs/bin/mmremote startSubsys systemd $STARTSUBSYS_ARGS (code=exited, status=0/SUCCESS)
Main PID: 65959 (runmmfs)
Tasks: 689
Memory: 1.0G
CGroup: /system.slice/gpfs.service
├─65959 /usr/lpp/mmfs/bin/mmksh /usr/lpp/mmfs/bin/runmmfs
└─66297 /usr/lpp/mmfs/bin/mmfsd
Apr 19 00:39:51 IBMPOWER_HOST1 systemd[1]: Started General Parallel File System.
Apr 19 00:39:55 IBMPOWER_HOST1 mmfs[66297]: [N] CCR: failed to connect to node 9.114.75.217:1191 (sock 53 err 79)
Apr 19 00:39:56 IBMPOWER_HOST1 mmfs[66297]: [N] Connecting to 9.114.75.217 IBMPOWER_HOST3 <c0p0>
Apr 19 00:39:56 IBMPOWER_HOST1 mmfs[66297]: [N] This node (9.114.75.216 (IBMPOWER_HOST1)) is now Cluster Manager for SMPICI_gpfs.IBMPOWER_HOST1.
Apr 19 00:39:56 IBMPOWER_HOST1 mmfs[66297]: [N] mmfsd ready
Apr 19 00:39:57 IBMPOWER_HOST1 mmfs[66297]: [N] Node 9.114.75.217 (IBMPOWER_HOST3) appointed as manager for gpfs_fs.
Apr 20 16:08:19 IBMPOWER_HOST1 mmfs[66297]: [N] Node 9.114.75.215 (IBMPOWER_HOST2) lease renewal is overdue. Pinging to check if it is alive
Apr 20 18:23:10 IBMPOWER_HOST1 mmfs[66297]: [N] Node 9.114.75.217 (IBMPOWER_HOST3) lease renewal is overdue. Pinging to check if it is alive
Apr 20 20:25:15 IBMPOWER_HOST1 mmfs[66297]: [N] Node 9.114.75.217 (IBMPOWER_HOST3) lease renewal is overdue. Pinging to check if it is alive
Apr 20 22:46:10 IBMPOWER_HOST1 mmfs[66297]: [N] Node 9.114.75.217 (IBMPOWER_HOST3) lease renewal is overdue. Pinging to check if it is alive
[root@IBMPOWER_HOST1 ~]#
[root@IBMPOWER_HOST1 ~]# systemctl is-active gpfs.service
active
[root@IBMPOWER_HOST1 ~]# systemctl is-enabled gpfs.service
disabled
[root@IBMPOWER_HOST1 ~]#
[root@IBMPOWER_HOST1 ~]# systemctl is-failed gpfs.service
active
[root@IBMPOWER_HOST1 ~]#
systemctl is-failed returns active if the unit is running properly, or failed if an error occurred.
If the unit was intentionally stopped, it may return inactive or unknown.
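Because these systemctl queries also set their exit codes, they are handy in scripts. A minimal sketch of a health check (the mail notification is only a placeholder; substitute whatever alerting you actually use):

# Exit status 0 means gpfs.service is active; anything else raises an alert
if ! systemctl is-active --quiet gpfs.service; then
    echo "gpfs.service is not active on $(hostname)" | mail -s "GPFS alert" root
fi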
---------------------------------
[root@IBMPOWER_HOST1 ~]# rm -rf /etc/systemd/system/multi-user.target.wants/gpfs.service
[root@IBMPOWER_HOST1 ~]# systemctl enable gpfs.service
Created symlink from /etc/systemd/system/multi-user.target.wants/gpfs.service to /usr/lib/systemd/system/gpfs.service.
[root@IBMPOWER_HOST1 ~]# ls -alsrt /etc/systemd/system/multi-user.target.wants/gpfs.service
0 lrwxrwxrwx 1 root root 36 Apr 24 03:41 /etc/systemd/system/multi-user.target.wants/gpfs.service -> /usr/lib/systemd/system/gpfs.service
[root@IBMPOWER_HOST1 ~]#
Enabling the service creates a symbolic link from the system's copy of the service file (usually in /usr/lib/systemd/system or /etc/systemd/system)
into the location on disk where systemd looks for autostart files (usually /etc/systemd/system/some_target.target.wants/).
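The rm shown above was only used to start from a clean state for this demonstration; normally you would let systemd manage that symlink itself (assuming you simply want to toggle autostart on and off):

# Remove the multi-user.target.wants symlink cleanly instead of using rm
systemctl disable gpfs.service
# Re-create it when you want the service started at boot again
systemctl enable gpfs.service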
--------------------------------------------
[root@IBMPOWER_HOST1 ~]# systemctl enable gpfs.service
Created symlink from /etc/systemd/system/multi-user.target.wants/gpfs.service to /usr/lib/systemd/system/gpfs.service.
[root@IBMPOWER_HOST1 ~]#
[root@IBMPOWER_HOST1 ~]# ls -alsrt /etc/systemd/system/multi-user.target.wants/gpfs.service
0 lrwxrwxrwx 1 root root 36 Apr 24 03:41 /etc/systemd/system/multi-user.target.wants/gpfs.service -> /usr/lib/systemd/system/gpfs.service
[root@IBMPOWER_HOST1 ~]#
Let's check the status:
[root@IBMPOWER_HOST1 ~]# systemctl status gpfs.service
● gpfs.service - General Parallel File System
Loaded: loaded (/usr/lib/systemd/system/gpfs.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2020-04-24 03:35:55 EDT; 10min ago
Main PID: 59025 (runmmfs)
CGroup: /system.slice/gpfs.service
├─59025 /usr/lpp/mmfs/bin/mmksh /usr/lpp/mmfs/bin/runmmfs
└─59548 /usr/lpp/mmfs/bin/mmfsd
Apr 24 03:35:55 IBMPOWER_HOST1 systemd[1]: Starting General Parallel File System...
Apr 24 03:35:55 IBMPOWER_HOST1 systemd[1]: Can't open PID file /var/mmfs/gen/runmmfsPid (yet?) after start: No such file or directory
Apr 24 03:35:55 IBMPOWER_HOST1 systemd[1]: Started General Parallel File System.
Apr 24 03:36:00 IBMPOWER_HOST1 mmfs[59548]: [N] Connecting to 9.114.75.215 IBMPOWER_HOST2 <c0p2>
Apr 24 03:36:00 IBMPOWER_HOST1 mmfs[59548]: [N] mmfsd ready
Apr 24 03:36:01 IBMPOWER_HOST1 mmfs[59548]: [N] Connecting to 9.114.75.217 IBMPOWER_HOST3 <c0n0>
[root@IBMPOWER_HOST1 ~]#
[root@IBMPOWER_HOST1 ~]#
[root@IBMPOWER_HOST1 ~]# mmgetstate -a
Node number Node name GPFS state
-------------------------------------------
1 IBMPOWER_HOST1 active
2 IBMPOWER_HOST2 active
3 IBMPOWER_HOST3 active
[root@IBMPOWER_HOST1 ~]#
[root@IBMPOWER_HOST1 ~]# systemctl is-enabled gpfs.service
enabled
[root@IBMPOWER_HOST1 ~]#
How to auto-mount the file system when the GPFS daemon starts (if you did not set this up at installation):
export PATH=$PATH:/usr/lpp/mmfs/bin
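The export above only affects the current shell; to have the GPFS administration commands on root's PATH in every new session, you can append the same line to root's profile (a convenience step, not a GPFS requirement):

echo 'export PATH=$PATH:/usr/lpp/mmfs/bin' >> /root/.bash_profile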
[root@IBMPOWER_HOST1 ~]# mmgetstate -a
Node number Node name GPFS state
-------------------------------------------
1 IBMPOWER_HOST1 active
2 IBMPOWER_HOST2 active
3 IBMPOWER_HOST3 active
[root@IBMPOWER_HOST1 ~]# mmchfs gpfs_fs -A yes
mmchfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@IBMPOWER_HOST1 ~]#
-A {yes | no | automount}
Indicates when the file system is to be mounted:
yes
When the GPFS daemon starts.
Verify:
[root@IBMPOWER_HOST1 ~]# mmlsfs gpfs_fs -A
flag value description
------------------- ------------------------ -----------------------------------
-A yes Automatic mount option
[root@IBMPOWER_HOST1 ~]#
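If you want to prove that the file system really is mounted automatically when the daemon comes up, restarting GPFS on a test node and then querying the mount state is one way to check (mmshutdown and mmstartup act only on the local node when run without arguments, and mmlsmount shows where the file system is mounted):

# Restart GPFS on this node only, then confirm gpfs_fs is mounted again
mmshutdown
mmstartup
# Allow a few seconds for the daemon to rejoin the cluster and mount
mmlsmount gpfs_fs -L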
How to auto-load and auto-mount Spectrum Scale in simple steps?
[root@IBMPOWER_HOST1 ~]# mmchconfig autoload=yes
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@IBMPOWER_HOST1 ~]#
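To confirm the change took effect, you can filter the cluster configuration for the autoload setting (mmlsconfig prints the current configuration values):

mmlsconfig | grep -i autoload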
Reference:
https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.4/com.ibm.spectrum.scale.v5r04.doc/bl1adm_mmchfs.htm
https://www.ibm.com/support/knowledgecenter/en/SSCKLT_2.0.0/UG/sec_ug_starting_scale_in_system_file.html