I had to get a configuration of corosync, pacemaker and stonith via vCenter working on an Ubuntu 14.04 LTS server.
The following is just a reminder for myself so I will not forget:
- Download the "VMware vSphere Perl SDK". Version 5.5.0 works for me.
- Extract the file: tar -zxvpf VMware-vSphere-Perl-SDK-5.5.0-1384587.x86_64.tar.gz
- Change directory to "vmware-vsphere-cli-distrib"
- Install: ./vmware-install.pl. You might need to install extra dependencies.
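In my case the installer complained about a few missing Perl modules. Something along these lines got it going; the package names below are from memory, so adjust them to whatever ./vmware-install.pl actually reports as missing:
# package list is an assumption - install whatever the SDK installer asks for
apt-get install build-essential libssl-dev libxml-libxml-perl libsoap-lite-perl libcrypt-ssleay-perl libuuid-perl libdata-dump-perl
Re-run ./vmware-install.pl afterwards.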
- Run the CPAN shell for the first time: perl -MCPAN -e shell
- While you are still in the CPAN shell, execute the following command: install GAAS/libwww-perl-5.837.tar.gz. We need an older version of libwww-perl.
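If you want to script this (for example on the second node), the same install can be done non-interactively; as far as I know this one-liner is equivalent:
# installs the pinned libwww-perl release straight from CPAN, without the interactive shell
perl -MCPAN -e 'install "GAAS/libwww-perl-5.837.tar.gz"'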
- We need to edit the file "/usr/lib/vmware-vcli/VMware/share/VMware/VICommon.pm" so perl accepts unknown SSL certificates from the vCenter (source article). Add the line "$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;" directly after the line "use Data::Dumper;".
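For reference, the relevant spot in VICommon.pm looks roughly like this after the edit (the surrounding lines are from memory and may differ in your SDK version):
...
use Data::Dumper;
$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;   # added: tell LWP not to verify the certificate hostname
...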
- Now we need to create a credentials file. For this you will need a user account and a password which is allowed to reset a VMware host via vCenter.
Execute: /usr/lib/vmware-vcli/apps/general/credstore_admin.pl add -s <ip-vcenter> -u "<username>" -p "<password>" --credstore ~/vicredentials.xml
- Change permissions on the file "vicredentials.xml": chmod 400 ~/vicredentials.xml
- Move it to a safe location: mv ~/vicredentials.xml /etc/corosync/
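If I remember the credential store tooling correctly, you can list the stored entry to verify it was written properly:
# "list" should be supported by the same script; syntax from memory
/usr/lib/vmware-vcli/apps/general/credstore_admin.pl list --credstore /etc/corosync/vicredentials.xml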
- Do this on both nodes.
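The SDK install and the CPAN step have to be repeated on the second node, but the credential store itself is just a file, so that part can simply be copied over, for example:
# assumes root ssh access between the nodes
scp /etc/corosync/vicredentials.xml root@NODE2:/etc/corosync/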
- Check that the connection to vCenter is working all right:
Execute: VI_SERVER=<ip-vcenter> VI_CREDSTORE=/etc/corosync/vicredentials.xml HOSTLIST="NODE1" RESETPOWERON=0 stonith -t external/vcenter -E -S
- Result:
Smartmatch is experimental at /usr/lib/stonith/plugins/external/vcenter line 34.
Smartmatch is experimental at /usr/lib/stonith/plugins/external/vcenter line 115.
Smartmatch is experimental at /usr/lib/stonith/plugins/external/vcenter line 152.
Smartmatch is experimental at /usr/lib/stonith/plugins/external/vcenter line 34.
Smartmatch is experimental at /usr/lib/stonith/plugins/external/vcenter line 115.
Smartmatch is experimental at /usr/lib/stonith/plugins/external/vcenter line 152.
info: external/vcenter device OK.
- When you get the OK you can continue; otherwise solve your problem first.
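If you want to be really sure before touching the cluster configuration, you can fire a real reset through the same plugin. Be aware that this actually power-cycles the target VM; the syntax is as I recall it from stonith(8):
# run on NODE1, resets NODE2 via vCenter - only do this when NODE2 may safely go down
VI_SERVER=<ip-vcenter> VI_CREDSTORE=/etc/corosync/vicredentials.xml HOSTLIST="NODE2" RESETPOWERON=0 stonith -t external/vcenter -E -T reset NODE2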
- Add stonith resource to cluster config: crm configure
crm(live)configure# primitive p_stonith_fence_NODE1 stonith:external/vcenter \
params HOSTLIST="NODE1" VI_CREDSTORE="/etc/corosync/vicredentials.xml" VI_SERVER="<ip-vcenter>" RESETPOWERON="0" pcmk_host_check="static-list" pcmk_host_list="NODE1" \
op start interval="0" timeout="120" \
op stop interval="0" timeout="120" \
op monitor interval="3600" timeout="300" start-delay="15" \
meta target-role="Started"
crm(live)configure# location l_stonith_fence_NODE1 p_stonith_fence_NODE1 -inf: NODE1
crm(live)configure# primitive p_stonith_fence_NODE2 stonith:external/vcenter \
params HOSTLIST="NODE2" VI_CREDSTORE="/etc/corosync/vicredentials.xml" VI_SERVER="<ip-vcenter>" RESETPOWERON="0" pcmk_host_check="static-list" pcmk_host_list="NODE2" \
op start interval="0" timeout="120" \
op stop interval="0" timeout="120" \
op monitor interval="3600" timeout="300" start-delay="15" \
meta target-role="Started"
crm(live)configure# location l_stonith_fence_NODE2 p_stonith_fence_NODE2 -inf: NODE2
crm(live)configure# commit
crm(live)configure# quit
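One thing worth double-checking at this point: the cluster only actually uses these resources when the stonith-enabled property is true (which is the default). If you switched it off earlier, turn it back on, for example:
crm configure property stonith-enabled=true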
- Watch and check that the stonith resources are nicely started: crm_mon -rf1
- Result:
NODE1(root@NODE1):~# crm_mon -rf1
Last updated: Thu Jul 9 08:59:45 2015
Last change: Thu Jul 2 03:52:35 2015 via crmd on NODE1
Stack: corosync
Current DC: NODE1 (168303913) - partition with quorum
Version: 1.1.10-42f2063
2 Nodes configured
9 Resources configured

Online: [ NODE1 NODE2 ]

Full list of resources:

p_stonith_fence_NODE1 (stonith:external/vcenter): Started NODE2
p_stonith_fence_NODE2 (stonith:external/vcenter): Started NODE1
Resource Group: zabbix-cluster
fs_data (ocf::heartbeat:Filesystem): Started NODE2
virtip (ocf::heartbeat:IPaddr2): Started NODE2
mysqld (ocf::heartbeat:mysql): Started NODE2
zabbix-server (lsb:zabbix-server): Started NODE2
apache2 (lsb:apache2): Started NODE2
Master/Slave Set: ms_drbd_data [drbd_data]
Masters: [ NODE2 ]
Slaves: [ NODE1 ]

Migration summary:
* Node NODE1:
* Node NODE2:
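As a final test you can let pacemaker fence a node itself, which should reset the VM through vCenter; only do this when it is safe for that node to go down. If I remember the tool correctly:
# run on NODE1; pacemaker should power-cycle NODE2 via the stonith resource
stonith_admin --reboot NODE2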