Discussion:
[Linux-cluster] fence_cisco_ucs issue within cluster.conf
Jayesh Shinde
2015-11-02 14:39:50 UTC
Hi,

I am trying to configure a 2-node cluster with fence_cisco_ucs. The
fence test works properly from the command line, but it does not work
from within cluster.conf.

Problem / scenario :--
----------------------------
When I manually shut down the Ethernet card of the mailbox1 server,
mailbox2 detects the network failure and tries to fence mailbox1, but
the fencing fails with a "plug"-related error (see the log below), i.e.:
Failed: Unable to obtain correct plug status or plug is not available

I went through the Red Hat KB, Google and older mail threads. As per the
suggestions, I upgraded from "fence-agents-3.1.5-35.el6" to
"fence-agents-4.0.15-8.el6.x86_64". I also tried a few other changes in
cluster.conf, but that did not work either. Kindly guide me on where I
am going wrong with the "plug".
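
In case it helps with reproducing this, my understanding is that fenced
passes the device attributes from cluster.conf to the agent as
option=value lines on stdin, so the failing call can be simulated by
hand roughly like below (the option names are my assumption from the
fence_cisco_ucs man page):

# simulate the fenced call for node mailbox1 (status check only)
/usr/sbin/fence_cisco_ucs << EOF
ipaddr=172.17.1.30
ipport=443
login=KVM
passwd=myPassword
ssl=on
suborg=/org-root/ls-mailbox
port=mailbox1
action=status
EOF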

The OS I am using is RHEL 6.5:
-------------------------------------
cman-3.0.12.1-59.el6.x86_64
rgmanager-3.0.12.1-19.el6.x86_64
fence-virt-0.2.3-15.el6.x86_64
fence-agents-4.0.15-8.el6.x86_64

command line fencing :--
-----------------------------
[***@mailbox2 ~]# /usr/sbin/fence_cisco_ucs -a 172.17.1.30 -l KVM -p
'myPassword' -o status -v -z --plug=mailbox1 --ipport=443
suborg="/org-root/ls-mailbox" ; *echo $?*
<aaaLogin inName="KVM" inPassword="***@ssword" />
<aaaLogin cookie="" response="yes"
outCookie="1446038319/bdd0ea5b-386f-4374-8eca-9ff7595c33f3"
outRefreshPeriod="600" outPriv="pn-equipment,pn-maintenance,read-only"
outDomains="" outChannel="noencssl" outEvtChannel="noencssl"
outSessionId="web_29402_B" outVersion="2.2(3d)" outName="KVM"> </aaaLogin>
<configResolveDn
cookie="1446038319/bdd0ea5b-386f-4374-8eca-9ff7595c33f3"
inHierarchical="false" dn="org-root/ls-mailbox1/power"/>
<configResolveDn dn="org-root/ls-mailbox1/power"
cookie="1446038319/bdd0ea5b-386f-4374-8eca-9ff7595c33f3" response="yes">
<outConfig> <lsPower dn="org-root/ls-mailbox1/power" state="up"/>
</outConfig> </configResolveDn>
*Status: ON*
<aaaLogout inCookie="1446038319/bdd0ea5b-386f-4374-8eca-9ff7595c33f3" />
<aaaLogout cookie="" response="yes" outStatus="success"> </aaaLogout>

*0*

/etc/hosts on both the mailbox1 & mailbox2 servers:

127.0.0.1 localhost localhost.localdomain
192.168.51.91 mailbox1.mydomain.com
192.168.51.92 mailbox2.mydomain.com

/etc/cluster/cluster.conf :--
------------------------------------

<?xml version="1.0"?>
<cluster config_version="69" name="cluster1">
    <clusternodes>
        <clusternode name="mailbox1.mydomain.com" nodeid="1">
            <fence>
                <method name="CiscoFence">
                    <device name="CiscoFence" port="mailbox1"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="mailbox2.mydomain.com" nodeid="2">
            <fence>
                <method name="CiscoFence">
                    <device name="CiscoFence" port="mailbox2"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <rm>
        <failoverdomains>
            <failoverdomain name="failover1" ordered="1" restricted="1">
                <failoverdomainnode name="mailbox1.mydomain.com" priority="2"/>
                <failoverdomainnode name="mailbox2.mydomain.com" priority="1"/>
            </failoverdomain>
            <failoverdomain name="failover2" ordered="1" restricted="1">
                <failoverdomainnode name="mailbox1.mydomain.com" priority="2"/>
                <failoverdomainnode name="mailbox2.mydomain.com" priority="1"/>
            </failoverdomain>
        </failoverdomains>
        <resources>
            <ip address="192.168.51.93/24" sleeptime="10"/>
            <fs device="/dev/mapper/mail_1-mailbox1" force_unmount="1" fsid="28418" fstype="ext4" mountpoint="/mailbox1" name="imap1_fs" self_fence="1"/>
            <script file="/etc/init.d/cyrus-imapd1" name="cyrus1"/>
            <ip address="192.168.51.94/24" sleeptime="10"/>
            <fs device="/dev/mapper/mail_2-mailbox2" force_unmount="1" fsid="49388" fstype="ext4" mountpoint="/mailbox2" name="imap2_fs" self_fence="1"/>
            <script file="/etc/init.d/cyrus-imapd2" name="cyrus2"/>
        </resources>
        <service domain="failover1" name="mailbox1" recovery="restart">
            <fs ref="imap1_fs"/>
            <ip ref="192.168.51.93/24"/>
            <script ref="cyrus1"/>
        </service>
        <service domain="failover2" name="mailbox2" recovery="restart">
            <ip ref="192.168.51.94/24"/>
            <fs ref="imap2_fs"/>
            <script ref="cyrus2"/>
        </service>
    </rm>
    <fencedevices>
        <fencedevice agent="fence_cisco_ucs" ipaddr="172.17.1.30" ipport="443" login="KVM" name="CiscoFence" passwd="myPassword" ssl="on" suborg="/org-root/ls-mailbox"/>
    </fencedevices>
</cluster>
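
In case it is useful, these are the checks I normally run against the
config itself (cman / rgmanager utilities; shown only as a sketch,
output omitted):

# validate cluster.conf against the cluster schema (cman package)
ccs_config_validate

# show cluster membership and service state
clustat

# show fence domain membership
fence_tool ls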



tail -f /var/log/messages :--
------------------------------

Oct 28 15:42:13 mailbox2 corosync[2376]: [CPG ] chosen downlist:
sender r(0) ip(192.168.51.92) ; members(old:2 left:1)
Oct 28 15:42:13 mailbox2 corosync[2376]: [MAIN ] Completed service
synchronization, ready to provide service.
Oct 28 15:42:13 mailbox2 fenced[2435]: fencing node mailbox1
Oct 28 15:42:13 mailbox2 rgmanager[2849]: State change: mailbox1 DOWN
Oct 28 15:42:14 mailbox2 python: Failed: Unable to obtain correct plug
status or plug is not available#012
Oct 28 15:42:14 mailbox2 fenced[2435]: fence mailbox1 dev 0.0 agent
fence_cisco_ucs result: error from agent
Oct 28 15:42:14 mailbox2 fenced[2435]: fence mailbox1 failed
Oct 28 15:42:17 mailbox2 fenced[2435]: fencing node mailbox1
Oct 28 15:42:17 mailbox2 python: Failed: Unable to obtain correct plug
status or plug is not available#012
Oct 28 15:42:17 mailbox2 fenced[2435]: fence mailbox1 dev 0.0 agent
fence_cisco_ucs result: error from agent
Oct 28 15:42:17 mailbox2 fenced[2435]: fence mailbox1 failed
Oct 28 15:42:20 mailbox2 fenced[2435]: fencing node mailbox1
Oct 28 15:42:21 mailbox2 python: Failed: Unable to obtain correct plug
status or plug is not available#012
Oct 28 15:42:21 mailbox2 fenced[2435]: fence mailbox1 dev 0.0 agent
fence_cisco_ucs result: error from agent
Oct 28 15:42:21 mailbox2 fenced[2435]: fence mailbox1 failed
Oct 28 15:42:25 mailbox2 python: Failed: Unable to obtain correct plug
status or plug is not available#012
Oct 28 15:42:28 mailbox2 python: Failed: Unable to obtain correct plug
status or plug is not available#012
Oct 28 15:42:31 mailbox2 python: Failed: Unable to obtain correct plug
status or plug is not available#012
Oct 28 15:42:35 mailbox2 python: Failed: Unable to obtain correct plug
status or plug is not available#012
Oct 28 15:42:38 mailbox2 python: Failed: Unable to obtain correct plug
status or plug is not available#012
Oct 28 15:42:42 mailbox2 python: Failed: Unable to obtain correct plug
status or plug is not available#012
Oct 28 15:42:45 mailbox2 python: Failed: Unable to obtain correct plug
status or plug is not available#012
Oct 28 15:42:49 mailbox2 python: Failed: Unable to obtain correct plug
status or plug is not available#012
Oct 28 15:42:49 mailbox2 corosync[2376]: [TOTEM ] A processor joined
or left the membership and a new membership was formed.
Oct 28 15:42:49 mailbox2 corosync[2376]: [CPG ] chosen downlist:
sender r(0) ip(192.168.51.92) ; members(old:1 left:0)
Oct 28 15:42:49 mailbox2 corosync[2376]: [MAIN ] Completed service
synchronization, ready to provide service.
Oct 28 15:42:52 mailbox2 python: Failed: Unable to obtain correct plug
status or plug is not available#012
Oct 28 15:42:56 mailbox2 python: Failed: Unable to obtain correct plug
status or plug is not available#012


Regards
Jayesh Shinde
Marek marx Grác
2015-11-04 09:20:57 UTC
Hi,

Thanks for a great report. I believe the problem is that in cluster.conf you set suborg="/org-root/ls-mailbox", but on the command line you passed it without the leading "--", so it was not used at all. This is best seen in the verbose output:
command line fencing :--
-----------------------------
'myPassword' -o status -v -z --plug=mailbox1 --ipport=443
suborg="/org-root/ls-mailbox" ; *echo $?*
Post by Jayesh Shinde
outCookie="1446038319/bdd0ea5b-386f-4374-8eca-9ff7595c33f3"
outRefreshPeriod="600" outPriv="pn-equipment,pn-maintenance,read-only"
outDomains="" outChannel="noencssl" outEvtChannel="noencssl"
outSessionId="web_29402_B" outVersion="2.2(3d)" outName="KVM">
Post by Jayesh Shinde
cookie="1446038319/bdd0ea5b-386f-4374-8eca-9ff7595c33f3"
inHierarchical="false" dn="org-root/ls-mailbox1/power"/>
Post by Jayesh Shinde
cookie="1446038319/bdd0ea5b-386f-4374-8eca-9ff7595c33f3" response="yes">
As you can see, the dn is org-root/ls-mailbox1/...; with suborg set it would be org-root/ls-mailbox/ls-mailbox1/...
It should be enough to remove the suborg part from your cluster.conf.
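
To make the difference concrete, roughly the same status check can be
run with the option actually applied (note the leading --); the dn
values in the comments are my reading of what the agent queries in each
case:

# suborg not applied -> agent queries dn org-root/ls-mailbox1/power (matches the real profile)
/usr/sbin/fence_cisco_ucs -a 172.17.1.30 -l KVM -p 'myPassword' -z --ipport=443 --plug=mailbox1 -o status -v

# suborg applied -> agent queries a dn under org-root/ls-mailbox/..., which does not exist,
# hence "Unable to obtain correct plug status or plug is not available"
/usr/sbin/fence_cisco_ucs -a 172.17.1.30 -l KVM -p 'myPassword' -z --ipport=443 --plug=mailbox1 --suborg="/org-root/ls-mailbox" -o status -v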

m,
--
Linux-cluster mailing list
Linux-***@redhat.com
https://www.redhat.com/mailman/listinf
Jayesh Shinde
2015-11-04 14:36:47 UTC
Hello,

My issue was resolved by removing the suborg attribute from cluster.conf.

The syntax from the Red Hat KB https://access.redhat.com/solutions/785903#
did not work for me on RHEL 6.5, i.e. for --suborg.

The KB says :-- "While invoking the fence_cisco_ucs agent from the
command line, please add the /org- prefix to the real organization path,
like below:

$ fence_cisco_ucs --ip="a.b.c.d" --username="admin" --password="XXXXX"
-z 1 --plug="UCSPROFILE2" --suborg="/org-RHEL/" -o status -v"

With the above syntax, the error remained the same, as follows :--

[***@mailbox2 ~]# fence_cisco_ucs --ip="172.17.1.30" --username="KVM"
--password="myPassword" -z 1 --plug="mailbox1"
--suborg="/org-root/ls-mailbox" -o status -v
[***@mailbox2 ~]# fence_cisco_ucs --ip="172.17.1.30" --username="KVM"
--password="myPassword" -z 1 --plug="mailbox1" --suborg="/org-root/" -o
status -v

*Failed:* Unable to obtain correct plug status or plug is not available.

Whereas if I remove --suborg, it works properly, as follows:

[***@mailbox2 ~]# fence_cisco_ucs --ip="172.17.1.30" --username="KVM"
--password="myPassword" -z 1 --plug="mailbox1" -o status -v
*Status:* ON

I used the same logic in cluster.conf, and fencing is now working fine :--)

<fencedevices>
    <fencedevice agent="fence_cisco_ucs" ipaddr="172.17.1.30" ipport="443" login="KVM" name="CiscoFence" passwd="myPassword" ssl="on"/>
</fencedevices>
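
For anyone following the same steps, this is roughly how I propagate the
change and re-test on RHEL 6 after bumping config_version (as far as I
remember, ricci must be running on both nodes for the propagation to
work):

# push the updated /etc/cluster/cluster.conf to all nodes
cman_tool version -r

# re-test fencing of the peer through fenced, using the cluster.conf definition
fence_node mailbox1.mydomain.com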

Thanks

Regards
Jayesh Shinde