This is a very intermittent drop of the DAS planned shutdown notification, seen in scenarios 10 and 11.
The failed constraint was that the Planned Shutdown notification for the DAS was not received by one of the clustered instances.
(Scenario 10 explicitly stops the DAS in the middle of the scenario to verify the GroupLeadership change.)
This failure happened in only one of 32 runs, and for only one instance in the cluster, so it is quite intermittent.
There is a strong possibility that this was a dropped UDP message. While I have fixed dropped UDP broadcast messages
in this release, this is unfortunately a boundary case that I cannot address with the current design: the rebroadcast of the missed event
cannot take place because the last event the DAS broadcast was its own shutdown. By the time the clustered instance noticed
it had missed an event, the instance it would ask to rebroadcast the missed event no longer existed, so the dropped
UDP packet could not be rebroadcast. This would be nontrivial
to fix and is not advised at this late stage of the release.
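To make the boundary case concrete, here is a minimal sketch of the recovery logic described above. Everything in it (the class, method, and field names, and the sequence-number bookkeeping) is a hypothetical model of the design, not the actual Shoal implementation:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical model of missed-UDP-broadcast recovery; NOT actual Shoal code.
public class RebroadcastSketch {

    // Highest sequence number received so far, per sender.
    private final Map<String, Long> lastSeqBySender = new HashMap<>();
    // Members currently in the group view.
    private final Set<String> aliveMembers = new HashSet<>();

    public void onMemberJoined(String member) {
        aliveMembers.add(member);
    }

    /** Normal case: a later message from the same sender exposes a gap. */
    public void onBroadcast(String sender, long seq) {
        long lastSeen = lastSeqBySender.getOrDefault(sender, seq - 1);
        for (long missed = lastSeen + 1; missed < seq; missed++) {
            requestRebroadcast(sender, missed); // sender is still alive here
        }
        lastSeqBySender.put(sender, Math.max(lastSeen, seq));
    }

    /**
     * Boundary case from this report: the view change that removes a member
     * is how we learn its final broadcast (here, the DAS's PlannedShutdown)
     * was dropped, but by then the only member that could replay it is gone.
     */
    public void onMemberLeft(String member, long finalSeq) {
        aliveMembers.remove(member);
        long lastSeen = lastSeqBySender.getOrDefault(member, finalSeq);
        if (lastSeen < finalSeq) {
            requestRebroadcast(member, finalSeq); // fails: member has left
        }
    }

    private void requestRebroadcast(String sender, long seq) {
        if (!aliveMembers.contains(sender)) {
            System.err.println("event " + seq + " from " + sender
                    + " is unrecoverable: sender has left the group");
            return;
        }
        // Point-to-point rebroadcast request to the sender (elided).
    }
}
```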
Luckily, the DAS does not participate in data replication, so this missed PlannedShutdown of a SPECTATOR member would not impact HA.
I am not aware of any application that depends on the planned shutdown notification of the SPECTATOR DAS. Everything else is okay in the logs:
the instance was notified of a new GroupLeader to replace the shut-down DAS, and the list of currently alive and ready members is correct
(it reflects that the DAS "server" is no longer part of the cluster).
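For reference, this is roughly how an instance subscribes to the two notifications involved here (PlannedShutdown and GroupLeadershipNotification) via the Shoal GMS client API. This is a minimal sketch from memory of the Shoal 1.x client classes; treat the exact factory class names and the startGMSModule signature as assumptions to verify against the Shoal version in this release:

```java
import java.util.Properties;
import com.sun.enterprise.ee.cms.core.CallBack;
import com.sun.enterprise.ee.cms.core.GMSFactory;
import com.sun.enterprise.ee.cms.core.GroupManagementService;
import com.sun.enterprise.ee.cms.core.PlannedShutdownSignal;
import com.sun.enterprise.ee.cms.core.Signal;
import com.sun.enterprise.ee.cms.impl.client.GroupLeadershipNotificationActionFactoryImpl;
import com.sun.enterprise.ee.cms.impl.client.PlannedShutdownActionFactoryImpl;

public class ShutdownListenerSketch {
    public static void main(String[] args) throws Exception {
        // Join the group as a CORE member, like instance n1c1m7 in the log below.
        GroupManagementService gms = (GroupManagementService) GMSFactory.startGMSModule(
                "n1c1m7", "clusterz1",
                GroupManagementService.MemberType.CORE, new Properties());

        CallBack cb = new CallBack() {
            public void processNotification(Signal signal) {
                if (signal instanceof PlannedShutdownSignal) {
                    // The notification that was dropped for one instance
                    // when the DAS (a SPECTATOR) shut down.
                    System.out.println("planned shutdown of " + signal.getMemberToken());
                } else {
                    // GMS1093: a new group leader replaces the departed DAS.
                    System.out.println("new group leader: " + signal.getMemberToken());
                }
            }
        };

        // Only these two factories are registered, so the else branch above
        // only ever sees GroupLeadershipNotification signals.
        gms.addActionFactory(new PlannedShutdownActionFactoryImpl(cb));
        gms.addActionFactory(new GroupLeadershipNotificationActionFactoryImpl(cb));
        gms.join();
    }
}
```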
Extracted from http://aras2.us.oracle.com:8080/logs/gf31/gms///set_01_11_11_t_13_45_23/scenario_0010_Tue_Jan_11_23_55_27_PST_2011/easqezorro8_n1c1m7.log
[#|2011-01-12T07:56:38.260+0000|INFO|glassfish3.1|ShoalLogger|_ThreadID=16;_ThreadName=Thread-1;|GMS1093: adding GroupLeadershipNotification signal leadermember: n1c1m1 of group: clusterz1|#]
[#|2011-01-12T07:56:38.260+0000|INFO|glassfish3.1|ShoalLogger|_ThreadID=16;_ThreadName=Thread-1;|GMS1092: GMS View Change Received for group: clusterz1 : Members in view for MASTER_CHANGE_EVENT(before change analysis) are :
1: MemberId: n1c1m1, MemberType: CORE, Address: 10.133.184.208:9132:184.108.40.206:31524:clusterz1:n1c1m1
2: MemberId: n1c1m2, MemberType: CORE, Address: 10.133.184.209:9154:220.127.116.11:31524:clusterz1:n1c1m2
3: MemberId: n1c1m3, MemberType: CORE, Address: 10.133.184.211:9140:18.104.22.168:31524:clusterz1:n1c1m3
4: MemberId: n1c1m4, MemberType: CORE, Address: 10.133.184.213:9196:22.214.171.124:31524:clusterz1:n1c1m4
5: MemberId: n1c1m5, MemberType: CORE, Address: 10.133.184.214:9147:126.96.36.199:31524:clusterz1:n1c1m5
6: MemberId: n1c1m6, MemberType: CORE, Address: 10.133.184.137:9195:188.8.131.52:31524:clusterz1:n1c1m6
7: MemberId: n1c1m7, MemberType: CORE, Address: 10.133.184.138:9121:184.108.40.206:31524:clusterz1:n1c1m7
8: MemberId: n1c1m8, MemberType: CORE, Address: 10.133.184.139:9194:220.127.116.11:31524:clusterz1:n1c1m8
9: MemberId: n1c1m9, MemberType: CORE, Address: 10.133.184.140:9191:18.104.22.168:31524:clusterz1:n1c1m9