If the first node halts during its reboot after scinstall completes, logging errors like the following, read on for the fix:
Jan 27 11:52:50 node1 genunix: WARNING: CCR: Invalid CCR table : rgm_rt_SUNW.LogicalHostname:4 cluster global.
Jan 27 11:52:50 node1 genunix: WARNING: CCR: Invalid CCR table : rgm_rt_SUNW.SharedAddress:2 cluster global.
...
Jan 27 11:53:55 node1 Cluster.RGM.global.rgmd: [ID 349049 daemon.alert] CCR reported invalid table rgm_rt_SUNW.LogicalHostname:4; halting node
Jan 27 11:53:55 node1 Cluster.RGM.global.rgmd: [ID 349049 daemon.alert] CCR reported invalid table rgm_rt_SUNW.SharedAddress:2; halting node
The CCR directory references resource-type tables that are missing or invalid, so rgmd halts the node on every boot into the cluster. The fix is to boot the node in non-cluster mode and remove the stale entries from the directory table.
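On SPARC the standard way into non-cluster mode is the -x flag from the OpenBoot prompt; from a running system you can pass the same flag through reboot (both are the usual Solaris Cluster non-cluster-mode boot, but check the docs for your platform):

ok boot -x

or, from a running shell:

reboot -- -x

Once the node is up in non-cluster mode, edit the directory table: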
cd /etc/cluster/ccr/global
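It is worth copying the table aside before touching it; directory.orig is just an arbitrary backup name:

cp directory directory.orig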
vi directory
You want to remove these two lines:
rgm_rt_SUNW.LogicalHostname:4
rgm_rt_SUNW.SharedAddress:2
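If you would rather not edit by hand, a grep -v pipeline achieves the same result; /tmp/directory.new is just a scratch file name:

grep -v 'rgm_rt_SUNW.LogicalHostname:4' directory | grep -v 'rgm_rt_SUNW.SharedAddress:2' > /tmp/directory.new
cp /tmp/directory.new directory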
Save the file and bless it, so that the hand-edited table passes CCR validation again:
ccradm recover -o directory
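Before rebooting, a quick sanity check that the stale entries are really gone; this grep should print nothing:

grep -E 'LogicalHostname:4|SharedAddress:2' directory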
Reboot back into cluster mode and proceed. Once both nodes have rebooted and are in cluster mode, re-register the two resource types whose entries were removed:
clresourcetype register SUNW.LogicalHostname
clresourcetype register SUNW.SharedAddress
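To confirm the registration took, list the resource types on either node; both SUNW.LogicalHostname and SUNW.SharedAddress should show up:

clresourcetype list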