Merge lp://qastaging/~gnuoy/charms/trusty/hacluster/1478980 into lp://qastaging/~openstack-charmers/charms/trusty/hacluster/next
Status: Work in progress
Proposed branch: lp://qastaging/~gnuoy/charms/trusty/hacluster/1478980
Merge into: lp://qastaging/~openstack-charmers/charms/trusty/hacluster/next
Diff against target: 132 lines (+69/-4), 3 files modified
  hooks/hooks.py (+14/-4)
  hooks/pcmk.py (+36/-0)
  hooks/utils.py (+19/-0)
To merge this branch: bzr merge lp://qastaging/~gnuoy/charms/trusty/hacluster/1478980
Related bugs: (none listed)
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Billy Olsen | | | Needs Fixing
Unmerged revisions

53. By Liam Young
    Use status return code rather than looking for an arbitrary string

52. By Liam Young
    Update resource parameters if the principal charm has requested a
    different value from what was initially set

51. By James Page
    Update copyright for included rbd ocf.

50. By Edward Hope-Morley
    [hopem,r=gnuoy]
    Fixes unicast ipv6
    Closes-Bug: 1461145

49. By Edward Hope-Morley
    [hopem,r=gnuoy]
    Fixup corosync.conf configuration
    Closes-Bug: 1461849

48. By Billy Olsen
    [hopem,r=wolsen]
    Refactor and clean up the hacluster charm.
    This makes the code format and layout more consistent with
    the rest of the OpenStack charms.

47. By Liam Young
    [gnuoy, trivial] Propagate plugin licenses to copyright file

46. By Liam Young
    [bradm, r=gnuoy] Add nrpe checks to the hacluster charm to check
    the status of corosync.

45. By James Page
    [beisner, r=james-page] Auto-normalize amulet test definitions and
    amulet make targets; charm-helpers sync.

44. By Edward Hope-Morley
    [gnuoy,r=hopem]
    Allow corosync.conf netmtu to be set regardless of inet mode
    (ipv4/ipv6).
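Revision 53's change (using the status return code rather than looking for an arbitrary string) follows a common pattern: trust the command's exit code, not its textual output, which can change between tool versions and locales. A minimal sketch of the idea, assuming a `crm resource status <name>`-style CLI; the helper names below are illustrative, not the charm's actual code:

```python
import subprocess


def cmd_succeeds(cmd):
    """Return True if `cmd` exits 0, discarding its output.

    Checking the exit code is more robust than grepping stdout
    for an arbitrary string.
    """
    rc = subprocess.call(cmd,
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL)
    return rc == 0


def resource_exists(resource):
    """Check whether a pacemaker resource is defined, by exit code.

    Illustrative only; the real pcmk.py helper may differ.
    """
    return cmd_succeeds(['crm', 'resource', 'status', resource])
```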
Minus the fact that it's missing unit tests, this looks pretty decent to me. In particularly busy clusters there is a possibility that an update will fail, and I think the retry decorator should be added. Otherwise, it gets my +1.
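The suggested retry decorator could be a simple exception-driven retry with back-off, in the spirit of charmhelpers' `retry_on_exception`. A hand-rolled sketch, where the parameter names and doubling back-off policy are illustrative assumptions rather than the charm's actual helper:

```python
import functools
import time


def retry_on_exception(num_retries=3, base_delay=2, exc_type=Exception):
    """Retry the wrapped function up to `num_retries` extra times
    when it raises `exc_type`, doubling the delay between attempts.
    (Sketch only; parameters are illustrative.)
    """
    def decorator(f):
        @functools.wraps(f)
        def wrapped(*args, **kwargs):
            delay = base_delay
            for attempt in range(num_retries + 1):
                try:
                    return f(*args, **kwargs)
                except exc_type:
                    if attempt == num_retries:
                        raise  # out of retries, propagate the error
                    time.sleep(delay)
                    delay *= 2
        return wrapped
    return decorator
```

Applied to the resource-update call in pcmk.py, transient failures in a busy cluster would be retried a few times before the hook itself fails.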
As a general note, there's a certain amount of uncertainty during the process of changing configuration values such as the VIP. For instance, changing it may:
1. cause the VIP to move to another node;
2. update the VIP resource so that the new IP is served while the old IP goes unserved until the rest of the services have been reconfigured;
3. wreak more havoc across the environment if the cluster is split-brain (perhaps with both VIPs being served at the same time).
I don't have the bandwidth to test each and every scenario, but I wanted to drop this note so that extra thought is given to these various scenarios (and more).