Merge lp://qastaging/~rackspace-titan/nova/xen_vmops_rescue_fixes into lp://qastaging/~hudson-openstack/nova/trunk

Proposed by Dan Prince
Status: Work in progress
Proposed branch: lp://qastaging/~rackspace-titan/nova/xen_vmops_rescue_fixes
Merge into: lp://qastaging/~hudson-openstack/nova/trunk
Diff against target: 71 lines (+20/-14)
1 file modified
nova/virt/xenapi/vmops.py (+20/-14)
To merge this branch: bzr merge lp://qastaging/~rackspace-titan/nova/xen_vmops_rescue_fixes
Reviewer Review Type Date Requested Status
Chris Behrens (community) Needs Information
Nova Core security contacts Pending
Review via email: mp+75770@code.qastaging.launchpad.net

Description of the change

Fix several issues with rescue and unrescue in xenapi/vmops.py:

  1) check for and delete rescue instances when deleting an instance

  2) _find_rescue_vbd now works if there is no swap vbd

  3) fix 'device in use' errors that occurred when unrescuing an instance
   in which the user had actually mounted the original image VBD.
   We no longer try to individually hot-unplug the VBDs on the rescue instance
   before shutdown. Simply shutting down the rescue instance and removing its
   VDIs is a more robust unrescue method (a rough sketch of this approach
   follows below).
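
The diff itself is not shown on this page, so the following is only a rough sketch of the unrescue path described in item 3. The _shutdown_rescue and _destroy_rescue_vdis helper names are the ones discussed in the review below; the call_xenapi("VM.destroy", ...) line is an assumption about how the rescue VM record itself gets cleaned up, not something taken from the actual diff.

def _destroy_rescue_instance(self, rescue_vm_ref):
    """Tear down a rescue instance without hot-unplugging its VBDs."""
    # Once the rescue VM is halted its VBDs are no longer in use, so the
    # rescue VDIs can be removed directly instead of unplugging each VBD.
    self._shutdown_rescue(rescue_vm_ref)
    self._destroy_rescue_vdis(rescue_vm_ref)
    # Assumed cleanup of the rescue VM record itself.
    self._session.call_xenapi("VM.destroy", rescue_vm_ref)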

Revision history for this message
Chris Behrens (cbehrens) wrote :

Small typo on line 9: "We try use the second"

And hm, doesn't this destroy the instance's disks? It seems like just shutting down the rescue instance and calling destroy on all of the VDIs will remove the disk we want to keep around. I'd figure we'd still want to unplug that disk... but maybe do it after shutdown.

review: Needs Information
1580. By Dan Prince

small typo in comment.

Revision history for this message
Dan Prince (dan-prince) wrote :

> And hm. Doesn't this destroy the instance's disks? Seems like just shutting
> down rescue and calling destroy on all of the VDIs will remove the disk we
> want to keep around. I'd figure we'd still want to unplug the disk... but
> maybe do it after shutdown.

Chris, looking into this now. I don't think I had an issue with it, but it probably won't hurt to try that.

Revision history for this message
Dan Prince (dan-prince) wrote :

Okay Chris, looks like you were right. I can't simply skip unplugging them: leaving it the way it was still causes 'device in use' errors. We spoke about this on IRC a bit... do you have any other ideas?

Revision history for this message
Chris Behrens (cbehrens) wrote :

Hm. Did you try doing the unplug/VBD destroy after the rescue instance is shut down? If that still gives 'device in use'... that'd be pretty strange.

I.e., I was thinking that if you restore _destroy_rescue_vbds() and just change the order in _destroy_rescue_instance() to the following, it should work:

def _destroy_rescue_instance(self, rescue_vm_ref):
    """Destroy a rescue instance."""
    self._shutdown_rescue(rescue_vm_ref)
    self._destroy_rescue_vbds(rescue_vm_ref)
    self._destroy_rescue_vdis(rescue_vm_ref)
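
For illustration only, the restored helper might look roughly like the sketch below; the VMHelper.unplug_vbd/destroy_vbd helpers and the VM.get_VBDs call are assumed from similar vmops code of this era, not taken from this proposal's diff:

def _destroy_rescue_vbds(self, rescue_vm_ref):
    """Unplug and destroy the VBDs attached to a rescue instance."""
    vbd_refs = self._session.get_xenapi().VM.get_VBDs(rescue_vm_ref)
    for vbd_ref in vbd_refs:
        # Detach each VBD, then remove its record; the underlying VDIs
        # are destroyed separately by _destroy_rescue_vdis().
        VMHelper.unplug_vbd(self._session, vbd_ref)
        VMHelper.destroy_vbd(self._session, vbd_ref)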

I also noticed that _destroy_vdis() and _destroy_rescue_vdis() are very similar (I know you didn't modify them)... the former waits for the vdi destroys to complete. I wonder if those could be consolidated as a part of this, even though it's somewhat unrelated.
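
As a sketch of that consolidation only: _destroy_vdis_for_vm is a hypothetical name, and VMHelper.lookup_vm_vdis plus the call_xenapi usage are assumptions about the surrounding vmops/vm_utils code rather than anything taken from this proposal's diff.

def _destroy_vdis_for_vm(self, vm_ref, wait=True):
    """Destroy every VDI attached to vm_ref."""
    vdi_refs = VMHelper.lookup_vm_vdis(self._session, vm_ref)
    for vdi_ref in vdi_refs or []:
        if wait:
            # Plain VDI.destroy returns once the VDI is gone, giving the
            # waiting behavior mentioned above for _destroy_vdis().
            self._session.call_xenapi('VDI.destroy', vdi_ref)
        else:
            # Fire-and-forget asynchronous destroy for the rescue path.
            self._session.call_xenapi('Async.VDI.destroy', vdi_ref)

The rescue path could then call the same helper with whatever wait setting matches today's behavior, and _destroy_rescue_vdis() would go away.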

Unmerged revisions

1580. By Dan Prince

small typo in comment.

1579. By Dan Prince

Fix several issues with rescue and unrescue in xenapi/vmops.py:

1) check for and delete rescue instances when deleting an instance

2) _find_rescue_vbd now works if there is no swap vbd

3) fix 'device in use' errors that occurred when unrescuing an instance
 in which the user had actually mounted the original image VBD.
 We no longer try to individually hot-unplug the VBDs on the rescue instance
 before shutdown. Simply shutting down the rescue instance and removing its
 VDIs is a more robust unrescue method.
