Merge lp://qastaging/~pfalcon/linaro-android-build-tools/deploy-script into lp://qastaging/linaro-android-build-tools

Proposed by Paul Sokolovsky
Status: Merged
Approved by: Paul Sokolovsky
Approved revision: 145
Merged at revision: 143
Proposed branch: lp://qastaging/~pfalcon/linaro-android-build-tools/deploy-script
Merge into: lp://qastaging/linaro-android-build-tools
Diff against target: 44 lines (+40/-0)
1 file modified
control/deploy-control-node (+40/-0)
To merge this branch: bzr merge lp://qastaging/~pfalcon/linaro-android-build-tools/deploy-script
Reviewer Review Type Date Requested Status
Paul Sokolovsky Approve
James Westby (community) Needs Information
Review via email: mp+58526@code.qastaging.launchpad.net

Description of the change

A script to deploy codebase updates to the linaro-cloud-buildd master. So far it handles only the mirror service update (tested), but it is intended to update all codebases in one pass, so that the operator does not have to manually select what to update.

Ideally, setup-control-node would install and configure distro packages and system-level settings (such as users), and then call into this script to set up the Linaro codebases.
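Since the diff itself is not shown on this page, here is a minimal sketch of the move-aside-for-rollback deployment idea described above. All paths and names are illustrative, not taken from the actual deploy-control-node script; a throwaway directory stands in for the real install location.

```shell
# Illustrative sketch only; not the real deploy-control-node script.
set -e
BASE=$(mktemp -d)

# Simulate the currently deployed codebase
mkdir -p "$BASE/mirror-service"
echo "old" > "$BASE/mirror-service/VERSION"

# Deploy: move the running copy aside so rollback is a single rename,
# then check out / unpack the new codebase in its place
mv "$BASE/mirror-service" "$BASE/mirror-service.prev"
mkdir -p "$BASE/mirror-service"
echo "new" > "$BASE/mirror-service/VERSION"
```

Rollback with this scheme is just the reverse rename, restoring the previous tree wholesale.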

Revision history for this message
James Westby (james-w) wrote :

I guess this would disrupt any jobs that were currently waiting on the
response from the mirror service?

Would it be possible to gracefully replace the running service?

Thanks,

James

review: Needs Information
Revision history for this message
Paul Sokolovsky (pfalcon) wrote :

On Wed, 20 Apr 2011 16:23:37 -0000
James Westby <email address hidden> wrote:

> Review: Needs Information
> I guess this would disrupt any jobs that were currently waiting on the
> response from the mirror service?

Yes, so the update should be done when there are no queued/running jobs,
with the downtime announced (and later, when we move into full
production mode, planned beforehand).

>
> Would it be possible to gracefully replace the running service?

If you mean adding a check that no requests are currently being
serviced, and waiting for completion if there are, then yes, I guess
this can be done, though it will require some thought on how to do
that with Twisted. I hope that's not a high priority, but I will
certainly keep it in mind (regarding updates to all components, we may
need to think about a sorry page or read-only mode).

>
> Thanks,
>
> James
>

--
Best Regards,
Paul

144. By Paul Sokolovsky

Add check and warning about outstanding mirror ops before killing service.

Revision history for this message
Paul Sokolovsky (pfalcon) wrote :

Actually, if we are talking about automation, then it indeed makes sense to add a check for this condition from the start, even a crude one.

Revision history for this message
James Westby (james-w) wrote :

On Wed, 20 Apr 2011 17:54:31 -0000, Paul Sokolovsky <email address hidden> wrote:
> Yes, so the update should be done when there are no queued/running jobs,
> with the downtime announced (and later, when we move into full
> production mode, planned beforehand).

Ok.

> If you mean adding a check that no requests are currently being
> serviced, and waiting for completion if there are, then yes, I guess
> this can be done, though it will require some thought on how to do
> that with Twisted. I hope that's not a high priority, but I will
> certainly keep it in mind (regarding updates to all components, we may
> need to think about a sorry page or read-only mode).

I was thinking more that we cause the existing process to stop listening
on the socket, but to continue servicing existing connections and then
exit. The new process can start listening on the socket with only a very
short window.

I guess it would also require co-ordination between the old and new
processes so that they don't both try to pull into the same
repositories. A simple approach there could involve the old process
not doing anything more and returning the original manifest to the
client, or the most up-to-date manifest that it has.

Also, I'm a little unsure about the moving of the directories. Perhaps
it is better to name the directories after the revision number they
contain and run the processes from there. Perhaps adding a "latest"
symlink for someone to be able to find the dir that should be used.

What you have looks fine for now. I just wanted to see if we could do
this now, but it is certainly the direction I would like to be moving
in.

Thanks,

James
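James's revision-directory suggestion above can be sketched like this. The directory names and the revision number are assumptions for illustration; the key point is that "latest" is repointed with an atomic rename, so readers never see a missing or half-made link.

```shell
set -e
BASE=$(mktemp -d)
REV=145                                # revision being deployed (illustrative)

# Each deploy lands in a directory named after its revision
mkdir -p "$BASE/mirror-service-r$REV"

# Repoint "latest" atomically: create the new link under a temporary
# name, then rename it over the old one (rename(2) is atomic; the -T
# flag to force a plain rename is GNU coreutils)
ln -s "mirror-service-r$REV" "$BASE/latest.tmp"
mv -T "$BASE/latest.tmp" "$BASE/latest"
```

Rollback then becomes repointing "latest" at the previous revision directory, with every deployed revision still on disk.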

Revision history for this message
Michael Hudson-Doyle (mwhudson) wrote :

On Wed, 20 Apr 2011 18:21:33 -0000, James Westby <email address hidden> wrote:
> I was thinking more that we cause the existing process to stop listening
> on the socket, but to continue servicing existing connections and then
> exit. The new process can start listening on the socket with only a very
> short window.

It may be overkill, but the "proper" way to do this is probably with
something like haproxy, right? You install the new version, start it
up, switch haproxy over to the new service and kill the old one when all
requests to it have completed.

Coordinating locking between the old and new services will be a bit of
an issue I guess, but it would be easy enough to put locks on the
filesystem or in a db -- or maybe git does sufficient locking by itself
that we don't have to worry (I have no idea).

Cheers,
mwh
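The filesystem-lock option Michael mentions can be as simple as flock(1) on a well-known path: both the old and the new service instance take an exclusive lock around repository pulls, so only one pulls at a time. The lock path and the echo standing in for the pull are assumptions for this sketch.

```shell
set -e
LOCKFILE=$(mktemp)                     # in reality a fixed, well-known path

# Hold an exclusive lock for the duration of the subshell; a second
# instance running the same code would block at flock until we exit
(
  flock -x 9
  echo "pull repositories" > "$LOCKFILE.ran"   # stand-in for the real work
) 9>"$LOCKFILE"
```

The lock is tied to file descriptor 9, so it is released automatically when the subshell exits, even on error.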

Revision history for this message
James Westby (james-w) wrote :

On Thu, 21 Apr 2011 08:41:13 +1200, Michael Hudson-Doyle <email address hidden> wrote:
> It may be overkill, but the "proper" way to do this is probably with
> something like haproxy, right? You install the new version, start it
> up, switch haproxy over to the new service and kill the old one when all
> requests to it have completed.

Yep, I think that at least closes the window where nothing will be
listening, and means you can use two ports to split the old and new
services.

> Coordinating locking between the old and new services will be a bit of
> an issue I guess, but it would be easy enough to put locks on the
> filesystem or in a db -- or maybe git does sufficient locking by itself
> that we don't have to worry (I have no idea).

Yeah, that was pretty much my thought pattern too :-)

Thanks,

James

145. By Paul Sokolovsky

We must use umask 0022, as git-daemon and the mirror service are run by different users.
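The umask matters here because files written by the mirror service user must remain readable by the separate git-daemon user. A quick demonstration of the difference (throwaway paths, GNU stat):

```shell
set -e
DIR=$(mktemp -d)

umask 0022
touch "$DIR/shared"    # 0644: group/other can read, so git-daemon can serve it

umask 0077
touch "$DIR/private"   # 0600: other users, including git-daemon, are shut out
```

With umask 0077 (a common default for service accounts), every file the mirror service creates would be invisible to git-daemon.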

Revision history for this message
Paul Sokolovsky (pfalcon) wrote :

> Also, I'm a little unsure about the moving of the directories. Perhaps
> it is better to name the directories after the revision number they
> contain and run the processes from there. Perhaps adding a "latest"
> symlink for someone to be able to find the dir that should be used.

Well, that's of course a poor man's release tagging approach ;-). I wanted to be sure there's an easy rollback procedure right from the start. I still need to figure out what the bzr/lp/Linaro practices are regarding (deployment) release tagging, and then we can implement either in-place update (smells like git ;-) ) or the approach you suggest.

Revision history for this message
Paul Sokolovsky (pfalcon) wrote :

I opened lp:768302 to capture the graceful service upgrade feature, linking back to this discussion.

Revision history for this message
Paul Sokolovsky (pfalcon) wrote :

Auto-approve.

review: Approve
Revision history for this message
James Westby (james-w) wrote :

On Thu, 21 Apr 2011 12:36:34 -0000, Paul Sokolovsky <email address hidden> wrote:
> Well, that's of course a poor man's release tagging approach ;-). I
> wanted to be sure there's an easy rollback procedure right from the
> start. I still need to figure out what the bzr/lp/Linaro practices
> are regarding (deployment) release tagging, and then we can implement
> either in-place update (smells like git ;-) ) or the approach you
> suggest.

Well, you can do in-place update with bzr, but that is likely to cause
issues in a scheme with multiple daemons running at once for the
handover.

Thanks,

James

Preview Diff

The diff is not available at this time.
