Merge lp://qastaging/~thomir-deactivatedaccount/core-result-checker/ci-result-checker-config into lp://qastaging/core-result-checker

Proposed by Thomi Richards
Status: Needs review
Proposed branch: lp://qastaging/~thomir-deactivatedaccount/core-result-checker/ci-result-checker-config
Merge into: lp://qastaging/core-result-checker
Diff against target: 96 lines (+24/-9)
2 files modified
core-service.conf (+15/-1)
core_result_checker/__init__.py (+9/-8)
To merge this branch: bzr merge lp://qastaging/~thomir-deactivatedaccount/core-result-checker/ci-result-checker-config
Reviewer: Paul Larson
Status: Needs Fixing
Review via email: mp+259452@code.qastaging.launchpad.net

Description of the change

This branch is a proof-of-concept only. PLEASE DO NOT MERGE THIS YET.

This branch shows how we could use configuration values to make core-result-checker re-usable with different input and output queues.

* Please comment on the Trello card: https://trello.com/c/FuwQzdA4/22-core-result-checker-publish-to-pm-results *

I started by adding the queue it should listen to as a configuration item, then realised that it also needs to know what the test input queue should be (to retry tests). At that point, I thought "screw it, why not make all the queue names configurable?".
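For illustration, the configurable queue names might look something like this in core-service.conf (assuming an INI-style layout; the key and queue names here are hypothetical, not necessarily what the branch uses):

    [core_result_checker]
    # Queue this service consumes test results from.
    input_queue = core.tests.result
    # Queue to re-publish a test to when it should be retried.
    test_input_queue = core.tests.input
    # Queue for messages that have exhausted their retries.
    dead_letter_queue = core.tests.dead_letter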

This is different to the original proposal, which was to have the result and dead letter queues be part of the message payload, but the more I think about it, the more I think that was a bad idea (it was my bad idea), for a few different reasons:

- I think we want multiple result-checker deployments for multiple test systems (one for snappy selftests, one for proposed-migration tests, for example). So while we may want the result checker to be re-usable (although that's up for debate as well), I don't think we want it to be dynamically configurable at runtime - i.e. every payload should be processed in the same way for a given deployment.
- I think that the way a component processes a payload should be constant between payloads, and that making the payload alter where messages get routed will make it much harder to reason about what the system is doing. I'd rather deploy N result-checker components with slightly different configurations than one result checker that does different things based on the payload contents.

There is, however, a drawback: currently we only ever deploy a single component with a single configuration. We'd need to tweak our deployment system so we could say "deploy this result checker here with this config, and over there with this other config", and have it DTRT. I don't think that's possible with the current setup, but I might be wrong here.
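To make the config-driven approach concrete, here is a minimal sketch (using Python's configparser; the section and key names match the hypothetical config above and are assumptions, not the branch's actual code):

    # Sketch: resolve queue names once at startup, so every payload is
    # routed the same way for a given deployment.
    import configparser

    def load_queue_config(path="core-service.conf"):
        """Read the queue names this deployment should use."""
        config = configparser.ConfigParser()
        config.read(path)
        section = config["core_result_checker"]
        return {
            "input": section["input_queue"],
            "retry": section["test_input_queue"],
            "dead_letter": section["dead_letter_queue"],
        }

    queues = load_queue_config()
    # The consumer loop always reads from queues["input"] and publishes
    # retries to queues["retry"]. Contrast this with the payload-based
    # proposal, where each message would carry its own routing
    # information and two payloads could be handled differently by the
    # same deployment.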

I'm writing all this because I'd like agreement from the team that this is the best approach. If people want to see a MP that explores the other solution, I can do that as well.

Revision history for this message
Thomi Richards (thomir-deactivatedaccount) wrote :

I didn't mention this above: one reason not to merge this yet is that it doesn't actually post anything to the results queue yet :D

Revision history for this message
Paul Larson (pwlars) wrote :

Celso has some proposed changes we should discuss, but I don't *think* they would impact this.
A few comments below.
We should also probably take API_VERSION out of constants.py to avoid confusion.

review: Needs Fixing
Revision history for this message
Joe Talbott (joetalbott) wrote :

I think this is a sensible approach. Would we be deploying our legion of result-checkers as a group, or would each type be deployed along with its fellow services?

Revision history for this message
Celso Providelo (cprov) wrote :

Thomi,

As we have been discussing, I don't see much benefit in making the input/output queues configurable when both config & code are equally static/immutable to us, especially while we can't find a way to materialise & relate a config file to a deployment (1:1).

We talked about this in the standup and I've tried to capture most of our ideas in https://trello.com/c/FuwQzdA4/22-1-core-result-checker-publish-to-pm-results.

Revision history for this message
Francis Ginther (fginther) wrote :

If the goal is to have a result-checker for each 'service-set' [1], then I think this actually means deploying each result-checker with its own mojo spec. There is then a 1:1 mapping of spec to configuration, which I feel is one of the primary problems mojo was built to solve. For example, if IS is deploying a blog for two different organizations, they would not use a single mojo spec and just change the config to give the blogs different names; they would set this up as distinct mojo specs. Also, if we were to hand off this selftest service-set to a different team to own, they would need that distinct mojo spec to properly own it.
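As a hypothetical illustration of that 1:1 mapping (directory and spec names invented):

    mojo-specs/
        result-checker-selftest/      # spec carrying config for the selftest service-set
        result-checker-proposed/      # spec carrying config for proposed-migration tests

Each spec would deploy the same result-checker code but supply its own queue names, so ownership of a service-set can be handed off by handing off its spec.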

I am in favor of pulling these queue naming details out into the configuration. IFF the only difference between two deployments is the names of a few variables, and they perform the exact same logic, then it makes sense to keep them cohesive through a common code base. I would even take this a step further and just deploy a single result checker for everything, but the design does not support that (yet).

[1] 'service-set' - I'm trying to come up with a term that allows us to talk about all of the deployed micro-services that make up a usable thing. Suggestions welcome.

Revision history for this message
Para Siva (psivaa) wrote :

The issue that I am seeing in using the same code to deploy a service as part of different service-sets is that we're basically using the uservice the way we'd use a library in two products. Whilst this will reduce the initial work upfront, it runs the risk of having to maintain compatibility for future changes to the solutions that those two service-sets provide: if a need arises to change the service code for one service-set, we'd have to make sure that the change does not break the other service-set.

Keeping them isolated despite having tiny differences will be less of a pain imho.

Revision history for this message
Joe Talbott (joetalbott) wrote :

On Wed, May 20, 2015 at 07:46:58PM -0000, Parameswaran Sivatharman wrote:
> [...]
> Keeping them isolated despite having tiny differences will be less of a pain imho.

We discussed in our second standup having a project repo for such a
service and then having branches with the tiny differences in that repo.
I think it's a good idea: it gives us the ability to keep similar
services up-to-date with regard to things like security updates, while
avoiding having several services all depend on the same code.
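With bzr, that workflow could look roughly like this (branch names invented for illustration):

    # One branch per service-set, each carrying its tiny differences.
    bzr branch lp:core-result-checker result-checker-selftest
    bzr branch lp:core-result-checker result-checker-proposed

    # Later, pull a security fix from trunk into a variant branch.
    cd result-checker-selftest
    bzr merge lp:core-result-checker
    bzr commit -m "Merge security fixes from trunk"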

Revision history for this message
Para Siva (psivaa) wrote :

> We discussed in our second standup having a project repo for such a
> service and then having branches with the tiny differences in that repo.
I'd +1 for that too. Thanks for this.

Unmerged revisions

16. By Thomi Richards

Show proof-of-concept for a re-usable result checker that uses configuration values to determine which queues to use.

