Merge lp://qastaging/~justin-fathomdb/nova/strongly-typed-image-model into lp://qastaging/~hudson-openstack/nova/trunk
Status: Work in progress

Proposed branch: lp://qastaging/~justin-fathomdb/nova/strongly-typed-image-model
Merge into: lp://qastaging/~hudson-openstack/nova/trunk

Diff against target: 623 lines (+233/-119), 9 files modified

- nova/api/ec2/cloud.py (+15/-11)
- nova/api/openstack/images.py (+54/-48)
- nova/api/openstack/servers.py (+9/-8)
- nova/image/glance.py (+20/-4)
- nova/image/local.py (+11/-3)
- nova/image/models.py (+89/-0)
- nova/image/s3.py (+19/-15)
- nova/image/service.py (+5/-24)
- nova/tests/api/openstack/test_images.py (+11/-6)

To merge this branch: bzr merge lp://qastaging/~justin-fathomdb/nova/strongly-typed-image-model
Related bugs: (none)
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Devin Carlen | community | | Disapprove |
Description of the change
**Please check it out, but don't actually merge into nova (yet). I'm trying to gather feedback on the idea first! Not sure how best to do that...**

I've implemented a proof-of-concept of using a strongly-typed model for Images, to try to address the seemingly endless bugs caused by our use of dictionaries: never being sure where a value is found, what format it's in, whether it's optional, whether it's an integer... (you get the idea!)
The idea is to rely on the Python runtime to throw errors, and ideally for tools like PyLint to be able to tell us when we've made a mistake before that. This is how our database models work, and we have far fewer of these bugs there (do we have any?)
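To illustrate the contrast (a minimal sketch; the attribute names here are illustrative, not necessarily those in the branch's nova/image/models.py): with a dictionary, a misspelled or missing key only fails at the point of lookup, often far from the bug, while a typed model fails immediately and is checkable statically.

```python
class Image(object):
    """A minimal strongly-typed image model: attribute access fails
    loudly instead of silently propagating a missing or garbled value."""

    def __init__(self, image_id, name=None, state=None, is_public=False):
        self.id = image_id          # required identifier
        self.name = name            # optional display name
        self.state = state          # e.g. 'available' or 'pending'
        self.is_public = is_public  # always a bool, never a string flag


# Dictionary style: the key spelling and value format are unchecked,
# so 'isPublic' holding the *string* 'true' goes unnoticed here.
image_dict = {'imageId': 'ami-1234', 'isPublic': 'true'}

# Typed style: a typo like image.imageid raises AttributeError right
# away, and tools like PyLint can flag it before the code even runs.
image = Image('ami-1234', name='test', is_public=True)
assert image.is_public is True
```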
This is just a very minimal implementation, sufficient to let the unit tests pass. However, just reworking the code into this more structured form has already surfaced what look like several existing bugs.

There's a little hack in there which lets both PyLint and Eclipse PyDev pick up on the types, so they do argument verification and auto-complete. If anyone knows a cleaner way to implement this, please let me know.
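The proposal doesn't show the hack itself, but one common pattern that achieves this effect (a guess at the approach; `_cast_image` is a hypothetical helper, not a function from the branch) is a runtime `isinstance` check that doubles as a type hint for static tools:

```python
class Image(object):
    """Minimal stand-in for the proposed typed model."""
    def __init__(self, image_id):
        self.id = image_id


def _cast_image(obj):
    """Validate at runtime that obj is an Image and return it.

    The isinstance assertion serves two purposes: it fails fast on a
    wrong type, and it tells PyLint / Eclipse PyDev the concrete type
    of the return value, enabling argument verification and
    auto-completion at every call site."""
    assert isinstance(obj, Image), 'expected an Image, got %r' % (obj,)
    return obj


image = _cast_image(Image('ami-1234'))
# From here on, static tools treat 'image' as an Image and can flag
# typos such as image.imageid before the code ever runs.
```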
If people agree this is a good idea, I'll probably rebase it off Vishy's bigger image patch to avoid conflicts and work from there.
Our usual model for gathering feedback on an idea is to discuss it at a design summit... I realize those only occur every six months, so that isn't applicable to things that _need_ to land quickly, but for everything else that's the way to go (faster consensus than mailing lists).

The second-best model (for things that need to land quickly) is to use the mailing list... Adding one more review to the review queue to get feedback on a proof-of-concept sounds a bit counter-productive at this point in the cycle, and more people read the mailing lists than the merge-proposal queue.
For this idea in particular, that sounds like a good discussion to have at a summit in Santa Clara, if you'll be there.