Merge lp://qastaging/~raharper/charms/precise/swift-proxy/add-proxy-node-timeout into lp://qastaging/~openstack-charmers-archive/charms/precise/swift-proxy/trunk
Status: Superseded
Proposed branch: | lp://qastaging/~raharper/charms/precise/swift-proxy/add-proxy-node-timeout |
Merge into: | lp://qastaging/~openstack-charmers-archive/charms/precise/swift-proxy/trunk |
Diff against target: |
119 lines (+27/-4), 8 files modified:
  config.yaml (+13/-0)
  hooks/swift_context.py (+3/-1)
  revision (+1/-1)
  templates/essex/proxy-server.conf (+2/-0)
  templates/grizzly/proxy-server.conf (+2/-0)
  templates/havana/proxy-server.conf (+2/-0)
  templates/icehouse/proxy-server.conf (+2/-0)
  unit_tests/test_templates.py (+2/-2)
To merge this branch: | bzr merge lp://qastaging/~raharper/charms/precise/swift-proxy/add-proxy-node-timeout |
Related bugs: |
Reviewer: James Page (status: Pending)
Review via email:
This proposal has been superseded by a proposal from 2014-06-18.
Description of the change
Deploying swift-proxy on physical systems with slow disks and putting large (>1 GB) objects would fail with a 503 return code, and the swift-proxy logs showed ChunkWrite timeouts. The default timeout for handing off to other services (swift-storage) is 10 seconds. When the swift-storage host was flushing data out to disk, it could stall for longer than 10 seconds, failing the operation.
This branch updates swift-proxy to expose new config options:
node-timeout and recoverable-
http://
The default values for the config options were raised to values that work reliably on slower disks: 60 seconds and 30 seconds respectively.
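As a sketch of what the template change looks like, the proxy-server.conf templates gain timeout lines in the proxy app section. The exact variable and section names below are illustrative assumptions, not copied from the branch diff:

```ini
# Hypothetical excerpt of a templates/*/proxy-server.conf template after this
# change; the Jinja-style variable names are illustrative only.
[app:proxy-server]
use = egg:swift#proxy
# Seconds the proxy waits on a storage node before giving up on the request
node_timeout = {{ node_timeout }}
# Seconds the proxy waits on a node when it can still recover by retrying
# the request against another replica
recoverable_node_timeout = {{ recoverable_node_timeout }}
```

Operators could then tune these per deployment through the charm config, e.g. `juju set swift-proxy node-timeout=60`.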
Tested on precise-icehouse.