Swift Container sharding – locked db POC – Benchmarking observations
The latest POC is at the benchmarking stage, and for the most part it's going well. I have set up 2 clusters in the cloud; they aren't huge, but each has 2 proxies and 4 storage nodes. A benchmarking run involves pointing an ssbench master at each cluster and putting both under load. In both cases we only use 1 container, and on one cluster this container will have sharding turned on.
So far it's looking pretty good. I've done many runs, usually finding a bug at scale, but recently I've done two runs of the latest revision, alternating which cluster is the sharded one (that is, the cluster whose benchmark container has sharding turned on). Below is the Grafana statsd output of the second run. Note that cluster 2 is the sharded cluster in this run:
Looking at the picture there are a few observations we can make. The peaks in 'Container PUT Latency – Cluster 2' correspond to the moments a container is sharded (in this case, first the one root container and then its shards sharding in turn).
As I mentioned earlier, ssbench is running the benchmark, and the benchmark is very write (PUT) heavy. Here is the sharding scenario file:
{ "name": "Sharding scenario", "sizes": [{ "name": "zero", "size_min": 0, "size_max": 0 }], "initial_files": { "zero": 100 }, "run_seconds": 86400, "crud_profile": [200, 50, 0, 5], "user_count": 2, "container_base": "shardme", "container_count": 1, "container_concurrency": 1, "container_put_headers": { "X-Container-Sharding": "on" } }
The only difference between this and the non-sharding scenario is that the latter doesn't set the X-Container-Sharding metadata on the initial container PUT. The crud profile shows that we are heavy on PUTs and GETs. Because jobs are randomised I don't expect exactly the same numbers when it comes to object counts on the servers, but there is a rather large discrepancy between the object counts on the two clusters:
Cluster 1:
HTTP/1.1 204 No Content
Content-Length: 0
X-Container-Object-Count: 11291190
Accept-Ranges: bytes
X-Storage-Policy: gold
X-Container-Bytes-Used: 0
X-Timestamp: 1482290574.52856
Content-Type: text/plain; charset=utf-8
X-Trans-Id: tx9dd499df28304b2d920aa-00585b2d3e
Date: Thu, 22 Dec 2016 01:32:46 GMT
Cluster 2:
Content-Length: 0
X-Container-Object-Count: 6909895
X-Container-Sharding: True
X-Storage-Policy: gold
X-Container-Bytes-Used: 0
X-Timestamp: 1482290575.94012
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes
X-Trans-Id: txba7b23743e0d45a68edb8-00585b2d61
Date: Thu, 22 Dec 2016 01:33:27 GMT
So cluster 1 has about 11 million objects and cluster 2 about 7 million. That's quite a difference, which gets me wondering: what's causing such a large difference in PUT throughput?
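For the record, those numbers come from a HEAD on the benchmark container of each cluster. Here's a minimal python-swiftclient sketch of the same check; the auth URLs, credentials and container name are placeholders for my test setup:

from swiftclient import client as swift_client

# Placeholder auth details for the two test clusters.
CLUSTERS = {
    'cluster1': ('http://cluster1-proxy:8080/auth/v1.0', 'test:tester', 'testing'),
    'cluster2': ('http://cluster2-proxy:8080/auth/v1.0', 'test:tester', 'testing'),
}

for name, (authurl, user, key) in sorted(CLUSTERS.items()):
    conn = swift_client.Connection(authurl=authurl, user=user, key=key)
    # 'shardme_000000' stands in for whatever name ssbench gave the container.
    headers = conn.head_container('shardme_000000')
    print('%s: %s objects' % (name, headers['x-container-object-count']))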
The only real difference in the proxy object PUT path between sharded and unsharded is finding the shard container the object server will need to update; in the sharded case another request is made to the root container asking for the pivot (if there is one). Is this extra request really causing an issue? I do note the object-updater (last graph in the image) is also working harder, as the number of successes during the benchmark is much higher, meaning more container updates are falling into async pendings.
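To illustrate the difference, here's a rough sketch of that decision. The names here (find_update_container, ShardRange and friends) are made up for the illustration and aren't the actual POC code; the point is that the sharded path has to learn the pivots before it knows where to send the container update:

import bisect

class ShardRange(object):
    """Stand-in for a pivot/shard entry returned by the root container."""
    def __init__(self, upper, container):
        self.upper = upper          # highest object name this shard covers
        self.container = container  # e.g. a made-up 'shardme-shard-1'

def find_update_container(sharded, root_container, get_shard_ranges, obj_name):
    if not sharded:
        # Unsharded: the object PUT just updates the root container directly.
        return root_container
    # Sharded: this call represents the extra GET to the root container
    # asking for the pivots -- the request suspected of slowing PUTs down.
    shard_ranges = get_shard_ranges(root_container)
    idx = bisect.bisect_left([sr.upper for sr in shard_ranges], obj_name)
    if idx < len(shard_ranges):
        return shard_ranges[idx].container
    return root_container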
Maybe the extra updater work is because of the extra load this additional request puts on the container server?
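One way to sanity-check that is to watch the async pendings queue up on the object servers directly, rather than inferring it from the updater stats. A quick sketch that counts them on a storage node, assuming the usual /srv/node devices root:

import os

DEVICES = '/srv/node'  # usual devices root; adjust for your layout

total = 0
for device in os.listdir(DEVICES):
    device_path = os.path.join(DEVICES, device)
    if not os.path.isdir(device_path):
        continue
    for entry in os.listdir(device_path):
        # async_pending for policy 0, async_pending-<index> for other policies
        if entry.startswith('async_pending'):
            for _dirpath, _dirnames, filenames in os.walk(
                    os.path.join(device_path, entry)):
                total += len(filenames)

print('async pendings queued: %d' % total)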
To test this theory, I can push the sharder harder and force container updates to go straight into the root container. This would remove the extra request, but force more traffic onto the root container (which we are kind of doing anyway). We should still see benefits because the root container would be much smaller (since it's sharded) than its non-sharded counterpart, and it will let us see whether this lookup is what's causing the slower throughput.
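In terms of the earlier sketch, the experiment is just to short-circuit the lookup and always hand the root container back, leaving the sharder to move the rows out afterwards:

def find_update_container_root_only(sharded, root_container, get_shard_ranges,
                                    obj_name):
    # Experimental path: skip the pivot lookup entirely. Object PUTs lose the
    # extra request, the root container takes all the update traffic, and the
    # sharder has to work harder to redistribute the rows into shards.
    return root_container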
Update: I'm currently running a new scenario which is all PUTs, so let's see how that fares. Will keep you posted.