A post like this carries a heavy risk of coming across as one long rant. So, some friendly caveats:
- I wrote the first draft of this a bit after midnight, in the middle of a week where we weathered three ES outages across two different clusters, none of which were the fault of ES, but all of which could have been much less painful.
- We're currently running ES 1.3.4. We've been using ES since 0.17, and it has gotten immensely better along the way. We couldn't build many of our features without it.
- I know many of these problems are being actively worked on. I'm publishing this in the hope of providing more context for the series of GitHub issues I've submitted.
With that out of the way, I think ES has some significant problems when it comes to operating and iterating on a large cluster with terabytes of data. I think these problems contribute to misunderstandings about ES and end up giving developers and sys admins a bad experience. I think many of the fixes (though not all) require rethinking default configurations rather than implementing complex algorithms.
Shard Recovery When a Node Fails
This sort of complaint about ES seems to come up regularly. The confusion in the original issue is around how the "elastic" part of ES moves shards around. I'll quote:
- Cluster has 3 data nodes, A, B, and C. The index has a replica count of 1…
- A crashes. B takes over as master, and then starts transferring data to C as a new replica.
- B crashes. C is now master with an impartial dataset.
- There is a write to the index.
- A and B finally reboot, and they are told that they are now stale. Both A and B delete their local data.
- all the data A and B had which C never got is lost forever.
Two of three nodes failed back to back, and there were only two copies of the data. The ES devs correctly pointed out that you should expect to lose data in that situation. However, the user lost almost all of the existing data because the shard was moved from A to C. Almost all of the data still existed on A's disk, but it was deleted.
Elasticsearch is being too elastic. It's great to reshuffle data when nodes get added to a cluster. When a node fails, though, reshuffling data is almost always a terrible idea. Let's go through the timeline we typically see when an ES node fails in production:
- Cluster is happily processing millions of queries and index ops an hour…
- A node fails and drops out of the cluster
- The master allocates all the shards that were on the failed node to other nodes and starts recovery on them.
- Meanwhile: alerts are fired to ops, pagers go off, people start fixing things.
- The shards being recovered start pulling terabytes of data over 1Gbit connections. Recovering all shards will take hours.
- Ops fixes whatever minor hardware problem occurred and the down node rejoins the cluster.
- All of the data on the down node gets tossed away. That node has nothing to do.
- Eventually all of the data is recovered.
- Slowly, shards are relocated back onto the previously down node.
- About 10-16 hours later the cluster has finally recovered.
I’ve seen this happen at least a half dozen times. A single node going down triggers a half day event. Many nodes going down can take a day or two to recover.
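While one of these multi-hour recoveries is in flight, about the only levers ES gives you are the dynamic recovery settings. Here is a minimal sketch of the kind of thing you can do, in Python with the requests library, against a hypothetical cluster at localhost:9200; the values are purely illustrative, not recommendations:

```python
# Sketch: loosen the recovery throttles while a big recovery is in flight.
# Assumes an ES 1.x cluster at localhost:9200; values are illustrative only.
import requests

ES = "http://localhost:9200"

resp = requests.put(ES + "/_cluster/settings", json={
    "transient": {
        # Per-node cap on recovery traffic; the default is quite conservative.
        "indices.recovery.max_bytes_per_sec": "200mb",
        # How many shards a node will recover concurrently.
        "cluster.routing.allocation.node_concurrent_recoveries": 4,
    }
})
print(resp.json())
```

Raising these does shorten the window, but it also makes the network strain described below that much worse, so it cuts both ways.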
There are at least a few ways things can go very wrong because of the above flow:
- Because it takes days to recover all the shards of your data, if there is another hardware failure (or human error) during that window then you are at risk of not having enough redundancy for your data. As it currently stands, eventually we'll catch the triple event within a single week that causes us to lose data.
- Moving that much data across the network puts a huge strain on your infrastructure. You can easily cause other problems when something is already going wrong. When a router has gone down and taken part of your ES cluster with it, that is exactly the wrong time to start moving lots of data around and adding to your network load.
- When a node fails, its shards get allocated elsewhere. Very often this means that if enough nodes fail at once, you can end up running out of disk space. When I've got 99 shards and very little disk space on a node, why the %&#^ are you trying to put 40 more on it? I've watched in slow-motion horror as my disk space shrinks while I try to delete shards to force them onto other nodes, with very little success.
- Sometimes you won’t have enough replicas – oops, you made a mistake – but losing data does not need to be a binary event. Losing 5% of your data is not the same as losing 95%. Yes, you need to reindex in either case, but for many search applications you can work off a slightly stale index and users will never notice while you rebuild a new one.
- Because a down node triggers moving shards around, you can't just do a restart of your cluster, you need a procedure to restart your cluster. In our case that's evolved into a 441-line bash script!
I could complain about this problem for a while, but you get the idea.
Why don’t we just recover the local data? By default ES should not move shards around when a node goes down. If something is wrong with the cluster, why take actions that will make the situation worse? I’ve slowly come to the conclusion that moving data around is the wrong approach.
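To make "don't move shards around" concrete: the main primitive ES gives you today is the cluster-wide allocation toggle, so any restart procedure ends up being some variation on the disable/restart/re-enable dance. A minimal sketch of that dance (Python with requests, hypothetical cluster at localhost:9200; a real procedure needs far more error handling than this):

```python
# Sketch: the allocation on/off dance around a planned node restart.
# Assumes ES 1.x at localhost:9200; node restarts themselves happen out of band.
import requests

ES = "http://localhost:9200"

def set_allocation(mode):
    """mode: 'none' to stop shard allocation, 'all' to turn it back on."""
    requests.put(ES + "/_cluster/settings", json={
        "transient": {"cluster.routing.allocation.enable": mode}
    })

def wait_for_status(status):
    # Block until the cluster reaches the requested health level (or times out).
    requests.get(ES + "/_cluster/health",
                 params={"wait_for_status": status, "timeout": "30m"})

set_allocation("none")   # keep the master from reallocating the node's shards
# ... restart the node here ...
wait_for_status("yellow")
set_allocation("all")    # let shards be allocated back to the returning node
wait_for_status("green")
```

That keeps the cluster from shuffling terabytes around for a node that is coming right back, but as the next section shows, the replicas on that node still get resynced from their primaries when it rejoins.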
Cluster Restart Times
Actual IRC transcript between me and our head of systems:
gibrown – restarting ES cluster
gibrown – will take three days, cluster will be yellow during that time
bazza – maybe we should make `deploy wpcom` take 3 days 🙂
bazza – then we could call ourselves Elastic WordPress!
Ouch… that stung. At WordPress.com we do about 100-200 deploys a day, and each one updates thousands of servers in less than 20 seconds. We optimize for iteration speed and fixing problems quickly, because performance problems in particular are hard to predict. Developing against Elasticsearch, in comparison, has been excruciatingly slow.
We recently did an upgrade from 1.2.1 to 1.3.4. It took 6 or 7 days to complete because ES replicas do not stay in sync with each other. As real-time indexing continues, the replicas each do their own indexing, so when a node gets restarted its shard recovery requires moving all of the modified segments from another node. After bulk indexing, this means moving your entire data set across the network once for each replica you have.
Recently, we've been expanding our indexing from the 800 million docs we had to about 5 billion. I am not exaggerating when I say that we've spent at least two months out of the last twelve waiting for ES to restart after upgrading versions, changing our indexing, or trying to improve our configuration.
Resyncing across the network completely kills our productivity and how fast we can iterate. I'd go so far as to say it is the fundamental limit on our using ES in more applications. It feels like this syncing should at least partially happen in the background whenever the cluster is green.
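About the only visibility into how far along one of these multi-day resyncs is comes from the recovery and health APIs. A tiny sketch, under the same assumptions as above (ES 1.x, localhost:9200, Python with requests):

```python
# Sketch: poll recovery progress so you at least know how many shards are still
# copying and roughly how far along each one is. Assumes ES 1.x at localhost:9200.
import requests

ES = "http://localhost:9200"

# One row per shard copy: stage, source node, target node, and progress so far.
print(requests.get(ES + "/_cat/recovery?v").text)

# Cluster-level view: how many shards are still initializing vs. relocating.
print(requests.get(ES + "/_cluster/health").json())
```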
Field Data and Heap
So you want to sort 1000 of your 5 billion documents by date? Sure we can do that, just hold on a second while I load all 5 billion dates into memory…
I didn't fully appreciate this limitation until just recently. Field data has been a thorn in ES developers' sides from the beginning. It is the single easiest way to crash your cluster, and I bet everyone has done it. The new-ish circuit breakers help a lot, but they still don't let me do what I'm trying to do. I don't need 5 billion dates in memory to sort 1000 docs; I only need 1000.
I shouldn’t complain too loudly about this, because there is actually a solution! doc_values will store your fields on disk so that you don’t need to load all values into memory, only those that you need. They are a little slower, but slower is far better than non-functional. I think they should be the default.
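For anyone wanting to try them, doc_values in 1.x is just a mapping option, though it only applies to newly written segments, so existing data has to be reindexed. A rough sketch, with made-up index, type, and field names:

```python
# Sketch: map a date field with doc_values so sorting reads values from disk
# instead of loading them all onto the heap. Index/type/field names are made up.
import requests

ES = "http://localhost:9200"

requests.put(ES + "/posts-v2", json={
    "mappings": {
        "post": {
            "properties": {
                "date": {"type": "date", "doc_values": True}
            }
        }
    }
})

# Sorting the top 1000 hits by date no longer drags every date into field data.
query = {"size": 1000, "sort": [{"date": "desc"}], "query": {"match_all": {}}}
resp = requests.post(ES + "/posts-v2/post/_search", json=query)
print(resp.json()["hits"]["total"])
```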
Onward
Elasticsearch has evolved a lot over the past few years, and I think it's time to rethink a number of default settings to improve the configuration when dealing with large amounts of data. A lot of this comes down to thinking about the user experience of Elasticsearch. Clearly a great deal of thought has been put into the experience of a new user just starting out; that's what makes ES so addictive. I think some more thought could be put into the experience of the sys admins and developers who are iterating on large data sets.
Awesome write up! We have our fair share of big data issues with multi-terabyte clusters too.
Even 10-gig NICs take time to spew the data back and forth. Memory usage is one of our current issues: heap, garbage collection, and how performance is impacted.
How about ES not quite balancing evenly? Last time I tried the more aggressive balance settings they didn't do much. I'm ready to try manually balancing shards myself to even it out.
How’s 1.3.4 compare to 1.1.x?
When are you looking to try 1.4.x? Lots of tasty checksum verification at the Lucene level now.
As mentioned above, we had a lot of these problems recently due to sorting on field data. doc_values helped immensely. We also found a lot of cases where we needed to mark filters as not cacheable because they had high cardinality. blog_id, for instance, has almost 80 million possible values, so it's a waste to cache filters on it.
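For reference, marking a filter as not cacheable in 1.x amounts to setting `_cache: false` on the filter itself. Something like this (filtered-query syntax, field names and values made up):

```python
# Rough example: a term filter on a high-cardinality field (blog_id) with the
# filter cache explicitly disabled. Field names and values are made up.
# This dict would be sent as the body of a normal _search request.
query = {
    "query": {
        "filtered": {
            "query": {"match": {"content": "elasticsearch"}},
            "filter": {
                "term": {
                    "blog_id": 12345678,
                    "_cache": False  # one-off filter; don't waste cache space on it
                }
            }
        }
    }
}
```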
We use a very large number of shards. Our main cluster has 1400 primaries and 2800 replicas on 42 data nodes, which ensures that there is enough randomness. We still see some shards end up much larger than others, because some blogs have many more posts than others and all of a blog's posts go to a single shard.
Much, much better. You should upgrade. We actually skipped from 0.90 to 1.2 and have since upgraded to 1.3. 1.4 is still a little too bleeding edge for us, but I agree there are some great new features and performance improvements.
I've added a comment related to some of your ideas to an issue I logged about near-instantaneous recovery in Elasticsearch:
https://github.com/elasticsearch/elasticsearch/issues/6069
Ha, nice. Ya, I had seen that issue before. Mine (#7288) links to it. 🙂