bcache is a block layer cache in the Linux kernel (hence the name, block cache). It allows one or more fast storage devices, such as an SSD, to act as a cache for one or more slower drives, effectively creating a hybrid drive. Sounds like just the right tool for the job.
bcache has a few interesting features worth noting:
- A single cache device can be used to cache multiple devices.
- Recovers from unclean shutdown.
- Multiple caching modes: writethrough, writeback and writearound.
- Designed for SSDs: it never performs small random writes, turning them into sequential writes instead.
- It was merged into the Linux kernel mainline in kernel version 3.10.
Installation and configuration
On Ubuntu this is really straightforward:
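The userspace tools ship in the `bcache-tools` package (package name assumed here; kernel support is built into Ubuntu's stock kernels since 3.10):

```shell
# install the bcache userspace tools
sudo apt-get update
sudo apt-get install -y bcache-tools
```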
If you remember, this is what our disk structure looked like:
We first need to make sure that our backing device /dev/xvdd (EBS) and our cache device /dev/xvdc (SSD) are both formatted with ext4:
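A sketch of that step, using the device names above:

```shell
# format both devices with ext4
sudo mkfs.ext4 /dev/xvdd   # backing device (EBS)
sudo mkfs.ext4 /dev/xvdc   # cache device (ephemeral SSD)
```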
Then we have to remove any non-bcache superblocks from each device, just in case:
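This can be done with `wipefs`, which clears existing filesystem signatures so that `make-bcache` can claim the devices:

```shell
# wipe all existing superblocks/signatures from both devices
sudo wipefs -a /dev/xvdd
sudo wipefs -a /dev/xvdc
```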
Next we create our bcache devices, using -B for the backing device and -C for the cache device, as follows:
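With our devices this is a single command:

```shell
# register /dev/xvdd as the backing device and /dev/xvdc as the cache device
sudo make-bcache -B /dev/xvdd -C /dev/xvdc
```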
bcache-tools now ships udev rules, and bcache devices are known to the kernel immediately on systems such as Ubuntu. The devices show up as /dev/bcache<N>, as well as (with udev) /dev/bcache/by-uuid/<uuid> and /dev/bcache/by-label/<label>.
Next we format the new device with ext4 as follows:
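Assuming the new combined device came up as /dev/bcache0:

```shell
# create a filesystem on the new bcache device
sudo mkfs.ext4 /dev/bcache0
```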
and mount it:
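For example (the mount point /mnt/bcache is an assumption, not necessarily the one used in this series):

```shell
sudo mkdir -p /mnt/bcache
sudo mount /dev/bcache0 /mnt/bcache
```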
And finally we need to attach the cache device to the backing device. This can be done by copying the cache set UUID from /sys/fs/bcache/ and running the following command:
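Something like the following, where the UUID placeholder is the directory name that appeared under /sys/fs/bcache/:

```shell
# attach the cache set to the backing device
# (tee is used because plain "sudo echo >" would not apply sudo to the redirect)
echo <cache-set-uuid> | sudo tee /sys/block/bcache0/bcache/attach
```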
Replace the UUID with your own.
By default, bcache uses writethrough caching. With writethrough, only reads are cached and writes are written directly to the backing drive:
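You can confirm the current mode via sysfs; the active mode is shown in brackets:

```shell
# e.g. prints: [writethrough] writeback writearound none
cat /sys/block/bcache0/bcache/cache_mode
```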
We can get some serious improvements by enabling writeback caching:
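Switching modes is another sysfs write:

```shell
# enable writeback caching (writes are acknowledged from the SSD,
# then flushed to the backing device in the background)
echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode
```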
Caution: using writeback mode is not as reliable as writethrough.
By default, bcache doesn’t cache everything. It tries to skip sequential IO, because you really want to be caching the random IO: if you copy a 10 gigabyte file, you probably don’t want it pushing 10 gigabytes of randomly accessed data out of your cache. But since we will be benchmarking reads from cache, we want to disable that behaviour:
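Setting the sequential cutoff to zero disables the bypass, so all IO goes through the cache:

```shell
# 0 disables the sequential-IO bypass entirely
echo 0 | sudo tee /sys/block/bcache0/bcache/sequential_cutoff
```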
Now we are ready for some testing. We will do two tests, one with writethrough and another with writeback. Note that we will perform the tests with a warm cache.
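As an illustration, a random-read benchmark of the kind run here might look like this with fio (the job parameters and mount point are assumptions, not the exact ones used in this series):

```shell
# 4k random reads against the bcache-backed filesystem, direct IO
fio --name=randread --directory=/mnt/bcache --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --size=1G --runtime=60 --time_based
```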
- Sequential and random read results after warming the cache is similar to native SSD which is fantastic.
- Sequential and random write results when using writethrough caching is exactly the same as EBS which is expected.
- When turning on writeback caching we see double the performance of EBS, but once again nowhere near the performance of the SSD. I must be doing something wrong here, because the bcache performance testing is reporting faster random write speeds than the SSD on its own.
Nonetheless, these are improved results compared to part 2. It is important to note that the cache device can be detached as required. Also note that under writeback caching there is a potential for data loss if the SSD ephemeral disk is lost before the data is flushed to the backing device.
Next is part 4, where we will look at ZFS.