
How to make DRBD compatible with the Linux kernel

The Linux kernel’s interface towards device drivers is not set in stone. It evolves as Linux itself evolves, sometimes driven by hardware improvements, sometimes by general evolution of the code base. For in-tree drivers this is not a big issue, since they are changed along with the interfaces; such modifications are called “tree-wide changes”.

DRBD development happens out-of-tree before the code gets sent upstream. We at LINBIT need to track these tree-wide changes and follow them in the DRBD device driver. But not all of our users run the same kernel.

Some stick strictly to the kernel shipped with their distribution, running a kernel that is years behind Linus’ version of Linux.

That creates a problem: DRBD should be compatible with both years-old, carefully maintained “vendor kernels” and the latest and greatest Linus kernel.

In order to do that we have a kernel compatibility layer in DRBD. It contains two main parts:

  1. Detecting the capabilities of the kernel we want to build against. The background is that a “vendor kernel” is not just a random old Linus kernel: it starts as some release of the vanilla kernel, and the vendor then cherry-picks the upstream changes and fixes it deems relevant.
  2. The compatibility layer itself. Up to DRBD-9.0.19 this was one huge file containing many #ifdefs. It became a maintenance nightmare: hard to extend, hard to understand and debug, and hard to remove old compat code. Everything was ugly.

Coccinelle is French for Ladybug

The Coccinelle project from INRIA provides a tool to apply semantic patches to source code, or to expand the semantic patches into conventional patches. A few of us DRBD developers practically fell in love with that tool. It allows us to express how code that is compatible with the upstream kernel needs to be changed in order to be compatible with older versions of the kernel.

This allows our DRBD upstream code to be clean Linux upstream code, containing no compatibility hacks.

It also allows us to automatically transform DRBD to be compatible with arbitrary old kernels or vendor kernels. The result, after the transformation, is clean C code without confusing macros and #ifs. It is wonderful.
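To illustrate the idea, here is a sketch of what such a semantic patch could look like. This is a simplified example written for this article, not one of DRBD’s actual compat rules: it rewrites calls to sock_create_kern() for old kernels that lack the five-parameter variant (the newer kernels take a struct net pointer as the first argument).

```cocci
@@
expression net, family, type, proto, res;
@@
- sock_create_kern(net, family, type, proto, res)
+ sock_create_kern(family, type, proto, res)
```

spatch matches every call site regardless of variable names or formatting, which is exactly what a textual patch cannot do.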

The new kernel compatibility mechanism:

  1. Detect kernel capabilities (as before)
  2. Create a compat patch using spatch (from Coccinelle)
  3. Apply the compat patch and compile DRBD

Where there is light there must be shadow

The spatch tool is not available on all Linux distributions. For somewhat older kernels we even require a very recent version of spatch, which is even less widely available. The researchers at INRIA write the tool in OCaml, a programming language that is right for them and the challenge, but not familiar to many in the open source community.

This complex build dependency makes it harder for community members to build drbd-9.0.20 and higher compared to how it was before.

The shortcut through the maze

For a number of well-known vendor kernels (RHEL/CentOS, Ubuntu LTS, SLES, Oracle Linux, Amazon Linux) we include the complete compat patches in the distribution source tar.gz. During the DRBD build process, all the COMPAT tests are executed and a hash value is calculated from their results.

If it finds a pre-generated compat.patch for that hash value, the build process can continue without a call to spatch! Complex build dependency avoided!
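In shell terms, the shortcut works roughly like this. This is a simplified sketch; the file names and the exact hashing scheme are assumptions for illustration, not DRBD’s actual build code:

```shell
# Sketch: fingerprint the COMPAT test results, then look for a
# matching pre-generated patch in the cocci_cache directory.
results="compat_results.txt"
printf 'sock_create_kern_has_five_parameters=yes\n' >  "$results"
printf 'submit_bio_has_2_params=no\n'               >> "$results"

# The fingerprint depends only on the test results, so two machines
# with the same kernel capabilities compute the same hash.
hash=$(md5sum "$results" | cut -d' ' -f1)
patch="drbd-kernel-compat/cocci_cache/$hash/compat.patch"

if [ -f "$patch" ]; then
    echo "using pre-generated $patch, no spatch needed"
else
    echo "no pre-generated patch for $hash, spatch required"
fi
```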

The hard route through the maze

When you are building from a git checkout, or for a kernel for which we did not include a pre-generated compat.patch, you need spatch.

If necessary you can run step 2 (using spatch) on a different machine than steps 1 (testing kernel capabilities) and 3 (compiling the drbd kernel module).

Use ‘make’ to start the compilation process. If it fails just after this output:


 COMPAT  sock_create_kern_has_five_parameters
 COMPAT  sock_ops_returns_addr_len
 COMPAT  submit_bio_has_2_params
 CHK     /home/phil/src/drbd09/drbd/compat.4.15.18.h
 UPD     /home/phil/src/drbd09/drbd/compat.4.15.18.h
 CHK     /home/phil/src/drbd09/drbd/compat.h
 UPD     /home/phil/src/drbd09/drbd/compat.h
 GENPATCHNAMES   4.15.0-48-generic
 SPATCH   27e10079afbff16b2b82fae9f7dbe676


Please take note of the hash value after “SPATCH”. It is like a fingerprint of the results of the countless “COMPAT” tests that were executed just before.

Then you need to copy the results of the COMPAT tests to a machine/VM/container that has the same DRBD source directory and a recent spatch.


rsync -rv drbd/drbd-kernel-compat/cocci_cache/27e10079afbff16b2b82fae9f7dbe676 \
 user@host:src/drbd-9.0.20/drbd/drbd-kernel-compat/cocci_cache/


Then you run the spatch part of the build process there:


ssh user@host "make -C src/drbd-9.0.20/drbd compat"


After that you copy the resulting compat.patch back:


rsync -rv \
 user@host:src/drbd-9.0.20/drbd/drbd-kernel-compat/cocci_cache/ \
 drbd/drbd-kernel-compat/cocci_cache/

Call ‘make’ to restart the build process. If you did everything right, it will find the generated compat.patch and finish the compilation process.

Get a Ladybug

If you’d like a spatch that is recent enough for building the DRBD driver, use the docker container we published on Docker Hub:


docker pull linbit/coccinelle


Then put the following shell script under the name ‘spatch’ into your $PATH.



#!/bin/sh
exec docker run -it --rm -v "$PWD:$PWD" -w "$PWD" linbit/coccinelle:latest spatch "$@"
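For example, the wrapper could be installed like this. The target directory is just an assumption for illustration; any directory on your $PATH works:

```shell
# Install a docker-based spatch wrapper into a directory on $PATH
bindir="$HOME/bin"
mkdir -p "$bindir"
cat > "$bindir/spatch" <<'EOF'
#!/bin/sh
# Run spatch from the linbit/coccinelle container, with the current
# working directory mounted so it can read and write the source tree.
exec docker run -it --rm -v "$PWD:$PWD" -w "$PWD" linbit/coccinelle:latest spatch "$@"
EOF
chmod +x "$bindir/spatch"
```

The DRBD build then picks up this wrapper transparently, as if a native spatch were installed.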

DRBD compatible with the Linux kernel

All of this makes the code more readable, easier to understand, and less likely to contain bugs. And having the DRBD code free of backward-compatibility clutter is an important milestone on the path to getting DRBD-9 into Linus’ vanilla kernel, replacing drbd-8.4 there.


Philipp Reisner
Philipp Reisner is founder and CEO of LINBIT in Vienna/Austria. His professional career has been dominated by developing DRBD, a storage replication for Linux. Today he leads a company of about 30 employees with locations in Vienna/Austria and Portland/Oregon.

August 2019 – Newsletter



We’ll be speaking at the Flash Memory Summit on August 7th. Come see your favorite Engineer David Hay’s presentation on the “Key-Value Store and Linux Technologies.”

Cheap Votes: DRBD Diskless Quorum

Prevent inadequate fencing! Read about DRBD Diskless Quorum!

DRBD/LINSTOR vs Ceph – a technical comparison

Ever wonder what the differences are between Ceph and DRBD/LINSTOR? Well, we did too and we’re sharing it with you.

Coming Soon, a New DRBD Proxy Release

The next release of DRBD Proxy will come with improvements in data replication and compression. Check out what you have to look forward to!

Service & Support

Our first priority is you. Don’t hesitate to contact us.



Performance Gains with DRBD 10

A key factor in evaluating storage systems is their performance. LINBIT has been working to further improve the performance of DRBD. The recent DRBD 10 alpha release demonstrates significant gains.

The performance gains particularly help with highly concurrent workloads. This is an area that has been steadily rising in importance, and looks set to continue to rise. Improvements in single core speed appear to be stagnating while the availability of ever increasing numbers of cores is growing. Hence software systems need to utilize concurrency effectively to make the most of the computing resources.

We tested DRBD 10 with 4K random writes and various concurrency levels. In this test, the data is being replicated synchronously (“protocol C”) between two nodes. These numbers are for a single volume, not an aggregate over many volumes. I/O was generated by 8 processes. The tests show improvements in raw random write performance of up to 68%.

[Figure: DRBD 10 performance gains at various concurrency levels]

These improvements were achieved by using a finer-grained locking scheme. This allows, for instance, one core to be sending a request while a second core is submitting the next request. The result is better utilization of the available cores and overall higher throughput.

Technical details

The above tests were carried out on a pair of 16 core servers equipped with NVMe storage and a direct ethernet connection. The software versions used were DRBD 10.0.0a1 and its most recent ancestor from the DRBD 9 branch (8e93a5d93b62). I/O was generated using the fio tool with the following parameters:

fio --name=test --rw=randwrite --direct=1 --numjobs=8 \
    --ioengine=libaio --iodepth=$IODEPTH --bs=4k \
    --time_based=1 --runtime=60 --size=48G --filename=/dev/drbd500

Ongoing development on DRBD 10

LINBIT is working on a number of exciting major features for DRBD 10.

  • Request forwarding. DRBD will send data to a geographically distant site only once; the data will then be replicated among the nodes at that site.
  • PMEM journaling. DRBD can already access its metadata in a PMEM optimized fashion. That will be extended to using a PMEM device as a write-back cache, resulting in improved performance in latency-sensitive scenarios.
  • Erasure coding. DRBD will be able to erasure code and distribute its data. This provides the same functionality as RAID5/6, but with an arbitrary number of parity nodes. The result is lower disk usage with similar redundancy characteristics.

Stable releases of DRBD 10 are planned for 2020 – until then stay tuned for upcoming updates!


Joel Colledge
Joel is a software developer at LINBIT with a background in mathematics. A polyglot programmer, Joel enjoys working with many different languages and technologies. At LINBIT, he has been involved in the development of LINSTOR and DRBD. Originally from England, Joel is now based in Vienna, Austria.