A few weeks ago, LINBIT publicly released the LINSTOR CSI (Container Storage Interface) plugin. This means LINSTOR now has a standardized way of working with any container orchestration platform that supports CSI. Kubernetes is one of those platforms, so our developers put in the work to make LINSTOR integration with Kubernetes easy, and I’ll show you how!
You’ll need a couple things to get started:
- Kubernetes Cluster (1.12.x or newer)
- LINSTOR Cluster
LINSTOR’s CSI plugin requires certain Kubernetes feature gates be enabled on the kube-apiserver and each kubelet. Enable the KubeletPluginsWatcher and CSINodeInfo feature gates on the kube-apiserver by adding --feature-gates=KubeletPluginsWatcher=true,CSINodeInfo=true to the list of arguments passed to the kube-apiserver system pod in the /etc/kubernetes/manifests/kube-apiserver.yaml manifest. It should look something like this:
```
# cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    ... snip ...
    - --feature-gates=KubeletPluginsWatcher=true,CSINodeInfo=true
    ... snip ...
```
To enable these feature gates on the kubelet, you’ll need to add --feature-gates=CSINodeInfo=true,CSIDriverRegistry=true to the KUBELET_EXTRA_ARGS variable located in the /etc/sysconfig/kubelet file. Your config should look something like this:
```
# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--feature-gates=CSINodeInfo=true,CSIDriverRegistry=true"
```
Once you’ve modified those two configurations, you can prepare your configuration for the CSI plugin’s sidecar containers.
curl down the latest version of the plugin definition:
```
# curl -O \
  https://raw.githubusercontent.com/LINBIT/linstor-csi/master/examples/k8s/deploy/linstor-csi.yaml
```
Next, set the value: of each instance of LINSTOR-IP in linstor-csi.yaml to the IP address of your LINSTOR Controller. The placeholder IP in the example yaml is 192.168.100.100, so you can use the following command to update the address (or edit it with an editor); simply set CON_IP to your controller’s IP address:
```
# CON_IP="x.x.x.x"; sed -i.example "s/192\.168\.100\.100/$CON_IP/g" linstor-csi.yaml
```
Finally, apply the yaml to the Kubernetes cluster:
```
# kubectl apply -f linstor-csi.yaml
```
You should now see the linstor-csi sidecar pods running in the kube-system namespace:

```
# watch -n1 -d kubectl get pods --namespace=kube-system --output=wide
```
Once running, you can define storage classes in Kubernetes that point to your LINSTOR storage pools, from which you can then provision persistent volumes for your containers, optionally replicated by DRBD.
Here is an example yaml definition that describes a storage class backed by a LINSTOR storage pool in my cluster named thin-lvm:
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-autoplace-1-thin-lvm
provisioner: io.drbd.linstor-csi
parameters:
  autoPlace: "1"
  storagePool: "thin-lvm"
```
And here is an example yaml definition for a persistent volume claim carved out of the above storage class:
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: linstor-autoplace-1-thin-lvm
  name: linstor-csi-pvc-0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
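A pod consumes this claim like any other PVC; here is a minimal sketch (the pod name, image, and mount path are placeholders):

```
apiVersion: v1
kind: Pod
metadata:
  name: linstor-csi-test-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data        # the LINSTOR-provisioned volume appears here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: linstor-csi-pvc-0
```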
Put it all together and you’ve got yourself an open source, high performance, block device provisioner for your persistent workloads in Kubernetes!
There are many ways to craft your storage class definitions for node selection, storage tiering, diskless attachments, or even off site replicas. We’ll be working on our documentation surrounding new features, so stay tuned, and don’t hesitate to reach out for the most UpToDate information about LINBIT’s software!
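As one illustration, a storage class that places replicas on three nodes might look like the sketch below; autoPlace and storagePool appear in the example above, while disklessStoragePool is an assumption based on the plugin’s example parameters at the time of writing, so check the linstor-csi repository for the current parameter set:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-autoplace-3-thin-lvm
provisioner: io.drbd.linstor-csi
parameters:
  autoPlace: "3"                              # three DRBD replicas
  storagePool: "thin-lvm"
  disklessStoragePool: "DfltDisklessStorPool" # assumption: pool used for diskless attachments
```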
Read more: CSI Plugin for LINSTOR Complete.
With the LINSTOR volume driver for OpenStack, Linux storage created in OpenStack Cinder can be easily provisioned, managed and seamlessly replicated across a large Linux cluster.
LINSTOR is an open-source storage orchestrator designed to deliver easy-to-use software-defined storage in Linux environments. LINSTOR uses LINBIT’s DRBD to replicate block data with minimal overhead and CPU load. Managing a LINSTOR storage cluster is as easy as a few LINSTOR CLI commands or a few lines of Python code with the LINSTOR API.
LINSTOR pairs with OpenStack
OpenStack paired with LINSTOR brings even greater power and flexibility by enabling Linux to become your SDS platform. Replicate storage wherever you need it with simple mouse clicks. Provision snapshots. Create new volumes with those snapshots. LINSTOR volumes can then be paired with the right compute nodes just as easily. Together, OpenStack and LINSTOR bring tremendous potential to provide robust infrastructure with ease, all powered by open-source.
Data replicated with LINSTOR can minimize downtime and data loss. Running your cloud on commodity hardware with native Linux features underneath provides the most flexible, reliable, and cost-effective way to host a customized OpenStack deployment anywhere.
In addition to storage management and replication, LINBIT also offers Geo-Clustering solutions that work with LINSTOR to enable long-distance data replication inside private and public cloud environments.
For a quick recap, please check out this video on deploying LINSTOR volumes with OpenStack’s Horizon GUI.
For more information about LINBIT’s DRBD and LINSTOR, visit:
- the LINSTOR OpenStack drivers
- the LINSTOR driver documentation
- LINBIT’s LINSTOR webpage
It’s Official. LINSTOR volume driver is now part of OpenStack.
With this code merge, the LINSTOR volume driver is now officially part of OpenStack, bringing a new level of software-defined storage (SDS) service to Cinder, OpenStack’s volume service.
While the next OpenStack release named ‘Stein’ won’t be out until April, the latest LINSTOR driver is already available on our GitHub repo.
Stay tuned for more news and updates from LINBIT.
System maintenance, whether planned or in response to failure, is a necessary part of managing infrastructure. Everyone hopes for the former, rather than the latter. We do our system maintenance quarterly here at LINBIT in hopes that the latter is avoided. These maintenance windows are where we install hardware and software updates, test failovers, and give everything a once over to ensure configurations still make sense.
Normally in the case of planned maintenance, users are left waiting for access while IT does whatever it needs to do. This leads to a bad user experience. In fact, that is precisely what led to this blog post. I was looking for a BIOS update for a motherboard in the server room and was presented with this lovely message:
I just had a bad user experience. And to further the experience, I have no indication as to when it will be back up or available. I guess I’m supposed to keep checking back until I get what I was looking for… if I remember to.
Here at LINBIT we use DRBD for all of our systems. This ensures that they are always on and always available for end users and our customers. If for some reason you landed on this site and aren’t familiar with DRBD, it is an open source project developed by us, LINBIT. In its simplest form, you can think of it as network RAID 1; however, instead of independent disks, you have two (or more, if you’re using DRBD 9) independent systems. You essentially need to lose twice the hardware to experience downtime of services.
One commonly ignored or unrealized benefit of using DRBD is that system maintenance and upgrades can be done with minimal to no interruption of services. The length of the interruption is generally tied to the type of deployment – for example if you’re using virtual machines, live migration can be achieved using DRBD resulting in no downtime. If you’re running services on hardware and they need to be stopped and restarted, your downtime will be limited to the failover time.
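For the virtual-machine case, the sketch below shows a live migration over a DRBD-backed disk using libvirt; the resource, domain, and host names are examples, and the exact drbdadm option syntax may differ between DRBD versions:

```
# Temporarily allow both nodes to be Primary so the source and
# destination hosts can access the disk during migration
drbdadm net-options --allow-two-primaries=yes r0

# Live-migrate the guest (domain "vm0") from Frodo to Sam
virsh migrate --live vm0 qemu+ssh://sam/system

# Return to single-primary operation once the guest runs on Sam
drbdadm net-options --allow-two-primaries=no r0
```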
So how do we do this? Let’s say you have two servers, Frodo and Sam; Frodo is Primary (running services) and Sam is Secondary. In this example we need to update the BIOS and upgrade the RAM of our servers. Follow these steps:
1. First, put the cluster into maintenance mode.
2. Power off Sam (the secondary server).
3. Install any upgrades or hardware needed.
4. Power the system up, enter the BIOS, and make sure everything is OK.
5. Reboot and update the BIOS.
6. Boot Sam into the OS.
7. Install any OS updates and reboot again if needed.
8. Once Sam is back up and everything is verified to be in good condition, bring the cluster out of maintenance mode.
9. Migrate services to Sam. Depending on how things are configured, this may or may not cause a few seconds of service unavailability.
10. Repeat the process for Frodo.
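On a Pacemaker-managed cluster, the maintenance-mode and migration steps above boil down to a few commands; this is a sketch using the pcs shell, with the resource and node names as examples:

```
# Put the cluster into maintenance mode (Pacemaker stops managing resources)
pcs property set maintenance-mode=true

# ... power off Sam, upgrade BIOS/RAM/OS, boot it back up ...

# Bring the cluster out of maintenance mode
pcs property set maintenance-mode=false

# Migrate the service group to Sam
pcs resource move my_services sam
```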
There you have it, one of the better kept secret benefits of using DRBD.
This CSI plug-in allows for the use of LINSTOR volumes on Container Orchestrators that implement CSI, such as Kubernetes.
Preliminary work on the CSI plugin for LINSTOR is now complete and is capable of operating as an independent driver. CSI is a project by the Cloud Native Computing Foundation which aims to serve as an industry standard interface for container orchestration platforms. This allows storage vendors to write one storage driver and have it work across multiple platforms with little or no modification.
In practical terms, this means that LINSTOR is primed to work with current and emerging cloud technologies that implement CSI. Currently, work is being done to provide example deployments for Kubernetes, which should allow an easy way for Kubernetes admins to deploy the LINSTOR CSI plug-in. We expect full support for Kubernetes integration in early 2019.
Get the code on GitHub.
In the tech world, a “layered cake” approach allows IT shops to enable new software capabilities while minimizing disruption. Need a new feature? Add a new layer. But like “too much of a good thing,” this approach has caused the IT stack to grow over the years. Software layer bloat and complexity can lead to poor system performance.
Keeping all the layers of a tall stack version-tested and conflict-free can be both complex and costly. This is especially the case when you deal with virtualization and containers. You want VM-native and container-native semantics and ease of use, but you also want speed, robustness, and maintainability. What’s the best path? Do you add a new layer, or add functionality to existing layers?
LINSTOR uses existing features
There is a better way. The idea is to fully leverage the functionality that already exists and has been put through its paces. So, before adding anything, users should see what features are natively supported in their environment. You can start with the OS kernel and work your way up. We went through this analysis when we built LINBIT SDS and LINSTOR.
It turns out there is a lot of SDS-type functionality already inside the Linux kernel and the OS. Software RAID adds drive protection, LVM does snapshotting; there is replication, deduplication, compression, SSD Caching, etc. So, no need to reinvent the wheel when building a storage system. But these efficient and reliable tools need to be presented in a VM-native or container-native way so that the software is easy to use and fits the way users interact with VMs or containers. This is where LINSTOR comes in.
Integrations to Kubernetes and OpenStack
LINSTOR is a Linux block device provisioner that lets you create storage pools, defined by which Linux block storage software you want to use. From these storage pools you can easily provision volumes either through a command-line interface (CLI) or your favorite cloud/container front-end (Kubernetes, OpenStack, etc). Simply create your LINSTOR cluster, define your storage pool(s), and start creating resources where they’re needed on-demand.
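That workflow looks roughly like this from the LINSTOR CLI; the node, address, pool, and resource names are examples, and newer LINSTOR releases may prefer resource groups over the --auto-place flag shown here:

```
# Register two satellite nodes with the controller
linstor node create alpha 192.168.0.10
linstor node create bravo 192.168.0.11

# Define a storage pool backed by an LVM volume group on each node
linstor storage-pool create lvm alpha pool_ssd vg_ssd
linstor storage-pool create lvm bravo pool_ssd vg_ssd

# Create a 10 GiB volume and let LINSTOR place two replicas
linstor resource-definition create demo_vol
linstor volume-definition create demo_vol 10G
linstor resource create demo_vol --auto-place 2 --storage-pool pool_ssd
```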
With LINSTOR, you can:
- Scale out by designating nodes as controllers or satellites
- Display usage statistics from participating nodes
- Define storage pools for different classes of storage
- Set security/encryption for volumes
- Deploy in Non-Converged or Hyper-Converged infrastructures
- Define redundancy for each volume
In short: you can do everything a more complicated SDS product provides, but in a lighter weight, more maintainable, faster, and more reliable fashion.
Don’t just take our word for it. In a recent report titled “Will the Operating System Kill SDS?”, Storage Switzerland covers this approach.
Try it out. Test it. Play with it. Commit code to it. Send us your feedback. As always, we’d love to hear from you about how you use the software so that we can make the software better in future revs. Our goal is the same as it has been for nearly 20 years. We are reducing the cost and complexity of enterprise storage by enabling customers to choose Linux for their back-end storage in any environment.
Building on open-source software and leveraging the capabilities in the Linux kernel results in lower TCO, always-on apps, and always-available data for enterprises everywhere.
Customers are Recognized as Key to Business Success
Beaverton, Ore., September 6, 2018 – LINBIT USA, the pioneer in open source High Availability (HA), Disaster Recovery (DR) and Software-Defined Storage (SDS), has reached a milestone this September, celebrating 10 years of business in the Americas. Recognizing the deep collaboration with its customers, LINBIT USA credits its loyal customer base as a critical component for driving its tremendous growth over the last decade.
After developing the company’s flagship HA software, DRBD®, in 2001, Philipp Reisner, CEO of LINBIT, founded the company in Vienna, Austria. After years of breakthrough performance, DRBD was integrated into the Linux kernel in 2009. Today, DRBD has over 2 million downloads and is the foundation for LINBIT’s expansion into the SDS and container space.
LINBIT USA was established in 2008 to offer development, consultancy, 24/7 support, and OEM/ISV integration services in North, Central and South America and boasts an impressive enterprise customer list of names, such as Cisco, Microfocus, CN Rail, Bechtel, Adamson Systems, Performance Matters and TruckPro.
TruckPro, with over 150 retail locations in 33 states, relies on LINBIT for resiliency. “Uptime is very important for our business and anything we can do to quickly recover from any issue is paramount,” said Henry SantaMaria, Director of Infrastructure at TruckPro. “Our investment [in LINBIT’s software] yielded a noticeable increase in performance and stability which we did not have before.”
“It gives me great pride to join LINBIT employees, clients and business partners in celebrating the 10th anniversary of LINBIT USA,” said Brian Hellman, COO of LINBIT and head of LINBIT USA. “We have accomplished a great deal since the business started, and as the company continues to thrive and expand, we are very excited about our future. Our customers have made us who we are today.”
Read the LINBIT USA Anniversary blog for a look back on the history of LINBIT.
LINBIT is the force behind DRBD and the de facto open standard for High Availability (HA) software for enterprise and cloud computing. The LINBIT DRBD software is deployed in millions of mission-critical environments worldwide to provide High Availability (HA), Geo Clustering for Disaster Recovery (DR), and Software Defined Storage (SDS).