
Meet us at CloudFest in Rust, Germany!

Come and meet LINBIT at CloudFest 2019 in Europa-Park, Rust, Germany. (23rd – 29th of March 2019)


CloudFest has made a name for itself over the last few years as one of the best cloud-focused industry events in which to network and have a good time. This year more than 7,000 people are attending the event. Attendees will hear from leaders in the business and get the latest industry buzz.

The speaker line-up includes names like Dr. Ye Huang, Head of Solution Architects at Alibaba; Will Pemble, CEO at Goal Boss; Bhavin Turakhia, CEO at Flock; and Brian Behlendorf, inventor of the Apache web server.

Visit us at Booth H24!

LINBIT is announcing some exciting news at CloudFest: NVMe-oF with LINSTOR! This means LINSTOR can now be used as a standalone product, independent of DRBD. NVMe-oF supports InfiniBand with RDMA and allows ultra-fast performance, easily handling workloads for Big Data analytics or artificial intelligence. Come say hello!

We look forward to seeing you!

Booth visitors will be rewarded with a surprise that even your family will love! 🙂 


LINSTOR CSI Plugin for Kubernetes

A few weeks ago, LINBIT publicly released the LINSTOR CSI (Container Storage Interface) plugin. This means LINSTOR now has a standardized way of working with any container orchestration platform that supports CSI. Kubernetes is one of those platforms, so our developers put in the work to make LINSTOR integration with Kubernetes easy, and I’ll show you how!

You’ll need a couple of things to get started:

  • Kubernetes Cluster (1.12.x or newer)
  • LINSTOR Cluster

LINSTOR’s CSI plugin requires that certain Kubernetes feature gates be enabled on the kube-apiserver and each kubelet.

Enable the CSINodeInfo and CSIDriverRegistry feature gates on the kube-apiserver by adding --feature-gates=KubeletPluginsWatcher=true,CSINodeInfo=true,CSIDriverRegistry=true to the list of arguments passed to the kube-apiserver system pod in the /etc/kubernetes/manifests/kube-apiserver.yaml manifest. It should look something like this:

# cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    ... snip ...
    - --feature-gates=KubeletPluginsWatcher=true,CSINodeInfo=true,CSIDriverRegistry=true
    ... snip ...

To enable these feature gates on the kubelet, add the following argument to the KUBELET_EXTRA_ARGS variable in /etc/sysconfig/kubelet: --feature-gates=CSINodeInfo=true,CSIDriverRegistry=true. Your config should look something like this:

# cat /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS="--feature-gates=CSINodeInfo=true,CSIDriverRegistry=true"
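
After editing the kubelet configuration, restart the kubelet so the new flags take effect (assuming a systemd-managed kubelet):

# systemctl restart kubelet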

Once you’ve modified those two configurations, you can prepare the configuration for the CSI plugin’s sidecar containers. Use curl to download the latest version of the plugin definition:

# curl -O \
https://raw.githubusercontent.com/LINBIT/linstor-csi/master/examples/k8s/deploy/linstor-csi.yaml

Set the value: of each instance of LINSTOR-IP in linstor-csi.yaml to the IP address of your LINSTOR Controller. The placeholder IP in the example yaml is 192.168.100.100, so you can use the following command to update the address (or edit it with an editor); simply set CON_IP to your controller’s IP address:

# CON_IP="x.x.x.x"; sed -i.example "s/192\.168\.100\.100/$CON_IP/g" linstor-csi.yaml

Finally, apply the yaml to the Kubernetes cluster:

# kubectl apply -f linstor-csi.yaml

You should now see the linstor-csi sidecar pods running in the kube-system namespace:

# watch -n1 -d kubectl get pods --namespace=kube-system --output=wide

Once they’re running, you can define storage classes in Kubernetes that point to LINSTOR storage pools, from which you can provision persistent volumes, optionally replicated by DRBD, for your containers.

Here is an example yaml definition for a storage class that provisions from a LINSTOR storage pool in my cluster named thin-lvm:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-autoplace-1-thin-lvm
provisioner: io.drbd.linstor-csi
parameters:
  autoPlace: "1"
  storagePool: "thin-lvm"

And here is an example yaml definition for a persistent volume claim carved out of the above storage class:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: linstor-autoplace-1-thin-lvm
  name: linstor-csi-pvc-0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
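
To use the claim, reference it from a pod spec. Here’s a minimal sketch (the pod name and image are illustrative, not taken from the plugin’s examples):

apiVersion: v1
kind: Pod
metadata:
  name: linstor-csi-demo           # hypothetical pod name
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data             # the LINSTOR-backed volume is mounted here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: linstor-csi-pvc-0 # the PVC defined above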

Put it all together and you’ve got yourself an open-source, high-performance block device provisioner for your persistent workloads in Kubernetes!

There are many ways to craft your storage class definitions for node selection, storage tiering, diskless attachments, or even off-site replicas. We’ll be working on our documentation surrounding new features, so stay tuned, and don’t hesitate to reach out for the most up-to-date information about LINBIT’s software!

Matt Kereczman
Matt is a Linux Cluster Engineer at LINBIT with a long history of Linux system administration and Linux system engineering. Matt is a cornerstone of LINBIT’s support team and plays an important role in making LINBIT’s support great. Matt was President of the GNU/Linux Club at Northampton Area Community College prior to graduating with honors from Pennsylvania College of Technology with a BS in Information Security. Open source software and hardware are at the core of most of Matt’s hobbies.

Why you should use LINSTOR in OpenStack

With the LINSTOR volume driver for OpenStack, Linux storage created in OpenStack Cinder can be easily provisioned, managed and seamlessly replicated across a large Linux cluster.

LINSTOR is an open-source storage orchestrator designed to deliver easy-to-use software-defined storage in Linux environments. LINSTOR uses LINBIT’s DRBD to replicate block data with minimal overhead and CPU load. Managing a LINSTOR storage cluster is as easy as a few LINSTOR CLI commands or a few lines of Python code with the LINSTOR API.
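
As a rough sketch, standing up storage with the LINSTOR CLI can look like the following (node, pool, and resource names are placeholders, and exact syntax can vary between LINSTOR versions):

# register a satellite node with the controller
linstor node create alpha 192.168.0.15
# back a storage pool with an existing LVM volume group
linstor storage-pool create lvm alpha pool0 vg0
# define a resource with a 10GiB volume, replicated on two nodes
linstor resource-definition create demo
linstor volume-definition create demo 10G
linstor resource create demo --auto-place 2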

LINSTOR pairs with OpenStack

OpenStack paired with LINSTOR brings even greater power and flexibility by enabling Linux to become your SDS platform. Replicate storage wherever you need it with simple mouse clicks. Provision snapshots. Create new volumes with those snapshots. LINSTOR volumes can then be paired with the right compute nodes just as easily. Together, OpenStack and LINSTOR bring tremendous potential to provide robust infrastructure with ease, all powered by open-source.

Data replicated with LINSTOR can minimize downtime and data loss. Running your cloud on commodity hardware with native Linux features underneath provides the most flexible, reliable, and cost-effective way to host a customized OpenStack deployment anywhere.

In addition to storage management and replication, LINBIT also offers Geo-Clustering solutions that work with LINSTOR to enable long-distance data replication inside private and public cloud environments.

For a quick recap, please check out this video on deploying LINSTOR volumes with OpenStack’s Horizon GUI.

For more information about LINBIT’s DRBD and LINSTOR, visit:

For the LINSTOR OpenStack drivers:
https://github.com/LINBIT/openstack-cinder

For LINSTOR Driver Documentation:
https://docs.linbit.com/docs/users-guide-9.0/#ch-openstack-linstor

For LINBIT’s LINSTOR webpage:
https://www.linbit.com/en/linstor/

 

Woojay Poynter
IO Plumber
Woojay works on data replication and software-defined storage with LINSTOR, built on DRBD @LINBIT. He has worked on web development, embedded firmware, professional culinary education, and power carving in ice and wood. He is a proud father and likes to play with LEGO.

LINSTOR is officially part of OpenStack

It’s Official. LINSTOR Volume Driver is Now a Part of OpenStack Cinder.

With this code merge, the LINSTOR volume driver is now officially part of OpenStack, bringing a new level of software-defined storage (SDS) service to Cinder, OpenStack’s volume service.

While the next OpenStack release named ‘Stein’ won’t be out until April, the latest LINSTOR driver is already available on our GitHub repo.

Stay tuned for more news and updates from LINBIT.

Server Maintenance

Minimize Downtime During Maintenance

System maintenance, whether planned or in response to failure, is a necessary part of managing infrastructure. Everyone hopes for the former, rather than the latter. We do our system maintenance quarterly here at LINBIT in hopes that the latter is avoided. These maintenance windows are where we install hardware and software updates, test failovers, and give everything a once over to ensure configurations still make sense.

Normally, in the case of planned maintenance, users are left waiting for access while IT does whatever they need to do. This leads to a bad user experience. In fact, that is precisely what led to this blog post: I was looking for a BIOS update for a motherboard in the server room and was presented with this lovely message:

We are sorry!

I just had a bad user experience. And to further the experience, I have no indication as to when it will be back up or available. I guess I’m supposed to keep checking back until I get what I was looking for… if I remember to.

Here at LINBIT we use DRBD for all of our systems. This ensures that they are always on and always available for end users and our customers. If for some reason you landed on this site and aren’t familiar with DRBD: it is an open source project developed by us, LINBIT. In its simplest form, you can think of it as network RAID 1; however, instead of having independent disks, you have two (or more, if you’re using DRBD 9) independent systems. You essentially now need to lose twice the hardware to experience downtime of services.

One commonly ignored or unrealized benefit of using DRBD is that system maintenance and upgrades can be done with minimal to no interruption of services. The length of the interruption is generally tied to the type of deployment – for example, if you’re using virtual machines, live migration can be achieved using DRBD, resulting in no downtime. If you’re running services on hardware and they need to be stopped and restarted, your downtime will be limited to the failover time.

So how do we do this? Let’s say you have two servers, Frodo and Sam – Frodo is Primary (running services) and Sam is Secondary. In this example we need to update the BIOS and upgrade the RAM in our servers. Follow these steps (a command sketch for a Pacemaker-managed cluster follows the list):

  1. First put the cluster into maintenance mode
  2. Next power off Sam (the secondary server)
    1. We can now install any upgrades or hardware we need to
    2. Power the system up, enter the BIOS, and make sure everything is OK
    3. Reboot and update the BIOS
  3. Boot Sam into the OS
    1. At this point you can install any OS updates and reboot again if needed
  4. Once Sam is back up and everything is verified to be in good condition, bring the cluster out of maintenance mode
  5. Now migrate services to Sam – again, depending on how things are configured, this may or may not cause a few seconds of unavailability of services
  6. Repeat steps 1-4 for Frodo
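
For example, on a Pacemaker-managed DRBD cluster, steps 1, 4, and 5 might look roughly like this (the resource name r0 is a placeholder; use whatever tooling your cluster stack provides):

# step 1: put the cluster into maintenance mode
crm configure property maintenance-mode=true
# ...power off Sam, do the hardware, BIOS, and OS work, boot it back up...
# verify that DRBD has resynchronized before continuing
drbdadm status
# step 4: bring the cluster out of maintenance mode
crm configure property maintenance-mode=false
# step 5: migrate services over to Sam
crm resource migrate r0 sam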

There you have it, one of the better-kept secret benefits of using DRBD.

CSI Plugin for LINSTOR Complete

This CSI plug-in allows for the use of LINSTOR volumes on Container Orchestrators that implement CSI, such as Kubernetes.

Preliminary work on the CSI plugin for LINSTOR is now complete, and it is capable of operating as an independent driver. CSI is a project by the Cloud Native Computing Foundation that aims to serve as an industry-standard interface for container orchestration platforms. This allows storage vendors to write one storage driver and have it work across multiple platforms with little or no modification.

In practical terms, this means that LINSTOR is primed to work with current and emerging cloud technologies that implement CSI. Currently, work is being done to provide example deployments for Kubernetes, which should give Kubernetes admins an easy way to deploy the LINSTOR CSI plug-in. We expect full support for Kubernetes integration in early 2019.

Get the code on GitHub.


OS driven Software-Defined Storage (SDS) with Linux

In the tech world, a “layered cake” approach allows IT shops to enable new software capabilities while minimizing disruption. Need a new feature? Add a new layer. But like “too much of a good thing,” this approach has caused the IT stack to grow over the years. Software layer bloat and complexity can lead to poor system performance.

Keeping all the layers of a tall stack version-tested and conflict-free can be both complex and costly. This is especially the case when you deal with virtualization and containers. You want VM-native and container-native semantics and ease of use, but you also want speed, robustness, and maintainability. What’s the best path? Do you add a new layer, or add functionality to existing layers?

LINSTOR uses existing features

There is a better way. The idea is to fully leverage the functionality that already exists and has been put through its paces. So, before adding anything, users should see what features are natively supported in their environment. You can start with the OS kernel and work your way up. We went through this analysis when we built LINBIT SDS and LINSTOR.

It turns out there is a lot of SDS-type functionality already inside the Linux kernel and the OS. Software RAID adds drive protection, LVM does snapshotting; there is replication, deduplication, compression, SSD Caching, etc. So, no need to reinvent the wheel when building a storage system. But these efficient and reliable tools need to be presented in a VM-native or container-native way so that the software is easy to use and fits the way users interact with VMs or containers. This is where LINSTOR comes in.
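
As a quick illustration, two of those building blocks are available straight from the shell (device and volume group names are examples only):

# mirror two disks with Linux software RAID
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# create a logical volume, then take a point-in-time snapshot of it with LVM
lvcreate -L 100G -n data vg0
lvcreate -s -L 10G -n data_snap /dev/vg0/data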

Integrations to Kubernetes and OpenStack

LINSTOR is a Linux block device provisioner that lets you create storage pools, defined by which Linux block storage software you want to use. From these storage pools you can easily provision volumes either through a command-line interface (CLI) or your favorite cloud/container front-end (Kubernetes, OpenStack, etc.). Simply create your LINSTOR cluster, define your storage pool(s), and start creating resources where they’re needed, on demand.

With LINSTOR, you can (see the command sketch after this list):

  • Scale out by designating nodes as controllers or satellites
  • Display usage statistics from participating nodes
  • Define storage pools for different classes of storage
  • Set security/encryption for volumes
  • Deploy in non-converged or hyper-converged infrastructures
  • Define redundancy for each volume
  • etc.
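
A few of these map directly to LINSTOR CLI commands, for example (the resource name is a placeholder):

# show participating nodes and per-pool usage statistics
linstor node list
linstor storage-pool list
# define redundancy per volume by letting LINSTOR auto-place replicas
linstor resource create demo --auto-place 3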

In short: you can do everything a more complicated SDS product provides, but in a lighter-weight, more maintainable, faster, and more reliable fashion.

Don’t just take our word for it. In a recent report titled “Will the Operating System Kill SDS?”, Storage Switzerland covers this approach.

 

Try it out. Test it. Play with it. Commit code to it. Send us your feedback. As always, we’d love to hear from you about how you use the software so that we can make it better in future revs. Our goal is the same as it has been for nearly 20 years: reducing the cost and complexity of enterprise storage by enabling customers to choose Linux for their back-end storage in any environment.

 

Building on open-source software and leveraging the capabilities in the Linux kernel results in lower TCO, always-on apps, and always-available data for enterprises everywhere.

Greg Eckert
In his role as the Director of Business Development for LINBIT America and Australia, Greg is responsible for building international relations, both in terms of technology and business collaboration. Since 2013, Greg has connected potential technology partners, collaborated with businesses in new territories, and explored opportunities for new joint ventures.