
The technology inside LINSTOR (Part I)

Spotlight on LINSTOR’s design and technology: what we do, and how we do it, to create powerful, flexible, and robust storage cluster management software

LINSTOR is an application that is typically integrated with highly automated systems, such as software-defined storage systems or virtualization environments. Users often interact with the management interface of some other application that uses LINSTOR to manage the storage required for that application’s use case, which also means that users may not have direct access to the storage systems or to the LINSTOR user interface.

A single storage cluster can be the backend of multiple independent application systems, so the biggest challenge for software like LINSTOR is to remain responsive even if some actions fail or some components of the cluster fail. At the same time, the software should be flexible enough to cover all use cases and to enable future extension or modification, and despite all the complexity that results from these requirements, it should still be easy to understand and easy to use for the administrators who are tasked with installing and maintaining the storage system.

It is quite clear to anyone who has worked as a developer on a larger software project that many of these requirements work against each other. Customizability, flexibility, and an abundance of features cause complexity, but complexity is the natural enemy of usability, reliability, and maintainability. When we started the development of LINSTOR, our challenge was to design and implement the software so that it would achieve our goals with regard to feature richness and flexibility while remaining reliable and easy to use.

Modularity

One of the most important aspects of LINSTOR’s design is its modularity. We divided the system into two components, the Controller and the Satellite, so that the Controller component could remain as independent as possible from the Satellite component – and vice versa.

Even inside those two components, many parts of the software are exchangeable – the communication layer, the serialization protocol, the database layer, all of the API calls, even all of the debug commands that we use for internal development, as well as many other implementation details. This not only provides maximum flexibility for future extensions, it also acts as a safety net. For example, if support for the database or the serialization protocol that we currently use were dropped by its maintainers, we could simply exchange those parts without having to modify every single source file of the project, because implementation details are hidden behind generic interfaces that connect the various parts of our software.

Another positive side effect is that many of those components, being modular, are naturally able to run multiple differently configured instances. For example, it is possible to configure multiple network connectors in LINSTOR, each bound to different network interfaces or ports.


A single communication protocol

As a cluster software, LINSTOR must of course have some mechanism to communicate with all of the nodes that are part of the cluster. Integration with other applications also requires some means of communication between those applications and the LINSTOR processes, and the same applies to any kind of user interface for LINSTOR.

There are lots of different technologies available, but many of them are only suitable for certain kinds of communication. Some clusters use distributed key/value stores like etcd for managing their configuration, but use D-Bus for command line utilities and a REST interface for connecting other applications.

Instead of using many different technologies, LINSTOR uses a single versatile network protocol for communication with all peers. The protocol used for communication between the Controller and the Satellites is the same as the one used for communication between the Controller and the command line interface or any other application. Since this protocol is implemented on top of standard TCP/IP connections, all of LINSTOR’s communication is network-transparent. An optional SSL layer can provide secure, encrypted communication. Using a single mechanism for communication also means less complexity, as the same code can be used to implement different communication channels.

Transaction-safety

Even though LINSTOR keeps its configuration objects in memory, there is an obvious need for some kind of persistence. Ideally, what is kept in memory should match what is persisted, which means that any change should be a transaction, both in memory and on persistent storage.

Most Unix/Linux applications have traditionally favored line-based text files for configuring the software and for persisting its state, whereas LINSTOR keeps its configuration in a database. Apart from the fact that a fully ACID-compliant database is an ideal foundation for building a transaction-safe application, using a database also has other advantages. For example, if an upgrade of the software requires changes to the persistent data structures, the upgrade of the data can be performed as a single transaction, so that the result is either the old version or the new version of the data, but not some broken state in between. Database constraints also provide an additional safeguard that helps ensure the consistency of the data. If there were a bug in our software that failed to detect duplicate volume numbers being assigned to storage volumes, the database would abort the transaction that creates the volume due to a constraint violation, thereby preventing inconsistencies in the corresponding data structures.

To avoid requiring users to set up and maintain a database server, LINSTOR uses its own integrated database by default – it is simply started as an integral part of the Controller component. Optionally, the Controller can also access a centralized database by means of a JDBC driver.
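
As an illustration, here is a hedged sketch of how the Controller can be pointed at an external database in more recent LINSTOR releases. The file location, section names, and keys shown here are assumptions that vary by LINSTOR version and packaging; with the default integrated database, no configuration is needed at all.

    # Hypothetical example: switch the Controller from its embedded database
    # to an external PostgreSQL instance reachable via JDBC. Adjust the host,
    # credentials, and configuration file location to your installation.
    cat >> /etc/linstor/linstor.toml <<'EOF'
    [db]
      user = "linstor"
      password = "linstor"
      connection_url = "jdbc:postgresql://db.example.com/linstor"
    EOF
    systemctl restart linstor-controller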


Linux Data Deduplication and Compression: One more reason to use block-level data replication.

Having recently returned from my 6th Red Hat Summit (RHS), I’m writing this blog to answer a common question: “Why replicate at the block level?” Using block-level replication, we can easily add high availability or disaster recovery features to any application that doesn’t natively support them.

The most frequently asked question we heard at RHS was, “How do you compare to [insert application replication or filesystem here]?” In most cases, the answer was, “LINBIT’s replication software, DRBD, replicates data at the block level.” It would be an extreme task to run performance comparisons against all of the other replication technologies on the market, so generally we provide background information, including:


  • DRBD can usually replicate with 1-3 percent overhead relative to the cluster’s backing disks, as measured by FIO
  • In dual-primary mode, overhead increases to 15-20 percent
  • DRBD is compatible with any application or Linux filesystem, and is effective at replicating multiple applications simultaneously.
  • DRBD has a read-balancing feature. If you are running a read-intensive application, DRBD will pass reads through to secondary nodes once the primary is running at maximum capacity, enabling you to leverage all of your replicated systems. One test showed 1.7x the read performance compared to the advertised speed of the drive. (See the configuration sketch below.)
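
For the read-balancing item above, here is a hedged sketch of what the setting can look like in a DRBD resource file; the resource name is hypothetical, and the policy shown is only one of several that DRBD supports:

    # /etc/drbd.d/r0.res (excerpt) -- illustrative only
    resource r0 {
        disk {
            # send reads to a peer when the local disk is congested;
            # other policies such as round-robin or least-pending exist as well
            read-balancing when-congested-remote;
        }
        # ... volume, node, and connection definitions omitted ...
    }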

Deduplication and Compression

Generally, it comes down to efficiency. EMC, NetApp, and the other big storage players use block-level replication in their appliances because that way the replication doesn’t need to go “all the way up the stack.” It enables flexibility, stability, and performance. And now, Red Hat has given us one more reason to replicate at the block level: deduplication and compression.

In the most recent Red Hat Enterprise Linux release, 7.5, Red Hat announced the integration of VDO, the Virtual Data Optimizer. VDO provides deduplication and compression for Linux environments. Though it can be paired with other replication technologies, it can only be fully leveraged when the replication sits underneath the VDO device. Why? You want to deduplicate and compress your data before replicating it, for efficiency gains.
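
To make the ordering concrete, here is a hedged sketch of stacking VDO on top of an already configured DRBD resource. The device, resource, and mount point names are assumptions, and the exact VDO options should be taken from the Red Hat documentation:

    # Assumes a DRBD resource "r0" backed by local disks is already configured
    # on both nodes, with /dev/drbd0 as its replicated block device.
    drbdadm up r0
    drbdadm primary r0

    # Create the VDO volume on top of the DRBD device, so data is deduplicated
    # and compressed before it reaches the replication layer below.
    vdo create --name=vdo_r0 --device=/dev/drbd0

    # Put a filesystem on the VDO volume and mount it (skipping discards at
    # mkfs time is commonly recommended on VDO volumes).
    mkfs.xfs -K /dev/mapper/vdo_r0
    mount /dev/mapper/vdo_r0 /mnt/data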

Effective transfer times

According to Louis Imershein, Red Hat’s Principal Product Manager for data reduction technologies, “Solutions like LINBIT’s DRBD are able to capture data below the VDO layer.  This means that datasets that benefit from deduplication and compression get replicated in their dehydrated form. With less data to move, Red Hat Enterprise Linux customers with LINBIT DRBD can benefit from faster effective transfer times and reduced bandwidth requirements.”

So, as you’re thinking about underlying storage for your applications, make sure you are using a solution that allows you to maximize the benefit of the existing Linux utilities built into, and around, your operating system. Thanks to Red Hat, block-level replication is now more important than ever.

 

Greg Eckert
In his role as the Director of Business Development for LINBIT America and Australia, Greg is responsible for building international relations, both in terms of technology and business collaboration. Since 2013, Greg has connected potential technology partners, collaborated with businesses in new territories, and explored opportunities for new joint ventures.

Cluster-wide management of replicated storage with LINSTOR

The new generation of LINBIT’s storage management system focuses on ease-of-use, resilience and scalability.

Today’s IT installations often consist of many individual servers, each running some part of the software infrastructure that together forms the service the installation is supposed to provide. Software processes rely on data, and high availability or disaster recovery solutions typically include replication of that data to one or more physically independent systems.

LINSTOR is the new generation of the software component that implements the automatic management of replicated storage resources in the LINBIT SDS system. Besides adding new features that users have previously requested, such as the ability to make use of multi-tier storage, LINBIT has also improved the existing features.


LINSTOR features

Ease-of-use

Our experience has shown that administrators of complex IT environments typically struggle with two things: figuring out how to make the system do what they want, and determining the cause of the problem when a system fails to do what the administrators expect. Creating a new product has given us the opportunity to consider these issues during the design phase and to focus on making the new software easier to use and troubleshoot. Two examples of related enhancements are the more consistent and logical naming of LINSTOR objects and commands, and the greatly enhanced logging and problem reporting.

Resilience

Another area of improvement that we focused on is the resilience of the system as a whole, which depends not only on the LINSTOR software, but also on the entire external environment. For this reason, we designed LINSTOR to manage unexpected changes and to recover from many different types of failures of external components.

Scalability

LINSTOR greatly increases scalability by its ability to perform changes on multiple resources and on multiple nodes concurrently, while still remaining responsive to new requests.

Multi-tier storage

Many users have requested support for multi-tier storage, and we are pleased to announce that, by adding the concept of storage pools, it has been implemented in LINSTOR. We made this a flexible feature, so that multiple storage pools can be configured, even using different storage backend drivers per storage pool and/or per node if necessary.
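
As a hedged illustration using the LINSTOR command line client (the syntax shown follows later LINSTOR releases and may differ from the version described here; node, pool, and volume group names are hypothetical):

    # Two storage pools on two nodes, backed by different LVM volume groups;
    # other backend drivers plug in the same way as they become available.
    linstor storage-pool create lvm alpha pool_hdd vg_hdd
    linstor storage-pool create lvm bravo pool_ssd vg_ssd

    # Place a new 20 GiB resource on a specific pool:
    linstor resource-definition create res1
    linstor volume-definition create res1 20G
    linstor resource create alpha res1 --storage-pool pool_hdd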

The new software is also capable of dealing with multiple network interface cards, each of which can be used as the replication link for DRBD resources or as the communication link for LINSTOR. This feature enables splitting the control network (carrying LINSTOR communication) from the data network (carrying DRBD replication traffic). IPv6 is supported for both LINSTOR communication and DRBD replication links.
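
A hedged sketch of separating the networks with the LINSTOR client (again using later-release syntax; interface names and addresses are hypothetical):

    # Register a dedicated interface on node "alpha" for replication traffic:
    linstor node interface create alpha data_nic 192.168.100.1

    # Prefer that interface for DRBD traffic of resources placed in this pool,
    # while LINSTOR's own control traffic stays on the node's default interface:
    linstor storage-pool set-property alpha pool_hdd PrefNic data_nic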

Production release roadmap

The roadmap for the production release includes support for:

  • taking snapshots of replicated resources
  • thinly provisioned LVM storage
  • ZFS storage
  • encrypted and authenticated network communication within LINSTOR
  • taking advantage of LINSTOR’s multi-user capability

Robert Altnoeder
Robert joined the LINBIT development team in 2013. He had worked with DRBD at a startup company in the SaaS field before joining LINBIT. His current primary field of work is the architecture and implementation of LINSTOR, the cluster management component of LINBIT's SDS software.

Split Brain? Never Again! A New Solution for an Old Problem: DRBD Quorum

While attending OpenStack Summit in Atlanta, I sat in a talk about the difficulties of implementing High Availability (HA) clusters. At one point, the speaker presented a picture of a split brain and discussed the challenges of resolving split-brain situations and of implementing STONITH in certain environments. As many of you know, “split brain” is a condition that can happen when each node in a cluster thinks that it is the only active node. The system as a whole loses its grip on its “state”; nodes can go rogue, and data sets can diverge without making it clear which one is primary. Data loss or data corruption can result, but there are ways to make sure this doesn’t happen, so I was interested in probing further.

Fencing is not always the solution


The split-brain problem can be solved by DRBD Quorum.

To make it more interesting, it turned out that the speaker’s company uses DRBD and Pacemaker for HA, a setup that is very familiar to us. After the talk, I approached the speaker and recommended that they consider “fencing” as a way to avoid split brain. Fencing regulates access to a shared resource and can be a good safeguard, but it needs a communication path of its own; best practice is not to rely on the same path as the resource it is trying to regulate, so a separate, redundant network is required. Unfortunately, in his environment, redundant networking was not possible. We needed another method.

Split brain is solved via DRBD Quorum

After talking to the speaker, it was clear to me that a new option for avoiding split brain or diverging data sets was needed, since existing solutions may not always be feasible in certain infrastructures. This got me thinking about the various options for avoiding split brain and how fencing could be implemented using the built-in communication found in DRBD 9. It turns out that the capability of mirroring across more than two nodes, found in DRBD 9, makes this a viable solution.

That idea sparked the work on the newest feature in DRBD: Quorum.

Shortly thereafter, the LINBIT team developed and integrated a working solution into DRBD. The code was pushed to the LINBIT repository and ready for testing.

Interest was almost immediate!

Later on, I happened to meet a few folks from IBM UK. They were working on IBM MQ Advanced, the well-known messaging middleware that helps integrate applications and data across multiple platforms. They intended to use DRBD for their replication needs and quickly became interested in the idea of using a Quorum mechanism to mitigate split-brain situations.

DRBD Quorum takes a new perspective

The DRBD Quorum feature takes a new approach to avoiding data divergence. A cluster partition may only modify the replicated data set if the number of nodes that can communicate with each other is greater than half of the overall number of nodes within the defined cluster. By allowing writes only on a node that can reach more than half of the nodes in the cluster, we avoid creating diverging data sets.

The initial implementation of this feature would cause any node that lost Quorum (and was running the application/data set) to be rebooted. Removing access to the data set is required to ensure the node stops modifying data. After extensive testing, the IBM team suggested a new idea: instead of rebooting the node, terminate the application. This action would then trigger the already available recovery process, forcing services to migrate to a node with Quorum!
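
For readers who want to see what this looks like in practice, here is a hedged sketch of a DRBD 9 resource configured with quorum; the resource name is hypothetical, and the option values shown are the ones described in the DRBD user’s guide linked below:

    # /etc/drbd.d/r0.res (excerpt) -- illustrative only, for a three-node setup
    resource r0 {
        options {
            # a partition may only write if it can reach more than half the nodes
            quorum majority;
            # on loss of quorum, return I/O errors so the application terminates
            # and the cluster manager restarts it on a partition that has quorum
            on-no-quorum io-error;
        }
        # ... node and connection definitions omitted ...
    }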

Attractive alternative to fencing

As usual, the devil is in the details. Getting the implementation right, with the appropriate resync decisions, was not as straightforward as one might think. In addition to our own internal testing, many IBM engineers tested it as well. We are happy to report that the current implementation does exactly what was expected!

Bottom line:

If you need to mirror your data set three times, the new DRBD Quorum feature is an attractive alternative to hardware fencing.

To learn more about the Quorum implementation in DRBD, please see the DRBD 9 user’s guide:
https://docs.linbit.com/docs/users-guide-9.0/#s-feature-quorum
https://docs.linbit.com/docs/users-guide-9.0/#s-configuring-quorum


Philipp Reisner
Philipp Reisner is founder and CEO of LINBIT in Vienna/Austria. His professional career has been dominated by developing DRBD, storage replication software for Linux. Today he leads a company of about 30 employees with locations in Vienna/Austria and Portland/Oregon.

 

 

LINBIT’s DRBD ships with integration to VCS

The LINBIT DRBD software has been updated with an integration for Veritas InfoScale Availability (VIA). VIA, formerly known as Veritas Cluster Server (VCS), is a proprietary cluster manager for building highly available clusters on Linux. Typical clustered applications include network file sharing, databases, and e-commerce websites. VCS solves the same problem as the open source Pacemaker project.

Yet, in contrast to Pacemaker, VCS has a long history on the Unix platform; it came to Linux as Linux began to surpass legacy Unix platforms. In addition to its longevity, VCS offers a strong and clean user experience. For example, VCS is ahead of Pacemaker when it comes to the clarity of its log files. Notably, Veritas Cluster Server has slightly fewer features than Pacemaker. (With great power comes complexity!)


The gears run even more smoothly: DRBD now has an integration for VCS.

VCS integration for DRBD

Since January 2018, DRBD has shipped with an integration for VCS. Users are now able to use VCS instead of Pacemaker and even control DRBD via VCS. The integration consists of two agents, DRBDConfigure and DRBDPrimary, which enable drbd-8.4 and drbd-9.0 for use with VCS.

Full documentation can be found here on our website:

https://docs.linbit.com/docs/users-guide-9.0/#s-feature-VCS

and

https://github.com/LINBIT/drbd-utils/tree/master/scripts/VCS

Besides VCS, LINBIT’s DRBD supports a variety of Linux cluster software, so you can keep your systems up and running:

  • Pacemaker 1.0.11 and up
  • Heartbeat 3.0.5 and up
  • Corosync 2.x and up

 

Reach out to [email protected] for more information.

We are driven by the passion of keeping the digital world running. That’s why hundreds of customers trust in our expertise, services, and products. Our open source product DRBD has been installed several million times. LINBIT established DRBD® as the industry standard for High Availability (HA) and data redundancy for mission-critical systems. DRBD enables disaster recovery and HA for any application on Linux, including iSCSI, NFS, MySQL, Postgres, Oracle, virtualization, and more.

Philipp Reisner
Philipp Reisner is founder and CEO of LINBIT in Vienna/Austria. His professional career has been dominated by developing DRBD, storage replication software for Linux. Today he leads a company of about 30 employees with locations in Vienna/Austria and Portland/Oregon.

 

Why Does Higher Education Require Always-On Capabilities?

People understand the importance of hospital systems being highly available. This is easy to explain, since people’s LIVES depend on medical equipment and information being accessible at all times. Likewise, people understand the importance of banks needing High Availability (HA): they expect access to their MONEY on demand and want it protected. You don’t have to be a techie to quickly understand why hospitals and banks need to be constantly available. However, the need for HA at educational institutions is a bit more difficult to identify at first, because they are not often thought of as places where ‘mission-critical’ systems are a real requirement. I believe the story is told less often, as it has an underwhelming shock factor: people’s lives are not at stake, nor is their money hanging in the balance. At LINBIT, we have many educational customers, including prestigious universities, and we wanted to get their perspective on why HA and why LINBIT.

Dreaded Day of Downtime

Some say that no one dreads a day of downtime like a storage admin.

I disagree. Sure, the storage admins might be responsible for recovering a whole organization if an outage occurs; and sure, they might be the ones who lose their jobs from an unexpected debacle, but I would speculate that others have more to lose.

First, the company’s reputation takes a big, possibly irreparable hit with both clients and employees. Damage control usually lasts far longer than the original outage. Take the United Airlines case from earlier in 2017, when a computer malfunction led to the grounding of all domestic flights. Airports across the country were forced to tweet out messages about the technical issues after receiving an overwhelming number of complaints. After an outage such as this one, it can take months or years to repair the trust of your customers. Depending upon the criticality of the services, a company could go bankrupt. Despite all this, even the company isn’t the biggest loser; it is the end user: and that is what the rest of this post will focus on.

Let’s say you’re a senior in college. It’s spring term, and graduation is just one week away.  Your school has an online system to submit assignments which are due at midnight, the day before finals week. Like most students at the school, you log into the online assignment submission module, just like you have always done.  Except this time, you get a spinning wheel. Nothing will load. It must be your internet connection. You call a friend to have them submit your papers, but she can’t login either. The culprit: the system is down.

Now, it's 10:00 PM and you need to submit your math assignment before midnight. At 11:00 PM you start to panic. You can't log in, and neither can your classmates. Everyone is scrambling. You send a hastily written email to your professor explaining the issue. She is unforgiving because you shouldn't have procrastinated in the first place. At 1:00 AM, you refresh the system and everything is working (slowly), but the deadlines have passed. The system won't let you submit anything. Your heart sinks as you realize that without that project, you will fail your math class and not be able to graduate.

This system outage caused heartache, stress, and uncertainty for the students and teachers, along with a whole lot of pain for the administrators. The kicker is that the downtime happened when traffic was anticipated to be the highest! Of course, the servers are going to be overloaded during the last week of spring term. Yet, predictably, the university will send an email stating that it experienced higher-than-expected loads and that, ultimately, it wasn't prepared for them.

During this time, traffic was 15 times its normal usage, and the hypervisor hosting the NFS server and the file-sharing system was flooded with requests. It blew a fan and eventually overheated. Sure, the data was still safe inside the SAN on the backend. However, none of that mattered, because the students couldn't access the data until the admin rebuilt the hypervisor. By the time the server was back up and running, the damage was done.

High Availability isn’t a simple concept, but it is critical for your organization, your credibility, and, even more importantly, for your end users or customers. In today’s world, the bar for “uptime” is monstrously high; downtime is simply unacceptable.

If you’re a student, an admin, or a simple system user, I have a question for you (and don’t just think about yourself; think about your boss, colleagues, and clients):

What would your day look like if your services went unresponsive right… NOW?!

Learn more about the costs and drivers of data loss, and how to avoid it, by reading the paper from OrionX Research.

 

Greg Eckert
In his role as the Director of Business Development for LINBIT America and Australia, Greg is responsible for building international relations, both in terms of technology and business collaboration. Since 2013, Greg has connected potential technology partners, collaborated with businesses in new territories, and explored opportunities for new joint ventures.

The Top Issues and Topics for HA-DR in 2018

2017 is coming to a close, and it is a good time to look back and then look forward. Thank you to our customers, partners, and the broader open source community for your participation; 2017 was a year of many accomplishments for LINBIT. We celebrated over 1.6 million downloads of DRBD, expanded into China, and released 4 new technical guides: HA NFS on RHEL 7, HA iSCSI on RHEL 7, HA & DR for ActiveMQ, and DRBD with Encryption.

Deploy a DRBD/Pacemaker Cluster using Ansible

Admins and potential clients looking to get a feel for managing a DRBD/Pacemaker cluster fairly often ask us, “Do you have a sandbox cluster we can play around in?” Instead of spinning up some cloud instances and doling out access, we decided it would be better for our potential clients to be able to see how it all works in their actual environment. Ansible seemed like the best way to create a “one size fits all” solution for deploying such clusters into an unknown environment, and after a few days of hacking together a playbook, it proved to be a good choice.

The end result was an Ansible playbook that can deploy a few different cluster configurations onto a pair of nodes in any environment. The playbook prompts the user for some inputs that specify which type of cluster to deploy, which LINBIT contract to register the target nodes with, and which credentials to use for said registration; all of these can be set in your inventory file or passed as extra arguments on the command line to avoid the prompts. After the playbook runs, you’re left with an initialized DRBD device and Pacemaker cluster at the very least, or a full-blown HA cluster serving out either iSCSI or NFS (expect more later) that you can test with to your heart’s content.
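
For example, here is a hedged sketch of a non-interactive run; the playbook and variable names are hypothetical placeholders, so the actual names should be taken from the repository mentioned below:

    # Hypothetical invocation -- check the playbook's README for the real
    # playbook path and variable names.
    ansible-playbook -i inventory.ini site.yml \
        --extra-vars "cluster_type=nfs \
                      linbit_contract=XXXXX \
                      linbit_user=me linbit_pass=secret"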

You can find directions and my Ansible playbook’s repo on GitHub.

Matt Kereczman
Matt is a Linux Cluster Engineer at LINBIT with a long history of Linux system administration and Linux system engineering. Matt is a cornerstone of LINBIT’s support team and plays an important role in making LINBIT’s support great. Matt was President of the GNU/Linux Club at Northampton Area Community College prior to graduating with honors from Pennsylvania College of Technology with a BS in Information Security. Open source software and hardware are at the core of most of Matt’s hobbies.

 

LINBIT Delivers High Availability and Disaster Recovery for Apache ActiveMQ Messaging Software

LINBIT Solution Simplifies HA and DR for ActiveMQ

BEAVERTON, Ore., Dec. 6, 2017 — LINBIT, a leader in open source High Availability (HA), Software Defined Storage (SDS), Disaster Recovery (DR) and the force behind the DRBD software, today announced that it is bringing disaster recovery capabilities to Apache ActiveMQ™, the most popular open source messaging and Enterprise Integration Pattern (EIP) server software.  

The LINBIT solution simplifies HA and DR for ActiveMQ because it does not require a clustered file system or shared database, a common requirement in current HA/DR implementations.

“Reliable communication in a distributed environment is a critical part of modern IT systems,” said Philipp Reisner, CEO of LINBIT. “LINBIT DR for ActiveMQ reduces cost and complexity for data centers and mitigates the risk often seen with SAN, clustered file systems, or shared databases.”

At TruckPro, the LINBIT DRBD software “is used primarily for resiliency,” stated Henry Santamaria, Director of Infrastructure. “Uptime is important for our business and anything we can do to quickly recover from any issue is paramount. Our investment in LINBIT yielded a noticeable increase in performance and stability which we did not have before.”

Known for its stability and performance over the last 15 years, LINBIT software is used by thousands of organizations across the globe, and is embedded in products from independent software vendors and established equipment manufacturers under OEM agreements. “With over 10,000 downloads per month, it is easy to see why even the most demanding environments rely on LINBIT to reduce risk and improve performance,” said Brian Hellman, LINBIT COO.

About LINBIT (http://www.linbit.com)

LINBIT is the force behind DRBD and the de facto open standard for High Availability (HA) software for enterprise and cloud computing. The LINBIT DRBD software is deployed in thousands of mission-critical environments worldwide to provide High Availability (HA), Geo Clustering for Disaster Recovery (DR), and Software Defined Storage (SDS) for OpenStack based clouds. Visit us at http://www.LINBIT.com, https://twitter.com/linbit, or https://www.linkedin.com/company/linbit. LINBIT is Keeping the Digital World Running.

Read it on PRWeb »