What makes REX-Ray different from other solutions?

REX-Ray prides itself on being simple and focused. It is also a mature tool with deliberate architecture choices and enterprise features. It leverages libStorage to ensure a consistent user experience, interoperability with cloud native tools, and integration with external storage platforms. REX-Ray’s build process includes a mature CI/CD pipeline that performs extensive automated testing of community and team contributions.

What’s the difference between REX-Ray and libStorage?

REX-Ray is a container storage orchestration engine. It ensures front-end interoperability with cloud native orchestrators and runtimes. It leverages a cloud native storage library called libStorage to perform back-end storage orchestration and volume lifecycle functions irrespective of storage platform. libStorage has been absorbed into REX-Ray to simplify end-user support for any part of REX-Ray’s feature tree; this unification creates a fully cohesive project and implementation of CSI.

What is libStorage?

libStorage is a mature and proven storage library, framework, and implementation of a storage interface for cloud native tools. It provides reference client and server implementations that handle communication using a defined controller API. libStorage is stateless, abstracting at a control-plane layer while leaving storage resource identifiers in their original form. It is embedded as middleware in REX-Ray to perform storage functions.

libStorage enables a client/server architecture in which a centralized service (the controller) accepts requests from clients to jointly perform storage orchestration functionality. The architecture can take multiple forms. The standalone version packages the client and server together as a single stateless engine. The centralized version deploys lightweight clients to worker nodes, while the server runs centrally and holds the storage-platform-specific information, including credentials.
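To make the centralized form concrete, here is a hypothetical sketch of the two halves of such a configuration. The hostname, port, service name, and credential keys below are illustrative assumptions, not authoritative values; consult the configuration documentation for the exact schema.

```yaml
# Controller (centralized server): holds the platform details and
# credentials. All values shown are placeholders for illustration.
libstorage:
  host: tcp://controller.example.com:7979
  service: ebs
ebs:
  accessKey: EXAMPLE_ACCESS_KEY
  secretKey: EXAMPLE_SECRET_KEY
```

```yaml
# Worker-node client: knows only how to reach the controller,
# never the storage credentials themselves.
libstorage:
  host: tcp://controller.example.com:7979
```

Keeping credentials only on the controller is what allows the worker-node clients to remain simple and stateless.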


Can REX-Ray support multiple storage providers?

Yes! The controller can include drivers for multiple storage platforms, each enabled through its own configuration parameters. The modules section of the configuration file allows multiple storage drivers to be activated per controller.
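As a rough sketch of what that looks like, a modules section might activate two drivers side by side. The driver names, socket paths, and keys below are illustrative assumptions rather than a definitive configuration:

```yaml
# Illustrative only: two storage drivers activated as separate modules,
# each exposed on its own plugin socket.
rexray:
  modules:
    default-docker:
      type: docker
      host: unix:///run/docker/plugins/ebs.sock
      libstorage:
        service: ebs
    second-docker:
      type: docker
      host: unix:///run/docker/plugins/isilon.sock
      libstorage:
        service: isilon
```

Each module pairs one storage driver with one endpoint, so a single controller can serve several platforms at once.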

To deploy multiple plugins, see the Docker managed plugins for REX-Ray, which package the REX-Ray engine in a container image built for an individual driver. All configuration for this method is performed through environment variables.
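For example, a single driver plugin can be installed with its settings supplied as key/value pairs at install time. The credential values below are placeholders:

```shell
# Install the EBS managed plugin; configuration is passed as
# KEY=value settings (placeholder credentials shown).
docker plugin install rexray/ebs \
  EBS_ACCESSKEY=EXAMPLE_ACCESS_KEY \
  EBS_SECRETKEY=EXAMPLE_SECRET_KEY
```

Installing a second plugin for a different driver (with its own settings) is how multiple platforms are served under this method.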

What language is REX-Ray written in?

Go, of course. Go is friendly to open source development and simple to operate, since its programs can be deployed as single binaries without external dependencies. We love it.

What’s up with the dog logo?

In the early days of {code}, the container persistence projects had an overarching theme called “Dogged”. We were doggedly pursuing persistent data for containers, which meant that each project name held true to the dog theme; REX-Ray was one of these project names. How did that name come about? Well, REX because it’s a standard dog name. “Come here Rex! Good dog Rex!”. The X-Ray portion has its own story. Today, when using REX-Ray, a configuration file or set of environment variables tells REX-Ray which storage platforms to use, along with the accompanying credentials. However, in the beginning, REX-Ray performed a process of “introspection” when the service was executed: a series of tests ran and automatically discovered which type of storage platform it had access to. This is where the X-Ray comes in. Hence:

REX + X-Ray = REX-Ray.

What’s the history of REX-Ray?

Clint Kitson and Andrew Kutz are the founders of REX-Ray and libStorage. Clint worked with the Docker team to learn about and contribute to the new volume driver plugin interface being developed. After several weeks of creating storage drivers for the initial storage platforms, the initial REX-Ray project came to be. Andrew soon took over the reins of the project with his extensive framework and storage experience. With an eye towards interoperability, the first cloud native storage library, libStorage, was created.

As an open source project, REX-Ray has always been (and will continue to be) built with community contribution and collaboration.

Are there any examples or reference architectures?

Yes, there are! See the Reference Architecture section to learn more.