Neither public clouds nor private cloud technologies like VMware or OpenStack are efficient for the new generation of data-intensive applications, and disk-heavy bare metal servers suffer from a lack of flexibility and manageability.
Administrators who run Hadoop clusters and other big data workloads on racks of traditional commodity servers, each with a preconfigured number of hard drives, struggle to strike a balance between CPU and storage in their data centers. Modern applications are constantly changing, and even if these two components are initially well balanced, a sudden influx of data or a job request from data scientists can instantly upset that balance. Since traditional racks lack flexibility, these fluctuations make it hard for organizations to remain agile.
“With current technologies and the rigid marriage of storage and compute within data centers, it becomes impossible to strike the right balance,” said Gene Banman, CEO at DriveScale. “Organizations need flexibility in the data center to maximize the potential of each rack and get the most from their big data clusters. Disaggregating the disk from the CPU nodes restores this flexibility.”
DriveScale provides organizations with a scale-out data center architecture and disaggregated direct-attached storage for servers, enabling clients to easily support big data workloads of any size as they scale, while helping them realize the full benefit of their investments in big data technologies such as Hadoop. DriveScale can also reduce TCO by as much as 60 percent over five years by replacing the need to add traditional servers and racks to existing data center infrastructure.
Using DriveScale’s scale-out architecture, administrators can deploy clusters easily and efficiently without disrupting their existing infrastructure.
DriveScale provides the only rack-scale architecture that enables administrators to bring the full benefit of their big data investments to their organization.
With tightly integrated hardware and on-premises software, the DriveScale System allows enterprise organizations to combine and reconfigure servers with any mix of storage and compute in real time.
For instance, companies that work with Hadoop tend to silo clusters, creating inefficiencies throughout the process. To manage increasing volumes of data, these companies have traditionally needed to upgrade their commodity servers to take advantage of faster CPUs, discarding disk drives that still had plenty of storage capacity. The DriveScale System removes these inefficiencies by decoupling storage and compute, without requiring the implementation of an entirely new data center infrastructure.
“DriveScale provides easy cloud-based administration of deployments, managing and optimizing Hadoop clusters and other big data application workloads throughout their lifecycles,” Banman said. “By taking a software-defined approach to data center infrastructure, we actively prevent one of the most common problems data center administrators face: over-provisioning. Allowing them to rebalance infrastructure based on changing application stacks and volumes of data flow means that admins can proactively determine the necessary infrastructure for their organization’s needs, without worrying about fixed storage-to-compute ratios.”
Scale-out infrastructure has the potential to completely remake the data center, but challenges to adoption remain. DriveScale offers enterprises a way to quickly adopt an effective scale-out infrastructure that gives data center administrators, both within Hadoop clusters and the broader big data industry, the ability to say yes to more big data workloads than ever before.