HOW OBJECT STORAGE TACKLES THORNY EXASCALE PROBLEMS


Traditional storage-area network (SAN), network-attached storage (NAS), and unified storage systems were never designed to handle exascale problems, which involve more than 1,000 petabytes (PB), or 1 million terabytes (TB), of data. In fact, most can barely manage the low end of petascale workloads. Even scale-out SAN and NAS cannot solve exascale problems today.

Some IT pros think they have little to be concerned about and can deal with this problem when it appears on their horizon. Yet that horizon is closer than it may seem: service providers and a growing number of enterprise IT organizations are already contending with exascale problems.

Users and service providers report that data storage consumption rates are accelerating, not decelerating. The current market consensus puts data storage growth at approximately 62 percent CAGR, which means storage capacity must roughly double every 18 months just to keep pace. Hard disk drives (HDDs) stopped keeping pace several years ago: last year the largest HDD grew only from 2 TB to 3 TB, a 50 percent increase; this year it reached 4 TB, a 33 percent increase; and next year the biggest HDD is expected to reach 5 TB, a 25 percent increase.
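To make the "double every 18 months" figure concrete: the doubling time at a given compound annual growth rate r follows from solving (1 + r)^t = 2 for t. Here is a minimal sketch of that calculation in Python (the 62 percent figure is the consensus estimate cited above; the function name is illustrative):

```python
import math

def doubling_time_years(cagr: float) -> float:
    """Years until capacity demand doubles at a given compound annual growth rate."""
    return math.log(2) / math.log(1 + cagr)

# At the ~62 percent CAGR cited above, demand doubles in about 1.44 years,
# i.e., roughly every 17-18 months -- the "double every 18 months" pace.
print(f"Doubling time: {doubling_time_years(0.62):.2f} years")
```

Run against the single-digit percentage growth rates of recent HDD generations, the same formula yields doubling times measured in years rather than months, which is the gap the article describes.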

Solid-state drives (SSDs) are faring no better. When the media cannot keep up with consumption, the result is larger storage systems, and many more of them. The current storage model is manageable only up to capacities in the low double-digit petabytes; beyond that point, additional systems are required.
