ArgoTech Ethernet Storage Fabric for Petascale environments

    Petascale storage environments must meet a wide range of operating criteria spanning capacity, read performance and write performance.

    A compelling price point is crucial, but not at the expense of high availability.

In these large data centre environments, often multi-tenant, the storage platform needs to deliver predictability in its operational characteristics and in its ongoing management and maintenance requirements. To meet these needs, the ArgoTech Ethernet Storage Fabric (AESF) has been designed using industry-standard technology based on a simple yet innovative architecture.

Built for simplicity

Building a Petascale storage solution is simple. It starts with our Perseus NAS appliance, which runs our core ‘single image’ Argo Operating System. Multiple Perseus NAS appliances connected via standard 10GbE/40GbE Ethernet switches create our ArgoTech Ethernet Storage Fabric (AESF), capable of scaling to multiple petabytes as part of our massively parallel storage architecture.

Built for scale

The use of ATA over Ethernet (AoE) for the fabric's interconnection layer delivers very low latency and performance overhead, while all application-facing data transfer uses industry-standard protocols such as NFS, CIFS and, optionally, iSCSI for block-based applications. The ZFS-based file system can offer a single standard namespace of up to 2.2 petabytes, which can be extended further as required.
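Because client access is via standard NFS, attaching a host needs no proprietary client software. As a purely illustrative example (the hostname, export path and mount point below are hypothetical placeholders, not ArgoTech defaults), a client's /etc/fstab entry might look like:

```
# Hypothetical /etc/fstab entry -- host and export names are placeholders
perseus-01:/export/tenant-a   /mnt/aesf   nfs   rw,hard,vers=3   0 0
```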

Built for capacity and performance

The Argo Ethernet Storage Fabric allows attachment of multiple Ethernet Layer 2 Storage Shelves offering 36, 72 or 90 drive slots in a compact footprint, with a flexible mix of NL-SATA or SSD drives. The 90-slot shelf offers 360 TB in 6U using 4 TB drives, with throughput of up to 4,800 MB/s and in excess of 700,000 IOPS. As a true fabric and scale-out architecture, adding more Perseus NAS appliances and Ethernet Storage Shelves scales performance in line with growing storage capacity.
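The headline capacity figure follows directly from the shelf geometry. A quick sanity check of the arithmetic, using only the slot count and drive size stated above:

```python
import math

# Raw capacity of the 90-slot shelf using 4 TB drives (figures from the text).
slots = 90
drive_tb = 4
shelf_tb = slots * drive_tb            # 360 TB raw in 6U

# Shelves needed to reach one petabyte of raw capacity (1 PB = 1,000 TB here).
shelves_per_pb = math.ceil(1000 / shelf_tb)

print(shelf_tb, shelves_per_pb)
```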

Built for reliability

The built-in Redundant Array of Independent Nodes (RAIN), a technology developed as a joint research program between the California Institute of Technology, NASA's Jet Propulsion Laboratory and the Defense Advanced Research Projects Agency, is particularly well suited to the fault-tolerance demands of large storage environments. With RAIN, data is striped across multiple disks and nodes on separate paths through multiple Perseus NAS appliances. If a disk, node or NAS appliance should fail, the data remains available. An automatic self-healing process then begins, even before the failed element is recovered, to rebalance and protect any impacted data. RAIN high-availability features are built directly into the AESF and its operating system and are completely transparent to every application connected to the storage.
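The general principle behind node-level striping with redundancy can be sketched in a few lines: data is split across N data nodes plus a parity node, and any single lost node can be rebuilt from the survivors. The sketch below uses simple XOR parity as an illustration of that principle; it is a generic example, not ArgoTech's actual RAIN implementation.

```python
# Generic illustration of striping with redundancy (NOT ArgoTech's RAIN):
# split data across N data "nodes" plus one XOR parity "node", then rebuild
# any single lost node from the surviving ones.

def stripe(data: bytes, n_data_nodes: int) -> list[bytes]:
    """Split data into n_data_nodes equal chunks (zero-padded) plus parity."""
    chunk_len = -(-len(data) // n_data_nodes)          # ceiling division
    padded = data.ljust(chunk_len * n_data_nodes, b"\x00")
    chunks = [padded[i * chunk_len:(i + 1) * chunk_len]
              for i in range(n_data_nodes)]
    parity = bytearray(chunk_len)
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return chunks + [bytes(parity)]

def rebuild(chunks: list[bytes], lost: int) -> bytes:
    """Reconstruct the chunk at index `lost` by XOR-ing the survivors."""
    out = bytearray(len(chunks[0]))
    for i, chunk in enumerate(chunks):
        if i == lost:
            continue
        for j, b in enumerate(chunk):
            out[j] ^= b
    return bytes(out)

nodes = stripe(b"petascale payload", 4)   # 4 data chunks + 1 parity chunk
assert rebuild(nodes, 2) == nodes[2]      # a "failed node" is recoverable
```

Production fabrics use more sophisticated erasure codes than single XOR parity, but the recovery principle is the same: the surviving members jointly encode the lost one.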

With advanced features

The ArgoTech Ethernet Storage Fabric for Petascale environments includes all the elements that large data storage users require, plus an extended set of advanced features that add further value.

Advanced features include:

  • Unified NFS/CIFS and iSCSI protocol support allows simultaneous data sharing among different operating system and virtualization environments

  • Tuneable read/write cache enables faster performance than legacy NAS across a range of usage patterns, scales, and data transfer needs

  • Selectable data compression algorithms that can reduce storage by up to 50% while maintaining application performance

  • Leverages ZFS end-to-end checksumming, protecting Perseus against silent data corruption and transmission errors

  • Storage automation via REST APIs, plus support for Amazon S3

  • Unlimited snapshots and clones provide for the unique needs of highly virtualized, development, and dynamic environments

  • Geographically dispersed clusters across racks, rooms or buildings for multi-site Business Continuity without additional software licensing
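Several of the features above, unlimited snapshots and clones in particular, rest on the copy-on-write semantics that ZFS provides. The sketch below is a generic, minimal illustration of that technique (not a description of Perseus internals): taking a snapshot copies only block references, and data blocks are duplicated only when overwritten.

```python
# Generic copy-on-write sketch showing why snapshots can be near-free:
# a snapshot records block references only; data blocks are immutable and
# a new block is allocated on overwrite. Illustrative only, not Perseus code.

class CowStore:
    def __init__(self):
        self.blocks = {}    # block_id -> bytes (immutable once written)
        self.live = {}      # logical address -> block_id (current view)
        self.next_id = 0

    def write(self, addr, data):
        self.blocks[self.next_id] = data    # new block; old block untouched
        self.live[addr] = self.next_id
        self.next_id += 1

    def snapshot(self):
        return dict(self.live)              # metadata copy only, no data moved

    def read(self, view, addr):
        return self.blocks[view[addr]]

store = CowStore()
store.write("a", b"v1")
snap = store.snapshot()
store.write("a", b"v2")                     # live view diverges from snapshot
print(store.read(snap, "a"), store.read(store.live, "a"))
```

Because a snapshot is only a reference map, the marginal cost of each additional snapshot is metadata-sized, which is why "unlimited" snapshots are practical.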
