To allow your HPC applications to scale, you need the lowest latency and fastest data communication between compute nodes. OCF solutions include InfiniBand and low-latency Ethernet to deliver the best possible performance.
Creating efficient interconnect links between the servers and storage within your cluster can involve many different technologies transmitting data over various distances. Ethernet, Omni-Path and InfiniBand are three of the most commonly used in the world of HPC.
Ethernet is widely regarded as a cost-efficient way to connect smaller clusters, whereas InfiniBand and Omni-Path offer very high throughput and very low latency, making them better suited to larger deployments.
InfiniBand has become the de facto standard for HPC clusters, offering high bandwidth and exceptionally low latency. An InfiniBand interconnect helps ensure that you achieve the highest performance and application scalability on your cluster.
As well as being high performance, InfiniBand fabrics are scalable. The majority of OCF customer systems are either based around a single switch (today's latest Mellanox 1U HDR 200Gb switches feature up to 40 x HDR 200Gb/s ports or 80 x HDR100 100Gb/s ports) or multiple switches configured in a 'fat-tree' topology, using one or more 'core' switches and multiple 'leaf' switches. Using this technique, fabrics scaling to thousands of machines can be built from 1U InfiniBand top-of-rack switches.
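To see why a fat-tree of 1U switches scales so far, it helps to count ports. The sketch below is an illustrative back-of-the-envelope calculation (not an OCF tool) for a non-blocking two-tier fat-tree built from identical switches: each leaf switch dedicates half its ports to hosts and half to uplinks, with one link to each core switch. The `fat_tree_capacity` function name and the simplifying assumptions (uniform radix, fully non-blocking) are ours.

```python
def fat_tree_capacity(radix: int) -> dict:
    """Capacity of a non-blocking two-tier fat-tree built from
    identical switches, each with `radix` ports.

    Each leaf switch uses radix/2 ports for hosts and radix/2 for
    uplinks (one link to each core switch); each core switch has one
    port per leaf, so up to `radix` leaves fit under radix/2 cores.
    """
    hosts_per_leaf = radix // 2          # downlinks to hosts
    leaf_switches = radix                # each core port serves one leaf
    core_switches = radix // 2           # one core per leaf uplink
    return {
        "hosts": hosts_per_leaf * leaf_switches,   # radix**2 // 2
        "leaf_switches": leaf_switches,
        "core_switches": core_switches,
    }

# A 40-port HDR switch supports a non-blocking fabric of 800 hosts;
# in 80-port HDR100 split mode the same two-tier design reaches 3,200.
print(fat_tree_capacity(40)["hosts"])  # 800
print(fat_tree_capacity(80)["hosts"])  # 3200
```

This is why splitting HDR ports into HDR100 links is attractive for large clusters: doubling the effective radix quadruples the host count of a two-tier fabric.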
With increasing throughputs and ever-reducing latencies, Ethernet is more regularly being considered for HPC applications. The reasons vary: it may be that the majority of a customer's workloads do not scale across multiple machines, or it may be the cost saving of running only one network (even with an InfiniBand network for HPC storage and MPI traffic, an Ethernet network is still required for management).
Whatever the reason Ethernet is being considered, the OCF team can review your workloads and applications and ensure that the most appropriate solution is recommended.