NVIDIA Supercharged Computing

NVIDIA GPUs for high-performance computing


Fuelled by the insatiable demand for better 3D graphics and the massive scale of the gaming market, NVIDIA has evolved the GPU into a computer brain at the exciting intersection of virtual reality, high-performance computing, and artificial intelligence.

OCF has been an NVIDIA partner for over 10 years, supplying many hundreds of NVIDIA GPUs into the research market for visualisation, accelerated compute, and deep learning and artificial intelligence applications.

OCF holds the highest level of NVIDIA Compute DGX Accreditation (ELITE) in addition to holding NVIDIA Visualization Accreditation (PREFERRED). OCF is able to supply NVIDIA-branded products including the NVIDIA DGX A100, DGX Station A100, and standalone GPUs including the A100, RTX 6000 and RTX 8000.

In addition to NVIDIA-branded products, OCF offers systems and solutions from NVIDIA OEM partners Dell Technologies, Fujitsu, Gigabyte, Lenovo and Supermicro; supporting infrastructure from NVIDIA Networking (Mellanox); and NVIDIA DGX POD reference architectures from storage partners DDN, Dell Technologies, IBM and NetApp.

OCF can design, deploy and support a balanced, end-to-end solution centred around the latest NVIDIA GPU technologies at practically any scale.

GPU & Accelerated Servers


The ground-breaking NVIDIA H100 Tensor Core GPU makes the NVIDIA DGX H100 an AI powerhouse. This latest iteration of NVIDIA's iconic DGX system has been designed to maximise AI throughput while providing enterprises with a highly refined, systemised, and scalable platform to help achieve breakthroughs in areas such as natural language processing, data analytics and much more.

The NVIDIA DGX H100 is available on-premises and through a wide variety of access and deployment options. The DGX H100 delivers the performance that enterprises need to tackle the largest challenges in AI today. Featuring 6x more performance than the previous generation, 2x faster networking with NVIDIA ConnectX-7 network interface cards, and NVIDIA BlueField DPUs, the next-generation architecture is supercharged for complex AI tasks.

The DGX H100 also boasts 640GB of total GPU memory, 32 petaFLOPS of AI performance, 4x NVSwitch, and dual x86 CPUs with 2TB of total system memory. It also comes with 2x 1.92TB NVMe M.2 drives and 8x 3.84TB NVMe U.2 drives.
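These headline totals line up with the per-GPU figures NVIDIA publishes for the H100. As a quick sanity-check sketch (assuming eight H100 SXM GPUs with 80 GB of HBM each and roughly 4 petaFLOPS of sparse FP8 AI performance per GPU):

```python
# Sketch: how the DGX H100's headline totals break down per GPU.
# Assumes 8x H100 SXM GPUs, 80 GB HBM each, and ~4 petaFLOPS of
# FP8 (sparse) AI performance per GPU, per NVIDIA's H100 specs.
NUM_GPUS = 8
HBM_PER_GPU_GB = 80
FP8_PFLOPS_PER_GPU = 4  # approximate, with sparsity

total_memory_gb = NUM_GPUS * HBM_PER_GPU_GB    # 640 GB total GPU memory
total_pflops = NUM_GPUS * FP8_PFLOPS_PER_GPU   # ~32 petaFLOPS AI performance

print(total_memory_gb, total_pflops)  # → 640 32
```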

In terms of the software, each DGX H100 system comes with preinstalled DGX OS, which is based on Ubuntu Linux and includes the standard DGX software stack. Optionally, customers can install Ubuntu Linux or Red Hat Enterprise Linux and the required DGX software stack separately.



NVIDIA DGX™ A100 is the universal system for all AI workloads—from analytics to training to inference. DGX A100 sets a new bar for compute density, packing 5 petaFLOPS of AI performance into a 6U form factor, replacing legacy compute infrastructure with a single, unified system. DGX A100 also offers the unprecedented ability to deliver fine-grained allocation of computing power, using the Multi-Instance GPU capability in the NVIDIA A100 Tensor Core GPU, which enables administrators to assign resources that are right-sized for specific workloads. This ensures that the largest and most complex jobs are supported, along with the simplest and smallest. Running the DGX software stack with optimised software from NGC, the combination of dense compute power and complete workload flexibility makes DGX A100 an ideal choice for both single-node deployments and large-scale Slurm and Kubernetes clusters deployed with NVIDIA DeepOps.
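The MIG right-sizing described above amounts to a slice budget: each A100 exposes up to seven GPU instances that an administrator carves up per workload. A minimal sketch (the workload mix and slice counts below are hypothetical; real MIG profiles such as 1g.5gb or 3g.20gb are configured with nvidia-smi):

```python
# Sketch of MIG right-sizing: one A100 exposes up to 7 GPU slices that
# can be assigned to isolated instances. The workload mix is hypothetical.
MIG_SLICES_PER_GPU = 7

# hypothetical per-workload slice requests
workloads = {"inference-a": 1, "inference-b": 1, "notebook": 2, "training": 3}

used = sum(workloads.values())
assert used <= MIG_SLICES_PER_GPU, "requests exceed one GPU's MIG capacity"
print(f"{used}/{MIG_SLICES_PER_GPU} slices allocated")  # → 7/7 slices allocated
```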



NVIDIA DGX Station A100 brings AI supercomputing to data science teams, offering data center technology without a data center or additional IT infrastructure. Designed for multiple, simultaneous users, DGX Station A100 leverages server-grade components in an office-friendly form factor. It's the only system with four fully interconnected and Multi-Instance GPU (MIG)-capable NVIDIA A100 Tensor Core GPUs with up to 320 GB of total GPU memory that can plug into a standard power outlet, resulting in a powerful AI appliance that you can place anywhere.



The NVIDIA EGX AI platform delivers the power of accelerated AI computing from data centre to edge with a range of optimised hardware, an easy-to-deploy, cloud-native software stack and management service. The EGX hardware portfolio ranges from the tiny, power-efficient NVIDIA® Jetson™ family for tasks such as image recognition and sensor fusion at the edge, to entire racks of NVIDIA-Certified Systems™ in the data centre which can deliver more than 10,000 tera operations per second (TOPS) to serve hundreds of users with real-time speech recognition and other complex AI experiences.




The NGC catalogue provides GPU-optimised software for deep learning (DL), machine learning (ML), and high-performance computing (HPC) that meets the needs of data scientists, developers, and researchers with various levels of AI expertise. Quickly deploy AI frameworks with containers, get a head start with pre-trained models or resources, and use domain-specific SDKs, use-case-based Collections, and Helm charts for the fastest AI implementations and a faster time-to-solution.



NVIDIA Clara is a healthcare application framework for AI-powered imaging, genomics, and for the development and deployment of smart sensors. It includes full-stack GPU-accelerated libraries, SDKs and reference applications for developers, data scientists and researchers to create real-time, secure and scalable solutions.



NVIDIA Metropolis

NVIDIA Metropolis is leading an AI revolution, giving you the tools, technologies, and expertise to meet every challenge with smarter, faster applications that can gather data from trillions of sensors and other IoT devices and extract actionable insights. NVIDIA Metropolis uses the low power of NVIDIA® Jetson™ in cameras and appliances at the edge, the massive compute of NVIDIA Tesla® servers in the cloud, and the NVIDIA DeepStream SDK powered by NVIDIA TensorRT™ to deliver a complete IVA solution.

Request a call back to discuss NVIDIA Metropolis


The NVIDIA Isaac Software Development Kit (SDK) brings intelligence to robots. The platform comes stacked with comprehensive tools, application frameworks, GPU-enabled algorithms, reference designs, and pre-trained capabilities to accelerate development workflows for robotics applications.

Isaac Sim 2020, built on NVIDIA Omniverse, is a robotics app designed to import, build, and test robots in a photorealistic and high-fidelity physics 3D environment. It works in both local and cloud-based systems.

Request a call back to discuss NVIDIA Isaac



Jarvis is a fully accelerated application framework for building multimodal conversational AI services that use an end-to-end deep learning pipeline. Enterprise developers can easily fine-tune state-of-the-art models on their data to achieve a deeper understanding of their specific context, and optimise for inference to offer end-to-end real-time services that run in less than 300 milliseconds (ms) and deliver 7x higher throughput on GPUs compared with CPUs.

The Jarvis framework includes pre-trained conversational AI models, tools in the NVIDIA AI Toolkit, and optimised end-to-end services for speech, vision, and natural language understanding (NLU) tasks.

Request a call back to discuss NVIDIA Jarvis



The new NVIDIA H100 Tensor Core GPU provides unprecedented performance, power and security for any data centre tackling heavy workloads. With the NVIDIA NVLink Switch System, up to 256 H100 GPUs can be connected to accelerate exascale workloads, while the dedicated Transformer Engine supports trillion-parameter language models.

The NVIDIA H100 GPU uses innovations in the NVIDIA Hopper architecture to provide industry-leading conversational AI, speeding up language models by an incredible 30x over the previous generation.

NVIDIA's Hopper-based GPU can be paired with the Grace CPU, which uses NVIDIA's ultra-fast NVLink-C2C chip-to-chip interconnect to deliver a staggering 900GB/s of bandwidth, 7x faster than PCIe Gen5.
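The 7x figure follows from the raw link rates. A back-of-the-envelope sketch (assuming a PCIe Gen5 x16 link at 32 GT/s per lane, counted bidirectionally and ignoring the small encoding overhead):

```python
# Sketch of the bandwidth comparison: NVLink-C2C vs one PCIe Gen5 x16 link.
# PCIe Gen5: 32 GT/s per lane x 16 lanes, /8 for bytes, x2 for both
# directions ≈ 128 GB/s (encoding overhead ignored for simplicity).
pcie_gen5_x16_gbps = 32 * 16 / 8 * 2   # ~128 GB/s bidirectional
nvlink_c2c_gbps = 900                  # total bandwidth quoted for NVLink-C2C

print(round(nvlink_c2c_gbps / pcie_gen5_x16_gbps))  # → 7
```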


NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC. Powered by the NVIDIA Ampere Architecture, A100 is the engine of the NVIDIA data center platform. A100 provides up to 20X higher performance over the prior generation and can be partitioned into seven GPU instances to dynamically adjust to shifting demands. Available in 40GB and 80GB memory versions, A100 80GB debuts the world’s fastest memory bandwidth at over 2 terabytes per second (TB/s) to run the largest models and datasets.



The NVIDIA A40 GPU delivers unprecedented performance and multi-workload capabilities from the data centre, combining professional graphics with powerful compute and AI acceleration to meet today's design, creative and scientific challenges. NVIDIA A40 brings state-of-the-art features for ray-traced rendering, simulation, virtual production, and more to professionals in the form of virtual workstations or server-based workloads.




NVIDIA® Quadro® RTX 6000, powered by the NVIDIA Turing™ architecture and the NVIDIA RTX platform, brings the most significant advancement in computer graphics in over a decade to professional workflows. Designers and artists can now wield the power of hardware-accelerated ray tracing, deep learning, and advanced shading to dramatically boost productivity and create amazing content faster than ever before.


NVIDIA​ Quadro RTX 8000

Bring the power of RTX to the data center with the NVIDIA Quadro RTX 8000, and Quadro Virtual Data Center Workstation (Quadro vDWS) software, built on the NVIDIA Turing architecture and the NVIDIA RTX platform for powerful server-based visual computing solutions.

Accelerate multiple data center workloads including batch rendering, data science, virtual workstation, simulation, and augmented or virtual reality over 5G networks. Customers can also serve multiple powerful virtual workstations with Quadro vDWS software.




The NVIDIA A2 Tensor Core GPU

The NVIDIA A2 Tensor Core GPU is a compact, low-power product that delivers entry-level acceleration for deep learning, graphics and video processing in any server. It is a half-height (low-profile), half-length, single-slot card featuring 16 GB of GDDR6 memory and a 60 W maximum power limit. The A2 supports x8 PCIe Gen4 connectivity. It is a passively cooled card with a superior thermal design that requires system airflow to operate and handles challenging ambient environments with ease (NEBS-3 capable).

The NVIDIA A2 is powered by the NVIDIA Ampere architecture. It provides revolutionary multi-precision performance to accelerate deep learning and machine learning training, as well as inference, video transcoding, AI audio and video effects, rendering, data analytics, virtual workstation, virtual desktop, and many other workloads.

The NVIDIA® A10 Tensor Core GPU

The NVIDIA A10 delivers a versatile platform for graphics and video processing, as well as deep learning inference, in distributed computing environments. It combines second-generation RT Cores and third-generation Tensor Cores with 24 GB of GDDR6 memory in a single-slot, 10.5-inch PCI Express Gen4 form factor with 150 W maximum board power. The card is passively cooled and requires system airflow to operate within its thermal envelope.

Powered by the NVIDIA Ampere architecture, the NVIDIA A10 universal GPU provides revolutionary multi-precision performance to accelerate mixed workloads from a single GPU-accelerated infrastructure. When combined with NVIDIA RTX™ Virtual Workstation (vWS) software, the A10 is ideal for high-performance virtual workstations running professional visualisation applications, or it can be combined with NVIDIA Virtual PC (vPC) software for multimedia-rich virtual desktops. It can also support deep learning and machine learning training and inference, video transcoding, cloud gaming, AI audio and video effects, rendering, data analytics, and many other workloads.


The NVIDIA® A16 GPU

The NVIDIA A16 is a PCI Express Gen4 graphics processing unit (GPU) card that is ideal for providing high user density in Virtual Desktop Infrastructure (VDI) environments. It is a full-height, full-length (FHFL) design with four GPUs on a single board. The A16 is a dual-slot card featuring 64 GB of GDDR6 memory and a 250 W maximum power limit. The A16 also supports x16 PCIe Gen4 connectivity. It is a passively cooled card with a superior thermal design that requires system airflow to operate and handles challenging ambient environments with ease (NEBS-3 capable).

Powered by the NVIDIA Ampere architecture, the NVIDIA A16 provides the highest encoder throughput and frame buffer for the best user experience in a VDI environment using NVIDIA Virtual PC (vPC) software. Video transcoding and Android™ cloud gaming are among the other workloads that can take advantage of the multiple encoders and decoders on the A16 GPU. The quad-GPU design enables the highest frame buffer, encoder, and decoder density in a dual-slot form factor for VDI use cases.
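The user-density claim comes down to frame-buffer budgeting. A minimal sizing sketch (the 2 GB per-desktop profile below is a hypothetical example; real vPC profile sizes depend on the licensing and profile chosen):

```python
# Sketch: VDI density sizing on one A16 board.
# Assumes a hypothetical 2 GB vPC frame-buffer profile per virtual desktop.
BOARD_MEMORY_GB = 64   # 4 GPUs x 16 GB GDDR6 per board
PROFILE_GB = 2         # assumed per-user frame buffer

users_per_board = BOARD_MEMORY_GB // PROFILE_GB
print(users_per_board)  # → 32 desktops per A16 board at this profile size
```

Halving or doubling the profile size scales the per-board user count inversely, which is why the A16's large combined frame buffer matters for density.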

The NVIDIA® A30 Tensor Core GPU

The NVIDIA A30 delivers a versatile platform for mainstream enterprise workloads like AI inference, training, and high-performance computing (HPC). It combines third-generation Tensor Cores with 24 GB of HBM2 memory in a dual-slot, 10.5-inch PCI Express Gen4 form factor with 165 W maximum board power. The card is passively cooled and requires system airflow to operate within its thermal envelope.

Built on the latest NVIDIA Ampere architecture, the NVIDIA A30 brings innovations like Tensor Float 32 (TF32) and Tensor Core FP64, as well as end-to-end software stack solutions, including the NVIDIA AI Enterprise suite, to ensure that mainstream AI and HPC jobs can be rapidly solved. In addition to these features, the A30 supports double precision (FP64), single precision (FP32), half precision (FP16), Brain Float 16 (BF16) and Integer (INT8) computations, unified virtual memory, and page migration engine capability. The Multi-Instance GPU (MIG) feature ensures quality of service (QoS) with secure, hardware-partitioned, right-sized GPUs across all compute workloads for a diverse set of users and maximises the utilisation of GPU resources.

NVIDIA® Tesla® V100

NVIDIA® Tesla® V100 is the world's most advanced data center GPU ever built to accelerate AI, HPC, and graphics. Powered by NVIDIA Volta™, the latest GPU architecture, Tesla V100 offers the performance of up to 100 CPUs in a single GPU—enabling data scientists, researchers, and engineers to tackle challenges that were once thought impossible.



The NVIDIA T4 data center GPU now supports virtualization workloads. Based on the latest NVIDIA Turing™ architecture, this solution can be deployed with Tesla T4 – the most universal GPU to date capable of running any workload.



Get in touch 

Contact us here to begin your NVIDIA journey