All-In-One & Ready to Use

CompecTA SuperNova Engine™ is a super-fast and energy-efficient server solution for Deep Learning needs.

Built with the world’s best hardware, software, and systems engineering, and backed by CompecTA HPC Solutions’ years of experience.

It comes with Deep Learning neural network libraries, Deep Learning frameworks, the operating system, and drivers pre-installed and ready to use.

Enterprise Level Support

CompecTA SuperNova Engine™ comes with an Enterprise Level Support package from HPC experts with over 20 years of experience in the field.

We will assist you with any problem you may encounter with your system.


Free CPU Time

CompecTA SuperNova Engine™ comes with 100,000 hours of CPU time on FeynmanGrid™, CompecTA's in-house HPC cluster service.

Use this time to gain hands-on experience with HPC clusters, benchmark your applications, or test new code.
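One way to put the included FeynmanGrid™ hours to work is through the cluster's batch scheduler. The sketch below assumes a Slurm-style scheduler; the partition and module names are hypothetical, and FeynmanGrid™'s actual submission interface may differ.

```bash
#!/bin/bash
# Hypothetical Slurm batch script -- partition and module names are
# illustrative, not FeynmanGrid-specific.
#SBATCH --job-name=benchmark
#SBATCH --partition=compute      # assumed partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --time=02:00:00

module load mycompiler           # assumed environment module
srun ./my_application            # your code or benchmark binary
```

On a Slurm-style system, such a script is submitted with `sbatch job.sh` and monitored with `squeue`.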

NVIDIA Preferred Solution Provider
DL Frameworks

CompecTA SuperNova™ Series

Super-fast & Scalable Deep Learning Server Systems

SuperNova Engine™ P100

74.8 TFLOPS of FP16 Performance
37.2 TFLOPS of FP32 Performance

Best suited for:
Deep Learning Development
Super-fast Neural Network Training
Good Inferencing

$45,217 (Excluding Taxes)
CNVAIS1-P100V1

P100
  • 19" 1U Rackmount Chassis 4-way GPGPU
  • Intel® Xeon® Processor E5-2697A v4
  • 256 GB DDR4 2400MHz Memory
  • 4 x NVIDIA Tesla P100 16GB HBM2 at 732 GB/s Memory
  • 14,336 Total CUDA® Cores
  • 4 x 512GB SSD Disks
    (2 hot-swap and 2 fixed)
  • Ubuntu 14.04 / 16.04
  • NVIDIA-qualified driver
  • NVIDIA® DIGITS™
  • NVIDIA® CUDA® Toolkit
  • NVIDIA® cuDNN™
  • Caffe, Theano, Torch, BIDMach
  • 100k hours of CPU time on FeynmanGrid™
  • Enterprise Level Support from CompecTA® HPC Professional Services

Optional:
  • Mellanox EDR 100Gb/s or FDR 56Gb/s InfiniBand for GPUDirect RDMA
  • Faster Intel® Xeon® Scalable Processors with DDR4 2666MHz memory
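The headline TFLOPS figures for each model are the per-GPU peak multiplied by the GPU count. As a quick check for the four PCIe P100s above (per-GPU peaks taken from NVIDIA's published Tesla P100 specifications):

```python
# Per-GPU peak throughput for the PCIe Tesla P100 (NVIDIA datasheet values).
P100_FP16_TFLOPS = 18.7
P100_FP32_TFLOPS = 9.3

gpu_count = 4
print(gpu_count * P100_FP16_TFLOPS)  # 74.8 -> matches the headline FP16 figure
print(gpu_count * P100_FP32_TFLOPS)  # 37.2 -> matches the headline FP32 figure
```

The same arithmetic reproduces the other models' figures, e.g. 8 × 125 tensor TFLOPS = 1,000 TFLOPS for the eight-V100 NV108.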

SuperNova Engine™ V100 NEW

448 TFLOPS of Tensor Performance
56 TFLOPS of FP32 Performance

Best suited for:
Deep Learning Development
Super-fast Neural Network Training
Fast Inferencing

REQUEST A QUOTE
CNVAIS1-V100V1

V100
  • 19" 1U Rackmount Chassis 4-way GPGPU
  • Intel® Xeon® Processor E5-2697A v4
  • 256 GB DDR4 2400MHz Memory
  • 4 x NVIDIA Tesla V100 16GB HBM2 at 900 GB/s Memory
  • 20,480 Total CUDA® Cores
  • 2,560 Total Tensor Cores
  • 4 x 512GB SSD Disks
    (2 hot-swap and 2 fixed)
  • Ubuntu 14.04 / 16.04
  • NVIDIA-qualified driver
  • NVIDIA® DIGITS™
  • NVIDIA® CUDA® Toolkit
  • NVIDIA® cuDNN™
  • Caffe, Theano, Torch, BIDMach
  • 100k hours of CPU time on FeynmanGrid™
  • Enterprise Level Support from CompecTA® HPC Professional Services
Optional:
  • Mellanox EDR 100Gb/s or FDR 56Gb/s InfiniBand for GPUDirect RDMA
  • Faster Intel® Xeon® Scalable Processors with DDR4 2666MHz memory

SuperNova Engine™ NP104

84.8 TFLOPS of FP16 Performance
42.4 TFLOPS of FP32 Performance

Best suited for:
Deep Learning Development
Super-fast Neural Network Training
Good Inferencing

REQUEST A QUOTE
CNVAIS1-NP104V1

NP104
  • 19" 1U Rackmount Chassis 4-way SXM2
  • Intel® Xeon® Processor E5-2697A v4
  • 512 GB DDR4 2400MHz Memory
  • 4 x NVIDIA P100 SXM2 with NVLink 16GB HBM2 at 732 GB/s Memory
  • Up to 80 GB/s GPU-to-GPU NVLink
  • 14,336 Total CUDA® Cores
  • 4 x 512GB SSD Disks
    (2 hot-swap and 2 fixed)
  • Ubuntu 14.04 / 16.04
  • NVIDIA-qualified driver
  • NVIDIA® DIGITS™
  • NVIDIA® CUDA® Toolkit
  • NVIDIA® cuDNN™
  • Caffe, Theano, Torch, BIDMach
  • 100k hours of CPU time on FeynmanGrid™
  • Enterprise Level Support from CompecTA® HPC Professional Services

Optional:
  • Mellanox EDR 100Gb/s or FDR 56Gb/s InfiniBand for GPUDirect RDMA
  • Faster Intel® Xeon® Scalable Processors with DDR4 2666MHz memory
  • One additional NVIDIA® Tesla P40 for super-fast Inferencing

SuperNova Engine™ NV104 NEW

500 TFLOPS of Tensor Performance
62.8 TFLOPS of FP32 Performance

Best suited for:
Deep Learning Development
Super-fast Neural Network Training
Fast Inferencing

REQUEST A QUOTE
CNVAIS1-NV104V1

NV104
  • 19" 1U Rackmount Chassis 4-way SXM2
  • Intel® Xeon® Processor E5-2697A v4
  • 512 GB DDR4 2400MHz Memory
  • 4 x NVIDIA V100 SXM2 with NVLink 16GB HBM2 at 900 GB/s Memory
  • Up to 300 GB/s GPU-to-GPU NVLink
  • 20,480 Total CUDA® Cores
  • 2,560 Total Tensor Cores
  • 4 x 512GB SSD Disks
    (2 hot-swap and 2 fixed)
  • Ubuntu 14.04 / 16.04
  • NVIDIA-qualified driver
  • NVIDIA® DIGITS™
  • NVIDIA® CUDA® Toolkit
  • NVIDIA® cuDNN™
  • Caffe, Theano, Torch, BIDMach
  • 100k hours of CPU time on FeynmanGrid™
  • Enterprise Level Support from CompecTA® HPC Professional Services
Optional:
  • Mellanox EDR 100Gb/s or FDR 56Gb/s InfiniBand for GPUDirect RDMA
  • Faster Intel® Xeon® Scalable Processors with DDR4 2666MHz memory
  • One additional NVIDIA® Tesla P40 for super-fast Inferencing

SuperNova Engine™ NP108

170 TFLOPS of FP16 Performance
84.8 TFLOPS of FP32 Performance

Best suited for:
Deep Learning Development
Ultra-fast Neural Network Training
Good Inferencing

REQUEST A QUOTE
CNVAIS1-NP108V1

NP108
  • 19" 4U Rackmount Chassis 8-way SXM2
  • Intel® Xeon® Processor E5-2697A v4
  • 512 GB DDR4 2400MHz Memory
  • 8 x NVIDIA P100 SXM2 with NVLink 16GB HBM2 at 732 GB/s Memory
  • Up to 80 GB/s GPU-to-GPU NVLink
  • 28,672 Total CUDA® Cores
  • 4 x 512GB SSD Disks
    (4 hot-swap)
  • Ubuntu 14.04 / 16.04
  • NVIDIA-qualified driver
  • NVIDIA® DIGITS™
  • NVIDIA® CUDA® Toolkit
  • NVIDIA® cuDNN™
  • Caffe, Theano, Torch, BIDMach
  • 100k hours of CPU time on FeynmanGrid™
  • Enterprise Level Support from CompecTA® HPC Professional Services

Optional:
  • Up to 4 x Mellanox EDR 100Gb/s or FDR 56Gb/s InfiniBand for GPUDirect RDMA

SuperNova Engine™ NV108 NEW

1000 TFLOPS of Tensor Performance
125.6 TFLOPS of FP32 Performance

Best suited for:
Deep Learning Development
Ultra-fast Neural Network Training
Fast Inferencing

REQUEST A QUOTE
CNVAIS1-NV108V1

NV108
  • 19" 4U Rackmount Chassis 8-way SXM2
  • Intel® Xeon® Processor E5-2697A v4
  • 512 GB DDR4 2400MHz Memory
  • 8 x NVIDIA V100 SXM2 with NVLink 16GB HBM2 at 900 GB/s Memory
  • Up to 300 GB/s GPU-to-GPU NVLink
  • 40,960 Total CUDA® Cores
  • 5,120 Total Tensor Cores
  • 4 x 512GB SSD Disks
    (4 hot-swap)
  • Ubuntu 14.04 / 16.04
  • NVIDIA-qualified driver
  • NVIDIA® DIGITS™
  • NVIDIA® CUDA® Toolkit
  • NVIDIA® cuDNN™
  • Caffe, Theano, Torch, BIDMach
  • 100k hours of CPU time on FeynmanGrid™
  • Enterprise Level Support from CompecTA® HPC Professional Services
Optional:
  • Up to 4 x Mellanox EDR 100Gb/s or FDR 56Gb/s InfiniBand for GPUDirect RDMA

SuperNova Engine™ P40

48 TFLOPS of FP32 Performance
188 TOPS of INT8 Performance

Best suited for:
Deep Learning Development
Good Neural Network Training
Super-fast Inferencing

$46,970 (Excluding Taxes)
CNVAIS1-P40V1

P40
  • 19" 1U Rackmount Chassis 4-way GPGPU
  • Intel® Xeon® Processor E5-2697A v4
  • 256 GB DDR4 2400MHz Memory
  • 4 x NVIDIA Tesla P40 24GB Memory
  • 15,360 Total CUDA® Cores
  • 4 x 512GB SSD Disks
    (2 hot-swap and 2 fixed)
  • Ubuntu 14.04 / 16.04
  • NVIDIA-qualified driver
  • NVIDIA® DIGITS™
  • NVIDIA® CUDA® Toolkit
  • NVIDIA® cuDNN™
  • Caffe, Theano, Torch, BIDMach
  • 100k hours of CPU time on FeynmanGrid™
  • Enterprise Level Support from CompecTA® HPC Professional Services
Optional:
  • Mellanox EDR 100Gb/s or FDR 56Gb/s InfiniBand for GPUDirect RDMA
  • Faster Intel® Xeon® Scalable Processors with DDR4 2666MHz memory

Get more information!

Our experts can answer all your questions.

info@compecta.com
+90 216 455-1865