All-In-One & Ready to Use

CompecTA SuperNova Engine™ is a super-fast and energy-efficient server solution for Deep Learning needs.

Built with world-class hardware, software, and systems engineering, and backed by CompecTA HPC Solutions' years of experience.

It comes with Deep Learning neural network libraries, Deep Learning frameworks, the operating system, and drivers pre-installed and ready to use.
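As a quick illustration of "ready to use," a minimal sanity check can confirm that the pre-installed NVIDIA driver and CUDA toolkit are visible on a freshly delivered system. This is a sketch, not part of the shipped tooling; the path and tool name below are the standard NVIDIA Linux locations.

```python
# Illustrative sanity check (not shipped tooling): confirms the NVIDIA
# kernel driver is loaded and the CUDA toolkit's nvcc compiler is on PATH.
# Uses the standard NVIDIA Linux driver path.
import os
import shutil

def cuda_stack_visible():
    """Return (driver_loaded, nvcc_on_path) as booleans."""
    driver_loaded = os.path.exists("/proc/driver/nvidia/version")
    nvcc_on_path = shutil.which("nvcc") is not None
    return driver_loaded, nvcc_on_path

if __name__ == "__main__":
    driver, nvcc = cuda_stack_visible()
    print(f"NVIDIA driver loaded: {driver}, nvcc on PATH: {nvcc}")
```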

Enterprise Level Support

CompecTA SuperNova Engine™ comes with an Enterprise Level Support package from HPC experts with over 20 years of experience in the field.

We will assist you with any problem you may encounter with your system.

Free CPU Time

CompecTA SuperNova Engine™ comes with 100,000 hours of CPU time on FeynmanGrid™, CompecTA's in-house HPC cluster service.

Use this opportunity to gain experience with HPC clusters, benchmark applications, or test new code.
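To put the allocation in concrete terms, 100,000 CPU-hours divides across however many cores a job uses. The core counts below are purely illustrative examples, not FeynmanGrid™ quotas:

```python
# 100,000 CPU-hours expressed as wall-clock runtime at several illustrative
# job sizes (the cores-per-job values are examples, not FeynmanGrid quotas).
CPU_HOURS = 100_000

def days_of_runtime(cores: int) -> float:
    """Wall-clock days a job using `cores` cores can run on the allocation."""
    return CPU_HOURS / cores / 24

for cores in (28, 56, 112):
    print(f"{cores:>3} cores -> about {days_of_runtime(cores):.0f} days")
```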

NVIDIA Preferred Solution Provider

CompecTA SuperNova™ Series

Super-fast & Scalable Deep Learning Server Systems

SuperNova Engine™ P40

48 TFLOPS of FP32 Performance
188 TOPS of INT8 Performance
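The aggregate figures above are simply per-GPU peak throughput times the GPU count. Assuming NVIDIA's published per-board peaks for the Tesla P40 (approximately 12 TFLOPS FP32 and 47 TOPS INT8 at boost clocks), the arithmetic works out as:

```python
# Aggregate peak = per-GPU peak x GPU count. The per-GPU figures are
# NVIDIA's published Tesla P40 peaks (approximate, at boost clocks).
GPUS = 4
FP32_TFLOPS_PER_GPU = 12.0
INT8_TOPS_PER_GPU = 47.0

print(GPUS * FP32_TFLOPS_PER_GPU)  # 48.0 TFLOPS FP32
print(GPUS * INT8_TOPS_PER_GPU)    # 188.0 TOPS INT8
```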

Best suited for:
Deep Learning Development
Good Neural Network Training
Super-fast Inferencing

$46,970 (Excluding Taxes)
CNVAIS1-P40

  • 19" 1U Rackmount Chassis 4-way GPGPU
  • Intel® Xeon® Processor E5-2697A v4
  • 256 GB DDR4 2400MHz Memory
  • 4 x NVIDIA Tesla P40, 24 GB Memory per GPU
  • 15,360 Total CUDA® Cores
  • 4 x 512 GB SSD Disks
    (2 hot-swap and 2 fixed)
  • Ubuntu 14.04 / 16.04
  • NVIDIA-qualified driver
  • NVIDIA® DIGITS™
  • NVIDIA® CUDA® Toolkit
  • NVIDIA® cuDNN™
  • Caffe, Theano, Torch, BIDMach
  • 100,000 hours of CPU Time on FeynmanGrid™
  • Enterprise Level Support from CompecTA® HPC Professional Services
Optional:
  • Mellanox EDR 100Gb/s or FDR 56Gb/s InfiniBand for GPUDirect RDMA

SuperNova Engine™ P100

74.8 TFLOPS of FP16 Performance
37.2 TFLOPS of FP32 Performance

Best suited for:
Deep Learning Development
Super-fast Neural Network Training
Good Inferencing

$45,217 (Excluding Taxes)
CNVAIS1-P100V1

  • 19" 1U Rackmount Chassis 4-way GPGPU
  • Intel® Xeon® Processor E5-2697A v4
  • 256 GB DDR4 2400MHz Memory
  • 4 x NVIDIA Tesla P100, 16 GB HBM2 Memory per GPU
  • 14,336 Total CUDA® Cores
  • 4 x 512 GB SSD Disks
    (2 hot-swap and 2 fixed)
  • Ubuntu 14.04 / 16.04
  • NVIDIA-qualified driver
  • NVIDIA® DIGITS™
  • NVIDIA® CUDA® Toolkit
  • NVIDIA® cuDNN™
  • Caffe, Theano, Torch, BIDMach
  • 100,000 hours of CPU Time on FeynmanGrid™
  • Enterprise Level Support from CompecTA® HPC Professional Services
Optional:
  • Mellanox EDR 100Gb/s or FDR 56Gb/s InfiniBand for GPUDirect RDMA

SuperNova Engine™ NP104 NEW

84.8 TFLOPS of FP16 Performance
42.4 TFLOPS of FP32 Performance

Best suited for:
Deep Learning Development
Super-fast Neural Network Training
Good Inferencing

REQUEST A QUOTE
CNVAIS1-NP100-4NVLNK

  • 19" 1U Rackmount Chassis 4-way SXM2
  • Intel® Xeon® Processor E5-2697A v4
  • 512 GB DDR4 2400MHz Memory
  • 4 x NVIDIA P100 SXM2 with NVLink, 16 GB HBM2 Memory per GPU
  • 14,336 Total CUDA® Cores
  • 4 x 512 GB SSD Disks
    (2 hot-swap and 2 fixed)
  • Ubuntu 14.04 / 16.04
  • NVIDIA-qualified driver
  • NVIDIA® DIGITS™
  • NVIDIA® CUDA® Toolkit
  • NVIDIA® cuDNN™
  • Caffe, Theano, Torch, BIDMach
  • 100,000 hours of CPU Time on FeynmanGrid™
  • Enterprise Level Support from CompecTA® HPC Professional Services
Optional:
  • Mellanox EDR 100Gb/s or FDR 56Gb/s InfiniBand for GPUDirect RDMA
  • One additional NVIDIA® Tesla P40 for super-fast Inferencing

SuperNova Engine™ NP108 NEW

170 TFLOPS of FP16 Performance
84.8 TFLOPS of FP32 Performance

Best suited for:
Deep Learning Development
Ultra-fast Neural Network Training
Good Inferencing

REQUEST A QUOTE
CNVAIS1-NP100-8NVLNK

  • 19" 4U Rackmount Chassis 8-way SXM2
  • Intel® Xeon® Processor E5-2697A v4
  • 512 GB DDR4 2400MHz Memory
  • 8 x NVIDIA P100 SXM2 with NVLink, 16 GB HBM2 Memory per GPU
  • 28,672 Total CUDA® Cores
  • 4 x 512 GB SSD Disks
    (4 hot-swap)
  • Ubuntu 14.04 / 16.04
  • NVIDIA-qualified driver
  • NVIDIA® DIGITS™
  • NVIDIA® CUDA® Toolkit
  • NVIDIA® cuDNN™
  • Caffe, Theano, Torch, BIDMach
  • 100,000 hours of CPU Time on FeynmanGrid™
  • Enterprise Level Support from CompecTA® HPC Professional Services
Optional:
  • Up to 4 x Mellanox EDR 100Gb/s or FDR 56Gb/s InfiniBand for GPUDirect RDMA

Get more information!

Our experts can answer all your questions.

info@compecta.com
+90 216 455-1865