CompecTA SuperNova Engine™ is a super-fast and energy-efficient server solution for Deep Learning and AI needs.
Built with the world's best hardware and software, backed by the systems-engineering expertise and years of experience of CompecTA HPC Solutions.
It comes with Deep Learning Neural Network libraries, Deep Learning Frameworks, the Operating System, and Drivers pre-installed and ready to use.
CompecTA SuperNova Engine™ comes with an Enterprise-Level Support package from HPC experts with over 20 years of experience in the field.
We will assist you with any problem you may encounter with your system.
CompecTA SuperNova Engine™ comes with 100,000 hours of CPU time on FeynmanGrid™, CompecTA's in-house HPC Cluster service.
Use this opportunity to gain experience with HPC Clusters, performance-test an application, or try out new code.
Super-fast & Scalable Server Systems
660 TFLOPS of Tensor Performance
41.2 TFLOPS of FP32 Performance
20.8 TFLOPS of FP64 Performance
1,320 TOPS of INT8 Tensor Performance
Best suited for:
Deep Learning Development
Fast Neural Network Training
Fast Inferencing
prices starting from
$38,995 (Excluding Taxes)
CNVAIS1-A30V1
1,248 TFLOPS of Tensor Performance
78 TFLOPS of FP32 Performance
38.8 TFLOPS of FP64 Performance
2,496 TOPS of INT8 Tensor Performance
Best suited for:
Deep Learning Development
Super-fast Neural Network Training
Super-fast Inferencing
prices starting from
$79,495 (Excluding Taxes)
CNVAIS1-A100V2
1,248 TFLOPS of Tensor Performance
78 TFLOPS of FP32 Performance
38.8 TFLOPS of FP64 Performance
2,496 TOPS of INT8 Tensor Performance
Best suited for:
Super-fast Multi-GPU Workloads
Deep Learning Development
Super-fast Neural Network Training
Super-fast Inferencing
REQUEST A QUOTE
CNVAIS1-NA104V2
2,500 TFLOPS of Tensor Performance
156 TFLOPS of FP32 Performance
78 TFLOPS of FP64 Performance
5,000 TOPS of INT8 Tensor Performance
Best suited for:
Super-fast Multi-GPU Workloads
Deep Learning Development
Super-fast Neural Network Training
Super-fast Inferencing
REQUEST A QUOTE
CNVAIS1-NA108V2
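As a sanity check, the headline figures of the A100-based systems above scale linearly from NVIDIA's published per-GPU peaks. The sketch below assumes 4- and 8-GPU A100 configurations; the GPU counts and per-GPU figures are our assumption based on the A100 datasheet, not a product specification.

```python
# Sketch: aggregate peak figures as a linear scale-up of per-GPU peaks.
# Per-GPU values assumed from NVIDIA's A100 datasheet (FP16 Tensor Core,
# FP32, FP64, and INT8 Tensor peaks); GPU counts are an assumption.
A100 = {
    "tensor_tflops": 312,   # FP16 Tensor Core peak
    "fp32_tflops": 19.5,
    "fp64_tflops": 9.7,
    "int8_tops": 624,       # INT8 Tensor peak
}

def aggregate(n_gpus):
    """Scale per-GPU peak figures linearly to an n-GPU system."""
    return {k: round(v * n_gpus, 1) for k, v in A100.items()}

print(aggregate(4))  # matches the 4-GPU cards: 1248 / 78 / 38.8 / 2496
print(aggregate(8))  # 2496 / 156 / 77.6 / 4992
```

Under these assumptions the 4-GPU figures match the listed specs exactly, while the 8-GPU card's 2,500 TFLOPS, 78 TFLOPS FP64, and 5,000 TOPS appear to be rounded from 2,496, 77.6, and 4,992 respectively.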
The following products are no longer available to order.
74.8 TFLOPS of FP16 Performance
37.2 TFLOPS of FP32 Performance
Best suited for:
Deep Learning Development
Super-fast Neural Network Training
Good Inferencing
NOT AVAILABLE
CNVAIS1-P100V1
448 TFLOPS of Tensor Performance
56 TFLOPS of FP32 Performance
Best suited for:
Deep Learning Development
Super-fast Neural Network Training
Fast Inferencing
NOT AVAILABLE
CNVAIS1-V100V1
84.8 TFLOPS of FP16 Performance
42.4 TFLOPS of FP32 Performance
Best suited for:
Deep Learning Development
Super-fast Neural Network Training
Good Inferencing
NOT AVAILABLE
CNVAIS1-NP104V1
500 TFLOPS of Tensor Performance
62.8 TFLOPS of FP32 Performance
Best suited for:
Deep Learning Development
Super-fast Neural Network Training
Fast Inferencing
NOT AVAILABLE
CNVAIS1-NV104V1
170 TFLOPS of FP16 Performance
84.8 TFLOPS of FP32 Performance
Best suited for:
Deep Learning Development
Ultra-fast Neural Network Training
Good Inferencing
NOT AVAILABLE
CNVAIS1-NP108V1
1000 TFLOPS of Tensor Performance
125.6 TFLOPS of FP32 Performance
Best suited for:
Deep Learning Development
Ultra-fast Neural Network Training
Fast Inferencing
NOT AVAILABLE
CNVAIS1-NV108V1
48 TFLOPS of FP32 Performance
188 TOPS of INT8 Performance
Best suited for:
Deep Learning Development
Good Neural Network Training
Super-fast Inferencing
NOT AVAILABLE
CNVAIS1-P40V1
Our experts can answer all your questions. Please reach out to us via email or phone.