CompecTA Nova Engine Mini™ is an all-in-one, compact, cool and quiet deskside solution equipped with 2 GPUs for Deep Learning & AI needs.
It is built with the world’s best hardware, software, and systems engineering for deep learning & AI, delivered as a powerful solution in a compact form factor.
It comes with deep learning neural network libraries, deep learning frameworks, the operating system, and drivers pre-installed and ready to use.
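A quick way to confirm that the pre-installed stack is working is to query the GPUs from one of the frameworks. The short sketch below assumes PyTorch is among the pre-installed frameworks (the exact software set on your unit may differ); on a correctly configured Nova Engine Mini™ it should report both GPUs.

```python
# Minimal sanity check of the pre-installed deep learning stack.
# Assumes PyTorch is installed; adapt to the framework you use.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
print("cuDNN version:  ", torch.backends.cudnn.version())
print("GPUs detected:  ", torch.cuda.device_count())

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"  GPU {i}: {props.name}, {props.total_memory // 2**20} MiB")
```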
CompecTA Nova Engine Mini™ comes with an Enterprise Level Support package from HPC experts with over 20 years of experience in the field.
We will assist you with any problem you may encounter with your system.
CompecTA Nova Engine Mini™ comes with 50,000 hours of CPU time on CompecTA's in-house HPC cluster service, FeynmanGrid™.
Use this opportunity to gain experience with HPC clusters, to benchmark an application, or to test new code.
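For instance, a small throughput test such as the one below can be run both on your deskside system and on the cluster to compare results. This is a hypothetical stand-alone benchmark in Python with NumPy; the actual FeynmanGrid™ job submission procedure (scheduler, queues, environment modules) is provided with your support package and is not shown here.

```python
# Hypothetical CPU throughput test: times a dense matrix multiplication
# and reports the effective GFLOP/s. Run locally or as a cluster job.
import time
import numpy as np

N = 4096
a = np.random.rand(N, N)
b = np.random.rand(N, N)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

flops = 2 * N**3  # multiply-add operations in an N x N matrix product
print(f"{N}x{N} matmul: {elapsed:.2f} s, {flops / elapsed / 1e9:.1f} GFLOP/s")
```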
Mini Deep Learning Developer Box with NVIDIA DIGITS™
1.32 petaFLOPS Tensor Performance
83 TFLOPS of FP16 Performance
83 TFLOPS of FP32 Performance
Best suited for:
Deep Learning Development
Super-Fast Neural Network Training
Super-Fast Inferencing
REQUEST A QUOTE
CNVMDB3-R49V1
570 TFLOPS Tensor Performance
70 TFLOPS of FP16 Performance
70 TFLOPS of FP32 Performance
Best suited for:
Deep Learning Development
Fast Neural Network Training
Fast Inferencing
prices starting from
$7,995 (Excluding Taxes)
CNVMDB3-R39V1
To Be Announced
To Be Announced
To Be Announced
Best suited for:
Deep Learning Development
Super-Fast Neural Network Training
Super-Fast Inferencing
REQUEST A QUOTE
CNVMDB3-RTX6V1
619.4 TFLOPS Tensor Performance
77.42 TFLOPS of FP16 Performance
77.42 TFLOPS of FP32 Performance
Best suited for:
Deep Learning Development
Fast Neural Network Training
Fast Inferencing
prices starting from
$14,475 (Excluding Taxes)
CNVMDB3-RA6V1
The following products are no longer available to order.
215.2 TFLOPS DL Performance
53.8 TFLOPS of FP16 Performance
26.9 TFLOPS of FP32 Performance
Best suited for:
Deep Learning Development
Fast Neural Network Training
Fast Inferencing
NOT AVAILABLE
CNVMDB2-R80TIV1
261 TFLOPS DL Performance
65.2 TFLOPS of FP16 Performance
32.6 TFLOPS of FP32 Performance
Best suited for:
Deep Learning Development
Fast Neural Network Training
Fast Inferencing
NOT AVAILABLE
CNVMDB2-TIRV1
237 TFLOPS DL Performance
59.2 TFLOPS of FP16 Performance
29.6 TFLOPS of FP32 Performance
14.8 TFLOPS of FP64 Performance
Best suited for:
Deep Learning Development
Machine Learning
Super-Fast Neural Network Training
Super-Fast Inferencing
NOT AVAILABLE
CNVMDB2-GV100V1
261 TFLOPS DL Performance
412.2 TOPS of INT8 Performance
65.2 TFLOPS of FP16 Performance
32.6 TFLOPS of FP32 Performance
Best suited for:
Deep Learning Development
Machine Learning
Super-Fast Neural Network Training
Super-Fast Inferencing
NOT AVAILABLE
CNVMDB2-QR8V1
220 TFLOPS of DL Performance
27.6 TFLOPS of FP32 Performance
13.8 TFLOPS of FP64 Performance
Best suited for:
Deep Learning Development
Super-Fast Neural Network Training
Fast Inferencing
NOT AVAILABLE
CNVMDB2-TIVV1
24.2 TFLOPS of FP32 Performance
94 TOPS of INT8 Performance
Best suited for:
Deep Learning Development
Good Neural Network Training
Fast Inferencing
NOT AVAILABLE
CNVMDB2-TXPV2
22.6 TFLOPS of FP32 Performance
90 TOPS of INT8 Performance
Best suited for:
Deep Learning Development
Good Neural Network Training
Fast Inferencing
NOT AVAILABLE
CNVMDB2-G80TIV1
41.4 TFLOPS of FP16 Performance
20.6 TFLOPS of FP32 Performance
10.4 TFLOPS of FP64 Performance
Best suited for:
Deep Learning Development
Fast Neural Network Training
Fast Inferencing
NOT AVAILABLE
CNVMDB2-GP100V1
Our experts can answer all your questions. Please reach us via email or phone.