- Inference Accelerator Card R100
The R100 is built on the vendor's XPU-R architecture and inherits the mature software stack of the second-generation chip, which greatly reduces users' adaptation and deployment costs. As a low-power, small-form-factor accelerator card, the R100 can be widely used in edge inference scenarios such as smart retail, intelligent transportation, smart parks, and intelligent manufacturing to deliver more efficient AI inference computing.
- Technical specifications
| Item | Specification |
| --- | --- |
| Precision | INT8/INT16, FP16/FP32 |
| Memory | 12 GB GDDR6 |
| Memory bandwidth | 384 GB/s |
| System interconnect | PCIe 4.0 x16, backward compatible with 3.0/2.0/1.0 |
| Encode/decode capability | 84-channel 1080p@30fps decode; 27-channel 1080p@30fps encode |
| Cooling | Passive |
| ECC | Supported |
| Application | Inference, training |
| Power consumption | 100 W |
| Form factor | Half-height, half-length, single slot |
| Operating temperature | 0℃ to 45℃ |
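
The 384 GB/s memory bandwidth is consistent with a common GDDR6 configuration. Below is a minimal sketch of the arithmetic, assuming a 192-bit memory bus and a 16 Gbps per-pin data rate; neither value is stated in the table above, so both are assumptions for illustration only.

```python
# Minimal sketch: how a 384 GB/s GDDR6 bandwidth figure can be derived.
# Assumptions (not stated in the specification table):
#   - 192-bit memory bus (e.g. six 32-bit GDDR6 devices)
#   - 16 Gbps per-pin data rate

BUS_WIDTH_BITS = 192   # assumed total bus width in bits
DATA_RATE_GBPS = 16    # assumed per-pin data rate in Gbit/s

# bytes transferred per cycle across the bus, times transfers per second
bandwidth_gb_s = (BUS_WIDTH_BITS / 8) * DATA_RATE_GBPS

print(f"Theoretical memory bandwidth: {bandwidth_gb_s:.0f} GB/s")  # 384 GB/s
```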