Caffe: Will Caffe support TX1's FP16?

Created on 21 Jan 2016  ·  12 Comments  ·  Source: BVLC/caffe

Will Caffe support TX1's FP16?
Have any suggestions about that?

Thanks!

All 12 comments

NVIDIA's Caffe fork has one branch which seems to support FP16.

https://github.com/NVIDIA/caffe/tree/experimental/fp16

@Darwin2011 Thank you, but it does not seem to support FP16.

@Darwin2011 I say that because I have not found “#include ” or “half” anywhere in the project.
Do you have any other tips?

That branch does support FP16. Native FP16 currently works only on the Jetson TX1 and is designed for inference (forward passes). FP16 is handled via the mtype and dtype templates; CAFFE_FP16 and float16 are the things you are looking for. include/caffe/util has the headers for FP16 handling.
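
For illustration, here is a minimal CUDA sketch of that storage-versus-math split. The kernel and its names are my own toy example (not code from the NVIDIA branch) and assume the `cuda_fp16.h` header that ships with CUDA 7.5 and later:

```cpp
#include <cuda_fp16.h>

// Toy inference-style kernel: data lives in memory as 16-bit halves
// (the storage "dtype"), but each element is widened to float (the
// math "mtype") before the multiply-add, so the arithmetic itself
// runs in single precision.
__global__ void axpy_half_storage(int n, float alpha,
                                  const __half* x, __half* y) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    float acc = alpha * __half2float(x[i]) + __half2float(y[i]);
    y[i] = __float2half(acc);  // narrow back to half for storage
  }
}
```

This mirrors the idea behind the mtype/dtype templates: halve the memory footprint and bandwidth of the stored blobs while keeping accumulation in float.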

@Darwin2011 @thatguymike
That branch does support fp16, thank you.

Does anybody know whether the Mali-T880 supports FP16?

@Darwin2011
@thatguymike
I got an error when using _caffe-experimental-fp16_'s pycaffe: https://github.com/NVIDIA/caffe/issues/117

Could you give me some clues?

@feiliz All generations of the Mali Midgard architecture (including the 4th-generation Mali-T880) support FP16.
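
If you would rather verify that at runtime than trust the spec sheet, an OpenCL device advertises half-precision support through the `cl_khr_fp16` extension. A minimal sketch using the standard OpenCL host API (first device only, error checks omitted for brevity):

```cpp
#include <CL/cl.h>
#include <stdio.h>
#include <string.h>

int main(void) {
  cl_platform_id platform;
  cl_device_id device;
  char extensions[4096];

  // Query the first platform/device only; real code should enumerate
  // all of them and check the return codes.
  clGetPlatformIDs(1, &platform, NULL);
  clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL);
  clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS,
                  sizeof(extensions), extensions, NULL);

  // cl_khr_fp16 means the device supports the half type in kernels.
  printf("FP16 %s\n", strstr(extensions, "cl_khr_fp16")
                          ? "supported" : "not supported");
  return 0;
}
```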

What is the status of this issue now?

Working on it (FP16 support on CUDA and OpenCL Caffe). Expect a release by mid-January at the latest.

Any update on this? Thanks in advance.

Yes, sorry, it is a bit delayed, but FP16 basically works: https://github.com/naibaf7/caffe
However, if I were you, I'd wait until I have merged this here: https://github.com/BVLC/caffe/tree/opencl
If you're really curious, you can already give it a try (first link).
