Numpy can be "linked/compiled" against different BLAS implementations (MKL, ACML, ATLAS, GotoBLAS, etc.). That's not always straightforward to configure, but it is possible.
Is it also possible to "link/compile" numpy against NVIDIA's CUBLAS implementation?
I couldn't find any resources on the web, and before I spend too much time trying it I wanted to make sure it is possible at all.
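For reference, you can at least confirm which BLAS/LAPACK a given numpy build is linked against using numpy's own build-info helper; a minimal sketch:

# Print the BLAS/LAPACK libraries this numpy build was compiled against
import numpy as np
np.show_config()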
In a word: no, you can't do that.
There is a rather good scikit called scikits.cuda which provides access to CUBLAS from Python and is built on top of PyCUDA. PyCUDA provides a numpy.ndarray-like class which allows seamless manipulation of numpy arrays in GPU memory with CUDA. So you can use CUBLAS and CUDA together with numpy, but you can't just link numpy against CUBLAS and expect it to work.
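As a rough illustration of that workflow (module names are an assumption here: newer releases ship as skcuda, older ones as scikits.cuda, so adjust the imports to your installed version), a GPU matrix multiply that goes through CUBLAS might look like this:

import numpy as np
import pycuda.autoinit          # creates a CUDA context on import
import pycuda.gpuarray as gpuarray
import skcuda.linalg as linalg  # "scikits.cuda.linalg" in older releases

linalg.init()

# Ordinary numpy arrays on the host
a = np.random.rand(512, 512).astype(np.float32)
b = np.random.rand(512, 512).astype(np.float32)

# Copy them into GPU memory as GPUArrays (the numpy.ndarray-like class)
a_gpu = gpuarray.to_gpu(a)
b_gpu = gpuarray.to_gpu(b)

# linalg.dot dispatches the multiply to CUBLAS gemm on the device
c_gpu = linalg.dot(a_gpu, b_gpu)

# Copy the result back to the host and compare with plain numpy
print(np.allclose(a.dot(b), c_gpu.get(), atol=1e-3))

Note that the original point still stands: numpy itself never talks to CUBLAS here; the explicit host-to-device copies are exactly the step that simply relinking numpy against a different BLAS would not give you.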
There is also a commercial library that provides numpy- and CUBLAS-like functionality and has a Python interface or bindings, but I will leave it to one of their shills to fill you in on that.