I am running Arch Linux 4.2.5-1 with AUR caffe-git r3443.62ed0d2-1 and boost 1.60, and I get the following error when trying to use classify.py (parameters: --pretrained_model snapshot_iter_600.caffemodel --model_def mydeploy.prototxt --images_dim 256 AF05HAS.JPG out):
Traceback (most recent call last):
File "/opt/caffe/python/classify.py", line 138, in <module>
main(sys.argv)
File "/opt/caffe/python/classify.py", line 110, in main
channel_swap=channel_swap)
File "/opt/caffe/python/caffe/classifier.py", line 29, in __init__
in_ = self.inputs[0]
File "/opt/caffe/python/caffe/pycaffe.py", line 54, in _Net_inputs
return [list(self.blobs.keys())[i] for i in self._inputs]
File "/opt/caffe/python/caffe/pycaffe.py", line 28, in _Net_blobs
return OrderedDict(zip(self._blob_names, self._blobs))
TypeError: No to_python (by-value) converter found for C++ type: boost::shared_ptr<caffe::Blob<float> >
I tried the same code on boost 1.59.0-2 and caffe r3422.603cbfb-1 with no problems. I also tried combinations of caffe r3443 with boost 1.59 and 1.60, but without success.
I'm on Arch Linux too, with caffe-git r3443, and I encounter the same problem when trying to load GoogleNet.
This is probably an Arch Linux packaging issue, or the Python module simply doesn't work with boost 1.60. I ran into it after upgrading boost from 1.59 to 1.60. To use caffe again, I downgraded boost with ABS (building the boost package from source) and then rebuilt caffe from the latest checkout cloned from GitHub. This solves my problem temporarily, but it breaks other dependencies on the system, so I wonder whether it is in fact impossible to use caffe with the latest boost libraries?
I had the same problem during my first installation on Linux Mint. At the beginning the boost installation was a mess, so I decided to start over: I removed protobuf and boost, then installed boost (1.59) -> protobuf (newest) -> caffe (newest). Now at least it runs this example successfully, and I think the installation is done: http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb
It works fine with boost 1.59. Make sure 1.60 is uninstalled, and rebuild caffe.
If a .so loading error comes up, try this:
echo $LD_LIBRARY_PATH
If it is empty, set the library path (the default is /usr/local/lib/):
export LD_LIBRARY_PATH=/usr/local/lib/
But the question remains whether it is possible to use the caffe Python module with the latest boost release. Or could someone give a hint on how to prepare Makefile.config in order to build and use caffe with a boost 1.59 placed in a custom location? For now I have broken some system dependencies to keep working with caffe, and that is not a desirable situation.
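On the custom-location part of the question: caffe's Makefile.config exposes INCLUDE_DIRS and LIBRARY_DIRS for extra search paths, so one sketch would be (the /opt/boost_1_59 prefix here is a placeholder, not caffe's default):

```makefile
# Makefile.config -- add a privately installed boost 1.59 to the search paths.
# /opt/boost_1_59 is a placeholder; use whatever prefix you installed boost into.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /opt/boost_1_59/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /opt/boost_1_59/lib
```

At runtime the dynamic loader also has to find that boost, e.g. by adding /opt/boost_1_59/lib to LD_LIBRARY_PATH as mentioned above.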
Me too (macOS, boost 1.60).
@errord I have the same issue (macOS, boost 1.60) when I try to load a caffemodel:
solver.net.copy_from()
Traceback (most recent call last):
File "./caf.py", line 19, in <module>
solver.net.copy_from(caffemodel_filename)
TypeError: No to_python (by-value) converter found for C++ type: boost::shared_ptr<caffe::Net<float> >
Any solutions?
I managed to get it working with boost 1.60 by editing the python/caffe/_caffe.cpp file and registering the Blob shared pointer, i.e. at line 257 or so add:
// Fix for caffe pythonwrapper for boost 1.6
boost::python::register_ptr_to_python<boost::shared_ptr<Blob<Dtype> > >();
// End fix
bp::class_<Blob<Dtype>, boost::shared_ptr<Blob<Dtype> >, boost::noncopyable>(
"Blob", bp::no_init)
@datomnurdin Yes, there is! @jmccormac's solution will do the job. There is also a PR, https://github.com/BVLC/caffe/pull/3575, that could close this issue, but it's not merged yet.
I can't find it in my caffe library:
#include <Python.h> // NOLINT(build/include_alpha)
// Produce deprecation warnings (needs to come before arrayobject.h inclusion).
#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
#include <boost/make_shared.hpp>
#include <boost/python.hpp>
#include <boost/python/raw_function.hpp>
#include <boost/python/suite/indexing/vector_indexing_suite.hpp>
#include <numpy/arrayobject.h>
// these need to be included after boost on OS X
#include <string> // NOLINT(build/include_order)
#include <vector> // NOLINT(build/include_order)
#include <fstream> // NOLINT
#include "caffe/caffe.hpp"
#include "caffe/layers/memory_data_layer.hpp"
#include "caffe/layers/python_layer.hpp"
#include "caffe/sgd_solvers.hpp"
// Temporary solution for numpy < 1.7 versions: old macro, no promises.
// You're strongly advised to upgrade to >= 1.7.
#ifndef NPY_ARRAY_C_CONTIGUOUS
#define NPY_ARRAY_C_CONTIGUOUS NPY_C_CONTIGUOUS
#define PyArray_SetBaseObject(arr, x) (PyArray_BASE(arr) = (x))
#endif
namespace bp = boost::python;
namespace caffe {
// For Python, for now, we'll just always use float as the type.
typedef float Dtype;
const int NPY_DTYPE = NPY_FLOAT32;
// Selecting mode.
void set_mode_cpu() { Caffe::set_mode(Caffe::CPU); }
void set_mode_gpu() { Caffe::set_mode(Caffe::GPU); }
// For convenience, check that input files can be opened, and raise an
// exception that boost will send to Python if not (caffe could still crash
// later if the input files are disturbed before they are actually used, but
// this saves frustration in most cases).
static void CheckFile(const string& filename) {
std::ifstream f(filename.c_str());
if (!f.good()) {
f.close();
throw std::runtime_error("Could not open file " + filename);
}
f.close();
}
void CheckContiguousArray(PyArrayObject* arr, string name,
int channels, int height, int width) {
if (!(PyArray_FLAGS(arr) & NPY_ARRAY_C_CONTIGUOUS)) {
throw std::runtime_error(name + " must be C contiguous");
}
if (PyArray_NDIM(arr) != 4) {
throw std::runtime_error(name + " must be 4-d");
}
if (PyArray_TYPE(arr) != NPY_FLOAT32) {
throw std::runtime_error(name + " must be float32");
}
if (PyArray_DIMS(arr)[1] != channels) {
throw std::runtime_error(name + " has wrong number of channels");
}
if (PyArray_DIMS(arr)[2] != height) {
throw std::runtime_error(name + " has wrong height");
}
if (PyArray_DIMS(arr)[3] != width) {
throw std::runtime_error(name + " has wrong width");
}
}
// Net constructor for passing phase as int
shared_ptr<Net<Dtype> > Net_Init(
string param_file, int phase) {
CheckFile(param_file);
shared_ptr<Net<Dtype> > net(new Net<Dtype>(param_file,
static_cast<Phase>(phase)));
return net;
}
// Net construct-and-load convenience constructor
shared_ptr<Net<Dtype> > Net_Init_Load(
string param_file, string pretrained_param_file, int phase) {
CheckFile(param_file);
CheckFile(pretrained_param_file);
shared_ptr<Net<Dtype> > net(new Net<Dtype>(param_file,
static_cast<Phase>(phase)));
net->CopyTrainedLayersFrom(pretrained_param_file);
return net;
}
void Net_Save(const Net<Dtype>& net, string filename) {
NetParameter net_param;
net.ToProto(&net_param, false);
WriteProtoToBinaryFile(net_param, filename.c_str());
}
void Net_SetInputArrays(Net<Dtype>* net, bp::object data_obj,
bp::object labels_obj) {
// check that this network has an input MemoryDataLayer
shared_ptr<MemoryDataLayer<Dtype> > md_layer =
boost::dynamic_pointer_cast<MemoryDataLayer<Dtype> >(net->layers()[0]);
if (!md_layer) {
throw std::runtime_error("set_input_arrays may only be called if the"
" first layer is a MemoryDataLayer");
}
// check that we were passed appropriately-sized contiguous memory
PyArrayObject* data_arr =
reinterpret_cast<PyArrayObject*>(data_obj.ptr());
PyArrayObject* labels_arr =
reinterpret_cast<PyArrayObject*>(labels_obj.ptr());
CheckContiguousArray(data_arr, "data array", md_layer->channels(),
md_layer->height(), md_layer->width());
CheckContiguousArray(labels_arr, "labels array", 1, 1, 1);
if (PyArray_DIMS(data_arr)[0] != PyArray_DIMS(labels_arr)[0]) {
throw std::runtime_error("data and labels must have the same first"
" dimension");
}
if (PyArray_DIMS(data_arr)[0] % md_layer->batch_size() != 0) {
throw std::runtime_error("first dimensions of input arrays must be a"
" multiple of batch size");
}
md_layer->Reset(static_cast<Dtype*>(PyArray_DATA(data_arr)),
static_cast<Dtype*>(PyArray_DATA(labels_arr)),
PyArray_DIMS(data_arr)[0]);
}
Solver<Dtype>* GetSolverFromFile(const string& filename) {
SolverParameter param;
ReadSolverParamsFromTextFileOrDie(filename, &param);
return SolverRegistry<Dtype>::CreateSolver(param);
}
struct NdarrayConverterGenerator {
template <typename T> struct apply;
};
template <>
struct NdarrayConverterGenerator::apply<Dtype*> {
struct type {
PyObject* operator() (Dtype* data) const {
// Just store the data pointer, and add the shape information in postcall.
return PyArray_SimpleNewFromData(0, NULL, NPY_DTYPE, data);
}
const PyTypeObject* get_pytype() {
return &PyArray_Type;
}
};
};
struct NdarrayCallPolicies : public bp::default_call_policies {
typedef NdarrayConverterGenerator result_converter;
PyObject* postcall(PyObject* pyargs, PyObject* result) {
bp::object pyblob = bp::extract<bp::tuple>(pyargs)()[0];
shared_ptr<Blob<Dtype> > blob =
bp::extract<shared_ptr<Blob<Dtype> > >(pyblob);
// Free the temporary pointer-holding array, and construct a new one with
// the shape information from the blob.
void* data = PyArray_DATA(reinterpret_cast<PyArrayObject*>(result));
Py_DECREF(result);
const int num_axes = blob->num_axes();
vector<npy_intp> dims(blob->shape().begin(), blob->shape().end());
PyObject *arr_obj = PyArray_SimpleNewFromData(num_axes, dims.data(),
NPY_FLOAT32, data);
// SetBaseObject steals a ref, so we need to INCREF.
Py_INCREF(pyblob.ptr());
PyArray_SetBaseObject(reinterpret_cast<PyArrayObject*>(arr_obj),
pyblob.ptr());
return arr_obj;
}
};
bp::object Blob_Reshape(bp::tuple args, bp::dict kwargs) {
if (bp::len(kwargs) > 0) {
throw std::runtime_error("Blob.reshape takes no kwargs");
}
Blob<Dtype>* self = bp::extract<Blob<Dtype>*>(args[0]);
vector<int> shape(bp::len(args) - 1);
for (int i = 1; i < bp::len(args); ++i) {
shape[i - 1] = bp::extract<int>(args[i]);
}
self->Reshape(shape);
// We need to explicitly return None to use bp::raw_function.
return bp::object();
}
bp::object BlobVec_add_blob(bp::tuple args, bp::dict kwargs) {
if (bp::len(kwargs) > 0) {
throw std::runtime_error("BlobVec.add_blob takes no kwargs");
}
typedef vector<shared_ptr<Blob<Dtype> > > BlobVec;
BlobVec* self = bp::extract<BlobVec*>(args[0]);
vector<int> shape(bp::len(args) - 1);
for (int i = 1; i < bp::len(args); ++i) {
shape[i - 1] = bp::extract<int>(args[i]);
}
self->push_back(shared_ptr<Blob<Dtype> >(new Blob<Dtype>(shape)));
// We need to explicitly return None to use bp::raw_function.
return bp::object();
}
BOOST_PYTHON_MEMBER_FUNCTION_OVERLOADS(SolveOverloads, Solve, 0, 1);
BOOST_PYTHON_MODULE(_caffe) {
// below, we prepend an underscore to methods that will be replaced
// in Python
bp::scope().attr("__version__") = AS_STRING(CAFFE_VERSION);
// Caffe utility functions
bp::def("set_mode_cpu", &set_mode_cpu);
bp::def("set_mode_gpu", &set_mode_gpu);
bp::def("set_device", &Caffe::SetDevice);
bp::def("layer_type_list", &LayerRegistry<Dtype>::LayerTypeList);
bp::class_<Net<Dtype>, shared_ptr<Net<Dtype> >, boost::noncopyable >("Net",
bp::no_init)
.def("__init__", bp::make_constructor(&Net_Init))
.def("__init__", bp::make_constructor(&Net_Init_Load))
.def("_forward", &Net<Dtype>::ForwardFromTo)
.def("_backward", &Net<Dtype>::BackwardFromTo)
.def("reshape", &Net<Dtype>::Reshape)
// The cast is to select a particular overload.
.def("copy_from", static_cast<void (Net<Dtype>::*)(const string)>(
&Net<Dtype>::CopyTrainedLayersFrom))
.def("share_with", &Net<Dtype>::ShareTrainedLayersWith)
.add_property("_blob_loss_weights", bp::make_function(
&Net<Dtype>::blob_loss_weights, bp::return_internal_reference<>()))
.def("_bottom_ids", bp::make_function(&Net<Dtype>::bottom_ids,
bp::return_value_policy<bp::copy_const_reference>()))
.def("_top_ids", bp::make_function(&Net<Dtype>::top_ids,
bp::return_value_policy<bp::copy_const_reference>()))
.add_property("_blobs", bp::make_function(&Net<Dtype>::blobs,
bp::return_internal_reference<>()))
.add_property("layers", bp::make_function(&Net<Dtype>::layers,
bp::return_internal_reference<>()))
.add_property("_blob_names", bp::make_function(&Net<Dtype>::blob_names,
bp::return_value_policy<bp::copy_const_reference>()))
.add_property("_layer_names", bp::make_function(&Net<Dtype>::layer_names,
bp::return_value_policy<bp::copy_const_reference>()))
.add_property("_inputs", bp::make_function(&Net<Dtype>::input_blob_indices,
bp::return_value_policy<bp::copy_const_reference>()))
.add_property("_outputs",
bp::make_function(&Net<Dtype>::output_blob_indices,
bp::return_value_policy<bp::copy_const_reference>()))
.def("_set_input_arrays", &Net_SetInputArrays,
bp::with_custodian_and_ward<1, 2, bp::with_custodian_and_ward<1, 3> >())
.def("save", &Net_Save);
bp::class_<Blob<Dtype>, shared_ptr<Blob<Dtype> >, boost::noncopyable>(
"Blob", bp::no_init)
.add_property("shape",
bp::make_function(
static_cast<const vector<int>& (Blob<Dtype>::*)() const>(
&Blob<Dtype>::shape),
bp::return_value_policy<bp::copy_const_reference>()))
.add_property("num", &Blob<Dtype>::num)
.add_property("channels", &Blob<Dtype>::channels)
.add_property("height", &Blob<Dtype>::height)
.add_property("width", &Blob<Dtype>::width)
.add_property("count", static_cast<int (Blob<Dtype>::*)() const>(
&Blob<Dtype>::count))
.def("reshape", bp::raw_function(&Blob_Reshape))
.add_property("data", bp::make_function(&Blob<Dtype>::mutable_cpu_data,
NdarrayCallPolicies()))
.add_property("diff", bp::make_function(&Blob<Dtype>::mutable_cpu_diff,
NdarrayCallPolicies()));
bp::class_<Layer<Dtype>, shared_ptr<PythonLayer<Dtype> >,
boost::noncopyable>("Layer", bp::init<const LayerParameter&>())
.add_property("blobs", bp::make_function(&Layer<Dtype>::blobs,
bp::return_internal_reference<>()))
.def("setup", &Layer<Dtype>::LayerSetUp)
.def("reshape", &Layer<Dtype>::Reshape)
.add_property("type", bp::make_function(&Layer<Dtype>::type));
bp::register_ptr_to_python<shared_ptr<Layer<Dtype> > >();
bp::class_<LayerParameter>("LayerParameter", bp::no_init);
bp::class_<Solver<Dtype>, shared_ptr<Solver<Dtype> >, boost::noncopyable>(
"Solver", bp::no_init)
.add_property("net", &Solver<Dtype>::net)
.add_property("test_nets", bp::make_function(&Solver<Dtype>::test_nets,
bp::return_internal_reference<>()))
.add_property("iter", &Solver<Dtype>::iter)
.def("solve", static_cast<void (Solver<Dtype>::*)(const char*)>(
&Solver<Dtype>::Solve), SolveOverloads())
.def("step", &Solver<Dtype>::Step)
.def("restore", &Solver<Dtype>::Restore)
.def("snapshot", &Solver<Dtype>::Snapshot);
bp::class_<SGDSolver<Dtype>, bp::bases<Solver<Dtype> >,
shared_ptr<SGDSolver<Dtype> >, boost::noncopyable>(
"SGDSolver", bp::init<string>());
bp::class_<NesterovSolver<Dtype>, bp::bases<Solver<Dtype> >,
shared_ptr<NesterovSolver<Dtype> >, boost::noncopyable>(
"NesterovSolver", bp::init<string>());
bp::class_<AdaGradSolver<Dtype>, bp::bases<Solver<Dtype> >,
shared_ptr<AdaGradSolver<Dtype> >, boost::noncopyable>(
"AdaGradSolver", bp::init<string>());
bp::class_<RMSPropSolver<Dtype>, bp::bases<Solver<Dtype> >,
shared_ptr<RMSPropSolver<Dtype> >, boost::noncopyable>(
"RMSPropSolver", bp::init<string>());
bp::class_<AdaDeltaSolver<Dtype>, bp::bases<Solver<Dtype> >,
shared_ptr<AdaDeltaSolver<Dtype> >, boost::noncopyable>(
"AdaDeltaSolver", bp::init<string>());
bp::class_<AdamSolver<Dtype>, bp::bases<Solver<Dtype> >,
shared_ptr<AdamSolver<Dtype> >, boost::noncopyable>(
"AdamSolver", bp::init<string>());
bp::def("get_solver", &GetSolverFromFile,
bp::return_value_policy<bp::manage_new_object>());
// vector wrappers for all the vector types we use
bp::class_<vector<shared_ptr<Blob<Dtype> > > >("BlobVec")
.def(bp::vector_indexing_suite<vector<shared_ptr<Blob<Dtype> > >, true>())
.def("add_blob", bp::raw_function(&BlobVec_add_blob));
bp::class_<vector<Blob<Dtype>*> >("RawBlobVec")
.def(bp::vector_indexing_suite<vector<Blob<Dtype>*>, true>());
bp::class_<vector<shared_ptr<Layer<Dtype> > > >("LayerVec")
.def(bp::vector_indexing_suite<vector<shared_ptr<Layer<Dtype> > >, true>());
bp::class_<vector<string> >("StringVec")
.def(bp::vector_indexing_suite<vector<string> >());
bp::class_<vector<int> >("IntVec")
.def(bp::vector_indexing_suite<vector<int> >());
bp::class_<vector<Dtype> >("DtypeVec")
.def(bp::vector_indexing_suite<vector<Dtype> >());
bp::class_<vector<shared_ptr<Net<Dtype> > > >("NetVec")
.def(bp::vector_indexing_suite<vector<shared_ptr<Net<Dtype> > >, true>());
bp::class_<vector<bool> >("BoolVec")
.def(bp::vector_indexing_suite<vector<bool> >());
// boost python expects a void (missing) return value, while import_array
// returns NULL for python3. import_array1() forces a void return value.
import_array1();
}
} // namespace caffe
@jmccormac's solution worked for me; I just had to add a semicolon after the second line. Thanks!
@jmccormac's solution worked for me too.
Sorry all, the root cause of this is not clear. Please give your setup details, and verify that you compiled both boost-python and caffe against the same Python, and that it is likewise the Python running when you import caffe. I have not encountered the need for the patch in #3575, so I'd like to understand why/where it fixes a problem before merging.
I added @jmccormac's solution, but it turned out to be insufficient on my setup; one more addition was required for make pytest to succeed:
// Fix for caffe pythonwrapper for boost 1.6
boost::python::register_ptr_to_python<boost::shared_ptr<Blob<Dtype> > >();
boost::python::register_ptr_to_python<boost::shared_ptr<Net<Dtype> > >();
// End fix
Without the Net line, make pytest fails as follows:
ERROR: test_net_memory (test_solver.TestSolver)
Check that nets survive after the solver is destroyed.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/zopf/Documents/Repos/caffe/python/caffe/test/test_solver.py", line 29, in setUp
size=self.solver.net.blobs['label'].data.shape)
TypeError: No to_python (by-value) converter found for C++ type: boost::shared_ptr<caffe::Net<float> >
======================================================================
ERROR: test_snapshot (test_solver.TestSolver)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/zopf/Documents/Repos/caffe/python/caffe/test/test_solver.py", line 29, in setUp
size=self.solver.net.blobs['label'].data.shape)
TypeError: No to_python (by-value) converter found for C++ type: boost::shared_ptr<caffe::Net<float> >
======================================================================
ERROR: test_solve (test_solver.TestSolver)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/zopf/Documents/Repos/caffe/python/caffe/test/test_solver.py", line 29, in setUp
size=self.solver.net.blobs['label'].data.shape)
TypeError: No to_python (by-value) converter found for C++ type: boost::shared_ptr<caffe::Net<float> >
----------------------------------------------------------------------
Ran 21 tests in 0.091s
FAILED (errors=3)
make: *** [pytest] Error 1
With the Net line, all tests in make pytest pass :)
Edit: my version details:
Python 2.7.10
OS X 10.11.3
Boost 1.60.0_1
Second edit: ah, I see that #3575 actually does include the Net fix as well as one for Solver (which apparently the tests don't exercise). Keeping this comment intact in case others are confused as well.
Fixed by #3575. Thanks everyone for the detailed reports -- I was able to reproduce the issue and verify the fix.
To test that pycaffe is working, I opened an IPython session (via the ipython command) and ran "import caffe", but I got the warnings below:
/root/code/caffe-master/python/caffe/pycaffe.py:13: RuntimeWarning: to-Python converter for boost::shared_ptr<caffe::Net<float> > already registered; second conversion method ignored.
/root/code/caffe-master/python/caffe/pycaffe.py:13: RuntimeWarning: to-Python converter for boost::shared_ptr<caffe::Blob<float> > already registered; second conversion method ignored.
/root/code/caffe-master/python/caffe/pycaffe.py:13: RuntimeWarning: to-Python converter for boost::shared_ptr<caffe::Solver<float> > already registered; second conversion method ignored.
Would you please give me some suggestions to resolve this?
@AmandaYingYiWu24 Depending on your boost version, #3575 either fixes pycaffe (boost 1.60) or causes the warnings you describe (boost 1.56, I think). The warnings are harmless but annoying, so a follow-up PR should probably make the fix in #3575 conditional on the boost version.
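A follow-up along those lines could guard the registrations from #3575 with BOOST_VERSION, so they are only compiled for releases that lack the implicit converters. A sketch only, not the actual PR; whether 106000 (1.60.0) is the right cutoff is exactly the open question in this thread:

```cpp
#include <boost/version.hpp>

// Older boost releases already register these converters implicitly;
// doing it again there triggers the "already registered" RuntimeWarnings.
#if BOOST_VERSION >= 106000
bp::register_ptr_to_python<boost::shared_ptr<Blob<Dtype> > >();
bp::register_ptr_to_python<boost::shared_ptr<Net<Dtype> > >();
bp::register_ptr_to_python<boost::shared_ptr<Solver<Dtype> > >();
#endif
```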