[BUG]: `__new__` does not initialize STL containers and resulting in undefined behavior
Required prerequisites
- [X] Make sure you've read the documentation. Your issue may be addressed there.
- [X] Search the issue tracker and Discussions to verify that this hasn't already been reported. +1 or comment there if it has.
- [ ] Consider asking first in the Gitter chat room or in a Discussion.
What version (or hash if on master) of pybind11 are you using?
2.10.3
Problem description
I'm writing a C++ class with some STL-like containers as its members, and I bind it with pybind11 and expose it to Python. However, the bound C++ class does not behave like a normal Python class as expected.
In [1]: !pip3 install optree
...: from optree import PyTreeSpec
In [2]: PyTreeSpec
Out[2]: <class 'optree.PyTreeSpec'>
In [3]: PyTreeSpec.mro()
Out[3]: [<class 'optree.PyTreeSpec'>, <class 'pybind11_builtins.pybind11_object'>, <class 'object'>]
In [4]: PyTreeSpec() # an error raised as expected
TypeError: optree.PyTreeSpec: No constructor defined!
In [5]: spec = PyTreeSpec.__new__(PyTreeSpec) # expect to raise an error
In [6]: repr(spec) # segfault due to invalid memory access
[1] 31095 segmentation fault ipython3
All bound types created by py::class_<CppClass> inherit from pybind11_builtins.pybind11_object. As the comment says:
https://github.com/pybind/pybind11/blob/3cc7e4258c15a6a19ba5e0b62a220b1a6196d4eb/include/pybind11/detail/class.h#L346-L370
pybind11_object_new only allocates space for the C++ object, but doesn't call the constructor. That means if someone calls BoundCppClass.__new__(BoundCppClass), they get undefined results, since the C++ object is not initialized at all. Also, default member initializers in the C++ class definition are not applied, even if there is a default constructor.
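The hazard has a loose pure-Python analogue (class name below is hypothetical; in pure Python the failure mode is an AttributeError rather than undefined behavior, because attributes simply don't exist until __init__ assigns them):

```python
class MyPyList:
    """Pure-Python stand-in for the bound class: __new__ allocates,
    __init__ populates the "container" member."""

    def __init__(self):
        self.data = [0, 1, 2, 3]  # analogue of the C++ default member initializer

    def size(self):
        return len(self.data)

# Normal construction runs both __new__ and __init__.
ok = MyPyList()
assert ok.size() == 4

# Bypassing __init__ leaves the instance without `data`. Python raises
# AttributeError here; the pybind11-wrapped C++ object instead holds
# garbage memory, so the equivalent call is undefined behavior.
raw = MyPyList.__new__(MyPyList)
try:
    raw.size()
    raise SystemExit("expected AttributeError")
except AttributeError:
    pass
```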
Reproducible example code
- Clone https://github.com/pybind/cmake_example:
git clone https://github.com/pybind/cmake_example.git
cd cmake_example
- Paste the following content into src/main.cpp:
#include <string>
#include <sstream>
#include <vector>
#include <pybind11/pybind11.h>
namespace py = pybind11;
using ssize_t = py::ssize_t;
class MyList {
private:
std::vector<int> data = {0, 1, 2, 3};
public:
MyList() = default;
ssize_t size() const { return static_cast<ssize_t>(data.size()); }
std::string repr() const {
std::ostringstream os;
os << "MyList([";
for (std::size_t i = 0; i < data.size(); ++i) {
if (i != 0) {
os << ", ";
}
os << data[i];
}
os << "], size=" << size() << ")";
return os.str();
}
};
PYBIND11_MODULE(cmake_example, m) {
auto cls = py::class_<MyList>(m, "MyList");
cls.def(py::init<>());
cls.def("size", &MyList::size);
cls.def("__repr__", &MyList::repr);
}
- Create a new virtual environment and install:
python3 -m venv venv
source venv/bin/activate
pip3 install -U pip setuptools ipython
pip3 install -e .
- Run the following code in ipython:
In [1]: from cmake_example import MyList
In [2]: l = MyList()
In [3]: l.size()
Out[3]: 4
In [4]: l # calls repr()
Out[4]: MyList([0, 1, 2, 3], size=4)
In [5]: l = MyList.__new__(MyList)
In [6]: l.size()
Out[6]: -23599346664417
In [7]: l # calls repr()
[1] 31601 segmentation fault ipython3
In [1]: from cmake_example import MyList
In [2]: class MyAnotherList(MyList):
...: def __new__(cls):
...: inst = super().__new__(cls)
...: inst.default_size = inst.size() # default size from the C++ default constructor
...: return inst
...:
In [3]: l = MyAnotherList()
In [4]: l # calls repr()
Out[4]: MyList([0, 1, 2, 3], size=4)
In [5]: l.size()
Out[5]: 4
In [6]: l.default_size # expect 4
Out[6]: -23607549586738
In [7]: MyAnotherList().default_size # undefined
Out[7]: -5769946945
In [8]: MyAnotherList().default_size # undefined
Out[8]: -5769947144
In [9]: MyAnotherList().default_size # undefined
Out[9]: -23639255465217
Is this a regression? Put the last known working version here if it is.
Not a regression
You can override the behavior of __new__ on the C++ side, though (we have some tests to demonstrate this); see the work in #1693. Then you can have the C++ class call any __new__ method you want, see https://github.com/pybind/pybind11/blob/442261da585536521ff459b1457b2904895f23b4/tests/test_class.cpp#L89
@Skylion007 Thanks for the example. This works for me:
PyTreeSpecTypeObject
.def_static(
"__new__",
[](const py::type& cls) {
throw py::type_error(cls.attr("__module__").cast<std::string>() + "." +
cls.attr("__name__").cast<std::string>() +
" cannot be instantiated");
},
py::arg("cls"))
.def(py::init<>()) // no-op
Now it works as expected (raises a TypeError when calling __new__, and does nothing in __init__):
In [1]: !pip3 install -e .
...: from optree import PyTreeSpec
In [2]: PyTreeSpec
Out[2]: <class 'optree.PyTreeSpec'>
In [3]: PyTreeSpec.mro()
Out[3]: [<class 'optree.PyTreeSpec'>, <class 'pybind11_builtins.pybind11_object'>, <class 'object'>]
In [4]: PyTreeSpec() # an error raised as expected
TypeError: optree.PyTreeSpec cannot be instantiated
In [5]: PyTreeSpec.__new__(PyTreeSpec) # an error raised as expected
TypeError: optree.PyTreeSpec cannot be instantiated
In [6]: spec = tree_structure({'a': 1, 'b': (2, 3)})
In [7]: spec
Out[7]: PyTreeSpec({'a': *, 'b': (*, *)})
In [8]: spec.__new__(PyTreeSpec) # an error raised as expected
TypeError: optree.PyTreeSpec cannot be instantiated
In [9]: spec.__init__() # noop
Minor bug found in the test case:
https://github.com/pybind/pybind11/blob/442261da585536521ff459b1457b2904895f23b4/tests/test_class.cpp#L87-L90
Should be:
-.def(py::init([](const NoConstructorNew &self) { return self; })) // Need a NOOP __init__
+.def(py::init<>()) // Need a NOOP __init__
otherwise, you will get:
>>> print(NoConstructorNew.__init__.__doc__)
__init__(self: NoConstructorNew, arg0: NoConstructorNew) -> None
Reopened because I found another issue. If I raise TypeError in __new__ to forbid instantiation, then serialization support (__new__ + __setstate__) is broken.
In [1]: from optree import PyTreeSpec, tree_structure
In [2]: spec = tree_structure({'a': 1, 'b': (2, 3)})
In [3]: import pickle as pkl
In [4]: pkl.dumps(spec)
Out[4]: b'\x80\x04\x95w\x00\x00\x00\x00\x00\x00\x00\x8c\x06optree\x94\x8c\nPyTreeSpec\x94\x93\x94)\x81\x94((K\x01K\x00NNNK\x01K\x01t\x94(K\x01K\x00NNNK\x01K\x01t\x94(K\x01K\x00NNNK\x01K\x01t\x94(K\x03K\x02NNNK\x02K\x03t\x94(K\x05K\x02]\x94(\x8c\x01a\x94\x8c\x01b\x94eNNK\x03K\x05t\x94t\x94\x89\x8c\x00\x94\x87\x94b.'
In [5]: pkl.loads(pkl.dumps(spec))
TypeError: optree.PyTreeSpec cannot be instantiated
Ref:
- Python docs: obj.__setstate__(state)
- CPython: static PyObject *instantiate(PyObject *cls, PyObject *args)
- Pybind11 docs: Classes - Pickling support
In the Pybind11 pickling example, a new instance is returned in __setstate__ rather than updating the current object's state. This means that even if the class can be instantiated, obj1.__setstate__(obj2.__getstate__()) will have no effect on obj1.
The Python pickle module uses:
inst = cls.__new__(cls)
inst.__setstate__(state)
The return value of __setstate__ is ignored:
https://github.com/python/cpython/blob/65fb7c4055f280caaa970939d16dd947e6df8a8d/Modules/_pickle.c#L6630-L6644
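That protocol is easy to confirm in plain Python (the class below is illustrative, not the pybind11 machinery): unpickling goes through cls.__new__(cls) and __setstate__, never __init__, and the return value of __setstate__ is discarded:

```python
import pickle

class Point:
    def __init__(self, x, y):
        self.init_called = True  # marker to show __init__ is skipped on unpickling
        self.x, self.y = x, y

    def __getstate__(self):
        return {"x": self.x, "y": self.y}

    def __setstate__(self, state):
        self.__dict__.update(state)
        return "ignored"  # pickle discards this; it cannot replace the instance

p = pickle.loads(pickle.dumps(Point(1, 2)))
assert (p.x, p.y) == (1, 2)
assert not hasattr(p, "init_called")  # __init__ was never run while unpickling

# The same path done by hand:
q = Point.__new__(Point)           # no __init__
q.__setstate__({"x": 3, "y": 4})
assert (q.x, q.y) == (3, 4)
```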
@XuehaiPan That's an oddly specific use case. You have full access to the object state, so you could pass it in the lambda you are using for the __setstate__ function and return a reference to this, right? Or if that still doesn't work, you could just define __setstate__ yourself, although it would be really brittle due to C++-only data that might be needed: https://github.com/pybind/pybind11/blob/4768a6f8f5e1abe106b4e3c9899d5866d88d77f6/include/pybind11/detail/init.h#L375
Finding
- If the instance is allocated but hasn't been initialized yet, then obj.__setstate__(state) will update the instance's internal state.
- If the instance has been initialized, or is created by implicit type casting, then obj.__setstate__(state) has no effect.
I investigated pybind11's setstate implementation. It does update the instance state, by setting value_and_holder.value_ptr():
https://github.com/pybind/pybind11/blob/442261da585536521ff459b1457b2904895f23b4/include/pybind11/detail/init.h#L117-L142
But I have no idea why it has no effect on the Python side (obj.__setstate__(state)):
PyTreeSpecTypeObject
.def_static(
"__new__",
[](const py::type& cls) {
return std::make_unique<PyTreeSpec>();
},
py::arg("cls"))
.def(py::init<>()) // no-op
.def(py::pickle([](const PyTreeSpec& t) { return t.ToPicklable(); },
// Return a new instance, `setstate` will update `value_and_holder.value_ptr()`
[](const py::object& o) { return PyTreeSpec::FromPicklable(o); })
obj.__setstate__(other.__getstate__())
obj # unchanged
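For contrast, in pure Python obj.__setstate__(state) on an already-initialized object does mutate it in place, which is what makes the pybind11 behavior above surprising (illustrative class, not pybind11 internals):

```python
class Box:
    """Plain Python class following the standard pickle state protocol."""

    def __init__(self, value):
        self.value = value

    def __getstate__(self):
        return {"value": self.value}

    def __setstate__(self, state):
        self.__dict__.update(state)

a, b = Box(1), Box(2)
a.__setstate__(b.__getstate__())
assert a.value == 2  # unlike the pybind11 case above, `a` is updated in place
```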
I tried manually calling value_and_holder.type->init_instance. And I did not use .def_static("__new__", ...) (implicit cast) but set tp_new instead.
auto PyTreeSpecTypeObject =
py::class_<PyTreeSpec>(mod, "PyTreeSpec", "Representing the structure of the pytree.");
auto& PyTreeSpec_Type = *(reinterpret_cast<PyTypeObject*>(PyTreeSpecTypeObject.ptr()));
static auto pybind11_object_new = PyTreeSpec_Type.tp_new;
// NOTE: the lambda must be captureless so that it decays to the plain
// function pointer type that tp_new expects.
auto new_func = [](PyTypeObject* type, PyObject* args, PyObject* kwargs) -> PyObject* {
    auto* self = pybind11_object_new(type, args, kwargs);
    auto* inst = reinterpret_cast<py::detail::instance*>(self);
    auto v_h = inst->get_value_and_holder(py::detail::get_type_info(typeid(PyTreeSpec)));
    v_h.type->init_instance(inst, nullptr);
    return self;
};
PyTreeSpec_Type.tp_new = new_func;
Still:
- v_h.type->init_instance(inst, nullptr) does not initialize STL containers.
- __new__ + __setstate__ works as expected (because the instance is not initialized).
- obj.__setstate__(state) has no effect when obj is already initialized (add v_h.value_ptr() = new PyTreeSpec() in tp_new).
Overall, I cannot make both "always initialize the instance in __new__" and pickling support work as expected at the same time.
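One possible way out, sketched here in pure Python under stated assumptions (the sentinel and reconstructor names are made up, and a real fix would live on the C++ side), is to gate __new__ on a private token and route pickle through a module-level reconstructor that supplies it:

```python
import pickle

_TOKEN = object()  # hypothetical module-private construction token

class Spec:
    def __new__(cls, token=None):
        if token is not _TOKEN:
            raise TypeError(f"{cls.__name__} cannot be instantiated directly")
        return super().__new__(cls)

    def __getstate__(self):
        return {"payload": self.payload}

    def __setstate__(self, state):
        self.__dict__.update(state)

    def __reduce__(self):
        # Route pickle through the reconstructor instead of bare __new__,
        # so unpickling works even though direct construction is forbidden.
        return (_reconstruct_spec, (), self.__getstate__())

def _reconstruct_spec():
    return Spec(_TOKEN)

def make_spec(payload):  # the "blessed" factory, analogous to tree_structure()
    inst = Spec(_TOKEN)
    inst.payload = payload
    return inst

s = make_spec("abc")
assert pickle.loads(pickle.dumps(s)).payload == "abc"

try:
    Spec()  # direct construction is still forbidden
    raise SystemExit("expected TypeError")
except TypeError:
    pass
```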
@XuehaiPan would this PR solve most of your issue? If so, we can try to revive it and get something similar integrated into pybind11: https://github.com/pybind/pybind11/pull/4116
In a #4621 comment @Skylion007 asked:
> Would this flag help with the set_state problems in #4549?
Not sure. At first glance "probably not", but I could be wrong. I'd need a significant block of time to really understand.
We (@wangxf123456 and I) hit something related, I think, the other day. Ultimately I think it comes down to this:
https://github.com/pybind/pybind11/blob/d930de0bca046774acf2cd0c09b1e4ef84d8c0bb/include/pybind11/detail/smart_holder_type_casters.h#L658-L668
That's smart_holder code. I'm pointing it out here because it is meant to clearly identify and cleanly handle the "not initialized" case. I'd have to
- experiment to see how that plays with the MyList.__new__(MyList) use case here,
- and figure out what we could do on master to prevent the UB.
Sorry, nothing concrete/actionable right now.
I just finished minimizing another example of this issue while debugging what originally looked like something completely different. Once I got it minimized enough to convince me it was either pybind11 or pytest, I started searching issue trackers and found this issue, which allowed me to minimize it even further: https://github.com/davisp/tiledb-pybind11-bug
I may be able to get by with just throwing an exception from __new__ like @XuehaiPan did originally, because I'm pretty sure we don't support pickling our wrapped objects. However, the original cause of this was something in pytest attempting to repr an uninitialized value, so I'm guessing I'll fix the segfault just to be left with pytest failing on an internal exception. But at that point it's a pytest issue as far as I can tell.
I'm no expert on Python internals, but I'm curious why nothing else caught that this is what was happening. Once I figured out what was going on, I immediately started to wonder why the allocated memory isn't zeroed out and then checked against nullptr before invoking methods on it. I assume that's just a performance trade-off, since most folks aren't doing separate __new__ and __init__ calls, so adding those checks would slow everyone down for the sake of a few edge cases that fail weirdly?
Either way, I'll report back if throwing from __new__ works for my use case.
Apparently I skimmed the discussion on __new__ entirely too quickly, because that's not at all what I was thinking it was doing there. I guess I'll have to poke around for a different solution.
Looking at this it crossed my mind: does #4762 make a difference for the original issue reported here?
IIUC @davisp used the newest pybind11 master when debugging his issue. But if that's not the same as this issue, we have two questions, possibly with two root causes.
What would really help: Reproducers in pybind11 PRs.
@rwgk I only used what was newest on PyPI. However, that reproducer should be fairly easy to reproduce as a standalone test. I can give that a whirl if you think it's worth the time.
However, I'm not entirely certain on what the expected behavior should be here. My current understanding is that this is similar to placement new in that we've allocated the space to hold the wrapped C++ object, then failed to successfully instantiate the object, which means if we attempt to apply a method on the un-instantiated random bit of memory we can easily segfault the interpreter.
Which is to say, I can at least get things into the shape of a test, but I'm not entirely certain what the behavior is expected to be in order to know what specifically to assert and what not.
> Which is to say, I can at least get things into the shape of a test, but I'm not entirely certain what the behavior is expected to be in order to know what specifically to assert and what not.
I did not mean to suggest looking for (or constructing) a problem.
Offering a couple more thoughts for completeness:
> if we attempt to apply a method on the un-instantiated random bit of memory we can easily segfault the interpreter.
Is that acceptable? — I'm not sure. It doesn't sound ideal, but then again, there might be arguments against runtime overhead that comes with a guard.
A related question: Is the behavior safer when using py::smart_holder (smart_holder branch)? — I expect it to throw an exception instead of segfaulting, but I haven't tried it out.
> Is that acceptable? — I'm not sure. It doesn't sound ideal, but then again, there might be arguments against runtime overhead that comes with a guard.
This is precisely the question I don't know how to answer. On the one hand, it feels like an odd edge case that most folks don't encounter, so tacking on some non-zero overhead for everyone feels a bit heavy-handed. And on the other hand, knowing there's a basic inheritance issue that can lead to segfaults seems not awesome either.
If I find some time, I'll try that smart_holder branch and report back what I find.
Fell into this again while testing pickling support with tp_traverse.
How pickle works for pybind11:
1. The Python pickle module calls tp_new (pybind11_object_new) to create a new Python object with a holder pointing to the user's C++ class. The holder does not initialize the STL container of py::objects.
2. The py::pickle SetState function creates a new user C++ class instance based on the passed state arguments.
3. The Python object's holder instance (uninitialized in step 1) is replaced with the initialized instance from step 2.
Now my SetState function raised an error due to getting a malformed state. The program stops at step 2, and the Python object's holder is still not initialized. Then the gc module calls tp_traverse during garbage collection, and that traversal over the uninitialized STL container causes a segmentation fault.
A small snippet to demonstrate:
class OwnsPythonObjects {
public:
std::vector<py::object> sequence;
};
{
py::class_<OwnsPythonObjects> cls(
mod, "OwnsPythonObjects", py::custom_type_setup([](PyHeapTypeObject *heap_type) {
auto *type = &heap_type->ht_type;
type->tp_flags |= Py_TPFLAGS_HAVE_GC;
type->tp_traverse = [](PyObject *self_base, visitproc visit, void *arg) {
if (PY_VERSION_HEX >= 0x03090000) [[likely]] { // Python 3.9
Py_VISIT(Py_TYPE(self_base));
}
auto &self = py::cast<OwnsPythonObjects &>(py::handle(self_base));
for (auto &item : self.sequence) { // <<< segmentation fault due to uninitialized vector
Py_VISIT(item.ptr());
}
return 0;
};
}));
cls.def(py::pickle([](const OwnsPythonObjects &t) { return py::cast<py::tuple>(t.sequence); },
                   [](const py::object &o) -> OwnsPythonObjects { throw py::value_error("malformed state"); }));
// immutable type does not need tp_clear
reinterpret_cast<PyTypeObject *>(cls.ptr())->tp_flags |= Py_TPFLAGS_IMMUTABLETYPE;
}
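The failing sequence can be mimicked in plain Python (hypothetical names; CPython's gc never runs this pure-Python method, so the guard below only models the idea): an object created via __new__ whose __setstate__ raises survives half-built, and any traversal must tolerate the missing member:

```python
import pickle

class Holder:
    def __init__(self):
        self.sequence = []  # models the STL container of py::objects

    def __getstate__(self):
        return {"sequence": list(self.sequence)}

    def __setstate__(self, state):
        raise ValueError("malformed state")  # models SetState rejecting the input

    def traverse(self):
        # Model of tp_traverse with a guard for the uninitialized case.
        for item in getattr(self, "sequence", ()):  # guard: missing attr -> empty
            pass
        return True

data = pickle.dumps(Holder())
try:
    pickle.loads(data)  # __new__ succeeds, then __setstate__ raises at step 2
    raise SystemExit("expected ValueError")
except ValueError:
    pass

# By hand: the half-built object still exists and must be traversable.
half = Holder.__new__(Holder)
try:
    half.__setstate__({})
except ValueError:
    pass
assert half.traverse()
```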
> Fell into this again while testing pickling support with tp_traverse.
I believe that's a different problem.
To me it looks like, the way the py::pickle mechanism is set up (https://pybind11.readthedocs.io/en/stable/advanced/classes.html#pickling-support), it's impossible to safely support GC.
I think this alternative pickle mechanism will support your use case:
https://github.com/google/pybind11k/pull/30094/files
Note that the production code change is tiny: the delta is just -4 lines, +13 lines in pybind11.h.
I think with that, you could just add a default constructor (__getinitargs__ could be left undefined), and then there wouldn't be a problem if __setstate__[non-constructor] raises an exception.
It should be really easy for you to try out, just patch pybind11.h locally.
Please let me know if that works or not.
@rwgk Unfortunately, __getinitargs__ does not fit my use case, because I do not bind the C++ ctor to Python __init__.
> To me it looks like, the way the py::pickle mechanism is set up (pybind11.readthedocs.io/en/stable/advanced/classes.html#pickling-support), it's impossible to safely support GC.
I resolved this by adding a guard check in my tp_traverse implementation:
py::custom_type_setup([](PyHeapTypeObject *heap_type) {
auto *type = &heap_type->ht_type;
type->tp_flags |= Py_TPFLAGS_HAVE_GC;
type->tp_traverse = [](PyObject *self_base, visitproc visit, void *arg) {
if (PY_VERSION_HEX >= 0x03090000) [[likely]] { // Python 3.9
Py_VISIT(Py_TYPE(self_base));
}
+ auto* instance = reinterpret_cast<py::detail::instance*>(self_base);
+ if (!instance->get_value_and_holder().holder_constructed()) [[unlikely]] {
+ // The holder is not constructed yet. Skip the traversal to avoid segfault.
+ return 0;
+ }
auto &self = py::cast<OwnsPythonObjects &>(py::handle(self_base));
for (auto &item : self.sequence) {
Py_VISIT(item.ptr());
}
return 0;
};
})
But this relies on implementation details of pybind11 objects.
I think the ultimate solution is to initialize the allocated memory in tp_new. For example, PyTuple_New both allocates the memory and sets all entries in the tuple to NULL, rather than leaving them as random pointers.
As an opt-in feature, I think we could add a new option (e.g., py::init_on_new()) to the type_record used by py::class_ to call the default ctor of the user C++ class in pybind11_object_new.
> But this relies on implementation details of pybind11 objects.
Cool! Everything around it does anyway. Just be happy!