On ARM processors (AWS Graviton2, RHEL 8), I get the following doctest failure with scikit-learn 0.23.1:
1038 >>> from sklearn.model_selection import train_test_split
1039 >>> X, y = make_classification(random_state=0)
1040 >>> X_train, X_test, y_train, y_test = train_test_split(
1041 ... X, y, random_state=0)
1042 >>> clf = GradientBoostingClassifier(random_state=0)
1043 >>> clf.fit(X_train, y_train)
1044 GradientBoostingClassifier(random_state=0)
1045 >>> clf.predict(X_test[:2])
1046 array([1, 0])
1047 >>> clf.score(X_test, y_test)
Expected:
0.88
Got:
0.84
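For reference, the failing doctest can be run as a standalone script (imports added; the exact score is platform-dependent, which is the point of this report):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Same setup as the doctest in sklearn/ensemble/_gb.py.
X, y = make_classification(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_train, y_train)

# The doctest expects 0.88 (observed on x86_64); 0.84 was observed on aarch64.
print(clf.score(X_test, y_test))
```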
Running the corresponding doctest:

pytest -v sklearn/ensemble/_gb.py::sklearn.ensemble._gb.GradientBoostingClassifier

Expected result: PASSED is thrown.
Actual result: FAILED is thrown, with the same discrepancy:

1047 >>> clf.score(X_test, y_test)
Expected:
    0.88
Got:
    0.84
System:
python: 3.6.8 (default, Dec 5 2019, 16:02:25) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
executable: /usr/bin/python3
machine: Linux-4.18.0-193.1.2.el8_2.aarch64-aarch64-with-redhat-8.2-Ootpa
Python dependencies:
pip: 20.1.1
setuptools: 39.2.0
sklearn: 0.23.1
numpy: 1.14.3
scipy: 1.0.0
Cython: 0.29
pandas: 1.0.5
matplotlib: 3.2.1
joblib: 0.14.0
threadpoolctl: 2.1.0
Built with OpenMP: True
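For completeness, a version report like the one above can be regenerated with scikit-learn's built-in helper:

```shell
python -c "import sklearn; sklearn.show_versions()"
```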
Thanks for the report @murata-yu, I can reproduce it in https://github.com/scikit-learn/scikit-learn/pull/17996
Not yet sure if it's an indication of an actual issue or if we should just increase the tolerance.
From a user's point of view, a 4% change in accuracy in the 0.8 range looks like more than a small numerical rounding discrepancy. It's worth investigating.
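One way to start localizing the divergence (a sketch, not something from this thread): record the test-set accuracy after each boosting stage with `staged_predict`, save the array on each architecture, and diff the two files to find the first iteration where the ensembles disagree.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Accuracy on the test set after each of the 100 boosting stages;
# comparing this array between x86_64 and aarch64 runs shows where
# the per-stage predictions first diverge.
staged_acc = np.array(
    [np.mean(y_pred == y_test) for y_pred in clf.staged_predict(X_test)]
)
np.save("staged_acc.npy", staged_acc)
print(staged_acc[-1])  # matches clf.score(X_test, y_test)
```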
I can reproduce the problem locally by building scikit-learn in an arm64 miniforge environment in a docker/qemu container, as described in https://github.com/scikit-learn/scikit-learn/pull/17644#issuecomment-663857435.