ImportError: cannot import name 'available_if' from 'sklearn.utils.metaestimators'

Question:

I am using the code below:

import sklearn
from imblearn.pipeline import make_pipeline

but it raises the following error:

ImportError: cannot import name 'available_if' from 'sklearn.utils.metaestimators' (/databricks/python/lib/python3.8/site-packages/sklearn/utils/metaestimators.py)

Here is the complete error:

> ImportError                               Traceback (most recent call
> last) <command-3963464708539101> in <module>
>       1 import sklearn
> ----> 2 from imblearn.pipeline import make_pipeline
>       3 from imblearn.over_sampling import SMOTE
>       4 from imblearn.under_sampling import NearMiss
>       5 
> 
> /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py
> in import_patch(name, globals, locals, fromlist, level)
>     160             # Import the desired module. If you’re seeing this while debugging a failed import,
>     161             # look at preceding stack frames for relevant error information.
> --> 162             original_result = python_builtin_import(name, globals, locals, fromlist, level)
>     163 
>     164             is_root_import = thread_local._nest_level == 1
> 
> /databricks/python/lib/python3.8/site-packages/imblearn/__init__.py in
> <module>
>      51 else:
>      52     from . import combine
> ---> 53     from . import ensemble
>      54     from . import exceptions
>      55     from . import metrics
> 
> /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py
> in import_patch(name, globals, locals, fromlist, level)
>     160             # Import the desired module. If you’re seeing this while debugging a failed import,
>     161             # look at preceding stack frames for relevant error information.
> --> 162             original_result = python_builtin_import(name, globals, locals, fromlist, level)
>     163 
>     164             is_root_import = thread_local._nest_level == 1
> 
> /databricks/python/lib/python3.8/site-packages/imblearn/ensemble/__init__.py
> in <module>
>       4 """
>       5 
> ----> 6 from ._easy_ensemble import EasyEnsembleClassifier
>       7 from ._bagging import BalancedBaggingClassifier
>       8 from ._forest import BalancedRandomForestClassifier
> 
> /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py
> in import_patch(name, globals, locals, fromlist, level)
>     160             # Import the desired module. If you’re seeing this while debugging a failed import,
>     161             # look at preceding stack frames for relevant error information.
> --> 162             original_result = python_builtin_import(name, globals, locals, fromlist, level)
>     163 
>     164             is_root_import = thread_local._nest_level == 1
> 
> /databricks/python/lib/python3.8/site-packages/imblearn/ensemble/_easy_ensemble.py
> in <module>
>      19 from ..utils._docstring import _random_state_docstring
>      20 from ..utils._validation import _deprecate_positional_args
> ---> 21 from ..pipeline import Pipeline
>      22 
>      23 MAX_INT = np.iinfo(np.int32).max
> 
> /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py
> in import_patch(name, globals, locals, fromlist, level)
>     160             # Import the desired module. If you’re seeing this while debugging a failed import,
>     161             # look at preceding stack frames for relevant error information.
> --> 162             original_result = python_builtin_import(name, globals, locals, fromlist, level)
>     163 
>     164             is_root_import = thread_local._nest_level == 1
> 
> /databricks/python/lib/python3.8/site-packages/imblearn/pipeline.py in
> <module>
>      16 from sklearn.base import clone
>      17 from sklearn.utils import _print_elapsed_time
> ---> 18 from sklearn.utils.metaestimators import available_if
>      19 from sklearn.utils.validation import check_memory
>      20 
> 
> ImportError: cannot import name 'available_if' from
> 'sklearn.utils.metaestimators'
> (/databricks/python/lib/python3.8/site-packages/sklearn/utils/metaestimators.py)

Output of sklearn.__version__: '0.24.1'

I have tried a lot of things but nothing works. Please let me know if you have a solution. It may be a version incompatibility, but I don't know which versions work well together.
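For context: available_if was added to sklearn.utils.metaestimators in scikit-learn 1.0, so any imbalanced-learn release that imports it needs scikit-learn >= 1.0 at import time. A minimal check (a diagnostic sketch, not part of the original question):

```python
# available_if was added in scikit-learn 1.0; older versions such as
# 0.24.1 do not have it, which is exactly what this ImportError means.
try:
    import sklearn
    from sklearn.utils.metaestimators import available_if  # noqa: F401
    status = f"scikit-learn {sklearn.__version__}: available_if present"
except ImportError as exc:
    status = f"scikit-learn too old or missing: {exc}"
print(status)
```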

Edit:
Also, I am getting the following output:

!pip install scikit-learn==1.1.1
print(sklearn.__version__)
sklearn.__path__

Output:

Requirement already satisfied: scikit-learn==1.1.1 in /databricks/python3/lib/python3.8/site-packages (1.1.1)
Requirement already satisfied: scipy>=1.3.2 in /databricks/python3/lib/python3.8/site-packages (from scikit-learn==1.1.1) (1.6.2)
Requirement already satisfied: numpy>=1.17.3 in /databricks/python3/lib/python3.8/site-packages (from scikit-learn==1.1.1) (1.19.2)
Requirement already satisfied: joblib>=1.0.0 in /databricks/python3/lib/python3.8/site-packages (from scikit-learn==1.1.1) (1.0.1)
Requirement already satisfied: threadpoolctl>=2.0.0 in /databricks/python3/lib/python3.8/site-packages (from scikit-learn==1.1.1) (2.1.0)
Out[45]: '0.24.1'
Out[46]: ['/databricks/python/lib/python3.8/site-packages/sklearn']

Here, pip reports scikit-learn 1.1.1 installed under /databricks/python3/, but the notebook imports version 0.24.1 from /databricks/python/.
This mismatch might be the issue, but I don't know the solution.
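One way to see the mismatch directly is to compare the interpreter the kernel runs on with the site-packages directory it imports from; installing through that same interpreter avoids the /python vs. /python3 split (a diagnostic sketch):

```python
import sys
import sysconfig

# Interpreter the current kernel is running on
print("interpreter:", sys.executable)

# site-packages directory this interpreter imports from
site_packages = sysconfig.get_paths()["purelib"]
print("imports from:", site_packages)

# Installing via the same interpreter guarantees pip targets this path:
#   {sys.executable} -m pip install scikit-learn==1.1.1
```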

Asked By: Sajjad Manal


Answers:

Did you try using a virtual environment?

To create a venv (virtual environment):

python -m venv venv_name

To activate the venv:

source venv_name/bin/activate
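If you go this route, install with the venv's own interpreter so packages land where that interpreter looks (a sketch; /tmp/imblearn_env is just an illustrative location, and whether a venv is practical on a Databricks cluster depends on your setup):

```shell
# Create an isolated environment at an example path
python3 -m venv /tmp/imblearn_env

# Use the venv's own interpreter for pip, so install and import paths match
/tmp/imblearn_env/bin/python -m pip --version

# Then install compatible versions inside it, e.g.:
#   /tmp/imblearn_env/bin/python -m pip install scikit-learn==1.1.1 imbalanced-learn
```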

Answered By: GOGHI

I had to pin specific versions of numpy, pandas, dask-ml and scikit-learn to resolve this:

numpy 1.22.4
pandas 1.2.4
dask-ml 2022.5.27
scikit-learn 1.1.1
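A quick way to confirm an environment actually matches these pins (a small check sketch; the versions come from the list above):

```python
from importlib.metadata import PackageNotFoundError, version

# Version pins reported to work together
pins = {
    "numpy": "1.22.4",
    "pandas": "1.2.4",
    "dask-ml": "2022.5.27",
    "scikit-learn": "1.1.1",
}

# Collect the installed version (or a placeholder) for each pinned package
report = {}
for name in pins:
    try:
        report[name] = version(name)
    except PackageNotFoundError:
        report[name] = "not installed"

for name, got in report.items():
    flag = "OK" if got == pins[name] else "MISMATCH"
    print(f"{name}: want {pins[name]}, have {got} [{flag}]")
```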
Answered By: Sajjad Manal