I am trying to run the `RandomForestRegressor` in parallel and I am getting the error below. I have checked the environment for the class mentioned in the error, and `_FuncWrapper` does exist in that `fixes.py`.
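For context, the setup is roughly the following. This is only a minimal sketch: it assumes the joblib-spark `spark` backend (the pyspark frames in the traceback suggest that), and the data, estimator parameters, and `n_jobs` values are placeholders rather than my exact code.

```python
# Minimal sketch (placeholders, not the exact code): fit a RandomForestRegressor
# with joblib's Spark backend so tree-fitting tasks run on Spark executors.
import numpy as np
from joblib import parallel_backend
from joblibspark import register_spark
from sklearn.ensemble import RandomForestRegressor

register_spark()  # make the "spark" backend available to joblib

X = np.random.rand(1000, 10)  # placeholder features
y = np.random.rand(1000)      # placeholder target

model = RandomForestRegressor(n_estimators=100, n_jobs=8)
with parallel_backend("spark", n_jobs=8):
    # The pickled fit tasks reference sklearn internals such as
    # sklearn.utils.fixes._FuncWrapper, which is where the error surfaces
    # when the executors unpickle them.
    model.fit(X, y)
```

The fit then fails with: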
```
Caused by: org.apache.spark.api.python.PythonException: 'AttributeError: Can't get attribute '_FuncWrapper' on <module 'sklearn.utils.fixes' from '/databricks/python/lib/python3.7/site-packages/sklearn/utils/fixes.py'>'. Full traceback below:
Traceback (most recent call last):
  File "/databricks/spark/python/pyspark/worker.py", line 636, in main
    func, profiler, deserializer, serializer = read_command(pickleSer, infile)
  File "/databricks/spark/python/pyspark/worker.py", line 77, in read_command
    command = serializer.loads(command.value)
  File "/databricks/spark/python/pyspark/serializers.py", line 466, in loads
    return pickle.loads(obj, encoding=encoding)
AttributeError: Can't get attribute '_FuncWrapper' on <module 'sklearn.utils.fixes' from '/databricks/python/lib/python3.7/site-packages/sklearn/utils/fixes.py'>
	at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:598)
	at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:733)
	at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:716)
	at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:551)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
	at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
	at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
	at scala.collection.TraversableOnce.to(TraversableOnce.scala:315)
	at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313)
	at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
	at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307)
	at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307)
	at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
	at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294)
	at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288)
```
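Since the failure happens while the executors unpickle the task, one thing worth comparing is the sklearn version the driver pickles with against the one the executors import; if they differ, the executors' `sklearn.utils.fixes` may simply predate `_FuncWrapper`. A rough probe (a sketch, assuming an active `SparkSession` named `spark`, as in a Databricks notebook):

```python
# Rough probe: compare sklearn on the driver vs. on the executors.
import sklearn
import sklearn.utils.fixes as fixes

print("driver   :", sklearn.__version__, hasattr(fixes, "_FuncWrapper"))

def probe(_):
    # Imported inside the function so it runs in the executor's Python env.
    import sklearn
    import sklearn.utils.fixes as fixes
    return (sklearn.__version__, hasattr(fixes, "_FuncWrapper"))

executor_envs = (
    spark.sparkContext.parallelize(range(8), 8).map(probe).distinct().collect()
)
print("executors:", executor_envs)
```

If the two sides report different versions, that mismatch would explain the `AttributeError` above.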