Simple setup for it-hadoop-client #25
I don't really see any action I need to take here. You can do this or anything else you like. What exactly do you require from CMSSpark? In ticket #24 I tested your code without specifying a custom configuration, and it works on it-hadoop-client. So please provide a concrete action/issue you're asking to be fixed, or close this ticket. |
It's more of a note for people here who may find it useful. I'm making suggestions to simplify the codebase. |
@nsmith I tried it on a SWAN Python notebook with analytix/spark/hadoop, but it cannot find a class: My Spark env looks like: Do you know how to pass the needed packages in SWAN? |
Bockjoo,
it is unclear to me how you run CMSSpark. But if you ran its run_spark
script, it sets up the proper jars, e.g. see its settings here:
https://github.com/dmwm/CMSSpark/blob/master/bin/run_spark#L63-L119
The problem is that we have several clusters and environments to support.
Some environments are set up by sourcing LCG scripts, others via manual settings. The latter
was done back when we were just starting and the libraries were not yet in place.
The SWAN setup is outside my area of expertise, and I'd advise you to open a SNOW
ticket for that. But if you need help running a Spark job from lxplus or
elsewhere, I can show you how we do it with the run_spark wrapper.
Best,
Valentin.
…On 0, Bockjoo Kim <***@***.***> wrote:
@nsmith I tried it on SWAN python notebook with analytix/spark/hadoop, but it can not find a class:
Py4JJavaError: An error occurred while calling o82.load.
: java.lang.ClassNotFoundException: Failed to find data source: org.apache.spark.sql.avro.AvroFileFormat. Please find packages at http://spark.apache.org/third-party-projects.html
I think I passed CMSSpark because it seems to have databricks.spark.avro, too.
I tried it on the command line, but have the same error:
bash-4.2$ spark-submit --packages com.databricks:spark-avro_2.11:4.0.0 test.py 2>&1 | grep Fail
: java.lang.ClassNotFoundException: Failed to find data source: org.apache.spark.sql.avro.AvroFileFormat. Please find packages at http://spark.apache.org/third-party-projects.html
My spark env looks like:
SPARK_HOME=/cvmfs/sft.cern.ch/lcg/releases/spark/2.4.0-cern1-6f44e/x86_64-centos7-gcc7-opt
SPARKMONITOR_KERNEL_PORT=36803
SPARK_CLUSTER_NAME=analytix
SPARK_CONF_DIR=/cvmfs/sft.cern.ch/lcg/etc/hadoop-confext/etc/spark.analytix/conf
SPARK_PORTS=32832,40308,36371,34794,36297,39138
SPARK_USER=bockjoo
HADOOP_TOKEN_FILE_LOCATION=/spark/hadoop.toks
PYSPARK_PYTHON=/cvmfs/sft.cern.ch/lcg/releases/Python/2.7.15-c333c/x86_64-centos7-gcc7-opt/bin/python
PYSPARK_DRIVER_PYTHON=/cvmfs/sft.cern.ch/lcg/views/LCG_95a/x86_64-centos7-gcc7-opt/bin/python
SPARK_CONFIG_SCRIPT=/cvmfs/sft.cern.ch/lcg/etc/hadoop-confext/hadoop-setconf.sh
SPARK_LOCAL_IP=172.17.0.5
SPARK_DIST_CLASSPATH=/cvmfs/sft.cern.ch/lcg/views/LCG_95a/x86_64-centos7-gcc7-opt/etc/hadoop:/cvmfs/sft.cern.ch/lcg/views/LCG_95a/x86_64-centos7-gcc7-opt/share/hadoop/common/lib/*:/cvmfs/sft.cern.ch/lcg/views/LCG_95a/x86_64-centos7-gcc7-opt/share/hadoop/common/*:/cvmfs/sft.cern.ch/lcg/views/LCG_95a/x86_64-centos7-gcc7-opt/share/hadoop/hdfs:/cvmfs/sft.cern.ch/lcg/views/LCG_95a/x86_64-centos7-gcc7-opt/share/hadoop/hdfs/lib/*:/cvmfs/sft.cern.ch/lcg/views/LCG_95a/x86_64-centos7-gcc7-opt/share/hadoop/hdfs/*:/cvmfs/sft.cern.ch/lcg/views/LCG_95a/x86_64-centos7-gcc7-opt/share/hadoop/yarn/lib/*:/cvmfs/sft.cern.ch/lcg/views/LCG_95a/x86_64-centos7-gcc7-opt/share/hadoop/yarn/*:/cvmfs/sft.cern.ch/lcg/views/LCG_95a/x86_64-centos7-gcc7-opt/share/hadoop/mapreduce/lib/*:/cvmfs/sft.cern.ch/lcg/views/LCG_95a/x86_64-centos7-gcc7-opt/share/hadoop/mapreduce/*
Do you know how to pass the needed packages in SWAN?
Should I add the classpath for org.apache.spark.sql.avro.AvroFileFormat?
If so, where is the jar file for that?
|
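The ClassNotFoundException quoted above looks like a format/package mismatch: the error names Spark 2.4's built-in avro source (org.apache.spark.sql.avro.AvroFileFormat), while the --packages flag pulled in the Databricks reader. A minimal sketch of the two combinations that should line up, assuming Spark 2.4 on Scala 2.11 and a placeholder HDFS path:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("avro-check").getOrCreate()

# Option 1: keep com.databricks:spark-avro_2.11:4.0.0 on --packages and
# request the Databricks reader explicitly.
df = spark.read.format("com.databricks.spark.avro").load("/path/to/data.avro")

# Option 2: submit with Spark's own avro module instead, i.e.
#   spark-submit --packages org.apache.spark:spark-avro_2.11:2.4.0 test.py
# and then the built-in short format name resolves:
# df = spark.read.format("avro").load("/path/to/data.avro")

df.printSchema()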
Have you tried not using CMSSpark at all? If you just follow https://hadoop-user-guide.web.cern.ch/hadoop-user-guide/getstart/client_cvmfs.html it works fine for me on lxplus. It should work on SWAN if you include the 'CMSSpark options' checkbox. (You don't need to set up the session in SWAN, though.) |
Actually I lied, it works fine on the edge nodes but not lxplus. |
@vkuznet Thanks Valentin! I will try to set up the avro jars following the run_spark script and see if that helps. |
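For the "how to pass the needed packages in SWAN" part of the question, one generic option is to request the package at session construction via spark.jars.packages. This is a sketch, assuming the notebook itself creates the SparkSession (in SWAN the session is often preconfigured, in which case the package has to go into the connection options instead); the path is again a placeholder:

from pyspark.sql import SparkSession

# Ask Spark to fetch the avro module when the session starts. This only
# takes effect if no SparkSession exists yet.
spark = (SparkSession.builder
         .appName("avro-in-notebook")
         .config("spark.jars.packages", "org.apache.spark:spark-avro_2.11:2.4.0")
         .getOrCreate())

df = spark.read.format("avro").load("/path/to/data.avro")  # placeholder path
df.show(5)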
@bockjoo if you're interested in a high-level comparison of these popularity sources, I can forward you some material. |
@nsmith Yes, please. Thanks! |
This works on SWAN Bleeding Edge (@nsmith I am not sure if you meant this Edge, the k8s edge, or something else): |
@nsmith- , @bockjoo , this is what I suspected: even though the usage is trivial, as Nick pointed out in his first post, the number of use-cases where you'll look at different data sources will grow, and eventually, to accommodate all of them, you'll converge on something like what CMSSpark is aiming for. I'm not against this simple solution, but you should keep in mind that it will have to be adapted/modified for every source we have on HDFS. Unfortunately, due to the heterogeneous sources/formats/structures there is no simple "solution", and I am rather in favor of keeping such details hidden from end-users and simplifying access to all sources via a common framework. |
I found that by just following the instructions at https://hadoop-user-guide.web.cern.ch/hadoop-user-guide/gettingstarted_md.html I can submit a minimal job (see the sketch below). Perhaps this is a better soft introduction than the RDD complexity? Also, there seem to be lxplus options.
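A hedged reconstruction of what such a minimal DataFrame job might look like; the dataset path and the submit command are illustrative assumptions, not the original snippet:

# minimal_job.py -- illustrative sketch, not the original code.
# Submit with, e.g.:
#   spark-submit --packages com.databricks:spark-avro_2.11:4.0.0 minimal_job.py
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("minimal-job").getOrCreate()

# Placeholder avro dataset on HDFS; substitute a real path.
df = spark.read.format("com.databricks.spark.avro").load("/path/on/hdfs/data.avro")
print(df.count())

spark.stop()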