Go to Docker Hub

Apache Spark Docker container image (Standalone mode)

Standalone Spark cluster mode requires a dedicated instance called the master to coordinate the cluster workloads, so an additional cluster manager such as Mesos, YARN or Kubernetes is not necessary. A Standalone cluster can, however, still be used together with any of these cluster managers. Standalone mode is also the most flexible way to deliver Spark workloads to Kubernetes, since as of Spark 2.4.0 the native Spark-on-Kubernetes support is still very limited.

Starting up

Clone this repo and use docker-compose to bring up the sample standalone Spark cluster.

docker-compose up

Note that the default configuration exposes ports 8080 and 8081 for the master and the worker containers, respectively.
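With those ports exposed, the master web UI should be reachable on http://localhost:8080 and the worker UI on http://localhost:8081. A minimal sketch of day-to-day use, assuming the compose file names the worker service worker:

```sh
# Start the sample cluster in the background.
docker-compose up -d

# Add more workers; assumes the worker service is named "worker" in docker-compose.yml.
docker-compose up -d --scale worker=3

# Tear the cluster down again.
docker-compose down
```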

Configuration

Standalone mode supports two container roles, master and worker. Depending on which one you need to start, pass either the master or the worker command to this container. The worker requires only one argument, the Spark master URL (spark://host:7077).
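For illustration, the two roles can also be started without Compose. This is only a sketch: spark-standalone stands in for the image's actual Docker Hub name, and the published ports follow the defaults mentioned above (7077 for the master, 8080/8081 for the web UIs).

```sh
# A user-defined network so the worker can resolve the master by name.
docker network create spark-net

# Start the master; 7077 is published so spark-submit from the host can reach it.
docker run -d --name spark-master --network spark-net \
  -p 7077:7077 -p 8080:8080 \
  spark-standalone master

# Start a worker and point it at the master's URL.
docker run -d --name spark-worker --network spark-net \
  -p 8081:8081 \
  spark-standalone worker spark://spark-master:7077
```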

Fine-tuning of the configuration may be achieved by mounting /spark/conf/spark-defaults.conf or /spark/conf/spark-env.sh, or by passing SPARK_* environment variables directly. See the official Spark configuration documentation for more details.
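As a sketch, a custom properties file and a SPARK_* variable could be supplied as below. SPARK_WORKER_CORES is a standard spark-env.sh variable, but whether a given variable is honoured depends on this image's entrypoint; the image and network names follow the placeholders from the previous example.

```sh
# Mount a custom spark-defaults.conf and cap the worker at two cores.
docker run -d --network spark-net -p 8081:8081 \
  -v "$(pwd)/spark-defaults.conf:/spark/conf/spark-defaults.conf" \
  -e SPARK_WORKER_CORES=2 \
  spark-standalone worker spark://spark-master:7077
```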

Important: scratch volumes

  • /spark/work - directory used for scratch space and job output logs; worker only. Can be overridden via the -w path CLI argument.
  • /tmp - directory used for "scratch" space in Spark, including map output files and RDDs that get stored on disk (the spark.local.dir setting); see the mount sketch after this list.
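A minimal sketch of persisting both scratch locations on the host, reusing the placeholder names from the examples above; the host paths are only examples.

```sh
# Keep job output logs and Spark's local scratch space on the host
# (example host paths) so they survive container restarts and don't
# fill up the container's writable layer.
docker run -d --network spark-net -p 8081:8081 \
  -v /data/spark/work:/spark/work \
  -v /data/spark/tmp:/tmp \
  spark-standalone worker spark://spark-master:7077
```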

Authors