
Configuring Spark Job Server

Procedure

  1. Open the <Spark_Job_Server_Path>/settings.sh file.
  2. Edit the following line to specify the IP address on which the Spark Job Server service listens.
    sjs_host=localhost
  3. Edit the following line to specify the port on which the Spark Job Server service listens.
    sjs_port=8098
  4. Edit the following line to specify the path to the Hadoop cluster settings directory. Note that this must be a local path: obtain the settings files (such as hdfs-site.xml, mapred-site.xml, core-site.xml, and yarn-site.xml) from your Hadoop cluster and copy them to the machine where Spark Job Server is installed.
    hadoop_conf_dir=/path/to/hadoop/cluster/settings/directory
  5. To use Spark Job Server with a secure Hadoop cluster (one that uses Kerberos), add the following line to the file. Note that this must be a local path: obtain the krb5.conf file from your Hadoop cluster and copy it to the machine where Spark Job Server is installed.
    krb5.config=/path/to/Kerberos/configuration/file/krb5.conf
  6. Save your changes to the settings file.
  7. Restart Spark Job Server for your changes to take effect.
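
Taken together, the edits above produce a settings.sh similar to the following sketch. The IP address and the two paths are placeholder values for illustration only; substitute the values appropriate to your environment, and include the krb5.config line only for a Kerberos-secured cluster.

```shell
# Address and port on which the Spark Job Server service listens
sjs_host=192.168.1.10
sjs_port=8098

# Local directory holding the Hadoop cluster settings files
# (hdfs-site.xml, mapred-site.xml, core-site.xml, yarn-site.xml)
hadoop_conf_dir=/etc/hadoop/conf

# Kerberos-secured clusters only: local path to the krb5.conf file
# copied from the Hadoop cluster
krb5.config=/etc/krb5.conf
```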
