This guide is written for everyone interested in deploying GraphHopper on a server.
See the installation section on how to start the server.
Then you can embed these commands in a shell script and call it from e.g. Docker or systemd.
For production usage a web service is already included; increase the `-Xmx`/`-Xms` parameters of the `java` command according to your data size.
You can reduce the memory requirements for the import step when you run the `import` command explicitly before the `server` command:
```bash
java [options] -jar *.jar import config.yml
java [options] -jar *.jar server config.yml # runs the import implicitly, if not done before
```
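For example, a minimal start script that Docker or systemd could call might look like the following sketch; the jar name, heap sizes and config path are placeholders you need to adapt:

```bash
#!/bin/bash
# start script sketch: jar name, heap size and config path are assumptions
java -Xms8g -Xmx8g -jar graphhopper-web-*.jar server config.yml
```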
Try different garbage collectors (GCs) like ZGC or Shenandoah for serving the routing requests. G1 is the default GC, but the other two are better suited for JVMs with bigger heaps (>32GB) and deliver lower pause times. You enable them with `-XX:+UseZGC` or `-XX:+UseShenandoahGC`. Please note that especially ZGC and G1 require quite a bit of memory in addition to the heap, so sometimes speed can be increased by lowering the `-Xmx` value.
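For instance, a server start with ZGC enabled could look like this sketch (heap size and jar name are assumptions; production-ready ZGC requires JDK 15+):

```bash
# sketch: serve routing requests with ZGC instead of the default G1
java -Xms40g -Xmx40g -XX:+UseZGC -jar graphhopper-web-*.jar server config.yml
```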
If you want to support non-CH requests you should consider enabling landmarks, or limiting requests to a certain distance via `routing.non_ch.max_waypoint_distance` (in meters, default is 1) or to a node count via `routing.max_visited_nodes`. Otherwise a single request might require lots of RAM! See #734.
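As a sketch, the corresponding entries in `config.yml` could look like this; the values are illustrative, not recommendations:

```yaml
graphhopper:
  # reject non-CH requests whose consecutive waypoints are more than ~1000km apart
  routing.non_ch.max_waypoint_distance: 1000000
  # abort a request once this many nodes have been visited
  routing.max_visited_nodes: 1000000
```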
By default, the GraphHopper UI uses Omniscale and/or Thunderforest as layer services. Either get a plan there and set the API key in the `options.js` file, or remove Omniscale from the JavaScript file.
GraphHopper uses the GraphHopper Directions API for geocoding. To be able to use the autocomplete feature of the point inputs you have to:
- Get your API key at https://www.graphhopper.com/ and set it in `options.js`
- Don't forget the attribution when using the free package
GraphHopper can handle the world-wide OpenStreetMap road network.
Parsing this planet file and creating the GraphHopper base graph requires ~60GB RAM and takes ~3h for the import. If you can accept much slower import times (3 days!) this can be reduced to 31GB RAM by setting `datareader.dataaccess=MMAP` in the config file.
As of May 2022 the graph has around 415M edges (150M for Europe, 86M for North America).
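A planet-scale import with the memory-mapped setting could then be started like this sketch; the jar name, heap size and file name are assumptions:

```bash
# assumes config.yml sets datareader.dataaccess=MMAP and points
# datareader.file to the planet file, e.g. planet-latest.osm.pbf
java -Xms31g -Xmx31g -jar graphhopper-web-*.jar import config.yml
```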
Running the CH preparation, required for best response times, needs ~120GB RAM, and this additional preparation step takes ~25 hours (for the car profile with turn costs), though this heavily depends on the CPU and memory speed. Without turn cost support, e.g. sufficient for bike, it takes much less time (~5 hours).
Running the LM preparation for the car profile needs ~80GB RAM, and this additional step takes ~4 hours.
It is also possible to add CH/LM preparations for existing profiles after the initial import. Adding or modifying profiles, however, is not possible; you need to run a new import instead.
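For orientation, a `config.yml` defining one profile together with its CH and LM preparations might look like this sketch (format as used around the 2022 versions referenced above; exact syntax may differ between versions, and the names are illustrative):

```yaml
graphhopper:
  profiles:
    - name: car
      vehicle: car
      weighting: fastest
      turn_costs: true
  profiles_ch:
    - profile: car   # speed mode (CH) preparation
  profiles_lm:
    - profile: car   # hybrid mode (LM) preparation
```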
Avoid swapping, e.g. on Linux via `vm.swappiness=0` in `/etc/sysctl.conf`. See some tuning discussion in the answers here.
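For example, to apply this immediately and persist it across reboots:

```bash
sudo sysctl -w vm.swappiness=0                         # takes effect immediately
echo 'vm.swappiness=0' | sudo tee -a /etc/sysctl.conf  # persists across reboots
```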
When using the MMAP setting (the default for elevation data), ensure `/proc/sys/vm/max_map_count` is high enough, or set it via `sysctl -w vm.max_map_count=500000`. See also graphhopper#1866.
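To check the current value and persist the new one:

```bash
cat /proc/sys/vm/max_map_count                                 # current value
sudo sysctl -w vm.max_map_count=500000                         # apply immediately
echo 'vm.max_map_count=500000' | sudo tee -a /etc/sysctl.conf  # keep after reboot
```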
If you want to use elevation data you need to increase the allowed number of open files. Under Linux this works as follows (see the non-interactive sketch after this list):

- `sudo vi /etc/security/limits.conf`
- add: `* - nofile 100000`
  which sets the hard and soft limit of "number of open files" for all users to 100K
- `sudo vi /etc/sysctl.conf`
- add: `fs.file-max = 90000`
- reboot now (or run `sudo sysctl -p` and re-login)
- afterwards `ulimit -Hn` and `ulimit -Sn` should give you 100000
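The same steps as a non-interactive sketch, e.g. for a provisioning script (values as above):

```bash
# append the limits instead of editing the files by hand
echo '* - nofile 100000' | sudo tee -a /etc/security/limits.conf
echo 'fs.file-max = 90000' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p   # applies fs.file-max without a reboot
# re-login, then verify:
ulimit -Hn
ulimit -Sn
```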