

To run a ContextMod instance you must first install it somewhere.

ContextMod can run on almost any operating system, but Docker is recommended for ease of deployment.

PROTIP: Using a container management tool like CE will help tremendously with setup and configuration.

Images are available from these registries:

  • Dockerhub -
  • GHCR -

An example of starting the container using the minimum configuration:

  • Bind the directory where your config file, logs, and database are located on your host machine into the container’s default DATA_DIR by using -v /host/path/folder:/config
  • Note: You must do this or else your configuration will be lost the next time your container is updated.
  • Expose the web interface using the container port 8085
docker run -d -v /host/path/folder:/config -p 8085:8085

The location of DATA_DIR in the container can be changed by passing it as an environment variable, e.g. -e "DATA_DIR=/home/abc/config"
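As a sketch, a full invocation with a custom DATA_DIR could look like the following (the paths are example values, and the image name is omitted as in the examples above; the command is built as a string here rather than executed):

```shell
# Example only: mount the host folder at a custom in-container location
# and tell CM about it via the DATA_DIR env variable.
DATA_DIR=/home/abc/config
CMD="docker run -d -v /host/path/folder:$DATA_DIR -e DATA_DIR=$DATA_DIR -p 8085:8085"
echo "$CMD"
```

Note that the bind mount target and the DATA_DIR value must match, otherwise CM will look for its config in an empty directory.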

Linux Host

NOTE: If you are using rootless containers with Podman this DOES NOT apply to you.

If you are running Docker on a Linux host you must specify the user:group permissions of the user who owns the configuration directory on the host to avoid Docker file permission problems. These can be specified using the environment variables PUID and PGID.

To get the UID and GID for the current user run these commands from a terminal:

  • id -u – prints UID
  • id -g – prints GID
docker run -d -v /host/path/folder:/config -p 8085:8085 -e PUID=1000 -e PGID=1000
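The two lookups can also be combined directly into the run command via command substitution. A sketch (the docker invocation itself is shown as a comment since, as in the examples above, the image name is omitted):

```shell
# Look up the current user's UID/GID once and reuse them so the container
# matches the ownership of the config directory on the host.
PUID=$(id -u)
PGID=$(id -g)
echo "Using PUID=$PUID PGID=$PGID"
# docker run -d -v /host/path/folder:/config -p 8085:8085 -e PUID="$PUID" -e PGID="$PGID"
```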


The included docker-compose.yml provides production-ready dependencies for CM to use:


The included docker-compose.yml file is written for Docker Compose v2.

For new installations, copy config.yaml into a folder named data in the same folder docker-compose.yml will be run from. For users migrating existing CM instances to docker-compose, copy your existing config.yaml into the same data folder.

Read through the comments in both docker-compose.yml and config.yaml and make changes to any relevant settings (passwords, usernames, etc.). Ensure that any settings used in both files (e.g. MariaDB passwords) match.

To build and start CM:

docker compose up -d

To include Grafana/Influx dependencies run:

docker compose --profile full up -d
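Internally, the `--profile full` flag relies on Compose profiles: optional services in docker-compose.yml are tagged with a profile name and skipped unless that profile is enabled. A minimal sketch of the mechanism (the service definition here is illustrative, not taken from the shipped file):

```yaml
services:
  grafana:
    image: grafana/grafana
    profiles: ["full"]   # only started when `--profile full` is passed
```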



Clone this repository somewhere, then install and build from the working directory:

git clone .
cd context-mod
npm install
tsc -p .

An example of running CM using the minimum configuration with a configuration file:

node src/index.js run

Heroku Quick Deploy

NOTE: This is still experimental and requires more testing.


This template provides a web and worker dyno for Heroku.

  • Web – Will run the bot and the web interface for ContextMod.
  • Worker – Will run just the bot.

Be aware that Heroku’s free dyno plan enacts some limits:

  • A Web dyno will go to sleep (pause) after 30 minutes without web activity – so your bot will ALSO go to sleep at this time
  • The Worker dyno will not go to sleep but you will NOT be able to access the web interface. You can, however, still see how CM is running by reading the logs for the dyno.

If you want to use a free dyno it is recommended you perform first-time setup (bot authentication and configuration, testing, etc…) with the Web dyno, then SWITCH to a Worker dyno so it can run 24/7.

Memory Management

Node exhibits lazy GC cleanup, which can cause memory usage for long-running CM instances to increase to unreasonable levels. This does not seem to be an issue with CM itself but with Node’s GC approach. The increase does not affect CM’s performance and, on systems with less memory, Node should limit memory usage based on the total available.

In practice CM uses ~130MB for a single bot, single subreddit setup. Up to ~350MB for many (10+) bots or many (20+) subreddits.

If you need to rein in CM’s memory usage for some reason, you can set an upper limit for memory usage with Node args by using either:


--max_old_space_size

The value is in megabytes. This sets an explicit limit on GC memory usage.

This is set by default in the Docker container using the env NODE_ARGS to --max_old_space_size=512. It can be disabled by overriding the ENV.
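As a sketch, the cap can be changed by overriding that env at container start (the value here is an example; the docker invocation is shown as a comment since, as above, the image name is omitted):

```shell
# Override the default NODE_ARGS memory cap with a lower example value.
NODE_ARGS="--max_old_space_size=256"
echo "NODE_ARGS=$NODE_ARGS"
# docker run -d -e NODE_ARGS="$NODE_ARGS" -v /host/path/folder:/config -p 8085:8085
```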


--optimize_for_size

Tells Node to optimize for (lower) memory usage rather than certain performance optimizations. This option is not dependent on memory size. In practice performance does not seem to be affected, and it reduces (but does not entirely prevent) memory increases over long periods.