Komodo FAQ, Tips, and Tricks
Everything you wanted to know but were afraid to ask 🦎
A semi-organized list of FAQs, tips, and tricks for using Komodo. This is a follow-up to my migration guide and introduction to Komodo.
This is a living guide that will be updated as Komodo is updated and community knowledge is consolidated. For feedback, contributions, and corrections:
- PRs are welcome
- Use the Giscus widget at the bottom of the post with your GitHub account
- Or directly comment on the discussion thread for this post
- Available in the Komodo Discord only¹ as FoxxMD
FAQ
Can Komodo Core update itself?
Yes! If you are using the systemd Periphery agent you can re-deploy a Stack containing Komodo Core without issue. If you are using the Docker agent, it’s recommended (but not required) to keep the periphery and core services in separate stacks so the UI continues to work during the deployment.
Can Periphery agent updates be automated?
Not from within Komodo the same way Core can be updated, unfortunately. However, if you are familiar with Ansible, there are several playbooks available from the community to automate this process:
- from mbecker (Komodo creator) https://github.com/moghtech/komodo/discussions/220
- from bpbradley https://github.com/bpbradley/ansible-role-komodo
How do I send alerts to platforms other than Discord/Slack?
You will need to create an Alerter that uses the Custom endpoint, pointed at a service that can ingest Komodo’s alert payload and forward it to your notification platform.
I have developed a few Alerter implementations for popular notification platforms:
- ntfy
- gotify
- discord (more customization than built-in discord)
- apprise (can be used to notify to any of the 100+ providers apprise supports including email)
And the Komodo community is creating more implementations too:
- telegram (uses Cloudflare Workers)
- (more to be added)
Generally, these are standalone Stacks you can run on Komodo. After the Stack is deployed, create a new Alerter with a Custom endpoint and point it to the IP:PORT of the service to finish setup.
How do I stop Komodo from sending transient notifications?
You may find Komodo sends notifications for unresolved events like StackStateChange when it is redeploying a Stack, or sends alerts for 100% CPU when it’s only a temporary spike.
For the ntfy/gotify/discord/apprise implementations I developed, you can use UNRESOLVED_TIMEOUT_TYPES and UNRESOLVED_TIMEOUT to “time out” temporary events: if an event of a type listed in UNRESOLVED_TIMEOUT_TYPES is unresolved, and the alerter receives another event of the same type before UNRESOLVED_TIMEOUT milliseconds have passed, then sending the notification is cancelled.
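For example, here is a minimal sketch of setting these on one of the alerter containers. The image name is a placeholder and the value formats are assumptions; check the chosen implementation’s README for the exact alert type names (the timeout is in milliseconds per the description above):
# example values; the type list format is an assumption
docker run -d \
  -e UNRESOLVED_TIMEOUT_TYPES=StackStateChange \
  -e UNRESOLVED_TIMEOUT=30000 \
  your-alerter-image   # placeholder, use the image from the implementation you deploy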
My notification service isn’t listed here! How do I get it to work?
First, you should check if it’s supported by apprise. If it is then use the apprise implementation from above as that is probably the easiest route.
If it is not supported by apprise or you want to build your own then check out my repository where I implemented notification Alerters, https://github.com/FoxxMD/komodo-utilities. The repo uses VS Code Devcontainers for easy environment setup and each implementation uses the official Komodo Typescript API client to make things simple. It should be straightforward to fork my repo, copy-paste one of the existing implementations, and modify program.ts
to work with your service.
Run Directory is defined but the entire repo is downloaded?
In a Stack config the Run Directory only determines the working directory for Komodo to run compose up -d
from.
Komodo does not do anything “smart” when downloading the repo, even if it knows the Run Directory. It’s not possible for it to know if you only use files from that directory for the Stack.
If you are concerned about cloning/pulling the same repo for each Stack see Stacks in Monorepo vs. Stack Per Repo below.
How do I view logs in real time?
Komodo doesn’t support “true” realtime log viewing yet but “near realtime” logging can be enabled by toggling the Poll switch on any Log tab. Dozzle is a good alternative if you need consolidated, realtime logging for all containers with rich display, search, regex filtering, etc…
How do I shell/exec/attach to a container?
Komodo does not yet support container exec but it is a popularly requested feature. As an alternative, Dozzle now supports shell/attach to container, or you can use the bash script I created for “fuzzy search and attach to container” (see below) as a shortcut.
Environment Variables/Secrets don’t work!
This is likely a misunderstanding of how Compose file interpolation and environment variables in Compose work. Please read this guide for a better understanding of how .env, --env-file, env_file:, and environment: work in Docker, as well as how Komodo fits into them.
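As a quick sanity check outside of Komodo, docker compose config renders the compose file with interpolation applied, so you can see exactly which values Docker will end up using:
# prints the compose file with all variable interpolation resolved
docker compose config
# or, if your variables come from a specific env file
docker compose --env-file ./my.env config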
How do I deploy a service that doesn’t have a published Docker Image?
Dockerfile exists and no modification needed
If the service has a project git repository with a Dockerfile, and you know the project is “ready” and just needs to be built from the Dockerfile (example), then this can be done within your compose.yaml file! Compose’s build context supports directories or a URL to a git repository, so:
services:
  logdy:
    build:
      context: https://github.com/logdyhq/logdy-core.git
      # only needed if not in root dir and named Dockerfile
      # dockerfile: Dockerfile
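If you want to test this outside of Komodo first, docker compose can build from the remote context directly; the --build flag forces the image to be (re)built:
# build the image from the git context and start the service
docker compose up -d --build logdy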
Dockerfile needs modification
Docker Compose also allows inlining Dockerfile
contents so if it’s a simple setup it can be yolo’d:
services:
  myService:
    build:
      context: . # or use a git URL to build with a repository
      dockerfile_inline: |
        FROM baseimage
        RUN some command
My setup is more complex…
If you need to keep better track of your changes, want to build the image before the stack is deployed, or want n+1 machines on your network to be able to use the same build, then you need to build and publish the image rather than building it inline in the stack.
Standalone Container
If the use-case is building one image that can be deployed to one standalone container, then the most convenient way to do this is to:
- setup a local Builder and configure a Build without any Image Registry (not publishing externally)
- Build the image
- Create a Deployment with the Komodo build you just made
Same-Machine Stack
If you want to keep everything in a Stack then:
- follow the same steps above (Builder, configure Build)
- On the Build…
  - Make sure to set Image Name
  - Add an Extra Arg: --load
This will load the built image into the local Docker image store on the machine where the Builder ran. You can then use the Image Name in a Stack deployed to that same machine only.
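For context, the --load Extra Arg makes the Build behave roughly like the buildx invocation below, which keeps the result on the builder machine instead of pushing it to a registry (the image name/tag is a placeholder):
# builds the image and loads it into the local Docker image store
docker buildx build --load -t my-image-name:latest .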
Any-Machine Stack
This is the same as the Same-Machine Stack but requires setting up a local registry that Komodo can push to and your other machines can pull from. Popular self-hosted git repo software like Forgejo and Gitea has registries built in and is easy to use, but Docker requires registries to be secure by default (no plain HTTP), and covering reverse proxies or modifying the Docker daemon is out of the scope of this FAQ. You may want to check out my post on LAN-Only DNS + HTTPS + Reverse Proxy with NGINX for where to get started. (Traefik version coming soon!)
Tips and Tricks
Stacks in Monorepo vs. Stack Per Repo
There are valid reasons to use individual repositories per stack such as organizational preference, webhook usage for deployment, permissions, large/binary files, etc…
But for repositories that are mostly text-based, concerns about data usage and performance (cloning for a new stack or pulling the repo on each deployment) are not usually valid.
The Receipts 🧾
My own monorepo for Komodo contains 100+ stacks (folders) ranging from full *arr/Plex sized stacks to single service test stacks.
A full clone of this repository is 2MB on disk. Benchmarking a full clone of this monorepo against a repo containing only a few text files, both from GitHub, on my Raspberry Pi 4:
Benchmark 1: git clone https://github.com/FoxxMD/[myrepo] myMonoRepoFolder
Time (abs ≡): 884.1 ms [User: 306.9 ms, System: 216.4 ms]
Benchmark 1: git clone https://github.com/FoxxMD/compose-env-interpolation-example mySimpleFolder
Time (abs ≡): 389.9 ms [User: 150.7 ms, System: 107.4 ms]
So, under a second for the full monorepo and only ~500ms slower than an almost empty repo, on a low-power ARM machine. Subsequent pulls to update the repo on redeployment take tens of milliseconds.
If you are critically space-constrained, the on-disk size of the repo cloned for each stack may be a valid reason to go with per-stack repos, but otherwise even an RPi 4 with a 512GB SD card is not going to have an issue with this setup.
Shell-into-Container Shortcut
The bash script below can be used to “fuzzy search” for containers by name and then exec into a shell in that container.
Example Usage
$ dex
Usage: CONTAINER_FUZZY_NAME [SHELL_CMD:-sh]
$ ./dex.sh sonarr
Found: media-sonarr-1
/app #
$ ./dex.sh test
Found: test-new-app-1
Found: test-1
More than one container found, be more specific
$ ./dex.sh sonarr /bin/ash
Found: media-sonarr-1
/app #
Bash Script
#!/bin/bash
# * Does a fuzzy search for a container by name so you only need a partial name
# * If there is more than one container matching the partial name it will print all matches
#   * And if one is an exact match then use it, otherwise exit
# * If there is only one matching container it execs into it
# * Second arg can be the shell command to use, defaults to sh
# EX
# ./dex.sh sonarr
# ./dex.sh sonarr bash
#
# Can be set in a function in .bashrc for easy aliasing

if [[ -z "$1" ]]; then
    printf "Usage: CONTAINER_FUZZY_NAME [SHELL_CMD:-sh]\n"
else
    # list only the names of running containers whose name contains $1
    names=$(docker ps --filter "name=^/.*$1.*$" --format '{{.Names}}')
    lines=$(echo -n "$names" | grep -c '^')
    name=""
    if [ "$lines" -eq "0" ]; then
        printf "No container found\n"
    elif [ "$lines" -gt "1" ]; then
        while IFS= read -r line
        do
            printf "Found: %s\n" "$line"
            if [ "$line" = "$1" ]; then
                name="$1"
            fi
        done < <(printf '%s\n' "$names")
        if [[ -z "$name" ]]; then
            printf "More than one container found, be more specific\n"
        else
            printf "More than one container found but input matched one perfectly.\n"
        fi
    else
        name="$names"
        printf "Found: %s\n" "$name"
    fi
    if [[ -n "$name" ]]; then
        docker container exec -it "$name" ${2:-sh}
    fi
fi
Save this script and chmod +x
it on each machine, then add it as an alias to the appropriate user’s .bashrc
to make it a command line shortcut:
alias dex="~/dex.sh"
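Since the script takes arguments, a small .bashrc function works just as well as the alias and passes them through explicitly (this is the function approach the script's comments mention):
dex() { ~/dex.sh "$@"; }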
Docker Data Agnostic Location
One of the benefits to Komodo is being able to re-deploy a stack to any Server with basically one click. What isn’t so easy, though, is moving (or generally locating) any persistent data that needs to be mounted into those services.
If you use named volumes and already have a backup strategy, this is a moot point. But if you are like me and use bind mounts, a good approach I found is to use a host-specific ENV as a directory prefix when writing compose files.
This makes the bind mount locations in a compose file agnostic to the host it is deployed on, and makes moving data, or rebuilding a host, much easier since compose files don’t need to be modified if the data location changes parent directories.
An example:
services:
  my-service:
    image: #...
    volumes:
      - $DOCKER_DATA/my-service-data:/app/data
As long as DOCKER_DATA
is set as an ENV on each host then the compose file becomes storage location agnostic. It doesn’t matter whether you use /home/MyUser/docker
or /opt/docker
or whatever.
To do this you’ll need to set this ENV either in the shell used by Periphery (.bashrc or .profile), in the Periphery docker container’s ENVs, or in the systemd configuration for a systemd periphery agent.
Setting ENV for systemd periphery
For systemd periphery, check which periphery.service install path you used, then add a folder periphery.service.d containing a file override.conf with the contents:
[Service]
Environment="DOCKER_DATA=/home/myUser/docker-data"
and then restart the periphery service
EX
/home/foxx/.config/systemd/user/periphery.service <--- systemd unit for periphery
/home/foxx/.config/systemd/user/periphery.service.d/override.conf <--- config to provide `Environment`
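To restart, reload systemd so it picks up the override and then restart the unit. The commands below assume the user-level install shown in the example paths above; for a system-wide install drop --user and use sudo:
systemctl --user daemon-reload
systemctl --user restart periphery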
Setting ENV for docker periphery
For the docker periphery container, make sure you add DOCKER_DATA to your environment:
services:
  periphery:
    image: ghcr.io/moghtech/komodo-periphery:latest
    # ...
    environment:
      # ...
      DOCKER_DATA: /home/myUser/docker-data
and then restart the periphery container.
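Note that a plain restart will not apply compose file changes; re-create the container instead. A minimal example, assuming the service is named periphery as above:
# re-creates the container so the new environment value is applied
docker compose up -d periphery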
Monitoring Services with Komodo and Uptime Kuma
Uptime Kuma has a Docker Container monitor type, but using Komodo’s API has the advantage of monitoring a Stack/Service’s status independent of which Server it is deployed to and what the container name is.
Prerequisites
You’ll need an API Key and Secret for a Komodo User. (Settings -> Users -> Select User -> Api Keys section)
I would recommend creating a new “Read Only” Service User. Give it only permissions for Server/Stack Read. Create the API Key and copy the Secret as it will not be shown again.
Create Uptime Kuma Monitor
Create a new Monitor with the type HTTP(s) - Json Query
HTTP Options
- Method:
POST
- Body Encoding:
JSON
Body
Visit the Stack in the Komodo UI and copy the ID after /stacks/ from the URL. Use it as the stack value below:
{
  "type": "ListStackServices",
  "params": {
    "stack": "67913976afe9cffd0fa1f963"
  }
}
Headers
Use the Api Key and Secret created earlier:
{
  "X-Api-Key": "YourKey",
  "X-Api-Secret": "YourSecret"
}
URL
http://YOUR_KOMODO_SERVER/read
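Before wiring this into Uptime Kuma, you can sanity-check the whole request with curl from any machine that can reach Komodo (substitute your host, key, secret, and stack ID):
# POST the same body and headers the monitor will use
curl -s -X POST http://YOUR_KOMODO_SERVER/read \
  -H 'Content-Type: application/json' \
  -H 'X-Api-Key: YourKey' \
  -H 'X-Api-Secret: YourSecret' \
  -d '{"type":"ListStackServices","params":{"stack":"67913976afe9cffd0fa1f963"}}'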
Json Query / Expected Value
To monitor all services in the stack and report UP only if all are running
- Json Query:
$count($.container[state!='running'].state ) = 0
- Expected Value:
true
To monitor a specific service in the stack and report UP if it is running
- Json Query:
$[service="SERVICE_NAME_FROM_COMPOSE"].container.state
- Expected Value:
running
¹ Please do not DM me unless we have discussed this prior. I get way too much Discord DM spam and will most likely ignore you. @ me on the Komodo server instead. ↩︎