Introduction

So, I've lurked in the community for over a decade at this point, and I owe my current career and life to OpenTibia development (which is where I started programming), so I thought it was way past time I gave back to the community. With that in mind, I started looking at TFS once again as a good base for developing OTs and was pleasantly surprised to see Docker being seriously discussed.
The "pleasantness" quickly faded when I realized that most discussions on Docker were either shot down as "too complicated for most users" or "too hard to get right", while most attempts to share Docker setups didn't give readers enough context to UNDERSTAND why things were the way they were.
Although I want to make knowledge about Docker more accessible, teaching Docker itself is out of the scope of this post; some previous knowledge is necessary. What ISN'T out of scope is giving you the know-how to interpret the principal components of using Docker for OTs. This is also a sort of self-documentation for my own future reference, but I figured it could come in handy for others as well.
The code base used here is the TFS 1.5 Downgrade by Nekiro for protocol 8.6, so if you're using that, it should work out fine. Both files in this post have to be at the root of the repository (where your config.lua is).
My Naive Image

The reason it's naive is that there isn't a lot of computer trickery going on (if any at all) and almost no optimization has been done to either the Dockerfile or the docker-compose.yaml file. The only thing I've done is take the TFS compilation instructions for Ubuntu and turn them into the Dockerfile that defines the IMAGE for the SERVER portion of our OT.
FROM ubuntu:23.04 AS build
ENV DEBIAN_FRONTEND noninteractive
# Build dependencies, taken from the TFS compilation instructions for Ubuntu
RUN apt-get update && \
    apt-get --assume-yes install build-essential cmake \
    libboost-system-dev libboost-iostreams-dev libboost-filesystem-dev \
    libcrypto++-dev libfmt-dev libluajit-5.1-dev libmariadb-dev-compat libpugixml-dev
COPY cmake /usr/src/forgottenserver/cmake/
COPY src /usr/src/forgottenserver/src/
COPY CMakeLists.txt CMakePresets.json /usr/src/forgottenserver/
WORKDIR /usr/src/forgottenserver
RUN mkdir build
WORKDIR /usr/src/forgottenserver/build
RUN cmake .. && make

FROM ubuntu:23.04
ENV DEBIAN_FRONTEND noninteractive
# Runtime stage: the same libraries, but no compiler toolchain
RUN apt-get update && \
    apt-get --assume-yes install \
    libboost-system-dev libboost-iostreams-dev libboost-filesystem-dev \
    libcrypto++-dev libfmt-dev libluajit-5.1-dev libmariadb-dev-compat libpugixml-dev
COPY --from=build /usr/src/forgottenserver/build/tfs /bin/tfs
COPY data /srv/data/
COPY config.lua LICENSE README.md *.sql key.pem /srv/
EXPOSE 7171 7172
WORKDIR /srv
VOLUME /srv
ENTRYPOINT ["/bin/tfs"]
Explanation

This should be pretty simple. Besides being based on the compilation instructions, I took some suggestions from the Dockerfile in the TFS repository on GitHub and split it into two portions, denoted by the two FROM ubuntu:23.04 statements. The top one defines the BUILD portion of the image, where our source code is compiled into the executable using CMake.
In both cases we use ubuntu:23.04 because it is the only Ubuntu version that ships libfmt-dev at version 9.0.0 or above (a required library for TFS). This base is pretty big and results in a fairly long build step (it takes me about 6 minutes to build the image), producing an image of roughly 600 MB on disk.
We set an environment variable that prevents the Ubuntu shell from waiting for user input, which allows us to install all necessary packages unattended. Next we RUN a couple of daisy-chained commands: first we update the package repository by calling apt-get update, then chain it with a double ampersand (&&) to the apt-get --assume-yes install command followed by all packages listed in the compilation procedure for TFS. This ensures our environment has everything needed to compile the sources.
Next we copy all necessary files from our local folder into the image (the cmake folder, the src folder, CMakeLists.txt and CMakePresets.json) and set our working directory to /usr/src/forgottenserver. In this directory we create a folder called build, change our working directory once more to this new folder, and call cmake .. && make, which does two things: first it generates the make files using the files in the parent folder, then it calls make to build our code, which marks the end of the build portion.
In the second portion of the file we do mostly the same things, except we install fewer packages (we don't need build-essential or cmake, as they are only used to compile the code). Once all packages are ready in this new stage, we copy the resulting binary from the build stage to /bin/tfs in the final image. We then copy all files necessary to run a server from our local machine: the data folder, config.lua, LICENSE, README, our SQL files and our RSA key.
We finish the image by exposing ports 7171 and 7172 for connections, setting our working directory to /srv, declaring it as a volume so changes to any of the srv files can be picked up while the server is live, and setting the entrypoint to the TFS binary compiled previously.
Why is it naive?

Well, there are a LOT of layers in our Dockerfile, and each layer adds disk size and complexity to the image, things we don't inherently want. We explicitly copy our config.lua file instead of copying config.lua.dist and letting TFS' configmanager handle the conversion, and finally we use a pretty HUGE, non-LTS base image (ubuntu:23.04), which is a weird choice compared to something like Alpine (around 30x smaller, but with some documented issues when compiling and executing C/Rust/Go code).
Finally, we assume you already have a TFS repository cloned to your machine, that this file (and the following docker-compose.yaml) sit at its root, and that you don't want to produce nightly builds from the latest commit on the master branch of the TFS repository.
But hell, it works.
What now?

Before proceeding, make sure you build the image: open a terminal, head into the root folder of your TFS repository on your local machine (where this file should be) and run:
docker build -t local/tfs-ubuntu:1.0.0 --no-cache .
This should produce a ton of verbose output and it shouldn't error out at any point. Once it's done, you should have a local/tfs-ubuntu:1.0.0 image available locally.
"Dockerizing"

Assuming you have successfully completed the previous part, all that's left is to take the server, add our database, and fire it up. This can (mostly) all be done by specifying a Docker Compose file like the one below.
command: --default-authentication-plugin=mysql_native_password --log_bin_trust_function_creators=1
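Only the command: line of the original compose file survived here. Reconstructed from the explanation that follows, the full docker-compose.yaml might look something like this (the service and container names come from the text; the credentials and database name are placeholders you should change):

```yaml
version: "3.8"

services:
  db:
    image: mysql
    container_name: mysql_tfs
    restart: unless-stopped
    command: --default-authentication-plugin=mysql_native_password --log_bin_trust_function_creators=1
    tty: true
    environment:
      # Placeholder credentials: change these, and mirror them in config.lua
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: forgottenserver
      MYSQL_USER: forgottenserver
      MYSQL_PASSWORD: forgottenserver
    volumes:
      # Persist database data across container restarts
      - ./db-volume:/var/lib/mysql
    ports:
      - "3306:3306"

  tfs:
    # Must match the tag you used in the docker build step
    image: local/tfs-ubuntu:1.0.0
    container_name: server_tfs
    restart: on-failure
    tty: true
    ports:
      - "7171:7171"
      - "7172:7172"
    depends_on:
      - db
    volumes:
      # Map the repository root to /srv so file edits reach the live server
      - .:/srv
      - ./config.lua:/srv/config.lua
      - ./key.pem:/srv/key.pem
```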
Explanation

We use version 3.8 of the Compose file format for no particular reason; older versions may or may not support some of the things described below, though. We define our services, which in our case are just the database (called db) and the server itself (called tfs).
The database service definition has its own "naive" quirkiness as well: we use the official mysql image instead of mariadb. It is, again, a much bigger image and probably overkill for most people running OTs, but it doesn't try to screw you over when connecting locally depending on how your config is set, meaning this should be a plug-and-play solution.
We call this service's container "mysql_tfs" and set it to restart always, unless we explicitly stop it. The command statement passes some additional CLI flags: mostly to allow connections using the mysql native password plugin (again, pretty naive, don't do this in production) and to let us create triggers the way TFS structured them. We set tty to true so we can see prints to the console from stdout and stderr (basically for logging/monitoring purposes). We then set the Host, Database, User and Password for our database; these values should be reflected in your config.lua, except for the host, which should be set to the service name (db).
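The matching database section of config.lua might look like this (the credentials are placeholders; the important part is that the host is the compose service name, not localhost):

```lua
-- Database settings in config.lua; values must match the MYSQL_* variables
-- you set for the db service. Credentials below are placeholders.
mysqlHost = "db"               -- the compose service name, NOT 127.0.0.1
mysqlUser = "forgottenserver"
mysqlPass = "forgottenserver"
mysqlDatabase = "forgottenserver"
mysqlPort = 3306
```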
We then map the image's /var/lib/mysql folder (where data is persisted) to a db-volume folder so that data survives if we kill the container later. This is very important: without it you will lose all your server progress on every container restart. Finally, we just expose port 3306 for any connections from the outside.
An important thing to note is that we don't automate the schema import at all, so you still have to do it manually once the database is online.
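One way to do the manual import from the command line (a sketch: the container name comes from the compose file above, while the credentials and the schema.sql path are placeholders for your own setup):

```shell
# One-time schema import once the db container is running.
# Adjust user, password, database name and schema path to your setup.
docker exec -i mysql_tfs mysql -u forgottenserver -pforgottenserver forgottenserver < schema.sql
```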
The tfs service is pretty simple. For image, remember to use the same name you gave it when you built it in the previous step; we call the container "server_tfs" in keeping with the pattern, set it to restart whenever it fails, and again set tty to true so we can see WHY the server fails, plus any prints to the server log (like player logins and events).
We then expose ports 7171 and 7172 so we can connect to the server, set it to depend on the db service (so the database container is started before the server) and map some volumes. These volumes map our current ROOT (where the docker-compose.yaml sits on your computer) to the /srv folder in the container, meaning we can make changes to the files in that folder and have them reflected on the server just as we normally would, without rebuilding the image every time. We do the same for the config and key files, just for good measure.
What now?

Now you just have to start the server. Again, go to your folder in a terminal window and run:
docker-compose -p tfs-server-ot up
And you should see the database and the server come alive (with their logs colored for good measure).
There are some things you still have to do. The server won't start every time you bring the containers up, because it boots a lot faster than the MySQL database, so you will probably have to manually start the tfs container after the database is up, every time (this is something that can be fixed, but again: NAIVE implementation, room for improvement).
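For the curious, one way this race could be fixed (a sketch; the depends_on long syntax with a condition is supported by the modern Compose Spec, but not by the classic 3.x file format): give db a healthcheck and make tfs wait for it to be healthy.

```yaml
services:
  db:
    # ...rest of the db definition as before...
    healthcheck:
      # mysqladmin ships inside the mysql image; "ping" succeeds once
      # the server accepts connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10

  tfs:
    # ...rest of the tfs definition as before...
    depends_on:
      db:
        condition: service_healthy
```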
Something else you'll have to deal with: the server will not start if there is no schema, so make sure to log in to the database (with DBeaver, for example) and import your schema.
One last thing: RSA keys are iffy. If you get any RSA key trouble (especially if it's footer/header related), make sure the line endings are Unix-style (LF) instead of Windows-style (CRLF).
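If you do need to fix the endings, sed can strip the carriage returns; run it against your actual key.pem (the last line below only demonstrates the substitution on a sample string).

```shell
# Strip trailing carriage returns (CRLF -> LF). Against your real key file:
#   sed -i 's/\r$//' key.pem
# Demonstration that the substitution removes the \r from a CRLF line:
printf -- '-----BEGIN RSA PRIVATE KEY-----\r\n' | sed 's/\r$//'
```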