• There is NO official Otland Discord server and NO official Otland server list. The Otland staff does not manage any Discord server or server list. Moderators or administrators of any Discord server or server list have NO connection to the Otland staff. Do not get scammed!

Call to community at TFS GitHub

Anyone who hates on @Evil Puncker and his commits is objectively a stupid person.
You cannot have unproductive activity in git, this is by design.

Even this is a helpful contribution attempt as it shines light on a common misconception about % stacking.
Every contribution is a win-win.
 
Dear all,

I've been having a similar discussion in the otservbr-global project. We need to change how the project looks on GitHub and write better documentation teaching people how to run the server with auto-restart and gdb logs already enabled. This is the only way we can guarantee that everyone who has a crash will be able to contribute.

We are in the middle of creating an easy shell script that can be put on GitHub to streamline compilation and starting the server for players.
Any volunteers to help us?
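I can't volunteer for the whole thing, but as a sketch of the shape such a helper could take (the /root/server path, CMake layout, and script name are assumptions on my part, not taken from the repo):

```shell
# Sketch: stage a one-shot compile-and-run helper and sanity-check its syntax.
# Paths and build layout are assumed; adjust to the actual repo structure.
cat > easystart.sh <<'EOF'
#!/bin/bash
set -e
cd /root/server
mkdir -p build && cd build
# RelWithDebInfo keeps optimizations but leaves symbols for gdb backtraces
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
make -j"$(nproc)"
cd ..
cp ./build/tfs ./tfs
# run under gdb so any crash prints a backtrace before the restart loop kicks in
exec gdb -batch -ex run -ex "bt full" ./tfs
EOF
chmod +x easystart.sh
bash -n easystart.sh && echo "easystart.sh syntax OK"
```

Something in this spirit would give newcomers one command to go from sources to a debuggable running server.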
 
turn the server on already with auto-restart

I was actually going to tackle this when I get closer to deploying the test server for my project. Because systemd is the de facto standard now, and TFS is in fact a network service daemon, a decent guide on how to deploy it as one would be ideal.
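To sketch what that could look like: a minimal unit file, under the assumption that the binary and data live in /opt/tfs and run as a dedicated tfs user (none of this is official; adjust paths and dependencies to taste):

```ini
[Unit]
Description=The Forgotten Server
# assumes a local MySQL/MariaDB on the same machine
After=network.target mysql.service

[Service]
Type=simple
User=tfs
WorkingDirectory=/opt/tfs
ExecStart=/opt/tfs/tfs
# the auto-restart people keep reimplementing in shell:
Restart=on-failure
RestartSec=5
# let systemd-coredump capture crashes
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
```

Saved as /etc/systemd/system/tfs.service, then systemctl enable --now tfs; journalctl -u tfs replaces the console scrollback, and coredumpctl picks up crashes on systems where systemd-coredump is installed.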
 
Recently I shared with them what I'm using to deploy. My architecture is quite simple: I only have a VPS on DigitalOcean and everything runs on the same machine.

recompile.sh
Bash:
#!/bin/bash

cd /root/server/build || exit 1
make -j 4

movedistro.sh
Bash:
#!/bin/bash

cd /root/server || exit 1
mv ./tfs ./tfs.old
cp ./build/tfs ./tfs

stg.sh - forgive me on this one I don't know too much about gdb options
Code:
gdb tfs \
  -ex "set confirm off" \
  -ex run \
  -ex quit

I use pm2 for management and as an auto-restarter, because we used to have a Node.js website. Even after moving to an AAC I decided to keep using it, because it's so good.

Basically, whenever I need to compile I just type './server/recompile.sh' and then './server/movedistro.sh'.
To start the server you basically install pm2 and do 'pm2 start stg.sh', and then you have:
pm2 stop stg: closes the server
pm2 logs stg --lines X: shows X lines of the console
pm2 ls: shows every process that is running.

I don't know much about shell scripting, but I would very much like to have something that could automatically dump a gdb backtrace depending on the exit code. Today pm2 does this job if I put the -g flag in CMakeLists.txt (to compile with debugging symbols), but it would be better to have something that creates a log of just the error and, I don't know, perhaps calls a server save on non-zero exit codes or SIGSEGV crashes?

Something like this:

Code:
set confirm off
set $_exitcode = 999
run
if $_exitcode != 0
    call g_game.saveGameState()
    bt full
end
quit
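If the gdb commands above were saved next to the binary as, say, crash.gdb (the filename is my assumption), a thin wrapper could hand it to pm2 so every restart cycle appends to a log. Worth noting as a hedge: newer gdb also exposes $_exitsignal, which may be a more reliable crash indicator than presetting $_exitcode.

```shell
# Sketch: wrapper around gdb for pm2; assumes the gdb commands above are
# saved next to the binary as crash.gdb, and tfs lives in the same folder.
cat > stg.sh <<'EOF'
#!/bin/bash
cd "$(dirname "$0")" || exit 1
# -batch exits when the gdb script finishes; pm2 then restarts us.
exec gdb -batch -x crash.gdb ./tfs >> crash.log 2>&1
EOF
chmod +x stg.sh
bash -n stg.sh && echo "stg.sh syntax OK"
```

Then 'pm2 start stg.sh' works as before, but crashes leave a backtrace in crash.log instead of scrolling away.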

It would definitely be good to discuss this further and have something accessible that could allow every otadmin to generate free logs for us :)
 
I'm extremely familiar with PM2. Here is a shot from when I was tearing down an install on one of my beloved personal servers.
View attachment 44759
That 22M is twenty two months uptime. Server being retired because the metal she's on is running OVZ6, 2.6 kernel, CTs only.

PM2 has some failings I do not like. For example, the use case where you want it to run as root, and only as root, so that all users (or perhaps a delegated group) can call it and get info, but only root can make any sort of changes. They do not support this scenario despite a mind-boggling level of demand for it; I'm sure I could go find at least 15 issues on their tracker referencing it. Instead, a user will invoke it and it spawns a new instance even though you know damn well one is already running. Nuisance.

Another is, at least the last time I checked, and for several years now: if your app spawns subshells or child processes, the reported memory usage is not correct.

That being said, I still find it extremely useful, but I think it's an inappropriate choice for the task you are proposing it for. TFS should be handled by systemd just like her sister daemons nginx and mysql are. We could provide automated packaging tooling for common installation targets like Ubuntu LTS. If a wrapper is needed to facilitate coredumps, it would be slipstreamed here.

NodeJS userland is complicated. Users often have trouble compiling binaries even from apps that provide wonderfully verbose CMake tooling telling them exactly which packages are missing, and they still need their hand held. Asking them to manage a Node toolchain on top of that sounds like a match made in hell.
 
Dear all,

I've been having a similar discussion in the otservbr-global project. We need to change how the project looks on GitHub and write better documentation teaching people how to run the server with auto-restart and gdb logs already enabled. This is the only way we can guarantee that everyone who has a crash will be able to contribute.

We are in the middle of creating an easy shell script that can be put on GitHub to streamline compilation and starting the server for players.
Any volunteers to help us?

You mean something like crash dumps? Teaching the community how to produce crash dumps and send logs could pinpoint potential errors much better (if the logs contain something useful).
 
I'm extremely familiar with PM2. Here is a shot from when I was tearing down an install on one of my beloved personal servers.
View attachment 44759
That 22M is twenty two months uptime. Server being retired because the metal she's on is running OVZ6, 2.6 kernel, CTs only.

PM2 has some failings I do not like. For example, the use case where you want it to run as root, and only as root, so that all users (or perhaps a delegated group) can call it and get info, but only root can make any sort of changes. They do not support this scenario despite a mind-boggling level of demand for it; I'm sure I could go find at least 15 issues on their tracker referencing it. Instead, a user will invoke it and it spawns a new instance even though you know damn well one is already running. Nuisance.

Another is, at least the last time I checked, and for several years now: if your app spawns subshells or child processes, the reported memory usage is not correct.

That being said, I still find it extremely useful, but I think it's an inappropriate choice for the task you are proposing it for. TFS should be handled by systemd just like her sister daemons nginx and mysql are. We could provide automated packaging tooling for common installation targets like Ubuntu LTS. If a wrapper is needed to facilitate coredumps, it would be slipstreamed here.

NodeJS userland is complicated. Users often have trouble compiling binaries even from apps that provide wonderfully verbose CMake tooling telling them exactly which packages are missing, and they still need their hand held. Asking them to manage a Node toolchain on top of that sounds like a match made in hell.
In that matter I really don't have much experience; I've been using pm2 mostly because the last programmer I worked with used it for the site, and we integrated TFS there to make it 'unified'. To be honest, I've never worried much about things like scalability or resource management of my server, so this is an area where my knowledge is very limited, to say the least. The problem with incorrect memory reporting I have noticed myself: because we run a shell with gdb and TFS, pm2 always shows 1.2 GB when our machine is at around 1.5 GB of use, so for that we usually run other Linux commands instead (usually top or even htop).

Lately my main concern is that I have met a really nice person who goes by the nickname 'INFAMOUS'. He's very talented at DDoS attacks, and he even managed to take down some Brazilian forums in a matter of seconds. Today I don't see how I could even consider hosting a server knowing there's someone like him who is able to drop my game to nothing in two seconds.

You seem to be one of the most experienced users in the community regarding architecture; it would be really helpful if you could give more insights (and perhaps even a step-by-step on how you would set up an environment and the shell scripts you would use). This is something you can find plenty of information about online, but there isn't really a consensus about what needs to be done and what is better. I know that creating a tutorial topic is perhaps a time-consuming task, but if you want, PM me and I can try to assist you in any way. It would be very appreciated!

You mean something like crash dumps? Teaching the community how to produce crash dumps and send logs could pinpoint potential errors much better (if the logs contain something useful).
Exactly. Having TFS compiled and run in debug mode by default, extracting crash dumps to the 'server' folder or even a 'logs' one, could be really helpful for all devs as well.
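On the "extract crash dumps to a folder" part: assuming classic core dumps rather than a resident gdb, something along these lines could do it (the core filename, paths, and script name are assumptions; note that changing kernel.core_pattern system-wide needs root):

```shell
# Sketch: allow core files in this shell, and stage a post-mortem helper.
mkdir -p logs
ulimit -c unlimited 2>/dev/null || true
echo "core size limit now: $(ulimit -c)"

cat > dumpcrash.sh <<'EOF'
#!/bin/bash
# After a crash leaves ./core, pull a full backtrace into the logs folder.
# Assumes tfs was built with -g so the backtrace has symbols.
gdb -batch -ex "bt full" ./tfs ./core > "logs/crash-$(date +%s).log" 2>&1
EOF
chmod +x dumpcrash.sh
bash -n dumpcrash.sh && echo "dumpcrash.sh syntax OK"
```

The upside over running under gdb permanently is that the server runs at full speed, and the analysis step only happens after something has already gone wrong.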
 
You seem to be one of the most experienced users in the community regarding architecture; it would be really helpful if you could give more insights (and perhaps even a step-by-step on how you would set up an environment and the shell scripts you would use). This is something you can find plenty of information about online, but there isn't really a consensus about what needs to be done and what is better. I know that creating a tutorial topic is perhaps a time-consuming task, but if you want, PM me and I can try to assist you in any way. It would be very appreciated!

This is where providing automated packaging tooling becomes indispensable. Because the environment I'd set up for my own needs is not in the same sphere as what typical forum visitors are capable of.

But if you give them something where they can still mod their sources, and then turn that into a standard .deb package, and sudo apt install ./tfs.deb and off they go? We'll engender a high rate of adoption if we can provide a painless pathway to our ideal coredump contributor installation. For users who don't even mod the sources, we can provide an Ubuntu PPA.

By covering Debian/Ubuntu this way, you've got the majority of low hanging fruit picked. Users operating on some more advanced distro of Linux are precisely the ones I don't mind providing thorough support for. By the time one is on CentOS or Arch, they typically know their way around missing make dependency errors, and if they are seeking support their problem is probably actually interesting. These users will likely contribute to such a debugging project by merely asking nicely.
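To make that concrete, the minimum viable layout for such a package is small. A sketch (the package name, version, and Depends line are guesses, not the real dependency set):

```shell
# Sketch: stage the directory layout dpkg-deb expects for a hypothetical tfs.deb.
mkdir -p pkg/tfs/DEBIAN pkg/tfs/opt/tfs
cat > pkg/tfs/DEBIAN/control <<'EOF'
Package: tfs
Version: 1.4-0~community1
Architecture: amd64
Maintainer: otland community <nobody@example.invalid>
Depends: libc6
Description: The Forgotten Server (community debug build)
 Built with debug symbols for coredump contribution.
EOF
echo "control staged"

# With the compiled binary copied into pkg/tfs/opt/tfs/, the rest is:
#   dpkg-deb --build pkg/tfs tfs.deb
#   sudo apt install ./tfs.deb
```

The real package would want a systemd unit and postinst script in there too, but even this skeleton is enough for apt to track files and dependencies.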
 
This is where providing automated packaging tooling becomes indispensable. Because the environment I'd set up for my own needs is not in the same sphere as what typical forum visitors are capable of.

But if you give them something where they can still mod their sources, and then turn that into a standard .deb package, and sudo apt install ./tfs.deb and off they go? We'll engender a high rate of adoption if we can provide a painless pathway to our ideal coredump contributor installation. For users who don't even mod the sources, we can provide an Ubuntu PPA.

By covering Debian/Ubuntu this way, you've got the majority of low hanging fruit picked. Users operating on some more advanced distro of Linux are precisely the ones I don't mind providing thorough support for. By the time one is on CentOS or Arch, they typically know their way around missing make dependency errors, and if they are seeking support their problem is probably actually interesting. These users will likely contribute to such a debugging project by merely asking nicely.
That's actually a very interesting idea. Initially I was thinking about using Docker or something like it to easily instantiate environments, but deb packages could be a better option.

But I've got another question: this way we could improve the gdb and TFS integration and provide an easier way to set up environments, but this doesn't necessarily mean we are covering the security/DDoS gaps we still have in the community; it only makes it easier to standardize the environments we are all testing in. Any thoughts on how to win on both sides?
 
Unfortunately you can't really cargo cult security.

It's a topic about as complex as human immune system response, except the viruses are sentient and listening to your conversation. Mitigation proposals typically require comprehensive data about the attacks, which requires a kind of logging that incurs a heavy performance hit. So of course no one has this logging on during the initial attack. And deploying such data collection requires hands-on root access, which is a dealbreaker for most support forum visitors.

There is also the added complexity that TFS is by nature a network service daemon and publicly exposed. This means there is an extensive novel L7 attack surface that your run-of-the-mill web tutorials will be woefully inappropriate for covering, and effective mitigation strategies will be esoteric knowledge.

Also, fuck Docker. You want Podman.
 
Unfortunately you can't really cargo cult security.

It's a topic about as complex as human immune system response, except the viruses are sentient and listening to your conversation. Mitigation proposals typically require comprehensive data about the attacks, which requires a kind of logging that incurs a heavy performance hit. So of course no one has this logging on during the initial attack. And deploying such data collection requires hands-on root access, which is a dealbreaker for most support forum visitors.

There is also the added complexity that TFS is by nature a network service daemon and publicly exposed. This means there is an extensive novel L7 attack surface that your run-of-the-mill web tutorials will be woefully inappropriate for covering, and effective mitigation strategies will be esoteric knowledge.

Also, fuck Docker. You want Podman.
I'll take a look at this Podman, but at first glance it seems very similar to Docker's proposition. What's so good about it?

Also, I'm sorry, but I'm not much of a Linux user, and some setups which seem pretty easy are actually several hours of trial and error for me hahaha.
Would you be able to assist me in making a container to commit to the otland repo, so that people at least have the basics covered and run TFS under gdb by default via an easier shell script?
 
it seems very similar to docker proposal, what's so good about it?
It fills an identical role, but doesn't suck ass.

Would you be able to assist me in making a container
I'm sorry, but that's just too heavy of a context switch for me right now; it's outside my "cached skillsets". I do not have tooling for it on hand, and Docker-esque containerized deployment is outside the scope of what I had in mind. I was just saying that if that is the route you want to take, Podman should be your route, not Docker. It would be a few weeks at minimum before I could spare such time.

If you really want to get started in that direction, here and here.
 
The TFS repository has been automatically building Docker container images for years and I've had a guide and a playbook on how to deploy TFS with systemd for over 6 years now. It's really nothing new.

Running TFS via gdb constantly in production requires compiling it with debug symbols and without some optimizations. I don't know the performance difference off the top of my head, but it may not be the best idea.
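The gap is controllable at configure time. As a rough map of what the standard CMake build types imply for GCC/Clang (exact flags can vary by toolchain):

```shell
# Typical flags behind CMake's standard build types (GCC/Clang defaults):
#   Release        -> -O3 -DNDEBUG       fastest, near-useless backtraces
#   RelWithDebInfo -> -O2 -g -DNDEBUG    near-Release speed, usable backtraces
#   Debug          -> -g (no -O)         slowest, most faithful gdb output
# So a middle-ground "pre-production" default could be staged as:
echo 'cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..' > suggested-configure.txt
cat suggested-configure.txt
```

RelWithDebInfo is probably the right default for a crash-reporting campaign: symbols survive, and the optimizer cost stays close to a release build.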
 
The TFS repository has been automatically building Docker container images for years and I've had a guide and a playbook on how to deploy TFS with systemd for over 6 years now. It's really nothing new.
I mean not only a Docker container with TFS, but one with everything that may be useful to set it up with, like GDB and a few shell scripts to better manage recompilation, as I showed.

Running TFS via gdb constantly in production requires running it with debug symbols compiled in and without some optimizations. I don't know the performance difference off the top of my head but it may not be the best idea.
I totally understand your point, but we must assume TFS is not ready for production yet, and if someone just downloads and compiles it, the default state should be a debugging one. Or, as I prefer, a "pre-production" stage.

Once this is finalized and 100% stable, with no issues or improvements left, people are free to apply the optimizations as they should and turn off gdb.
 
TFS repository has been automatically building Docker container images
I suppose the Dockerfile in the project root and the MicroBadger badge should give that away.

running it with debug symbols compiled in and without some optimizations
-march=haswell -Og -g should perform well enough for the sort of user who'd accept a packaged solution and who is likely to use OVH or another current VPS host.
 