
Alpha Proxy Guide (kondrah/otclient multi-path proxy)

Just a passing thought - maybe we can receive the "login proxy" list over http when the client is opened - that'd keep me happy haha
You can set a list of proxies in init.lua, log in to the account (with the OTCv8 proxy system it will pick one working proxy to send the login packet), receive a new list of proxies in the login packet, call g_proxy.clear() to remove all proxies (OTCv8 does this by default, Mehah does not) and add the new proxies to the client. It will then use the new proxies to connect to the OTS.
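
For illustration, here is a rough Lua sketch of that refresh flow; the hostnames, the applyProxyList name and the shape of newProxies are placeholders, not the real OTCv8/Mehah API:
LUA:
-- init.lua: bootstrap proxies, only used to deliver the login packet
g_proxy.addProxy('proxy01.example.com', 7200, 0)
g_proxy.addProxy('proxy02.example.com', 7200, 0)

-- call this after the login packet has been parsed; 'newProxies' stands in
-- for however your server delivers the fresh list (host/port pairs)
local function applyProxyList(newProxies)
    g_proxy.clear() -- drop the bootstrap list (OTCv8 already does this, Mehah does not)
    for _, p in ipairs(newProxies) do
        g_proxy.addProxy(p.host, p.port, 0)
    end
end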

I didn't want to hard code anything, since I also use multiple login proxies for failover, the same way CipSoft does it, just in case one of them goes down.
How do you use multiple login proxies for failover without the proxy system? Using some external load balancer with an HTTP login?
From the first post's description, it looks like you must either put one of the haproxy IPs as the server IP in the client and set the login port to 7100, so it connects through that proxy (with no failover), or you log in to the account using the IP of the dedicated server hosting the OTS, which makes this system absolutely useless. If an attacker can get the real OTS IP, he can DDoS it and the OTCv8 proxy becomes useless.
 
You can add a separate listening port in the HAProxy configuration on the forward proxy servers for the login protocol, which will forward to another HAProxy instance running on the backend game server, which in turn can forward to TFS (assuming you still use the protocollogin that is included in TFS). So the login protocol flow would be: Client -> Forward Proxy (HAProxy) -> Backend (HAProxy) -> TFS
For failover you can simply adjust the code of OTClient's entergame.lua module and make it try different login servers the same way the original CipSoft client does: if one login server times out, it moves down to the next and tries again, and only displays a final error message if all of them fail. Users never connect directly to the game server, so no, the backend IP will never be exposed.

Example configs:

Forward Proxy haproxy.cfg (for example login01.example.com, login02.example.com, login03.example.com etc)
Code:
# Login protocol
listen l3
        bind 0.0.0.0:6003
        mode tcp
        server srv1 <your backend ip>:6003 send-proxy-v2

Backend haproxy.cfg
Code:
# Login protocol
listen l1
        bind 0.0.0.0:6003 accept-proxy
        mode tcp
        server srv1 127.0.0.1:7171 send-proxy-v2

Of course, since the request is forwarded to TFS using the PROXY v2 protocol, you will need code in the game server to parse the TCP headers properly and restore the user's real IP address from the PROXY v2 header. If you use a standalone login server, like TFLS from Milice, then you don't really need to proxy the login protocol at all, as long as you don't run the login servers on the same host as the game server.
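
To illustrate the failover idea mentioned above (this is only a sketch, not CipSoft's or OTClient's actual code; attemptLogin and the hostnames are placeholders, and the error dialog uses OTClient-style helpers), entergame.lua could walk an ordered list of login servers and only give up after the last one fails:
LUA:
-- entergame.lua sketch: ordered login-server list with timeout failover
local loginServers = {
    { host = 'login01.example.com', port = 6003 },
    { host = 'login02.example.com', port = 6003 },
    { host = 'login03.example.com', port = 6003 },
}

local function loginWithFailover(index)
    if index > #loginServers then
        displayErrorBox(tr('Login Error'), tr('Could not reach any login server.'))
        return
    end
    local server = loginServers[index]
    -- attemptLogin stands in for the module's real connect call; on timeout or
    -- refusal it should invoke the callback so we move on to the next server
    attemptLogin(server.host, server.port, function()
        loginWithFailover(index + 1)
    end)
end

loginWithFailover(1)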
 
I have been using this Kondrah proxy system for 2 years now. It is excellent: if configured well, it hides the real IP, making an effective attack very expensive.
Now that the system has become free and public, I just have to say that it's over for DDoS attackers.
 
We've been experimenting with extending this proxy system to use WebSockets, and the results have been promising. With this approach there is no need for intermediate VPSes; you can fully leverage the Cloudflare and AWS backbones to reach your production server directly. So far AWS seems to have noticeably better latency than Cloudflare, and some players from the Near East even report latency improvements when passing through those backbones.


We've been using this setup for the past few months on Gunzodus and Ezodus, and so far we haven’t noticed any latency or stability issues.


An added benefit is improved protection, since this method allows full use of WAF features such as blocking or allowing traffic from specific countries, setting up strict whitelists, implementing rate limiting, and more.

But we still do not force players to use this proxy; it remains optional for now.
 
Cool guide, but did anyone actually manage to make use of it?

For example, the guide says to run the alpha proxy on port 7200, but isn't that already taken by haproxy?
And the client part doesn't seem finished: what IP are we setting the client to connect to? It seems it will just be one, and then this system is useless when that one goes down?
 
We've been experimenting with extending this proxy system to use WebSockets, and the results have been promising. With this approach there is no need for intermediate VPSes; you can fully leverage the Cloudflare and AWS backbones to reach your production server directly. So far AWS seems to have noticeably better latency than Cloudflare, and some players from the Near East even report latency improvements when passing through those backbones.


We've been using this setup for the past few months on Gunzodus and Ezodus, and so far we haven’t noticed any latency or stability issues.


An added benefit is improved protection, since this method allows full use of WAF features such as blocking or allowing traffic from specific countries, setting up strict whitelists, implementing rate limiting, and more.
I've talked with people from some small/mid OTSes in recent weeks. They said they've talked with 'big OTS owners' and none of them really use WebSockets for the proxy. Some of them tried, but it failed: they spent over a week trying to implement WebSockets with CF and it still failed.
The Cloudflare docs say that WebSocket connections can be closed multiple times a day as they update their infrastructure.
Did you try using just WebSocket proxies, or a combination of CF WebSocket proxies with haproxies on VPSes, so that if CF fails, the haproxies process all packets (0.25 s of lag, no way a player will notice it)?

And the client part doesn't seem finished: what IP are we setting the client to connect to? It seems it will just be one, and then this system is useless when that one goes down?
For any Kondra proxy, set the IP in the client to 127.0.0.1; the proxy system will then take control over that network packet and send it through the OTS proxy system.
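
For example (a minimal init.lua sketch, the hostname is a placeholder): the proxy is registered through g_proxy, while the address typed into the client stays 127.0.0.1.
LUA:
-- init.lua: register the haproxy VPS (placeholder hostname)
g_proxy.addProxy('proxy01.example.com', 7200, 0)
-- In the client's server field use 127.0.0.1 with the login port (e.g. 7171);
-- the proxy system intercepts that local connection and routes it through the proxy.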
 
(attached screenshot: alpha proxy error output)


I'm getting this. I can make it to the character list, but when picking a character and trying to connect to the game, the alpha proxy gives me these errors.

Could it be some error in my TFS sources? Or does this happen before that point?

Edit: Solved.
 
I've talked with people from some small/mid OTSes in recent weeks. They said they've talked with 'big OTS owners' and none of them really use WebSockets for the proxy. Some of them tried, but it failed: they spent over a week trying to implement WebSockets with CF and it still failed.
The Cloudflare docs say that WebSocket connections can be closed multiple times a day as they update their infrastructure.
Did you try using just WebSocket proxies, or a combination of CF WebSocket proxies with haproxies on VPSes, so that if CF fails, the haproxies process all packets (0.25 s of lag, no way a player will notice it)?


For any Kondra proxy, set the IP in the client to 127.0.0.1; the proxy system will then take control over that network packet and send it through the OTS proxy system.
The player should not feel any lag in that moment, because the Kondra proxy sends and receives packets simultaneously on multiple connections (minimum 2). So during the reconnect event the client simply receives and transmits packets through the other connections. The AWS disconnects are minimal, CF disconnects once every few hours, Google once per hour. Zero impact on the player.
 
I'm very glad that there's finally a system available that allows effective protection against DDoS attacks.

I'm trying to implement it and I've encountered some issues. Below I'll describe, step by step, what I've done so far and what results I've gotten.

I use 2 VPSes to test it out:
  • 1st VPS: backend with TFS
  • 2nd VPS: HAProxy server

1. HAProxy configuration on 2nd server:
Code:
global
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

defaults
        timeout connect 4s
        timeout client 50s
        timeout server 50s

# login connections
listen l1
        bind 0.0.0.0:7100
        mode tcp
        server srv1 <1st_vps_IP>:7101 send-proxy-v2
# game connections
listen l2
        bind 0.0.0.0:7200
        mode tcp
        server srv2 <1st_vps_IP>:7201 send-proxy-v2
# status connections
listen l3
        bind 0.0.0.0:7300
        mode tcp
        server srv3 <1st_vps_IP>:7301 send-proxy-v2


2. I have applied these changes to my TFS 1.6

3. Configured these ports in config.lua for TFS:
Code:
loginProtocolPort = 7171
gameProtocolPort = 7172
statusProtocolPort = 7171

4. Applied those changes in OTClient

5. Added to the OTClient init.lua file:
Code:
g_proxy.addProxy('<2nd_haproxy_vps_IP>', 7201, 0)

6. Installed HAProxy on the backend TFS server with this config:
Code:
global
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

defaults
        timeout connect 4s
        timeout client 50s
        timeout server 50s

# login connections
listen l1
        bind 0.0.0.0:7101 accept-proxy
        mode tcp
        server srv1 127.0.0.1:7171 send-proxy-v2
# status connections
listen l2
        bind 0.0.0.0:7301 accept-proxy
        mode tcp
        server srv1 127.0.0.1:7171 send-proxy-v2

7. I have installed Alpha proxy ([email protected]:thatmichaelguy/alpha-proxy.git) and use this systemd config:
Code:
[Unit]
Description=Alpha Proxy
After=network.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/home/debian/alpha-proxy/build 7201 20
WorkingDirectory=/home/debian/alpha-proxy/build
Restart=always
User=debian
Group=debian
LimitCORE=104857600
LimitNOFILE=50000
[Install]
WantedBy=default.target


  • TFS is running with no errors
  • The proxy on the 2nd VPS is running with no errors
  • The reverse proxy on the 1st server with TFS is running with no errors
  • Alpha proxy is running
Code:
~/alpha-proxy$ ./build/alpha-proxy
Starting proxy server on port 7162 with a maximum of 10 connections per IP allowed

I see this in the client:
(attached screenshot)
(this is the IP of the 2nd VPS server with HAProxy, port 7201)

While trying to log in to the server I see:
(attached screenshot)
and then
(attached screenshot)

Could someone help me figure out how to get it running?

Also, I have no idea where I'm supposed to put the proxies table based on what was written. And is the TFS server on the backend supposed to use it?
 


(this is the IP of the 2nd VPS server with HAProxy, port 7201)
First you must get a ping to the proxy: 'P: 5000' means 'not connected'. Then you can debug why it cannot log in to the account/game, but usually, once OTCv8 is configured correctly, there is no problem logging in to the OTS.
Remember to set the server IP and port in OTC to 127.0.0.1:7171. The proxy system will take control over the account-login packet, deliver it to the OTS VPS and send it to the OTS on port 7171.
Code:
listen l2
        bind 0.0.0.0:7200
        mode tcp
        server srv2 <1st_vps_IP>:7201 send-proxy-v2
On the haproxy server you configured it to listen on 0.0.0.0:7200 and redirect traffic to the OTS VPS IP on port 7201 (where the alpha proxy service runs). You must configure port 7200 in init.lua:
LUA:
g_proxy.addProxy('2nd_haproxy_vps_IP', 7200, 0)
 

I have set port 7200 in init.lua and I'm using IP and port 127.0.0.1:7171 in the client, but it is still not working.
OTClient:
(attached screenshot)
Login also does not work with this data:
(attached screenshot)


I repost my configs:
  • 1st VPS: backend with TFS
  • 2nd VPS: HAProxy server

OTClient init.lua:
Code:
g_proxy.addProxy('<2nd_vps_server_ip>', 7200, 0)

HAProxy on 1st server with tfs:
Code:
# login connections
listen l1
        bind 0.0.0.0:7101 accept-proxy
        mode tcp
        server srv1 127.0.0.1:7171 send-proxy-v2
# status connections
listen l2
        bind 0.0.0.0:7301 accept-proxy
        mode tcp
        server srv1 127.0.0.1:7171 send-proxy-v2

2nd server with HAProxy
Code:
# login connections
listen l1
        bind 0.0.0.0:7100
        mode tcp
        server srv1 <1st_vps_IP>:7101 send-proxy-v2
# game connections
listen l2
        bind 0.0.0.0:7200
        mode tcp
        server srv2 <1st_vps_IP>:7201 send-proxy-v2
# status connections
listen l3
        bind 0.0.0.0:7300
        mode tcp
        server srv3 <1st_vps_IP>:7301 send-proxy-v2

Alpha Proxy config:
Code:
[Unit]
Description=Alpha Proxy
After=network.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/home/debian/alpha-proxy/build 7201 20
WorkingDirectory=/home/debian/alpha-proxy/build
Restart=always
User=debian
Group=debian
LimitCORE=104857600
LimitNOFILE=50000

[Install]
WantedBy=default.target


1st VPS TFS config.lua:
Code:
ip = "127.0.0.1" -- important, do not change
statusIP = "<2nd_vps_ip>" -- set this to your closest forward proxy server IP
loginProtocolPort = 7171
gameProtocolPort = 7172
statusProtocolPort = 7171

Is something still misconfigured here?
 
7. I have installed Alpha proxy ([email protected]:thatmichaelguy/alpha-proxy.git) and use this systemd config:
Run as root (or with sudo):
Code:
netstat -nalp | grep LISTEN
on both VPS servers and post the results. Replace the OTS VPS IP with 1.1.1.1 and the haproxy VPS IP with 2.2.2.2.
It should look like this:
Code:
tcp        0      0 0.0.0.0:6600            0.0.0.0:*               LISTEN      814202/./proxy_serv
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      3404931/nginx: mast
tcp        0      0 0.0.0.0:444             0.0.0.0:*               LISTEN      3404931/nginx: mast
tcp        0      0 0.0.0.0:6502            0.0.0.0:*               LISTEN      3640857/haproxy
tcp        0      0 0.0.0.0:6503            0.0.0.0:*               LISTEN      3640857/haproxy
tcp        0      0 0.0.0.0:6501            0.0.0.0:*               LISTEN      3640857/haproxy
tcp        0      0 0.0.0.0:6504            0.0.0.0:*               LISTEN      3640857/haproxy
tcp        0      0 127.0.0.1:6172          0.0.0.0:*               LISTEN      215456/python3
tcp        0      0 127.0.0.1:6171          0.0.0.0:*               LISTEN      221744/python3
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      3404931/nginx: mast
tcp        0      0 127.0.0.54:53           0.0.0.0:*               LISTEN      3432334/systemd-res
tcp        0      0 0.0.0.0:7171            0.0.0.0:*               LISTEN      3594979/./tfs
tcp        0      0 0.0.0.0:7172            0.0.0.0:*               LISTEN      3594979/./tfs
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      3432451/mariadbd
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      3432334/systemd-res
Maybe we will be able to find the problem with the configuration.
Or add me on Discord: gesior.pl
We can resolve it on Discord and then post on OTLand what was wrong with your config.
 
Is it not possible to use this proxy with just a CIP client?
You can use it with probably any .exe file. Download the Giveria client:
Giveria.exe is code from the OTCv8 author that starts listening on some ports (7171 and 7172, configurable) and then starts an .exe.
If you connect in that .exe (e.g. the Tibia client) to the given port on IP 127.0.0.1, the proxy system will take control over that connection and pass it to the server using the haproxies and the OTCv8 proxy/Alpha proxy server.

In the Giveria client there is Giveria.exe, which launches the "standalone OTCv8 proxy client", and a config file proxy.config that looks like this:
Code:
.\ClientFiles\bin\client.exe
7171 7172
proxy01.giveria.com proxy01.giveria.com 7162
proxy02.giveria.com proxy02.giveria.com 7162
.\ClientFiles\bin\client.exe is the exe to run after launching the proxy (the Tibia 12+ client in this case)
7171 7172 are the ports to listen on at IP 127.0.0.1
Code:
proxy01.giveria.com proxy01.giveria.com 7162
proxy02.giveria.com proxy02.giveria.com 7162
are the haproxy servers to pass packets to.
 
The second parameter of the ExecStart binary is the maximum number of connections per IP (4 proxies = 4 connections per IP per open client).
I ran the Alpha proxy on a medium-size server as a replacement for the OTCv8 proxy code and it hit the default limit easily.
Example limit of 20 connections per IP:
Code:
ExecStart=/path/to/the/proxy/build/alpha-proxy 7200 20
This means that if you use 10 VPSes for haproxy, each open client keeps one connection per proxy, so you can only run 2 MCs per IP (20 connections), i.e. only 2 players from a single home (IP) can be logged in at once.
When you open a 3rd client, it shows a ping of '5000' to all proxies and it is not possible to connect to the OTS.
 