
Alpha Proxy Guide (kondrah/otclient multi-path proxy)

Just a passing thought - maybe we can receive the "login proxy" list over http when the client is opened - that'd keep me happy haha
You can set the list of proxies in init.lua, log in to the account (with the OTCv8 proxy system, it will pick one working proxy to send the login packet), receive a new list of proxies in the login packet, call g_proxy.clear() to remove all proxies (OTCv8 does this by default, Mehah does not), and add the new proxies to the client. It will then use the new proxies to connect to the OTS.
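In OTCv8 Lua terms, that refresh step might look like the sketch below. The g_proxy.clear() and g_proxy.addProxy() calls are the kondrah proxy API exposed to Lua, while the handler name and the shape of the newProxies table are assumptions for illustration:

```lua
-- Hypothetical handler for a proxy list received in the login packet
-- (shape assumed: a list of {host=..., port=..., priority=...} entries).
local function onProxyListReceived(newProxies)
  g_proxy.clear() -- drop the bootstrap proxies set in init.lua
  for _, proxy in ipairs(newProxies) do
    -- lower priority value = preferred proxy
    g_proxy.addProxy(proxy.host, proxy.port, proxy.priority)
  end
end
```

On Mehah's client you would need the explicit g_proxy.clear() shown here, since it does not clear the old list for you.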

I didn't want to hard-code anything, since I also use multiple login proxies for failover, the same way CipSoft does, just in case one of them goes down.
How do you use multiple login proxies for failover without the proxy system? Using some external load balancer with the HTTP protocol to log in?
From the first post's description, it looks like you must either put one of the HAProxy IPs as the server IP in the client and set the login port to 7100, which gives no failover, or log in to the account using the IP of the dedicated server hosting the OTS, which makes this system absolutely useless: if an attacker can get the real OTS IP, he can DDoS it and the OTCv8 proxy becomes useless.
 
You can add a separate listening port in the HAProxy configuration on the forward proxy servers for the login protocol, which forwards to another HAProxy instance running on the backend game server, which in turn forwards to TFS (assuming you still use the protocollogin included in TFS). So the login protocol flow would be: Client -> Forward Proxy (HAProxy) -> Backend (HAProxy) -> TFS
For failover you can simply adjust the code of OTClient's entergame.lua module and make it try different login servers the same way the original CipSoft client does: if one login server times out, it moves down to the next and tries again, and only displays a final error message if all of them fail. Users never connect directly to the game server, so no, the backend IP is never exposed.
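A rough sketch of that failover loop, in entergame.lua-style Lua. The hostnames are placeholders, and connectToLoginServer is a stand-in for whatever function in the module actually issues the login request (not a real OTClient API name):

```lua
-- Ordered list of login servers to try (placeholder hostnames).
local loginServers = {
  { host = 'login01.example.com', port = 6003 },
  { host = 'login02.example.com', port = 6003 },
  { host = 'login03.example.com', port = 6003 },
}

local function tryLogin(index)
  if index > #loginServers then
    -- all servers failed: only now show an error, like the CipSoft client
    displayErrorBox(tr('Login Error'), tr('All login servers are unreachable.'))
    return
  end
  local server = loginServers[index]
  -- connectToLoginServer is hypothetical; on timeout or refusal,
  -- fall through to the next server in the list
  connectToLoginServer(server.host, server.port, {
    onError = function() tryLogin(index + 1) end
  })
end

tryLogin(1)
```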

Example configs:

Forward Proxy haproxy.cfg (for example login01.example.com, login02.example.com, login03.example.com etc)
Code:
# Login protocol
listen l3
        bind 0.0.0.0:6003
        mode tcp
        server srv1 <your backend ip>:6003 send-proxy-v2

Backend haproxy.cfg
Code:
# Login protocol
listen l1
        bind 0.0.0.0:6003 accept-proxy
        mode tcp
        server srv1 127.0.0.1:7171 send-proxy-v2

Of course, since the request is forwarded to TFS using the PROXY protocol v2, you will need code in the game server to parse the header properly and restore the user's real IP address. If you use a standalone login server, like TFLS from Milice, then you don't really need to proxy the login protocol at all, as long as you don't run the login servers on the same host as the game server.
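TFS itself is C++, but the header the game server has to consume can be sketched in Lua just to show the layout (PROXY v2, TCP-over-IPv4 case only; offsets follow the proxy-protocol specification):

```lua
-- PROXY v2: a 12-byte signature, then version/command, family/protocol,
-- a 2-byte big-endian length, and (for TCP over IPv4) 4-byte source and
-- destination addresses followed by 2-byte source and destination ports.
local PP2_SIGNATURE = '\13\10\13\10\0\13\10\81\85\73\84\10'

local function parseProxyV2(data)
  if data:sub(1, 12) ~= PP2_SIGNATURE then return nil end
  local verCmd = data:byte(13)   -- 0x21 = version 2, PROXY command
  local famProto = data:byte(14) -- 0x11 = AF_INET over STREAM (TCP/IPv4)
  local len = data:byte(15) * 256 + data:byte(16)
  if verCmd ~= 0x21 or famProto ~= 0x11 or len < 12 then return nil end
  local a, b, c, d = data:byte(17, 20) -- real client IPv4 address
  local srcPort = data:byte(25) * 256 + data:byte(26)
  -- return the restored client IP/port and the header size to skip
  return string.format('%d.%d.%d.%d', a, b, c, d), srcPort, 16 + len
end
```

The equivalent C++ in TFS would do the same reads on the socket buffer before handing the remaining bytes to the normal protocol parser.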
 
I have been using this Kondrah proxy system for two years now. It is excellent: configured well, it hides the real IP, making an effective attack very expensive.
Now that the system has become free and public, I just have to say that it's over for DDoS attackers.
 
We've been experimenting with extending this proxy system to use WebSockets, and the results have been promising. With this approach there's no need for intermediate VPSes: you can fully leverage the Cloudflare and AWS backbones to reach your production server directly. So far AWS seems to have somewhat better latency than Cloudflare. Some players from the Near East even report a latency improvement when passing through those backbones.


We've been using this setup for the past few months on Gunzodus and Ezodus, and so far we haven’t noticed any latency or stability issues.


An added benefit is improved protection, since this method allows full use of WAF features such as blocking or allowing traffic from specific countries, setting up strict whitelists, implementing rate limiting, and more.

But we still don't require players to use this proxy; it remains optional for now.
 
Cool guide, but has anyone actually managed to make use of it?

For example, the guide says to run the alpha proxy on port 7200, but isn't that port already taken by HAProxy?
And the client part doesn't seem finished: what IP are we setting the client to connect to? It seems it will just be one, and then this system is useless when that one goes down?
 
I've talked with some people from small/mid OTSes in recent weeks. They said they've talked with 'big OTS owners' and none of them really use WebSockets for the proxy. Some of them tried, but it failed: they spent over a week trying to implement WebSockets with CF and gave up.
The Cloudflare docs say that WebSocket connections can be closed multiple times a day as they update their infrastructure.
Did you try using just WebSocket proxies, or a combination of CF WebSocket proxies with HAProxies on VPSes, so that if CF fails, the HAProxies process all packets (a 0.25 s lag, no way a player will notice it)?

For any Kondrah proxy, set the IP in the client to 127.0.0.1; the proxy system will then take control over the network packets and send them through the OTS proxy system.
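Concretely, an OTCv8-style init.lua setup for this might look like the sketch below. The proxy IPs and ports are placeholders, and setMaxActiveProxies controls how many proxies carry traffic in parallel:

```lua
-- Bootstrap proxy list (placeholder addresses). The client itself is
-- pointed at 127.0.0.1, so every game packet is routed through g_proxy.
g_proxy.clear()
g_proxy.addProxy('198.51.100.10', 7200, 1) -- priority 1 = preferred
g_proxy.addProxy('198.51.100.20', 7200, 2)
g_proxy.setMaxActiveProxies(2) -- multipath: send via two proxies at once
```

With at least two active proxies, losing any single proxy (or the direct route to it) does not drop the session.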
 
[attachment: screenshot of the alpha proxy error messages]


I'm getting this. I can make it to the character list, but when I pick a character and try to connect to the game, the alpha proxy gives me these errors.

Could it be an error in my TFS sources? Or does this happen before that point?

Edit: Solved.
 
The player should not feel any lag at that moment, because the Kondrah proxy sends and receives packets simultaneously over multiple connections (a minimum of two), so during a reconnect event the client simply keeps receiving and transmitting packets through the other connections. AWS disconnects are minimal, Cloudflare reconnects once every few hours, Google about once per hour. Zero impact on the player.
 