
Proof of concept of a new game engine

I played pretty much all the meaningful OT servers over the past 10-12 years, and yes, the original CipSoft client and server seem to perform head and shoulders above any amateur project (OTS) when it comes to performance (the CipSoft one is rock solid and very stable) and reliability (I did not encounter any bugs in the CipSoft client and server, whereas on an OTS I always found something wrong). I can't speak for others, but from my point of view the "smooth" feeling on most OTSes is fake, because it's usually caused by some movement optimizations rather than the server working well; on RL Tibia that's not the case, because there everything worked as expected, and this fake feeling of smoothness isn't needed when everything just works. Let me remind you one last time: these are my observations based on 11 years of experience playing on OTSes, not knowledge based on analyzing the code under the hood.
I wouldn't vouch for the client, especially on modern systems. As for the server - yes, exactly that (bolded). Those optimizations are mostly about getting rid of delays that were part of Tibia's design, and about expanding the pre-walk mechanism in the client, which gives the feeling of smoothness for players with high ping (but also results in a possibly bigger desync gap). There's also Nagle's algorithm combined with delayed ACK, which is a TCP issue (that's where the famous Leatrix latency fix was supposed to help, partly). As I said, it's disabled in the OTC/TFS code by default, but it can just as well be disabled in a Tibia server. Those things affect the feeling but don't have much to do with the server working well or badly. The real issue is TFS design flaws, such as the approach to the database, where it constantly inserts huge queries in the main thread, lagging the game (and also preventing you from separating it from the DB). It was discussed years ago and there was no intention to rework it because of a possible "backward compatibility" break.
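For reference, disabling Nagle's algorithm just means setting TCP_NODELAY on the socket, which any server can do regardless of engine. A minimal Rust sketch of the idea (the address is only a placeholder):
Rust:
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Placeholder address; the usual Tibia game port is used here only as an example.
    let stream = TcpStream::connect("127.0.0.1:7172")?;

    // Disable Nagle's algorithm (set TCP_NODELAY) so small packets such as
    // single walk steps are sent immediately instead of being buffered
    // and coalesced while waiting for a delayed ACK.
    stream.set_nodelay(true)?;
    assert!(stream.nodelay()?);
    Ok(())
}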
 
Pardon my pragmatism, but all you have to do is launch two clients side by side (an optimized OTC on an accordingly prepared TFS vs. Cip's stack) and see for yourself; the difference will be visible on anything 30 Hz+, not to mention 120 Hz+ screens.

With all due respect to the work some of you guys put into parsing cams and making sure that everything is as "7.4" as it can get, I also tried your modified Cip stack (that's what you guys are using, right? @kay) and there is not much improvement over the OG Cip stack from a user perspective.

Tibia is a "returning player" game, and these hardly remember how laggy it was and won't find any nostalgia in desync.
RE everything to get to know how it originally was, but don't get too attached to being 1:1 with client choppiness, because that's not what players usually want.

If you still disagree with me, I'll try to dig up some footage to make the Cip stack issues more visible, but please don't take it too personally.
I'm hoping that someday we will be able to fix the aforementioned issues.


500 players would still be more than you were able to benchmark yourself, so it would be wise not to discredit Gesior's words.
As far as I recall, the heaviest load on your launch caused some issues.


Do you have anything to back that up?
I'm not claiming it's perfect, but some of the contributions are very high quality, and your words might be insulting/discouraging to some people.
You could be there to point out issues with the contributions, even without suggesting any solutions - but I never saw that happen, did it?


I'm writing this post because I have a different perception that I'd like to share.
You said that people do it the easy way, so prove yourself that it's valuable.
If that's beyond your understanding - I'm sorry, but I still doubt that's a reason to tag anyone or to delete my posts.
About the "easy way": I'm not going to discuss the RE methods for the tarball. I have seen and exhaustively tested a full-source RealOTS working exactly like the tarball, and guess what, that person did it the same way.
 
Good luck with your project!

Just expressing my humble opinion: if the Tibia protocol is not as well optimized as it could be (due to old technology, perhaps?), recreate it your own way. This way, you only have to rewrite some things in OTClient and we can have a modern server side with up-to-date technologies.
 
I wouldn't vouch for the client, especially on modern systems. As for the server - yes, exactly that (bolded). [...] Those things affect the feeling but don't have much to do with the server working well or badly.
Yes, I know that most of the things you mentioned revolve around the server side, and I'm glad that my observations are correct. But I would also add a few words about the client, because I'm not sure if my problems with it come from the fact that I'm an AMD GPU user and clients are mostly optimized for Nvidia, or if it's just bad overall. I can't say for sure that the client is bad overall; I just have bad experiences playing on it compared to the vanilla Tibia clients, so I rather think the optimization for AMD cards might be worse than, let's say, for Nvidia. I don't have the knowledge to test it properly, but I did play different clients and test them as an AMD user, and all I can say is that the vanilla client is by far the best in my case. Maybe someone who owns cards from both brands and has the needed knowledge could test clients like OTC and compare them to the vanilla one.
 
About the "easy way" I'm not going to discuss the RE methods for the tarball, i have seen and tested (exhaustive) a fully source RealOts working exactly as the tarball, and guess what, that person did it the same way.
It's obvious to anyone who's worked in reverse engineering. Each decompiler may use different algorithms, have a different approach or level of support for certain tasks, and its output quality may vary depending on the situation. There isn't one that outmatches all the others in every aspect and every case. That alone is already a valid reason to compare. Even a different interpretation of loops can give a better view into what's going on there.
Also, to learn about a possible issue (the one he linked is completely irrelevant to the topic, by the way), first you need to be aware of that issue. And you won't always know without a reference (say, the output of a second decompiler) or without analyzing the asm. But the latter is exactly the tedious work you want to omit (whenever possible) by using decompiled code in the first place. Hence his claim is a logical fallacy.
Learning about all the issues one chosen program may have (say Hex Rays) is also not your main concern, otherwise you may end up reverse engineering a decompiler. Your goal is to make the best sense out of the CipSoft server code without having to analyze the whole asm byte by byte yourself (only the necessary parts). From my experience I can also guarantee that using a side decompiler (e.g. Rec Studio) will in many cases spare the OP additional work and time. There's no downside to this, therefore no point in discussing it.
 
which gives the feeling of smoothness for players with high ping (but also results in a possibly bigger desync gap).
Not only for players with high ping - the difference is easily noticeable even with no ping at all, on a local server.
The latter can be fixed too; there are modern approaches that can solve most of the desync issues, and I have my project to prove it - you don't even need an account to test it.
I'm not trying to be cocky, just to debunk the myth of vanilla Cip perfection, because honestly it's just misleading.

Hence his claim is a logical fallacy.
Learning about all issues one chosen program may have (say Hex Rays) is also not your main concern, otherwise you may end up reverse engineering a decompiler.
The only logical fallacy here is not trying to understand your tools.
From my experience I can also guarantee that using a side decompiler (e.g. Rec Studio) will in many cases spare the OP additional work and time.
I'm sure that if the OP's only intention were to "save time", he would not be creating a new engine for a retro game on his own.
 
yes, the original CipSoft client and server seem to perform head and shoulders above any amateur project. From my point of view the "smooth" feeling on most OTSes is fake, because it's usually caused by some movement optimizations rather than the server working well; on RL Tibia that's not the case, because there everything worked as expected, and this fake feeling of smoothness isn't needed when everything just works.
I agree with you on this. Regular OTSes are too easy; it makes players think they're better than they are. Many don't see that it smooths the depth out of the gameplay. Then they see how bad they are on a real Cip server and blame it on the client xDD

Tibia is a "returning player" game, and these hardly remember how laggy it was and won't find any nostalgia in desync.
RE everything to get to know how it originally was, but don't get too attached to being 1:1 with client choppiness, because that's not what players usually want.
Good for short-term fun, but not for long-term success. The constant need to add popular new features to an old success comes from your copy being too bland...
 
Is there a Discord server (or could you make one) for this? It could be nice to have more in-depth discussions and to watch the development process.
I think it's way too early for that. It would also be hard to have in-depth discussions when only a few people here on OTLand know Rust well enough. But hopefully this project will encourage others to learn this language, or at least to try it. In my case, it had me after the fourth chapter, which describes how memory works (the stack and the heap) and shows how well Rust manages it.
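A tiny example of the kind of thing that chapter covers (my own sketch, not taken from the book): ownership ties each heap allocation to a scope, so values are moved instead of copied and freed deterministically, without a garbage collector.
Rust:
fn shout(s: String) -> String {
    s.to_uppercase()
}

fn main() {
    // `name` owns a buffer on the heap; `len` is a plain value on the stack.
    let name = String::from("Rust");
    let len = name.len();

    // Ownership of the heap buffer moves into `shout`: no copy, no GC,
    // and the buffer is freed automatically when `upper` goes out of scope.
    let upper = shout(name);
    println!("{upper} ({len} bytes)");

    // println!("{name}"); // would not compile: `name` was moved above
}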
 
I think such an engine deserves more planning. If you want this engine to have the potential to replace or compete with TFS, it's useful to know its strengths and faults. The cipengine features also need to be documented before implementation, so you know their strengths and faults too and can take what is useful.
 
The only documentation for cipengine is the posts here on OTLand. Only a few people have studied this engine from scratch, and I don't just mean us (here in this thread and similar ones), but people in the past too. Sadly, that crucial resource went untouched for many years during the golden OT age, and now we have theforgottenaboutcipengineserver tag 1.4 xD
Funny, because the original intention to create an open source Tibia server (Tibia, not TFS, not OTHire, etc.) has gone missing.
 
How is this going?
I'm creating a map editor for this new map format (more details in this post). It'll be written in Flutter.
So far I have spr and dat loading (partially) and drawing items on the canvas, together with saving them in state. Now I'm having some problems adding zoom.

[Video attachment: otstudio.mov]

Let me know if there are some features not present in RME that would be useful for you.
 
Why do I have a feeling that this will have REALLY bad performance with bigger maps? Flutter lol
 

I don't quite understand how he went from Rust to Flutter, so random. It would be better for him to just stick to Rust; it's not like you can't make Windows apps with it, and it would only improve his skills, which could then be used on this new game server.
 
I love Rust, but that doesn't mean I should use it for everything without hesitation. Rust is great for a game server, but Flutter is great for UIs. And if I had to write a REST API, I would probably use NodeJS and NestJS (great framework), or at least consider it. It's not that you can't write an API in Rust - there is Rocket, for example - but the Node ecosystem is so much bigger, and in this case performance is not a priority.
It's all about choosing the right tool for the job.

Why do I have a feeling that this will have REALLY bad performance with bigger maps?
OK, I'll test its performance as soon as possible and we'll see. But I chose Flutter mostly because I want to add more modules to this map editor in the future, to make it the only tool you need, and writing complex UIs in Flutter just feels great. In Rust it would be much harder; there are many UI crates, but none of them is mature enough.
 
Ok, I think it's time for an update.

I have quite a lot implemented in Flutter, but there are two big issues that make me unsure whether this is the right way to go.

First, Flutter has a drawAtlas method, which allows rendering a lot of things on the canvas with really good performance, but if I use it, there are black lines on the screen. It's most likely the Flutter antialiasing issue, which has been open for a few years now, and nobody knows when it'll be fixed.
[Screenshot: black line artifacts when rendering the map with drawAtlas]
I tried rendering the map using a for loop instead, and it looks good, but the performance is of course very bad when there are a lot of tiles on the screen. So a 'workaround' would be to use the for loop when the zoom is not smaller than ~0.5x and switch to drawAtlas below that. But it still sucks.

Second, Dart (the programming language used by Flutter) doesn't have a real multithreading mechanism. It has isolates, but they are not really threads, because each isolate has its own memory and there's no way to access another isolate's memory. This means that if I want to pass data from one isolate to another, Dart will copy it in memory, which is a costly operation. So, for example, if I want to save project changes, I need to copy all the changes in memory to the isolate that will do the saving, and the copying is done in the UI isolate, freezing the UI. If the project is imported from TFS files, saving it for the first time means it has to copy the whole project data, so the UI will freeze for a minute or two. This one I could actually accept, because importing projects won't happen that often.
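For comparison, this is exactly the kind of situation where native threads with shared memory help; a rough Rust sketch of the same save scenario (MapData is just a made-up stand-in for the real project data):
Rust:
use std::sync::Arc;
use std::thread;

// Hypothetical project data; stands in for the real map structures.
struct MapData {
    tiles: Vec<u16>,
}

fn main() {
    let map = Arc::new(MapData { tiles: vec![0; 1_000_000] });

    // The saving thread only clones the Arc (a pointer plus a refcount bump),
    // not the underlying tile data, so the UI thread is never blocked by a big copy.
    let for_saving = Arc::clone(&map);
    let saver = thread::spawn(move || {
        // ... serialize `for_saving.tiles` to disk here ...
        for_saving.tiles.len()
    });

    println!("saved {} tiles", saver.join().unwrap());
}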

But there is Tauri, and I'm thinking about it. It's basically a better Electron with a Rust backend (and a native WebView under the hood). So I could write map rendering and IO operations in Rust, compile them to Wasm and run that in the WebView together with the UI written in HTML+JS+CSS. Of course, I would use a frontend framework like React or Svelte, or even Yew. So the plan is to play with it for a while, and if it turns out to be a good tool for the job, I'll port my existing Flutter project.
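For the IO side, Tauri also lets the WebView call native Rust functions directly as commands. A minimal sketch of what that looks like (load_area and its path argument are hypothetical, just to show the shape):
Rust:
// src-tauri/src/main.rs - exposing a native Rust function to the WebView.
#[tauri::command]
fn load_area(path: String) -> Result<String, String> {
    // Heavy file IO stays on the native Rust side; the frontend only gets the result.
    std::fs::read_to_string(&path).map_err(|e| e.to_string())
}

fn main() {
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![load_area])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
From the JS side this would be called with invoke('load_area', { path }).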

Now, I also need some feedback regarding the project structure. I'm still not sure what format map areas should be saved in, but for now I've used JSON (mostly because Dart has native support for it). What do you think about this:
JSON:
{
    "tiles": {
        "32099,31567,9": [
            6967,
            {
                "item": 3031,
                "count": 24
            },
            {
                "item": 3028,
                "count": 12
            }
        ]
    }
}
If the only attribute an entity has is item, then instead of { "item": 6967 } it's just 6967, to save disk space and make it more readable. The same applies to tiles: if a tile has some attributes, for example protection zone, it becomes:
JSON:
{
    "attributes": {
        "protectionZone": true
    },
    "entities": [
        6967,
        {
            "item": 3031,
            "count": 24
        },
        {
            "item": 3028,
            "count": 12
        }
    ]
}
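To show how straightforward this format is to consume, here's a rough idea of how it could be deserialized in Rust with serde and serde_json (the type and field names are made up, and attribute values are assumed to be booleans for simplicity):
Rust:
use serde::Deserialize;
use std::collections::HashMap;

// An entity is either a bare item id (e.g. 6967) or an object with attributes.
#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum Entity {
    Id(u16),
    Full { item: u16, count: Option<u16> },
}

// A tile is either a plain entity list or an object with extra attributes.
#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum Tile {
    Entities(Vec<Entity>),
    WithAttributes {
        attributes: HashMap<String, bool>,
        entities: Vec<Entity>,
    },
}

#[derive(Debug, Deserialize)]
struct Area {
    // Keys are "x,y,z" position strings, e.g. "32099,31567,9".
    tiles: HashMap<String, Tile>,
}

fn main() {
    let json = r#"{ "tiles": { "32099,31567,9": [ 6967, { "item": 3031, "count": 24 } ] } }"#;
    let area: Area = serde_json::from_str(json).unwrap();
    println!("{:?}", area.tiles["32099,31567,9"]);
}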

And lastly, I was also thinking about how to store assets (sprites). If we're going to have human-readable map files, why should we keep using the .spr file? So instead I created a directory with item assets, and each item has its own directory. Image names contain information about the frame, pattern x/y/z and layer. What do you think?
[Screenshot: item asset directory layout]
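Whatever the exact naming convention ends up being, mapping a file name back to sprite coordinates is trivial; a sketch assuming a hypothetical scheme like f0_x1_y0_z0_l0.png (not necessarily what the screenshot shows):
Rust:
// Hypothetical layout: "f{frame}_x{px}_y{py}_z{pz}_l{layer}.png".
#[derive(Debug, PartialEq)]
struct SpriteKey {
    frame: u8,
    pattern_x: u8,
    pattern_y: u8,
    pattern_z: u8,
    layer: u8,
}

// Strip a prefix from one underscore-separated part and parse the number.
fn field(part: Option<&str>, prefix: &str) -> Option<u8> {
    part?.strip_prefix(prefix)?.parse().ok()
}

fn parse_name(name: &str) -> Option<SpriteKey> {
    let stem = name.strip_suffix(".png")?;
    let mut parts = stem.split('_');
    Some(SpriteKey {
        frame: field(parts.next(), "f")?,
        pattern_x: field(parts.next(), "x")?,
        pattern_y: field(parts.next(), "y")?,
        pattern_z: field(parts.next(), "z")?,
        layer: field(parts.next(), "l")?,
    })
}

fn main() {
    let key = parse_name("f0_x1_y0_z0_l0.png").unwrap();
    assert_eq!(key, SpriteKey { frame: 0, pattern_x: 1, pattern_y: 0, pattern_z: 0, layer: 0 });
}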
 
Is saving the map as JSON a good idea? The file would be very large, and it would take a lot more time to load than a binary file. But at least it would be possible to track the file with git (as you would know exactly which part of the map a contributor changed).
 