
TFS 1.X+ Optimizing player saves & when saves execute

Itutorial

Hello everyone,

I noticed that all player save information is handled in one place: IOLoginData::savePlayer(). I do not really like that, in order to save a player's items, I have to save everything else along with them. So I fragmented the IOLoginData::savePlayer() method into the following methods.

C++:
IOLoginData::savePlayerInstantSpells();
IOLoginData::savePlayerInventoryItems();
IOLoginData::savePlayerDepotItems();
IOLoginData::savePlayerInboxItems();
IOLoginData::savePlayerStoreInboxItems();

I added the instant spell saves in Player::learnInstantSpell() and Player::unLearnInstantSpell().
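
For reference, a minimal sketch of what that hook could look like in TFS 1.x's Player::learnInstantSpell(); the signature of savePlayerInstantSpells() is an assumption, since the post doesn't show it:

C++:
void Player::learnInstantSpell(const std::string& spellName)
{
    if (!hasLearnedInstantSpell(spellName)) {
        learnedInstantSpellList.push_front(spellName);
        // persist the learned spells immediately instead of waiting for logout
        IOLoginData::savePlayerInstantSpells(this);
    }
}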

Then I added the following to Game::playerMoveItem():

C++:
ReturnValue ret =
    internalMoveItem(fromCylinder, toCylinder, toIndex, item, count, nullptr, 0, player, nullptr, &fromPos, &toPos);
if (ret != RETURNVALUE_NOERROR) {
    player->sendCancelMessage(ret);
    return;
}

// Player moved item into or out of depot
if (dynamic_cast<const DepotChest*>(fromCylinder) || dynamic_cast<const DepotChest*>(toCylinder)) {
    player->saveDepotItems();
}

// Player moved item out of store inbox
if (dynamic_cast<const StoreInbox*>(fromCylinder) && !dynamic_cast<const StoreInbox*>(toCylinder)) {
    player->saveStoreInboxItems();
}

// Player moved item into or out of inbox.
if (dynamic_cast<const Inbox*>(fromCylinder) || dynamic_cast<const Inbox*>(toCylinder)) {
    player->saveInboxItems();
}

// Player moved an inventory item.
if (fromPos.x == inventoryXPosition || toPos.x == inventoryXPosition) {
    player->saveInventoryItems();
}

I also throttled item moves to one every 100 ms.
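
A throttle like that could look roughly like the fragment below, assuming TFS 1.x's canDoAction()/setNextAction() helpers; the exact placement inside Game::playerMoveItem() may differ:

C++:
// near the top of Game::playerMoveItem(), before the move is validated
if (!player->canDoAction()) {
    // too soon after the last move; reject it (a real implementation could
    // instead reschedule the move with a SchedulerTask)
    player->sendCancelMessage(RETURNVALUE_YOUAREEXHAUSTED);
    return;
}

// allow the next item move no sooner than 100 ms from now
player->setNextAction(OTSYS_TIME() + 100);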

For the inbox items I also had to make them save when players mail items to one another.

Right now everything works properly: any time an item is moved, in every scenario where it would need to be saved, it is. I am not using the store inbox at the moment, but I assume it works just as well. This freed up a lot of the "lag" caused by player saves.

So my question is: what might I be missing? If I am not missing anything, would this be something we should see in the source code? Doing it this way keeps the database completely up to date in case of a crash. I also added saving for house items and limited house items to being moved once every 2 seconds, because all house items have to be saved each time. That part is not included here, though.
 
What do you mean, save partially? I am assuming that any action in game that would require the database to be updated triggers an update. How would extra items be created if they are deleted from the database and repopulated on every save action that requires it?
 
This type of micro-saving will become extremely problematic when you have 1000+ players online; you will easily bottleneck your DB. Just remember that each DELETE statement checks foreign keys... that's a fuck ton of executions. It still amazes me that TFS doesn't even use TRUNCATE to empty a table, for example when saving house items on a server save...
 
Simply move an item out of a house into the player's inventory (there are probably lots more cases, but this is the most trivial one). If you save only one side, it's a dupe.
Why do you think saving on logout, the way TFS does it, dupes items on crashes?
 
I save house items when they are moved anywhere, I save items when they are moved into the inventory, I save items when they are mailed, and I save items when they are traded. My question is: where else, if there is anywhere else?

Saves
1) House item move (taking from, adding to, moving inside house)
2) Inventory item move (same here)
3) Depot item move (same here)
4) Inbox item move (same here)
5) Store inbox item move (same here)
6) Traded items
7) Items save on death or logout

Also note that it only saves if the item is actually moved. And if the server detects moves, talking, etc. happening too fast, it will kick the player for longer and longer periods of time.
 
Does it not affect performance?
 
Of course it will affect performance, but I need to get this working before I fix that problem. The fix is simple: instead of deleting every item and re-adding them all every time it saves, I will just update the item that changed. Easy performance fix.
 
I agree with Itutorial. @Nekiro, how can this be abused for duping when he is doing immediate saving all around? The problem with TFS is that it does NOT do it this way; instead, it only saves the entire player upon logout, so a player can grab something from a depot or a player's house, have it on them, crash the server, and now the item is duped.

If he grabs the item from a house with this method, the house items are immediately updated, and so are the player items... one added, one removed... so I'm curious why this would be a vulnerability. I'm not just trying to argue here; I truly would like to know, because I have considered this kind of micro-saving before myself and was instantly shut down, with everyone saying the same thing you are now: that it would make it easy for people to dupe items. Back then I just took what everyone was saying as fact from people more experienced than myself, but now I'm looking over the logic and the data and I am still missing this inherent flaw.

Could you please be very specific and draw this out for us, so that we can also see the logic flaw if there is one?

I'm not saying there is no flaw... I believe I couldn't see it last time either, and then Sarah explained it to me, but right now I'm still struggling to wrap my head around how this system could have that flaw.

As for the performance hit: sure, calling the save functions all those times and querying the database that often could indeed hurt performance. But as Itutorial already stated, one easy fix is to stop deleting and re-saving EVERYTHING, every single possible item every single time, and to only update the entry that needs updating. Even better, this data could be maintained in RAM, stored privately in the sources, constantly updated via these micro updates, and only actually pushed to the database when the player logs out or dies; or, to be more cautious, on a timer after the player has been offline for X amount of time. Doing it this way would even allow the database calls (for saving) to be completely offloaded from the thread currently doing this work, so if we gave it its own dedicated thread, it would actually end up being a major performance gain instead of a loss.
 
There is no flaw (so I say!). In truth, I need to update the current queries to work with the system I have. For example:

Player moves a depot item to their inventory:
In one query >> remove the item from the depot + add the item to the inventory.

As it currently stands, a player could theoretically crash the server at the exact instant where one query has executed but not the other. Very unlikely, but if it can be broken, it will be broken. Other than that though: fix the queries, group the ones that need to happen together, and you get no more duping and faster saving.
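
Grouping queries that must happen together is exactly what a database transaction provides. A minimal sketch using TFS 1.x's DBTransaction helper; the savePlayer*() signatures are assumed from the first post:

C++:
DBTransaction transaction;
if (!transaction.begin()) {
    return false;
}

// both sides of the move are written atomically: a crash between the two
// saves can no longer leave the item in both the depot and the inventory
if (!IOLoginData::savePlayerDepotItems(player) ||
    !IOLoginData::savePlayerInventoryItems(player)) {
    return false; // DBTransaction rolls back in its destructor
}

return transaction.commit();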
 
He did not say he was saving houses when moving items from inventory, so I assumed he wasn't.
Either way, even if it does not dupe (assuming you are sure you put the save in every single place possible), it will lag like hell; you'll need heavily optimized item saving to make this work, binary for example.

There are so many places in the sources that I simply find this solution bad and unmaintainable. The logic would be quite complex too, because you have to detect where an item was added, or from where to where it moved, to trigger the right saves; unless you just want to spam a full save every time, in which case you had better get a good CPU for hosting this server. Imagine you have 500 players who move items literally every single second (for example using a rune or ammunition) and you spam a save every single time. Your CPU would probably die handling that query spam, and mutating queries are quite expensive to execute.

My approach to player saving is to trigger the save periodically: when a player logs in, he is loaded into memory; when he logs out, he is not unloaded. Unloading happens only once the player has been saved to the database and is offline; otherwise he is kept in memory for fast access. Paired with caching on everything, it performs pretty well. There is also an emergency save when the crash handler catches a crash.
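
A rough sketch of that lifecycle; loadedPlayers, periodicSaveTick() and the online flag are illustrative names, not TFS API:

C++:
std::map<uint32_t, Player*> loadedPlayers; // in-memory cache, survives logout

// logout does NOT unload: the object stays cached in memory
void onPlayerLogout(Player* player)
{
    player->setOnline(false);
}

// periodic task: save everyone, then unload only players who are
// both saved and offline
void periodicSaveTick()
{
    for (auto it = loadedPlayers.begin(); it != loadedPlayers.end();) {
        Player* player = it->second;
        IOLoginData::savePlayer(player);
        if (!player->isOnline()) {
            it = loadedPlayers.erase(it); // saved + offline -> unload
        } else {
            ++it;
        }
    }
}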
 
Yeah, I think performance will be the main issue. Having a save handler would work (which I was planning on if needed), but even with items stored the way they are, saving a single item would not take much. In theory it is probably plenty fast; 500 players isn't that many for a CPU. The issue is when you are doing things like deleting every single item from the database and recreating all of them, not just for one set of items but for all of them, plus everything else the player save does.

With a save manager it would also not cause duping, because it wouldn't matter what happened in-game: everything would reset to the state before the crash, because nothing was saved anywhere. The main thing to consider is: "Is the database always in its most up-to-date state?" What causes problems is random saves of half this, half that (like saving a player who logs out but not the one who stays online, after the two exchanged items).

I will leave this open; hopefully more opinions come in. Just keep in mind, anyone who replies: I am going to optimize item saves and test thoroughly that every way items can be moved, and should be saved, is handled.
 
The biggest problem is really performance. I believe that performing these individual saves plus an exhaust-based save would be a great option; that way it wouldn't overload the CPU. In this case, we would have the following:

It wouldn't prevent duplicated items 100%, but I believe it would improve protection against item duplication or loss by 80%.


Example:
- When moving the first item, the player receives a 5-second exhaust.

- If he tries to move more items during it, that is recorded as a boolean; let's call it nextSave = true.

- If this boolean is true when the exhaust ends, the save is performed again; otherwise it is not necessary.

The only problem with this is that if the server crashes before the exhaust ends, items are still duplicated or lost (but that already happens nowadays anyway xD)
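
A minimal sketch of that debounce, assuming TFS 1.x's scheduler; saveExhaust, nextSave and requestItemSave() are hypothetical names for the idea above:

C++:
void Player::requestItemSave()
{
    if (OTSYS_TIME() < saveExhaust) {
        nextSave = true; // a save ran recently; coalesce into one later flush
        return;
    }

    saveExhaust = OTSYS_TIME() + 5000;
    IOLoginData::savePlayerInventoryItems(this);

    // when the 5 s exhaust ends, flush once more only if something changed;
    // the task re-resolves the player by ID since he may be gone by then
    uint32_t playerId = getID();
    g_scheduler.addEvent(createSchedulerTask(5000, [playerId]() {
        Player* player = g_game.getPlayerByID(playerId);
        if (player && player->nextSave) {
            player->nextSave = false;
            IOLoginData::savePlayerInventoryItems(player);
        }
    }));
}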
 
I like the idea, but the way I see it, if the database is not 1:1 with game state we are doing something wrong. I would rather reduce the speed at which players can do something (like RuneScape does) than have even a 0.0000000000001% chance of item duplication or item loss. Obviously the best solution is a server that never crashes, but…
 
Redis cache solves performance and state issues.


As for the itemized saves... I honestly don't feel there is going to be a huge cost to the database if you manage the saves in a queue that wakes up, waits about 200 milliseconds, runs all the queries in the queue, and then goes back into a suspended state, waiting for more save tasks. Yes, I am of course talking about having it run by a coroutine. If we ran a system like this, and we also updated specific entry sub-data, or really just managed the way the data is serialized better, we would no longer be deleting 415461451 items just to re-insert them all the same with only one changed.

Binary, as Nekiro suggested, is also a very viable option and is actually the path I plan to take. It's a much harder path, but, if implemented properly, I view it as a much more beneficial one. I'm not talking about just using MySQL and BLOB'ing the data; I'm talking about things like Protocol Buffers, Cereal, Cap'n Proto, etc. I will personally be using Cap'n Proto, or at least that is what I have tentatively chosen.

Binary is ridiculously faster. Binary doesn't require a database server set up as a third-party install. Binary doesn't require a connection. These advantages suit my plans perfectly, since I would prefer to get back to some of the things from the beginning of OT that I personally loved, such as it being so easy to get started: you basically download, unzip, double-click and log in.

As for performance: if you wanted to squeeze a lot of gains out of the current iologindata code, get rid of the exceptions (i.e. the try/catch blocks) in favor of error logging and manually deciding what to do on any failure, such as an early return false, paired return data, a class/struct similar to std::expected, or optionals if you have to. Whatever you pick is better than paying for the try/catch blocks littering that code. I know they are there for a reason, but I have never once triggered one of them, which means it is mostly a very costly safety net that isn't saving anything. Besides, who wants the server to throw exceptions when these things happen anyway? It's not like anyone is writing code to handle such exceptions. But yeah: micro updates, coroutine management where you are not always doing every single transaction one at a time, and getting rid of those try/catch blocks would do wonders, giving you enormous performance gains, to the point where you could afford to take the hit and save much more often.
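
A rough sketch of that queue, using a plain worker thread instead of a coroutine for brevity; SaveQueue and everything in it are illustrative:

C++:
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class SaveQueue {
public:
    void push(std::string query)
    {
        {
            std::lock_guard<std::mutex> guard(mutex);
            pending.push(std::move(query));
        }
        signal.notify_one();
    }

    void run()
    {
        std::unique_lock<std::mutex> lock(mutex);
        while (true) {
            // suspend until at least one save task arrives
            signal.wait(lock, [this] { return !pending.empty(); });

            // wait ~200 ms so more queries can pile into this batch
            lock.unlock();
            std::this_thread::sleep_for(std::chrono::milliseconds(200));
            lock.lock();

            std::queue<std::string> batch;
            batch.swap(pending);

            lock.unlock();
            executeBatch(batch);
            lock.lock();
        }
    }

private:
    void executeBatch(std::queue<std::string>& batch)
    {
        // real version: begin one DB transaction, run every query, commit
        while (!batch.empty()) {
            batch.pop();
        }
    }

    std::mutex mutex;
    std::condition_variable signal;
    std::queue<std::string> pending;
};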
 
I am actually wondering if OT really needs to go that far. Let's assume 1k players is the maximum on any server. The player:save() function, as it is, can be run on every player in a loop and it executes in about 2 seconds.

Let's assume that, because we use properly targeted SQL queries, instead of roughly 20 ms per save we get something like 0.0004 ms, or 0.0001 ms with MariaDB. If all 1000 players moved an item at the exact same time (which never happens), that would still be less than a millisecond to execute: 1000 × 0.0004 ms = 0.4 ms. It is really not that much overhead, imo.

If you held the queries for 200 ms, the execution would take the same total time, just all in one chunk. If done right, meaning you build it to execute only a single optimized query, it could definitely be better, but you sacrifice database integrity.

I guess I need to look up the limits on database calls for MySQL.

Binary, I assume, is probably the best option, but that's a path for another day for me.
 
That's not the main issue with it.

If 1000 players moved an item, you have far more than 1000 queries once you take into account the foreign-key checks on SQL DELETE statements. And you will just bottleneck, especially without other I/O going on, which is why it is always best to batch-process large amounts of queries (maybe a hundred or so at a time) to massively reduce processing time.

In the end, saving everything together using a transaction ensures there are no dupes.
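
On the batching side, TFS already ships DBInsert, which collapses many rows into a single multi-row INSERT, so a batch costs one round trip instead of one statement per item. A sketch; changedItems and its fields are illustrative, and the attribute serialization is simplified:

C++:
Database& db = Database::getInstance();

// one multi-row INSERT instead of one statement per item
DBInsert itemsQuery("INSERT INTO `player_items` (`player_id`, `pid`, `sid`, `itemtype`, `count`, `attributes`) VALUES ");

for (const auto& item : changedItems) {
    std::ostringstream row;
    row << playerId << ',' << item.pid << ',' << item.sid << ','
        << item.type << ',' << item.count << ','
        << db.escapeString(item.attributes);
    if (!itemsQuery.addRow(row)) {
        return false;
    }
}

if (!itemsQuery.execute()) { // single round trip for the whole batch
    return false;
}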
 
It's the 10th time I see this discussion happening; I already said how to solve this at least 6 years ago:

1. Remove all controllable saves -> players should not be able to trigger a save (by logging out, leveling up, etc.).
2. Create 2 sets: one for periodic saves, one for priority saves.
3. Cache the information before any save.
4. Event-based: decide which players get inserted into the periodic set and which into the priority set.
5. Upon deciding to save, you do a transaction and batch-save the whole queue together (through a commit, because we want this to be atomic).
6. Even if the server crashes, you ensure everyone in the priority queue and everyone in the periodic queue is in the same state.

Assuming you define the events correctly (the list from @Itutorial is already a pretty good start) and you always save the entire set together, you'll never have dupes.
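
A condensed sketch of steps 2-5, assuming TFS 1.x's DBTransaction, Game::getPlayerByID() and IOLoginData::savePlayer(); the two sets and flushSaves() are illustrative:

C++:
std::unordered_set<uint32_t> periodicSet; // everyone online, flushed on a timer
std::unordered_set<uint32_t> prioritySet; // e.g. trade/house/mail participants

bool flushSaves(std::unordered_set<uint32_t>& set)
{
    DBTransaction transaction;
    if (!transaction.begin()) {
        return false;
    }

    // the whole set is written in one transaction: either every player in
    // the batch reaches the new state, or (on a crash) all of them roll back
    for (uint32_t id : set) {
        if (Player* player = g_game.getPlayerByID(id)) {
            if (!IOLoginData::savePlayer(player)) {
                return false; // rollback via DBTransaction's destructor
            }
        }
    }

    set.clear();
    return transaction.commit();
}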
 
Okay, this is a new one, and it isn't bad. I still do not like the idea of an out-of-sync database. There is no reason (with how amazing hardware is now) that we should need to get this complex for 1000 players in a simple 2D MMO. I would much rather run a separate database server that handles all the queries while the game server just handles keeping the game state together.

Is there an issue with letting players trigger queries if they are limited in when they can do so? E.g. a player cannot move another item for X time because he just moved one and that move was saved.
 
I really get excited when someone takes the initiative to optimize TFS, but apparently people underestimate the number of players a server can have and especially the number of parallel scripts running.

I believe the first step would be to continue what Gesior had already started with binary saves, and then think about optimizing the ways and times the game saves.
 