
Theorizing a Modern Engine Design

@Yamaken in a few days from now I will start an open source project in Java which will expose a REST API through which users will be able to update the TFS database. If one day TFS is able to make REST API calls, then this might become useful. I'll try to keep all communication async. Still thinking about the authorization method, but JWT is the best option I think.
Nice, this API could be used to build a new kind of acc. But does it need to be Java? Why not Node.js? Async seems very easy with Node.js.
 
Nice, this API could be used to build a new kind of acc. But does it need to be Java? Why not Node.js? Async seems very easy with Node.js.
That's precisely how I would initially scaffold it. But your end target should be the MariaDB Connector paired with libuv, communicating over Unix domain sockets for all sides of I/O, with TCP loopback as a last resort. Dead simple, pure C.

(NodeJS is literally just Google's ECMA JIT with libuv bindings.)
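To make that concrete, here's a minimal sketch of the transport layer only, assuming libuv 1.x; the socket path is a common MariaDB default, not something specified here, and a real integration would layer the MariaDB Connector's non-blocking API on top of this loop:

C++:
#include <uv.h>
#include <cstdio>

// Hypothetical example: this socket path is a typical MariaDB default.
static const char* kSocketPath = "/run/mysqld/mysqld.sock";

static void on_connect(uv_connect_t* req, int status) {
    if (status < 0) {
        std::fprintf(stderr, "connect failed: %s\n", uv_strerror(status));
        return;
    }
    std::printf("connected over a Unix domain socket\n");
    // From here you'd speak the MySQL wire protocol on req->handle,
    // or hand the loop over to the MariaDB Connector's non-blocking API.
    uv_close(reinterpret_cast<uv_handle_t*>(req->handle), nullptr);
}

int main() {
    uv_loop_t* loop = uv_default_loop();

    uv_pipe_t pipe;                // uv_pipe_t == Unix domain socket on POSIX
    uv_pipe_init(loop, &pipe, 0);  // 0: no handle passing over this pipe

    uv_connect_t req;
    uv_pipe_connect(&req, &pipe, kSocketPath, on_connect);

    return uv_run(loop, UV_RUN_DEFAULT);  // one event loop drives all I/O
}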

but not all game data needs to be in memory, like in TFS's market system
Actually, that's precisely what will happen, at least in any system designed to operate under modern production standards. It will just be the RDBMS's cache memory instead of the network daemon's. Raking leaves into the neighbor's yard.


In all honesty, the specifics of how the architecture flaw is fixed matter little, just as long as it's fixed. Hundreds of bugs die on the spot.
 
That's precisely how I would initially scaffold it. But your end target should be the MariaDB Connector paired with libuv, communicating over Unix domain sockets for all sides of I/O, with TCP loopback as a last resort. Dead simple, pure C.

(NodeJS is literally just Google's ECMA JIT with libuv bindings.)
I don't believe we need to use a language like C for such work, since latency here is not critical (the game engine will read or write data in an async fashion). A language that's easier to develop and deploy should be used here (btw, we could even use Lua for this kek). If we wanted such low latency, we should use scylladb/seastar (https://github.com/scylladb/seastar)
Actually, that's precisely what will happen, at least in any system designed to operate under modern production standards. It will just be the RDBMS's cache memory instead of the network daemon's. Raking leaves into the neighbor's yard.
Yeah, it's obvious that optimized databases use memory caches. I was talking about game engine memory. Some data MUST be in the engine's memory, and thus the engine holds the true data.
In all honesty, the specifics of how the architecture flaw is fixed matter little, just as long as it's fixed. Hundreds of bugs die on the spot.
I think if you are choosing an architecture, you need to know why you'd pick x instead of y.
 
Nice, this API could be used to build a new kind of acc. But does it need to be Java? Why not Node.js? Async seems very easy with Node.js.
The Spring framework for Java provides a lot of modules which make development easy, e.g. ORM, (reactive) database connectivity configured via a properties file, a built-in web server, security, and the two most important things: dependency injection and inversion of control. To run the app, all you need is a Java runtime environment.
Generally, developing web apps with Spring sometimes comes down to annotating a class, and you have a ready REST controller.
So it is easy for me, and I don't know JS but would like to learn it someday 😅
Anyway, it is not the subject of this thread, so if I have something solid, it will be posted on the proper forum.
 
since latency here is not critical

Not having to support NodeJS in Windows userland is what's critical. You're adding a lot of tech support complexity where none existed before. They were already expected to have a working C++ compiler, and the MySQL client libs, and libuv is smaller than Boost's build system documentation. The day I see a thread about node-gyp not working on Windows in support here is the day I hide that forum forever with a userscript.

t. someone who's so deep into NodeJS they port Bash scripts to Promises and child.exec just to pass the time, trying to save you from yourself.
 
Not having to support NodeJS in Windows userland is what's critical. You're adding a lot of tech support complexity where none existed before. They were already expected to have a working C++ compiler, and the MySQL client libs, and libuv is smaller than Boost's build system documentation. The day I see a thread about node-gyp not working on Windows in support here is the day I hide that forum forever with a userscript.

t. someone who's so deep into NodeJS they port Bash scripts to Promises and child.exec just to pass the time, trying to save you from yourself.
I understand, I also care about library dependencies. But I think caring too much about noobs being able to use this new engine is bad, since it pushes toward less complexity even where we need some to fix and improve certain things. Btw, if the library is supported by vcpkg, it becomes very easy to compile on Windows. So, if the path is to use that type of language, then I would pick C++, since there are a few good HTTP frameworks out there (like Boost.Beast? which comes with Boost, which is a common dependency and widely used).
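For a sense of how small that gets, here's a minimal synchronous sketch with Boost.Beast; the address, port, and response body are placeholders, and a production server would use the async API instead:

C++:
#include <boost/asio.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>

namespace net   = boost::asio;
namespace beast = boost::beast;
namespace http  = beast::http;
using tcp = net::ip::tcp;

int main() {
    net::io_context ioc;
    tcp::acceptor acceptor{ioc, {net::ip::make_address("127.0.0.1"), 8080}};
    for (;;) {
        tcp::socket socket{ioc};
        acceptor.accept(socket);                 // one request per connection

        beast::flat_buffer buffer;
        http::request<http::string_body> req;
        http::read(socket, buffer, req);         // parse the HTTP request

        http::response<http::string_body> res{http::status::ok, req.version()};
        res.set(http::field::content_type, "application/json");
        res.body() = R"({"status":"ok"})";       // placeholder payload
        res.prepare_payload();                   // sets Content-Length
        http::write(socket, res);
    }
}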
 
libuv can literally be added to the project as source, bro, and it only needs headers that should already be present on the OS.



It's on par with compiling LuaJIT or Physfs.

Bash:
$ make
GNU Make wrapper by Xaekai
CC src/unix/libuv_la-async.lo
CC src/unix/libuv_la-getnameinfo.lo
CC src/unix/libuv_la-dl.lo
CC src/unix/libuv_la-getaddrinfo.lo
CC src/unix/libuv_la-core.lo
CC src/unix/libuv_la-loop-watcher.lo
CC src/unix/libuv_la-pipe.lo
CC src/unix/libuv_la-fs.lo
CC src/unix/libuv_la-loop.lo
CC src/unix/libuv_la-poll.lo
CC src/unix/libuv_la-process.lo
CC src/unix/libuv_la-random-devurandom.lo
CC src/unix/libuv_la-signal.lo
CC src/unix/libuv_la-stream.lo
CC src/unix/libuv_la-tcp.lo
CC src/unix/libuv_la-thread.lo
CC src/unix/libuv_la-tty.lo
CC src/unix/libuv_la-udp.lo
CC src/unix/libuv_la-linux-core.lo
CC src/unix/libuv_la-linux-inotify.lo
CC src/unix/libuv_la-linux-syscalls.lo
CC src/unix/libuv_la-procfs-exepath.lo
CC src/unix/libuv_la-proctitle.lo
CC src/unix/libuv_la-random-getrandom.lo
CC src/unix/libuv_la-random-sysctl-linux.lo
CC src/unix/libuv_la-sysinfo-loadavg.lo
CC src/libuv_la-fs-poll.lo
CC src/libuv_la-idna.lo
CC src/libuv_la-inet.lo
CC src/libuv_la-random.lo
CC src/libuv_la-strscpy.lo
CC src/libuv_la-threadpool.lo
CC src/libuv_la-timer.lo
CC src/libuv_la-uv-data-getter-setters.lo
CC src/libuv_la-uv-common.lo
CC src/libuv_la-version.lo
CCLD libuv.la
 
Ok guys, let’s actually do something. I know OP didn’t want us to write any code, but we can do it anyway.

So here’s my proposal for a new map format. (Credits to @jo3bingham for the idea of dividing map by areas. My format is based on it, but goes a little further.)

First of all, why? I know many of you already know why we need a new format, but I’ll quickly summarize it.

OTBM:
+fast
+supported everywhere
-binary file = not git friendly

Cipsoft format:
+many small text files = git friendly
-slow
-divided into sectors, which doesn't really make sense (you can see in the git client what file changed, but you don't know what part of the map it is)

So this leads us to a format that can eliminate all these disadvantages, but keep all the advantages (if combined with some other tools). Now, here's how it can look:

A map is a directory, just like in the Cipsoft format. But inside it, there can be other directories, and inside these directories another bunch of directories, and so on, and so on. Every directory represents one area (I'm not sure if "area" is the best name here, maybe it can be "region" or something else) on the map. An area is just a part of the map, which can be marked in the map editor (just like houses now). And these areas can be nested infinitely (well, almost, because the minimum size of an area is 1 sqm).

For example, let’s say we have a real/global Cipsoft map. In map editor, we can mark areas like: Rookgaard, Thais, Carlin, … . And then we can mark subareas, for example in Thais area we can mark: city, Mount Sternum, Fibula, … . If we save this map, its directory will look like this:

[Screenshot: the resulting map directory tree]
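Reconstructing the tree from the areas named above (the screenshot showed this layout):

Code:
map/
├── map.area
├── carlin/
│   └── map.area
├── rookgaard/
│   └── map.area
└── thais/
    ├── map.area
    ├── city/
    │   └── map.area
    ├── fibula/
    │   └── map.area
    └── mount-sternum/
        └── map.area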

And now, if you change something on Fibula, save the map and open a git client, everything will be clear as soon as you look at the file path: map/thais/fibula/map.area.

There’s one more thing to explain here: what are these other map.area files for? They basically contain all sqms that don’t match anywhere else. So if an sqm doesn’t belong to any area, it’ll be placed in map/map.area file. But if an sqm belongs to the Thais area, but doesn’t belong to any Thais subarea, it’ll be placed in map/thais/map.area. What map.area file looks like? Probably similar to Cipsoft .sec files, one line for one sqm. The exact syntax is yet to be determined.

This format already eliminates two of the three disadvantages of the current formats, but it's still slow. That's why we need another binary format, and a "compiler". So just as we compile the code, we'll compile the map directory into a binary file. There are (at least) two ways of doing it, and our game server can quite easily support both.
  1. We just start the server, and it’ll compile the map before loading it. This may look like no speed change, but I’ll explain it later.
  2. We use a CI/CD pipeline, which compiles both server and map for us and deploys only binaries. Then the server loads this binary at startup.
How can the server handle both cases easily? It checks whether the map directory exists AND whether the map binary exists. Now, there are 4 possibilities:
  1. Both don’t exist - server prints an error and exits
  2. Only binary - server loads binary
  3. Only directory - server compiles it and loads newly created binary
  4. Both exist - the server calculates the directory checksum (I googled it; it should be possible to do based on all the files' checksums) and compares it with the checksum inside the binary (the map compiler must calculate it and save it in the binary during compilation to make this work); if it's the same, the server loads the old binary, if not, it compiles the directory and loads a new binary
So in the first case, without CI/CD, it's quite fast, because the server only recompiles the map if it changed, and nobody changes the server map every day. But it still has to calculate checksums every time it starts. Maybe there's a better way to do it? Checking modification times? Or maybe we can use git hooks to check checksums and recompile the map only when some files changed?
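Here's a sketch of that startup decision; all four helper functions are hypothetical placeholders, not existing TFS code:

C++:
#include <cstdint>
#include <cstdio>
#include <filesystem>

namespace fs = std::filesystem;

// Hypothetical helpers, declared only to show the control flow.
bool loadBinaryMap(const fs::path& bin);
bool compileMapDirectory(const fs::path& dir, const fs::path& bin);
uint64_t checksumDirectory(const fs::path& dir);
uint64_t checksumStoredInBinary(const fs::path& bin);

bool loadMap(const fs::path& dir, const fs::path& bin) {
    const bool haveDir = fs::exists(dir);
    const bool haveBin = fs::exists(bin);

    if (!haveDir && !haveBin) {            // case 1: nothing to load
        std::fprintf(stderr, "error: no map directory or binary found\n");
        return false;
    }
    if (!haveDir) {                        // case 2: only the binary
        return loadBinaryMap(bin);
    }
    if (!haveBin ||                        // case 3: only the directory
        checksumDirectory(dir) != checksumStoredInBinary(bin)) {
        // case 4, stale: directory changed since the binary was compiled
        if (!compileMapDirectory(dir, bin)) {
            return false;
        }
    }
    return loadBinaryMap(bin);             // case 4, fresh: reuse old binary
}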

Now, what about spawns and houses? I personally think they should be included inside the map directory, because they're just a part of the map. I think they can both use the same area system, so we can pair every map.area file with corresponding spawns and houses files. But we have to assign each house to a city, so maybe a solution to this is to allow adding some parameters to areas? This way you can create a Thais city in the map editor and assign it to the "thais" area. Now all houses inside this area will be assigned to the Thais city automatically. There are two ways of doing it: either adding a new file in each area directory containing some parameters, or adding an area name to each city in a cities config file. Where will the cities data be stored? Probably the best option is to add a global map config file in the root directory. It can use json/yml/toml format and contain data like: author, version, size, client version, cities, and anything else we need.
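Purely as an illustration, the global config could look something like this in JSON; every field name and value here is invented, only the list of contents comes from the paragraph above:

JSON:
{
    "author": "example mapper",
    "version": 1,
    "clientVersion": 1098,
    "size": { "width": 2048, "height": 2048, "floors": 16 },
    "cities": [
        { "name": "Thais", "area": "thais" },
        { "name": "Carlin", "area": "carlin" }
    ]
}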
 
Well, while git is a poor repo for binaries, addon solutions for git-aware binary storage of assets do exist. And that leaves us with the real problem: maps are "diff" unfriendly. And yes, diff'able chunks would be lovely. The classic example is a team member placing valuable items in a hidden location, then harvesting them and selling them for profit on the side. Perhaps they're even very clever about how they do it, so even if you check for obvious inclusions of such things, they use a non-obvious method. Perhaps a quest reward dialogue that only checks for a certain item in the player's inventory, takes it from the player, and gives them an item you'd have otherwise checked for.

However, the information density involved in a gameworld map of this nature does not lend itself to pure textual representation. Simply put, even if you stored it like this, the diffs would be massive, and you'd quickly move the map to its own repo just to not have to deal with it alongside your other code.

Also, the usage of git workflow hooks is not really central to mapping. I mean, look at the state of the default TFS map. And let's be honest, good mappers are artist types, and artist types and the semi-rigid nature of collaborative source code organization are not natural allies. Or to put this more simply: for the end user, compiling the code doesn't necessarily involve git. You can just as easily download the tarball and compile it without even having git installed. Map handling should be no different. For this reason the involvement of CI is superfluous here.

Given my recent experiences with the state of RME maintenance, I believe tight coupling, such as sharing new map-format loader code between that mapper and the gameworld daemon, to be a profoundly bad idea.

If I did code a map architecture like this in a new map editor, and even if it's representative of how the structure is actually expressed internally, I'd probably still make the serialized format the default, as a configgable, ofc. The server should expect to deal with the serialized format only; or, if it is to be aware of the exploded format, it should require the location of the binary executable of the mapper that made it (or one 100% compatible), and the mapper should provide a command-line mechanism to serialize the exploded format, preferably capable of doing so entirely sans support files, so only the binary need be present and could sit alongside the daemon. Most deployments are remotes, and people do not map from their server; they map on their workstation, PC, or laptop.

This provides a nice delineation of responsibility: the server code for supporting the exploded format only needs to be written once, and the only moving code would be the new binary format. And so no matter how elaborate the human-readable format becomes, changes to the server won't be necessary unless the binary serialization changes too.

If you want a mechanism so the server knows to re-request serialization, so a project can use the exploded form as its canonical representation, that's not unreasonable, but if I write such code there is only one horse I'd ride into that battle: xxHash.
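A sketch of what that directory checksum could look like with xxHash's streaming API, assuming the xxhash C library; file contents are read in sorted path order so the digest is stable across platforms:

C++:
#include <xxhash.h>     // from the xxHash library; link with -lxxhash

#include <algorithm>
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <vector>

namespace fs = std::filesystem;

// Fold every regular file under the map directory into one 64-bit digest.
// Illustrative only: a real implementation would likely also hash the
// relative paths, so renames change the digest too.
uint64_t hashMapDirectory(const fs::path& root) {
    std::vector<fs::path> files;
    for (const auto& entry : fs::recursive_directory_iterator(root)) {
        if (entry.is_regular_file()) {
            files.push_back(entry.path());
        }
    }
    std::sort(files.begin(), files.end());      // stable, platform-independent order

    XXH64_state_t* state = XXH64_createState();
    XXH64_reset(state, 0);                      // seed 0
    std::vector<char> buf(1 << 16);
    for (const auto& path : files) {
        std::ifstream in(path, std::ios::binary);
        while (in.read(buf.data(), buf.size()) || in.gcount() > 0) {
            XXH64_update(state, buf.data(), static_cast<size_t>(in.gcount()));
        }
    }
    const uint64_t digest = XXH64_digest(state);
    XXH64_freeState(state);
    return digest;
}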

Also, human-readable serializations should absolutely be HJSON, or GTFO. I will fight the battle of value marshalling if it means a happy syntax. YAML is absolutely not even on the table. It is a blight upon the code landscape already. Proliferating it is sin.
 