1 and 2: It's weird to see it written like this. This text seems to be taken from a work I did in university years ago, so I'm curious about the reference hahaha.
3. Personally, I don't think this would work; you'd end up spending more time analysing and confirming reports than you'd save.
It would only be useful if you had a quick way to validate all players' data in bulk and confirm whether an account is legit; even then, you'd likely get a lot of fake or troll reports.
4 is the same point I discussed above.
5. This doesn't make any sense unless you're trying to restrict multi-accounting.
Now, about [2] and other comments, like what @Evil Puncker rightly said about using AI for detection: this is a gray area, and I'll try to explain why.
To begin with, there is little to no literature or practical examples of this being applied in a real-life scenario. When I built the antibot project for my private server years ago, I could only find three references, and none of them were close to what I was trying to do.
You can check the article (in PT-BR) here:
antibot/Article.pdf at master · andersonfaaria/antibot (https://github.com/andersonfaaria/antibot/blob/master/Article.pdf)
First of all, you need to understand that this is not an easy task: it involves identifying all types of bots, including assisted ones, and training algorithms to find specific patterns in how they behave so you can separate them from real players. On top of that, hundreds of millions of dollars have been spent trying to make games bot-safe, with the biggest examples being Valve and, more recently, Riot.
In other words, anyone who pulls this off wouldn't give hints about how they did it; instead, they'd build a company and sell it as a service (like other bot-detection companies do). It also depends heavily on the types of bots people use, what functions they perform, and which data you can collect to distinguish them.
Now, in my personal proof-of-concept project, I noticed that most bots didn't add "noise" to the intervals between their actions. A fixed interval can make the bot look human-like in terms of raw speed (time between actions), but it still stands out as a bot once you compare other layers of the timing, such as the mean and how tightly the samples cluster around it (the standard deviation).
View attachment 76441
Even so, adding enough randomness could be enough to make the actions less predictable and mask the pattern.
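To make the timing idea concrete, here is a minimal sketch of that kind of check. Everything here is illustrative and not from the antibot project itself: the function names, the coefficient-of-variation metric, and the 0.05 threshold are all my own assumptions for the example.

```python
import random
import statistics

def interval_features(timestamps):
    """Mean and standard deviation of the gaps between consecutive actions."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.mean(gaps), statistics.stdev(gaps)

def looks_scripted(timestamps, cv_threshold=0.05):
    """Flag actors whose timing is suspiciously regular.

    cv = stdev / mean (coefficient of variation): humans are noisy,
    naive bots are not. The 0.05 cutoff is illustrative, not tuned.
    """
    mean, stdev = interval_features(timestamps)
    return (stdev / mean) < cv_threshold

# Naive bot: one action every ~500 ms with only +/-5 ms of jitter.
t, bot = 0.0, [0.0]
for _ in range(200):
    t += 0.5 + random.uniform(-0.005, 0.005)
    bot.append(t)

# "Human-like" actor: same average speed, far more spread.
t, human = 0.0, [0.0]
for _ in range(200):
    t += random.uniform(0.2, 0.8)
    human.append(t)

print(looks_scripted(bot))    # tight timing -> flagged
print(looks_scripted(human))  # noisy timing -> not flagged
```

Note that both actors have the same average speed, so a check on raw speed alone can't tell them apart; only the second layer (the spread around the mean) separates them. This is also exactly why the randomness mentioned above works as a countermeasure: a bot that samples its delays from a wide enough distribution pushes its coefficient of variation into the human range.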
One might argue that in the future AI will be able to instantly spot bots and group them by the functions they are using in a given interval; however, as detection technology evolves, so does the technology to improve/create bots. You can extrapolate that until you literally have two artificial brains fighting to see which one wins (spoofer vs. detector), and even then the spoofer will always have a reward period until the detector catches up to it.