
Swords artificially generated (GAN)

Hi everyone,

Latest updates:
  • I've found out that it's better to train the network to be an expert in one thing instead of training it to draw both swords and other weapons. We get fewer glitchy sprites and the results become more reasonable, even though they are more repetitive because of the lack of data.
  • I've added some custom free-for-use sprites and others that people donated to this project. The only thing is that I added them on top of the current learned file, so the network is still adapting to them. The results seem pretty good so far.
  • I've started training from scratch at 32x32 using a pink background. In my head it is easier to recover the alpha channel (transparency) later by detecting the magenta pixels than to identify the object against a black background and cut it out with 1 pixel of distance (AA). A quick sketch of that idea follows this list.
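Just to illustrate the keying idea, a rough sketch (not my actual script) that turns a magenta-background sprite into an RGBA one; file names are placeholders and it only catches pure magenta:

```python
# Minimal sketch: key pure magenta (255, 0, 255) out of a 32x32 sprite.
from PIL import Image

def magenta_to_alpha(path_in: str, path_out: str) -> None:
    img = Image.open(path_in).convert("RGBA")
    pixels = [
        (r, g, b, 0) if (r, g, b) == (255, 0, 255) else (r, g, b, a)
        for (r, g, b, a) in img.getdata()
    ]
    img.putdata(pixels)
    img.save(path_out)

magenta_to_alpha("sword.png", "sword_alpha.png")  # placeholder file names
```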


Just to give a proper explanation of how this works, because many people are sending messages on my Discord fearing a new technology they just don't understand:
- GANs are basically an algorithm that takes several days to train and does a simple thing: it receives a random number and spits out something resembling the files it was trained on. The better the results (the more they look like the real data), the better the network is at generating new content based on what it has seen. A minimal sampling sketch in code follows these points.
So if you're a spriter and you feel that somehow I will mess up your work, you're wrong. This will mainly be used to generate new concepts and 'base' sprites that are copyright free. If someone wants to hire your services, it's not a GAN that will be in the way, as it only does things that are somewhat similar to what it has seen before. It won't draw Naruto/Pokémon sprites since it has only seen swords from TibiaWiki.
- After training with swords, I'll separate the pickle file (basically where it stores the 'learning') and start training with other sprites to understand how the transition works (from swords to clubs, for example). I also want to know how different converting a sword network to clubs is compared to training on clubs from scratch.
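For reference, this is roughly what "receive a random number, spit out a sprite" looks like with the official StyleGAN2 implementation; it's a sketch under assumptions (the pickle name is a placeholder and exact call signatures vary between repo versions):

```python
# Sketch of sampling from a trained StyleGAN2 pickle (NVlabs TensorFlow repo).
import pickle
import numpy as np
from PIL import Image
import dnnlib.tflib as tflib

tflib.init_tf()
with open("network-snapshot-swords.pkl", "rb") as f:   # hypothetical snapshot name
    _G, _D, Gs = pickle.load(f)                         # Gs = averaged generator

z = np.random.RandomState(16375).randn(1, Gs.input_shape[1])   # seed -> latent
fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
img = Gs.run(z, None, truncation_psi=0.7, randomize_noise=True,
             output_transform=fmt)[0]                   # latent -> sprite
Image.fromarray(img).save("generated_sword.png")
```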

I'll keep you all updated on the progress. Below is the last generated batch. It has mastered a few Tibia sprites already, so I'm pretty close to asking it to draw random ones and checking the results.
1590785156934.png

Unfortunately it also returns a lot of noise in the image (from magenta variations to tones of pink inside the blade). I'm still searching for a quick way to clean those images and avoid having to do mass manual edits on the resulting sprites. It is possible that more days of training will correct that noise, but that is just theoretical.
 
Seems cool, never heard of this before. I wonder what it can do with custom sprites like the ones from OTSP.
 
So hi everyone, I have good (and bad) news...

First the good: I've reached a level where the training is practically finished. Training more would only cause overfitting, so from now on it is essentially done and it can generate a practically infinite amount of swords RANDOMLY.

The "randomly" part is important because it receives a 'random' seed and spits out something that looks like what it was trained on, as I mentioned before.
So I was trying to understand a little bit more about how to gain some control over the latent vectors, and I came across a thing called 'GANSpace', which is basically Principal Component Analysis (PCA) of the vectors that the GAN produces.

To be short: PCA basically reduces the dimensionality of the vectors to find the components that matter most. Imagine having a thousand variables, but in fact only 15 of them do something noticeable in the system, so you use PCA to identify which those 15 are.
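A toy illustration of that idea (purely illustrative, not the GANSpace code; in the real setup the PCA is fit on latents sampled from the trained generator):

```python
# Toy sketch of the PCA idea behind GANSpace: sample many latent vectors,
# fit PCA, keep the few components that explain most of the variance, and
# "edit" by pushing a latent along one of those components.
import numpy as np
from sklearn.decomposition import PCA

latents = np.random.randn(10_000, 512)    # stand-in for latents sampled from the GAN
pca = PCA(n_components=15)
pca.fit(latents)
print(pca.explained_variance_ratio_)      # how much each component actually matters

w = np.random.randn(512)                  # some latent we want to edit
w_edited = w + 3.0 * pca.components_[0]   # slide along the first principal direction
```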

The examples we found out there are very promising:

But in reality, playing with the interpretable controls found by PCA is way harder than I first thought, and controlling it might not be feasible for now. I reached out to the author of the GANSpace paper to make sure I did everything as the paper described, but most certainly the issue is my limited dataset.

For example, I was able to play a little bit with some variations, but the layers and components end up being responsible for lots of things at once instead of being as separate as they were in the video. In reality I have something very close to this example here:

So yes, you can control the sprite, but you also ruin its proportions and harmony in the process, which in our case is crucial:

Using seed 16375 we see this sword; by messing a little bit with the 'truncation' we can make it become more or less detailed.
unknown.png

Decreasing truncation to 0.3 shows us a simpler blade.
unknown2.png

While increasing the truncation increases the overall level of detail in the sprite.
unknown1.png

Messing with distance shows very little variation, but as it comes closer to the limit (-10) the sprite becomes more pink.
unknown3.png

Messing with scale increases the glow on our sprite; pushing it to the limit actually turns the sprite into a supernova.
unknown4.png


While it was cool to play with, this showed me that finding a good seed and valid variations for practical purposes is actually very hard. It just seems easier to ask the generator for random values and evaluate them with other metrics instead of trying to manipulate the latent vectors.
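For the curious, the sliders above correspond roughly to the following (standard formulas, not my exact code): 'truncation' interpolates the latent toward the average latent, and a GANSpace-style edit pushes the latent along one PCA component.

```python
# Rough sketch of the knobs above. Low psi = closer to the average latent =
# simpler, "safer" sprite; high psi = more detail but more artifacts.
import numpy as np

def truncate(w: np.ndarray, w_avg: np.ndarray, psi: float) -> np.ndarray:
    return w_avg + psi * (w - w_avg)

def edit(w: np.ndarray, component: np.ndarray, scale: float) -> np.ndarray:
    return w + scale * component
```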


So in terms of my initial TO DO's, this is what we have:

TO DO's:
[X] Test if it's more effective to train weapons separately or all together. We could potentially end up with hammers made of swords, but who cares?
Answer: It is not. The best approach is to train each of them separately and transfer the learning.
[X] Insert some sprites from the forum to make our results go beyond the Tibia style.
Answer: It did improve a lot and made the generator able to accept some cases that do not follow the Tibia style, which I see as an improvement.
[ ] Once I have finalized the learning, provide the files so people can just run them to generate infinite sprites in Tibia style.
At first weapons, but then equipment, outfits, borders, walls, floors. The sky is the limit!
Answer: I'm still evaluating whether it makes sense to share files that many of you won't be able to use, given how difficult dealing with and training this is, or whether I should just output a million samples and put them in a repo.
[X] There's a very recent article (it was published around a month ago) about a neural network called 'GANSpace' that 'attaches' to GANs and allows you to control the outputs. I want to try that out and potentially make a website (if it works for me) where you can use sliders to create your own custom sprite. Things like type, element, color and so on. @Gesior.pl, perhaps we can insert this in your current open tibia library page?
Answer: Unfortunately, due to the lack of data we probably won't be able to benefit much from GANSpace for now. If somehow I find an easy way to 'correct' the generated samples and use them to train again (with way more data this time), perhaps it can be an option for the future. GANSpace might be an option for spriters who want to play a little bit and check some dope new 'concepts' for inspiration, but I don't see it as an end product for now.
[ ] Find a quick way to get the alpha compositing (for a transparent background) back, otherwise I'll need to retrain these with a magenta background.
Answer: Unfortunately my tests with Euclidean distance and noise reduction haven't been able to convert all the pixels that are not so near the magenta (255, 0, 255) tone. I'll keep checking, but we probably won't escape some manual rework here (a sketch of this distance-based keying follows the list).
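For reference, the distance-based keying I'm testing looks roughly like this (a sketch, with a made-up threshold; as said above, near-magenta noise outside the threshold survives and still needs manual touch-up):

```python
# Sketch of the Euclidean-distance cleanup mentioned above (threshold is a guess):
# any pixel close enough to magenta (255, 0, 255) becomes fully transparent.
import numpy as np
from PIL import Image

def key_out_magenta(path_in: str, path_out: str, threshold: float = 60.0) -> None:
    rgba = np.array(Image.open(path_in).convert("RGBA"), dtype=np.float32)
    dist = np.linalg.norm(rgba[..., :3] - np.array([255.0, 0.0, 255.0]), axis=-1)
    rgba[dist < threshold, 3] = 0          # near-magenta -> alpha 0
    Image.fromarray(rgba.astype(np.uint8)).save(path_out)

key_out_magenta("generated_sword.png", "cleaned_sword.png")   # placeholder names
```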

I'll keep studying and keep running tests; I will probably add clubs now to see the transfer learning in action. I have barely been sleeping lately, dividing my day between my full-time job and these tests and readings (lots of papers, lots of theory). Worst case scenario we will end up with some inspirational concepts and lots of half-finished sprites, but it's good that I'm raising awareness of these new trends, because in a few years this will probably be more than a reality for art generation.

Bonus:

One visual way to check whether the generator has actually learned the latent space and is capable of creating new sprites, rather than just mixing existing styles, is something called a 'latent walk': we send some random seeds and ask the generator to draw lots of examples using those seeds, then create a video to see the transition from one to another. Good learning produces a smoother transition, since the algorithm is able to fill the intermediate frames with newly generated samples.
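In code, a latent walk is basically this (a generic sketch; `generate(z)` is a placeholder for whatever call renders a sprite from a latent with the trained network):

```python
# Generic latent walk: linearly interpolate between two random latents and
# render every intermediate step, then stitch the frames into a video/GIF.
import numpy as np

def latent_walk(generate, dim=512, steps=60, seed_a=1, seed_b=2):
    z_a = np.random.RandomState(seed_a).randn(1, dim)
    z_b = np.random.RandomState(seed_b).randn(1, dim)
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z_a + t * z_b      # straight line in latent space
        frames.append(generate(z))         # one sprite per intermediate latent
    return frames
```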
 
@Night Wolf Is it possible to run this based on different types of sprites? Like different shape outlines (empty inside) for handgrips and blades, together with different filling materials (some silver/copper/gray/blue/gold/wood)? Maybe that would be better and you would be able to generate a random shape and fill it with a random material.
 
We need to train with all of those variations already created; you can't just pass a list of filling materials and blank shapes, because then it will be trained to generate more filling materials and blank shapes. It draws variations of what it is trained on, and because there isn't a huge dataset of sprites of a particular type (swords in this case), we end up with several gaps: the distance between one type and another is just too big for the algorithm to learn to create everything in between.
 
I've released the weapon generator, you can find it in downloads > tools:
 
Someone make an entire server with this. Ground tiles, walls, monsters, even gold coins.
 
I don't know if anyone noticed, but in the last comment I added before releasing the weapon sprite generator I mentioned that GANSpace was not viable as of then:

Unfortunately, due to the lack of data we probably won't be able to benefit much from GANSpace for now. If somehow I find an easy way to 'correct' the generated samples and use them to train again (with way more data this time), perhaps it can be an option for the future. GANSpace might be an option for spriters who want to play a little bit and check some dope new 'concepts' for inspiration, but I don't see it as an end product for now.

The Weapon Sprite Generator uses GANSpace, so that contradicts what I wrote before...

For anyone willing to try this in the future, I'm writing a full end-to-end guide on how to get started, understand the concepts and tune parameters for training, generating and setting up your own GANSpace. But before that, I would like to highlight a few things I discovered over the past months:

As I mentioned before, GANs are an active area of research, so there's basically a new paper with important discoveries every week. In the last 2 months we got:
  • A new paper about GANSpace that made me understand better how to benefit from it; this was the main game changer that allowed me to use it on the weapon sprites dataset.
  • Several improvements and findings in the stylegan2 architecture. For a while I was tracking those changes and implementing them all in a single GitHub repo to have ALL the benefits and possibilities (usually each paper has a completely different implementation, which makes it VERY hard to merge all the new discoveries into a single, ultimate repo). Unfortunately that repository is no longer viable, as I broke it to the point where I got too frustrated to go on.
  • Amazing new tools to play with while training (feature quantization, contrastive learning (loss regularization), self-attention, data augmentation, non-constant 4x4 blocking, the possibility to train on transparent images, the possibility to use half precision (fp16 instead of fp32)).
PS: I basically broke my old repo while trying to implement the data augmentation paper (the general idea is sketched right below).
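To give an idea of what the data-augmentation trick does, here's a toy sketch of the general idea from that family of papers (DiffAugment/ADA style, not either paper's actual code): the discriminator only ever sees augmented images, real and fake alike, so it can't simply memorize a tiny dataset.

```python
# Toy sketch: apply random, differentiable augmentations to BOTH real and
# generated images before the discriminator sees them.
import torch
import torch.nn.functional as F

def augment(images: torch.Tensor) -> torch.Tensor:
    # random brightness shift + random horizontal flip, both differentiable
    images = images + (torch.rand(images.size(0), 1, 1, 1, device=images.device) - 0.5)
    if torch.rand(()) < 0.5:
        images = torch.flip(images, dims=[3])
    return images

def d_loss(D, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    # non-saturating logistic loss; D only ever sees augmented images
    return F.softplus(D(augment(fake))).mean() + F.softplus(-D(augment(real))).mean()
```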

There are basically 10 new papers I could list here with amazing discoveries that simply change everything for our particular case: a very small dataset, transparent background, lots of repetitive images and different kinds of sprites.

I'm currently making some changes to see how the new tools affect training results; once I have a way to measure which parameters are better for our dataset, I'll see which implementation gets us the most benefit. There are also two very important points:
  • To perform projection (creating an image that is not in the dataset out of images from the dataset) we need to have the GAN architecture in fp16.
  • To use GANSpace we need to either train a PyTorch model directly or convert the weights from the TensorFlow model to a PyTorch model.

So after this assessment, I'll perhaps switch from TensorFlow stylegan2 to a PyTorch stylegan2 that already supports half precision (fp16 directly).
For us this means: training directly on transparent data (raw sprites), quicker training, better results and less chance of "overfitting" or model collapse thanks to data augmentation. Also direct conversion for GANSpace and projection (the perceptual similarity metric from Zhang).
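To illustrate what the half-precision part buys, here's a generic PyTorch mixed-precision training step using torch.cuda.amp (the general mechanism, not the stylegan2 repo's own training loop):

```python
# Generic fp16 (mixed precision) training step with torch.cuda.amp.
import torch

scaler = torch.cuda.amp.GradScaler()

def training_step(model, batch, optimizer, loss_fn):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():        # forward pass runs largely in fp16
        loss = loss_fn(model(batch))
    scaler.scale(loss).backward()          # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)                 # unscale gradients, then step
    scaler.update()
    return loss.item()
```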

I swear, this is a snapshot of 2h of training, already with a transparent background. The results are already way better compared to the 6h of training in the previous versions.
fake-0150.png
 
I would love to see this same process on armors. What AI conceives is so interesting!
Please keep this up. super awesome
:edit, my computer hid the message above this from me!!!AMAZING :D
 
I will probably not only do this for armors, helmets, boots and legs, but will also re-do it for weapons (swords, axes, clubs, spears, bows/crossbows). Now we have the right tools to train this in a timely manner with outstanding results. Give me just a few days to figure out which model works best and I'll start doing one kind at a time.
 
definitely don't rush it! i am just enjoying what you are doing and hope to get to use some in time :3 hahahha
 
Meanwhile you can play with the last version I released to generate new weapons (club, spear, axe, sword). But those will probably need you to clean the pink background before they become end products. Hopefully with these new enhancements the next version won't require any treatment before becoming end-product sprites.

 
Amazing stuff, loving it, keep it up with this nice project.
 
Great work 😃 It's really nice to see how you manage to keep improving the results. These latest armors look really promising!
 
This is insane tbh - looks cool. Looking at the logic, would it be possible to adapt this somehow to link items together, like... sprites larger than 32x32, i.e. environmental objects that are mapped in 4 parts for example? Or even more complex, outfit variations so it would cover each direction? Because then that would be unreal.
 
With some manual work and cherry-picking from the generated samples you can try to match generated versions of each side and build an outfit, for example. But that involves some work.

As for sprites larger than 32x32, this was designed at first to work with 1024x1024, so it's safe to assume it can handle anything up to that.
 
I think someone already created something similar, but it was more about terrain generation. Applying GANs to map generation could lead to outstanding results: not only generating terrain, but also the themes of details, vegetation and house patterns on top of it.
 