Night Wolf
I don't bite.
- Joined
- Feb 10, 2008
- Messages
- 582
- Solutions
- 8
- Reaction score
- 929
- Location
- Spain
- GitHub
- andersonfaaria
Dear all,
I would like to show off a glimpse of what will be the future of pixel art:
All the swords above were generated by a generative adversarial network (GAN) called StyleGAN2, recently released by NVIDIA (article, github). I basically extracted some of the Tibia sprites and trained the network on them. You can see how the learning progressed at each iteration in this dope gif:
Below I'll walk through a few examples (the first 16 iterations):
Initially it was basically drawing random noise.
First it figured out that the background was black,
then, for some reason, it produced this very strange shape
and HEAVILY insisted on it.
Then the first swords came out.
By iteration 9 they were still very 'conceptual', but we could see them improving a lot. Five hours had passed.
By iteration 12 they were already beginning to feel like real swords; at 32x32 they look very similar to the real Tibia ones, but at 64x64 we can still see some blur:
This is the 16th iteration; 8 hours have passed and the results already look pretty realistic at 32x32.
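For reference, the training run described above boils down to two commands. This is a sketch assuming the official NVlabs/stylegan2 repository layout; the folder names and the kimg budget are illustrative, not the exact values I used:

```shell
# Pack the extracted sprite PNGs into the TFRecords layout the repo expects
# (sprites/swords is a hypothetical folder of extracted sprites):
python dataset_tool.py create_from_images datasets/swords sprites/swords

# Train with the standard config-f on a single GPU:
python run_training.py --num-gpus=1 --data-dir=datasets \
    --config=config-f --dataset=swords --total-kimg=2000
```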
So far, here's what I have done and used:
- Open Tibia Library to extract all the sprites from Tibia.
- Some Lua code to parse weapons.xml and extract item IDs by weapon type (swords, axes, clubs).
- A Python script to move the sprite files whose names match the IDs extracted in the previous step, putting all weapons into their own folders.
- Initially I thought it would be nice to use Waifu2x to enlarge the sprites to 128x128 in order to train with more degrees of freedom, but it turned out I was wrong: even with heavy denoising the images still have some blur that gets 'learned' by the GAN and carried into the results. I'm going to train for one more day and then train at 32x32 to compare. This is the result of upscaling with Waifu2x to 128x128 and then training on those images.
- I had to adapt StyleGAN2 to train at 128x128; it was originally built for 512x512 and 1024x1024.
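The parse-then-move steps above can be sketched as a single Python script instead of Lua plus a separate mover. The weapons.xml structure assumed here (`id` and `type` attributes on each weapon node) is a guess; adjust the tag and attribute names to match your server's actual file:

```python
# Sketch: sort extracted sprite files (named <id>.png) into one folder per
# weapon type, driven by weapons.xml. The XML layout is an assumption.
import os
import shutil
import xml.etree.ElementTree as ET

def sort_sprites_by_type(weapons_xml, sprite_dir, out_dir):
    """Move sprite files named <id>.png into out_dir/<weapon type>/."""
    tree = ET.parse(weapons_xml)
    for node in tree.getroot():
        item_id = node.get("id")
        wtype = node.get("type", "unknown")
        src = os.path.join(sprite_dir, f"{item_id}.png")
        if not os.path.exists(src):
            continue  # this item has no extracted sprite
        dest_dir = os.path.join(out_dir, wtype)
        os.makedirs(dest_dir, exist_ok=True)
        shutil.move(src, os.path.join(dest_dir, f"{item_id}.png"))
```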
TO DO's:
[ ] Test whether it's more effective to train weapon types separately or all together. We could end up with hammers made of swords, but who cares?
[ ] Insert some sprites from the forum to push the results beyond the Tibia style.
[ ] Once the training is finished, provide the files so people can just run them to generate infinite sprites in Tibia style.
Weapons first, then equipment, outfits, borders, walls, floors. The sky is the limit!
[ ] There's a very recent article (published around a month ago) about a technique called 'GANSpace' that attaches to GANs and lets you control their outputs. I want to try it out and potentially make a website (if it works for me) where you can move sliders to create your own custom sprite: things like type, element, color and so on. @Gesior.pl, perhaps we can add this to your current Open Tibia Library page?
[ ] Find a quick way to restore the alpha compositing (for transparent backgrounds); otherwise I'll need to retrain on a magenta background.
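The magenta-keying idea from that last TO-DO can be sketched without any imaging library by operating on a flat RGBA byte buffer (with Pillow you would wrap this around `Image.tobytes()`/`Image.frombytes()`). The tolerance is there because a GAN tends to render near-magenta rather than the exact (255, 0, 255) key colour; the threshold value is a guess:

```python
# Sketch: make (near-)magenta background pixels transparent by zeroing
# their alpha channel in a flat RGBA byte buffer.
def key_out_magenta(rgba, tolerance=16):
    """Return a new RGBA bytes buffer with near-magenta pixels made transparent."""
    out = bytearray(rgba)
    for i in range(0, len(out), 4):
        r, g, b = out[i], out[i + 1], out[i + 2]
        if (abs(r - 255) <= tolerance
                and g <= tolerance
                and abs(b - 255) <= tolerance):
            out[i + 3] = 0  # fully transparent
    return bytes(out)
```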
Tell me what you thought of it!
Attachments
- 1590472459986.png (889.6 KB)