
OpenTibia Weapon Sprites Generator (GAN)

Night Wolf
Dear otland,

As some of you were following in my initial thread Swords Artificially Generated (GANs), the future is here!
Today I come to share a technology that was only recently released; the papers behind it were published less than two months ago!

I have put up this Colab notebook since I know not everyone has a good GPU available at home. With it, you'll be able to run everything in the cloud without paying anything.
Pixel Weapons Generator - Google Colaboratory

I hope this will be a game changer for people who always wanted custom sprites but didn't have the initial capital to invest. I also hope this tool can be used to improve initiatives like the OpenTibia Sprite Pack.

For now I'll only be releasing the Pixel Weapons Generator; it took me around a month and a half to finetune learning rates and hyperparameters, and to actually study A LOT of the theory behind it.
This doesn't generate finished weapons, but you can literally control the sliders to edit layouts and find cool models to start with. After finding one you like, you can try to improve it a bit by changing some of the layers and increasing the distance in certain layers, which updates many traits such as color, element (in some cases, limited to fire and ice) and geometric details (size, width, weight, hilt, rotation and so on). Due to the lack of data, not every change will return a good result, but with some manual intervention on the background and a little noise reduction it can actually be used.

A few examples generated:
[Example sprites attached: seed0018, seed0042, seed0045, seed0046, seed3875606, seed3876010, and three more.]

If you want to know more about GANs, check out this video from the Computerphile channel.

And also this video about the tool you're about to run, GANSpace. It will give you an idea of how to control the parameters you want.

If you have never used Google Colab, here are the first steps (I'll continue the instructions in a follow-up post because I've reached the limit of 10 images):

1) Make sure your runtime type is set to GPU. This won't run in CPU mode because of the CUDA and TensorFlow libraries.
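A quick way to confirm the GPU is actually active (a minimal sketch; the notebook itself may not include this cell):

```python
import tensorflow as tf

# Prints something like '/device:GPU:0'; an empty string means the runtime
# is still on CPU (fix it via Runtime -> Change runtime type -> GPU).
print(tf.test.gpu_device_name())
```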

2) Just go to each cell and press the play button in the corner. Make sure you wait for it to finish executing before moving on to the next one.

3) In order to run, it needs to download my repository and install it in your Google Drive. To do so, run this cell, click the link, sign in with your Google account, copy the authorization code, paste it into the input box that will appear and press Enter.
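This link-and-code dance is likely just the standard Colab Drive mount; a sketch of the equivalent call (the notebook's actual cell may differ):

```python
from google.colab import drive

# Opens an authorization link; after you sign in and paste the code into
# the input box, your Drive appears under /content/drive.
drive.mount('/content/drive')
```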

4) After running everything, you'll reach the "hardest" part, which is finding good configurations for your model.

The first knob is the component: in this code it is set to a random number between 0 and 20, but you can set it to a constant value by double-clicking the title of the cell (see the sketch below). The second comes when you run the UI: it starts at a random seed with truncation 0.7. Make sure to read all the text and instructions in the Colab; they will guide you throughout.
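A rough sketch of those two defaults, with variable names that are my assumptions rather than the notebook's exact code:

```python
import numpy as np

component = np.random.randint(0, 21)   # random GANSpace component in 0-20
# component = 7                        # ...or pin it to reproduce a layout

seed, truncation = 42, 0.7             # the UI starts from values like these
w = np.random.RandomState(seed).randn(1, 512)   # stand-in for mapping(z)
w_avg = np.zeros((1, 512))                      # stand-in for the model's average latent
w_trunc = w_avg + truncation * (w - w_avg)      # the truncation trick
```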

5) Play with seed and truncation first; they are the first controls you'll want to experiment with. Good truncation values are usually between 0.5 and 1.0. Increasing beyond 1.0 shouldn't change anything, but sometimes it does, depending on which component you're on.
Once you find a good model, you can change some of its details by choosing the layer you want to edit and increasing the distance (which is multiplied by scale). I usually start with scale 10.0 and slide the distance from -10 to +10 for each layer range (0-1, 1-2, 2-3, 3-4...) to understand what each one changes in the model. Once I have them mapped out (and named), I start changing what matters most to me.
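In GANSpace terms, that slider move amounts to shifting the per-layer latents along a principal component. A toy illustration with made-up shapes (not the tool's actual code):

```python
import numpy as np

num_layers, latent_dim = 14, 512
w = np.random.randn(num_layers, latent_dim)   # per-layer latents for one model
direction = np.random.randn(latent_dim)       # a PCA component found by GANSpace
direction /= np.linalg.norm(direction)

scale, distance = 10.0, -4.0                  # the two sliders from this step
start, end = 2, 3                             # edit layers 2-3 only
w[start:end + 1] += scale * distance * direction   # other layers stay untouched
```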

I hope we make good use of it. I'm still studying how this can be improved to achieve even better results.

All I ask is two things:
  • Make good use of it :)
  • If you find a way to massively remove the noise of the pink background, let me know (one starting idea is sketched below).
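As a starting point, a simple chroma-key cleanup pass might help; this is purely my suggestion, not part of the notebook, and the file name is just an example:

```python
from PIL import Image
import numpy as np

MAGENTA = np.array([255, 0, 255])   # OpenTibia's transparency color

def clean_background(path, tolerance=60):
    # Snap every pixel that is close to magenta back to the pure color,
    # which removes speckled pink noise around the sprite.
    img = np.array(Image.open(path).convert("RGB")).astype(int)
    dist = np.linalg.norm(img - MAGENTA, axis=-1)   # per-pixel distance
    img[dist < tolerance] = MAGENTA
    return Image.fromarray(img.astype(np.uint8))

clean_background("seed0042.png").save("seed0042_clean.png")
```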
 

First:

 
That's perfect. It can generate good bases to make different styles from these weapons, and with more variety it could generate totally new and different (and better) sprites, right? I'm curious to see more results. Thanks, Wolf, for this contribution. (I'm glad it doesn't cost $5k, btw.)
 
That's perfect. It can generate good bases to make different styles from these weapons, and with more variety it could generate totally new and different (and better) sprites, right? I'm curious to see more results. Thanks, Wolf, for this contribution. (I'm glad it doesn't cost $5k.)
Fortunately I have other incomes and don't rely on OpenTibia sales. Also, I used a lot of things that were available for free, so charging wouldn't be fair:
  • StyleGAN2: released by Nvidia
  • GANSpace: released by Erik in partnership with the Nvidia Labs team
  • Overall improvements to the StyleGAN2 network and datatypes: picked from the GitHub of an MIT master's student
  • Other extensions and the Colab version: picked from the GitHub of two artists who use GANs to create art. One of them also made a course specifically for StyleGAN that I watched on YouTube to learn more about how to train the networks.

My part was basically:
  • Training on the weapons file for over a month and guaranteeing the results were cool
  • Overall Colab changes (adjusting parameters, cherry-picking and improving some things) and some improvements to the UI
 
It's an awesome job you've done; I wonder if some day we will be able to do such things with creatures :) If you are accepting donations, I would be proud to donate to such a good project.
 
It's an awesome job you've done; I wonder if some day we will be able to do such things with creatures :) If you are accepting donations, I would be proud to donate to such a good project.
Thanks, but what I did was quite small in comparison to the original GANSpace and StyleGAN2 projects from Nvidia Labs.
I'll try to train with creatures later, but I can already foresee that it will be hard to find good models that are alike in all directions and movements.
Hopefully we can benefit from GANSpace to let us search for those variations.

For creatures it may be worth checking another technique called feature quantization: we would basically pass a "filter" to change the style of existing creatures rather than recreating them from scratch.

If you're interested in making a donation, you can redirect it to the OpentibiaBR Organization; I've been helping them for a while now and they are doing a really great job improving the client and server.
There's also @fabian766, who is working on an optimized version of TFS and a brand-new C++ client; you can donate to him using this link.
And last but not least there's @Mehah, who is doing an amazing job for the community to have an open-source OTClient alternative to OTCv8. Here's his donation page.

These guys definitely deserve more recognition and donations than I do.
 
Awesome stuff

If we could use this kind of technology to generate creatures, then the OT sprite pack would be completed in a blink; that's the major impediment right now. If you look at its current status, it's roughly 75% items, 20% ground/building/nature tiles and 5% creatures (to no one's surprise, as designing creature sprites with all the animations is a much more time-consuming and technical process). Or maybe in the same vein we could use machine learning to make working with creature sprites easier in some way (i.e. help with the animations)?

Anyway, I don't want to start an off-topic discussion. I'm just getting my feet wet with machine learning myself; what a magnificent (and quite frankly, scary) topic it is... keep up the great work!
 
Awesome stuff

If we could use this kind of technology to generate creatures, then the OT sprite pack would be completed in a blink; that's the major impediment right now. If you look at its current status, it's roughly 75% items, 20% ground/building/nature tiles and 5% creatures (to no one's surprise, as designing creature sprites with all the animations is a much more time-consuming and technical process). Or maybe in the same vein we could use machine learning to make working with creature sprites easier in some way (i.e. help with the animations)?

Anyway, I don't want to start an off-topic discussion. I'm just getting my feet wet with machine learning myself; what a magnificent (and quite frankly, scary) topic it is... keep up the great work!
Grounds and buildings are actually easier to train on than equipment, for example. Creatures will be challenging, but instead of using each image separately I'll try using a whole batch with all frames of the animations and rotations (a rough sketch of the idea below). I'll keep you all posted.
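One way that batching could look (my assumption of the approach, with hypothetical file names):

```python
from PIL import Image

def make_sheet(frame_paths, cols, tile=32):
    # Paste all frames/directions of one creature into a single grid image,
    # so the network sees them together instead of as unrelated samples.
    rows = -(-len(frame_paths) // cols)   # ceiling division
    sheet = Image.new("RGB", (cols * tile, rows * tile), (255, 0, 255))
    for i, path in enumerate(frame_paths):
        frame = Image.open(path).resize((tile, tile))
        sheet.paste(frame, ((i % cols) * tile, (i // cols) * tile))
    return sheet

# e.g. make_sheet(["rat_north_0.png", "rat_north_1.png"], cols=4)
```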
 
Nice nice, an OT GAN sprite pack made to be a full-fledged replacement for the Cipsoft sprites is starting to seem like a plausible idea.
If this could be made with matching ID representations as the Cip version (like a matching ID for fire sword), it could even be cross-compatible, which I think is a necessary step to fully migrate to an open-source sprite pack. (Find a way to identify similar, but not identical, sprites?)

I would like to see how well it adapts to ground tiles, like various grass squares, cobblestone roads etc.

You might want to look into an AI edge detector/mapper to identify which part of a sprite should have a clean pink background. Although I imagine this is also hard to make training data for.
 
it could even be cross-compatible.
Since it was trained on top of Cip's weapons it definitely can be cross-compatible; that was my goal to begin with.
There are some techniques to find sprites that look close. One of them is called "projection", where you try to find the seed and truncation values that match a provided image; if the image belongs to the training dataset, it finds a perfect sample of it, and we could then use that seed number as a starting point and just change some small details. To be honest this is possible, but I don't think it would make the sprite pack look cool; the results would seem like small variations/worse versions of the Cip ones.
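A toy version of what projection does, with all names and shapes being my assumptions (real projectors use the trained generator and a perceptual loss such as LPIPS):

```python
import torch

G = torch.nn.Sequential(               # stand-in for the StyleGAN2 generator
    torch.nn.Linear(512, 3 * 64 * 64))

def project(target, steps=200, lr=0.05):
    # Gradient-descend a latent until the generator output matches the target.
    z = torch.randn(1, 512, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = ((G(z) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()

target = torch.randn(1, 3 * 64 * 64)   # placeholder for a real sprite tensor
z_hat = project(target)
```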

With GANSpace we can actually find models that look better and very different from the vanilla ones, but still somehow resemble them.
For cleaning noise, I've seen some GANs that actually do noise-reduction detection. I'll try some things later on and see if they work better than alpha compositing with denoising.
 
What if you got around 250 more sprites to use? How much of a difference would that make? Just in the number of possible results, or could it also improve quality?
 
What if you got around 250 more sprites to use? How much of a difference would that make? Just in the number of possible results, or could it also improve quality?
Basically we have the following:
- More data = more possible variations = more time you can train and improve quality without the model collapsing.

I've been checking some articles about GANs that work with small datasets, but even the best ones can't work with fewer than 100 samples per type. There are weapon types in Tibia, like bows/crossbows, where we simply don't have that much variety.

So our options become restricted to either putting those models together with others to reach more than 100 samples, or having a very shitty model that basically only makes copies of the same trained image. A sketch of how I'd triage the categories is below.
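For instance, a quick way to see which categories need pooling (the folder layout and names here are hypothetical):

```python
import os

MIN_SAMPLES = 100   # rough floor suggested by the small-dataset articles

def plan_pools(dataset_dir):
    # Split categories into ones with enough samples and ones to merge.
    kept, pooled = [], []
    for category in sorted(os.listdir(dataset_dir)):
        n = len(os.listdir(os.path.join(dataset_dir, category)))
        (kept if n >= MIN_SAMPLES else pooled).append((category, n))
    return kept, pooled

# e.g. kept, pooled = plan_pools("sprites/")  # pool bows + crossbows, etc.
```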
 
Just a question: are you all still able to set up the environment and run the program?
Recently I had a problem with my Google Drive and had to change a few things, which may have impacted this repository. Can anyone confirm?
 
Just a question: are you all still able to set up the environment and run the program?
Recently I had a problem with my Google Drive and had to change a few things, which may have impacted this repository. Can anyone confirm?

Still works for me!
 
Thanks for sharing your project. After reading our last conversation, I knew you knew what you were talking about. Now it's confirmed!
As for animations, I think there is a possible way of doing this...


edit: I get the "np is not defined" error, though I have not looked into it

On another note, what we could do is similar to what MIT did with "creepy" generated images, where people go online and choose the creepiest pics generated by the AI, and thus more data is generated that way, at least for the "good" candidates... here's the link to what I'm talking about (warning, very sp00ky stuff): Nightmare Machine: Help us create the most scary (http://nightmare.mit.edu/faces)
 
The error you're facing probably means you haven't executed something, or that your runtime is set to TPU/CPU instead of GPU. If you need any support, please let me know.
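For the "np is not defined" error specifically, it usually just means the cell with the imports never ran; as a quick check you can run the import yourself (assuming the notebook aliases NumPy the usual way):

```python
import numpy as np   # the alias the failing cell expects
```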
 