Generating Art

REQUIREMENTS #

Make sure you can meet the requirements for this project:

  • A Google Account
  • At least 50 GB of free Google Drive space ($1+ per month)
  • (RECOMMENDED) A Google Colab Pro plan ($10 a month)

Dataset Creation

Preprocessed Datasets: If you are new to StyleGAN or don’t want to spend hours finding images on your own, I have created 20 datasets you can download right from Google Colab.

Custom Dataset: If you really must be a special snowflake with your own dataset, there are some great resources for gathering images. Wikiart.org and Imgur posts are your best bets for quickly gathering images you like. You will need at least 1,000 images to train a decent model.

When you are done gathering raw images, zip them all into a single folder, then follow the instructions in the Google Colab to preprocess the images for training.
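If you’d rather script the zipping than do it by hand, here is a minimal sketch using Python’s standard library (the folder and archive names are just examples, not ones the notebook expects):

```python
import shutil
from pathlib import Path

raw_dir = Path("raw_images")   # folder containing your gathered images (example name)
raw_dir.mkdir(exist_ok=True)   # ensure it exists for this sketch

# Pack everything in raw_images into dataset.zip, ready to upload to Colab
shutil.make_archive("dataset", "zip", root_dir=raw_dir)
```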

Setup #

What is Google Colab?

If you’re new to the cloud computing or AI scene, you’re in for a treat. Google Colab is an online cloud computing service that you can use for free. It gives you server-grade hardware to train your models with, so your potato computer doesn’t need to suffer any more than it already does. Colab Pro, which costs $10 a month, gives you longer runtimes, better hardware, and more space, which is helpful for large projects like this.

Let’s get this party started, so open Google Colab.

Connecting Google Drive

To save all the files during training, we will connect our Google Drive account to the session. Click the play button on the left and follow the prompts to connect your Google account. Once it is connected, we are free to continue to installing the libraries.
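For reference, the cell behind that play button is essentially the standard Colab mount snippet (it only runs inside a Colab session, where it prompts you to authorize access):

```python
from google.colab import drive

# Mount Google Drive at /content/drive so training files persist across sessions
drive.mount('/content/drive')
```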

Installing Libraries

Next up is installing all of our code, libraries, and configurations to train our model. If this is your first time installing, it will create a folder and copy the GitHub repository into your Google Drive; if the folder already exists, it will simply install the necessary libraries.

Processing Dataset #

Custom Dataset

Remember that zip file you made? Well, you need to upload it. Click the play button on the first block of code and select your dataset. This might take a minute, so hang tight.

Next, you will preprocess the dataset. This step is vital: it converts your images into a format the AI can read. Depending on the size of your dataset, this could take a few hours, so I’d suggest reading the training parameter explanations further down while you wait. Once this is done, we can continue to training.
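Under the hood, preprocessing notebooks like this typically call the repository’s dataset tool. As a rough sketch only (the paths are examples, and the exact flags depend on which StyleGAN fork the Colab uses), the stylegan2-ada-pytorch version looks like:

```shell
# Resize the raw images and pack them into a training-ready archive
python dataset_tool.py --source=raw_images --dest=datasets/mydataset.zip \
    --width=1024 --height=1024
```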

Preprocessed Dataset

All of these images are already processed, so all we need to do is download them. Right below “Custom Dataset” there is a dropdown menu with the selection of datasets you can download. Feel free to look up examples of the different genres. Once you’ve found the one you like, press the play button and the dataset will download and unzip for you.

Training #

Dataset preprocessed, Colab set up, you’re almost there! Well, not really, because the training process will take a few days, but we can pretend everything is fine. Anyway, here is a basic rundown of the parameters used in the training process, so we can tweak them if necessary.

You can change these to your liking; you can find more by running the training script with --help or by referring to the original GitHub page.

  • dataset_path: This is for custom dataset users only. Copy the path (this can be done with the file explorer on the right) and enter it between the quotation marks.
  • snapshot_count: How often a sample is generated during training. A lower value produces more samples but takes up more storage; a higher value does the opposite. I’d keep it between 1 and 8.
  • Metric_List: This is used to evaluate the quality of the samples; at a slight cost in speed, it helps you judge the quality of the final output.
  • Augs: ADA (adaptive discriminator augmentation) is a newer way of training, which seriously accelerates the training process and improves the quality of the output.
  • Augpipes: Enables all available augmentations. This can be helpful when you want to generate media from your model.
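To make the parameters concrete, here is a rough sketch of the kind of command the notebook runs for you. The flag names follow stylegan2-ada-pytorch; the paths and values are examples, not the notebook’s exact settings:

```shell
# Launch training: snapshot every 4 ticks, track FID, use ADA augmentation
python train.py --outdir=results --data=datasets/mydataset.zip \
    --snap=4 --metrics=fid50k_full --aug=ada

# After a disconnect, add --resume=<path-to-last-pkl> to pick up where you left off
```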

Now that we have that out of the way, you can hit train. Assuming you set everything up correctly, it will start training. You can check its progress visually in the results folder. This will take at least a continuous day of training. If you get kicked out of your session, you can reload your dataset and resume from your last .pkl file using resume_from.

When you feel like it’s done, you can stop the training. The last .pkl file in your results folder is the one you want to download for use in other projects.

Generating Media #

In the Colab I have included some basic applications for your newly-trained model.

Generating Images

Pretty self-explanatory: it will generate PNG files of art at 1024×1024 resolution. First link the .pkl file mentioned previously, then choose how many images you want to generate. If you want 100 images, for instance, you could generate seeds 101-200. Click play and the images will be generated in the out folder on Google Drive.
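For reference, the underlying command is roughly this (stylegan2-ada-pytorch syntax; the network path is an example):

```shell
# Generate one PNG per seed (101 through 200) into the out folder
python generate.py --network=mymodel.pkl --seeds=101-200 --outdir=out
```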

Animate Your Art

If you found images you like among the art you have generated, you can create a video of them seamlessly transitioning into one another. Note down the seed number of each image you like, then write those numbers in the --seeds argument. I would recommend ending with the same seed you started with, to create an endless loop of the video. Now you need to do some math. The video generates at 24 FPS, so your total frame count is split among your seeds. If you have 5 images and 60 frames, for instance, each transition gets 60/5 = 12 frames, and at 24 frames per second that works out to about 0.5 seconds per image. Modify this to your preferred speed and let it rip. This will generate an mp4 file in your out folder.
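The frame math is easy to fumble, so here is a tiny hypothetical helper (not part of the notebook) that reproduces the 5-image, 60-frame example:

```python
def seconds_per_image(total_frames: int, num_seeds: int, fps: int = 24) -> float:
    """How long each image is on screen: frames per transition divided by FPS."""
    frames_each = total_frames / num_seeds  # e.g. 60 / 5 = 12 frames per image
    return frames_each / fps                # 12 / 24 = 0.5 seconds

print(seconds_per_image(60, 5))  # → 0.5
```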

How to continue #

Well, pretentious author, I did everything here, yet there is still a hole in my heart that I have yet to fill. Good news! There is much, much more to do! Here I’ll link some fun projects that I couldn’t include, but that are well worth trying:

 Audio-reactive Latent Interpolations
