Training Neural Networks with Genetic Algorithms in Swift

As always, before we begin, you can check out the code posted on my GitHub.

This year, for the Swift Student Challenge, I submitted a Swift playground that implements a genetic algorithm to train a neural network to play a simple side-scrolling game. All of the code was written by me from scratch in pure Swift.

The Basics

Fundamentally, the goal of this project was to build off of my submission from last year and take it to another level. My submission last year was a simple neural network built to play the classic game Snake. You can learn more about it here.

My plan was to rework the neural network processing so that many networks could be trained and run in parallel, and to implement a unique way of training the network without training data. The training method I implemented is called a genetic algorithm.

I'll break this post into sections, so feel free to jump to any part:

  1. Implementing a Neural Network
  2. Training Using a Genetic Algorithm
  3. All Other Parts of the Playground
  4. Wrap-Up

Implementing A Neural Network

One of the fundamental parts of this project is a neural network implementation in pure Swift. There has been a lot of machine learning work done in Swift, especially since Apple’s release of Core ML in 2017. However, because Core ML’s use cases are still limited, and to keep the playground streamlined, I chose to create a custom implementation of a neural network.

Some of the implementation is similar to another one of my projects, SnakeAI, which I developed in May 2020. If you are looking for details on the implementation of the fully featured neural network, that article has more information. However, because this project uses a genetic algorithm for training, there are a few notable changes.

Firstly, all of the training features, including backpropagation and error functions, have been removed in favor of a more streamlined and efficient codebase. Additionally, the structure of the features of the neural network has changed drastically. The neural network now processes twice as many features, including the heights of upcoming obstacles, the distance to those obstacles, and the player’s current height in the game. These features are implemented in the Features.swift file. Finally, the structure of the neural network has changed to a fully connected 10x10x2 neuron structure (22 total neurons) to promote more efficient and effective decision making. More details on the full inner workings of the neural network can be found in the Swift files posted on GitHub.
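
To make that structure a bit more concrete, here is a minimal sketch of what a fully connected feedforward pass can look like in pure Swift. The `Layer` and `NeuralNetwork` types (and the sigmoid activation) are illustrative assumptions, not the exact code from the playground:

```swift
import Foundation

// A minimal sketch of a fully connected feedforward pass in pure Swift.
// `Layer`, `NeuralNetwork`, and `predict` are illustrative names, not the
// exact types used in the playground.
struct Layer {
    var weights: [[Double]]   // weights[neuron][input]
    var biases: [Double]      // one bias per neuron

    func forward(_ inputs: [Double]) -> [Double] {
        zip(weights, biases).map { neuronWeights, bias in
            // Weighted sum of the inputs plus the bias, squashed by a sigmoid
            let sum = zip(neuronWeights, inputs).reduce(bias) { $0 + $1.0 * $1.1 }
            return 1.0 / (1.0 + exp(-sum))
        }
    }
}

struct NeuralNetwork {
    var layers: [Layer]   // e.g. a 10-10-2 structure as described above

    func predict(_ features: [Double]) -> [Double] {
        // Pass the feature vector through each layer in turn
        layers.reduce(features) { $1.forward($0) }
    }
}
```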

Training Using A Genetic Algorithm

The most innovative part of this project is the use of a genetic algorithm as a unique way of training the neural network. This differs from traditional neural network training methods because it requires little or no training data. The neural network in this playground is built on only 20 seconds of data from me playing the game. The rest of its knowledge comes from its training via the genetic algorithm. Here are the details on how it works:

As an overview, this process seeks to mimic the process of natural selection in nature. The game begins by initializing 100 unique players, with their own neural networks to make decisions. During the game, players will make decisions many times per second on how to move through the map and avoid obstacles. A player’s score in the game determines how often their genes will be passed on to the next generation. Higher scoring players pass on their genes more often, so over time, weaker genes are mutated out of the population, and the neural network gets stronger.

To go more in depth, here’s a step-by-step breakdown of the process:

  1. Play Game: The game is played using either the individuals generated by the genetic algorithm or individuals pre-saved to the program. This evaluates the fitness of each individual, which will be used for generating the next generation.
  2. Add Elites: The genetic algorithm begins building the next generation by adding elites to the new population. An elite is a direct copy of the best individual from the current population. Adding elites directly to the next population prevents the unlikely possibility of randomly mutating out all of the good genes.
  3. Weighted Selection: A weighted selection of individuals occurs based on their fitness scores. This process selects two individuals to be used in the following steps. (A rough sketch of steps 3 through 5 appears after this list.)
  4. Crossover: Using the individuals from the weighted selection, crossover imitates the biological process of the same name to mix the parents’ genes. A random point is selected in the individual’s DNA. All genes before this point come from the first parent, while all genes after it come from the second. Additionally, a second child is created from the unused genes (before the random point from parent B, after it from parent A).
  5. Mutation: Genes within the children are mutated based on the mutation rate set in the code. This increases genetic diversity in the new population.
  6. Generate New Population: After all of the above steps are complete, the new population becomes the “current population” and the entire process begins again from the beginning. As this process repeats, unfit genes leave the population and the individuals score higher and higher.
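
Here is the rough sketch of steps 3 through 5 mentioned above, assuming each individual’s “DNA” is flattened into an array of neural network weights. The function names are illustrative, not the playground’s exact API:

```swift
// Step 3: fitness-weighted selection. Higher-scoring individuals are more
// likely to be chosen as parents.
func weightedSelection(population: [[Double]], fitness: [Double]) -> [Double] {
    let total = fitness.reduce(0, +)
    guard total > 0 else { return population.randomElement()! }
    var pick = Double.random(in: 0..<total)
    for (individual, score) in zip(population, fitness) {
        pick -= score
        if pick <= 0 { return individual }
    }
    return population[population.count - 1]
}

// Step 4: single-point crossover. Child A takes the genes before the split
// point from parent A and the rest from parent B; child B takes the unused halves.
func crossover(_ parentA: [Double], _ parentB: [Double]) -> (childA: [Double], childB: [Double]) {
    let point = Int.random(in: 1..<parentA.count)
    let childA = Array(parentA[..<point]) + Array(parentB[point...])
    let childB = Array(parentB[..<point]) + Array(parentA[point...])
    return (childA, childB)
}

// Step 5: mutation. Each gene has a `rate` chance of being replaced with a new
// random value, which maintains genetic diversity in the population.
func mutate(_ dna: [Double], rate: Double) -> [Double] {
    dna.map { gene in
        Double.random(in: 0...1) < rate ? Double.random(in: -1...1) : gene
    }
}
```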

All Other Parts of the Playground

Apart from the neural network code included in the playground, there is also code for a simple game. The game is a hybrid between Flappy Bird and Google Chrome’s dinosaur game. As a basic overview, the game consists of three files:

  1. Game.swift: This is a handler for most aspects of the game, including scorekeeping, object generation, and game start/stop.
  2. Player.swift: This file is the foundation from which each player is generated. It is a subclass of SKShapeNode that consists of the node itself as well as the player’s neural network. (A minimal sketch appears after this list.)
  3. MainScene.swift: This file handles all of the graphics of the game. Additionally, while the genetic algorithm is enabled, it calls the appropriate processes in order to create a new generation.
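
As a rough illustration of how a player can pair a sprite node with its own neural network, here is the minimal sketch mentioned above. The `brain` property and `decideJump(features:)` method are assumptions that reuse the earlier `NeuralNetwork` sketch, not the playground’s exact code:

```swift
import SpriteKit

// A minimal sketch of the Player described above: an SKShapeNode subclass that
// carries its own neural network and score.
class Player: SKShapeNode {
    var brain = NeuralNetwork(layers: [])   // each player gets its own network
    var score: Double = 0                    // fitness used by the genetic algorithm

    // Feed the current game features through the network and act on the result
    func decideJump(features: [Double]) -> Bool {
        let outputs = brain.predict(features)
        // Two output neurons: roughly, "jump" vs. "don't jump"
        return outputs.count == 2 && outputs[0] > outputs[1]
    }
}
```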

Wrap-Up

Overall, this was probably my most intensive programming project to date. All of the code in this playground was written by me over the course of about 2 ½ weeks (4/1/21-4/19/21). At the end of all of it, it turned out to be an overwhelming success.

After about 5,000 generations, the neural network is capable of obtaining an average score of about 25 points and a high score of over 100 points. This shows that a genetic algorithm can be an effective method of training a neural network, especially in situations where little or no training data is available.

If you would like to learn more, shoot me an email! There’s a link at the top of the page. Also, feel free to check out the code on GitHub and maybe leave a ⭐️.

🏄‍♂️
