NeuralFractal 0.3.0

A Visual Exploration of Neural Dynamical Systems

Built on Top of PyTorch
Define Dynamical Systems Using Complex-Valued Neural Networks
GPU Support for Accelerated Sampling and Rendering

What is it all about?

NeuralFractal was developed for exploring the properties of dynamical systems defined by neural networks, especially complex-valued neural networks. Fractals are visualizations of chaos: they contain infinitely repeating self-similar patterns. One way to generate a fractal is to apply a function repeatedly to a set of points and keep the points that do not diverge to infinity. Even dynamical systems built from simple functions in this way can produce amazing fractals. But what happens if, instead of a simple function, we use a neural network? Applying a neural network repeatedly is equivalent to running a recurrent neural network, which can model complicated non-linear dynamical systems, and reservoir computing has demonstrated that even completely random RNNs can produce strange and interesting dynamics. This package is an attempt to explore the strange and beautiful world of such fractals.

[Image: a fractal generated by a complex-valued neural network after pseudo-coloring]

Quick Start

Installation

You can install the package using pip. Note that PyTorch is required.

pip install nfractal

How does it work?

For a moment, forget about complex-valued neural networks; let's generate a simple fractal first.

Given a complex number \(s \in \mathbb{C}\) and the complex function \(f_{s}(z) = z^2 + s\), the point \(s\) is a member of the Mandelbrot set if the following iteration does not diverge to infinity:

$$z_{k+1} = f_s(z_k)$$

where \(z_0 = 0\). We can implement this dynamical system as a PyTorch module:

import torch.nn as nn

class Mandelbrot(nn.Module):
    # one step of the Mandelbrot map: z_{k+1} = z_k**2 + s
    def forward(self, z_k, s):
        return z_k**2 + s

dynamics = Mandelbrot()

Now we can plot the members of the Mandelbrot set in the complex plane. A simple approach is to sample the complex plane uniformly, apply the above iteration a finite number of times, say \(50\), and keep the samples whose absolute value stays below a chosen radius, say \(2\). You can use the UniformSampler class to do so.
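Before reaching for the sampler, it may help to see what that escape-time test looks like in plain PyTorch, using the dynamics module defined above. This is only a minimal sketch with illustrative sample counts and bounds, not nfractal's implementation:

import torch

# escape-time sketch (illustration only)
n_samples, max_iters, radius = 2**17, 50, 2.0

# sample s uniformly from a rectangle of the complex plane
re = torch.empty(n_samples).uniform_(-2.0, 1.0)
im = torch.empty(n_samples).uniform_(-1.5, 1.5)
s = torch.complex(re, im)

z = torch.zeros_like(s)            # z_0 = 0
for _ in range(max_iters):
    z = dynamics(z, s)             # z_{k+1} = f_s(z_k)

members = s[z.abs() < radius]      # samples that did not escape

The UniformSampler class automates this kind of test and accumulates the accepted samples into an image over many epochs: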

from nfractal.samplers import UniformSampler

sampler = UniformSampler(dynamics, sample_size=2**17, img_size=(600, 800), threshold=2.0)

Now you can perform the sampling using the sampler:

image = sampler.sample(epochs=100, max_iters=50, verbose=True)

Setting verbose=True displays the sampling progress and the acceptance rate of samples at each step. The output is a PyTorch tensor containing an image of size (600 x 800 x 1).

import matplotlib.pyplot as plt
plt.imshow(255 - image, cmap='gray')  # invert intensities for display
plt.show()

After plotting the image using Matplotlib you will see something like this:

[Fractal image]

It is easy to enhance the image a little by histogram equalization:

from nfractal.utils.enhancements import hist_equalizer
improved_image = hist_equalizer(image)

The result seems much better:

[Fractal image]
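In case you are curious how this works, histogram equalization remaps each pixel through the normalized cumulative histogram of intensities so that the output uses the available dynamic range more evenly. The following is a simplified sketch of the idea, not nfractal's actual hist_equalizer:

import torch

def simple_hist_equalizer(img, bins=256):
    # simplified histogram equalization (illustration only)
    flat = img.float().flatten()
    lo, hi = flat.min().item(), flat.max().item()
    hist = torch.histc(flat, bins=bins, min=lo, max=hi)
    cdf = torch.cumsum(hist, dim=0)
    cdf = cdf / cdf[-1]                                         # normalized CDF in [0, 1]
    idx = ((flat - lo) / (hi - lo + 1e-12) * (bins - 1)).long() # bin index of each pixel
    return cdf[idx].reshape(img.shape)                          # equalized image in [0, 1]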

Complex-Valued Neural Networks (CVNN)

Returning to the previous example, we can use a CVNN as \(f_s(z)\). A CVNN is a neural network with complex-valued parameters that accepts complex inputs, i.e. \(f_s(z_k) = \mathrm{CVNN}_{\theta}(z_k, s)\), where \(\theta\) denotes the network's parameters. Now you can define a neural dynamics. However, your network needs complex layers and complex activation functions. Some of PyTorch's activation functions support complex inputs (e.g. nn.Tanh). NeuralFractal provides building blocks for complex layers and complex activation functions, which will be improved over time. In the example below, a random CVNN is defined using complex linear transformations.

import torch
import torch.nn as nn
import nfractal.nn as cvnn

class NeuralDynamics(nn.Module):
    def __init__(self):
        super(NeuralDynamics, self).__init__()
        # a small random CVNN: complex linear layers with nn.Tanh
        # (which accepts complex inputs) as the activation
        self.net = nn.Sequential(cvnn.Linear(1, 6),
                                 nn.Tanh(),
                                 cvnn.Linear(6, 6),
                                 nn.Tanh(),
                                 cvnn.Linear(6, 1))

    def forward(self, z_k, s):
        return self.net(z_k)**3 + s

torch.manual_seed(33)  # to reproduce the same results
neural_dynamics = NeuralDynamics()

GPU Accelerated Sampling and Rendering

NeuralFractal is CUDA-friendly, so you can leverage GPU devices to accelerate sampling and rendering.

from nfractal.samplers import UniformSampler

sampler = UniformSampler(neural_dynamics, img_size=(1200, 1200), sample_size=2**19,
                         device='cuda', zoom=10.0, center=(0.0, 0.0))

image = sampler.sample(epochs=100, verbose=True)

Before displaying the result, don't forget to move the image back to the CPU using image.cpu().
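Mirroring the earlier plotting call, that might look like this:

import matplotlib.pyplot as plt

plt.imshow(255 - image.cpu(), cmap='gray')  # move to CPU before plotting
plt.show()

The output looks amazing :)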

[Fractal image]

You may want to colorize it using pseudo-coloring. The nfractal.colorize sub-package provides some facilities for this. One method is to transfer color from another image. Consider the image below:

[Image: target image for color transfer]

We want to transfer the color from the image above to the fractal image. We can use the ColorTransfer class, but first you need to read the target image:

from nfractal.colorize import ColorTransfer
# read the target image
target_img = plt.imread('target.jpg')
# convert the image to a PyTorch tensor
target_tensor = torch.from_numpy(target_img).to(torch.float64)
# create the colorizer
colorizer = ColorTransfer(target_tensor)
# colorize the image
colorized_image = colorizer(image.cpu())

Now we can display the result:

[Colorized fractal image]
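Under the hood, this style of color transfer (Reinhard et al., see References) matches per-channel statistics between the two images. The sketch below illustrates the idea in a much-simplified form, working directly in RGB instead of the lαβ space used in the paper; it is not nfractal's actual ColorTransfer implementation:

import torch

def naive_color_transfer(gray_img, target_rgb):
    # simplified Reinhard-style transfer: match each channel's mean and
    # standard deviation to the target image (illustration only)
    src = gray_img.to(torch.float64)
    if src.dim() == 2:
        src = src.unsqueeze(-1)
    src = src.expand(-1, -1, 3).clone()      # replicate grayscale to 3 channels
    tgt = target_rgb.to(torch.float64)
    for c in range(3):
        s_c, t_c = src[..., c], tgt[..., c]
        src[..., c] = (s_c - s_c.mean()) / (s_c.std() + 1e-12) * t_c.std() + t_c.mean()
    return src.clamp(0, 255)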

You can also zoom into the image as you like by changing the zoom and center parameters:

sampler = UniformSampler(neural_dynamics, img_size=(1200, 1200), sample_size=2**19,
                         device='cuda', zoom=1.0, center=(0.0, 0.0))

image = sampler.sample(epochs=100, verbose=True)

And the result:

[Fractal image]

About

Contributors

Amirabbas Asadi amir137825@gmail.com
Independent AI & Computer Science Researcher
Undergraduate Student of Computer Engineering

GitHub LinkedIn

References

  • Inspired by a blog post by Xander
  • E. Reinhard, M. Adhikhmin, B. Gooch, and P. Shirley, "Color transfer between images," IEEE Computer Graphics and Applications, vol. 21, no. 5, pp. 34-41, July-Aug. 2001, doi: 10.1109/38.946629.