Having fun and making fun with the Game of Life

Summary of the previous episode



In the previous episode we made a simple Game of Life called GoF, because I am dyslexic and Game Of Life became Game Of liFe. The purpose of the exercise was to demonstrate that an abstraction need not be complicated: it only has to separate the intuitive way of using the code from its implementation. Our new mission is even funnier: to show how a simple toy program can teach us serious things.


Introducing the game of life console 



I have made a small tutorial here : http://gof.readthedocs.org/en/latest/quickstart.html

My Game of Life implementation is deliberately faulty, and a little bit of a troll, but it has an amazing power to illustrate some key concepts of developing (I prefer to say factorizing):
  • it is an introduction to non-linear algebra, a field in which computer analysis is a must;
  • it is a massively parallel model, hence a way of thinking about problems that defeats concurrency issues;
  • it is a very simple way to begin manipulating Python;
  • there are some smart tricks based on abstractions;
  • it is an introduction to discrete cellular automata / multi-agent simulations.


The troll in the code



I am not an Object Oriented developer. I am an old developer coming from the old times, when we used to say: «show me your code, and I shall be mystified; show me your data, and I shall know how your code really works». Since OOP and encapsulation have been massively adopted, data and code are mixed up in the same files. As a result, it has become impossible to tell code from data.

My code is old fashioned: data on one side (the matrix), and functions on the other side (evolve, bleach, dirty). But it is not, really.

If you type:

from gof.matrix import matrix
from gof.gof import evolve, bleach, dirty

# graft the plain functions onto the class as methods
matrix.play = evolve
matrix.reset = bleach
matrix.random_fill = dirty

# grid being a matrix instance built as in the tutorial
grid.reset(15, 20)
grid.random_fill(10)
grid.play(200, 3)


Then the whole tutorial can be rewritten in plain OOP fashion. So the troll in my code is a way of stating that coding is not about using a paradigm, it is about making code that works, and that OOP and imperative programming can be equivalent.

Since I miss the separation of data and code, I advocate keeping data and behaviour in distinct files; that is why I love Python's dynamic typing, but you could also use inheritance to achieve the same result. One illustration of this principle is archery, which uses traits (mixins) to give behaviours to inert objects (MutableMappings in this case).
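To make this concrete, here is a minimal sketch of the idea; the class and function names are invented for the example and this is not the actual archery API:

def invert(state):
    """A plain function acting on plain data: flip every cell."""
    return dict((cell, 1 - value) for cell, value in state.items())

class PlayMixin(object):
    """Behaviour only: no data of its own, just methods acting on self.state."""
    def play(self, steps):
        for _ in range(steps):
            self.state = invert(self.state)

class Grid(object):
    """Data only: an inert container."""
    def __init__(self, state):
        self.state = state

class PlayableGrid(PlayMixin, Grid):
    """Data + behaviour assembled by inheritance; Grid.play = some_function
    (dynamic typing) would achieve the same result."""
    pass

g = PlayableGrid({(0, 0): 1, (0, 1): 0})
g.play(1)
print(g.state)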

I deliberately left a mistake in my code to illustrate my point: my Game Of Life implementation cannot be used for generic 2D cellular automaton networks, because the method for counting living neighbours is specific to Conway's Game of Life. I intend to use it to make a point later, if I have the time to develop on cellular automata and on the proper separation of methods in an object.
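To give an idea of what a more generic version could look like (this is not the gof code, the names are invented for the sketch), the neighbourhood and the transition rule would be parameters instead of being baked into the grid:

# Hypothetical sketch: neighbourhood and rule are swappable parameters.
MOORE = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def count_neighbours(alive, cell, width, height, neighbourhood=MOORE):
    """Count living neighbours of a cell on a torus; alive is a set of (x, y)."""
    x, y = cell
    return sum(((x + dx) % width, (y + dy) % height) in alive
               for dx, dy in neighbourhood)

def conway_rule(is_alive, neighbours):
    """Conway's rule, isolated so it can be swapped for any other 2-state rule."""
    return neighbours == 3 or (is_alive and neighbours == 2)

def generic_step(alive, width, height, rule=conway_rule):
    """One step of a generic 2-state automaton: the rule is just a parameter."""
    return set(cell
               for cell in ((x, y) for x in range(width) for y in range(height))
               if rule(cell in alive, count_neighbours(alive, cell, width, height)))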


Readying the path for some analysis



Imagine you want to make a histogram of the number of cells that are ALIVE after 0, 50, 100 and 200 iterations. For the sake of fun, I use gof.weird_array.SparseArray, so counting the living cells amounts to counting the items in the underlying set:

import matplotlib.pyplot as plt

from gof.matrix import matrix
from gof.gof import dirty, evolve
from gof.weird_array import SparseArray
from archery.bow import Daikyu
from archery.barrack import bowyer

# result maps an iteration count to a histogram {number of living cells: occurrences}
result = Daikyu({})

for _try in range(100):
    ### {0}^{0} is more friendly than the boring set()
    grid = matrix(20, 16, SparseArray({0} ^ {0}))
    dirty(grid, 10)
    result += bowyer(Daikyu, {0: {len(grid.matrix._set): 1}})
    evolve(grid, 50, 'unseeable')
    result += bowyer(Daikyu, {50: {len(grid.matrix._set): 1}})
    evolve(grid, 50, 'unseeable')
    result += bowyer(Daikyu, {100: {len(grid.matrix._set): 1}})
    evolve(grid, 100, 'unseeable')
    result += bowyer(Daikyu, {200: {len(grid.matrix._set): 1}})

fig = plt.figure()
for i, label in enumerate([0, 50, 100, 200]):
    ax = fig.add_subplot(len(result), 1, i + 1)
    if not i:
        ax.set_title("Number of living cells for 100 random configurations")
    x, y = [], []
    for k, v in sorted(result[label].iteritems()):
        x += [k]
        y += [v]
    ax.plot(x, y, label="%d iterations" % label)
    ax.legend()

plt.savefig("histo.png")


And Tada o/



What is so special about these plots?
  1. there is no nice distribution of the number of accessible configurations after the initial time (Gaussian or any other usual distribution);
  2. the configurations, which were pretty homogeneous in terms of 0/1 ratio, are now more spread out (the entropy of each initial pattern decreases, but the results are less homogeneous in terms of 0/1 ratio than the initial patterns);
  3. the number of configurations increases, and then decreases: we have a transition!
At iteration 200, patterns should be converging to their attraction basins. With a network of simple systems interacting with each other, you get a complex system. Since the rules are non-linear, they cannot be addressed with linear tools (matrices, Fourier...). It is thus quite hard to predict their behaviour.

If you have the audacity to speak French ;) you can read this: http://www.lps.ens.fr/~weisbuch/cha2/02.html

Now you have a lot of challenges to solve:
  • is there a domain in which the system is chaotic (given a distance of 1 between two distinct initial patterns, how does the final distance evolve)? A rough sketch of such an experiment follows this list;
  • are there rules more chaotic than Conway's?
  • if we change the actual neighbourhood for a random neighbourhood, how does the system change?
  • how many stable attractors exist, and what are their sizes?
  • how many precursors exist for a basin, and how are they distributed?
  • if we consider the grid as a vector of 1/0, then it is a number. Can we construct a chaotic enough cellular automaton that would make a good cryptographic hashing function (hint: rule30)?
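
Here is a rough sketch for the first question, reusing the gof objects from the histogram script above; it assumes, as in that script, that grid.matrix._set holds the living cells, and that the grid objects can be deep-copied:

from copy import deepcopy

from gof.matrix import matrix
from gof.gof import dirty, evolve
from gof.weird_array import SparseArray

# two grids that start one cell apart
grid_a = matrix(20, 16, SparseArray(set()))
dirty(grid_a, 10)
grid_b = deepcopy(grid_a)
grid_b.matrix._set.pop()            # remove one living cell: initial distance is 1

for chunk in (10, 40, 50, 100):     # cumulated iterations: 10, 50, 100, 200
    evolve(grid_a, chunk, 'unseeable')
    evolve(grid_b, chunk, 'unseeable')
    # Hamming distance = size of the symmetric difference of the living sets
    print(len(grid_a.matrix._set ^ grid_b.matrix._set))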

What about GPUs?



Another cellular automaton that is fun is the lattice gas automaton, and it is well suited to GPUs: Data-Parallelism and GPUs for Lattice Gas Fluid Simulations. Since the same causes produce the same effects, most cellular automata networks can easily be implemented on a GPU.
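The reason cellular automata map so well onto GPUs is that every cell is updated from a small fixed neighbourhood, independently of all the others. Here is a minimal sketch with NumPy standing in for the GPU (these whole-array operations translate almost directly to CUDA kernels or to GPU array libraries):

import numpy as np

def life_step(grid):
    """One data-parallel Conway step on a toroidal 0/1 array: every cell is
    computed from its 8 neighbours at once, with no explicit loop over cells."""
    neighbours = sum(np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(grid.dtype)

grid = (np.random.rand(16, 20) < 0.3).astype(np.uint8)
for _ in range(200):
    grid = life_step(grid)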

 

And what about Finance? 



I have found G. Weisbuch's work inspiring: hits and flops dynamics by G. Weisbuch. Financial phenomena are pretty much based on a lot of agents with their own rationalities, some global beliefs and a neighbourhood. It seems very tempting to use a cellular automaton to model financial phenomena (which seem non-linear).

Plus, cellular automata are pretty easy to code and easy to parallelize with CUDA, so maybe we can do fun things. At least I tried, with a very ugly first attempt in Python at a multi-agent simulation with a fanciful utility function: http://wiki.github.com/jul/KISSMyAgen
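Just to fix ideas, here is a toy caricature of that kind of model (every name and rule is made up for the sketch, it is not taken from the repository above): agents on a ring buy or sell from a mix of their own bias, their neighbours' last moves and a global belief, which is exactly a cellular-automaton style of update.

import random

N = 100
bias = [random.random() for _ in range(N)]        # each agent's own "rationality"
state = [random.randint(0, 1) for _ in range(N)]  # last action: 1 = buy, 0 = sell

for step in range(50):
    global_belief = sum(state) / float(N)         # the shared signal everyone sees
    # each agent looks at its two neighbours, its own bias and the global belief
    state = [1 if bias[i] + state[i - 1] + state[(i + 1) % N] + global_belief > 2
             else 0
             for i in range(N)]
    # an imbalance between buyers and sellers would then drive the "price"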


And now?



Well, I hope you enjoyed it; that's all.

PS: Special thanks to Alejandro Weinstein and Bmispelon for the corrections...
