I think it was almost 20 years ago that I expectantly fired up the robot-evolution algorithm that had slowly been running for weeks.
I had already realized that I had to adjust the fitness function to give points for killing, not just surviving. The first attempt had produced robots that, instead of skillfully picking off their opponents one by one as I had envisioned, sped to the nearest corner and then just sat there waiting to be killed by one of the other, traditionally programmed robots.
I realized that the corner strategy made sense, since they got the most protection there, but it was a bit unsatisfying to watch...
In the second try the robots scurried to their usual corners, waiting to be killed, but this time firing randomly, occasionally killing one of their opponents by luck before they got hit.
The evolutionary algorithm sort of hit a plateau there, and besides I kind of wanted to use my computer for other things rather than spending days and days simulating robot fights.
But I learned two things:
* Evolution most likely is a thing.
* It's really really hard to write fitness functions...
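The fitness-function pitfall above can be sketched in a few lines. This is a hypothetical illustration, not the original code: the names, numbers, and the 500x kill weight are all made up to show why rewarding survival alone lets corner-campers dominate the population.

```python
# Hypothetical sketch of the fitness-function fix described above.
# Rewarding survival alone favors passive corner-camping, so kills
# must be weighted in as well. All names and weights are illustrative.

from dataclasses import dataclass

@dataclass
class BattleResult:
    ticks_survived: int  # how long the robot stayed alive
    kills: int           # opponents it destroyed

def fitness_survival_only(result: BattleResult) -> float:
    # First attempt: a passive corner-camper that merely outlasts
    # the fighting scores just as well as an aggressive robot.
    return result.ticks_survived

def fitness_with_kills(result: BattleResult) -> float:
    # Second attempt: weight kills heavily so aggression pays off.
    return result.ticks_survived + 500 * result.kills

camper = BattleResult(ticks_survived=900, kills=0)
hunter = BattleResult(ticks_survived=600, kills=2)

# Under survival-only scoring the camper wins; adding a kill
# bonus flips the ranking toward the aggressive robot.
assert fitness_survival_only(camper) > fitness_survival_only(hunter)
assert fitness_with_kills(hunter) > fitness_with_kills(camper)
```

Of course, as the second try showed, even this can be gamed: robots that camp in a corner while firing randomly still collect occasional kill bonuses.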
It's important to check whether what these algorithms produce is actually surprising to subject-matter experts. I recall, but cannot find at the moment, a Tumblr post where the blogger was surprised that a computer program which optimized a physical structure to minimize material while maximizing strength ended up producing something that looked organic.
This didn't surprise me at all. Organic structures evolved over millions or billions of years and probably are nearly optimal at accomplishing a particular task. I'd be surprised if the optimization software didn't produce something that looked organic.
It's not as if the optimization software outdid actual organic structures either: those are neither isotropic (their strength varies with direction) nor homogeneous (their strength varies with location), whereas the software had assumed both.
This article is actually providing select anecdotes from a more exhaustive paper: https://arxiv.org/pdf/1803.03453.pdf
I tend to believe life is universal. It may arise in a Turing machine that executes all possible programs one line at a time in parallel. It may arise in the subatomic realm, assuming subatomic particles exhibit sufficiently diverse complexity between 10^-35 and 10^-17 meters (a scale unknown to us). It may arise between galaxies in the universe, etc. We should view life more broadly.
I'm also reminded of https://blog.openai.com/faulty-reward-functions/ (which I saw on HN a while ago), where an AI unexpectedly learned to
> turn in a large circle and repeatedly knock over three targets, timing its movement so as to always knock over the targets just as they repopulate. Despite repeatedly catching on fire, crashing into other boats, and going the wrong way on the track, our agent manages to achieve a higher score using this strategy than is possible by completing the course in the normal way.
Would be interesting to provide videos of the evolved robots rather than just the anecdotes.
Very interesting. 'Learning to Play Dumb on the Test' was the most surprising one to me, followed by the alien-turned-car that led to 'novelty search' algorithms.
> Sometimes I think the surest sign that we’re not living in a computer simulation is that if we were, some microbe would have learned to exploit its flaws.
Someone needs to tell the author about quantum mechanics. It's not an unreasonable hypothesis to explain quantum effects as numerical errors in the Matrix.