Digital Fern Software

For the Digital Fern project I have been working on, I have been exploring the use of an array of sensors to drive its movements. Given the complexity of the structure, I was hoping to use a neural network to generate the control instructions for the Fern. While that goal is somewhat vague, even to me, I figured that if I could produce a software proof of concept, I could then work out a physical implementation.

How will the Fern behave?

  • The Digital Fern was built on the premise of finding unique ways to control a robotic object while rejecting the x, y, z coordinate systems that run many robots and gantry systems. Additionally, I wanted to imbue it with a kind of organic logic and movement. I used syringes as hydraulic cylinders, as this would reduce weight on the cantilevered portion and smooth out the jittering of stepper motors. For the logical portion, I also wanted to implement a system that was based (at least theoretically) in nature: a neural network. The inputs to the neural network would be sensor data fed to it by an array of photoresistors, and the branch would react to light by “reaching” towards it. The neural network became appealing when I considered how much data is drawn from the sensors and how many values need to be fed to the Fern to move it. The real challenge, however, is training: how would I tell it that it was moving correctly rather than incorrectly?
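To make the sensors-to-actuators idea concrete, here is a minimal sketch of the kind of feedforward network described above: three photoresistor readings in, three actuator values out. The architecture (one hidden layer of four units) and the random weights are my assumptions for illustration; the open question from above, what error signal would train those weights, is left untouched.

```javascript
// Hypothetical feedforward network: three photoresistor readings in,
// three actuator (syringe) positions out. Weights are random here;
// how to train them is the open question discussed above.
function makeNetwork(nIn, nHidden, nOut) {
  const rand = () => Math.random() * 2 - 1;
  return {
    w1: Array.from({ length: nHidden }, () =>
      Array.from({ length: nIn }, rand)),
    w2: Array.from({ length: nOut }, () =>
      Array.from({ length: nHidden }, rand)),
  };
}

const sigmoid = (x) => 1 / (1 + Math.exp(-x));

function forward(net, inputs) {
  // hidden layer: weighted sum of inputs, squashed to (0, 1)
  const hidden = net.w1.map((row) =>
    sigmoid(row.reduce((sum, w, i) => sum + w * inputs[i], 0)));
  // output layer: one value per actuator, also in (0, 1)
  return net.w2.map((row) =>
    sigmoid(row.reduce((sum, w, i) => sum + w * hidden[i], 0)));
}

// Three normalized light readings in, three actuator positions out.
const net = makeNetwork(3, 4, 3);
const actuators = forward(net, [0.9, 0.2, 0.4]);
```

Each output lands in (0, 1), so it could be scaled to a syringe's stroke length without any extra clamping.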

p5 Simulation

While that question remains unanswered, I developed a simple sketch that at least produces an effective steering behavior using three “sensor” values and a neural network. The values of the “sensors” are based on their alignment to or away from a target point. As they face towards it, their values increase, and when they face away, they decrease. The sensor body then compares the three values, and when the red sensor is strongest, it rotates clockwise. When the blue is strongest, it rotates counter-clockwise. When green is the strongest, it doesn’t rotate. In terms of movement, it accelerates when green is strongest and slows down when either red or blue is strongest. See it in action below:

  • Click the mouse to control the target.
  • Press ‘spacebar’ to begin (and end) training mode.
  • Press ‘control’ to turn backpropagation on or off.
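The steering rule from the paragraph above can be sketched without p5 at all. Each “sensor” value is modeled here as the alignment between that sensor’s facing direction and the direction to the target (1 when pointed straight at it, −1 when pointed away); the sensor offsets of ±45° are my assumption, not taken from the sketch.

```javascript
// Alignment of a sensor's facing direction with the direction to the
// target: cos(0) = 1 when facing it, cos(pi) = -1 when facing away.
function sensorValue(sensorAngle, pos, target) {
  const toTarget = Math.atan2(target.y - pos.y, target.x - pos.x);
  return Math.cos(toTarget - sensorAngle);
}

// Green points straight ahead; red and blue are offset to either side
// (the +/- 45 degree offsets are assumptions for this sketch).
function steer(heading, pos, target) {
  const red = sensorValue(heading + Math.PI / 4, pos, target);
  const green = sensorValue(heading, pos, target);
  const blue = sensorValue(heading - Math.PI / 4, pos, target);

  if (green >= red && green >= blue) {
    // On target: no rotation, accelerate.
    return { rotate: 0, accelerate: true };
  }
  // Off target: rotate toward the strongest side sensor and slow down
  // (+1 for clockwise when red wins, -1 for counter-clockwise for blue).
  return { rotate: red > blue ? 1 : -1, accelerate: false };
}

// Target dead ahead: green wins, so no rotation and full speed ahead.
const out = steer(0, { x: 0, y: 0 }, { x: 10, y: 0 });
```

This is only the decision rule; in the actual sketch those comparisons are produced by the network’s outputs rather than hard-coded branches.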

The code is available here:
