Saturday, April 18, 2015

Natural examples of Subsumption Architecture-like control systems (part 1?)


In this blog post I want to do two things.

Firstly, I want to give a brief overview of the so-called Subsumption Architecture -- a kind of robot control system developed in the 1980s, primarily by the roboticist Rodney Brooks. I give this short introduction so that I won't have to do it again on this blog. The topic is going to come up again, and I'd like to point to this post rather than reiterate it (I could also point you to his papers, which are extremely readable).

Secondly, I want to use these posts as a repository for biological examples of Subsumption Architecture-like control systems.

Subsumption Architecture in brief

The SA is a bottom-up approach to designing robotic control systems that proceeds by the accretion of complete behaviour-producing layers. Each layer is complete in the sense that it is a total “activity producing subsystem” that takes an agent from perception through to action. During the design and implementation of these layers, their adequacy is tested in the real world (the actual environment that the robot will inhabit). There are two distinguishing features of the Subsumption Architecture.

First, unlike traditional models of intelligent action -- where all information from the sensors filters through to a central processor (after being converted into suitably neutral representations of the world) and is used to plan a series of actions -- each module that makes up a behaviour-producing layer is potentially connected directly to both sensors and actuators. The SA is also explicitly anti-representational, in the sense that there are no centrally stored representations (think data structures) shared between the behaviour-producing layers. This aspect of the SA is the topic for a follow-up post.

Secondly, the behaviour-producing layers are arranged into a hierarchy (the “Subsumption hierarchy”). The lowest levels implement the most basic (but complete) behaviours, such as basic movement and collision avoidance, and each subsequent layer that is added will co-opt, subsume, or, often, simply disable the behavioural competencies provided by the lower levels. The robot's entire behavioural repertoire is not “encoded” in any one place but rather emerges out of the interaction of several different activity-producing layers. Communication between the layers is one-way, from the higher layers down to the lower, and consists primarily of simple signalling (on, off, possibly a short bus able to represent a handful of numbers/states).
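
As a rough sketch of the idea, here's a toy priority scheme in Python. It's my own illustration, not Brooks's actual implementation -- his layers were wired together as networks of augmented finite state machines with explicit inhibition and suppression links, with no central loop like the one below -- but it captures the way a higher layer that wants control silences everything beneath it:

```python
class SubsumptionController:
    """Layers are ordered lowest to highest. On each step the highest layer
    that produces a command wins, suppressing all the layers below it."""

    def __init__(self, layers):
        self.layers = layers  # index 0 = most basic complete behaviour

    def step(self, sensors):
        for layer in reversed(self.layers):  # highest priority first
            command = layer(sensors)         # each layer: sensors -> command
            if command is not None:          # None means "nothing to say"
                return command
        return "idle"

# Two behaviour-producing layers, each a complete sensors-to-action mapping:
controller = SubsumptionController([
    lambda s: "wander",                            # layer 0: basic movement
    lambda s: "retreat" if s["threat"] else None,  # layer 1: overrides layer 0
])
print(controller.step({"threat": False}))  # wander
print(controller.step({"threat": True}))   # retreat
```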

The examples


Kent Berridge, in his masterful review paper "Motivation Concepts in Behavioral Neuroscience" (Berridge, 2004), describes Dethier's "Hungry Fly".
A fly has two eating reflexes. When a fly lands on a food source, an excitatory reflex engages eating behaviour. When its stomach is full, a second reflex is engaged that inhibits eating behaviour.
Dethier was actually able to disable this second, inhibitory, reflex by severing a sensory nerve attached to the fly's gut. With the nerve severed, the excitatory reflex was never overridden by the inhibitory reflex (i.e. the inhibitory reflex was never engaged), and so the fly would eat until its stomach burst.

The excitatory and inhibitory reflexes involved in the fly's eating behaviour have clear similarities to the kind of hierarchical overrides that drive action selection in the Subsumption Architecture: lower-level behavioural modules are engaged until overridden by a higher layer -- by being either co-opted or simply disabled.
Think, for example, of a simple robot whose simplest behaviour is to drive forward. Simply drive forward. This isn't a particularly useful behavioural profile because, unless the robot lives on an infinite, smooth plane (or sphere, or whatever, as long as there are no obstacles), it is going to run into some obstacle and get stuck, forever.
Now we add a second behavioural layer to this robot -- a small sensor along its front that can detect when the robot butts up against an obstacle. This layer disables the first layer so that the robot doesn't keep blindly racing forward, and then -- say -- engages only some of the robot's wheels (or legs, etc.) so that it turns in a random direction until the sensor no longer detects an obstacle. The second layer is then disengaged and the first layer is free to make the robot run off straight until another obstacle is detected.
If some techno-Dethier came along and severed the wires running from our robot's sensor, it would run into an obstacle and be stuck forever (not quite as grim a fate as our Hungry Fly's exploding stomach)1.
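
To make the example concrete, here's a self-contained toy simulation of that two-layer robot (the sensor values and motor commands are made up for illustration; a real robot would read actual hardware):

```python
import random

class DriveForward:
    """Layer 0: the simplest complete behaviour -- just drive."""
    def command(self, bump):
        return "forward"

class AvoidObstacle:
    """Layer 1: when the bump sensor fires, override layer 0 and turn in a
    randomly chosen direction until the sensor clears."""
    def __init__(self):
        self.turn = None

    def command(self, bump):
        if not bump:
            self.turn = None
            return None          # stay silent; layer 0 keeps control
        if self.turn is None:
            self.turn = random.choice(["left", "right"])
        return "turn-" + self.turn

layer0, layer1 = DriveForward(), AvoidObstacle()

def step(bump):
    # Layer 1 gets first refusal; layer 0 runs only while it stays silent.
    return layer1.command(bump) or layer0.command(bump)

for bump in [False, False, True, True, False]:
    print(bump, "->", step(bump))

# A techno-Dethier severing the sensor wire amounts to bump being stuck at
# False: only layer 0 remains, and the robot drives into the first obstacle
# and stays there.
```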

The housefly's flying behaviour is also reminiscent of the Subsumption Architecture (Clark, 1997), in that there is no central control mechanism that "chooses" flying behaviour out of a range of possible behaviours. Essentially, there are sensors in the fly's feet that are connected directly to the wings. When the fly's feet are no longer in contact with some surface, the wings begin to flap.

Another interesting example is predator evasion in noctuid moths (McFarland and Bösser, 1993). Noctuids are preyed upon by bats, and the moths' auditory system is exquisitely attuned to their predators' echolocation, being able to sense the bat's distance and direction, and whether it is approaching. If a bat approaches within a few metres of the moth, nerve cells from the auditory system send the moth's wing muscles into spasm, causing it to fly erratically and drop towards the ground, hopefully evading the bat. Here we again have a fairly complex behaviour triggered directly by environmental stimuli -- at no point is the behaviour selected by a central control mechanism, and there is no internal representation of the external world at all. Further, we see the typical hierarchical overriding that's the defining characteristic of Subsumption Architecture action selection.

These examples can easily be multiplied (see, for example, David Spurrett's discussion of the sea slug's eating habits), and the fact that we're able to point to natural control systems that exhibit similarities to the Subsumption Architecture is evidence that Brooks et al. were on the right track, at least for certain classes of behaviour.
The real question is how far we can go with the Subsumption Architecture. Can we get from "insect level" intelligence all the way to human-level intelligence simply by scaling the Subsumption Architecture? I don't think that there's yet a convincing argument against the possibility, but I think it's unlikely (although I'll leave that question for a future post, and a paper, and a thesis).

Notes:

1. It's important to note that I'm not suggesting that the competing reflexes are in fact implemented as behavioural modules overriding each other, just that the way the two behaviours are engaged -- landing on food, fullness of gut -- and interact -- the inhibitory reflex "switching off" the excitatory reflex -- is reminiscent of the Subsumption Architecture. One could easily imagine this behaviour being implemented in an SA.

References:

Berridge, Kent C. "Motivation concepts in behavioral neuroscience." Physiology & Behavior 81.2 (2004): 179-209.

Brooks, Rodney A. "Intelligence without representation." Artificial Intelligence 47.1 (1991): 139-159.

Clark, Andy. Being There: Putting Brain, Body, and World Together Again. MIT Press, 1997.

McFarland, David, and Tom Bösser. Intelligent Behavior in Animals and Robots. MIT Press, 1993.
