About
Notes from a series of conversations I had with my friend David, a neurologist, on the subject of brains, minds, and intelligent machines.
2020-05-24 v1.0, notes from our first conversation
2020-06-02 v1.1, with a few tweaks & clarifications
2020-06-06 v2.0, notes from our second conversation
Introduction
Like many, I have always had a slow-burning interest in the human brain and how the mind works. My understanding is that the most interesting mechanisms and details are still shrouded in mystery, but at the same time, for example, imaging technology is getting better every year. I've been meaning to get a handle on where we are today with our understanding of the brain and its astounding powers and behaviours.
With my tech background the broader context is, of course, artificial intelligence, and the ways in which we build smart behaviour in software systems. There are clear parallels between how we techies believe the human mind does its thing and how we do things for example in machine learning. One could almost say that all of computing is in a sense an exercise in trying to get the machine to do artificially what we do naturally in our heads.
To build supposedly intelligent (and certainly more interesting) systems, having some kind of an understanding of how the mind works should be useful. Conveniently, I know just the guy to ask about all things brain.
My friend David is a neurologist, a doctor specialising in the health and disorders of the brain and the spinal cord. He recently started a whole new chapter in his life, taking on a teaching position abroad. David also recently became a father, gaining that unique perspective on how a human mind develops. In other words, David has fresh experience in taking stupid questions and talking to people who don't know anything about anything!
I've been meaning to get David's take on all matters of the mind for a good while now. In the midst of the COVID-19 lockdown, I decided to call him up to see how he was doing, and to pick his brain on the character of the human mind. We quickly found some common ground.
This text is notes from my conversations with David. We started off in mid-May 2020 with some basic questions I had on the physiology of the brain and on select brain functions. We also explored some of the more philosophical territory, questions around the nature and characteristics of the human mind. Some of these things are just an online search away, but I was keen on getting David's own perspective.
As always, inaccuracies and misunderstandings and misrepresentations are all my doing, all the insightful bits are David's. I've edited and reorganised our conversations below for readability: what questions I had written down beforehand, what I asked David, what we ended up talking about, and what reads below are all different things. I hope the essence of our lively conversations still carries through.
First Conversation
Brain Matter
Everything I always wanted to know about the brain, but was too afraid(?) to ask.
What makes the human brain special? What do we have that other animals don't?
David agreed that one of the things that makes human brains special, to the extent that that is a reasonable assessment, is the frontal brain matter, the more recent stuff in evolutionary terms. The front stuff is in charge of key human cognitive abilities.
My rough mental model is that the human brain is built of layers. The oldest parts, in evolutionary terms, are in the "core", close to the brain stem. The further out you go in the brain, generally the more "recent" stuff and function you discover. In other words, all the fancy stuff is close to the edges, whereas most of the more automated functions — the ones we are not usually consciously aware of — are more central. David didn't seem to find this coarse approximation unreasonable.
David suggested that humans are special due to the breadth of our context in our experience of the world. We are capable of being aware of things at multiple levels simultaneously. I got the impression that awareness is a term or concept that David finds particularly useful in this sort of general discussion.
Another way to look at what makes human brains special is that humans have the capacity for complex review. We can perform multiple passes of processing on any given stimulus. If consciousness is awareness of stimuli, humans can not only react to stimuli, but also think about them, plan, and review. "Higher brain function" is about more purposeful action in the face of stimuli. Like with everything else, thinking and planning give an evolutionary advantage, which is why we have the capacity for them.
Indeed consciousness as a whole can be thought of as a post hoc construct, activity after the fact, activity long after a stimulus has occurred.
More on the evolutionary history of brain development. Is it so that a typical human brain (if there is such a thing) has all of the brain components we've ever discovered? Or put another way, is it so that no other animal has components that the human brain doesn't have? I mean octopuses, etc., have crazy nervous systems, but in terms of functional units, do we have all the parts?
David was a little reluctant to play along with my formulation of brain "parts". A highly compartmentalised, effectively phrenological view of the brain is not accurate: for just about any interesting behaviour, activations happen all over the brain.
That said, the functional model of the brain does suggest that certain regions are definitely more important for some functions. Functional MRI (fMRI) reveals all kinds of networks of coordinated activity. Some brain functions are fairly localised, and different parts of the brain are involved in specific things.
David pointed out that if a patient has a stroke in a region of the brain that is critical for some function, probably no amount of "network effects" will help restore that function. So there are critical locations for some brain functions, but that does not mean that there are general "centres" for macro behaviour. The brain layers play a role as well: a stroke in the older parts of the brain can potentially mess up more things than one in the more exterior levels. But then again, all the cool human stuff is in the upper levels. There are no "safe" regions for strokes: all of the brain is generally useful to keep in working order!
In summary, my question doesn't really make sense, as the brain is more of a network of functionality than a composite of distinct parts.
I've heard something about folds and grooves in the brain — surface area? Why is surface area so important?
The surface area, it turns out, is not really the point. The point is to maximise the amount of outer brain material, the newer stuff, while having in place the layered structure that is necessary for its function.
The brain's outer regions are full of grey matter, the brain material where the main neuron activity happens. Below the grey matter in the central brain is the white matter, which serves as a connection and communication network — a relay system — between different regions of the brain. All the grey stuff needs to link up directly with the white stuff, so you cannot just have a blob of grey and call it done: the top of the blob would be too slow to access, too removed from the rest of the brain. If we think of the brain as a carefully co-ordinated network of processing centres, having some slow-to-reach nodes would slow down the whole system.
Put another way: brain cells, neurons, are the clever bits in the human, the main building blocks of complex behaviour, so we want lots of those. But for a unit of grey matter, you need a certain amount of white matter and the other layers below it for support. You have a fixed amount of brain volume, because your head cannot be arbitrarily large (for one, babies need to be small enough for birth), but you also want to maximise the amount of grey matter. Folds and grooves are a way of maximising the packing of grey matter such that all regions can efficiently talk to one another. In a more uniform packing there would be regional inefficiencies.
Timing in the brain is weird.
In talking about the structure of the brain, David mentioned that timing in the brain is weird. For example, we know from EEG recordings that limb movement can be preceded by brain activity by hundreds of milliseconds or even full seconds — well before any desire to move is even perceived! Our brains are capable of different rates of activation.
I suggested that variable timing makes perfect sense from a computational point of view: first you calculate, and then you act. It's just that when it comes to human agency, it's simply unsettling when perceived action is perfectly predicted by brain activity beforehand.
I floated the idea of brain buffering I had heard from somewhere: if you touch your nose, you feel the effect on the nose and the finger at the same time, even though the signal for one has to travel through the arm.
David countered by saying that sometimes we bypass the buffer. If there are negative consequences from an action, like something that would cause pain, we do the corrective action well before our conscious mind gets involved. Indeed all reflexes, almost by definition, operate below the conscious threshold: we are capable of action before we complete the analysis of a situation in the upper levels of the brain.
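(To make the buffering idea concrete on the computing side, here's a toy sketch, entirely my own construction with made-up latencies: hold the faster signals back until the slowest has arrived, and release everything to "perception" together.)

```python
# Toy model of sensory buffering. The latencies are made-up
# illustrative values, not physiology.

SIGNAL_LATENCY = {
    "nose": 0.02,       # hypothetical short path to the brain, ~20 ms
    "fingertip": 0.05,  # hypothetical longer path through the arm, ~50 ms
}

def perceived_times(latencies, event_time=0.0):
    """Release all signals to 'perception' only once the slowest is in."""
    slowest = max(latencies.values())
    return {name: event_time + slowest for name in latencies}

print(perceived_times(SIGNAL_LATENCY))
# {'nose': 0.05, 'fingertip': 0.05} -- both felt at the same moment
```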
(Timing is a central challenge in digital circuit design, the implementation of circuit boards that power everyday electronics and all kinds of computing devices. Here's Matt Parker and a Domino computer nicely illustrating the challenges of timing.)
Interlude: Science and Society
Varying rates of activation provide a lens on media culture as well, even politics. How do you communicate effectively with an audience: through instant emotion, or through making your listeners think really hard? No amount of persuasion can convince a listener who has accepted an easy answer. Flying below conscious levels makes your message easy to digest. You react before the reviewing brain gets to do its processing.
(But surely clarity in communication is less work for the audience than an incoherent message? How does that work?)
(The readily accessible emotional level reminds me of Human Universals and Alan Kay's list of easy and hard things. Kay has many talks around the same subjects and I don't like his manner of presentation at times, but I think he has a taste for big ideas and their timelessness.)
I also argued briefly with David about the nature of science. My proposition was that empirical study and best-effort theories are, strictly speaking, enough for science, but David held the view that it's not enough to just fit the data. Science has an internal structure to it; the scientific method helps the development of culture and society precisely because it's not arbitrary. Science is about building blocks. The most treasured theories are not only useful, but extendable, capable of serving as a foundation for future work, for further elaboration and insight.
I wondered about the anthropocentric bias in scientific discovery. Is there any value in beautiful, elegant theories and symmetry and aesthetics in science? Maybe nature is just one big mess? In the end, I think I accepted the notion that it is at least useful to have a progressive dimension embedded in the scientific method.
Philosophy of Mind
Moving on from basic brain physiology, we switched gears and had a quick exchange on some of the classic questions that in a sense bridge our respective fields.
(Theory of mind refers to the body of work exploring questions around the origin and nature of mental states, the psychological and social dimensions of having a mind. Philosophy of mind is more explicitly concerned with the nature of the human mind and its relationship with the body, as exemplified by the mind-body problem.)
What's your take on the origin of mind? Is there an industry consensus, or are there different schools that brain practitioners subscribe to? Does anybody in medicine entertain, shall we say, a more dualistic view? Are you aware of, or even familiar with, any serious theory of mind that does involve something other?
(A kind of a leading question, I wanted to steer us in a certain direction, to set up a follow-up question, but we didn't quite end up going there.)
As expected, David wasn't having any of it. Nobody in medicine gives dualism any thought. It's all well and truly in the past — and has been for decades at least. So no spirits or souls or essences. I gather that only a purely physicalist interpretation of the mind has any modern clinical use.
Lumping the word 'transcendental' in with the others, I got David to offer that the experiential dimension is more of a separate question. There may be traces of more dualistic thinking in psychiatry, where the nature of pathology is quite nuanced. What is "normal" or real is not a clear-cut thing. The relationship between the individual experience of things on one hand, and generally accepted notions of reality on the other, can be subtle.
(I'm pretty sure I claimed that even legendary neurologist Oliver Sacks arguably held, shall we say, an anti-mechanistic view of the human mind.)
To me, the substrate for a mind is not that important. "There's nothing special about the brain." The ability to produce emergent behaviour seems to me like the key thing. Where do you stand?
I wanted to talk about my hero Douglas Hofstadter and tell David about his off-mainstream ideas about artificial minds, but we didn't quite get there. Instead we had a splintered discussion on some related material.
Hofstadter, like his PhD student David Chalmers, is probably fairly positioned as an emergent materialist, which arguably falls on the dualist side, specifically under property dualism. Some higher order emerges from processes in the brain. (However, no supernatural soul, or any other transcendental element, is required for the brain's operation.)
From emergent materialism we can take a few steps in the terminological minefield to land in biological naturalism, a theory advocated by philosopher John Searle. According to Searle, all mental phenomena are higher-level features of the brain, ultimately explained by neurobiological processes. It's just that some of them are accessible only in the first person. Searle thinks his views are incompatible with property dualism, but as far as I can tell the distinction is quite subtle.
Searle is probably most famous for the Chinese Room Argument, a thought experiment in which he, knowing no Chinese, is alone in a room, following a program's instructions for manipulating Chinese characters slipped under the door. By mechanically following the rules, he is able to send appropriate Chinese responses back out under the door, leading those outside to believe that there is a Chinese speaker in the room. Searle goes on to argue that 1) the appearance of language understanding is not the same as real understanding, and 2) because computers can only do superficial syntactic symbol manipulation, as in the Chinese Room, no strong AI beyond simulation of biological processes is possible.
The Chinese Room Argument is a direct attack on the Turing test, the most famous challenge in all of computing. The Turing test is a test of machine intelligence, functionally an imitation game, where the task for the computer is to pass for a human in online chat. Turing argued that if a machine passes such a test, we should grant that it is intelligent.
David offered first-hand reports that training a human brain takes a long time! One shouldn't expect too much from a simple computer program.
I countered that at the same time, a program probably doesn't need to replicate all aspects of human cognition and human brain systems, because a program exists in a completely different domain. A program could "grow up" in some environment that is far simpler than the messy reality that humans have to deal with. And yet, the program could produce something interesting from those modest circumstances. Maybe enough to pass the Turing test, who knows?
Dreams
I wanted to build more bridges between classic brain stuff and modern computing, so I asked David about dreams.
What are dreams? What do we believe is their true function?
David makes a distinction between our perception of dreams and the underlying processes. Our experience of dreams, the putative content of our dreams, is effectively just noise. When we dream, we somewhat arbitrarily tune in on the activity generated by the underlying processes. Perhaps dreams are simply us placing undue importance on whatever activations our brains happened to be making while we were sleeping.
Dreams come in cycles, and the frequency of the dreamy REM sleep increases towards the end of the night. If you wake someone up in the middle of REM sleep, they can say what they were dreaming about, because whatever they experienced just happened to be the state of the brain at the time. We give dreams weight, afford them importance in our mental state, because we typically process them first thing in the morning.
Is it so that when we dream, we literally /see/ the dream? Do we use the same hardware, but in a different way? Do we /see/ dreams because much of the brain's hardware is vision oriented?
My mental model of the vision system in the human brain is that it is a series of hierarchical detectors. From edges to corners to features to larger structures. Further, quite a good chunk of brain machinery is somewhat specialised to do this image processing. I believe David found this agreeable, but also noted that it's not clear whether there are special grandmother neurons at the top of the hierarchy.
In short, yes: the visual cortex has been found to activate during sleep.
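To make the detector-hierarchy picture concrete on the computing side, here is a minimal sketch of the lowest rung: an edge detector written as a 2D convolution, the same primitive that convolutional neural networks stack into hierarchies. The kernel and the toy image are illustrative choices of mine.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive valid-mode 2D convolution."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A Sobel-style vertical-edge detector.
vertical_edges = np.array([[1, 0, -1],
                           [2, 0, -2],
                           [1, 0, -1]], dtype=float)

# Toy image: dark left half, bright right half, i.e. one vertical edge.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

response = convolve2d(image, vertical_edges)
print(np.abs(response).max(axis=0))  # strongest response at the edge columns
```

Detectors for corners, textures, and larger features would then be built on top of the outputs of detectors like this one.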
What about sleep in general?
On the purpose of dreams and sleep more generally. I told David that I understand sleep as a maintenance scheme where the brain switches off some awareness circuitry in order for subsystems all over the body to run their maintenance routines.
I told David about garbage collection, a computer memory management technique available in many modern programming languages. The idea in GC is that an automated process periodically clears out temporary memory objects and structures that are no longer needed, and re-organises the objects that are still in active use. The analogy is that sleep is the garbage collection process of the brain. The rest of the body can run similar maintenance routines when the supervisor is switched off.
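Here is the classic demonstration in Python, one such language, using the standard gc module: a reference cycle that plain reference counting cannot reclaim, swept up by a collector pass.

```python
import gc

class Node:
    def __init__(self, name):
        self.name = name
        self.other = None

a, b = Node("a"), Node("b")
a.other, b.other = b, a      # a reference cycle
del a, b                     # no outside references remain...

unreachable = gc.collect()   # ...until the collector runs a pass
print(f"collector freed {unreachable} objects")
```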
I offered that perhaps dreaming is just inadvertent stimulation of the visual circuitry in the brain, a side effect of some kind of "garbage collection" taking place in the brain.
David wasn't entirely convinced. In his view dreaming is probably more like a review process, where recently recorded memories are selectively chosen for longer term storage. Some kind of a weighting mechanism directs engram formation, the theorised process of persistent memory-making.
To Freud, dreams were manifestations of the repressed stuff in our brain? Is Freud still in the picture?
A brief exchange on Freudian theory — David is not a fan. He told me about taking a stand against Freud back in his uni days. To him there's a good deal of slack in psychoanalysis, plenty of room for an unscientific, unfounded practice. Things can be stretched too far when it comes to, for example, the interpretation of dreams and the chasing of supposedly repressed thoughts. David doesn't have a problem with psychoanalysis per se, but he does find the Freudian method somewhat groundless and difficult to build on.
David was quite decided on Freud and so I found myself almost defending Freudian analysis. To me, the method offers a means of unlocking or dislodging whatever brain constructs are not being appropriately garbage collected and processed. From a Freudian point of view, the therapy is in the surfacing of the unconscious thing, in order for the brain to be able to do away with the thing that's interfering with waking mental states. It's the same problem with computers: sometimes you have objects that just stick around, because some process is needlessly referencing them. Memory leaks are found and fixed with the help of profiling tools.
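To push the computing side of the analogy one step further, here is what such a profiling pass could look like in Python, using the standard tracemalloc module to diff memory snapshots and surface whatever is sticking around. The leak here is deliberately planted.

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

leaky = []                           # a deliberately planted "leak"
for i in range(10_000):
    leaky.append("payload " * 10)    # objects that needlessly stick around

after = tracemalloc.take_snapshot()
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)                      # top allocation sites with size deltas
```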
DeepDream and image hallucinations.
To give back a little, I told David about DeepDream, a computer vision program that generates images by repeatedly amplifying detected features, yielding nightmarish over-processed images that resemble artworks done under the influence of hallucinogenic drugs. Here's a popular science take on the system: When Robots Hallucinate.
DeepDream is a deep learning system that is first trained on an image dataset, and then "run backwards" by fixing the weights and having the network generate new images from a seed pattern. With iteration, using the previous output as the next seed, these images take on a certain visual aesthetic.
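A heavily condensed sketch of that loop, assuming PyTorch and a pretrained torchvision model (the real system also used tricks like octaves and jitter, all omitted here):

```python
import torch
import torchvision.models as models

# Fix the network's weights; only the image is allowed to change.
model = models.vgg16(weights="IMAGENET1K_V1").features[:10].eval()
for p in model.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # the seed pattern

for step in range(20):
    loss = model(image).norm()   # "amplify whatever these layers detect"
    loss.backward()
    with torch.no_grad():
        image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()
        image.clamp_(0, 1)       # keep pixel values in a displayable range
```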
The analogy is that running DeepDream, a visual neural net with a scrambled internal state, is similar to an artist with a brain scrambled by a psychoactive drug. Reversed detector activations as hyperactive brain chemistry — two systems producing surreal pictures and unsettling animations.
DeepDream got started as an attempt to simply visualise the state of the neural network under development. The artistic dimension emerged from the engineering. The developers were trying to look at different layers in the network and just projected out what was there, a kind of a sampling of the space of detectable features.
In neural nets research, the quest is on for finding the right representations, the right building blocks for interesting behaviour. DeepDream showed that the state of a trained network can give rise to a whole new family of representations. These representations carry a meaning of their own, one greater than that of all the contributing building blocks. How does that meaning enter the picture?
Second Conversation
Our second conversation, somewhat more free-flowing, again started off with physiology, but soon transitioned into a discussion about the nature of consciousness.
Measures and Morphology
One metric I hear talked about sometimes is the ratio of brain volume to body volume. Why is it that absolute brain size doesn't give you an advantage? (Or I guess measuring advantage is a little suspect.) There are mammals, even land mammals, with larger brains than humans have, but there's no contest.
David immediately pointed out something I had completely missed: big animals have bigger brain bits, even the neurons are larger. Somehow I had never considered that the constituents of brain matter itself would vary in size. David mentioned the squid giant axon and its role as one of the earliest nervous tissues studied, thanks to its relatively large size, stability, and ease of handling.
I saw an analogy with electrical circuits: a thick wire can carry a bigger current. With a larger cross-section, there's less resistance. Brains do electrochemistry — bigger animals need a different kind of circuitry. It's all about passing signals, getting the neuronal impulse moving.
In short, thinking in volumes gives you the wrong measure. If you want to compare a human and a whale, what you really want is some kind of a measure of computational capacity. Contributing factors could be things like the number of nerve cells per unit volume, the functional units involved, etc. The organisation and interconnectivity matter a lot. Whales may have big brains, but things like neuronal density and degrees of connectivity are likely more important for computation.
David also pointed out that there are physical limits at play when it comes to big brains. There are downsides to having bigger nerves. Big parts may be less efficient, and one needs more resources to make them. These things manifest as evolutionary, biological constraints.
(Various cross-species brain comparison measures, such as encephalisation quotient, have been proposed and studied. This overview has some pointers. I suppose my broader interest here is with measures of brain power that are not strictly tied to the physiology.)
Not all brain matter is the same.
Another thing David pointed out is that neurons are not homogeneous. There's variety in the types of neurons: different morphologies, structural specialisation, a whole range of sizes and shapes.
There are also cellular structures in place supporting neuron activity. For example, the spinal cord has special cells for input and output. There are things like the myelin sheath, a form of insulation, just as with insulated electrical wires. (The autoimmune condition MS damages this sheath, interfering with signal passing in the nervous system.)
The computational simplification of the neuron as a straightforward homogeneous functional unit is useful for some models, some analogies, but the reality is much more complicated. Perhaps we, in tech, should be looking at the variety of neurons to see if there are functional benefits from specialisation in a more purely computational setting as well. It's a grand old mess in the brain, and we don't fully know the mechanisms. Maybe even the non-neuronal cells play a key role in the brain's operation.
After all, the difference in shape and size should be meaningful. Physical structure contributes to function. It's all the physics that makes chemistry complicated!
Complexity
The correct way to think about the brain is something like a functional network of hubs and nodes. There are regions of specialised processes, all interconnected into a whole. Coordination among regions gives you a more capable brain.
One big network vs multiple specialised networks
Evolution led to a brain structure built on coordinated specialised areas. Is that always what you want, or could a single big network do the job as well or better?
I mentioned that in computing we are often interested in optimal algorithms, where we consider the asymptotic behaviour of an algorithm, how its runtime changes as we increase the size of the problem instance. This presupposes some model of computation.
At other times, where the task is particularly complicated and does not admit a clean mathematical definition, we use more "fuzzy" methods, such as genetic algorithms and other heuristic techniques. These methods often borrow concepts and terminology — computational analogies — from biology and nature at large.
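To make the borrowing concrete, here is a toy genetic algorithm; everything about it (the all-ones target, population size, mutation rate) is an arbitrary choice for illustration.

```python
import random

TARGET = [1] * 20   # arbitrary goal: the all-ones bit string

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]           # selection: keep the fittest
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(30)]

best = max(population, key=fitness)
print(f"generation {generation}: best fitness {fitness(best)}/{len(TARGET)}")
```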
Whether the brain is functionally successful depends on what you are trying to optimise for. David mentioned neuroscientist Daniel Wolpert, who has argued that complex movement is really what the brain is all about. If the brain is optimised for anything, it's towards something like the ability to adapt to circumstances, so we can move around in the world. And as always with evolutionary processes, it's sex that drives the action. That is the big picture.
We pushed this a bit further. David pointed out that this line of reasoning leads to an interesting notion of predestination. Of all the things that can walk and have sex, somehow we were the ones selected for. Reproduction really is the endgame. All this culture and art and humour is just a byproduct. Why have it at all? Are we inevitable? It's quite a humbling reminder of the arbitrariness of the universe.
In a sense the point is that there are, there certainly could be, other ways of meeting the environmental requirements. Evolution just brought us where we are.
Could there be another kind of brain? Could there be a less messy brain, without all the baggage? A different kind of brain that is perhaps a better match for a particular computational view?
For building an artificial mind, reverse-engineering the messy brain we have may not be the way to go. Our brains are not optimal for all kinds of tasks. It's a balance between all the interesting things that our human brains can do, all the complex behaviour, versus the maintenance cost of all the brain structures.
Can we build complexity from simple building blocks? Are there computational Lego from which cognitive powers can be built?
David argued that in a sense this is already known to be true. Neurons, the activation mechanism, synaptic signals — it all does kind of boil down to those mechanisms. Complexity is built in layers on top of the neuron structures, the way neurons are put together. It's all about iteration, repetition, layering — accumulative complexity.
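The computational cousin of this argument is the artificial neuron: a weighted sum followed by a nonlinearity. A single such unit famously cannot compute XOR, but stack two layers of them and the problem dissolves. A minimal sketch, with weights hand-picked by me for illustration rather than learned:

```python
import numpy as np

def layer(x, W, b):
    """One layer of identical simple units: weighted sum + ReLU."""
    return np.maximum(0, x @ W + b)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Hidden layer: one unit fires on "at least one input on", the other
# only on "both inputs on".
W1 = np.array([[1.0, 1.0], [1.0, 1.0]]); b1 = np.array([0.0, -1.0])
# Output layer: "at least one, but not both" = XOR.
W2 = np.array([[1.0], [-2.0]]);          b2 = np.array([0.0])

print(layer(layer(X, W1, b1), W2, b2).ravel())  # [0. 1. 1. 0.]
```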
Emergence
So we are the result of runaway complexity escalation. I offered emergence, one of my favourite things to think about, as the umbrella concept for this notion of layered complexity built on top of simple building blocks. Local behaviour translating in aggregate into large-scale macro phenomena.
When birds flock, what we observers see is the spectacular macro behaviour. Do birds have perception of the whole, or simply of their local situation? It would be a lot to take in for sure.
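The classic computational demonstration here is Craig Reynolds' boids: simulated birds following a few purely local rules (stay close, align, don't crowd), with flocking appearing only at the macro level. A bare-bones one-dimensional rendition, with all the coefficients picked arbitrarily:

```python
import random

N, STEPS = 10, 50
pos = [random.uniform(0, 100) for _ in range(N)]
vel = [random.uniform(-1, 1) for _ in range(N)]

for _ in range(STEPS):
    new_vel = []
    for i in range(N):
        # Each bird only sees neighbours within 20 units: local information.
        nbrs = [j for j in range(N) if j != i and abs(pos[j] - pos[i]) < 20]
        if nbrs:
            cohesion = sum(pos[j] for j in nbrs) / len(nbrs) - pos[i]
            alignment = sum(vel[j] for j in nbrs) / len(nbrs) - vel[i]
            separation = sum(pos[i] - pos[j] for j in nbrs
                             if abs(pos[j] - pos[i]) < 5)
            new_vel.append(vel[i] + 0.01 * cohesion
                           + 0.1 * alignment + 0.05 * separation)
        else:
            new_vel.append(vel[i])
    vel = new_vel
    pos = [p + v for p, v in zip(pos, vel)]

print(f"flock spread: {max(pos) - min(pos):.1f}, "
      f"mean velocity: {sum(vel) / N:+.2f}")
```

No bird in that loop has any notion of "the flock"; the spread and common velocity reported at the end are quantities only the observer computes.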
The role of the observer is quite significant here. Grand movement is not meaningful in itself; we are just wired to see the big thing, the gestalt of it all: the whole is more than the sum of the parts. We provide the significance after the fact.
Where does the extra come from?
We revisited our earlier discussion on dualism: if the whole is more than the sum of the parts, where does the extra come from? Is it real?
For David, "dualism" implies a belief that the extra is somehow magical (for lack of a better word), whereas the strictly physicalist (anti-dualist?) view is that the question is meaningless: the sum is just a perceived thing, a sort of an illusion that is always backed by the actions of the parts.
The thing is that language really kind of fails us here. There are many difficult-to-nail-down concepts and terms at play. This line of thinking borders on reductionism and reducibility, for example. And then there's the notion of causality as well. And all this ties in with the importance of the observer.
David was sketching out a definition for dualism where predictability or "forecastability" is the decisive element. If all the pieces are fully understood, then we should be able to work out the macro scale effect. No magic necessary, regardless of the amazing emergent spectacle before us.
In other words, a divide something like:
- Mechanistic: If you can understand all the bits, you can predict the new thing that appears, versus
- Dualistic: You can understand all the bits, but you still can't predict the macro effect (because magic).
Of course, even in the strictly material world things are quite fuzzy. Quantum stuff makes physics weird; nature is inherently non-deterministic. We have complex, nonlinear systems and chaos, where predictability quickly goes out the window once the initial setup is nudged a little.
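The canonical toy example of that last point is the logistic map in its chaotic regime: nudge the starting value by one part in ten billion, and the two trajectories soon have nothing to do with each other.

```python
# Logistic map x -> r * x * (1 - x) with r = 4.0 (chaotic regime).
r = 4.0
x, y = 0.3, 0.3 + 1e-10   # two starting points, "nudged a little"

for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.2e}")
```

Within a few dozen steps the difference is of the same order as the values themselves: prediction has gone out the window.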
Towards Consciousness
We may think consciousness is important, but perhaps it's not. David argued that emergence is not a controversial view when it comes to the operation of the mind. Consciousness is not a big deal! In the grand game of emergence, it's just another layer of perception. We apply or perhaps supply the consciousness distinction.
Is there a next step?
David offered a great argument in support of a "boring", non-magical view of the emergence of consciousness. We are not the final stage of evolution. So what happens next in evolutionary terms? How would a more evolved brain gain its powers, take things to "the next level" of processing?
Even without an understanding of the evolutionary pressures involved, the new brain is going to be built on top of the existing one. Would we need more magic to level up, or do we just keep on doing this incremental stuff that nature has been doing since always? If nature shapes and "optimises" smoothly in this way, why would magic ever be needed?
Indeed "level up" is not correct: consciousness is not a staircase. "Levels" implies a discontinuous process or scale —a continuous spectrum seems more natural. We are not fundamentally, magically different from chimps, say. There are variations across the animal kingdom, and within species as well. It's got to be a continuous spectrum, not a series of discrete cognitive steps on a brain scale.
All it is, is that we've just become slightly more evolved, slightly more capable, slowly and gradually. We humans have in a sense more consciousness, more awareness, than our fellow animals. Consciousness is not a binary thing.
Spectrums
Our conversation was already trailing off, but we still tackled some of the biggest foundational questions.
We can study very stupid things that are somewhere on the conscious spectrum. We can model simple things. Movement and sensing and all that good stuff lands you a spot on the spectrum of consciousness, depending of course on how you define it. From single cell creatures to complex primates, everybody is accounted for.
What's the minimum thing?
Simple creatures got me thinking about the spectrum of life in general, the elements of life. David was saying that life requires no purposefulness: moving and reproducing is pretty much all you need. There are even creatures without a reproductive core that can be thought of as being alive. Some scientists are looking for whole new forms of life. (I recall hearing about non-carbon biochemistry.)
This is of interest to me because with AI, the task really is to create something "animated" from inanimate matter. The Hofstadterian view is that the notion of self, particularly self-reference, is central to being alive in this way.
But is it necessary to be alive to be conscious? Or to be conscious for human-level behaviour?
Where does consciousness begin?
David emphasised the role of monitoring as the building block of consciousness. In a sense the business of life is to come up with what we call consciousness. We monitor the interplay between sensory perception, the environment, and the actions we take. Perhaps something like allocation of intent could serve as a working measure for consciousness.
(As it happens, a kind of intentionality, attention, has been a big thing in neural nets in recent years.)
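The mechanism itself is small enough to sketch in a few lines of numpy: each query scores every input, a softmax turns the scores into weights, and the output is the weighted mix. Shapes below are arbitrary toy values.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over inputs
    return weights @ V                               # weighted mix of values

Q = np.random.randn(2, 4)   # 2 queries
K = np.random.randn(5, 4)   # 5 keys, one per input
V = np.random.randn(5, 3)   # 5 values, one per input
print(attention(Q, K, V).shape)   # (2, 3): one blended value per query
```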
But, I was wondering, in what sense are robots not conscious then? I argued, with a Hofstadterian line, that self-reference is all you really need to establish a spot on the "mind spectrum". To me, having a sensory apparatus, being able to sample the environment, feels secondary.
David countered by suggesting that awareness of sensory stimuli could work as a definition for self-reference, coming back to monitoring. The self is just another layer; it doesn't really matter how many levels you have, it's all just processing. If there are enough levels of monitoring, then eventually you get something like consciousness.
On the other hand, living things do many actions without conscious thought. A big percentage of what even humans do is unconscious. But I wonder how much of the interesting human behaviour is conscious.
And if you are a computing system, what's the minimum thing you need to do to climb on to any of these spectrums?