The simplest and most intuitive definition of consciousness is that it is _what it is like to be_ something. This definition is due to Nagel, who famously asked in his essay: "_What is it like to be a bat?_". If there is something it is like to be a bat, then the bat is conscious.
Since consciousness is a broad, catch-all term, let's define the kind of consciousness we're most interested in as **primary consciousness, which is simply the ability to experience**. Primary consciousness is also called sensory or phenomenal consciousness. It does NOT necessarily include higher-order capabilities such as reflection, self-awareness, cognition, or long-term memory.
#### Components of primary consciousness
There are many ways to categorize the components of primary consciousness, but we will use the categorization used by Todd E. Feinberg and Jon M. Mallatt in their book _Consciousness Demystified_.
At a very broad level, primary consciousness requires two components:
1. **Image-based consciousness**: the mapping of incoming sensory data onto isomorphic maps within the brain, so that a picture of a unified world can be developed inside consciousness
2. **Valence-based consciousness**: the generation of global positive or negative emotions corresponding to approach or aversion attitudes. Essentially, the ability to feel likes/dislikes at a very fundamental level
Our exteroceptive senses of sight, sound, touch, smell, and taste bring in data from distant parts of the environment and create isomorphic maps within the brain, which let the organism build a picture of the world, identify various objects, and locate itself in relation to those objects. It's interesting to note that the constructed world is 3D, and sensory qualities often have a precise location within it (the red apple is there, the fragrance is coming from the left, the sound is distant, and so on).
The valence-related parts of our brain interpret sensory and internal data to trigger "global" valence states such as pleasure, fear, anger, frustration, and so on. This valence data is needed to inform the organism's actions. Without valence, the organism would have a world but not know what to do within it. Note that these valence states typically have no location: "we do not feel happy in our feet".
There are also interoceptive senses, which bring data from inside our body into the brain. In a way, these senses share attributes with both image-based and valence-based consciousness. E.g., when we're full, we feel satiated (a global valence state). But when we have a pain in the body, we feel it somewhere inside the body (even though its localization is not as precise as with the external senses; e.g., we can't tell whether the pain is in the stomach, the appendix, or some other internal organ).
#### Characteristics of primary consciousness
We can sketch out the following characteristics of primary consciousness:
- **Subjective**: the experience is always private to the individual. We can't look at someone's brain and thereby experience what that individual is experiencing.
- **Referral**: even though the experience is generated inside the brain, it always feels like it's happening out there in the world.
- **Unity / binding**: how do discrete neurons create a fully unified and continuous world? We don't see a grainy world, as one might expect given that neurons are discrete units inside the brain. How this happens is often called the _binding problem_ or the _grain problem_.
- **Causation**: how does subjective consciousness ultimately cause actions in the objective world?
- **Qualia**: how do phenomenal qualities take shape? Why does _red feel like red_ and not some other color? Why does sound _feel_ completely distinct in nature from, say, touch, taste, or sight? This problem is often called the "_hard problem of consciousness_".
#### How to tell which organisms are conscious?
In the book _Consciousness Demystified_, the authors give a list of attributes that we can check in order to decide which organisms are conscious and which aren't. The lists of attributes differ for image-based and valence-based consciousness.
At a very high level, **image-based consciousness helps the organism figure out the environment (including where it is within it), and affect/valence-based consciousness helps it maintain homeostasis within that environment by approaching/avoiding relevant things**.
##### Image-based consciousness
To detect **whether an organism has internal mental imagery**, we can look for the following attributes:
- **Detailed sensory input**
- Unless there's comprehensive sensory input, no mental imagery can be developed. E.g., worms have very simple light-detecting patches, while fish have camera eyes that take in a lot of photons.
- **Neural architecture for isomorphically mapped, multisensory representations**
- In order to create a world inside the head, there must be an isomorphic map inside the brain, and for a unified world to exist, data from multiple senses must converge inside the brain.
- **Multimodal discrimination of objects**
- If objects in mental imagery are constructed by integrating multiple senses (sight, sound, taste, touch, etc.), then the organism should be able to distinguish objects using data from more than one sense at a time (e.g., which of the following will be approached first: an object that merely smells like food, or an object that smells, looks, and sounds like food? If it is the latter, we know there must be a map inside the head).
##### Affect / valence-based consciousness
We need to figure out whether the organism has genuine desires. For that, we want to identify **purposeful, planned, anticipatory behavior rather than simple reflex behavior**. (A toy sketch combining this checklist with the image-based one appears after the list below.)
- **Operant learning** (and not just classical conditioning)
- See if the organism is capable of operant learning, in which a new full-body behavior (one the organism has never performed before) can be learned. If the organism learns something like "if I need the food, I need to do X", then it likely has reward/motivational systems working inside the brain that can sculpt appropriate motor commands to satisfy those motivations.
- **Behavioral trade-offs**
- For example, weighing the benefit of food against an increased risk of predation. Trade-offs like these mean the organism is integrating valence states.
- **Frustration behavior**
- Do negative emotions leave a hangover even after the negative stimulus is taken away? If so, there's probably a global emotion state being maintained inside the brain.
- **Self-delivery of pain relievers or rewards / approaching drugs (like ethanol)**
- This suggests that the organism represents valence states, since these actions mechanistically intervene to push valence states toward the positive.
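To make the two checklists concrete, here's a toy sketch that encodes them as a simple decision procedure. To be clear, this is purely illustrative: the attribute names, the boolean simplification, and the "any two affect markers" threshold are my own choices, not anything from Feinberg and Mallatt.

```python
from dataclasses import dataclass

@dataclass
class Organism:
    """Toy profile of an organism against the two attribute checklists.

    Boolean fields are a gross simplification: real assessments are
    graded, behavioral, and anatomical, not binary.
    """
    name: str
    # Image-based consciousness attributes
    detailed_sensory_input: bool
    isomorphic_multisensory_maps: bool
    multimodal_discrimination: bool
    # Affect/valence-based consciousness attributes
    operant_learning: bool
    behavioral_tradeoffs: bool
    frustration_behavior: bool
    self_delivery_of_rewards: bool

def has_image_based_consciousness(o: Organism) -> bool:
    # All three attributes must hold for plausible internal mental imagery.
    return (o.detailed_sensory_input
            and o.isomorphic_multisensory_maps
            and o.multimodal_discrimination)

def has_affect_based_consciousness(o: Organism) -> bool:
    # Treat any two of the four affect markers as sufficient evidence;
    # the threshold is an arbitrary choice for this sketch.
    markers = (o.operant_learning, o.behavioral_tradeoffs,
               o.frustration_behavior, o.self_delivery_of_rewards)
    return sum(markers) >= 2

def likely_conscious(o: Organism) -> bool:
    # Primary consciousness requires BOTH components.
    return has_image_based_consciousness(o) and has_affect_based_consciousness(o)

fish = Organism("zebrafish", True, True, True, True, True, True, True)
worm = Organism("C. elegans", False, False, False, False, False, False, False)
print(likely_conscious(fish))  # True
print(likely_conscious(worm))  # False
```

The only point of the sketch is the structure: the image-based and affect-based criteria are evaluated independently, and primary consciousness requires both.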
#### Which organisms are capable of consciousness?
A question relevant to [[Sentient beings that are capable of suffering]]. [Feinberg and Mallatt](https://mitpress.mit.edu/books/consciousness-demystified) did a broad survey of which organisms are capable of both image-based and affect-based consciousness. Their conclusion is that all **vertebrates (including mammals, fish, reptiles, and birds), arthropods (including insects and crabs), and some cephalopods (e.g., octopuses, squids)** show both of these consciousness types and hence are conscious.
In vertebrates, the optic tectum serves as a location in the brain where sensory maps reside; hence, the optic tectum is analogous to the thalamocortical circuits of mammals. It's interesting to note that the common ancestors of mammals and birds had mental imagery mapped in a specific order within the optic tectum; as birds and mammals evolved, the ordering of how the senses get mapped changed.
The authors exclude simple organisms like worms (_C. elegans_), which have a few hundred neurons, very crude senses (just light-detecting cells), and a brain capable only of basic functions and reflexes. The list of organisms without consciousness also includes single-celled organisms such as bacteria, as well as plants. See [[Sentient beings that are capable of suffering]] for a detailed answer on why plants are not conscious, but the basic idea is that the attributes listed above for image-based and affect-based consciousness don't hold for organisms such as worms, plants, fungi, or bacteria.
Until we truly understand the [[Causal mechanisms for consciousness and suffering]], it's hard to say more. What can be said with a fair amount of confidence is that, lacking detailed senses and a way to represent them, an organism cannot have detailed mental imagery of the world. Similarly, if an organism is incapable of learning new behaviors, it is stuck with reflexes and hence lacks affect-based consciousness.
Another way to see this issue is that an organism develops a capacity for consciousness (and invests in it, evolutionarily speaking) only if it can put that capacity to an adaptive purpose. Lacking locomotor flexibility, **simple organisms have no need to develop an internal world (since they can't do anything about it)**. This is also why organisms with highly developed brains tend to have excellent locomotor capabilities.
#### Explaining the "hard problem of consciousness"
Todd E. Feinberg and Jon M. Mallatt suggest that the hard problem of consciousness (_why it feels like anything at all_) is actually two problems:
1. Why is my subjective experience inscrutable to an external observer?
2. Why do different types of qualia feel different?
They don't provide fully satisfying answers, but they argue that the subjectivity of experience can be traced to a basic property of life: embodiment. For an organism, its own boundaries are defined by what it cares to protect, irrespective of what's going on outside.
As far as qualia are concerned, colors feel different from sounds because they travel different pathways inside the brain, so it would be surprising if they didn't feel different.
I think the explanations above are a start, but ultimately I'm not satisfied by them. They feel like sweeping a lot of issues under the rug by not rigorously exploring why, for example, embodiment should entail the private nature of experience, or how the experience of qualia arises from neural pathways.
The job of science is to provide answers to _what_ happens under given conditions. It provides us with useful predictive models of reality, but it never tells us _why_ such-and-such exists. If a particular predictive model is an outcome of another, more fundamental predictive model, then one can say the _why_ of the previous model is explained. Say, if we can derive Newton's theory of gravitation from Einstein's general relativity, then the question of _why_ Newton's theory holds dissolves. But science is not generally equipped to answer the _why_ question for all models/laws.
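As a sketch of what such a dissolution looks like (this is standard textbook general relativity, not something from the sources above): in the weak-field, slow-motion limit, Einstein's field equations reduce to the Poisson equation of Newtonian gravity, so Newton's law stops being a brute fact and becomes a limiting case of a deeper model.

$$
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}
\;\;\xrightarrow{\; g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu},\ |h_{\mu\nu}| \ll 1,\ v \ll c \;}\;\;
\nabla^2 \Phi = 4\pi G \rho,
\qquad
g_{00} \approx -\left(1 + \frac{2\Phi}{c^2}\right)
$$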
Consciousness feels mysterious because we don't understand the mechanisms inside our brain responsible for it. None of the brain's activity is felt by us: conscious experience simply pops out of nowhere, and that feels mysterious. Once we have interventionist methods, where we can stimulate neural activity at a specific neuron and observe the resulting subjective experience (say, a color changing to blue), consciousness will become less mysterious.
**"Part of our difficulty in understanding consciousness, he says, is reliance on imagination when we try to take up the point of view of another subject".** (via [Barron and Klien, 2020](http://colinklein.org/papers/HardProblemKleinBarronForWeb.pdf))
So, gradually, we can perhaps imagine the final theory/predictive model for consciousness taking a shape like: if XYZ (say, this synchronization/pattern/arrangement) exists, then ABC happens (say, a 3D space for consciousness appears). But we'd never be able to explain _why_ such experience exists in the first place. The question of why XYZ should lead to ABC will forever remain outside the domain of science. Perhaps we need to turn to philosophy to gain a handle on such questions?
**This suggests that we should avoid confusing what science can answer and what it cannot (and, hence, where philosophy begins).**
#### Differentiating contents of experience from capacity for experience
[Barron and Klein, 2016](https://www.researchgate.net/publication/311770445_Insects_have_the_capacity_for_subjective_experience) and [Barron and Klein, 2020](http://colinklein.org/papers/HardProblemKleinBarronForWeb.pdf) argue that our midbrain structures are responsible for the capacity for consciousness: that's where anesthetics act, that's where activity resumes when people emerge from coma, and damage to the cortex reduces only the contents of consciousness, not the capacity for it. This is also supported if we assume that similar structures create primary experience in other vertebrates or even arthropods (which obviously lack a cortex). The capacity for consciousness may include things like providing a background 3D space in which contents appear. We never experience this 3D space directly (without objects in it), yet it is important.
The cortex seems to be responsible for the contents of consciousness: with direct brain stimulation, it is possible to generate specific experiences (say, the experience of seeing a face) by stimulating specific parts of the cortex.