Consciousness is not a binary concept. As we saw in [[What non-living things are conscious]], consciousness is likely everywhere, and even non-living things (such as robots, software, stars, or atoms) are likely to be conscious. However, the following reasons suggest that consciousness is experienced differently by different systems:

- A system needs to have the capacity for different aspects of consciousness in order to experience those aspects. For example, an atom likely doesn't have a rich memory like we do, simply because it has no capacity to store the amount of data that a rich memory requires.
- From neurological conditions in humans, we know that different capacities for consciousness can disappear when there's damage to different parts of the body/brain. For example, in pain asymbolia, patients feel pain but aren't bothered by it, which suggests that the botheration from pain is not fundamental and that consciousness can exist without it. Similarly, in dementia, memory goes away while moment-to-moment consciousness remains.

So, from a moral point of view, what actually matters isn't consciousness *per se* but feelings with negative valence (stress, pain, suffering, depression, and so on). And as conditions like pain asymbolia suggest, negative valence states differ across physical systems due to their capacity and/or specific implementation.

To assign moral weights, then, we need to think about whether negative valence states are fundamental or higher-level combined constructs, because an answer to this will help us prioritize different systems differently. If, for example, a digital system or a molecule experiences a world but has no concept of suffering or pain, then we can exclude it from moral consideration. Similarly, if the death of a plant is comparable to a tiny pin-prick while the death of a chicken involves suffering on a level similar to the death of one's human self, then we obviously need to prioritize chickens over plants.