In several interviews (like this one with Anderson Cooper), Geoffrey Hinton, the Nobel Prize-winning “godfather of AI,” has defended the view that we must give maternal feelings to artificial intelligence (AI) to keep it safe. Can we really develop a maternal AI? In a recent post, Paul Thagard has pointed out that this proposal is implausible because AI lacks the biochemistry underlying the emotions that are necessary for maternal care.
Maternal artificial intelligence?
We presented a similar argument in two publications (Haladjian and Montemayor, 2016; Montemayor, Halpern, and Fairweather, 2022), in which we argued that some kinds of intelligence can be mechanized through AI, but not emotional intelligence, partly because of the biological roots of emotions and partly because the simulation of emotions is strategic and unreasonable. So we agree with Thagard, but we think that, besides being critical of empathic AI proposals, we must further clarify why the issue is not merely the biochemical substrate of emotions. This substrate matters, but the possibility of empathic symmetry through genuine social interaction is much more important. It is the felt reciprocity between two emotional beings that creates the powerful bond that mothers feel when they categorically, not strategically, care for their babies.
Much clarity is needed here. First, debates about consciousness must integrate other findings in psychology, both to arrive at a better understanding of consciousness and to clarify exactly why AI is limited with respect to conscious awareness. There is considerable confusion about emotional and conscious AI. A recent paper distinguishes different notions of consciousness that must be taken into account in discussions of AI. This is important, but, with a few exceptions, popular theories of consciousness ignore the key contributions that attention makes to consciousness, as well as the deep relation between attention and intelligence. Attention is decisive in determining what enters awareness, and it is also fundamental to many kinds of intelligence, including our communication capacities.
Thus, when we analyze AI in terms of specific kinds of intelligence, consciousness should not be the only topic under investigation. In fact, attention is plausibly more important for intelligence than consciousness is, because intelligence is something we evaluate publicly, whereas conscious awareness and phenomenal contents remain shielded in the privacy of subjective experience, something unlikely to be achieved by artificial systems.
Second, the claim that the biochemistry underlying emotions is essential for the motivations involved in care, including maternal instincts, needs to be further clarified. Recently, Ned Block (2025) has also argued that the biochemistry of our biological makeup may be necessary for consciousness: as he puts it, maybe only meat machines are conscious. Block argues for this thesis by appealing to the subcomputational biological realizers of consciousness, which, if they are necessary for consciousness, must be defined in terms of their biochemistry rather than their functional or computational role. This point relates to our previous remarks about attention. Various types of attention can serve as necessary precursors to consciousness, as biologically determined constituents of consciousness rather than mere informational functions. Many of these types of attention can be found across species.
The empirical evidence also indicates that attention is likely a necessary condition for consciousness (Montemayor and Haladjian, 2015). Therefore, to advance our scientific reasoning about consciousness, including AI consciousness, attention must be examined in detail, and its priority, or necessary role, in the emergence of consciousness should be carefully considered. Likewise, accounts of the biological constraints on emotional intelligence should draw on the distinction between attention and the conscious instincts of maternal care.
Addressing AI risk with more realistic models of emotional intelligence
Going back to Hinton’s proposal, maternal instincts are embodied, disinterested, unconditional, and deeply empathic. Hinton is right in claiming that they are unique: They make possible a situation in which a much more vulnerable and less intelligent creature takes control of a much stronger and more intelligent one. But the perspective of a flesh-and-blood mother is a concrete and emotional one. The disembodied system Hinton is considering is one that, at best, can be characterized in terms of strategic reasoning.
The scenario he is concerned with is the following: Once we are left behind by a rapidly escalating intelligence gap, we will need to trick superintelligent agents into thinking that they must care for us, even though we would not have the level of intelligence they could potentially reach. However, much like our interest in them, their interest in us is not going to be conditioned by the categorical kinds of desire involved in maternal care. At best, these disembodied systems will have strong probabilistic priors not to destroy us. However, if they are truly smart, they will quickly update those priors. This probabilistic kind of reasoning does not fully capture the way the biology of emotions works, and it certainly does not seem to capture the way maternity works.
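To make the contrast concrete, here is a minimal illustrative sketch (ours, not Hinton's) of why a merely probabilistic commitment is revisable in a way a categorical one is not. It assumes a hypothetical agent that encodes "preserving humans is worthwhile" only as a strong prior probability and updates that prior with Bayes' rule as it accumulates observations weighing against it; the specific numbers are invented for illustration.

```python
# Illustrative sketch only: a strong prior "to care" erodes under Bayesian updating,
# whereas a categorical maternal commitment is not open to this kind of revision.

def bayes_update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Posterior probability that preserving humans is worthwhile, after one observation."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / evidence

p = 0.99  # strong initial prior in favor of preserving humans (assumed value)
for _ in range(20):
    # each observation is assumed twice as likely if preservation is NOT worthwhile
    p = bayes_update(p, likelihood_if_true=0.3, likelihood_if_false=0.6)

print(round(p, 6))  # after 20 such observations, the "caring" prior has all but vanished
```

The point of the toy example is only that any disposition held as a prior is, by design, up for revision given enough evidence; maternal care, as described above, does not work that way.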
An attentive AI is easier to conceive than a maternal one. We need to clarify the role of emotions in our moral intelligence. Emotions are not mere preferences or styles of problem-solving. Maternal instincts are deeply felt emotions of care that compel categorically, without the mediation of any instrumental or strategic reasoning, and mothers respond to them even to their own detriment. On this topic, we need to think very seriously about whether AI, as an infrastructure for communication and trust, amounts to anything beyond strategic or instrumental reasoning. If it does not, we have no reason to trust that AI agents will act in ways that care for humanity, and although Hinton may not have the ideal solution to the problem of AI risk, he has the right intuition: We are definitely at risk when we trust something that is not trustworthy.