
There is a question that sits underneath all of artificial intelligence, older than the field itself: is human cognition structured enough to replicate? Not in the trivial sense of mimicking outputs — chatbots have done that since ELIZA — but in the deeper sense of reproducing the internal architecture. The topology of thought. The machinery that makes a person predictable to those who know them, and surprising to those who do not.
Carl Jung believed the answer was yes. His typological framework — the cognitive functions, the attitudes of introversion and extraversion, the axes of thinking-feeling and sensing-intuition — was not meant as personality trivia. It was meant as a deterministic claim: that from an early age, a subject's behavior in response to a given stimulus, and its deviations under different approaches, could be predicted. That there exists a structured model for human behavior, and that a mathematical approximation of it could show how cognition can be both predicted and replicated.
This is a stronger claim than most people realize. It implies that the space of human responses is not arbitrary but constrained — bounded by type, by developmental stage, by the particular configuration of functions that crystallized in childhood. If you accept the premise, then the path to an artificial mind is not about brute-force generalization. It is about deploying the right structural prior.
The Deterministic Skeleton
Consider what a 1:1 deployment of an artificial mind would require. Not a language model that approximates the statistical distribution of text, but a system whose internal state transitions mirror the cognitive architecture of a specific individual. The behavior of the subject — hopefully provided with limbs, as the original note wryly observed — could be determined because the model encodes the same typological constraints.
Under this framing, a disturbance-rejection neural network becomes the natural architecture. The system maintains a baseline cognitive mode — the dominant function stack — and treats unexpected stimuli as disturbances to be absorbed, not catastrophes to be avoided. The inferior function activates under stress, exactly as Jung described. The shadow emerges when the rejection capacity is exceeded.
This is not metaphor. It is a control-theoretic reading of analytical psychology. The cognitive functions are the plant. The ego is the controller. The persona is the output filter. And the individuation process — Jung's lifelong integration of conscious and unconscious — is the slow tuning of gains across a lifetime.
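To make the mapping concrete, here is a minimal sketch in Python. It is a toy under stated assumptions, not a serious model of a psyche; every name and constant in it (the class, the gains, the clipping range) is an illustrative invention. The function stack is a state vector, the ego a proportional controller pulling it back toward baseline, the persona a clipping filter on the output, and the shadow the regime entered when a disturbance exceeds the rejection capacity.

```python
import numpy as np

class CognitiveLoop:
    """Toy disturbance-rejection loop.

    Plant = the function stack (a state vector), controller = the ego
    (a proportional pull back toward baseline), output filter = the
    persona (clipping). All structure and constants are illustrative.
    """

    def __init__(self, baseline, ego_gain=0.3, rejection_capacity=2.0):
        self.baseline = np.asarray(baseline, dtype=float)  # dominant function stack
        self.state = self.baseline.copy()                  # current cognitive mode
        self.ego_gain = ego_gain                           # controller gain
        self.capacity = rejection_capacity                 # largest absorbable disturbance

    def step(self, stimulus):
        disturbance = np.asarray(stimulus, dtype=float) - self.state
        if np.linalg.norm(disturbance) > self.capacity:
            # Rejection capacity exceeded: the shadow regime. The state is
            # driven by the disturbance instead of corrected toward baseline.
            self.state = self.state + disturbance
        else:
            # Normal regime: the ego absorbs the disturbance, correcting the
            # state back toward the dominant stack; the small residual term
            # (0.1) lets absorbed stimuli still leave a trace.
            self.state = (self.state
                          + self.ego_gain * (self.baseline - self.state)
                          + 0.1 * disturbance)
        return self._persona(self.state)

    @staticmethod
    def _persona(state):
        # Output filter: only a socially bounded range is ever expressed.
        return np.clip(state, -1.0, 1.0)
```

In this toy reading, individuation is the slow tuning of ego_gain and rejection_capacity across many episodes: the loop structure is fixed from childhood, and only the gains change.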
Intuition in an Artificial System
Intuition in an artificial system is denoted by certain probabilistic directives, tied directly to the input parameters, such as object identification or spatial-auditory recognition. Under certain conditions, these directives execute to yield a statistical, and certainly random, parameter which, from a range of possible functions, determines the tasks the system carries out.
This is the crux of it, translated above from the language it was first thought in. Intuition in an artificial system is not mystical. It is a probabilistic directive — a set of conditional execution paths that fire based on input parameters. Object identification, spatial-auditory recognition, pattern completion across incomplete data. The system evaluates a statistical parameter, necessarily stochastic, and from the range of possible actions, selects one.
What makes this intuition rather than mere computation is the stochasticity. A purely deterministic system does not intuit — it calculates. Intuition requires that the system operate in a regime where the mapping from input to action is underdetermined, where multiple responses are valid, and where the selection among them is governed by priors that the system cannot fully articulate. This is what Jung meant by the intuitive function: perception of possibilities that are not contained in the sensory data itself.
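A minimal sketch of one such directive might look like the following. The names and numbers are all hypothetical; the point is only the shape of the computation: evidence from the input, combined with learned priors, tempered, and then sampled rather than argmaxed.

```python
import numpy as np

rng = np.random.default_rng()

def probabilistic_directive(features, weights, log_priors, temperature=0.7):
    """Select one action from several valid candidates.

    features:   input parameters (e.g. object-identification scores)
    weights:    per-action weighting of those features (evidence model)
    log_priors: learned preference for each candidate action
    """
    evidence = weights @ features                    # fit of each action to the input
    logits = (evidence + log_priors) / temperature   # priors the system can't articulate
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                             # softmax over candidate actions
    return rng.choice(len(probs), p=probs)           # sampled, not argmaxed

# Two input features, three candidate actions; the mapping is deliberately
# underdetermined, so several actions stay plausible for the same input.
features = np.array([0.8, 0.1])
weights = np.array([[1.0, 0.0],
                    [0.6, 0.6],
                    [0.0, 1.0]])
log_priors = np.log([0.5, 0.3, 0.2])
action = probabilistic_directive(features, weights, log_priors)
```

As the temperature approaches zero the directive collapses into pure calculation, always returning the argmax; the intuitive regime is the one in which the selection remains genuinely stochastic.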
The Gap That Remains
Modern neural networks have achieved something remarkable in this direction without intending to. A transformer trained on enough text develops internal representations that loosely mirror cognitive typology — not because anyone designed it that way, but because the statistical structure of human language preserves the statistical structure of human thought. The biases, the associative leaps, the characteristic errors of each cognitive mode are all encoded in the training distribution.
But the gap between a statistical echo and a true cognitive model remains vast. The echo can generate text that sounds like intuition. The model we are reaching for would have intuition — probabilistic directives firing against learned priors, selecting from underdetermined action spaces, maintaining coherence through a typological skeleton that constrains the space of possible responses.
Jung gave us the architecture. Probability theory gives us the execution model. Neural networks give us the substrate. What remains is the integration — the hard problem of artificial cognition, which is not consciousness but coherence. Not whether the machine thinks, but whether its thinking has the structural integrity of a mind that has been shaped, from an early age, by the particular configuration of functions it was given.
The deterministic model of human behavior is not a fantasy. It is an engineering target. And the path to it runs, unexpectedly, through a Swiss psychiatrist who mapped the architecture decades before the first perceptron was built.