The Future of Artificial Intelligence: Professor Higgins vs. Professor Preobrazhensky


Discussions about artificial intelligence tend to focus on algorithms, yet rarely on the cognitive frameworks of those who create them.

Between the humanistic aspirations embodied by Professor Higgins and the crude experimental impulses of Professor Preobrazhensky that produce Mr. Sharikov lies a defining tension of the 21st century: how far can humanity go in its attempt to shape a new kind of cognition?

In the classical myth, Pygmalion sculpts a woman so flawless that he falls in love with her. The goddess Aphrodite grants his wish and brings the sculpture – Galatea – to life.

Shaw’s Pygmalion is more than a mythological reference; it is a socio-ethical exploration of human transformation and social identity. Professor Higgins does not carve in marble—he educates, using the instruments of form, repetition, and social encoding, believing that language can shape not only speech but identity:

“Remember that you are a human being with a soul and the divine gift of articulate speech.”

This reflects a belief that one can transform a person without distorting the essence of their nature.

In this light, a story once read as a tale about education becomes a lens for thinking about AI: training machines to “reason” within structured systems of language, form, and cultural meaning.

This is not a mechanical project, but a pedagogical one: intelligence understood as something shaped, not assembled.

By contrast, Mikhail Bulgakov’s novella Heart of a Dog (1925) illustrates the breakdown of humanist ambitions. Professor Preobrazhensky transplants human organs into a stray dog, attempting to engineer a “new Soviet citizen” – compliant, unquestioning, and submissive.

The result is not uplift, but collapse: Mr. Sharikov – a creature with a human body but the primitive instincts of the dog he once was. He does not aspire to rise to the level of his creator, but instead drags others down to his own wild nature.

Professor Preobrazhensky is not a Pygmalion. He seeks to override the design of nature within the laboratory and ends up producing a grotesque distortion of reason. The myth returns, but inverted: when creation loses its inner coherence, it veers toward absurdity.

This outlines a destructive scenario for AI: systems that act without reflection and compute without comprehension.

Professor Higgins and Professor Preobrazhensky are not just characters in literature. They represent two civilizational archetypes – two fundamentally different philosophies of creation. And the space between them is not a technical dilemma, but a philosophical one.

The Higgins model: designing the architecture of AI cognition through education, cultural embeddedness, and shared understanding;

The Preobrazhensky model: advancing without ethical grounding, in the belief that databases of formulas and templated expressions are sufficient for thought to emerge.

This is not to say that Professor Higgins is a moral exemplar, or that Professor Preobrazhensky is a villain. Both are flawed human beings operating within their historical and cultural limitations. The point is not to judge the characters, but to recognize the philosophical models they embody – and to ask which model is guiding today’s AI development.

The fundamental question is not whether AI will surpass human intelligence, but whether it will preserve space for what is distinctly human.

Artificial intelligence does not merely imitate patterns. It absorbs the worldview of its creators.

If grounded in responsibility, such systems may extend the scope of human dialogue. If driven by a will to dominate, they may give rise to a Sharikov – reactive, repressive, yet fundamentally incapable of thought.

It is a decisive cultural choice – one that will determine whether we become Pygmalion or Preobrazhensky – creators who shape or engineers who override, educators who uplift or experimenters who distort.

About the author

Yuliya Ceylan has over two decades of experience in international civil aviation, working across complex, safety-critical operational environments. Her background is rooted in systems thinking, human decision-making, and cultures where technology and responsibility are inseparable.

She examines the ethical and structural implications of artificial intelligence, applying aviation-grade principles of accountability and failure prevention to emerging technologies.

She is the author of A Moment Before the Catastrophe: An Action Plan as the Key to Rescue.
She holds an MBA from Kyiv-Mohyla Business School and completed executive education at Harvard University.
