How did ChatGPT respond to the ancient mathematical “doubling the square” problem that Socrates used to test his students?
For over two thousand years, humanity has pondered a profound question: is knowledge innate, or is it acquired through experience? Today, with the advent of artificial intelligence, the debate has regained relevance, so much so that a group of researchers from the University of Cambridge and the Hebrew University of Jerusalem decided to pose to ChatGPT the same ancient mathematical problem that Socrates used to test his students: the famous “doubling the square” problem.
In Plato’s account (the dialogue Meno), the student, believing he had found the solution, doubled the length of the square’s sides, not realizing that this quadruples the area. The real answer, Socrates explained, is to construct a new square whose side equals the diagonal of the first.
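As a quick check of that construction (standard geometry, not part of the study itself): if the original square has side s, the Pythagorean theorem gives its diagonal, and the square built on that diagonal has exactly double the area.

```latex
% Doubling the square via the diagonal (standard derivation).
% Original square: side s, area A = s^2.
% The Pythagorean theorem gives the diagonal d; squaring d
% gives the area of the new square built on that diagonal.
\[
  d = \sqrt{s^{2} + s^{2}} = s\sqrt{2},
  \qquad
  d^{2} = 2s^{2} = 2A .
\]
```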
The digital experiment. The researchers chose to ask ChatGPT the same question to see whether a language model, trained almost exclusively on text, could “discover” the solution without ever having read it. In theory, the probability that the solution was present in the training data was very low, so a correct answer would have suggested a form of autonomous learning.
Initially, the chatbot approached the task with surprising logical coherence, showing that it understood the request and producing reasoning close to that of a real student. But when it was asked to double the area of a rectangle, it made a mistake: it stated that the diagonal cannot be used for this purpose, effectively denying that a geometric solution exists.
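For context (a standard check, not drawn from the study itself): the square-on-the-diagonal trick really does fail for a non-square rectangle, yet a geometric solution still exists, for instance scaling both sides by √2.

```latex
% For a rectangle with sides a and b (area ab), the square built
% on the diagonal d has area d^2 = a^2 + b^2, which exceeds the
% target 2ab unless a = b, since a^2 + b^2 - 2ab = (a - b)^2 >= 0.
% Scaling both sides by sqrt(2), however, does double the area.
\[
  d^{2} = a^{2} + b^{2} \ge 2ab
  \quad (\text{equality iff } a = b),
  \qquad
  \bigl(\sqrt{2}\,a\bigr)\bigl(\sqrt{2}\,b\bigr) = 2ab .
\]
```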
Artificial intuition. For the researchers, that misstep was revealing: the error did not come from quoting an existing text, but from the model’s own processing; in other words, ChatGPT seemed to behave like a learner who tries, makes mistakes, and reformulates hypotheses in light of previous experience.
This “learner-like” attitude recalls the pedagogical concept of the zone of proximal development, introduced by the Soviet psychologist Lev Vygotsky in the early twentieth century: the space between what a learner already knows and what they can learn with the right help. It suggests that artificial intelligence, despite not possessing consciousness, can simulate a human-like cognitive process, building connections and analogies even in the absence of direct examples in its training data.
Lessons for the future. The study is also noteworthy for another reason: it offers new insight into what “thinking” really means for a machine.
However, the authors urge caution: it is not correct to say that ChatGPT thinks like a human being, since its behavior remains the product of statistical correlations, not of conscious understanding. Nonetheless, the experiments show how useful it can be to interact with AI in a more didactic way, favoring prompts that stimulate exploration (“let’s analyze this problem together”) rather than simply asking for an answer.
Looking ahead, the researchers envision new tools that combine chatbots with dynamic geometry software, capable of collaborating with students and teachers to discover mathematical principles intuitively. A way, perhaps, to bring into dialogue Socrates and artificial intelligence, separated by 2,400 years but united by the same desire to understand how ideas are born.
