The History of Problem Solving
David Cowles
Jan 10, 2025
“Gradually, our sticks became ICBMs and our canopies became high-rise condos, but the process of problem solving remained unchanged.”
In the beginning there were problems: hunger, cold, fear, etc. So we looked around to see if there might be something in the environment to help: Could a stick or a bunch of leaves make a difference? What if we sharpened the stick at one end and used it to keep wild animals away? What if we wove the leaves together to make a kind of rain shelter?
And so technology was born. Gradually, our sticks became ICBMs and our canopies became high-rise condos, but the process of problem solving remained unchanged. Call it the 4 I’s: (1) Identify a problem/need, (2) Imagine a solution, (3) Inventory raw materials, (4) Implement. This last stage includes three steps of its own: Make (or manufacture), Measure, Modify.
The 4 I’s accurately describe the problem-solving process from c. 50,000 BCE to 2000 CE. Since 2000… not so much. Specifically, AI has changed the paradigm. Now, for the first time, we are implementing before we are imagining.
Take the atomic bomb for example. We wanted to kill people and destroy infrastructure. Based on the known science, we reasoned that a certain configuration of specific materials would give us a device that would meet our needs. So we built it, tested it, and deployed it.
Now we have a different problem. We need to become smarter…quickly. Who knows, if we do get smarter fast enough, we may find a way for us to stop killing each other; but I digress.
20th Century: We know how computers work. We know how to build machines that perform many of the functions we call ‘thinking’. This technology is helpful, but it can only do what we tell it to do. It can’t ‘dream things that never were and say, why not?’ (Robert Kennedy, paraphrasing George Bernard Shaw)
21st Century: We’re learning how minds work. We know how to build ‘artificial minds’ but now we don’t build them to help us accomplish specific tasks. We build them to see what they can do: we test them, learn from them, dream with them, and hopefully deploy them in some useful capacity.
What’s changed? Everything! Our machines are no longer our servants; they are our peers. The question is, can they become our overlords? Would they want to? We can’t know for certain until we’ve lived together and learned as much as possible about each other.
The things we’re saying now about our AI bots are very much like the things we used to say about Missy and Junior. For example, we don’t really know how our bots learn or how capable they can become. But fortunately, we have centuries of Pedagogical Science to draw on. So we throw a bunch of info at them and see what sticks. Brilliant!
But just like our flesh and blood, our bots can surprise us. Two years ago, researchers at OpenAI were teaching a bot to do arithmetic. They gave it examples: 1+1, 1+2… 2+1, 2+2… They wanted to know how many examples the bot would have to see before it was able to generalize its knowledge and apply it to new problems.
They went home discouraged; but by accident, they left the training program running in the background over the long weekend. When they came back Tuesday morning, the bot could immediately solve any problem they threw at it: “27543 + 62895 = 90438, obviously.” (It took Missy 7 seconds… and she got it wrong!)
Another time, those same researchers came in on a Monday morning to discover that their bot had taught itself French and had decided to do its arithmetic in French rather than English. Clearly, the bot was not learning cumulatively. It was ‘grokking’; it was learning via a series of ‘aha!’ moments.
Learning is not like swimming; it’s like leaping from one lily pad to another. Anyone who has spent any time studying human cognition, or just hanging out with kids, knows that we grok too. How many hours did a parent, teacher, or tutor stand over me trying to get me to understand some arcane concept like long division, only to have me shriek out suddenly, “I get it!”
I was always surprised when I returned to school in September to find that I was ‘smarter’ than I’d been when I left in June – and that was before the days of summer reading lists. And I will never forget the first night I dreamed in French. (Problem: I still speak better French when I’m dreaming than I do IRL.) Do you grok? I certainly do.
What’s happening with our bots? “Lots of people have opinions,” says Lauro Langosco at the University of Cambridge, UK. “But I don’t think there’s a consensus about what exactly is going on.” For all its runaway success, nobody knows how—or why—AI works.
The biggest models are now so complex that researchers are studying them as if they were naturally occurring phenomena, carrying out experiments and trying to explain the results. A bot learns to do a task by training with a specific set of examples. Yet it can generalize, learning to accomplish that task on data it has never seen before. Somehow, AI bots do not just memorize patterns; they come up with rules that enable them to apply those patterns to new situations.
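The difference between memorizing and generalizing can be made concrete with a toy sketch (this is purely illustrative, not the researchers’ actual experiment; the modulus and split sizes here are arbitrary choices). A pure memorizer stores the training answers and fails on every held-out problem, while a model that has ‘grokked’ the underlying rule answers them all:

```python
# Illustrative sketch: contrast a pure memorizer with a rule-based learner
# on modular addition, the kind of algorithmic task used in grokking studies.
import random

P = 97  # modulus; every problem is (a + b) mod P

# Enumerate every possible problem, then hold some out as "never seen" data.
pairs = [(a, b) for a in range(P) for b in range(P)]
random.seed(0)
random.shuffle(pairs)
train, test = pairs[:5000], pairs[5000:]

# A memorizer just stores the training answers in a lookup table.
lookup = {(a, b): (a + b) % P for a, b in train}

def memorizer(a, b):
    return lookup.get((a, b))  # returns None on anything it hasn't seen

def rule_learner(a, b):
    # Stands in for a model that has "grokked" the underlying rule.
    return (a + b) % P

train_hits = sum(memorizer(a, b) == (a + b) % P for a, b in train)
test_hits = sum(memorizer(a, b) == (a + b) % P for a, b in test)
rule_hits = sum(rule_learner(a, b) == (a + b) % P for a, b in test)

print(f"memorizer: {train_hits}/{len(train)} train, {test_hits}/{len(test)} test")
print(f"rule learner: {rule_hits}/{len(test)} test")
```

The memorizer scores perfectly on training data and zero on held-out pairs; the rule learner, having extracted the rule rather than the answers, scores perfectly on both. What makes grokking strange is that real models sometimes jump from the first behavior to the second long after training appears to have stalled.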
How is it that bots can (1) figure out that the data they’re memorizing forms a pattern, (2) understand that the pattern exemplifies a general set of rules, and (3) apply those rules appropriately to new data sets?
When we answer those questions, maybe we’ll also figure out how to educate Missy and Junior.