Knowledge in the Age of AI
by David Weinberger


2,400 years ago, Socrates argued that the “justified true belief” (JTB) theory of knowledge that is still popular today was not adequate. He agreed that knowledge was a type of belief, and that it had to be a true belief if it were to count as knowledge. But if you’re just guessing and your guess happens to be correct, that can’t count as knowledge. Rather, you have to have a good set of reasons — a justification — for that belief. 

But, for Socrates, that’s still not enough for a belief to count as knowledge. You can’t just be reciting some justification from memory. You have to understand it. 

I personally would add one more letter to this: “F” for framework.  Every single thing we know is part of a larger system of knowledge. If you know the water is boiling in your tea kettle because you hear its whistle, then you also know that water is a liquid, flames heat things, things can transmit heat to other things, and so on, until your entire knowledge framework has been drawn in.

So, guess which two things the knowledge that comes from machine learning (ML) — what we generally mean by “AI” these days — doesn’t have: understandability or a framework from which its statements of knowledge spring.

We might want to say that therefore ML doesn’t produce knowledge. But I think it’s going to go the other way as AI becomes more and more integral to our lives. AI is likely to change our idea of what it means to know something.

Inexplicable knowledge

Sometime soon you’ll go in for a health exam and your doctor will tell you something like this: Everything looks good, except you have a 75% chance of having a heart attack within the next five years. You’ll respond that that’s nuts given your vital signs, diet, exercise routine, genetics … The doctor will agree but add that the prediction came from an AI diagnostic system that has proven itself to be reliable, even though no one can figure out how it comes to its conclusions. Initially you’ll be skeptical because you want to understand how it came up with that diagnosis, by which you’ll mean you want to understand how it fits into your framework of what causes heart attacks.

You’re unlikely to get that understanding, and that’s more or less on purpose.

With traditional computing, a developer would write a program that captures what we know about the causes of heart attacks: cholesterol levels and blood pressure, how they correlate for reasons that our framework explains, and so on.  

But we don’t program machine learning models that way. In fact, we don’t program them at all. We enable them to program themselves by letting them discover patterns in the tons of data we’ve given them. Those patterns may be so complex that we simply can’t understand them, but as long as they help increase the system’s accuracy, who cares?
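To make the contrast concrete, here is a toy sketch in Python (the numbers, thresholds, and features are invented for illustration, not drawn from any real diagnostic system): the hand-written rule encodes a generalization we already understand, while the model fits whatever pattern the data happens to support.

```python
# A minimal sketch, assuming made-up patient data: contrast a hand-coded rule
# with a model that learns its own decision boundary from examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented toy features: [cholesterol mg/dL, systolic blood pressure]
X = np.array([[180, 110], [240, 150], [200, 130], [260, 160],
              [190, 120], [250, 145], [170, 115], [230, 155]])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = heart attack (fabricated labels)

# Traditional approach: a programmer writes down the framework's generalization.
def rule_based_risk(cholesterol, systolic_bp):
    """High risk only if both readings cross thresholds we already understand."""
    return 1.0 if cholesterol > 220 and systolic_bp > 140 else 0.1

# ML approach: the model fits whatever relationship the data supports,
# whether or not it matches our explanatory framework.
model = LogisticRegression(max_iter=1000).fit(X, y)

patient = np.array([[225, 142]])
print("rule-based risk:", rule_based_risk(225, 142))
print("learned risk:   ", model.predict_proba(patient)[0, 1])
```

The rule stays legible because it mirrors our framework; the model’s parameters are whatever the data made them, and with thousands of features instead of two, that is where legibility goes.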

Actually, lots of people care, because the inexplicability of these systems means that they can hide pernicious biases. That’s one important reason there’s so much research going on to make “black box” AI more understandable.

But the tendency of AI to train itself into inexplicability for the sake of accuracy may be giving us a different idea about how knowledge works, for there must be something about these wildly complex interrelationships of data that captures an essential truth about the world.

Perhaps it’s this:

Our frameworks have been composed of generalizations that oversimplify a world made of particulars in complex interrelationships. That ML works reveals both the limits of those generalizations and the power of the particulars that compose the world. It doesn’t take away from the truth of those hard-won generalizations — Newton’s Laws, the rules and hints for diagnosing a biopsy — to say that they fail at predicting highly particularized events: Will there be a traffic snarl? Are you going to develop allergies late in life? Will you like the new Tom Cruise comedy? This is where traditional knowledge stops, and AI’s facility with particulars steps in.

Recognizing the weaknesses of generalized frameworks is much easier when we have machines that bring us more accurate knowledge by listening to particulars. But that recognition also transforms some of our most basic beliefs and approaches.

Michele Zanini and I recently wrote a brief post for Harvard Business Review about what this sort of change in worldview might mean for business, from strategy to supply chain management. For example, two faculty members at the Center for Strategic Leadership at the U.S. Army War College have suggested that AI could fluidly assign leadership roles based on the specific details of a threatening situation and the particular capabilities and strengths of the people in the team. This would alter the idea of leadership itself: not a personality trait but a fit between the specifics of character, a team, and a situation.

AI’s effect on our idea of knowledge could well be broader than that. We’ll still look for justified true beliefs, but perhaps we’ll stop seeing what happens as the result of rational, knowable frameworks that serenely govern the universe.  Perhaps we will see our own inevitable fallibility as a consequence of living in a world that is more hidden and more mysterious than we thought. We can see this wildness now because AI lets us thrive in such a world. 

Such a vision seems to me not only to be true, but to be liberating, humbling, and joyous, and thus a truth we would do well to embrace, even if it took inscrutable machines to teach it to us.

About the author:

David Weinberger, Ph.D., writes about technology’s effect on our ideas. He is a long-time affiliate of Harvard’s Berkman Klein Center.
