“Inferentialism” – a recent approach to logic – seems better suited to how language has changed, and it could now revolutionize the use of artificial intelligence (AI).
The rigid structures of language that we once held onto securely are giving way. Take, for example, topics such as gender, nationality or religion, which no longer fit so neatly into the rigid linguistic boxes of the last century.
But in addition to these changes in thinking, this century has brought us yet another challenge: the emergence of AI forces us to understand how words relate to meaning and reasoning.
A global group of philosophers, mathematicians and computer scientists has come up with a new understanding of logic, called “inferentialism”, that addresses these concerns.
A standard intuition of logic, dating back at least to Aristotle, is that a logical consequence must be valid in virtue of the content of the propositions involved, and not simply in virtue of their being “true” or “false”.
For the past two millennia, the philosophical and mathematical basis of logic has been the idea that meaning derives from what words refer to. This view presupposes the existence of abstract categories of objects out there in the universe, and it defines the notion of “truth” in terms of facts about those categories.
Inferentialism is better suited to modern discourse. Its roots can be found in the radical philosophy of the Austrian philosopher Ludwig Wittgenstein, who wrote the following in his 1953 book Philosophical Investigations:
“For a large class of cases – though not for all – in which we employ the word ‘meaning’ it can be defined thus: the meaning of a word is its use in the language.”
This notion makes meaning more about context and function. In the 1990s, the American philosopher Robert Brandom refined “use” to mean “inferential behavior”, laying the foundations for inferentialism.
Instead of assuming abstract categories of objects that float in the universe, the inferentialist explanation of meaning recognizes that understanding is given by a rich network of relationships between elements of our language.
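As a rough illustration of the contrast (a toy sketch of our own in Python, not something drawn from the inferentialist literature), a word can be modelled in software not as a pointer to an object in the world, but as a bundle of the inferences it licenses; the entries below are invented for the example.

```python
# Toy sketch: a word's "meaning" modelled as its inferential role -
# when it may be applied and what conclusions applying it licenses -
# rather than as a reference to some object. All entries are illustrative.
inferential_role = {
    "pet": {
        "applies_when": ["x is an animal kept in a household", "x is cared for by someone"],
        "licenses": ["x is not a wild animal", "someone is responsible for x"],
    },
    "treaty": {
        "applies_when": ["x was negotiated between states", "x was formally signed"],
        "licenses": ["x has signatories", "x came into force at some date"],
    },
}

def describe(word: str) -> None:
    """Print a word's inferential role: when it applies and what it commits us to."""
    role = inferential_role[word]
    print(f"'{word}' applies when: {role['applies_when']}")
    print(f"using '{word}' commits us to: {role['licenses']}")

describe("pet")
```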
Let’s think about controversial contemporary topics, such as those related to gender. Inferentialism lets us set aside the metaphysical questions that block constructive discourse, such as whether the categories “masculine” or “feminine” are real in some sense. These questions lose their grip in the new logic, because the meaning of “feminine” no longer depends on its referring to some true category.
Inferentialism made concrete
Inferentialism is intriguing, but what does it mean to put it into practice? In a lecture in Stockholm in the 1980s, the German logician Peter Schroeder-Heister gave a name to a field based on inferentialism: “proof-theoretic semantics”.
In short, proof-theoretic semantics is inferentialism made concrete.
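To give a flavour of what “made concrete” looks like (a standard textbook illustration rather than an example taken from that lecture): in proof-theoretic semantics, the meaning of a logical word such as “and” is given by the rules governing its use in proofs, not by truth tables.

```latex
% The meaning of "and" given by its inference rules (standard natural deduction):
% one rule saying when "A and B" may be asserted, two saying what follows from it.
\[
\frac{A \qquad B}{A \land B}\;(\land\text{-intro})
\qquad
\frac{A \land B}{A}\;(\land\text{-elim}_1)
\qquad
\frac{A \land B}{B}\;(\land\text{-elim}_2)
\]
```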
The field has seen substantial development in recent years. Although the results remain technical, they are revolutionizing our understanding of logic and mark an important advance in how we think about human and machine reasoning and language.
Large language models (LLMs), for example, work by guessing the next word in a sentence. Their guesses are informed only by habitual patterns of speech and a long training process that includes trial and error with rewards. Consequently, they “hallucinate”: they construct sentences that sound plausible but amount to factual or logical nonsense.
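To make “guessing the next word” concrete, here is a deliberately tiny sketch of our own in Python; no real LLM works this way, but it shows the pattern-matching principle: it simply predicts whichever word most often followed the previous one in its “training” text.

```python
from collections import Counter, defaultdict

# Toy next-word "model": count which word follows which in a tiny corpus,
# then always predict the most frequent follower. Real LLMs use neural networks
# trained on vast corpora, but the guessing principle is comparable.
corpus = "the treaty was signed in 1919 . the treaty was negotiated in paris".split()

follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def guess_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return follower_counts[word].most_common(1)[0][0]

print(guess_next("treaty"))  # -> "was": a purely statistical guess, with no grasp of meaning
```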
Through inferentialism, we can give them some understanding of the words they are using.
For example, an LLM might hallucinate the “historical fact” that “the Treaty of Versailles was signed in 1945 between Germany and France after World War II”, because it sounds reasonable. But a system armed with an inferential understanding would recognize that the Treaty of Versailles was signed in 1919, after World War I, and not in 1945 after World War II.
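One highly simplified way to picture this (a hypothetical sketch of our own, not an existing system): write down a few of the inferential commitments that come with accepting “Treaty of Versailles”, together with claims incompatible with them, and check a generated statement against that small network.

```python
# Toy inferential check (illustrative only): hand-written inferential commitments
# about the Treaty of Versailles, and claims incompatible with them.
commitments = {
    "Treaty of Versailles": ["concluded World War I", "was signed in 1919"],
}
incompatible = {
    "concluded World War I": {"concluded World War II"},
    "was signed in 1919": {"was signed in 1945"},
}

def check_claims(subject: str, claims: list[str]) -> list[str]:
    """Return the claims about `subject` that clash with its inferential commitments."""
    clashes = []
    for commitment in commitments.get(subject, []):
        clashes += [c for c in claims if c in incompatible.get(commitment, set())]
    return clashes

generated = ["was signed in 1945", "concluded World War II"]
print(check_claims("Treaty of Versailles", generated))
# -> ['concluded World War II', 'was signed in 1945']: both flagged as inconsistent
```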
This can also be useful for critical thinking and politics. With a proper understanding of logical consequence, we may be able to automatically identify and catalogue nonsensical arguments in newspapers and debates.
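As a hint of what “automatically identify” could involve (again a toy sketch under our own simplifying assumptions, far short of the real research), one can check whether a stated conclusion is actually derivable from the stated premises by repeatedly applying a single inference rule.

```python
# Toy argument checker (illustrative): premises are atomic claims, rules are
# "if X then Y" pairs; we forward-chain with modus ponens and test whether
# the stated conclusion is ever derived.
def follows(premises: list[str], rules: list[tuple[str, str]], conclusion: str) -> bool:
    """Return True if `conclusion` is derivable from `premises` using the rules."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return conclusion in derived

rules = [("taxes rose", "disposable income fell")]

# A valid argument: the conclusion follows from the premise.
print(follows(["taxes rose"], rules, "disposable income fell"))  # True

# A non sequitur: nothing licenses this conclusion.
print(follows(["taxes rose"], rules, "crime increased"))  # False
```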