Saturday, April 03, 2021

The metaphysical status of types

I'm reproducing a few paragraphs from chapter 7 of Peter Smith's Introduction to Formal Logic below. The thought is that if and when we try to teach this material to a machine, the attempt will, at a minimum, expose the implicit assumptions in our metaphysics. There's more to think about than just the paragraphs below, but this much will do, I think.

7.1 Types vs tokens

We begin with two sections introducing relevant distinctions. Firstly, we want the distinction between types and tokens. This is best introduced via a simple example.

Suppose then that you and I take a piece of paper each, and boldly write ‘Logic is fun!’ a few times in the centre. So we produce a number of different physical inscriptions – perhaps yours are rather large and in blue ink, mine are smaller and in black pencil. Now we key the same encouraging motto into our laptops, and print out the results: we get more physical inscriptions, first some formed from pixels on our screens and then some formed from printer ink.

How many different sentences are there here? We can say: many, some in ink, some in pencil, some in pixels, etc. Equally, we can say: there is one sentence here, multiply instantiated. Evidently, we must distinguish the many different sentence-instances or sentence tokens – physically constituted in various ways, of different sizes, lasting for different lengths of time, etc. – from the one sentential form or sentence type which they are all instances of.

We can of course similarly distinguish word tokens from word types, and distinguish book tokens – e.g. printed copies – from book types (compare the questions ‘How many books has J. K. Rowling sold?’ and ‘How many books has J. K. Rowling written?’).

What makes a physical sentence a token of a particular type? And what exactly is the metaphysical status of types? Tough questions that we can’t answer here! But it is very widely agreed that we need some type/token distinction, however it is to be elaborated.

Types come very naturally to us humans. We train deep neural networks to distinguish between types, for instance to tell cats from dogs. But if you think about it, in framing the task that way we've already "told" the neural network that types are important.
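Here is a minimal sketch of that point, using a toy PyTorch classifier and made-up "cat"/"dog" data (random vectors standing in for photographs); none of the names or numbers come from the quoted chapter. Notice that the set of types is fixed by us before any learning happens: the network only ever sorts tokens into categories we supplied.

    import torch
    import torch.nn as nn

    TYPES = ["cat", "dog"]                    # the types are decided by us, up front
    model = nn.Linear(64, len(TYPES))         # toy classifier over 64-dim "images"
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    # Hypothetical tokens: random vectors standing in for individual photographs.
    x = torch.randn(100, 64)
    y = torch.randint(0, len(TYPES), (100,))  # each token arrives already typed by us

    for _ in range(10):
        opt.zero_grad()
        loss = loss_fn(model(x), y)           # the network is graded against our typing
        loss.backward()
        opt.step()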

One can imagine that the ability to come up with types is very useful from an evolutionary point of view (e.g., grouping predator tokens under a single type might be efficient), and that humans with faulty type mechanisms turn out to be evolutionary dead-ends. A thought experiment that might help locate where types come from is to work out how to get a machine to come up with types from tokens without implicitly requiring that it must do so in the first place.
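As a rough sketch of that thought experiment (my own illustration, not anything from the quoted chapter), one might hand a machine unlabelled tokens and let it group them itself, say with k-means over made-up two-dimensional features. Even then, a surprising amount of typing is smuggled in by us: the choice of features, the distance metric, and the number of clusters k.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    tokens = np.vstack([
        rng.normal(loc=0.0, scale=0.5, size=(50, 2)),  # one hypothetical kind of token
        rng.normal(loc=3.0, scale=0.5, size=(50, 2)),  # another
    ])

    # No labels are given, yet we still chose the features, the metric, and k=2.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(tokens)
    print(kmeans.labels_[:10])  # the machine's "types" for the first ten tokens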