Showing posts with label machine learning. Show all posts

Saturday, April 03, 2021

The metaphysical status of types

I'm reproducing a few paragraphs from chapter 7 of Peter Smith's Introduction to Formal Logic below.  The thought is that if and when we teach a machine, it will, at a minimum, expose the implicit assumptions in our metaphysics.  There's more to think about than just the paragraphs below, but this much will do, I think.

7.1 Types vs tokens

We begin with two sections introducing relevant distinctions. Firstly, we want the distinction between types and tokens. This is best introduced via a simple example.

Suppose then that you and I take a piece of paper each, and boldly write ‘Logic is fun!’ a few times in the centre. So we produce a number of different physical inscriptions – perhaps yours are rather large and in blue ink, mine are smaller and in black pencil. Now we key the same encouraging motto into our laptops, and print out the results: we get more physical inscriptions, first some formed from pixels on our screens and then some formed from printer ink.

How many different sentences are there here? We can say: many, some in ink, some in pencil, some in pixels, etc. Equally, we can say: there is one sentence here, multiply instantiated. Evidently, we must distinguish the many different sentence-instances or sentence tokens – physically constituted in various ways, of different sizes, lasting for different lengths of time, etc. – from the one sentential form or sentence type which they are all instances of.

We can of course similarly distinguish word tokens from word types, and distinguish book tokens – e.g. printed copies – from book types (compare the questions ‘How many books has J. K. Rowling sold?’ and ‘How many books has J. K. Rowling written?’).

What makes a physical sentence a token of a particular type? And what exactly is the metaphysical status of types? Tough questions that we can’t answer here! But it is very widely agreed that we need some type/token distinction, however it is to be elaborated.

Types are very natural to us humans.  We train deep neural networks to distinguish between types, like recognizing cats and dogs. But if you think about it, we've already "told" the neural network that types are important.

One can imagine that the ability to come up with types is very useful from the point of view of evolution (e.g., identifying predator tokens as a type might be efficient), and that humans with faulty type mechanisms turned out to be evolutionary dead ends.  A thought experiment that might help figure out where types come from: how do we get a machine to come up with types from tokens without implicitly requiring that it do so in the first place?
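To make the "we've already told the network that types are important" point concrete, here is a toy sketch (all data invented) of why supervised learning presupposes types: the type inventory is handed to the learner up front as the label set, and learning only assigns new tokens to those pre-given types.

```python
# Toy sketch: tokens are individual feature vectors; types are the labels
# "cat" and "dog". The learner never invents types; it receives them.

def centroid(points):
    # Average a list of same-length tuples coordinate-wise.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(tokens_by_type):
    # The type inventory (the dict keys) is fixed before learning begins.
    return {t: centroid(toks) for t, toks in tokens_by_type.items()}

def classify(model, token):
    # Assign a new token to the nearest type centroid.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda t: dist2(model[t], token))

training = {
    "cat": [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "dog": [(4.0, 3.8), (3.9, 4.2), (4.2, 4.0)],
}
model = train(training)
print(classify(model, (1.0, 1.0)))
```

The thought experiment above asks for the opposite: a learner that receives only the token lists, with no dict keys, and has to come up with the partition itself.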

Sunday, November 29, 2020

Machine Learning and Physics

Machine learning has been in use at the Large Hadron Collider for long enough that there is now a Coursera online course about it. Basically, machine learning is used to help handle the approximately one petabyte per second of data generated by particle collisions. That is one kind of use of machine learning.
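The filtering role can be sketched like this (a hypothetical illustration, not the LHC's actual trigger: the features, weights, and threshold are all invented): a trained model scores each collision event, and only high-scoring events are kept for storage and analysis.

```python
import random

# Hypothetical sketch of trigger-style event filtering: score each simulated
# "event" and keep only the ones the model flags as interesting.
random.seed(0)

def score(event):
    # Stand-in for a trained classifier: here, just a fixed weighted sum.
    return 0.7 * event["energy"] + 0.3 * event["tracks"]

events = [{"energy": random.random(), "tracks": random.random()}
          for _ in range(1000)]
kept = [e for e in events if score(e) > 0.8]
print(f"kept {len(kept)} of {len(events)} events")
```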

 Neural networks do the job of recognizing the content of images and video, and of recognizing speech, much better than the traditional kind of computer algorithms that people can write, so they are absolutely the right technique for giving computers the senses of vision and hearing. The detection of a "cat" or a "utility pole" in a two-dimensional array of bytes, which is computer vision, generalizes to "seeing" patterns in N-dimensional arrays of data. This "data pattern sense" is a sense organ humans lack. Neural networks can help provide humans this sixth sense.
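A minimal sketch of that generalization (toy data, invented for illustration): the "find a pattern in a 2-D image" operation is just sliding a template over an array and scoring the match, and the same code works unchanged on an array of any rank, which is essentially what a convolutional layer does.

```python
import numpy as np

def match_scores(data, template):
    """Slide `template` over `data` (same rank) and return correlation scores."""
    out_shape = tuple(d - t + 1 for d, t in zip(data.shape, template.shape))
    scores = np.empty(out_shape)
    for idx in np.ndindex(*out_shape):
        window = data[tuple(slice(i, i + t)
                            for i, t in zip(idx, template.shape))]
        scores[idx] = np.sum(window * template)
    return scores

# The same function "sees" a pattern in a 3-D data set, not just a 2-D image.
data = np.zeros((4, 4, 4))
data[1:3, 1:3, 1:3] = 1.0            # a hidden 2x2x2 block pattern
template = np.ones((2, 2, 2))
scores = match_scores(data, template)
print(np.unravel_index(np.argmax(scores), scores.shape))
```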

 Beyond that, neural networks have to connect up with some kind of symbolic representation in order to be able to handle concepts, even simple relations like "bigger than", "smaller than", "behind", "above", etc. I learned about this back in February from the seventh lecture, Neurosymbolic AI, in the MIT introduction to deep learning, 6.S191; a regret this year is that I have not been able to follow up on it. The idea is something like this (words added to clips from David Cox's slides):


I believe that the computer will have to connect what it can sense with its "data pattern sense" to symbolic representations; after that, what the computer can do will be no better or worse than whatever automated reasoning can do today, for instance in mathematical theorem provers. So, if mathematics falls to Artificial Intelligence, then physics may follow, but not otherwise.
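The neural-to-symbolic handoff I have in mind can be sketched as follows (a toy illustration; every name and attribute here is hypothetical, and the "perception" stage is a stub standing in for a neural network): perception outputs objects with estimated attributes, and a symbolic stage evaluates relations like "bigger than" over those outputs.

```python
# Toy neurosymbolic pipeline: a perception stub produces object attributes,
# and symbolic predicates reason over them.

def perceive(scene):
    # Stand-in for a neural network: in reality this would map pixels to
    # objects with estimated attributes; here the attributes are given.
    return {obj["name"]: obj for obj in scene}

def bigger_than(objs, a, b):
    return objs[a]["size"] > objs[b]["size"]

def behind(objs, a, b):
    return objs[a]["depth"] > objs[b]["depth"]

scene = [
    {"name": "cat",  "size": 3.0, "depth": 2.0},
    {"name": "ball", "size": 1.0, "depth": 5.0},
]
objs = perceive(scene)
print(bigger_than(objs, "cat", "ball"))
print(behind(objs, "ball", "cat"))
```

Once perception emits symbols like these, the downstream reasoning is exactly the kind of thing existing automated-reasoning systems do, which is the point of the paragraph above.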