Saturday, June 30, 2012

Means of Knowledge - 2

The diagram below is based on one that Bee drew; the lower part of the diagram is what I've added.
Cats
A Google research team built a machine - a neural network with a billion connections - using a computing array with 16,000 processors. (More on neural networks below.) They then trained the network on ten million digital images chosen at random from YouTube videos. They used unsupervised training, i.e., the machine was not given any feedback to evaluate its training. And, lo and behold, the neural network was able to recognize cats (among other things). (From The NY Times.)

Google Fellow Dr. Jeff Dean - "It basically invented the concept of a cat".

A bit about neural networks - this slide is from Prof. Yaser Abu-Mostafa's telecourse and depicts a generic neural network.

[Slide: a generic neural network]

Very simply, the inputs x on the left - in our case, some set of numbers derived from the digital images - are multiplied by other numbers, called weights, and summed up. These weighted sums are fed to the first layer of the neural network. Each "neuron" (the circles marked with θ) is fed with its own weighted sum, s. The neuron uses the function θ(s), a function with the shape shown above, to convert its input into its output. The outputs of the first layer of neurons are used as inputs to the next layer, and so on. The above network has a single final output h(x), but you can build a network with many outputs. So, e.g., for a neural network successfully trained to recognize cats, h(x) could be +1 if the inputs x correspond to a cat and -1 if they correspond to a not-cat.
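
To make the forward pass concrete, here is a minimal sketch in Python. It is my own toy illustration - not code from the course or from the Google experiment - with tanh standing in for the θ(s) function and all weights and inputs being made-up numbers.

```python
# Toy forward pass through a two-layer network: each "neuron" gets a weighted
# sum s of its inputs and emits theta(s).  tanh plays the role of the S-shaped
# theta here; the weights are arbitrary illustrative values.
import numpy as np

def theta(s):
    return np.tanh(s)               # squashes any weighted sum into (-1, +1)

def forward(x, W1, W2):
    x = np.append(1.0, x)           # the constant "1" input supplying the threshold
    hidden = theta(W1 @ x)          # first layer: one weighted sum per hidden neuron
    hidden = np.append(1.0, hidden)
    return theta(W2 @ hidden)       # h(x): near +1 for "cat", near -1 for "not-cat"

x = np.array([0.5, -1.2, 0.3])      # made-up input features
W1 = np.array([[0.1, 0.4, -0.7, 0.2],    # 2 hidden neurons x (1 constant + 3 inputs)
               [-0.3, 0.8, 0.5, -0.1]])
W2 = np.array([0.2, -0.6, 0.9])     # output weights (1 constant + 2 hidden outputs)
print("h(x) =", forward(x, W1, W2))
```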

Training simply involves choosing the weights.  The learning algorithm is some systematic way of picking weights and refining their values during training.

It is hypothesized that our brains work in very much the same way.  Our brains probably underwent supervised training.  In the above network, supervised training would mean that you feed it one of the images, look at the result the network gives (cat or not-cat), and then tell the learning algorithm whether the result was right or wrong.  The learning algorithm correspondingly changes the weights.  Eventually, if the network is successful in learning, the weights converge to some stable values.  I say our brains underwent supervised training because natural selection would tend to wipe out anyone who misperceived reality.
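
To make the supervised-training loop concrete, here is another toy sketch of my own (far simpler than anything Google built): a single perceptron whose weights are corrected only when its +1/-1 answer disagrees with the label the "supervisor" provides.

```python
# Toy supervised learning: a single perceptron.  For each labeled example we
# look at the network's answer (+1 or -1); when it is wrong, the learning
# algorithm nudges the weights.  The data below is made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0] + X[:, 1])       # the "truth" that the supervisor knows

w = np.zeros(3)                      # weights, including a threshold term
for _ in range(100):                 # repeated passes over the training set
    for xi, yi in zip(X, y):
        xi = np.append(1.0, xi)      # constant input for the threshold
        if np.sign(w @ xi) != yi:    # wrong answer: the supervisor says so,
            w += yi * xi             # and the weights are corrected

preds = np.sign(np.c_[np.ones(len(X)), X] @ w)
print("weights:", np.round(w, 2), " training accuracy:", (preds == y).mean())
```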

The Google experiment used unsupervised learning, which, to me, makes its discovery of the concept of a cat all the more remarkable.
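
For contrast, here is a minimal sketch of learning with no labels at all. This is k-means clustering - not what the Google team used, just my own small example on synthetic data - but it shows the flavor of unsupervised learning: structure emerges from the data without anyone ever saying "right" or "wrong".

```python
# Unsupervised learning in miniature: k-means finds two clusters in unlabeled
# data by alternating between assigning points to the nearest center and
# moving each center to the mean of its points.  The data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.5, (50, 2)),    # two unlabeled "blobs"
                  rng.normal(3, 0.5, (50, 2))])

k = 2
centers = data[rng.choice(len(data), k, replace=False)]
for _ in range(20):
    dists = ((data[:, None, :] - centers) ** 2).sum(axis=2)
    labels = np.argmin(dists, axis=1)                       # nearest center
    centers = np.array([data[labels == j].mean(axis=0) for j in range(k)])

print("cluster centers found without supervision:\n", np.round(centers, 2))
```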

[I should probably say a little more about biological neural networks.  The neuron receives inputs, usually from other neurons, across junctions called synapses.  Some inputs are excitatory and some are inhibitory.  The strength of the signal received depends on properties of the synapse.  If the overall strength of the inputs crosses some threshold, the neuron fires; otherwise it remains quiescent.  (The analogs of the threshold and of the quiescent/firing behavior are handled in our machine neural network above by the inputs from the circles containing the 1s and by the θ(s) function, respectively.)  I hope it is clear why the machine network above is naturally called a neural network.]

Arriving at the main theme: if what the neural network (and our brains) have assembled by perception of the environment is to be termed knowledge, where does it fit on Bee's original chart?  The collection of weights and connections in the neural network comprises at best an implicit model of cats.  It is these implicit models, presumably derived from the Real World Out There, that we are conscious of as objects in the Real World Out There.  Because we have been "trained" by evolution, the objects we perceive do, in fact, mostly correspond to objects in the Real World Out There.

Once we have Theory, we have abstract concepts on top of which to build further, and we eventually arrive at ideas, such as atoms and molecules, that are not directly there in our perceptions but are there in the real world.  Perhaps our difficulties with quantum mechanics arise because our brains have never had to handle inputs with quantum mechanical features; but perhaps that is not a limitation of neural networks, only a limitation of our perceptions.  So perhaps a super-Google neural network could be trained to "understand" quantum mechanics in a way that we never can - that is, until we build additional "senses" - brain-machine interfaces - that can feed additional data into our brains, to train the neural networks in our heads.

Further, our brains are definitely biased by the demands of survival.  Now we have, in principle, the ability, via unsupervised learning in very large neural networks, to find out more about the Real World Out There, without that bias imposed.   New means of knowledge????



Friday, June 29, 2012

Means of knowledge

This began life as my comment on Bee's blog.  The question has been bothering me for a few days.

Let us say that formal mathematical reasoning began around the time of Euclid, some 2500 years ago. Let us say that science as we understand it today began around the time of Newton, or some 300 years ago.

Each perhaps began with someone else, or sometime a bit earlier.  The historical details are not important to the point, which is the relative recentness of the appearance of these methods of acquiring knowledge compared to the long lineage of man - even if you take humans to have had the mental capability to acquire and use these methods for only the last 10,000 years, instead of the hundred-fold longer period of the million-plus years of human evolution.

Much before the appearance of the formal mathematical method, and the scientific method, could anyone have dreamed of these methods and their effectiveness?

Can we conceive of additional effective methods of acquiring knowledge? Is our inability to imagine them a proof that there can be no such methods? It would be somewhat arrogant to think that we've exhausted the possibilities so soon.

Or are we at the dawn of a third method, one that sits vaguely at the edge of our intuition? The new cognition-enhancing tools we have are the computer and the network, and maybe the collaborative mathematics and science enabled by the web are just our first fumbling steps towards what we cannot yet grasp.

Tuesday, June 26, 2012

Dynamic Range

Hydrangeas at the New York Botanical Garden.  Could use some dynamic range here.  Raising the shadows beyond what I did in the third version introduces banding noise in the shadows. This is a photographic situation where the Nikon D800 is expected to shine.

The dome of the conservatory is overexposed, even with highlights set to 0 in Lightroom 4 (just upgraded from version 3).

[Photo: 20120623-_MG_5442]

Monday, June 25, 2012

A Symbol of America's Misfortune: SCOTUS

It must be payback for bad karma that the US is now saddled with the current Supreme Court bench.
Read, gnash your teeth, and weep.  To me, they don't sound any different from an Iranian Ayatollah issuing one of his inhuman fatwas - the same clever parsing of words and tortured logic.

Everything you need to know about Scalia is contained in the following exchange with Lesley Stahl in a 2009 60 Minutes interview:
"If someone's in custody, as in Abu Ghraib, and they are brutalized by a law enforcement person, if you listen to the expression 'cruel and unusual punishment,' doesn't that apply?" Stahl asks.
"No, No," Scalia replies.
"Cruel and unusual punishment?" Stahl asks.

"To the contrary," Scalia says. "Has anybody ever referred to torture as punishment? I don't think so."

"Well, I think if you are in custody, and you have a policeman who's taken you into custody…," Stahl says.

"And you say he's punishing you?" Scalia asks.

"Sure," Stahl replies.

"What's he punishing you for? You punish somebody…," Scalia says.

"Well because he assumes you, one, either committed a crime…or that you know something that he wants to know," Stahl says.

"It's the latter. And when he's hurting you in order to get information from you…you don't say he's punishing you. What's he punishing you for? He's trying to extract…," Scalia says.

"Because he thinks you are a terrorist and he's going to beat the you-know-what out of you…," Stahl replies.

"Anyway, that's my view," Scalia says. "And it happens to be correct."
 http://www.youtube.com/watch?v=zPqjCM6e5oM

And after going on a tirade about how the sovereignty of the State of Arizona is being violated by the POTUS, Scalia joins in overturning an opinion of the State of Montana's highest court without a hearing.

Grass suddenly produces cyanide, kills cows

The story is here.  In Texas, supposedly stressed by drought, a hybrid grass variety, Tifton 85, produced enough HCN to kill cows that grazed on it.  This had not been observed in the decades since its introduction.

What I find interesting is this, quoted on dailykos.com from a web page that has since been taken down (the original link: http://haysagriculture.blogspot.com/2012/06/potential-toxicity-issues-with-tifton.html):

A little background is in order.  Tifton 85 bermudagrass was released from the USDA-ARS station at Tifton, GA in 1992 by Dr. Glenn Burton, the same gentleman who gave us Coastal bermudagrass in 1943.  One of the parents of Tifton 85, Tifton 68, is a stargrass.  Stargrass is in the same genus as bermudagrass (Cynodon) but is a different species (nlemfuensis versus dactylon) than bermudagrass.  Stargrass has a pretty high potential for prussic acid formation, depending on variety, but even with that being said, University of Florida researchers at the Ona, FL station have grazed stargrass since 1972 without a prussic acid incident.
The pasture where the cattle died had been severely drought stressed from last year’s unprecedented drought, and had Prowl H2O {a herbicide} applied during the dormant season, a small amount of fertilizer applied in mid to late April, received approximately 5” of precipitation within the previous 30 days, and was at a hay harvest stage of growth.  Thus, the pasture did not fit the typical young flush of growth following a drought-ending rain or young growth following a frost we typically associate with prussic acid formation.
My question is - how long will it take to unearth the dangers of Genetically Modified plants and animals?

Sunday, June 24, 2012

Evolutionary Metaphor

"The Mark Inside : A perfect swindle, a cunning revenge, and a small history of the big con" by Amy Reading, is mostly about J. Frank Norfleet, a rancher from Texas, and his successful quest to catch the people that swindled him of a fortune.   The book mentions newspaper stories in the New York Times, and so I looked up the archives.  I found this, published April 27, 1924,

A Sucker With Claws
Texan, Gulled by Con' Men, Jails 40 of Them and May Be Rewarded By Congress

Just because a man is born a sucker is no sign that he may not turn into a tiger before his earthly course is run.  J. Frank Norfleet of Texas made the evolutionary jump almost overnight, with the result that about two score confidence men, members of a gang that fleeced him out of $45,000, are now behind prison bars and have ample time to ponder the Darwinian theory that a fish may grow claws.
 

Welcome to the Club!

Balu & friends are shocked by Richard Dawkins' combination of ignorance and arrogance.  I say, Welcome to the club!

Monday, June 18, 2012

On WSJ reporting

This Wall Street Journal blog talks about the conviction of Rajat Gupta for insider trading on Wall Street:
The jury found that he was “motivated not by quick profits but rather a lifestyle where inside tips are the currency of friendships and elite business relationships,” The Wall Street Journal reported.
But that is actually what the prosecution claimed.  I don't know that, in finding someone guilty, the jury endorses the motive that the prosecution ascribes.


Rajat Gupta, once one of America’s most- respected corporate directors, was indicted on six criminal counts in an insider trading case that prosecutors said was motivated not by quick profits but rather a lifestyle where inside tips are the currency of friendships and elite business relationships.
By Michael Rothfeld, Susan Pulliam and S. Mitra Kalita, The Wall Street Journal, October 27, 2011
PS: the indictment does not mention motive.

Friday, June 15, 2012

Campaign Slogan

From a reader of the NYTimes:

Romney would have let Detroit die and Bin Laden live.
But then, in typical liberal fashion, the reader adds the qualifier:
(if he meant what he said, when he said it.)
Campaign slogans are not meant to be a place for fair play!

Wednesday, June 13, 2012

Canada's War on Science

Read about it here.   The conservative government of Canada is shutting down any research that does not suit its ideological agenda.  It sounds just like China or Pakistan. 


Monday, June 11, 2012

Cultural biases

Regardless of whom you think of as (more) correct, and of whether the language problem in science is real or imagined, from this essay one would have to conclude that people project the implicit assumptions of their culture even onto the animals they study.

PS:
Here is an article about Kinji Imanishi and his ground-breaking research in primatology (Current Biology Vol. 18 No. 14, Tetsuro Matsuzawa and William C. McGrew):
Imanishi’s focus was to seek the evolutionary origin of human society. For him the central issue was society, and society had its own reality: it cannot be reduced to its constituent individuals nor just relationships among individuals. The society exists as a whole. This belief was the primary force for Imanishi sending the expeditions to study the society of monkeys and the society of chimpanzees in the wild.

China's Kleptocracy?

Prof. Krugman points to this post about China and wonders if it is correct.  Prof. Krugman's synopsis is:
Hempton basically argues that China has turned financial repression — controlled interest rates on deposits, which ensure a negative real rate of return — into a giant engine of kleptocracy. The banks extract rent from depositors, transfer those rents on to state-owned enterprises in the form of cheap loans, and then the Party elite essentially embezzles the money. Underlying the whole system is a high savings rate that Hempton attributes to the one-child policy.
His readers point to these writers; here are links to some of their writings.
Arthur Kroeber.
Patrick Chovanec 
Nick Lardy
Michael Pettis

and a direct reply:
Thomas Barnett
So yeah, an accurate description, and yeah, way over-the-top in its gloom-and-doomism.  

Sunday, June 10, 2012

Machine Learning Course - CS156

Note: Registration for the recorded version of the course will open mid-June.

Caltech Professor Yaser Abu-Mostafa covers the basic theory of machine learning in this distance learning course.  The 18 recorded lectures are here and the rest of the course material is linked from that page, or is here.   Each lecture recording is an hour of lecture, followed by a half hour of recorded question & answer.

In order to do the homework, you will need to write some programs - perhaps Python is best.  I had Mathematica too, which I used for visualization.  You will need a quadratic programming package, which, with Mathematica, costs $$, but there is freeware for Python.  Most of the homework is useful.
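
As an illustration of what such a quadratic programming package gets used for - this is my own sketch on made-up, linearly separable data, not the course's code - here is the hard-margin SVM dual solved with the free cvxopt package for Python.

```python
# Hard-margin SVM via its dual QP:
#   minimize (1/2) a^T Q a - 1^T a   subject to   a >= 0,  y^T a = 0
# with Q_ij = y_i y_j x_i . x_j, solved here with cvxopt's qp() routine.
import numpy as np
from cvxopt import matrix, solvers

X = np.array([[1.0, 1.0], [2.0, 2.5], [0.0, -1.0], [-1.0, -2.0]])  # toy data
y = np.array([1.0, 1.0, -1.0, -1.0])
n = len(y)

M = y[:, None] * X                 # row i is y_i * x_i
P = matrix(M @ M.T)                # Q
q = matrix(-np.ones(n))
G = matrix(-np.eye(n))             # -a <= 0, i.e. a >= 0
h = matrix(np.zeros(n))
A = matrix(y.reshape(1, -1))       # y^T a = 0
b = matrix(0.0)

solvers.options['show_progress'] = False
alpha = np.ravel(solvers.qp(P, q, G, h, A, b)['x'])

w = ((alpha * y)[:, None] * X).sum(axis=0)   # weight vector from the alphas
sv = alpha > 1e-6                            # support vectors
b_svm = np.mean(y[sv] - X[sv] @ w)
print("alpha:", np.round(alpha, 3), " w:", np.round(w, 3), " b:", round(b_svm, 3))
```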

There is a book, too, "Learning from Data: A Short Course", by Yaser Abu-Mostafa and others; the course covers more than is in the book.  The book adds some depth to the areas that it covers, and if you're going to spend time on the course, having the book is probably worthwhile.

Professor Abu-Mostafa is a very good lecturer, and overall I rate the course highly.  I met my objective, which was to get a view of the foundations.  Supposedly "data science" is an emerging discipline, and machine learning is one of the weapons in the data scientist's arsenal.  Now I'm a bit better prepared to evaluate prospective data scientists.

The most "aha!" moment in the course was with Support Vector Machines; and the most vague concept in the course was that of deterministic noise.

The main drawback of distance learning is the relative isolation; however, distance learning is one way the very high cost of higher education might be addressed.  Now anyone with a reasonable internet connection can take a fairly substantial course from Caltech.

I took the course "live" - there were two lectures a week through April and May - and I just submitted my answers to the final exam.  I suppose henceforth one could take it self-paced.

(Added later): The above probably doesn't sound like a ringing endorsement of this course.  That is more due to my nature than to the course.  But if the subject interests you, I strongly recommend it.


Saturday, June 09, 2012

Hunter's The Chronicles of Mitt

On Daily Kos, Hunter imagines what the diary of presidential hopeful Mitt Romney might read like.
Here are links to a few, and excerpts from some (the lack of an excerpt doesn't mean it isn't good).
June 9 entry.
June 8 entry.
Excerpt: Hello, human diary. It is I again, Mitt Romney, your better. There is not much to report today, as I have been mostly engaged in further practice sessions as to how I can better be as generic as possible when addressing issues of the day. I am doing quite well in these sessions. For example, my economic policy can now be summarized by saying: America is the best nation in the world. My foreign policy is roughly that same sentence.
June 7 entry.

Excerpt: I have very defined theories as to what money does and does not like, Mr. Diary, that I will perhaps expound upon at a later date. It is obvious that money gets lonely when in small quantities, and strongly prefers the company of like-minded or larger sums of money. It thus tends to shift itself from poorer individuals to richer ones quite rapidly if not blocked by cruel government policies preventing such things. Money is very shy, and will try to hide itself (perhaps, say, in other countries) if it senses tax regulation nearby. Money likes to create jobs, primarily in the sector of the economy dedicated to guarding and pampering itself. Other human units may be experts on foreign policy, or on energy matters, or on matters of law or the like, but my own expertise is in the various moods and preferences of money. I have based each of my various careers and each one of my own current policy prescriptions based on this knowledge; indeed, most of my campaign has been an effort to get this nation to more properly consider how deeply they can hurt the feelings of money, through current policies, and how best to reform those policies in the future.
June 6 entry.
June 1 entry.

Friday, June 08, 2012

High Fructose Corn Syrup

A few months ago, I had posted Prof. Robert Lustig's warning about sugar, or more accurately fructose. Sugar is 50% glucose and 50% fructose.  The commonly used sweetener, High Fructose Corn Syrup, is said to be 55% fructose and 45% glucose, and doesn't seem much worse than sugar.

But now this  (and from 2010, this)
Consider, for example, the most common form of HFCS - HFCS 55, which has 55% fructose compared to sucrose which is 50% fructose. Most people think this difference is negligible, but it's 10% more fructose. Yet this assumes that foods and drinks are made with HFCS 55. Our study showed that certain popular sodas and other beverages contain a fructose content approaching 65% of sugars. This works out to be 30% more fructose than if the sodas were made with natural sugar. HFCS can be made to have any proportion of fructose, as high as 90%, and added to foods without the need to disclose the specific fructose content.
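
To spell out the arithmetic in that quote (the comparisons are relative to sucrose's 50% fructose share):

```latex
\frac{55 - 50}{50} = 10\%\ \text{more fructose (HFCS 55 vs. sugar)}, \qquad
\frac{65 - 50}{50} = 30\%\ \text{more fructose (the measured sodas vs. sugar)}
```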
It is not fair for the industry association to talk about the relative harmlessness of  HFCS 55 and HFCS 42 and then to feed us HFCS 65! 

Wednesday, June 06, 2012

Math stereotyping starts young

While exploring Alexandre Borovik's math blog, I came across this:
Even [when their children are] as young as 22 months, American parents draw boys’ attention to numerical concepts far more often than girls’. Indeed, parents speak to boys about number concepts twice as often as they do girls. For cardinal-numbers speech, in which a number is attached to an obvious noun reference — “Here are five raisins” or “Look at those two beds” — the difference was even larger. Mothers were three times more likely to use such formulations while talking to boys.
 

Tuesday, June 05, 2012

The Avengers: fails the Bechdel Test

The Avengers  is an entertaining movie.  It however fails the Bechdel test.

Recall what the Bechdel test is: The movie has to have (1) at least two named female characters,  (2) who talk to each other (3) about something other than a man.

The Avengers has three named female characters.  However, they do not talk to each other.  A feminist critique is here.




Monday, June 04, 2012

Who speaks for women?

In the Land of the Free and the Home of the Brave, it is men who speak for women, by an overwhelming majority.  At least in the mainstream media.  Via dailykos.com