Suppose we have a physical quantity Q that must take its value between 0 and 1, inclusive. Let us stipulate that our current knowledge has absolutely nothing to say about the possible value of Q. It is very tempting to say that, a priori, Q is uniformly distributed on [0,1]. But this is nothing more than a theoretical prejudice. We would in effect be saying that our utter lack of evidence makes it nine times more likely for Q to lie between 0 and 0.9 than between 0.9 and 1. In reality, the support for Q to take a value in some set is exactly equal to the support for Q not to take a value in that set (as long as the set is not the entire interval, or equivalently, as long as we are not trying to assert that Q takes on no value whatsoever).
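The nine-to-one claim can be checked with a small Monte Carlo sketch (the sample size and cutoff are mine, chosen for illustration): under a uniform prior on [0,1], the subinterval [0, 0.9] receives nine times the probability mass of (0.9, 1], even though our supposed evidence says nothing at all.

```python
import random

# Hypothetical illustration: sample Q uniformly on [0, 1] and compare the
# empirical mass assigned to [0, 0.9] versus (0.9, 1].
random.seed(0)
N = 100_000
samples = [random.random() for _ in range(N)]
p_low = sum(s <= 0.9 for s in samples) / N   # mass of [0, 0.9]
p_high = sum(s > 0.9 for s in samples) / N   # mass of (0.9, 1]
print(p_low / p_high)  # close to 9: the "ignorance" prior is anything but neutral
```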
This may be easier to see if we map [0,1] to [0, infinity). There is no uniform distribution on [0, infinity), and our theoretical prejudice manifests itself explicitly when we try to assign a probability measure for Q on [0, infinity).
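The reparametrization point can be made concrete with a sketch. Assuming (my choice, not the author's) the map y = x/(1-x), which sends [0,1) onto [0, infinity), a uniform prior on x induces a decidedly non-uniform prior on y: equal-length intervals of y receive unequal mass, so "uniform ignorance" in one parametrization is informative in another.

```python
import random

# Assumed illustrative map: y = x / (1 - x) takes [0, 1) onto [0, infinity).
# If x is uniform, then y < 1 iff x < 1/2, and 1 <= y < 2 iff 1/2 <= x < 2/3,
# so the equal-length intervals [0,1) and [1,2) get masses 1/2 and 1/6.
random.seed(1)
N = 200_000
ys = [x / (1 - x) for x in (random.random() for _ in range(N))]
p_01 = sum(0 <= y < 1 for y in ys) / N  # expected 1/2
p_12 = sum(1 <= y < 2 for y in ys) / N  # expected 1/6
print(p_01, p_12)
```

Nothing about our ignorance of Q privileges the [0,1] parametrization over the [0, infinity) one, yet the two "uniform" priors contradict each other.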
In a physical theory, we need a physical reason to believe that Q has a probability distribution. In Norton's language, we need a randomizer in order to induce probabilities. For example, in many situations we have good reason to believe that a quantity follows a normal distribution, because it arises as an aggregate of a large number of underlying processes and the central limit theorem applies. In statistical mechanics, we have good reason from Hamiltonian mechanics to assume that a system in thermal equilibrium is well represented by a uniform distribution over its microstates. And so on.
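The aggregation case can be sketched with a toy simulation (the number of micro-effects and their uniform law are my assumptions for illustration): summing many small independent contributions yields an approximately normal aggregate, which is the kind of physical "randomizer" that licenses a distribution.

```python
import random
import statistics

# Toy illustration of the central limit theorem: each observation is the sum
# of 50 independent uniform "micro-effects" on [0, 1].
random.seed(2)
n_micro, n_obs = 50, 20_000
aggregates = [sum(random.random() for _ in range(n_micro)) for _ in range(n_obs)]

mean = statistics.fmean(aggregates)    # theory: n_micro * 1/2 = 25
stdev = statistics.pstdev(aggregates)  # theory: sqrt(n_micro / 12), about 2.04
# Fraction of aggregates within one standard deviation of the mean;
# for a normal law this is about 0.683.
within = sum(abs(a - mean) <= stdev for a in aggregates) / n_obs
print(mean, stdev, within)
```

Here the distribution is earned from the structure of the process, not stipulated from ignorance.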