The invention of measurements and perception


This is going to be pretty airy-fairy. Sorry.

Let’s talk about how measurements get invented, and how this limits us.

One of the great works of engineering, science, and data is finding signals in the noise. What matters? Why? How much?

My background is in computer science, and a little in electrical engineering. So the question of what to measure to make systems (audio and computer) "better" is always on my mind.

What’s often missing in measurements is "pleasure" or "satisfaction."

I believe in math. I believe in statistics, but I also understand the limitations. That is, we can measure an attribute, like "interrupts per second," "inflammatory markers," or Total Harmonic Distortion plus noise (THD+N).

However, measuring them and understanding outcome and desirability are VERY different things. Companies that can do this excel at creating business value. For instance, like it or not, Bose and Harman excel (in their own ways) at finding this out. What someone will pay for and how low a distortion figure measures are VERY different.
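
To make the measurement half of that concrete, here is a minimal sketch of a THD+N calculation (my own illustration, not any lab's actual procedure). It assumes NumPy, a 1 kHz test tone at a 48 kHz sample rate, and made-up amounts of distortion and noise; it notches out the fundamental and reports the residual relative to the total.

import numpy as np

fs = 48000                              # sample rate in Hz (assumed)
f0 = 1000                               # test-tone frequency in Hz (assumed)
t = np.arange(fs) / fs                  # one second of samples

# Pretend "device under test" output: fundamental plus a little 2nd/3rd
# harmonic distortion and noise (all amounts invented for illustration).
x = (np.sin(2 * np.pi * f0 * t)
     + 1e-3 * np.sin(2 * np.pi * 2 * f0 * t)
     + 5e-4 * np.sin(2 * np.pi * 3 * f0 * t)
     + 2e-4 * np.random.randn(fs))

# Windowed FFT to limit spectral leakage.
win = np.hanning(len(x))
power = np.abs(np.fft.rfft(x * win)) ** 2
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# THD+N: everything that is NOT the fundamental, relative to the whole signal.
fundamental = np.abs(freqs - f0) < 20   # +/- 20 Hz guard band (assumed)
thdn = np.sqrt(power[~fundamental].sum() / power.sum())
print(f"THD+N ~ {100 * thdn:.3f}%  ({20 * np.log10(thdn):.1f} dB)")

A machine can spit out that number all day long. It says nothing about whether anyone would pay for, or enjoy, the device that produced it.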

What is my point?

Specs are good. I like specs, I like measurements, and they keep makers from cheating (more or less), but there must be a link between measurements and listener preferences before we can attribute desirability, listener preference, or economic viability to them.

What is that link? That link is you. That link is you listening in a chair, free of ideas like price, reviews or buzz. That link is you listening for no one but yourself and buying what you want to listen to the most.

erik_squires
Good points.

I guess my focus is on the distance between a measurement, which could be done by an automated device, and human perception/value.

I agree we've measured jitter for a while, but was that all? Were there some kinds of jitter worse than others? How low before we can no longer tell?
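
As a sketch of why "what kind of jitter" matters, here is a small illustration (the 2 ns figure and the 200 Hz pattern are made-up numbers, not audibility thresholds, and NumPy is assumed): purely random timing error mostly lifts the broadband noise floor, while a periodic timing error of the same RMS puts discrete sidebands around the tone.

import numpy as np

fs = 48000                               # sample rate in Hz
f0 = 1000                                # tone frequency in Hz
t = np.arange(fs) / fs

jitter_rms = 2e-9                        # 2 ns RMS timing error (made-up amount)
random_jitter = jitter_rms * np.random.randn(fs)                         # uncorrelated
periodic_jitter = jitter_rms * np.sqrt(2) * np.sin(2 * np.pi * 200 * t)  # 200 Hz pattern, same RMS

def spectrum_db(x):
    # Magnitude spectrum in dB relative to the tone's peak bin.
    win = np.hanning(len(x))
    s = np.abs(np.fft.rfft(x * win))
    return 20 * np.log10(s / s.max() + 1e-12)

freqs = np.fft.rfftfreq(fs, 1 / fs)
sidebands = (np.abs(freqs - (f0 - 200)) < 5) | (np.abs(freqs - (f0 + 200)) < 5)

for name, jit in [("no jitter", np.zeros(fs)),
                  ("random jitter", random_jitter),
                  ("periodic jitter", periodic_jitter)]:
    sig = np.sin(2 * np.pi * f0 * (t + jit))     # sampling instants shifted by the jitter
    level = spectrum_db(sig)[sidebands].max()
    print(f"{name:16s} level near the +/-200 Hz sidebands: {level:6.1f} dB")

Same RMS timing error, very different spectra; a single jitter number hides that distinction.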

@teo: "I like to remind people that math is an excellent tool, but to remember that math exists nowhere in the known universe except as that - in a human’s head."

If it weren't for math, you wouldn't have a head.
stevecham
"If it weren't for math, you wouldn't have a head."

You appear to be worshipping at the wrong altar. Even the best math is not God or the Creator of Life. You are a confused, disoriented, misinformed person.
Whoa! What is this - a convention of English majors?

In audio the most logical approach is to assume everything is true and nothing is true.

“Because it’s what I choose to believe.” Dr. Elizabeth Shaw, Prometheus
How about a little philosophy?

There are a variety of philosophical approaches to decide whether an observation may be considered evidence; many of these focus on the relationship between the evidence and the hypothesis. Carnap recommends distinguishing such approaches into three categories: classificatory (whether the evidence confirms the hypothesis), comparative (whether the evidence supports a first hypothesis more than an alternative hypothesis) or quantitative (the degree to which the evidence supports a hypothesis).[10] Achinstein provides a concise presentation by prominent philosophers on evidence, including Carl Hempel (Confirmation), Nelson Goodman (of grue fame), R. B. Braithwaite, Norwood Russell Hanson, Wesley C. Salmon, Clark Glymour and Rudolf Carnap.[11]

Based on the philosophical assumption of the Strong Church-Turing Universe Thesis, a mathematical criterion for evaluation of evidence has been conjectured, with the criterion having a resemblance to the idea of Occam’s Razor that the simplest comprehensive description of the evidence is most likely correct. It states formally, "The ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized."[12]
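Read as a formula, that quoted criterion is basically the minimum-description-length idea; a rough rendering (my paraphrase, not taken from the cited source) is:

\[ \hat{M} = \arg\min_{M}\big[ -\log P(M) - \log P(D \mid M) \big] \]

where P(M) is the universal (algorithmic) prior probability of the model and P(D | M) is the probability of the observed data given the model; the simplest model that still accounts for the data minimizes the sum.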

According to the posted curriculum for an "Understanding Science 101" course taught at the University of California, Berkeley: "Testing hypotheses and theories is at the core of the process of science." This philosophical belief in "hypothesis testing" as the essence of science is prevalent among both scientists and philosophers. It is important to note that this view does not take into account all of the activities or scientific objectives of all scientists. When Geiger and Marsden scattered alpha particles through thin gold foil, for example, the resulting data enabled their experimental adviser, Ernest Rutherford, to very accurately calculate the mass and size of an atomic nucleus for the first time. No hypothesis was required. A more general view of science may be the one offered by physicist Lawrence Krauss, who consistently writes in the media about scientists answering questions by measuring physical properties and processes.

Concept of scientific proof

While the phrase "scientific proof" is often used in the popular media,[13] many scientists have argued that there is really no such thing. For example, Karl Popper once wrote that "In the empirical sciences, which alone can furnish us with information about the world we live in, proofs do not occur, if we mean by 'proof' an argument which establishes once and for ever the truth of a theory".[14][15]

Albert Einstein said: "The scientific theorist is not to be envied. For Nature, or more precisely experiment, is an inexorable and not very friendly judge of his work. It never says 'Yes' to a theory. In the most favorable cases it says 'Maybe,' and in the great majority of cases simply 'No.' If an experiment agrees with a theory it means for the latter 'Maybe,' and if it does not agree it means 'No.' Probably every theory will someday experience its 'No' - most theories, soon after conception."[16]