The shock of the mean: Simon Raper recounts the history of the arithmetic mean, why scientists of the past rejected the idea, and why their concerns are still relevant in the ongoing struggle to communicate statistical concepts.



This article lists the three assumptions Carl Friedrich Gauss used to mathematically derive the Normal (Gaussian) Distribution.






Consider measuring the same person’s systolic blood pressure repeatedly over the course of a given day:






1. Discuss how each of these assumptions can be used to understand both the central tendency and the dispersion of the person’s repeated systolic blood pressure measurements.
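To make the discussion concrete, the following minimal Python sketch simulates repeated readings under the three assumptions quoted in the article below. The numbers are invented for illustration only: a "true" pressure of 120 mmHg, an error spread of 4 mmHg and 12 readings are assumptions, not data from the article or the question.

import random
import statistics

# Illustrative sketch only: the "true" pressure (120 mmHg), the error spread
# (4 mmHg) and the number of readings (12) are invented assumptions.
random.seed(1)  # fixed seed so the example is reproducible

true_sbp = 120.0  # assumed true systolic blood pressure (mmHg)

# Gauss's assumptions 1 and 2: measurement errors are small more often than
# large, and errors of +e and -e mmHg are equally likely. random.gauss(0, 4)
# draws errors with exactly these properties.
readings = [true_sbp + random.gauss(0, 4) for _ in range(12)]

# Assumption 3: the arithmetic mean of the readings is taken as the most
# likely value of the true pressure (the central tendency).
mean_sbp = statistics.mean(readings)

# The spread of the readings around that mean (the dispersion) reflects the
# size of the measurement errors; the standard deviation summarises it.
sd_sbp = statistics.stdev(readings)

print("readings (mmHg):", [round(r, 1) for r in readings])
print(f"mean (central tendency): {mean_sbp:.1f} mmHg")
print(f"standard deviation (dispersion): {sd_sbp:.1f} mmHg")

Because the simulated errors are symmetric and mostly small, they tend to cancel out in the mean, while their typical size shows up in the standard deviation; this is the split between central tendency and dispersion that the question asks about.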




The shock of the mean

Simon Raper recounts the history of the arithmetic mean, why scientists of the past rejected the idea, and why their concerns are still relevant in the ongoing struggle to communicate statistical concepts.

(Simon Raper is a statistician and the founder of Coppelia, a London-based company that uses statistical analysis, simulation and machine learning to solve business problems.)

In 1755 the mathematician Thomas Simpson wrote to the Earl of Macclesfield to put his weight behind a controversial new technique:

“It is well known to your Lordship that the method practised by astronomers, in order to diminish the errors arising from the imperfections of the instruments, and of the organs of sense, by taking the Mean of several observations, has not been so generally received, but that some persons, of considerable note, have been of the opinion, and even publickly maintained, that one single observation, taken with due care, was as much to be relied on as the Mean of a great number.”1

A statistical average is a thing so familiar to us now, spilling out of news bulletins, government reports and business presentations, that it seems odd that it was ever in need of justification. But, to the scientists of the eighteenth century, the use of the arithmetical mean to summarise data was anything but obvious. As Simpson’s letter shows, the prevailing method was to take the best of one’s observations as the most reliable estimate, and this seemed good and right. To such men, all measurement meant measurement of an object, and accuracy was about carrying this out in the most skilful way possible. To combine the best of one’s observations with inferior attempts would have seemed perverse.

And if combining observations on a single object was considered radical, doing the same for measurements made on many different objects was almost unthinkable. As we shall see, it took the force of a new metaphor, the idea of the average man, to pull this off.

This article is about the origin and meaning of the arithmetic mean, and the struggle to justify and understand what now seems to be the simplest of statistical ideas. Why should this hold our interest? Well, on the one hand, it allows us to glimpse the world as our predecessors saw it, before the idea of a mean became commonplace. This exposes neglected arguments against its use which can shock us out of complacency. On the other hand, understanding why the arithmetic mean was originally so hard to grasp can help unravel why it is that statistical concepts are still so difficult to communicate to non-statisticians – a fact that dramatically reduces the effectiveness of our work.

The origin of the mean

It is hard to shake the idea that the arithmetic mean must have made an appearance early on in human history. It seems too obvious and too useful an idea not to have crept in before the eighteenth century. However, in The Seven Pillars of Statistical Wisdom the leading historian of statistics, Stephen Stigler, goes in search of earlier references to the arithmetic mean and comes up empty.2 Instead he uncovers plenty of evidence that the arithmetic mean was a difficult and counterintuitive idea. As Stigler points out, for all their ingenuity in the application of mathematics to practical problems, neither the Greeks nor the Romans, nor even the Arab astronomers and scientists of the Middle Ages, thought to calculate an average from their data.

It did, however, surface early on as a purely abstract idea. Around 280 BC the Pythagoreans mention the arithmetic mean in the context of music and proportion, along with the geometric and harmonic means, but there is no suggestion of using it for summarising data.

There were, though, precursors to its practical application – moments in which groups or individuals appear to stumble around the idea, half grasping it, only to be held back by the same prejudice towards the concrete that Simpson was still battling hundreds of years later. For example, in 428 BC the Greek historian Thucydides describes the process of estimating the size of an enemy’s defences by counting its height in bricks. Several people made the count and the most common value (what we would call the mode) was taken as the best estimate. The Greeks clearly understood that there were benefits to pooling data, but they clung on to the assumption that it is the best observations that count, the mode being just one way (by consensus) of deciding on the best.

We have to wait until the early sixteenth century for the first true instance of the use of an arithmetic mean for a practical purpose, although it is neither named as such nor explicitly linked to the mathematical concept. This is the attempt by the mathematician Jakob Köbel to set a standard for a unit of land measurement, the rod, which was defined as 16 feet long. The difficult matter of whose feet should be used was solved by picking 16 individuals and lining them up toe to heel to define the official length of a rod. We would say that the differences between the individuals’ feet lengths were averaged out in the aggregate. But, as Stigler points out, the notion that there was something like an average foot length, in which the unique characteristics of any particular foot were discarded, was still a long way from being recognised. The individuals whose feet make up the rod are drawn in meticulous detail in an engraving that depicts the process. It is significant that “their identity was not discarded; it was the key to the legitimacy of the rod”.

Unsurprisingly, given its reliance on multiple observations, it is in astronomy that we see the first general trend towards systematically combining data. At the end of the sixteenth century, Tycho Brahe recommends the repetition of measurements without specifying a method for combining them. We then find astronomers experimenting with a wide range of techniques for doing so, including means, mid-ranges and medians, without arriving at a consensus. (Ironically, when we do get the first recorded use of the term “arithmetic meane”, in 1635, it is not used to refer to a mean at all but rather to a mid-range. The astronomer Gellibrand uses it to describe the midpoint between the highest and lowest value recorded, effectively a mean of two values.) However, it should be noted that whenever an average is used by astronomers they proceed extremely cautiously, refusing to combine anything other than observations made in identical circumstances. As Stigler notes: “Astronomers averaged measurements they considered to be equivalent, observations they felt were of equal intrinsic accuracy because the observations had been made by the same observer, at the same time, in the same place, with the same instrument, and so forth. Exceptions, instances in which measurements not considered to be of equivalent accuracy were combined, were rare before 1750.”

[Image: The father of the “average man”: Belgian astronomer, mathematician, statistician and sociologist Adolphe Quételet (1796–1874). Portrait by Joseph-Arnold Demannez (1825–1902). Steel engraving, recoloured. Source: Library of Congress (cph 3b11632)]

We then enter a fascinating period, lasting from the middle of the seventeenth century to the end of the eighteenth, in which the supporters of the average gradually gain ground over its detractors. Around 1660 the eminent scientist Robert Boyle argues against repeated experiments, comparing a single experiment of high quality to a valuable oriental pearl, not to be traded for any number of cheap and inferior specimens. In 1722, in a posthumously published article that was largely ignored, Roger Cotes argues for the average by giving a justification based on the centre of gravity of a set of weights, whose positions on a horizontal bar represent their values. By the time we get to Simpson in 1755, the balance seems to be shifting sufficiently in favour of the average to warrant a fight-back by “persons of considerable note”. Finally, a letter from Daniel Bernoulli in 1777 seems to imply that taking an average had become the norm.

The wrongs that make a right

The final acceptance of the arithmetic mean required another important conceptual shift: from the more intuitive idea that observational errors stack up over time (as they do, for example, when making mathematical deductions) to the less obvious thought that, provided there is no bias, they will cancel each other out. Stigler cites the medieval Trial of the Pyx, a […]

It took an explicit theory of errors, as formulated by Carl Gauss as the foundation for his work on the normal distribution, to fully dislodge the idea of cumulative error. Saul Stahl, in an excellent article on the history of the normal distribution,3 shows how Gauss derived the normal distribution from only three assumptions:

1. Small errors are more likely than large errors.
2. For any real number ε, the likelihoods of errors of magnitudes ε and −ε are equal.
3. In the presence of several measurements of the same quantity, the most likely value of the quantity being measured is their average.

The derivation itself is a perfect example of how mathematics can build something new and unexpected from just its own rules and a set of simple axioms – which makes it even more frustrating that this step is missed out of almost every introduction to statistics, leaving the student feeling that the normal curve is something arbitrary and mysterious. Gauss’s normal curve, as a model of observational errors, implies both that errors cancel each other out and that the average is the best estimate of the quantity being measured. But, as Stahl points out, the latter point was brought in quite brazenly as the third axiom – it was assumed, not proved. It was Laplace in 1810 who provided the final brick in the edifice by showing, in the […]
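The excerpt breaks off before showing the derivation it praises. As a purely supplementary illustration, here is a compressed sketch of the standard argument, along the lines Stahl reconstructs, written in LaTeX; the symbols f, g, k and σ are introduced here for convenience and are not taken from the article.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $f(\varepsilon)$ denote the unknown density of a single measurement error.
For observations $x_1,\dots,x_n$ of a true value $\theta$, assumption~3 says the
likelihood $\prod_i f(x_i-\theta)$ is maximised at $\theta=\bar{x}$. Setting the
derivative of the log-likelihood to zero at $\theta=\bar{x}$ gives
\[
  \sum_{i=1}^{n} g(x_i-\bar{x}) = 0, \qquad g = f'/f .
\]
Requiring this to hold for every possible sample forces $g$ to be linear,
$g(\varepsilon)=k\varepsilon$, so that $f'(\varepsilon)/f(\varepsilon)=k\varepsilon$
and hence $f(\varepsilon)=C\,e^{k\varepsilon^{2}/2}$. Assumption~1 (small errors
more likely than large ones) forces $k<0$; writing $k=-1/\sigma^{2}$ and
normalising $f$ to integrate to one gives
\[
  f(\varepsilon)=\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\varepsilon^{2}/(2\sigma^{2})},
\]
the normal curve. Assumption~2 (errors $+\varepsilon$ and $-\varepsilon$ equally
likely) is what lets the errors cancel around the mean rather than accumulate.
\end{document}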
Answered 1 day after Feb 25, 2021

Answer To: The shock of the mean

Parul answered on Feb 27 2021
128 Votes
When data from any piece of research are gathered, they need to be summarised and presented as a single quantity that is easily understood by practising physicians (Stigler, S., 2016). Consider, for instance, repeated measurements of a person's systolic blood pressure taken to establish its true value. The data in this case are quantitative (numerical) measurements. Applying Carl Friedrich Gauss's normal distribution, we work under the three assumptions mentioned below (Stigler, S., 1986).
The three major assumptions are:
· Small errors are more likely to occur than larger errors.
· The second assumption is that, for any real number ε, errors of magnitude ε and −ε are equally likely...