Quite a while ago I wrote an article about climate publication bias. Before returning to that topic, some background.
Let's take a step back to 2001 and the "Summer of the Shark." The media hysteria began in early July, when a young boy was bitten by a shark on a beach in Florida. Subsequent attacks received breathless media coverage, up to and including near-nightly footage from TV helicopters of swimming sharks. Until the 9/11 attacks, sharks were the third biggest story of the year as measured by the time dedicated to them on the three major broadcast networks' news shows.
Through this coverage, Americans were left with a strong impression that something unusual was happening -- that an unprecedented number of shark attacks were occurring in that year, and the media dedicated endless coverage to speculation by various "experts" as to the cause of this sharp increase in attacks.
Except there was one problem -- there was no sharp increase in attacks. In the year 2001, five people died in 76 shark attacks. However, just a year earlier, 12 people had died in 85 attacks. The data showed that 2001 actually was a down year for shark attacks.
The point is that it is easy for people to mistake the frequency of publication about a certain phenomenon for the frequency of occurrence of the phenomenon itself. Case in point, from a recent news story:
An emaciated polar bear was spotted in a Russian industrial city this week, just the latest account of polar bears wandering far from their hunting grounds to look for food.
Officials in the Russian city of Norilsk warned residents about the bear Tuesday. They added that it was the first spotted in the area in over 40 years.
I am willing to bet my entire bourbon collection that a) hungry polar bears occasionally invaded Siberian towns in previous decades and b) news of such polar bear activity from towns like Norilsk did NOT make the American news. But readers (even the author of the article) are left to believe there is a trend here because they remember seeing similar stories recently but don't remember seeing such stories earlier in their life.
My family often jokes about my obsessive behavior vis a vis Tesla and Elon Musk (on the off chance you are unaware of my thoughts, the most recent are here). My daughter texted me last night that "Wealthy millennials seem to love Elon." And that is true. My answer to her is the title of this post, "People who express opinions outside of their domain seldom have really looked into it much."
Of course, I am not in any way arguing for some sort of strong credentialism wherein people should not express opinions outside of their domain. God forbid, I would have to shut down this blog. But I am saying that just because someone is really smart and successful at A does not necessarily mean their opinion on B is worth squat. As always, as a consumer of opinions, caveat emptor should always be the watchwords.
The first time I really encountered this phenomenon (outside of obvious examples such as the political and economic opinions of Hollywood celebrities) was related to climate change. I don't see them as often today, but for a while it used to be very common for letters to circulate in support of climate change science signed by hundreds or thousands of scientists.
The list of signatures was always impressive, but when you looked into it, there was a problem: few if any of the folks who signed had spent any time really looking at the details of climate science -- they were busy happily studying subatomic particles or looking for dark energy in space. It turned out most of them had fallen for the climate alarmist marketing ploy that opposition to catastrophic man-made global warming theory was by people who were anti-science. And thus by signing the letter they weren't saying they had looked into it all and confirmed the science looked good to them, they were merely saying they supported science.
When some of them looked into the details of climate science later, they were appalled. Many have reached the same general conclusions that I have: that CO2 is certainly causing some warming, but that the magnitude of that warming -- and in particular the magnitude and direction of its knock-on effects like floods, droughts, or tornadoes -- is far from settled science.
So it is often the case that people who show strong support for ideas or people outside of their domain do so for reasons other than having used their expertise and experience to take a deep dive into the issues. Theranos is a great example from the business world. Elizabeth Holmes convinced a number of men (and they were nearly all men -- women seemed to have more immunity to her BS) who were extraordinarily successful in their own domains (George Shultz, the Murdochs, Henry Kissinger, Larry Ellison) to become passionate believers in her vision. Which is fine; it was a lovely vision. But they spent zero time testing whether she could really do it, and worse, refused to countenance any reality checks about problems Theranos was facing, because Holmes convinced them that critics were just bad-intentioned people representing nefarious interests who wanted her vision to fail.
Which now brings us to Tesla and Elon Musk. I used to love Elon like everyone else. I still think that having four or five billionaires in a space race against each other is finally the world I thought I was going to get growing up reading Heinlein. The Tesla Model S was probably one of the most revolutionary cars of the last 50 years. But he lost me when he committed outright fraud in the SolarCity-Tesla deal, and since then I have only become more skeptical about him and Tesla.
I sort of laugh when folks tell me that really smart, successful, rich people believe in Tesla. You mean like James Murdoch, who sits on the board of Tesla and who also lost his entire investment in Theranos? Or like Larry Ellison, an adviser and fan of Elizabeth Holmes, who invested $1 billion in Tesla just six months ago and has already lost 40% of it? The window on this is probably closing, but over the last 10 years, if you wanted Silicon Valley investors to throw a lot of money at you, the formula was to find a traditional bricks-and-mortar business and devise a story in which you take that industry and convert its economics to those of the networked software world (see: Uber, WeWork, Tesla, and even Theranos in some of its strategic pivots).
Or how about true millennials and Elon Musk? Name a wealthy millennial supporter of Elon Musk and Tesla and I will bet you any amount of money they have not looked at Tesla's balance sheet, its cash flow, or the details of its global demand trends. They have not thought about its dealership strategy or manufacturing strategy and the cash flow implications of these. They just like what Elon says. It sounds big and visionary. They buy into Elon's formulation that he is saving the environment and that everyone opposed to him is in a cabal with big oil (ignoring the fact that Elon routinely uses his Gulfstream to commute distances of less than 60 miles). So saying that rich millennials adore Elon is effectively saying that they want to be associated with the same things Elon says he is for -- the environment and space travel et al.
Elon Musk is Ferdinand de Lesseps. He is P.T. Barnum. He is Elizabeth Holmes. He is the Pied Piper. He is fabulous at spinning visions and making them sound science-y. But he is not Tony Stark. There is a phenomenon with Elon Musk that everyone thinks he is brilliant until they hear him speak about something about which they have domain knowledge, and then they realize he is full of sh*t. For example, no one who knows anything about transportation or physics or basic engineering thinks his Boring Company and Hyperloop make any sense at all. His ideas would have made great cover stories for Popular Mechanics in the 1970s, wowing 13-year-old boys like me with pictures of mile-long cargo blimps and flying RVs. He is like a Marvel movie that spouts science just believable-sounding enough to move the plot along but that does not stand up to any scrutiny.
All of this would be harmless if he were not running a public company. I don't really care about the rich folks who were duped by Elizabeth Holmes, but hundreds of thousands of small millennial investors who have totally bought into the Elon hype are literally putting their last dollar into Tesla, and sometimes borrowing more. Tesla shorts often laugh at these folks on Twitter, calling them "bagholders," but it is a tragedy. Unless Tesla finds a sugar-daddy sucker, and the odds of that are getting longer, I think this is going to end badly for many of these investors.
As a disclosure, I have been short Tesla via puts for a while now. If you really want to understand Elon, the best book I can recommend is The Path Between the Seas, about the building of the Panama Canal. First, it is a great book you should read no matter what. And second, Ferdinand de Lesseps is the best analog I can find for Musk.
Today I want to come back to a topic I have not covered for a while, which is what I call knowledge or certainty "laundering" via computer models. I will explain this term more in a moment, but I use it to describe the use of computer models (by scientists and economists but with strong media/government/activist collusion) to magically convert an imperfect understanding of a complex process into apparently certain results and predictions to two-decimal place precision.
The initial impetus to revisit this topic was reading a paper by Paul Pfleiderer of Stanford University on what he calls "chameleon" models (which I found referenced in a discussion of dangers in the banking system). I will excerpt this paper in a moment, and though he is talking more generically about theoretical models (whether embodied in code or not), I think a lot of his paper is relevant to this topic.
Before we dig into it, let's look at the other impetus for this post, which was my seeing a chart purporting to show how much recent wildfire burn acreage is attributable to anthropogenic climate change.
The labelling of the chart actually understates the heroic feat the authors achieved, as their conclusion requires modeling wildfire with and without anthropogenic climate change. This means that first they had to model the counterfactual of what the climate would have been like without the 30 ppm (0.003% of the atmosphere) of CO2 added over the period. Then, they had to model the counterfactual of what the wildfire burn acreage would have been under that counterfactual climate vs. what actually occurred. All while teasing out the effects of climate change from other variables like forest management and fuel reduction policy (which -- oddly enough -- does not seem to be a variable in their model). And they do all this for every year back to the mid-1980s.
Don't get me wrong -- this is a perfectly reasonable analysis to attempt, even if I believe they did it poorly and am skeptical you can get good results in any case (and even given the obvious fact that the conclusions are absolutely not testable in any way). But any critique I might have is a normal part of the scientific process. I critique, then if folks think it is valid they redo the analysis fixing the critique, and the findings might hold or be changed. The problem comes further down the food chain:
When the media, and in this case the US government, uses this analysis completely uncritically and without any error bars to pretend at a certainty -- in this case, that half of the recent wildfire damage is due to climate change -- that simply does not exist.
And when anything that supports the general theory that man-made climate change is catastrophic immediately becomes -- without challenge or further analysis -- part of the "consensus" and therefore immune from criticism.
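To make the error-bar point concrete, here is a toy sketch (my own construction with invented numbers, not the study's actual method) of why attribution-by-counterfactual is so sensitive to model uncertainty:

```python
# A toy illustration: the attributed wildfire damage is the *difference*
# between an observation and a modeled counterfactual, so the counterfactual's
# uncertainty passes straight through to the headline number.

actual_burn = 10.0          # observed burn acreage, millions of acres (invented)
counterfactual_burn = 5.0   # modeled "no climate change" acreage (invented)
model_band = 3.0            # +/- uncertainty on the counterfactual (invented)

attributed = actual_burn - counterfactual_burn
low = actual_burn - (counterfactual_burn + model_band)
high = actual_burn - (counterfactual_burn - model_band)

print(f"attributed: {attributed:.0f}M acres (range {low:.0f}M to {high:.0f}M)")
# With even a modest error band on the counterfactual, the "half of wildfire
# damage" point estimate could be anywhere from 20% to 80% of the total --
# which is exactly the error bar the uncritical reporting omits.
```

The numbers are invented, but the structure is the real issue: a difference of two uncertain quantities inherits the full uncertainty of the modeled one.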
I like to compare climate models to economic models, because economics is the one other major field of study where I think the underlying system is nearly as complex as the climate. Readers know I accept that man is causing some warming via CO2 -- but it is simply absurd to say that an entire complex system like climate is controlled by a single variable, particularly one that is 0.04% of the atmosphere. If a sugar farmer looking for a higher tariff told you that sugar production was the single control knob for the US economy, you would call BS on them in a second (sugar being just 0.015% by dollars of a tremendously complex economy).
But in fact, economists play at these same sorts of counterfactuals. It is very similar to the wildfire analysis above in that it posits a counter-factual and then asserts the difference between the modeled counterfactual and reality is due to one variable.
Last week the Council of Economic Advisors (CEA) released its congressionally commissioned study on the effects of the 2009 stimulus. The panel concluded that the stimulus had created as many as 3.6 million jobs, an odd result given the economy as a whole actually lost something like 1.5 million jobs in the same period. To reach its conclusions, the panel ran a series of complex macroeconomic models to estimate economic growth assuming the stimulus had not been passed. Their results showed employment falling by over 5 million jobs in this hypothetical scenario, an eyebrow-raising result that is impossible to verify with actual observations.
Most of us are familiar with using computer models to predict the future, but this use of complex models to write history is relatively new. Researchers have begun to use computer models for this sort of retrospective analysis because they struggle to isolate the effect of a single variable (like stimulus spending) in their observational data. Unless we are willing to, say, give stimulus to South Dakota but not North Dakota, controlled experiments are difficult in the macro-economic realm.
But the efficacy of conducting experiments within computer models, rather than with real-world observation, is open to debate. After all, anyone can mine data and tweak coefficients to create a model that accurately depicts history. One is reminded of algorithms based on skirt lengths that correlated with stock market performance, or on Washington Redskins victories that predicted past presidential election results.
But the real test of such models is to accurately predict future events, and the same complex economic models that are being used to demonstrate the supposed potency of the stimulus program perform miserably on this critical test. We only have to remember that the Obama administration originally used these same models barely a year ago to predict that unemployment would remain under 8% with the stimulus, when in reality it peaked over 10%. As it turns out, the experts' hugely imperfect understanding of our complex economy is not improved merely by coding it into a computer model. Garbage in, garbage out.
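The skirt-length and Redskins examples are easy to reproduce: mine enough random "indicators" and one of them will always fit history beautifully, then fail out of sample. A quick sketch (all data randomly generated; nothing here is real market data):

```python
import random

random.seed(0)

# 20 years of fake "market returns" plus 20 future years to test against.
history = [random.gauss(0, 1) for _ in range(20)]
future = [random.gauss(0, 1) for _ in range(20)]

def correlation(xs, ys):
    # Pearson correlation, computed by hand to keep this self-contained.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Mine 1,000 random candidate "indicators" (skirt lengths, football scores...).
candidates = [[random.gauss(0, 1) for _ in range(40)] for _ in range(1000)]

# Pick the one that best "explains" history...
best = max(candidates, key=lambda c: abs(correlation(c[:20], history)))
in_sample = correlation(best[:20], history)
out_of_sample = correlation(best[20:], future)

print(f"in-sample correlation:     {in_sample:+.2f}")
print(f"out-of-sample correlation: {out_of_sample:+.2f}")
# The mined indicator "explains" the past impressively, because we searched
# for one that did; its predictive power for the future is pure chance.
```

With a thousand random series to choose from, a strong in-sample fit is essentially guaranteed, which is why in-sample fit proves nothing.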
Remember what I said earlier: The models produce the result that there will be a lot of anthropogenic global warming in the future because they are programmed to reach this result. In the media, the models are used as a sort of scientific money laundering scheme. In money laundering, cash from illegal origins (such as smuggling narcotics) is fed into a business that then repays the money back to the criminal as a salary or consulting fee or some other type of seemingly legitimate transaction. The money he gets back is exactly the same money, but instead of just appearing out of nowhere, it now has a paper trail and appears more legitimate. The money has been laundered.
In the same way, assumptions of dubious quality or certainty that presuppose AGW beyond the bounds of anything we have seen historically are plugged into the models, and, shazam, the models say that there will be a lot of anthropogenic global warming. These dubious assumptions, which are pulled out of thin air, are laundered by being passed through these complex black boxes we call climate models, and suddenly the results are somehow scientific proof of AGW. The quality hasn't changed, but the paper trail looks better, at least in the press. The assumptions begin as guesses of dubious quality and come out laundered as "settled science."
In an earlier post, I highlighted a climate study that virtually admitted to this laundering via model by saying:
These questions cannot be answered using observations alone, as the available time series are too short and the data not accurate enough. We therefore used climate model output generated in the ESSENCE project, a collaboration of KNMI and Utrecht University that generated 17 simulations of the climate with the ECHAM5/MPI-OM model to sample the natural variability of the climate system. When compared to the available observations, the model describes the ocean temperature rise and variability well.
I wrote in response:
[Note the first and last sentences of this paragraph.] First, the authors admit there is not sufficiently extensive and accurate observational data to test their hypothesis. BUT, they then create a model, and this model is validated against that same observational data. Then the model is used to draw all kinds of conclusions about the problem being studied.
This is the clearest, simplest example of certainty laundering I have ever seen. If there is not sufficient data to draw conclusions about how a system operates, then how can there be enough data to validate a computer model which, in code, just embodies a series of hypotheses about how a system operates?
A model is no different than a hypothesis embodied in code. If I have a hypothesis that the average width of neckties in this year’s Armani collection drives stock market prices, creating a computer program that predicts stock market prices falling as ties get thinner does nothing to increase my certainty of this hypothesis (though it may be enough to get me media attention). The model is merely a software implementation of my original hypothesis. In fact, the model likely has to embody even more unproven assumptions than my hypothesis, because in addition to assuming a causal relationship, it also has to be programmed with specific values for this correlation.
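The necktie example is easy to put in code, which makes the circularity obvious: the model's coefficients are fitted to the very observations later used to "validate" it (all numbers below are invented for illustration):

```python
# The necktie "model" as code -- a deliberately silly hypothesis, to make the
# point. We "calibrate" a linear model on historical data, then "validate" it
# against the same data. A flattering fit is guaranteed by construction.

tie_widths = [9.0, 8.5, 8.0, 7.0, 6.5, 6.0]   # cm, invented
stock_index = [110, 108, 104, 96, 92, 90]      # invented index levels

# Calibrate: ordinary least-squares slope and intercept.
n = len(tie_widths)
mx = sum(tie_widths) / n
my = sum(stock_index) / n
num = sum((x - mx) * (y - my) for x, y in zip(tie_widths, stock_index))
den = sum((x - mx) ** 2 for x in tie_widths)
slope = num / den
intercept = my - slope * mx

# "Validate" against the same observations used for calibration:
predictions = [intercept + slope * x for x in tie_widths]
ss_res = sum((y - p) ** 2 for y, p in zip(stock_index, predictions))
ss_tot = sum((y - my) ** 2 for y in stock_index)
r_squared = 1 - ss_res / ss_tot

print(f"in-sample R^2 = {r_squared:.3f}")
# A high R^2 here says nothing about causation: the coefficients were chosen
# to fit these very points. The model is just the hypothesis, restated in code.
```

The "validation" can never fail, because the fitting step already minimized the error against the same observations.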
This brings me to the paper by Paul Pfleiderer of Stanford University. I don't want to overstate the congruence between his paper and my thoughts on this, but it is the first work I have seen to discuss this kind of certainty laundering (there may be a ton of literature on this but if so I am not familiar with it). His abstract begins:
In this essay I discuss how theoretical models in finance and economics are used in ways that make them “chameleons” and how chameleons devalue the intellectual currency and muddy policy debates. A model becomes a chameleon when it is built on assumptions with dubious connections to the real world but nevertheless has conclusions that are uncritically (or not critically enough) applied to understanding our economy.
The paper is long and nuanced but let me try to summarize his thinking:
My reason for introducing the notion of theoretical cherry picking is to emphasize that since a given result can almost always be supported by a theoretical model, the existence of a theoretical model that leads to a given result in and of itself tells us nothing definitive about the real world. Though this is obvious when stated baldly like this, in practice various claims are often given credence — certainly more than they deserve — simply because there are theoretical models in the literature that “back up” these claims. In other words, the results of theoretical models are given an ontological status they do not deserve. In my view this occurs because models and specifically their assumptions are not always subjected to the critical evaluation necessary to see whether and how they apply to the real world...
As discussed above one can develop theoretical models supporting all kinds of results, but many of these models will be based on dubious assumptions. This means that when we take a bookshelf model off of the bookshelf and consider applying it to the real world, we need to pass it through a filter, asking straightforward questions about the reasonableness of the assumptions and whether the model ignores or fails to capture forces that we know or have good reason to believe are important.
I know we see a lot of this in climate:
A chameleon model asserts that it has implications for policy, but when challenged about the reasonableness of its assumptions and its connection with the real world, it changes its color and retreats to being simply a theoretical (bookshelf) model that has diplomatic immunity when it comes to questioning its assumptions....
Chameleons arise and are often nurtured by the following dynamic. First a bookshelf model is constructed that involves terms and elements that seem to have some relation to the real world and assumptions that are not so unrealistic that they would be dismissed out of hand. The intention of the author, let’s call him or her “Q,” in developing the model may be to say something about the real world or the goal may simply be to explore the implications of making a certain set of assumptions. Once Q’s model and results become known, references are made to it, with statements such as “Q shows that X.” This should be taken as short-hand way of saying “Q shows that under a certain set of assumptions it follows (deductively) that X,” but some people start taking X as a plausible statement about the real world. If someone skeptical about X challenges the assumptions made by Q, some will say that a model shouldn’t be judged by the realism of its assumptions, since all models have assumptions that are unrealistic. Another rejoinder made by those supporting X as something plausibly applying to the real world might be that the truth or falsity of X is an empirical matter and until the appropriate empirical tests or analyses have been conducted and have rejected X, X must be taken seriously. In other words, X is innocent until proven guilty. Now these statements may not be made in quite the stark manner that I have made them here, but the underlying notion still prevails that because there is a model for X, because questioning the assumptions behind X is not appropriate, and because the testable implications of the model supporting X have not been empirically rejected, we must take X seriously. Q’s model (with X as a result) becomes a chameleon that avoids the real world filters.
Check it out if you are interested. I seldom trust a computer model I did not build and I NEVER trust a model I did build (because I know the flaws and assumptions and plug variables all too well).
By the way, the mention of plug variables reminds me of one of the most interesting studies I have seen on climate modeling, by Kiehl in 2007. It was so damning that I haven't seen anyone repeat it since (or at least get published doing it).
My skepticism was increased when several skeptics pointed out a problem that should have been obvious. The ten or twelve IPCC climate models all had very different climate sensitivities -- how, if they have different climate sensitivities, do they all nearly exactly model past temperatures? If each embodies a correct model of the climate, and each has a different climate sensitivity, only one (at most) should replicate observed data. But they all do. It is like someone saying she has ten clocks all showing a different time but asserting that all are correct (or worse, as the IPCC does, claiming that the average must be the right time).
The answer to this paradox came in a 2007 study by climate modeler Jeffrey Kiehl. To understand his findings, we need to understand a bit of background on aerosols. Aerosols are man-made pollutants, mainly combustion products, that are thought to have the effect of cooling the Earth's climate.
What Kiehl demonstrated was that these aerosols are likely the answer to my old question about how models with high sensitivities are able to accurately model historic temperatures. When simulating history, scientists add aerosols to their high-sensitivity models in sufficient quantities to cool them to match historic temperatures. Then, since such aerosols are much easier to eliminate as combustion products than is CO2, they assume these aerosols go away in the future, allowing their models to produce enormous amounts of future warming.
Specifically, when he looked at the climate models used by the IPCC, Kiehl found they all used very different assumptions for aerosol cooling and, most significantly, he found that each of these varying assumptions were exactly what was required to combine with that model's unique sensitivity assumptions to reproduce historical temperatures. In my terminology, aerosol cooling was the plug variable.
When I was active doing computer models for markets and economics, we used the term "plug variable." Now, I think "goal-seeking" is the hip word, but it is all the same phenomenon.
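Kiehl's compensation effect can be sketched with a toy zero-dimensional energy-balance calculation (my own illustration with invented forcing numbers, not his actual analysis): pick any sensitivity you like, back-solve the aerosol "plug" that reproduces observed warming, and every model replicates history while the forecasts diverge.

```python
# Toy version of the Kiehl compensation: for each assumed climate sensitivity,
# solve for the aerosol forcing "plug" that makes the hindcast match observed
# warming. All numbers are invented for illustration.

observed_warming = 0.8      # deg C over the historical period (illustrative)
co2_forcing_past = 1.6      # W/m^2 historical CO2 forcing (illustrative)
co2_forcing_future = 3.7    # W/m^2 future forcing, aerosols assumed gone

results = []
for sensitivity in (0.5, 1.0):  # deg C per W/m^2 -- two very different models
    # The plug variable: whatever aerosol cooling makes history come out right.
    aerosol_forcing = observed_warming / sensitivity - co2_forcing_past
    hindcast = sensitivity * (co2_forcing_past + aerosol_forcing)
    forecast = sensitivity * co2_forcing_future
    results.append((sensitivity, aerosol_forcing, hindcast, forecast))
    print(f"sensitivity {sensitivity}: aerosol plug {aerosol_forcing:+.2f} "
          f"W/m^2, hindcast {hindcast:.1f} C, forecast {forecast:.2f} C")

# Both models match the 0.8 C of observed warming exactly (by construction),
# yet the high-sensitivity model forecasts twice the future warming.
```

This is the ten-clocks paradox in miniature: matching history says nothing about which sensitivity is right, because the match was engineered.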
Postscript, an example with the partisans reversed: It strikes me that in our tribalized political culture, my having criticized models by a) climate alarmists and b) the Obama Administration might cause the point to be lost on the more defensive members of the Left side of the political spectrum. So let's discuss a hypothetical with the parties reversed. Let's say that a group of economists working for the Trump Administration came out and said that half of the 4% economic growth we were experiencing (or whatever the exact number was) was due to actions taken by the Trump Administration and the Republican Congress. I can assure you they would have a sophisticated computer model that would spit out this result -- there would be a counterfactual model of "with Hillary" that had 2% growth, compared to the 4% actual under Trump.
Would you believe this? After all, it's science. There is a model. Made by experts ("top men," as they say in Raiders of the Lost Ark). So would you buy it? NO! I sure would not. No way. For the same reasons that we shouldn't uncritically buy into any of the other model results discussed above -- they are building counterfactuals of a complex process we do not fully understand and which cannot be tested or verified in any way. Just because someone has embodied their imperfect understanding, or worse their pre-existing pet answer, into code does not make it science. But I guarantee you have nodded your head at, or even quoted, results from models that likely were not a bit better than the imaginary Trump model above.
I will let Kevin Drum explain the chart; it is worth understanding:
The authors collected every significant clinical study of drugs and dietary supplements for the treatment or prevention of cardiovascular disease between 1974 and 2012. Then they displayed them on a scatterplot.
Prior to 2000, researchers could do just about anything they wanted. All they had to do was run the study, collect the data, and then look to see if they could pull something positive out of it. And they did! Out of 22 studies, 13 showed significant benefits. That’s 59 percent of all studies. Pretty good!
Then, in 2000, the rules changed. Researchers were required before the study started to say what they were looking for. They couldn’t just mine the data afterward looking for anything that happened to be positive. They had to report the results they said they were going to report.
And guess what? Out of 21 studies, only two showed significant benefits. That's 10 percent of all studies. Ugh. And one of the studies even demonstrated harm, something that had never happened before 2000.
Reports for all-cause mortality were similar. Before 2000, 5 out of 24 trials showed reductions in mortality. After 2000, not a single study showed a reduction in mortality.
Note that these sensible rules for conducting a study do NOT exist for pretty much any study in any field that you see in the media. Peer review generally does not address it. Links to the full study in Drum's article.
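The mechanism behind the chart is simple multiple-comparisons arithmetic: if a drug does nothing and each endpoint has a 5% chance of a false positive, measuring 20 endpoints and reporting the best gives a "positive" study about 64% of the time (1 − 0.95^20). A quick simulation (my own illustration, not the paper's actual data):

```python
import random

random.seed(1)

# Simulate trials of a drug with NO real effect. Before 2000 (per the excerpt),
# a researcher could measure many endpoints and report whichever came out
# "significant"; after 2000 the endpoint had to be declared in advance.

def endpoint_is_significant():
    # Crude stand-in for a p < 0.05 test when there is no true effect.
    return random.random() < 0.05

def trial(endpoints):
    # The trial is reported "positive" if ANY measured endpoint crosses
    # the significance threshold.
    return any(endpoint_is_significant() for _ in range(endpoints))

n = 10_000
fished = sum(trial(endpoints=20) for _ in range(n)) / n        # mine 20 endpoints
preregistered = sum(trial(endpoints=1) for _ in range(n)) / n  # declare one up front

print(f"'positive' rate, 20 endpoints mined: {fished:.0%}")        # ~64%
print(f"'positive' rate, 1 preregistered:    {preregistered:.0%}")  # ~5%
```

A 59% "success" rate before the rule change and 10% after is almost exactly what you would expect if most of those pre-2000 findings were endpoint-fishing on null effects.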
So it turns out that the solar roads I was sure would not work have actually now been built, and the results are about as bad as I expected.
The first solar roadway to be installed is in Tourouvre-au-Perche, France. It has a maximum power output of 420 kW, covers 2,800 square metres, and cost €5 million to install. This implies a cost of €11,905 per installed kW.
While the road was supposed to generate considerably more, some recently released data indicates an actual output of about 150,000 kWh/yr. For an idea of how much this is, the average UK k彩平台登陆 uses a few thousand kWh per year.
The road's capacity factor – which measures the efficiency of the technology by dividing its average power output by its potential maximum power output – is just 4 percent.
In contrast, a large conventional solar farm, which features rows of solar panels carefully angled towards the sun, has a maximum power output of 300,000 kW and a capacity factor of 14 percent. And at a cost of €360 million, or €1,200 per installed kW, it costs one-tenth as much per kW as our solar roadway.
There is much more. I am embarrassed to say that when I slammed solar roads all those years ago, I actually was missing an important problem with them:
Unable to benefit from air circulation, it's inevitable these panels will heat up more than a rooftop solar panel too.
For every 1 degree Celsius over optimum temperature, you lose a fraction of a percent of power output.
As a result a significant drop in performance for a solar road, compared to rooftop solar panels, has to be expected. The question is by how much and what is the economic cost?
I will add this to the list, thanks.
When I write stuff like this, I get the same kind of mindless feedback that I get when I point out operational issues at Tesla, i.e., "you are in the pay of the Koch brothers" or "you have no vision." Well, I am actually putting solar on my roof, and will get (hopefully) 45,000 kWh per year, which is about a third of the energy they get from this road but installed for a bit over 1% of the cost of the road. And the panels are all ideally angled and placed, they are up in the air with absolutely no shade on them at any time of the day, and they don't have any trucks driving over them.
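For what it's worth, the quoted article's arithmetic checks out; here is the calculation (all figures taken from the excerpt above):

```python
# Checking the arithmetic in the solar road excerpt.

cost_eur = 5_000_000   # installation cost of the Tourouvre-au-Perche road
peak_kw = 420          # maximum power output
annual_kwh = 150_000   # actual annual generation per the released data

cost_per_kw = cost_eur / peak_kw
capacity_factor = annual_kwh / (peak_kw * 365 * 24)

print(f"cost per installed kW: EUR {cost_per_kw:,.0f}")  # ~11,905
print(f"capacity factor:       {capacity_factor:.1%}")    # ~4.1%

# The conventional solar farm in the comparison:
farm_cost_per_kw = 360_000_000 / 300_000
print(f"farm cost per kW:      EUR {farm_cost_per_kw:,.0f}")  # 1,200
# One-tenth the road's cost per installed kW, at 3.5x the capacity factor.
```

So per kWh actually delivered, the road is on the order of thirty times more expensive than a conventional farm, before even considering maintenance.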
There is a cottage industry in creating maps that cause one to look at the Earth in a different way. The classic of this genre was the one with the southern hemisphere on top.
These usually do not do much for me, but I have to admit I really liked this map -- what the map of the world would look like if we lived in the oceans (kind of the BLUE HADES view of the world, if you read Charles Stross).
Most journalists become journalism majors because they had vowed after high school never, ever to take a math or science class again. At Princeton we had distribution requirements and you should have seen the squealing from English and History majors at having to take one science course (I don't remember ever hearing the reverse from engineering majors).
It should not surprise you, then, that most media is awful at science journalism. I held off commenting on this one for three days, figuring it was a typo and they would quickly fix it, but apparently not. It seems the art of sanity-checking numbers has been lost:
"The space elevator is the Holy Grail of space exploration," says Michio Kaku, a professor of physics at City College of New York and a noted futurist. "Imagine pushing the 'up' button of an elevator and taking a ride into the heavens. It could open up space to the average person."
Kaku isn’t exaggerating. A space elevator would be the single largest engineering project ever undertaken and could cost close to $10 billion to build. But it could reduce the cost of putting things into orbit from roughly $3,500 per pound today to as little as $25 per pound, says Peter Swan, president of International Space Elevator Consortium (ISEC), based in Santa Ana, California.
LOL. The planning alone for such a structure would cost more than $10 billion. There is no way a space elevator could be built for just one-tenth the price of a high-speed rail line from LA to San Francisco. Even at $10 trillion, or three orders of magnitude more, I would nod my head and think that was a pretty inexpensive price.
There was an interesting article recently about a scientific paper -- mostly a presentation of math and statistical tools -- that was essentially suppressed, apparently because it does not fit with current social justice talking points.
In the highly controversial area of human intelligence, the ‘Greater Male Variability Hypothesis’ (GMVH) asserts that there are more idiots and more geniuses among men than among women. Darwin’s research on evolution in the nineteenth century found that, although there are many exceptions for specific traits and species, there is generally more variability in males than in females of the same species throughout the animal kingdom.
Evidence for this hypothesis is fairly robust and has been reported in species ranging from adders and sockeye salmon to wasps and orangutans, as well as humans. Multiple studies have found that boys and men are over-represented at both the high and low ends of the distributions in categories ranging from birth weight and brain structures and 60-meter dash times to reading and mathematics test scores. There are significantly more men than women, for example, among Nobel laureates, music composers, and chess champions—and also among homeless people, suicide victims, and federal prison inmates.
I am not an expert on this and don't really take a position on whether this is truly genetics or nurture, but it does tend to explain a lot of phenomena, like the distribution of boys vs. girls math SAT scores.
But some feminists and SJWs are deeply, deeply invested in the hypothesis that differences in representation of men vs. women in the top tiers of anything are entirely due to misogyny and patriarchy and other bad cultural and societal things (I am not sure how they explain the disproportionate numbers of men at the bottom of distributions). A partial genetic explanation is not going to make them happy. So:
But, that same day, the Mathematical Intelligencer’s editor-in-chief Marjorie Senechal notified us that, with “deep regret,” she was rescinding her previous acceptance of our paper. “Several colleagues,” she wrote, had warned her that publication would provoke “extremely strong reactions” and there existed a “very real possibility that the right-wing media may pick this up and hype it internationally.” For the second time in a single day I was left flabbergasted. Working mathematicians are usually thrilled if even five people in the world read our latest article. Now some progressive faction was worried that a fairly straightforward logical argument about male variability might encourage the conservative press to actually read and cite a science paper?
In my 40 years of publishing research papers I had never heard of the rejection of an already-accepted paper. And so I emailed Professor Senechal. She replied that she had received no criticisms on scientific grounds and that her decision to rescind was entirely about the reaction she feared our paper would elicit. By way of further explanation, Senechal even compared our paper to the Confederate statues that had recently been removed from the courthouse lawn in Lexington, Kentucky.
The interesting part to me is that I have heard a parallel theory of greater variability among Africans, though from a different cause. Rather than being a selectivity phenomenon, it is argued that because Africa is likely the original birthplace of humanity, every other group of people was started by a relatively small founder population, made up of just a portion of the range of African genetics. Thus on a global basis Africans should be disproportionately represented in the extremes -- tallest and shortest, smartest and dimmest, fastest and slowest, etc.
Interestingly (and I admit I am not active in this field so I may be missing something here) the general reaction to this seems to be almost celebratory -- look at all this genetic diversity, to go along with the cultural diversity, in Africa!
This is a good discussion of why many psychology studies don't replicate, though most of it is applicable to almost any other field of research. The author has four rules to keep in mind, and they are all good. He looks at a number of studies that do not replicate to demonstrate how the rules work. A sample:
This study is basically a p-hacking manual. They’re not even trying to hide it, instead describing in detail how, when a hypothesis failed to yield a p-value below 0.05, they tried more and more things until something publishable popped out by chance.
Category: Science |
Comments Off on How Do You Know When That Psychology Study Won't Replicate?
This profile of Gerta Keller is worth a read -- she has had quite an interesting life. But I will say I found it particularly fascinating to compare details here to the climate debate. Here are a few example quotes that will seem very familiar to those who have watched the back and forth over global warming, particularly from the skeptic side:
Keller’s resistance has put her at the core of one of the most rancorous and longest-running controversies in science. “It’s like the Thirty Years’ War,” says Kirk Johnson, the director of the Smithsonian’s National Museum of Natural History. Impacters’ case-closed confidence belies decades of vicious infighting, with the two sides trading accusations of slander, sabotage, threats, discrimination, spurious data, and attempts to torpedo careers. “I’ve never come across anything that’s been so acrimonious,” Kerr says. “I’m almost speechless because of it.” Keller keeps a running list of insults that other scientists have hurled at her, either behind her back or to her face. She says she’s been called a “bitch” and “the most dangerous woman in the world,” who “should be stoned and burned at the stake.”
Nobel prize winner Alvarez sounds a bit like Michael Mann:
Ad hominem attacks had by then long characterized the mass-extinction controversy, which came to be known as the “dinosaur wars.” Alvarez had set the tone. His numerous scientific exploits—winning the Nobel Prize in Physics, flying alongside the crew that bombed Hiroshima, “X-raying” Egypt’s pyramids in search of secret chambers—had earned him renown far beyond academia, and he had wielded his star power to mock, malign, and discredit opponents who dared to contradict him. In The New York Times, Alvarez branded one skeptic “not a very good scientist,” chided dissenters for “publishing scientific nonsense,” suggested ignoring another scientist’s work because of his “general incompetence,” and wrote off the entire discipline of paleontology when specialists protested that the fossil record contradicted his theory. “I don’t like to say bad things about paleontologists, but they’re really not very good scientists,” . “They’re more like stamp collectors.”
This sounds familiar, dueling battles between models and observations:
That the dinosaur wars drew in scientists from multiple disciplines only added to the bad blood. Paleontologists resented arriviste physicists, like Alvarez, for ignoring their data; physicists figured the stamp collectors were just bitter because they hadn’t cracked the mystery themselves. Differing methods and standards of proof failed to translate across fields. Where the physicists trusted models, for example, geologists demanded observations from fieldwork.
There is pal review:
he said impacters had warned some of her collaborators not to work with her, even contacting their supervisors in order to pressure them to sever ties. (Thierry Adatte and Wolfgang Stinnesbeck, who have worked with Keller for years, confirmed this.) Keller listed numerous research papers whose early drafts had been rejected, she felt, because pro-impact peer reviewers “just come out and regurgitate their hatred.”
And charges that key data is not being shared, to avoid it falling into the hands of skeptics:
She suspected repeated attempts to deny her access to valuable samples extracted from the Chicxulub crater, such as in 2002, when the journal Nature reported that Jan Smit had seized control of a crucial piece of rock—drilled at great expense—and purposefully delayed its distribution to other scientists, a claim Smit called "ridiculous." (Keller told me the sample went missing and was eventually found in Smit's duffel bag; Smit says this is "pure fantasy.")
Leading to a familiar discussion of scientific consensus:
Keller and others accuse the impacters of declaring the debate settled before alternate ideas can get a fair hearing. Though geologists had bickered for 60 years before reaching a consensus on continental drift, Alvarez declared the extinction debate over and done within two years. "That the asteroid hit, and that the impact triggered the extinction of much of the life of the sea … are no longer debatable points," he said ...
All the squabbling raises a question: How will the public know when scientists have determined which scenario is right? It is tempting, but unreliable, to trust what appears to be the majority opinion. Forty-one co-authors signed on to a 2010 Science paper asserting that Chicxulub was, after all the evidence had been evaluated, conclusively to blame for the dinosaurs’ death. Case closed, again. Although some might consider this proof of consensus, dozens of geologists, paleontologists, and biologists wrote in contesting that conclusion. Science is not done by vote.
I was randomly browsing my blog history when I encountered a post from over 11 years ago, back when it was necessary to spend 1,800+ words explaining why the steel in the Twin Towers could still fail even though it did not actually melt.
Of late, Rosie [O'Donnell] has joined the "truthers," using her show to flog the notion that the WTC was brought down in a government-planned controlled demolition....
Rosie, as others have, made a point of observing that jet fuel does not burn hot enough to melt steel, and therefore the fire in the main towers could not have caused the structure to yield and collapse. This is absurd. It is a kindergartener's level of science. It is ignorant of a reality that anyone who has had even one course in structural engineering or metallurgy will understand. The argument made that "other buildings have burned and not collapsed" is only marginally more sophisticated, sort of equivalent to saying that seeing an iceberg melt proves global warming. ...
Here is the reality that most 19-year-old engineering students understand: Steel loses its strength rapidly with temperature, losing nearly all of its structural strength by 1000 degrees F, well below its melting point but also well below the temperature of burning jet fuel.
And on and on from there. Seriously, I know it's hard to believe this was even necessary, but it was a serious charge by some of our intellectual betters in the entertainment industry. Actually, it brings me a certain comfort to encounter this again -- maybe our public discourse is not really getting substantially stupider. Maybe it has always been this way.
Look, I am not mocking you if you don't know the material properties of steel and how they change with temperature. Odds are, in your job, you do not need to know anything about it. What bothers me are the people who know nothing about these topics yet speak with such certainty. In some ways it seems to go past Dunning-Kruger: people making these absolute pronouncements not only don't know anything about the topic, but many have actively avoided ever finding themselves in a classroom where the topic (or more accurately the mathematical and scientific foundations of the topic) might have been discussed.
It's not like I am totally immune to this. Here are a few topics that I may have blogged about a few times years and years ago but now I won't touch because I know I don't understand them:
Central banking and monetary policy
Almost anything having to do with chemistry, including ocean acidification (or more accurately, reduced ocean alkalinity). I even had an A in Organic Chemistry but it did not stick at all.
Literary criticism, except to say what I liked and didn't like
Anything about certain performance-based crafts, like singing and acting, except to say which performances I did and did not enjoy
Ice hockey, horse racing, and soccer (which doesn't mean I don't enjoy watching them)
80% of what Tyler Cowen writes about
Anything about music post-1985
Anything on cooking or food
Absolutely anything on wine
To the last point, I got invited to a wine tasting the other day. Everyone was saying they tasted chicory or boysenberry or a hint of diatomaceous earth or whatever, and I tasted ... wine. Honestly, I felt like a blind person sitting in on a discussion of the color wheel. But I resist the temptation to scream that it is all just the emperor's new clothes -- I am sure the people around me can honestly taste differences that I can't. I know I can taste differences in bourbon they cannot. Good vodkas, on the other hand, are a different matter. Some day I am going to do a blind vodka tasting for my vodka-snob friends and see if they really can taste the difference.
Postscript: I used to love the James Burke show Connections. He would start with something like the Defenestration of Prague and show a series of connections between it and, say, the invention of the telephone. Perhaps you can see why I found it entertaining, since I began this post with the structural strength of steel at different temperatures and ended it with whether good vodkas really taste different.
There are a lot of James Burke TV episodes on YouTube and I recommend them all. Connections is recommended of course, but I actually think his best series was season 1 of The Day the Universe Changed.
Y'all know I have a certain distaste for Elon Musk's rent-seeking, but as I have written before, there is nothing much cooler than several billionaires competing at space travel. The video below is of today's apparently successful Falcon Heavy launch, but while the launch is as cool as always, what was really new and beautiful was the nearly simultaneous side-by-side landing of the two booster rockets. The booster landing starts around the 36:30 mark.
I will say that Musk's promotional abilities do remind me sometimes of DD Harriman.
Is there some sort of metric for the complexity of a boundary like this one for PA District 7, among those invalidated in Pennsylvania (in a decision the Supreme Court today refused to review)?
I keep wondering whether there is an objective standard we can set, rather than just the sort of "I know it when I see it" one that everyone seems to use.
One way I can imagine testing is to do it Monte Carlo style: draw a series of lines from one random point in the shape to another random point in the shape and calculate what percentage of the lines cross the boundary at least once. That metric would be zero for a circle or a rectangle, but very high for this shape.
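A minimal sketch of that Monte Carlo test in Python, assuming the boundary is given as a polygon vertex list (the square and U-shape below are toy shapes I made up, not actual district data):

```python
import random

def point_in_polygon(pt, poly):
    """Even-odd ray-casting test: is pt inside the polygon?"""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):  # this edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def random_interior_point(poly, rng):
    """Rejection-sample a point inside the polygon from its bounding box."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    while True:
        pt = (rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)))
        if point_in_polygon(pt, poly):
            return pt

def boundary_crossing_metric(poly, trials=2000, steps=50, seed=1):
    """Fraction of random interior-to-interior chords that leave the shape."""
    rng = random.Random(seed)
    exits = 0
    for _ in range(trials):
        ax, ay = random_interior_point(poly, rng)
        bx, by = random_interior_point(poly, rng)
        # If any sampled point along the chord falls outside,
        # the chord crossed the boundary at least once.
        if any(not point_in_polygon((ax + (bx - ax) * t / steps,
                                     ay + (by - ay) * t / steps), poly)
               for t in range(1, steps)):
            exits += 1
    return exits / trials

square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # convex: metric should be 0
u_shape = [(0, 0), (3, 0), (3, 3), (2, 3), (2, 1), (1, 1), (1, 3), (0, 3)]
```

On the square every chord stays inside, so the metric is zero; on the U-shape a sizable fraction of chords cut across the notch. For a real district you would feed in the published boundary coordinates.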
This story has been shared around a lot, but it is a good one: dolphins following strategies of deferred gratification that would challenge some humans I know.
At the Institute for Marine Mammal Studies in Mississippi, Kelly the dolphin has built up quite a reputation. All the dolphins at the institute are trained to hold onto any litter that falls into their pools until they see a trainer, when they can trade the litter for fish. In this way, the dolphins help to keep their pools clean.
Kelly has taken this task one step further. When people drop paper into the water she hides it under a rock at the bottom of the pool. The next time a trainer passes, she goes down to the rock and tears off a piece of paper to give to the trainer. After a fish reward, she goes back down, tears off another piece of paper, gets another fish, and so on. This behaviour is interesting because it shows that Kelly has a sense of the future and delays gratification. She has realised that a big piece of paper gets the same reward as a small piece and so delivers only small pieces to keep the extra food coming. She has, in effect, trained the humans.
Her cunning has not stopped there. One day, when a gull flew into her pool, she grabbed it, waited for the trainers and then gave it to them. It was a large bird and so the trainers gave her lots of fish. This seemed to give Kelly a new idea. The next time she was fed, instead of eating the last fish, she took it to the bottom of the pool and hid it under the rock where she had been hiding the paper. When no trainers were present, she brought the fish to the surface and used it to lure the gulls, which she would catch to get even more fish. After mastering this lucrative strategy, she taught her calf, who taught other calves, and so gull-baiting has become a hot game among the dolphins.
I like reading Zero Hedge, though their laudable cynicism about government and financial markets sometimes edges into conspiracy theory.
Anyway, I wanted to highlight something in a post there today about BLS data. Various writers at the site have claimed for years that government economic data is being manipulated. I am not sure I buy it -- I distrust government a lot but am not sure their employees could sustain such a fraud over months and years. And besides, once you manipulate data one time to juice some metric, you have to keep doing it or the metric just reverses the next month. Corporations that play special quarter-end inventory games to increase reported sales learn this very quickly. Where there are apparent errors, I am much more willing to assume incompetence than conspiracy.
The example this week is from the BLS payrolls data:
Another way of showing the July to August data:
Goods-Producing Weekly Earnings declined -0.8% from $1,118.68 to $1,109.92
Private Service-Providing Weekly Earnings declined -0.1% from $868.80 to $868.18
And yet, Total Private Weekly Earnings rose 0.2% from $907.82 to $909.19
What the above shows is, in a word, impossible: one can not have the two subcomponents of a sum-total decline, while the total increases. The math does not work.
Certainly this is an interesting catch and if I were producing the data I would take these observations as a reason to check my work. But the author is wrong to say that this is "impossible". The reason is that these are not, as he says, two sub-components of a sum. They are two sub-components of a weighted average. Total private average weekly earnings is going to be the goods producing weekly average times number of goods producing hours plus service producing weekly average times the number of service producing hours all over the total combined hours.
From this I hope you can see that even if both sub-averages go down, the total average can go up if the weights change. Specifically, the total average can still rise if there is a mix shift from service-providing to goods-producing hours, since the average weekly wages of the latter are much higher than the former. I will confess it would have to be a pretty big jump in mix -- the percent of goods-producing hours would have to rise from 15.6% to almost 17%, which strikes me as a very large jump for one month. So I am not claiming this is what happened, but people miss mix changes all the time. I had to explain it constantly back in my corporate days. Another example here.
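To make the mix-shift arithmetic concrete, here is a toy calculation in Python using the quoted sub-averages (the hours shares are my invented illustration, not actual BLS weights):

```python
# Both sub-averages FALL from July to August (figures from the quoted post):
goods_jul, goods_aug = 1118.68, 1109.92  # goods-producing weekly earnings
svc_jul, svc_aug = 868.80, 868.18        # service-providing weekly earnings

def total_avg(goods_share, goods, svc):
    """Weighted average of the two sectors by share of hours worked."""
    return goods_share * goods + (1 - goods_share) * svc

# Invented weights: goods hours shift from 15.6% to 17.0% of total hours.
july = total_avg(0.156, goods_jul, svc_jul)
august = total_avg(0.170, goods_aug, svc_aug)
# The total RISES even though both components fell, because the mix
# shifted toward the higher-paid goods-producing sector.
```

With these weights the total moves from about $907.78 to about $909.28, close to the figures in the quoted post, despite both components declining.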
Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic GHG concentrations. It is likely that there has been significant anthropogenic warming over the past 50 years averaged over each continent (except Antarctica)
I want to come back to this in a second, but first here is a story quoted from Tetlock and Gardner's Superforecasting:
In March 1951 National Intelligence Estimate (NIE) 29-51 was published. "Although it is impossible to determine which course of action the Kremlin is likely to adopt," the report concluded, "we believe that the extent of [Eastern European] military and propaganda preparations indicate that an attack on Yugoslavia in 1951 should be considered a serious possibility." ...But a few days later, [Sherman] Kent was chatting with a senior State Department official who casually asked, "By the way, what did you people mean by the expression 'serious possibility'? What kind of odds did you have in mind?" Kent said he was pessimistic. He felt the odds were about 65 to 35 in favor of an attack. The official was startled. He and his colleagues had taken "serious possibility" to mean much lower odds.
Disturbed, Kent went back to his team. They had all agreed to use "serious possibility" in the NIE so Kent asked each person, in turn, what he thought it meant. One analyst said it meant odds of about 80 to 20, or four times more likely than not that there would be an invasion. Another thought it meant odds of 20 to 80 - exactly the opposite. Other answers were scattered between these extremes. Kent was floored.
Let's go back to the IPCC summary conclusion, which is quoted and used all over the place (no one in the media ever actually digs into the charts and analysis, they just stop at this quote). A few thoughts:
This kind of conclusion is typical of team process and perhaps is a reason that large teams shouldn't do scientific studies. We wouldn't have aspirin if 500 people all had to agree on a recommendation to allow it.
Climate alarmists often claim "consensus". Part of the way they get consensus is by excluding anyone who disagrees with them from the IPCC process and publication. But even within the remaining core, scientists have vast differences in how they evaluate the data. Consensus only exists because the conclusions use weasel words with uncertain meaning like "most" and "significant" (rather than a percentage) and "very likely" (rather than a probability).
Is "most" 51% or 95%? The difference between these two is almost a doubling of the implied temperature sensitivity to CO2 -- close to the magnitude of difference between lukewarmer and IPCC estimates. Many skeptics (including myself) think past warming due to man might be 0.3-0.4C which is very nearly encompassed by "most".
It may be that this uncertainty is treated as a feature, not a bug, by activists, who can take a word scientists meant to mean 51% and portray it as meaning nearly 100%.
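As a quick illustration of why the word choice matters (the 0.6 C figure is my invented round number for observed warming, not an IPCC value):

```python
observed = 0.6          # hypothetical observed warming since mid-century, deg C
low = 0.51 * observed   # "most" read as barely half  -> ~0.31 C attributed to man
high = 0.95 * observed  # "most" read as nearly all   -> ~0.57 C attributed to man
ratio = high / low      # ~1.86: nearly double the implied human contribution
```

The ratio does not depend on the observed figure at all; any reading of "most" between 51% and 95% spans almost a factor of two in implied attribution, and hence in implied sensitivity.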
For an example of this sort of thing taken to an extreme, arguably corrupt level, consider the original 97% global warming consensus survey which asked 77 scientists hand-selected from a pool of over 10,000 working on climate-related topics two questions. Answering yes to the two questions put you in the 97%.
That anything-but-scientific survey asked two questions. The first: “When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?” Few would be expected to dispute this…the planet began thawing out of the “Little Ice Age” in the middle 19th century, predating the Industrial Revolution. (That was the coldest period since the last real Ice Age ended roughly 10,000 years ago.)
The second question asked: “Do you think human activity is a significant contributing factor in changing mean global temperatures?” So what constitutes “significant”? Does “changing” include both cooling and warming… and for both “better” and “worse”? And which contributions…does this include land use changes, such as agriculture and deforestation?
Good Lord, I am a hated skeptic frequently derided as a denier and I would answer both "yes" and be in the 97% consensus. So would most all of the prominent science-based skeptics you have ever heard of.
For many of you, this will be a blinding glimpse of the obvious, but I see so many dumb approaches to cooling cocktails being pushed that I had to try to clear a few things up.
First, a bit of physics. Ice cubes cool your drink in two ways. First and perhaps most obviously, the ice is colder than your drink. Put any object that is 32 degrees in a liquid that is 72 degrees and the warmer liquid will transfer heat to the cooler object. The object you dropped in will warm and the liquid will cool and their temperatures will tend to equilibrate. The exact amount that the liquid will cool depends on their relative masses, the heat carrying capacity of each material, and the difference in their temperatures.
However, for all but the most unusual substances, this cooling effect is minor compared with the second effect: the phase change of the ice. Phase changes in water consume and liberate a lot of heat. The heat absorbed by water going from 32-degree ice to 33-degree water is far more than the heat absorbed by that now-33-degree water warming all the way to room temperature.
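The exact amounts are easy to check with standard constants for water (the 10-gram cube and 22 C room temperature are my assumptions):

```python
LATENT_FUSION = 334.0  # J per gram to melt ice at 0 C (standard value)
SPECIFIC_HEAT = 4.18   # J per gram per degree C for liquid water

grams = 10.0           # one small ice cube
melt_heat = grams * LATENT_FUSION                 # 0 C ice -> 0 C water
warm_heat = grams * SPECIFIC_HEAT * (22.0 - 0.0)  # 0 C water -> 22 C room temp
# Melting absorbs roughly 3.6x more heat than all the subsequent
# warming to room temperature combined.
```

So the phase change does most of the work: about 3,340 J to melt the cube versus about 920 J for the meltwater to reach room temperature.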
Your drink needs to be constantly chilled, even if it starts cold, because most glasses are not very good insulators. Pick up the glass -- is the glass cold from the drink? If so, this means the glass is a bad insulator. If it were a good insulator, the glass would be room temperature on the outside even if the drink were cold. The glass will absorb some heat from the air, but air is not really a great conductor of heat unless it is moving. But when you hold the glass in your hand, you are making a really good contact between your drink and an organic body that is essentially circulating near-100 degree fluid around it. Your body is pumping heat into your cocktail.
Given this, let's analyze two common approaches to supposedly cooling cocktails without excessive dilution:
Cold rocks. You put these things in the freezer and put them in your drink to keep it cold. Well, this certainly will not dilute the drink, but it also will not keep it very cold for long. Remember, the equilibration of temperatures between the drink and the object in it is not the main source of heat absorption, it is the phase change and the rocks are not going to change phase in your drink. Perhaps if you cooled the rocks in liquid nitrogen? I don't know.
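A rough comparison shows why the rocks fall short (the 20-gram masses and the soapstone heat capacity of roughly 1 J per gram per degree C are my approximations):

```python
# Heat a 20 g frozen soapstone cube can absorb warming from -18 C to 5 C
# (sensible heat only -- the stone never changes phase):
stone_heat = 20.0 * 1.0 * (5.0 - (-18.0))  # ~460 J

# Heat a 20 g ice cube absorbs just by melting, before any warming:
ice_heat = 20.0 * 334.0                    # ~6680 J, latent heat of fusion
# The phase change buys roughly 15x the cooling of the chilled rock.
```

That order-of-magnitude gap is the whole story: no dilution, but not much chilling either.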
Large round ice balls. There is nothing that is more attractive in my cocktail than a perfect round ice ball. A restaurant here in town called the Gladly has a way of making these beautiful round flaw-free ice balls that look like they are Steuben glass. The theory is that with a smaller surface to volume ratio, the ice ball will melt slower. Which is probably true, but all this means is that the heat transfer is slower and the cooling is less. But again, the physics should be roughly the same -- it is going to cool mostly in proportion to how much it melts. If it melts less, it cools less. I have a sneaking suspicion that bars have bought into this ice ball thing to mask tiny cocktails -- I have been to several bars which have come up with ice balls or cylinders that are maybe 1 mm smaller in diameter than the glass so that a large glass holds about an ounce of cocktail.
I will not claim to be an expert but I like my bourbon drinks cold and have adopted this strategy -- perhaps you have others.
Keep the bottles chilled. I keep vodka in the freezer and bourbon and a few key mixers in the refrigerator. It is much easier to keep something cool than to cool it the first time, and this is a good dilution-free approach to the initial cooling. I don't know if this sort of storage is problematic for the liquor -- I have never found any issues.
Keep your drinking glass in the freezer. Again, it will warm in your hand but an initially warm glass is going to pump heat into whatever you pour into it.
Use a special glass. I have gone through two generations on this. My first generation was to use a double wall glass with an air gap. This works well and you can find many choices on Amazon. Then my wife found some small glasses at Tuesday Morning that were double wall but have water in the gap. You put them in the freezer and not only does the glass get cold but the water in the middle freezes. Now I can get some phase change cooling in my cocktail without dilution. You have to get used to holding a really cold glass but in Phoenix we have no complaints about such things.
Things I don't know but might work: I can imagine you could design encapsulated ice cubes, such as water in a glass sphere. Don't know if anyone makes these. There are similar products with gel in them that freezes, and double wall glasses with gel. I do not know if the phase change in the gel is better or worse for heat absorption than phase change of water. I have never found those cold packs made of gel as satisfactory as an ice pack, but that may be just a function of size. Anyone know?
Update: I tried to track down the glasses, though since we bought them at Tuesday Morning their provenance is hard to trace. They are small, but if you are sipping straight bourbon or scotch this is way more than enough.
Postscript: I was drinking Old Fashioneds for a while but switched to a straight mix of bourbon and Cointreau. Apparently there is no name for this cocktail that I can find, though it's a bit like a bourbon sidecar without the lemon juice. For all your cocktails, I would seriously consider getting Luxardo cherries -- they are amazing, and nothing like the crappy bright-red maraschino cherries you see sold in grocery stores.
Many of the folks who participated in the science march this weekend seem to have a definition of science that involves a lot of appeals to authority and creation of heretics. Unfortunately, the video below relies on the old-fashioned cis-gendered white male definition of science, which involves using theory to establish hypotheses that are confirmed or denied through observation. In this dated definition, there is no such thing as heresy in science.
Let me tell one of my favorite stories about scientific consensus.
Perhaps the most important experiment of the last 150 years was the Michelson-Morley experiment. What is this? Think of bullets fired from a moving airplane. From the perspective of someone standing on the ground, bullets will initially travel much faster when fired forward rather than backward, as the velocity of the plane is added to (or subtracted from) the velocity with which they leave the gun. Everyone, and I mean virtually everyone in the scientific community (WAY more than 97%), assumed the same happened with light. M&M's hypothesis in their experiment was that light "fired" in one direction will travel at a different speed than light fired at a 90-degree angle to it, due to the Earth's movement through a universe filled with some sort of aether (yet another in a long line of imponderable fluids proposed to explain various physical phenomena). They found no such difference -- the speed of light was identical in every direction. M&M has been called the most important negative result in the history of science. Einstein and special relativity explained the result a few years later.
While we are on the topic, I want to mention something that always makes me crazy when I see popular articles about Einstein. You will frequently see stories about Einstein being turned down for a promotion at the patent office or turned down for a teaching job, or about his bad grades. The point of these stories is always something like, "ha ha, look how stupid these other folks were to give bad grades to the greatest mind of the 20th century." But Einstein was never a great mathematician. One always hears that relativity involves all this crazy math, and that is true for the later general relativity, but one can derive the basic equations of special relativity using nothing more than algebra and the Pythagorean theorem. Seriously, I had to do it on a test when I was 17; it is not that hard. Perhaps I will show it in a post one day if I am really bored. Later, better mathematicians wrote papers cleaning up the math of special relativity and making it more robust, and Einstein himself had to get a LOT of help with the non-Euclidean geometry involved in general relativity.
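In fact, the standard light-clock argument fits in a few lines (this is the textbook sketch, not necessarily the exact derivation from that long-ago test; the notation is mine):

```latex
% A "light clock" ticks when a light pulse bounces between two mirrors
% a distance L apart.  At rest, one full tick takes \Delta t_0 = 2L/c.
% Watched from a frame in which the clock moves sideways at speed v,
% the pulse traces the hypotenuse of a right triangle on each leg of
% its round trip, so by the Pythagorean theorem:
\left(\frac{c\,\Delta t}{2}\right)^{2} = L^{2} + \left(\frac{v\,\Delta t}{2}\right)^{2}
% Substituting L = c\,\Delta t_0 / 2 and solving for \Delta t gives
% time dilation with nothing fancier than algebra:
\Delta t = \frac{\Delta t_0}{\sqrt{1 - v^{2}/c^{2}}}
```

The only physical input is the M&M result that the pulse travels at c in both frames; the rest is high-school algebra.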
I believe (and this is a personal conclusion from reading a lot about him and not necessarily a widely held belief) that a lot of Einstein's greatness came from the fact that he had the mind of a rebel. He was willing to consider things the science establishment simply would not consider. It is STILL hard, even a hundred years later, for many of us laymen to accept that time is somehow non-absolute, that it changes depending on one's frame of reference -- so imagine how hard it was for someone in Einstein's time. In the 19th century, the world of physics had become split into two worlds that folks had come to think of as incompatible and separate -- the world of physical objects governed by Newtonian physics, and the world of light and waves governed by Maxwell's equations. Maxwell's equations implied light always had a fixed speed. Everyone assumed this had to be fixed vs. some frame of reference. The assumption of an aether or fixed point of reference against which light's speed was fixed was the 19th century solution for uniting these two worlds, but M&M demolished this. It seemed that light had to be a fixed speed in every direction and every frame of reference. Eek! Einstein asked himself how to explain this result, and thereby reunite Newtonian and wave physics, and he concluded the only way to do so was if time was variable, so he ran with that. That is not an act of math; that is an act of a flexible, rebellious mind. And flexible, rebellious minds do not do very well in schools and patent office bureaucracies.
Apparently a chunk of what looks like manufactured aluminum was dug up years ago in Romania and has been dated at anywhere from 400 to 250,000 years old. By this dating -- given the technology required to make aluminum -- it would be unlikely to be man-made.
So of course everyone is focusing on the question of whether it is an alien artifact. Which is the wrong question. A rational person should be asking, "what is it about this particular metallurgy or the way in which it was buried that is fooling our tests into thinking that a relatively new object is actually hundreds of thousands of years old?" I would need to see folks struggle unsuccessfully with this question for quite a while before I would ever use the word "alien." I am particularly suspicious of tests that have an error bar running between 400 years and 250,000 years. That kind of error range is really close to saying "we have no idea."
Postscript: The article hypothesizes that it looks like an axe head. Right. Aliens find some way to fly across light-years, defying much of what we understand about physics, and then walk out of their unimaginably advanced spacecraft carrying an axe to chop some wood, when the head immediately goes flying off the handle and has to be left behind as trash.
If I had to pick one topic or way of thinking that engineers and scientists have developed but other folks are often entirely unfamiliar with, I might pick the related ideas of error, uncertainty, and significance. A good science or engineering education will spend a lot of time on assessing the error bars for any measurement, understanding how those errors propagate through a calculation, and determining which digits of an answer are significant and which ones are, as the British might say, just wanking.
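To make that concrete, here is a minimal sketch of the standard propagation-of-error calculation for a simple quotient, with made-up measurement values (the quadrature rule below assumes the two errors are independent):

```python
import math

# Made-up illustration values: a mass and a volume, each with an uncertainty.
m, dm = 10.0, 0.1      # mass in grams, +/- 0.1 g
v, dv = 4.0, 0.2       # volume in cm^3, +/- 0.2 cm^3

density = m / v

# For a product or quotient, independent relative errors add in quadrature.
rel_err = math.sqrt((dm / m) ** 2 + (dv / v) ** 2)
abs_err = density * rel_err

# The size of the uncertainty tells you which digits are significant --
# here, reporting density to more than two decimal places would be wanking.
print(f"{density:.2f} +/- {abs_err:.2f} g/cm^3")  # prints "2.50 +/- 0.13 g/cm^3"
```

Note how the 5% volume error dominates: the 1% mass error barely moves the combined uncertainty, which is why careful experimenters spend their effort on the worst measurement, not the best one.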
It is quite common to see examples of the media getting notions of error and significance wrong. But yesterday I saw a story where someone actually explained why the Olympics don't time events to the millionth of a second, despite clocks that are supposedly that accurate:
Modern timing systems are capable of measuring down to the millionth of a second—so why doesn’t FINA, the world swimming governing body, increase its timing precision by adding thousandths-of-seconds?
As it turns out, FINA used to. In 1972, Sweden’s Gunnar Larsson beat American Tim McKee in the 400m individual medley by 0.002 seconds. That finish led the governing body to eliminate timing by a significant digit. But why?
In a 50 meter Olympic pool, at the current men’s world record 50m pace, a thousandth-of-a-second constitutes 2.39 millimeters of travel. FINA pool dimension rules allow a tolerance of 3 centimeters in each lane, more than ten times that amount. Could you time swimmers to a thousandth-of-a-second? Sure, but you couldn’t guarantee the winning swimmer didn’t have a thousandth-of-a-second-shorter course to swim. (Attempting to construct a concrete pool to any tighter a tolerance is nearly impossible; the effective length of a pool can change depending on the ambient temperature, the water temperature, and even whether or not there are people in the pool itself.)
By this standard, even timing to the hundredth of a second is not significant. And all this is even before .
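The arithmetic in the quote is easy to check. A minimal sketch, assuming a 50 m freestyle world record of about 20.91 seconds (treat that exact figure as an assumption):

```python
# Check the quoted numbers: distance covered in one millisecond at
# world-record 50 m pace, versus the 3 cm lane-length tolerance.
record_s = 20.91               # assumed 50 m world-record time, in seconds
speed = 50.0 / record_s        # meters per second
mm_per_ms = speed * 0.001 * 1000.0  # mm traveled in one thousandth of a second
tolerance_mm = 30.0            # 3 cm lane-length tolerance

print(round(mm_per_ms, 2))                  # prints 2.39
print(round(tolerance_mm / mm_per_ms, 1))   # prints 12.5
```

So the 3 cm tolerance corresponds to roughly 12.5 thousandths of a second at that pace -- larger than a full hundredth of a second, which is exactly why even the hundredths digit is of doubtful significance.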