
Friday, October 9, 2015

9/10/15: Is Economics Research Replicable?… err… “Usually Not”


An interesting paper, albeit one limited by its sample size, on the replicability of research findings in Economics (link here).

The authors took 67 papers published in 13 “well-regarded economics journals” and attempted to replicate the papers’ reported findings. The researchers asked the papers’ authors and the journals for the original data and code used in preparing each paper (in some top Economics journals, it is normal practice to require disclosure of the data and the estimation code for the empirical models alongside publication of the paper).

“Aside from 6 papers that use confidential data, we obtain data and code replication files for 29 of 35 papers (83%) that are required to provide such files as a condition of publication, compared to 11 of 26 papers (42%) that are not required to provide data and code replication files.”

Here is the top line conclusion from the study: “We successfully replicate the key qualitative result of 22 of 67 papers (33%) without contacting the authors. Excluding the 6 papers that use confidential data and the 2 papers that use software we do not possess, we replicate 29 of 59 papers (49%) with assistance from the authors.”

In other words, in most cases the results could not be reproduced even with assistance from the papers’ original authors.

“Because we are able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is usually not replicable.”

This is hardly new, as noted by the study authors. “Despite our finding that economics research is usually not replicable, our replication success rates are still notably higher than those reported by existing studies of replication in economics. McCullough, McGeary, and Harrison (2006) find a replication success rate for articles published in the JMCB of 14 of 186 papers (8%), conditioned on the replicators’ access to appropriate software, the original article’s use of non-proprietary data, and without assistance from the original article’s authors. Adding a requirement that the JMCB archive contain data and code replication files the paper increases their success rate to 14 of 62 papers (23%). Our comparable success rates are 22 of 59 papers (37%), conditioned on our having appropriate software and non-proprietary data, and 22 of 38 papers (58%) when we impose the additional requirement of having data and code files. Dewald, Thursby, and Anderson (1986) successfully replicate 7 of 54 papers (13%) from the JMCB, conditioned on the replicators having data and code files, the original article’s use of non-confidential data, help from the original article’s authors, and appropriate software. Our comparable figure is 29 of 38 papers (76%).”
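To make the comparison concrete, here is a minimal sketch (purely illustrative, in Python) that recomputes the success rates quoted above from the raw counts; the labels are my own shorthand for the conditions described in the quotes, not the papers’ own terminology.

```python
# Purely illustrative: recompute the replication success rates quoted above
# from the raw counts reported in the post. Labels are shorthand for the
# conditions described in the quotes.
rates = {
    "This study, no author contact (all 67 papers)": (22, 67),
    "This study, author assistance (excl. confidential data / missing software)": (29, 59),
    "This study, comparable to McCullough et al. (software + non-proprietary data)": (22, 59),
    "This study, plus data/code files required": (22, 38),
    "This study, comparable to Dewald et al. (author help + data/code files)": (29, 38),
    "McCullough, McGeary & Harrison (2006), JMCB, no author help": (14, 186),
    "McCullough et al. (2006), JMCB archive has data/code files": (14, 62),
    "Dewald, Thursby & Anderson (1986), JMCB, with author help": (7, 54),
}

for label, (successes, total) in rates.items():
    print(f"{label}: {successes}/{total} = {successes / total:.0%}")
```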

A handy summary of results:

[Summary table image from the paper]
So, in basic terms, economists are not only pretty darn useless at achieving forecasting accuracy (which we know and don’t really care about, for reasons too hefty to explain here), but also pretty darn useless at producing replicable results from our own empirical studies using the same data. Hmmm…

Saturday, June 20, 2015

20/6/15: WLASze: Weekend Links of Arts, Sciences & zero economics


A couple of non-economics-related, but hugely important, links worth looking into... or an infrequent entry in my old series of WLASze: Weekend Links of Arts, Sciences and zero economics...

Firstly, via Stanford, we have a warning about the dire state of nature: http://news.stanford.edu/news/2015/june/mass-extinction-ehrlich-061915.html. A quote: "There is no longer any doubt: We are entering a mass extinction that threatens humanity's existence." If we think we can't even handle a man-made crisis of debt overhang in the likes of Greece, what hope do we have of handling an existential threat?

Am I overhyping things? Maybe. Or maybe not. As the population ages, our ability to sustain ourselves is increasingly dependent on better food, nutrition, quality of environment, etc. Not solely because we want to eat/breathe/live better, but also because of brutal arithmetic: the economic activity that sustains our lives depends on productivity, and productivity declines precipitously with an ageing population.

So even if you think the extinction event is a rhetorical exaggeration by a bunch of scientists, even the brutal, linear (forget complex) systems of our socio-economic models imply a serious and growing interconnection between our man-made shocks and natural systems' capacity to withstand them.


Secondly, via Slate, we have a nagging suspicion that not everything technologically smart is... err... smart: "Meet the Bots: Artificial stupidity can be just as dangerous as artificial intelligence": http://www.slate.com/articles/technology/future_tense/2015/04/artificial_stupidity_can_be_just_as_dangerous_as_artificial_intelligence.html.

"Bots, like rats, have colonized an astounding range of environments. …perhaps the most fascinating element here is that [AI sceptics] warnings focus on hypothetical malicious automatons while ignoring real ones."

The article goes on to list examples of harmful bots currently populating the web. But it evades the key question posed in its own heading: what if AI is not intelligent at all, but is superficially capable of faking intelligence to a degree? Imagine a world where we share space with bots that can replicate emotional, social, behavioural and mental intelligence to a high degree, but fail beyond a certain bound. What then? Will the average/median denominator of human interactions converge to that bound as well? Will we gradually witness the disappearance of the human capacity to bypass complex, but measurable or mappable, systems of logic, thus reducing the richness and complexity of our own world? If so, how soon will humanity become a slightly improved model of today's Twitter?


Thirdly, "What happens when we can’t test scientific theories?" via the Prospect Mag: http://www.prospectmagazine.co.uk/features/what-happens-when-we-cant-test-scientific-theories
"Scientific knowledge is supposed to be empirical: to be accepted as scientific, a theory must be falsifiable… This argument …is generally accepted by most scientists today as determining what is and is not a scientific theory. In recent years, however, many physicists have developed theories of great mathematical elegance, but which are beyond the reach of empirical falsification, even in principle. The uncomfortable question that arises is whether they can still be regarded as science."

The reason why this is important to us is that the question of the falsifiability of modern theories is non-trivial to the way we structure our inquiry into reality: the distinction between art, science and philosophy becomes blurred when one body of knowledge relies exclusively on the tools used in another. So much so that even the notion of knowledge, popularly associated with inquiry delivered via science, is usually not extendable to art and philosophy. An example, in a quote: “Mathematical tools enable us to investigate reality, but the mathematical concepts themselves do not necessarily imply physical reality”.

Now, personally, I don't give a damn whether something implies physical reality or not, as long as that something is not designed to support such an implication. Mathematics, therefore, is a form of knowledge, and we don't care whether it has physical-reality implications or not. But the physical sciences purport to hold a specific, more qualitatively important corner of knowledge: that of being grounded in physical 'reality'. In other words, the alleged supremacy of the physical sciences arises not from their superiority as fields of inquiry (the quality of insight is much higher in art, mathematics and philosophy than in, say, biosciences and experimental physics), but from their superiority in application (gravity has more tangible applications to our physical world than, say, topology).

So we have a crisis of sorts for the physical sciences: their superiority has run out of road and has to yield to the superiority of abstract fields of knowledge. Bad news for humanity: the deterministic nature of experimental knowledge is being exhausted, and with it, the determinism surrounding our concept of knowledge diminishes too. Good news for humanity: this does not change much. Whether or not string theory is provable is irrelevant to us. As soon as it becomes relevant, it will be, by Popperian definition, falsifiable. Until then, marvel at the infinite world of the abstract.

Sunday, April 28, 2013

28/4/2013: A must-view TED talk

I rarely post on TED talks for a reason - aiming high, they often deliver a flat repackaging of the known - but this one is worth listening to:

http://www.collective-evolution.com/2013/04/10/banned-ted-talk-rupert-sheldrake-the-science-delusion/

It has been a long-running topic of many conversations I have had over the years with some of you - always in private discussions rather than in public media - that modern science is a belief-based system. My position on this stems not from a dogmatic view of science, but from a simple philosophical realisation: all sciences rest on axiomatic foundations for subsequent inquiry. Since axioms are, by definition, non-provable concepts, the scientific method itself is limited in its applications by the bounds of those axioms.

This is not to invalidate the scientific method or the sciences, but to put some humility into the occasionally arrogant position, held by many (especially non-scientists), that elevates science above arts, religions, beliefs, and other systems of understanding or narrating reality.