
Tuesday, April 30, 2019

30/4/19: Journal of Financial Transformation paper on cryptocurrency pricing


Our paper with Daniel O’Loughlin and Bartosz Chlebowski, titled "Behavioral Basis of Cryptocurrencies Markets: Examining Effects of Public Sentiment, Fear and Uncertainty on Price Formation", is out in the new edition of the Journal of Financial Transformation, Volume 49, April 2019. Available at SSRN: https://ssrn.com/abstract=3328205 or https://www.capco.com/Capco-Institute/Journal-49-Alternative-Capital-Markets.



Monday, October 9, 2017

9/10/17: Nature of our reaction to tail events: ‘odds’ framing


Here is an interesting article from Quartz on the Pentagon's efforts to fund satellite surveillance of North Korea's missile capabilities via Silicon Valley tech companies: https://qz.com/1042673/the-us-is-funding-silicon-valleys-space-industry-to-spot-north-korean-missiles-before-they-fly/. However, the most interesting (from my perspective) bit of the article relates neither to North Korea nor to the Pentagon, nor even to Silicon Valley's role in the U.S. efforts to stop nuclear proliferation. Instead, it relates to this passage from the article:



The key here is an example of the link between our human (behavioral) propensity to take action and the dynamic nature of tail risks or, put more precisely, deeper uncertainty (as I put it in my paper on the de-democratization trend https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2993535, deeper uncertainty as contrasted with Knightian uncertainty).

Deeper uncertainty involves a dynamic view of the uncertain environment in which potential tail events evolve before becoming quantifiable and forecastable risks. This environment differs from classical Knightian uncertainty in so far as the evolution of these events is not predictable and can be set against perceptions or expectations that these events can be prevented, while at the same time providing no historical or empirical basis for assessing the actual underlying probabilities of such events.

In this setting, as opposed to the Knightian set-up with partially predictable and forecastable uncertainty, behavioral biases (e.g. confirmation bias, overconfidence, herding, framing, base rate neglect, etc.) apply. These biases alter our perception of the evolutionary dynamics of uncertain events and thus create a reference point of ‘odds’ of an event taking place. The ‘odds’ view evolves over time as new information arrives, but the ‘odds’ do not become probabilistically defined until very late in the game.

Deeper uncertainty, therefore, is not forecastable, and our empirical observations of its evolution are ex ante biased to downplay one, two, or all of the following dimensions of its dynamics:
- Impact - the potential magnitude of uncertainty when it materializes into risk;
- Proximity - the distance between now and the potential materialization of risk;
- Speed - the speed with which both impact and proximity evolve; and
- Similarity - the extent to which our behavioral biases distort our assessment of the dynamics.

Knightian uncertainty, by contrast, is a simple, one-shot, non-dynamic tail risk. As such, the perceived degree of uncertainty (the ‘odds’) and the actual underlying uncertainty remain close to each other.

Now, materially, the upshot of these dimensions of deeper uncertainty is that in a centralized decision-making setting, e.g. in the Pentagon or in the broader setting of Government agencies, we only take action ex post, after the transition from uncertainty into risk. The bureaucracy’s reliance on ‘expert opinions’ to assess the uncertain environment only acts to reinforce some of the biases listed above. Experts generally do not deal with uncertainty, but are, instead, conditioned to deal with risks. There is zero weight given by experts to uncertainty until such a moment when the uncertain events become visible on the horizon, or when ‘the odds of an event change’, just as the story told by Andrew Hunter in the Quartz article linked above says. In other words, until risk assessment of the uncertainty becomes feasible.

The problem with this is that, by that time, reacting to the risk can be infeasible or even irrelevant, because the speed and proximity of the shock have been growing along with its impact during the deeper uncertainty stage. And, more fundamentally, because the nature of the underlying uncertainty has changed as well.

Take North Korea: the current state of uncertainty over North Korea’s evolving path toward fully developed nuclear and thermonuclear capabilities is about the extent to which North Korea will be willing to use its nukes. Yet the risk assessment framework - including across a range of expert viewpoints - is about the evolution of the nuclear capabilities themselves. The train of uncertainty has left the station. But the ticket holders to policy formation are still standing on the platform, debating how North Korea can be stopped from expanding its nuclear arsenal. Yes, the risks of a fully armed North Korea are now fully visible. They are no longer in the realm of uncertainty, as the ‘odds’ of a nuclear arsenal have become fully exposed. But dealing with these risks is no longer material to the future, which is shaped by a new level of visible ‘odds’ concerning how far North Korea will be willing to go with its arsenal use in geopolitical positioning. Worse, beyond this, there is a deeper uncertainty that is not yet in the domain of visible ‘odds’ - the uncertainty as to the future of the Korean Peninsula and the broader region that involves much more significant players: China and Russia vs Japan and the U.S.

The lesson here is that a centralized system of analysis and decision-making, e.g. the Deep State, to which we have devolved the power to create ‘true’ models of geopolitical realities, is failing. Not because it is populated with non-experts or is under-resourced, but because it is Knightian in nature - dominated by experts and centralized. A decentralized system of risk management is more likely to provide a broader coverage of deeper uncertainty not because it can ‘see deeper’, but because, competing for targets or objectives, it can ‘see wider’, covering more risk and uncertainty sources before the ‘odds’ become significant enough to allow for actual risk modelling.

Take the story told by Andrew Hunter, which relates to the Pentagon procurement of the Joint Light Tactical Vehicle (JLTV) as a replacement for the Humvee, exposed as inadequate by the events in Iraq and Afghanistan. The monopoly contracting nature of Pentagon procurement meant that, until the Pentagon was publicly shown to be incapable of providing sufficient protection for U.S. troops, no one in the market was monitoring the uncertainties surrounding the Humvee’s performance and adequacy in the light of rapidly evolving threats. If the Pentagon’s procurement were more distributed, less centralized, alternative vehicles could have been designed and produced - and also shown to be superior to the Humvee - under other supply contracts, much earlier, and in fact before the expert-procured Humvees cost thousands of American lives.

There is a basic, fundamental failure in our centralized public decision-making bodies - a failure that combines an inability to think beyond the confines of quantifiable risks with an inability to actively embrace the world of VUCA (volatility, uncertainty, complexity and ambiguity), a world that requires active engagement of contrarians not only in risk assessment, but in decision making. That this failure is being exposed in the case of North Korea, geopolitics and Pentagon procurement is only the tip of the iceberg. The real bulk of challenges relating to this modus operandi of our decision-making bodies rests in much more prevalent and better distributed threats, e.g. cybersecurity and terrorism.

Friday, January 13, 2017

12/1/17: Betrayal Aversion, Populism and Donald Trump Election


In their 2003 paper, Koehler and Gershoff provide a definition of a specific behavioural phenomenon, known as betrayal aversion. Specifically, the authors state that “A form of betrayal occurs when agents of protection cause the very harm that they are entrusted to guard against. Examples include the military leader who commits treason and the exploding automobile air bag.” The duo showed - across five studies - that people respond differently “to criminal betrayals, safety product betrayals, and the risk of future betrayal by safety products” depending on who acts as an agent of betrayal. Specifically, the authors “found that people reacted more strongly (in terms of punishment assigned and negative emotions felt) to acts of betrayal than to identical bad acts that do not violate a duty or promise to protect. We also found that, when faced with a choice among pairs of safety devices (air bags, smoke alarms, and vaccines), most people preferred inferior options (in terms of risk exposure) to options that included a slim (0.01%) risk of betrayal. However, when the betrayal risk was replaced by an equivalent non-betrayal risk, the choice pattern was reversed. Apparently, people are willing to incur greater risks of the very harm they seek protection from to avoid the mere possibility of betrayal.”

Put into a different context: we opt for a suboptimal degree of protection against harm in order to avoid being betrayed.
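
To make the arithmetic concrete, here is a minimal numerical sketch of the safety-device choice described in the quote above. The option labels and base risks are hypothetical; only the 0.01% betrayal risk is taken from the study.

    # Illustrative sketch only: option names and base risks are hypothetical;
    # the 0.01% betrayal risk is the figure quoted from the study.
    def total_harm_risk(base_risk, betrayal_risk=0.0):
        """Overall chance the safety device fails to protect, from any cause."""
        return base_risk + betrayal_risk

    # Option A: the 'inferior' device - higher ordinary failure risk, no betrayal risk.
    option_a = total_harm_risk(base_risk=0.0200)                        # 2.00%

    # Option B: the better device overall, but 0.01% of the risk is the device
    # itself causing the harm (the 'betrayal' component).
    option_b = total_harm_risk(base_risk=0.0100, betrayal_risk=0.0001)  # 1.01%

    print(option_a > option_b)  # True: A is strictly riskier overall, yet betrayal-averse
    # subjects tend to pick A, to avoid any chance of being harmed by their protector.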

Now, consider the case of political betrayal. Suppose voters vest their trust in a candidate for office on the basis of the candidate’s claims (call these policy platform, for example) to deliver protection of the voters’ interests. One, the relationship between the voters and the candidate is emotionally-framed (this is important). Two, the relationship of trust induces the acute feeling of betrayal if the candidate does not deliver on his/her promises. Three, past experience of betrayal, quite rationally, induces betrayal aversion: in the next round of voting, voters will prefer a candidate who offers less in terms of his/her platform feasibility (aka: the candidate less equipped or qualified to run the office).

In other words, betrayal aversion will drive voters to prefer a poorer quality candidate.

Sounds plausible? Ok. Sounds like something we’ve seen recently? You bet. Let’s go over the above steps in the context of the recent U.S. presidential contest.


One: emotional basis for selection (vesting trust). The U.S. voters had eight years of ‘hope’ from President Obama. Hope based on emotional context of his campaigns, not on hard delivery of his policies. In fact, the entire U.S. electoral space has become nothing more than a battlefield of carefully orchestrated emotional contests.

Two: an acute feeling of betrayal is clearly afoot in the case of the U.S. electorate. Whether the voters today blame Mr. Obama for their feeling of betrayal, or blame the proverbial Washington 'swamp' that includes the entire lot of elected politicians (including Mrs. Clinton and others), is immaterial. What is material is that many voters do feel betrayed by the elites (both the Bern effect and the Trump campaign were built on capturing this sentiment).

Three: the two candidates who did capture the minds of swing voters and marginalised voters (the types of voters who matter for the election outcome in the end) were both campaigning on razor-thin policy proposals and more on a general sentiment basis. Whether you consider these platforms feasible or not, they were not articulated with the same degree of precision and competency as, say, Mrs Clinton’s highly elaborate platform.

Which means the election of Mr Trump fits (from pre-conditions through to outcome) the pattern of the betrayal aversion phenomenon: fleeing the chance of being betrayed by the agent they trust, American voters opted for a populist, less competent (in the traditional Washington sense) choice.

Now, enter two brainiacs from Harvard. Rafael Di Tella and Julio Rotemberg were quick on their feet in recognising the above emergence of betrayal avoidance, or aversion, in voting decisions. In their December 2016 NBER paper, linked below, the authors argue that voters’ preference for populism is a form of “rejection of ‘disloyal’ leaders.” To do this, the authors add an “assumption that people are worse off when they experience low income as a result of leader betrayal” than when such a loss of income “is the result of bad luck”. In other words, they explicitly assume betrayal aversion in their model of a simple voter choice. The end result is that their model “yields a [voter] preference for incompetent leaders. These deliver worse material outcomes in general, but they reduce the feelings of betrayal during bad times.”
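
A stylized reading of that mechanism (not the authors' actual model; every number below is made up) can be sketched as a simple expected-utility comparison:

    # Stylized sketch of betrayal-averse voter choice; not the Di Tella-Rotemberg
    # model itself, and all numbers are hypothetical.
    def voter_utility(income, felt_betrayal, betrayal_penalty=6.0):
        """Utility is income, minus an extra penalty when a bad outcome is
        experienced as a leader's betrayal rather than as bad luck."""
        return income - betrayal_penalty * felt_betrayal

    p_bad = 0.3  # probability of a bad economic outcome under either leader

    # Competent leader: higher incomes, but a bad outcome feels like betrayal.
    eu_competent = (1 - p_bad) * voter_utility(10, 0) + p_bad * voter_utility(4, 1)

    # Incompetent leader: lower incomes in every state, but a bad outcome is
    # written off as bad luck, so no betrayal is felt.
    eu_incompetent = (1 - p_bad) * voter_utility(8, 0) + p_bad * voter_utility(3, 0)

    print(round(eu_competent, 2), round(eu_incompetent, 2))  # 6.4 vs 6.5: with a large
    # enough betrayal penalty, the less competent candidate wins despite lower income.

In this stylized reading, priming voters on the importance of competence amounts to shrinking the betrayal penalty, which tilts the choice back toward the more competent candidate.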

More to the point, just as I narrated the logical empirical hypothesis (steps one through three) above, Di Tella and Rotemberg “find some evidence consistent with our model in a survey carried out on the eve of the recent U.S. presidential election. Priming survey participants with questions about the importance of competence in policymaking usually reduced their support for the candidate who was perceived as less competent; this effect was reversed for rural, and less educated white, survey participants.”

Here you have it: the classical behavioural bias of betrayal aversion explains why Mrs Clinton simply could not connect with the swing or marginalised voters. It wasn’t hope that they sought, but avoidance of putting hope/trust in someone like her. Done. Not the ‘deplorables’, but those betrayed in the past, have swung the vote in favour of a populist - not because he emotionally won their trust, but because he was the less competent of the two standing candidates.



Jonathan J. Koehler and Andrew D. Gershoff, “Betrayal aversion: When agents of protection become agents of harm”, Organizational Behavior and Human Decision Processes 90 (2003) 244-261: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.11.1841&rep=rep1&type=pdf

Di Tella, Rafael and Rotemberg, Julio J., Populism and the Return of the 'Paranoid Style': Some Evidence and a Simple Model of Demand for Incompetence as Insurance Against Elite Betrayal (December 2016). NBER Working Paper No. w22975: https://ssrn.com/abstract=2890079

Sunday, May 22, 2016

22/5/16: Lying and Making an Effort at It


The paper by Nadja Dwenger and Tim Lohse, “Do Individuals Put Effort into Lying? Evidence from a Compliance Experiment” (March 10, 2016, CESifo Working Paper Series No. 5805: http://ssrn.com/abstract=2764121), looks at “…whether individuals in a face-to-face situation can successfully exert some lying effort to delude others.”

The authors use a laboratory experiment in which “participants were asked to assess videotaped statements as being rather truthful or untruthful. The statements are face-to-face tax declarations. The video clips feature each subject twice making the same declaration. But one time the subject is reporting truthfully, the other time willingly untruthfully. This allows us to investigate within-subject differences in trustworthiness.”

What the authors found is rather interesting: “a subject is perceived as more trustworthy if she deceives than if she reports truthfully. It is particularly individuals with dishonest appearance who manage to increase their perceived trustworthiness by up to 15 percent. This is evidence of individuals successfully exerting lying effort.”

So you are more likely to buy a lemon from a lemon-selling dealer, than a real thing from an honest one... doh...



Some more ‘beef’ from the study:

“To deceive or not to deceive is a question that arises in basically all spheres of life. Sometimes the stakes involved are small and coming up with a lie is hardly worth it. But sometimes putting effort into lying might be rewarding, provided the deception is not detected.”

However, “whether or not a lie is detected is a matter of how trustworthy the individual is perceived to be. When interacting face-to-face two aspects determine the perceived trustworthiness:

  • First, an individual’s general appearance, and
  • Second, the level of some kind of effort the individual may choose when trying to make the lie appear truthful.”


The authors ask a non-trivial question: “do we really perceive individuals who tell the truth as more trustworthy than individuals who deceive?”

“Despite its importance for social life, the literature has remained surprisingly silent on the issue of lying effort. This paper is the first to shed light on this issue.”

The study actually uses two types of data from two types of experiments: “An experiment with room for deception which was framed as a tax compliance experiment and a deception-assessment experiment. In the compliance experiment subjects had to declare income in face-to-face situations vis-a-vis an officer, comparable to the situation at customs. They could report honestly or try to evade taxes by deceiving. Some subjects received an audit and the audit probabilities were influenced by the tax officer, based on his impression of the subject. The compliance interviews were videotaped and some of these video clips were the basis for our deception-assessment experiment: For each subject we selected two videos both showing the same low income declaration, but once when telling the truth and once when lying. A different set of participants was asked to watch the video clips and assess whether the recorded subject was truthfully reporting her income or whether she was lying. These assessments were incentivised. Based on more than 18,000 assessments we are able to generate a trustworthiness score for each video clip (number of times the video is rated "rather truthful" divided by the total number of assessments). As each individual is assessed in two different video clips, we can exploit within-subject differences in trustworthiness. …Any difference in trustworthiness scores between situations of honesty and dishonesty can thus be traced back to the effort exerted by an individual when lying. In addition, we also investigate whether subjects appear less trustworthy if they were audited and had been caught lying shortly before. …the individuals who had to assess the trustworthiness of a tax declarer did not receive any information on previous audits.”
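
As a quick illustration of the scoring described above, here is a minimal sketch of the trustworthiness score and the within-subject difference; the vote counts are invented for the example.

    # Sketch of the trustworthiness score defined in the quote above;
    # the assessment counts below are made up purely for illustration.
    def trustworthiness_score(rather_truthful_votes, total_assessments):
        """Share of assessors who rated a video clip 'rather truthful'."""
        return rather_truthful_votes / total_assessments

    # Two clips of the same subject making the same low-income declaration:
    score_truthful_clip = trustworthiness_score(52, 100)  # clip where she reports honestly
    score_lying_clip    = trustworthiness_score(60, 100)  # clip where she underreports

    # A positive difference means the subject looks *more* trustworthy when
    # deceiving - the within-subject effect the authors attribute to lying effort.
    within_subject_diff = score_lying_clip - score_truthful_clip
    print(round(within_subject_diff, 2))  # 0.08 in this made-up example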

The main results are as follows:

  • “Subjects appear as more trustworthy in compliance interviews in which they underreport than in compliance interviews in which they report truthfully. When categorizing individuals in subjects with a genuine dishonest or honest appearance, it becomes obvious that it is mainly individuals of the former category who appear more trustworthy when deceiving.”
  • “These individuals with a dishonest appearance are able to increase their perceived trustworthiness by up to 15 percent. This finding is in line with the hypothesis that players with a comparably dishonest appearance, when lying, expend effort to appear truthful.”
  • “We also find that an individual’s trustworthiness is affected by previous audit experiences. Individuals who were caught cheating in the previous period, appear significantly less trustworthy, compared to individuals who were either not audited or who reported truthfully. This effect is exacerbated for individuals with a dishonest appearance if the individual is again underreporting but is lessened if the individual is reporting truthfully.”


21/5/16: Manipulating Markets in Everything: Social Media, China, Europe


So, the Chinese Government swamps critical analysis with ‘positive’ social media posts, per this Bloomberg report: http://www.bloomberg.com/news/articles/2016-05-19/china-seen-faking-488-million-internet-posts-to-divert-criticism.

As the story notes: “stopping an argument is best done by distraction and changing the subject rather than more argument”.

So now, consider what the EU and European Governments (including Irish Government) have been doing since the start of the Global Financial Crisis.

They have hired scores of (mostly) mid-educated economists to write what effectively amounts to repetitive reports on the state of the economy, all endlessly cheering the state of ‘recovery’.

In several cases, we now have statistics agencies publishing, across two separate releases, data that was previously available in a single release, providing an opportunity to talk up the figures for the media. Example: the Irish CSO release of the Live Register stats. In another example, the same data previously available in three files - the Irish Exchequer results - is being reported and released through numerous channels and replicated across a number of official agencies.

The result: any critical opinion is now drowned in scores of officially sanctioned presentations, statements, releases, claims and puff pieces, accompanied by complicit media and professional analysts (e.g. sell-side analysts and bond-placing desks).

Chinese manipulating social media, my eye… take a mirror and add lights: everyone’s holding the proverbial bag… 

21/5/16: Banks Deposit Insurance: Got Candy, Mate?…


Since the end of the [acute phase of the] Global Financial Crisis, European banking regulators have been pushing forward the idea that responding to any future [of course never to be labeled ‘systemic’] banking crises will require a new, strengthened regime based on three pillars of regulatory and balance sheet measures:

  • Pillar 1: Harmonized regulatory supervision and oversight over banking institutions (micro-prudential oversight);
  • Pillar 2: Stronger capital buffers (in quantity and quality) alongside pre-prescribed ordering of bailable capital (Tier 1, intermediate, and deposits bail-ins), buffered by harmonized depositor insurance schemes (also covered under micro-prudential oversight); and
  • Pillar 3: Harmonized risk monitoring and management (macro-prudential oversight)


All of this forms the core idea behind the European System of Financial Supervision. Per the EU Parliament (http://www.europarl.europa.eu/atyourservice/en/displayFtu.html?ftuId=FTU_3.2.5.html): “The objectives of the ESFS include developing a common supervisory culture and facilitating a single European financial market.”

Theory aside, the above Pillars are bogus, and I have commented on them on this blog and elsewhere. If anything, they represent a singular, infinitely deep confidence trap whereby policymakers, supervisors, banks and banks’ clients are likely to place even more confidence in the hands of the no-wiser regulators and supervisors who cluelessly slept through the 2000-2007 build-up of massive banking sector imbalances. And there is plenty of criticism of the architecture and the very philosophical foundations of the ESFS around.

Sugar buzz!...


However, generally, there is at least a strong consensus on the desirability of a deposit insurance scheme, a consensus that stretches across all sides of the political spectrum. Here’s what the EU has to say about the scheme: “DGSs are closely linked to the recovery and resolution procedure of credit institutions and provide an important safeguard for financial stability.”

But what about the evidence to support this assertion? Why, there is a fresh study, with the ink still drying on it, via NBER (see details below) that looks into that matter.

Per the NBER authors: “Economic theories posit that bank liability insurance is designed as serving the public interest by mitigating systemic risk in the banking system through liquidity risk reduction. Political theories see liability insurance as serving the private interests of banks, bank borrowers, and depositors, potentially at the expense of the public interest.” So at the very least, there is a theoretical conflict implied in the general deposit insurance concept. Under the economic theory, deposit insurance is an important driver of risk reduction in the banking system, inducing systemic stability. Under the political theory, it is itself a source of risk and thus can result in systemic risk amplification.

“Empirical evidence – both historical and contemporary – supports the private-interest approach as liability insurance generally has been associated with increases, rather than decreases, in systemic risk.” Wait, but the EU says deposit insurance will “provide an important safeguard for financial stability”. Maybe the EU knows a trick or two to resolve that empirical regularity?

Unlikely, according to the NBER study: “Exceptions to this rule are rare, and reflect design features that prevent moral hazard and adverse selection. Prudential regulation of insured banks has generally not been a very effective tool in limiting the systemic risk increases associated with liability insurance. This likely reflects purposeful failures in regulation; if liability insurance is motivated by private interests, then there would be little point to removing the subsidies it creates through strict regulation. That same logic explains why more effective policies for addressing systemic risk are not employed in place of liability insurance.”

Aha: the EU would have to become apolitical when it comes to banking sector regulation, supervision, policies and incentives, subsidies and market supports and interventions in order to have a chance (not even a guarantee) that the deposit insurance mechanism will work to reduce systemic risk rather than increase it. Any bets on what chances we have of achieving such depoliticisation? Yeah, right - nor would I give that anything above 10 percent.

Worse, NBER research argues that “the politics of liability insurance also should not be construed narrowly to encompass only the vested interests of bankers. Indeed, in many countries, it has been installed as a pass-through subsidy targeted to particular classes of bank borrowers.”

So in basic terms, deposit insurance is a subsidy; it is in fact a politically targeted subsidy that favors some borrowers at the expense of system stability, and it is a perverse incentive for the banks to take on more risk. Back to those three pillars, folks - still think there won’t be any [thou shalt not call them ‘systemic’] crises with bail-ins and taxpayers’ hits in the GloriEUs Future?…


Full paper: Calomiris, Charles W. and Jaremski, Matthew, “Deposit Insurance: Theories and Facts” (May 2016, NBER Working Paper No. w22223: http://ssrn.com/abstract=2777311)

21/5/16: Voter selection biases and political outcomes


A recent study based on data from Austria looked at the impact of compulsory voting laws on voter quality.

Based on state and national elections data from 1949-2010, the authors “show that compulsory voting laws with weakly enforced fines increase turnout by roughly 10 percentage points. However, we find no evidence that this change in turnout affected government spending patterns (in levels or composition) or electoral outcomes. Individual-level data on turnout and political preferences suggest these results occur because individuals swayed to vote due to compulsory voting are more likely to be non-partisan, have low interest in politics, and be uninformed.”

In other words, it looks like there is a selection bias being triggered by compulsory voting: lower-quality voters enter the process, but due to their lower quality, these voters do not induce a bias away from the status quo. Whatever the merit of increasing voter turnout via compulsory voting requirements may be, it does not appear to bring about more enlightened policy choices.

Full study is available here: Hoffman, Mitchell and León, Gianmarco and Lombardi, María, “Compulsory Voting, Turnout, and Government Spending: Evidence from Austria” (May 2016, NBER Working Paper No. w22221: http://ssrn.com/abstract=2777309)

So can you 'vote out' stupidity?..



Saturday, December 19, 2015

19/12/15: Another Un-glamour Moment for Economics


Much of the current fascination with behavioural economics is well deserved - the field is a tremendously important merger of psychology and economics, bringing economic research and analysis down to the granular level of human behaviour. However, much of it is also a fad - behavioural economics provides a convenient avenue for advertising companies, digital marketing agencies, digital platform providers and aggregators, as well as congestion-pricing and gig-economy firms, to milk strategies for revenue raising that are anchored in common sense. In other words, much of behavioural economics’ use in real business (and in Government) is about convenient plucking out of strategy-confirming results. It is marketing, not analysis.

A lot of this plucking relies on empirically-derived insights from behavioural economics, which, in turn, often rely on experimental evidence. Now, experimental evidence in economics is very often dodgy by design: you can’t compel people to act, so you have to incentivise them; you can’t quite select a representative group, so you assemble a ‘proximate’ group, and so on. Imagine you want to study intervention effects on a group of C-level executives. Good luck getting actual executives to participate in your study, and good luck getting selection biases sorted out in analysing the results. Still, experimental economics continues to gain prominence as a backing for behavioural economics. And still, companies and governments spend millions on funding such research.

Now, not all experiments are poorly structured and not all evidence derived from is dodgy. So to alleviate nagging suspicion as to how much error is carried in experiments, a recent paper by Alwyn Young of London School of Economics, titled “Channelling Fisher: Randomization Tests and the Statistical Insignificance of Seemingly Significant Experimental Results” (http://personal.lse.ac.uk/YoungA/ChannellingFisher.pdf) used  “randomization statistical inference to test the null hypothesis of no treatment effect in a comprehensive sample of 2003 regressions in 53 experimental papers drawn from the journals of the American Economic Association.”

The attempt is pretty darn good. The study uses a robust methodology to test a statistically valid hypothesis: have the statistically significant results reported in these studies actually arisen from the experimental treatment or not? The paper tests a large sample of studies published (having gone through peer and editorial reviews) in perhaps the most reputable economics journals. This is the creme-de-la-creme of economics studies.
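
For readers unfamiliar with the technique, here is a minimal sketch of what randomization inference involves: shuffle the treatment labels many times and ask how often a difference as large as the observed one appears by chance. This toy example runs on simulated data, not the paper's procedure or sample.

    # Minimal randomization (permutation) test of the null of no treatment effect,
    # on simulated data; illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    treated = rng.normal(0.3, 1.0, size=50)   # outcomes in the treatment arm
    control = rng.normal(0.0, 1.0, size=50)   # outcomes in the control arm
    observed = treated.mean() - control.mean()

    pooled = np.concatenate([treated, control])
    n_treated = len(treated)

    perm_diffs = []
    for _ in range(10_000):
        rng.shuffle(pooled)                    # re-assign 'treatment' at random
        perm_diffs.append(pooled[:n_treated].mean() - pooled[n_treated:].mean())

    # Randomization p-value: share of permuted differences at least as extreme
    # as the one actually observed.
    p_value = np.mean(np.abs(perm_diffs) >= abs(observed))
    print(observed, p_value)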

The findings, to put this scientifically: “Randomization tests reduce the number of regression specifications with statistically significant treatment effects by 30 to 40 percent. An omnibus randomization test of overall experimental significance that incorporates all of the regressions in each paper finds that only 25 to 50 percent of experimental papers, depending upon the significance level and test, are able to reject the null of no treatment effect whatsoever. Bootstrap methods support and confirm these results. “

In other words, in a majority of the studies claiming statistically significant results from experimental evidence, such results were not in fact statistically significantly attributable to the experimental treatment.

Now, the author is cautious in his conclusions. “Notwithstanding its results, this paper confirms the value of randomized experiments. The methods used by authors of experimental papers are standard in the profession and present throughout its journals. Randomized statistical inference provides a solution to the problems and biases identified in this paper. While, to date, it rarely appears in experimental papers, which generally rely upon traditional econometric methods, it can easily be incorporated into their analysis. Thus, randomized experiments can solve both the problem of identification and the problem of accurate statistical inference, making them doubly reliable as an investigative tool. “

But this is hogwash. The results of the study effectively tell us that a large (huge) proportion of papers on experimental economics published in the most reputable journals have claimed significant results attributable to experiments where no such significance was really present. Worse, the methods that delivered these false significance results “are standard in the profession”.


Now, consider the even more obvious point: these are academic papers, written by authors highly skilled in econometrics, data collection and experiment design. Imagine what drivel passes for experimental analysis coming out of marketing and surveying companies? Imagine what passes for policy analysis coming out of public sector outfits - without peer reviews and without cross-checks like those performed by Young?

Sunday, April 19, 2015

19/4/15: New Evidence: Ambiguity Aversion is the Exception


A fascinating behavioural economics study on ambiguity aversion by Martin G. Kocher, Amrei Marie Lahno and Stefan Trautmann, titled "Ambiguity Aversion is the Exception" (March 31, 2015, CESifo Working Paper Series No. 5261: http://ssrn.com/abstract=2592313), provides empirical testing of the ambiguity aversion hypothesis.

Note: my comments within quotes are in bracketed italics

When an agent makes a decision in the presence of uncertainty, "risky prospects with known probabilities are often distinguished from ambiguous prospects with unknown or uncertain probabilities… [in economics literature] it is typically assumed that people dislike ambiguity in addition to a potential dislike of risk, and that they adjust their behavior in favor of known-probability risks, even at significant costs."

In other words, there is a paradoxical pattern of behaviour commonly hypothesised: suppose an agent faces a choice between a gamble with known probabilities (uncertain, but not ambiguous) that has a low expected return and a gamble with unknown (ambiguous) probabilities that has a high expected return. In basic terms, ambiguity aversion implies that the agent will tend to select the first choice, even if this choice is sub-optimal in a standard risk-aversion setting.
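
One common way to formalise this is a worst-case (maxmin) evaluation of the ambiguous gamble. The sketch below is illustrative only; the payoffs and the set of candidate probabilities are made up.

    # Illustrative maxmin (worst-case prior) reading of ambiguity aversion;
    # payoffs and the candidate probability set are hypothetical.
    def expected_value(p_win, win, lose=0.0):
        return p_win * win + (1 - p_win) * lose

    # Known-probability gamble: 50% chance of winning 80.
    known = expected_value(0.50, 80)                                    # = 40

    # Ambiguous gamble: pays 100, but the win probability is only known to
    # lie somewhere between 0.25 and 0.75.
    candidate_probs = [0.25, 0.5, 0.75]
    midpoint_value = expected_value(0.5, 100)                           # = 50
    worst_case_value = min(expected_value(p, 100) for p in candidate_probs)  # = 25

    # A maxmin (ambiguity-averse) agent compares 40 with 25 and takes the known
    # gamble, even though the ambiguous one looks better under the midpoint prior.
    print(known, midpoint_value, worst_case_value)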

As authors note, "A large literature has studied the consequences of such ambiguity aversion for decision making in the presence of uncertainty. Building on decision theories that assume ambiguity aversion, this literature shows that ambiguity can account for empirically observed violations of expected utility based theories (“anomalies”)."

"These and many other theoretical contributions presume a universally negative attitude toward ambiguity. Such an assumption seems, at first sight, descriptively justified on the basis of a large experimental literature… However, …the predominance of ambiguity aversion in experimental findings might be due to a narrow focus on the domain of moderate likelihood gains… While fear of a bad unknown probability might prevail in this domain [of choices with low or marginal gains], people might be more optimistic in other domains [for example if faced with much greater payoffs or risks, or when choices between strategies are more complex], hoping for ambiguity to offer better odds than a known-risk alternative."

So the authors then set out to look at the evidence for ambiguity aversion "in different likelihood ranges and in the gain domain, the loss domain, and with mixed outcomes, i.e. where both gains and losses may be incurred. …Our between-subjects design with more than 500 experimental participants exposes participants to exactly one of the four domains, reducing any contrast effects that may affect the preferences in the laboratory context."

Core conclusion: "Ambiguity aversion is the exception, not the rule. We replicate the finding of ambiguity aversion for moderate likelihood gains in the classic ...design. However, once we move away from the gain domain or from the [binary] choice to more [complex set of choices], thus introducing lower likelihoods, we observe either ambiguity neutrality or even ambiguity seeking behavior. These results are robust to the elicitation procedure."

So is ambiguity hypothesis dead? Not really. "Our rejection of universal ambiguity aversion does not generally contradict ambiguity models, but it has important implications for the assumptions in applied models that use ambiguity attitudes to explain real-world phenomena. Theoretical analyses should not only consider the effects of ambiguity aversion, but also potential implications of ambiguity loving for economics and finance, particularly in contexts that involve rare events or perceived losses such as with insurance or investments. Policy implications should always be fine-tuned to the specific domain, because policy interventions based on wrong assumptions regarding the ambiguity attitudes of those targeted by the policy could be detrimental."

Thursday, April 24, 2014

24/4/2014: Culture and Economics: The Opposites Attract


This is an unedited version of my column in the Village Magazine, April 2014


Back in the late 1970s, George Stigler and Gary S. Becker wrote a famous paper, titled "De Gustibus Non Est Disputandum" that mapped out the view of economics as a field that treats with suspicion the idea of preferences-based explanations for human choices and behaviour. Differences in individual preferences, they said, can “explain everything and therefore nothing”.

This position has informed much of the mainstream economics thinking for at least two decades, creating an erroneous perception outside the field that economists ‘do not do personal attributes’ of individual and collective behaviour. Thus, culture, aesthetics and ethics, should, according to popular beliefs, be automatically falling outside the scope of economic inquiry.

This perception is wrong for at least two basic reasons. Firstly, cultural, aesthetic and ethical foundations of our social interactions contain much more than a purely atomistic, individualised component. In fact, culture is more systematic in nature than atomistic, and as such can be studied using economic models and techniques. Secondly, economics as a field of inquiry has moved substantially from the 1970s worldview to embrace many aspects of individual-specific or idiosyncratic behaviour, including historical, psychological, neurological and cultural drivers of individual and collective choices.

From this point of view, it is worth looking at the ways in which economics and culture interact today in the mainstream economic research.

To start with, consider the basic building blocks of rational modeling of choice as applied to culture. Is there a systemic framework that can be used to think about culture and cultural issues on the basis of the economic system of thinking, a system built on the concepts of marginal utility and constrained optimisation?

The answer to this question is an affirmative one. There is, and more: it yields far-reaching and highly useful outcomes for the field of economics, while generating a feedback loop that enriches our ability to understand and model cultural aspects of our behaviour and choices.

As Luigi Guiso, Paola Sapienza and Luigi Zingales, in their paper "Does Culture Affect Economic Outcomes?" clearly state: "Until recently, economists have been reluctant to rely on culture as a possible determinant of economic phenomena. Much of this reluctance stems from the very notion of culture: it is so broad and the channels through which it can enter the economic discourse so ubiquitous (and vague) that it is difficult to design testable, refutable hypotheses. In recent years, however, better techniques and more data have made it possible to identify systematic differences in people's preferences and beliefs and to relate them to various measures of cultural legacy. These developments suggest an approach to introducing culturally-based explanations into economics that can be tested and may substantially enrich our understanding of economic phenomena."



The starting point for thinking about culture in economic terms is to posit a question as to what distinguishes cultural value from economic value.

In economics, the value of an object, an action or a service is determined by reference to the marginal utility derived from each additional unit of this object, action or service made accessible to the user or consumer. Under certain rather restrictive conditions, this can be translated or mapped into a pricing system, but the concept of price is more restricted and more restrictive than the concept of value.

Cultural value, as Guiso, Sapienza and Zingales note in the previous quote, is harder to define, at least in rational or mathematical terms and systems. It is usually thought of as a set of attributes, values, beliefs etc that can be grouped together on the basis of having some identifiable, but not necessarily quantifiable (in ordinal or cardinal terms) value to a specific group of people.

Imposing some constraints, just as with the translation of marginal economic value into prices, we can think of cultural values as goods, actions and services that reflect intellectual, ethical and aesthetic aspects of humanity, collectively or as atomistic individuals. Thus, a work of art has a cultural value, and it can be mapped into a 'cultural price', but only under very restrictive conditions.

There is a clear difference between cultural and economic value systems. For example, a price-like system does not apply as well to measuring artistic achievement as it does to measuring the quality of oranges or cars. But this clear distinction does not mean that complex systems, like aesthetic or ethical values of a particular culture, cannot be partially modelled by references to well-definable preferences. Being humble about the scope of economic models application to such subjects as arts or sciences or folklore does not mean rejecting completely the idea that economics can provide useful tools for studying these phenomena.

The key concept of scarcity - which drives the existence of defined preferences and prices in traditional economics - also applies to culture. Utility functions that value positively some desired scarce good, and that change this valuation on the margin as the quantity of the good available to the consumer changes, also apply to works of art, religious beliefs and social rules. Concepts of time discounting and budget constraints that drive decision making in mainstream economics also shape cultural evolution, as well as guide emergence, propagation and survivorship in arts, cultural and social values and norms.


The core limitation - when it comes to applying economic models to arts and culture - arises from the mathematical problem of not being able to assign stable and well-defined preferences to cultural phenomena.

Think of the determination of value in culture. Traditionally, we distinguish several methods for assigning cultural value to any particular object or act. These normally include analysis of the object's content and context in relation to a specific group of people, a time period, or both. Tools used for such analysis are surveys of experts and/or users and, in more extreme cases, also psychometric surveys and even measures of physiological or neurological responses. The problem, of course, is that there is little we can do to remove enough subjective valuation from such assessments to deliver a stable and rational (in mathematical terms) system of classification or rankings.

Thus, the perennial question in art valuation (cultural, not economic) is 'who are the experts?' In economic valuations of art, the answer is rather simple: an auction process or a direct sale sets the value. In cultural terms, a Rauschenberg is a masterpiece to some and a collection of refuse to others. Another simple question that undermines the idea that cultural value is perfectly measurable is the validity of surveys of users and, in a more specialist context, the validity of the physiological responses being measured. These fail on the basis of the 'eye of the beholder' or 'the innocent eye' tests.


(Mark Tansey "An Innocent Eye Test")

One last measurement system - the system reliant on aggregation of individual valuations or, put more colloquially, the 'repetition test' - fails because it is open-ended. No time horizon or sample size can be defined for such a metric, and no value can be assigned on its basis. This applies to all collective bases for valuation, including cultural and social norms. In methodological terms, many of these issues have been known to economists for some time, as highlighted, for example, in a survey by Charles Manski, titled "Economic Analysis of Social Interactions", written over 15 years ago.

But some recent examples show just how far the field of economic modelling has evolved in developing capabilities to capture cultural and social phenomena in the econometric setting, allowing at least for applied evaluation and analysis.

A paper by Luigi Guiso, Paola Sapienza and Luigi Zingales, titled "People's Opium? Religion and Economic Attitudes" takes a debate about the effect of religious institutions on economic behaviour and attitudes - a debate that raged since the times of Max Weber - and applies modern econometric techniques to it. The result is analysis of dynamic evolutionary trends in the link between religion and economics. The conclusions are far from banal or anodyne. Using the World Values Surveys "to identify the relationship between intensity of religious beliefs and economic attitudes, controlling for country fixed effects", the authors "study several economic attitudes toward cooperation, the government, working women, legal rules, thriftiness, and the market economy". The study found that "on average, religious beliefs are associated with ‘good’ economic attitudes, where ‘good’ is defined as conducive to higher per capita income and growth. Yet religious people tend to be more racist and less favorable with respect to working women. These effects differ across religious denominations. Overall, we find that Christian religions are more positively associated with attitudes conducive to economic growth."

It is worth noting that such far-reaching systemic conclusions cannot be reached by a reference to tools traditionally employed by historians and cultural anthropologists, but must instead rely on econometrics and prior economic modelling.

This is hardly an example of economics research that 'ignores culture' or 'has difficulty modelling cultural inputs'. But it is also the type of research which shows that one cannot establish a hierarchical system defining the superiority of one system of valuations (economic or cultural) over the other (cultural or economic) – a pivotal issue that we will return to below.


Culture does present economists with interesting dilemmas that push out the boundaries of our way of thinking.

Take for example collective, as opposed to individual, valuation of a cultural object. In economics, traditionally, utility functions - the basis for defining value and the terms of transactions - are agent-specific and reflect the position of a representative agent (a sort of mathematical average). In this setting, the value of the object usually arises from individual valuation without regard for others and for their valuations. By virtue of all agents being ‘representative’, these valuations then apply to the entire group of people that forms the economy.

In culture, of course, a work of art has both an individual value to the viewer and a collective value to the society or the group of people that the individual references, plus to the broader groups that may not be referenced by the person whose utility is being modeled. One source of value is intimately linked to the other, however. Even the most remote cultural connections between an individual and a group still exert an impact, even if indirectly (via conditioning or framing, for example).

In fact, economics has recognised the limitations of the fully separable or individually objective utility function for some time now. Much of modern economics rests on preferences that incorporate, in the valuation of an object, references to its valuation by others (for example 'keeping up with the Joneses' or benevolence motives), expectations about the value of the object to future generations (inter-generational bequests), transmission of value through time (referencing past generations' valuations, addiction, habit formation or path-dependency), and other forms of linkages between one's own satisfaction from consumption of a good or a service and the satisfaction of, or impact on, others. More tenuous connections across the society, including cultural ones, are captured (if not always directly) via institutional and political systems.
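
As a purely textbook-style illustration (not drawn from any of the papers cited here), such reference-dependent preferences are often written with a 'Joneses' term and a habit term:

    % Illustrative only: utility over own consumption c_{i,t}, referencing others'
    % average consumption \bar{c}_t ('keeping up with the Joneses', weight \theta)
    % and own past consumption c_{i,t-1} (habit formation, weight \gamma).
    U_i \;=\; \sum_{t}\beta^{t}\, u\!\left(c_{i,t} - \theta\,\bar{c}_{t} - \gamma\,c_{i,t-1}\right),
    \qquad 0<\beta<1,\quad \theta,\gamma \ge 0 .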

Referencing others' preferences can commonly be seen as an important component of exchange-linked interactions. And here, cultural factors can enter directly into economic models. For example, the Luigi Guiso, Paola Sapienza and Luigi Zingales study titled "Cultural Biases in Economic Exchange?" looked at how cultural biases affect economic exchanges. Using data on bilateral trust between European countries, the authors show that "trust is affected … by cultural aspects of the match between trusting country and trusted country, such as their history of conflicts and their religious, genetic, and somatic similarities.” Lower bilateral trust leads to less trade, less portfolio investment, and less direct investment between countries. Another study, by Paul Zak and Stephen Knack, also found that trust has a direct causal link to economic growth via facilitating investment and trade.

Trust, as a cultural factor, enters determination of the effectiveness of large organisations operating in society, according to Rafael La Porta, Florencio Lopez-de-Silane, Andrei Shleifer and Robert W. Vishny. This applies to "government performance, participation in civic and professional societies, importance of large firms, and the performance of social institutions".

Luigi Guiso, Paola Sapienza, and Luigi Zingales, cited previously, also looked at the issues of trust and culture in relation to households' willingness to participate in stock markets. Their paper on this topic showed that cultural attitudes to trust are significant determinants of households’ choices to participate or not in the stock markets in a number of countries.

Similarly, time-linked referencing, among other matters, has already been tackled in economics, including in the context of modelling cultural system inputs into economic systems and institutions. For example, in his 1994 paper, Avner Greif models the effects of cultural beliefs on the organisation of society across historical and ideological lines. The paper used game-theoretic and sociological frameworks to conduct a comparative historical analysis of the relations between culture and institutions, explicitly incorporating possible path-dependencies (historical referencing) in how culture impacts institutional evolution.


Much of the thinking about culture and its contribution to economic and financial interactions between people, firms, countries and regions enters economics from analysis of the impact of arts and public or shared goods and services on economic behaviour. In other words, culture enriches economics.

But economic models, techniques and concepts also contribute to our understanding of culture.

One example is the rapidly evolving field of analysis relating to cultural capital. In general, capital is an asset - a form of foregone or saved consumption - that stores value. Cultural capital, therefore, is a form of storage, and transmission over time, of cultural values. In so far as such value is embedded into physical objects, experiences, actions and knowledge, these objects or subjects of culture (e.g. paintings or sculpture or a garden), actions, experiences and knowledge are embodiments of cultural capital. They can be passed on to the future generations, or destroyed, enhanced by adding to their stock and quality, undermined or devalued by reducing their stocks or quality and so on.

In economics, all capital either directly serves as an input into production and/or can be used to transform other inputs (for example, via labour-capital complementarity). And so it is with cultural capital. Current generations of artists rely on the past stock of artistic and cultural capital to create their works. Today's music draws on the folklore of the past, today’s architecture references past landscapes and cityscapes, tomorrow’s poetry will reflect today’s ethos or the events shaped by it, and so forth.

Again, the idea of cultural capital – an economic concept to begin with – also poses an interesting challenge for economics. Physical capital, such as buildings, machinery and equipment, exists separably from us, the households and workers who use it. As a result, modelling the production process using ‘labour’ and physical capital is conveniently easy from a mathematical perspective, although such separation is by no means accurate, or even accepted any longer, in modern economics. But cultural capital exists simultaneously within us and outside of us. In mathematical terms, it contains parts that are separable and parts that are inseparable from human beings, or 'labour'.

This problem, however daunting technically, is not unique to cultural capital. Other forms of capital, such as human capital, social capital and some forms of technological capital, are also non-separable (at least not perfectly) from 'labour'. More interestingly, in contemporary economics, we are starting to recognise that even physical capital can no longer be perfectly detached from us. Aesthetic and ethical aspects of our physical environment (aesthetic and, increasingly also ethical, aspects and attributes of buildings, settings, equipment we use) also interact with our human capital and are directly influenced by cultural capital.

These forces shape the modern workplace, an issue touched upon, for example, in Andrea Ichino and Giovanni Maggi's paper on work environment and individual employees’ backgrounds. Another good example of these processes is the recognition we accord today to the role of ergonomics, and design in general, in our workplace. There is a truly massive body of academic literature linking the quality of design of the working environment to the productivity, innovativeness and creativity of workers. As far back as 1999, Adrian Leaman and Bill Bordass, in their paper “Productivity in Buildings: the ‘killer’ variables”, argued that “losses or gains of up to 15% of turnover in a typical office organization might be attributable to the design, management and use of the indoor environment. There is growing evidence to show that associations between perceived productivity and clusters of factors such as comfort, health and satisfaction of staff.” In other words, things like energy efficiency may act via cultural triggers to improve workers’ outlook and satisfaction, thus increasing productivity.


Importantly, social, human and cultural forms of capital are increasingly entering economic analysis both at microeconomic level (decisions of households and individual firms) and at macroeconomic (economic systems, national economies, global economy) levels. For example, economists are fully aware (even though we still have great difficulty valuing or measuring it and more pertinently, we have a great difficulty finding suitable econometric instruments to capture many types of cultural and ethical values) of the effects that cultural values and systems have on political and economic institutions, their shapes, evolutionary dynamics and key traits.

In one paper, Edward L. Glaeser, David Laibson and Bruce Sacerdote developed a complete economic model of social capital that can also be extended to capture some traits of cultural capital.

These effects can be transmitted via demographics (cultural aspects relating to family formation, beliefs structures and collective ethics), political systems (nature and extent of democratic institutions, efficiency of specific forms of political and economic governance), judiciary and military (role of independent judiciary or power of military in a society), and so on.

Alessandra Fogli and Raquel Fernandez's paper "Culture: An Empirical Investigation of Beliefs, Work, and Fertility" found that cultural attributes based on a woman's country of ancestry have strong explanatory power in determining family decisions relating to the work and fertility behavior of second-generation American women, even after controlling for other, economic and social, drivers. And in another study, the same authors, together with Claudia Olivetti, argued that cultural drivers are also important to the work and fertility behaviour of American women in general.

These are just some examples of the ways in which economics and culture interact, productively and constructively. Of course, none of these imply existence of the dominance relationship between the two domains. Instead, the domains ‘collaborate’, causally, in both directions.

Rachel McCleary and Robert Barro, in their "Religion and Political Economy in an International Panel" argued that "Economic and political developments affect religiosity, and the extent of religious participation and beliefs influence economic performance and political institutions." The study found that "Church attendance and religious beliefs are positively related to education (thereby conflicting with theories in which religion reflects non-scientific thinking) and negatively related to urbanization. …On the other side, we find that economic growth responds positively to the extent of some religious beliefs [notably those in hell and heaven] but negatively to church attendance.” In other words, belief, not belonging to church, drives growth. These results, according to authors “accord with a perspective in which religious beliefs influence individual traits that enhance economic performance. The beliefs are, in turn, the principal output of the religion sector, and church attendance measures the inputs to this sector. Hence, for given beliefs, more church attendance signifies more resources used up by the religion sector."



Likewise, institutional arrangements in specific sectors of the economy, e.g. finance, can be traced at least in part to cultural drivers or factors. Rene Stulz and Rohan Williamson looked at cultural differences (in particular, variations in religious values) as a driver of the determination of shareholder and creditor rights. To the chagrin of the ‘culture-first, economics-last’ proponents, they found that "the origin of a country's legal system is more important than its religion and language in explaining shareholder rights." To the chagrin of economics-first supporters, a country's principal religion still proves useful in predicting the cross-sectional variation in creditor rights, and "religion and language are also important predictors of how countries enforce rights."

Amir Licht, Chanan Goldschmidt and Shalom Schwartz show that culture underpins the foundations of the rule of law and other basic/fundamental norms of governance, thus directly influencing evolution of social and institutional capital.

Guido Tabellini's important study, "Culture and Institutions: Economic Development in the Regions of Europe" looked at whether culture has a causal effect on economic development. Tabellini concluded that it does: "Culture is measured by indicators of individual values and beliefs, such as trust and respect for others, and confidence in individual self-determination… The exogenous component of culture due to history is strongly correlated with current regional economic development, after controlling for contemporaneous education, urbanization rates around 1850 and national effects."



On the opposing side of research spectrum, economic models and techniques can be used to study changes in underlying social and cultural traits.

Alberto Bisin and Thierry Verdier's paper "Beyond The Melting Pot: Cultural Transmission, Marriage, And The Evolution Of Ethnic And Religious Traits" developed an economic analysis of "the intergenerational transmission of ethnic and religious traits through family socialization and marital segregation decisions". Econometric methods and techniques have been deployed by Paola Giuliano to explain a largely cultural phenomenon: the varied family living arrangements found across European countries.

An applied World Bank policy research paper by Karla Hoff and Priyanka Pandey used experimental economics to explain the relationship between cultures and caste discrimination in India.

In their "Belief in a Just World and Redistributive Politics", Roland Benabou and Jean Tirole cross-link culture, political institutions, social ethnographies, ideological beliefs and use economics to explain variations in ethical systems across a number of countries.

And in their 2004 paper, Fabian Bornhorst and co-authors use experimental economics to capture significant differences in culturally-based trust between southern and northern Europeans.



The above are just a handful of examples where culture and economics are productively cross-linked in social and ethnographic, economic and demographic, as well as anthropological research.

Only a naive mind can suggest some hierarchical structure that would rank one over the other – culture over economics or vice versa – either as a tool for social inquiry or as a source of value in the social setting. In the real world, each requires the other to sustain itself. And in the real world, the illusory theoretical perfections of economics can be challenged by the messiness of arts and cultural factors, while the intangibles of ethical values can be partially systemised and made part of the broader analysis of our society by economics. Economics is not detached from culture, and culture is not divorced from economics.