Highlights 2013
WP 30/13
Sergei Kovbasyuk, in “Seller-paid Ratings”, analyses the interaction among the seller of a product, buyers who are uncertain about the product’s quality, and a rating agency that observes the quality and sends signals about it, within the seller-pays business model of rating agencies. He shows that this model performs well as long as payments between the seller and the rating agency are publicly disclosed. In fact, if a buyer observes that a seller has paid an unusually high fee for a rating, his perception of the product’s quality deteriorates; this prevents the seller from inflating the payments and improves the precision of the ratings. By contrast, when payments between the seller and the rating agency are private, the former may manipulate the payments to bias the latter’s report; to prevent such manipulation, the rating agency’s ratings must be imprecise. It follows that regulation requiring fixed fees is not desirable and can be improved upon by requiring the rating agency to disclose the payments it receives from the seller.
WP 29/13
Franco Peracchi, together with Roger Koenker and Samantha Leorato, in “Distributional vs. Quantile Regression”, consider two alternative approaches to estimating the conditional distribution of a scalar random variable Y, given a random vector X, when the available data are a sample from the joint distribution of (Y,X). One approach, distributional regression (DR), is based on direct estimation of the conditional distribution function of Y given X; the other, quantile regression (QR), is based on direct estimation of the conditional quantile function of Y given X. Indirect estimates of the conditional quantile function and of the conditional distribution function may then be obtained by inverting the direct estimates from either approach. Despite the growing attention to the DR approach, and the vast literature on the QR approach, the link between the two has not been fully explored. This paper fills the gap by providing a better understanding of the relative performance of the two approaches, both asymptotically and in finite samples, under the linear location model and certain types of heteroskedastic location-scale models.
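To make the contrast concrete, here is a minimal simulation sketch of the two routes (the data-generating process, the cutoff grid and the use of a logit link for DR are illustrative assumptions, not the authors’ specification):

    # Minimal sketch: QR estimates the conditional quantile directly;
    # DR estimates the conditional distribution at a grid of cutoffs.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.normal(size=n)
    y = 1.0 + 2.0 * x + rng.normal(size=n)   # a linear location model
    X = sm.add_constant(x)
    tau = 0.75

    # Quantile regression: direct estimate of Q(tau | x).
    q_qr = sm.QuantReg(y, X).fit(q=tau).predict(X)

    # Distributional regression: a binary (here logit) model for {Y <= c}
    # at each cutoff c gives a direct estimate of F(c | x).
    cutoffs = np.quantile(y, np.linspace(0.05, 0.95, 19))
    F_hat = np.column_stack([
        sm.Logit((y <= c).astype(float), X).fit(disp=0).predict(X)
        for c in cutoffs
    ])

    # Indirect quantile estimate by inverting the DR fit: the first cutoff
    # at which the estimated F(c | x) reaches tau (rough grid inversion).
    q_dr = cutoffs[np.argmax(F_hat >= tau, axis=1)]
    print(np.mean(np.abs(q_dr - q_qr)))      # the two routes roughly agree

Inverting the DR estimate recovers an indirect quantile estimate, mirroring how inverting a family of QR fits would recover an indirect distribution function estimate.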
WP 28/13
Luigi Paciello and Andrea Pozzi, together with Nicholas Trachter, in “Price Setting with Customer Retention”, set up a model where customers face frictions when changing their supplier, generating sluggishness in the firm’s customer base. Firms care about retaining customers, and this affects their pricing strategies. Using scanner data from a large US retailer, the authors provide novel direct evidence of sluggishness in the customer base; the same database is then used to estimate the model. They show that introducing customer retention delivers pro-cyclical markups in response not only to idiosyncratic shocks, a feature well documented in the data, but also to aggregate ones (both technology and government spending). These features cannot be replicated by a standard CES model. The implications are quantitatively relevant: business cycle fluctuations of output are reduced by 30% with respect to a setting without customer base concerns. The model also matches the empirical evidence on the heterogeneous pass-through of real exchange rate shocks to prices.
WP 27/13
Luigi Guiso, together with Paola Sapienza and Luigi Zingales, in “The Value of Corporate Culture”, investigate whether corporate culture, understood as a set of principles and values shared by all of a firm’s employees, is correlated with firm performance. A first issue in performing this analysis is how culture is measured. Indeed, the authors show that the corporate values explicitly proclaimed by the large majority of S&P 500 companies appear irrelevant. They therefore use a novel dataset created by the Great Place to Work® Institute, which conducts extensive surveys of the employees of more than 1,000 US firms. This database has the advantage of measuring how values, in particular the integrity of management, are perceived by employees, rather than how they are advertised by the firm. They find that high levels of perceived integrity are positively correlated with firm performance, in terms of higher productivity, higher profitability and better industrial relations. They then analyse how different governance structures affect the ability to sustain integrity as a corporate value and find that publicly traded companies are less able to sustain it.
WP 26/13
Luigi Guiso, together with Andreas Fagereng and Charles Gottlieb, in “Asset Market Participation and Portfolio Choice over the Life-Cycle”, study the life-cycle pattern of investors’ portfolios using error-free data on a large random sample of Norwegian households’ investments drawn from the Tax Registry. They find that both participation in the stock market and the portfolio share in stocks exhibit marked life-cycle patterns. First, participation is limited at all ages, but it follows a hump-shaped profile with a peak around retirement. Second, the portfolio share of households participating in the stock market is high and fairly constant in the early and middle phases of the life cycle; as retirement comes into sight, households start reducing their risky asset share gradually and continuously; after retirement, those who remain in the market keep the share rather flat at a lower level. Thus the data suggest a double adjustment as people age: a rebalancing of the portfolio away from stocks as they approach retirement, and exit from the stock market after retirement. No available life-cycle portfolio model can generate these stylized facts. However, the authors show that, by incorporating a small per-period participation cost and a small probability of a large loss when investing in stocks (a “disaster” event), these models are able to replicate the life-cycle profiles of stock market participation and portfolio shares observed in the data.
WP 25/13
Luigi Guiso, together with Eliana Viviano, in “How Much Can Financial Literacy Help?”, test the benefits of financial literacy by merging survey data on a sample of individual investors, including measures of financial literacy, with administrative records of their asset holdings and trades before, during and after the crisis of September 2008. They compare the decisions actually taken by individual investors with three clearly dominated alternatives: selling stocks after the market crash rather than before; failing to implement a portfolio allocation consistent with the CAPM; and buying bonds issued by their own bank when a better alternative – government bonds – was available. They find that during the crisis high-literacy investors did better than low-literacy ones in their ability to time the market, to manage their investments and to detect intermediaries’ potential conflicts of interest. However, the differences between the two groups are economically small, while in all cases the fraction of investors choosing the dominated alternative is large. Both features suggest that the gains from increasing financial literacy may be modest.
WP 24/13
In “State Dependent Monetary Policy” Francesco Lippi and Nicholas Trachter, together with Stefania Ragni, study optimal monetary policy in a flexible-price economy with heterogeneous agents and incomplete markets, which gives rise to a business cycle. In particular, they explore how state-dependent monetary policy balances the costs of anticipated inflation with the need for insurance along the business cycle. Such a policy allows for a dramatic welfare improvement compared to a policy that does not respond to the state. The optimal policy prescribes expanding the supply of liquidity in recessions, when the unproductive agents are poor (and insurance needs are high), and contracting the liquidity base in expansions, to maximise production incentives. Though optimal monetary policy varies greatly along the business cycle, it “echoes” Friedman’s rule, as the expected real return on money remains close to the rate of time preference.
WP 23/13
Luigi Guiso, together with Paola Sapienza and Luigi Zingales, in “Long-term Persistence”, present empirical evidence suggesting that culture is a channel through which shocks to institutions can affect outcomes over prolonged periods of time. In particular, they test whether today’s notable differences in civic capital between the North and the South of Italy are the legacy of the free city-states of the Middle Ages. First they show that cities that experienced self-government in the Middle Ages have more civic capital today. This result holds even within the cities of the North and after correcting for the endogeneity of the emergence of free city-states. The authors then conjecture a link between the psychological theory of learned helplessness and the formation of civic capital, according to which people who experience a negative event without any chance to react are more likely to suffer a sense of helplessness, to develop a negative attributional style and to transmit this style to their children. Indeed, they find that fifth-graders in former city-states exhibit a less pessimistic attributional style, which in turn is correlated with a higher level of civic capital.
WP 22/13
Luigi Guiso, together with Paola Sapienza and Luigi Zingales, in “Time Varying Risk Aversion”, show that individual risk aversion increased dramatically after the 2008 financial crisis. After showing that the traditional drivers of risk aversion fail to explain this increase, they propose a psychology-based explanation of the phenomenon. They exploit a survey of a sample of clients of a large Italian bank, conducted in 2007 and repeated in 2009, and find that both qualitative and quantitative measures of risk aversion exhibit large increases after the crisis. These increases cannot be explained by changes in wealth, in habits or in background risk. More importantly, the increase is present even among individuals who did not hold any risky assets and thus did not incur financial losses in the crisis. To explore the possibility that fear might have driven the changes in risk aversion, the authors conduct a lab experiment in which a random sample of students is ‘treated’ with a very scary movie. They find that ‘treated’ students exhibit a significant increase in risk aversion, of similar size to the one observed in the survey data. The prediction of a fear-based model – that individuals will sell stocks following a sharp drop in their prices – is confirmed by the actual trading data.
WP 21/13
Luigi Guiso, together with Helios Herrera and Massimo Morelli, in “A Cultural Clash View of the EU Crisis”, propose an unconventional interpretation of the Greek and European sovereign debt crises. First, they document a significant cultural difference between Germany and Greece in terms of norms and beliefs deeply rooted in their populations. As German and Greek political leaders are bound to choose strategies that do not violate these norms, they were prevented from adopting actions that would have contained the crisis (e.g., offering timely help to Greece in October 2009 would not have been accepted by Germans, who are eager to “punish wrongdoers”). According to this interpretation, a cultural clash is at the basis of the unsatisfactory political management of the Greek crisis and of its subsequent spread to other European countries. The authors also show that the creation of a fiscal union in Europe could help overcome these problems and that, perhaps counter-intuitively, its desirability increases with cultural diversity.
WP 20/13
William Zame, together with Mihaela van der Schaar and Yuanzhang Xiao, in “Designing Efficient Resource Sharing For Impatient Players Using Limited Monitoring”, address the issue of how resources can be shared efficiently. Except for pure public goods, each agent’s use of a resource detracts from other agents’ use, and this negative externality can be so strong as to prevent efficient sharing in the short run. The authors examine the issue in the framework of repeated games with imperfect public monitoring, but model the information structure as arising from the behaviour of a strategic designer who balances the benefits and costs of more accurate observations and reports. They show that in this setup the impossibility of efficient sharing in the short run actually enhances the possibility of efficient sharing in the long run.
WP 19/13
William Zame, together with John Geanakoplos, in “Collateral Equilibrium: A Basic Framework”, argue that the reliance on collateral to secure loans can have profound effects on prices, on allocations, on the structure of financial institutions, and especially on the efficiency of market outcomes. They offer an extension of intertemporal general equilibrium theory that incorporates durable goods, collateral and the possibility of default (which can actually occur in equilibrium). They show that credit constraints are the distinguishing characteristic of any equilibrium with collateral; the need to put up collateral affects the prices of securities, the set of securities that are traded, and consumption. The collateral equilibrium can be inefficient. These findings provide useful insights into the functioning of housing and mortgage markets, including the sub-prime mortgage market in the recent financial crisis.
WP 18/13
In “Small and Large Price Changes and the Propagation of Monetary Shocks” Francesco Lippi, together with Fernando Alvarez and Hervé Le Bihan, use a large dataset of price records underlying the French CPI to identify the main patterns in firms’ price-setting behavior. After correcting for measurement errors and cross-section heterogeneity, the authors find that the size distribution of price changes has substantial excess kurtosis, with a shape lying between a Normal and a Laplace distribution. The authors propose a menu-cost model capable of reproducing the observed empirical patterns. The model, which features multiproduct firms and randomness in menu costs, has only four parameters, two of which are pinned down by the average frequency and the standard deviation of price changes. The model is used to solve analytically for the impulse responses of the aggregate economy to a once-and-for-all unexpected monetary shock. Fixing the average frequency and the standard deviation of price adjustments, the real effects of monetary policy are shown to be an increasing function of only one variable, namely the kurtosis of price changes. The relation between kurtosis and the fundamental parameters is characterized.
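To fix ideas, a sufficient-statistic relation of the kind described can be written as (the proportionality form below illustrates the shape of the result; the exact constant and normalization are not reproduced from the paper)

\[
\mathcal{M}(\delta) \;\propto\; \delta \,\frac{\mathrm{Kur}(\Delta p_i)}{N(\Delta p_i)},
\]

where \(\mathcal{M}\) is the cumulative output response to a small monetary shock \(\delta\), \(\mathrm{Kur}(\Delta p_i)\) is the kurtosis of the size distribution of price changes and \(N(\Delta p_i)\) their average frequency: holding frequency and dispersion fixed, fatter-tailed price changes imply larger real effects.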
WP 17/13
Are institutional changes that induce fiscal responsibility always effective in reducing inefficient spending? In “Spending Biased Legislators: Discipline Through Disagreement” Facundo Piguillem, together with Alessandro Riboni, show that when politicians are tempted to overspend (or to procrastinate over spending cuts) and policies are decided through legislative bargaining, disagreement among legislators leads to policy persistence, which attenuates the temptation to overspend. A general lesson of the paper is that institutional changes matter, but their effects are heterogeneous and depend on characteristics of the economy. In particular, economies with weak institutions and few constraints on politicians may find themselves trapped in an equilibrium with high spending: in this situation, the introduction of institutional changes that induce some degree of fiscal responsibility can lead to substantial cuts in inefficient spending. By contrast, in economies with stronger institutions the very same measure would induce some politicians to free-ride on others’ responsibility and may lead to more inefficient spending.
WP 16/13
The willingness of bystanders to punish transgressions committed against others is crucial for sustaining cooperative behavior. Still, little is known about its determinants. In “Social Identity and Punishment”, Jeffrey V. Butler, together with Pierluigi Conzo and Martin A. Leroch, use laboratory experiments to investigate the extent to which shared social identity affects bystanders’ punishment preferences. The authors induce artificial group identity and have subjects play a one-shot dictator game with third-party punishment. They implement a novel punishment mechanism which isolates bystanders’ preferences for justice while ruling out confounding factors. Their findings suggest that social identity matters for punishment preferences: introducing artificial group divisions significantly increases the willingness to punish; moreover, punishment is valued most when an out-group member treats an in-group member unfairly, and least when both the perpetrator and the victim belong to the same group as the bystander.
WP 15/13
In most online systems, individuals who want a file (or data, or a service) have to interact with agents who can provide it. The former obtain a benefit but the latter bear an often non-trivial cost, so providers have an incentive to withhold the service. If benefits exceed costs, the provision of online services increases social welfare and should therefore be encouraged – but how? Bill Zame, together with Jie Xu and Mihaela van der Schaar, in “Efficient Online Exchange via Fiat Money”, address this issue and propose a simple method to incentivize online trade through the use of (electronic) tokens. The setup is very similar, in spirit, to the familiar search models of money. Here the (benevolent) designer has to choose the quantity of tokens in the system and to establish a set of rules under which online services should be requested and provided. The quantity of tokens and the recommended rules constitute a protocol. The authors provide an effective procedure for designing a “good” (if not optimal) protocol and show that a great deal of efficiency may be lost if the “wrong” protocol is chosen.
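As a toy illustration of what such a protocol involves, the sketch below simulates a hypothetical rule of the form “serve any requester who pays a token” under different token supplies (the matching rule, parameter names and numbers are illustrative assumptions, not the authors’ design):

    # Hypothetical toy token protocol: a designer injects a fixed token
    # supply; each period a random requester asks a random provider for
    # service, and the recommended rule is "serve whenever a token is paid".
    import random

    def simulate(num_agents=50, token_supply=25, periods=100_000, seed=1):
        rng = random.Random(seed)
        tokens = [0] * num_agents
        for _ in range(token_supply):
            tokens[rng.randrange(num_agents)] += 1   # initial token injection
        trades = 0
        for _ in range(periods):
            requester, provider = rng.sample(range(num_agents), 2)
            if tokens[requester] > 0:                # requester can pay
                tokens[requester] -= 1
                tokens[provider] += 1                # provider serves, earns a token
                trades += 1
        return trades / periods                      # realized service frequency

    for supply in (5, 25, 100):
        print(supply, simulate(token_supply=supply))

In this toy only token scarcity limits trade; in the paper’s richer setting an over-abundant token supply is also harmful, since tokens that are too easy to obtain destroy providers’ incentives to earn them, which is why the designer’s joint choice of token quantity and recommended rules matters.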
WP 14/13
The famous “Lucas model” is central to most theoretical models in macroeconomics and finance. Does it work in practice? Bill Zame, together with Elena Asparouhova, Peter Bossaerts and Nilanjan Roy, in “‘Lucas’ in The Laboratory”, examine how the Lucas model works in the laboratory. Their experimental evidence provides broad support for the qualitative predictions of the model: prices move with fundamentals, agents trade assets to smooth consumption, and risky assets yield a substantial premium over riskless assets. However, sharp differences emerge on the quantitative predictions: asset prices display excess volatility and standard tests reject the stochastic Euler equations. The quantitative deviations from the predictions of the Lucas model seem to arise because agents in the lab do not perfectly forecast future prices, as the Lucas model requires. If perfect forecasting is difficult to achieve in the laboratory environment, it is hardly easier in the real world. Hence, forecast errors are likely to have big effects in the real world.
WP 13/13
Franco Peracchi, together with Francesco Bartolucci and Valentino Dardanoni, in “Ranking scientific journals via latent class models for polytomous item response data”, propose a strategy for ranking scientific journals starting from a set of available quantitative indicators that represent imperfect measures of the unobservable “value” of the journals of interest. After discretizing the available indicators, the authors estimate a latent class model for polytomous item response data and use the estimated model to classify each journal. They apply this approach to data from the Research Evaluation Exercise carried out in Italy for the period 2004-10, focusing on the sub-area of Statistics and Financial Mathematics. Starting from four quantitative indicators of the journals’ scientific value (IF, IF5, AIS, h-index), they derive a complete ordering of the journals according to their latent value, and show that their strategy is simple to implement and that the resulting ranking is robust to different discretization rules.
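The discretization step can be pictured as follows (a minimal sketch with made-up indicator values; the actual discretization rules and the latent class estimation are those examined in the paper):

    # Sketch of the first step: turn continuous journal indicators into
    # ordered polytomous items, here by quartile classes (made-up values).
    import pandas as pd

    journals = pd.DataFrame(
        {"IF": [0.5, 1.2, 3.4, 0.9, 2.1],
         "IF5": [0.6, 1.5, 3.9, 1.1, 2.4],
         "AIS": [0.2, 0.8, 2.1, 0.5, 1.3],
         "h_index": [10, 25, 60, 18, 35]},
        index=["J1", "J2", "J3", "J4", "J5"],
    )

    # Each column becomes an ordered item with classes 0..3; these are the
    # polytomous responses fed into the latent class model.
    items = journals.apply(lambda c: pd.qcut(c, q=4, labels=False, duplicates="drop"))
    print(items)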
WP 12/13
Franco Peracchi, together with Francesco Bartolucci and Federico Belotti, in “Testing for time-invariant unobserved heterogeneity in generalized linear models for panel data”, propose a computationally convenient test for the null hypothesis of time-invariant individual effects in generalized linear models for panel data, a wide class of models that includes the Gaussian linear model and a variety of nonlinear models typically employed for discrete or categorical outcomes. The basic idea of the test is to compare fixed-effects estimators defined as the maximizers of the full and pairwise conditional likelihood functions. This approach requires no assumptions on the distribution of the individual effects and, most importantly, it does not require them to be independent of the covariates in the model. The finite-sample properties of the test are illustrated through a set of Monte Carlo experiments. The results show that the test performs quite well, with small size distortions and good power properties. An example based on data from the Health and Retirement Study is used to illustrate the test.
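A comparison of this kind is naturally cast as a Hausman-type statistic; the generic form below is a standard construction offered for illustration, not the paper’s exact formula:

\[
H \;=\; \big(\hat\beta_{\mathrm{full}}-\hat\beta_{\mathrm{pw}}\big)'\,\widehat V^{-1}\big(\hat\beta_{\mathrm{full}}-\hat\beta_{\mathrm{pw}}\big) \;\xrightarrow{d}\; \chi^2_k \quad\text{under } H_0,
\]

where \(\hat\beta_{\mathrm{full}}\) and \(\hat\beta_{\mathrm{pw}}\) maximize the full and pairwise conditional likelihoods, \(\widehat V\) estimates the variance of their difference, and \(H_0\) is time-invariance of the individual effects. Under the null both estimators are consistent for the same parameter, so a large discrepancy between them signals time-varying unobserved heterogeneity.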
WP 11/13
Franco Peracchi, together with Valentino Dardanoni, Giuseppe De Luca and Salvatore Modica, in “Bayesian model averaging for generalized linear models with missing covariates”, address the problem of estimating a broad class of nonlinear models (generalized linear models, or GLMs) when the outcome of interest is always observed, the values of some covariates are missing for some observations, but imputations are available to fill in the missing values. This situation is becoming quite common, as public-use data files increasingly include imputations of key variables affected by missing-data problems, and specialized software for carrying out imputations directly is also becoming available. Under certain conditions on the missing-data mechanism and the imputation model, this situation generates a trade-off between bias and precision in the estimation of the parameters of interest. Following the generalized missing-indicator approach originally proposed by Dardanoni et al. (2011) for linear regression models, the authors characterize this bias-precision trade-off in terms of model uncertainty, so that the problem can be handled through Bayesian model averaging (BMA). In addition to applying the generalized missing-indicator method to the wider class of GLMs, two extensions are proposed: first, a block-BMA strategy that incorporates information on the available missing-data patterns and has the advantage of being computationally simple; second, the observed outcome is allowed to be multivariate, thus covering the case of seemingly unrelated regression models and of ordered, multinomial or conditional logit and probit models. The proposed approach is then illustrated through an empirical application using the first wave of the Survey of Health, Ageing and Retirement in Europe (SHARE).
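In generic BMA notation (standard formulas, not specific to this paper), the posterior for the parameter of interest \(\beta\) averages over the candidate models \(m\), here indexed by which imputations are used:

\[
p(\beta \mid D) \;=\; \sum_{m \in \mathcal{M}} p(\beta \mid m, D)\, p(m \mid D),
\qquad
p(m \mid D) \;=\; \frac{p(D \mid m)\,p(m)}{\sum_{m'} p(D \mid m')\,p(m')} .
\]

Models that exploit the imputations gain precision but risk bias, and the posterior model probabilities \(p(m \mid D)\) resolve the bias-precision trade-off in a data-dependent way.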
WP 10/13
In “Monetary Shocks with Observation and Menu Costs” Francesco Lippi and Luigi Paciello, together with Fernando Alvarez, compute the impulse response function of output to a monetary shock in a general equilibrium model in which firms set prices under two frictions: a standard fixed cost of adjusting the price (menu cost) and a fixed cost of observing the state of nature (observation cost). They then analyze how the effects on output depend on these costs. First, they find that the larger the observation cost relative to the menu cost, the larger and more persistent the output response to a monetary shock. Second, over a wide range of values for observation and menu costs, the shape of the impulse response function resembles that of a model with observation costs only and is flatter than the impulse response function of a model with menu costs only. Finally, they show that, for monetary shocks of moderate size, the assumptions about the information structure (i.e. perfectly observed vs. unobserved monetary shocks) have negligible consequences for the model’s predictions about the output effects of such shocks.
WP 09/13
When and why does finance cease to be the “lifeblood” of the real economy and turn into a “toxin”? In “Finance: Economic Lifeblood or Toxin?”, Marco Pagano argues that this metamorphosis occurs when finance grows past the funding needs of the real sector. At that point it stops contributing to economic growth and comes to threaten the solvency of banks and systemic stability. In principle, regulation should be designed so as to contain financial development within bounds that keep its benefits positive. However, the author argues that, because of the vast political consensus created by economy-wide asset bubbles, effective supervisory action to counteract them is often prevented.
WP 08/13
In “E-commerce as a Stockpiling Technology: Implications for Consumer Savings” Andrea Pozzi documents a previously neglected benefit of the introduction of e-commerce. Since online orders are generally home-delivered, shopping on the Internet spares customers the discomfort of carrying around heavy and bulky baskets of goods. This makes e-commerce a technology well suited to helping consumers buy in bulk or stockpile items on discount. Exploiting scanner data provided by a supermarket chain selling groceries both online and through traditional stores, he shows that the introduction of e-commerce leads to an increase in bulk purchasing and stockpiling behavior by customers. Since bulk and discounted items sell at a lower price per unit, this allows consumers to obtain substantial savings.
WP 07/13
In “The Demand for Liquid Assets with Uncertain Lumpy Expenditures” Francesco Lippi, together with Fernando Alvarez, analyze how unexpected large expenditures, such as households’ purchases of durable goods, affect the management of liquid assets in the context of inventory-theoretic models. The paper shows that lumpy purchases create the possibility that liquidity is withdrawn and spent immediately, thus changing the relationship between the size of liquidity withdrawals and average liquidity holdings relative to canonical models. Using two novel datasets, the authors summarize the main patterns in the data on households’ currency management in Austria and on the management of demand deposits by a large sample of Italian investors, and show that their model can explain some empirical regularities that traditional models cannot account for.
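The mechanism can be seen against the canonical inventory (Baumol-Tobin) benchmark, in which cash is withdrawn in lumps of size \(W\) and spent smoothly, so that

\[
\bar M \;=\; \frac{W}{2}
\qquad\Longrightarrow\qquad
\frac{W}{\bar M} \;=\; 2 .
\]

When part of a withdrawal can be spent on the spot to cover a lumpy purchase, average holdings \(\bar M\) no longer track half the withdrawal size, so the ratio \(W/\bar M\) departs from the canonical value of 2; deviations of this kind are what the two datasets allow the authors to measure. (The benchmark formula is the textbook one, stated here for orientation rather than taken from the paper.)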
WP 06/13
Would citizens coordinate to punish a government when they observe suspicious behaviour? In “Coordination, Efficiency and Policy Discretion” Facundo Piguillem, together with Anderson Schneider, show that under some circumstances such coordination is impossible. The authors set up a model with incomplete information in which the fundamental (aggregate productivity) is stochastic and observed only by the government, while every private agent receives a noisy signal about it. In this environment coordination among private agents is harder to achieve, and punishing the government when it deviates from optimal policies may become impossible. The results of the paper support the arguments in favor of strong institutions that tie the hands of policymakers, since endowing governments with full discretion while imposing the right incentives to avoid deviations from optimal policies could be too costly. At the same time, some doubt is cast on policy prescriptions arising from models with complete information, as it is shown that even arbitrarily small departures from the complete-information assumption invalidate the results.
WP 05/13
In “Inequality and Relative Ability Beliefs” Jeffrey V. Butler documents a novel channel through which inequality persists. In a sequence of experiments it is shown that i) individuals respond to salient (earnings) inequality by adjusting their performance beliefs to justify the inequality; ii) it is beliefs about relative ability – an ostensibly stable trait – rather than effort provision that are affected; and iii) unequal pay on an initial task affects willingness to compete on a subsequent task. Taken together, the results provide evidence for a novel mechanism perpetuating inequality: initial inequality colors beliefs about one’s own ability relative to others, lowering the ex-ante expected return to courses of action that require, at some point, ability-based competition. As the latter is a feature of many paths to upward mobility, initial inequality may become persistent inequality.
WP 04/13
Cross-country differences in firm growth are typically analysed under the closed-economy assumption, in spite of strong empirical evidence of links between firm size and international trade. In “Barriers to Firm Growth in Open Economies” Facundo Piguillem (with Loris Rubini) develops a tractable, dynamic, open-economy framework where the firm size distribution is endogenous and is affected by both innovation costs and trade barriers. The authors show that neglecting the latter biases the estimates of innovation costs downwards; as this bias differs widely across countries, the true ranking of innovation costs is altered; moreover, the predicted effects of changing innovation costs on welfare and aggregate productivity are biased upwards. The open-economy model, calibrated to a set of European countries, successfully captures between 54 and 87 per cent of the differences in value added per worker across countries, while the closed-economy model over-predicts the effects of innovation costs on welfare by between 31 and 64 per cent relative to the open-economy model.
WP 03/13
Banks and financial institutions are blamed for compensation packages that reward managers generously for making investments with high returns in the short run but large “tail risks” that emerge only in the long run. As governments have been forced to rescue failing financial institutions, politicians and the media have stressed the need to cut executive pay packages, making them more dependent on long-term performance. Whether this is the right policy response crucially depends on the root of the problem. In “Seeking Alpha: Excess Risk Taking and Competition for Managerial Talent” Marco Pagano (with Viral Acharya and Paolo Volpin) presents a model where the root of the problem is the difficulty of rewarding managerial talent when projects can carry tail risk, as is typically the case in the financial sector, and the market allows executives to move from firm to firm before that risk materializes. The paper shows that managers who take tail risks while moving rapidly between firms raise their short-term performance and pay while reducing their accountability for failures. In this situation managerial talent – the ability to generate high returns without incurring high risks – can be identified by firms only in the long run, so that the efficient allocation of managers to projects cannot be achieved and too many projects fail; at the same time, managers’ pay is not commensurate with their actual performance. In this setting, efficiency can be improved by measures that discourage managerial mobility (e.g., taxing managers who switch jobs at a higher rate than loyal ones) or by capping the pay of top financial managers.
WP 02/13
Francesco Lippi (with Fernando Alvarez) in “Price Setting with Menu Cost for Multi-product Firms” develops an analytically tractable model of the optimal price-setting decisions of a firm facing a fixed cost of price adjustment common to all the goods it produces. The authors solve the firm’s decision problem, derive the steady-state predictions for a cross-section of firms and study the response of the aggregate economy to a monetary shock. While the previous literature has often resorted to numerical methods, the novel contribution of the paper is an approximate analytical solution to the general equilibrium of an economy where firms face a multidimensional and non-convex control problem. Two sets of results are worth mentioning. First, the model substantially improves the ability of state-of-the-art menu cost models to account for observed price-setting behavior: the data display a large mass of small price changes, so that the size distribution of price changes appears bell-shaped, which is what the model produces provided the number of goods produced by each firm is larger than six. Second, as regards the response of the economy to a monetary shock, the size of the output response and its duration increase with the number of products; indeed they more than double as the number of products goes from one to ten, quickly converging to the standard results of Taylor’s staggered-price model.
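A minimal simulation conveys the first result (barrier, volatility and horizon below are illustrative choices, not the paper’s calibration): each firm lets its n price gaps drift as random walks and pays the single menu cost, resetting all gaps, once the squared norm of the gap vector hits a barrier.

    # Sketch: with one good, every price change is "large" (the gap itself
    # must hit the barrier); with many goods, a good with a small gap gets
    # repriced whenever the NORM of the whole gap vector hits the barrier,
    # producing many small changes and a bell-shaped size distribution.
    import numpy as np

    def price_changes(n_goods, barrier=0.25, sigma=0.02, periods=200_000, seed=0):
        rng = np.random.default_rng(seed)
        gaps = np.zeros(n_goods)
        changes = []
        for _ in range(periods):
            gaps += sigma * rng.standard_normal(n_goods)
            if gaps @ gaps >= barrier:      # one fixed cost covers ALL goods
                changes.extend(-gaps)       # each price change closes its gap
                gaps[:] = 0.0
        return np.asarray(changes)

    def kurtosis(x):
        x = x - x.mean()
        return (x ** 4).mean() / (x ** 2).mean() ** 2

    for n in (1, 2, 6, 10):
        print(n, round(kurtosis(price_changes(n)), 2))

As the number of goods grows, the distribution of individual price changes fills in around zero and its kurtosis rises from the two-point case toward that of a bell-shaped density, which is the comparative static behind the “larger than six” threshold reported above.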
WP 01/13
What shapes individuals’ attitudes toward risk and uncertainty? This question is of fundamental importance to economists. Prior research shows that those who more readily rely on their intuition when making decisions are also significantly more tolerant of risk and ambiguity. Still, the direction of causation in these findings is debatable: are more uncertainty-tolerant individuals more likely to rely on their intuition, or does reliance on intuition reduce aversion to risk and ambiguity? Jeffrey Butler and Luigi Guiso (with Tullio Jappelli) in “Manipulating Reliance on Intuition Reduces Risk and Ambiguity Aversion” provide the first experimental evidence directly addressing whether variation in reliance on intuition causes shifts in aversion to risk and ambiguity. In the experiment they directly manipulate participants’ predilection to rely on intuition and find that enhancing reliance on intuition lowers the probability of being ambiguity averse by 30 percentage points and increases risk tolerance by about 30 per cent among male participants.