Networks, gender and job referrals in Malawi


A CSAE enumerator at work in Ghana in 2008. But how did she find the job? And what would she say if we asked her to refer someone to fill a similar position?

Referrals matter

 “Another issue concerns your letters of reference. Unless some of your letters have arrived, your materials probably won’t get read. Therefore, you should tell your advisors when you will be sending out your packets…once you send your packet, their letters will be very important.”

Harvard University Economics Department, Frequently Asked Questions about the Job Market (#3)

Every economist understands the importance of job referrals. They matter for the labour markets we study. They matter for the labour markets we see all around us. And they matter — a lot! — for the labour market for academic economists (but perhaps I repeat myself).

There are many reasons that referrals might be useful for getting jobs. They may provide information about a candidate — information that is likely to be more credible than the information that candidates themselves can provide. What’s more, they may provide useful mechanisms for disciplining workers who shirk — if, for example, a worker’s referrer can be punished for a worker’s own poor performance. In short, job referrals may be a useful way for employers to overcome problems of hidden information and hidden action. But not everyone can refer you for a job. If you want to find a good job reference, you need someone (i) who knows you, and (ii) who is known and respected by your prospective employer. (Also, try not to ask your mother…sage advice, if rarely offered…) In sum — referrals matter, and referrals work through networks.

Referrals and gender in Malawi

All of which brings me to the point of this blog post: we were privileged recently to have Jeremy Magruder present a CSAE Lunchtime Seminar on his recent work with Lori Beaman and Niall Keleher on whether job networks disadvantage women in Malawi. I think this is a very interesting and novel paper, and one that — both in terms of experimental design and empirical results — has the potential to open many new avenues for thinking about job referrals.

I was fortunate to see Jeremy present an earlier paper at the Harvard Economic Development Workshop in November 2011. In that work, Jeremy and Lori ran a laboratory experiment in Kolkata. In the experiment, participants were incentivised to refer outsiders — that is, actual outsiders; their friends and contacts beyond the experimental context — to come and join in the experiment. This made for a really interesting paper — now published in the American Economic Review — that broke important new ground in learning about networks and job referrals in developing economies. I remember wondering, when I saw Jeremy present that earlier work, what would happen if the experiment were repeated with ‘real’ jobs for a ‘real’ employer. In Jeremy and Lori’s earlier work, participants were recruited to complete a cognitive puzzle, as a one-off activity. This is an important and useful context to learn about referrals. But there are many reasons that participants may behave differently when faced with the prospect of referring someone for an indefinite job with a real employer — for example, participants may frame their decisions differently if their task feels more productive, or when the net present value of employment is so much larger.

And that’s why I think Jeremy’s recent work with Lori and Niall is so interesting, and such a useful extension to the earlier results — because this is a field experiment (rather than a lab experiment) in which participants were asked to refer people for a real job. In short, the researchers set out to help Innovations for Poverty Action find new enumerators in Malawi — and, in doing so, to improve the proportion of women enumerators. The authors recruited a pool of enumerators by posting fliers at “a number of visible places in urban areas” (apparently the standard IPA-Malawi method of enumerator recruitment). Candidates were assessed on their quality as enumerators using a combination of written tests (assessing maths, English, computer skills and so on) and a practical test (in which candidates interviewed an existing IPA enumerator, who played the role of a respondent). Candidates were then invited to refer someone else for a similar job. Some candidates were invited to refer a woman, some were invited to refer a man, and some were invited to refer someone of either gender. The researchers then used a cross-cutting design, by which some candidates received a fixed fee for their referral, and others received a performance fee (paid if their referral qualified for an enumerator position). This allowed the researchers to test who gets referred, and how good those referrals are.

The results are interesting, and quite stark. As the authors put it, “most men seem to respond to an unrestricted referral situation by identifying men, while most women seem to respond to such a situation by referring unqualified people of either gender”. This harms qualified women, who are systematically disadvantaged in the referrals process. Performance pay doesn’t really change this result — if anything, performance pay encourages men to refer higher-ability men, but makes little difference to the women referred by men, or to women’s referrals in general. Of course, it’s not possible to do justice to the scope or the nuance of the authors’ results in a short blog post like this — but the key message is that job referral networks can act as a mechanism by which women are disadvantaged. I think this is a really important result, both for academic understanding of job referral networks and for effective design of quota and hiring policies.

Possible extensions

One of the strengths of this paper is that it opens several avenues for further refinements and extensions (whether in this work or in subsequent experiments). Personally, I think there are four areas in which the authors might take things further.

First, the model. When Jeremy presented the model at CSAE, he used simulated data, and showed how shifting contractual form (from fixed rate to performance pay) would induce different referrals. This was essentially an application of the Weak Axiom of Revealed Preference (‘WARP’), and I think it captured the intuition of the model very well. However, this is not the modelling approach used in the paper. In the paper, the authors essentially model a referrer as choosing his or her ‘bliss point’ of friend characteristics. I think this approach has several shortcomings, relative to the WARP approach in Jeremy’s presentation. First, it requires the authors to approximate the viable referral set as a linearly decreasing function of friend quality. The authors justify this “to make analysis tractable” — but these tractability problems would not arise if the authors used a WARP approach. Second, the authors need to treat the set of potential referrals as continuous. This is a somewhat awkward assumption, because the authors do not mean to imply that every participant has an infinite number of friends to refer; what’s more, it means that the authors can speak only about changes in a referrer’s “perfect friend” (i.e. ‘bliss point’), and formally can say nothing about how referrers make second-best choices under a finite set of friends. (For the same reason, the current approach leads to some notational difficulties — in that the authors model a referrer as maximising over friends (‘j’), but then find themselves differentiating with respect to friends’ social payments (‘alpha’).) Third, it requires a strong distributional assumption — namely, that actual performance deviates from expected performance by a normally distributed disturbance, something that cannot be true where, as here, actual performance is bounded.
In contrast, the WARP approach is ideal for this kind of problem — where an agent faces a finite set of decisions and a shifting contract price, and where researchers want to draw conclusions non-parametrically.
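To sketch the intuition (using hypothetical notation of my own, not the paper’s): suppose a referrer chooses one friend from a finite set, receiving a ‘social payment’ from the friend chosen plus the expected referral payoff. Then the simplest revealed-preference argument runs as follows.

```latex
% Hypothetical notation (mine, not the authors'): the referrer earns a
% social payment \alpha_j from naming friend j, plus the referral payoff.
% Under a fixed fee, the monetary payoff is F regardless of the friend
% chosen; under performance pay, the referrer earns w with probability
% p_j (the chance that friend j qualifies).
%
% Suppose friend j is chosen under the fixed fee, and friend k under
% performance pay. Revealed preference gives:
\begin{align*}
\alpha_j + F &\geq \alpha_k + F
  &&\Rightarrow\quad \alpha_j \geq \alpha_k, \\
\alpha_k + w\,p_k &\geq \alpha_j + w\,p_j
  &&\Rightarrow\quad p_k - p_j \;\geq\; \frac{\alpha_j - \alpha_k}{w} \;\geq\; 0.
\end{align*}
% So, with no continuity or distributional assumptions, moving from a
% fixed fee to performance pay can only (weakly) shift the referral
% toward a friend with a higher probability of qualifying.
```

This is only the simplest version of the argument, of course, but it illustrates why the WARP approach needs neither a continuum of friends nor a normality assumption.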

Second, the sample. As noted, the authors recruited their initial sample by posting fliers — that is, they took the same approach that IPA commonly uses to recruit enumerators. I think this is absolutely the correct approach for this experiment; after all, the authors were trying to help IPA improve its enumerator recruitment, so there is great value in selecting a sample using the same initial recruitment mechanism. But this choice also raises questions for future work. As Jeremy himself suggested in his CSAE presentation, it would be very interesting to see how these results generalise to female-dominated professions. The authors are rightly cautious about external validity, and I think this is an important direction for further research.

Third, participant expectations. As the authors rightly acknowledge, the interpretation of these results depends upon how participants form beliefs about the probability that each of their friends will pass the enumerator admission test. However, in this experiment, the authors did not measure participants’ expectations; expectations therefore need to be treated as a latent variable. There is some tension in taking this approach in this context — after all, the authors stress in the paper that participants had to be told clearly what the desirable characteristics of a good enumerator were, which suggests that participants may not otherwise have had a clear understanding of this (and, therefore, may have struggled to form reasonable expectations of their friends’ ability to pass the test). This is reminiscent of Charles Manski’s central complaint in his 1993 paper on ‘adolescent econometricians’:

“…it might be anticipated that economists would make substantial efforts to learn how youth form their expectations…Instead, the norm has been to make assumptions about expectations formation.”

Adeline Delavande, Xavier Gine and David McKenzie have two interesting recent papers (here and here) about methods for measuring subjective expectations in developing economies; it may be that these kinds of methods can add much to future experimental work on referrals.

Finally, what determines the “social payment”? The “social payment” refers to the utility gain that a participant receives from referring a friend for a job — and could be either monetary or non-monetary (for example, the performance of a favour in future). The authors quite rightly emphasise the centrality of this concept — but it is very difficult to know how the social payment is determined, or how it might change across treatments. For me, there are two important questions here. First, does the social payment accrue if someone is referred for a job but fails the recruitment test? The authors model the payment as accruing whether or not a friend passes (“thanks for the opportunity”), but we might imagine many reasons that this is paid only if the friend passes the test (“you wasted my time by inviting me to fail??”). Second — and perhaps more fundamental — why do we think that the social payment is invariant to the form of referral contract? I may feel very differently towards a friend who refers me for a job with a fixed referral fee (“wow — how kind of you to choose me when you could have chosen anyone!”) rather than when the friend refers me under a performance referral rate (“oh, so I’m the guy who will earn you money by passing this test for you?”). The latter concern is potentially a big problem for identification; if the social payment shifts in this way, then the distribution of friend attributes depends on the experimental treatment — and the treatment therefore cannot be used to learn about that distribution. I think it would be useful for the authors to consider this further, both in their theoretical model (for example, by microfounding the social payment), and in their empirical work (for example, by considering heterogeneity across different observable characteristics, where those characteristics might proxy for different kinds of social payment).
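To see the identification worry concretely (again, the notation here is my own, not the authors’): write the referrer’s payoff from naming a given friend as the sum of the social payment and the expected monetary payoff, and note what happens if the first term also varies with the treatment.

```latex
% Hypothetical notation (mine): under treatment T (fixed fee vs.
% performance pay), the referrer's payoff from naming friend j is
\[
U_j(T) \;=\; \alpha_j(T) \;+\; \pi(T, p_j),
\]
% where \alpha_j(T) is the social payment and \pi(T, p_j) is the
% expected monetary payoff given friend j's probability p_j of
% qualifying. The design varies \pi(\cdot) across treatments and reads
% off changes in who gets referred. But if \alpha_j(T) also moves with
% T (say, \alpha_j is lower under performance pay because the friend
% resents being a means to the referrer's bonus), then the observed
% shift in referrals confounds the two channels, and treatment
% variation no longer identifies the distribution of (\alpha_j, p_j).
```

Identification of the friend distribution thus rests on the social payment being invariant to the contract — exactly the assumption I would like to see probed further.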

Last words

In sum, this is a really exciting paper — a very useful extension of Jeremy and Lori’s earlier work in India, and one that should prompt interesting further experimental work, both in Africa and elsewhere. I’ll look forward to following this paper and the future work that I’m sure it will provoke.

Now we just need to keep our fingers crossed that we can persuade Jeremy (and Lori, and Niall) to come back and present at CSAE again sometime soon…!
