Who’s optimizing what, now?

As a horde of tired and wired academics prepares to converge on Stefan and Pramila's this evening (I feel like this has to instrument for something), I'm using what little is left of my brain to think about some of the links between the various sessions I attended today – on migration, education and institutions, and the final panel on the limits of evidence-based interventions. The papers themselves were hugely varied, with slides ranging from detailed maps of Addis Ababa to screenshots from a role-playing game a researcher created for a lab experiment. As an aside: has anyone documented the rise of pop-culture paper titles over the last decade? The biggest surprise of the week was that none of the job-search papers was called 'Searching with My Good Eye Closed'.

One thing that occurs to me: in one way or another, a number of the papers presented today examine (or inadvertently reveal) the surprising ways individuals, firms, and groups optimise (or don't) some of the most fundamental decisions they make. In the morning, Zack Barnett-Howell presented a very cool lab experiment he ran, showing how well people process information to optimise a 'migration' decision in a game he designed – doing better than the computer, which is pretty staggering. So, chalk one up for optimisation. Feed people information and watch social welfare grow, then? Not quite: the discussion in that session was rammed with people who have been doing field research or working in migration policy, and the consensus seemed to be that information alone doesn't really shift what people actually do. As Zack said, what a lab experiment can show is an underlying mechanism – not necessarily all the nuances with which it manifests in reality, particularly when we know as little as we do about how the decision to migrate is actually made in practice, and what activates it.

There was a throwaway comment in Justin Sandefur's education presentation that I found amazing in the opposite direction – he mentioned that all the various education providers he was investigating had completely different practices when it came to managing their teachers. They're all trying to optimise something (in this case the effort and impact of their teachers, subject to cost), but they've settled on totally different ways of doing it. Now it's possible that they're all optimising cleanly with totally different technologies, but it's got to be equally plausible that noise just drowns out the last bits of signal. Getting that last 10% of effort might not matter that much so long as you've got the first 90%, at least in the absence of the kind of cutthroat competition that kills off everything but the absolute best.

I'm going to torture this back around to the final panel – policy is, in many ways, not a process of optimisation subject to evidence. It's a process of occasionally making incremental improvements over the next cab off the rank. That doesn't mean that research shouldn't be about constantly looking for margins of improvement. It just means that the battles available to fight are often a small subset of the battles that matter; that the costs of getting things perfect (personally and globally) might be prohibitive, or outweigh the gains available; and that the most useful metric for success is getting better, not getting 'best'.

Now, if you’ll excuse me, I’m going to sleep for a week.
