Learning from Error

An old-school blog by Adarsh Mathew

Demographic Bias in Model Design and Big Tech

Last Modified at — Jan 16, 2020

In today’s weekly CSS Workshop, we had a duo from AirBnB presenting work on ‘Designing for Trust’ in the context of the sharing economy. The paper had an interesting design – they set up an investment game to construct their dependent variable of ‘Trust’, and used the participants’ ‘behavioural’ data logs to identify which behaviours predicted a propensity to trust among users. They had a particularly clever ‘data triangulation’ step, where they measured the validity of their trust construct against good ol’ survey data, all to see if their construct was capturing the right indicators of trust in a completely different context. I found the authors’ approach compelling because it touched on so many live issues – the insufficiency of data logs for answering meaningful questions, the balance between explanatory and predictive models, building from theory, and novel approaches to checking construct validity. We were asked not to circulate the paper, but if you get a chance to read it, please do.

The authors’ design decisions pushed me to think about a couple of issues surrounding research and Big Tech:

  1. Dealing with ‘bias’ against demographic groups, and how Big Tech chooses to address it.

  2. Designing for research vs. designing for the platform.

I meant to write more about this, but I lost steam and my notes. Leaving this up here because it still has a kernel of an idea, if not a full-fledged argument.
