Part of the week’s readings in the ‘Technologies of Race-Making’ seminar. It’s a great and especially hopeful book, deconstructing how race is itself a technological tool and how, by not addressing it head-on, we’ve embedded it in our algorithms.
Comes highly recommended by pip, who now has three copies of it. I’ll let the first line of the blurb do the talking:
“France, 1714: in a moment of desperation, a young woman makes a Faustian bargain to live forever and is cursed to be forgotten by everyone she meets.”
I stopped reading this about 70% of the way into the book, right when things were getting interesting. The series has a great and unique world setting (djinns!), but the plot, writing, and even the character development fall flat and meander. YMMV.
This fascinating (and a little morbid) paper builds on the discourse atoms model by Sanjeev Arora & team (accessible write-up here: http://anie.me/a-generative-model-of-discourse/). I want to disentangle how to track the change of themes and discourse in large text corpora.
O’Connor’s work is rekindling my interest in agent-based models, in my quest to bridge simulations and empirical network methods. This paper explores why beliefs that have been proven false still persist, and what role retractions play in the academic community. Key result from the abstract: “We also find that retractions are most successful when issued by the original source of misinformation, rather than a separate source.” Ah, the ethics of knowing when you’re wrong.
What we got wrong – Zeynep Tufekci on how the US has handled the COVID-19 pandemic
A Better Internet Requires Ending the Monopolies – Cory Doctorow on the Tech Won’t Save Us podcast