Hacker News | anonvenger's comments

Her technical work and credentials are solid. Working on better datasets is valuable and important. What is questionable is acting as if every dataset found to have any bias along some dimension against a race or gender deemed underprivileged is a malicious crime against humanity, when those datasets contain many biases cutting both ways, when there are many alternative explanations for this state of affairs besides malicious discrimination and oppression, and when reasonable courses of action include the constructive contribution of building and using better datasets.


While there is some overlap, generally speaking Identity Politics != Ethics. She's an Identity Politics researcher, narrowly focused on race and gender topics, not a philosopher or religious figure interested in the flourishing of humanity at large. Here is an abstract from a larger work, presumably her PhD thesis: https://arxiv.org/pdf/1908.06165.pdf. Critical gender and race theory through and through.


Amen. Her paper cites dated problems in AI models. These problems have been known for at least 4-5 years, and the responsible AI community has been working on them, yet no progress is mentioned. The paper reads like an editorial rather than actual AI research.


You're trying to discredit her for... writing a well-sourced paper on race & gender discrimination in AI.


Well written, well sourced, and very narrow for someone who fashions themselves an 'ethicist'. There are more things in life beyond race and gender as seen through the prism of critical race/gender theory. I heard somewhere of this strange word 'love'; something to look into. She's young, smart, capable, and possibly well meaning in spite of her unbecoming behavior. Perhaps one day she'll grow to see the struggle common to all born of a woman.


"Who do they hire to replace her?" That is an excellent question, and a good time to remember that Identity Politics != Ethics. Most big ethical questions in AI have nothing to do with race or gender; they affect us all. Some say AI is an existential risk for Humanity at large.


Nitpick. Jesus Christ raised the bar to 'love your enemies'.

> 43 You have heard that it was said, ‘Love your neighbor and hate your enemy.’ 44 But I tell you, love your enemies and pray for those who persecute you, 45 that you may be children of your Father in heaven. He causes his sun to rise on the evil and the good, and sends rain on the righteous and the unrighteous. 46 If you love those who love you, what reward will you get? Are not even the tax collectors doing that? 47 And if you greet only your own people, what are you doing more than others? Do not even pagans do that? 48 Be perfect, therefore, as your heavenly Father is perfect.


Completely agree with you. I would normally expect the inclusion of "enemies" into "neighbours" to be implicit in light of the prelude and postlude around the parable of the Good Samaritan.


a. Yes, but you can post 'women and minorities encouraged' job ads. b. No.

People invested in identity politics are, by their own admission, interested in outcomes. In this world view, the legal system is a tool for achieving the desired outcomes. The same set of rules can be illegal if it rejects certain identities in a context, and at the same time legal if it promotes those identities in a different context. 'Heads I win, tails you lose', but dead serious.

A thread discussing the pervasive 'women and minorities encouraged' mindset in western academia:

https://academia.stackexchange.com/questions/95011/is-it-con...


From Jeff Dean's email:

> Timnit co-authored a paper with four fellow Googlers as well as some external collaborators that needed to go through our review process (as is the case with all externally submitted papers). We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

> A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues. We acknowledge that the authors were extremely disappointed with the decision that Megan and I ultimately made, especially as they’d already submitted the paper.

> Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.


> revealing the identities of every person who Megan and I had spoken to

Big red flag. Why are the identities of the reviewers important here? Did she plan to take those reviewers to the court of public opinion for a trial/exposure?

A lesson delivered in time, IMO. Those entitled people need to be called out.


That seems pretty damning to me, especially considering Timnit is not sharing what her demands were.

Google went through their standard process, and the paper was rejected.

If they have actually approved dozens of Timnit's papers before, then the likelihood that this rejection is due to discrimination seems low.

