In 2015, a Google Photos algorithm auto-tagged two black friends as "gorillas," a result of the program having been under-trained to recognize dark-skinned faces. That same year, a British pediatrician was denied access to the women's locker room at her gym because the software it used to manage its membership system automatically coded her title--"doctor"--as male. Around the same time, a young father weighing his two-and-a-half-year-old toddler on a smart scale was told by the accompanying app not to be discouraged by the weight gain--he could still shed those pounds!

These examples are just a glimpse of the embedded biases encoded in our technology, catalogued in Sara Wachter-Boettcher's new book, Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech. Wachter-Boettcher also chronicles more alarming instances of biased tech, like crime-prediction software that mistakenly codes black defendants as having a higher risk of committing another offense than white defendants, and design flaws in social media platforms that leave women and people of color wide open to online harassment.

Nearly all of these examples, she writes, are the result of an insular, mostly white tech industry that has built its own biases into the foundations of the technology we use and depend on.
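The locker-room story shows how little code it takes to embed a bias like that. Here is a minimal, hypothetical sketch of the kind of title-to-gender shortcut a membership system might use--the table and function names are illustrative, not the gym vendor's actual code:

```python
# Hypothetical sketch of a title-based gender shortcut, not the vendor's code.
# The bias lives in a single table entry: "Dr" is assumed to mean "male".
TITLE_TO_GENDER = {
    "Mr": "male",
    "Mrs": "female",
    "Miss": "female",
    "Ms": "female",
    "Dr": "male",  # the embedded assumption: doctors are men
}

def infer_gender(title: str) -> str:
    """Guess a member's gender from their honorific (the flawed design itself)."""
    return TITLE_TO_GENDER.get(title, "unknown")

# A female pediatrician who registered as "Dr" is coded male,
# so the access system locks her out of the women's locker room.
print(infer_gender("Dr"))  # -> "male"
```

One careless row in a lookup table is all it takes, which is exactly Wachter-Boettcher's point about who gets to write these tables.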
'A white mask worked better': why algorithms are not colour blind - Guardian
A lot of your work concerns facial recognition technology. How did you become interested in that area?

When I was a computer science undergraduate I was working on social robotics - the robots use computer vision to detect the humans they socialise with. I discovered I had a hard time being detected by the robot compared to lighter-skinned people. At the time I thought this was a one-off thing and that people would fix this.

Later I was in Hong Kong for an entrepreneur event where I tried out another social robot and ran into similar problems. I asked about the code that they used and it turned out we'd used the same open-source code for face detection - this is where I started to get a sense that unconscious bias might feed into the technology that we create. But again I assumed people would fix this.

So I was very surprised to come to the Media Lab about half a decade later as a graduate student, and run into the same problem. I found wearing a white mask worked better than using my actual face.
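The "same open-source code" detail is plausible because off-the-shelf face detection really is a few lines. A minimal sketch using OpenCV's bundled Haar-cascade detector, one of the most widely reused open-source options (that the robots in the interview used this particular library is an assumption, not something the interview states):

```python
# Minimal face-detection sketch using OpenCV's bundled Haar cascade.
# The cascade can only detect what its training photos taught it to see:
# a training set that skews light-skinned misses darker faces more often.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(image_path: str):
    image = cv2.imread(image_path)  # assumes the file exists and is readable
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# "portrait.jpg" is a placeholder path for illustration.
print(len(detect_faces("portrait.jpg")), "face(s) found")
```

Nothing in those dozen lines is malicious; the bias arrived earlier, in whoever assembled the training images.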
... an insular, mostly white tech industry that has built its own biases into the foundations of the technology we use and depend on.
There's nothing stopping her or anybody else from rewriting the code. You can do that in software.

She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre
Not a terribly complicated concept.

She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre
"What we're seeing here is a model free from human bias and presuppositions. It can learn whatever it determines is optimal, which may indeed be more nuanced that our own conceptions of the same."
If it kills off all the Republicans and Tories, I'll say "Job well done."

They tried to assimilate me. They failed.
The MD Anderson nightmare doesn't stand on its own. I regularly hear from startup founders in the AI space that their own financial services and biotech clients have had similar experiences working with IBM. The narrative isn't the product of any single malfunction, but rather the result of overhyped marketing, deficiencies in operating with deep learning and GPUs and intensive data preparation demands.
AlphaGo is a Go-playing Expert System. It can never be more than that because Neural Net technology does not allow it to be anything more. Both Hinton ("We need to start over") and LeCun ("They don't work in the real world") let the cat out of the bag on that one.
* a psychological disorder whose symptoms include selective amnesia, shallow volatile emotions, and overdramatic or attention-seeking behavior.

She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre
Learning Go for 960 hours straight up to a superb level is kinda cool, but... extremely dorky.
The annual top chess engine competition is under way. No Google or IBM products there, because finishing behind an open-source champion would be bad marketing.
I've been considering bitcoin and npm "block chain" theory and praxis as emulations of "best practices" in computing R&D. And I am not persuaded this method can or will obtain efficiencies in production or "innovation" to either human or machine benefit searches. The I/O and iteration are chiefly copies of prior "art." That is, the "system" lacks discriminating purpose.

Diversity is the key to economic and political evolution.
(I walk to my stacks, pull out Shapiro and Varian, Information Rules: A Strategic Guide to the Network Economy (1999) and Dolan and Simon, Power Pricing (1996)... grimace at Tho. Stewart, Wealth of Knowledge (2001)... return)
World of coders now equipped to transform 'barriers to entry' into moats.

Diversity is the key to economic and political evolution.
Corporations are AI entities already, Romney could say.
It's debatable whether the aims are achievable, but it's absolutely clear that the development of independent, self-enhancing intelligence - which transcends the abilities of its developers - is a core aim.
And the aim has been attained in minor ways across many fields. Google's AlphaGo developers didn't need to understand Go to master level to be able to build a system capable of beating a master.
The only difference with general AI is that the goal is to automate learning itself, not to solve specific problems in one limited domain.
Of course this requires the design of a system with discrimination and abstraction - not necessarily a purpose in human terms, but still an explicit meta-goal.
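That meta-goal can be stated in remarkably little code. A toy self-play loop in which the designer specifies only "beat the current champion" and no domain knowledge at all - the one-parameter "policy" is a deliberate oversimplification for illustration, not how AlphaGo actually works:

```python
# Toy sketch of self-play against an explicit meta-goal; not DeepMind's code.
import random

def play(skill_a: float, skill_b: float) -> str:
    """Pretend match: the stronger 'policy' wins proportionally more often."""
    return "a" if random.random() < skill_a / (skill_a + skill_b) else "b"

champion = 1.0  # one "skill" parameter standing in for millions of weights
for generation in range(1000):
    challenger = champion + random.uniform(-0.1, 0.1)  # random variation
    wins = sum(play(challenger, champion) == "a" for _ in range(100))
    if wins > 50:  # the meta-goal the designer wrote: beat the current best
        champion = challenger

print(f"champion skill after self-play: {champion:.2f}")  # well above 1.0
```

The designers never encode what a good move looks like; the selection pressure inside the loop discovers it, which is the sense in which the system transcends its developers.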
That said, all too many CS ventures, or "independent software vendors" (ISVs), share the same maintenance-of-effort dilemma. Apart from continually raising working capital (which determines marketing investment, incl. advertising and R&D), their business problem isn't the product of any single malfunction, but rather the result of overhyped marketing, deficiencies in operating with deep learning and GPUs, intensive data preparation demands, and failure to identify and satisfy unmet demand.