by Cathy O’Neil @Bloomberg
For any new AI business model, O'Neil suggests asking a few questions:
- Is this AI actually feasible, both technically and in terms of the data?
- Even if it is feasible, can it achieve its stated goal, or is it just human bias dressed up in the trappings of science (a modern-day phrenology)?
Ping An's AI software is most likely not really judging whether a potential customer can be trusted, but whether that customer is likely to file claims in the future, which would very probably shut vulnerable groups out:
Most likely. The poor and downtrodden — people living precarious, overworked lives — tend to run into more problems, and hence have more insurance claims. And in China, human discrimination makes certain ethnic groups — such as Uyghurs, the Muslim minority — more likely to be poor and downtrodden (just as it does with blacks in the U.S.). So an algorithm trained to identify potential claimants would also discriminate against these people.
In a sense, though, the algorithm might still be fit for purpose — assuming its purpose is to maximize profits by avoiding expensive customers, with no constraints for fairness or long-term community health. So, moving on to the third point, is that purpose desirable? For the creators of this algorithm, maybe. They seem to be OK with discriminating against fat people in the pursuit of profit, so why not the poor and marginalized, too?
Insurance that excludes people who might need it is no longer insurance. Which answers the fourth question. So the worst part of Ping An’s terrible AI isn’t that it won’t work for its stated purpose. The real danger is that it might work too well.
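To make the proxy-discrimination mechanism in O'Neil's argument concrete, here is a minimal sketch on synthetic data. Everything in it is assumed for illustration: the variable names, group sizes, and claim rates are hypothetical, not Ping An's actual model or data. It shows how a claims-prediction model that never sees group membership can still score the protected group as higher risk, because membership correlates with an income proxy the model does see.

```python
# Minimal sketch of proxy discrimination in a claims-prediction model.
# All names and rates below are hypothetical, chosen only to illustrate
# the mechanism O'Neil describes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected group membership (never used as a model input).
group = rng.binomial(1, 0.2, n)

# Structural disadvantage: group members are more likely to be low-income.
low_income = rng.binomial(1, 0.25 + 0.35 * group)

# Precarious, overworked lives lead to more insurance claims.
filed_claim = rng.binomial(1, 0.05 + 0.20 * low_income)

# The model only sees the income proxy, not group membership.
X = low_income.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, filed_claim)

# Predicted claim risk, as an insurer might use it to screen out
# "expensive" customers.
risk = model.predict_proba(X)[:, 1]

print("mean predicted risk, protected group:", risk[group == 1].mean())
print("mean predicted risk, others:        ", risk[group == 0].mean())
# The protected group scores higher on average even though the model
# never saw group membership: it learned the discrimination baked into
# the correlated income variable.
```

The sketch's conclusion is O'Neil's: once claims correlate with structural disadvantage, a model that "works well" at avoiding claimants and one that discriminates are the same model.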