by Marshall Allen @ProPublica
ProPublica and NPR report on how US health insurers buy behavioral data on customers and prospective customers from data brokers without any public oversight. Beyond the privacy concerns, if such data is used as a factor in pricing health insurance, it can easily lead to price discrimination against vulnerable groups and deepen social divides.
Data scientist Cathy O’Neil said drawing conclusions about health risks on such data could lead to a bias against some poor people. It would be easy to infer they are prone to costly illnesses based on their backgrounds and living conditions, said O’Neil, author of the book “Weapons of Math Destruction,” which looked at how algorithms can increase inequality. That could lead to poor people being charged more, making it harder for them to get the care they need, she said. Employers, she said, could even decide not to hire people with data points that could indicate high medical costs in the future.
McCulley and others at LexisNexis insist the scores are only used to help patients get the care they need, not to determine how much someone would pay for their health insurance. The company cited three different federal laws that restricted it and its clients from using the scores in that way. But privacy experts said none of the laws cited by the company bar the practice. The company backed off the assertions when I pointed out that the laws did not seem to apply.
Ruane acknowledged that the information his company gathers may not be accurate for every person. “We talk to our clients and tell them to be careful about this,” he said. “Use it as a data insight. But it’s not necessarily a fact.”
In a separate conversation, a salesman from a different company joked about the potential for error. “God forbid you live on the wrong street these days,” he said. “You’re going to get lumped in with a lot of bad things.”
The author requested a copy of his own data from a broker, and the result ran to a full 182 pages:
I filled out a request on the LexisNexis website for the company to send me some of the personal information it has on me. A week later a somewhat creepy, 182-page walk down memory lane arrived in the mail.