Friday, August 25, 2017

Artificial Intelligence risks

As a reminder, following Statistical Ideas is easy to do, through e-mail, @salilstatistics, Facebook, and LinkedIn.  Also cited in a cover article in Inc.


We are hurtling into a world where machines decide on their own how to make our lives easier, including protecting us from others they judge to be out to do us harm.  It’s a technology industry arms race, and a heavy streak of collateral damage is left in its wake.  Sometimes it makes the news, as when a self-driving Tesla slammed into a semi-tractor trailer whose white side the computer confused with the bright sky.  Many other times it happens in less visible ways, such as when a machine scans your e-mail or social media accounts to detect suspicious behavior (here, here).  An automated system will never be perfect, and quite frankly a human-only system often has its own troubles (though at least we might sleep easier at night knowing humans are in control of their own decisions).  Having humans supervise machine learning, on the other hand, is highly risky due to moral hazard, compounded by engineers lacking sufficient training in social policy and customer service.  Lastly, from a probabilistic perspective, the error rate enlarges convexly when a major technology company’s engineer or product designer believes their sophisticated models are more accurate simply because of their proudly intricate design.  But as we’ll illuminate below, nothing could be further from the truth.
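
To make that compounding concrete, here is a minimal sketch in Python.  The 2% false-positive rate per automated check is purely hypothetical; the point is how quickly the chance of an innocent user being flagged balloons as more checks are stacked together.

```python
# Hypothetical illustration: how per-check error compounds as more
# automated checks are stacked on a user. The 2% false-positive
# rate per check is an assumption for illustration only.

per_check_fp = 0.02  # assumed false-positive rate of a single check

for n_checks in [1, 5, 10, 25, 50]:
    # P(at least one false flag) = 1 - (1 - p)^n
    p_any_flag = 1 - (1 - per_check_fp) ** n_checks
    print(f"{n_checks:3d} checks -> {p_any_flag:5.1%} chance an innocent user is flagged")
```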

There are individuals who should have their profiles demoted from the internet.  But this is a very small fraction (<1%) for whom such an extreme step is warranted.  It should always be limited to those who exhibit imminent violent behavior, and always with a proactive evaluation to reverse the demotion when the situation changes.  No one wants their children using the same system that violent thugs are using, infiltrating social spaces or leveraging the latest technology to accelerate their terrorism, hate, or violence.  There will always be a small (and genuinely diverse) scattering of people across society whom we must care for in a different way, always with the compassionate objective of wanting to bring them into the peaceful fold.  But the current mathematical models and frameworks now over-reach that vaguely-defined level of safety filtering, owing both to naïve programming judgment and to mathematical errors inherent in the theoretical set-up.


For example, we have programmers assigned to screen for potentially evil characters using very simple, linear constraints.  And speed of execution (literally) trumps accountability.  As we learned from the James Damore case, the programmers are a tight cluster of shared values, not the most universally diverse.  This common problem in many industries (here) poses multiple issues we’ll describe in a moment.  Next, the information is fed into non-linear, advanced data models that try to assess people across multiple arbitrary variables.  And the public is blind to these models’ performance statistics, yet expected to trust their conclusions!
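
As a toy illustration of that first screening stage, here is a minimal sketch in Python; every feature name, weight, and threshold below is hypothetical, standing in for the kind of simple, hand-picked linear constraints described above.

```python
# Toy sketch of a simple linear screening rule of the kind described
# above. All feature names, weights, and the cut-off are hypothetical.

def linear_screen(account: dict) -> bool:
    """Flag an account when a weighted sum of crude signals crosses
    a hand-picked cut-off that was never publicly validated."""
    score = (
        3.0 * account["flagged_keywords"]    # count of watch-list words
        + 2.0 * account["reports_received"]  # complaints from other users
        - 1.0 * account["years_active"]      # tenure lowers the score
    )
    return score > 5.0  # arbitrary threshold

# An ordinary user with one heated thread can still trip the screen:
example = {"flagged_keywords": 2, "reports_received": 1, "years_active": 2}
print(linear_screen(example))  # True: flagged despite a benign history
```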

For most people, there is not enough data across each modeling variable to render an accurate judgment.  And the variable list is incomplete when corporate-political judgments, and a staff that is a biased reflection of society (a pro-immigration case), are used to decide which variables matter, and which are inadvertently left out.  Last, any critical significance-level cut-off on a probability distribution carries marginal error; applied across the many variables of a machine-learned model, and to a population where bad actors are rare, this leads to a large amount of error in which mostly good people are categorized as bad (here, here, here).
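
The base-rate arithmetic behind that last point is worth spelling out.  A minimal sketch in Python, assuming the <1% prevalence of truly bad actors cited earlier and a hypothetical 99%-accurate classifier, shows that most flagged people would still be innocent:

```python
# Base-rate illustration: even a highly accurate classifier mostly
# flags good people when true bad actors are rare. The prevalence
# and accuracy figures below are assumptions for illustration.

prevalence = 0.005          # assume 0.5% of users are truly bad actors
sensitivity = 0.99          # P(flag | bad), hypothetical
false_positive_rate = 0.01  # P(flag | good), hypothetical

# Bayes' theorem: P(bad | flagged)
p_flag = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_bad_given_flag = sensitivity * prevalence / p_flag

print(f"P(flagged user is actually bad) = {p_bad_given_flag:.1%}")
# ~33.2%: roughly two of every three flagged users are innocent
```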

We are beyond the point where having a public e-mail account is simply a social toy (here).  It is required for a wide swath of basic civilian functions.  And yet recently we have seen many high-profile social media and e-mail accounts suddenly deactivated (with, sadly, many false positives, including my own), citing dreadful reasons at best.  And with the implicit understanding that there are many more in the pipeline, whom we never hear about or are yet to hear about.  A true tip of the iceberg that people such as Elon Musk have warned about (link): more common folks becoming victims of a zealous yet faulty dragnet, and of these types of modeling applications in other aspects of our lives.  The back-and-forth advancement and retreat (here, here, here, here, here) in this innovation process strongly evidences the faulty modeling criteria that were initially set up.  Risk-taking off the backs of billions of citizens, an increasingly unstable segment of whom are fuming at the moment.

There is no doubt that technology companies will get to a better place, just as many growth industries before them have.  Hopefully they can do so on their own, without regulation.  All it takes is much stronger and smarter leadership within the industry, a slowing down, and more mindfulness in execution and approach.  Let’s all root for their success before we are submerged in a crisis.
