2237: AI Hiring Algorithm

This comic strip is in response to ongoing concerns over the proliferation of algorithmic systems in many areas of life that are sensitive to bias, such as hiring, loan applications, policing, and criminal sentencing.  Many of these "algorithms" are not programmed from first principles, but rather are trained on large volumes of past data (e.g., case studies of paroled criminals who did or did not re-offend, or borrowers who did or did not default on their loans), and therefore they inherit the biases that influenced that data, even if the algorithms are not told the race, age, or other protected attributes of the individuals they process.  If the algorithms are then blindly and enthusiastically applied to future cases, they may perpetuate those biases even though they are supposed (or at least reputed) to be "incapable" of being influenced by them.  For example, DeepAIHire has presumably been given information on the education and past work experience of successful employees at this company and similar companies, and will identify incoming candidates with similar backgrounds, but may not be able to recognize the possibility that a candidate with an unfamiliar or underrepresented history could be successful as well.
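
This mechanism can be illustrated with a small, self-contained sketch. Nothing below comes from the comic or from any real hiring product; the feature names, numbers, and model are made up for illustration. A toy logistic regression is trained on synthetic "past hiring" data in which the historical decisions leaned on a proxy feature (here, attending a big-name school) that correlates with a hidden group label the model is never shown; the learned scores come out lower for that group anyway.

<pre>
# Toy illustration (made-up data and names, not anything from the comic):
# a hiring model is trained only on "skill" and a proxy feature, never on
# the protected group label, yet the biased history leaks through anyway.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)            # hidden group label; 1 = underrepresented
skill = rng.normal(0.0, 1.0, n)          # genuinely job-relevant, same in both groups
# Proxy feature correlated with the hidden group (e.g. "attended big-name school").
proxy = (rng.random(n) < np.where(group == 1, 0.2, 0.7)).astype(float)

# Historical hiring decisions leaned on the proxy, not just on skill.
hired = ((1.5 * proxy + 0.5 * skill + rng.normal(0.0, 0.5, n)) > 1.0).astype(float)

# Tiny logistic regression trained by gradient descent on (skill, proxy) only.
X = np.column_stack([np.ones(n), skill, proxy])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - hired) / n

scores = 1.0 / (1.0 + np.exp(-X @ w))
print("inferred weights (intercept, skill, proxy):", np.round(w, 2))
print("mean score, majority group:        ", round(float(scores[group == 0].mean()), 3))
print("mean score, underrepresented group:", round(float(scores[group == 1].mean()), 3))
# The second number comes out lower even though skill is distributed
# identically in both groups: the old bias rode in on the proxy feature.
</pre>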
 
The comic also touches on related concerns about the "{{w|black box}}" nature of these algorithms (note that the weights presented are "inferred", i.e. nobody explicitly programmed them into DeepAIHire).  Machine learning is used to produce "good enough" classification systems that can handle vast quantities of information in a way that is more scalable than human labor; however, the tremendous volumes of data and the neural network architecture make it difficult or impossible to debug the algorithms in the way that most code is inspected.  This means that it is difficult to identify and debug edge cases until they are encountered in the wild, such as the case of image classifiers that identify [http://rocknrollnerd.github.io/ml/2015/05/27/leopard-sofa.html a leopard-spotted sofa as a leopard].  In this comic's case, the self-propagating bias of DeepAIHire went unnoticed by the humans involved in the hiring process until its activity was analyzed by the AlgoMaxAnalyzer algorithm.
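
One common way around the black-box problem, and roughly the kind of audit the comic imagines AlgoMaxAnalyzer performing, is to probe the model from the outside rather than reading its weights. The sketch below is a generic permutation-importance probe, not a depiction of any real tool: it queries a made-up stand-in for DeepAIHire, shuffles one input column at a time, and reports how much the scores move.

<pre>
# Sketch of an outside-in audit in the spirit of AlgoMaxAnalyzer (all names
# and the scoring function are hypothetical).  Without reading the model's
# weights, permutation importance shows which inputs actually drive it:
# shuffle one column at a time and measure how far the scores move.
import numpy as np

rng = np.random.default_rng(1)

def deep_ai_hire(X):
    """Opaque stand-in for the black-box scorer; pretend we can only query it."""
    return 1.0 / (1.0 + np.exp(-(0.1 * X[:, 0] + 2.0 * X[:, 1] - 1.0)))

feature_names = ["skill", "proxy (big-name school)"]
X = np.column_stack([rng.normal(0.0, 1.0, 1000),
                     rng.integers(0, 2, 1000).astype(float)])
baseline = deep_ai_hire(X)

for j, name in enumerate(feature_names):
    X_perm = X.copy()
    X_perm[:, j] = X_perm[rng.permutation(len(X_perm)), j]   # break only column j
    shift = float(np.abs(deep_ai_hire(X_perm) - baseline).mean())
    print(f"influence of {name}: {shift:.3f}")
# A large shift for the proxy column is the kind of red flag the humans in
# the comic only noticed once the second algorithm analysed the first.
</pre>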
  
 
A similar theme of AIs acting in their own interest rather than helping humans appears in [[2228: Machine Learning Captcha]].
 