No longer the exclusive domain of programmers at Google or Microsoft, artificial intelligence is becoming ever more widespread as it finds purchase across all sectors of the economy. It is no wonder, then, that AI dominates media coverage today, generating speculation among businesses, academics and the public alike. AIBE seeks to demystify AI by breaking down significant developments as they compete for our attention in a media landscape all too often inundated by hype. AIBE also analyses the ethical implications of AI to offer a better understanding of this technology. This is the first in a new weekly blog initiative; we hope our articles prove insightful, and we look forward to reading your comments.
Tackling ‘racist’ artificial intelligence requires transparency about AI training programs
Although artificial intelligence is, by definition, inhuman and rational, it too can fall prey to racial stereotyping through the data it is fed, which is inevitably colored by the programmers who collect and process it. Algorithmic bias occurs when programmers unwittingly smuggle personal biases into the design of AI, particularly in machine learning, where a system's behavior is shaped by the human-selected data on which it is trained. Entrepreneurs and academics have called for greater transparency in the design of algorithms to overcome such biases, particularly through sharing the closely guarded databases used to train AI systems such as facial recognition. Companies will doubtless be reluctant to share these databases, given how valuable such training data is and how many trade secrets it may conceal. Although serious regulation of AI remains a long way off, governments may wish to consider mandating a certain level of transparency and accountability for the large databases used to train AI. Of course, the administrative burden of such an endeavor would be enormous, to say nothing of the privacy concerns of sharing mass data and the power of the business lobby in protecting competitive interests. A more voluntary approach to regulation, such as a code of ethics adopted by businesses and practices that encourage greater transparency, particularly as a matter of public image, would therefore be more feasible. Whether, and what, companies would truly volunteer remains to be seen.
AI can now diagnose heart problems in four seconds
A trial by University College London (UCL) has found that artificial intelligence can analyze patients' heart function on an MRI with incredible speed: it takes doctors around 13 minutes to read a scan, while AI can now do it in four seconds. A review in The Lancet Digital Health further found that machines are on par with human doctors in diagnosing illnesses in a few medical fields. Although implementing such systems would take considerable time, testing and money, even small-scale deployment could reap massive rewards for an overworked and overburdened NHS, whose waiting times are a well-known problem. Researchers estimate that full adoption of AI in the reading of cardiac MRIs could save clinicians at each cardiac center 54 days per year. As is the nature of burgeoning technology, AI once again presents the double-edged sword of greater efficiency and greater human redundancy. Thankfully, given a demand for medical services that will only rise with the aging population, this is a sector where AI will likely do less damage in replacing jobs and more good in saving more lives, faster.
Written by Stephanie Sheir