While the focus on preventing human biases from getting coded into artificial intelligence (AI) is desirable, there is also a need to develop AI that is "intelligent" about bias and context. The Indian Express reports that the reason YouTube's AI banned Agadmator, a popular chess channel on the platform, last year could be the channel's use of words like "white", "black" and "attack", which mean very different things in chess and in race relations.
As more companies warm up to AI, moderation platforms are being taught to screen for specific "cue" words to detect bias or abuse. In this case, triggered by those particular words, YouTube's AI read racism where there was none. How poorly human understanding is being translated for machines is evident not just from this case but also from Microsoft's Tay bot, which all too quickly picked up anti-Semitic and hateful content from the internet when it should have been designed to filter such content out contextually.
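The failure mode described above can be illustrated with a minimal sketch of context-blind cue-word screening. The word list, threshold and function below are purely illustrative assumptions for this article, not a description of YouTube's actual moderation system:

```python
# Illustrative sketch: a naive cue-word filter that counts flagged words
# without any notion of context. Word list and threshold are assumptions.

CUE_WORDS = {"white", "black", "attack", "threat", "capture"}

def flag_text(text: str, threshold: int = 3) -> bool:
    """Flag text if it contains at least `threshold` cue words, context-blind."""
    tokens = (t.strip(".,!?") for t in text.lower().split())
    hits = sum(1 for t in tokens if t in CUE_WORDS)
    return hits >= threshold

chess_commentary = "White launches an attack on the black king and wins a capture."
print(flag_text(chess_commentary))  # flags innocuous chess commentary
```

A filter like this cannot distinguish a chess broadcast from abusive speech, which is why purely lexical screening produces the kind of false positive the article describes; contextual models that weigh surrounding words are the usual remedy.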
While developers will need to return to the AI "drawing board" continuously, human oversight of machine learning will be important to set the context for the machines.
AI ethics is surely a minefield—business interests, as various analyses of the recent episode at Google involving the termination of two senior ethics researchers suggest, can sometimes come into conflict with the larger good. But, as research translates human understanding for machines more effectively, chances are that both Tay-style failures and, at the other extreme, YouTube's reported AI gaffe will become rarer.