Although corporate spending on artificial intelligence topped $50 billion last year, only 11% of companies that have enhanced their workflows with AI report a significant return on those investments so far. In this article, we'll examine the business, technological, and ethical issues haunting AI projects, and share several tips for integrating artificial intelligence into your company's digital transformation strategy.
Hitting Technology Roadblocks
Although AI has been around since the mid-1950s, voice assistants, face swap apps, and robot dogs only became mainstream a couple of years ago. As of now, neither businesses nor their technology partners have a tried-and-true formula for creating and implementing artificial intelligence solutions. Some of the common AI pitfalls include:
- Poor architecture choices
Making accurate predictions is not the only thing you should expect from an AI system. In multi-tenant applications (think AIaaS solutions serving thousands of users), performance, scalability, and effortless management are equally important. So you cannot expect your vendor to simply write a Flask service, wrap it in a Docker container, and call your ML model deployed. That approach might work for a certain number of users; once the system hits its limits, you'll be left with an unwieldy application that is also expensive to operate.
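For illustration, here is a minimal sketch of that naive setup: a single-process Flask inference endpoint, with a stand-in threshold "model" instead of a real trained one (the route, payload shape, and model are all hypothetical):

```python
# A deliberately naive single-process inference service: the pattern the
# paragraph above warns about. The "model" is a stand-in; in practice you
# would load a trained one (e.g. from a serialized file).
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_one(features):
    """Stand-in model: flags a sample when its feature sum crosses a threshold."""
    return int(sum(features) > 10.0)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    return jsonify({"prediction": predict_one(payload["features"])})

if __name__ == "__main__":
    # One process, no load balancing, no model versioning: fine for a demo,
    # a bottleneck once thousands of tenants start sending requests.
    app.run(host="0.0.0.0", port=8080)
```

A setup like this is fine for a prototype; serving thousands of tenants calls for load balancing, autoscaling, model versioning, and monitoring layered on top.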
- Inaccurate or insufficient training data
AI-based systems are only as good as the data they've been trained on. In some cases, companies struggle to provide quality data (and a substantial volume thereof!) to train AI algorithms. The situation is not uncommon in healthcare, where patient data like X-ray images and CT scans is hard to obtain for privacy reasons. To increase the amount of training data and build a better model, it is sometimes necessary to label data manually using annotation tools like Supervise.ly. According to Gartner, data-related problems are the #1 reason why 85% of artificial intelligence projects will deliver erroneous results through 2022.
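When real samples are scarce, one cheap way to enlarge a labeled dataset is augmentation: for many vision tasks, mirroring an image yields a new, equally valid sample with the same label. A toy sketch, with images represented as nested lists of pixels (names and the flip-preserves-label assumption are illustrative):

```python
def hflip(image):
    """Mirror a 2-D image (a list of pixel rows) left to right."""
    return [row[::-1] for row in image]

def augment(samples):
    """Double a labeled dataset by adding a flipped copy of each image.

    `samples` is a list of (image, label) pairs; the flip preserves the
    label for tasks like 'scan shows an anomaly: yes/no'.
    """
    out = []
    for image, label in samples:
        out.append((image, label))
        out.append((hflip(image), label))
    return out
```

Real pipelines add rotations, crops, and noise as well, but the principle is the same: squeeze more training signal out of the data you already have.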
- Lack of AI explainability
Explainable artificial intelligence (XAI) is a concept that revolves around providing enough data to clarify how AI systems come to their decisions. Powered by white-box algorithms, XAI-compliant solutions deliver results that can be interpreted by both developers and subject matter experts. Ensuring AI explainability is critical across a variety of industries where smart systems are used. For example, a person operating injection molding machines at a plastic factory should be able to comprehend why the novel predictive maintenance system recommends running the machine in a certain way — and reverse bad decisions. Compared to black-box models like neural networks and complicated ensembles, however, white-box AI models may lack accuracy and predictive capacity, which somewhat undermines the whole notion of artificial intelligence.
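The appeal of white-box models is that every prediction can be decomposed into parts a subject matter expert can inspect. A toy linear-model sketch (the feature names, weights, and maintenance scenario are hypothetical) that returns a score together with per-feature contributions:

```python
def explain_linear(weights, bias, features, names):
    """Score a sample with a linear (white-box) model and break the score
    down into per-feature contributions a domain expert can read."""
    contributions = {n: w * x for n, w, x in zip(names, weights, features)}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical predictive-maintenance example: the operator can see that
# vibration, not temperature, is driving the "service soon" score.
score, why = explain_linear(
    weights=[0.8, 0.1],
    bias=-1.0,
    features=[2.0, 5.0],
    names=["vibration_mm_s", "temperature_c"],
)
```

A neural network would offer no such per-feature breakdown out of the box, which is exactly the accuracy-versus-interpretability trade-off the paragraph describes.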
To avoid these (and many other!) AI pitfalls, we recommend starting your artificial intelligence project with a discovery phase and creating a proof of concept.
This would allow you to map the solution requirements against your business needs, eliminate technology barriers, and plan the system architecture with the anticipated number of users in mind. It is also important to select a technology partner who knows how to overcome the data-related challenges of artificial intelligence — for instance, by reusing existing algorithms or deliberately expanding the size of a training dataset.
An AI-based breast cancer scanning system created by Google Health and Imperial College London reportedly delivers fewer false-positive results than two certified radiologists. In 2017, Oxford and Google DeepMind scientists developed a deep neural network that reads people’s lips with 93% accuracy (compared to just 52% scored by humans). And now there’s evidence that machine learning models can accurately detect COVID-19 in asymptomatic patients based on a cellphone-recorded cough! When fueled by powerful hardware and a wealth of training data, AI algorithms can perform a wide range of tasks on a par with human specialists, and even outmatch them.
The problem is that most companies fail to replicate the results achieved by Google, Microsoft, and MIT, or the accuracy displayed by their own AI prototypes, outside the laboratory walls.
The solution to this daunting AI problem partially lies in tech giants’ willingness to share complete research findings and source code with fellow scientists and AI developers. On a company level, it is crucial to analyze how smart algorithms will perform when faced with unfamiliar or poorly structured data and devise mechanisms to support the functioning of AI-powered applications under heavy load.
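One simple mechanism for the "unfamiliar data" problem is a drift check that compares live inputs against the training distribution before trusting the model's output. A minimal sketch for a single numeric feature (the three-sigma threshold and the function name are illustrative choices):

```python
from statistics import mean, stdev

def drift_alert(train_values, live_values, threshold=3.0):
    """Flag when live data drifts away from the training distribution:
    alert if the live mean sits more than `threshold` training standard
    deviations away from the training mean."""
    mu, sigma = mean(train_values), stdev(train_values)
    if sigma == 0:
        return mean(live_values) != mu
    return abs(mean(live_values) - mu) / sigma > threshold
```

Production systems track many features and use stronger statistical tests, but even a check this crude catches the case where a model trained on one data regime is silently fed another.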
According to Gartner, only 53% of AI projects make it from prototype to production, which suggests many companies still lack the technical talent, skills, and tools to implement smart systems at scale. Continuous knowledge transfer may be a viable solution to this problem. While most companies currently rely on third-party vendors to build smart systems and put them to work, forward-thinking CIOs and IT leaders should use pilot projects to transfer knowledge from external DevOps, MLOps, and DataOps specialists. This way, enterprises can build up their in-house capabilities before moving AI prototypes into production.
Back in October, MIT Sloan Management Review and Boston Consulting Group unveiled a report that sheds some light on why some companies benefit from AI (while others don’t). DHL, a postal and logistics company that delivers 1.5 billion parcels a year, is among the AI winners. The company uses a computer vision system to determine whether shipping pallets can be stacked together and optimize space in cargo planes. Gina Chung, VP of innovation at DHL, says the AI solution performed poorly in its early days. Once the system started learning from human experts who had years of experience detecting non-stackable pallets, the results improved dramatically.
If complete automation and reduction in your company’s headcount lie at the heart of your AI implementation strategy, you are likely to fail.
For one thing, algorithms need human knowledge to eventually make accurate predictions. And for another, your employees will feel more enthusiastic about teaching algorithms if you make it clear that smart machines won’t replace the human workforce in the foreseeable future.
Greater adoption of smart applications comes along with several AI ethical issues, including:
- Bias in algorithmic decision making, which stems from flawed training data prepared by human engineers and bears the mark of social and historical inequities
- Moral implications, which mainly revolve around companies’ intent to replace human workers with highly productive, always-on robots
Some AI solutions do inherit racial and gender prejudice from their creators. A facial recognition system deployed by US law enforcement agencies, for instance, is more likely to misidentify a non-white person as a criminal. However, your company can solve most of these problems by creating balanced training datasets that include images of people representing different ethnic, gender, and age groups. In fact, artificial intelligence can help us eliminate racial, gender, age, and sexual orientation bias in the long run. For example, AI-powered HR management software can scan more resumes than human specialists and identify potential candidates based solely on their education and work experience. And while some industries do register persistent changes in their workforce size due to artificial intelligence implementation, it turns out AI will actually create 3% more jobs than it kills!
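A first step toward balanced training data is simply measuring each group's share of the dataset before training, so under-represented groups are visible up front. A small sketch (the group labels are hypothetical):

```python
from collections import Counter

def group_balance(labels):
    """Return each group's share of the dataset, so under-represented
    groups are easy to spot before a model is trained on the data."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}
```

If one group's share is far below its real-world prevalence, that is a signal to collect or augment more samples for it before training.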
To sum up, here's how to approach an AI implementation project:
- Engage an AI vendor with a relevant portfolio and expertise
- Work with a skilled business analyst to determine which of your processes and IT systems could benefit from AI
- Consider how ethical issues might prevent you from using AI to the fullest
- Create a proof of concept to test the solution's feasibility and work around technology-related AI pitfalls
- Devise a detailed AI project implementation map covering solution development, integration, and scaling, as well as employee onboarding
- Together with your vendor, start building your system while ensuring continuous knowledge sharing
- Do not set your hopes too high: it takes time, patience, and lots of data to build AI solutions capable of enhancing or taking over critical tasks
- Appoint subject matter experts to fine-tune AI algorithms
- Educate your employees about the importance of data-driven decision making and optimization opportunities offered by artificial intelligence
Last but not least, continue experimenting with AI — even if your pilot project does not deliver on its promise! 73% of companies that overhaul their processes based on the lessons learned from failures eventually see a sizable ROI on their artificial intelligence investments.
If you need help building, scaling, or tuning an AI solution, feel free to contact the ITRex team, and we’ll connect you with the right expert!