If you would like to implement an AI solution in your company, it is important to know which factors determine the success of your project. AI seems to offer countless new possibilities, but it also holds many temptations and pitfalls, especially if your experience with this kind of project is limited. As a company that trains AI models and supports businesses introducing AI-powered solutions, we have four useful tips for you.
Think about where real added value begins.
There is a real risk of wanting more from AI than you actually need.
Imagine you would like to implement a tool that automates the keywording process. Instead of a fully automated solution, an assisting keywording tool supervised by your staff might already save your company time and money: employees only have to intervene when keyword results are wrong and correct them.
What is the minimum quality for AI to be applicable?
Most people expect human-level performance from AI, but in most cases this is not necessary to create added value.
Let’s look at our keywording tool example once again: an AI that is wrong 70% of the time can still be a time saver if correcting its output takes less time than creating the keywords from scratch. The accuracy you require of your AI should be based on a calculation that compares the cost of higher accuracy with the expected benefit.
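The break-even point can be sketched with a short calculation. All numbers below are illustrative assumptions, not measured values:

```python
def minutes_saved_per_image(error_rate: float,
                            manual_minutes: float,
                            review_minutes: float,
                            fix_minutes: float) -> float:
    """Expected minutes saved per image when the AI suggests keywords.

    Every image gets a quick review; only wrong suggestions need a fix.
    A positive result means the assisted workflow is faster than
    keywording manually from scratch.
    """
    assisted = review_minutes + error_rate * fix_minutes
    return manual_minutes - assisted

# Illustrative assumptions: manual keywording takes 3 minutes,
# a quick review of AI output 0.5 minutes, fixing a wrong suggestion 2 minutes.
saved = minutes_saved_per_image(error_rate=0.7,
                                manual_minutes=3.0,
                                review_minutes=0.5,
                                fix_minutes=2.0)
print(f"{saved:.2f} minutes saved per image")  # 1.10
```

Even at a 70% error rate the assisted workflow wins here, because fixing is cheaper than creating. If the numbers come out negative, a more accurate (and more expensive) model may be worth it.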
Different quality levels of AI call for different operational scenarios. The following three usage scenarios illustrate different levels of quality, and therefore of trust, in AI predictions (ordered from low to high):
- One scenario could be that the AI only suggests keywords to support users. The existing workflow is not changed and keywording remains manual, but the user can select AI-suggested keywords, which is faster than typing. The AI assists the user.
- Another scenario could be that the AI tags by default. The user’s role is to control and correct. The user assists the AI.
- It is also possible to let the AI tag by default with no control or correction by users; keywords are stored unseen. The AI replaces the user.
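The three scenarios can be sketched as operating modes of the same system. The mode names and the simple merge logic below are illustrative assumptions, not a full workflow engine:

```python
from enum import Enum, auto

class TaggingMode(Enum):
    """Three operational scenarios, ordered by the trust placed in the AI."""
    ASSIST = auto()     # AI suggests, the user decides (AI assists the user)
    REVIEW = auto()     # AI tags by default, the user corrects (user assists the AI)
    AUTOMATIC = auto()  # AI tags, results are stored unseen (AI replaces the user)

def final_keywords(mode: TaggingMode,
                   ai_suggestions: list[str],
                   user_selection: list[str]) -> list[str]:
    """Decide which keywords get stored in each scenario.

    `user_selection` stands in for whatever the reviewing user chose
    or corrected.
    """
    if mode is TaggingMode.ASSIST:
        # Only keywords the user actively picked are stored.
        return user_selection
    if mode is TaggingMode.REVIEW:
        # AI output is the default; the user's corrections win if present.
        return user_selection or ai_suggestions
    # AUTOMATIC: AI output is stored without any human check.
    return ai_suggestions
```

Moving between modes is then a configuration change rather than a rebuild, which makes it easy to raise the trust level as the model improves.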
Even the best AI produces wrong keyword suggestions. It can therefore be wise to mark automatically tagged keywords in the system as “automatically tagged”. This way you can distinguish between data created by AI and data created by humans, and apply different trust levels to your data.
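One way to carry that provenance flag through your system is a small record type. The field names and sample values below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Keyword:
    """A stored keyword plus its provenance, so trust levels can differ."""
    text: str
    source: str  # "human" or "automatically_tagged"

def trusted(keywords: list[Keyword]) -> list[str]:
    """Keep only human-created keywords for workflows that need high trust."""
    return [k.text for k in keywords if k.source == "human"]

tags = [Keyword("sunset", "automatically_tagged"),
        Keyword("beach", "human")]
print(trusted(tags))  # ['beach']
```

Downstream consumers can then decide per use case whether AI-tagged keywords are good enough or whether only human-verified ones should count.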
Quality of training data is everything.
The training data for your AI model consists of two parts: data and labels. Training data represents the complete truth from which an AI model learns, thus it determines the maximum quality of the resulting model.
While data usually already exists (for example images that need to be classified), the labels (for example keywords) are mostly missing, inconsistent, or incorrect. But labels are extremely important because they define the learning objective of an AI model. Creating labels for your model to learn from may therefore be very time-consuming, but it is essential.
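A quick audit before training can surface exactly these problems, missing and inconsistent labels. The sample records and keywords below are made up for illustration:

```python
# Each training sample pairs data (an image) with labels (keywords).
samples = [
    {"image": "img_001.jpg", "keywords": ["beach", "sunset"]},
    {"image": "img_002.jpg", "keywords": []},         # missing labels
    {"image": "img_003.jpg", "keywords": ["Beach"]},  # inconsistent casing
]

def audit(samples):
    """Report images without labels and keywords used with mixed casing."""
    missing = [s["image"] for s in samples if not s["keywords"]]
    seen = {}            # canonical form -> first spelling encountered
    inconsistent = set()
    for s in samples:
        for kw in s["keywords"]:
            canonical = kw.lower()
            if canonical in seen and seen[canonical] != kw:
                inconsistent.add(canonical)
            seen.setdefault(canonical, kw)
    return missing, sorted(inconsistent)

missing, inconsistent = audit(samples)
print(missing)       # ['img_002.jpg']
print(inconsistent)  # ['beach']
```

Cleaning up what the audit finds, before any training run, is usually far cheaper than debugging a model that learned from noisy labels.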
Always remember: good training data does not guarantee good results, but bad training data guarantees bad results.
Think big, start small.
The initial idea of your AI project should be the overall vision. Take it as a compass for your long-term goal.
To guarantee quick and satisfying results, start with a tiny project instead of an overwhelming monster project. The goal is to create a minimum viable product (MVP) that can be deployed to production.
Let’s say you want to identify the font types used for text in images, and your database has 100,000 different font types. Instead of training an AI model with all of them, it is better to take just a few hundred and train a model on those. Of course, the model then cannot recognize all font types, so you have to adjust the initial objective of the product as well (e.g. recognize the most-used font types).
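Picking the most-used font types for the first model can be as simple as a frequency count over your existing labels. The label values below are illustrative:

```python
from collections import Counter

# In practice `labels` would come from your database; these are made up.
labels = ["Helvetica", "Arial", "Helvetica", "Garamond", "Arial", "Helvetica"]

def top_classes(labels: list[str], n: int) -> list[str]:
    """Return the n most common classes to train the initial model on."""
    return [cls for cls, _ in Counter(labels).most_common(n)]

print(top_classes(labels, 2))  # ['Helvetica', 'Arial']
```

Everything outside the selected classes can be routed to an "unknown" bucket in the MVP and added back class by class in later iterations.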
Starting small makes the project more manageable, especially since you have to measure, correct, and improve the AI model over time. For example, collect the model’s weaknesses and allow user feedback. Check whether the deployed product is used as expected (and with the data you expected). Feed these insights back by retraining the model periodically with the latest data.
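Collecting user corrections as future training labels can be sketched in a few lines. All names and records below are illustrative assumptions:

```python
# Corrections made by users in production become fresh training labels.
feedback_log = []

def record_correction(image_id: str,
                      ai_keywords: list[str],
                      corrected_keywords: list[str]) -> None:
    """Log only the cases where users actually changed the AI output."""
    if ai_keywords != corrected_keywords:
        feedback_log.append({"image": image_id,
                             "old": ai_keywords,
                             "new": corrected_keywords})

def next_training_batch() -> list[tuple[str, list[str]]]:
    """Corrected examples: data plus the labels users actually wanted."""
    return [(f["image"], f["new"]) for f in feedback_log]

record_correction("img_104.jpg", ["dog"], ["cat"])
record_correction("img_105.jpg", ["tree"], ["tree"])  # unchanged, not logged
print(next_training_batch())  # [('img_104.jpg', ['cat'])]
```

Periodic retraining on these batches turns everyday corrections into a steady stream of exactly the examples the model currently gets wrong.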
And to convince not just techies but also decision makers:
A minimum viable product (MVP) increases the return on investment (ROI).