Do you think AI is changing your ability to think? From applications recommending what movies to watch, what songs to listen to, what to buy, what to eat, and which ads you see, the list goes on: all of them are driven by software that learns from you or delivers information through collective intelligence (people like you, people near you, and so on).
But are you sure the right recommendation is being provided to you, or are you consuming the information as-is and adapting to it? Have you ever wondered whether you would have reached the same conclusion through your own research and reasoning?
To add to this, with information so readily available, less time and mental effort is spent on solving problems and more on searching for solutions online.
As we build ever smarter applications that keep learning everything about you, do you think this will change our thinking patterns even further?
Beyond AI systems learning about us, there are other ethical issues around trust and bias: how do you design and validate recommendation systems whose output humans can use to make unbiased decisions? I have covered this in an earlier article: https://navveenbalani.dev/index.php/articles/responsible-and-ethical-ai-building-explainable-models/
As the creators and validators of AI systems, the onus lies on us (humans) to ensure any technology is used for good.
As standards and compliance in the AI world are still evolving, we should start designing systems that let the user decide how to use an application and when to reset it.
Below, I suggest a few approaches to drive the discussion in this area, which needs contributions from everyone if we are to deliver smart and transparent AI applications in the future.
The Uber Persona Model –
All applications build some kind of semantic user profile incrementally to understand more about the user and provide recommendations. Making this transparent to the user should be the first step.
Your application can maintain several semantic user profiles: one about you, and one about your community (people similar to you, location based, and so on), along with how each has been derived over time. Finally, your application should offer a Reset Profile option that lets you wipe your profile, or a "Private AI" profile that lets you use the application without it knowing anything about you, so that you discover the required information yourself. Leaving the choice of profile to end users should lead to better control and transparency, and help users build trust in the system.
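The profile choices above could be sketched as a small profile manager. This is a minimal illustration only; the class and method names (ProfileManager, reset_profile, use_private_ai) are hypothetical, not taken from any real application:

```python
from dataclasses import dataclass, field


@dataclass
class PersonaProfile:
    """A semantic user profile built up incrementally from user activity."""
    name: str
    preferences: dict = field(default_factory=dict)


class ProfileManager:
    """Lets the user choose, reset, or bypass their semantic profiles."""

    def __init__(self):
        self.profiles = {
            "personal": PersonaProfile("personal"),    # learned about you
            "community": PersonaProfile("community"),  # people like you, location based
        }
        self.active = "personal"

    def record_signal(self, key, value):
        """Incrementally update the active profile (e.g., a watched genre)."""
        if self.active != "private":
            self.profiles[self.active].preferences[key] = value

    def reset_profile(self, name):
        """'Reset Profile': discard everything learned under this profile."""
        self.profiles[name] = PersonaProfile(name)

    def use_private_ai(self):
        """'Private AI': run with no knowledge about the user being stored."""
        self.active = "private"
```

The key design point is that learning is opt-in per profile: switching to "Private AI" makes recording a no-op, and a reset discards the accumulated preferences entirely.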
Explainability and Auditability –
Designing applications with explainability in mind should be a key design principle. When the user receives an output from an AI algorithm, the algorithm should also convey why this output was presented and how relevant it is. This would empower users to understand why particular information is being shown, and to turn on or off any preferences associated with an AI algorithm for future recommendations and suggestions.
For instance, take server auditing, where tools log every request and response, track changes in the environment, assess access controls and risk, and provide end-to-end transparency.
The same level of auditing is required when AI delivers an output: what was the input, which version of the model was used, which features were evaluated, what data was used for evaluation, what was the confidence score, what was the threshold, what output was delivered, and what was the feedback.
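The audit fields listed above could be captured in a single structured log record, written once per inference. This is a sketch under my own assumptions; the record name (InferenceAuditRecord) and field names are illustrative, not a standard:

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class InferenceAuditRecord:
    """One audit entry per AI output, mirroring server-style request logging."""
    input_payload: dict   # what was the input
    model_version: str    # which version of the model was used
    features: list        # which features were evaluated
    confidence: float     # what was the confidence score
    threshold: float      # what was the threshold
    output: str           # what output was delivered
    feedback: str = ""    # what was the user's feedback (filled in later)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_log_line(self) -> str:
        """Serialize the record as one JSON line for an append-only audit log."""
        return json.dumps(asdict(self))
```

Writing one such line per request gives the same end-to-end traceability the server-auditing analogy describes, and the feedback field can be updated once the user reacts to the recommendation.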
Gamifying the Knowledge Discovery –
With information readily available, how do you make it consumable in a way that nudges users to apply their own mental ability to find solutions, rather than handing over everything in one go? This would be particularly relevant to how education in general (especially in schools and universities) is delivered in the future.
How about a Google-like smart search engine that delivers information in a way that lets you test your skills first? As mentioned earlier in the Uber Persona Model section, the choice to switch this behavior on or off remains with the user.
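One way to sketch such a nudging search is to return a hint on the first lookup and reveal the full answer only on a repeat query, after the user has had a chance to think. The class name (NudgingSearch) and the toy query index are hypothetical, purely for illustration:

```python
class NudgingSearch:
    """Search that returns a hint first, the full answer only on retry."""

    def __init__(self, index):
        # index maps a query string to a (hint, full_answer) pair
        self.index = index
        self.attempts = {}

    def search(self, query):
        hint, answer = self.index[query]
        n = self.attempts.get(query, 0)
        self.attempts[query] = n + 1
        # First lookup: nudge the user with a hint instead of the solution.
        return hint if n == 0 else answer
```

A real system would need smarter staging (progressive hints, difficulty levels, per-user state), but the core gamification idea is just this: make the solution a second step, not the first.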
I hope this article gave you enough insight into this important area.
To conclude, I would say the only difference between AI and us in the future will be our ability to think, and to build the future we want.