Amit Paka
Contributor
Amit Paka is co-founder and chief product officer
at Fiddler Labs, an explainable AI startup that enables enterprises to deploy and scale risk- and bias-free AI applications.
Krishna Gade
Contributor
Krishna Gade is co-founder
and CEO at Fiddler Labs, an explainable AI startup that enables enterprises to deploy and scale risk- and bias-free AI applications.
As the world becomes more deeply connected through IoT devices and networks, consumer and business needs and expectations will soon only be sustainable through automation. Recognizing this, critical industries such as finance, retail, healthcare, transportation and manufacturing are rapidly adopting artificial intelligence and machine learning to help them compete in an always-on, on-demand global economy.
However, even as AI and ML provide significant benefits — such as increasing productivity while decreasing costs, reducing waste, improving efficiency and fostering innovation in outdated business models — there is tremendous potential for errors that produce unintended, biased outcomes and, worse, for abuse by bad actors.
The market for advanced technologies, including AI and ML, will continue its exponential growth, with market research firm IDC projecting that spending on AI systems will reach $98 billion in 2023, more than two and one-half times the $37.5 billion projected to be spent in 2019. Additionally, IDC foresees that retail and banking will drive much of this spending, as these industries invested more than $5 billion in 2019.
These findings underscore how important it is for companies that are leveraging, or plan to deploy, advanced technologies in their business operations to understand how and why those technologies make certain decisions. Moreover, a fundamental understanding of how AI and ML operate is even more crucial for conducting the proper oversight needed to minimize the risk of undesired results.
Companies often realize AI and ML performance issues only after the damage has been done. Such instances of AI driving unintentional bias include the Apple Card reportedly offering women lower credit limits and a Google AI algorithm for monitoring hate speech on social media being racially biased against African Americans.
And there have been far worse examples of AI and ML being used to spread misinformation online through deepfakes, bots and more.
Through real-time monitoring, companies will gain visibility into the "black box" to see exactly how their AI and ML models operate. In other words, explainability will enable data scientists and engineers to know what to look for (a.k.a. transparency) so they can make the right decisions (a.k.a. insight) to improve their models and reduce potential risks (a.k.a. building trust).
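
To make the idea of transparency concrete, the sketch below shows one common, generic explainability technique, permutation feature importance, applied to a hypothetical credit-decision model. The dataset, feature names and model here are illustrative assumptions for this example only; they do not represent Fiddler's product or any specific production system.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular credit data; the feature names are
# hypothetical and only used to label the output.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
feature_names = ["income", "debt_ratio", "credit_history", "age", "zip_code"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:15s} mean accuracy drop: {importance:.3f}")

Shuffling a single feature and measuring how much accuracy falls gives a rough ranking of which inputs the model depends on most; an unexpectedly influential feature (for example, one acting as a proxy for a protected attribute) is exactly the kind of signal that monitoring and explainability are meant to surface.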
But there are complex operational challenges that must first be addressed in order to achieve risk-free and reliable, or
trustworthy, outcomes.
5 key operational challenges in AI and ML models