AI systems function best with clearly defined win/loss outcomes, for example: (1) playing a game like chess or checkers: you either win or lose; (2) investing: you either make money or lose money; or (3) analyzing x-rays: you either find the fractured bone or you don’t.
For each of these applications, you can feed the specific Artificial Intelligence “Expert System” massive data sets:
1). For your chess AI system, you can have it play itself a million times, perhaps seeded with thousands of opening moves used by chess masters over the years. With each game it plays, the AI chess system gets better and better at chess. It’s all about the reps baby!!! : ) In fact, as the article below explains, Google’s famous chess program AlphaZero does not learn to play chess from humans at all – it learns to play chess from itself!
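To make the self-play idea concrete, here is a toy sketch. Chess is far too big for a few lines of code, so this uses the simple game of Nim (a pile of 10 stones, each player removes 1–3, whoever takes the last stone wins) as a hypothetical stand-in. The agent never sees a human game: it just plays itself over and over and keeps a running average of how each move turned out. All the numbers (pile size, game count, exploration rate) are illustrative assumptions, not anything from AlphaZero itself.

```python
import random

Q = {}   # (pile, action) -> average return for the player to move
N = {}   # visit counts, so each Q value is a running mean of observed outcomes

def choose(pile, eps):
    """Pick a move: mostly the best-known one, sometimes a random experiment."""
    legal = [a for a in (1, 2, 3) if a <= pile]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda a: Q.get((pile, a), 0.0))

def train(games=20000, eps=0.2, start=10):
    for _ in range(games):
        pile, history = start, []
        while pile > 0:
            a = choose(pile, eps)
            history.append((pile, a))
            pile -= a
        reward = 1.0  # the player who took the last stone won
        for pile_before, a in reversed(history):
            n = N.get((pile_before, a), 0) + 1
            N[(pile_before, a)] = n
            old = Q.get((pile_before, a), 0.0)
            Q[(pile_before, a)] = old + (reward - old) / n
            reward = -reward  # alternate perspective: the loser made the other moves

random.seed(0)
train()
best = max((1, 2, 3), key=lambda a: Q.get((10, a), 0.0))
print("learned opening move from a pile of 10:", best)
```

After enough reps, the learned opening move from a pile of 10 is to take 2 stones, leaving the opponent a multiple of 4 – the known optimal Nim strategy – and no human ever told it that.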
2). For your finance/investing AI system, you can feed it volumes of historical financial data and have the AI system validate its predictions against the actual performance of the financial markets or specific companies.
3). You can also build finance/investing AI systems designed to look for abnormalities in a financial institution’s activity to detect financial crimes (Watch out Bernie Madoff! This type of AI is coming after you!!!) or excessive and reckless financial risk – risk that borders on pure negligence driven by pure greed. This type of AI is designed to help avoid massive bank failures like those of the banking crisis of 2008-2009.
See: https://www.ibm.com/industries/banking-financial-markets/risk-compliance
4). For your x-ray AI system, you can feed it thousands of historical x-rays with a pre-determined correct analysis and measure the AI system’s ability to derive the correct analysis. The AI system will learn when it is wrong and it will learn from its failures. It will apply this cumulative knowledge when it analyzes future x-rays. With each unique x-ray it evaluates, the AI system gets better and better at evaluating x-rays.
See: https://medicalxpress.com/news/2018-11-ai-outperformed-radiologists-screening-x-rays.html
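The train-on-labeled-data, then score-on-held-out-data loop can be shown in miniature. A real radiology system would run a deep network over raw pixels; this sketch instead reduces each "x-ray" to two made-up numeric features with a known correct label, and classifies by nearest centroid. Everything here – the features, the labels, the classifier – is an illustrative assumption.

```python
# Each item: ((feature_1, feature_2), correct label from a radiologist)
train_set = [
    ((0.9, 0.8), "fracture"), ((0.8, 0.9), "fracture"), ((0.85, 0.7), "fracture"),
    ((0.1, 0.2), "normal"),   ((0.2, 0.1), "normal"),   ((0.15, 0.25), "normal"),
]
test_set = [((0.95, 0.85), "fracture"), ((0.05, 0.15), "normal"), ((0.7, 0.9), "fracture")]

def centroid(label):
    """Average feature vector of all training examples with this label."""
    pts = [x for x, y in train_set if y == label]
    return tuple(sum(c) / len(pts) for c in zip(*pts))

cents = {lab: centroid(lab) for lab in ("fracture", "normal")}

def predict(x):
    """Classify a new x-ray by whichever centroid it sits closest to."""
    return min(cents, key=lambda lab: sum((a - b) ** 2 for a, b in zip(x, cents[lab])))

correct = sum(predict(x) == y for x, y in test_set)
print(f"accuracy on held-out x-rays: {correct}/{len(test_set)}")
```

The held-out test set is the key: the system is graded on x-rays it never trained on, which is exactly how you measure whether it is really getting better.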
Remember, we can also cross-check AI systems: we can have one AI system evaluate another AI system.
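The cross-check idea can be sketched as two independent models reviewing the same cases and escalating any disagreement. Both "models" below are hypothetical stand-ins (simple thresholds on a single score), chosen only to show the structure of the pattern.

```python
def model_a(score):
    """Stand-in primary classifier (hypothetical)."""
    return "fracture" if score > 0.5 else "normal"

def model_b(score):
    """Stand-in auditor model with a slightly different threshold (hypothetical)."""
    return "fracture" if score > 0.6 else "normal"

cases = [0.2, 0.55, 0.9, 0.48]
# Escalate only the cases where the two AI systems disagree
flagged = [x for x in cases if model_a(x) != model_b(x)]
print("cases needing human review:", flagged)  # prints: cases needing human review: [0.55]
```

When the two systems agree, you gain confidence; when they disagree, you have automatically found the hard cases worth a human's time.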
Finally, remember the two error types in AI development: (1) false positives – these are actually not a big problem for AI development, because a case flagged by mistake can always be double-checked by a human, and (2) false negatives – these ARE a PROBLEM for AI development, because a missed case goes uncaught.
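Counting the two error types is straightforward once you have predictions and ground truth side by side. The labels below are made up for illustration, using the x-ray example: a false positive flags a healthy x-ray (a human double-checks it), while a false negative misses a real fracture (the patient goes untreated).

```python
# Ground-truth labels vs. the AI system's predictions (illustrative data)
truth = ["fracture", "normal", "normal", "fracture", "normal", "fracture"]
preds = ["fracture", "fracture", "normal", "normal", "normal", "fracture"]

# False positive: predicted fracture, but the x-ray was actually normal
false_pos = sum(p == "fracture" and t == "normal" for p, t in zip(preds, truth))
# False negative: predicted normal, but there really was a fracture
false_neg = sum(p == "normal" and t == "fracture" for p, t in zip(preds, truth))

print("false positives:", false_pos)
print("false negatives:", false_neg)
```

Tracking these two counts separately, rather than a single accuracy number, is what lets you see whether your system fails in the cheap way or the dangerous way.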