Just like people, AI systems operating in real environments are inevitably forced to make decisions based on incomplete information. For example, a doctor does not know exactly what is happening inside a patient, and an exploration geologist does not know exactly where to look for, or how far to extend, a mineral deposit.

Minerva applies the principles of reasoning with uncertainty to the challenge of providing recommendations with the highest likelihood of success. In doing so, Minerva focuses on machine cognition systems, incorporating outputs from machine perception algorithms, such as neural networks, into its reasoning as and when appropriate.
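As a rough illustration of what reasoning with uncertainty can look like (a minimal sketch, not Minerva's actual software), the short Python example below performs a Bayesian update: a prior belief about a hypothesis is combined with the output of a hypothetical perception model, such as a neural network flagging a geochemical anomaly. All names and numbers are illustrative assumptions.

```python
# A minimal sketch of reasoning with uncertainty: folding a machine-perception
# output (a hypothetical neural-network anomaly flag) into a prior belief about
# a hypothesis, e.g. "a mineral deposit is present at this site".
# All figures below are illustrative assumptions, not Minerva's model.

def bayesian_update(prior: float,
                    likelihood_if_true: float,
                    likelihood_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / denominator

# Prior belief that a deposit is present (assumed from regional data).
prior_deposit = 0.10

# Assumed calibrated behaviour of the perception model:
# how often it flags an anomaly when a deposit is / is not actually present.
p_flag_given_deposit = 0.85      # true-positive rate (assumed)
p_flag_given_no_deposit = 0.20   # false-positive rate (assumed)

# The model has flagged an anomaly; update the belief accordingly.
posterior = bayesian_update(prior_deposit,
                            p_flag_given_deposit,
                            p_flag_given_no_deposit)
print(f"P(deposit | anomaly flagged) = {posterior:.2f}")  # ~0.32
```

Even a strong perception signal only raises the belief from 10% to roughly 32% here, which is why combining such outputs with prior knowledge, rather than acting on them directly, matters for making recommendations with the highest likelihood of success.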

Reasoning with uncertainty is explained further in the textbook on artificial intelligence co-authored by Professor David Poole, Chief Software Architect at Minerva, available at this website.
