Accuracy
How accurate is GoMicro AI? This is a question we are often asked. Every organisation has an expert who operates with a high level of knowledge. In reality, those on the floor do not have that level of knowledge or understanding.
CONSIDER THESE FACTORS
Is every QC inspector operating with the same level of expertise?
What is the level of subjectivity between QC inspectors?
Is the performance of QC inspectors consistent throughout the day?
What is the level of knowledge in assessing different types of issues?
All companies struggle with these issues. AI assessment can reduce them drastically because, technically, every assessment is done by the same algorithm. Whatever the accuracy may be, repeatability in AI applications is close to 100%.
ACCURACY IMPROVES WITH TRAINING
It is impossible to predict accuracies without building and testing applications. Once the image set is provided, we allocate a certain percentage for testing the accuracy internally before deploying the application to our clients. We are fully aware that our applications will be of no value unless they meet the accuracy of human assessors. But while companies have experts in assessment, the reality is that assessment capability, for all practical purposes, is not at the highest level but exists within a range.
Accuracies improve with the number of images used for training. Building AI applications is costly and requires investment. But once the application is deployed, there is a flow of information that can be used to improve it. As shown in the graph above, while we can reach average human accuracies relatively quickly, it takes an enormous amount of data to improve to the level of the best human assessor. Since AI never stops learning, in all likelihood it will exceed human accuracies.
THE BEST STRATEGY
The best strategy is to first build apps quickly, at a low cost, to meet the standards of the average QC personnel. Once an app is deployed, more data can be acquired without additional expense, and this data can be used to retrain the AI application until it meets and exceeds the ability of the best QC personnel.
We typically achieve about 90% accuracy in our initial build, reach 95% at the deployment stage, and exceed 98% once we have sufficient data to retrain the app after it has been put to use.
AI DEFINITION OF ACCURACY
Accuracy is not about getting everything right but about the ratio of getting things right to getting things wrong. If you are using AI to assess cancer, you cannot afford to get it wrong. If you want to be 100% sure of never missing a case, there is a simple solution: flag every result as positive. Every healthy patient flagged this way is a false positive. If you have cancer and the AI fails to detect it, that is a false negative. If you have cancer and the AI detects it correctly, that is a true positive. If you do not have cancer and the AI comes to the same conclusion, that is a true negative. All of this needs to be factored in when making judgments about accuracy. It is done through what is called a truth matrix, or confusion matrix.
A confusion matrix is a simple method used to calculate the accuracy of an AI model.
The Positive/Negative part of each label refers to the predicted outcome, while the True/False part refers to whether that prediction matches the actual outcome.
True Positive (TP) – The prediction is positive and the actual outcome is also positive (predicted as FAW, and it is a FAW in actuality).
False Positive (FP) – The prediction is positive but the actual outcome is negative (predicted as FAW, but in actuality the insect belongs to another category).
False Negative (FN) – The prediction is negative but the actual outcome is positive (predicted as another category, but in actuality the insect is a FAW).
True Negative (TN) – The prediction is negative and the actual outcome is also negative (predicted as another category, and in actuality the insect does belong to the other category).
This can be confusing at first (it is called a confusion matrix, after all) but will become clear with the example below.
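Before the worked example, the same mapping can also be written down in a few lines of code. The Python sketch below is illustrative only; the function name and label strings are our own, with FAW treated as the positive label:

```python
def classify_outcome(predicted: str, actual: str, positive: str = "FAW") -> str:
    """Sort one (predicted, actual) pair into TP, FP, FN or TN."""
    if predicted == positive and actual == positive:
        return "TP"  # predicted FAW, actually FAW
    if predicted == positive and actual != positive:
        return "FP"  # predicted FAW, actually another category
    if predicted != positive and actual == positive:
        return "FN"  # predicted another category, actually FAW
    return "TN"      # predicted another category, actually another category

print(classify_outcome("FAW", "Other"))  # -> FP
```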
Calculating the Accuracy
The accuracy of the model is calculated using the formula below:
Accuracy = (TP+TN) / (TP+TN+FP+FN)
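For illustration, the formula translates directly into Python (the function name is our own, not from any particular library):

```python
def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Overall accuracy: correct predictions over all predictions."""
    return (tp + tn) / (tp + tn + fp + fn)
```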
EXAMPLE
Let's assume that we used an AI model to predict whether insects in a sample population of 200 belong to FAW or to other categories, and that there are 100 FAW and 100 insects belonging to the other category in the sample. The first step would be to create the confusion matrix table, as below:

                    Actual: FAW (Positive)    Actual: Other (Negative)
Predicted: FAW      True Positive (TP)        False Positive (FP)
Predicted: Other    False Negative (FN)       True Negative (TN)
As shown above, the Positive label would be FAW and the Negative would be the Other category.
Let's say that after testing, the model predicted 75 FAW correctly as FAW, predicted 25 FAW incorrectly as Other, predicted 90 Other-category insects correctly as Other, and predicted 10 Other-category insects incorrectly as FAW. Now let's sort out which of these values are TP, TN, FP and FN.
Hence,
TP (predicted as FAW and is FAW in actuality) = 75
FN (predicted as other but is FAW in actuality) = 25
FP (predicted as FAW but is other in actuality) = 10
TN (predicted as other and is other in actuality) = 90
Now let's add these values to our confusion matrix, and the table would look as below:

                    Actual: FAW (Positive)    Actual: Other (Negative)
Predicted: FAW      TP = 75                   FP = 10
Predicted: Other    FN = 25                   TN = 90
With the confusion matrix filled in, we can now calculate the accuracy of the model using the formula mentioned earlier in the article:
Accuracy = (TP+TN) / (TP+TN+FP+FN) = (75+90) / (75+90+10+25) = 165/200 = 0.825, or 82.5%
Hence, the accuracy of the AI model that predicts FAW vs Other is 82.5%.
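The whole worked example can also be reproduced in code. The sketch below assumes scikit-learn is installed; the label strings and the way the sample lists are built are our own choices, made to match the counts above:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Ground truth: 100 FAW insects followed by 100 insects of another category.
y_true = ["FAW"] * 100 + ["Other"] * 100

# Predictions matching the worked example:
# 75 FAW correct, 25 FAW missed, 10 Other flagged as FAW, 90 Other correct.
y_pred = ["FAW"] * 75 + ["Other"] * 25 + ["FAW"] * 10 + ["Other"] * 90

# Rows are actual labels, columns are predicted labels.
cm = confusion_matrix(y_true, y_pred, labels=["FAW", "Other"])
tp, fn = cm[0]  # actual FAW:   75 predicted FAW, 25 predicted Other
fp, tn = cm[1]  # actual Other: 10 predicted FAW, 90 predicted Other

print(tp, fn, fp, tn)                  # 75 25 10 90
print(accuracy_score(y_true, y_pred))  # 0.825
```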
We hope this article gave you an understanding of how to calculate an AI model's accuracy, as well as the confidence to do it yourself. If you have any questions regarding this, please contact us.