# How to calculate accuracy in Python without sklearn

I'm trying to calculate the accuracy of a model I created using the function below:

```python
def accuracy(y_true, y_pred):
    accuracy = np.mean(y_pred == y_true)
    return accuracy
```

Sometimes it displays the accuracy correctly and sometimes it's incorrect. Can someone explain how I can fix the function so that it reports the same accuracy as sklearn's `accuracy_score`? Here's an example of the results I am getting from my method.

```
y_true
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]

y_pred
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]

KNN classification accuracy:  0.0
KNN classification accuracy sklearn:  0.9428571428571428
```
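The likely culprit: when `y_true` and `y_pred` are plain Python lists, `y_pred == y_true` compares the two lists as whole objects and yields a single `True`/`False`, so `np.mean` returns only `1.0` or `0.0` (which is why it "sometimes" looks right). Converting both to NumPy arrays makes `==` element-wise; a sketch of the fix:

```python
import numpy as np

def accuracy(y_true, y_pred):
    # np.asarray makes == element-wise; with plain lists,
    # y_pred == y_true is a single bool (whole-list equality),
    # so np.mean would return only 0.0 or 1.0
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return np.mean(y_true == y_pred)

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,
          0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(accuracy(y_true, y_pred))  # 0.9428571428571428, matching sklearn
```

The two lists disagree at exactly two of the 35 positions, so the accuracy is 33/35 ≈ 0.9429.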

Before diving into the Python code, let us understand what these measures are and how to determine them intuitively and mathematically.

Confusion Matrix: A confusion matrix is a table that is often used to describe the performance of a classification model (or "classifier") on a set of test data for which the true values are known. The confusion matrix itself is relatively simple to understand, but the related terminology can be confusing.

The following are the certain terminologies of confusion matrix:

• true positives (TP): These are cases in which we predicted yes (they have the disease), and they do have the disease.
• true negatives (TN): We predicted no, and they don’t have the disease.
• false positives (FP): We predicted yes, but they don’t actually have the disease.
• false negatives (FN): We predicted no, but they actually do have the disease.
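For binary 0/1 labels, these four counts can be computed with NumPy alone. A minimal sketch (the function name `confusion_counts` is my own):

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))  # predicted yes, actually yes
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))  # predicted no, actually no
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))  # predicted yes, actually no
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))  # predicted no, actually yes
    return tp, tn, fp, fn

print(confusion_counts([1, 1, 0, 0], [1, 0, 1, 0]))  # (1, 1, 1, 1)
```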

Recall: TP / (TP + FN). This can be explained by saying: of all the actual positive cases, how many did we predict correctly?

Precision: TP / (TP + FP). This can be explained by saying: of all the cases we predicted as positive, how many are actually positive?

F1-Score: 2 × Precision × Recall / (Precision + Recall).

It is difficult to compare two models when one has low precision and high recall, or vice versa. To make them comparable, we use the F-score, which measures recall and precision at the same time. It uses the harmonic mean in place of the arithmetic mean, punishing extreme values more.
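The three formulas above translate directly into code. A sketch that guards against division by zero (helper name `precision_recall_f1` is my own):

```python
def precision_recall_f1(tp, fp, fn):
    # Precision: of everything predicted positive, how much was right
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    # Recall: of everything actually positive, how much we caught
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1: harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

print(precision_recall_f1(1, 1, 1))  # (0.5, 0.5, 0.5)
```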

Accuracy: the number of correctly classified points divided by the total number of points.

### How does Python calculate accuracy?

Balanced accuracy is defined as:

Balanced accuracy = (Sensitivity + Specificity) / 2

For example, with sensitivity 0.75 and specificity 0.9868:

Balanced accuracy = (0.75 + 0.9868) / 2 = 0.8684
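The same computation from raw counts, where sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP); the example counts below are assumptions chosen to reproduce the rates above:

```python
def balanced_accuracy(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)   # recall on the positive class
    specificity = tn / (tn + fp)   # recall on the negative class
    return (sensitivity + specificity) / 2

# tp=3, fn=1 gives sensitivity 0.75; tn=75, fp=1 gives specificity 75/76 ≈ 0.9868
print(balanced_accuracy(3, 75, 1, 1))  # ≈ 0.8684
```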

### How do you calculate accuracy manually?

The accuracy formula gives accuracy as 100% minus the error rate, so to find accuracy we first need to calculate the error rate. The error rate is the absolute difference between the observed and the actual value, divided by the actual value, expressed as a percentage.

### How do you implement a confusion matrix in Python without Sklearn?

You can derive the confusion matrix by counting the number of instances in each combination of actual and predicted classes: extract the distinct classes with `np.unique(actual)`, initialize a square matrix of zeros, and fill each cell with the count for that (actual, predicted) pair.
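A completed sketch of that approach, with rows indexing the actual class and columns the predicted class:

```python
import numpy as np

def comp_confmat(actual, predicted):
    actual = np.asarray(actual)
    predicted = np.asarray(predicted)
    # extract the different classes
    classes = np.unique(actual)
    # initialize the confusion matrix
    confmat = np.zeros((len(classes), len(classes)), dtype=int)
    # count instances of each (actual, predicted) combination
    for i, a in enumerate(classes):
        for j, p in enumerate(classes):
            confmat[i, j] = np.sum((actual == a) & (predicted == p))
    return confmat

print(comp_confmat([0, 0, 1, 1, 2], [0, 1, 1, 1, 2]))
# [[1 1 0]
#  [0 2 0]
#  [0 0 1]]
```

Note that, like the original snippet, this takes the class set from `actual`, so a label that appears only in `predicted` would not get its own column.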

### How do you measure accuracy in machine learning Python?

In machine learning, accuracy is one of the most important performance evaluation metrics for a classification model. The mathematical formula for calculating the accuracy of a machine learning model is 1 – (Number of misclassified samples / Total number of samples).
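That formula in code (a sketch; the name mirrors sklearn's `accuracy_score` only for comparison, it is not the sklearn function):

```python
import numpy as np

def accuracy_score(y_true, y_pred):
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # 1 - (number of misclassified samples / total number of samples)
    return 1.0 - np.mean(y_true != y_pred)

print(accuracy_score([0, 1, 1, 0, 1], [0, 1, 0, 0, 1]))  # 0.8
```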
