Use the grid_scores_ attribute:

>>> import numpy as np
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.grid_search import GridSearchCV
>>> clf = GridSearchCV(LogisticRegression(), {'C': [1, 2, 3]})
>>> clf.fit(np.random.randn(10, 4), np.random.randint(0, 2, 10))
GridSearchCV(cv=None,
       estimator=LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
           intercept_scaling=1, penalty='l2', random_state=None, tol=0.0001),
       fit_params={}, iid=True, loss_func=None, n_jobs=1,
       param_grid={'C': [1, 2, 3]}, pre_dispatch='2*n_jobs', refit=True,
       score_func=None, scoring=None, verbose=0)
>>> from pprint import pprint
>>> pprint(clf.grid_scores_)
[mean: 0.40000, std: 0.11785, params: {'C': 1},
mean: 0.40000, std: 0.11785, params: {'C': 2},
mean: 0.40000, std: 0.11785, params: {'C': 3}]
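Note that grid_scores_ was deprecated in scikit-learn 0.18 and removed in 0.20; on recent versions the same information lives in the cv_results_ dict. A minimal sketch of the modern equivalent (output omitted, since the scores depend on the random data above):

>>> from sklearn.model_selection import GridSearchCV
>>> clf = GridSearchCV(LogisticRegression(), {'C': [1, 2, 3]})
>>> clf.fit(np.random.randn(10, 4), np.random.randint(0, 2, 10))
>>> for mean, std, params in zip(clf.cv_results_['mean_test_score'],
...                              clf.cv_results_['std_test_score'],
...                              clf.cv_results_['params']):
...     print("mean: %.5f, std: %.5f, params: %r" % (mean, std, params))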
Accuracy classification score.
In multilabel classification, this function computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true.
Read more in the User Guide.
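To make the exact-match requirement concrete, here is a hand-rolled equivalent for binary label indicators (an illustrative sketch, not the library's implementation): a sample only counts as correct when its whole label row matches.

>>> import numpy as np
>>> y_true = np.array([[0, 1], [1, 1]])
>>> y_pred = np.ones((2, 2))
>>> np.mean(np.all(y_true == y_pred, axis=1))
0.5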
Parameters:

y_true : 1d array-like, or label indicator array / sparse matrix
    Ground truth (correct) labels.

y_pred : 1d array-like, or label indicator array / sparse matrix
    Predicted labels, as returned by a classifier.

normalize : bool, default=True
    If False, return the number of correctly classified samples. Otherwise, return the fraction of correctly classified samples.

sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.

Returns:

score : float
    If normalize == True, return the fraction of correctly classified samples (float), else return the number of correctly classified samples (int). The best performance is 1 with normalize == True and the number of samples with normalize == False.
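A short doctest illustrating normalize and sample_weight together (the weights are arbitrary values chosen for illustration):

>>> from sklearn.metrics import accuracy_score
>>> y_true = [0, 1, 2, 3]
>>> y_pred = [0, 2, 1, 3]
>>> accuracy_score(y_true, y_pred, sample_weight=[0.5, 0.5, 0.5, 2.0])
0.7142857142857143
>>> accuracy_score(y_true, y_pred, normalize=False, sample_weight=[0.5, 0.5, 0.5, 2.0])
2.5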
See also
balanced_accuracy_score
Compute the balanced accuracy to deal with imbalanced datasets.
jaccard_score
Compute the Jaccard similarity coefficient score.
hamming_loss
Compute the average Hamming loss or Hamming distance between two sets of samples.
zero_one_loss
Compute the zero-one classification loss. By default, the function returns the fraction of imperfectly predicted subsets.
Notes
In binary and multiclass classification, this function is equal to the deprecated jaccard_similarity_score function (the equality does not hold for its replacement, jaccard_score).
Examples
>>> from sklearn.metrics import accuracy_score
>>> y_pred = [0, 2, 1, 3]
>>> y_true = [0, 1, 2, 3]
>>> accuracy_score(y_true, y_pred)
0.5
>>> accuracy_score(y_true, y_pred, normalize=False)
2
In the multilabel case with binary label indicators:
>>> import numpy as np
>>> accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
0.5
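The complement relationship with zero_one_loss can also be checked directly (a quick sketch):

>>> from sklearn.metrics import accuracy_score, zero_one_loss
>>> y_true, y_pred = [0, 1, 2, 3], [0, 2, 1, 3]
>>> accuracy_score(y_true, y_pred) == 1 - zero_one_loss(y_true, y_pred)
True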
The following Python code examples relate to "print scores" functions collected from open-source projects.
Example 1
def printClassScores(scoreList, instScoreList, args):
    if (args.quiet):
        return
    print(args.bold + "classes          IoU      nIoU" + args.nocol)
    print("--------------------------------")
    for label in args.evalLabels:
        # id2label and getColorEntry are helpers from the surrounding
        # evaluation script (Cityscapes-style); labels flagged as
        # ignoreInEval are skipped.
        if (id2label[label].ignoreInEval):
            continue
        labelName = str(id2label[label].name)
        iouStr = getColorEntry(scoreList[labelName], args) + "{val:>5.6f}".format(val=scoreList[labelName]) + args.nocol
        niouStr = getColorEntry(instScoreList[labelName], args) + "{val:>5.6f}".format(val=instScoreList[labelName]) + args.nocol
        print("{:<14}: ".format(labelName) + iouStr + "    " + niouStr)
Example 16
def printCategoryScores(scoreDict, instScoreDict, args):
    if (args.quiet):
        return
    print(args.bold + "categories       IoU      nIoU" + args.nocol)
    print("--------------------------------")
    for categoryName in scoreDict:
        # skip categories whose labels are all ignored in evaluation
        if all(label.ignoreInEval for label in category2labels[categoryName]):
            continue
        iouStr = getColorEntry(scoreDict[categoryName], args) + "{val:>5.3f}".format(val=scoreDict[categoryName]) + args.nocol
        niouStr = getColorEntry(instScoreDict[categoryName], args) + "{val:>5.3f}".format(val=instScoreDict[categoryName]) + args.nocol
        print("{:<14}: ".format(categoryName) + iouStr + "    " + niouStr)
Example 30
def print_scores(self, pred, k=10, only_first_name=True):
    """
    Print the scores (or probabilities) for the top-k predicted classes.

    :param pred: Predicted class-labels returned from the predict() function.
    :param k: How many classes to print.
    :param only_first_name: Some class-names are lists of names,
        if you only want the first name, then set only_first_name=True.
    :return: Nothing.
    """
    # Get a sorted index for the pred-array.
    idx = pred.argsort()

    # The index is sorted lowest-to-highest values. Take the last k.
    top_k = idx[-k:]

    # Iterate the top-k classes in reversed order (i.e. highest first).
    for cls in reversed(top_k):
        # Lookup the class-name.
        name = self.name_lookup.cls_to_name(cls=cls, only_first_name=only_first_name)

        # Predicted score (or probability) for this class.
        score = pred[cls]

        # Print the score and class-name.
        print("{0:>6.2%} : {1}".format(score, name))
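The argsort-based top-k selection used above also works in isolation; a minimal sketch with made-up scores and class names:

>>> import numpy as np
>>> pred = np.array([0.05, 0.7, 0.05, 0.2])
>>> names = ["cat", "dog", "fox", "owl"]
>>> for cls in reversed(pred.argsort()[-2:]):  # top-2, highest first
...     print("{0:>6.2%} : {1}".format(pred[cls], names[cls]))
70.00% : dog
20.00% : owl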