base_score changes predictions significantly #35

Description

@27518

Hello,
Thanks for creating this useful package. The waterfall plots are quite informative and intuitive.
I found that when I varied the base_score argument of the buildExplainer function, the predicted values output by the showWaterfall function changed significantly. Concerned about accuracy, I compared them against the predicted values from the actual xgboost model; those varied too, but not nearly as much. Is this an error, or am I doing something incorrectly? Should the base_score passed to buildExplainer always match the base_score used in the actual xgboost model?

This is what I observed for a single predicted outcome (a repro sketch follows the list):

base_score = 0.5: pred = 0.48 from both the xgb predict function and the explainer waterfall function.
base_score = 0.2: pred = 0.43 from the xgb predict function, but 0.18 from the explainer waterfall function.*
base_score = 0.85: pred = 0.53 from the xgb predict function, but 0.83 from the explainer waterfall function.*
*Note: in all three examples, the xgb model passed to the explainer function had a base_score of 0.5, so in the 2nd and 3rd examples it differed from the base_score entered in the explainer.
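For reference, here is a minimal sketch of the setup described above. It assumes the classic xgboost R API and the buildExplainer()/showWaterfall() signatures from the xgboostExplainer README, and it uses synthetic data, so it is illustrative only, not the code behind the numbers above.

```r
library(xgboost)
library(xgboostExplainer)

# Purely illustrative binary-classification data
set.seed(42)
X <- matrix(rnorm(200 * 4), ncol = 4)
colnames(X) <- paste0("f", 1:4)
y <- as.numeric(X[, 1] + rnorm(200) > 0)
dtrain <- xgb.DMatrix(data = X, label = y)

# Model trained with the default base_score of 0.5
model <- xgboost(data = dtrain, nrounds = 20,
                 objective = "binary:logistic",
                 base_score = 0.5, verbose = 0)

# Explainer built with a DIFFERENT base_score (0.2), reproducing the
# mismatch from the second example above
explainer <- buildExplainer(model, dtrain, type = "binary",
                            base_score = 0.2)

# Compare the model's own prediction with the waterfall's final value
cat("xgboost predict():", predict(model, dtrain)[1], "\n")
showWaterfall(model, explainer, dtrain, X, idx = 1, type = "binary")
```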

Thanks for any suggestions.
