<aside> 📌 PATH: 360° Customer View → Predictive Models → Select already trained ML model

</aside>

Train your ML model

Before evaluation, you must train at least one ML model in order to see testing results.

How to Add and Train an ML Model

<aside> 📌

How are testing results generated?

Once you select the parameters for model training and provide historical data, the system automatically splits the dataset into two parts: training and testing.

The system automatically compares the actual historical targets with the model's predictions to generate testing results.

Training workflow (2).png

</aside>
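The automatic split described above can be illustrated with a minimal sketch in plain Python. The 80/20 ratio, the fixed seed, and the toy dataset are assumptions for illustration; the product may use different settings internally.

```python
import random

# Toy historical dataset: (features, churn label) pairs — illustrative only
data = [({"visits": i % 7, "spend": i * 3}, i % 2) for i in range(100)]

random.seed(42)      # fixed seed so the shuffle is reproducible
random.shuffle(data)

# Assumed 80/20 split: the first part trains the model,
# the held-out part is used to generate the testing results
split = int(len(data) * 0.8)
train_set, test_set = data[:split], data[split:]

print(len(train_set), len(test_set))  # → 80 20
```

The key point is that the testing set is never shown to the model during training, so comparing its predictions against these held-out labels gives an honest estimate of quality.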

Main quality metrics

Screenshot 2024-10-17 at 17.47.15.png

  1. Accuracy - The size of the testing sample and the percentage of predictions that were correct.
  2. Target - All classes present in the Target field.
  3. Recall - How many actual instances of each category the model correctly identified. (e.g., in our case, 1 refers to customers who actually churned, and the model correctly identified 87% of them.)
  4. Precision - Of all predictions the model made, what percentage were accurate. (e.g., 91% of customers predicted to churn (1) actually did.)
  5. F1 Score - A balance between precision and recall, reflecting the model's overall accuracy in identifying customers in a specific class.
  6. Correspondence intervals - Thresholds for model success or improvement, shown below the metrics.
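
The metrics above can be computed from the same true-positive/false-positive counts. The sketch below uses hypothetical churn labels (1 = churned, 0 = stayed) purely to show how accuracy, recall, precision, and F1 relate to each other; the numbers are not from the product.

```python
# Hypothetical actual vs. predicted churn labels (1 = churned, 0 = stayed)
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
predicted = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # true positives
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # false positives
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # false negatives
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # true negatives

accuracy  = (tp + tn) / len(actual)  # share of all predictions that were correct
recall    = tp / (tp + fn)           # share of actual churners the model caught
precision = tp / (tp + fp)           # share of churn predictions that were right
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(accuracy, recall, precision, f1)
```

Note how recall and precision answer different questions about the same class, which is why F1 is reported as a single balancing score.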

Confusion Matrix

Screenshot 2024-10-18 at 10.41.05.png

A confusion matrix helps evaluate how well a classification model is performing. It compares the model's predictions (rows) to the actual outcomes (columns), showing counts of true positives, true negatives, false positives, and false negatives. This lets you assess the model's accuracy in more depth and identify areas for improvement for each class separately.
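As a rough sketch of how such a matrix is tallied, the snippet below counts (prediction, actual) pairs over hypothetical labels. The row/column orientation matches the description above (rows = predictions, columns = actual outcomes); the data is illustrative, not from the product.

```python
from collections import Counter

# Hypothetical labels (1 = churned, 0 = stayed) — illustrative only
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

# Tally each (predicted, actual) pair: diagonal cells are correct predictions,
# off-diagonal cells are the false positives and false negatives
matrix = Counter(zip(predicted, actual))

for pred in (0, 1):
    print([matrix[(pred, act)] for act in (0, 1)])  # → [3, 1] then [1, 3]
```

Reading the off-diagonal cells per class is what makes the matrix more informative than a single accuracy number.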