FeedbackAnalysis


The FeedbackAnalysis report block analyzes feedback data and calculates inter-rater reliability using Gwet's AC1 agreement coefficient. It provides comprehensive insights into agreement between evaluators and helps assess the quality and consistency of feedback data.

Overview

The FeedbackAnalysis block retrieves FeedbackItem records and compares initial and final answer values to calculate agreement scores using Gwet's AC1 coefficient. This provides a robust measure of inter-rater reliability that accounts for chance agreement.

The analysis can focus on a specific score or analyze all scores associated with a scorecard, providing both individual score breakdowns and overall aggregated metrics.
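
To illustrate the underlying calculation, here is a minimal, self-contained sketch of Gwet's AC1 over paired initial and final answers. This is not the block's actual implementation; the data format (two parallel lists of categorical answers) and the function name are assumptions for demonstration.

```python
from collections import Counter

def gwet_ac1(initial, final):
    """Gwet's AC1 for two raters over the same items.

    initial, final: parallel lists of categorical answers,
    e.g. the initial and final values from FeedbackItem records.
    (Illustrative helper, not the FeedbackAnalysis block's API.)
    """
    n = len(initial)
    assert n == len(final) and n > 0

    # Observed agreement: fraction of items where both answers match.
    p_a = sum(a == b for a, b in zip(initial, final)) / n

    # Average marginal proportion of each category across both raters.
    counts = Counter(initial) + Counter(final)
    k = len(counts)
    if k < 2:
        return 1.0  # only one category observed: perfect agreement
    pi = {c: counts[c] / (2 * n) for c in counts}

    # Chance agreement under Gwet's model.
    p_e = sum(p * (1 - p) for p in pi.values()) / (k - 1)

    return (p_a - p_e) / (1 - p_e)

# Example: 8 items, 7 agreements.
initial = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no"]
final   = ["yes", "yes", "no", "yes", "yes", "yes", "yes", "no"]
print(round(gwet_ac1(initial, final), 3))  # 0.781
```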

Key Features

- AC1 Agreement Coefficient: calculates Gwet's AC1 for robust inter-rater reliability measurement
- Accuracy Metrics: provides accuracy, precision, and recall measurements
- Detailed Breakdowns: score-by-score analysis with confusion matrices
- Quality Insights: automatic warnings for data quality issues

Configuration

Configure the FeedbackAnalysis block in your report configuration:

```block
class: FeedbackAnalysis
scorecard: "1438"        # Required: Call Criteria Scorecard ID
days: 30                 # Optional: number of days to analyze (default: 14)
start_date: "2024-01-01" # Optional: start date (YYYY-MM-DD; overrides days)
end_date: "2024-01-31"   # Optional: end date (YYYY-MM-DD; defaults to today)
score_id: "1438_1"       # Optional: specific score ID (analyzes all scores if omitted)
```

Configuration Parameters

| Parameter | Required | Description |
| --- | --- | --- |
| scorecard | Required | Call Criteria Scorecard ID to analyze |
| days | Optional | Number of days in the past to analyze (default: 14) |
| start_date | Optional | Start date for analysis (YYYY-MM-DD format; overrides days) |
| end_date | Optional | End date for analysis (YYYY-MM-DD format; defaults to today) |
| score_id | Optional | Specific Call Criteria Question ID to analyze (analyzes all scores if omitted) |

Example Output

Here's an example of how the FeedbackAnalysis block output appears in a report:

Live Example

The values below come from a rendering of the FeedbackAnalysis component using example data, titled "Feedback Analysis Example: Inter-rater Reliability Assessment" and covering January 1, 2024 to January 31, 2024. In the report, each score appears as a pair of gauges: one for the AC1 agreement coefficient (scaled from -1 to 1) and one for accuracy (0 to 100%).

| Score | AC1 Agreement | Accuracy |
| --- | --- | --- |
| Resolution Effectiveness | 0.87 | 87.2% |
| Call Quality Assessment | 0.82 | 82.0% |
| Summary | 0.85 | 84.6% |

The summary also reports raw agreement: 132 / 156 items.

Understanding the Metrics

AC1 Coefficient

Gwet's AC1 is a chance-corrected agreement coefficient that's more robust than Cohen's kappa, especially for imbalanced data. Values range from -1 to 1, with higher values indicating better agreement.
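
For reference, the standard definition of the coefficient for two raters over N items and K answer categories is:

```math
\mathrm{AC1} = \frac{p_a - p_e}{1 - p_e},
\qquad
p_e = \frac{1}{K - 1} \sum_{k=1}^{K} \pi_k \left(1 - \pi_k\right),
\qquad
\pi_k = \frac{n_{1k} + n_{2k}}{2N}
```

where p_a is the observed agreement, and n_1k and n_2k are the number of items each rater assigned to category k.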

Accuracy

The percentage of feedback items where the initial and final answers agree. This provides a straightforward measure of evaluator consistency.
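
For example, the summary in the example output above shows 132 agreements out of 156 items: 132 / 156 ≈ 84.6% accuracy.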

Precision & Recall

Precision measures the accuracy of positive predictions, while recall measures the ability to find all positive instances. These help understand performance across different response categories.

Confusion Matrix

Shows the detailed breakdown of agreements and disagreements between initial and final answers, helping identify specific patterns in evaluator behavior.
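
As a minimal sketch of how such a breakdown can be derived from paired answers, the example below builds confusion counts and per-category precision and recall, treating the final answer as the reference value. The helper names and data format are illustrative assumptions, not part of the block's API.

```python
from collections import Counter

def confusion_counts(initial, final):
    """Count (final, initial) answer pairs; the final answer is
    treated as the reference value. (Illustrative helper.)"""
    return Counter(zip(final, initial))

def precision_recall(initial, final, category):
    """Per-category precision and recall of the initial answers,
    scored against the final answers."""
    tp = sum(i == category and f == category for i, f in zip(initial, final))
    predicted = sum(i == category for i in initial)  # initial said `category`
    actual = sum(f == category for f in final)       # reference was `category`
    precision = tp / predicted if predicted else 0.0
    recall = tp / actual if actual else 0.0
    return precision, recall

initial = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no"]
final   = ["yes", "yes", "no", "yes", "yes", "yes", "yes", "no"]
print(confusion_counts(initial, final))
# Counter({('yes', 'yes'): 5, ('no', 'no'): 2, ('yes', 'no'): 1})
print(precision_recall(initial, final, "yes"))
# (1.0, 0.8333333333333334)
```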
