FeedbackAnalysis
The FeedbackAnalysis report block analyzes feedback data and calculates inter-rater reliability using Gwet's AC1 agreement coefficient. It quantifies agreement between evaluators and helps assess the quality and consistency of feedback data.
Overview
The FeedbackAnalysis block retrieves FeedbackItem records and compares initial and final answer values to calculate agreement scores using Gwet's AC1 coefficient. This provides a robust measure of inter-rater reliability that accounts for chance agreement.
The analysis can focus on a specific score or analyze all scores associated with a scorecard, providing both individual score breakdowns and overall aggregated metrics.
Key Features
- AC1 Agreement Coefficient: calculates Gwet's AC1 for robust inter-rater reliability measurement
- Accuracy Metrics: provides accuracy, precision, and recall measurements
- Detailed Breakdowns: score-by-score analysis with confusion matrices
- Quality Insights: automatic warnings for data quality issues
Configuration
Configure the FeedbackAnalysis block in your report configuration:
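Here is a minimal sketch of what such a configuration might look like, assuming a YAML-style block definition; the `class` key and all example values are illustrative, and only the parameter names documented below come from this page:

```yaml
# Illustrative sketch only: the exact block declaration syntax depends on
# your report configuration format. Parameter names match the table below.
class: FeedbackAnalysis    # assumed key for selecting the block type
scorecard: "1438"          # required: Call Criteria Scorecard ID (example value)
start_date: "2024-01-01"   # optional: overrides `days`
end_date: "2024-01-31"     # optional: defaults to today
score_id: "45925"          # optional: omit to analyze all scores on the scorecard
```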
Configuration Parameters
| Parameter | Required | Description |
|---|---|---|
| scorecard | Required | Call Criteria Scorecard ID to analyze |
| days | Optional | Number of days in the past to analyze (default: 14) |
| start_date | Optional | Start date for analysis (YYYY-MM-DD format; overrides days) |
| end_date | Optional | End date for analysis (YYYY-MM-DD format; defaults to today) |
| score_id | Optional | Specific Call Criteria Question ID to analyze (analyzes all scores if omitted) |
Example Output
Here's an example of how the FeedbackAnalysis block output appears in a report:
Live Example
This is a live rendering of the FeedbackAnalysis component using example data. The rendered example, "Feedback Analysis Example: Inter-rater Reliability Assessment", covers January 1, 2024 to January 31, 2024 and includes per-score sections for Resolution Effectiveness and Call Quality Assessment, followed by a summary with the overall raw agreement.
Understanding the Metrics
AC1 Coefficient
Gwet's AC1 is a chance-corrected agreement coefficient that is more robust than Cohen's kappa, especially when one response category dominates the data. Values range from -1 to 1, with higher values indicating stronger agreement.
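To make the calculation concrete, here is a minimal, illustrative Python sketch of Gwet's AC1 for two raters (the initial and final answers). It is a simplified sketch, not the block's internal implementation:

```python
from collections import Counter

def gwet_ac1(initial, final):
    """Gwet's AC1 for two raters, e.g. initial vs. final answers."""
    n = len(initial)
    categories = set(initial) | set(final)
    q = len(categories)

    # Observed agreement: share of items where both answers match.
    pa = sum(a == b for a, b in zip(initial, final)) / n

    if q < 2:
        return 1.0  # only one category observed; agreement is trivially perfect

    # Average category proportions across both raters.
    counts = Counter(initial) + Counter(final)
    pi = {c: counts[c] / (2 * n) for c in categories}

    # Chance agreement under Gwet's model, then the chance-corrected coefficient.
    pe = sum(p * (1 - p) for p in pi.values()) / (q - 1)
    return (pa - pe) / (1 - pe)

# Hypothetical example: 4 of 5 items agree (pa = 0.8), pe = 0.5, so AC1 = 0.6.
initial = ["yes", "yes", "no", "yes", "no"]
final   = ["yes", "no",  "no", "yes", "no"]
print(gwet_ac1(initial, final))  # ≈ 0.6
```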
Accuracy
The percentage of feedback items where the initial and final answers agree. This provides a straightforward measure of evaluator consistency.
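In code terms this is simply the proportion of matching answer pairs; a one-line sketch rather than the block's implementation:

```python
def accuracy(initial, final):
    """Fraction of feedback items where the initial and final answers agree."""
    return sum(a == b for a, b in zip(initial, final)) / len(initial)
```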
Precision & Recall
Precision measures how many items assigned to a category actually belong to it, while recall measures how many items in a category were correctly identified. Together they show how agreement varies across response categories.
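A per-category sketch, assuming the final answer is treated as the reference and the initial answer as the prediction (an interpretation chosen for illustration, not necessarily how the block defines these metrics):

```python
def precision_recall(initial, final, category):
    """Per-category precision and recall for one response category."""
    tp = sum(i == category and f == category for i, f in zip(initial, final))
    predicted = sum(i == category for i in initial)  # items the initial answer placed in this category
    actual = sum(f == category for f in final)       # items the final answer placed in this category
    precision = tp / predicted if predicted else 0.0
    recall = tp / actual if actual else 0.0
    return precision, recall
```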
Confusion Matrix
Shows the detailed breakdown of agreements and disagreements between initial and final answers, helping identify specific patterns in evaluator behavior.
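A minimal sketch of how such a matrix can be tallied from the two answer lists, with rows as final answers and columns as initial answers (an illustrative layout, not necessarily the one the block renders):

```python
from collections import Counter

def confusion_matrix(initial, final):
    """Tally (final, initial) answer pairs; rows = final answers, columns = initial answers."""
    labels = sorted(set(initial) | set(final))
    pairs = Counter(zip(final, initial))
    matrix = [[pairs[(f, i)] for i in labels] for f in labels]
    return labels, matrix

labels, matrix = confusion_matrix(
    ["yes", "yes", "no", "yes", "no"],   # hypothetical initial answers
    ["yes", "no",  "no", "yes", "no"],   # hypothetical final answers
)
# labels == ['no', 'yes']; matrix == [[2, 1], [0, 2]]
```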