How does the User Intent retraining work?
The user feedback from the User Intent Training tool is used to improve the accuracy of the User Intent model.
How often is the User Intent model retrained?
We retrain the model at least once a month but might retrain more frequently depending on the amount of feedback we receive.
We use a number of metrics and methods to assess the accuracy of the model and will only deploy a new version if we have recorded an improvement. Once the model is re-deployed, it will automatically be used to predict User Intent on your workspace and you should notice an improvement in the accuracy of its predictions.
How many sentences should I review?
Whilst more feedback generally leads to a better model, we must balance that against the amount of effort we ask our users to invest.
Therefore, we have placed a hard limit on how many sentences associated with each User Intent type can be reviewed in a workspace per calendar month. This allowance automatically resets on the first day of each month, at which point new sentences can be reviewed.
Whilst the upper limit acts as a guide, we would recommend applying your own judgement to how many examples you review each month. Here are some tips.
Review more sentences if:
- Many sentences are classified incorrectly
- You are relying on a particular User Intent and a high degree of accuracy is necessary
Review fewer sentences if:
- Most sentences are classified correctly
- You are not using, or planning to use, a given User Intent in any of your Topics or Subtopics