If we are to put a forecasting tool in the hands of non-technical users, we need some way of automatically assessing the quality of the forecasts and telling the user when to reach out for support. This can be tricky: we want to flag genuinely poor forecasts, but we don't want too many false positives...
Here are two tests that we could implement:
- Anomaly detection via the Generalized ESD test. See, for example, Twitter's AnomalyDetection package.
- A Ljung-Box test for autocorrelation in the residuals.
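Both tests can be sketched directly with numpy and scipy; the function names below are illustrative, not from any particular package, and the Generalized ESD sketch assumes the non-outlying residuals are roughly normal:

```python
import numpy as np
from scipy import stats

def gesd_outliers(values, max_outliers, alpha=0.05):
    """Generalized ESD test (Rosner, 1983). Returns indices of detected
    outliers, assuming the inliers are approximately normally distributed."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    remaining = list(range(n))
    candidates = []          # index removed at each step
    num_outliers = 0
    for i in range(1, max_outliers + 1):
        sub = x[remaining]
        mean, sd = sub.mean(), sub.std(ddof=1)
        devs = np.abs(sub - mean)
        j = int(np.argmax(devs))
        R = devs[j] / sd                       # test statistic at step i
        candidates.append(remaining.pop(j))
        # Critical value lambda_i at step i
        p = 1 - alpha / (2 * (n - i + 1))
        t = stats.t.ppf(p, df=n - i - 1)
        lam = (n - i) * t / np.sqrt((n - i - 1 + t**2) * (n - i + 1))
        if R > lam:
            num_outliers = i
    # The number of outliers is the largest i whose statistic exceeded lambda_i
    return candidates[:num_outliers]

def ljung_box(residuals, lags=10):
    """Ljung-Box Q statistic and p-value. A small p-value suggests the
    residuals are autocorrelated, i.e. the model is missing structure."""
    x = np.asarray(residuals, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc**2)
    ks = np.arange(1, lags + 1)
    acf = np.array([np.sum(xc[k:] * xc[:-k]) / denom for k in ks])
    Q = n * (n + 2) * np.sum(acf**2 / (n - ks))
    pvalue = stats.chi2.sf(Q, df=lags)
    return Q, pvalue
```

In practice you would run these on the holdout residuals from a cross-validated fit; statsmodels also ships a ready-made Ljung-Box implementation (`statsmodels.stats.diagnostic.acorr_ljungbox`) if that dependency is acceptable.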
Another approach would be to fit a couple of simple baseline models and compare their accuracy with that of the full model; if the simple models give better forecasts than prophet, that would be a good signal to contact a statistician.
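A minimal sketch of that comparison, assuming a naive and a seasonal-naive baseline with MAE as the accuracy metric (every name here is hypothetical, not part of prophet's API):

```python
import numpy as np

def naive_forecast(history, horizon):
    """Baseline 1: repeat the last observed value."""
    return np.full(horizon, history[-1], dtype=float)

def seasonal_naive_forecast(history, horizon, period):
    """Baseline 2: repeat the last full seasonal cycle."""
    history = np.asarray(history, dtype=float)
    reps = -(-horizon // period)              # ceiling division
    return np.tile(history[-period:], reps)[:horizon]

def mae(forecast, actual):
    """Mean absolute error of a forecast against the holdout actuals."""
    return float(np.mean(np.abs(np.asarray(forecast) - np.asarray(actual))))

def beats_baselines(model_forecast, history, actual, period):
    """True when the model out-forecasts both naive baselines on a holdout;
    False is the 'contact a statistician' signal."""
    horizon = len(actual)
    model_err = mae(model_forecast, actual)
    baseline_err = min(
        mae(naive_forecast(history, horizon), actual),
        mae(seasonal_naive_forecast(history, horizon, period), actual),
    )
    return model_err < baseline_err
```

Here `model_forecast` would be prophet's prediction over the holdout window; the baselines need only the training history, so the check adds almost no cost to the pipeline.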