7. Iteration

Depending on the outcome of your latest experiment there are many options on how to continue iterating. We provide you with non-exhaustive hints for some common outcomes:


🎉 Complete success 🎉

If your experiment has fulfilled all of the requirements defined in the milestone, you can choose to move directly to one of the following steps (depending on how you are going to use the model):


Partial success 👍

If your experiment has positive results that come close to the requirements but still do not meet the bar for closing a milestone, you should discuss with your team where there is room for improvement. Exploring the model behaviour is a good first step for gaining insights.
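As a sketch of what exploring the model behaviour might look like in practice, the hypothetical helper below (not part of this project's tooling) ranks validation samples by their per-sample loss; inspecting the worst-scoring samples often surfaces systematic failure modes, such as a class the model never gets right.

```python
import numpy as np

def worst_samples(losses, sample_ids, k=5):
    """Return the ids of the k samples with the highest per-sample loss.

    `losses` is a 1-D array of per-sample validation losses and
    `sample_ids` the matching identifiers (file names, row ids, ...).
    """
    order = np.argsort(losses)[::-1]          # indices sorted by loss, highest first
    return [sample_ids[i] for i in order[:k]]

# Toy example: losses collected on five validation images after an epoch.
losses = np.array([0.1, 2.3, 0.4, 1.7, 0.2])
ids = ["img_0", "img_1", "img_2", "img_3", "img_4"]
print(worst_samples(losses, ids, k=2))  # -> ['img_1', 'img_3']
```

Looking at the top offenders side by side with their ground truth is usually faster than staring at an aggregate metric.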

Usually, you would want to iterate by launching new experiment issues that tweak the configuration files. Common iterations include:

  • Adding/removing data augmentation transforms (config/datasets)
  • Trying different architectures (config/models)
  • Tuning training hyperparameters (configs/schedulers / configs/runtimes)
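The exact format of these configuration files is project-specific, but the pattern behind such iterations is usually the same: start from a base configuration and apply a small set of overrides per experiment. A minimal, generic sketch (the `apply_overrides` helper and dotted-key convention are assumptions for illustration, not this project's API):

```python
import copy

def apply_overrides(config, overrides):
    """Return a copy of `config` with dotted-key overrides applied.

    For example {"scheduler.lr": 0.01} updates config["scheduler"]["lr"],
    leaving the original base configuration untouched.
    """
    cfg = copy.deepcopy(config)
    for dotted_key, value in overrides.items():
        node = cfg
        *parents, leaf = dotted_key.split(".")
        for key in parents:
            node = node.setdefault(key, {})  # walk/create intermediate sections
        node[leaf] = value
    return cfg

base = {"model": {"name": "resnet18"}, "scheduler": {"lr": 0.1}}
variant = apply_overrides(base, {"scheduler.lr": 0.01, "model.name": "resnet50"})
print(variant)  # -> {'model': {'name': 'resnet50'}, 'scheduler': {'lr': 0.01}}
```

Keeping the base configuration immutable and recording only the overrides per experiment issue makes it easy to compare what actually changed between runs.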

Partial failure 👎

If the results of your experiment are not terrible but did not meet your expectations, you should discuss with your team where the source of the problem might be. Just like before, exploring the model behaviour is a good first step for gaining insights.

Usually, you would want to review your data pipeline more carefully before tweaking any configuration. Revisiting the results from the data/explore stages or adding new ones in between your data/transform stages would be good ideas. You can also debug the training pipeline using the debug_pipeline tool.
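An intermediate stage of that kind can be as simple as a batch of sanity checks on the transformed data. The function below is a hypothetical example of what such a check might assert (value ranges and label bounds are assumptions; adapt them to your own pipeline):

```python
import numpy as np

def check_batch(images, labels, num_classes):
    """Lightweight sanity checks for a transformed batch.

    Returns a list of human-readable problems; an empty list means
    the batch passed. Useful as an extra stage between
    transformation steps in the data pipeline.
    """
    images = np.asarray(images, dtype=float)
    labels = np.asarray(labels)
    problems = []
    if np.isnan(images).any():
        problems.append("NaN pixel values")
    if images.min() < 0.0 or images.max() > 1.0:
        # Assumes images were meant to be normalized to [0, 1].
        problems.append("pixels outside [0, 1] (normalization bug?)")
    if not ((labels >= 0) & (labels < num_classes)).all():
        problems.append("labels outside [0, num_classes)")
    return problems

# A batch with an un-normalized pixel (1.4) and an out-of-range label (3).
print(check_batch([[0.2, 0.9], [0.5, 1.4]], [0, 3], num_classes=3))
```

Running such checks early turns a silent data bug into a loud failure long before it shows up as a bad metric.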


💩 Complete failure 💩

If your experiment has failed miserably, you definitely need to discuss with your team where the source of the problem might be. Complete failures most likely stem from high-level issues, for example:

  • The scope of the project is too broad (e.g. you are trying to build a model that works in every possible scenario). Try to limit the scope (milestone) and iterate on a simplified problem.
  • The selected task does not suit your problem (e.g. you might be training a semantic segmentation model on a problem that should be solved with an object detection model).
  • The dataset is not good enough (although you should have detected this issue in the data/explore stages). You need to consider if you can afford to discard/refine/expand the dataset(s).
  • There is a bug somewhere in the code. Good luck.

You should consider all of the above issues, and others along the same lines, before trying the steps listed for the other outcomes.