A least squares network adjustment combines survey observations to arrive at a unique set of final coordinates and a consistent set of adjusted observations. The least squares process is iterative: you work out provisional values, and using these the process calculates better values, which are fed back in as a new seed, and so on until the solution converges. In other words, the corrections calculated get smaller and smaller until they become insignificant and you have a final set of observations and coordinates. Depending on the accuracy of the initial data, two to six iterations are typical.
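The iterate-until-the-corrections-are-insignificant idea can be sketched with a tiny, hypothetical example: fixing a single unknown point from measured distances to three known stations (the station coordinates, distances, and seed values below are made up for illustration, and a real network adjustment handles many points and observation types at once).

```python
import math

# Hypothetical known stations and measured distances to one unknown point.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
distances = [70.77, 73.54, 67.88]

# Provisional ("seed") coordinates for the unknown point.
x, y = 50.0, 50.0

for iteration in range(10):
    # Build the normal equations for the linearised problem.
    n11 = n12 = n22 = b1 = b2 = 0.0
    for (sx, sy), d_obs in zip(stations, distances):
        d_calc = math.hypot(x - sx, y - sy)
        # Partial derivatives of the distance with respect to x and y.
        ax, ay = (x - sx) / d_calc, (y - sy) / d_calc
        r = d_obs - d_calc              # observed minus computed
        n11 += ax * ax; n12 += ax * ay; n22 += ay * ay
        b1 += ax * r; b2 += ay * r
    # Solve the 2x2 normal system for the corrections to x and y.
    det = n11 * n22 - n12 * n12
    dx = (b1 * n22 - b2 * n12) / det
    dy = (n11 * b2 - n12 * b1) / det
    x, y = x + dx, y + dy
    # Stop once the corrections have become insignificant.
    if math.hypot(dx, dy) < 1e-6:
        break
```

Each pass computes corrections from the current provisional coordinates and feeds the improved coordinates back in as the next seed; with reasonable observations the corrections shrink rapidly and the loop stops after a handful of iterations.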
This adjustment process assumes the initial values are in the right ball park, as all measurements normally would be. If there is a gross error, the theory breaks down: the results do not converge after each iteration, and you never arrive at a final answer. So your dataset needs to be free of gross errors, or "blunders". Blunder detection is a process that compares all provisional coordinates and observations, looking for gross differences or inconsistencies that would derail the adjustment process. It only works if you have sufficient redundant data (as for the least squares adjustment itself).
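A very crude version of this check can be sketched as follows: compare each observation against the value computed from the provisional coordinates and flag anything whose misclosure is far beyond the expected measurement noise. This is a simplified illustration only (real blunder detection, such as Baarda data snooping, tests standardised residuals from the adjustment itself); the function name, the `sigma` noise figure, and the numbers are all assumptions, and it presumes the provisional coordinates are already reasonably good.

```python
import math

def flag_blunders(stations, distances, seed, sigma=0.05, k=3.0):
    """Flag distance observations whose misclosure against the
    provisional coordinates exceeds k * sigma. A crude sketch of
    blunder detection, not a full statistical test."""
    x, y = seed
    flagged = []
    for i, ((sx, sy), d_obs) in enumerate(zip(stations, distances)):
        misclosure = d_obs - math.hypot(x - sx, y - sy)
        if abs(misclosure) > k * sigma:
            flagged.append(i)
    return flagged

# Hypothetical network: the second distance carries a 10 m blunder.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
distances = [70.77, 83.54, 67.88]
print(flag_blunders(stations, distances, seed=(48.0, 52.0)))  # → [1]
```

Because every observation is checked against independently computed values, the test only has power where the network contains redundancy: with no spare observations, a blunder simply gets absorbed into the solution and nothing stands out.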
Without blunder detection you run the least squares process, it fails, you then look for large errors in your report, fix the data, run the process again, fix more errors, and so on until eventually the process works. This can be quite tedious.