Automated calibration
I would like to calibrate my SWMM model using an automated calibration tool. I have been exploring different codes written in R, Python, etc., but I was unable to use any of them efficiently, and there is not much information that I can find.
1- I tried https://cran.r-project.org/web/packages/swmmr/vignettes/How_to_autocalibrate_a_SWMM_model_with_swmmr.html. The simulation ran successfully, but the results were not correct.
2- I tried to run the main example at https://github.com/mmmatthew/swmm_calibration, but it gave me some errors.
3- Recently I came across OSTRICH-SWMM, but I could not install and use it.
I would be very grateful if anyone could suggest another tool for automated calibration of SWMM, or provide some material that would help me run one of the codes above.
Try combining pyswmm (https://github.com/OpenWaterAnalytics/pyswmm) and platypus (https://github.com/Project-Platypus/Platypus)
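A minimal sketch of how that pairing could be wired up. To keep it self-contained, the SWMM run is replaced by a synthetic `run_model` stand-in, and the optimizer is a plain random search; the parameter names and bounds are hypothetical. With pyswmm you would instead write each candidate parameter set into the .inp file and execute the model via `pyswmm.Simulation`:

```python
import random

# Hypothetical "true" parameters; in practice the observed hydrograph
# comes from flow metering, not from the model itself.
TRUE_PARAMS = {"width": 500.0, "n_imperv": 0.015}

def run_model(params):
    # Stand-in for a SWMM run: a toy response to the parameters.
    # With pyswmm, you would write `params` into the .inp file, run
    # pyswmm.Simulation, and read the outfall hydrograph back.
    return [params["width"] * 0.01 * t + params["n_imperv"] * 100 for t in range(10)]

OBSERVED = run_model(TRUE_PARAMS)

def rmse(sim, obs):
    # Root-mean-square error between simulated and observed series.
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

# Bound the search space to physically plausible values.
BOUNDS = {"width": (100.0, 1000.0), "n_imperv": (0.01, 0.05)}

def calibrate(n_iter=2000, seed=42):
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_iter):
        cand = {k: rng.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}
        err = rmse(run_model(cand), OBSERVED)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err
```

With Platypus you would wrap the same objective in a `Problem` with `Real` decision variables and hand it to an algorithm such as `NSGAII`; the random search here just keeps the sketch dependency-free.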
While ostriches, pythons, and platypuses all have varying aquatic capabilities, SWMM models are typically weak candidates for autocalibration. Much of the calibration of an urban collection system model is instead refinement and correction of the paradigm describing the model domain. Autocalibration may help where one is reasonably confident in the flow metering data, hydrology is well defined, and the hydraulic network is reliably parameterized, such as when one is modeling streamflow and reliable stream gage and rainfall data exist. One can then calibrate conduit roughness, subcatchment width, impervious connectivity, and perhaps soil infiltration rates. However, autocalibration cannot correct faulty metering data, fix erroneous system connectivity, account for blockages, or tell you that you've failed to account for hydrologic features such as ponds or new development. I suppose that AI could add the ability to address many of these issues. However, I think that considerable engineering judgement remains a key requisite, and autocalibration/AI can play only a limited role in model development for a piped network where there is uncertainty about the conditions and configuration of the modeled system.
I agree with Mitch. I'm not a big proponent of automatic calibrators for models whose parameters function interdependently. Automatic calibrators might work well for models whose parameters function independently of each other: you change a parameter and you see a change in your results. But for models with interdependent parameters (e.g. SWMM's groundwater/aquifer model), some parameters might not change anything in your results until other parameters have reached their right ranges or values. By the time you have figured out all of these interdependent relationships and ranges among parameters, you would have actually calibrated the model by hand already. Also, using automatic calibrators robs the engineers calibrating the model of the opportunity to learn how the model behaves under different conditions. Oftentimes, as a result, when a model calibrated with an automatic calibrator is run outside the ranges of its calibration events, the engineers have no clue why the simulation results are the way they are, especially when the results do not pass the common-sense test.
I completely agree with the above comments. To the best of my knowledge, there is no out-of-the-box tool for the automated calibration of SWMM models. The above comments perfectly illustrate why developing an automated tool for calibrating a SWMM model is hard, especially for large-scale stormwater systems. Having said that, I don't think it is impossible to calibrate a model using automated approaches. Whether automated methods are usable depends on which model parameters you are trying to calibrate. Say you want to estimate ten parameters; you can use optimization. But as the number of parameters increases, so does the complexity.
If you are approaching the problem from a research perspective, there are a couple of automated calibration approaches you can try. Building on Caleb's comment, I recommend Bayesian optimization (https://scikit-optimize.github.io/stable/auto_examples/bayesian-optimization.html?highlight=bayesian). It is sample efficient and would give a faster solution than genetic algorithms. Sample efficiency is vital if your model takes a while to simulate. In the end, like everything in engineering, the usability of a method depends on the scale and complexity of the problem. 😄
Also, check the objective function you are using for your calibration. The one in https://cran.r-project.org/web/packages/swmmr/vignettes/How_to_autocalibrate_a_SWMM_model_with_swmmr.html uses NSE (Nash-Sutcliffe efficiency). It might not be the right one for your model.
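For reference, NSE scores model error against the variance of the observations; a stdlib-only implementation of the metric (nothing SWMM-specific) looks like:

```python
def nse(simulated, observed):
    """Nash-Sutcliffe efficiency: 1.0 is a perfect fit, 0.0 means the model
    is no better than predicting the observed mean, and negative is worse."""
    mean_obs = sum(observed) / len(observed)
    ss_err = sum((s - o) ** 2 for s, o in zip(simulated, observed))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_err / ss_tot
```

Because it squares the errors, NSE is dominated by fit on the peaks; if low flows or total volume matter more for your application, a log-transformed NSE or an added volume-balance term may suit the objective better.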
It seems my attempt at providing a direct answer to the query above has generated an interesting discussion. Framing the argument as a competition, or a choice between an automated calibration tool and a manual calibration exercise relying solely on engineering judgement/understanding, is not how I look at it. Using automated calibrators does not preclude the use of considerable engineering judgment. I tend to believe a strong understanding of the dynamics underlying these models is a prerequisite for setting up and using these automated calibrators appropriately.
I think we also need to be honest with ourselves in noting that the parameters underlying these models, especially those for runoff generation, are highly uncertain, are sometimes scale dependent, change from model to model, and in many cases cannot be measured directly even though we may have some intuition about their ranges. I am a proponent of conveying these uncertainties in our modeling rather than believing we are able to determine a single hydraulic conductivity value that applies to a small parcel or, at worst, an entire region. Some of these "automated calibrators" can help a great deal in conveying these uncertainties. As a plus, they serve to convey our humility about the degree to which we know what is going on underground. That also makes for better and more realistic planning and design of infrastructure.
I agree - great discussion! This reference might be a little dated, but you will find some additional insight in Rules for Responsible Modeling by William James (see https://www.chiwater.com/Files/R184_CHI_Rules.pdf , Chapter 4 in particular).
Thank you so much for your insightful comments and recommendations, I will keep them in mind. I greatly appreciate your help.
Currently, I am working on a small-scale model with experimental data. Once I have successfully completed the calibration process, I will post the tool I used here. Many thanks again.
Yes, a really good discussion. I've heard an increasing rumble about autocalibration over the past couple of years, and I won't be surprised if a commercial autocalibration tool emerges in the next few years (similar to the alternatives-optimization software available now).
But this thread has pointed to many of the reasons autocalibration is so tricky. There are just too many moving parts to hit run and expect to come back the next morning to a fully calibrated model. As Abhi points out (and in my experience), coming up with an adequate objective/cost function that describes what you look for in a good calibration is an undertaking in and of itself. But I think we're working toward a point where an engineer can speed up the more tedious parts of the calibration process using autocalibration.
Dondu - report back and let us know what you find. And if you go down the rabbit hole far enough you could easily have a paper worth publishing. I'd recommend https://www.chijournal.org.
This has evolved from a technology question into a philosophical discussion … which is pretty awesome … and I just had a cup of coffee. Adding just a few more thoughts to the discussion.
I think there is a healthy balance between unsupervised and supervised automation around calibration. Joseph points out that autocalibration methods can lead to unreasonable parameterizations - true. However, exploring that further, parameter spaces can certainly be bounded to “reality.” At the point at which you start hitting the boundaries of one or more parameters, the modeler should be triggered to investigate further. The model setup should be interrogated first. When everything seems correct with your setup, perhaps even the method by which you are representing hydrology, for example, needs to be adjusted.
It should be noted, in response to Mitch’s concerns, that the method of “autocalibration” is a separate concern from “data quality.” But the intricacies of modeling around the exceptions are cleanly surfaced by a supervised autocalibration method. The process is bounded, and when you are at the boundaries, you should be triggered to investigate. This ends up being a list of justifications for why the model didn’t work “perfectly.”
Automation/optimization is simply used as a vehicle to find our calibration issues sooner. This is iterative with humans in the loop.
It’s amazing how far we have come from the “old days” of a modeler pressing ‘run’ and nervously walking away from their heating-up CPU. Autocalibration methods with supervision enable us to use more computing power (like deploying to 1000s of servers) to reach calibration sooner.
I think in “modern day” modeling .. when you’re calibrating big and complex networks .. if you’re not running many parallel simulations and identifying things like sensitivity gradients, it might be worth stepping back and asking yourself how you could better optimize your workflow with some flavor of supervised automation.
Anyone remember rotary phones?