Authors:
A. Rajeb, R. Hamdaoui

DOI:
https://doi.org/10.26782/jmcms.2026.04.00010

Keywords:
Hyperparameter tuning; Constraint programming; Machine learning optimization; AutoML; Bayesian optimization; Computational efficiency

Abstract:
Hyperparameter tuning remains a major computational challenge in machine learning. Traditional methods (grid search, random search, Bayesian optimization) struggle with high dimensionality and complex parameter dependencies. This article explores constraint programming (CP) as a promising alternative, leveraging its ability to handle complex constraints and efficiently prune the search space. We systematically compare CP methods with standard methods across different data types and learning algorithms, using accuracy, computational efficiency, convergence time, and number of required evaluations as performance metrics. The results highlight the advantages of CP for complex hyperparameter dependencies and constrained search spaces, while also identifying scenarios where traditional methods remain preferable. This study contributes to Automated Machine Learning (AutoML) and provides concrete recommendations for hyperparameter tuning.

References:
I. Berger, N. Modélisation et résolution en programmation par contraintes de problèmes mixtes continu/discret de satisfaction de contraintes et d’optimisation. PhD dissertation, Université de Nantes, 2010. https://theses.hal.science/tel-00560963/document
II. Bergstra, James, and Yoshua Bengio. “Random Search for Hyper-Parameter Optimization.” Journal of Machine Learning Research, vol. 13, 2012, pp. 281–305. https://jmlr.org/papers/v13/bergstra12a.html
III. Bischl, Bernd, et al. “Hyperparameter Optimization: Foundations, Algorithms, Best Practices and Open Challenges.” arXiv, 2021. https://arxiv.org/abs/2107.05847
IV. Bourreau, Eric, et al. Programmation par contraintes: démarches de modélisation pour des problèmes d’optimisation. Ellipses, 2020.
V. Demassey, Sophie. Méthodes hybrides de programmation par contraintes et programmation linéaire pour le problème d’ordonnancement de projet à contraintes de ressources. PhD dissertation, Université de Nantes, 2003.
VI. Eurodecision. “Programmation par contraintes (PPC).” https://www.eurodecision.com
VII. Falkner, Stefan, Aaron Klein, and Frank Hutter. “BOHB: Robust and Efficient Hyperparameter Optimization at Scale.” Proceedings of the 35th International Conference on Machine Learning (ICML), 2018, pp. 1437–1446. https://proceedings.mlr.press/v80/falkner18a.html
VIII. Feurer, Matthias, and Frank Hutter. “Hyperparameter Optimization.” Automated Machine Learning, Springer, 2019, pp. 3–33. https://doi.org/10.1007/978-3-030-05318-5_1
IX. Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. https://www.deeplearningbook.org
X. Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed., Springer, 2009. https://doi.org/10.1007/978-0-387-84858-7
XI. Hutter, Frank, Holger H. Hoos, and Kevin Leyton-Brown. “Sequential Model-Based Optimization for General Algorithm Configuration.” Proceedings of the 5th International Conference on Learning and Intelligent Optimization (LION 5), 2011, pp. 507–523. https://link.springer.com/chapter/10.1007/978-3-642-25566-3_40
XII. Jin, Haifeng, Qingquan Song, and Xia Hu. “Auto-Keras: An Efficient Neural Architecture Search System.” Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019. https://doi.org/10.1145/3292500.3330648
XIII. Letham, Benjamin, et al. “Constrained Bayesian Optimization with Noisy Experiments.” Bayesian Analysis, vol. 14, no. 2, 2019, pp. 495–519. https://doi.org/10.1214/18-BA1110
XIV. Swersky, Kevin, Jasper Snoek, and Ryan P. Adams. “Multi-Task Bayesian Optimization.” Advances in Neural Information Processing Systems (NeurIPS), 2013, pp. 2004–2012. https://doi.org/10.5555/2999792.2999836
XV. Ungredda, Jonathan, and Jürgen Branke. “Bayesian Optimisation for Constrained Problems.” arXiv, 2021. https://arxiv.org/abs/2105.13245
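The abstract's central idea, encoding hyperparameter dependencies as explicit constraints so that infeasible regions of the grid are pruned before any model is trained, can be sketched in plain Python. This is a minimal illustrative backtracking search, not the authors' implementation; the hyperparameter names, domains, and constraints below are invented for the example.

```python
from itertools import product

# Hypothetical hyperparameter domains (illustrative values only).
DOMAINS = {
    "n_layers": [1, 2, 3, 4],
    "units": [16, 32, 64, 128],
    "dropout": [0.0, 0.2, 0.5],
}

# Constraints encode parameter dependencies. Each entry is a predicate over an
# assignment; it is only checked once all the variables it mentions are bound.
CONSTRAINTS = [
    # Capacity budget: total width must stay bounded.
    (("n_layers", "units"), lambda a: a["n_layers"] * a["units"] <= 256),
    # Dependency: deeper networks must use non-zero dropout.
    (("n_layers", "dropout"), lambda a: a["n_layers"] == 1 or a["dropout"] > 0.0),
]

def consistent(assignment):
    """Check every constraint whose variables are all assigned."""
    return all(
        fn(assignment)
        for vars_, fn in CONSTRAINTS
        if all(v in assignment for v in vars_)
    )

def feasible_configs(domains, order=None, assignment=None):
    """Backtracking enumeration: a branch is abandoned as soon as any
    constraint fails, so infeasible parts of the grid are never expanded."""
    if order is None:
        order = list(domains)
    if assignment is None:
        assignment = {}
    if not order:
        yield dict(assignment)
        return
    var, rest = order[0], order[1:]
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):  # early pruning = search-space reduction
            yield from feasible_configs(domains, rest, assignment)
        del assignment[var]

full_grid = len(list(product(*DOMAINS.values())))  # 4 * 4 * 3 = 48
feasible = list(feasible_configs(DOMAINS))
print(f"grid size: {full_grid}, feasible: {len(feasible)}")
# → grid size: 48, feasible: 32
```

Only the 32 feasible configurations would then be passed to model training and evaluation; the remaining third of the grid is discarded by constraint propagation alone, which is the efficiency argument the abstract makes against exhaustive grid search.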

