Global optimization problems represent a class of pervasive and complex tasks in Computer Science, aimed at identifying the best solution among a possibly infinite number of candidate solutions (i.e., the global optimum of a fitness function). In many scenarios, the application of gradient-based methods is hampered by characteristics of the fitness function, such as multi-modality and non-differentiability. To overcome such issues, a variety of gradient-free methods, named meta-heuristics, have been proposed. Although meta-heuristics proved effective in many contexts, choosing the most appropriate method is often a time-consuming and error-prone task. Moreover, in real-world applications, these meta-heuristics might fail to find the global optimum. In my thesis, I show how Computational Intelligence can be used to automate the solution of the problem under investigation, minimizing user intervention. To ease the selection of the most suitable meta-heuristic, I designed a novel method named Hybrid CMA-PSO (HyCAPS). HyCAPS is a two-island method that combines two settings-free algorithms: Covariance Matrix Adaptation Evolution Strategy and Fuzzy Self-Tuning Particle Swarm Optimization. These methods periodically interact with each other by exchanging information on the best solutions found. The results show that HyCAPS outperforms many state-of-the-art methods. Since HyCAPS relies on only two parameters (the proportion of individuals on each island and the interaction frequency), I performed an analysis to determine a setting of these parameters that can be used in any practical application, thus making HyCAPS a settings-free algorithm. To enhance the capabilities of HyCAPS and to improve its adaptiveness, I also developed a self-tuning version, named ST-HyCAPS, which dynamically adapts the proportion value during the optimization process. The results show that ST-HyCAPS improves HyCAPS performance in a statistically significant way.
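The two-island scheme with periodic exchange of best solutions can be sketched as follows. This is a minimal illustration, not the HyCAPS implementation: the two islands here run a simple hypothetical hill-climbing update in place of CMA-ES and FST-PSO, but the two tunable quantities mentioned above (the `proportion` of individuals per island and the `interaction_freq` governing migration) appear explicitly.

```python
import random

def sphere(x):
    """Benchmark fitness: the sphere function (global optimum at the origin)."""
    return sum(xi * xi for xi in x)

def mutate(sol, step=0.1):
    """Gaussian perturbation of a candidate solution."""
    return [xi + random.gauss(0.0, step) for xi in sol]

def two_island_search(fitness, dim=5, pop_size=20, proportion=0.5,
                      interaction_freq=10, generations=200, seed=42):
    """Toy two-island scheme: each island evolves its own sub-population,
    and every `interaction_freq` generations the islands exchange their
    best solutions (migration)."""
    random.seed(seed)
    n_a = int(pop_size * proportion)  # individuals assigned to island A
    islands = [
        [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_a)],
        [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size - n_a)],
    ]
    for gen in range(1, generations + 1):
        for pop in islands:
            for i, sol in enumerate(pop):  # simple accept-if-better update
                child = mutate(sol)
                if fitness(child) < fitness(sol):
                    pop[i] = child
        if gen % interaction_freq == 0:    # periodic migration step
            best_a = min(islands[0], key=fitness)
            best_b = min(islands[1], key=fitness)
            # each island's worst individual is replaced by a copy of the
            # other island's best, propagating information between islands
            islands[0][islands[0].index(max(islands[0], key=fitness))] = list(best_b)
            islands[1][islands[1].index(max(islands[1], key=fitness))] = list(best_a)
    return min(islands[0] + islands[1], key=fitness)

best = two_island_search(sphere)
print(sphere(best))  # best fitness found (small for this unimodal benchmark)
```

In the actual method, each island would run a full optimizer with its own internal state; the migration step is what lets the two strategies inform each other.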
Another strategy to improve global optimization algorithms is the manipulation of the search space and fitness landscape. In this context, Dilation Functions (DFs) can be used to transform the points of the search space in order to “expand” promising regions of the landscape and “compress” poor-quality regions. Since the definition of an appropriate DF is problem-dependent, I introduced a two-layered algorithm to evolve a DF tailored to the problem under investigation. The results show that the quality of the dilated landscape is better than that of the original landscape. Nonetheless, applying the same DF to all dimensions of the search space might yield sub-optimal landscapes. To solve this open issue, I introduced a novel method based on Genetic Programming (GP) to evolve an optimal set of DFs, one per dimension. The results show that the GP-based method outperforms the two-layered method on the majority of the tested benchmark functions. To limit potential side effects of manipulating the entire landscape, I also introduced a novel class of DFs, named Local Bubble Dilation Functions (LBDFs), able to perform local distortions of the fitness landscape. Since the application of LBDFs also requires some a priori knowledge of the problem, I introduced a multi-island approach to evolve the most suitable LBDF for any given problem. The results show that, for several benchmark functions, meta-heuristics run on the dilated landscape outperform the same meta-heuristics run on the original landscape. LBDFs were also successfully applied to multi-objective optimization problems to improve the quality of the resulting Pareto front.
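The idea of a local, bubble-shaped dilation can be illustrated with a sketch. This is a hypothetical radial remapping chosen for clarity, not the LBDF formulation from the thesis: points inside a ball (the "bubble") are remapped along the ray from its center, points outside are untouched, and the distorted fitness is simply the original fitness composed with the remapping.

```python
import math

def rastrigin(x):
    """Multi-modal benchmark with global optimum at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def bubble_dilation(x, center, radius, gamma):
    """Illustrative local dilation: points inside the bubble are remapped
    radially, points outside are left untouched, so the distortion stays
    local. gamma < 1 expands the region near the center (pushing points
    outward), gamma > 1 compresses it."""
    offset = [xi - ci for xi, ci in zip(x, center)]
    dist = math.sqrt(sum(o * o for o in offset))
    if dist >= radius or dist == 0.0:
        return list(x)  # outside the bubble (or at its center): identity
    scaled = radius * (dist / radius) ** gamma  # new distance from center
    return [ci + o * scaled / dist for ci, o in zip(center, offset)]

def dilated_fitness(x, center=(0.0, 0.0), radius=2.0, gamma=0.5):
    """Fitness evaluated on the dilated landscape: f(D(x))."""
    return rastrigin(bubble_dilation(x, center, radius, gamma))
```

Note that the remapped distance equals `radius` when `dist == radius`, so the map joins the identity continuously at the bubble boundary; a meta-heuristic can then be run on `dilated_fitness` instead of the original function, with the distortion confined to the bubble.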
Finally, such evolutionary approaches were compared on two real-world optimization problems, whose goal is to estimate the kinetic parameters of biochemical reaction systems. The results show that the application of DFs yields a dilated fitness landscape whose quality is better than that of the original landscape, thus facilitating the identification of the global optimum.
Papetti, D. M. (2024). Meta-problems in global optimization: new perspectives from Computational Intelligence (Doctoral thesis, Università degli Studi di Milano-Bicocca).
Meta-problems in global optimization: new perspectives from Computational Intelligence
PAPETTI, DANIELE MARIA
2024
File | Size | Format
---|---|---
phd_unimib_808027.pdf (Doctoral thesis; embargo until 20/02/2026) | 17.19 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.