Stopping and restarting strategy for stochastic sequential search in global optimization

Journal article




Publication Details

Author list: Zabinsky Z.B., Bulger D., Khompatraporn C.

Publisher: Springer

Publication year: 2010

Journal: Journal of Global Optimization (0925-5001)

Volume number: 46

Issue number: 2

Start page: 273

End page: 286

Number of pages: 14

ISSN: 0925-5001

eISSN: 1573-2916

URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-77949261611&doi=10.1007%2fs10898-009-9425-z&partnerID=40&md5=f3e181264a8105b9b9ab77a4723acf2f

Languages: English-Great Britain (EN-GB)




Abstract

Two common questions when one uses a stochastic global optimization algorithm, e.g., simulated annealing, are when to stop a single run of the algorithm, and whether to restart with a new run or terminate the entire algorithm. In this paper, we develop a stopping and restarting strategy that considers tradeoffs between the computational effort and the probability of obtaining the global optimum. The analysis is based on a stochastic process called Hesitant Adaptive Search with Power-Law Improvement Distribution (HASPLID). HASPLID models the behavior of stochastic optimization algorithms, and motivates an implementable framework, Dynamic Multistart Sequential Search (DMSS). We demonstrate here the practicality of DMSS by using it to govern the application of a simple local search heuristic on three test problems from the global optimization literature. © 2009 Springer Science+Business Media, LLC.
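To illustrate the kind of decision the abstract describes, here is a minimal sketch of a generic multistart local search with simple stop-and-restart rules. It is not the paper's DMSS procedure or its HASPLID-based criteria: the function name `multistart_search`, the per-run budget, the total evaluation budget, and the "patience" rule (restart after a fixed number of non-improving steps) are placeholder assumptions chosen only to show where stopping, restarting, and overall termination decisions occur in such a loop.

```python
import math
import random


def multistart_search(objective, sample_start, local_step,
                      run_budget=200, total_budget=2000, patience=20):
    """Generic multistart local search with simple stop/restart rules.

    Illustrative sketch only: run_budget, total_budget, and patience are
    placeholder heuristics, not the DMSS criteria from the paper.
    """
    best_x, best_f = None, float("inf")
    evaluations = 0
    while evaluations < total_budget:          # overall termination rule
        x = sample_start()                     # restart: new random start point
        f = objective(x)
        evaluations += 1
        stall = 0
        for _ in range(run_budget):            # per-run effort limit
            if evaluations >= total_budget:
                break
            cand = local_step(x)               # propose a nearby point
            f_cand = objective(cand)
            evaluations += 1
            if f_cand < f:
                x, f, stall = cand, f_cand, 0
            else:
                stall += 1
            if stall >= patience:              # stop this run: no recent improvement
                break
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f


if __name__ == "__main__":
    # One-dimensional multimodal test function (Rastrigin-style) with many local minima.
    f = lambda x: x * x - 10 * math.cos(2 * math.pi * x) + 10
    start = lambda: random.uniform(-5.12, 5.12)
    step = lambda x: x + random.gauss(0, 0.1)
    print(multistart_search(f, start, step))
```

The tradeoff the paper analyzes shows up here as the choice of when to cut a run short (the patience rule) versus when to spend the remaining budget on fresh restarts; DMSS replaces these fixed thresholds with rules derived from the HASPLID model.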


Keywords

Pure adaptive search; Sequential search; Stopping criteria


Last updated on 2023-09-26 at 07:35