Computer Science
Permanent URI for this community: https://hdl.handle.net/10413/6769
Browsing Computer Science by Subject "Algorithms."
Now showing 1 - 10 of 10
Item: Application of backpropagation-like generative algorithms to various problems (1992). Powell, Alan Roy.; Sartori-Angus, Alan G.

Artificial neural networks (ANNs) were originally inspired by networks of biological neurons and the interactions present in networks of these neurons. The recent revival of interest in ANNs has again focused attention on their apparent ability to solve difficult problems, such as machine vision, in novel ways. There are many types of ANNs, differing in architecture and learning algorithm, and the list grows annually. This study was restricted to feed-forward architectures and Backpropagation-like (BP-like) learning algorithms. However, it is well known that the learning problem for such networks is NP-complete. Thus generative and incremental learning algorithms, which have various advantages and to which the NP-completeness analysis used for BP-like networks may not apply, were also studied. Various algorithms were investigated and their performance compared. Finally, the better algorithms were applied to a number of problems, including music composition, image binarization, and navigation and goal satisfaction in an artificial environment. These tasks were chosen to investigate different aspects of ANN behaviour. The results, where appropriate, were compared to those from non-ANN methods, and varied from poor to very encouraging.

Item: An assessment of the component-based view for metaheuristic research (2023). Achary, Thimershen.; Pillay, Anban Woolaganathan.; Jembere, Edgar.

Several authors have recently pointed to a crisis within the metaheuristic research field, particularly the proliferation of metaphor-inspired metaheuristics. Common problems identified include the use of non-standard terminology, poor experimental practices, and, most importantly, the introduction of purportedly new algorithms that are only superficially different from existing ones.
These issues make similarity and performance analysis, classification, and metaheuristic generation difficult for both practitioners and researchers. A component-based view of metaheuristics has recently been promoted to deal with these problems. The component-based view argues that metaheuristics are best understood in terms of their constituents, or components. This dissertation presents three papers thematically centred on this view. The central problem for the component-based view is the identification of the components of a metaheuristic. The first paper proposes the use of taxonomies to guide the identification of metaheuristic components. We developed a general and rigorous method, TAXONOG-IMC, that takes an appropriate taxonomy as input and guides the user in identifying components. The method is described in detail, an example application is given, and an analysis of its usefulness is provided. The analysis shows that the method is effective and provides insights that are not possible without proper identification of the components. The second paper argues for formal, mathematically sound representations of metaheuristics, and introduces and defends a formal representation that leverages the component-based view. The third paper demonstrates that a representation technique based on the component-based view can provide the basis for a similarity measure. This paper presents a method for measuring the similarity of two metaheuristic algorithms based on their representations as signal flow diagrams.
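The dissertation's similarity measure operates on signal flow diagram representations; as a much-simplified stand-in for the general idea of comparing metaheuristics by their components, one can represent each algorithm as a set of named components and score overlap. The component names below are illustrative, not taken from the dissertation:

```python
# Toy illustration: comparing two metaheuristics by their components.
# Each algorithm is reduced to a set of named components; similarity is
# the Jaccard index over those sets. This is a simplified stand-in for
# the signal-flow-diagram measure described above.

def jaccard_similarity(components_a, components_b):
    """Return |A ∩ B| / |A ∪ B| for two component sets."""
    a, b = set(components_a), set(components_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical component inventories for two metaheuristics.
pso = {"population", "velocity-update", "global-best-memory", "inertia"}
firefly = {"population", "attraction-move", "global-best-memory", "randomisation"}

print(jaccard_similarity(pso, firefly))  # 2 shared of 6 total ≈ 0.333
```

A set-based view discards ordering and data flow, which is precisely what the signal flow diagram representation retains, so this sketch only conveys the coarsest version of the idea.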
Our findings indicate that the component-based view of metaheuristics provides valuable insights and allows for more robust analysis, classification and comparison.

Item: Automated design of genetic programming classification algorithms (2018). Nyathi, Thambo.; Pillay, Nelishia.

Over the past decades, there has been an increase in the use of evolutionary algorithms (EAs) for data mining and knowledge discovery in a wide range of application domains. Data classification, a real-world application problem, is one of the areas in which EAs have been widely applied. Data classification has been extensively researched, resulting in the development of a number of EA-based classification algorithms. Genetic programming (GP) in particular has been shown to be one of the most effective EAs at inducing classifiers. It is widely accepted that the effectiveness of a parameterised algorithm like GP depends on its configuration. Currently, the design of GP classification algorithms is predominantly performed manually. Manual design follows an iterative trial-and-error approach, which has been shown to be a menial, non-trivial and time-consuming task with a number of vulnerabilities. The research presented in this thesis is part of a large-scale initiative by the machine learning community to automate the design of machine learning techniques. The study investigates the hypothesis that automating the design of GP classification algorithms can still lead to the induction of effective classifiers. This research proposes using two evolutionary algorithms, namely a genetic algorithm (GA) and grammatical evolution (GE), to automate the design of GP classification algorithms. The proof-by-demonstration research methodology is used to achieve the stated objectives. To that end, two systems, a genetic algorithm system and a grammatical evolution system, were implemented for automating the design of GP classification algorithms.
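The automated-design idea described above, a GA searching over GP configurations, can be sketched in toy form. All parameter names are illustrative, and the fitness function is a cheap stand-in for "train a GP classifier with this configuration and return its accuracy", which a real system would evaluate instead:

```python
import random

# Toy sketch: a genetic algorithm searching over *configurations* of a
# hypothetical GP classifier, in the spirit of automated algorithm design.

random.seed(0)

POP_SIZES = [50, 100, 200, 500]
MAX_DEPTHS = [3, 5, 7, 9]

def random_config():
    return {
        "pop_size": random.choice(POP_SIZES),
        "max_depth": random.choice(MAX_DEPTHS),
        "crossover_rate": round(random.uniform(0.5, 1.0), 2),
    }

def fitness(cfg):
    # Stand-in objective: pretend mid-sized populations, deeper trees and
    # high crossover rates give the best classifiers.
    return (cfg["crossover_rate"]
            - abs(cfg["pop_size"] - 200) / 500
            - abs(cfg["max_depth"] - 7) / 10)

def evolve(generations=30, pop_size=20):
    population = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = {k: random.choice([a[k], b[k]]) for k in a}  # uniform crossover
            if random.random() < 0.2:                  # mutation: redraw one gene
                gene = random.choice(list(child))
                child[gene] = random_config()[gene]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best)
```

The thesis's GA and GE systems encode far richer design decisions than three parameters; the sketch only shows the shape of the outer search loop.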
The classification performance of the automatically designed GP classifiers, i.e. GA-designed and GE-designed GP classifiers, was compared to that of manually designed GP classifiers on real-world binary-class and multiclass classification problems. The evaluation was performed on multiple-domain problems obtained from the UCI machine learning repository and on two specific domains, cybersecurity and financial forecasting. The automatically designed classifiers were found to outperform the manually designed GP classifiers on all the problems considered in this study. GP classifiers evolved by GE were found to be suitable for binary classification problems, while those evolved by a GA were found to be suitable for multiclass classification problems. Furthermore, the automated design time was found to be less than the manual design time. Fitness landscape analysis of the design spaces searched by the GA and GE was carried out for all classes of problems considered in this study. Grammatical evolution found the search space to be smoother on binary classification problems, while the GA found multiclass problems to be less rugged than binary-class problems.

Item: The enhanced best performance algorithm for global optimization with applications (2016). Chetty, Mervin.; Adewumi, Aderemi Oluyinka.

Abstract available in PDF file.

Item: Improved roach-based algorithms for global optimization problems (2014). Obagbuwa, Ibidun Christiana.; Adewumi, Aderemi Oluyinka.

Optimization of systems plays an important role in various fields, including mathematics, economics, engineering and the life sciences. Many real-world optimization problems exist across fields of endeavour such as engineering design, space planning, networking, data analysis, logistics management, financial planning and risk management. These problems are constantly increasing in size and complexity, necessitating improved techniques.
Many conventional approaches have failed to solve complex problems effectively due to increasingly large solution spaces. This has led to the development of evolutionary algorithms that draw inspiration from the process of natural evolution. It is believed that nature provides inspiration that can lead to innovative models or techniques for solving complex optimization problems. Among the paradigms based on this inspiration is Swarm Intelligence (SI), one of the recent developments in evolutionary computation. The SI paradigm comprises algorithms inspired by the social behaviour of animals and insects. SI-based algorithms have attracted interest and gained popularity because of their flexibility and versatility, and have been found to be efficient in solving real-world optimization problems. Examples of SI algorithms include Ant Colony Optimization (ACO), inspired by the pheromone trail-following behaviour of ant species; Particle Swarm Optimization (PSO), inspired by the flocking and swarming behaviour of insects and animals; and Bee Colony Optimization (BCO), inspired by bees' food foraging. Recent emerging techniques in SI include Roach-based Algorithms (RBA), motivated by the social behaviour of cockroaches. Two recently introduced RBA algorithms are Roach Infestation Optimization (RIO) and Cockroach Swarm Optimization (CSO), which have been applied to some optimization problems and achieved results competitive with PSO. This study is motivated by the promising results of RBA, which have shown that these algorithms have the potential to be efficient tools for solving optimization problems. Extensive studies of existing RBA carried out in this work revealed shortcomings such as slow convergence and entrapment in local minima. The aim of this study is to overcome these identified drawbacks.
We investigate RBA variants introduced in this work by adding parameters such as a constriction factor and a sigmoid function, which have proved effective for similar evolutionary algorithms in the literature. In addition, components such as vigilance, cannibalism and hunger are incorporated into the existing RBAs. These components are constructed using known techniques such as the simple Euler method, partial differential equations, and crossover and mutation, to speed up convergence and enhance the stability, exploitation and exploration of RBA. Specifically, a stochastic constriction factor was introduced into the existing CSO algorithm to improve its performance and enhance its ability to solve optimization problems involving thousands of variables. The CSO algorithm, originally designed with three components, namely chase-swarming, dispersion and ruthlessness, is extended in this work with a hunger component to improve its searching ability and diversity. Also, predator-prey evolution using crossover and mutation techniques was introduced into the CSO algorithm to create an adaptive search in each iteration, thereby making the algorithm more efficient. To create a discrete version of the CSO algorithm that can be used on optimization problems over any discrete range of values, we introduced the sigmoid function. Furthermore, dynamic step-size adaptation with the simple Euler method was introduced into the existing RIO algorithm, enhancing swarm stability and improving local and global searching abilities. The existing RIO model was also redesigned to include vigilance and cannibalism components. The improved RBA were tested on established global optimization benchmark problems, and the results obtained were compared with those from the literature.
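Two of the ingredients named above, the constriction factor and the sigmoid discretisation, are standard machinery in the swarm literature and can be sketched generically. This is textbook PSO-style material, not the thesis's exact CSO/RIO update rules:

```python
import math
import random

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction coefficient chi, for phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def constricted_velocity(v, x, personal_best, global_best, c1=2.05, c2=2.05):
    """One swarm velocity update damped by the constriction factor."""
    chi = constriction_factor(c1, c2)
    r1, r2 = random.random(), random.random()
    return chi * (v + c1 * r1 * (personal_best - x)
                    + c2 * r2 * (global_best - x))

def sigmoid(v):
    """Squash a real-valued velocity into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def binarise(v):
    """Binary-swarm discretisation: bit is 1 with probability sigmoid(v)."""
    return 1 if random.random() < sigmoid(v) else 0

print(round(constriction_factor(), 4))  # ≈ 0.7298 for phi = 4.1
```

The constriction factor damps velocities so the swarm contracts rather than diverges; the sigmoid maps continuous velocities to bit-flip probabilities, which is the usual route to a discrete swarm algorithm.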
The improved RBA introduced in this work show clear improvements over the existing ones.

Item: The investigation into an algorithm based on wavelet basis functions for the spatial and frequency decomposition of arbitrary signals (1994). Goldstein, Hilton.; Sartori-Angus, Alan G.

The research was directed toward the viability of an O(n) algorithm that could decompose an arbitrary signal (sound, vibration, etc.) into its time-frequency space. The well-known Fourier Transform uses sine and cosine functions (having infinite support on t) as orthonormal basis functions to decompose a signal f(t) in the time domain into F(w) in the frequency domain, where the Fourier coefficients F(w) are the contributions of each frequency in the original signal. Due to the non-local support of these basis functions, a signal containing a sharp localised transient does not have localised coefficients, but rather coefficients that decay slowly. Another problem is that the coefficients F(w) do not convey any time information. The windowed Fourier Transform, or short-time Fourier Transform, does attempt to resolve the latter, but has had limited success. Wavelets are basis functions, usually mutually orthonormal, having finite support in t and therefore spatially local. Using non-orthogonal wavelets, the Dominant Scale Transform (DST) designed by the author decomposes a signal into its approximate time-frequency space. The associated Dominant Scale Algorithm (DSA) has O(n) complexity and is integer-based; these two characteristics make the DSA extremely efficient. The thesis also investigates the problem of converting a music signal into its equivalent music score. The old problem of speech recognition is also examined. The results obtained from the DST are shown to be consistent with those of other authors who have utilised other methods.
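The DST itself is the author's own algorithm and is not reproduced here, but the kind of O(n), spatially local multiscale decomposition described above can be illustrated with the classical Haar wavelet transform:

```python
def haar_transform(signal):
    """Full Haar wavelet decomposition of a length-2^k signal.

    Runs in O(n): each pass processes half as many values as the last,
    so the total work is n/2 + n/4 + ... < n pair operations.
    Returns the coarsest average followed by the detail coefficients,
    ordered coarsest scale first, finest scale last.
    """
    assert len(signal) & (len(signal) - 1) == 0, "length must be a power of two"
    averages = list(signal)
    details = []
    while len(averages) > 1:
        pairs = zip(averages[0::2], averages[1::2])
        averages, level = zip(*[((a + b) / 2, (a - b) / 2) for a, b in pairs])
        averages = list(averages)
        details = list(level) + details   # prepend: coarser scales go first
    return averages + details

coeffs = haar_transform([4, 6, 10, 12, 8, 6, 5, 5])
print(coeffs)  # [7.0, 1.0, -3.0, 1.0, -1.0, -1.0, 1.0, 0.0]
```

Because each Haar basis function has finite support, a sharp transient in the input perturbs only a handful of coefficients, in contrast to the slowly decaying Fourier coefficients described above.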
The resulting DST coefficients are shown to render the DST particularly useful in speech segmentation (silence regions, voiced speech regions and frication). Moreover, the Spectrogram Dominant Scale Transform (SDST), formulated from the DST, was shown to approximate the Fourier coefficients over fixed time intervals within vowel regions of human speech.

Item: On the performance of recent swarm based metaheuristics for the traveling tournament problem (2013). Saul, Sandile Sinethemba.; Adewumi, Aderemi Oluyinka.

Item: On the sample consensus robust estimation paradigm: comprehensive survey and novel algorithms with applications (2016). Olukanmi, Peter Olubunmi.; Adewumi, Aderemi Oluyinka.

This study begins with a comprehensive survey of existing variants of the Random Sample Consensus (RANSAC) algorithm; five new ones are then contributed. RANSAC, arguably the most popular robust estimation algorithm in computer vision, has limitations in accuracy, efficiency and repeatability. Research into techniques for overcoming these drawbacks has been active for about two decades. In the last decade and a half, nearly every year has seen at least one variant published: more than ten in the last two years alone. However, many existing variants compromise two attractive properties of the original RANSAC: simplicity and generality. Some introduce new operations, resulting in loss of simplicity, while many of those that do not introduce new operations require problem-specific priors. In this way, they trade off generality and introduce complexity, as well as dependence on other steps of the application workflow. Noting that these observations may explain the persisting trend of finding only the older, simpler variants in 'mainstream' computer vision software libraries, this work adopts an approach that preserves the two mentioned properties.
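For reference, the baseline RANSAC loop that all these variants modify can be sketched as a toy 2-D line-fitting example. This is generic textbook RANSAC, not any of the new algorithms contributed by the study:

```python
import random

# Textbook RANSAC for 2-D line fitting: repeatedly fit a model to a
# minimal random sample and keep the model with the largest consensus set.

def fit_line(p, q):
    """Line through two points as (slope, intercept); assumes p.x != q.x."""
    (x1, y1), (x2, y2) = p, q
    m = (y2 - y1) / (x2 - x1)
    return m, y1 - m * x1

def ransac_line(points, iterations=200, threshold=0.5, seed=1):
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iterations):
        p, q = rng.sample(points, 2)        # minimal sample for a line
        if p[0] == q[0]:
            continue                        # skip degenerate (vertical) pairs
        m, c = fit_line(p, q)
        inliers = [(x, y) for x, y in points
                   if abs(y - (m * x + c)) <= threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers

# Points on y = 2x + 1, plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -30)]
(m, c), inliers = ransac_line(pts)
print(m, c, len(inliers))  # recovers m = 2, c = 1 with 10 inliers
```

The drawbacks the study targets are visible even here: the loop's accuracy and repeatability depend on the random sampling, and the iteration count and threshold are user-supplied parameters.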
Modification of the original algorithm is restricted to replacement of the search strategy, since many drawbacks of RANSAC are consequences of the search strategy it adopts. A second constraint, serving to preserve generality, is that this 'ideal' strategy must require no problem-specific priors. Such a strategy is developed and reported in this dissertation. Another limitation, yet to be overcome in the literature but successfully addressed in this study, is the inherent variability of RANSAC. A few theoretical discoveries are presented, providing insights into the generic robust estimation problem. Notably, a theorem proposed as an original contribution of this research reveals insights that are foundational to the newly proposed algorithms. Experiments on both generic and computer-vision-specific data show that all the proposed algorithms are generally more accurate and more consistent than RANSAC. Moreover, they are simpler in the sense that they do not require some of RANSAC's input parameters. Interestingly, although non-exhaustive in search like typical RANSAC-like algorithms, three of the new algorithms exhibit absolute non-randomness, a property not claimed by any existing variant. One of the proposed algorithms is fully automatic, eliminating all requirements for user-supplied input parameters. Two of the proposed algorithms are implemented as contributed alternatives to the homography estimation function provided in MATLAB's computer vision toolbox, after being shown to improve on the performance of M-estimator Sample Consensus (MSAC), which has been the choice in all releases of the toolbox, including the latest, 2015b. While this research is motivated by computer vision applications, the proposed algorithms, being generic, can be applied to any model-fitting problem in other scientific fields.

Item: Planarity testing and embedding algorithms (1990). Carson, D. I.; Oellermann, Ortrud Ruth.

This thesis deals with several aspects of planar graphs, and some of the problems associated with non-planar graphs. Chapter 1 introduces some of the fundamental notation and tools used in the remainder of the thesis. Graphs serve as useful models of electronic circuits, and it is often of interest to know whether a given electronic circuit has a layout in the plane so that no two wires cross. In Chapter 2, three efficient algorithms are described for determining whether a given 2-connected graph (which may model such a circuit) is planar. The first planarity testing algorithm uses a path addition approach; although efficient, it does not have linear complexity. The second has linear complexity and uses a recursive fragment addition technique. The last also has linear complexity, and relies on a relatively new data structure called the PQ-tree, which has several important applications to planar graphs; this algorithm uses a vertex addition technique. Chapter 3 further develops the idea of modelling an electronic circuit using a graph. Knowing that a given electronic circuit may be placed in the plane with no wires crossing is often insufficient; some electronic circuits have in excess of 100 000 nodes, so obtaining a description of such a layout is important. In Chapter 3 we study two algorithms for obtaining such a description, both of which rely on the PQ-tree data structure. The first algorithm determines a rotational embedding of a 2-connected graph. Given a rotational embedding of a 2-connected graph, the second algorithm determines whether a convex drawing of the graph is possible; if so, the convex drawing is output. In Chapter 4, we concern ourselves with graphs that have failed a planarity test of Chapter 2.
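A quick sanity check related to the planarity tests above: Euler's formula yields the necessary (but not sufficient) edge-count conditions m <= 3n - 6 for simple planar graphs with n >= 3 vertices, and m <= 2n - 4 when the graph is also triangle-free, which already suffices to reject the Kuratowski graphs K5 and K3,3:

```python
# Necessary conditions for planarity derived from Euler's formula.
# Passing the bound does NOT certify planarity; failing it rules it out.

def may_be_planar(n, m, triangle_free=False):
    """Return False if the edge count already rules out planarity."""
    if n < 3:
        return True
    bound = 2 * n - 4 if triangle_free else 3 * n - 6
    return m <= bound

# K5: n = 5, m = 10; bound is 3*5 - 6 = 9, so K5 cannot be planar.
print(may_be_planar(5, 10))                      # False
# K3,3: n = 6, m = 9; triangle-free bound is 2*6 - 4 = 8, so K3,3 cannot be planar.
print(may_be_planar(6, 9, triangle_free=True))   # False
# The bound alone cannot certify planarity, only fail to rule it out.
print(may_be_planar(4, 6))                       # True (K4 is in fact planar)
```

Deciding planarity exactly requires algorithms such as the path addition, fragment addition and PQ-tree vertex addition methods the thesis describes; this edge-count test is only a cheap pre-filter.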
This is of particular importance, since complex electronic circuits often do not admit a layout in the plane. We study three different approaches to the problem of an electronic circuit modelled by a non-planar graph, all of which use the PQ-tree data structure: an algorithm for finding an upper bound on the thickness of a graph, an algorithm for determining the subgraphs of a non-planar graph which are subdivisions of the Kuratowski graphs K5 and K3,3, and, lastly, a new algorithm for finding an upper bound on the genus of a non-planar graph.

Item: Studies in particle swarm optimization technique for global optimization (2013). Martins, Arasomwan Akugbe.; Adewumi, Aderemi Oluyinka.

Abstract available in the digital copy.
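The particle swarm optimization technique named in the final item can be sketched in its generic textbook form; the specific PSO variants studied in that thesis are not reproduced here:

```python
import random

# Generic (textbook) PSO minimising the sphere function f(x) = sum(x_i^2).

def pso(dim=2, n_particles=20, iterations=100,
        w=0.72, c1=1.49, c2=1.49, seed=3):
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    gbest = min(pbest, key=f)[:]                # swarm-wide best position
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest, f(gbest)

best, value = pso()
print(value)  # converges very close to 0 on this smooth unimodal function
```

The inertia weight w and the cognitive/social coefficients c1 and c2 are the standard tuning knobs; much of the PSO literature, including the thesis above, concerns how to choose or adapt them.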