Browsing by Author "Moodley, Deshendran."
Now showing 1 - 15 of 15

Item Artificial neural networks for image recognition: a study of feature extraction methods and an implementation for handwritten character recognition. (1996) Moodley, Deshendran.; Ram, Vevek.; Haines, Linda Margaret.
The use of computers for digital image recognition has become quite widespread. Applications include face recognition, handwriting interpretation and fingerprint analysis. A feature vector whose dimension is much lower than the original image data is used to represent the image. This removes redundancy from the data and drastically cuts the computational cost of the classification stage. The most important criterion for the extracted features is that they must retain as much as possible of the discriminatory information present in the original data. Feature extraction methods which have been used with neural networks are moment invariants, Zernike moments, Fourier descriptors, Gabor filters and wavelets. These, together with the Neocognitron, which incorporates feature extraction within a neural network architecture, are described, and two methods, Zernike moments and the Neocognitron, are chosen to illustrate the role of feature extraction in image recognition.

Item Debugging and repair of description logic ontologies. (2010) Moodley, Kodylan.; Meyer, Thomas Andreas.; Moodley, Deshendran.; Varzinczak, Ivan.
In logic-based Knowledge Representation and Reasoning (KRR), ontologies are used to represent knowledge about a particular domain of interest in a precise way. The building blocks of ontologies include concepts, relations and objects. These can be combined to form logical sentences which explicitly describe the domain. With this explicit knowledge one can perform reasoning to derive knowledge that is implicit in the ontology. Description Logics (DLs) are a group of knowledge representation languages with such capabilities that are suitable for representing ontologies. The process of building ontologies has been greatly simplified with the advent of graphical ontology editors such as SWOOP, Protégé and OntoStudio. The result of this is that there are a growing number of ontology engineers attempting to build and develop ontologies. It is frequently the case that errors are introduced while constructing the ontology, resulting in undesirable pieces of implicit knowledge that follow from the ontology. As such, there is a need to extend current ontology editors with tool support to aid these ontology engineers in correctly designing and debugging their ontologies. Errors such as unsatisfiable concepts and inconsistent ontologies frequently occur during ontology construction. Ontology debugging and repair is concerned with helping the ontology developer to eliminate these errors from the ontology. Much emphasis in current tools has been placed on giving explanations as to why these errors occur in the ontology. Less emphasis has been placed on using this information to suggest efficient ways to eliminate the errors. Furthermore, these tools focus mainly on the errors of unsatisfiable concepts and inconsistent ontologies. In this dissertation we fill an important gap in the area by contributing an alternative approach to ontology debugging and repair for the more general error of a list of unwanted sentences. Errors such as unsatisfiable concepts and inconsistent ontologies can be represented as unwanted sentences in the ontology. Our approach considers not only the explanation of the unwanted sentences but also the identification of repair strategies to eliminate these unwanted sentences from the ontology.
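
As a rough illustration of the debugging task described in the preceding entry, the Python sketch below uses the owlready2 library (not the tooling developed in the dissertation) to load an ontology, run a reasoner and list unsatisfiable concepts; the ontology file name is a placeholder.

    # Minimal sketch: detect unsatisfiable concepts in an OWL ontology.
    # Assumes owlready2 is installed and a Java runtime is available for
    # its bundled HermiT reasoner; "example.owl" is a placeholder file.
    from owlready2 import get_ontology, sync_reasoner, default_world

    onto = get_ontology("file://example.owl").load()
    with onto:
        sync_reasoner()  # classify; unsatisfiable classes become equivalent to Nothing

    # Classes inferred to be equivalent to owl:Nothing are unsatisfiable;
    # each one is a candidate error to explain and repair.
    for cls in default_world.inconsistent_classes():
        print("Unsatisfiable concept:", cls)

Listing the unsatisfiable concepts is only the explanation side of the problem; identifying repair strategies for them is the gap the dissertation addresses.
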
Item An evaluation of depth camera-based hand pose recognition for virtual reality systems. (2018) Clark, Andrew William.; Moodley, Deshendran.; Pillay, Anban Woolaganathan.
Camera-based hand gesture recognition for interaction in virtual reality systems promises to provide a more immersive and less distracting means of input than the usual hand-held controllers. Owing to a lack of research in this area, it is unknown whether a camera can effectively distinguish hand poses made in a virtual reality environment. This research explores and measures the effectiveness of static hand pose input with a depth camera, specifically the Leap Motion controller, for user interaction in virtual reality applications. A pose set was derived by analyzing existing gesture taxonomies and Leap Motion controller-based virtual reality applications, and a dataset of these poses was constructed using data captured by twenty-five participants. Experiments on the dataset utilizing three popular machine learning classifiers were not able to classify the poses with a high enough accuracy, primarily due to occlusion issues affecting the input data. Therefore, a significantly smaller subset was empirically derived using a novel algorithm, which utilized a confusion matrix from the machine learning experiments as well as a table of Hamming distances between poses. This improved the recognition accuracy to above 99%, making this set more suitable for real-world use. It is concluded that while camera-based pose recognition can be reliable on a small set of poses, finger occlusion hinders the use of larger sets. Thus, alternative approaches, such as multiple input cameras, should be explored as a potential solution to the occlusion problem.

Item Gabor filter parameter optimization for multi-textured images: a case study on water body extraction from satellite imagery. (2012) Pillay, Maldean.; Moodley, Deshendran.
The analysis and identification of texture is a key area in image processing and computer vision. One of the most prominent texture analysis algorithms is the Gabor filter. These filters are used by convolving an image with a family of self-similar filters or wavelets through the selection of a suitable number of scales and orientations, which aid in the identification of textures of differing coarseness and directions respectively. While extensively used in a variety of applications, including biometrics such as iris and facial recognition, their effectiveness depends largely on the manual selection of different parameter values, i.e. the centre frequency, the number of scales and orientations, and the standard deviations. Previous studies have been conducted on how to determine optimal values; however, the results are sometimes inconsistent and even contradictory. Furthermore, the selection of the mask size and tile size used in the convolution process has received little attention, presumably since they are image-set dependent. This research attempts to verify specific claims made in previous studies about the influence of the number of scales and orientations, but also to investigate the variation of the filter mask size and tile size for water body extraction from satellite imagery. Optical satellite imagery may contain texture samples that are conceptually the same (belong to the same class) but are structurally different, or differ due to changes in illumination, i.e. a texture may appear completely different when the intensity or position of a light source changes. A systematic testing of the effects of varying the parameter values on optical satellite imagery is conducted. Experiments are designed to verify claims made about the influence of varying the scales and orientations within predetermined ranges, but also to show the considerable changes in classification accuracy when varying the filter mask and tile size. Heuristic techniques such as Genetic Algorithms (GAs) can be used to find optimum solutions in application domains where an enumeration approach is not feasible. Hence, the effectiveness of a GA to automate the process of determining optimum Gabor filter parameter values for a given image dataset is also investigated. The results of the research can be used to facilitate the selection of Gabor filter parameters for applications that involve multi-textured image segmentation or classification, and specifically to guide the selection of appropriate filter mask and tile sizes for automated analysis of satellite imagery.
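
To make the Gabor parameter space concrete, here is a minimal Python sketch of the filter-bank convolution step using OpenCV's getGaborKernel. The scale and orientation counts, the mask size and the remaining parameter values are illustrative assumptions, not values recommended by the thesis.

    # Minimal Gabor filter bank sketch: one response image per
    # (scale, orientation) pair. All parameter values are illustrative.
    import cv2
    import numpy as np

    img = cv2.imread("tile.png", cv2.IMREAD_GRAYSCALE)  # placeholder image tile

    mask_size = 31                     # filter mask size (image-set dependent)
    n_scales, n_orients = 4, 6
    responses = []
    for s in range(n_scales):
        lambd = 8.0 * (2 ** s)         # wavelength grows with scale
        for o in range(n_orients):
            theta = o * np.pi / n_orients  # orientation in radians
            kernel = cv2.getGaborKernel(
                (mask_size, mask_size), sigma=0.56 * lambd,
                theta=theta, lambd=lambd, gamma=0.5, psi=0)
            responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))

    # Per-tile texture features: mean and standard deviation of each response.
    features = [stat for r in responses for stat in (r.mean(), r.std())]

A GA, as investigated in the thesis, would search over exactly these free parameters (scales, orientations, mask size, tile size) rather than fixing them by hand.
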
Item An investigation of multi-label classification techniques for predicting HIV drug resistance in resource-limited settings. (2014) Brandt, Pascal.; Moodley, Deshendran.; Pillay, Anban Woolaganathan.; Seebregts, Christopher.; De Oliveira, Tulio De Paiva Nazareth Andrade.
South Africa has one of the highest HIV infection rates in the world, with more than 5.6 million infected people, and consequently has the largest antiretroviral treatment program, with more than 1.5 million people on treatment. The development of drug resistance is a major factor impeding the efficacy of antiretroviral treatment. While genotype resistance testing (GRT) is the standard method to determine resistance, access to these tests is limited in resource-limited settings. This research investigates the efficacy of multi-label machine learning techniques at predicting HIV drug resistance from routine treatment and laboratory data. Six techniques, namely binary relevance, HOMER, MLkNN, predictive clustering trees (PCT), RAkEL and ensemble of classifier chains (ECC), have been tested and evaluated on data from medical records of patients enrolled in an HIV treatment failure clinic in rural KwaZulu-Natal in South Africa. The performance is measured using five scalar evaluation measures and receiver operating characteristic (ROC) curves. The techniques were found to provide useful predictive information in most cases. The PCT and ECC techniques perform best and have true positive prediction rates of 97% and 98% respectively for specific drugs. The ECC method also achieved an AUC value of 0.83, which is comparable to the current state of the art. All models have been validated using 10-fold cross-validation and show increased performance when additional data is added. In order to make use of these techniques in the field, a tool is presented that may, with small modifications, be integrated into public HIV treatment programs in South Africa and could assist clinicians to identify patients with a high probability of drug resistance.
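
Of the six techniques, binary relevance is the simplest to reproduce: one independent binary classifier is trained per drug. A minimal scikit-learn sketch, with synthetic data standing in for the clinical records (which are not public), might look as follows.

    # Binary relevance sketch: one binary classifier per label (drug).
    # Synthetic data stands in for the non-public clinical records.
    import numpy as np
    from sklearn.datasets import make_multilabel_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.multioutput import MultiOutputClassifier

    X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                          n_classes=5, random_state=0)
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

    # MultiOutputClassifier fits one independent forest per drug label.
    model = MultiOutputClassifier(RandomForestClassifier(random_state=0))
    model.fit(X_tr, Y_tr)

    # Per-label AUC, analogous to the per-drug ROC analysis in the thesis.
    probs = np.column_stack([p[:, 1] for p in model.predict_proba(X_te)])
    for j in range(Y_te.shape[1]):
        print(f"label {j}: AUC = {roc_auc_score(Y_te[:, j], probs[:, j]):.2f}")

The more elaborate techniques evaluated in the dissertation (HOMER, RAkEL, ECC and so on) build on this same per-label setup but model dependencies between the drug labels.
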
Item A knowledge-based system for automated discovery of ecological interactions in flower-visiting data. (2017) Coetzer, Willem Gabriël.; Moodley, Deshendran.; Gerber, Aurona Jacoba.
Studies on the community ecology of flower-visiting insects, which can be inferred to pollinate flowers, are important in agriculture and nature conservation. Many scientific observations of flower-visiting insects are associated with digitized records of insect specimens preserved in natural history collections. Specimen annotations include heterogeneous and incomplete in situ field documentation of ecologically significant relationships between individual organisms (i.e. insects and plants), which is nevertheless potentially valuable. A wealth of unrepresented biodiversity and ecological knowledge can be unlocked from such detailed data by augmenting the data with expert knowledge encoded in knowledge models. An analysis of the knowledge representation requirements of flower-visiting community ecologists is presented, as well as an implementation and evaluation of a prototype knowledge-based system for automated semantic enrichment, semantic mediation and interpretation of flower-visiting data. A novel component of the system is a semantic architecture which incorporates knowledge models validated by experts. The system combines ontologies and a Bayesian network to enrich, integrate and interpret flower-visiting data, specifically to discover ecological interactions in the data. The system's effectiveness in acquiring and representing expert knowledge, and in simulating the inferencing ability of expert flower-visiting ecologists, is evaluated and discussed. The knowledge-based system will allow a novice ecologist to use standardised semantics to construct interaction networks automatically and objectively. This could be useful, inter alia, when comparing interaction networks for different periods of time at the same place, or for different places at the same time. While the system architecture encompasses three levels of biological organization, data provenance can be traced back to occurrences of individual organisms preserved as evidence in natural history collections. The potential impact of the semantic architecture could be significant in the field of biodiversity and ecosystem informatics, because ecological interactions are important in applied ecological studies, e.g. in freshwater biomonitoring or animal migration.

Item Leaf recognition for accurate plant classification. (2017) Kala, Jules Raymond.; Viriri, Serestina.; Moodley, Deshendran.
Plants are the most important living organisms on our planet because they are sources of energy and protect our planet against global warming. Botanists were the first scientists to design techniques for plant species recognition using leaves. Although many techniques for plant recognition using leaf images have been proposed in the literature, the precision and the quality of feature descriptors for shape, texture and color remain the major challenges. This thesis investigates the precision of geometric shape feature extraction and improves the determination of the Minimum Bounding Rectangle (MBR). The proposed improved MBR determination method is compared to Chaudhuri's method using the Mean Absolute Error (MAE) generated by each method at each edge point of the MBR. At the top-left point of the determined MBR, Chaudhuri's method has an MAE value of 26.37, while the proposed method has an MAE value of 8.14. This thesis also investigates the use of the Convexity Measure of Polygons for the characterization of the degree of convexity of a given leaf shape. Promising results are obtained when using the Convexity Measure of Polygons combined with other geometric features to characterize leaf images, and a classification rate of 92% was obtained with a Multilayer Perceptron Neural Network classifier. After observing the limitations of the Convexity Measure of Polygons, a new shape feature called the Convexity Moments of Polygons is presented in this thesis. This new feature has the invariance properties of the Convexity Measure of Polygons, but is more precise because it uses more than one value to characterize the degree of convexity of a given shape. Promising results are obtained when using the Convexity Moments of Polygons combined with other geometric features to characterize leaf images, and a classification rate of 95% was obtained with the Multilayer Perceptron Neural Network classifier. Leaf boundaries carry valuable information that can be used to distinguish between plant species. In this thesis, a new boundary-based shape characterization method called the Sinuosity Coefficients is proposed. The underlying sinuosity measure has been used in many fields of science, for example in geography to describe river meandering. The Sinuosity Coefficients are scale and translation invariant. Promising results are obtained when using the Sinuosity Coefficients combined with other geometric features to characterize leaf images, and a classification rate of 80% was obtained with the Multilayer Perceptron Neural Network classifier. Finally, this thesis implements a model for plant classification using leaf images, where an input leaf image is described using the Convexity Moments, the Sinuosity Coefficients and the geometric features to generate a feature vector for the recognition of plant species using a Radial Basis Neural Network. With the model designed and implemented, an overall classification rate of 97% was obtained.
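
Two of the shape measurements mentioned above can be sketched with OpenCV: a minimum bounding rectangle for a leaf contour and a simple area-based convexity ratio. Note that cv2.minAreaRect and the area ratio below are generic stand-ins, not the improved MBR method or the Convexity Measure of Polygons developed in the thesis.

    # Generic stand-ins for two leaf-shape measurements: a minimum
    # bounding rectangle and an area-based convexity ratio.
    import cv2

    img = cv2.imread("leaf.png", cv2.IMREAD_GRAYSCALE)   # placeholder image
    _, binary = cv2.threshold(img, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    leaf = max(contours, key=cv2.contourArea)            # largest blob = leaf

    # Minimum bounding rectangle: (centre, (width, height), angle).
    (cx, cy), (w, h), angle = cv2.minAreaRect(leaf)

    # Area-based convexity: shape area / convex hull area, in (0, 1].
    hull = cv2.convexHull(leaf)
    convexity = cv2.contourArea(leaf) / cv2.contourArea(hull)

    # Simple geometric features such as these feed the classifier stage.
    print(f"MBR {w:.0f}x{h:.0f} at {angle:.1f} deg, convexity {convexity:.3f}")
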
Item Ontology driven multi-agent systems : an architecture for sensor web applications. (2009) Moodley, Deshendran.; Tapamo, Jules-Raymond.; Kinyua, Johnson D. M.
Advances in sensor technology and space science have resulted in the availability of vast quantities of high quality earth observation data. This data can be used for monitoring the earth and to enhance our understanding of natural processes. Sensor Web researchers are working on constructing a worldwide computing infrastructure that enables dynamic sharing and analysis of complex heterogeneous earth observation data sets. Key challenges that are currently being investigated include data integration; service discovery, reuse and composition; semantic interoperability; and system dynamism. Two emerging technologies that have shown promise in dealing with these challenges are ontologies and software agents. This research investigates how these technologies can be integrated into an Ontology Driven Multi-Agent System (ODMAS) for the Sensor Web. The research proposes an ODMAS framework and an implemented middleware platform, i.e. the Sensor Web Agent Platform (SWAP). SWAP deals with ontology construction, ontology use, and agent-based design, implementation and deployment. It provides a semantic infrastructure, an abstract architecture, an internal agent architecture and a Multi-Agent System (MAS) middleware platform. Distinguishing features include: the incorporation of Bayesian networks to represent and reason about uncertain knowledge; ontologies to describe system entities such as agent services, interaction protocols and agent workflows; and a flexible adapter-based MAS platform that facilitates agent development, execution and deployment. SWAP aims to guide and ease the design, development and deployment of dynamic alerting and monitoring applications. The efficacy of SWAP is demonstrated by two satellite image processing applications, viz. wildfire detection and informal settlement monitoring. This approach can provide significant benefits to a wide range of Sensor Web users. These include: developers, for deploying agents and agent-based applications; end users, for accessing, managing and visualising information provided by real-time monitoring applications; and scientists, who can use the Sensor Web as a scientific computing platform to facilitate knowledge sharing and discovery. An Ontology Driven Multi-Agent Sensor Web has the potential to forever change the way in which geospatial data and knowledge are accessed and used. This research describes this far-reaching vision, identifies key challenges and provides a first step towards the vision.
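
SWAP's use of Bayesian networks to reason about uncertain knowledge can be sketched with the pgmpy library (assuming a version that exposes BayesianNetwork). The two-node wildfire model below, including its probabilities, is invented purely to show the represent-and-query pattern.

    # Toy Bayesian network in pgmpy: does an observed thermal hotspot
    # indicate a wildfire? Structure and probabilities are invented.
    # Assumes a pgmpy release exposing BayesianNetwork (other releases
    # name this class BayesianModel or DiscreteBayesianNetwork).
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    model = BayesianNetwork([("Fire", "Hotspot")])
    model.add_cpds(
        TabularCPD("Fire", 2, [[0.99], [0.01]]),      # P(no fire), P(fire)
        TabularCPD("Hotspot", 2,                      # P(hotspot | fire)
                   [[0.95, 0.20],                     # no hotspot observed
                    [0.05, 0.80]],                    # hotspot observed
                   evidence=["Fire"], evidence_card=[2]))
    assert model.check_model()

    # Agent-style query: update belief in fire given a hotspot observation.
    posterior = VariableElimination(model).query(["Fire"],
                                                 evidence={"Hotspot": 1})
    print(posterior)
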
Item An ontology-driven approach for structuring scientific knowledge for predicting treatment adherence behaviour: a case study of tuberculosis in Sub-Saharan African communities. (2016) Ogundele, Olukunle Ayodeji.; Moodley, Deshendran.; Pillay, Anban Woolaganathan.; Seebregts, Christopher.
Poor adherence to prescribed treatment is a complex phenomenon and has been identified as a major contributor to patients developing drug resistance and failing treatment in sub-Saharan African countries. Treatment adherence behaviour is influenced by diverse personal, cultural and socio-economic factors that may vary drastically between communities in different regions. Computer-based predictive models can be used to identify individuals and communities at risk of non-adherence and aid in supporting resource allocation and intervention planning in disease control programs. However, constructing effective predictive models is challenging and requires detailed expert knowledge to identify factors and determine their influence on treatment adherence in specific communities. While many clinical studies and abstract conceptual models exist in the literature, there is no known concrete, unambiguous and comprehensive computer-based conceptual model that categorises factors that influence treatment adherence behaviour. The aim of this research was to develop an ontology-driven approach for structuring knowledge of factors that influence treatment adherence behaviour and for constructing adherence risk prediction models for specific communities. Tuberculosis treatment adherence in sub-Saharan Africa was used as a case study to explore and validate the approach. The approach provides guidance for knowledge acquisition, for building a comprehensive conceptual model, for its formalisation into an OWL ontology, and for the generation of probabilistic risk prediction models. The ontology was evaluated for its comprehensiveness and correctness, and for its effectiveness in constructing Bayesian decision networks for predicting adherence risk. The approach introduces a novel knowledge acquisition step that guides the capturing of influencing factors from peer-reviewed clinical studies and the scientific literature. Furthermore, the ontology takes an evidence-based approach by explicitly relating each factor to published clinical studies, an important consideration for health practitioners. The approach was shown to be effective in constructing a flexible and extendable ontology and in automatically generating the structure of a Bayesian decision network, a crucial step towards automated, computer-based prediction of adherence risk for individuals in specific communities.
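
The generation step, turning an ontology's factor hierarchy into the structure of a Bayesian decision network, can be sketched as a traversal that emits one arc per influencing factor. The class names below (InfluencingFactor, AdherenceRisk) and the ontology file are hypothetical, and the actual approach is considerably richer.

    # Hypothetical sketch: derive Bayesian-network arcs from an ontology's
    # hierarchy of influencing factors. Class names are invented and the
    # ontology file is a placeholder.
    from owlready2 import get_ontology

    onto = get_ontology("file://adherence.owl").load()

    def leaf_factors(cls):
        """Yield leaf subclasses, i.e. the concrete influencing factors."""
        subs = list(cls.subclasses())
        if not subs:
            yield cls
        for sub in subs:
            yield from leaf_factors(sub)

    # One arc per factor: Factor -> AdherenceRisk. A fuller generator
    # would also read inter-factor relations and evidence annotations.
    arcs = [(f.name, "AdherenceRisk")
            for f in leaf_factors(onto.InfluencingFactor)]
    print(arcs)
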
Item The open health information mediator : an architecture for enabling interoperability in low to middle income countries. (2015) Crichton, Ryan.; Pillay, Anban Woolaganathan.; Moodley, Deshendran.
Interoperability and system integration are central problems that limit the effective use of health information systems to improve the efficiency and effectiveness of health service delivery. There is currently no proven technology that provides a general solution in low- and middle-income countries, where the challenges are especially acute. Engineering health information systems in low-resource environments presents several challenges, which include poor infrastructure, skills shortages, fragmented and piecemeal applications deployed and managed by multiple organisations, as well as low levels of resourcing. An important element of modern solutions to these problems is a health information exchange that enables disparate systems to share health information. It is a challenging task to develop systems as complex as health information exchanges that will have wide applicability in low- and middle-income countries. This work takes a case study approach, using the development of a health information exchange in Rwanda as the case study. This research reports on the design, implementation and analysis of an architecture, the Health Information Mediator, that is a central component of a health information exchange. While such architectures have been used successfully in high-income countries, their efficacy has not been demonstrated in low- and middle-income countries. The Rwandan case study was used to understand and identify the challenges and requirements for health information exchange in low- and middle-income countries. These requirements were used to derive a set of key concerns for the architecture that were then used to drive its design. Novel features of the architecture include: the ability to mediate messages at both the service provider and service consumer interfaces; support for multiple internal representations of messages to facilitate the adoption of new and evolving standards; and the provision of a general method for mediating health information exchange transactions that is agnostic of the type of transaction. The architecture is shown to satisfy the key concerns and was validated by implementing and deploying a reference application, the OpenHIM, within the Rwandan health information exchange. The architecture is also analysed using the Architecture Trade-off Analysis Method. It has also been successfully implemented in other low- and middle-income countries with relatively minor configuration changes, which demonstrates the architecture's generalisability.
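
The core mediation idea, translating each incoming message into a canonical internal representation and then rendering it for the receiving system, can be reduced to a few lines of Python. The field names and formats below are invented; the real OpenHIM mediates standards-based transactions over a full pipeline.

    # Invented sketch of the mediation pattern: consumer format ->
    # canonical internal representation -> provider format. Field names
    # are made up; real exchanges use standards such as HL7.
    def to_internal(msg: dict) -> dict:
        """Normalise a consumer message into the canonical representation."""
        return {"patient_id": msg["pid"],
                "event": msg["type"],
                "timestamp": msg["ts"]}

    def to_provider(internal: dict) -> dict:
        """Render the canonical representation for a provider system."""
        return {"subject": internal["patient_id"],
                "code": internal["event"].upper(),
                "recorded": internal["timestamp"]}

    # Mediating a transaction is a composition of the two adapters.
    incoming = {"pid": "P123", "type": "visit", "ts": "2015-06-01T10:00"}
    print(to_provider(to_internal(incoming)))

The design pay-off of the internal representation is that supporting a new or evolving standard means writing another adapter pair, not another end-to-end pipeline.
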
Item Q-Cog: a Q-Learning based cognitive agent architecture for complex 3D virtual worlds. (2017) Waltham, Michael.; Moodley, Deshendran.; Pillay, Anban Woolaganathan.
Intelligent cognitive agents should be able to autonomously gather new knowledge and learn from their own experiences in order to adapt to a changing environment. 3D virtual worlds provide complex environments in which autonomous software agents may learn and interact. In many applications within this domain, such as video games and virtual reality, the environment is partially observable and agents must make decisions and react in real time. Due to the dynamic nature of virtual worlds, adaptability is of great importance for virtual agents. The Reinforcement Learning paradigm provides a mechanism for unsupervised learning that allows agents to learn from their own experiences in the environment. In particular, the Q-Learning algorithm allows agents to develop an optimal action-selection policy based on their experiences. This research explores the adaptability of cognitive architectures that use Reinforcement Learning to construct and maintain a library of action-selection policies. The proposed cognitive architecture, Q-Cog, utilizes a policy selection mechanism to develop adaptable 3D virtual agents. Results from experimentation indicate that Q-Cog provides an effective basis for developing adaptive self-learning agents for 3D virtual worlds.
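
At the heart of Q-Cog's learning mechanism is the standard Q-Learning update. The generic tabular sketch below abstracts away the 3D virtual-world interface; the action names and hyperparameters are illustrative.

    # Generic tabular Q-Learning sketch; the environment interface is an
    # abstraction, not Q-Cog's actual 3D virtual-world interface.
    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1                 # illustrative
    ACTIONS = ["move", "turn", "attack", "flee"]           # illustrative
    Q = defaultdict(float)                                 # Q[(state, action)]

    def choose_action(state):
        """Epsilon-greedy action selection over the learned policy."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        """One step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])

Q-Cog's distinguishing contribution sits above this loop: a library of such learned policies and a mechanism for selecting between them as the environment changes.
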
Item Satellite remote sensing of particulate matter and air quality assessment in the Western Cape, South Africa. (2016) Padayachi, Yerdashin Rajendran.; Moodley, Deshendran.
Particulate Matter (PM) is a health risk, even at low ambient concentrations in the atmosphere. The analysis of ambient PM is important in air quality management in South Africa in order to suggest recommendations for pollution abatement. However, the cost to monitor or to model surface concentrations is high. Satellite remote sensing retrievals of Aerosol Optical Depth (AOD) are cost-effective and have been used in conjunction with surface measurements of PM concentrations for regional air quality studies. The aim of the study was to determine the extent to which AOD could be used as a proxy for air quality analysis of PM pollution in the Western Cape, South Africa. Surface concentrations of particles with diameter 10 μm or less (PM10) measured at Air Quality Monitoring (AQM) stations in George and Malmesbury in 2011 were evaluated using temporal air quality analysis. The AOD was retrieved from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard the Terra and Aqua satellites. Temporal trends of the AOD over the Malmesbury and George AQM stations were determined and the extent of the AOD-PM10 relationship quantified through statistical correlation. Additionally, meteorological parameters measured at the AQM stations, including wind speed, temperature, rainfall and relative humidity, were included in the study and their impact on AOD-PM10 trends was analysed. The annual AOD-PM10 correlations over Malmesbury in 2011 ranged between 0.24 and 0.36, while the correlations over George ranged between 0.24 and 0.34. A temporal mismatch was observed between seasonal PM10 concentrations and AOD at both sites. The AOD-PM10 relationship over Malmesbury and George was weak, suggesting that the AOD cannot easily be used as a proxy within the air quality analysis of PM10 concentrations measured at the Malmesbury and George AQM stations. Specific meteorological conditions were found to be important confounding factors when observing AOD and PM10 trends. In spite of a few weaknesses in current satellite data products identified in this analysis, this study showed that improvements can be made to the use of satellite aerosol remote sensing as a proxy for ground-level PM10 mass concentration by addressing the meteorological confounders of the AOD-PM10 relationship.

Item Scenario testing using OWL. (2015) Harmse, Hendrina Francina.; Britz, Katarina.; Gerber, Aurona Jacoba.; Moodley, Deshendran.
Abstract available in PDF file.

Item A semantic sensor web framework for proactive environmental monitoring and control. (2017) Adeleke, Jude Adekunle.; Moodley, Deshendran.; Rens, Gavin Brian.; Adewumi, Aderemi Oluyinka.
Observing and monitoring the natural and built environments is crucial for maintaining and preserving human life. Environmental monitoring applications typically incorporate some sensor technology to continually observe specific features of interest in the physical environment and transmit data emanating from these sensors to a computing system for analysis. Semantic Sensor Web technology supports semantic enrichment of sensor data and provides expressive analytic techniques for data fusion, situation detection and situation analysis. Despite the promising successes of Semantic Sensor Web technology, current Semantic Sensor Web frameworks are typically focused on developing applications for detecting and reacting to situations detected from current or past observations. While these reactive applications provide a quick response to detected situations to minimize adverse effects, they are limited when it comes to anticipating future adverse situations and determining proactive control actions to prevent or mitigate these situations. Most current Semantic Sensor Web frameworks lack two essential mechanisms required to achieve proactive control, namely, mechanisms for anticipating the future and coherent mechanisms for consistent decision processing and planning. Designing and developing proactive monitoring and control Semantic Sensor Web applications is challenging. It requires incorporating and integrating different techniques for supporting situation detection, situation prediction, decision making and planning in a coherent framework. This research proposes a coherent Semantic Sensor Web framework for proactive monitoring and control. It incorporates ontologies to facilitate situation detection from streaming sensor observations, statistical machine learning for situation prediction, and Markov Decision Processes for decision making and planning. The efficacy and use of the framework are evaluated through the development of two different prototype applications. The first application is for proactive monitoring and control of indoor air quality to avoid poor air quality situations. The second is for proactive monitoring and control of electricity usage in blocks of residential houses to prevent strain on the national grid. These applications show the effectiveness of the proposed framework for developing Semantic Sensor Web applications that proactively avert unwanted environmental situations before they occur.
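
The planning component can be illustrated as a tiny Markov Decision Process for the indoor air quality case study: states are air-quality levels, actions are ventilation settings, and value iteration yields a proactive policy. All states, transition probabilities, rewards and costs below are invented.

    # Invented toy MDP for proactive ventilation control; value iteration
    # computes which action to take in each air-quality state.
    STATES = ["good", "moderate", "poor"]
    ACTIONS = ["vent_off", "vent_on"]
    # P[s][a] = list of (next_state, probability); numbers are made up.
    P = {"good":     {"vent_off": [("good", .8), ("moderate", .2)],
                      "vent_on":  [("good", .95), ("moderate", .05)]},
         "moderate": {"vent_off": [("moderate", .5), ("poor", .5)],
                      "vent_on":  [("good", .7), ("moderate", .3)]},
         "poor":     {"vent_off": [("poor", 1.0)],
                      "vent_on":  [("moderate", .8), ("poor", .2)]}}
    R = {"good": 0.0, "moderate": -1.0, "poor": -10.0}   # situation costs
    COST = {"vent_off": 0.0, "vent_on": -0.5}            # action cost
    GAMMA = 0.9

    V = {s: 0.0 for s in STATES}
    for _ in range(100):  # value iteration to approximate convergence
        V = {s: max(COST[a] + sum(p * (R[s2] + GAMMA * V[s2])
                                  for s2, p in P[s][a])
                    for a in ACTIONS)
             for s in STATES}
    policy = {s: max(ACTIONS,
                     key=lambda a: COST[a] + sum(p * (R[s2] + GAMMA * V[s2])
                                                 for s2, p in P[s][a]))
              for s in STATES}
    # Ventilates proactively in "moderate" rather than waiting for "poor".
    print(policy)

In the framework itself this decision model sits downstream of ontology-based situation detection and machine-learned situation prediction, which supply the state estimates.
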
Item Settlement type classification using aerial images. (2014) Mdakane, Lizwe.; Moodley, Deshendran.; Van den Bergh, Frans.
In metropolitan and urban areas, the problems relating to the rapid transformations taking place in terms of land cover and land use are now very pronounced, e.g. the rapid increase and unpredictable spread of formal and informal physical infrastructure. As a result, the availability of detailed, timely information on urban areas is of considerable importance, both for the management of current urban activities and for forward planning. Remote sensing sources can make a vital contribution in this context, since they provide regular and recurring data from a single, consistent source. Pattern recognition techniques have been demonstrated to be effective in distinguishing and classifying human settlements. However, these methods are not ideal, as they perform poorly when presented with imagery of the same area acquired at different dates. The poor generalisation ability is mainly caused by large off-nadir viewing angles, which produce image pairs with different viewing and illumination geometries. Classification performance is also decreased by differences in shadow length and orientation. The objective of this research is to improve the generalisation ability of automated classification of human settlements using only remote sensing data over urban areas. The multiresolution local binary patterns (LBP) algorithm, extended with an orthogonal variance measure for measuring local contrast features (i.e. the extended LBP), has been shown to excel at texture classification tasks. To minimize the viewing- and illumination-geometry effects and improve settlement classification, the extended LBP was applied to high spatial resolution panchromatic aerial images. The addition of a contrast component to the LBP features does not directly affect the desired invariance to shadow orientation and length, but it is expected that the richer features will nevertheless improve settlement classification accuracy. The extended LBP method was evaluated using a support vector machine (SVM) classifier for cross-date (training and test images of the same area acquired at different dates) and same-date analysis. For comparison, LBPs without contrast features were also evaluated. The results showed the extended LBP to have a strong spatial and temporal generalisation ability for classifying settlements in aerial images when compared to its counterpart. From this research, we can conclude that the extended LBP's additional contrast features can improve overall settlement type classification accuracy and generalisation ability.
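
A minimal sketch of multiresolution LBP texture features using scikit-image follows. Note that local_binary_pattern with method="uniform" is the standard rotation-invariant uniform variant, not the thesis's extended LBP; scikit-image's "var" method could approximate the orthogonal contrast measure.

    # Basic multiresolution LBP histogram features with scikit-image.
    # This is standard uniform LBP, not the thesis's extended variant.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from skimage.io import imread

    img = imread("settlement_tile.png", as_gray=True)     # placeholder tile

    features = []
    for P, R in [(8, 1), (16, 2), (24, 3)]:               # multiresolution
        lbp = local_binary_pattern(img, P, R, method="uniform")
        # Normalised histogram over the P + 2 uniform pattern codes.
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2),
                               density=True)
        features.extend(hist)

    # The concatenated histograms would be fed to an SVM classifier,
    # as in the cross-date and same-date experiments described above.
    print(len(features))  # (8+2) + (16+2) + (24+2) = 54 values
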