Masters Degrees (Computer Science)
Permanent URI for this collection: https://hdl.handle.net/10413/7114
Browsing Masters Degrees (Computer Science) by Title
Now showing 1 - 20 of 86
Item Addressing traffic congestion and throughput through optimization. (2021) Iyoob, Mohamed Zaire; van Niekerk, Brett.
Traffic congestion experienced in port precincts has become prevalent in recent years, both in South Africa and internationally [1, 2, 3]. In addition to the environmental impact of the air pollution caused by this congestion, the economic effects weigh heavily on profit margins through added fuel costs and time wastage. Although many common factors contribute to congestion in port precincts and other areas, operational inefficiencies due to slow productivity and a lack of handling equipment to service trucks in port areas are a major contributor [4, 5]. While there are several optimisation approaches to addressing traffic congestion, such as Queuing Theory [6], Genetic Algorithms [7], Ant Colony Optimisation [8] and Particle Swarm Optimisation [9], traffic congestion is modelled here as a system of congested queues, making queuing theory the most suitable approach for this problem. Queuing theory is a discipline of optimisation that studies the dynamics of queues in order to determine routes that reduce waiting times. The use of optimisation to address the root cause of port traffic congestion has been lacking, with several studies focused on specific traffic zones that only address the symptoms. In addition, research into traffic around port precincts has been limited to the road-side, with proposed solutions focusing on scheduling and appointment systems [25, 56], or to the sea-side, focusing on managing vessel traffic congestion [30, 31, 58]. The aim of this dissertation is to close this gap through the novel design and development of Caudus, a smart queue solution that addresses traffic congestion and throughput through optimisation. The name “CAUDUS” is derived as an anagram with Latin origins meaning “remove truck congestion”. Caudus has three objective functions to address congestion in the port precinct and, by extension, congestion in warehousing and freight logistics environments, viz. Preventive, Reactive and Predictive. The preventive objective function employs Little’s rule [14] to derive the algorithm for preventing congestion. Acknowledging that congestion is not always avoidable, the reactive objective function addresses the problem by leveraging Caudus’ integration capability with Intelligent Transport Systems [65] in conjunction with other road-user network solutions. The predictive objective function is aimed at ensuring the environment is incident free and provides early-warning detection of possible exceptions in traffic situations that may lead to congestion. This is achieved using the algorithms derived in this study, which identify bottleneck symptoms in one traffic zone where the root cause exists in an adjoining traffic area. The Caudus simulation was developed in this study to test the derived algorithms against the different congestion scenarios. The simulation utilises HTML5 and JavaScript for the front-end GUI, with a SQL code base for the back-end. The entire simulation process is triggered by a series of multi-threaded batch programs that mimic the real world by ensuring process independence for the various simulation activities. The results from the simulation demonstrate a significant reduction in the duration of congestion experienced in the port precinct.
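For reference, Little's rule, which underpins the preventive objective function mentioned above, relates the average number of trucks in a queueing system to the arrival rate and the average waiting time (L = λW). The sketch below is a minimal illustration with hypothetical figures; it is not code from Caudus.

```python
def littles_law_queue_length(arrival_rate_per_hour: float, avg_wait_hours: float) -> float:
    """Little's rule: average number of trucks in the system, L = lambda * W."""
    return arrival_rate_per_hour * avg_wait_hours

# Hypothetical figures: 30 trucks arrive per hour and each waits 0.4 hours on average,
# so about 12 trucks are in the system at any time; a threshold on L (or, inverted,
# on W = L / lambda) is the kind of quantity a preventive check could monitor.
print(littles_law_queue_length(30, 0.4))  # 12.0
```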
The simulation also shows a reduction in the throughput time of the trucks serviced at the port, thus demonstrating Caudus’ novel contribution in addressing traffic congestion and throughput through optimisation. These results were also published and presented at the International Conference on Artificial Intelligence, Big Data, Computing and Data Communication Systems (icABCD 2021) under the title “CAUDUS: An Optimisation Model to Reducing Port Traffic Congestion” [84].

Item An analysis of algorithms to estimate the characteristics of the underlying population in Massively Parallel Pyrosequencing data. (2011) Ragalo, Anisa; Murrell, Hugh Crozier.
Massively Parallel Pyrosequencing (MPP) is a next-generation DNA sequencing technique that is becoming ubiquitous because it is considerably faster, cheaper and produces a higher throughput than long-established sequencing techniques such as Sanger sequencing. The MPP methodology is also much less labour-intensive than Sanger sequencing. Indeed, MPP has become a preferred technology in experiments that seek to determine the distinctive genetic variation present in homologous genomic regions. However, a problem arises in the interpretation of the reads derived from an MPP experiment. Specifically, MPP reads are characteristically error prone. This means that it becomes difficult to separate the authentic genomic variation underlying a set of MPP reads from variation that is a consequence of sequencing error. The difficulty of inferring authentic variation is further compounded by the fact that MPP reads are also characteristically short. As a consequence, the correct alignment of an MPP read with respect to the genomic region from which it was derived may not be intuitive. To this end, several computational algorithms that seek to correctly align MPP reads and remove their non-authentic genetic variation have been proposed in the literature. We refer to the removal of non-authentic variation from a set of MPP reads as error correction. Computational algorithms that process MPP data are classified as sequence-space algorithms and flow-space algorithms. Sequence-space algorithms work with MPP sequencing reads as raw data, whereas flow-space algorithms work with MPP flowgrams as raw data. A flowgram is an intermediate product of MPP, which is subsequently converted into a sequencing read. In theory, flow-space computations should produce more accurate results than sequence-space computations. In this thesis, we make a qualitative comparison of the distinct solutions delivered by selected MPP read alignment algorithms. Further, we make a qualitative comparison of the distinct solutions delivered by selected MPP error correction algorithms. Our comparisons between different algorithms with the same niche are facilitated by the design of a platform for MPP simulation, PyroSim. PyroSim is designed to encapsulate the error rate that is characteristic of MPP. We implement a selection of sequence-space and flow-space alignment algorithms in a software package, MPPAlign. We derive a quality ranking for the distinct algorithms implemented in MPPAlign through a series of qualitative comparisons. Further, we implement a selection of sequence-space and flow-space error correction algorithms in a software package, MPPErrorCorrect. Similarly, we derive a quality ranking for the distinct algorithms implemented in MPPErrorCorrect through a series of qualitative comparisons.
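To make the flow-space versus sequence-space distinction concrete, the following sketch shows a much-simplified form of base calling, in which each flow signal is rounded to a homopolymer run length and emitted as bases. The flow order and signal values are hypothetical, and this is not one of the algorithms evaluated in the thesis.

```python
def call_bases(flow_values, flow_order="TACG"):
    """Toy base caller: round each flow signal to a homopolymer run length and
    emit that many copies of the nucleotide flowed at that position."""
    read = []
    for i, signal in enumerate(flow_values):
        nucleotide = flow_order[i % len(flow_order)]
        run_length = int(round(signal))   # e.g. a signal of 2.04 becomes a run of 2
        read.append(nucleotide * run_length)
    return "".join(read)

# Hypothetical noisy flowgram; real flowgrams are produced by the sequencer.
print(call_bases([1.02, 0.03, 0.11, 2.04, 1.10, 0.96]))  # -> "TGGTA"
```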
Contrary to the view expressed in the literature, which postulates that flow-space computations are more accurate than sequence-space computations, we find that in general the sequence-space algorithms that we implement outperform the flow-space algorithms. We surmise that flow-space is a more sensitive domain for conducting computations and can only yield consistently good results under stringent quality control measures. In sequence-space, however, we find that base calling, the process that converts flowgrams (flow-space raw data) into sequencing reads (sequence-space raw data), leads to more reliable computations.

Item An analysis of approaches for developing national health information systems: a case study of two sub-Saharan African countries. (2016) Mudaly, Thinasagree; Moodley, D.; Pillay, Anban Woolaganathan; Seebregts, Christopher.
Health information systems in sub-Saharan African countries are currently characterized by significant fragmentation, duplication and limited interoperability. Incorporating these disparate systems into a coherent national health information system (NHIS) has the potential to improve operational efficiencies, decision-making and planning across the health sector. In a recent study, Coiera analysed several mature national health information systems in high-income countries and categorised a topology of the approaches for building them as top-down, bottom-up or middle-out. Coiera gave compelling arguments for countries to adopt a middle-out approach. Building national health information systems in sub-Saharan African countries poses unique and complex challenges due to the substantial differences between the socio-economic, political and health landscapes of these countries and those of high-income countries. Coiera’s analysis did not consider the unique challenges faced by sub-Saharan African countries in building their systems. Furthermore, there is currently no framework for analysing high-level approaches for building NHIS. This makes it difficult to establish the benefits and applicability of Coiera’s analysis for building NHIS in sub-Saharan African countries. The aim of this research was to develop and apply such a framework to determine which approach in Coiera’s topology, if any, showed signs of being the most sustainable approach for building effective national health information systems in sub-Saharan African countries. The framework was developed through a literature analysis and validated by applying it in case studies of the development of national health information systems in South Africa and Rwanda. The result of applying the framework to the case studies was a synthesis of the current evolution of these systems, and an assessment of how well each approach in Coiera’s topology supports key considerations for building them in typical sub-Saharan African countries.
The study highlights the value of the framework for analysing sub-Saharan African countries in terms of Coiera’s topology, and concludes that, given the peculiar nature and evolution of national health information systems in sub-Saharan African countries, a middle-out approach can contribute significantly to building effective and sustainable systems in these countries, although its application there will differ significantly from its application in high-income countries.

Item Analysis of cultural and ideological values transmitted by university websites. (2003) Ramakatane, Mamosa Grace; Clarke, Patricia Ann.
With the advent of globalisation and new communication technologies, it was inevitable that educational institutions would follow the advertising trend of establishing websites to market their services. This paper analyses the cultural and ideological values transmitted by such university websites. Particular focus is on issues around gender, sexual orientation, race, religion and socio-economic status. The aim is to analyse consumer reaction to Internet messages conveyed in websites from different cultures, compare this reaction with the intentions of the producers, and relate both back to ideological factors. The study deconstructs the content and messages conveyed by university websites to assess the extent to which they might subscribe to particular ideologies, whether overt or covert. The argument that there are hidden ideologies in Web design does not imply that designers or producers intended any conspiracy or deception. Rather, the study compares the organisation's intended image and ethos with that which consumers perceive through their exposure to the website. The methodology was purposive sampling of participants consulted through face-to-face and online interviews, as well as email-distributed questionnaires. The study uses the websites of two universities in the KwaZulu-Natal region of South Africa.

Item The applicability of case-based reasoning to software cost estimation. (2002) Lokotsch, Anton; Petkov, Don.
The nature and competitiveness of the modern software development industry demand that software engineers be able to make accurate and consistent software cost estimates. Traditionally, software cost estimates have been derived with algorithmic cost estimation models such as COCOMO and Function Point Analysis. However, researchers have shown that existing software cost estimation techniques fail to produce accurate and consistent software cost estimates. Improving the reliability of software cost estimates would facilitate cost savings, improved delivery times and better-quality software developments. To this end, considerable research has been conducted into finding alternative software cost estimation models that are able to produce better-quality software cost estimates. Researchers have suggested a number of alternative models in this problem area. One of the most promising alternatives is Case-Based Reasoning (CBR), a machine learning paradigm that makes use of past experiences to solve new problems. CBR has been proposed as a solution since it is highly suited to weak-theory domains, where the relationships between cause and effect are not well understood. The aim of this research was to determine the applicability of CBR to software cost estimation.
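At its core, CBR-based estimation retrieves the past projects most similar to a new one and adapts their known costs. The sketch below is a minimal nearest-neighbour illustration with made-up project features and costs; it is not one of the experimental models described in the dissertation.

```python
import math

def estimate_cost(new_project, case_base, k=3):
    """Retrieve the k most similar past projects (Euclidean distance over the
    feature tuples) and return the mean of their recorded costs."""
    def distance(case):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(case["features"], new_project["features"])))
    nearest = sorted(case_base, key=distance)[:k]
    return sum(case["cost"] for case in nearest) / len(nearest)

# Hypothetical case base: features are (size in KLOC, team size, complexity rating),
# costs are in person-months; none of these values come from the dissertation's dataset.
cases = [
    {"features": (10, 4, 2), "cost": 120},
    {"features": (25, 8, 3), "cost": 340},
    {"features": (12, 5, 2), "cost": 150},
    {"features": (40, 12, 4), "cost": 610},
]
print(estimate_cost({"features": (14, 5, 2)}, cases, k=2))  # mean cost of the two closest cases
```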
This aim was accomplished in part through a thorough investigation of the theoretical and practical background to CBR, software cost estimation and current research on CBR applied to software cost estimation. This provided a foundation for the development of experimental CBR software cost estimation models, with which an empirical evaluation of the technology was performed. In addition, several regression models were developed, against which the effectiveness of the CBR system could be evaluated. The architecture of the CBR models developed facilitated the investigation of the effects of case granularity on the quality of the results obtained from them. Traditionally, researchers in this field have made use of poorly populated datasets, which did not accurately reflect the true nature of the software development industry. For the purposes of this research, however, an extensive database of 300 software development projects was obtained on which the experiments were performed. The results obtained through experimentation indicated that the CBR models that were developed performed similarly to, and in some cases better than, those developed by other researchers. In terms of the quality of the results produced, the best CBR model was able to significantly outperform the estimates produced by the best regression model. Increased case granularity was also shown to result in better-quality predictions from the CBR models. These promising results experimentally validated CBR as an applicable software cost estimation technique. In addition, it was shown that CBR has a number of methodological advantages over traditional cost estimation techniques.

Item Application of artificial intelligence for detecting derived viruses. (2017) Asiru, Omotayo Fausat; Blackledge, Jonathan Michael; Dlamini, Moses Thandokuhle.
A lot of new viruses are created each day. However, some of these viruses are not completely new per se. Most of the supposedly ‘new’ viruses are not necessarily created from scratch with completely new mechanisms; for example, some simply change their forms and come up with new signatures to avoid detection. Hence, such viruses cannot be argued to be new. This research refers to them as derived viruses. Just like new viruses, we argue that derived viruses are hard to detect with current scanning-detection methods. Many virus detection methods exist in the literature, but very few address the detection of derived viruses. Hence, the ultimate research question that this study aims to answer is: how might we improve the detection rate of derived computer viruses? The proposed system integrates a mutation engine together with a neural network to detect derived viruses. Derived viruses come from existing viruses that change their forms by adding irrelevant instructions that do not alter the intended purpose of the virus. A mutation engine is used to group existing virus signatures based on their similarities. The engine then creates derivatives of groups of signatures. This is done up to the third generation of mutations. The existing virus signatures and the created derivatives are both used to train the neural network. The derived signatures that are not used for training are used to determine the effectiveness of the neural network. Ten experiments were conducted on each of the three derived virus generations.
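As a simplified illustration of the kind of derivation a mutation engine performs, the sketch below splices do-nothing filler bytes (x86 NOPs) into an existing signature to produce first-, second- and third-generation derivatives. The byte values and offsets are hypothetical, and this is not the engine implemented in the study.

```python
import random

NOP_FILLER = [b"\x90", b"\x90\x90"]   # x86 NOP bytes, used here as "irrelevant" instructions

def derive_signature(signature: bytes, insertions: int = 3, seed: int = 0) -> bytes:
    """Derive a new signature by splicing do-nothing filler into random offsets,
    leaving the original instruction sequence (and hence behaviour) intact."""
    rng = random.Random(seed)
    derived = bytearray(signature)
    for _ in range(insertions):
        offset = rng.randrange(len(derived) + 1)
        derived[offset:offset] = rng.choice(NOP_FILLER)
    return bytes(derived)

base = bytes.fromhex("e8000000005d8d45f0")      # hypothetical signature fragment
gen1 = derive_signature(base, seed=1)           # first generation
gen2 = derive_signature(gen1, seed=2)           # second generation, derived from the first
gen3 = derive_signature(gen2, seed=3)           # third generation, derived from the second
print(gen1.hex(), gen2.hex(), gen3.hex())
```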
The first generation showed the highest derived virus detection rate of the three generations. The second generation also showed a slightly higher detection rate than the third generation, which had the lowest detection rate. Experimental results show that the proposed model can detect derived viruses with an average detection accuracy of 80% (a 91% success rate on the first generation, 83% on the second generation and 65% on the third generation). The results further show that the correlation between the original virus signature and its derivatives decreases with the generations. This means that after many generations of a virus changing form, its variants no longer look like the original. Instead, the variants look like a completely new virus, even though the variants and the original virus always have the same behaviour and operational characteristics, with similar effects.

Item Application of backpropagation-like generative algorithms to various problems. (1992) Powell, Alan Roy; Sartori-Angus, Alan G.
Artificial neural networks (ANNs) were originally inspired by networks of biological neurons and the interactions present in networks of these neurons. The recent revival of interest in ANNs has again focused attention on the apparent ability of ANNs to solve difficult problems, such as machine vision, in novel ways. There are many types of ANNs, which differ in architecture and learning algorithms, and the list grows annually. This study was restricted to feed-forward architectures and backpropagation-like (BP-like) learning algorithms. However, it is well known that the learning problem for such networks is NP-complete. Thus generative and incremental learning algorithms, which have various advantages and to which the NP-completeness analysis used for BP-like networks may not apply, were also studied. Various algorithms were investigated and their performance compared. Finally, the better algorithms were applied to a number of problems, including music composition, image binarization, and navigation and goal satisfaction in an artificial environment. These tasks were chosen to investigate different aspects of ANN behaviour. The results, where appropriate, were compared to those resulting from non-ANN methods, and varied from poor to very encouraging.

Item The application of computer technology in South African distance education. (1996) Owusu-Sekyere, Charles; Meyerowitz, Jane Julia.
The advent of on-line Computer-Assisted Instruction and Computer Mediated Communication may improve instruction and communication in distance education at South African universities. On-line Computer-Assisted Instruction in distance education makes the reinforcement of knowledge both systematic and immediate. With instructional media such as printed text, audio cassettes, and radio and television broadcasts, the student at a distance is an isolated and passive recipient of knowledge. On-line Computer-Assisted Instruction supported by Computer Mediated Communication for interaction and feedback could close the gaps in time and distance between the teacher and the student in distance education. The current network capabilities of the computer make it possible for such a student to interact with peers and lecturers before, during and after instructional episodes.
Computer Mediated Communication can facilitate the use of electronic messaging such as Electronic Mail, Internet Relay Chat, List Servers, Multi-User Domains and Bulletin Board Services for interaction and feedback. This thesis investigates whether instruction and communication in South African universities with a distance education option can be improved using on-line Computer-Assisted Instruction and Computer Mediated Communication respectively. The thesis also makes proposals for their implementation in South Africa by analysing the applications of computer technology in degree-awarding distance education institutions in some developed and developing countries that use on-line Computer-Assisted Instruction and Computer Mediated Communication.

Item Application of ELECTRE algorithms in ontology selection. (2022) Sooklall, Ameeth; Fonou-Dombeu, Jean Vincent.
The field of artificial intelligence (AI) is expanding at a rapid pace. Ontology and the field of ontological engineering are an invaluable component of AI, as they provide AI with the ability to capture and express complex knowledge and data in a form that encourages computation, inference, reasoning and dissemination. Accordingly, research on and applications of ontology have become increasingly widespread in recent years. However, due to the complexity involved in ontological engineering, users are encouraged to reuse existing ontologies rather than create ontologies de novo. This in itself has a major disadvantage, as the task of selecting appropriate ontologies for reuse is complex: engineers and users may find it difficult to analyse and comprehend ontologies. It is therefore crucial that techniques and methods be developed to reduce the complexity of ontology selection for reuse. Essentially, ontology selection is a Multi-Criteria Decision-Making (MCDM) problem, as there are multiple ontologies to choose from whilst considering multiple criteria. However, there has been little use of MCDM methods in solving the problem of selecting ontologies for reuse. To tackle this problem, this study looks to a prominent branch of MCDM known as ELimination Et Choix Traduisant la REalité (ELECTRE). ELECTRE is a family of decision-making algorithms that model and provide decision support for complex decisions comprising many alternatives with many characteristics or attributes. The ELECTRE algorithms are extremely powerful and have been applied successfully in a myriad of domains; however, they have only been studied to a minimal degree with regard to ontology ranking and selection. In this study the ELECTRE algorithms were applied to aid the selection of ontologies for reuse; in particular, three applications of ELECTRE were studied. The first application focused on ranking ontologies according to their complexity metrics. The ELECTRE I, II, III and IV models were applied to rank a dataset of 200 ontologies from the BioPortal Repository, with 13 complexity metrics used as attributes. Secondly, the ELECTRE Tri model was applied to classify the 200 ontologies into three classes according to their complexity metrics. A preference-disaggregation approach was taken, and a genetic algorithm was designed to infer the thresholds and parameters for the ELECTRE Tri model. In the third application a novel ELECTRE model was developed, named ZPLTS-ELECTRE II, in which the concept of a Z-Probabilistic Linguistic Term Set (ZPLTS) was combined with the traditional ELECTRE II algorithm.
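For readers unfamiliar with the classical ELECTRE machinery used in the first two applications, the central ingredient of ELECTRE I and II is a pairwise concordance index: the weighted proportion of criteria on which one alternative is at least as good as another. The sketch below computes such a concordance matrix for a few hypothetical ontology scores; the criteria, weights and values are invented for illustration and are not taken from the study.

```python
def concordance_matrix(scores, weights):
    """ELECTRE-style pairwise concordance c(a, b): the share of the total weight
    carried by criteria on which alternative a scores at least as well as b."""
    total = sum(weights)
    n = len(scores)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            C[i][j] = sum(w for w, x, y in zip(weights, scores[i], scores[j]) if x >= y) / total
    return C

# Hypothetical ontology scores on three criteria, already scaled so that higher is better.
ontology_scores = [(0.8, 0.6, 0.7), (0.5, 0.9, 0.6), (0.7, 0.7, 0.9)]
criterion_weights = [0.5, 0.3, 0.2]
for row in concordance_matrix(ontology_scores, criterion_weights):
    print([round(value, 2) for value in row])
```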
The ZPLTS-ELECTRE II model enables multiple decision-makers to evaluate ontologies (group decision-making) and allows them to provide their evaluations in natural language. The model was applied to rank nine ontologies according to five complexity metrics and five qualitative usability metrics. The results of all three applications were analysed, compared and contrasted in order to understand the applicability and effectiveness of the ELECTRE algorithms for the task of selecting ontologies for reuse. These results constitute interesting perspectives and insights for the selection and reuse of ontologies.

Item Application of genetic algorithms to the travelling salesperson problem. (1996) McKenzie, Peter John Campbell; Petkov, Doncho.
Genetic Algorithms (GAs) can be easily applied to many different problems, since they make few assumptions about the application domain and perform relatively well. They can also be modified with some success to handle a particular problem. The travelling salesperson problem (TSP) is a famous NP-hard problem in combinatorial optimization; as a result, it has no known polynomial-time solution. The aim of this dissertation is to investigate the application of a number of GAs to the TSP. The results are compared with those of traditional solutions to the TSP and with the results of other applications of the GA to the TSP.

Item The application of the unified modelling language and soft systems methodology for modelling the production process in an aluminium plant. (2003) Sewchurran, Kosheek; Warren, Peter R.
This research explores the combined use of soft systems methodology (SSM) and UML-based business process modelling (BPM) techniques. These two techniques are integrated to provide a framework for the analysis and definition of suitable business process models. Such integration supports developers following object-oriented (OO) approaches better than traditional business process modelling does. The thesis describes the importance of, and difficulties in, getting development projects aimed at the correct needs. We provide an overview of current business process modelling practices, from which it is argued that current practice shows two major weaknesses. Firstly, the modelling language used is not a current standard among developers, who now expect OO and UML-based approaches. Secondly, the techniques used do not emphasise analysis, often resulting in a lack of appreciation of the problem. In order to deal with these inadequacies, the thesis critically examines suitable techniques that can be used to analyse and model business processes in support of the developer's requirements. The examination of SSM reveals that the technique does deal with the analysis limitations of current business process modelling techniques. SSM has been linked to information systems provision by previous researchers. Unfortunately, examination of these research attempts shows that the linking was conducted in an ad-hoc manner, with no underlying theoretical basis or emphasis on business process modelling. We show how soft systems methodology techniques can be married with the UML business process modelling techniques of Eriksson and Penker (2000), following Mingers' (2001) multi-methodology framework, in a way that can overcome these difficulties. This combined business analysis and modelling technique is applied to the production process in an aluminium rolling plant.
Based on the experience at one site, the integrated approach is able to deal with the complexities caused by multiple stakeholders and to provide a UML representation of the required business process to guide developers.

Item Artificial neural networks for image recognition: a study of feature extraction methods and an implementation for handwritten character recognition. (1996) Moodley, Deshendran; Ram, Vevek; Haines, Linda Margaret.
The use of computers for digital image recognition has become quite widespread. Applications include face recognition, handwriting interpretation and fingerprint analysis. A feature vector whose dimension is much lower than that of the original image data is used to represent the image. This removes redundancy from the data and drastically cuts the computational cost of the classification stage. The most important criterion for the extracted features is that they must retain as much as possible of the discriminatory information present in the original data. Feature extraction methods that have been used with neural networks include moment invariants, Zernike moments, Fourier descriptors, Gabor filters and wavelets. These, together with the Neocognitron, which incorporates feature extraction within a neural network architecture, are described, and two methods, Zernike moments and the Neocognitron, are chosen to illustrate the role of feature extraction in image recognition.

Item An assessment of the component-based view for metaheuristic research. (2023) Achary, Thimershen; Pillay, Anban Woolaganathan; Jembere, Edgar.
Several authors have recently pointed to a crisis within the metaheuristic research field, particularly the proliferation of metaphor-inspired metaheuristics. Common problems identified include the use of non-standard terminology, poor experimental practices and, most importantly, the introduction of purportedly new algorithms that are only superficially different from existing ones. These issues make similarity and performance analysis, classification, and metaheuristic generation difficult for both practitioners and researchers. A component-based view of metaheuristics has recently been promoted to deal with these problems. The component-based view argues that metaheuristics are best understood in terms of their constituents or components. This dissertation presents three papers that are thematically centred on this view. The central problem for the component-based view is the identification of the components of a metaheuristic. The first paper proposes the use of taxonomies to guide the identification of metaheuristic components. We developed a general and rigorous method, TAXONOG-IMC, that takes as input an appropriate taxonomy and guides the user to identify components. The method is described in detail, an example application of the method is given, and an analysis of its usefulness is provided. The analysis shows that the method is effective and provides insights that are not possible without the proper identification of the components. The second paper argues for formal, mathematically sound representations of metaheuristics. It introduces and defends a formal representation that leverages the component-based view. The third paper demonstrates that a representation technique based on the component-based view is able to provide the basis for a similarity measure. This paper presents a method of measuring the similarity between two metaheuristic algorithms, based on their representations as signal flow diagrams.
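The dissertation's measure operates on signal flow diagrams, which is beyond a short sketch, but the underlying intuition — comparing metaheuristics by the components they share — can be illustrated with a simple Jaccard similarity over component sets. The component sets and the measure below are invented for illustration and are not the method presented in the third paper.

```python
def component_similarity(components_a: set, components_b: set) -> float:
    """Jaccard similarity between the component sets of two metaheuristics."""
    if not components_a and not components_b:
        return 1.0
    return len(components_a & components_b) / len(components_a | components_b)

# Hypothetical component sets; a real analysis would identify these with a method
# such as TAXONOG-IMC rather than list them by hand.
pso = {"swarm initialisation", "velocity update", "global-best memory", "position update"}
ga  = {"population initialisation", "selection", "crossover", "mutation"}
abc = {"population initialisation", "selection", "mutation", "local search"}
print(component_similarity(pso, ga))   # 0.0 -> no shared components
print(component_similarity(ga, abc))   # 0.6 -> three of five distinct components shared
```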
Our findings indicate that the component-based view of metaheuristics provides valuable insights and allows for more robust analysis, classification and comparison.

Item Automatic lung segmentation using graph cut optimization. (2015) Oluyide, Oluwakorede Monica; Viriri, Serestina; Tapamo, Jules-Raymond.
Medical imaging revolutionized the practice of diagnostic medicine by providing a means of visualizing the internal organs and structure of the body. Computer technologies have played an increasing role in the acquisition, handling, storage and transmission of these images. Due to further advances in computer technology, research efforts have turned towards adopting computers as assistants in detecting and diagnosing diseases, resulting in the incorporation of Computer-aided Detection (CAD) systems into medical practice. Computed Tomography (CT) images have been shown to improve the accuracy of diagnosis in pulmonary imaging. Segmentation is an important preprocessing step necessary for high performance of the CAD system. Lung segmentation is used to isolate the lungs for further analysis and has the advantage of reducing the search space and computation time involved in disease detection. This dissertation presents an automatic lung segmentation method using Graph Cut optimization. Graph Cut produces globally optimal solutions by modeling the image data and the spatial relationships among the pixels. Several objects in a thoracic CT image have pixel values similar to the lungs, and the global solutions of Graph Cut produce segmentation results in which the lungs, and all other objects similar in intensity value to the lungs, are included. A distance prior encoding the Euclidean distance of pixels from the set of pixels belonging to the object of interest is proposed to constrain the solution space of the Graph Cut algorithm. A segmentation method using the distance-constrained Graph Cut energy is also proposed to isolate the lungs in the image. The results indicate the suitability of the distance prior as a constraint for Graph Cut and show the effectiveness of the proposed segmentation method in accurately segmenting the lungs from a CT image.

Item Blockchain-based security model for efficient data transmission and storage in cloudlet network resource environment. (2023) Masango, Nothile Clementine; Ezugwu, Absalom El-Shamir.
As mobile users’ service requirements increase, applications such as online games, virtual reality and augmented reality demand more computational power. However, the current design of mobile devices and their associated innovations cannot accommodate such applications because of limitations in storage, computing power and battery life. As a result, mobile devices offload their tasks to remote cloud environments. Moreover, due to the architecture of cloud computing, where the cloud is located at the core of the network, applications experience challenges such as latency, which is a disadvantage for real-time online applications. Hence, the edge-computing-based cloudlet environment was introduced to bring resources closer to the end user, with an enhanced network quality of service. Although there is merit in deploying cloudlets at the edge of the network, closer to the user, this makes them susceptible to attacks. For this newly introduced technology to be fully adopted, effective security measures need to be incorporated into the current cloudlet computing platform.
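In the model described next, selected miners verify messages with an elliptic curve scheme. As a minimal sketch of what such signature verification could look like, the example below uses a recent version of the Python cryptography package with the SECP256R1 curve; both are assumed choices for illustration, not the thesis implementation.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# A node signs an offloaded message; a miner checks the signature before voting.
private_key = ec.generate_private_key(ec.SECP256R1())   # key pair held by the sending node
public_key = private_key.public_key()                   # known to the verifying miner

message = b"offloaded task payload from a mobile device"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid: the miner accepts the message")
except InvalidSignature:
    print("signature invalid: the miner rejects the message")
```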
This study proposes blockchain technology as a security model for securing the data shared between mobile devices and cloudlets, with an agent layer introduced between the mobile device layer and the cloudlet layer. The implemented agent-based model uses a new consensus mechanism, proof of trust, in which trust and experience are determined by the number of coins each node (cloudlet) possesses, to select two miners. These miners participate in message verification using an elliptic curve scheme, and if they do not reach consensus, a third miner is selected to resolve the conflict. Any miner whose verification is wrong loses all of its coins; in this way trust and experience are controlled. The proposed solution has proven to be more efficient in terms of security and network performance in comparison to existing state-of-the-art implementations.

Item Built-in tests for a real-time embedded system. (1991) Olander, Peter Andrew.
Beneath the facade of the application code of a well-designed real-time embedded system lies intrinsic firmware that provides a fast and effective means of detecting and diagnosing inevitable hardware failures. These failures can encumber the availability of a system, and, consequently, identification of the source of the malfunction is needed. It is shown that the number of possible origins of all manner of failures is immense. As a result, fault models are contrived to encompass prevalent hardware faults. Furthermore, the complexity is reduced by determining syndromes for particular circuitry and applying test vectors at a functional block level. Testing phases and philosophies, together with standardisation policies, are defined to ensure the compliance of system designers with the underlying principles of evaluating system integrity. The three testing phases of power-on self-tests at system start-up, on-line health monitoring and off-line diagnostics are designed to ensure that the inherent test firmware remains inconspicuous during normal applications. The prominence of the code is, however, apparent on the detection or diagnosis of a hardware failure. The authenticity of the theoretical models, standardisation policies and built-in test philosophies is illustrated by means of their application to an intricate real-time system. The architecture and the software design implementing these ideologies are described extensively. Standardisation policies, enhanced by the proposition of generic tests for common core components, are advocated at all hierarchical levels. The presentation of the integration of the hardware and software is aimed at portraying the moderately complex nature of the task of generating a set of built-in tests for a real-time embedded system. In spite of generic policies, the intricacies of the architecture are found to have a direct influence on software design decisions. It is thus concluded that the diagnostic objectives of the user requirements specification should be lucidly expressed by both operational and maintenance personnel for all testing phases. Disparity may exist between the system designer and the end user in the understanding of the requirements specification defining the objectives of the diagnosis. It is thus essential that the two parties collaborate fully throughout the development life cycle, but especially during the preliminary design phase.
Thereafter, the designer would be able to decide on the sophistication of the system's testing capabilities.

Item Client-side encryption and key management: enforcing data confidentiality in the cloud. (2016) Mosola, Napo Nathnael; Blackledge, Jonathan Michael; Dlamini, Moses Thandokuhle.
Cloud computing brings flexible, scalable and cost-effective services. It is a computing paradigm whose services are driven by the concepts of virtualization and multi-tenancy. These concepts bring various attractive benefits to the cloud, among them reduced capital costs, a pay-per-use model and enormous storage capacity. However, there are overwhelming concerns over data confidentiality in the cloud. These concerns arise from various attacks that are directed towards compromising data confidentiality in virtual machines (VMs), including inter-VM attacks and VM sprawl. Moreover, weak or absent data encryption allows such attacks to thrive. Hence, this dissertation presents a novel client-side cryptosystem derived from evolutionary computing concepts. The proposed solution makes use of chaotic random noise to generate a fitness function, which is in turn used to generate strong symmetric keys. The strength of the encryption key is derived from the chaotic and random properties of the input noise; such properties increase the strength of the key without necessarily increasing its length. However, having the strongest key does not guarantee confidentiality if the key management system is flawed: encryption has little value if key management processes are not vigorously enforced. Hence, one of the challenges of cloud-based encryption is key management, and this dissertation also attempts to address the prevalent key management problem. It uses a counter-propagation neural network (CPNN) to perform key provision and revocation. Neural networks are used to design ciphers. Using both supervised and unsupervised machine learning processes, the solution incorporates a CPNN to learn a cryptographic key. Using this technique, there is no need for users to store or retain a key that could be compromised. Furthermore, in a multi-tenant and distributed environment such as the cloud, data can be shared among multiple cloud users or even systems. Based on Shamir's secret sharing algorithm, this research proposes a secret sharing scheme to ensure a seamless and convenient sharing environment. The proposed solution is implemented on a live OpenNebula cloud infrastructure to demonstrate and illustrate its practicability.

Item A comparative study of metaheuristics for blood assignment problem. (2018) Govender, Prinolan; Ezugwu, Absalom El-Shamir.
The Blood Assignment Problem (BAP) is a real-world, NP-hard combinatorial optimization problem. The study of the BAP is significant due to the continuous demand for blood transfusion during medical emergencies. However, the formulation of this problem faces various challenges, ranging from managing critical blood shortages and limited shelf life to blood type incompatibility, which constrains the random transfusion of blood to patients. The transfusion of incompatible blood types between donor and patient can lead to adverse side effects for the patient. Usually, the sudden need for blood units arises as a result of unforeseen trauma that requires urgent medical attention.
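The compatibility constraint mentioned above follows standard ABO/Rh rules rather than anything specific to the thesis: a donor's ABO antigens must be a subset of the recipient's, and an Rh-positive donor requires an Rh-positive recipient. A minimal check might look like the sketch below.

```python
def can_transfuse(donor: str, recipient: str) -> bool:
    """Standard ABO/Rh rule of thumb: the donor's ABO antigens must be a subset of
    the recipient's, and an Rh-positive donor needs an Rh-positive recipient."""
    def parse(blood_type):
        abo, rh = blood_type[:-1], blood_type[-1]
        antigens = set(abo) - {"O"}        # type O carries no ABO antigens
        return antigens, rh == "+"
    donor_antigens, donor_rh_positive = parse(donor)
    recipient_antigens, recipient_rh_positive = parse(recipient)
    return donor_antigens <= recipient_antigens and (not donor_rh_positive or recipient_rh_positive)

assert can_transfuse("O-", "AB+")       # O- is the universal red cell donor
assert not can_transfuse("A+", "O-")    # an O- recipient can only receive O-
print("compatibility checks passed")
```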
Such sudden demand can interrupt the supply of blood units and may result in the blood bank importing additional blood products from external sources, thereby increasing its running costs and other risk factors associated with blood transfusion, with serious consequences in terms of medical emergencies, running costs and the supply of blood units. Taking these factors into consideration, this study implemented five global metaheuristic optimization algorithms to solve the BAP. Each of these algorithms was hybridized with a sustainable blood assignment policy that relates to South African blood banks. The objective of this study was to minimize blood product wastage, with emphasis on expiry, and to reduce the amount of importation from external sources. Extensive computational experiments were conducted over a total of six different datasets, and the results validate the reliability and effectiveness of each of the proposed algorithms. Results were analysed across three major aspects, namely the average level of importation, expiry across a finite time period, and the computational time experienced by each of the metaheuristic algorithms. The numerical results obtained show that the Particle Swarm Optimization algorithm performed best in terms of computational time. Furthermore, none of the algorithms experienced any form of expiry within the allotted time frame. Moreover, the results also revealed that the Symbiotic Organism Search algorithm produced the lowest average level of importation; it was therefore considered the most reliable and proficient algorithm for the BAP.

Item Component-based ethnicity identification from facial images. (2017) Booysens, Aimée; Viriri, Serestina.
Abstract available in PDF file.

Item Component-based face recognition. (2008) Dombeu, Jean Vincent Fonou; Tapamo, Jules-Raymond.
Component-based automatic face recognition has been of interest to a growing number of researchers in the past fifteen years. However, the main challenge remains the automatic extraction of facial components for recognition in different face orientations, without any human intervention or any assumption about the location of these components. In this work, we investigate a solution to this problem. Facial components (eyes, nose and mouth) are first detected in different orientations of the face. To ensure that the detected components are appropriate for recognition, a Support Vector Machine (SVM) classifier is applied to identify facial components that have been accurately detected. Thereafter, features are extracted from the correctly detected components by Gabor filters and Zernike moments combined. Gabor filters are used to extract the texture characteristics of the eyes, and Zernike moments are applied to compute the shape characteristics of the nose and the mouth. The texture and shape features are concatenated and normalized to build the final feature vector of the input face image. Experiments show that our feature extraction strategy is robust; it also provides a more compact representation of face images and achieves an average recognition rate of 95% across different face orientations.
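As a minimal illustration of the fusion step in the last item above — concatenating texture and shape descriptors and normalising the result into one face feature vector — the sketch below uses made-up values in place of real Gabor filter and Zernike moment outputs.

```python
import numpy as np

def fuse_features(texture_features: np.ndarray, shape_features: np.ndarray) -> np.ndarray:
    """Concatenate texture (Gabor-style) and shape (Zernike-style) descriptors and
    L2-normalise the result into a single face feature vector."""
    combined = np.concatenate([texture_features, shape_features])
    norm = np.linalg.norm(combined)
    return combined / norm if norm > 0 else combined

# Made-up descriptor values standing in for real Gabor filter and Zernike moment outputs.
gabor_eye_descriptors = np.array([0.42, 1.37, 0.08, 0.91])
zernike_nose_mouth_descriptors = np.array([0.12, 0.76, 0.33])
print(fuse_features(gabor_eye_descriptors, zernike_nose_mouth_descriptors))
```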