Masters Degrees (Electronic Engineering)
Permanent URI for this collection: https://hdl.handle.net/10413/6868
Browsing Masters Degrees (Electronic Engineering) by Date Accessioned
Now showing 1 - 20 of 139

Item: PLC implementation of online, PRBS-based tests for mechanical system parameter estimation (2009). Rampersad, Vaughan; Burton, Bruce.
This thesis investigates the use of correlation techniques to perform system identification tests, with the objective of developing online test methods for mechanical parameter extraction as well as machine diagnostics. More specifically, these test methods must be implemented on a Programmable Logic Controller (PLC) in combination with Variable Speed Drives (VSDs). Models for motor-based mechanical systems are derived, and other documented methods for parameter identification of mechanical systems are discussed. An investigation is undertaken into the principle that the impulse response of a system may be obtained when a test signal with an impulsive autocorrelation is injected into the system, and the theory of using correlation functions to determine the numerical impulse response of a system is presented. Suitable test signals, pseudorandom binary sequences (PRBS), are analysed, and their generation and properties are discussed. Simulations show how the various properties of the PRBS test signals influence the resulting impulse response curve. Further simulations demonstrate how PRBS-based tests, in conjunction with a curve-fitting method (in this case linear least squares), can provide a fair estimate of the parameters of a mechanical system. The implementation of a correlation-based online testing routine on a PLC is presented, and results from these tests are reviewed and discussed. A SCADA system that has been designed is discussed, and it is shown how this system allows the user to perform diagnostics on networked drives in a distributed automation system. Identification of other mechanical phenomena, such as elasticity and the non-linearity introduced by the presence of backlash, is also investigated.
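
As a rough illustration of the correlation approach described in this abstract, the sketch below generates a maximum-length PRBS with a linear feedback shift register and recovers a first-order system's impulse response from the input-output cross-correlation. The register length, taps and plant model are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

def prbs7(taps=(7, 6)):
    """Maximal-length PRBS from a 7-stage Fibonacci LFSR, mapped to +/-1."""
    n = 7
    state = np.ones(n, dtype=int)
    seq = np.empty(2**n - 1)
    for i in range(seq.size):
        out = state[-1]                       # register output
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]   # XOR feedback (x^7 + x^6 + 1)
        state[1:] = state[:-1]
        state[0] = fb
        seq[i] = 2.0 * out - 1.0              # map {0,1} -> {-1,+1}
    return seq

# Illustrative first-order "mechanical" plant: y[k] = a*y[k-1] + b*u[k]
a, b = 0.9, 0.1
u = np.tile(prbs7(), 10)                      # repeat the sequence over many periods
y = np.zeros_like(u)
for k in range(1, u.size):
    y[k] = a * y[k - 1] + b * u[k]

# The input-output cross-correlation approximates the impulse response,
# because the PRBS autocorrelation is (nearly) impulsive.
lags = np.arange(30)
Ruy = np.array([np.mean(u[: u.size - lag] * y[lag:]) for lag in lags])
h_est = Ruy / np.mean(u * u)
h_true = b * a ** lags
print(np.round(h_est[:6], 3))
print(np.round(h_true[:6], 3))
```
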
Item: Cross layer hybrid ARQ2: cooperative diversity (2008). Beharie, Sannesh Rabiechand.
Cooperative communication allows single users in a multi-user wireless network to share their antennas and form a virtual multiple-antenna transmitter, which leads to transmit diversity. Coded cooperation introduced channel coding into cooperative diversity, improving on earlier methods in which a user simply repeated its partner's transmitted signals in a multipath fading environment in order to improve bit error rate (BER) performance. In this dissertation, coded cooperation is simulated and the analytical bounds are evaluated in order to understand basic cooperation principles. This is done using Rate-Compatible Punctured Convolutional (RCPC) codes. Based on these principles, a new protocol called Cross Layer Hybrid Automatic Repeat reQuest (ARQ) 2 Cooperative Diversity is developed to improve BER and throughput. In Cross Layer Hybrid ARQ 2 Cooperation, Hybrid ARQ 2 (at the data-link layer) is combined with cooperative diversity (at the physical layer), in a cross-layer design manner, to improve BER and throughput based on feedback from the base station on the users' initial transmissions. This is done using RCPC codes, which partition a full-rate code into sub-codewords that are transmitted as incremental packets, so that only as much parity is transmitted as the base station requires to correctly decode a user's information bits. This allows cooperation to occur only when it is necessary, unlike conventional coded cooperation, where bandwidth is wasted on cooperation when the base station has already decoded a user's information bits. The performance of Cross Layer Hybrid ARQ 2 Cooperation is quantified in terms of BER and throughput. BER bounds of Cross Layer Hybrid ARQ 2 Cooperation are derived based on the pairwise error probability (PEP) of the uplink channels as well as the different inter-user and base station Cyclic Redundancy Check (CRC) states. The BER is also simulated and confirmed against the derived bound, and the throughput of the new scheme is likewise simulated and confirmed via analytical throughput bounds. The scheme maintains BER and throughput gains over conventional coded cooperation even under the worst inter-user channel conditions.
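
The incremental-redundancy idea behind RCPC codes can be illustrated with a short sketch: a mother code's output is split according to rate-compatible puncturing patterns, and each ARQ round sends only the newly revealed parity positions. The puncturing tables below are made-up illustrations, not the patterns used in the dissertation.

```python
import numpy as np

# Rate-compatible puncturing: each pattern keeps a superset of the positions
# kept by the pattern above it (puncturing period = 8 mother-code bits).
PATTERNS = {
    "packet 1": np.array([1, 0, 1, 0, 1, 0, 1, 0]),
    "packet 2": np.array([1, 1, 1, 0, 1, 1, 1, 0]),   # adds positions 1 and 5
    "packet 3": np.array([1, 1, 1, 1, 1, 1, 1, 1]),   # adds positions 3 and 7
}

def incremental_packets(coded_bits):
    """Yield only the newly revealed mother-code bits for each ARQ round."""
    sent = np.zeros(len(coded_bits), dtype=bool)
    for name, pattern in PATTERNS.items():
        keep = np.tile(pattern, len(coded_bits) // len(pattern)).astype(bool)
        new = keep & ~sent                     # positions not transmitted yet
        yield name, coded_bits[new]
        sent |= keep

rng = np.random.default_rng(0)
mother_code_output = rng.integers(0, 2, 64)    # stand-in for the mother code's output
for name, pkt in incremental_packets(mother_code_output):
    print(name, "sends", pkt.size, "bits")
```

The achieved code rate after each round depends on the mother code; the point of the sketch is only that later packets carry increments of parity rather than repeats.
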
Item: Wavelet based image compression integrating error protection via arithmetic coding with forbidden symbol and MAP metric sequential decoding with ARQ retransmission (2010-08-27). Mahomed, Veruschia.
The phenomenal growth of digital multimedia applications has forced the communications

Item: Effect of amplifier non-linearity on the performance of CDMA communication systems in a Rayleigh fading environment (2010-08-31). Syed, Jameel.
The effect of amplifier non-linearity on the performance of a CDMA communications system

Item: Repeat-punctured turbo coded cooperation (2010-09-01). Moualeu, Jules Merlin Mouatcho.
Transmit diversity usually employs multiple antennas at the transmitter. However, many wireless devices, such as mobile cellphones and Personal Digital Assistants (PDAs), are limited by size, hardware complexity, power and other constraints to just one antenna. A new paradigm called cooperative communication, which allows single-antenna mobiles in a multi-user scenario to share their antennas, has recently been proposed. This multi-user configuration generates a virtual Multiple-Input Multiple-Output system, leading to transmit diversity. The basic approach to cooperation is for two single-antenna users to use each other's antenna as a relay, through which each user achieves diversity. Previous cooperative signaling methods encompass diverse forms of repetition of the data transmitted by the partner to the destination. A scheme called coded cooperation [15], which integrates user cooperation with channel coding, has also been proposed. This method maintains the same code rate, bandwidth and transmit power as a comparable non-cooperative system, but performs much better than previous signaling methods [13], [14] under various inter-user channel qualities. This dissertation first discusses the coded cooperation framework proposed in [19] and coded cooperation with Rate-Compatible Punctured Convolutional (RCPC) codes, and then investigates the application of turbo codes in coded cooperation. Two new cooperative diversity schemes are proposed: repeat-punctured turbo coded cooperation and coded cooperation using modified repeat-punctured turbo codes. Prior to that, repeat-punctured turbo codes are introduced. We characterize the performance of the two new schemes by developing analytical bounds for the bit error rate, which are confirmed by computer simulations. Finally, turbo coded cooperation using the Forced Symbol Method (FSM) is presented and validated through computer simulations under various inter-user Signal-to-Noise Ratios (SNRs).

Item: Survivability strategies in all optical networks (2006). Singh, Sidharta; Nleya, B. M.
Recent advances in fiber optics technology have enabled extremely high-speed transport

Item: Rain rate and rain drop size distribution models for line-of-sight millimetric systems in South Africa (2006). Owolawi, Pius Adewale; Afullo, Thomas Joachim Odhiambo.
Radio frequencies at millimeter wavelengths suffer greatly from rain attenuation. It is therefore essential to study rainfall characteristics for the efficient and reliable design of radio networks at frequencies above 10 GHz. These characteristics of rain are geographically dependent and need to be studied in order to estimate rain-induced attenuation. The ITU-R, through Recommendations P.837 and P.838, has presented global approaches to rain-rate variation and rain-induced attenuation on line-of-sight radio links. This dissertation therefore evaluates the characteristics of rainfall rate and their applications for South Africa. The cumulative distributions of rain intensity for 12 locations in seven regions of South Africa are presented, based on five years of rainfall data. Rain rates with an integration time of 60 minutes are converted to an integration time of 1 minute in accordance with ITU-R recommendations. The resulting cumulative rain intensities, and the relations between them, are compared with the global figures presented in ITU-R Recommendation P.837, as well as with work in other African countries, notably by Moupfuma and Martin. Based on this work, additional rain-climatic zones are proposed alongside the five identified by the ITU-R for South Africa. Finally, the study compares semi-empirical raindrop-size distribution models, such as Laws and Parsons, Marshall and Palmer, Joss, Thams and Waldvogel, and the gamma distribution, with the estimated South African models.
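
The standard step from rain rate to link attenuation that this kind of study feeds into is the power-law relation gamma = k * R^alpha (dB/km), integrated over the path with an effective-path reduction factor. The sketch below uses placeholder coefficients purely for illustration; the actual k, alpha and reduction-factor model would come from ITU-R P.838/P.530 or from regionally fitted values such as those developed in this work.

```python
def specific_attenuation(rain_rate_mm_h, k, alpha):
    """Power-law specific attenuation in dB/km: gamma = k * R**alpha."""
    return k * rain_rate_mm_h ** alpha

def path_attenuation(rain_rate_mm_h, link_km, k, alpha, reduction_factor=1.0):
    """Total rain attenuation over the link, with an effective-path reduction factor."""
    return specific_attenuation(rain_rate_mm_h, k, alpha) * link_km * reduction_factor

# Illustrative numbers only (placeholder coefficients, not ITU-R or fitted values):
R001 = 60.0          # rain rate exceeded for 0.01% of an average year, mm/h
k, alpha = 0.05, 1.1
print(f"gamma = {specific_attenuation(R001, k, alpha):.2f} dB/km")
print(f"A     = {path_attenuation(R001, 10.0, k, alpha, reduction_factor=0.8):.1f} dB over 10 km")
```
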
Item: Performance analysis of LAN, WAN and WLAN in Eritrea (2006). Kakay, Osman Mohammed Osman; Afullo, Thomas Joachim Odhiambo.
The dissertation addresses the communication issues of interconnecting the LANs of the different government sectors and providing access to the global Internet. Network capacities are purposely over-engineered in today's commercial Internet: any network provider, be it a commercial Internet Service Provider (ISP) or the information technology department of a government sector, company or university site, will provision bandwidth so that there is virtually no data loss, even under the worst possible network utilization scenario. Thus, the service delivered by today's end-to-end wide-area Internet would be near perfect were it not for the inter-domain connections, such as the Internet access link to the ISP or the peering points between ISPs. The thesis studies the performance of the network in Eritrea, identifying the problems of its Local Area Networks (LANs) and Wide Area Network (WAN) and suggesting initial solutions, and investigates WAN performance through measured traffic analysis between the Asmara and Massawa LANs using queueing models (M/M/1 and M/M/2). The dissertation also uses the OPNET IT Guru simulation package to study the performance of LAN and WLAN in Eritrea; the items studied include traffic, collisions, packet loss and queue delay. Finally, in order to follow current trends, we study the performance of VoIP links in the Eritrean WAN environment, with a focus on five link capacities: 28 kbps, 33 kbps, 64 kbps and 128 kbps for voice, and 256/512 kbps for voice and data. Using the R value as a measure of mean opinion score (MOS), we determine that the 33 kbps link would be adequate for Eritrean WANs.
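
For readers unfamiliar with the M/M/1 model used in that traffic analysis, the basic quantities follow directly from the arrival rate and service rate. A minimal sketch, with made-up traffic figures rather than the measured Asmara-Massawa values:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Classic M/M/1 results: utilisation, mean occupancy and delays."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("Queue is unstable: arrival rate must be below service rate.")
    L  = rho / (1 - rho)                      # mean number in system
    W  = 1 / (service_rate - arrival_rate)    # mean time in system
    Wq = rho / (service_rate - arrival_rate)  # mean waiting time in queue
    return {"utilisation": rho, "mean_in_system": L,
            "mean_delay_s": W, "mean_wait_s": Wq}

# Illustrative: a 64 kbps link carrying 1000-bit packets arriving at 50 packets/s.
service_rate = 64_000 / 1000                  # packets the link can serve per second
print(mm1_metrics(arrival_rate=50.0, service_rate=service_rate))
```
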
Item: Multiuser detection employing recurrent neural networks for DS-CDMA systems (2006). Moodley, Navern; Mneney, Stanley Henry.
Over the last decade, access to personal wireless communication networks has evolved to a point of necessity. Attached to the phenomenal growth of the telecommunications industry in recent times is an escalating demand for higher data rates and efficient spectrum utilization. This demand is fuelling the advancement of third generation (3G), as well as future, wireless networks. Current 3G technologies are adding a dimension of mobility to services that have become an integral part of modern everyday life. Wideband code division multiple access (WCDMA) is the standardized multiple access scheme for the 3G Universal Mobile Telecommunication System (UMTS). As an air interface solution, CDMA has received considerable interest over the past two decades, and a great deal of current research is concerned with improving the application of CDMA in 3G systems. A key component of CDMA is multiuser detection (MUD), which aims to enhance system capacity and performance by optimally demodulating multiple interfering signals that overlap in time and frequency. This is a major research problem in multipoint-to-point communications. Due to the complexity associated with optimal maximum likelihood detection, many different sub-optimal solutions have been proposed. The focus of this dissertation is the application of neural networks to MUD in a direct-sequence CDMA (DS-CDMA) system. Specifically, it explores how the Hopfield recurrent neural network (RNN) can be employed to give yet another sub-optimal solution to the optimization problem of MUD. There is great scope for neural networks in fields encompassing communications, primarily attributed to their non-linearity, adaptivity and key function as data classifiers. In the context of optimum multiuser detection, neural networks have been successfully employed to solve similar combinatorial optimization problems. The concepts of CDMA and MUD are discussed. The use of a vector-valued transmission model for DS-CDMA is illustrated, and common linear sub-optimal MUD schemes, as well as the maximum likelihood criterion, are reviewed and their performance demonstrated. The Hopfield neural network (HNN) for combinatorial optimization is discussed. Basic concepts and techniques from the field of statistical mechanics are introduced, and it is shown how they may be employed to analyze neural classification. Stochastic techniques are considered in the context of improving the performance of the HNN. A neural-based receiver, which employs a stochastic HNN and a simulated annealing technique, is proposed, and its performance in a channel affected by additive white Gaussian noise (AWGN) is analyzed by way of simulation. The performance of the proposed scheme is compared to that of the single-user matched filter, linear decorrelating and minimum mean-square error detectors, as well as the classical HNN and the stochastic Hopfield network (SHN) detectors. In conclusion, the feasibility of neural networks (in this case the HNN) for MUD in a DS-CDMA system is explored by quantifying the relative performance of the proposed model using simulation results and in view of implementation issues.
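
To make the Hopfield-style formulation concrete, the following sketch applies asynchronous sign updates to the maximum-likelihood metric of a small synchronous DS-CDMA system. The spreading codes, amplitudes and noise level are arbitrary illustrative choices; a real HNN receiver, and the stochastic and annealed variants studied in the dissertation, would be considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three users, length-8 random spreading codes (columns of S), unit amplitudes.
K, N = 3, 8
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)
b_true = rng.choice([-1.0, 1.0], size=K)

r = S @ b_true + 0.3 * rng.standard_normal(N)   # received chip vector
y = S.T @ r                                     # matched-filter bank outputs
H = S.T @ S                                     # code cross-correlation matrix

# Hopfield-style asynchronous updates: each step maximises the ML metric
#   2 b^T y - b^T H b   with respect to a single bipolar bit b_k.
b = np.sign(y)                                  # matched-filter decisions as a start
for _ in range(10):                             # a few sweeps over all users
    for k in range(K):
        interference = H[k] @ b - H[k, k] * b[k]
        b[k] = 1.0 if y[k] - interference >= 0 else -1.0

print("true bits    :", b_true)
print("HNN estimate :", b)
```
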
Item: Rain attenuation modelling for line-of-sight terrestrial links (2006). Naicker, Kumaran; Mneney, Stanley Henry.
In today's rapidly expanding communications industry, there is an ever-increasing demand for greater bandwidth, higher data rates and better spectral efficiency. As a result, current and future communication systems will need to employ advanced spatial, temporal and frequency diversity techniques in order to meet these demands. Even with such techniques, congestion of the lower frequency bands will inevitably lead to increased use of millimetre-wave frequencies in terrestrial communication systems. Before such systems can be deployed, radio system designers require realistic and readily usable channel and propagation models to predict the behaviour of such links and ensure that reliable and efficient data transmission is achieved. The scattering and attenuation of electromagnetic waves by rain is a serious problem at microwave and millimetre-wave frequencies. The conversion of rain rate to specific attenuation is a crucial step in the analysis of the total path attenuation and hence radio-link availability. It is now common practice to relate the specific attenuation and the rain rate using a simple power-law relationship. The power-law parameters are then used in the path attenuation model, where the spatial variation of rainfall is estimated by a path integration of the rain rate. These power-law parameters are strongly influenced by the drop-size distribution (DSD), so an examination of the various DSDs and their influence on specific attenuation and link availability is warranted. Several models for the DSD have been suggested in the literature, from the traditional exponential to the gamma, lognormal and Weibull distributions; the type of DSD varies with geographical location and rainfall type. An important requirement of the DSD is that it is consistent with the rain rate (i.e. the DSD must satisfy the rain-rate integral equation). Thus, before application in the specific attenuation calculations, normalisation is performed to ensure this consistency, as done in this study. Once the specific attenuation has been evaluated for the necessary frequency and rain-rate range, path averaging is performed to predict the rain attenuation over the communication link. The final step in this dissertation is the estimation of the percentage of time of such occurrences, for which cumulative time statistics of surface point rain rates are needed. The resulting cumulative distribution model of the fade depth and duration due to rain is a valuable tool for system designers, who can then determine the appropriate fade margin for the communication system and the resulting period of unavailability for the link.

Item: A multi-objective particle swarm optimized fuzzy logic congestion detection and dual explicit notification mechanism for IP networks (2006). Nyirenda, Clement Nthambazale; Dawoud, Peter Dawoud Shenouda.
The Internet has experienced tremendous growth over the past two decades, and with that growth have come severe congestion problems. Research efforts to alleviate the congestion problem can broadly be classified into three groups: (1) router-based congestion detection; (2) generation and transmission of congestion notification signals to the traffic sources; and (3) end-to-end algorithms which control the flow of traffic between the end hosts. This dissertation largely addresses the first two groups, which are router initiated. Router-based congestion detection mechanisms, commonly known as Active Queue Management (AQM), can be classified into two groups: conventional mathematical analytical techniques and fuzzy logic based techniques. Research has shown that fuzzy logic techniques are more effective and robust than conventional techniques because they do not rely on the availability of a precise mathematical model of the Internet; they use linguistic knowledge and are therefore better placed to handle the complexities associated with the non-linearity and dynamics of the Internet. In spite of these developments there is still ample room for improvement, because in practice the deployment of AQM mechanisms has been slow. In the first part of this dissertation, we study the major AQM schemes in both the conventional and the fuzzy logic domains in order to uncover the problems that have hampered their deployment in practical implementations. Based on the findings from this study, we model the Internet congestion problem as a multi-objective problem and propose a Fuzzy Logic Congestion Detection (FLCD) algorithm which synergistically combines the good characteristics of the fuzzy approaches with those of the conventional approaches. We design the membership functions (MFs) of the FLCD algorithm automatically by using Multi-objective Particle Swarm Optimization (MOPSO), a population-based stochastic optimization algorithm. This enables the FLCD algorithm to achieve optimal performance on all the major objectives of Internet congestion control. The FLCD algorithm is compared with the basic fuzzy logic AQM and Random Exponential Marking (REM) algorithms on a best-effort network. Simulation results show that the FLCD algorithm provides high link utilization whilst maintaining lower jitter and packet loss; it also exhibits higher fairness and stability compared to its basic variant and REM. We extend this concept to a Proportional Differentiated Services network environment, where the FLCD algorithm outperforms the traditional Weighted RED algorithm. We also propose self-learning and organization structures which enable the FLCD algorithm to achieve a more stable queue, lower packet loss and lower UDP traffic delay in dynamic traffic environments on both wired and wireless networks. In the second part of this dissertation, we present the congestion notification mechanisms which have been proposed for wired and satellite networks, and propose an FLCD-based dual explicit congestion notification algorithm which combines the merits of the Explicit Congestion Notification (ECN) and Backward Explicit Congestion Notification (BECN) mechanisms. In this proposal, the ECN mechanism is invoked based on the packet marking probability, while the BECN mechanism is invoked based on a BECN parameter which helps to ensure that BECN is used only when congestion is severe. Motivated by the fact that TCP reacts to the congestion notification signal only once during a round-trip time (RTT), we propose an RTT-based BECN decay function. This reduces the invocation of the BECN mechanism and, consequently, the generation of reverse traffic during an RTT. Compared to the traditional explicit notification mechanisms, simulation results show that the new approach exhibits lower packet loss rates and higher queue stability on wired networks, as well as lower packet loss rates, higher goodput and higher link utilization on satellite networks. We also observe that the BECN decay function reduces reverse traffic significantly on both wired and satellite networks while ensuring that performance remains virtually the same as in the algorithm without BECN traffic reduction.
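
The core of a fuzzy-logic AQM of this kind is a small inference step that maps queue measurements to a packet marking probability. The sketch below is a minimal zero-order Sugeno-style illustration with hand-picked triangular membership functions and rule outputs; in the FLCD algorithm these shapes are exactly the quantities tuned by MOPSO, and the actual inputs and rule base differ.

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c (a==b or b==c gives a shoulder)."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mark_probability(queue_error, queue_rate):
    """Fuzzy mapping from (normalised queue error, queue growth rate) to marking probability."""
    # Fuzzification with placeholder membership functions (assumed, not the thesis MFs).
    err = {"low":  tri(queue_error, -1.0, -1.0, 0.0),
           "ok":   tri(queue_error, -0.5,  0.0, 0.5),
           "high": tri(queue_error,  0.0,  1.0, 1.0)}
    rate = {"falling": tri(queue_rate, -1.0, -1.0, 0.0),
            "steady":  tri(queue_rate, -0.5,  0.0, 0.5),
            "rising":  tri(queue_rate,  0.0,  1.0, 1.0)}
    # Zero-order Sugeno rules: (error term, rate term) -> marking probability.
    rules = [("low", "falling", 0.0), ("low", "steady", 0.0), ("low", "rising", 0.1),
             ("ok",  "falling", 0.0), ("ok",  "steady", 0.1), ("ok",  "rising", 0.3),
             ("high", "falling", 0.2), ("high", "steady", 0.6), ("high", "rising", 1.0)]
    num = den = 0.0
    for e, r, p in rules:
        w = min(err[e], rate[r])         # rule firing strength
        num += w * p
        den += w
    return num / den if den > 0 else 0.0

print(mark_probability(queue_error=0.4, queue_rate=0.6))
```
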
Item: Cell search in frequency division duplex WCDMA networks (2006). Rezenom, Seare Haile; Broadhurst, Anthony D.
Wireless radio access technologies have been progressively evolving to meet the high data rate demands of consumers. The deployment and success of voice-based second-generation networks were enabled by the Global System for Mobile Communications (GSM) and the Interim Standard Code Division Multiple Access (IS-95 CDMA) networks. The rise of high data rate third-generation communication systems is realised by two potential wireless radio access networks, Wideband Code Division Multiple Access (WCDMA) and CDMA2000. These networks are based on the use of various types of codes to initiate, sustain and terminate the communication links; moreover, different codes are used to separate the transmitting base stations. This dissertation focuses on base station identification in Frequency Division Duplex (FDD) WCDMA networks. Notwithstanding the ease of deployment of these networks, their asynchronous nature presents serious challenges to the designer of the receiver. One of these challenges is the identification of the base station by the receiver, a process called cell search. The receiver algorithms must therefore be robust to hostile radio channel conditions, Doppler frequency shifts and the detrimental effects of carrier frequency offsets. The dissertation begins by discussing the structure and generation of WCDMA base station data, along with an examination of the effects of carrier frequency offset. The various cell searching algorithms proposed in the literature are then discussed, a new algorithm that exploits the correlation length structure is proposed, and simulation results are presented. Another design challenge presented by WCDMA networks is the estimation of the carrier frequency offset at the receiver. Carrier frequency offsets arise from crystal oscillator inaccuracies at the receiver, and their effect is felt when the voltage-controlled oscillator at the receiver is not oscillating at the same carrier frequency as the transmitter; this leads to a decrease in receiver acquisition performance. The carrier frequency offset has to be estimated and corrected before the decoding process can commence, and there are different approaches in the literature to estimate and correct these offsets. The final part of the dissertation investigates FFT-based carrier frequency estimation techniques and presents a new method that reduces the estimation error.
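
A basic FFT-based carrier frequency offset estimator of the kind discussed above works by wiping off a known (pilot) sequence and locating the spectral peak of the residual complex exponential. The sketch below is a generic illustration with arbitrary parameters, not the improved estimator developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 3.84e6                      # sample rate in samples/s (WCDMA chip rate), illustrative
N = 4096                         # observation length
true_cfo = 1500.0                # Hz, the unknown offset to recover

pilot = rng.choice([1, -1], N) + 1j * rng.choice([1, -1], N)   # known sequence
n = np.arange(N)
rx = pilot * np.exp(2j * np.pi * true_cfo * n / fs)            # offset-rotated pilot
rx += 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Wipe off the known pilot, leaving (approximately) a single complex tone,
# then read its frequency from the FFT peak (zero-padding refines the grid).
tone = rx * np.conj(pilot)
nfft = 16 * N
spectrum = np.fft.fft(tone, nfft)
freqs = np.fft.fftfreq(nfft, d=1 / fs)
cfo_est = freqs[np.argmax(np.abs(spectrum))]

print(f"true CFO {true_cfo:.1f} Hz, estimated {cfo_est:.1f} Hz")
```
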
Item: Repeat-punctured turbo codes and superorthogonal convolutional turbo codes (2007). Pillay, Narushan; Xu, Hongjun; Takawira, Fambirai.
The use of error-correction coding techniques in communication systems has become extremely important. Due to the heavy constraints faced by systems engineers, more attention has been given to developing codes that converge closer to the Shannon theoretical limit. Turbo codes exhibit performance within a few tenths of a decibel of the theoretical limit and have motivated a great deal of good research in the channel coding area in recent years. In this dissertation, motivated by turbo codes, we study the use of three new error-correction coding schemes: repeat-punctured superorthogonal convolutional turbo codes, dual-repeat-punctured turbo codes and dual-repeat-punctured superorthogonal convolutional turbo codes, applied to the additive white Gaussian noise (AWGN) channel and the frequency non-selective, or flat, Rayleigh fading channel. The performance of turbo codes has been shown to be near the theoretical limit in the AWGN channel. By using orthogonal signaling, which allows for bandwidth expansion, the performance of the turbo coding scheme can be improved even further; since the resulting code is low rate, it is mainly suitable for spread-spectrum modulation applications. In conventional turbo codes the frame length is set equal to the interleaver size; however, the codeword distance spectrum of turbo codes improves with an increasing interleaver size, and it has been reported that the performance of turbo codes can be improved by using repetition and puncturing. Repeat-punctured turbo codes have shown a significant increase in performance at moderate to high signal-to-noise ratios. In this thesis, we study the use of orthogonal signaling and parallel concatenation, together with repetition (dual and single) and puncturing, to improve the performance of the superorthogonal convolutional turbo code and the conventional turbo code for reliable and effective communications. During this research, three new coding schemes were adapted from the conventional turbo code; a method to evaluate the union bounds for the AWGN channel and flat Rayleigh fading channel was also established, together with a technique for weight-spectrum evaluation.
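
For context, union bounds of the kind referred to here are typically built from the code's distance spectrum. A common form of the bound on bit error probability for a rate-R code with BPSK signalling on the AWGN channel is sketched below using standard notation (not the dissertation's exact expressions):

```latex
P_b \;\le\; \frac{1}{N}\sum_{d \ge d_{\mathrm{free}}} W_d \, Q\!\left(\sqrt{\frac{2\,d\,R\,E_b}{N_0}}\right)
```

Here N is the interleaver (frame) length, W_d is the total information weight of codewords at Hamming distance d, and Q(.) is the Gaussian tail function; repetition and puncturing enter through their effect on the weight spectrum W_d.
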
Item: Extending WiFi access for rural reach (2007). Naidoo, Kribashnee; Sewsunker, Rathi.
WiFi can be used to provide cost-effective last-mile IP connectivity to rural users. In an initial rollout, hotspots or hotzones can be positioned at community centres such as schools, clinics, hospitals or call centres. The research investigates maximising coverage using physical and higher-layer techniques. The study considers a typical South African rural region, with estimates of telecommunications services traffic, and compares several IEEE 802.11 deployment options based on the requirements of the South African case in order to recommend options that improve performance.

Item: Key management in mobile ad hoc networks (2005). Van der Merwe, Johannes Petrus; McDonald, Stephen A.
Mobile ad hoc networks (MANETs) eliminate the need for pre-existing infrastructure by relying on the nodes to perform all network services. The connectivity between the nodes is sporadic due to the shared, error-prone wireless medium and frequent route failures caused by node mobility. Fully self-organized MANETs are created solely by the end-users for a common purpose, in an ad hoc fashion. Forming peer-to-peer security associations in MANETs is more challenging than in conventional networks due to the lack of a central authority. This thesis is mainly concerned with peer-to-peer key management in fully self-organized MANETs. A key management protocol's primary function is to bootstrap and maintain the security associations in the network, hence to create, distribute and revoke (symmetric or asymmetric) keying material as needed by the network security services. The fully self-organized feature means that the key management protocol cannot rely on any form of off-line or on-line trusted third party (TTP). The first part of the thesis gives an introduction to MANETs and highlights their main characteristics and applications; it then gives an overall perspective on the security issues in MANETs and motivates the importance of solving the key management problem. The second part gives a comprehensive survey of the existing key management protocols in MANETs. The protocols are subdivided into groups based on their main characteristic or design strategy, and discussion and comments are provided on the strategy of each group. The discussions give insight into the state of the art and show researchers the way forward. The third part of the thesis proposes a novel peer-to-peer key management scheme for fully self-organized MANETs, called Self-Organized Peer-to-Peer Key Management (SelfOrgPKM). The scheme has low implementation complexity and provides self-organized mechanisms for certificate dissemination and key renewal without the need for any form of off-line or on-line authority. The fully distributed scheme is superior in communication and computational overhead with respect to its counterparts: all nodes send and receive the same number of messages and complete the same amount of computation, so SelfOrgPKM preserves the symmetric relationship between the nodes. Each node is its own authority domain, which provides an adversary with no convenient point of attack. SelfOrgPKM solves the classical routing-security interdependency problem and mitigates impersonation attacks by providing a strong one-to-one binding between a user's certificate information and public key. The proposed scheme uses a novel certificate exchange mechanism that exploits user mobility but does not rely on mobility in any way. The proposed certificate exchange mechanism is ideally suited to bootstrapping the routing security: it enables nodes to set up security associations on the network layer in a localized fashion without any noticeable time delay. The thesis also introduces two generic cryptographic building blocks as the basis of SelfOrgPKM: (1) a variant on the ElGamal type signature scheme, developed from the generalized ElGamal signature scheme introduced by Horster et al.; the modified scheme is one of the most efficient ElGamal variants, outperforming most other variants; and (2) a subordinate public key generation scheme. The thesis introduces the novel notion of subordinate public keys, which allows the users of SelfOrgPKM to perform self-organized, self-certificate revocation without changing their network identifiers/addresses. Subordinate public keys therefore eliminate the main weakness of previous efforts to solve the address ownership problem in Mobile IPv6; the main weakness of previous efforts to break the routing-security interdependence cycle in MANETs is likewise eliminated by the subordinate public key mechanism. The presented ElGamal signature variant is proved secure in the Random Oracle and Generic Security Model (ROM+GM) without making any unrealistic assumptions. It is shown how the strong security of the signature scheme supports the security of the proposed subordinate key generation scheme. Based on the secure signature scheme, a security argument for SelfOrgPKM is provided with respect to a general, active insider adversary model. The only operation of SelfOrgPKM affecting the network is the pairwise exchange of certificates. The cryptographic correctness, low implementation complexity and effectiveness of SelfOrgPKM were verified through extensive simulations using ns-2 and OpenSSL. Thorough analysis of the simulation results shows that the localized certificate exchange mechanism on the network layer has negligible impact on network performance. The simulation results also correlate with the efficiency analysis of SelfOrgPKM in an ideal network setting (i.e. assuming guaranteed connectivity), and furthermore demonstrate that network layer certificate exchanges can be triggered without extending routing protocol control packets.
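
For readers unfamiliar with ElGamal-type signatures, the textbook scheme underlying the Horster et al. generalization (and hence the variant above) can be sketched in a few lines. The parameters below are toy values for illustration only, and this plain version is not the dissertation's variant nor its ROM+GM-proven construction.

```python
import hashlib
import secrets
from math import gcd

# Toy ElGamal signature over a tiny prime group -- illustration only, NOT secure.
p = 467            # small prime for the example
g = 2

def H(message: bytes) -> int:
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % (p - 1)

def keygen():
    x = secrets.randbelow(p - 2) + 1          # private key
    return x, pow(g, x, p)                    # (private, public)

def sign(message: bytes, x: int):
    while True:
        k = secrets.randbelow(p - 2) + 1
        if gcd(k, p - 1) == 1:
            break
    r = pow(g, k, p)
    s = (pow(k, -1, p - 1) * (H(message) - x * r)) % (p - 1)
    return r, s

def verify(message: bytes, sig, y: int) -> bool:
    r, s = sig
    if not (0 < r < p):
        return False
    return pow(g, H(message), p) == (pow(y, r, p) * pow(r, s, p)) % p

x, y = keygen()
sig = sign(b"certificate exchange", x)
print(verify(b"certificate exchange", sig, y))   # True
print(verify(b"tampered message", sig, y))       # False (with overwhelming probability)
```
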
Item: A structure from motion solution to head pose recovery for model-based video coding (2005). Heathcote, Jonathan Michael; Naidoo, Bashan.
Current hybrid coders such as H.261/263/264 or MPEG-1/-2 cannot always offer high quality-to-compression ratios for video transfer over the (low-bandwidth) wireless channels typical of handheld devices such as smartphones and PDAs. These devices are often used in videophone and teleconferencing scenarios, where the subjects of interest in the scene are people's faces. In these cases, an alternative coding scheme known as Model-Based Video Coding (MBVC) can be employed. MBVC systems for face scenes utilise geometrically and photorealistically accurate computer graphics models to represent head-and-shoulder views of people in a scene. High compression ratios are achieved at the encoder by extracting and transmitting only the parameters which represent the explicit shape and motion changes occurring on the face in the scene. With some a priori knowledge (such as the MPEG-4 standard for facial animation parameters), the transmitted parameters can be used at the decoder to accurately animate the graphical model, and a synthesised version of the scene originally appearing at the encoder can be output. Primary components for facial re-animation at the decoder are a set of local and global motion parameters extracted from the video sequence appearing at the encoder. Local motion describes the changes in facial expression occurring on the face; global motion describes the three-dimensional motion of the entire head as a rigid object. Extraction of this three-dimensional global motion is often called head tracking. This thesis focuses on the tracking of rigid head pose in a monocular video sequence. The system framework utilises the recursive Structure from Motion (SfM) method of Azarbayejani and Pentland. Integral to the SfM solution are a large number of manually selected two-dimensional feature points, which are tracked throughout the sequence using an efficient image registration technique. The trajectories of the feature points are simultaneously processed by an extended Kalman filter (EKF) to stably recover camera geometry and the rigid three-dimensional structure and pose of the head. To improve estimation accuracy and stability, adaptive estimation is harnessed within the Kalman filter by dynamically varying the noise associated with each of the feature measurements. A closed-loop approach is used to constrain feature tracking in each frame: the Kalman filter's estimates of the motion and structure of the face are used to predict the trajectories of the features, thereby constraining the search space for the next frame in the video sequence. Further robustness in feature tracking is achieved through the integration of a linear appearance basis to accommodate variations in illumination or changes in aspect of the face. Synthetic experiments are performed for both the SfM and the feature tracking algorithms. The accuracy of the SfM solution is evaluated against synthetic ground truth, and further experimentation demonstrates the stability of the framework under significant noise corruption of the arriving measurement data. The accuracy of the pixel measurements obtained by the feature tracking algorithm is also evaluated against known ground truth, and additional experiments confirm feature tracking stability despite significant changes in target appearance. Experiments with real video sequences illustrate the robustness of the complete head tracker to partial occlusions of the face. The SfM solution (including two-dimensional tracking) runs near real time at 12 Hz. The limits of pitch, yaw and roll (rotational) recovery are 45°, 45° and 90° respectively, and large translational recovery (especially in depth) is also demonstrated. The estimated motion trajectories are validated against (publicly available) ground truth motion captured using a commercial magnetic orientation tracking system. Rigid re-animation of an overlaid wireframe face model is further used as a visually subjective analysis technique. These combined results confirm the suitability of the proposed head tracker as the global (rigid) motion estimator in an MBVC system.
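
The predict-correct loop that such a tracker relies on can be illustrated with a plain linear Kalman filter on a single feature point under a constant-velocity model. The thesis uses an extended Kalman filter over the full SfM state; the matrices and noise levels below are illustrative assumptions.

```python
import numpy as np

dt = 1.0                                    # one frame
F = np.array([[1, 0, dt, 0],                # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
Hm = np.array([[1, 0, 0, 0],                # only the (x, y) pixel position is measured
               [0, 1, 0, 0]], dtype=float)
Q = 1e-2 * np.eye(4)                        # process noise (assumed)
R = 4.0 * np.eye(2)                         # measurement noise in pixels^2 (assumed)

x = np.array([100.0, 80.0, 0.0, 0.0])       # state [x, y, vx, vy]
P = 10.0 * np.eye(4)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = Hm @ P @ Hm.T + R                   # innovation covariance
    K = P @ Hm.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - Hm @ x)
    P = (np.eye(4) - K @ Hm) @ P
    return x, P

rng = np.random.default_rng(3)
for frame in range(5):
    x, P = predict(x, P)                    # predicted position constrains the 2-D search
    true_pos = np.array([100.0 + 2.0 * (frame + 1), 80.0 - 1.0 * (frame + 1)])
    z = true_pos + rng.normal(0, 2.0, 2)    # noisy feature measurement
    x, P = update(x, P, z)
    print(f"frame {frame}: next search centre {np.round((F @ x)[:2], 1)}")
```
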
Item: Human motion reconstruction from video sequences with MPEG-4 compliant animation parameters (2005). Carsky, Dan; Naidoo, Bashan; McDonald, Stephen A.
The ability to track articulated human motion in video sequences is essential for applications ranging from biometrics and virtual reality to human-computer interfaces and surveillance. The work presented in this thesis focuses on tracking and analysing human motion in terms of MPEG-4 Body Animation Parameters, in the context of a model-based coding scheme. Model-based coding has emerged as a potential technique for very low bit-rate video compression. This study emphasises motion reconstruction rather than photorealistic human body modelling; consequently, a 3-D skeleton with 31 degrees of freedom was used to model the human body. Compression is achieved by analysing the input images in terms of the known 3-D model and extracting parameters that describe the relative pose of each segment. These parameters are transmitted to the decoder, which synthesises the output by transforming the default model into the correct posture. The problem comprises two main aspects: 3-D human motion capture and pose description. The goal of the 3-D human motion capture component is to generate the 3-D locations of key joints on the human body without the use of special markers or sensors placed on the subject. The input sequence is acquired by three synchronised and calibrated CCD cameras. Digital image matching techniques, including cross-correlation and least-squares matching, are used to find spatial correspondences between the multiple views, as well as temporal correspondences in subsequent frames, with sub-pixel accuracy. The tracking algorithm automates the matching process, examining each matching result and adaptively modifying the matching parameters. Key points must be manually selected in the first frame, after which the tracking proceeds without user intervention, employing the recovered 3-D motion of the skeleton model to predict future states. Epipolar geometry is exploited to verify spatial correspondences in each frame before the 3-D locations of all joints are computed through triangulation to construct the 3-D skeleton. The pose of the skeleton is described by the MPEG-4 Body Animation Parameters, and the subject's motion is reconstructed by applying the animation parameters to a simplified version of the default MPEG-4 skeleton. The tracking algorithm may be adapted to 2-D tracking in monocular sequences; an example of 2-D tracking of facial expressions demonstrates the flexibility of the algorithm. Further results involving tracking of separate body parts demonstrate the advantage of multiple views and the benefit of camera calibration, which simplifies the generation of 3-D trajectories and the estimation of epipolar geometry. The overall system is tested on a walking sequence in which full-body motion capture is performed and all 31 degrees of freedom of the tracked model are extracted. Results show adequate motion reconstruction (i.e. convincing to most human observers), with slight deviations due to the lack of knowledge of the volumetric properties of the human body.
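
The triangulation step that turns matched 2-D points from calibrated cameras into 3-D joint positions is usually solved as a small homogeneous least-squares problem. Below is a minimal two-view DLT-style sketch with made-up projection matrices; the thesis uses three calibrated cameras together with epipolar verification.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two calibrated views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                          # dehomogenise

# Illustrative cameras: identical intrinsics, second camera offset along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 3.0, 1.0])         # a 3-D joint in homogeneous coords
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]            # its projection in view 1
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]            # its projection in view 2

print(np.round(triangulate(P1, P2, x1, x2), 3))  # ~ [0.2, -0.1, 3.0]
```
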
Item: Performance of high rate space-time trellis coded modulation in fading channels (2005). Ayodeji, Sokoya Oludare; Takawira, Fambirai; Xu, Hongjun.
Future wireless communication systems promise to offer a variety of multimedia services which require reliable transmission at high data rates over wireless links. Multiple-input multiple-output (MIMO) systems have received a great deal of attention because they provide very high data rates for such links. Theoretical studies have shown that the quality provided by MIMO systems can be increased by using space-time codes, which combine space (antenna) and time diversity at the transmitter to increase the efficiency of the MIMO system. The three primary approaches, the layered space-time architecture, space-time trellis coding (STTC) and space-time block coding (STBC), represent ways of investigating transmitter-based signal processing for diversity exploitation and interference suppression. The advantages of STBC (i.e. low decoding complexity) and STTC (i.e. the TCM encoder structure) can be combined to design a high rate space-time trellis coded modulation (HR-STTCM) scheme. Most space-time code designs are based on the assumption of perfect channel state information at the receiver, so as to make coherent decoding possible. However, accurate channel estimation requires a long training sequence, which lowers spectral efficiency. Part of this dissertation focuses on the performance of HR-STTCM under non-coherent detection, where channel state information is imperfect, and in environments where the channel experiences rapid fading. Prior work on space-time codes, with particular reference to STBC systems in a multiuser environment, has not adequately addressed the performance of the decoupled-user signal-to-noise ratio. Part of this thesis examines, from a signal-to-noise-ratio point of view, the performance of STBC systems in a multiuser environment, and the performance of HR-STTCM in such an environment. The bit/frame error performance of space-time codes in fading channels can be evaluated using different approaches. The Chernoff upper bound, combined with the pair-state generalized transfer function bound approach or the modified state transition diagram transfer function bound approach, has been widely used in the literature. However, although readily determined, this bound can be too loose over the signal-to-noise ranges of normal interest. Other approaches, based on the exact calculation of the pairwise error probabilities, are often too cumbersome. A simple, exact numerical technique for calculating, to any desired degree of accuracy, the pairwise error probability of the HR-STTCM scheme over the Rayleigh fading channel is proposed in this dissertation.
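
As background to the bounds mentioned above, the widely used Chernoff bound on the pairwise error probability of a space-time code over a quasi-static Rayleigh fading channel (the rank-and-determinant criterion of Tarokh et al.) takes roughly the following form; the dissertation's exact numerical technique is aimed at avoiding the looseness of this kind of bound.

```latex
P(\mathbf{c}\rightarrow\mathbf{e}) \;\le\; \left(\prod_{i=1}^{r}\lambda_i\right)^{-m} \left(\frac{E_s}{4N_0}\right)^{-rm}
```

Here r and the λ_i are the rank and non-zero eigenvalues of the codeword-difference matrix for the pair (c, e), m is the number of receive antennas, and E_s/N_0 is the symbol signal-to-noise ratio.
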
Item: A MAC protocol for IP-based CDMA wireless networks (2005). Mahlaba, Simon Bonginkosi; Takawira, Fambirai.
The evolution of the Internet Protocol (IP) to offer quality of service (QoS) makes it a suitable core network protocol for next-generation networks (NGN). The QoS features incorporated into IP will enable future IP-based wireless networks to meet the QoS requirements of various multimedia traffic. The Differentiated Services (DiffServ) architecture is a promising QoS technology due to its scalability, which arises from the use of traffic flow aggregates; for this reason, a network infrastructure based on DiffServ is assumed in this dissertation. This architecture provides assured service (AS) and premium service (PrS) classes in addition to best-effort (BE) service. The medium access control (MAC) protocol is one of the important design issues in wireless networks. In a wireless network carrying multimedia traffic, the MAC protocol is required to provide simultaneous support for a wide variety of traffic types, support traffic with delay and jitter bounds, and assign bandwidth in an efficient and fair manner among traffic classes. Several MAC protocols capable of supporting multimedia services have been proposed in the literature, the majority of which were designed for wireless ATM (Asynchronous Transfer Mode). The focus of this dissertation is on time division multiple access and code division multiple access (TDMA/CDMA) based MAC protocols that support QoS in IP-based wireless networks. The dissertation begins with a survey of wireless MAC protocols for centralised wireless networks, classifying them according to their multiple access technology as well as their method of resource sharing. A novel TDMA/CDMA-based MAC protocol incorporating techniques from existing protocols is then proposed. To provide the above-mentioned services, the bandwidth is partitioned between the AS and PrS classes; the BE class utilises the bandwidth remaining from the two classes because it has no QoS requirements. The protocol employs a demand assignment (DA) scheme to support traffic from the PrS and AS classes, while BE traffic is supported by a random reservation access scheme with dual multiple access interference (MAI) admission thresholds. The performance of the protocol, i.e. the AS and PrS call blocking probabilities and the BE throughput, is evaluated through Markov analytical models and Monte Carlo simulations. Furthermore, the protocol is modified and incorporated into an IEEE 802.16 broadband wireless access (BWA) network.
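
Call blocking probabilities of the kind evaluated here are often sanity-checked against the classical Erlang-B formula for a system with a fixed number of channels. The recursion below is a generic illustration with arbitrary numbers, not the Markov model developed in the dissertation.

```python
def erlang_b(offered_load_erlangs, channels):
    """Blocking probability via the numerically stable Erlang-B recursion."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_load_erlangs * b / (n + offered_load_erlangs * b)
    return b

# Illustrative: 20 code/slot channels offered 14 Erlangs of AS/PrS call traffic.
print(f"blocking probability = {erlang_b(14.0, 20):.4f}")
```
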
Item: Blind iterative multiuser detection for error coded CDMA systems (2005). Van Niekerk, Brett; Mneney, Stanley Henry.
Mobile communications have developed considerably since the radio communications that were in use 50 years ago. With the advent of GSM, mobile communications were brought to the average citizen. More recently, CDMA technology has provided the user with higher data rates and more reliable service, and it is apparent that it is the future of wireless communication. With the introduction of 3G technology in South Africa, it is becoming clear that it is the solution to the country's wireless communication requirements. The 3G and next-generation technologies could provide reliable communications to areas where it has proven difficult to operate and maintain communications effectively, such as rural locations. It is therefore important that these technologies continue to be researched in order to enhance their capabilities to provide a solution to the wireless needs of the local and global community. Whilst CDMA is proving to be a reliable communications technology, it is still susceptible to the effects of the near-far problem and multiple-access interference. A number of multiuser detectors that attempt to mitigate the effects of multiple-access interference have been proposed in the literature. A notable detector is the blind MOE detector, which requires only the desired user's spreading sequence and exhibits performance approximating that of other linear multiuser detectors. Another promising class of multiuser detectors operates on an iterative principle, with a joint multiuser detection and error-correcting coding scheme. The aim of this research is to develop a blind iterative detector with FEC coding as a potential solution to the need for a detector that can mitigate the effects of interfering users operating on the channel. The proposed detector has the benefits of both the blind and iterative schemes: it requires only knowledge of the desired user's signature, and it has integrated error-correcting abilities. The simulation results presented in this dissertation show that the proposed detector exhibits superior performance over the blind MOE detector for various channel conditions. An overview of spread-spectrum technologies is presented, the operation of DS-CDMA is described in more detail, and a history and overview of existing CDMA standards is given. The need for multiuser detection is explained, and a description and comparison of various detection methods that have appeared in the literature is given. An introduction to error-correction coding is given; convolutional codes, the turbo coding concept and methods of iterative decoding are described in more detail and compared, as iterative decoding is fundamental to the operation of an iterative CDMA detector. An overview of iterative multiuser detection is given, and selected iterative methods are described in more detail. A blind iterative detector is then proposed and analysed. Simulation results for the proposed detector, and a comparison with the blind MOE detector, are presented, showing performance characteristics and the effects of various channel parameters on performance. From these results it can be seen that the proposed detector exhibits superior performance compared to that of the blind MOE detector for various channel conditions. The dissertation is concluded, and possible future directions of research are given.
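
The blind MOE (minimum output energy) detector used as the benchmark above admits a compact closed form: with only the desired user's spreading code s and the received-signal covariance R, the constrained MOE filter is w = R^{-1} s / (s^T R^{-1} s). The sketch below estimates R from sample vectors in a small synchronous DS-CDMA setting with arbitrary parameters; the iterative, FEC-coupled detector proposed in the dissertation is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synchronous DS-CDMA: 4 users, length-16 codes; user 0 is the desired user.
K, N, num_symbols = 4, 16, 2000
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)
amps = np.array([1.0, 3.0, 3.0, 3.0])            # strong interferers (near-far scenario)
bits = rng.choice([-1.0, 1.0], size=(K, num_symbols))

received = S @ (amps[:, None] * bits) + 0.1 * rng.standard_normal((N, num_symbols))

s0 = S[:, 0]
R = received @ received.T / num_symbols          # sample covariance of the received chips
w = np.linalg.solve(R, s0)
w /= s0 @ w                                      # enforce w^T s0 = 1 (the MOE constraint)

mf_decisions  = np.sign(s0 @ received)           # conventional matched filter
moe_decisions = np.sign(w @ received)            # blind MOE detector

print("matched filter errors:", np.sum(mf_decisions != bits[0]))
print("blind MOE errors     :", np.sum(moe_decisions != bits[0]))
```
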