Masters Degrees (Electronic Engineering)
Permanent URI for this collection: https://hdl.handle.net/10413/6868
Browsing Masters Degrees (Electronic Engineering) by Title
Now showing 1 - 20 of 139
Item Adaptive techniques with cross-layer design for multimedia transmission. (2013) Vieira, Ricardo.; Xu, Hongjun. Wireless communication is a rapidly growing field with many of its aspects undergoing constant enhancement. The use of cross-layer design (CLD) in current technologies has improved system performance in terms of Quality-of-Service (QoS) guarantees. While multimedia transmission is difficult to achieve, CLD is capable of incorporating techniques to achieve multimedia transmission without high complexity. Many systems have incorporated some form of adaptive transmission when using a cross-layer design approach. Various challenges must be overcome when transmitting multimedia traffic; the main challenge is that each traffic type, namely voice, image and data, has its own QoS requirements for transmission in terms of delay, Symbol Error Rate (SER), throughput and jitter. Recently, cross-layer design has been proposed to exchange information between different layers to optimize the overall system performance. Current literature has shown that the application layer and physical layer can be used to adequately transmit multimedia over fading channels. Using Reed-Solomon coding at the application layer and rate adaptation at the physical layer allows each media type to achieve its QoS requirement whilst being able to transmit the different media within a single packet. This dissertation therefore strives to improve traffic throughput by introducing an unconventional rate adaptation scheme and by using power adaptation to achieve the SER QoS in multimedia transmission. Firstly, we introduce a system which modulates two separate sets of information with different modulation schemes. These two information sets are then concatenated and transmitted across the fading channel. The receiver uses a technique called blind detection to detect the modulation schemes used and then demodulates the information sets accordingly. The system uses an application layer that encodes each media type such that its QoS, in terms of SER, is achieved. Simulated results show an increase in spectral efficiency, and the system achieves the required SER constraint at lower Signal-to-Noise Ratio (SNR) values. The second approach involves adapting the input power to the system rather than adapting the modulation scheme. The two power-adaptive schemes discussed are water-filling and channel inversion. Channel inversion allows the SER requirement to be maintained at low SNR values, which is not possible with rate adaptation. Furthermore, the system uses an application layer to encode each media type such that its QoS is achieved. Simulated results using this design show an improvement in throughput, and the system achieves the SER constraint at lower SNR values.
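As a reading aid for the two power-adaptation schemes named in the abstract above, the following is a minimal NumPy sketch contrasting water-filling with truncated channel inversion over a set of fading power gains. The gain distribution, power budget and inversion cutoff are illustrative assumptions, not parameters taken from the dissertation.

    import numpy as np

    rng = np.random.default_rng(1)
    g = rng.exponential(scale=1.0, size=10_000)   # hypothetical Rayleigh-fading power gains
    P_avg = 1.0                                   # assumed average transmit-power budget

    def water_filling(g, P_avg):
        """Water-filling: P(g) = max(0, 1/g0 - 1/g), with the water level 1/g0
        found by bisection so that the average allocated power meets the budget."""
        lo, hi = 1e-6, 1e6
        for _ in range(100):
            g0 = 0.5 * (lo + hi)
            P = np.maximum(0.0, 1.0 / g0 - 1.0 / g)
            if P.mean() > P_avg:
                lo = g0                           # spending too much: raise the cutoff
            else:
                hi = g0
        return P

    def channel_inversion(g, P_avg, g_min=0.1):
        """Truncated channel inversion: invert the channel whenever the gain is
        above g_min, so served symbols see a constant SNR (hence a fixed SER)."""
        served = g >= g_min
        scale = P_avg / np.mean(np.where(served, 1.0 / g, 0.0))
        return np.where(served, scale / g, 0.0), served

    P_wf = water_filling(g, P_avg)
    P_ci, served = channel_inversion(g, P_avg)
    print("water-filling     mean power:", round(P_wf.mean(), 3))
    print("channel inversion mean power:", round(P_ci.mean(), 3),
          "| fraction of symbols suspended:", round(1 - served.mean(), 3))

Because channel inversion keeps the received SNR constant whenever it transmits, an SER target can be met even at low average SNR, at the cost of suspending transmission in the deepest fades; water-filling instead spends power where the channel is good to maximise throughput.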
Item Alternative approach to Power Line Communication (PLC) channel modelling and multipath characterization. (2016) Awino, Steven Omondi.; Afullo, Thomas Joachim Odhiambo. Modelling and characterization of the Power Line Communication (PLC) channel is an active research area. The research mainly focuses on ways of fully exploiting the existing and massive power line network for communications. In order to exploit the PLC channel for effective communication solutions, physical properties of the PLC channel need to be studied, especially for high bandwidth signals. In this dissertation, extensive simulations and measurement campaigns for the channel transfer characteristics are carried out at the University of KwaZulu-Natal in selected offices, laboratories and workshops within the Department of Electrical, Electronic and Computer Engineering. Firstly, we employ the Parallel Resonant Circuit (PRC) approach to model the power line channel in Chapter 4, which is based on two-wire transmission line theory. The model is developed and simulated, and measurements are done for validation in the PLC laboratory for different network topologies in the frequency domain. From the results, it is found that the PRC model produces similar results to the Series Resonant Circuit (SRC) model, and hence the model is considered suitable for PLC channel modelling and characterization. Secondly, due to the time-variant nature of the power line network, this study also presents the multipath characteristics of the PLC channel in Chapter 5. We analyse the effects of the network characteristics on the received signal and derive the multipath characteristics of the PLC channel from measured channel transfer functions by evaluating the channel impulse responses (CIR). The results obtained are compared with results from other parts of the world employing a similar approach based on the Root Mean Square (RMS) delay spread, and are found to be comparable. The extracted CIR and multipath characteristics are expected to inspire further research in PLC and related topics.
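The multipath characterisation described above reduces measured transfer functions to channel impulse responses and then to an RMS delay spread. The short sketch below shows that last step for a synthetic impulse response; the tap values and 10 ns tap spacing are assumptions for illustration, not measured PLC data.

    import numpy as np

    dt = 10e-9                                  # assumed tap spacing of the CIR, seconds
    h = np.array([1.0, 0.0, 0.45, 0.0, 0.0, 0.2, 0.05])   # synthetic impulse response taps
    tau = np.arange(len(h)) * dt                # excess delay of each tap

    p = np.abs(h) ** 2                          # power delay profile
    p /= p.sum()                                # normalise to unit total power

    mean_delay = np.sum(tau * p)                                 # first moment of the PDP
    rms_spread = np.sqrt(np.sum((tau - mean_delay) ** 2 * p))    # second central moment

    print(f"mean excess delay: {mean_delay * 1e9:.1f} ns")
    print(f"RMS delay spread : {rms_spread * 1e9:.1f} ns")

The RMS delay spread computed this way is the quantity used in the abstract's comparison with results reported elsewhere.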
Item Application of CSDG MOSFET based active high pass filter in communication systems. (2019) Naidoo, Llewellyn.; Srivastava, Viranjay Mohan. This research work looks at the design of three active high pass filters. These filters have been designed for (i) a robotic system, (ii) a sensing device and (iii) a satellite communication system. In this research work a high pass filter has been designed with a Cylindrical Surrounding Double Gate (CSDG) MOSFET. A CSDG MOSFET is a continuation of DG MOSFET technology. It is formed by rotation of a DG MOSFET with respect to its reference point to form a hollow cylinder. It consists of two gates, a drain and a source. Electronic robotic systems have a transmitter and a receiver section. For the receiver, to provide the required selectivity of frequencies, a filter is used. There is a wide variety of these filters that can be used within the Radio Frequency (RF) range. Radio frequencies range from 3 kHz to 300 GHz. This particular filter is designed and simulated at a cutoff frequency of 100 GHz (0.1 THz). It makes use of both an operational amplifier and a transistor. This circuit was compared to a circuit that made use of two operational amplifiers, and the results are discussed. In addition, a CSDG MOSFET which makes use of a silicon dioxide dielectric is connected to the output of the transistor circuit to see what effect it has on the circuit. Using this filter model, a fine signal (command) can be given to the robotic system. The second filter is designed for remote sensing devices. These devices continuously send and receive signals, and these signals or radio waves are transmitted/received via a transmission line to/from a receiver/transmitter which has a filter that selectively sorts out the signals and only passes a desired range of signals. The CSDG MOSFET, being a capacitive model, allows for better filtering of low frequencies and passes a frequency range of 200 GHz (0.2 THz) efficiently. By placing the capacitors in parallel, the design requires smaller capacitance values to be used. In addition, the desired range of frequencies can be achieved from the inversely proportional relationship between frequency and capacitance. Finally, a filter has been designed for use in satellite communication systems. These systems consist of various subsystems that allow them to function efficiently. These subsystems require a number of electronic devices. In this research work, a CSDG MOSFET is added to the output of the transistor circuit and operates within the EHF band (0.3 THz). The CSDG MOSFET makes use of hafnium silicate (HfSiO4) as a dielectric material, since its wide band-gap and lower dielectric constant make it ideal for this design. The gain and other parameters of the three designed filters are analyzed. In conclusion, it has been demonstrated that the third-order active high-pass filters perform better with the CSDG MOSFET.

Item Application of real-world modulation schemes to advanced spatial modulation systems. (2022) Khalid, Ahmad Bin.; Quazi, Tahmid Al-Mumit.; Xu, Hongjun. Abstract available in PDF.

Item Artificial intelligence based design optimization for improving diversity in wireless links. (2021) Solwa, Shaheen.; Naidoo, Bashan.; Quazi, Tahmid Al-Mumit. Abstract available in PDF.

Item ATM performance in rural areas of South Africa. (2005) Mbatha, Sakhiseni J.; Afullo, Thomas Joachim Odhiambo. Rural areas in developing countries span vast areas with a variety of climatic zones, vegetation and terrain features, which are hostile to the installation and maintenance of telecommunication infrastructure. Provision of telecommunications services to these areas using traditional wired telephone systems based on existing wiring and a centralized network architecture becomes prohibitively expensive and is not viable in many cases, because there is no infrastructure and the areas are sparsely populated. Application of wireless systems seems to provide a cost-effective solution for such a scenario. However, deployment of ATM as a backbone wide area network (WAN) technology in rural areas has not been thoroughly investigated so far. The dissertation investigates the feasibility of deploying an ATM backbone network (WAN) in rural areas. ATM is a digital transmission service for wide area networks providing speeds from 2 Megabits per second up to 155 Megabits per second. Businesses and institutions that transmit extremely high volumes of virtually error-free information at high speeds over wide area networks, with high quality and reliable connections, currently use this service. To make efficient use of the available bandwidth, the network should support a high forward bit rate, i.e. it must convey more traffic from the base station to the user (downstream) than from the user to the base station (upstream). This work also investigates the features of rural areas that degrade the performance of networks and have a negative impact on the deployment of telecommunications network services. Identification of these features will lead to the suggestion of the most cost-effective telecommunication service.
For the purpose of evaluating the performance and feasibility of the network, modelling of the ATM network is accomplished using the Project Estimation (ProjEstim) simulation tool, a comprehensive tool for simulating large communication networks with detailed protocol modelling and performance analysis.

Item Blind iterative multiuser detection for error coded CDMA systems. (2005) Van Niekerk, Brett.; Mneney, Stanley Henry. Mobile communications have developed considerably since the radio communications that were in use 50 years ago. With the advent of GSM, mobile communications was brought to the average citizen. More recently, CDMA technology has provided the user with higher data rates and more reliable service, and it is apparent that it is the future of wireless communication. With the introduction of 3G technology in South Africa, it is becoming clear that it is the solution to the country's wireless communication requirements. The 3G and next-generation technologies could provide reliable communications to areas where it has proven difficult to operate and maintain communications effectively, such as rural locations. It is therefore important that these technologies continue to be researched in order to enhance their capabilities to provide a solution to the wireless needs of the local and global community. Whilst CDMA is proving to be a reliable communications technology, it is still susceptible to the effects of the near-far problem and multiple-access interference. A number of multiuser detectors have been proposed in the literature that attempt to mitigate the effects of multiple-access interference. A notable detector is the blind MOE detector, which requires only the desired user's spreading sequence, and it exhibits performance approximating that of other linear multiuser detectors. Another promising class of multiuser detectors operates using an iterative principle and has a joint multiuser detection and error-correcting coding scheme. The aim of this research is to develop a blind iterative detector with FEC coding as a potential solution to the need for a detector that can mitigate the effects of interfering users operating on the channel. The proposed detector has the benefits of both the blind and iterative schemes: it only requires knowledge of the desired user's signature, and it has integrated error-correcting abilities. The simulation results presented in this dissertation show that the proposed detector exhibits superior performance over the blind MOE detector for various channel conditions. An overview of spread-spectrum technologies is presented, and the operation of DS-CDMA is described in more detail. A history and overview of existing CDMA standards is also given. The need for multiuser detection is explained, and a description and comparison of various detection methods that have appeared in the literature is given. An introduction to error coding is given, with convolutional codes, the turbo coding concept and methods of iterative detection described in more detail and compared, as iterative decoding is fundamental to the operation of an iterative CDMA detector. An overview of iterative multiuser detection is given, and selected iterative methods are described in more detail. A blind iterative detector is proposed and analysed. Simulation results for the proposed detector, and a comparison to the blind MOE detector, are presented, showing performance characteristics and the effects of various channel parameters on performance.
From these results it can be seen that the proposed detector exhibits superior performance compared to that of the blind MOE detector for various channel conditions. The dissertation is concluded, and possible future directions of research are given.

Item Call admission control for interactive multimedia satellite networks. (2015) Imole, Olugbenga Emmanuel.; Walingo, Tom Mmbasu.; Takawira, Fambirai. Satellite communication has become an integral component of the global access communication network, due mainly to its ubiquitous coverage, large bandwidth and ability to support large numbers of users over fixed and mobile devices. However, the multiplicity of multimedia applications with diverse requirements in terms of quality of service (QoS) poses new challenges in managing the limited and expensive resources. Furthermore, the time-varying nature of the propagation channel due to atmospheric and environmental effects also poses great challenges to effective utilization of resources and the satisfaction of users' QoS requirements. Efficient radio resource management (RRM) techniques such as call admission control (CAC) and adaptive modulation and coding (AMC) are required in order to guarantee QoS satisfaction for established user connections and to realize maximum and efficient utilization of network resources. In this work, we propose two CAC policies for interactive satellite multimedia networks. The two policies are based on efficient adaptation of transmission parameters to the dynamic link characteristics. In the first policy, which we refer to as Gaussian Call Admission Control with Link Adaptation (GCAC-LA), we invoke the central limit theorem to statistically multiplex rate-based dynamic capacity (RBDC) connections and obtain an aggregate bandwidth and required capacity for the multiplex. AMC is employed for transmission over the time-varying wireless channel of the return link of an interactive satellite network. By associating users' channel states with particular transmission parameters, the amount of resources required to satisfy user connection requirements in each state is determined. Thus the admission control policy considers, in its decision, the channel states of all existing and new connections. The performance of the system is investigated by simulation, and the results show that AMC significantly improves the utilization and call blocking performance, by more than twice that of a system without link adaptation. In the second policy, a Game Theory based CAC policy with link adaptation (GTCAC-LA) is proposed. The admission of a new user connection under the GTCAC-LA policy is based on a non-cooperative game that is played between the network (existing user connections) and the new connection. A channel prediction scheme that predicts the rain attenuation on the link in successive intervals of time is also proposed. This determines the current resource allocation for every source at any point in time. The proposed game is played each time a new connection arrives, and the strategies adopted by players are based on a utility function, which is estimated from the required capacity and the actual resources allocated. The performance of the CAC policy is investigated for different prediction intervals, and the results show that the multiple-interval prediction scheme performs better than the single-interval scheme. Performance of the proposed CAC policies indicates their suitability for QoS provisioning for the traffic of multimedia connections in future 5G networks.
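To make the central-limit-theorem step of the GCAC-LA policy concrete, here is a minimal sketch of a Gaussian admission test: the aggregate demand of the multiplexed RBDC connections is treated as Gaussian, and a new connection is admitted only if the aggregate stays within the return-link capacity with high probability. The per-connection statistics, capacity figure and overload probability are assumptions for illustration, and the AMC-dependent resource mapping of the actual policy is not reproduced.

    import numpy as np
    from scipy.stats import norm

    def admit(existing, new_conn, capacity_kbps, eps=0.01):
        """Admit new_conn if, under a Gaussian (CLT) approximation of the
        multiplexed demands, the aggregate exceeds capacity_kbps with
        probability at most eps. Connections are (mean_kbps, var_kbps2) pairs."""
        means, variances = zip(*(existing + [new_conn]))
        aggregate_mean = sum(means)
        aggregate_std = np.sqrt(sum(variances))
        z = norm.ppf(1 - eps)                   # one-sided Gaussian quantile
        return aggregate_mean + z * aggregate_std <= capacity_kbps

    existing = [(64.0, 400.0)] * 40             # 40 ongoing connections (assumed statistics)
    print(admit(existing, (128.0, 900.0), capacity_kbps=4096.0))   # admitted under these numbers
    print(admit(existing, (512.0, 4096.0), capacity_kbps=3072.0))  # blocked: overload risk too high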
Item Capture effects in spread-ALOHA packet protocols. (2005) Mpako, Vuyolwethu Maxabiso Wessels.; Takawira, Fambirai. Research in the field of random access protocols for narrow-band systems started as early as the 1970s with the introduction of the ALOHA protocol. From the research done in slotted narrow-band systems, it is well known that contention results in all the packets involved in the contention being unsuccessful. However, it has been shown that in the presence of unequal power levels, one of the contending packets may be successful. This is a phenomenon called capture. Packet capture has been shown to improve the performance of slotted narrow-band systems. Recently, much work has been done in the analysis of spread-spectrum ALOHA type code-division multiple access (CDMA) protocols. The issue of designing power control techniques to improve the performance of CDMA systems by reducing multiple-access interference (MAI) has been a subject of much research. It has been shown that in the presence of power control schemes, the performance of spread-ALOHA CDMA systems is improved. However, it is also widely documented that the design of power control schemes capable of ideal compensation of radio propagation effects is not possible for various reasons, and hence the imperfections in power control. None of the research known to the author has looked at capture in spread-ALOHA systems, and to a greater extent, looked at expressions for the performance of spread-ALOHA systems in the presence of capture. In this thesis we introduce spread-ALOHA systems with capture as a manifestation of the imperfections in power control. We propose novel expressions for the computation of the performance of spread-ALOHA systems with capture.

Item CDMA performance for a rural telecommunication access. (2005) Rasello, Poloko Freddy.; Afullo, Thomas Joachim Odhiambo. Reviews of possible telecommunication services that can be deployed in rural areas are highlighted. These services range from narrowband to broadband. The aim of these services is to target rural KwaZulu-Natal areas that are without, or with limited, telecommunications infrastructure. Policies that govern telecommunications in South Africa are also reviewed, with emphasis on the Universal Service Obligation. The importance of telecommunications infrastructure in rural areas is also reviewed to the benefit of KwaZulu-Natal. FDMA, TDMA, CDMA, VSAT, MMDS and MVDS are compared for possible use in rural areas. A cost comparison of GSM and CDMA is conducted with emphasis on fade margin, path loss and penetration rate. CDMA system design and coverage areas are discussed for rural KwaZulu-Natal. Lastly, bit error rate graphs and power control algorithms are presented for the KwaZulu-Natal scenario.

Item Cell search in frequency division duplex WCDMA networks. (2006) Rezenom, Seare Haile.; Broadhurst, Anthony D. Wireless radio access technologies have been progressively evolving to meet the high data rate demands of consumers. The deployment and success of voice-based second generation networks were enabled through the use of the Global System for Mobile Communications (GSM) and the Interim Standard Code Division Multiple Access (IS-95 CDMA) networks.
The rise of the high data rate third generation communication systems is realised by two potential wireless radio access networks, the Wideband Code Division Multiple Access (WCDMA) and the CDMA2000. These networks are based on the use of various types of codes to initiate, sustain and terminate the communication links. Moreover, different codes are used to separate the transmitting base stations. This dissertation focuses on base station identification aspects of Frequency Division Duplex (FDD) WCDMA networks. Notwithstanding the ease of deployment of these networks, their asynchronous nature presents serious challenges to the designer of the receiver. One of the challenges is the identification of the base station identity by the receiver, a process called cell search. The receiver algorithms must therefore be robust to the hostile radio channel conditions, Doppler frequency shifts and the detrimental effects of carrier frequency offsets. The dissertation begins by discussing the structure and the generation of WCDMA base station data, along with an examination of the effects of the carrier frequency offset. The various cell searching algorithms proposed in the literature are then discussed, and a new algorithm that exploits the correlation length structure is proposed and simulation results are presented. Another design challenge presented by WCDMA networks is the estimation of the carrier frequency offset at the receiver. Carrier frequency offsets arise due to crystal oscillator inaccuracies at the receiver, and their effect is realised when the voltage controlled oscillator at the receiver is not oscillating at the same carrier frequency as that of the transmitter. This leads to a decrease in the receiver acquisition performance. The carrier frequency offset has to be estimated and corrected before the decoding process can commence. There are different approaches in the literature to estimate and correct these offsets. The final part of the dissertation investigates FFT based carrier frequency estimation techniques and presents a new method that reduces the estimation error.
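As background to the FFT-based carrier-frequency-offset estimation mentioned above, the sketch below shows the generic approach: wipe the known symbols off the received samples and read the offset from the peak of a zero-padded FFT. The chip-rate sampling, offset value and pilot sequence are invented for illustration, and the dissertation's improved method is not reproduced here.

    import numpy as np

    fs = 3.84e6                                 # WCDMA chip rate used as the sample rate (illustrative)
    N = 4096
    true_cfo = 8.4e3                            # hypothetical 8.4 kHz carrier frequency offset

    rng = np.random.default_rng(0)
    pilots = rng.choice([1.0, -1.0], size=N)    # known +/-1 symbols (assumed)
    n = np.arange(N)
    rx = pilots * np.exp(2j * np.pi * true_cfo * n / fs)
    rx += 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))   # receiver noise

    tone = rx * pilots                          # data wipe-off leaves a single complex tone
    spectrum = np.abs(np.fft.fft(tone, 8 * N))  # zero-padding refines the frequency grid
    freqs = np.fft.fftfreq(8 * N, d=1 / fs)
    cfo_hat = freqs[np.argmax(spectrum)]
    print(f"estimated offset: {cfo_hat / 1e3:.2f} kHz (true {true_cfo / 1e3:.2f} kHz)")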
Item Characterization and modelling of effects of clear air on multipath fading in terrestrial links. (2013) Asiyo, Mike Omondi.; Afullo, Thomas Joachim Odhiambo. The increased application of digital terrestrial microwave radio links in communication networks has renewed attention in techniques of estimating the probability of multipath fading distributions. Nevertheless, the unpredictable variation of the wireless transmission medium remains a challenge. It has been ascertained that the refraction of electromagnetic waves is due to the inhomogeneous spatial distribution of the refractive index, and causes adverse effects such as multipath and diffraction fading. Knowledge of the characteristics of the causes of these fading phenomena is essential for the accurate design of terrestrial line of sight (LOS) links of high performance and availability. Refractivity variation is random in space and time, cannot be described in a deterministic manner, and has to be considered as a random variable with probabilistic characteristics. In this dissertation, radiosonde sounding data is used in characterizing the atmospheric conditions and determining the geoclimatic factor K used in predicting the distribution of multipath fading for five locations in South Africa. The limitations of radiosonde measurements are lack of time resolution and poor spatial resolution. The latter has been reduced by spatial interpolation techniques in our study, specifically the Inverse Distance Weighting (IDW) method. This is used in determining the point refractivity gradient not exceeded for 1% of the time, from which the geoclimatic factor is estimated. Fade depth and outage probability due to multipath propagation are then predicted from the International Telecommunication Union Recommendations (ITU-R) techniques. The results are compared with values from Central Africa. The results obtained using the ITU-R method are also compared with the region-based models of Barnett-Vigants for the USA and Morita for Japan. Three spatial interpolation techniques (kriging, thin-plate spline and inverse distance weighting) are then used to interpolate the geoclimatic factor K in places where radiosonde data is not available. The estimated values have been used to develop contour maps of the geoclimatic factor K for South Africa. Statistical assessment of these methods is done by calculating the root mean square error (RMSE) and the mean absolute error (MAE) between a set of control points and the interpolated results. The best performing method is used to map the seasonal geoclimatic factor K for the entire study region. The estimated values of the geoclimatic factor will improve accuracy in predicting outage probability due to multipath propagation in LOS links in the region, which is a key contribution of this work.
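The Inverse Distance Weighting step used above to spread point estimates of the geoclimatic factor K across the map reduces to a simple weighted mean, sketched below. The station coordinates and K values are invented placeholders, not SAWS-derived data, and great-circle distances are replaced by planar ones for brevity.

    import numpy as np

    # Hypothetical (longitude, latitude, K) triples for a few radiosonde stations.
    stations = np.array([
        [31.0, -29.9, 1.0e-4],
        [28.2, -25.7, 2.3e-4],
        [18.6, -33.9, 0.7e-4],
        [26.2, -29.1, 1.6e-4],
    ])

    def idw(lon, lat, data, power=2.0):
        """Inverse Distance Weighting: a weighted mean with weights 1/d**power."""
        d = np.hypot(data[:, 0] - lon, data[:, 1] - lat)
        if np.any(d < 1e-12):                   # query point coincides with a station
            return data[np.argmin(d), 2]
        w = 1.0 / d ** power
        return np.sum(w * data[:, 2]) / np.sum(w)

    print(f"interpolated K near (29.5E, 28.5S): {idw(29.5, -28.5, stations):.2e}")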
Item Characterization and modelling of the channel and noise for broadband indoor powerline communication (PLC) networks. (2016) Mosalaosi, Modisa.; Afullo, Thomas Joachim Odhiambo. Power Line Communication (PLC) is an interesting approach to establishing last-mile broadband access, especially in rural areas. PLC provides an already existing medium for broadband internet connectivity as well as monitoring and control functions for both industrial and indoor usage. The PLC network is the most ubiquitous network in the world, reaching every home. However, it presents a channel that is inherently hostile in nature when used for communication purposes. This hostility is due to the many problematic characteristics of the PLC channel from a data communications perspective. They include multipath propagation due to multiple reflections resulting from impedance mismatches and cable joints, as well as the various types of noise inherent in the channel. Apart from wireless technologies, current high data rate services such as high speed internet are provided through optical fibre links, Ethernet, and VDSL (very-high-bit-rate digital subscriber line) technology. The deployment of a wired network is costly and demands physical effort. The transmission of high frequency signals over power lines, known as power line communications (PLC), plays an important role in contributing towards global goals for broadband services inside the home and office. In this thesis we aim to contribute to this ideal by presenting a powerline channel modelling approach which describes a powerline network as a lattice structure. In a lattice structure, a signal propagates from one end into a network of boundaries (branches) through numerous paths characterized by different reflection/transmission properties. Due to the theoretically infinite number of reflections likely to be experienced by a propagating wave, we determine the optimum number of paths required for a meaningful contribution towards the overall signal level at the receiver. The propagation parameters are obtained through measurements, and other model parameters are derived from the deterministic power system. It is observed that the notch positions in the transfer characteristics are associated with the branch lengths in the network. Short branches will result in fewer notches in a fixed bandwidth as compared to longer branches. Generally, the channel attenuation increases with network size in terms of the number of branches. The proposed model compares well with experimental data. This work presents another alternative approach to model the transfer characteristics of power lines for broadband power line communication. The model is developed by considering the power line to be a two-wire transmission line and using the theory of transverse electromagnetic (TEM) wave propagation. The characteristic impedance and attenuation constant of the power line are determined through measurements. These parameters are used in model simplification and determination of other model parameters for a typical indoor multi-tapped transmission line system. The transfer function of the PLC channel is determined by considering the branching sections as parallel resonant circuits (PRC) attached to the main line. The model is evaluated through comparison with measured transfer characteristics of known topologies and is in good agreement with measurements. Apart from the harsh topology of power line networks, the presence of electrical appliances further aggravates the channel conditions by injecting various types of noise into the system. This thesis also discusses the process of estimating PLC asynchronous impulsive noise volatility by studying the conditional variance of the noise time series residuals. In our approach, we use Generalized Autoregressive Conditional Heteroskedastic (GARCH) models on the basis that, in our observations, the noise time series residuals indicate heteroskedasticity. By performing an ordinary least squares (OLS) regression of the noise data, the empirical results show that the conditional variance process is highly persistent in the residuals. The variance of the error terms is not uniform; in fact, the error terms are larger at some portions of the data than at other time instances. Thus, PLC impulsive noise often exhibits volatility clustering, where the noise time series comprises periods of high volatility followed by periods of high volatility and periods of low volatility followed by periods of low volatility. The burstiness of PLC impulsive noise is therefore not spread randomly across the time period, but instead has a degree of autocorrelation. This provides evidence of a time-varying conditional second order moment of the noise time series. Based on these properties, the noise time series data is said to suffer from heteroskedasticity. GARCH models address the deficiencies of common regression models such as the Autoregressive Moving Average (ARMA) model, which models the conditional expectation of a process given the past but regards the past conditional variances as constant. In our approach, we predict the time-varying volatility by using past time-varying variances in the error terms of the noise data series. Subsequent variances are predicted as a weighted average of past squared residuals with declining weights that never completely diminish. The parameter estimates of the model indicate a high degree of persistence in the conditional volatility of impulsive noise, which is strong evidence of explosive volatility.
Parameter estimation of linear regression models usually employs least squares (LS) and maximum likelihood (ML) estimators. While maximum likelihood remains one of the best estimators within the classical statistics paradigm to date, it is highly reliant on the assumption about the joint probability distribution of the data for optimal results. In our work, we use the Generalized Method of Moments (GMM) to address the deficiencies of LS/ML in order to estimate the underlying data generating process (DGP). We use GMM as a statistical technique that incorporates observed noise data with the information in population moment conditions to determine estimates of the unknown parameters of the underlying model. Periodic impulsive noise (short-term) has been measured, deseasonalized and modelled using GMM. The numerical results show that the model captures the noise process accurately. Usually, the impulsive signals originating from connected loads in an electrical power network can be characterized as cyclostationary processes. A cyclostationary process is described as a non-stationary process whose statistics exhibit periodic time variation, and it can therefore be described by virtue of its periodic order. The focus of this chapter centres on the utilization of the cyclic spectral analysis technique for identification and analysis of the second-order periodicity (SOP) of time sequences like those generated by electrical loads connected in the vicinity of a power line communications receiver. Analysis of the cyclic spectrum generally incorporates determining the random features besides the periodicity of impulsive noise, through the determination of the spectral correlation density (SCD). Its effectiveness in identifying and analysing cyclostationary noise is substantiated in this work by processing data collected at indoor low voltage sites.
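For readers unfamiliar with GARCH fitting, the sketch below estimates a GARCH(1,1) conditional-variance model with the Python arch package; a simulated series exhibiting the volatility clustering described above stands in for the measured PLC noise residuals, and the chosen parameter values are arbitrary assumptions.

    import numpy as np
    from arch import arch_model                 # pip install arch

    rng = np.random.default_rng(0)

    # Simulate a GARCH(1,1) residual series (omega, alpha, beta assumed) so that
    # bursts of high variance follow high variance, as in the measured noise.
    n, omega, alpha, beta = 4000, 0.2, 0.35, 0.60
    eps = np.zeros(n)
    sigma2 = np.zeros(n)
    sigma2[0] = omega / (1 - alpha - beta)
    for t in range(1, n):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

    # Fit GARCH(1,1) to the series; the estimates should land near the generating
    # values, and alpha + beta close to 1 signals the high persistence noted above.
    fit = arch_model(eps, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
    print(fit.params)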
Item Clear-air analytical and empirical K-Factor determination and characterization for terrestrial microwave LOS link applications. (2013) Nyete, Abraham Mutunga.; Afullo, Thomas Joachim Odhiambo. The transmission medium, that is, the atmosphere, through which terrestrial and satellite signals traverse, is irregular. Thus, one requires proper knowledge of how variations in atmospheric refractive conditions will affect the optimal performance of terrestrial and satellite links. Under clear-air conditions, atmospheric changes will mainly involve variations in atmospheric pressure, relative humidity and temperature, which are the key to defining the way signals are refracted as they travel from the transmitter to the receiver. Accurate knowledge of these variations can be acquired through proper modelling, characterization and mapping of these three atmospheric quantities, in terms of the refractive index, the refractivity gradient or the effective earth radius factor (k-factor). In this dissertation, both parametric and non-parametric modelling, characterization, interpolation and mapping of the k-factor for South Africa are done. Median (k50%) and effective (k99.9%) k-factor values are the ones that determine antenna heights in line of sight (LOS) terrestrial microwave links. Thus, the accurate determination of the two k-factor values is critical for the proper design of LOS links by ensuring that adequate path clearance is achieved, hence steering clear of all obstacles along the radio path. This study is therefore critical for the proper design of LOS links in South Africa. One parametric method (curve fitting) and one non-parametric method (kernel density estimation) are used to develop three-year annual and seasonal models of the k-factor for seven locations in South Africa. The integral of square error (ISE) is used to optimize the model formulations obtained in both cases. The models are developed using k-factor statistics processed from radiosonde measurements obtained from the South African Weather Service (SAWS) for a three-year period (2007-2009). Since the data obtained at the seven locations is scattered, three different interpolation techniques are then explored to extend the three-year annual and seasonal discrete measured k-factor values for the seven locations studied to cover the rest of the country, and the results of the interpolation are then presented in the form of contour maps. The techniques used for the interpolation are kriging, inverse distance weighting (IDW) and radial basis functions (RBFs). The mean absolute error (MAE) and the root mean square error (RMSE) are the metrics used to compare the performance of the different interpolation techniques used. The method that produces the least error is deemed to be the best, and its interpolation results are the ones used for developing the contour maps of the k-factor.
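A minimal sketch of the non-parametric (kernel density estimation) modelling step is shown below, using a Gaussian kernel from SciPy; the k-factor samples are synthetic placeholders rather than radiosonde-derived values, and the ISE-based optimisation of the model formulation is not reproduced.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(2)
    k_samples = rng.normal(loc=1.33, scale=0.15, size=1000)   # stand-in k-factor samples

    kde = gaussian_kde(k_samples)               # Gaussian-kernel density estimate
    grid = np.linspace(0.8, 2.0, 500)
    pdf = kde(grid)                             # estimated density over a grid of k values

    k_median = np.quantile(k_samples, 0.50)     # k50%
    k_eff = np.quantile(k_samples, 0.001)       # value exceeded 99.9% of the time (k99.9%)
    print(f"peak of estimated density at k = {grid[np.argmax(pdf)]:.2f}")
    print(f"k50%   = {k_median:.3f}")
    print(f"k99.9% = {k_eff:.3f}   (effective k-factor used for path-clearance design)")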
Item Codec for multimedia services using wavelets and fractals. (2004) Brijmohan, Yarish.; Mneney, Stanley Henry. Increasing technological advancements in the fields of telecommunications, computers and television have prompted the need to exchange video, image and audio files between people. Transmission of such files finds numerous multimedia applications, such as internet multimedia, video conferencing, videophone, etc. However, the transmission and reception of these files are limited by the available bandwidth as well as the storage capacities of systems. Thus there is a need to develop compression systems, such that the required multimedia applications can operate within these limited capacities. This dissertation presents two well established coding approaches that are used in modern image and video compression systems. These are the wavelet and fractal methods. The wavelet based coder, which adopts the transform coding paradigm, performs the discrete wavelet transform on an image before any compression algorithms are implemented. The wavelet transform provides good energy compaction and decorrelating properties that make it suited to compression. Fractal compression systems, on the other hand, differ from the traditional transform coders. These algorithms are based on the theory of iterated function systems and take advantage of local self-similarities present in images. In this dissertation, we first review the theoretical foundations of both wavelet and fractal coders. Thereafter we evaluate different wavelet and fractal based compression algorithms, and assess the strengths and weaknesses in each case. Due to the shortcomings of fractal based compression schemes, such as the tiling effect appearing in reconstructed images, a wavelet based analysis of fractal image compression is presented. This is the link that produces fractal coding in the wavelet domain, and presents a hybrid coding scheme called fractal-wavelet coding. We show that by using a smooth wavelet basis in computing the wavelet transform, the tiling effect of fractal systems can be removed. The few wavelet-fractal coders that have been proposed in the literature are discussed, showing advantages over the traditional fractal coders. This dissertation then presents a new low bit-rate video compression system that is based on fractal coding in the wavelet domain. This coder makes use of the advantages of both the wavelet and fractal coders discussed in the review. The self-similarity property of fractal coders exploits the high spatial and temporal correlation between video frames. Thus the fractal coding step gives an approximate representation of the coded frame, while the wavelet technique adds detail to the frame. In the proposed scheme, each frame is decomposed using the pyramidal multi-resolution wavelet transform. Thereafter a motion detection operation is used in which the subtrees are partitioned into motion and non-motion subtrees. The non-motion subtrees are easily coded by a binary decision, whereas the moving ones are coded using a combination of the wavelet SPIHT and fractal variable subtree size coding schemes. All intra-frame compression is performed using the SPIHT compression algorithm and inter-frame compression using the fractal-wavelet method described above. The proposed coder is then compared to current low bit-rate video coding standards such as the H.263+ and MPEG-4 coders through analysis and simulations. Results show that the proposed coder is competitive with the current standards, with a performance improvement shown for video sequences that do not possess large global motion. Finally, a real-time implementation of the proposed algorithm is performed on a digital signal processor. This illustrates the suitability of the proposed coder for numerous multimedia applications.
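The pyramidal multi-resolution decomposition that both the SPIHT and fractal coding stages operate on can be reproduced with PyWavelets, as sketched below; the random stand-in frame and the choice of the (reasonably smooth) Daubechies-4 basis are illustrative assumptions rather than the dissertation's exact settings.

    import numpy as np
    import pywt                                  # pip install PyWavelets

    frame = np.random.default_rng(3).random((256, 256))   # stand-in for one video frame

    # Three-level pyramidal decomposition; a smooth basis is what suppresses the
    # blocky tiling artefacts when fractal prediction is applied per subtree.
    coeffs = pywt.wavedec2(frame, wavelet="db4", level=3)

    print("approximation subband:", coeffs[0].shape)
    for depth, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        print(f"detail subbands, scale {depth} (coarsest first):", cH.shape, cV.shape, cD.shape)

    # Perfect-reconstruction check (trim any boundary padding to the frame size).
    rec = pywt.waverec2(coeffs, wavelet="db4")[:256, :256]
    print("max reconstruction error:", float(np.max(np.abs(rec - frame))))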
Item Combining local descriptors and classification methods for human emotion recognition. (2023) Badi Mame, Antoine.; Tapamo, Jules-Raymond. Human Emotion Recognition occupies a very important place in artificial intelligence and has several applications, such as emotionally intelligent robots, driver fatigue monitoring, mood prediction, and many others. Facial Expression Recognition (FER) systems can recognize human emotions by extracting face image features and classifying them as one of several prototypic emotions. Local descriptors are good at encoding micro-patterns and capturing their distribution in a sub-region of an image. Moreover, dividing the face into sub-regions introduces information about micro-pattern locations, essential for developing robust facial expression features. Hence, local descriptors' efficiencies depend heavily on parameters such as the sub-region size and histogram length. However, the extraction parameters are seldom optimized in existing approaches. This dissertation reviews several local descriptors and classifiers, and experiments are conducted to improve the robustness and accuracy of existing FER methods. A study of the Histogram of Oriented Gradients (HOG) descriptor inspires this research to propose a new face registration algorithm. The approach uses contrast-limited histogram equalization to enhance the image, followed by binary thresholding and blob detection operations to rotate the face upright. Additionally, this research proposes a new method for optimized FER. The main idea behind the approach is to optimize the calculation of feature vectors by varying the extraction parameter values, producing several feature sets. The best extraction parameter values are selected by evaluating the classification performance of each feature set. The proposed approach is also implemented using different combinations of local descriptors and classification methods under the same experimental conditions. The results reveal that the proposed methods produced better performance than what was reported in previous studies, showing an improvement of up to 2% compared with the performance achieved in previous works. The results showed that HOG was the most effective local descriptor, while Support Vector Machines (SVM) and Multi-Layer Perceptrons (MLP) were the best classifiers. Hence, the best combinations were HOG+SVM and HOG+MLP.
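As an illustration of one descriptor-classifier combination evaluated above (HOG features fed to a linear-kernel SVM), the sketch below uses scikit-image and scikit-learn; the random images and labels are placeholders for a real expression dataset, and the cell/block sizes stand for the extraction parameters the dissertation tunes.

    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    images = rng.random((60, 64, 64))           # stand-ins for registered 64x64 face crops
    labels = rng.integers(0, 7, size=60)        # 7 prototypic emotion classes (assumed)

    def extract(img, cell=(8, 8), block=(2, 2)):
        # cell and block sizes are the kind of extraction parameters being optimised
        return hog(img, orientations=9, pixels_per_cell=cell,
                   cells_per_block=block, block_norm="L2-Hys")

    X = np.array([extract(im) for im in images])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

    clf = SVC(kernel="linear").fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))

On real data, the same loop would be repeated over a grid of cell and block sizes, keeping the parameter set whose classifier scores best, which is the optimisation idea described in the abstract.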
Item A comparative study of various speech recognition techniques. (1990) Pitchers, Richard Charles.; Broadhurst, Anthony D. Speech recognition systems fall into four categories, depending on whether they are speaker-dependent or independent of the speaker population, and on whether they are capable of recognizing continuous speech or only isolated words. A study was made of most methods used in speech recognition to date. Four speech recognition techniques for speaker-dependent isolated word applications were then implemented in software on an IBM PC with a minimum of interfacing hardware. These techniques made use of short-time energy and zero-crossing rates, autocorrelation coefficients, linear predictor coefficients and cepstral coefficients. A comparison of their relative performance was made using four test vocabularies that were 10, 30, 60 and 120 words in size. These consisted of 10 digits, 30 and 60 computer terms, and lastly 120 airline reservation terms. The performance of any speech recognition system is affected by a number of parameters. The effects of frame length, pre-emphasis, window functions, dynamic time warping and the filter order were also studied experimentally.

Item Concatenated space-time codes in Rayleigh fading channels. (2002) Byers, Geoffrey James.; Takawira, Fambirai. The rapid growth of wireless subscribers and services, as well as the increased use of internet services, suggests that wireless internet access will increase rapidly over the next few years. This will require the provision of high data rate wireless communication services. However, the problem of a limited and expensive radio spectrum, coupled with the problem of the wireless fading channel, makes it difficult to provide these services. For these reasons, the research area of high data rate, bandwidth efficient and reliable wireless communications is currently receiving much attention. Concatenated codes are a class of forward error correction codes which consist of two or more constituent codes. These codes achieve reliable communications very close to the Shannon limit, provided that sufficient diversity, such as temporal or spatial diversity, is available. Space-time trellis codes (STTCs) merge channel coding and transmit antenna diversity to improve system capacity and performance. The main focus of this dissertation is on STTCs and concatenated STTCs in quasi-static and rapid Rayleigh fading channels. Analytical bounds are useful in determining the behaviour of a code at high SNRs, where it becomes difficult to generate simulation results. A novel method is proposed to analyse the performance of STTCs, and the accuracy of this analysis is compared to simulation results, where it is shown to closely approximate system performance. The field of concatenated STTCs has already received much attention and has shown improved performance over conventional STTCs. It was recently shown that double concatenated convolutional codes in AWGN channels outperform simple concatenated codes. Motivated by this, two double concatenated STTC structures are proposed and their performance is compared to that of a simple concatenated STTC. It is shown that double concatenated STTCs outperform simple concatenated STTCs in rapid Rayleigh fading channels. An analytical model for this system in rapid fading is developed which combines the proposed analytical method for STTCs with existing analytical techniques for concatenated convolutional codes. The final part of this dissertation considers a direct-sequence/slow-frequency-hopped (DS/SFH) code division multiple access (CDMA) system with turbo coding and multiple transmit antennas. The system model is modified to include a more realistic, time-correlated Rayleigh fading channel, and the use of side information is incorporated to improve the performance of the turbo decoder. Simulation results are presented for this system, and it is shown that the use of transmit antenna diversity and side information can improve system performance.

Item Contributions to optical coherence tomography fingerprint images. (2021) Mgaga, Sboniso Sifiso.; Tapamo, Jules-Raymond.; Khanyile, Nontokozo Portia. Abstract available in PDF.

Item Correlation of rain dropsize distribution with rain rate derived from disdrometers and rain gauge networks in Southern Africa. (2011) Alonge, Akintunde Ayodeji.; Afullo, Thomas Joachim Odhiambo. Natural phenomena such as rainfall are responsible for communication service disruption, leading to severe outages and bandwidth inefficiency in both terrestrial and satellite systems, especially above 10 GHz. Rainfall attenuation is a source of concern to radio engineers in link budgeting and is primarily related to the rainfall mechanism of absorption and scattering of millimetric signal energy. Therefore, the study of rainfall microstructure can serve as a veritable means of optimizing network parameters for the design and deployment of millimetric and microwave links. Rainfall rate and rainfall drop size are two microstructural parameters essential for the appropriate estimation of local rainfall attenuation. There are several existing analytical and empirical models for the prediction of rainfall attenuation, and their performance largely depends on the regional and climatic characteristics of interest. In this study, the thrust is to establish the most appropriate models for rainfall rate and rainfall drop size in South African areas. Statistical analysis is derived from disdrometer measurements sampled at one-minute intervals over a period of two years in Durban, a subtropical site in South Africa. The measurements are further categorized according to temporal rainfall regimes: drizzle, widespread, shower and thunderstorm. The analysis is modified to develop statistical and empirical models for rainfall rate using gamma, lognormal, Moupfouma and other ITU-R compliant models for the control site. Additionally, rain drop-size distribution (DSD) parameters are developed from the modified gamma, lognormal, negative exponential and Weibull models. The spherical droplet assumption is used to estimate the scattering parameters for frequencies between 2 GHz and 1000 GHz using the disdrometer diameter ranges. The resulting proposed DSD models are used, alongside the scattering parameters, for the prediction and estimation of rainfall attenuation.
Finally, the study employs correlation and regression techniques to extend the results to other locations in South Africa. The cumulative density function analysis of rainfall parameters is applied for the selected locations to obtain their equivalent models for rainfall rate and rainfall DSD required for the estimation of rainfall attenuation.
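To connect the drop-size distribution to the rainfall rate it is correlated with, the sketch below recomputes a one-minute rain rate from a hypothetical set of drop concentrations per diameter class; the class centres, counts and the Atlas-Ulbrich terminal-velocity power law are simplified assumptions rather than the Durban disdrometer data.

    import numpy as np

    # Hypothetical one-minute drop concentrations N(D) per diameter class (m^-3 mm^-1).
    D = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])          # class centres, mm
    dD = np.full_like(D, 0.5)                                   # class widths, mm
    N = np.array([1200.0, 600.0, 250.0, 90.0, 30.0, 8.0, 2.0])  # assumed concentrations

    v = 3.78 * D ** 0.67        # Atlas-Ulbrich terminal-velocity approximation, m/s

    # Rain rate (mm/h) from the DSD:  R = 6*pi*1e-4 * sum(D^3 * v(D) * N(D) * dD)
    R = 6 * np.pi * 1e-4 * np.sum(D ** 3 * v * N * dD)
    print(f"rain rate implied by this DSD: {R:.1f} mm/h")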