Doctoral Degrees (Electronic Engineering)
Permanent URI for this collection: https://hdl.handle.net/10413/6867
Browsing Doctoral Degrees (Electronic Engineering) by Title
Now showing 1 - 20 of 70
Item 3D modelling, segmentation, quantification and visualisation of cardiovascular magnetic resonance images. (2014) Brijmohan, Yarish.; Mneney, Stanley Henry.; Rae, William Ian Duncombe.
Progress in magnetic resonance imaging (MRI) technology has provided medical experts with a tool to visualise the heart during the cardiac cycle. The heart contains four chambers, namely the left and right ventricles and the left and right atria. Each chamber plays an important role in the circulation of blood throughout the body, and imbalances in the circulatory system can lead to several cardiovascular diseases. In routine clinical practice, MRIs are produced in large quantities daily to assist in diagnosis. In practice, the interpretation of these images is generally performed visually by medical experts, because few automatic tools and software packages exist for extracting quantitative measures. Segmentation refers to the process of detecting regions within an image and associating these regions with known objects. For cardiac MRI, segmentation of the heart distinguishes between the different ventricles and atria. Once the left and right ventricles have been segmented, clinicians can quantify the thickness of the ventricle walls, the movement of each ventricle, blood volumes, blood flow-rates, and so on. Several cardiac MRI segmentation algorithms have been developed over the past 20 years. However, most of this attention has been devoted to the left ventricle and its function, owing to its approximately cylindrical shape. Analysis of the right ventricle also plays an important role in heart disease assessment and, coupled with left-ventricle analysis, produces a more intuitive and robust diagnostic tool. Unfortunately, the crescent-like shape of the right ventricle makes its mathematical modelling difficult.
Another issue associated with segmenting cardiac MRI is that image quality can be severely degraded by artefactual signals and noise emanating from equipment errors, patient errors and image-processing errors. These artefacts add difficulty for segmentation algorithms, and many currently available methods cannot account for all of the abovementioned categories. A further shortcoming of current segmentation algorithms is that there is no readily available standard methodology for comparing their accuracy, as each author has provided results on different cardiac MRI datasets, and segmentation done by human readers (expert segmentation) is subjective. This thesis addresses the issue of accuracy comparison by providing a framework of mathematical, statistical and clinical accuracy measures. The use of publicly available cardiac MRI datasets on which expert segmentation has been performed is analysed. The framework allows the author of a new segmentation algorithm to choose a subset of the measures to test their algorithm. A clinical measure is also proposed that does not require expert segmentation of the cardiac MRI dataset: the stroke volumes of the left and right ventricles are compared to each other. This thesis then proposes a new three-dimensional cardiac MRI segmentation algorithm that is able to segment both the left and right ventricles. The approach provides a robust technique that improves on the use of the difference-of-Gaussians (DoG) image filter. The main focus was to find and extract the region of interest that contains the ventricles and remove all unwanted information, so that the DoG parameters are created from intensity profiles of this localised region. Two methods are proposed to achieve this localisation, depending on the type of cardiac MRI dataset present. The first method is used if the dataset contains images from a single MRI view.
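The difference-of-Gaussians idea that this segmentation approach builds on can be illustrated with a minimal sketch. The sigmas and threshold below are purely illustrative; the thesis derives its DoG parameters from intensity profiles of the localised region, which this toy example does not attempt.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma_narrow, sigma_wide):
    """Band-pass an image by subtracting two Gaussian blurs.

    Structures with a scale between the two sigmas (e.g. a blood pool)
    are enhanced, while smooth background is suppressed.
    """
    return gaussian_filter(image, sigma_narrow) - gaussian_filter(image, sigma_wide)

# Toy example: a bright disc (stand-in for a ventricle) on a noisy background.
yy, xx = np.mgrid[0:64, 0:64]
image = ((xx - 32) ** 2 + (yy - 32) ** 2 < 100).astype(float)
image += 0.1 * np.random.default_rng(0).standard_normal(image.shape)

response = difference_of_gaussians(image, sigma_narrow=1.0, sigma_wide=8.0)
# Crude fixed threshold, standing in for parameters derived from intensity profiles.
mask = response > 0.2
```

Thresholding the DoG response yields a binary mask of disc-scale structures, which is the raw material a segmentation pipeline would then refine.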
Local and global motion maps are created per MRI slice using pixel intensities from images at all time points through the cardiac cycle. The segmentation results show a slight drop in evaluation metrics relative to state-of-the-art algorithms for the left ventricle and a significant improvement over them for the right ventricle on the publicly available cardiac MRI datasets. The algorithm is also robust enough to withstand the influence of image noise and simulated patient movement. The second approach to finding the region of interest is used if MRIs from three views are present in the dataset. This novel method projects ventricle segmentations into three-dimensional space from two cardiac MRI views to provide automatic ventricle localisation in the third view. It uses an iterative approach with convergence criteria to provide final ventricle segmentations in all three MRI views. The results show an increase in segmentation accuracy per iteration and a small stroke-volume error on the final segmentation. Finally, this thesis proposes a triangular surface-mesh reconstruction algorithm to visualise both the left and right ventricles. The ventricle segmentations are extracted from each MRI slice and combined to form a three-dimensional point set; using segmentations from the three orthogonal MRI views further improves the visualisation. From the point set, the surface mesh is constructed using Delaunay triangulation, convex hulls and alpha hulls. The volumes of the ventricles are calculated by performing a high-resolution voxelisation of the ventricle mesh, after which several quantification measures are computed.
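As a rough illustration of the mesh-and-volume step, SciPy's `ConvexHull` builds a triangular surface mesh over a three-dimensional point set and reports the enclosed volume. The thesis additionally uses Delaunay triangulation, alpha hulls and a voxelisation step; the point set below is a hypothetical cube, not ventricle data.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical stand-in for stacked per-slice ventricle contour points:
# the 8 corners of a 2 x 2 x 2 cube plus random interior samples.
rng = np.random.default_rng(1)
corners = np.array([[x, y, z] for x in (0, 2) for y in (0, 2) for z in (0, 2)], float)
interior = rng.uniform(0, 2, size=(200, 3))
points = np.vstack([corners, interior])

hull = ConvexHull(points)    # hull.simplices is a triangular surface mesh
volume = hull.volume         # enclosed volume (8.0 for this cube)
```

A convex hull cannot capture the concavities of a real ventricle, which is why alpha hulls (which admit concave surfaces) are part of the reconstruction described above.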
The volume methodology is compared to the commonly used Simpson's method, and the results illustrate that the proposed method is superior.

Item A semi-empirical formulation for determination of rain attenuation on terrestrial radio links. (2010) Odedina, Modupe Olubunmi.; Afullo, Thomas Joachim Odhiambo.
Advances in today's fast-growing communication systems have resulted in congestion in the lower frequency bands and the need for higher-capacity broadband services. This has made it inevitable for service providers to migrate to higher frequency bands so as to accommodate the ever-increasing demands on radio communication systems. However, the reliability of such systems at these frequency bands tends to be severely degraded by natural atmospheric phenomena, of which rain is the dominant factor. This is not to say that other factors are unimportant; however, if attenuation by rain is so severe that a radio link is unavailable, then other factors become secondary. It is therefore paramount to establish a model capable of predicting the behaviour of these systems in the presence of rain. This study employs a semi-empirical approach to the formulation of rain attenuation models, using knowledge of rain rate, raindrop-size distribution, and signal-level measurements recorded at 19.5 GHz on a horizontally polarized terrestrial radio link. The semi-empirical approach was developed by considering the scattering effect of an electromagnetic wave propagating through a medium containing raindrops. The complex forward-scattering amplitudes are determined for all raindrop sizes at different frequencies, utilizing Mie scattering theory on spherical dielectric raindrops. From these scattering amplitudes, the extinction cross-sections for the spherical raindrops are calculated. Applying power-law regression to the real part of the calculated extinction cross-section, power-law coefficients are determined at different frequencies.
The power-law model generated from the extinction cross-section is integrated over different raindrop-size distribution models to formulate theoretical rain attenuation models. The developed models are used with R0.01 rain-rate statistics (the rain rate exceeded for 0.01% of the time), determined for four locations in different rain climatic zones in South Africa, to calculate the specific rain attenuation. Experimental rain attenuation measurements were recorded at 19.5 GHz on a horizontally polarized 6.73 km terrestrial line-of-sight link in Durban, South Africa. These measurements are compared with the results obtained from the developed attenuation models under the same propagation parameters, to establish the most appropriate models for describing radio-link performance in the presence of rain. To validate the results, they are also compared with the ITU-R rain attenuation model. This study further considers the characteristics and variations associated with rain attenuation for terrestrial communication systems. This is achieved by applying the ITU-R power-law rain attenuation model to five years of rain-rate data from the four climatic rain zones in South Africa to estimate the cumulative distributions of rain attenuation. From raindrop-size and 1-minute rain-rate measurements recorded in Durban with a disdrometer over six months, rain events are classified into drizzle, widespread, shower and thunderstorm rain types, and the mean rain-rate statistics are determined for each class. Drop-size distributions for all the rain types are estimated. This research has presented a statistical analysis of rain fade data and proposed an empirical rain attenuation model for South African localities. This work has also developed theoretical rain attenuation prediction models based on the assumption that raindrops are spherical.
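Attenuation models of this family take the familiar power-law form gamma = k * R**alpha (dB/km), scaled by path length for a terrestrial link. A minimal sketch, with illustrative coefficients rather than the thesis-derived ones, and ignoring the ITU-R path-reduction factor for brevity:

```python
def specific_attenuation(rain_rate_mm_per_h, k, alpha):
    """Power-law specific attenuation gamma = k * R**alpha, in dB/km."""
    return k * rain_rate_mm_per_h ** alpha

def path_attenuation(rain_rate_mm_per_h, k, alpha, path_km):
    """Total rain attenuation over a terrestrial link of length path_km,
    ignoring the ITU-R effective-path-length reduction for brevity."""
    return specific_attenuation(rain_rate_mm_per_h, k, alpha) * path_km

# Illustrative (not thesis-derived) coefficients near 19.5 GHz, horizontal
# polarization; 60 mm/h stands in for an R0.01 rain rate, 6.73 km for the
# Durban link length mentioned above.
k, alpha = 0.07, 1.1
A = path_attenuation(60.0, k, alpha, path_km=6.73)
```

In practice k and alpha depend on frequency, polarization and the assumed drop-size distribution, which is exactly the dependence the semi-empirical derivation above works out from Mie scattering.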
The results predicted from these theoretical attenuation models show that it is not the raindrop shape that determines the attenuation due to rain, but the raindrop-size distribution and the rain-rate content of the drops. This thesis also provides a good interpretation of cumulative rain attenuation distributions on a seasonal and monthly basis. From these distributions, appropriate fade margins are derived for various percentages of link availability in South Africa.

Item An adaptive protocol for use over meteor scatter channels. (1987) Spann, Michael Dwight.; Broadhurst, Anthony D.
Modern technology has revived interest in the once popular area of meteor scatter communications. Meteor scatter systems offer reliable communications in the 500 to 2000 km range all day, every day. Recent advances in microprocessor technology have made meteor scatter communications a viable and cost-effective method of providing modest-data-rate communications. A return to fundamentals has revealed characteristics of meteor scatter propagation that can be used to optimize the protocols for a meteor scatter link. The duration of an underdense trail is bounded when its initial amplitude is known. The upper bound of the duration is determined by maximizing the classical underdense model; the lower bound is determined by considering the volume of sky utilized. The duration distribution between these bounds is computed and compared to measured values. The duration distribution is then used to specify a fixed-data-rate, frame-adaptive protocol which utilizes underdense trails more efficiently than a non-adaptive protocol in the half-duplex environment.
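The duration bound for an underdense trail can be sketched from the classical model, in which the received amplitude decays exponentially after the initial burst. Assuming A(t) = A0 * exp(-t / tau), the usable duration above a detection threshold follows directly; tau and the threshold below are illustrative parameters, not values from the thesis.

```python
import math

def usable_duration(initial_amplitude, threshold, tau_s):
    """Time an underdense meteor trail stays above the detection threshold,
    assuming the classical exponential decay A(t) = A0 * exp(-t / tau).
    Solving A0 * exp(-T / tau) = threshold gives T = tau * ln(A0 / threshold).
    """
    if initial_amplitude <= threshold:
        return 0.0
    return tau_s * math.log(initial_amplitude / threshold)
```

Because the duration grows only logarithmically with the initial amplitude, knowing A0 at the start of a burst bounds how long the trail can be exploited, which is what lets a frame-adaptive protocol plan its transmissions.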
The performance of these protocols is verified by modeling.

Item Alternative techniques for the improvement of energy efficiency in cognitive radio networks. (2016) Orumwense, Efe Francis.; Srivastava, Viranjay Mohan.; Afullo, Thomas Joachim Odhiambo.
Abstract available in PDF file.

Item Analysis and design of smart antenna arrays (SAAs) for improved directivity at GHz range for wireless communication systems. (2018) Oluwole, Ayodele Sunday.; Srivastava, Viranjay Mohan.
Abstract available in PDF file.

Item Analysis of the EDF family of schedulers. (2009) Scriba, Stefan Martin.; Takawira, Fambirai.
Modern telecommunications companies are moving away from conventional circuit-switched architectures to more versatile packet-switched infrastructures. Traditional First-In-First-Out (FIFO) queues currently multiplexing IP traffic are not able to meet the strict Quality-of-Service (QoS) requirements of delay-sensitive real-time traffic. Two main solution families exist that separate heterogeneous traffic into appropriate classes. The first is Generalized Processor Sharing (GPS), which divides the available bandwidth among the contending classes proportionally to the throughput guarantee negotiated with each class. GPS and its myriad packetised variants are relatively easy to analyse, as the service rate of each class is directly related to its throughput guarantee. As GPS splits arriving traffic into separate queues, it is useful for best-effort traffic, supplying each class with either a maximum or a minimum amount of bandwidth. The second solution is the Earliest Deadline First (EDF) scheduler, also known as Earliest Due Date (EDD). Each traffic class has a delay deadline by which its packets must be served in order to meet their heterogeneous QoS requirements. EDF serves the packet that is closest to its deadline, and is therefore primarily useful for delay-sensitive real-time traffic.
Although this is a simple algorithm, it turns out to be surprisingly difficult to analyse. Several papers have attempted to analyse EDF; most found either discrete bounds, which lie far away from the mean, or stochastic bounds, which tend to capture the delay behaviour of the traffic more accurately. After the introductory first chapter, this thesis simulates a realistic cellular environment in which packets of various classes of service are transmitted across an HSDPA air interface. The aim is to understand the behaviour of EDF and its channel-aware opportunistic EDF variant compared to other scheduling families commonly used in HSDPA environments. In particular, Round Robin is simulated as the most simplistic scheduler; Max C/I chooses packets based solely on the best channel conditions; and PF-T tries to maximise the overall transmission rate that packets experience, but divides this metric by the throughput that each class has already achieved, introducing a form of long-term fairness that prevents the starvation of individual classes. The third chapter contains the main analysis, which uses large-deviation principles and effective-bandwidth theory to approximate the deadline-violation probability and the delay density function of EDF in a wired network. A definition of the fairness of EDF is proposed, and the analysis is extended to approximate the stochastic fairness distribution. In the fourth chapter an opportunistic EDF scheduler is proposed for mobile legs of a network, which takes advantage of temporary improvements in the channel conditions. An analytical model is developed that predicts the delay density function of the opportunistic EDF scheduler.
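The EDF discipline itself is easy to state in code, even though its analysis is hard: keep a priority queue keyed on absolute deadlines and always serve the packet whose deadline is nearest. A minimal sketch (class names and deadline values are invented for illustration; a deadline would normally be the packet's arrival time plus its class's delay bound):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Packet:
    deadline: float                      # absolute deadline; heap ordering key
    name: str = field(compare=False)     # excluded from ordering

class EDFScheduler:
    """Serve the queued packet whose deadline is nearest (EDF/EDD)."""
    def __init__(self):
        self._heap = []

    def enqueue(self, packet):
        heapq.heappush(self._heap, packet)

    def dequeue(self):
        return heapq.heappop(self._heap)

sched = EDFScheduler()
sched.enqueue(Packet(deadline=30.0, name="best-effort"))
sched.enqueue(Packet(deadline=5.0, name="voice"))
sched.enqueue(Packet(deadline=15.0, name="video"))
order = [sched.dequeue().name for _ in range(3)]
```

The heap gives O(log n) enqueue and dequeue; the analytical difficulty discussed above comes not from the mechanism but from characterising the resulting delay distributions under stochastic arrivals.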
The channel propagation gain is assumed to be log-normally distributed, which requires graphical curve fitting, as no closed-form solution exists.

Item The analysis, simulation and testing of an experimental travelling-wave tube. (1994) Reynolds, Christopher Garth.; Nattrass, Henry Lee.
As a design and analysis aid for the development of an experimental TWT, a computer program is written which allows the small-signal gain to be computed for various operating conditions, such as various conditions of tube bias (beam voltage and current) and frequency. In order to arrive at a value for the gain, a number of parameters must first be defined or calculated. Using the method (Approach II) of Jain and Basu [17], which is applicable to a helix with a free-space gap between it and circular dielectric support rods surrounded by a metal shell, the dielectric loading factor (DLF) for the structure is found and the dispersion relation then solved to obtain the radial propagation constant γ and the axial propagation constant β. The method is tested against measured data for a helix and found to be acceptably accurate. Helix losses are calculated for the low-loss input and output sections of the helix, using the procedures developed by Gilmour et al. [14,18], from which values are found for the helix loss parameter d. Another, obviously much larger, value of d is also found for the lossy attenuator section of the helix. Here measured data for the attenuator is used as the basis for a polynomial which models the attenuator loss as a function of frequency. The Pierce gain parameter C is found using the well-known equations of Pierce [21,22,26], and then the space-charge parameter Q. Knowledge of the space-charge reduction factor F is required to find Q, and a simple non-iterative method is presented for its calculation, with some results. From the other parameters already calculated, the velocity parameter b is then found.
Since sufficient information is now available, the electronic equations are solved. These equations are in a modified form that accounts for the effects of space-charge better than the well-known standard forms; results are compared and slight differences found to exist in the computed gain. Once the x's and y's (respectively the real and imaginary parts of the complex propagation constants for the slow and fast space-charge waves) are known, the launching loss can be calculated. Launching losses are found for all three space-charge waves, not just for the growing wave. The gain of the TWT is not found from the asymptotic gain equation but from a model which includes the effects of internal feedback due to reflections at the ports and attenuator. Values of the reflection coefficients are modelled on the results of time-domain measurements (attenuator) and found by calculation (ports). This model permits the unstable behaviour of the tube to be predicted for various conditions of beam current and voltage, and anticipates the frequencies at which instability would be likely. Results from simulations are compared with experimental observations. The need to pulse the experimental tube under controlled conditions led to the development of a high-voltage solid-state pulse modulator providing regulated output pulses of up to 5000 V and 200 mA directly, without the use of transformers. The pulse modulator design embodies two unusual features: a) its operation is bipolar, delivering positive or negative output pulses depending only on the polarity of the rectifier input, and b) multiple regulating loops and stacked pass elements are used to achieve high-voltage operation.
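For contrast with the feedback model used here, the textbook asymptotic Pierce gain (which this work deliberately avoids) is G ≈ -9.54 + 47.3·C·N dB, where C³ = I₀K/(4V₀) for beam current I₀, beam voltage V₀ and interaction impedance K, and N is the circuit length in slow-wave wavelengths. A sketch with purely illustrative numbers, not the experimental tube's parameters:

```python
def pierce_gain_db(beam_current_a, beam_voltage_v, pierce_impedance_ohm, n_wavelengths):
    """Classical Pierce small-signal asymptotic gain, ignoring loss,
    space-charge and feedback:  G = -9.54 + 47.3 * C * N  dB,
    with the gain parameter C defined by C**3 = I0 * K / (4 * V0).
    """
    C = (beam_current_a * pierce_impedance_ohm
         / (4.0 * beam_voltage_v)) ** (1.0 / 3.0)
    return -9.54 + 47.3 * C * n_wavelengths

# Illustrative operating point only: 20 mA beam at 2.5 kV, K = 100 ohm, N = 30.
gain = pierce_gain_db(0.02, 2500.0, 100.0, 30.0)
```

The constant -9.54 dB is the launching loss of the growing wave alone; the feedback model described above replaces this simple expression precisely because reflections at the ports and attenuator make the real tube's gain, and its stability, frequency-dependent.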
Some results are presented.

Item An Analytic model for high electron mobility transistors. (1986) Hill, Adrian John.; Nattrass, Henry Lee.
The last six years have seen the emergence and rapid development of a new type of field-effect transistor, the High Electron Mobility Transistor (HEMT), which offers improved performance in both digital and analogue circuits compared with circuits incorporating either MEtal Semiconductor (MES) or Metal Oxide Semiconductor (MOS) FETs. A new physically-based analytic model for HEMTs, which predicts the DC and RF electrical performance from the material and structural parameters of the device, is presented. The efficacy of the model is demonstrated by comparisons between simulated and measured device characteristics at DC and microwave frequencies. The good agreement with experiment indicates that velocity overshoot effects are considerably less important in HEMTs than has been widely assumed, and that the electron transit velocity in submicron devices is approximately 10⁷ cm/s, rather than around 2×10⁷ cm/s. The Inverted HEMT, one of the major HEMT structural variants, is emphasized throughout this work because of its potential advantages over other variants, and practical results from 0.5-micron gate length Inverted HEMTs are presented.

Item Application of cognitive radio based sensor network in smart grids for efficient, holistic monitoring and control. (2018) Ogbodo, Emmanuel Utochukwu.; Dorrell, David George.; Abu-Mahfouz, Adnan M.
This thesis is directed towards the application of a cognitive radio based sensor network (CRSN) in the smart grid (SG) for efficient, holistic monitoring and control. The work involves enabling sensor-network and wireless communication devices to utilize spectrum via the Dynamic Spectrum Access (DSA) capability of a cognitive radio (CR), as well as end-to-end communication access technology for unified monitoring and control in smart grids.
Smart Grid (SG) is a new power grid paradigm that can provide predictive information and recommendations to utilities, including their suppliers and customers, on how best to manage power delivery and consumption. SG can greatly reduce air pollution by accommodating renewable power sources such as wind energy, solar plants and large hydro stations, and it also reduces electricity blackouts and surges. A communication network is the foundation of the modern SG, and implementing an improved communication solution will help address the problems of the existing SG. Hence, this study proposed and implemented an improved CRSN model to evade the inherent problems of the communication network in the SG, such as energy inefficiency, interference, spectrum inefficiency, poor quality of service (QoS), latency and low throughput. The predominant existing approach to the communication needs of the SG is the use of wireless sensor networks (WSNs). However, WSNs have low battery power, low computational capability, low bandwidth support, and high latency due to multihop transmission in the existing WSN topology. Consequently, solving these problems by addressing energy efficiency, throughput and latency has not been fully realized, owing to the limitations of the WSN and the existing network topology; the existing approach has therefore not fully addressed the communication needs of the SG. The SG can be fully realized by integrating communication network technology infrastructure into the power grid. A Cognitive Radio-based Sensor Network (CRSN) is considered a feasible solution to enhance various aspects of the electric power grid, such as real-time communication with end and remote devices for efficient monitoring, and to realize the maximum benefits of a smart grid system. CRSN in SG is aimed at addressing the problems of spectrum inefficiency and interference, which a wireless sensor network (WSN) cannot.
However, numerous challenges for CRSNs arise from the harsh wireless environment of a smart grid system; as a result, latency, throughput and reliability become critical issues. To overcome these challenges, many approaches can be adopted, ranging from the integration of CRSNs into SGs, proper implementation design models for the SG, reliable communication access devices, and key immunity requirements for the communication infrastructure, up to communication network protocol optimization. To this end, this study utilized the National Institute of Standards and Technology (NIST) framework for SG interoperability in the design of a unified communication network architecture, including an implementation model for guaranteed quality of service (QoS) of smart grid applications. This involves a virtualized network in the form of multi-homing, comprising low-power wide-area network (LPWAN) devices such as LTE CAT1/LTE-M and TV white space band devices (TVBDs). Simulation and analysis show that the developed architecture outperforms legacy wireless systems in terms of latency, blocking probability, and throughput under the harsh environmental conditions of the SG. In addition, the problem of correlated fading across the multiple antenna channels of the sensor nodes in a CRSN-based SG has been addressed through the performance analysis of a moment generating function (MGF) based M-QAM error probability over Nakagami-q dual correlated fading channels with a maximum ratio combining (MRC) receiver, including derivations and a novel algorithmic approach. The results of the MATLAB simulation are provided as a guide for sensor-node deployment, in order to avoid the problem of multi-channel correlation in CRSN-based SGs. SG applications require reliable and efficient communication with low latency, as well as an adequate topology of sensor-node deployment for guaranteed QoS.
Another important requirement is an optimized protocol and algorithms for energy efficiency and cross-layer spectrum awareness, enabling opportunistic spectrum access in the CRSN nodes. Consequently, an optimized cross-layer interaction of the physical and MAC layer protocols was developed using various novel algorithms and techniques. This includes a novel energy-efficient distributed heterogeneous clustered spectrum-aware (EDHC-SA) multichannel sensing signal model, with a novel equilateral-triangulation algorithm for guaranteed network connectivity in a CRSN-based SG. The simulation results confirm that the EDHC-SA CRSN model outperforms a conventional ZigBee WSN in terms of bit error rate (BER), end-to-end delay (latency) and energy consumption, validating the suitability of the developed model in the SG.

Item A CAD tool for the prediction of VLSI interconnect reliability. (1988) Frost, David Frank.; Poole, Kelvin F.
This thesis proposes a new approach to the design of reliable VLSI interconnects, based on predictive failure models embedded in a software tool for reliability analysis. A method for predicting the failure rate of complex integrated circuit interconnects subject to electromigration is presented. This method is based on the principle of fracturing an interconnect pattern into a number of statistically independent conductor segments. Five commonly occurring segment types are identified: straight runs, steps resulting from a discontinuity in the wafer surface, contact windows, vias and bonding pads. The relationship between the median time-to-failure (Mtf) of each segment and its physical dimensions, temperature and current density is determined. The model includes the effect of time-varying current density. The standard deviation of lifetime is also determined as a function of dimensions. A minimum-order statistical method is used to compute the failure rate of the interconnect system.
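A common starting point for segment-level electromigration lifetime is Black's equation, and for statistically independent segments in series the per-segment failure rates simply add. A minimal sketch; the constants A, n and Ea below are illustrative placeholders, not the calibrated per-segment-type values a tool like the one described would use.

```python
import math

BOLTZMANN_EV_PER_K = 8.617e-5  # Boltzmann constant in eV/K

def black_mtf(j_a_per_cm2, temp_k, a_const=1.0, n=2.0, ea_ev=0.7):
    """Black's equation for electromigration median time-to-failure:
    MTF = A * j**(-n) * exp(Ea / (k * T)).
    Higher current density j or higher temperature T shortens the MTF.
    """
    return a_const * j_a_per_cm2 ** (-n) * math.exp(ea_ev / (BOLTZMANN_EV_PER_K * temp_k))

def series_failure_rate(segment_rates):
    """A conductor fails when its first segment fails: for statistically
    independent segments, the failure rates add (series system).
    """
    return sum(segment_rates)
```

The minimum-order-statistics step in the thesis refines this series picture by accounting for the lognormal spread of segment lifetimes rather than treating each segment by a single rate.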
This method, which is applicable to current densities below 10⁶ A/cm², combines mask layout and simulation data from the design database with process data to calculate failure rates. A suite of software tools called Reliant (RELIability Analyzer for iNTerconnects), which implements the algorithms described above, is presented. Reliant fractures a conductor pattern into segments and extracts electrical equivalent circuits for each segment. The equivalent circuits are used in conjunction with a modified version of the SPICE circuit simulator to determine the currents in all segments and to compute reliability. An interface to a database query system provides the capability to access reliability data interactively. The performance of Reliant is evaluated on two CMOS standard cell layouts, and test structures for the calibration of the reliability models are provided. Reliant is suitable for the analysis of leaf cells containing a few hundred transistors. For MOS VLSI circuits, an alternative approach based on the use of an event-driven switch-level simulator is presented.

Item Channel assembling policies for heterogeneous fifth generation (5G) cognitive radio networks. (2016) Esenogho, Ebenezer.; Srivastava, Viranjay Mohan.
Abstract available in PDF file.

Item Channel characterization for broadband powerline communications. (2014) Mulangu, Chrispin Tshikomba.; Afullo, Thomas Joachim Odhiambo.; Ijumba, Nelson Mutatina.
The main limiting factor in broadband powerline communications is the presence of impedance discontinuities in the wired channel. This phenomenon is present in both outdoor and indoor powerline communication (PLC) channels. It has been established that the impedance of electrical loads and line branching are the main causes of impedance discontinuities in PLC channel networks. Accurate knowledge of the expected impedances at the corresponding discontinuity points is therefore vital in order to characterize the channel for signal transmission.
However, PLC channel network topologies lead to different branching structures. Additionally, the existence of a myriad of electrical loads, whose noise and impedance vary with frequency, is a motivation for a rigorous design methodology in order to achieve a pragmatic channel model. To develop such a channel model, an approach similar to the one applied in radio propagation channel modeling is adopted, where the specific attenuation determined at a point is used to predict the attenuation over the entire power-cable length. The powerline is therefore modeled with the assumption of a randomly spread multitude of scatterers in the vicinity of the channel, with only a sufficient number of impedance discontinuity points. The line is considered a single homogeneous element, with its length divided into a grid of small areas with dimensions ranging from 0.5 to 3 mm. Each small area transmits an echo, and the forward-scattered response reaches the receiver. With this approach, a point-specific attenuation along the line is proposed and used to derive the channel transfer function. Measurement results show that both the analytical specific attenuation model developed in this work and the channel transfer function are feasible novel ideas in PLC channel network characterization. The measurements show that signal attenuation is directly proportional to the number of branches, in line with the findings of previous researchers. A comparison between the measured values and the simulated frequency response shows very good agreement, demonstrating the applicability of the models in a practical environment.
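The echo picture described above is in the same spirit as the well-known multipath model of a PLC channel, in which each discontinuity contributes a weighted, attenuated and delayed replica of the transmitted signal. A minimal sketch with invented path gains, lengths and attenuation parameters, not the thesis's fitted grid-of-scatterers model:

```python
import numpy as np

def plc_transfer_function(freqs_hz, gains, path_lengths_m,
                          a0=1e-3, a1=1e-10, k=0.7, v=1.5e8):
    """Multipath echo model of a powerline channel:
    H(f) = sum_i g_i * exp(-(a0 + a1 * f**k) * d_i) * exp(-2j*pi*f*d_i / v),
    where g_i is the echo weight, d_i its path length and v the propagation
    speed on the cable.  All parameter values here are illustrative.
    """
    f = np.asarray(freqs_hz, float)[:, None]       # (n_freqs, 1)
    g = np.asarray(gains, float)[None, :]          # (1, n_paths)
    d = np.asarray(path_lengths_m, float)[None, :]
    attenuation = np.exp(-(a0 + a1 * f ** k) * d)  # frequency/length dependent
    phase = np.exp(-2j * np.pi * f * d / v)        # delay of each echo
    return (g * attenuation * phase).sum(axis=1)

f = np.linspace(1e6, 30e6, 256)  # 1-30 MHz broadband PLC band
H = plc_transfer_function(f, gains=[0.6, 0.3, 0.1],
                          path_lengths_m=[30.0, 45.0, 60.0])
```

The interference between the delayed echoes is what produces the frequency-selective notches seen in measured PLC frequency responses, and attenuation grows with both frequency and path length as in the point-specific attenuation model above.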
Thus we conclude that the models developed require knowledge neither of the link topology nor of the cable models, but do require an extensive measurement campaign.

Item Channel estimation for SISO and MIMO OFDM communications systems. (2010) Oyerinde, Olutayo Oyeyemi.; Mneney, Stanley Henry.
Telecommunications in the current information age increasingly relies on the wireless link, because wireless communication has made possible a variety of services ranging from voice to data and now to multimedia. Consequently, demand for new wireless capacity is growing at an alarming rate. In a bid to cope with the challenges of increasing demand for higher data rates, better quality of service, and higher network capacity, there is a migration from Single Input Single Output (SISO) antenna technology to the more promising Multiple Input Multiple Output (MIMO) antenna technology. At the same time, Orthogonal Frequency Division Multiplexing (OFDM) has emerged as a very popular multi-carrier modulation technique for combating the problems associated with the physical properties of wireless channels, such as multipath fading, dispersion, and interference. The combination of MIMO technology with OFDM techniques, known as MIMO-OFDM, is considered a promising solution for enhancing the data rate of future broadband wireless communication systems. This thesis addresses a major challenge for both SISO-OFDM and MIMO-OFDM systems: the estimation of accurate channel state information (CSI), needed for coherent detection of the transmitted signal at the receiver. The first novel contribution of this thesis is the development of a low-complexity adaptive algorithm, robust against both slow and fast fading channel scenarios in comparison with other algorithms in the literature, to implement a soft iterative channel estimator for a turbo-equalizer-based receiver for single-antenna communication systems.
Subsequently, a Fast Data Projection Method (FDPM) subspace tracking algorithm is adapted to derive a channel impulse response estimator for the implementation of Decision Directed Channel Estimation (DDCE) for Single Input Single Output Orthogonal Frequency Division Multiplexing (SISO-OFDM) systems. This is implemented in the context of the more realistic Fractionally Spaced Channel Impulse Response (FS-CIR) channel model, as opposed to the Sample Spaced Channel Impulse Response (SS-CIR) model widely assumed by other authors. In addition, a fast-convergence Variable Step Size Normalized Least Mean Square (VSSNLMS)-based predictor, with low computational complexity in comparison with others in the literature, is derived for the implementation of the CIR predictor module of the DDCE scheme. A novel iterative receiver structure for the FDPM-based DDCE scheme is also designed for SISO-OFDM systems, based on the turbo iterative principle. It is shown that the iterative DDCE scheme improves performance for the OFDM system in comparison with the non-iterative scheme. Lastly, the iterative receiver structure for the FDPM-based DDCE scheme designed for SISO-OFDM is extended to MIMO-OFDM systems. In addition, a VSSNLMS-based channel transfer function (CTF) estimator is derived in the context of the MIMO channel for the implementation of the CTF estimator module of the iterative DDCE scheme for MIMO-OFDM systems, in place of the linear minimum mean square error (MMSE) criterion.
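As one concrete reading of such a predictor, the sketch below implements a one-step linear predictor whose NLMS step size varies with the squared prediction error. The specific step-size recursion and all parameter values are assumptions for illustration, not the thesis' exact algorithm.

```python
import numpy as np

def vssnlms_predict(d, order=4, mu_max=1.0, mu_min=0.01,
                    alpha=0.97, gamma=1e-3, eps=1e-8):
    """One-step linear predictor with a variable-step-size NLMS update.
    The step-size recursion below is one plausible choice (assumed)."""
    w = np.zeros(order)
    mu = mu_max
    preds = np.zeros_like(d)
    for n in range(order, len(d)):
        x = d[n - order:n][::-1]            # regressor of past samples
        preds[n] = w @ x
        e = d[n] - preds[n]                 # prediction error
        mu = np.clip(alpha * mu + gamma * e**2, mu_min, mu_max)
        w += mu * e * x / (x @ x + eps)     # normalized LMS update
    return preds, w

# Track a slowly varying (sinusoidal) tap sequence
t = np.arange(500)
d = np.cos(2 * np.pi * 0.01 * t)
preds, w = vssnlms_predict(d)
err = np.mean((d[100:] - preds[100:])**2)
print(err)
```

The step size grows when the error is large (fast fading) and shrinks near convergence (slow fading), which is the behaviour that makes such predictors attractive for CIR prediction.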
The VSSNLMS-based channel transfer function estimator shows an MSE improvement of about 4 dB at an SNR of 5 dB in comparison with the linear MMSE-based channel transfer function estimator.

Item Characteristics of rain at microwave and millimetric bands for terrestrial and satellite links attenuation in South Africa and surrounding islands.(2010) Owolawi, Pius Adewale.; Afullo, Thomas Joachim Odhiambo.; Malinga, Sandile B.
The emergence of a vast range of communication devices running on different types of technology has made convergence of technology the order of the day. This revolution in communications technology has resulted in a pressing need for larger bandwidth, higher data rates and better spectrum availability, and it has become important that these factors be addressed. This has resulted in the current resurgence of interest in investigating higher electromagnetic spectrum space that can meet these needs. For the past decade, the microwave (3 GHz-30 GHz) and millimeter wave (30 GHz-300 GHz) bands have been used as appropriate frequency ranges for applications requiring wide bandwidth, smaller component size, narrow beamwidths, frequency re-use, small antennas, and short deployment time. To optimize the use of these frequency ranges by communication systems, the three tiers of communication system elements - receiver, transmitter and transmission channel or medium - must be properly designed and configured. However, even if the transmitter and receiver meet the necessary requirements, the medium in which signals are transmitted often becomes an issue at this range of frequencies. The most significant factor affecting the transmission of signals in these bands is attenuation and scattering by rain, snow, water vapour and other gases in the atmosphere. Scattering and absorption by rain at microwave and millimeter bands is thus a main concern for system designers.
This study presents results of research into the interaction of rainfall with microwave and millimeter wave propagation. The study of rainfall characteristics allows estimation of its scattering and attenuation effects on microwave and millimeter waves. The components of this work encompass rainfall rate integration time, cumulative distribution and modelling of rainfall rate, and the characteristics and modelling of raindrop size. The effects of rain on microwave and millimeter wave signals, which result in rain attenuation, are based on rainfall rate variables such as the rainfall rate cumulative distribution, raindrop size distribution, total scattering cross sections, raindrop shape, and raindrop terminal velocity. A regional rainfall rate conversion factor from five-minute rainfall data to a one-minute integration time is developed using an existing conversion method and a newly developed hybrid method. Based on the conversion factor results from the hybrid method, rainfall at a five-minute integration time was converted to its one-minute equivalent to estimate its cumulative distributions. In addition, new rain zones based on the ITU-R and Crane designations are suggested for the entire region of South Africa and the surrounding islands. The results are compared with past research done in other regions. Rain attenuation is acutely influenced by the raindrop size distribution (DSD). This study thus also investigates DSD models from previous research. Several DSD models are commonly used to estimate rain attenuation; they are rooted in the exponential, gamma, lognormal and Weibull distributions. Since the DSD is dynamic and location-dependent, a simple raindrop size distribution model is developed for Durban using the maximum likelihood estimation (MLE) method. The MLE method is applied to the three-parameter lognormal distribution in order to model the DSD for Durban.
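For the lognormal DSD, the MLE step is simple enough to sketch: given measured drop diameters, the sample mean and standard deviation of the log-diameters estimate the lognormal parameters μ and σ, while the measured total drop concentration supplies the third parameter N_T. This is an illustrative reconstruction on synthetic data, not the thesis' exact procedure.

```python
import numpy as np

def lognormal_dsd_mle(diameters, n_total):
    """MLE for the three-parameter lognormal drop size distribution
    N(D) = (N_T / (sqrt(2 pi) sigma D)) exp(-(ln D - mu)^2 / (2 sigma^2)).
    mu, sigma come from the log-diameters; N_T is the measured total
    drop concentration (assumed known)."""
    logs = np.log(diameters)
    return n_total, logs.mean(), logs.std()   # std with ddof=0 is the MLE

def dsd(D, n_t, mu, sigma):
    """Evaluate the fitted lognormal DSD at diameter D (mm)."""
    return (n_t / (np.sqrt(2 * np.pi) * sigma * D)
            * np.exp(-(np.log(D) - mu)**2 / (2 * sigma**2)))

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.2, sigma=0.4, size=5000)  # synthetic diameters, mm
n_t, mu, sigma = lognormal_dsd_mle(samples, n_total=200.0)
print(mu, sigma)
```

With 5000 synthetic drops the recovered (μ, σ) land close to the generating values (0.2, 0.4), illustrating why MLE is a natural fit for location-dependent DSD modelling.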
Raindrop behaviour depends on rainfall rate, drop diameter and raindrop velocity. Semi-empirical models of terminal velocity from previous studies are investigated in this work and proposed for the estimation of specific rain attenuation.

Item Clear-air radioclimatological modeling for terrestrial line of sight links in Southern Africa.(2010) Kemi, Odedina Peter.; Afullo, Thomas Joachim Odhiambo.
This thesis investigates radioclimatology in a clear-air environment as applicable to terrestrial line-of-sight link design problems. Radioclimatological phenomena are reviewed for both the precipitation effect and the clear-air effect, with the research focusing on the clear-air effect. The two Southern African countries chosen for the case study are Botswana and South Africa. To this end, radiosonde data gathered in Maun, Botswana and Durban, South Africa are used for model formulation and verification. Three to ten years of refractivity data gathered at these two stations is used for the model formulation. In addition, eight months of signal level measurements recorded on the terrestrial line-of-sight link set up between the Howard College and Westville campuses of the University of KwaZulu-Natal, Durban, South Africa are used for model verification. Though various radioclimatic parameters can affect radio signal propagation in the clear-air environment, this work focuses on two: the geoclimatic factor and the effective earth radius factor (k-factor). The first parameter is useful for multipath fading determination, while the second is very important for diffraction fading modeling and characterization. The two countries have different terrain and topographical structures, further underlining the choice of these two parameters.
While Maun in Botswana has gentle, flat terrain, Durban in South Africa is characterized by a hilly and mountainous terrain structure, which affects radioclimatological modeling in the two countries. Two analytical models are proposed in the thesis to solve clear-air radioclimatic problems in Southern Africa. The first is a fourth-order polynomial analytical expression; the second is the parabolic equation. The fourth-order polynomial model was proposed after an extensive analysis of the eight months of signal level measurements gathered in Durban, South Africa. This model predicts the fade exceedance probability as a function of fade depth. The results from the fourth-order polynomial model are found to be comparable with the other established multipath propagation models reviewed in the thesis. Availability of more measurement data at more locations will be necessary in future to further refine this model. The second model proposed to solve clear-air propagation problems is the modified parabolic equation. This technique was chosen for its strength and its simple adaptation to terrestrial line-of-sight link design problems. The adaptation is possible because the parabolic equation can be modified to incorporate clear-air parameters. This modification allows a hybrid technique that incorporates both statistical and mathematical procedures into one single process. As a result, most of the very important phenomena in clear-air propagation, such as duct occurrence probabilities, diffraction fading and multipath fading, are captured by this technique. The standard parabolic equation (SPE) is the unmodified parabolic equation, which only accounts for free-space propagation, while the modified parabolic equation (MPE) is its modified version.
The MPE is classified into two forms in the thesis: the first modified parabolic equation (MPE1) and the second modified parabolic equation (MPE2). MPE1 incorporates the geoclimatic factor and is intended to study the multipath fading effect in the location of study, while MPE2 incorporates the effective earth radius factor (k-factor) and is intended to study diffraction fading. The results and their analysis after these modifications confirm our expectation: signal loss is due primarily to diffraction fading in Durban, while in Botswana it is due primarily to multipath. This is as expected, since flat terrain is prone to signal loss due to multipath while hilly terrain is prone to signal loss due to diffraction fading.

Item Combined turbo coding and interference rejection for DS-CDMA.(2004) Bejide, Emmanuel Oluremi.; Takawira, Fambirai.
This dissertation presents interference cancellation techniques for both Forward Error Correction (FEC) coded and uncoded Direct Sequence Code Division Multiple Access (DS-CDMA) systems. Analytical models are developed for the adaptive and the non-adaptive Parallel Interference Cancellation (PIC) receivers. Results obtained from computer simulations of the PIC receiver types confirm the accuracy of the analytical models developed. Results show that the Least Mean Square (LMS) algorithm based adaptive PIC receivers have bit error rate performances that are better than those of the non-adaptive PIC receivers. In the second part of this dissertation, a novel iterative multiuser detector for the turbo coded DS-CDMA system is developed. The performance of the proposed receiver in the multirate CDMA system is also investigated.
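The parabolic-equation marching described above is commonly implemented with the split-step Fourier method: a free-space diffraction step in the transform domain followed by an environment phase screen, into which a term such as the effective earth radius can be folded. The sketch below is a minimal illustration of that structure; the grid, frequency and aperture values are invented, and the phase screen shown carries only the earth-curvature term.

```python
import numpy as np

def split_step_pe(u0, freq_hz, dx, nsteps, z, k_factor=4/3):
    """March a field u0(z) forward in range with the split-step Fourier
    parabolic equation. The effective-earth-radius (k-factor) term enters
    as a phase screen; parameter values are illustrative assumptions."""
    c = 3e8
    k0 = 2 * np.pi * freq_hz / c                 # free-space wavenumber
    a_e = k_factor * 6.371e6                     # effective earth radius, m
    kz = 2 * np.pi * np.fft.fftfreq(len(z), d=z[1] - z[0])
    diffraction = np.exp(-1j * kz**2 * dx / (2 * k0))  # free-space (SPE) step
    refraction = np.exp(1j * k0 * (z / a_e) * dx)      # earth-curvature screen
    u = u0.astype(complex)
    for _ in range(nsteps):
        u = np.fft.ifft(diffraction * np.fft.fft(u))   # diffraction step
        u *= refraction                                # environment step
    return u

z = np.linspace(0, 500, 1024)              # height grid, metres
u0 = np.exp(-((z - 100) / 20)**2)          # Gaussian aperture field at 100 m
u = split_step_pe(u0, freq_hz=1e9, dx=100.0, nsteps=50, z=z)
print(np.max(np.abs(u)))
```

Because both screens have unit modulus and the FFT is unitary, the marched field conserves power, a useful sanity check on any PE implementation.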
The developed receiver is found to have an error rate performance very close to the single-user limit after a few iterations. The receiver is also resilient to the near-far effect. A methodology is also presented for the use of the Gaussian approximation method in the convergence analysis of iterative interference cancellation receivers for turbo coded DS-CDMA systems.

Item Computer-aided design of RF MOSFET power amplifiers.(1992) Hoile, Gary Alec.; Reader, H. C.
The process of designing high power RF amplifiers has in the past relied heavily on measurements, in conjunction with simple linear theory. With the advent of the harmonic balance method and increasingly faster computers, CAD techniques can be of great value in designing these nonlinear circuits. Relatively little work has been done in modelling RF power MOSFETs. The methods described in numerous papers for the nonlinear modelling of microwave GaAsFETs cannot be applied easily to these high power devices. This thesis describes a modelling procedure applicable to RF MOSFETs rated at over 100 W. This is achieved by the use of cold S parameters and pulsed drain current measurements taken at controlled temperatures. A method of determining the required device thermal impedance is given. A complete nonlinear equivalent circuit model, including two nonlinear capacitors, is extracted for an MRF136 MOSFET, a 28 V, 15 W device. An equation is developed to describe accurately the drain current as a function of the internal gate and drain voltages. The model parameters are found by computer optimisation against measured data. Techniques for modelling the passive components in RF power amplifiers are given, including resistors, inductors, capacitors, and ferrite transformers. Although linear ferrite transformer models are used, nonlinear forms are also investigated. The accuracy of the MOSFET model is verified by comparison with large signal measurements in a 50 Ω system.
A complete power amplifier using the MRF136, operating from 118 MHz to 175 MHz, is built and analysed. The accuracy of predictions is generally within 10% for output power and DC supply current, and around 30% for input impedance. An amplifier is designed using the CAD package and then built, requiring only a small final adjustment of the input matching circuit. The computer-based methods described lead quickly to a near-optimal design and reduce the need for extensive high power measurements. The use of nonlinear analysis programs is thus established as a valuable design tool for engineers working with RF power amplifiers.

Item Constant modulus based blind adaptive multiuser detection.(2004) Whitehead, James Bruce.; Takawira, Fambirai.
Signal processing techniques such as multiuser detection (MUD) have the capability of greatly enhancing the performance and capacity of future generation wireless communications systems. Blind adaptive MUDs have many favourable qualities, and their application to DS-CDMA systems has attracted a lot of attention. The constant modulus algorithm is widely deployed in blind channel equalization applications. The central premise of this thesis is that the constant modulus cost function is very suitable for blind adaptive MUD in future generation wireless communications systems. To prove this point, the adaptive performance of blind (and non-blind) adaptive MUDs is derived analytically for all the schemes that can be made to fit the same generic structure as the constant modulus scheme. For the first time, both the relative and absolute performance levels of the different adaptive algorithms are computed, which gives insight into the performance levels of the different blind adaptive MUD schemes and demonstrates the merit of the constant modulus based schemes.
The adaptive performance of the blind adaptive MUDs is quantified using the excess mean square error (EMSE) as a metric, derived for the steady-state, tracking, and transient stages of the adaptive algorithms. If constant modulus based MUDs are suitable for future generation wireless communications systems, then they should also be capable of suppressing multi-rate DS-CDMA interference and should demonstrate the ability to suppress the narrowband interference (NBI) that arises in overlay systems. Multi-rate DS-CDMA provides the capability of transmitting at various bit rates and quality of service levels over the same air interface. Limited spectrum availability may lead to the implementation of overlay systems whereby wideband CDMA signals are collocated with existing narrowband services. Both overlay systems and multi-rate DS-CDMA are important features of future generation wireless communications systems. The interference patterns generated by both multi-rate DS-CDMA and digital NBI are cyclostationary (or periodically time varying), and traditional MUD techniques do not take this into account and are thus suboptimal. Cyclic MUDs, although suboptimal, do take the cyclostationarity of the interference into account, but to date no cyclic MUDs based on the constant modulus cost function have been proposed. This thesis thus derives novel, blind adaptive, cyclic MUDs based on the constant modulus cost function, for direct implementation on the FREquency SHift (FRESH) filter architecture. The FRESH architecture provides a modular and thus flexible implementation (in terms of computational complexity) of a periodically time varying filter. The operation of the blind adaptive MUD on these reduced-complexity architectures is also explored. The robustness of the new cyclic MUD is proven via a rigorous mathematical proof. An alternative architecture to the FRESH filter is the filter bank.
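The core of a constant modulus based blind MUD is a stochastic gradient step on the CM cost J(w) = E[(|y|^2 - R2)^2] with detector output y = w^T r. The toy example below uses real-valued signals, hand-picked signatures and a synchronous two-user channel, all assumptions made purely for illustration; it shows a detector initialised with the desired user's signature adapting to suppress a stronger interferer.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                                                 # spreading gain
s1 = np.ones(N) / np.sqrt(N)                          # desired user's signature
s2 = np.array([1, 1, 1, 1, -1, -1, -1, 1]) / np.sqrt(N)  # interferer (corr. 0.25)

w = s1.copy()                  # blind detector, initialised with the signature
mu, R2 = 0.005, 1.0            # step size and CM dispersion constant (assumed)
for _ in range(20000):
    b1, b2 = rng.choice([-1, 1]), rng.choice([-1, 1])
    # Received chip vector: desired user, a stronger interferer, and noise
    r = b1 * s1 + 2.0 * b2 * s2 + 0.05 * rng.standard_normal(N)
    y = w @ r
    w -= mu * (y * y - R2) * y * r   # stochastic constant modulus gradient step

print(abs(w @ s2) / abs(w @ s1))     # interferer-to-signal gain after adaptation
```

The update needs no training sequence, only the modulus property of the transmitted symbols, which is what makes the cost function attractive for blind operation.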
Using the previously derived analytical framework for the adaptive performance of MUDs, the relative performance of the adaptive algorithms on the FRESH and filter bank architectures is examined. Prior to this thesis, no conclusions could be drawn as to which architecture would yield superior performance. The performance analysis of the adaptive algorithms is also extended in this thesis to consider the effects of timing jitter at the receiver, signature waveform mismatch, and other pertinent issues that arise in realistic implementation scenarios. Thus, through a careful analytical approach, verified by computer simulation results, the suitability of constant modulus based MUDs is established in this thesis.

Item Cooperative diversity techniques for future wireless communications systems.(2013) Moualeu, Jules Merlin Mouatcho.; Xu, Hongjun.; Takawira, Fambirai.
Multiple-input multiple-output (MIMO) systems have been extensively studied in the past decade. The attractiveness of MIMO systems is due to the fact that they drastically reduce the deleterious effects of multipath fading, leading to high system capacity and low error rates. In situations where wireless devices are constrained by their size and hardware complexity, such as mobile phones, transmit diversity is not achievable. A new paradigm called cooperative communication is a viable solution. In a cooperative scenario, a single-antenna device is assisted by another single-antenna device to relay its message to the destination or base station. This creates a virtual MIMO system. There exist two cooperative strategies: amplify-and-forward (AF) and decode-and-forward (DF). In the former, the relay amplifies the noisy signal received from the source before forwarding it to the destination; no form of demodulation is required. In the latter, the relay first decodes the source signal before transmitting an estimate to the destination.
This work focuses on the DF method. A drawback of an uncoded DF cooperative strategy is error propagation at the relay. To avoid error propagation in DF, various relay selection schemes can be used. Coded cooperation can also be used to avoid error propagation at the relay; various error correcting codes such as convolutional codes or turbo codes can be used in a cooperative scenario. The first part of this work studies a variation of turbo codes in cooperative diversity that further reduces error propagation at the relay, hence lowering the end-to-end error rate. The union bounds on the bit-error rate (BER) of the proposed scheme are derived using the pairwise error probability via the transfer bounds and limit-before-average techniques. In addition, the outage analysis of the proposed scheme is presented. Simulation results of the bit error and outage probabilities are presented to corroborate the analytical work; in the case of the outage probability, the computer simulation results are in good agreement with the analytical framework presented. Recently, most studies have focused on cross-layer design combining cooperative diversity at the physical layer and truncated automatic repeat request (ARQ) at the data-link layer, using the system throughput as the performance metric. Various throughput optimization strategies have been investigated. In this work, a cross-layer relay selection approach that maximizes the system throughput is presented. The cooperative network comprises a set of relays, and the reliable relay(s) that maximize the throughput at the data-link layer are selected to assist the source. It is shown through simulation that, from a throughput point of view, this novel scheme outperforms a scheme in which all the reliable relays always participate in forwarding the source packet. A power optimization of the best-relay uncoded DF cooperative diversity scheme is also investigated.
This optimization aims at maximizing the system throughput. Because the throughput expression is neither concave nor convex, it is intractable to derive a closed-form expression for the optimal power directly from the system throughput. However, this can be done via symbol-error rate (SER) optimization, since it is shown that minimizing the SER of the cooperative system is equivalent to maximizing the system throughput. The SER of the retransmission scheme at high signal-to-noise ratio (SNR) was obtained, and the derived SER is in excellent agreement with the simulated SER at high SNR. Moreover, the optimal power allocation obtained under a general optimization problem yields a throughput performance superior to non-optimized power values from moderate to high SNRs. The last part of the work considers the throughput maximization of multi-relay adaptive DF over independent and non-identically distributed (i.n.i.d.) Rayleigh fading channels, integrating ARQ at the link layer. The aim is to maximize the system throughput via power optimization, and it is shown that this can be done by minimizing the SER of the retransmission. Firstly, closed-form expressions for the exact SER of the multi-relay adaptive DF scheme are derived, as well as their corresponding asymptotic bounds. Results show that the optimal power distribution yields maximum throughput. Furthermore, the power allocated to a relay depends greatly on its location relative to the source and destination.

Item Cross-layer design for multimedia applications in cognitive radio networks.(2015) Msumba, John Andrew.; Xu, Hongjun.
The exponential growth in wireless services and the current trend of development in wireless communication technologies have resulted in a radio spectrum so overcrowded that it can no longer meet the ever increasing requirements of wireless applications.
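The equivalence invoked above between minimizing the error rate and maximizing throughput can be seen in the simplest truncated-ARQ setting: with per-attempt packet error rate ε and at most M transmissions per packet, the throughput works out to R(1 - ε), which is monotone decreasing in the error rate. The sketch below (rate and error values are illustrative) demonstrates that monotonicity numerically:

```python
import numpy as np

def arq_throughput(eps, rate=1.0, max_tx=4):
    """Throughput of truncated ARQ with per-attempt packet error rate eps:
    successfully delivered rate divided by the expected number of
    transmissions per packet. Algebraically this equals rate * (1 - eps)."""
    e_n = (1 - eps**max_tx) / (1 - eps)      # expected transmissions per packet
    return rate * (1 - eps**max_tx) / e_n

eps = np.linspace(0.01, 0.5, 50)
tput = arq_throughput(eps)
print(tput[0], tput[-1])
```

Since throughput falls strictly as ε rises, any power allocation that minimizes the (packet or symbol) error rate also maximizes throughput in this simplified model, which is the intuition behind optimizing the SER instead of the intractable throughput expression.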
In contrast, however, literature surveys indicate that a large proportion of the licensed radio spectrum bands are underutilized. This has necessitated efficient ways of sharing spectrum among different systems, applications and services in a dynamic wireless environment. Cognitive radio (CR) technology emerges as a way to improve the overall efficiency of radio spectrum utilization by allowing unlicensed users (also known as secondary users) to utilize a licensed band when it is vacant. Multimedia applications are being targeted for CR networks. However, the performance and success of CR technology will be determined by the quality of service (QoS) perceived by secondary users. In order to transmit multimedia content, which has stringent QoS requirements, over CR networks, many technical challenges constrained by the layered protocol architecture have to be addressed. Cross-layer design has shown promise as an approach to optimize network performance across different layers. This work is aimed at addressing the question of how to provide QoS guarantees for multimedia transmission over CR networks, in terms of throughput maximization, while ensuring that interference to primary users is avoided or minimized. Spectrum sensing is a fundamental problem in cognitive radio networks for the protection of primary users, and therefore the first part of this work reviews some low-complexity spectrum sensing schemes. A cooperative spectrum sensing scheme in which multiple users independently perform spectrum sensing is also developed. In order to address the hidden node problem, a cooperative relay based on the amplify-and-forward (AF) technique is formulated. The performance of a spectrum sensor is usually evaluated using the receiver operating characteristic (ROC) curve, which provides a trade-off between the probability of missed detection and the probability of false alarm.
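One standard way to combine independent sensing reports at a fusion centre is hard-decision OR fusion; the formula below is the textbook OR rule, though the thesis' exact fusion scheme and the single-sensor probabilities used here are not specified in this abstract.

```python
import numpy as np

def or_rule(p_local, k):
    """Fusion-centre probability that at least one of k independent sensors
    reports a detection, when each does so with probability p_local.
    Applied to p_d it gives the cooperative detection probability; applied
    to p_fa it gives the cooperative false-alarm probability."""
    return 1 - (1 - p_local) ** k

p_d, p_fa = 0.6, 0.05          # single-sensor values (illustrative)
for k in (1, 3, 5):
    print(k, or_rule(p_d, k), or_rule(p_fa, k))
```

The OR rule raises the detection probability rapidly with the number of cooperating users, at the cost of a higher false-alarm probability, which is exactly the ROC trade-off the abstract refers to.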
Due to hardware limitations, the spectrum sensor cannot sense the whole range of the radio spectrum, which results in partial information about the channel state. In order to model a media access control (MAC) protocol that can make channel access decisions under partial information about the state of the system, we apply the partially observable Markov decision process (POMDP) technique as a suitable tool for decision making under uncertainty. A throughput-optimizing MAC scheme in the presence of spectrum sensing errors is then developed using the concept of cross-layer design, which integrates the design of spectrum sensing at the physical layer (PHY) with the sensing and access strategies at the MAC layer in order to maximize the overall network throughput. The problem is formulated as a POMDP, and the throughput performance of the scheme is evaluated using computer simulations under a greedy sensing algorithm. Simulation results demonstrate an improved overall throughput performance. Furthermore, multiple channels with multiple secondary users having random message arrivals are considered during simulation, and the throughput performance is evaluated under the greedy sensing scheme, which forms a benchmark for the cross-layer MAC scheme in the presence of spectrum sensing errors. Recognizing that speech communication is still the most dominant and common service in wireless applications, we develop a cross-layer MAC scheme for speech transmission in CR networks. The design is aimed at maximizing the throughput of secondary users by integrating the design of spectrum sensing at the PHY, the quantization parameter of speech traffic at the application layer (APP), and the strategy for spectrum access at the MAC layer, with the main goal of improving the QoS perceived by secondary users in CR networks. Simulation results demonstrate improved throughput performance and hence improved QoS.
One of the main features of modern communication systems is parameterized operation at different layers of the protocol stack, which aims to provide them with the capability of adapting to rapidly changing traffic, channel and system conditions. Another interesting research problem in this thesis is the combination of individual adaptation mechanisms into a cross-layer design that maximizes their effectiveness. We propose a joint cross-layer MAC scheme that integrates the design of spectrum sensing at the PHY layer, access at the MAC layer and APP information in order to improve the QoS for video transmission in CR networks. The end-to-end video distortion, which is considered the APP parameter, resides in the video encoder. This is integrated into the state space and the problem is formulated as a constrained POMDP. The H.264 coding algorithm, one of the high-efficiency video coding standards, is considered. The objective is to minimize the end-to-end video distortion while maximizing the overall network throughput for video transmission in CR networks. The end-to-end video distortion has significant effects on the QoS perceived by the user and is viewed as the cost in the overall system design. Given the target system throughput and the packet loss ratio when the system is in state i and a composite action is taken in time slot t, the system's immediate cost is evaluated. The expected total cost for the overall end-to-end video distortion over the total time slots is then computed. A joint optimal policy that minimizes the expected total end-to-end distortion over the total time slots is computed iteratively, as is the minimum expected cost (also known as the value function). The throughput performance of the proposed scheme is evaluated through computer simulation.
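The POMDP machinery underlying such schemes rests on a belief update: a Markov prediction step through the channel's transition probabilities followed by a Bayes correction from an imperfect sensing observation. The single-channel sketch below is illustrative only; the transition matrix and sensor error probabilities are invented.

```python
import numpy as np

# Two-state Markov channel: state 0 = busy (primary active), 1 = free.
P = np.array([[0.8, 0.2],     # transition probabilities (assumed)
              [0.3, 0.7]])
p_fa, p_md = 0.1, 0.1         # false-alarm and missed-detection probabilities

def belief_update(b_free, observation):
    """Bayes update of the belief that the channel is free, after the
    Markov prediction step and a binary sensing observation."""
    # Predict through the channel's Markov transition
    b_pred = (1 - b_free) * P[0, 1] + b_free * P[1, 1]
    # Observation likelihoods under imperfect sensing
    if observation == "free":
        like_free, like_busy = 1 - p_fa, p_md
    else:
        like_free, like_busy = p_fa, 1 - p_md
    num = like_free * b_pred
    return num / (num + like_busy * (1 - b_pred))

b = 0.5
for obs in ["free", "free", "busy", "free"]:
    b = belief_update(b, obs)
print(b)
# A greedy access policy would transmit whenever b exceeds a threshold
# chosen to respect the prescribed collision probability.
```

The belief vector summarises the whole observation history, which is what allows access decisions (and, in the constrained formulation, the distortion cost) to be optimised despite partial observability.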
To study the throughput performance of the proposed scheme, we considered four simulation scenarios, namely scenarios A, B, C and D. In simulation scenario A, the average throughput performance as a function of the time horizon is studied, comparing channel access decisions based on the belief vector with channel access decisions based on the end-to-end distortion. Simulation results show that channel access decisions based on end-to-end distortion outperform those based on the belief vector. Simulation scenario B studies the spectral efficiency as a function of the prescribed collision probability. The simulation results show that at large values of collision probability the overall spectral efficiency performs poorly; however, there is an optimal value of collision probability at which the spectral efficiency approaches that of the perfect channel access decision. In simulation scenario C, the average throughput performance and the spectral efficiency are both studied as functions of the prescribed collision probability. The simulation results show that both the average throughput and the spectral efficiency are strongly affected by an increase in collision probability; however, there is an optimal prescribed collision probability which achieves the maximum average throughput and maximum spectral efficiency.