
Performance analysis of IEEE 802.16d system using different modulation schemes under SUI channel with FEC

1.1 Introduction

In past years, communication systems were purely analog: both the sources and the transmission systems operated on analog signals, but advances in technology made it possible to transmit data digitally. Broadband Wireless Access (BWA) has emerged as a promising last-mile access technology for providing high-speed Internet access to residential users as well as small and medium-sized enterprises. Applications such as voice, Internet access, instant messaging, SMS, paging, file transfer, video conferencing, gaming and entertainment have become part of everyday life. Cellular phone systems, WLANs, wide-area wireless data systems, ad-hoc wireless networks and satellite systems can all be considered forms of wireless communication. Wireless technologies provide higher throughput, greater mobility, longer range and a robust backbone. Engineers are trying to provide smooth transmission of multimedia anywhere on the globe through a variety of applications and devices, leading to a concept of wireless communication that is inexpensive and flexible to deploy even in difficult environments.

Wireless Broadband Access (WBA) via DSL, T1-line or cable infrastructure is often unavailable, especially in rural or suburban areas. DSL covers only up to about 18,000 feet (3 miles) from the central office, which means that many urban, suburban and rural areas are not served. A partial solution is a Wi-Fi standard broadband connection, but its limited coverage makes it impractical everywhere. The metropolitan-area wireless standard known as WiMAX can overcome these shortcomings: a wireless broadband connection is much easier to deploy, has a long coverage range, is easier to access and is more flexible.

This connectivity is especially important for developing countries, and the IEEE 802.16 family helps solve the last-mile connectivity problem with BWA. IEEE 802.16e can operate in both Line-Of-Sight (LOS) and Non-Line-Of-Sight (NLOS) environments. For NLOS, the PHY specification is extended to the 2-11 GHz frequency band, which aims to combat fading and multipath propagation. The OFDM-based IEEE 802.16 physical layer is almost identical to the European Telecommunications Standards Institute's (ETSI) High performance Metropolitan Area Network (HiperMAN), as the two bodies cooperate with each other [1].

This thesis analyzes the performance of the WiMAX OFDM PHY layer, using a MATLAB simulator with different modulation techniques.

1.2 Why WiMAX

WiMAX is a next-generation broadband wireless technology. It offers high-speed, secure, sophisticated, last-mile broadband services along with cellular backhaul and Wi-Fi hotspots. The evolution of WiMAX began when scientists and engineers recognized the need for wireless Internet access and other broadband services that work well in rural as well as urban areas, including areas where a wired infrastructure is not feasible. IEEE 802.16, also known as IEEE WirelessMAN, explored both licensed and unlicensed bands from 2 to 66 GHz; it standardizes fixed wireless broadband and includes mobile broadband applications. The WiMAX Forum, a private organization, was formed in June 2001 to coordinate components and develop equipment that would be compatible and interoperable. Several years later, in 2007, Mobile WiMAX equipment based on the IEEE 802.16e standard was certified, and products providing mobility and nomadic access were announced for release in 2008. The IEEE 802.16e air interface is based on Orthogonal Frequency Division Multiple Access (OFDMA), whose main aim is better performance in non-line-of-sight environments. IEEE 802.16e introduced scalable channel bandwidths up to 20 MHz; Multiple Input Multiple Output (MIMO) antennas and Adaptive Modulation and Coding (AMC) enable 802.16e to support peak Downlink (DL) data rates of up to 63 Mbps in a 20 MHz channel through the Scalable OFDMA (S-OFDMA) system. IEEE 802.16e also has a strong security architecture: it uses the Extensible Authentication Protocol (EAP) for mutual authentication, a series of strong encryption algorithms, CMAC- or HMAC-based message protection and reduced key lifetimes.

1.3 Fixed vs Mobile WiMAX

There are certain differences between Fixed WiMAX and Mobile WiMAX. The 802.16d standard is known as Fixed WiMAX, while the 802.16e standard is commonly referred to as Mobile WiMAX. The 802.16d standard supports fixed and nomadic applications, whereas 802.16e supports fixed, nomadic, mobile and portable applications. 802.16e carries all the features of 802.16d along with new specifications that enable full mobility at vehicular speeds, better QoS performance and power control; however, 802.16e devices are not compatible with 802.16d base stations. Due to compatibility issues with existing networks, 802.16e adopted S-OFDMA with FFT sizes scaling up to 2048. The main aim of Mobile WiMAX is to support roaming and handover between the Mobile Station (MS) and Base Station (BS) [2]. Several countries have already planned Mobile WiMAX for commercial services. The development also included new link-layer features such as different handover techniques, a robust power-saving system and multiple-broadcast support.

1.4 WiMAX’s Path to Overcome

There are several challenges for WiMAX. These important issues must be solved if it is to fulfill its promise as a last-mile solution. Some of them are discussed below.

1.4.1 Power Amplifier and PAPR

OFDM has a high Peak-to-Average Power Ratio (PAPR). Analysis of its waveform shows large fluctuations in amplitude, which makes it challenging to design a power amplifier with adequate power back-off. The amplifier has to handle several situations: good sensitivity when the power is low, tolerance of high power levels, and the ability to track changes. Clipping and coding have been used to combat these effects, but further research is still needed on this issue.
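As a rough illustration of the scale of the problem, the PAPR of one randomly modulated OFDM symbol can be estimated numerically. The sketch below uses Python with hypothetical QPSK data (the thesis itself uses MATLAB) and the 256-point IFFT of the 802.16 OFDM PHY:

```python
import numpy as np

def papr_db(time_signal):
    """Peak-to-Average Power Ratio of a complex baseband signal, in dB."""
    power = np.abs(time_signal) ** 2
    return 10 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(0)
n_subcarriers = 256  # FFT size of the 802.16d OFDM PHY

# Random QPSK symbols on each subcarrier, then IFFT to the time domain.
qpsk = (rng.choice([-1, 1], n_subcarriers)
        + 1j * rng.choice([-1, 1], n_subcarriers)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(qpsk)

print(f"PAPR of one OFDM symbol: {papr_db(ofdm_symbol):.1f} dB")
```

Typical values fall around 8-12 dB, which is exactly the back-off budget the amplifier designer must absorb.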

1.4.2 Attenuation

Each signal has a specific strength. To reach a distant receiver, a signal must remain strong enough to be detected. As a signal travels through the air, it gradually becomes weaker with distance; this phenomenon is called attenuation. WiMAX considers this issue carefully as it works in both LOS and NLOS environments.
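The simplest attenuation model is free-space path loss; a minimal sketch (with hypothetical link parameters, ignoring the extra losses an NLOS path would add) is:

```python
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss: FSPL = 20*log10(4*pi*d*f/c).
    A baseline model only; real NLOS WiMAX links add further losses."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Hypothetical 5 km link at 3.5 GHz:
print(f"{free_space_path_loss_db(5_000, 3.5e9):.1f} dB")  # ≈ 117.3 dB
```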

1.4.3 Multi Path Fading

When an object lies between a wireless transmitter and a receiver, it blocks the direct signal and creates several signal paths, a phenomenon known as multipath. The signal still reaches the receiver, but via paths with different delays, making it hard to detect the actual signal. Multipath degrades the quality of the signal. The main multipath impairments are as follows:

  •  Fast Fading

Rapid changes in signal power occur as the receiver moves over distances of about half a wavelength, caused by constructive and destructive interference. This fading occurs when the coherence time is less than the symbol period and the Doppler spread of the channel is high.

  • Slow Fading

Slow fading is a change in the average received signal power due to the changing distance between transmitter and receiver, or changes in the surroundings while moving. This fading occurs when the coherence time is greater than the symbol period and the Doppler spread of the channel is low.

  • Flat Fading (Non-Selective Fading)

Flat fading is fading in which all frequency components of the received signal fluctuate simultaneously in the same proportion [3]. It occurs when the signal bandwidth is less than the coherence bandwidth of the channel and the delay spread is less than the symbol period.

  • Frequency Selective Fading

Frequency-selective fading affects the different spectral components of a radio signal unequally [3]. It occurs when the signal bandwidth is greater than the coherence bandwidth of the channel and the delay spread is greater than the symbol period.

  • Rayleigh Fading

Rayleigh fading occurs in NLOS environments (indoors, in cities) when there is no direct LOS path between transmitter and receiver and the resultant wave at the receiver is composed only of indirect (NLOS) paths [3].

 

  • Rician Fading

Rician fading best characterizes a situation where there is a direct LOS path in addition to a number of indirect multipath signals between the transmitter and receiver.
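The two cases differ only in whether a fixed LOS component is present, which the Rician K-factor captures (K = 0 recovers Rayleigh). A minimal sketch for generating such channel gains, with hypothetical sample counts:

```python
import numpy as np

def fading_samples(n, k_factor=0.0, rng=None):
    """Complex fading gains with unit average power.
    k_factor = 0 gives Rayleigh (pure NLOS scatter);
    k_factor > 0 adds a fixed LOS component (Rician)."""
    rng = rng or np.random.default_rng()
    scatter = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    los = np.sqrt(k_factor / (k_factor + 1))    # deterministic LOS part
    nlos = np.sqrt(1 / (k_factor + 1))          # scattered NLOS part
    return los + nlos * scatter

rayleigh = fading_samples(100_000)               # K = 0 -> Rayleigh
rician = fading_samples(100_000, k_factor=5.0)   # K = 5 -> strong LOS path
print(np.mean(np.abs(rayleigh) ** 2))  # ≈ 1 (unit average power)
```

The scaling keeps average channel power at one in both cases, so comparisons between LOS and NLOS simulations stay fair.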

1.4.4 Noise

Different types of noise create problems in wireless communication and hamper transmission quality. The best-known types of noise in wireless media are:

  • Thermal Noise

Thermal noise occurs due to the agitation of electrons and is present in all electronic devices and transmission media, such as transmitters, channels, repeaters and receivers. It is especially significant in satellite communication [4].

Principle equation:

N0 = kT (W/Hz)   (1.1)

where N0 = noise power density in watts per 1 Hz of bandwidth, k = Boltzmann's constant = 1.3803 × 10^-23 J/K, and T = absolute temperature, in kelvin.

If the noise is assumed independent of frequency, the thermal noise present in a bandwidth of B hertz is (in watts):

N = kTB   (1.2)

or, in decibel-watts,

N = 10 log k + 10 log T + 10 log B = -228.6 dBW + 10 log T + 10 log B   (1.3)
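Equation (1.3) is easy to evaluate numerically; a short sketch with hypothetical values (room temperature and a 10 MHz channel):

```python
import math

K_BOLTZMANN = 1.3803e-23  # Boltzmann's constant, J/K

def thermal_noise_dbw(temp_kelvin, bandwidth_hz):
    """Thermal noise power N = 10 log k + 10 log T + 10 log B, per Eq. (1.3)."""
    return (10 * math.log10(K_BOLTZMANN)
            + 10 * math.log10(temp_kelvin)
            + 10 * math.log10(bandwidth_hz))

# Room temperature (290 K) and a 10 MHz channel:
print(f"{thermal_noise_dbw(290, 10e6):.1f} dBW")  # ≈ -134.0 dBW
```

The 10 log k term alone gives the familiar -228.6 dBW constant in Eq. (1.3).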

  •  Inter-modulation noise

Intermodulation noise occurs when the medium is non-linear: interference is caused by signals produced at frequencies that are the sum or difference of the original frequencies.

  • Inter Symbol Interference (ISI)

Delayed copies of a pulse may arrive at the receiver at the same time as the primary pulse of a subsequent bit, so that neighbouring symbols interfere with one another.

  • Cross Talk

If unwanted coupling occurs between signal paths, it is called crosstalk. It creates many problems in communication media.

  • Impulse Noise

Impulse noise consists of irregular pulses or noise spikes that occur due to external electromagnetic disturbances, or faults and flaws in the communication system. This type of noise has short duration and relatively high amplitude.

  • Doppler Shift Effect

Doppler shift occurs when a mobile user moves towards or away from the transmitter. It shifts the carrier frequency, degrading communication performance and increasing the error probability.
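The shift follows f_d = (v/c) · f_c · cos(θ); a minimal sketch with hypothetical speed and carrier values:

```python
import math

def doppler_shift_hz(speed_mps, carrier_hz, angle_deg=0.0):
    """Doppler shift f_d = (v/c) * f_c * cos(theta), where theta is the
    angle between the direction of motion and the arriving wave."""
    c = 299_792_458.0  # speed of light, m/s
    return (speed_mps / c) * carrier_hz * math.cos(math.radians(angle_deg))

# Vehicle at 120 km/h moving straight toward a 3.5 GHz base station:
print(f"{doppler_shift_hz(120 / 3.6, 3.5e9):.0f} Hz")  # ≈ 389 Hz
```

Shifts of a few hundred hertz are enough to erode subcarrier orthogonality in OFDM, which is why vehicular mobility stresses the PHY.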

Chapter 2

WiMAX Architecture

2.1 Evolution of the IEEE family of standards for BWA

The IEEE standards committee introduces standards for networking elements; work on IEEE 802.16, for instance, began in 1999. The 802.16 family of standards defines the Wireless Metropolitan Area Network (WirelessMAN), commercially known as WiMAX (Worldwide Interoperability for Microwave Access). The WiMAX Forum is a non-profit, industry-led organization responsible for certifying, testing and promoting compatible, interoperable wireless products based on the IEEE 802.16 working group's standards and ETSI's HiperMAN standard. The original IEEE standard addressed the 10 to 66 GHz licensed bands; later amendments added the 2 to 11 GHz range, including unlicensed frequencies. Different versions of WiMAX are certified based on different criteria, such as carrier type (single- and multi-carrier) and fixed versus portable wireless devices.

2.2 IEEE 802.16 versions

 2.2.1 802.16

The first 802.16 standard was released in December 2001. It provides a standard for point-to-multipoint broadcast in the 10 to 66 GHz frequency range for Line-of-Sight (LOS) environments.

2.2.2 802.16a

The second version of the WiMAX standard, 802.16a, was an amendment of 802.16 with the capability for point-to-multipoint broadcast in the 2 to 11 GHz frequency range. It was approved in January 2003 and assigned both licensed and unlicensed frequency bands. Unlicensed bands cover distances of up to 31 to 50 miles. It improves Quality of Service (QoS) features and supports protocols such as Ethernet, ATM and IP.

2.2.3 802.16c

The third version of the WiMAX standard, 802.16c, was also an amendment of 802.16 and mostly dealt with frequencies ranging from 10 to 66 GHz. This standard addressed various issues, for instance performance evaluation, testing and detailed system profiling. A system profile specifies the mandatory features that ensure interoperability and the optional features that differentiate products by pricing and functionality.

 2.2.4 802.16d

In September 2003, a revision project known as 802.16d began, which aimed to align the standard with the European Telecommunications Standards Institute (ETSI) HiperMAN. The project concluded in 2004 with the release of 802.16-2004, incorporating all previous versions' amendments. This standard supports mandatory and optional elements along with both TDD and FDD. Theoretically its effective data rate is 70 Mbps, but in practice the performance is near 40 Mbps. The standard improves Quality of Service (QoS) by supporting very large Service Data Units (SDUs) and multiple polling schemes.

2.2.5  802.16e

802.16e was an amendment of the 802.16d standard, completed in 2005 and known as 802.16e-2005. Its main aim is mobility, including a large coverage range, which is why it is often called Mobile WiMAX. The standard is a technical update of fixed WiMAX with robust support for mobile broadband. Mobile WiMAX is built on Orthogonal Frequency Division Multiple Access (OFDMA). Note that both standards (802.16d-2004 and 802.16e-2005) support the 256-point FFT in the OFDM PHY. The OFDMA system divides the signal into sub-channels to increase resistance to multipath interference. For instance, if a 30 MHz channel is divided into 1000 sub-channels, each user would be allocated some sub-channels based on distance.

Table 2.1: Comparison of IEEE standard for BWA

Parameter | IEEE 802.16 | IEEE 802.16a | IEEE 802.16d | IEEE 802.16e
Completed | December 2001 | January 2003 | June 2004 | December 2005
Spectrum | 10-66 GHz | 2-11 GHz | 2-11 GHz | 2-6 GHz
Propagation/channel conditions | LOS | NLOS | NLOS | NLOS
Bit rate | Up to 134 Mbps (28 MHz channelization) | Up to 75 Mbps (20 MHz channelization) | Up to 75 Mbps (20 MHz channelization) | Up to 15 Mbps (5 MHz channelization)
Modulation | QPSK, 16-QAM (optional in UL), 64-QAM (optional) | BPSK, QPSK, 16-QAM, 64-QAM, 256-QAM (optional) | 256-subcarrier OFDM; BPSK, QPSK, 16-QAM, 64-QAM, 256-QAM | Scalable OFDMA; QPSK, 16-QAM, 64-QAM, 256-QAM (optional)
Mobility | Fixed | Fixed | Fixed | Mobile

 2.3 Features of WiMAX

There are certain features of WiMAX that are making it more popular day by day. Some important features are described below:

 2.3.1 Interoperability

Interoperability is a main concern of WiMAX. The IEEE 802.16 standard is internationally accepted, and it is maintained and certified by the WiMAX Forum, covering fixed, portable and mobile deployments. This gives users the freedom to choose products from different certified vendors and use them in different fixed, portable or mobile networks.

2.3.2 Long Range

Another main feature of WiMAX is its long coverage range. Theoretically it covers up to 30 miles, but in practice it covers about 6 miles. The earlier versions of WiMAX provide only LOS coverage, but as technology advanced, the later versions, e.g. Mobile WiMAX, support both LOS and NLOS connections, with ranges of up to 50 kilometers for LOS and 10 kilometers for NLOS. WiMAX subscribers may connect to a WiMAX base station from their offices, homes, hotels and so on; such links are commonly modeled with the Stanford University Interim (SUI) channel models.

2.3.3 Mobility

WiMAX offers substantial mobility, especially IEEE 802.16e-2005, as it adopted SOFDMA (Scalable Orthogonal Frequency Division Multiple Access) as its modulation technique and MIMO (Multiple Input Multiple Output) in its physical layer. There are two challenges in wireless connectivity. The first is session initiation, which provides a means to reach inactive users and continue the connection service even when the user's home location has changed; the second is maintaining an ongoing session without interruption while moving (especially at vehicular speed). The first is known as roaming and the second as handoff. These are described below.

  • Roaming

A centralized database keeps current location information, which the user's station sends to the network as it moves from one location to another. To reach a subscriber station, the network pages for it using the appropriate base station; which base station is used for paging depends on the update rate and the movement of the subscriber station from one cell to another. Several network entities are involved in this operation, such as the NSS (Network Switching Subsystem), HLR (Home Location Register) and VLR (Visitor Location Register):

  • NSS: localization and updating of location
  • HLR: contains the subscriber's current location information
  • VLR: informs the HLR about changes of location
  • Handoff

Without a handoff mechanism, Wi-Fi users can move around a building or hotspot and stay connected, but they lose connectivity once they leave that location. With 802.16e-2005, mobile users are connected through Wi-Fi while within a hotspot and then hand over to 802.16 when they leave the hotspot but remain in the WiMAX coverage area.

2.3.4 Quality of Service

Quality of Service (QoS) refers to the collective quality of service as perceived by users. In practice it refers to particular requirements such as throughput, packet error rate, delay and jitter. A wireless network must support a variety of applications, for instance voice, data, video and multimedia. Each of these has a different traffic pattern and requirements, as shown in Table 2.2 [3].

Table 2.2: Sample Traffic Parameters for Broadband Wireless Application [3]

Parameter | Interactive Gaming | Voice | Streaming Media | Data | Video
Data rate | 50-85 Kbps | 4-64 Kbps | 5-384 Kbps | 0.01-100 Mbps | >1 Mbps
Applications | Interactive gaming | VoIP | Music, speech, video clips | Web browsing, e-mail, instant messaging, telnet, file download | IPTV, movie download, P2P video sharing
Packet loss | Zero | <1% | <1% audio, <2% video | Zero | <10^-8
Delay variation | Not applicable | <20 ms | <2 s | Not applicable | <2 s
Delay | 50-150 ms | <100 ms | <250 ms | Flexible | <100 ms

   2.3.5 Interfacing

Easy interface installation is another feature of WiMAX. Each base station broadcasts radio signals to keep its subscribers connected. Since each base station covers a limited range, multiple base stations must be installed at certain intervals to extend network coverage. Connecting multiple base stations is not difficult and takes only a few hours.

2.3.6 Accessibility

To get high-speed network connectivity, all that is necessary is to subscribe to a WiMAX service provider. The provider supplies hardware that is very easy to install; most of the time the hardware connects through a USB port or Ethernet, and the connection may be made at the click of a button.

2.3.7 Scalability

The 802.16 standard supports flexible channel bandwidths to simplify cell planning in both licensed and unlicensed spectrum. If an operator is assigned 15 MHz of spectrum, it can be divided into three sectors of 5 MHz each. By adding sectors, the operator can increase the number of subscribers while providing better coverage and throughput. For instance, suppose 50 hotspot subscribers need network connectivity at a three-day conference, including access to their corporate networks via Virtual Private Network (VPN) over a T1 connection. Providing that much bandwidth at one location for a short period would be very hard over a wired connection, but with wireless broadband access it is feasible. The operator may even re-use the spectrum in three or more sectors by creating appropriate isolation between them.

2.3.8 Portability

Portability is another feature, like mobility, offered by WiMAX: it offers not only mobile applications but also nomadic-access applications.

2.3.9 Last Mile Connectivity

Wired network access via DSL, T1-line or cable infrastructure is often unavailable, especially in rural areas. These connections have limitations that can be overcome by the WiMAX standards.

2.3.10 Robust Security

WiMAX has a robust privacy and key-management protocol. It uses the Advanced Encryption Standard (AES), which provides a robust encryption policy, and supports a flexible authentication architecture based on the Extensible Authentication Protocol (EAP), which allows a variety of subscriber credentials including username and password, digital certificates and smart cards.

2.4 WiMAX Architecture

The WiMAX architecture comprises several components, of which the two basic ones are the BS and SS; other components include the MS, ASN, CSN and ASN-GW. The WiMAX Forum's Network Working Group (NWG) has developed a network reference model for the IEEE 802.16e-2005 air interface to ensure that the objectives of WiMAX are achieved. To support fixed, nomadic and mobile WiMAX networks, the reference model can be logically divided into three parts [5].

  •  Mobile Station (MS)

The Mobile Station is the end user's means of access to the mobile network. It is a portable station able to move over wide areas and perform data and voice communication. It contains all the necessary user equipment, such as an antenna, amplifier, transmitter, receiver and the software needed to perform wireless communication. GSM, FDMA, TDMA, CDMA and W-CDMA devices are examples of mobile stations.

  •  Access Service Network (ASN)

The ASN is owned by a Network Access Provider (NAP) and is formed from one or more base stations and ASN gateways (ASN-GW), which together create the radio access network. It provides all access services with full mobility and efficient scalability. Its ASN-GW controls access to the network and coordinates between data and networking elements.

  •  Connectivity Service Network (CSN):

The CSN provides IP connectivity to the Internet or other public or corporate networks. It also applies per-user policy management, address management and location management between ASNs, and ensures QoS, roaming and security.


Fig 2.1: WiMAX Network Architecture based on IP

2.5 Mechanism

WiMAX is capable of working in different frequency ranges, but according to the original IEEE 802.16 standard the frequency band is 10-66 GHz. A typical WiMAX architecture includes a base station mounted on top of a high-rise building, communicating on a point-to-multipoint basis with subscriber stations, which can be business organizations or homes. The base station is connected to the customer through Customer Premises Equipment (CPE). This connection can be Line-of-Sight (LOS) or Non-Line-of-Sight (NLOS).

2.5.1 Line of Sight (LOS)

In an LOS connection, the signal travels in a straight line free of obstacles, i.e. there is a direct path between transmitter and receiver. The features of LOS connections are:

  • Uses higher frequencies, between 10 GHz and 66 GHz
  • Large coverage areas
  • Higher throughput
  • Less interference
  • Impairments come only from the atmosphere and the characteristics of the frequency
  • LOS requires that most of the first Fresnel zone be free of obstacles
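How much clearance "most of the first Fresnel zone" means can be computed from the standard Fresnel-zone formula; a minimal sketch with hypothetical link parameters:

```python
import math

def first_fresnel_radius_m(d1_m, d2_m, freq_hz):
    """Radius of the first Fresnel zone at a point d1 from the transmitter
    and d2 from the receiver: r = sqrt(lambda * d1 * d2 / (d1 + d2))."""
    wavelength = 299_792_458.0 / freq_hz
    return math.sqrt(wavelength * d1_m * d2_m / (d1_m + d2_m))

# Midpoint of a hypothetical 10 km link at 3.5 GHz:
print(f"{first_fresnel_radius_m(5_000, 5_000, 3.5e9):.1f} m")  # ≈ 14.6 m
```

So even a "clear" 10 km link needs roughly 15 m of obstacle-free clearance around the direct ray at mid-path.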


Fig 2.2: WiMAX in LOS Condition

 2.5.2 Non-Line of Sight (NLOS)

In an NLOS connection, the signal encounters obstacles in its path and reaches the receiver through several reflections, refractions, diffractions, absorptions and scattering. These signal components arrive at the receiver at different times and with different attenuations and strengths, which makes it hard to detect the actual signal [6]. WiMAX offers several mechanisms that work well in NLOS conditions:

  • Frequency-selective fading can be overcome by applying adaptive equalization
  • Adaptive Modulation and Coding (AMC), AAS and MIMO techniques help WiMAX work efficiently in NLOS conditions
  • Sub-channelization permits transmitting appropriate power on each sub-channel
  • Based on the required data rate and channel conditions, AMC selects the appropriate modulation and code dynamically

Fig 2.3: WiMAX in NLOS Condition

2.6 Major shortcomings of WiMAX

There are several major shortcomings of WiMAX that are still a headache for engineers. They are as follows:

  • Bit Error Rate

The general expectation of WiMAX is that it provides high-speed data rates within its maximum range (30 miles). However, if WiMAX operates at its maximum range, the Bit Error Rate (BER) increases. So, higher data rates are achievable only at shorter ranges; at longer ranges, lower bit rates must be used.

  • Data Rates

Mobile WiMAX uses Customer Premises Equipment (CPE) attached to computers (desktop, laptop or PDA) with a lower-gain omni-directional antenna, which limits the achievable data rates compared to fixed WiMAX.

 

  • LOS and NLOS coverage

Mobile WiMAX covers 10 kilometers at 10 Mbps in a line-of-sight (LOS) environment, but in urban areas coverage drops to only 2 kilometers due to non-line-of-sight conditions. In this situation, Mobile WiMAX may use a higher-gain directional antenna for better coverage and throughput, but the problem is that it then loses its mobility.

Besides the above shortcomings, weather conditions such as rain, fog and drought have a major impact on WiMAX networks.

2.7  IEEE 802.16 Protocol Layers

The IEEE 802.16 WiMAX standard provides freedom in several respects compared to other technologies. The focus is not only on transmitting tens of megabits of data over distances of many miles, but also on maintaining effective QoS (Quality of Service) and security. This chapter gives an overview of the IEEE 802.16 protocol layers and OFDM features. WiMAX 802.16 mainly covers the physical and data link layers of the OSI reference model. The physical layer (PHY) can be single-carrier or multi-carrier based, and the data link layer is subdivided into two sub-layers:

  • Logical Link Control (LLC) and
  • Medium Access Control (MAC)

MAC is further divided into three sub-layers:

  • Convergence Sub-layer (CS)
  • Common Part Sub-layer (CPS) and
  • Security Sub-layer (SS).

 2.7.1 Physical Layer (PHY)

The physical layer sets up the connection between communicating devices and is responsible for transmitting the bit sequence. It also defines the type of modulation and demodulation as well as the transmission power. The WiMAX 802.16 physical layer considers two transmission techniques, OFDM and OFDMA. Both techniques operate in frequency bands below 11 GHz and use TDD and FDD as duplexing technologies. After OFDM was implemented in IEEE 802.16d, OFDMA was included in IEEE 802.16e to provide support for NLOS conditions and mobility. The earlier version uses 10 to 66 GHz, while the later version is expanded to use the lower 2 to 11 GHz bands in addition to supporting 10 to 66 GHz. Some mandatory and some optional features are included in the physical layer specification.
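The core OFDM transmit step, mapping frequency-domain symbols through an IFFT and prepending a cyclic prefix, can be sketched as follows. This is an illustrative Python sketch, not the thesis's MATLAB simulator; the 1/8 guard ratio is one of the options 802.16d allows (G = 1/4, 1/8, 1/16 or 1/32):

```python
import numpy as np

def ofdm_modulate(symbols, n_fft=256, cp_ratio=1 / 8):
    """Map one block of frequency-domain symbols to a time-domain OFDM
    symbol and prepend a cyclic prefix (copy of the symbol's tail)."""
    assert len(symbols) == n_fft
    time_domain = np.fft.ifft(symbols)
    cp_len = int(n_fft * cp_ratio)
    return np.concatenate([time_domain[-cp_len:], time_domain])

rng = np.random.default_rng(1)
# Hypothetical QPSK data on all 256 subcarriers (pilots/guards omitted):
qpsk = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)
tx = ofdm_modulate(qpsk)
print(len(tx))  # 288 samples = 256 IFFT outputs + 32 cyclic-prefix samples
```

The cyclic prefix absorbs multipath delay spread up to its own duration, which is what gives OFDM its NLOS robustness.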


Fig 2.4: WiMAX Physical and MAC layer architecture

From the OSI seven-layer reference model, WiMAX uses only the physical layer and the MAC sub-layer of the data link layer.

There are specific names for each physical layer air interface. Table 2.3 summarizes the IEEE 802.16 physical layer's features.

Table 2.3: IEEE 802.16 standard air interface’s description

Specific Name | Operating Band | Duplexing | Noticeable Feature
WirelessMAN-SC™ | 10 to 66 GHz | FDD and TDD | Single-carrier
WirelessMAN-SCa™ | 2 to 11 GHz, licensed | FDD and TDD | Single-carrier, NLOS
WirelessMAN-OFDM™ | 2 to 11 GHz, licensed | FDD and TDD | OFDM technique, NLOS
WirelessHUMAN™ | 2 to 11 GHz, license-exempt | TDD | Single-carrier, LOS, NLOS, OFDM, OFDMA, frequency-selective channel
WirelessMAN-OFDMA™ | 2 to 11 GHz, licensed | FDD and TDD | OFDMA; signal divided into sub-channels

 2.7.2 MAC layer

The basic task of the WiMAX MAC is to provide an interface between the physical layer and the upper transport layers. It takes packets called MAC Service Data Units (MSDUs) from the upper layer and makes them suitable for transmission over the air; for reception, the mechanism is the reverse. In both fixed and mobile WiMAX it includes a convergence sub-layer able to interface with upper-layer protocols such as ATM, TDM, voice and other advanced protocols. The WiMAX MAC has unique features to identify and address the SS and BS: each SS carries a 48-bit IEEE MAC address, whereas each BS carries a 48-bit Base Station ID, of which 24 bits are used as an operator indicator. Other identifiers are the 16-bit CID, 16-bit SAID and 32-bit SFID. The MAC supports a variety of applications and mobility features, such as:

  • PKM for MAC security and PKMv2 for Extensible Authentication Protocol (EAP)
  • Fast handover and strong mobility management
  • Normal, sleep and idle power modes

 2.7.3 Sub-layers

The WiMAX MAC layer is divided into three sub-layers: the Service Specific Convergence Sub-layer (SSCS), the Common Part Sub-layer (CPS) and the Security Sub-layer (SS).


Fig 2.5: Purposes of MAC Layer in WiMAX

2.7.4 Service Specific Convergence Sub-layer (SSCS)

The SSCS sits at the top of the MAC layer architecture and takes data from upper-layer entities such as routers and bridges. It is a service-dependent sub-layer that ensures data transmission and enables QoS and bandwidth allocation. Payload header suppression and increasing link efficiency are other important tasks of this layer. IEEE 802.16 specifies two types of SSCS for the mapping function.

  • ATM Convergence Sub-layer: a logical interface responsible for Asynchronous Transfer Mode (ATM) services. In operation, it accepts ATM cells from the ATM layer, classifies them, and then sends CS PDUs to the MAC SAP. It differentiates virtual-path-switched ATM connections and assigns the Channel ID (CID).
  • Packet Convergence Sub-layer: a packet-based interface that performs packet mapping for protocols such as IPv4, IPv6, IEEE 802.3 Ethernet, VLAN and PPP.

2.7.5 Common Part Sub-layer (CPS)

The CPS sits underneath the SSCS and above the Security Sub-layer and defines the multiple-access mechanism. The CPS is responsible for the major MAC functionalities, such as system access, connection establishment and maintenance, and bandwidth management. As the WiMAX MAC is connection-oriented, it provides service flows after each Subscriber Station's registration. Other responsibilities are providing QoS for service flows and managing connections by adding, deleting or modifying them statically or dynamically. On the downlink channel, only the BS transmits, so no coordination function is needed; SSs receive only the messages addressed to them. On the uplink channel, three major principles define the transmission right [7]:

  • Unsolicited bandwidth permission
  • Polling and
  • Contention procedures

2.7.6 Security Sub-layer (SS)

This part sits at the bottom of the MAC layer and is one of its most important parts, as it provides authentication, secure key exchange, encryption and system integrity. The IEEE 802.16 standard defines encryption of the data connection between subscriber and base station in both directions. A set of cryptographic suites, including data encryption and authentication algorithms, has been defined, making the security sub-layer of the WiMAX MAC very robust. Secure distribution of keying data from base station to subscriber station is assured by an authentication and Privacy Key Management (PKM) protocol. In addition, at the SS the use of a digital certificate strengthens data privacy, and at the BS the PKM protocol ensures conditional access to network services and applications. A further improvement of the PKM protocol, named PKMv2, adds features that strongly control integrity, mutual authentication and handover mechanisms [8].

2.8 WiMAX forum and adaptation of IEEE 802.16

The Worldwide Interoperability for Microwave Access (WiMAX) forum is an alliance of telecommunication equipment and component manufacturers and service providers, formed to promote and certify the compatibility and interoperability of BWA products employing the IEEE 802.16 and ETSI HiperMAN[9] wireless specifications. WiMAX Forum Certified™[10] equipment is proven interoperable with other vendors’ equipment that is also WiMAX Forum Certified™. So far the WiMAX forum has set up certification laboratories in Spain, Korea and China. Additionally, the WiMAX forum creates what it calls system profiles, which are specific implementations, selections of options within the standard, to suit particular ensembles of service offerings and subscriber populations[11]. The WiMAX forum has adopted two versions of the IEEE 802.16 standard to provide different types of access:

  • Fixed/Nomadic access: The WiMAX forum has adopted the IEEE 802.16-2004 and ETSI HiperMAN standards for fixed and nomadic access[9]. This uses Orthogonal Frequency Division Multiplexing and can operate in both Line of Sight (LOS) and Non Line of Sight (NLOS) propagation environments. Both outdoor and indoor CPEs are available for fixed access. The main focus of the WiMAX forum profiles is on the 3.5 GHz and 5.8 GHz frequency bands.
  • Portable/Mobile Access: The forum has adopted the IEEE 802.16e version of the standard, which has been optimized for mobile radio channels. This uses Scalable OFDM Access and provides support for handoffs and roaming[9]. An IEEE 802.16e based network is also capable of providing fixed access. The Mobile WiMAX profiles will cover 5, 7, 8.75 and 10 MHz channel bandwidths for licensed worldwide spectrum allocations in the 2.3 GHz, 2.5 GHz, 3.3 GHz and 3.5 GHz frequency bands[12]. The first certified products were expected to be available by the end of 2007.

2.9 Application of IEEE 802.16 based network:

IEEE 802.16 supports ATM, IPv4, IPv6, Ethernet and Virtual Local Area Network (VLAN) services [13]. So, it can provide a rich choice of service possibilities to voice and data network service providers, and it can be used for a wide selection of wireless broadband connections and solutions.

 

  • Cellular Backhaul: IEEE 802.16 wireless technology can be an excellent choice for back haul for commercial enterprises such as hotspots as well as point to point back haul applications due to its robust bandwidth and long range.
  • Residential Broadband: Practical limitations like long distances and the lack of a return channel prevent many potential broadband customers from reaching DSL and cable technologies [14]. IEEE 802.16 can fill the gaps in cable and DSL coverage.
  • Underserved areas: In many rural areas, especially in developing countries, no wired infrastructure exists. IEEE 802.16 can be a good solution to provide communication services to those areas using fixed CPEs and high-gain antennas.
  • Always Best Connected: As IEEE 802.16e supports mobility [15], mobile users in business areas can access high-speed services through their IEEE 802.16/WiMAX-enabled handheld devices such as PDAs, Pocket PCs and smart phones.


Figure 2.6: Application scenarios

 Chapter-3

Modulation

3.1 Modulation Techniques

Modulation is the variation of a property of a carrier signal, such as its amplitude, frequency or phase, in order to carry a digital signal or message. Different modulation techniques are available, such as Amplitude Shift Keying (ASK), Frequency Shift Keying (FSK) and Phase Shift Keying (PSK). This section discusses these modulation techniques along with WiMAX's special modulation technique, called Adaptive Modulation.

3.2 ASK, FSK and PSK

Basic modulation techniques fall into three categories, as follows:

  • Amplitude Shift Keying (ASK)
  • Frequency Shift Keying (FSK)
  • Phase Shift Keying (PSK)

3.2.1 Amplitude-Shift Keying (ASK)

ASK conveys data by varying the amplitude of the carrier; the phase and the frequency remain constant. The principle is based on the equation,

s(t) = A cos(2πf_c t),  for binary 1
s(t) = 0,               for binary 0

Features of ASK

  • Likely to be affected by sudden changes of gain.
  • Inefficient compared to other modulation techniques.
  • Used at up to 1200 bps on voice-grade lines such as telephone lines.
  • Used in optical fibres to transmit digital data.

3.2.2 Frequency Shift Keying (FSK)

FSK conveys data by varying the frequency near the carrier frequency; the phase and the amplitude remain constant. There are several types of FSK; the most common are Binary Frequency Shift Keying (BFSK) and Multiple Frequency Shift Keying (MFSK).

3.2.3 Binary Frequency Shift Keying (BFSK)

Two frequencies represent the two binary values in this technique. The principle lies in the equation,

s(t) = A cos(2πf_1 t),  for binary 1
s(t) = A cos(2πf_2 t),  for binary 0

Features of BFSK

·         Less affected by errors than ASK.

·         On voice transmission lines such as telephone, range till 1200bps.

·         Used for high-frequency (3 to 30 MHz) radio transmission.

·         Suitable for LANs that use coaxial cables.

 3.2.4 Multiple Frequency Shift Keying (MFSK)

More than two frequencies are used to represent the signaling elements. The principle lies in the equation,

s_i(t) = A cos(2πf_i t),  1 ≤ i ≤ M

where f_i = f_c + (2i − 1 − M)f_d, f_c is the carrier frequency, f_d is the difference frequency and M = 2^L is the number of different signal elements.

Features of MFSK

·         Multiple frequencies are used

·         More bandwidth efficient but very much affected by errors

·         The total bandwidth requirement is 2Mf_d.

·         Each signal element encodes L bits (M = 2^L).

3.2.5 Phase-Shift Keying (PSK)

PSK is a digital modulation scheme that conveys data by changing the phase of the carrier wave. The most common and widely used forms are Binary Phase Shift Keying (BPSK) and Quadrature Phase Shift Keying (QPSK); other PSK variants include Differential Phase Shift Keying (DPSK) and Multilevel Phase Shift Keying (MPSK). As WiMAX uses adaptive modulation techniques, here we will broadly discuss only BPSK, QPSK and QAM.

3.2.6 Binary Phase Shift Keying (BPSK)

This is also known as two-level PSK as it uses two phases separated by 180º to represent binary digits. The principle equation is,

s(t) = A cos(2πf_c t),                         for binary 1
s(t) = A cos(2πf_c t + π) = −A cos(2πf_c t),   for binary 0

This kind of phase modulation is very effective and robust against noise, especially in low data rate applications, as it modulates only 1 bit per symbol.


Fig 3.1: BPSK, (a) Block Diagram (b) Constellation

3.2.7 Quadrature Phase Shift Keying (QPSK)

This is also known as four-level PSK, where each signal element represents more than one bit. Each symbol contains two bits, and it uses phase shifts of π/2 (90º) instead of 180º. The principle equation of the technique is:

s(t) = A cos(2πf_c t + (2i − 1)π/4),  i = 1, 2, 3, 4

In this mechanism, the constellation consists of four points and each decision yields two bits. This mechanism ensures efficient use of bandwidth and higher spectral efficiency[16].


Fig 3.2: QPSK, (a) Block Diagram (b) Constellation

3.2.8 Quadrature Amplitude Modulation (QAM)

This is the most popular modulation technique used in various wireless standards. It combines ASK and PSK: two different signals are sent concurrently on the same carrier frequency, with one shifted by 90º with respect to the other. At the receiver end, the signals are demodulated and the results are combined to recover the transmitted binary input [16]. The principle equation is:

s(t) = d_1(t) cos(2πf_c t) + d_2(t) sin(2πf_c t)

where d_1(t) and d_2(t) are the two parallel data streams.


Fig 3.3: QAM Modulator Diagram

3.2.9 16-QAM

This is called 16-state Quadrature Amplitude Modulation: four different amplitude levels are used on each of the two quadrature carriers, so the combined stream takes one of 4 × 4 = 16 states. In this mechanism, each symbol represents 4 bits[16].


Fig 3.4: 16-QAM Constellation

3.2.10 64-QAM

This is the same as 16-QAM except that it has 64 states, where each symbol represents six bits (2^6 = 64). It is a more complex modulation technique but offers greater spectral efficiency [16]; the number of bits carried per symbol increases with the number of states. Mobile WiMAX uses this higher-order modulation when the link quality is good.


Fig 3.5: 64-QAM Constellation
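To make the bit-grouping behind these constellations concrete, here is a minimal sketch (our own illustrative helper, not from any standard) that maps a bit stream onto BPSK, QPSK or 16-QAM points using plain binary-weighted levels and no power normalization:

```python
import numpy as np

def modulate(bits, bits_per_symbol):
    """Group `bits` into symbols of `bits_per_symbol` bits and map each
    group to a complex constellation point (binary-weighted, unnormalized)."""
    bits = np.asarray(bits).reshape(-1, bits_per_symbol)
    if bits_per_symbol == 1:                  # BPSK: 0 -> -1, 1 -> +1
        return 2.0 * bits[:, 0] - 1.0 + 0j
    half = bits_per_symbol // 2               # split the bits between I and Q
    levels = 2 ** half                        # levels per axis (2 for QPSK, 4 for 16-QAM)
    def axis(b):                              # binary-weighted level, centred on zero
        v = b.dot(2 ** np.arange(half - 1, -1, -1))
        return 2 * v - (levels - 1)
    return axis(bits[:, :half]) + 1j * axis(bits[:, half:])
```

For example, `modulate([1, 1], 2)` yields the QPSK point 1 + 1j; real systems additionally Gray-code the levels and normalize the constellation energy.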

3.3 Adaptive Modulation and Coding

The modulation schemes specified for the DL (downlink) and UL (uplink) are BPSK (Binary Phase Shift Keying), QPSK (Quadrature PSK), 16-QAM (16-Quadrature Amplitude Modulation) and 64-QAM, which map bits to complex constellation points. The FEC options are paired with the modulation schemes to form burst profiles. The PHY specifies seven combinations of modulation and coding rate, which can be allocated selectively to each subscriber, in both UL and DL [17]. There are tradeoffs between data rate and robustness, depending on the propagation conditions. Table 3.1 shows the combinations of modulation and coding rate.

Table 3.1: Mandatory channel coding per modulation

Modulation   Uncoded block   Coded block    Overall       RS code       CC code
             size (bytes)    size (bytes)   coding rate                 rate
BPSK         12              24             1/2           (12,12,0)     1/2
QPSK         24              48             1/2           (32,24,4)     2/3
QPSK         36              48             3/4           (40,36,2)     5/6
16-QAM       48              96             1/2           (64,48,8)     2/3
16-QAM       72              96             3/4           (80,72,4)     5/6
64-QAM       96              144            2/3           (108,96,6)    3/4
64-QAM       108             144            3/4           (120,108,6)   5/6
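The burst-profile idea can be sketched as a simple threshold rule. The SNR thresholds below are receiver SNR assumptions commonly quoted for these profiles, used here only as illustrative placeholders; the selection helper itself is ours:

```python
PROFILES = [
    # (modulation, overall coding rate, assumed minimum SNR in dB)
    ("BPSK",   "1/2",  6.4),
    ("QPSK",   "1/2",  9.4),
    ("QPSK",   "3/4", 11.2),
    ("16-QAM", "1/2", 16.4),
    ("16-QAM", "3/4", 18.2),
    ("64-QAM", "2/3", 22.7),
    ("64-QAM", "3/4", 24.4),
]

def select_profile(snr_db):
    """Return the most efficient (modulation, coding rate) pair whose SNR
    threshold is met; fall back to the most robust profile, BPSK 1/2."""
    chosen = PROFILES[0]
    for profile in PROFILES:
        if snr_db >= profile[2]:
            chosen = profile
    return chosen[:2]
```

For instance, a 12 dB link would be served with QPSK 3/4, while a 30 dB link gets 64-QAM 3/4.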

 

Chapter-4

Orthogonal Frequency Division Multiplexing

4.1 OFDM  BASIC:

The idea of OFDM comes from the Multi Carrier Modulation (MCM) transmission technique. The principle of MCM is to divide the input bit stream into several parallel bit streams, which are then used to modulate several subcarriers, as shown in Figure 4.1. Each subcarrier is separated by a guard band to ensure that the subcarriers do not overlap with each other. On the receiver side, bandpass filters are used to separate the spectra of the individual subcarriers. OFDM is a special, spectrally efficient form of MCM, which employs densely spaced orthogonal subcarriers with overlapping spectra. Bandpass filters are not required in OFDM because of the orthogonal nature of the subcarriers; hence, the available bandwidth is used very efficiently without causing Inter-Carrier Interference (ICI). In Figure 4.2, the effect of this is seen: the required bandwidth is greatly reduced by removing the guard bands and allowing the subcarriers to overlap. It is still possible to recover the individual subcarriers despite their overlapping spectra, provided that the orthogonality is maintained; in practice, orthogonality is achieved by performing a Fast Fourier Transform (FFT) on the input stream. Because of the combination of multiple low data rate subcarriers, OFDM provides a composite high data rate with long symbol duration. Depending on the channel coherence time, this reduces or completely eliminates the risk of Inter-Symbol Interference (ISI), which is a common phenomenon in multipath channel environments with short symbol durations. The use of a Cyclic Prefix (CP) in the OFDM symbol can reduce the effect of ISI even more[18], but it also introduces a loss in SNR and data rate.


Figure 4.1: Block diagram of a generic MCM transmitter.


Figure 4.2: Comparison between conventional FDM and OFDM

4.2 OFDM  SYSTEM  IMPLEMENTATION

The principle of OFDM was already known in the 1950s and 1960s as an efficient MCM technique, but system implementation was delayed by technological difficulties, such as the digital implementation of the FFT/IFFT, which could not be solved at that time. In 1965, Cooley and Tukey presented the algorithm for FFT calculation[19], and its later efficient implementation on chip brought OFDM into application.

The digital implementation of OFDM system is achieved through the mathematical operations called Discrete Fourier Transform (DFT) and its counterpart Inverse Discrete Fourier Transform (IDFT). These two operations are extensively used for transforming data between the time domain and frequency domain. In case of OFDM, these transforms can be seen as mapping data onto orthogonal subcarriers.

To transform frequency domain data into time domain data, the IDFT correlates the frequency domain input data with its orthogonal basis functions, which are sinusoids at certain frequencies. In other words, this correlation is equivalent to mapping the input data onto the sinusoidal basis functions. In practice, OFDM systems employ a combination of Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) blocks, which are mathematically equivalent versions of the DFT and IDFT.

At the transmitter side, an OFDM system treats the source symbols as though they are in the frequency domain. These symbols are fed to an IFFT block, which brings the signal into the time domain. If N subcarriers are chosen for the system, the basis functions for the IFFT are N orthogonal sinusoids of distinct frequencies, and the IFFT receives N symbols at a time. Each of the N complex-valued input symbols determines both the amplitude and the phase of the sinusoid for that subcarrier. The output of the IFFT is the summation of all N sinusoids and makes up a single OFDM symbol. The length of the OFDM symbol is NT, where T is the IFFT input symbol period. In this way, the IFFT block provides a simple way to modulate data onto N orthogonal subcarriers.


Figure 4.3: Basic OFDM transmitter and receiver

At the receiver side, the FFT block performs the reverse process on the received signal and brings it back to the frequency domain. The block diagram in Figure 4.3 depicts the switch between frequency domain and time domain in an OFDM system.
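The IFFT/FFT duality described above can be demonstrated with a toy round trip over an ideal channel (N, the cyclic-prefix length and the QPSK alphabet are arbitrary choices for this sketch):

```python
import numpy as np

N, CP = 256, 32                                # subcarriers and cyclic-prefix length
rng = np.random.default_rng(0)
data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)  # QPSK symbols

tx_time = np.fft.ifft(data)                    # frequency domain -> time domain
tx = np.concatenate([tx_time[-CP:], tx_time])  # prepend the last CP samples as cyclic prefix

rx = tx[CP:]                                   # receiver drops the cyclic prefix
recovered = np.fft.fft(rx)                     # time domain -> frequency domain
assert np.allclose(recovered, data)            # ideal channel: symbols come back exactly
```

With a real multipath channel, the cyclic prefix is what turns the channel's linear convolution into a circular one, so each subcarrier sees only a complex gain.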

 

4.3 Data transmission

The achievable data rate is high compared to conventional FDM because OFDM uses multicarrier modulation. OFDM splits the high-rate bit stream into several low-rate sub-streams and sends each sub-stream over one of several parallel sub-channels, known as OFDM subcarriers. These subcarriers are orthogonal to each other, and each subcarrier's bandwidth is much smaller than the total bandwidth. Inter-Symbol Interference is reduced in OFDM because the symbol time Ts of each sub-channel is longer than the channel delay spread.


Fig 4.4: Time and Frequency diagram of Single and Multi-carrier signals                                

In Figure 4.4, it is clear that OFDM resists multipath effects by adopting a smaller subcarrier bandwidth and a longer symbol period, which leads to better spectral efficiency.

4.4 Parameters

The implementation of the OFDM physical layer is different for the two types of WiMAX. For fixed WiMAX, the FFT size of OFDM-PHY is fixed at 256, but for mobile WiMAX, the FFT size of OFDMA-PHY can be 128, 512, 1024 or 2048[1]. This helps to combat ISI and Doppler spread. Another difference between OFDM-PHY and OFDMA-PHY is that OFDM splits a single high bit rate stream into several parallel low bit rate sub-streams, which are modulated using the IFFT, whereas OFDMA accepts several users' data and multiplexes them onto downlink sub-channels; uplink multiple access is provided through uplink sub-channels. The OFDM-PHY and OFDMA-PHY parameters are discussed briefly in the following subsections.

4.4.1 OFDM-PHY

Here the FFT size is fixed at 256, of which 192 are data subcarriers, 8 are pilot subcarriers used for synchronization and channel estimation, and 56 are null subcarriers [20]. The channel bandwidth for fixed WiMAX is 3.5 MHz, and the subcarrier spacing varies with the bandwidth: the spacing rises with higher bandwidth, which decreases the symbol time and thereby increases the relative impact of delay spread. To counter delay spread, OFDM-PHY allocates a large fraction of guard space. For OFDM-PHY, the useful symbol time is 64 μs, the total symbol duration is 72 μs and the subcarrier spacing is 15.625 kHz[20].
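These numbers follow directly from the parameters just quoted; the short sketch below reproduces them, assuming the 8/7 sampling factor that applies to 3.5 MHz channels:

```python
BW   = 3.5e6          # channel bandwidth in Hz
n    = 8 / 7          # sampling factor (assumed for 3.5 MHz channels)
NFFT = 256            # fixed FFT size of OFDM-PHY
G    = 1 / 8          # guard interval as a fraction of the useful symbol time

fs       = n * BW              # sampling frequency: 4 MHz
delta_f  = fs / NFFT           # subcarrier spacing: 15.625 kHz
t_useful = 1 / delta_f         # useful symbol time: 64 us
t_symbol = t_useful * (1 + G)  # total symbol duration: 72 us
```

The 72 μs total duration corresponds to a 1/8 guard interval; the standard also allows other guard fractions, which change t_symbol accordingly.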

4.4.2 OFDMA-PHY

In mobile WiMAX the FFT size can vary between 128 and 2048; to keep the subcarrier spacing at 10.94 kHz, the FFT size is scaled with the channel bandwidth, which also helps to minimize Doppler spread. Since the channel bandwidths are 1.25, 5, 10 and 20 MHz, the corresponding FFT sizes are 128, 512, 1024 and 2048. For OFDMA-PHY, the useful symbol time is 91.4 μs, the symbol duration is 102.9 μs and the number of symbols in a 5 ms frame is 48[20].

4.5 Sub-channelization

WiMAX divides the available subcarriers into several groups and allocates them to different users based on channel conditions and user requirements. This process is called sub-channelization. Sub-channelization concentrates the transmit power into smaller groups of subcarriers, increasing the system gain and widening the coverage area with fewer penetration losses caused by buildings and other obstacles. Without sub-channelization, the link budget would be asymmetric and bandwidth management would be poor[6]. The OFDM-PHY of fixed WiMAX permits only a limited amount of sub-channelization, and only on the uplink: among the 16 standard sub-channels, an SS's uplink transmission can take place in 1, 2, 4, 8 or all 16 sub-channels. The SS adjusts its transmit power level depending on the allotted sub-channels: when more sub-channels are allotted to an uplink user, the transmit power level increases, and when fewer are allotted, it decreases; the transmit power is always kept below the maximum level. In fixed WiMAX, to improve the link budget and the SS's battery performance, uplink sub-channelization permits the SS to transmit in only a fraction, as little as 1/16, of the bandwidth allocated by the BS[21].

Mobile WiMAX's OFDMA-PHY permits sub-channelization on both the uplink and the downlink. The BS allocates the minimum frequency and sub-channels for different users based on a multiple access technique; that is why this kind of OFDM is called OFDMA (Orthogonal Frequency Division Multiple Access). For mobile applications, frequency diversity is provided by forming sub-channels from distributed subcarriers, and mobile WiMAX has several such distributed-carrier sub-channelization schemes; the mandatory one is called Partial Usage of Sub-Carriers (PUSC). Another sub-channelization scheme, based on contiguous subcarriers, is called Adaptive Modulation and Coding (AMC), in which multiuser diversity gets the highest priority: sub-channels are allocated to users based on their frequency response. Contiguous sub-channels are best suited for fixed and low-mobility applications, but AMC can give a certain gain in overall system capacity[21].


Figure 4.7 illustrates the transmitted upstream OFDM spectrum from a CPE, where the carriers are the same as the BS's in size and range but with smaller capacity[6].

4.6 BENEFITS AND DRAWBACKS of OFDM:

In the earlier sections, we have described how an OFDM system combats ISI and reduces ICI. Besides those, there are some other benefits, as follows:

·         High spectral efficiency because of overlapping spectra

·         Simple implementation by Fast Fourier Transform

·         Low receiver complexity, as the transmitter combats the channel effects to some extent

·         Suitable for high data rate transmission

·         High flexibility in terms of link adaptation

·         Low complexity multiple access schemes such as orthogonal frequency division multiple access (OFDMA)

·         It is possible to use maximum likelihood detection with reasonable complexity[22].

 

On the other side, few drawbacks of OFDM are listed as follows

·         An OFDM system is highly sensitive to timing and frequency offsets[18]. Demodulation of an OFDM signal with a frequency offset can lead to a high bit error rate.

·         An OFDM system with a large number of subcarriers has a higher peak-to-average power ratio (PAPR) than a single carrier system. A high PAPR makes the implementation of Digital-to-Analog (DAC) and Analog-to-Digital (ADC) conversion extremely difficult[23].

4.7 APPLICATION

OFDM has gained great interest since the beginning of the 1990s[24], as many of the implementation difficulties have been overcome. OFDM has been in use or proposed for a number of wired and wireless applications. Digital Audio Broadcasting (DAB) was the first commercial use of OFDM technology[23]. OFDM has also been used for Digital Video Broadcasting[25]. Under the acronym Discrete Multi Tone (DMT), OFDM has been selected for the asymmetric digital subscriber line (ADSL)[26]. The specifications for Wireless LAN standards such as IEEE 802.11a/g[27-28] and ETSI HIPERLAN2[29] employ OFDM as their PHY technologies. The IEEE 802.16 standard for Fixed/Mobile BWA has also accepted OFDM for its PHY technologies.

The median path loss of the Erceg model is

PL = A + 10γ log10(d/d0) + s,  for d > d0

where A = 20 log10(4πd0/λ) is the free-space path loss at distance d0, λ is the wavelength in meters, and the path-loss exponent is γ = a − b·hb + c/hb.

hb  is the height of the base station in meters (between 10 m and 80 m), d0 = 100 m, and a, b, c are constants dependent on the terrain category. These parameters are listed in the table below.

Table 5.1: Parameters of the ERCEG model

Model Parameter   Terrain Type A   Terrain Type B   Terrain Type C
a                 4.6              4.0              3.6
b                 0.0075           0.0065           0.005
c                 12.6             17.1             20

 

s represents the shadowing effect and follows a lognormal distribution with a typical standard deviation of 8.2 to 10.6 dB.

The above model is valid for frequencies close to 2 GHz and for receive antenna heights close to 2 m. For other frequencies and antenna heights (between 2 m and 10 m), the following correction terms are recommended:

PL_modified = PL + ΔPL_f + ΔPL_h

where the frequency correction is ΔPL_f = 6 log10(f/2000), with f in MHz, and the receive antenna height correction is ΔPL_h = −10.8 log10(h/2) for terrain types A and B and ΔPL_h = −20 log10(h/2) for terrain type C, with h in meters.
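As a sketch of how the pieces above fit together, the following function evaluates the median Erceg path loss (without the shadowing term s) using the Terrain Type B parameters from Table 5.1; the function name and defaults are ours:

```python
import math

def erceg_path_loss_db(d, hb, f_mhz=2000.0, a=4.0, b=0.0065, c=17.1, d0=100.0):
    """Median path loss in dB for distance d (m), BS height hb (m),
    using the Erceg model with Terrain Type B defaults (shadowing omitted)."""
    lam = 3e8 / (f_mhz * 1e6)                    # wavelength in metres
    A = 20 * math.log10(4 * math.pi * d0 / lam)  # free-space loss at the reference distance d0
    gamma = a - b * hb + c / hb                  # path-loss exponent
    return A + 10 * gamma * math.log10(d / d0)
```

At d = d0 the result reduces to the free-space term A, about 78.5 dB at 2 GHz.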

 

5.2.2 SUI Models

This is a set of 6 channel models representing three terrain types and a variety of Doppler spreads, delay spreads and line-of-sight/non-line-of-sight conditions that are typical of the continental US, as follows[31]:

Table 5.2: Terrain type and Doppler spread for SUI channel model

Channel   Terrain type   Doppler spread   Delay spread   LOS
SUI-1     C              Low              Low            High
SUI-2     C              Low              Low            High
SUI-3     B              Low              Low            High
SUI-4     B              High             Moderate       High
SUI-5     A              Low              High           Low
SUI-6     A              High             High           Low

The terrain types A, B and C are the same as those defined earlier for the Erceg model. The multipath fading is modeled as a tapped delay line with 3 taps with non-uniform delays. The gain associated with each tap is characterized by a Rician distribution and the maximum Doppler frequency. In a multipath environment, the received signal envelope r has a Rician distribution, whose pdf is given by:

p(r) = (r/σ²) exp(−(r² + A²)/(2σ²)) I0(Ar/σ²),  r ≥ 0

Here, I0 (x) is the modified Bessel function of the first kind, zero order. A is zero if there is no LOS component and the pdf of the received power becomes:

p(r) = (r/σ²) exp(−r²/(2σ²)),  r ≥ 0

This is the Rayleigh distribution. The ratio K = A²/(2σ²) in the Rician case represents the ratio of the LOS component to the NLOS component and is called the “K-Factor” or “Rician Factor.” In the NLOS case, the K-factor is zero and the Rician distribution reduces to the Rayleigh distribution.
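The decomposition above (a fixed LOS amplitude A plus a complex Gaussian NLOS part with variance 2σ²) gives a direct way to draw Rician-faded envelope samples for a given linear K-factor. This helper is a sketch of ours; K = 0 reduces to Rayleigh fading:

```python
import numpy as np

def rician_samples(k_factor, n, sigma=1.0, seed=0):
    """Draw n Rician envelope samples for a linear K-factor K = A^2/(2 sigma^2)."""
    rng = np.random.default_rng(seed)
    A = np.sqrt(2 * k_factor) * sigma            # LOS amplitude implied by K
    nlos = rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)
    return np.abs(A + nlos)                      # received envelope |LOS + NLOS|
```

For K = 0 the sample mean approaches the Rayleigh mean σ√(π/2); for large K the envelope concentrates around A.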

The general structure for the SUI channel model is as shown below in Figure 5.1. This structure is for Multiple Input Multiple Output (MIMO) channels and includes other configurations like Single Input Single Output (SISO) and Single Input Multiple Output (SIMO) as subsets. The SUI channel structure is the same for the primary and interfering signals.

        Figure 5.1: Generic Structure of SUI Channel Models

#  Input Mixing Matrix

This part models correlation between input signals if multiple transmitting antennas are used.

#  Tapped Delay Line Matrix

This part models the multipath fading of the channel. The multipath fading is modeled as a tapped delay line with 3 taps with non-uniform delays. The gain associated with each tap is characterized by a distribution (Rician with a K-factor > 0, or Rayleigh with K-factor = 0) and the maximum Doppler frequency.

#  Output Mixing Matrix

This part models the correlation between output signals if multiple receiving antennas are used. Using the above general structure of the SUI Channel and assuming the following scenario, six SUI channels are constructed which are representative of the real channels.
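For the SISO case, the tapped-delay-line part can be sketched as a discrete impulse response. The sample rate below is an assumption of ours; the delays and powers are the SUI-3 omni values (0, 0.4, 0.9 μs and 0, −5, −10 dB), with static tap gains standing in for the Rician/Rayleigh fading processes a full model would use:

```python
import numpy as np

fs = 10e6                                         # sample rate (assumed), 10 Msamples/s
delays_us = [0.0, 0.4, 0.9]                       # SUI-3 omni tap delays
powers_db = [0.0, -5.0, -10.0]                    # SUI-3 omni tap powers

# Build the impulse response: one complex gain per tap at its delay index.
h = np.zeros(int(round(delays_us[-1] * 1e-6 * fs)) + 1, dtype=complex)
for d, p in zip(delays_us, powers_db):
    h[int(round(d * 1e-6 * fs))] += 10 ** (p / 20)  # dB power -> amplitude gain

x = np.ones(8, dtype=complex)                     # toy input signal
y = np.convolve(x, h)                             # channel output with multipath echoes
```

Replacing the static gains with filtered complex Gaussian processes (shaped to the specified Doppler spectrum and K-factor) would turn this into a time-varying SUI tap model.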

5.3 Scenario for modified SUI channels

Table 5.3: Scenario for SUI Channel Models

Cell size                    7 km
BTS Antenna Height           30 m
Receive Antenna Height       6 m
BTS Antenna Beam Width       120°
Receive Antenna Beam Width   Omnidirectional (360°) and 30°
Polarization                 Vertical only
Cell coverage                90% cell coverage with 99.9% reliability at each covered location

 5.4 Characteristics of SUI Channels:

In the following models, the total channel gain is not normalized. Before using a SUI model, the specified normalization factor has to be added to each tap to arrive at 0 dB total mean power. The specified Doppler is the maximum frequency parameter. The Gain Reduction Factor (GRF) is the total mean power reduction for a 30° antenna compared to an omni antenna. If 30° antennas are used, the specified GRF should be added to the path loss; note that this implies all 3 taps are affected equally due to the effects of local scattering. K-factors are linear values, not dB values. K-factors for 90% and 75% cell coverage are shown in the tables, i.e., 90% and 75% of the cell locations have K-factors greater than or equal to the specified value, respectively. For SUI channels 5 and 6, 50% K-factor values are also shown.
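The normalization-factor rule can be checked directly on the SUI-1 omni tap powers (0, −15, −20 dB) listed in Table 5.4: adding F_omni to every tap brings the total mean power to 0 dB, and F_omni itself comes out as −0.1771 dB:

```python
import math

tap_powers_db = [0.0, -15.0, -20.0]           # SUI-1 omni tap powers

# Total mean power of the unnormalized taps, in dB.
total_db = 10 * math.log10(sum(10 ** (p / 10) for p in tap_powers_db))
f_omni = -total_db                            # normalization factor, about -0.1771 dB

normalized = [p + f_omni for p in tap_powers_db]
total_after = 10 * math.log10(sum(10 ** (p / 10) for p in normalized))  # 0 dB
```

The same computation on the 30° tap powers (0, −21, −32 dB) reproduces the table's F_30° value.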

Table 5.4: Characteristic of SUI-1

SUI – 1 Channel

                      Tap 1    Tap 2    Tap 3    Units
Delay                  0        0.4      0.9     μs
Power (omni ant.)      0       -15      -20      dB
90% K-fact. (omni)     4        0        0
75% K-fact. (omni)    20        0        0
Power (30° ant.)       0       -21      -32      dB
90% K-fact. (30°)     16        0        0
75% K-fact. (30°)     72        0        0
Doppler                0.4      0.3      0.5     Hz

Antenna Correlation:   ρENV = 0.7
Gain Reduction Factor: GRF = 0 dB
Normalization Factor:  F_omni = -0.1771 dB, F_30° = -0.0371 dB
Terrain Type: C
Omni antenna: τRMS = 0.111 μs; Overall K: K = 3.3 (90%), K = 10.4 (75%)
30° antenna:  τRMS = 0.042 μs; Overall K: K = 14.0 (90%), K = 44.2 (75%)

 

 

 Table 5.5: Characteristic of SUI-2

SUI – 2 Channel

                      Tap 1    Tap 2    Tap 3    Units
Delay                  0        0.4      1.1     μs
Power (omni ant.)      0       -12      -15      dB
90% K-fact. (omni)     2        0        0
75% K-fact. (omni)    11        0        0
Power (30° ant.)       0       -18      -27      dB
90% K-fact. (30°)      8        0        0
75% K-fact. (30°)     36        0        0
Doppler                0.2      0.15     0.25    Hz

Antenna Correlation:   ρENV = 0.5
Gain Reduction Factor: GRF = 2 dB
Normalization Factor:  F_omni = -0.3930 dB, F_30° = -0.0768 dB
Terrain Type: C
Omni antenna: τRMS = 0.202 μs; Overall K: K = 1.6 (90%), K = 5.1 (75%)
30° antenna:  τRMS = 0.069 μs; Overall K: K = 6.9 (90%), K = 21.8 (75%)

   

Table 5.6: Characteristic of SUI-3

SUI – 3 Channel

                      Tap 1    Tap 2    Tap 3    Units
Delay                  0        0.4      0.9     μs
Power (omni ant.)      0       -5      -10       dB
90% K-fact. (omni)     1        0        0
75% K-fact. (omni)     7        0        0
Power (30° ant.)       0      -11      -22       dB
90% K-fact. (30°)      3        0        0
75% K-fact. (30°)     19        0        0
Doppler                0.4      0.3      0.5     Hz

Antenna Correlation:   ρENV = 0.4
Gain Reduction Factor: GRF = 3 dB
Normalization Factor:  F_omni = -1.5113 dB, F_30° = -0.3573 dB
Terrain Type: B
Omni antenna: τRMS = 0.264 μs; Overall K: K = 0.5 (90%), K = 1.6 (75%)
30° antenna:  τRMS = 0.123 μs; Overall K: K = 2.2 (90%), K = 7.0 (75%)

Table 5.7: Characteristic of SUI-4

SUI – 4 Channel

                      Tap 1    Tap 2    Tap 3    Units
Delay                  0        1.5      4       μs
Power (omni ant.)      0       -4       -8       dB
90% K-fact. (omni)     0        0        0
75% K-fact. (omni)     1        0        0
Power (30° ant.)       0      -10      -20       dB
90% K-fact. (30°)      1        0        0
75% K-fact. (30°)      5        0        0
Doppler                0.2      0.15     0.25    Hz

Antenna Correlation:   ρENV = 0.3
Gain Reduction Factor: GRF = 4 dB
Normalization Factor:  F_omni = -1.9218 dB, F_30° = -0.4532 dB
Terrain Type: B
Omni antenna: τRMS = 1.257 μs; Overall K: K = 0.2 (90%), K = 0.6 (75%)
30° antenna:  τRMS = 0.563 μs; Overall K: K = 1.0 (90%), K = 3.2 (75%)

Table 5.8: Characteristic of SUI-5

SUI – 5 Channel

                      Tap 1    Tap 2    Tap 3    Units
Delay                  0        4       10       μs
Power (omni ant.)      0       -5      -10       dB
90% K-fact. (omni)     0        0        0
75% K-fact. (omni)     0        0        0
50% K-fact. (omni)     2        0        0
Power (30° ant.)       0      -11      -22       dB
90% K-fact. (30°)      0        0        0
75% K-fact. (30°)      2        0        0
50% K-fact. (30°)      7        0        0
Doppler                2        1.5      2.5     Hz

Antenna Correlation:   ρENV = 0.3
Gain Reduction Factor: GRF = 4 dB
Normalization Factor:  F_omni = -1.5113 dB, F_30° = -0.3573 dB
Terrain Type: A
Omni antenna: τRMS = 2.842 μs; Overall K: K = 0.1 (90%), K = 0.3 (75%), K = 1.0 (50%)
30° antenna:  τRMS = 1.276 μs; Overall K: K = 0.4 (90%), K = 1.3 (75%), K = 4.2 (50%)

Table 5.9: Characteristic of SUI-6

SUI – 6 Channel

                      Tap 1    Tap 2    Tap 3    Units
Delay                  0       14       20       μs
Power (omni ant.)      0      -10      -14       dB
90% K-fact. (omni)     0        0        0
75% K-fact. (omni)     0        0        0
50% K-fact. (omni)     1        0        0
Power (30° ant.)       0      -16      -26       dB
90% K-fact. (30°)      0        0        0
75% K-fact. (30°)      2        0        0
50% K-fact. (30°)      5        0        0
Doppler                0.4      0.3      0.5     Hz

Antenna Correlation:   ρENV = 0.3
Gain Reduction Factor: GRF = 4 dB
Normalization Factor:  F_omni = -0.5683 dB, F_30° = -0.1184 dB
Terrain Type: A
Omni antenna: τRMS = 5.240 μs; Overall K: K = 0.1 (90%), K = 0.3 (75%), K = 1.0 (50%)
30° antenna:  τRMS = 2.370 μs; Overall K: K = 0.4 (90%), K = 1.3 (75%), K = 4.2 (50%)

Chapter-6

Simulation Model

This chapter describes the simulation part of the thesis. Time division and frequency division duplexing are described briefly first, and then the simulation procedure is explained step by step with appropriate diagrams. We have employed Matlab 9.0 to develop the simulator. Before going into the physical layer setup, let us first define the OFDM symbol parameters used in our study.

 6.1 Physical Layer Setup

The physical layer handles error correction and signal connectivity, as well as registration, initial ranging, connectivity channels and bandwidth requests for data and management. It consists of a sequence of equal-length frames which are transmitted through modulation and coding of RF signals. WiMAX uses OFDM technology, in which different subcarriers can be assigned to different users. OFDM is robust to multipath, which helps the receiver cope with multipath signals. In the IEEE 802.16 standard the OFDM signal is divided into 256 carriers, while IEEE 802.16e uses scalable OFDMA. The IEEE 802.16 standard supports a wide range of frequencies, and the physical layer contains several multiplexing and modulation forms. The modulation methods in the uplink (UL) and downlink (DL) are Binary Phase Shift Keying (BPSK), Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM).


Fig 6.1: IEEE 802.16 Protocol Layer (IEEE-2004)

WiMAX supports both full and half duplex. Two types of transmission are supported by IEEE 802.16:

§  Time Division Duplex (TDD)

§  Frequency Division Duplex (FDD)

6.1.1 Time Division Duplex (TDD)

Time division duplex framing is adaptive: when the input changes, the output behavior changes automatically. A TDD frame has a fixed duration and consists of one downlink subframe and one uplink subframe. The base station (BS) sends the complete downlink and uplink maps (DL-MAP, UL-MAP). Uplink and downlink share the same frequency but are separated in time.

6.1.2 Frequency Division Duplex (FDD)

In frequency division duplexing, transmission is scheduled by the DL-MAP and UL-MAP. Downlink and uplink transmission can occur at the same time, but on different frequencies. UL and DL channels are grouped into contiguous blocks of paired channels. An FDD system provides full duplex, which suits applications such as voice where the DL and UL traffic requirements are more or less symmetric. Since base-station-to-base-station interference is kept to a minimum in this technique, radio network planning is easier.

  Data Decoding

Block diagram 6.2 shows the whole process of the thesis work; every part of the diagram is described below.

6.3 Transmitter Module

This subsection describes the transmitter module used for the simulation.

6.3.1 Mersenne Twister-Random Number Generator Algorithm

The Mersenne Twister is a pseudo-random number generator that produces a sequence of zero and one bits, which can be combined into sub-sequences of zeros and ones or blocks of random numbers. Random numbers come in two types, deterministic and nondeterministic; we deal with deterministic random numbers. A deterministic Random Number Generator (RNG) produces a sequence of bits from an initial value called the seed. The internal state is 19,937 bits long and is stored in a 624-element array, and the algorithm has a period of 2^19937 - 1. A Pseudo Random Number Generator (PRNG) produces values based on a seed and its current state. In our simulation we used this algorithm, through the function rand(), to generate the random input values for evaluating the performance of WiMAX.

6.3.2 Modulation

We passed the random values through the adaptive modulation schemes according to the constellation mapping. The data were modulated depending on their size and on the basis of the different modulation schemes BPSK, QPSK, 16-QAM and 64-QAM. Modulation is performed by dividing the incoming bits into groups of i bits, so the constellation contains 2^i points. The total number of bits is represented according to the constellation mapping of the chosen modulation technique. The value of i for BPSK, QPSK, 16-QAM and 64-QAM is 1, 2, 4 and 6 respectively.

6.3.3 Reed-Solomon Encoder

The randomized data are arranged in block format before passing through the encoder, and a single 0x00 tail byte is appended to the end of each burst. The implemented RS encoder is derived from a systematic RS (N = 255, K = 239, T = 8) code over GF(2^8). The following polynomials are used as the code generator and field generator:

G(x) = (x + λ^0)(x + λ^1) … (x + λ^(2T-1)),   λ = 02_HEX                                (6.1)

p(x) = x^8 + x^4 + x^3 + x^2 + 1                                                       (6.2)

The encoder supports shortened and punctured codes to facilitate variable block sizes and variable error-correction capability. A shortened block of k′ bytes is obtained by adding 239 - k′ zero bytes before the data block; after encoding, these 239 - k′ zero bytes are discarded. To obtain the punctured pattern permitting T′ bytes to be corrected, the first 2T′ of the 16 parity bytes are retained.
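As an illustrative sketch (in Python rather than the thesis Matlab), the shortening and puncturing above can be reproduced with a systematic RS(255, 239, T = 8) encoder over GF(2^8), built from the field polynomial of Eq. (6.2) and the generator of Eq. (6.1):

```python
# GF(2^8) log/antilog tables for the field polynomial
# p(x) = x^8 + x^4 + x^3 + x^2 + 1, i.e. 0x11D  (Eq. 6.2)
EXP = [0] * 512
LOG = [0] * 256
_v = 1
for _i in range(255):
    EXP[_i] = _v
    LOG[_v] = _i
    _v <<= 1
    if _v & 0x100:
        _v ^= 0x11D
for _i in range(255, 512):
    EXP[_i] = EXP[_i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def rs_generator(t=8):
    # g(x) = (x + λ^0)(x + λ^1) ... (x + λ^(2T-1)), λ = 0x02  (Eq. 6.1)
    g = [1]
    for i in range(2 * t):
        root = EXP[i]                      # λ^i
        nxt = g + [0]                      # multiply by x
        for j, c in enumerate(g):
            nxt[j + 1] ^= gf_mul(c, root)  # add root * g(x)
        g = nxt
    return g                               # highest-degree coefficient first

def rs_encode(msg, t=8):
    # Systematic encoding: append the remainder of msg * x^(2T) mod g(x)
    g = rs_generator(t)
    rem = list(msg) + [0] * (2 * t)
    for i in range(len(msg)):
        coef = rem[i]
        if coef:
            for j in range(1, len(g)):
                rem[i + j] ^= gf_mul(g[j], coef)
    return list(msg) + rem[len(msg):]

def rs_shortened_punctured(data, t_punct=8, k=239, t=8):
    # Shorten: prepend 239 - k' zero bytes, encode, then discard them;
    # puncture: keep only the first 2T' of the 16 parity bytes.
    padded = [0] * (k - len(data)) + list(data)
    parity = rs_encode(padded, t)[k:]
    return list(data) + parity[:2 * t_punct]

# Self-check: a valid codeword evaluates to zero at every root λ^i of g(x)
code = rs_encode(list(range(1, 240)))
for i in range(16):
    acc = 0
    for byte in code:
        acc = gf_mul(acc, EXP[i]) ^ byte
    assert acc == 0
```

For example, a 40-byte shortened block punctured to T′ = 4 carries 40 data bytes plus 8 parity bytes.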

6.3.4 Convolutional Encoder

The outer RS-encoded block is fed to an inner binary convolutional encoder. The implemented encoder has a native rate of 1/2 and a constraint length of 7, and uses the generator polynomials in Equations (6.3) and (6.4) to produce its two code bits. The generator is shown in Figure 6.5.

G1 = 171_OCT  (for X)                                                                  (6.3)

G2 = 133_OCT  (for Y)                                                                  (6.4)

Convolutional encoder

Figure 6.5: Convolutional encoder of rate ½

Table 6.1: Puncturing configuration of the convolution code

Rate   d_free   X output   Y output   XY (punctured output)
1/2    10       1          1          X1Y1
2/3    6        10         11         X1Y1Y2
3/4    5        101        110        X1Y1Y2X3
5/6    4        10101      11010      X1Y1Y2X3Y4X5

In order to achieve a variable code rate, a puncturing operation is performed on the output of the convolutional encoder in accordance with Table 6.1. In the table, “1” denotes that the corresponding convolutional encoder output is used, while “0” denotes that it is not. At the receiver, a Viterbi decoder is used to decode the convolutional code.
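The encoder and the puncturing patterns of Table 6.1 can be sketched as follows (a Python illustration under our own tap-ordering convention, not the thesis Matlab code):

```python
# Rate-1/2 binary convolutional encoder, constraint length K = 7,
# generators G1 = 171 and G2 = 133 (octal), plus puncturing per Table 6.1.
G1, G2, K = 0o171, 0o133, 7

def conv_encode(bits):
    # Note: here the newest input bit is the LSB of the state register;
    # tap-ordering conventions differ between implementations.
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & (2 ** K - 1)
        x = bin(state & G1).count("1") % 2   # parity of G1 taps
        y = bin(state & G2).count("1") % 2   # parity of G2 taps
        out.append((x, y))
    return out

# "1" = transmit the bit, "0" = drop it, applied cyclically to the X/Y streams
PUNCTURE = {"1/2": ("1", "1"), "2/3": ("10", "11"),
            "3/4": ("101", "110"), "5/6": ("10101", "11010")}

def puncture(pairs, rate):
    px, py = PUNCTURE[rate]
    out = []
    for i, (x, y) in enumerate(pairs):
        if px[i % len(px)] == "1":
            out.append(x)
        if py[i % len(py)] == "1":
            out.append(y)
    return out      # e.g. X1 Y1 Y2 X3 ... for rate 3/4
```

Six input bits at rate 3/4, for instance, produce eight coded bits, matching the nominal rate.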

6.3.5 Interleaver

The RS-CC encoded data are interleaved by a block interleaver. The block size depends on the number of coded bits per subchannel in one OFDM symbol, Ncbps. In IEEE 802.16, the interleaver is defined by a two-step permutation. The first step ensures that adjacent coded bits are mapped onto nonadjacent subcarriers. The second step ensures that adjacent coded bits are mapped alternately onto less and more significant bits of the constellation, thus avoiding long runs of unreliable bits [1].

The Matlab implementation of the interleaver computes the index of each bit after the first and second permutations using Equations (6.5) and (6.6) respectively:

m_k = (Ncbps/12) · (k mod 12) + floor(k/12),   k = 0, 1, …, Ncbps - 1                  (6.5)

j_k = s · floor(m_k/s) + (m_k + Ncbps - floor(12 · m_k/Ncbps)) mod s,   k = 0, 1, …, Ncbps - 1   (6.6)

where s = ceil(Ncpc/2) and Ncpc is the number of coded bits per subcarrier, i.e. 1, 2, 4 or 6 for BPSK, QPSK, 16-QAM or 64-QAM, respectively. The default number of subchannels, 16, is used for this implementation.

The receiver performs the reverse operation, following the two-step permutation of Equations (6.7) and (6.8) respectively:

m_j = s · floor(j/s) + (j + floor(12 · j/Ncbps)) mod s,   j = 0, 1, …, Ncbps - 1       (6.7)

k_j = 12 · m_j - (Ncbps - 1) · floor(12 · m_j/Ncbps),   j = 0, 1, …, Ncbps - 1         (6.8)
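The two permutations and their inverses can be checked numerically; the sketch below (Python rather than the thesis Matlab) follows the standard's index formulas and verifies that deinterleaving undoes interleaving:

```python
import math

def interleave_indices(ncbps, ncpc):
    # Two-step transmitter permutation; s = ceil(Ncpc / 2)
    s = math.ceil(ncpc / 2)
    idx = []
    for k in range(ncbps):
        m = (ncbps // 12) * (k % 12) + k // 12                   # first step
        j = s * (m // s) + (m + ncbps - (12 * m) // ncbps) % s   # second step
        idx.append(j)
    return idx

def deinterleave_indices(ncbps, ncpc):
    # Receiver-side inverse permutation
    s = math.ceil(ncpc / 2)
    idx = []
    for j in range(ncbps):
        m = s * (j // s) + (j + (12 * j) // ncbps) % s
        k = 12 * m - (ncbps - 1) * ((12 * m) // ncbps)
        idx.append(k)
    return idx

# Composing the two index maps must give the identity for every
# (Ncbps, Ncpc) pair used with 192 data subcarriers.
for ncbps, ncpc in [(192, 1), (384, 2), (768, 4), (1152, 6)]:
    fwd = interleave_indices(ncbps, ncpc)
    inv = deinterleave_indices(ncbps, ncpc)
    assert sorted(fwd) == list(range(ncbps))       # a true permutation
    assert all(inv[fwd[k]] == k for k in range(ncbps))
```

The Ncbps values shown (192, 384, 768, 1152) assume the 192 data subcarriers of the 256-carrier OFDM symbol.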

6.3.6 Constellation Mapper

The bit-interleaved data then enter the constellation mapper serially. The Matlab-implemented constellation mapper supports BPSK, QPSK, 16-QAM, and 64-QAM. The complex constellation points are normalized with a specified multiplying factor for each modulation scheme so that equal average symbol power is achieved. The constellation-mapped data are assigned to all allocated data subcarriers of the OFDM symbol in order of increasing frequency-offset index.
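The normalization can be sketched in Python (the factors 1, 1/√2, 1/√10 and 1/√42 are the usual values for BPSK, QPSK, 16-QAM and 64-QAM; the Gray labeling of bits to points is omitted here):

```python
import math

# Normalization factor per modulation, keyed by coded bits per subcarrier
NORM = {1: 1.0, 2: 1 / math.sqrt(2), 4: 1 / math.sqrt(10), 6: 1 / math.sqrt(42)}

def pam_levels(nbits):
    # Amplitude levels per axis, e.g. 2 bits -> [-3, -1, 1, 3]
    m = 2 ** nbits
    return [2 * i - (m - 1) for i in range(m)]

def constellation(ncpc):
    # All normalized points of the BPSK/QPSK/16-QAM/64-QAM constellations
    if ncpc == 1:
        return [complex(l, 0) * NORM[1] for l in pam_levels(1)]
    half = ncpc // 2
    return [complex(i, q) * NORM[ncpc]
            for i in pam_levels(half) for q in pam_levels(half)]
```

With these factors, every constellation has unit average power, so all modulation schemes transmit at the same mean energy per symbol.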

6.3.7 IFFT

The IFFT transforms the source symbols from the frequency domain into the time domain. If we choose N subcarriers for the system, the IFFT takes N symbols at a time and maps them onto N sinusoids. The output of the IFFT, the sum of these N sinusoids, constitutes a single OFDM symbol. The mathematical model of the OFDM symbol produced by the IFFT, as transmitted during our simulation, is given below:

x(n) = (1/N) · Σ_{k=0}^{N-1} X(k) · e^(j2πkn/N),   n = 0, 1, …, N - 1

6.3.8 Subcarriers

In an OFDM system the carriers are sinusoidal. Two periodic sinusoidal signals are orthogonal when the integral of their product over a single period is zero. Each orthogonal subcarrier has an integer number of cycles within a single OFDM symbol period. Null (zero-amplitude) carriers at the band edges are used as a guard band to avoid inter-channel interference.

6.3.9 OFDM Symbol Description

In the WiMAX transmitter, the IFFT (Inverse Fast Fourier Transform) is used to create the OFDM waveform from the modulated data streams; in the WiMAX receiver, the FFT is used to demodulate them. The useful symbol duration is referred to as the symbol time, Tb. A copy of the end of the symbol, of duration Tg, termed the Cyclic Prefix (CP), is used to collect multipath while maintaining the orthogonality of the subcarriers. Fig 6.6 shows the OFDM symbol in the time domain.

Fig 6.6: OFDM symbol in the time domain

In this OFDM system the number of sub-carriers is 256, which is equal to the FFT size. Each OFDM symbol consists of the following four types of carriers:

Data sub-carriers (OFDM) or sub-channels (OFDMA): used for data transmission

Pilot sub-carriers: used for various estimation purposes

DC sub-carrier: the carrier at the center frequency

Guard sub-carriers/guard bands: used for keeping space between OFDM and OFDMA signals

The following fig 6.7 shows the OFDM symbol in frequency domain,

OFDM Symbol in frequency domain

Fig 6.7: OFDM Symbol in frequency domain [28]

To avoid Intersymbol Interference (ISI), a Cyclic Prefix (CP) is inserted in the OFDM system before each transmitted symbol. In wireless transmission the transmitted signal may be distorted by echo signals arising from multipath delay. ISI is eliminated by design when the CP length is greater than the multipath delay. The CP is added to each OFDM symbol after the Inverse Fast Fourier Transform (IFFT) is performed.

6.3.10 CP Insertion

To maintain frequency orthogonality and counter the delay introduced by multipath propagation, a cyclic prefix is added to each OFDM signal: before transmission, a copy of the end of the symbol is prepended to its beginning. As noted above, ISI is eliminated by design when the CP length L exceeds the multipath delay, and the CP is added after the IFFT is performed.
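The IFFT-plus-CP step can be sketched with NumPy (the FFT size of 256 matches the text; the CP length of 32, i.e. a guard fraction of 1/8, is an assumed example value, not taken from the thesis):

```python
import numpy as np

N, NCP = 256, 32   # FFT size from the text; CP length assumed (G = 1/8)

def ofdm_symbol(freq_symbols):
    # freq_symbols: N constellation points, one per subcarrier.
    time = np.fft.ifft(freq_symbols)             # frequency -> time domain
    return np.concatenate([time[-NCP:], time])   # prepend symbol tail as CP

# Example: unit-modulus points with random phases on every subcarrier
rng = np.random.default_rng(1)
sym = ofdm_symbol(np.exp(2j * np.pi * rng.random(N)))

# The prefix is an exact copy of the symbol tail, which is what lets the
# receiver discard it and still see a cyclically extended symbol.
assert np.allclose(sym[:NCP], sym[-NCP:])
```

The transmitted symbol is therefore N + NCP samples long, and the receiver simply drops the first NCP samples.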

6.4 Channel Module / Wireless Channel

In wireless communication, data are transmitted through a wireless channel of given bandwidth so as to achieve a high data rate while maintaining quality of service. Once on air, the transmitted data face environmental challenges and unexpected noise, encountering effects such as multipath delay spread, fading, path loss, Doppler spread and co-channel interference. These environmental effects play a significant role in WiMAX technology, and an efficient wireless channel model must take them into account. In this section we present the wireless channels used:

  • Additive White Gaussian Noise (AWGN)
  • Rayleigh Fading Channel
  • Stanford University Interim (SUI)

6.4.1 Additive White Gaussian Noise (AWGN)

AWGN is a noise channel that affects the transmitted signal as it passes through. This channel model is good for satellite and deep-space communication but not for terrestrial communication, because of multipath, terrain blocking and interference. AWGN is used to simulate the background noise of the channel. Mathematically, the received signal is r(t) = s(t) + n(t), as shown in figure 6.8, where s(t) is the transmitted signal and n(t) is the background noise.
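The relation r(t) = s(t) + n(t) can be sketched directly (a NumPy illustration, not the thesis code), scaling the noise power for a target SNR:

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn(signal, snr_db):
    # r = s + n: add complex white Gaussian noise sized for the target SNR
    p_signal = np.mean(np.abs(signal) ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    n = np.sqrt(p_noise / 2) * (rng.standard_normal(signal.shape)
                                + 1j * rng.standard_normal(signal.shape))
    return signal + n

s = np.ones(100_000, dtype=complex)   # dummy transmitted signal
r = awgn(s, 10.0)                     # received through AWGN at 10 dB SNR
measured_snr_db = 10 * np.log10(1 / np.mean(np.abs(r - s) ** 2))
```

With enough samples the measured SNR converges to the requested 10 dB.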

AWGN Channel

Fig 6.8: AWGN Channel

6.4.2 Rayleigh Fading Channel

Rayleigh fading is a statistical model for the propagation environment of a radio signal: the magnitude of a signal that has passed through the communication channel varies randomly according to the Rayleigh distribution. Rayleigh fading is a reasonable model when many objects in the environment scatter the radio signal before it arrives at the receiver, and it is most applicable when there is no dominant line-of-sight propagation path between transmitter and receiver. When there is a dominant line of sight, Rician fading is more applicable than Rayleigh fading. In our simulation we used Rayleigh fading when simulating the Bit Error Rate versus Signal-to-Noise Ratio performance.
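A Rayleigh-distributed magnitude arises from a zero-mean complex Gaussian channel coefficient, which is easy to sketch (an illustration, not the thesis code):

```python
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_taps(n):
    # Zero-mean complex Gaussian taps: |h| follows a Rayleigh distribution
    # and the average channel power E[|h|^2] is normalized to 1.
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

h = rayleigh_taps(200_000)
# A flat-fading channel then acts as y = h * x (plus noise)
```

For this normalization the mean envelope E[|h|] is √π / 2 ≈ 0.886 and the mean power is 1.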

6.5 Receiver Module

The omnidirectional antenna is the most popular antenna in WiMAX and can be used for point-to-multipoint configurations. Its main feature is that it broadcasts over a 360° angle, which limits its range and hence its signal strength. Omnidirectional antennas are most suitable when many subscribers are located very close to the base station.

6.5.1 CP Removal

In the transmitter module, a cyclic prefix was added to each OFDM signal to preserve frequency orthogonality and combat multipath delay: the CP was prepended to each symbol after the IFFT was performed. In the receiver module, after synchronization, the cyclic prefix of each received OFDM signal is simply discarded.

6.5.2 FFT

The FFT converts a time-domain signal into the frequency domain using a number of samples; the resulting frequency resolution is 1/Ts_tot, where Ts_tot is the total duration of the sampled signal. In the transmitter module the IFFT performs exactly the reverse operation, converting the OFDM signal from the frequency domain to the time domain. To form the 256-point OFDM symbol, zeros are padded at the beginning and end of the signal; these zero pads are removed at the corresponding places in the receiver module.

6.5.3 Channel Equalizer

In our simulation we used the Zero-Forcing Block Equalizer (ZFE) and the Minimum Mean Square Error (MMSE) equalizer, which are described below.

  • Zero-Force Block Equalizer (ZFE)

The zero-forcing equalizer applies the inverse of the channel response so that the inter-symbol interference (ISI) at the equalizer output is forced to zero. For a noiseless channel this restores the transmitted signal exactly; when noise is present, however, inverting a deeply faded channel amplifies it.
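Both equalizers reduce to one-tap operations per subcarrier in an OFDM system; the sketch below (Python/NumPy, not the thesis code) shows the standard one-tap forms:

```python
import numpy as np

def zf_equalize(y, h):
    # Zero-forcing: invert the per-subcarrier channel response. Forces ISI
    # to zero, but amplifies noise wherever |h| is small.
    return y / h

def mmse_equalize(y, h, noise_var):
    # MMSE: conj(H) / (|H|^2 + sigma^2) trades residual ISI against
    # noise enhancement; it reduces to ZF as the noise variance -> 0.
    return np.conj(h) * y / (np.abs(h) ** 2 + noise_var)

rng = np.random.default_rng(2)
x = rng.choice([-1.0, 1.0], size=64) + 0j                    # known BPSK symbols
h = rng.standard_normal(64) + 1j * rng.standard_normal(64)   # channel response
y = h * x                                                    # noiseless received signal
```

On a noiseless channel, ZF recovers the transmitted symbols exactly, and MMSE with zero noise variance coincides with ZF.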

Block Diagram of a Simple Transmission

Fig 6.9: Block Diagram of a Simple Transmission in Zero-Force Equalizer


6.5.4 Demodulation

Demodulation extracts the original data from a modulated waveform. At the receiver, an electronic circuit called the demodulator recovers the different baseband signals that were transmitted from the transmitter end [30].

Chapter 7

Simulation Results

In this chapter the simulation results are shown and discussed. In the following sections we first present the structure of the implemented simulator and then the simulation results.

7.1 Bit Error Rate (BER)

The Bit Error Rate (BER) is the ratio of the number of bit errors in the transmitted signal to the total number of bits transmitted. In other words, the bit error rate is a parameter used to assess a system that transmits a digital signal from one end to the other. We can define BER as:

BER = (number of bit errors) / (total number of transmitted bits)

If the medium between transmitter and receiver is good at a particular time and the Signal-to-Noise Ratio is high, the bit error rate is very low. In our thesis simulation we generated a random signal, applied noise, and then measured the bit error rate.
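Counting bit errors against the transmitted reference is a one-line computation; a small sketch (ours, not the thesis Matlab code):

```python
def bit_error_rate(tx_bits, rx_bits):
    # BER = (number of bit errors) / (total number of transmitted bits)
    errors = sum(t != r for t, r in zip(tx_bits, rx_bits))
    return errors / len(tx_bits)

# One flipped bit out of four gives a BER of 0.25
assert bit_error_rate([0, 1, 1, 0], [0, 1, 0, 0]) == 0.25
```

In the simulator this comparison is made between the generated random input bits and the bits recovered after demodulation and decoding.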

7.2 SNR

The energy-per-bit to noise-power-spectral-density ratio, Eb/N0, plays an important role in simulation: it is needed whenever the Bit Error Rate (BER) performance of the adaptive modulation techniques is simulated and compared. The Signal-to-Noise Ratio (SNR) is the normalized form of Eb/N0. In telecommunications, the signal-to-noise ratio is the power ratio between a signal and the background noise:

SNR = P_signal / P_noise

where P is mean power. The signal and the background noise must be measured at the same point; if the measurement is taken across the same impedance, the SNR can be obtained by squaring the amplitude ratio:

SNR = (A_signal / A_noise)^2

7.3 BER Vs SNR

The Bit Error Rate (BER) can be expressed as the probability of error (Pe), while the Signal-to-Noise Ratio is the power ratio between a signal and the background noise. Three variables are involved:

  • The error function (erf)
  • The energy per bit (Eb)
  • The noise power spectral density (N0)

 

Every modulation scheme has its own form of the error function, which is why each scheme performs differently in the presence of background noise. For instance, a higher-order modulation scheme (64-QAM) is not robust but carries a higher data rate; a lower-order scheme (BPSK) is more robust but carries a lower data rate. The energy per bit, Eb, is the carrier power divided by the bit rate, measured in joules. The noise power spectral density, N0, is noise power per hertz, which also has units of joules. The dimensions of Eb/N0 therefore cancel, making it a pure ratio, and the probability of error is a decreasing function of Eb/N0.
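As an illustration (textbook AWGN expressions, not results from the thesis), the dependence of Pe on Eb/N0 for BPSK can be written with the Q-function via erfc:

```python
import math

def qfunc(x):
    # Q(x) = 0.5 * erfc(x / sqrt(2)): tail probability of a standard Gaussian
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_bpsk_awgn(ebn0_db):
    # Pe = Q(sqrt(2 * Eb/N0)) for coherent BPSK over AWGN
    # (the same per-bit expression applies to Gray-coded QPSK)
    ebn0 = 10 ** (ebn0_db / 10)
    return qfunc(math.sqrt(2 * ebn0))
```

At Eb/N0 = 10 dB this gives a BER of a few times 10^-6, and the value falls steeply as Eb/N0 grows, which is the behavior the BER-versus-SNR plots below exhibit.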

 

7.4 Physical layer performance results

 

The basic goal of this thesis is to analyze the performance of the WiMAX OFDM physical layer based on the simulation results. To this end, the BER versus SNR plots were investigated.

Fig 7.1: SUI-1 BER over SNR for BPSK

Fig 7.2: SUI-1 BER over SNR for QPSK


Fig 7.3: SUI-1 BER over SNR for 16-QAM

Fig 7.4: SUI-1 BER over SNR for 64-QAM

Corresponding BER over SNR plots were obtained for the remaining channel models and modulations: SUI-2 (BPSK, 16-QAM, 64-QAM), SUI-3 (QPSK, 64-QAM), SUI-4 (BPSK, 16-QAM), SUI-5 (BPSK) and SUI-6 (BPSK, 64-QAM).

7.5 Conclusion

From the conditions we applied and the results we obtained, we can conclude our work as follows:

  • We studied the WiMAX OFDM physical layer, mobile systems, modulation techniques and the features of WiMAX networks, with the help of the necessary figures and tables.
  • We studied the SUI-1 to SUI-6 channel models and implemented them in Matlab simulation to evaluate the performance of Mobile WiMAX.
  • We also used and understood the adaptive modulation techniques BPSK, QPSK, 16-QAM and 64-QAM according to the IEEE 802.16d standard.

Considering all aspects of the adaptive modulation techniques, we can summarize the performance of Mobile WiMAX as follows:

  • Binary Phase Shift Keying (BPSK) is the most power-efficient and needs the least bandwidth.
  • 64-Quadrature Amplitude Modulation (64-QAM), on the other hand, requires higher bandwidth but gives very good throughput.
  • Quadrature Phase Shift Keying (QPSK) and 16-QAM fall between those two (BPSK and 64-QAM) and require more bandwidth than BPSK.
  • QPSK and 16-QAM are less power-efficient than BPSK.
  • In all our simulations, BPSK had the lowest BER and 64-QAM the highest BER of the modulation techniques considered.

We also note the following:

  • We included a Cyclic Prefix (CP) with the random signals, which mitigated the channel effects and lowered the Bit Error Rate (BER) of the OFDM system, at the cost of increased system complexity.
  • Using a Cyclic Prefix requires more transmit power than omitting it.

7.6 Future Work:

Much work remains for the future optimization of wireless communication, especially in WiMAX systems. The adaptive modulation techniques and the WiMAX physical layer can be adapted to High Altitude Platforms (HAP) and Long Term Evolution (LTE).

The implemented PHY layer model still needs some improvement. A channel estimator can be implemented to obtain a depiction of the channel state, so that the effects of the channel can be combatted with an equalizer. The IEEE 802.16 standard comes with many optional PHY layer features that can be implemented to further improve performance: the optional Block Turbo Coding (BTC) can enhance the performance of the FEC, and Space-Time Block Codes (STBC) can be employed in the DL to provide transmit diversity.

 

 


Report on Interior Design

Interior design

Interior design describes a group of various yet related projects that involve turning an interior space into an effective setting for the range of human activities that are to take place there. It’s thought that the first real forays into interior design were by the Egyptians, who would decorate their huts with various pottery pieces, textiles and murals. Furniture in Egypt was usually highly intricate and of course, the tombs of the Egyptian kings and queens are some of the most luxurious ever seen, with priceless arts, treasures and carvings being packed into several rooms.
The Greeks and Romans also prioritised making their homes beautiful, with delicate mosaics and marble pieces being extremely popular. The look at this time placed a great emphasis on luxury and expert craftsmanship, and many pieces from these ancient civilisations still survive today.
After the extravagance and opulence of the Roman period, Britain entered possibly its lowest point in art and design. The Dark Ages arrived, with austerity and sober designs becoming the order of the day. Many of the poor didn’t bother to decorate their homes at all, relying on stone flooring and simple furniture, as even the previously beautiful churches made do with simple carvings in dark wood.
Within a couple of centuries it was completely different, as the Italian Renaissance led to plenty of marble floorings and expensive upholstery being found in the homes of the rich. The other big change was the invention of wallpaper, which was made using the newly-invented printmaking machines. Wallpaper offered a nice alternative to huge tapestries and was soon found in every wealthy home across Europe; the grander and more delicate the design, the better.
Although wallpaper was initially only something that the rich had, better technology and two world wars soon saw this change. Both wars led to a shortage of paint across Britain, meaning that the demand for cheap yet attractive wallpaper rose. Soon, wallpaper had replaced paint in the nation’s affections.
The late 90s saw interest in interior design skyrocket thanks to TV makeover shows like Changing Rooms. Such programmes would take an ordinary house and attempt to transform it within just a few days, providing viewers with plenty of tips on how to make the most of their homes. Emphasis also began to be placed upon the home being a place to relax from the everyday stresses of modern life, thanks to the seemingly endless TV channels and games consoles that were being brought out at the time. As a result, living room furniture changed slightly in design, with uncomfortable showpieces being eschewed in favour of more comfortable items; recliner chairs and beautiful coffee tables were very popular.
This now brings us to the present day. Today, many property shows focus on how the design of the home can alter its ultimate value, as well as offering tips on how to entice potential buyers. Shows such as Escape to the Country have also given buyers a stronger idea on exactly what they want from their dream home.

Interior of the Egyptian period:

The Egyptians brought many innovations to the world, and their influence on interior design is no different. A traditional Egyptian interior design is laden with large furniture, area rugs and bold-colored fabrics. Adapt this style by selecting intricately carved furnishings in traditional European styles, with particular emphasis on French styles such as baroque and rococo. Use two to three couches, including chaise lounges, in the living room. In the bedroom, select large beds with high headboards in dark wood.
If possible, add marble flooring and layer it in thick, graphically designed area rugs depicting Egyptian cultural symbols. Scatter large floor pillows around the room.
Use floor-to-ceiling curtains for window treatments. Select either silk or linen fabric for the curtains, and include a cornice board upholstered in the same material.
Use artwork depicting ancient Egyptian gods, such as Bast the cat goddess and Isis the Egyptian mother goddess. The artwork can take the form of a wall mural and tell a story depicting that particular god’s adventures or relationship to the people of Egypt.
The Egyptians are known for their bold, colorful designs, ornate carvings and of course, the intricacy of the patterns and hieroglyphics. Ancient Egyptian décor incorporates all of these elements, bringing the best of Egyptian style to a modern world. One of the easiest ways to introduce ancient Egypt into your interior design scheme is through accessories. Some common accessories inspired by this culture include:

• Bastet Cat statues/paintings of Bastet cats – Cats were considered to be sacred beings by the ancient Egyptians because they were believed to personify the goddess Bastet. Bastet was thought to be the daughter of Ra and the goddess of pleasure and motherhood.
• Ankhs – The ancient Egyptian symbol of life force is often used in jewelry as well as being incorporated into interior design.
• Canopic Jars – Canopic jars were the jars Egyptians used to store the internal organs of the dead that were removed when bodies were mummified. They come in sets of four, representing the four sons of Horus, and each jar is decorated with a different god charged with protecting the organs of the dead so they could be used in the next life. Sound a tad gruesome for your home design? No internal organs involved in modern canopic jar versions – they essentially resemble Egyptian inspired Russian dolls and are exceedingly popular.
• Busts – The Egyptians loved to glorify their leaders and their gods with ornate and exceedingly complex busts, and so these designs have endured in modern Egyptian inspired designs. You can find busts of everyone from Cleopatra and Nefertiti to Bastet cats and the sun god Ra.
• Reliefs – Reliefs featuring both hieroglyphics and depictions of gods and rulers are common.
• Mummies – Of course, you can’t discuss ancient Egypt without talking about mummies. Representations of mummies are used regularly in Egyptian inspired interior design. These representations can take many different forms, from statues of mummies to reliefs and paintings of mummies and mummification process to sarcophagus statues that open up to reveal a mummy resting inside. Like canopic jars, using mummies in your home décor might sound a little creepy to some readers, but remember that to the ancient Egyptians, death was very much a stepping stone into a different kind of life and was not a tragic event – hence the idea behind the process of mummification.

• Figurines – Like the busts, ancient Egyptians also liked to honor their gods and their leaders with small figurines depicting them in battle or simply in a regal pose.
Interior of ancient Greece:
Ancient Greek Interior Structure
Both marble palaces and whitewashed stone houses are considered “ancient architecture” styles in Greek history. Elaborate stone cuttings and mortar were used to create stone areas and paths on the ground floors of Greek homes, patios and terraces. Interior walls may feature beautiful murals if a homeowner had a creative bent or knew of an artisan willing to undertake the job. Openings leading from room to room in the typical ancient Greek home were frequently curved; it was unusual for a house to have interior doors. Hand-loomed textile rugs hung from walls and covered floors, adding touches of color to individual rooms. Handcrafted curtains, usually embroidered with all sorts of colorful designs by the household’s women, hung over windows as well.

Walls, Floors and Other Features
Dwellings with a second story were fitted with wood floors, both for ease of installation and to counter the weight issue. Curved roofs, typical of architecture in the Mediterranean region, were commonplace. Not every home had a solid front door.

Ancient Greek Furnishings
Greek furniture makers followed Egyptian design lines when crafting furnishings, and materials didn’t differ much, either. Oak, cedar, olive, boxwood, maple and ebony woods in the hands of skillful Greek carpenters became chairs, tables, couches, stools and beds. Greek carpenters added distinct ornamentation to furniture, including copper, bronze and iron embellishments. Wood veneer trim also provided a popular way to decorate furniture in ancient Greece.

Interior Rooms
In ancient Greece, it wasn’t unusual for men and women to sleep apart in dormitory-like wings, particularly if a family was large. Washing areas were as important in a floor plan as the hearth. If a homeowner was wealthy enough to afford clay pipes to bring water from aqueducts, the family could bathe with fresh running water. Otherwise, women collected water from wells, brought it to the washroom and carried the dirty water back out.

Interior of the Roman period:
Houses changed greatly while Rome was growing. Until the last century of the Republic, houses were small and simple, with little decoration. Bright colours were used simply and appealingly to brighten interiors. Eventually, however, things became much more ornate.
Ceilings were vaulted and painted in brilliant colours, or they were divided into panels by beams. These ceilings are sometimes imitated by modern architects. Doors were richly paneled and carved, or plated with bronze, or made of solid bronze. Doorposts were sheathed with beautifully carved marble. Floors were covered with contrasting marble tiles or with mosaic pictures. The most famous of these is “Darius at the Battle of Isus,” measuring 16 feet by 8 feet, with about 150 separate pieces in each square inch.

Our knowledge of Roman furniture comes from the stone and metal articles that have come down to us, the restorations made from plaster casts in Pompeii and Herculaneum, and the references in literature and depictions in art. The Romans were not big fans of furniture, but what they had was usually of rare and expensive materials, fine workmanship, and graceful form. Even wealthy homeowners had mostly essential articles: couches, chairs, tables and lamps. Couches were extremely popular and often extremely ornamental. A couch (lectus) could act as a sofa or a bed. In its simplest form it would be a wooden frame, with a back and one or two arms, with straps interwoven across the top, on which a mattress was laid. A couch always had pillows, a mattress, and coverlets.

The primitive form of Roman seat was a four-legged stool or bench with no back. Some could be folded. The famous curule chair, to which only high magistrates were entitled, was a folding stool with curved legs of ivory and a purple cushion. The first improvement on the stool was the solium, a stiff, straight, high-backed chair with solid arms, so high that a footstool was necessary. This was the chair in which a patron sat when he received clients in the atrium. Poets represented it as a seat for gods and kings.
The design of tables was extremely varied. They were often very beautiful. Their supports and tops were made of fine materials – stone or wood, solid or veneered, or even covered with thin sheets of precious metal. The most expensive were round tables made from cross sections of citrus wood, the African cedar. Chests were found in every house. They were usually made of wood and often bound with iron. Small chests, used as jewel cases, were sometimes made of silver or gold. Cabinets were made of the same materials as chests and were often beautifully decorated.
Roman doors and doorways gave opportunities for equally artistic treatment. Doors were elaborately paneled and carved, or were plated with bronze, or made of solid bronze. The threshold was often of mosaic as in the picture above. The postes were sheathed with marble ordinarily carved in elaborate designs, as in the picture below.
The Romans produced heat with their charcoal stoves, or braziers. These were metal boxes which held hot coals. They were raised on legs and provided with handles.
The clock as we know it did not exist in Roman times. In the peristyle or garden there was sometimes a sundial, which measured the hours of the day by the shadow of a stick or pin. The sundial was introduced from Greece in about 268 B.C. A sundial gives the correct time twice a year if it is calculated for the spot where it stands. Since the first Roman sundials were brought from Greek cities, they did not give the exact time. The largest at Rome was set up by Augustus, who used an Egyptian obelisk for the pointer and had the lines of the dial laid out on a marble pavement.
Romans had very simple (but often very ornate) lamps: containers for olive oil or melted fat, with loosely twisted threads for the wick(s), drawn out through one or more holes in the cover or the top. There was also usually a hole through which the lamp was filled. As there was no chimney, the flame must have been uncertain and dim. Some lamps had handles. Some were suspended from ceilings with chains. Others were kept on stands. For lighting public rooms there were tall stands like those of our floor lamps, from which numerous lamps could be hung; such a stand was called a candelabrum. It must have originally been intended for candles, but they were rarely used, as the Romans were not skilled candlemakers. A supply of torches (fasces) of dry, inflammable wood, often soaked in or smeared with pitch, was kept near the outer door for use at night. Light was reflected from polished floors and from water in the impluvium.

Interior of Renaissance period:
While designing a house in the Italian Renaissance style is not an easy task, with all of the proper details and scale, furnishing one can be difficult as well. With a little research, hard work and attention to appropriate styles of mantels, lighting and architectural details you can accomplish the look. In addition, finding the appropriate furniture is important as well.
Below are antique mantels that would be appropriate for either a formal or an informal room (say, a family room). However, I would use either of them in either place.

The mirror below, and the portrait, are perfect examples of pieces that would be great for an Italian Renaissance house. Of course, I am not opposed to using French Trumeau’s such as the one mentioned in my French Interiors post to go above the mantels shown above as well.
Here are two more pieces that are great to incorporate. They also found a fountain similar to the one below that we converted into a pedestal sink for their powder bathroom by carefully drilling holes in the back and mounting a wall-mount faucet. Lamps and designed tables are also seen in these houses. These chandeliers are great examples of what would be appropriate for an Italian Renaissance house.
Color and Texture
Colors prevalent in the decor of the Renaissance era can be found in the artwork of the times. Great masters captured images of life during that time, including the fabrics and textures used. Rich velvets, shimmering gauzes and silk damasks were all used for wall coverings and draperies as well as clothing. Vibrant jewel-like colors complemented by warm earth tones and primary colors became the palette used in all aspects of tapestry, cloth, upholstery, bedding and carpets. This same style can be applied today in much the same manner. Choose rich earth-tone paint or venetian plaster for wall coloring and frame windows with a sheer gauze curtain layered with heavy drapes of jewel-toned velvet. Textured, flocked and embossed wallpaper is also available to duplicate that damask look and feel. Use accent pillows made of silk or velvet to complement dark leather furniture. Concentrating first on the color and texture you are bringing to the room will start the process of bringing this era into the current century.
Accessories and Accents
Gilding and hand-painted accents are featured on many of the accessories you can use to reflect this style. Ornate gold frames for oil paintings, big marble urns and statuary make bold statements as decorations around the room. Gold tassels for draperies are also excellent accents and add richness to the window treatments. To finish the design, place additional accents including leather and hide-bound books or journals, old pewter, tarnished copper and old leaded glass around the room.

Furnishings
Heavy furnishings in dark wood and leather are often considered Renaissance style. Furniture was chosen as much for its artistic look as for the functional nature of the piece. Hand-carved scrollwork and embellishments on legs, arms and backs of chairs, as well as on doors, legs and tops of tables and cabinets, all generate the look of this era’s furnishings. Rich upholstery and leather accents are also prevalent.


Interior of Baroque period:
Baroque style is dramatic and opulent, and can transform a simple home into something flamboyant. Used predominantly in Louis XIV's Palace of Versailles, baroque style boasts bold colors, luxurious textiles, carved wooden furniture, gold or silver trims, exquisite art pieces and crystal chandeliers. Consider the following elements if you are decorating your space in a baroque style.

Wall Treatments
Achieving a baroque style starts with walls. Choose dark and bold colors such as deep red or hunter green to enhance the look of the room dramatically. Gilded furnishings and art pieces look more prominent against a dark colored background.
Using boiserie, a type of wood paneling with intricate details, will add character to your walls. Wainscoting and moldings added to the walls can give walls more depth. Choose wallpapers with bold and sophisticated prints such as damask and floral, embellished in gold or silver.
Fabric wall treatments are also common in a baroque design. Gathering the top and bottom of the wall fabrics to create pleats can add volume. Embellish walls with mirrors similar to the Hall of Mirrors at the Palace of Versailles, where paneled mirrors hung wall to wall.
Floors

Using high-end materials such as marble or wood for your floors will do well in a baroque style. Wood and marble with inlay designs can add character and sophistication. Placing room-sized, hand-woven area rugs in European style will warm up and soften the room.
Furniture
A baroque-style room is not complete without baroque furniture. Furnishing a room with Louis XIV-style furniture pieces with curvy legs and intricate carving best characterizes a baroque-style home. Sofas and chairs should have high-end upholstery. Classic baroque furniture may have gilding on the details to make it look more prominent. Crystal knobs and pulls finish the look of writing desks and armoires.
Textiles
Drapes made with taffeta or silk and sewn in voluminous style will make the room look more extravagant. Choose woven and luxurious fabrics for the upholstery, window covering, bed cover, tablecloths and runners. Velvet and damask are prominent textile materials used for a baroque style.
Lighting
Using crystal chandeliers with gold or brass fixtures is the best way to light up a baroque-style home. Hang them in entryways, formal dining and living rooms. Adding matching wall sconces and lamps in rooms that need more light will make the room more cohesive. Use lampshades made with expensive fabrics to complement the other fabrics used on upholstery and drapery.
Art
Wall tapestries and paintings depicting the baroque period (1600-1750) complete a baroque design scheme. Use frames gilded in gold when framing paintings. Gold is a prominent color when choosing art pieces for a baroque style. Sculptures and vases should have intricate details.

Create wall and ceiling murals depicting a scene with classic European flair. For example, a waltz scene from the 17th or 18th century can adorn the walls and ceilings of the formal living or dining room.
Accessories
Braided and beaded tassels can adorn curtains and upholstered furnishings. Embellishing items with fringes, ribbons and cords can add sophistication.
Interior of Gothic period:
Gothic period design was influenced by Roman and Medieval architecture. Its initial design period was c.1150 to 1550, but saw a revival in the 19th century by the Victorians.
Gothic design was the first true ecclesiastical style and was symbolic of the triumph of the Catholic church over paganism in Europe. The new age of soaring cathedrals demanded new methods of building to support this extreme weight.
The style had a religious, symbolic base: think of old, ornate churches and you will be on the right track. Pointed arches and stained glass in complex trefoil or rose designs were predominant; exposed wooden beams, large imposing fireplaces and emulated candle lighting completed the ecclesiastical style. There was a strong vertical influence, supported by the high arches and peaks of the architecture. Light was also important, as windows grew more and more expansive and light and air flooded into the once-gloomy churches of the Romanesque period.

Gothic Furniture
Furniture was massive and of oak, adorned with Gothic motifs. Chairs, bed frames and cabinets were sturdy and featured arches, spiral-turned legs and rich upholstery in dark colors. Old church furniture such as pews, benches and trestle tables finishes the look. Victorian Gothic reproductions and Arts and Crafts era furniture can be used as acceptable alternatives, as many of the same motifs crossed over.

Gothic Color
Colors were rich and dark, as in the Victorian era. Purple, ruby, black, ochre, forest green and gold complemented the heavy furniture and rich design. Wallpaper was ornate and heavily patterned with natural flowers and foliage. Also popular were trompe-l'oeil architectural features and stenciled designs. Walls were painted in flat colors to depict stone, and often covered in wall hangings, especially tapestries. Stained glass was obviously a significant feature, ideally accented with pewter, wrought iron, suits of armour and candles. Decorative ribbing or cornices were common and elaborately carved. Heraldic emblems were seen everywhere.
Interior of Neo classic period:


Originating in the late 18th century, Neo Classical design is inspired by the architecture of Ancient Greece and Rome. Neo Classical furniture is a compilation of distinctive geometrical shapes: rounds, arches, rectangles and curves.
Neo Classical furniture is characterised by restrained, symmetrical design and tends to be rectangular. Architectural details and motifs are frequently utilised for decoration. The furniture legs are often turned and fluted in reference to classical architectural columns.
Upholstered and gilded chairs and sofas resemble thrones and exemplify a certain Napoleonic grandeur. Classical carved urns enlivened and provided structure for garden layouts.
Neo-classicism provides a grand statement with an element of restraint, a perennial favourite throughout history.


The Arts and Crafts Movement

The Arts and Crafts movement initially developed in England during the latter half of the 19th century. Subsequently this style was taken up by American designers, with somewhat different results. In the United States, the Arts and Crafts style was also known as Mission style.
This movement, which challenged the tastes of the Victorian era, was inspired by the social reform concerns of thinkers such as Walter Crane and John Ruskin, together with the ideals of the reformer and designer William Morris.
Their notions of good design were linked to their notions of a good society. This was a vision of a society in which the worker was not brutalized by the working conditions found in factories, but rather could take pride in his craftsmanship and skill. The rise of a consumer class coincided with the rise of manufactured consumer goods. In this period, manufactured goods were often poor in design and quality. Ruskin, Morris, and others proposed that it would be better for all if individual craftsmanship could be revived– the worker could then produce beautiful objects that exhibited the result of fine craftsmanship, as opposed to the shoddy products of mass production. Thus the goal was to create design that was… ” for the people and by the people, and a source of pleasure to the maker and the user.” Workers could produce beautiful objects that would enhance the lives of ordinary people, and at the same time provide decent employment for the craftsman.

Medieval Guilds
Medieval guilds provided a model for the ideal craft production system. Aesthetic ideas were also borrowed from Medieval European and Islamic sources, and Japanese ideas were incorporated into early Arts and Crafts forms. The forms of Arts and Crafts style were typically rectilinear and angular, with stylized decorative motifs reminiscent of medieval and Islamic design. In addition to William Morris, Charles Voysey was another important innovator in this style. One designer of this period, Owen Jones, published a book entitled The Grammar of Ornament, a sourcebook of historic decorative design elements largely taken from medieval and Islamic sources. This work in turn inspired the use of such historic sources by other designers.
However, in time the English Arts and Crafts movement came to stress craftsmanship at the expense of mass-market pricing. The result was exquisitely made and decorated pieces that could only be afforded by the very wealthy. Thus the idea of art for the people was lost, and only relatively few craftsmen could be employed making these fine pieces. This evolved English Arts and Crafts style came to be known as "Aesthetic Style." It shared some characteristics with the French/Belgian Art Nouveau movement, discussed below.

In the United States, however, the Arts and Crafts ideal of design for the masses was more fully realized, though at the expense of the fine individualized craftsmanship typical of the English style. In New York, Gustav Stickley was trying to serve a burgeoning market of middle-class consumers who wanted affordable, decent-looking furniture. By using factory methods to produce basic components, and utilizing craftsmen to finish and assemble, he was able to produce sturdy, serviceable furniture which was sold in vast quantities and still survives. The rectilinear, simpler American Arts and Crafts forms came to dominate American architecture, interiors and furnishings in the late nineteenth and early twentieth centuries.

Today Stickley’s furniture is prized by collectors, and the Stickley Company still exists, producing reproductions of the original Stickley designs.

The term Mission style was also used to describe Arts and Crafts Furniture and design in the United States. The use of this term reflects the influence of traditional furnishings and interiors from the American Southwest, which had many features in common with the earlier British Arts and Crafts forms. Charles and Henry Greene were important Mission style architects working in California. Southwestern style also incorporated Hispanic elements associated with the early Mission and Spanish architecture, and Native American design. The result was a blending of the arts and crafts rectilinear forms with traditional Spanish colonial architecture and furnishings. Mission Style interiors were often embellished with Native American patterns, or actual Southwestern Native American artifacts such as rugs, pottery, and baskets. The collecting of Southwestern artifacts became very popular in the first quarter of the twentieth century.
Art Nouveau

This style, which was more or less concurrent with the Arts and Crafts style, was not at all concerned with the social reform movements of the day. Instead, it addressed the clutter and eclecticism of mid-19th century European taste. Originating in Belgium and France, this movement advocated nature as the true source of all good design. Art Nouveau designers objected to the borrowing of design ideas from the past, and even from other cultures, although the Japanese approach to nature was much admired and emulated.

The characteristics of the style included above all the use of the sinuous curved line, together with asymmetrical arrangement of forms and patterns. The forms from nature most popular with Art Nouveau designers were characterized by flowing curves– grasses, lilies, vines, and the like. Other, more unusual natural forms were also used, such as peacock feathers, butterflies, and insects.

Architects and designers who contributed to the development of this style included Victor Horta, Hector Guimard and Henry van de Velde. The glass and jewelry designs of Lalique, as well as the stained glass and other designs of Louis Comfort Tiffany and Emile Galle, were important examples of Art Nouveau style.
A distinctive graphic design style developed, which included typography styles as well as a distinctive manner of drawing the female figure. The prints of Aubrey Beardsley and Alphonse Mucha are typical of this style.

Bauhaus

Bauhaus is an international movement known for inspiring 20th-century modern architecture and design. It was named after the Bauhaus School of Architecture and Fine Art in Weimar, whose thought-provoking ideas spread worldwide and had a great influence on architecture and all the arts. The origins of the Bauhaus movement of modern art and architecture date back to the controversial new school of arts and crafts established in Weimar in 1902 by the Belgian artist Henry van de Velde. Another art school, founded in 1860, was also the subject of disputes.
The pioneering architect Walter Gropius combined both schools into the Staatliches Bauhaus on April 1, 1919 to start the Bauhaus movement which spread around the world. In 1919, Weimar had become the center of new social and political ideas when the city was chosen as the place for the writing of the constitution of the new Republic proclaimed by the Social Democrats on Nov. 9, 1918.
The central idea behind the teaching at the Bauhaus was the productive workshop. The Bauhaus contained a carpenter's workshop, a metal workshop, a pottery in Dornburg, and facilities for painting on glass, mural painting, weaving, printing, and wood and stone sculpting. Bauhaus architecture featured functional design, as opposed to the elaborate Gothic architecture of Germany. Famous modern artists like Paul Klee, Lyonel Feininger and Kandinsky were invited to lecture at the school.

Art Deco
Art Deco was a popular international design movement from 1925 until 1939, affecting the decorative arts such as architecture, interior design, and industrial design, as well as the visual arts such as fashion, painting, the graphic arts and film. This movement was an amalgam of many different styles and movements of the early twentieth century, including Neoclassical, Constructivism, Cubism, Modernism, Bauhaus, Art Nouveau, and Futurism.

art deco
Art Deco experienced a decline in popularity during the late 1930s and early 1940s, and soon fell out of public favor. The time frame was roughly from the World's Fair in Paris in 1925 to the World's Fair in New York in 1939. Afterward, Art Deco experienced a resurgence with the advent of graphic design in the 1980s. Surviving examples may still be seen in many locations worldwide, in countries as diverse as the United Kingdom, Cuba, the Philippines and Brazil. Many classic examples still exist in the form of architecture in major cities. The Chrysler Building, designed by William Van Alen, is one of the most notable examples of Art Deco architecture today. Other prominent examples include the Empire State Building and the New Yorker Hotel in New York City.
While most of the modern art movements were grounded in ideology, Art Deco was a celebration of modern life and style, seeking elegance over philosophical content.
Famous interior designers:

Isamu Noguchi (1904-1988)
An important American abstract sculptor who also designed furniture and lighting products from time to time.
Charles Eames (1907-1978)
An architect and designer, Eames is best known for his 1956 design of a leather lounge chair and ottoman, utilizing molded plywood units supported on cast aluminium bases.
Herman Miller
An American furniture manufacturer which, under the design direction of Gilbert Rohde in the 1930s, began producing modern furniture.
Other famous designers include Karim Rashid, Laura Day, Shela Baridge, Tanya Ghayni, William Mcintosh and William Morris.

Modern furniture:
Shanto-Mariam University of Creative Technology



Assignment on Photovoltaic System

Status of Photovoltaic System Designs

Major categories of PV system designs include grid-connected without storage, grid-connected with storage, and off-grid.

Grid-Connected with No Storage

The major elements of a grid-connected PV system that does not include storage are shown in Figure (31). The inverter may simply fix the voltage at which the array operates, or (more commonly) use a maximum power point (MPP) tracking function to identify the best operating voltage for the array. The inverter operates in phase with the grid (unity power factor) and generally delivers as much power as it can to the electric power grid given the available sunlight and temperature. The inverter acts as a current source: it produces a sinusoidal output current but does not regulate its terminal voltage in any way.

The utility connection can be made by connecting to a circuit breaker on a distribution panel or by a service tap between the distribution panel and the utility meter. Either way, the PV generation reduces the power taken from the utility power grid, and may provide a net power flow into the utility power grid if the interconnection rules permit.


Figure (31): Grid-connected PV power system with no storage

A simplified equivalent circuit of the same basic grid-connected system is shown in Figure (32). The PV system typically appears to the grid as a controlled current source; local loads may consist of resistive, inductive, and capacitive elements; and the utility source is represented by its Thevenin-equivalent model (voltage source V_Utility with series impedance Z_Utility). The local loads within a single residence rarely include much capacitance, but if a whole neighborhood is modeled at once, voltage-support capacitors maintained by the utility may contribute significantly to the local load mix. This can create conditions that fool the inverter into continuing to run even if the utility becomes disconnected (unintentional islanding). The utility source impedance models such things as the impedances of transformers and cables. The inverter handles all grid-interface functions (synchronization, over/under-voltage [OV/UV] and over/under-frequency [OF/UF] disconnects, anti-islanding) and PV array control functions (MPP tracking).


Figure (32): Schematic drawing of a modern grid-connected PV system with no storage.

The ratio of PV system size to local load demand may be small enough that reverse power flow from the PV to the utility never occurs, but at high penetration the magnitude of the reverse power flow at midday is likely to exceed the magnitude of the nighttime load power. As shown in Figure (33), if we try to make the generation energy (area of the red hump) equal to the load energy (blue area), the daytime power production (peak of the red generation hump at solar noon) is likely to exceed the peak load power flow, because most loads draw power all night when the PV system cannot supply power. For this residential load example, the peak load power flow is a double peak in late evening, which highlights the time misalignment that can occur between residential load and PV generation. Fortunately, commercial loads peak in the early afternoon, so the total PV generation in a utility system can reduce the peak system load, even though it may have no impact on the peak load at the residence where the PV is installed.
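This mismatch can be made concrete with a toy hourly calculation. All profile numbers below are invented: even when the PV system is scaled so its daily generation energy exactly equals the daily load energy, the midday generation peak exceeds the load peak and forces reverse power flow.

```python
import math

# Hypothetical hourly residential load profile for one day (kW),
# with an evening double peak; numbers are illustrative only.
load = [0.8]*6 + [1.0, 1.2, 1.0, 0.9, 0.9, 1.0, 1.0, 0.9, 0.9, 1.0, 1.2, 1.5,
        2.0, 2.2, 1.8, 1.4, 1.0, 0.8]

# Idealized PV shape: half-sine between 06:00 and 18:00
pv_shape = [max(0.0, math.sin(math.pi * (h - 6) / 12)) for h in range(24)]

# Scale PV so daily generation energy equals daily load energy
scale = sum(load) / sum(pv_shape)
pv = [scale * s for s in pv_shape]

net = [p - l for p, l in zip(pv, load)]  # positive = reverse flow to grid
print(f"daily load energy    : {sum(load):.1f} kWh")
print(f"daily PV energy      : {sum(pv):.1f} kWh")
print(f"PV peak at solar noon: {max(pv):.1f} kW vs load peak {max(load):.1f} kW")
print(f"max reverse power    : {max(net):.1f} kW")
```

Even with equal daily energies, the noon PV peak is well above the load at that hour, so the surplus must flow back into the feeder.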


Figure (33): Power flows required to match PV energy generation with load energy consumption.

As part of this work, an extensive literature search was conducted to assess the current body of knowledge of expected problems associated with high penetration levels of grid-tied PV. The results of that literature survey are presented here.

Several studies have examined the possible impacts of high levels of utility penetration of this type of PV system. One of the first issues studied was the impact on power system operation of PV output fluctuations caused by cloud transients. A 1985 study in Arizona examined cloud-transient effects for PV deployed as a central-station plant and found that the maximum tolerable system-level PV penetration was approximately 5%. The limit was imposed by the transient-following capabilities (ramp rates) of the conventional generators. Another paper published in that same year [63], on the operating experience of the Southern California Edison central-station PV plant at Hesperia, California, reported no such problems, but noted that this plant had a very stiff connection to the grid and represented a very low PV penetration level at its point of interconnection.

In 1989, a paper describing a study on harmonics at the Gardner, Massachusetts, PV project was released [64]. The 56 kilowatts (kW) of PV at Gardner represented a PV penetration level of 37%, and the inverters (APCC Sun Shines) were among the first generation of true sine wave pulse width modulation inverters. All the PV homes were placed on the end of a single phase of a 13.8 kV feeder. This was done intentionally:

Selection of the houses comprising the Gardner Model PV Community was predicated on establishing a high saturation of inverters as may become typical on New England distribution feeders in the next century.

The impact of high penetrations of PV on grid frequency regulation appeared in a 1996 paper from Japan [65]. This study used modeled PV systems that respond to synthetically generated short-term irradiance transients caused by clouds. The study looked at system frequency regulation and the break-even cost, which accounts for fuel savings when PV is substituted for peaking or base-load generation, and for PV cost. The paper reaches three interesting conclusions: (1) the break-even cost of PV is unacceptably high unless PV penetration reaches 10% or so; (2) the thermal generation capacity used for frequency control increases more rapidly than first thought; and (3) a 2.5% increase in frequency-control capacity over the no-PV case is required when PV penetration reaches 10%. For PV penetration of 30%, the authors found that a 10% increase in frequency-regulation capacity was required, and that the cost of doing this exceeds any benefit. Based on these two competing considerations, the authors conclude that the upper limit on PV penetration is 10%.

Between 1996 and 2002, a series of reports was produced by an International Energy Agency working group on Task V of the Photovoltaic Power Systems Implementing Agreement. Unintentional islanding, capacity value, certification requirements, and demonstration project results were all the subjects of reports, but the one of primary importance here dealt with voltage rise. This report focused on three configurations of high-penetration PV in the low-voltage distribution network: all PV on one feeder; PV distributed among all feeders on a medium-voltage/low-voltage (MV/LV) transformer; and PV on all MV/LV transformers on an MV ring. The study concludes that the maximum PV penetration is equal to the minimum load on that specific feeder. That minimum load was assumed in [66] to be 25% of the maximum load on the feeder, and if the PV penetration was 25% of the maximum load, only insignificant overvoltages occurred. Any higher PV penetration level increased the overvoltages at minimum-loading conditions to an unacceptable level. The study assumed that the MV/LV transformers do not have automatic tap changers (their taps are assumed to be set manually). [66]
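The sizing rule from [66] reduces to simple arithmetic. A sketch, assuming (as that study did) a minimum feeder load of 25% of the peak load; both the helper name and the example feeder size are invented:

```python
def max_pv_penetration_kw(feeder_peak_load_kw, min_load_fraction=0.25):
    """Upper bound on installed PV on a feeder under the voltage-rise
    criterion summarized above: PV output may not exceed the feeder's
    minimum load, here assumed to be a fixed fraction of the peak load.
    Purely illustrative."""
    return feeder_peak_load_kw * min_load_fraction

# Example: a feeder with a 400 kW peak load (hypothetical figure)
print(max_pv_penetration_kw(400))  # → 100.0
```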

Grid-Connected with Storage:

 

Figure (34) shows two basic storage architectures commonly found with grid-connected PV systems. Figure (34)(a) shows an architecture that many older systems have used, where a separate battery charge-control device manages power collected from the PV array. This arrangement leaves the inverter to provide backup battery charging from the utility power grid when insufficient PV power is available, but does not allow efficient extraction of excess PV power for supply to the grid when the batteries are fully charged. Figure (34)(b) shows an architecture that is more common in modern grid-connected PV power systems, in which the inverter directs the PV array power optimally to the batteries or the utility power grid as appropriate.


Figure (34): Grid-connected PV systems with storage using (a) separate PV charge control and inverter charge control, and (b) integrated charge control.

In both cases, storage provides the opportunity to supply power to critical loads during a utility outage. This feature is not available without storage.

As with the grid-connected only configuration described previously, PV generation reduces the power taken from the utility power grid, and may in fact provide a net flow of power into the utility power grid if the interconnection rules permit. Storage has been traditionally deployed for the critical load benefit of the utility customer in the United States, but the Ota City High Penetration PV project [67] deployed local storage as an alternate destination for energy collected during low load periods to prevent voltage rise from reverse power flow in the distribution system.

 

Off-Grid with Storage:

 

Off-grid PV systems may include electricity storage or other storage (such as water in tanks), and other generation sources, to form a hybrid system. Figure (35) shows the major components of an off-grid PV system with electricity storage, no additional generators, and AC loads. In a system of this type, correctly sizing the energy storage capacity is a critical factor in ensuring a low loss-of-load probability [68].

Figure (35): Off-grid PV system with storage

In this system configuration, the inverter acts as a voltage source, which is in contrast to the grid-tied system. The stand-alone inverter determines the voltage wave shape, amplitude, and frequency. To maintain the voltage, the inverter must supply current surges, such as those demanded by motors upon startup, and whatever reactive power is demanded by the loads.

Many stand-alone PV systems include engine-generator sets. In most cases, the generators are thought of as backup generators that are operated only during periods of low sunlight or excessive load that deplete the energy storage to some minimum allowed state of charge. The inverter senses a low battery voltage condition and then starts the generator. The generator usually produces 60-hertz (Hz) AC power directly, and thus when it starts, it powers the loads directly (the power to the loads does not pass through the inverter). The inverter operates as a rectifier and battery charger, drawing generator power to recharge the batteries. The system continues in this mode until the batteries are recharged. The generator is then stopped, and the inverter resumes regulation of the AC bus voltage, drawing power from the PV and batteries.
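The start/stop sequence described above amounts to a hysteresis rule on the battery state of charge. A minimal sketch with assumed thresholds (a real system would typically trigger on battery voltage, as the text notes):

```python
def generator_control(soc, generator_on, soc_start=0.3, soc_stop=0.9):
    """Hysteresis rule for the backup generator: start when storage
    drops to a minimum state of charge, keep charging until the
    batteries are recharged, then hand the AC bus back to the
    inverter. Threshold values are assumptions for illustration."""
    if not generator_on and soc <= soc_start:
        return True   # low battery: generator powers loads and recharges batteries
    if generator_on and soc >= soc_stop:
        return False  # batteries recharged: inverter resumes AC regulation
    return generator_on

# Walk through a sequence of state-of-charge readings
state = False
history = []
for soc in [0.8, 0.5, 0.3, 0.4, 0.7, 0.9, 0.95]:
    state = generator_control(soc, state)
    history.append(state)
print(history)  # generator runs from the 0.3 reading until SOC reaches 0.9
```

The two separate thresholds prevent rapid on/off cycling of the generator around a single set point.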

 

Energy Payback System for PV:

Producing electricity with photovoltaics (PV) emits no pollution, produces no greenhouse gases, and uses no finite fossil-fuel resources. The environmental benefits of PV are great. But just as it takes money to make money, it also takes energy to save energy. The term "energy payback" captures this idea: how long does a PV system have to operate to recover the energy, and the associated pollutant and CO2 emissions, that went into making the system in the first place?

Energy payback estimates for rooftop and ground-mounted PV systems are roughly the same, depending on the technology and type of framing used. Paybacks for multicrystalline modules are 4 years for systems using recent technology and 2 years for anticipated technology. For thin-film modules, paybacks are 3 years using recent technology, and just 1 year for anticipated thin-film technology (see Figure 1). With assumed life expectancies of 30 years, and taking into account the fossil-fuel-based energy used in manufacture, 87% to 97% of the energy that PV systems generate will not be burdened by pollution, greenhouse gases, or depletion of resources.
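The payback arithmetic behind these figures is simple division. A sketch with assumed round numbers (not the study's actual data): a 4-year payback on a 30-year lifetime leaves about 87% of lifetime output beyond the energy debt, matching the lower bound quoted above.

```python
def energy_payback_years(embodied_kwh_per_kwp, annual_yield_kwh_per_kwp):
    """Energy payback: embodied (manufacturing) energy divided by
    annual generation, both normalized per kWp of rated power."""
    return embodied_kwh_per_kwp / annual_yield_kwh_per_kwp

# Illustrative inputs (assumed, not from the text): ~4000 kWh/kWp of
# embodied energy and ~1000 kWh/kWp generated per year.
payback = energy_payback_years(4000, 1000)
life = 30  # assumed module lifetime in years, as in the text

print(f"energy payback: {payback:.1f} years")
# Share of lifetime output produced after the energy debt is repaid:
print(f"net-clean fraction: {100 * (life - payback) / life:.0f}%")
```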

Based on models and real data, the idea that PV cannot pay back its energy investment is simply a myth. Indeed, researchers Dones and Frischknecht found that PV-system fabrication and fossil-fuel energy production have similar energy payback periods (including costs for mining, transportation, refining, and construction). [69]

 

Energetic Performance of PV System:

Yield Calculation and Monitoring

Yield and Losses

A grid-connected PV system consists mainly of a PV array and an inverter. In order to evaluate the energetic performance of such a system, the energy yield and losses at the different conversion steps are normalized to the power values under STC. The method and nomenclature introduced here are common practice in PV system engineering [70, 71]. Yield and losses can be allocated to the different components of a grid-connected PV system as shown in Figure (36).

The reference yield is defined as solar irradiation on the tilted plane normalized to the solar irradiance under STC, hence,

Y_r = H_t / G_STC,                                                (1)

where H_t is the solar irradiation on the tilted plane and G_STC is the solar irradiance under STC.

It is expressed in hours or “kWh / kWp”.

Array yield Y_A and final yield Y_f are calculated by normalizing the energy before, respectively after, passing the inverter to the rated power P_STC of the PV array under STC:

Y_A = E_DC / P_STC,    Y_f = E_AC / P_STC.                        (3)

In practice, these values are mostly given in "kWh / kWp". Accordingly, the capture losses and system losses are calculated as

L_c = Y_r − Y_A,    L_s = Y_A − Y_f.                              (2)

The term system losses may be misleading. It originates from stand-alone and hybrid PV systems and includes all losses that are not capture losses. For a grid-connected PV installation, the system losses are mainly inverter losses.

Yield and losses as defined above are usually calculated either as annual values or as daily mean values during a specified period such as a day, month or year. Nevertheless, analogous figures can be calculated based on instantaneous irradiance and power instead of irradiation and energy; for such an analysis of the power flow in PV systems, the literature proposes the same nomenclature but with lower-case letters [71].

Figure (36): Energy flow in a grid-connected PV system.

In order to assess and compare the performance of PV systems over several years and for different sites, independently of variations of the solar resources, the array yield and final yield are normalized to the reference yield. The performance ratio is defined as

PR = Y_f / Y_r.                                                   (4)

It can be interpreted as the ratio between the actual system yield and the yield of an ideal system, always operating with the conversion efficiency of the PV array under STC.
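The normalized yields, losses and performance ratio defined above amount to a handful of divisions; as a minimal sketch (the function name and the numeric inputs are invented for illustration):

```python
G_STC = 1.0  # solar irradiance under STC, kW/m^2

def performance_figures(h_tilt, e_dc, e_ac, p_rated):
    """h_tilt: irradiation on the tilted plane [kWh/m^2];
    e_dc, e_ac: energy before / after the inverter [kWh];
    p_rated: rated array power under STC [kWp]."""
    y_r = h_tilt / G_STC       # reference yield [kWh/kWp]
    y_a = e_dc / p_rated       # array yield
    y_f = e_ac / p_rated       # final yield
    l_c = y_r - y_a            # capture losses
    l_s = y_a - y_f            # system losses (mainly inverter losses)
    pr = y_f / y_r             # performance ratio
    return y_r, y_a, y_f, l_c, l_s, pr

# Hypothetical year: 1100 kWh/m^2 on the tilted plane, 5 kWp array
y_r, y_a, y_f, l_c, l_s, pr = performance_figures(1100, 4600, 4300, 5.0)
print(round(pr, 3))   # 0.782
```

The example lands close to the performance ratios around 0.70 to 0.75 reported in the monitoring programmes discussed below.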

A performance database of PV systems worldwide is maintained within the International Energy Agency's Photovoltaic Power System Programme (IEA PVPS). PV systems installed in Germany before 1995 exhibit a mean performance ratio of 0.74 [70]. The development can be explained by learning effects in PV system engineering during the national 1000-Roof-PV-Programme, running from 1991 to 1994 [71]. The mean performance ratio of PV systems in Japan installed in the 1990s is situated around 0.73 and does not vary much over the years [76]. In the Belgian project for PV at schools, the mean performance ratio of all systems in 1990 was 0.70 [77].

Common to all monitoring programmes was a very high spread of the annual performance ratio among the installations. The IEA PVPS concludes on this basis that via further optimization "average annual PR values of higher than 0.75 are to be achieved for well-planned PV systems." [70]

By assuming a realistic performance ratio, the annual final yield for a given site can be estimated based on long-term mean values of solar irradiation. Table 2 shows irradiation and expected yield values for different sites in Belgium.
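The yield estimate just described reduces to one multiplication; a minimal sketch with invented numbers (the function name is ours):

```python
def annual_energy_kwh(h_annual, pr, p_rated, g_stc=1.0):
    """Estimate annual AC energy from long-term irradiation h_annual
    [kWh/m^2], an assumed performance ratio pr, and rated power [kWp]."""
    return pr * (h_annual / g_stc) * p_rated

# e.g. 1000 kWh/m^2 per year, PR = 0.75, 3 kWp system
print(round(annual_energy_kwh(1000, 0.75, 3.0)))   # 2250
```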

A low performance ratio can be due either to shadowing of the PV array or to flaws in the system design. Typical causes of bad performance are inverter failure, a low inverter efficiency, and an increased array temperature due to bad ventilation of the array's back side. A moderate shadowing-related reduction in yield is tolerated in many cases. Nevertheless, additional electrical losses can occur as a consequence of PV module and string mismatch with partial shadowing. Yet these losses can be minimized already in the planning phase if the array arrangement is adapted to the particular shadowing situation.

 

Monitoring:

The monitoring of photovoltaic systems in Europe should follow IEC 61724 [79] and the guidelines of the European Commission [70]. These guidelines are meant to define "a minimum set of quantities as well as standard recording techniques" [70]. Additional requirements may be specified, as, for example, in the Dutch guideline for PV monitoring systems.

The guidelines of the European Commission distinguish between "analytical monitoring" and "global monitoring" [70]. For global monitoring, the integral values of reference yield, array yield and final yield are manually recorded for periods of one month or shorter. Global monitoring is usually applied in order to verify the trouble-free operation of a PV system based on its energy balance. Conversely, analytical monitoring is necessary in order to study the system performance in detail and to draw conclusions regarding yield reductions and the potential for future improvements.

Analytical monitoring includes the measurement of the system voltage, the current, and the power on the DC and AC sides of the inverter. Moreover, the measurement of the ambient temperature is required. Although not required in [69], often the individual string currents are measured as well, and the PV cell temperature via the module's back side. The precision of the data should be within 2% of full scale, that is, class 2. According to [70], the data must be recorded as hourly averages, sampled at one value per minute at least. In the Dutch guidelines, ten-minute averages are required, sampled at one value per second. In the recent Japanese monitoring programme, even one-minute averages are recorded.


Assignment on Inverter

Introduction of Inverter Technology

In the grid-interconnected photovoltaic power system, the DC output power of the photovoltaic array must be converted into the AC power of the utility power system; under this condition, an inverter to convert DC power into AC power is required. Apart from the solar panels, the core technology associated with these systems is a power-conditioning unit (inverter) that makes the solar output electrically compatible with the utility grid.

Most inverters in the mid-1990s consisted of a central inverter with a DC power rating above 1 kW. They connect several solar panel strings in parallel via a DC bus. However, this concept has the drawbacks of causing a complete loss of generation during an inverter outage and of losses due to the mismatch of strings. Later, string inverters, which are designed for a system of one string of panels, were used to lessen these problems and have become popular nowadays. With further system decentralization, the concept of the "AC module" was introduced. Every solar panel has a module-integrated inverter of power rating below 500 W mounted on the backside [80]–[84]. This panel-inverter integration allows a direct connection to the grid and provides the highest system flexibility and expandability. It also offers the possibility to overcome problems with respect to high DC voltage levels, safety, cable losses, and the risk of DC arcs, and to achieve a high energy yield in case the system suffers from shading, due to the lack of mutual influence among modules' operating points. Typical structures of the AC module consist of several power conversion stages (Figure (37)).


Figure (37): Typical structures of grid-connected PV systems. (a) With voltage-fed, self-commutated inverter switching at high frequency; (b) with current-fed, grid-commutated inverter switching at the grid frequency.

The line-commutated inverter uses a switching device such as a commutating thyristor that can control the timing of turn-on but cannot control the timing of turn-off by itself. Turn-off must be performed by reducing the circuit current to zero with the help of a supplemental circuit or source. Conversely, the self-commutated inverter is characterized in that it uses a switching device that can freely control the ON state and the OFF state, such as an IGBT or MOSFET. The self-commutated inverter can freely control the voltage and current waveforms at the AC side, adjust the power factor, and suppress harmonic currents, and it is highly resistant to utility system disturbances. Due to advances in switching devices, most inverters for distributed power sources such as photovoltaic power generation now employ a self-commutated inverter. The front stage has a maximum power point (MPP) tracker for maximizing the output power of the panel, because the maximum power drawn from the panel varies with temperature and insolation. The grid-connected stage uses a full-bridge inverter toward the grid, either self-commutated with a high switching frequency [Fig. 37(a)],

or grid-commutated at the grid frequency [Fig. 37(b)]. In the former structure [Fig. 37(a)], the panel voltage is first boosted to the grid level together with the tracker. The DC/AC conversion stage, which is usually a pulse-width-modulated (PWM) voltage-source inverter, shapes and inverts the output current. A high-frequency filter is used to eliminate the high-frequency component at the inverter output. In the latter structure [Fig. 37(b)], the tracker, voltage boost, and output current shaping are performed in the front stage. The full bridge is switched at the grid frequency for inverting the shaped output current [85]. There are various types of inverters, as shown in Figure (38).


Figure (38): Classification of inverter types.

Self-commutated inverters include voltage and current types. The voltage type is a system in which the DC side is a voltage source and a voltage waveform of constant amplitude and variable width can be obtained at the AC side. The current type is a system in which the DC side is a current source and a current waveform of constant amplitude and variable width can be obtained at the AC side. In the case of photovoltaic power generation, the DC output of the photovoltaic array is a voltage source; thus, a voltage-type inverter is employed. The voltage-type inverter can be operated as either a voltage source or a current source when viewed from the AC side, only by changing the control scheme of the inverter.

When control is performed as the voltage source (the voltage control scheme), the voltage value to be output is applied as a reference value, and control is performed to obtain the voltage waveform corresponding to the reference value. PWM control is used for waveform control. This system determines switching timing by comparing the waveform of the sinusoidal wave to be output with a high-frequency triangular waveform, leading to a pulse row of constant amplitude and varying width. In this system, a waveform having fewer lower-order harmonic components can be obtained. On the other hand, when control is performed as the current source (the current control scheme), the instantaneous waveform of the current to be output is applied as the reference value. The switching device is turned on and off to change the output voltage so that the actual output current agrees with the current reference value within a certain tolerance. Although the output voltage waveforms of the voltage control scheme and the current control scheme look substantially the same, their characteristics are different because the object being controlled is different. The table below shows the differences between the voltage control scheme and the current control scheme. In the case of an isolated power source without any grid interconnection, the voltage control scheme must be used. However, both voltage-control and current-control schemes can be used for the grid-interconnection inverter. The current-control scheme is extensively used for the inverter of a grid-interconnected photovoltaic power system because a high power factor can be obtained with a simple control circuit, and transient current suppression is possible when disturbances such as voltage changes occur in the utility power system. Figure (39) shows a configuration example of the control circuit of the voltage-type current-control scheme inverter.

 

Table: Difference between the voltage control scheme and the current control scheme inverter

|                             | Voltage control scheme | Current control scheme |
|-----------------------------|------------------------|------------------------|
| Inverter main circuit       | Self-commutated voltage source inverter (DC voltage source) | Self-commutated voltage source inverter (DC voltage source) |
| Control objective           | AC voltage             | AC current             |
| Fault short-circuit current | High                   | Low (limited to rated current) |
| Stand-alone operation       | Possible               | Not possible           |

Figure (39): Configuration example of the control circuit of the voltage-type current-control scheme inverter.

Types of Inverter

There are various types of inverter system configurations. However, the self-commutated inverter is usually used in a system with a relatively small capacity of several kW, such as a photovoltaic power system. This situation is reflected well by the results of this survey, which show that the self-commutated voltage-type inverter is employed in all surveyed inverters, from capacities of 1 kW or under up to 100 kW. The output waveform is adjusted by PWM control, which is capable of producing output with fewer harmonics. The current control scheme is mainly used, as shown in Figure (40); however, some inverters employ the voltage control scheme. The current control scheme is employed more widely because a high power factor can be obtained with simple control circuits, and transient current suppression is possible when disturbances such as voltage changes occur in the utility power system. In the current control scheme, operation as an isolated power source is difficult, but there are no problems with grid-interconnection operation.


Figure (40): Ratio of current-controlled scheme and voltage-controlled scheme inverters.

  Switching Devices:

To effectively perform PWM control for the inverter, high-frequency switching by the semiconductor switching device is essential. Due to advances in the manufacturing technology of semiconductor elements, these high-speed switching devices can now be used. The Insulated Gate Bipolar Transistor (IGBT) and the Metal Oxide Semiconductor Field Effect Transistor (MOSFET) are mainly used as switching devices. IGBTs are used in 62% of the surveyed products, and MOSFETs in the remaining 38%. Regarding differences in characteristics between IGBT and MOSFET: the switching frequency of the IGBT is around 20 kHz, and IGBTs can be used even for large-capacity inverters exceeding 100 kW, while the switching frequency of the MOSFET reaches up to 800 kHz, but its power capacity is reduced at higher frequencies. In the output power range between 1 kW and 10 kW, the switching frequency is about 20 kHz; thus, both IGBT and MOSFET can be used. High-frequency switching can reduce harmonics in the output current as well as the size and weight of an inverter.

A PWM inverter used for grid connection usually operates as a current source: while voltage and frequency are determined by the grid, it injects a maximum current into the grid, depending on the DC power available. The power factor of the inverter bridge is usually set to unity [56].

A control algorithm is implemented in order to adapt the inverter's operating point to variations in the I-U curve of the PV array due to temperature or irradiance variations. This so-called MPP tracker continuously checks whether the inverter is operating at the array's MPP and, if necessary, adjusts the inverter's output current.
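One common way to implement such a tracker is the perturb-and-observe method; the sketch below is our own illustration with a toy I-U curve and step size, not the algorithm of any particular inverter:

```python
def pv_power(v):
    """Toy PV curve: current falls linearly from 8 A at 0 V to 0 A at 40 V."""
    i = max(0.0, 8.0 * (1 - v / 40.0))
    return v * i

def perturb_and_observe(v0=20.0, dv=0.5, steps=200):
    """Step the operating voltage; reverse direction whenever power drops."""
    v, p_prev, direction = v0, pv_power(v0), 1
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:          # power dropped: perturb the other way
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(round(v_mpp, 1))   # settles around 20 V, the toy curve's MPP
```

As the printout shows, the operating point oscillates by one step around the maximum, which is the characteristic (and the main drawback) of plain perturb-and-observe tracking.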


Figure (41): Basic scheme of a PWM inverter with low-frequency transformer for PV grid connection.

 2.3 Operational Conditions

 2.3.1 Operational AC voltage and frequency range

The inverter should operate without problems during normal fluctuations of voltage and frequency at the utility grid side. Accordingly, the operable range of the inverter is determined by the conditions at the AC utility grid side. Because the conditions of the distribution system for interconnection differ by country, the operable range of the inverter also differs by country. The standard voltage and frequency for a single-phase circuit are 230 V and 50 Hz in Europe, 101/202 V and 50/60 Hz in Japan, and 120/240 V and 60 Hz in the USA. The standard voltage and frequency for a three-phase circuit are 380/400 V and 50 Hz in Europe, 202 V and 50/60 Hz in Japan, and 480 V and 60 Hz in the USA. For these standard values, the inverter can be operated substantially without any problems within tolerances of +10% to –15% for the voltage and ±0.4 to 1% for the frequency.

 2.3.2 Operational DC voltage range

On the other hand, the operable range of the DC voltage differs according to the rated power of the inverter, the rated voltage of the AC utility grid system, and design policy, and various values are employed. In this survey, the operable DC voltage ranges for a capacity of 1 kW or below include 14-25 V, 27-50 V, 45-100 V, 48-120 V, and 55-110 V. The operable DC voltage ranges for a capacity of 1 kW to 10 kW include 40-95 V, 72-145 V, 75-225 V, 100-350 V, 125-375 V, 139-400 V, 150-500 V, 250-600 V, and 350-750 V. The operable DC voltage ranges for a capacity of 10 kW or over include 200-500 V and 450-800 V.

Applicable PV array power

Figure (42) shows the results of the survey for the applicable rated power of the PV array relative to the rated output power of the inverter. Although it cannot be defined unconditionally, because the array output power differs according to conditions (latitude, angle of inclination of the module, etc.) in the area in which the photovoltaic power system is installed, a PV array with a rated output power of about 1.3 times the rated output power of the inverter can be applied on average.


Figure (42):  PV rated power distribution

 AC harmonic current from inverter

For the characteristics of the inverter, minimization of harmonic current production is required. As described in the Report of Task 5, "Utility Aspects of Grid Interconnected PV Systems," Report IEA-PVPS T5-01: 1998, December 1998, harmonic current adversely affects load appliances connected to the distribution system and can impair them when the harmonic current increases. As described in Chapter 2, because the PWM control scheme is employed for output waveform control of the inverter, the harmonic current from the inverter is very small, raising few problems. The results of this survey show that the Total Harmonic Distortion (THD), the total distortion factor of the current normalized by the rated fundamental current of the inverter, is 3 to 5%.
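The THD figure quoted above is the RMS sum of the harmonic currents divided by the rated fundamental; a minimal sketch, with invented harmonic amplitudes:

```python
import math

def thd(i_fund_rated, harmonics):
    """harmonics: RMS amplitudes of the 2nd, 3rd, ... harmonic currents [A]."""
    return math.sqrt(sum(i * i for i in harmonics)) / i_fund_rated

# 10 A rated fundamental with a few small harmonic components
print(round(100 * thd(10.0, [0.3, 0.2, 0.1]), 2))   # 3.74 (percent)
```

The example lands inside the 3 to 5% range reported by the survey.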

3.1 Inverter System Cost:

The cost of the inverter system is an important element when considering the economy of a photovoltaic power system. Here, the cost of the inverter system, including the control device and the protective device, is summarized. The cost of the inverter system was also summarized in the survey of 1998. According to the results of that previous survey, the difference in cost was large by country and manufacturer, even when the power capacity of the inverter system was the same, and the cost varied greatly. However, the cost is substantially stabilized in this revised survey. Figure (43) shows the results of the cost survey in the previous survey (old survey) and the revised survey (new survey) at the same time. Cost is indicated in USD when survey replies were in the currency of each country. The currency exchange rates were based on the values in 2001: 1 German Mark was 0.46 USD, 1 Yen was 0.0075 USD, and 1 Euro was 1.07 USD.

As a result, it is shown that the cost of the inverter system is lower in the present survey than in the previous survey on the whole, and the cost for 1 kW is 800 USD or less in the present survey. It is also shown that the cost per kW decreases as inverter power capacity increases. Differences by country and manufacturer are also reduced, and the cost level is becoming similar worldwide. It is expected that the cost of the inverter system will be further reduced. The figure summarizes the inverter system cost for capacities from 1 kW to 6 kW. The cost of the inverter for the AC module with a capacity as low as 100 W to 300 W was 1 USD/W in the previous survey, while it is 1.2 to 1.9 USD/W in the present survey, showing that the cost has slightly increased. In addition, for systems with a large capacity exceeding 10 kW, the cost per kW tends to be reduced when capacity is increased. However, this cannot be concluded uniquely, because cost depends on the production volume, and the cost per kW increases if the number manufactured is small.


Figure (43):  Inverter system cost

In the early 1990s, PV inverters were produced only in small series. Every single device was assembled manually, often by small inverter enterprises. In practice, this led to a high failure rate and often a long repair time [85]. Currently, an increasing professionalism can be identified: failure rates and repair times of inverters have been reduced considerably [85]. In particular, the larger production facilities have been semi-automated.

In the future, more and more pre-assembled components will be applied [86]. Manufacturers of power-electronic components offer assembly groups of IGBTs for chopper modules, half-bridges and three-phase bridges. The application of such pre-assembled groups is another step to further increase the reliability and decrease the production costs of photovoltaic inverters. [86]

 Inverter Efficiency:

Modern PV inverters have a DC-to-AC conversion efficiency of more than 90% over a wide power range, including low partial load. In a PV inverter, three types of losses occur:

–          open-circuit losses, which are constant;

–          voltage-drop losses, proportional to the current;

–          resistance losses, proportional to the square of the current [88].

Assuming an approximately constant voltage, the inverter current is proportional to the DC power. In that case, the losses as a function of the DC power may be approximated by a second-order polynomial [89]:

p_L = a_0 + a_1 p_DC + a_2 p_DC²,                                 (1)

where p_DC and p_L are the DC power and the losses, respectively, normalized to the rated DC power P_DC,r. The polynomial coefficients a_0, a_1 and a_2 can be determined from measured data by least-squares fitting.

The inverter efficiency is

η = (p_DC − p_L) / p_DC,                                          (2)

with p_L from (1) giving the normalized losses as a function of p_DC.
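Numerically, the loss polynomial and the resulting efficiency curve can be sketched as follows; the coefficient values are invented for illustration and would in practice come from the least-squares fit mentioned above:

```python
def losses(p_dc, a0=0.01, a1=0.02, a2=0.05):
    """Normalized losses p_L as a second-order polynomial of normalized DC power."""
    return a0 + a1 * p_dc + a2 * p_dc ** 2

def efficiency(p_dc):
    return (p_dc - losses(p_dc)) / p_dc

# Low at small partial load, peaks mid-range, drops slightly near rated power
for p in (0.1, 0.5, 1.0):
    print(p, round(efficiency(p), 3))
```

With these coefficients the constant term dominates at low load, which is why the efficiency climbs steeply from partial load and then flattens, as in the measured curve of the figure below.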

If the available PV power at the MPP exceeds the rated DC power, the inverter limits the input power to P_DC,r. The efficiency of a PV inverter, based on measurements and the second-order polynomial approximation, is shown in Figure (44). The decrease in efficiency due to the current limiting is visible for p_DC above 1.


Figure (44): Inverter efficiency as a function of normalized DC power, from field measurements and polynomial curve fitting.

The efficiency is not always constant over the full power range. In order to characterize the long-time efficiency of photovoltaic inverter in the field, the European efficiency η Eu has been introduced:

η_EU = 0.03 η_5% + 0.06 η_10% + 0.13 η_20% + 0.10 η_30% + 0.48 η_50% + 0.20 η_100%,      (7)

where the subscripts indicate the efficiencies at partial-load operating points (in percent of the rated power), weighted according to their frequency of occurrence under typical European climate conditions.
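The weighting above can be evaluated directly. The weights (3/6/13/10/48/20% at 5/10/20/30/50/100% of rated power) are the standard European-efficiency weights; the partial-load efficiencies below are example values of our own:

```python
# Standard European-efficiency weights: load fraction -> weight
EU_WEIGHTS = {0.05: 0.03, 0.10: 0.06, 0.20: 0.13,
              0.30: 0.10, 0.50: 0.48, 1.00: 0.20}

def european_efficiency(eta_at_load):
    """eta_at_load maps load fraction -> measured efficiency at that load."""
    return sum(w * eta_at_load[p] for p, w in EU_WEIGHTS.items())

# Example partial-load efficiencies for a hypothetical inverter
eta = {0.05: 0.86, 0.10: 0.90, 0.20: 0.93, 0.30: 0.94, 0.50: 0.945, 1.00: 0.94}
print(round(european_efficiency(eta), 4))   # 0.9363
```

Note how the 50% operating point carries almost half the weight, reflecting how often PV inverters in a European climate run at partial load.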

Conclusions:

PV grid-interconnection inverters have fairly good performance. They have high conversion efficiency and a power factor exceeding 90% over a wide operational range, while maintaining a current-harmonics THD of less than 5%. The cost, size, and weight of a PV inverter have been reduced recently because of technical improvements and advances in inverter circuit design and the integration of the required control and protection functions into the inverter control circuit. The control circuit also provides sufficient control and protection functions such as maximum power tracking, inverter current control, and power factor control. Some subjects are still unproven: reliability, life span, and maintenance needs should be certified through long-term operation of PV systems. Further reductions in cost, size, and weight are required for the diffusion of PV systems.


Assignment on Solar Radiation and Solar Cell

Solar Radiation

The Solar Constant:

The sun can be approximated as an emitter of blackbody radiation at a temperature of 5777 K [26]. The long-term average irradiance, that is, specific power, from the sun outside the earth's atmosphere amounts to 1367 W/m². This value is referred to as the solar constant I0. In fact, I0 is not strictly constant, and the values found in the literature may vary slightly. [27]

Due to the elliptic orbit of the earth, the extraterrestrial irradiance Ion on a surface normal to the sunbeam on day n of the year is

                                               Ion (n) = E0 (n) I0,                                                                                             (1)

With

E0(n) = ( r0 / r(n) )² ≈ 1 + 0.033 cos(2πn / 365),                (2)

the eccentricity correction factor. The radii r0 and r(n) are, respectively, the annual average and the current sun-earth distance at day n. The day number n ranges from 1 on January 1 to 365 on December 31. [28]

True Solar Time

The time usually applied in solar-energy calculations is the true solar time TST. In true solar time, the sun crosses the meridian of the observer at 12:00. The time of day, throughout this work, is given as TST in hours. The conversion from local standard time (LST) into TST reads

TST = LST + (Λref − Λ) · 24 h / 2π + Et,                          (3)


Figure (11): Spherical coordinates of the sun position; observer at the origin O.

with Λ the geographical longitude of the site and Λref the reference longitude for LST, both in radians, positive for western longitude. The added term Et is the equation of time, which accounts for perturbations in the earth's rate of rotation. It can be calculated from the superposition of two harmonic functions [28]:

Et(n) = (0.1645 sin 2B(n) − 0.1255 cos B(n) − 0.025 sin B(n)) h,                 (4)

where          B(n) = 2π (n − 81) / 364.                          (5)

 

Additionally, there may be one hour correction for daylight saving time.
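The conversion in (3)-(5) can be sketched directly; the helper names are ours, and longitudes are in radians with west positive, as in the text:

```python
import math

def equation_of_time_h(n):
    """Equation of time in hours, two-harmonic approximation (4)-(5)."""
    b = 2 * math.pi * (n - 81) / 364
    return 0.1645 * math.sin(2 * b) - 0.1255 * math.cos(b) - 0.025 * math.sin(b)

def true_solar_time(lst_h, lon, lon_ref, n):
    """Local standard time [h] to true solar time [h]; 24 h per full turn."""
    return lst_h + (lon_ref - lon) * 24 / (2 * math.pi) + equation_of_time_h(n)

# At day n = 81, B = 0 and Et reduces to the -0.1255 h term
print(round(equation_of_time_h(81), 4))   # -0.1255
```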

Sun Position

The sun position on the celestial sphere is given by the elevation angle γ and the azimuth angle ψ (Figure (11)). The sun position depends on the date, the time of day, and the geographical position of the observer [28].

The date, given by the day number n, determines the solar declination angle

δ(n) = 23.45° · sin( 2π (284 + n) / 365 ).                        (6)

The time of day is reflected by the hour angle

ω = 15°/h · (TST − 12 h).                                         (7)

Elevation γ and azimuth ψ at a certain time and date at longitude Λ and latitude φ are then calculated from

sin γ = sin φ sin δ + cos φ cos δ cos ω,                          (8)

cos ψ = (sin γ sin φ − sin δ) / (cos γ cos φ),                    (9)

with the sign of ψ taken from the sign of ω.

The solar azimuth ψ is negative in the morning and positive in the afternoon. The declination δ is defined as positive during summer on the northern hemisphere. The geographical latitude φ is positive on the northern hemisphere and negative on the southern.
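The relations (6)-(8) can be checked numerically; the helper API below is our own, with all angles in radians:

```python
import math

def declination(n):
    """Solar declination for day number n (23.45-degree sine approximation)."""
    return math.radians(23.45) * math.sin(2 * math.pi * (284 + n) / 365)

def hour_angle(tst_h):
    """Hour angle: 15 degrees per hour away from solar noon."""
    return math.radians(15.0 * (tst_h - 12.0))

def elevation(lat, delta, omega):
    """Solar elevation from latitude, declination and hour angle, eq. (8)."""
    return math.asin(math.sin(lat) * math.sin(delta)
                     + math.cos(lat) * math.cos(delta) * math.cos(omega))

# Solar noon near the equinox (n = 81, delta ~ 0) at 50 degrees north:
# the elevation should be 90 - 50 = 40 degrees
gamma = elevation(math.radians(50), declination(81), hour_angle(12))
print(round(math.degrees(gamma), 1))   # 40.0
```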


Figure (12): Sun position relative to an arbitrarily oriented receiver plane [26].

Arbitrarily Oriented Surfaces

The position of the sun relative to an arbitrarily oriented surface is determined by the angle of incidence θi of the sunbeams (Figure (12)). For a horizontal surface, the angle of incidence equals the solar zenith angle θz with

cos θz = sin γ.                                                   (10)

For an inclined surface with tilt angle β and azimuth α, the angle of incidence is calculated from

cos θi = sin γ cos β + cos γ sin β cos(α − ψ),                    (11)

where the azimuth angle α runs from east to west and is zero for southern orientation. The extraterrestrial irradiance received by an arbitrarily oriented surface then is

I0,αβ = I0n cos θi.                                               (12)
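A quick numerical check of (11), with a helper of our own; angles are in radians and α = 0 for south, as in the text:

```python
import math

def cos_incidence(gamma, psi, beta, alpha=0.0):
    """Cosine of the incidence angle on a plane tilted by beta, azimuth alpha."""
    return (math.sin(gamma) * math.cos(beta)
            + math.cos(gamma) * math.sin(beta) * math.cos(alpha - psi))

# Sun due south (psi = 0) at 40 deg elevation, on a 30 deg south-facing tilt:
# the beam meets the plane 20 deg off its normal (90 - 40 - 30)
theta_i = math.degrees(math.acos(cos_incidence(math.radians(40), 0.0,
                                               math.radians(30))))
print(round(theta_i, 1))   # 20.0
```

In this south-facing special case the formula collapses to sin(γ + β), which is why the hand calculation and the code agree.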


Figure (13): (a) Basic sun-earth angles; (b) angles to describe the position of the sun in the sky.

Terminology

In the related literature, many terms for the description of solar radiation quantities can be found. Throughout this work, the terms are used as described below.

Irradiance and irradiation: The term irradiance specifies the rate of energy received by an infinitesimal surface; its unit is W/m². Irradiation is the time integral of irradiance over a specified period; its unit is Wh/m².

Beam radiation (Ib): The solar radiation received from the sun without having been scattered by the atmosphere is called beam radiation. It is direct solar radiation.

Diffuse radiation (Id): Solar radiation whose direction has been changed through scattering by the atmosphere is known as diffuse radiation.

Global radiation or terrestrial/total solar radiation (Ih): The hourly sum of beam and diffuse radiation on a surface is called global or total solar radiation, i.e. Ih = Ib + Id.

Solar Geometry / Earth Angles: The earth angles and their components are described in the following ways:

I. Latitude (φ): The latitude is the angular distance of a point on the earth measured north or south of the equator; −90° ≤ φ ≤ 90°.

II. Longitude: The angular distance measured east or west of the prime meridian is the longitude.

III. Declination angle (δ): The angle made by the line joining the centers of the sun and the earth with its projection on the equatorial plane, north positive, is the declination angle. It is zero at the autumnal and vernal equinoxes, 23.45° at the summer solstice on June 21, and −23.45° at the winter solstice on December 21 in the northern hemisphere. The range of the declination angle is given by −23.45° ≤ δ ≤ 23.45°.

IV. Hour angle (ω): The angular displacement of the sun east or west of the local meridian, due to the rotation of the earth on its axis at 15° per hour, is the hour angle. It expresses the time of day with respect to solar noon and can be expressed by ω = 15°(t − 12).

Angles to Describe the Position of the Sun in the Sky:

Figure (13b) represents the angles that describe the position of the sun in the sky. They are described in the following ways:

I. Solar altitude angle (αS): It is the angle between the projection of the sun's rays on the horizontal plane and the direction of the sun's rays.

II. Zenith angle (θZ): It is the angle between the sun's rays and a line perpendicular to the horizontal plane through the point. Here, θZ + αS = π/2.

III. Solar azimuth angle (γS): It is the angular displacement from south of the projection of beam radiation on the horizontal plane.

The term radiation is used solely as a qualitative term in order to describe the physical phenomenon.

Terrestrial Radiation

While passing through the earth's atmosphere, the sunlight is attenuated. Some of the sunlight is absorbed by air molecules, water vapour and dust. Some is scattered, either back into space or forward to the earth's surface, by ozone, water and CO2. Some of the light passes the atmosphere unaffected and is either absorbed or reflected on the ground [28].

Figure (14): Air mass at different zenith angles θz.

The radiation arriving on the ground directly in line from the sun is called direct or beam radiation I. The scattered radiation is called diffuse radiation D. The radiation reflected by the ground is the ground-reflected radiation R. The sum of the three components is called global radiation G [29]:

Gαβ = Iαβ + Dαβ + Rαβ,                                                (13)

where the subscripts indicate the azimuth and tilt angle of the receiver plane. If α and β are not specified, the surface is assumed to be horizontal. The ground-reflected radiation R on a horizontal surface is zero by definition.

 

Cloudless Skies:

The attenuation of sunlight within the atmosphere is selective with regard to wavelength. Therefore, the spectrum of sunlight at the earth’s surface depends on the optical path length of the sunlight through the atmosphere. The relative optical path length inside the atmosphere is called the air mass M. It can be approximated as

M ≈ 1 / cos θz,                                                (14)

When the sun is situated in the zenith above the observer, the air mass is one. Outside the atmosphere it is zero. In moderate latitudes, M = 1.5 is often assumed as a characteristic value, figure (14).

The air mass, defined in (14), is a purely geometric quantity. With regard to the definition of solar reference spectra, the air mass is moreover applied as a characteristic indicator for the spectral distribution [30].
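A minimal numerical sketch of (14) in Python (the zenith angles are illustrative):

```python
import math

# Relative air mass M ~ 1 / cos(theta_z); a good approximation away from
# the horizon, where refraction and earth curvature would start to matter.
def air_mass(theta_z_deg):
    return 1.0 / math.cos(math.radians(theta_z_deg))

print(air_mass(0.0))   # sun in the zenith -> 1.0
print(air_mass(48.2))  # ~1.5, the usual reference value for moderate latitudes
```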

Figure (15) shows the extraterrestrial spectrum and the air mass 1.5 spectrum from a cloudless sky on a 37-degree tilted plane according to ASTM E490 [27] and ISO 9845, respectively. The selective attenuation at different wavelengths is clearly visible. Under a cloudless sky, the solar irradiance on the earth’s surface at a given time and date depends only on the atmospheric turbidity. Turbidity here describes the scattering of solar radiation by matter other than dry air molecules. Under a cloudy sky, the solar irradiance on the earth’s surface is additionally affected by passing clouds. The attenuation of the solar radiation in that case is random.

Figure (15): Spectral distribution of solar radiation for air mass 0 and 1.5 according to ASTM E490 [27] and ISO 9845.

Cloudy Skies

Solar radiation under cloudy skies was first investigated on a statistical basis by Whillier [31]. He drew cumulative frequency distributions of hourly irradiation for different geographical positions, seasons and hours of the day. Shortly afterwards, the clearness index K was introduced by Liu and Jordan [32] as a parameter that accounts for the stochastic atmospheric conditions at a given site. It is defined as the ratio of terrestrial to extraterrestrial irradiation:

Kαβ = Ḡαβ / Ḡ0αβ,                                                (15)

where the bar denotes time integrals of global and extraterrestrial irradiance over periods from usually one hour up to one month. For a horizontal surface the subscripts α and β are omitted.

According to Liu and Jordan [32], the cumulative probability of the daily clearness index K during one month can be described analytically by Boltzmann distributions, which are fully determined by the monthly mean clearness index K̄. Liu and Jordan’s findings were generalized when it was found that their expression could be extended to any given set of daily clearness-index values [33]. For any specified mean value K̄, the probability distribution of the daily clearness index K can be described by the curves in figure (16), independently of any geographical or seasonal influence.

The instantaneous clearness index k may be defined in an analogous way, based on global and extraterrestrial irradiance. This is done for the analysis of solar irradiance fluctuations in [34], where the stochastic properties of the instantaneous clearness index are discussed in depth based on empirical data.
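The clearness index itself is just a ratio of two irradiation values. A trivial sketch in Python (the hourly values are hypothetical):

```python
# Clearness index: ratio of terrestrial to extraterrestrial irradiation
# over the same period (here hourly irradiation in Wh/m^2).
def clearness_index(h_terrestrial, h_extraterrestrial):
    return h_terrestrial / h_extraterrestrial

print(clearness_index(450.0, 900.0))  # 0.5, e.g. a partly cloudy hour
```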

Available Radiation:

The global irradiance on the earth’s surface usually takes values up to 1200 W/m2 on a plane perpendicular to the sunbeam. If turbidity is low, this value can be measured even at instants with a very high air mass. In some cases values higher than I0n can be observed. These originate from reflection at the edges of clouds, leading to a local increase of the solar irradiance on the ground [35].

Figure (16): Generalized cumulative probability distributions of the daily clearness index K with parameter K̄ [33].

When the sun is covered by passing clouds, the direct radiation is blocked and the irradiance often drops to values around 200 to 300 W/m2 and lower. On a day with scattered clouds, as is typical for the Belgian moderate maritime climate, a high number of such transitions may occur.

On an annual basis, the global irradiation in Belgium is about 1000 kWh/m2, of which more than 55% is diffuse [36]. In southern Europe the annual global irradiation ranges from 1300 to 1800 kWh/m2. In some of the world’s tropical deserts, up to 2400 kWh/m2 may be reached.

Conversion to Arbitrarily Oriented Surface:

Global and often also diffuse radiation on the horizontal plane is measured worldwide at many different sites, mostly as hourly averages of irradiance. A number of models have been developed for the conversion of horizontal irradiance data into irradiance on an arbitrarily oriented surface.

Direct Radiation:

The conversion of direct radiation is a mere matter of trigonometry. The direct irradiance on the horizontal plane is the difference between global and diffuse irradiance. It is converted for a plane with azimuth α and tilt angle β according to

Iαβ = I · cos θi / cos θz,                                                (16)

with γ and θi determined for each instant of time.
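The trigonometric conversion of the beam component can be sketched as follows (Python; the irradiance and angle values are hypothetical):

```python
import math

# Direct (beam) irradiance on a tilted plane from its horizontal value:
# I_tilt = I_horizontal * cos(theta_i) / cos(theta_z).
def beam_on_tilt(i_horizontal, theta_i_deg, theta_z_deg):
    ti, tz = math.radians(theta_i_deg), math.radians(theta_z_deg)
    return i_horizontal * math.cos(ti) / math.cos(tz)

# A plane facing the sun directly (theta_i = 0) while theta_z = 60 degrees
# receives twice the horizontal beam irradiance.
print(beam_on_tilt(400.0, 0.0, 60.0))  # ~800 W/m^2
```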

Ground-Reflected Radiation:

The ground-reflected radiation depends on the structure and reflectance of the ground. For practical purposes, it is mostly assumed to be isotropic. For surfaces with directional reflectivity (like windows or a water surface) this is not true; however, the error is significant in only a few cases. With regard to solar radiation, the reflectance of the ground is termed the albedo ρ. The ground-reflected radiation Rαβ on a tilted surface follows from the multiplication of the global radiation by the albedo of the ground and a view factor:

Rαβ = ρ · G · (1 − cos β) / 2,                                                (17)

The view factor (1 − cos β)/2 accounts for the geometric relationship between the tilted receiver surface and the emitter surface, in this case the surrounding ground [26]. In practice, an albedo of ρ = 0.2 is often applied [26], which is a typical value for dry bare soil. For highly reflective surfaces such as snow, the albedo can substantially increase the radiation on the tilted plane when the tilt angle is high. At low tilt angles, the albedo has a minimal effect due to the low view factor.
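The view-factor relation can be sketched numerically (Python; the global irradiance value is hypothetical, ρ = 0.2 is the dry-soil value quoted above):

```python
import math

# Ground-reflected irradiance on a plane tilted by beta:
# R = albedo * G * (1 - cos(beta)) / 2.
def ground_reflected(g_global, albedo, beta_deg):
    return albedo * g_global * (1.0 - math.cos(math.radians(beta_deg))) / 2.0

print(ground_reflected(800.0, 0.2, 90.0))  # vertical plane: ~80 W/m^2
print(ground_reflected(800.0, 0.2, 0.0))   # horizontal plane sees no ground: 0.0
```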

Diffuse Radiation:

Assuming the diffuse radiation to have its intensity uniformly distributed over the sky dome, it may be treated similarly to the ground-reflected radiation. The respective isotropic model has been developed by Liu and Jordan [32]. The diffuse irradiance on a tilted surface is

Dαβ = D · (1 + cos β) / 2,                                                (18)

where (1 + cos β)/2 is the view factor for the geometric relationship between the tilted receiver surface and the sky dome.

The isotropic model is reasonably accurate for cloudy skies. However, under scattered clouds and clear skies, it underestimates the diffuse radiation on surfaces tilted towards the equator. Under clear skies, the diffuse irradiance is distinctly anisotropic. The radiance, that is, the irradiance per solid angle of the sky dome, exhibits local maxima both around the solar disk and close to the horizon.
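The isotropic (Liu-Jordan) conversion is again a single view factor. A minimal sketch (Python, hypothetical diffuse irradiance):

```python
import math

# Isotropic diffuse irradiance on a plane tilted by beta (Liu-Jordan model):
# D_tilt = D * (1 + cos(beta)) / 2.
def diffuse_isotropic(d_horizontal, beta_deg):
    return d_horizontal * (1.0 + math.cos(math.radians(beta_deg))) / 2.0

print(diffuse_isotropic(200.0, 0.0))   # horizontal plane sees the whole dome: 200.0
print(diffuse_isotropic(200.0, 90.0))  # vertical plane sees half the dome: ~100
```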

Figure (17): Fraction of global radiation on the ground as received by a tilted plane [26].

Figure (18): Spatial distribution of anisotropic diffuse radiation over the sky dome [37].

The fraction of diffuse radiation originating from around the solar disk is called circumsolar radiation. The increase in radiation in a band close to the horizon is referred to as horizon brightening (figure 18).

A circumsolar model has been introduced by Hay and Davies. Here, the circumsolar radiation is taken into account by parameterizing Dαβ as the sum of an anisotropic and an isotropic fraction:

Dαβ = D [ F · cos θi / cos θz + (1 − F) · (1 + cos β) / 2 ],                (19)

with

F = In / I0n,                                                (20)

the atmospheric transmission factor for beam irradiance, where In is the beam irradiance at normal incidence and I0n the extraterrestrial irradiance.
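The Hay-Davies split between circumsolar and isotropic diffuse radiation can be sketched as follows (Python; F and the angles are hypothetical inputs):

```python
import math

# Hay-Davies diffuse irradiance on a tilted plane: the circumsolar fraction F
# is converted like beam radiation, the remainder isotropically.
def diffuse_hay_davies(d_horizontal, f, theta_i_deg, theta_z_deg, beta_deg):
    rb = math.cos(math.radians(theta_i_deg)) / math.cos(math.radians(theta_z_deg))
    iso = (1.0 + math.cos(math.radians(beta_deg))) / 2.0
    return d_horizontal * (f * rb + (1.0 - f) * iso)

# With F = 0 (overcast sky, no beam transmission) the model reduces to the
# isotropic case.
print(diffuse_hay_davies(200.0, 0.0, 30.0, 30.0, 90.0))  # ~100 W/m^2
```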

The most precise anisotropic model up to now has been introduced by Perez [37]. Circumsolar radiation is considered inside a circle of variable size around the sun. Horizon brightening is considered inside a horizontal band of variable height at the horizon. The diffuse irradiance on a tilted plane according to Perez’s model amounts to

Dαβ = D [ (1 − F1) · (1 + cos β) / 2 + F1 · a/c + F2 · b/d ],                (21)

where a and b are view factors of the circumsolar circle and the horizon band, respectively, with regard to the receiver plane. The parameters c and d are view factors of the circumsolar circle and the horizon band, respectively, with regard to the horizontal plane. For a given circumsolar circle and horizon band, a and b depend on the angle of incidence θi, c depends on the solar zenith angle θz and d is constant. The parameters F1 and F2 describe the enhancement of radiation inside the circumsolar circle and in the horizon band, respectively. They vary independently with the radiance distribution [38].

There is a large variety of F1-F2 pairs depending on the zenith angle and sky conditions. Parameters for the anisotropic distribution of solar radiation over the sky dome have been elaborated in [37]. The subject is not discussed further here; based on [37], the application of the Perez model should pose no further difficulty.

Direct, ground-reflected, and diffuse irradiance on an arbitrarily oriented surface can be calculated by means of one of the approaches presented. The global irradiance on a plane with azimuth α and tilt angle β is the sum of the three fractions [38].

The different available conversion models for diffuse irradiance on inclined surfaces have been compared by the International Energy Agency’s Solar Heating and Cooling Programme (IEA SHC). The authors found the highest accuracy for the Perez model. However, the model of Hay and Davies is only slightly less precise. As a consequence, the model of Hay and Davies is still frequently applied, especially when only a limited database is available for the determination of F1 and F2.

Radiation Measurement:

Global solar radiation is generally measured by pyranometers. For measurements regarding PV applications, usually either a thermal pyranometer (figure 20) or a solar cell radiation sensor (figure 21) is applied.

Figure (19): Diagram of overall solar radiation.

Thermal Pyranometers:

A thermal pyranometer measures solar irradiance via the temperature of a black absorber by means of a thermocouple. Thermal pyranometers have a constant spectral response over the entire solar spectrum. The absorber is usually covered by a hemispherical glass dome ensuring independence of the angle of incidence [38].

According to ISO 9060, pyranometers are classified according to their precision into “second class”, “first class” and “secondary standard” [40]. Secondary-standard pyranometers are the most precise. For a secondary-standard instrument the maximum error of hourly irradiation is 3% [41]. Due to their thermal inertia, pyranometers show no immediate response to variations in solar irradiance. The thermal time constant of a secondary-standard pyranometer is approximately τ = 4 s. For a first-class or second-class device, the time constant is much longer [42].

In order to measure diffuse irradiance, thermal pyranometers can be equipped with a shadow ring. The shadow ring blocks direct radiation so that the pyranometer receives only diffuse radiation. The position of the shadow ring must be adapted every couple of days according to the variable solar declination throughout the year. This can be done manually or by means of a small motor drive.

Reference Solar Cells:

Solar cell based radiation sensors measure the solar irradiance via the short-circuit current of a solar reference cell. As an approximation, the short-circuit current is proportional to the solar irradiance. However, for precise measurements, the result must be compensated for the effect of cell temperature on the short-circuit current. The cell temperature is either derived from the open-circuit voltage of a second identical reference cell, or it is measured directly at the back of the reference cell by means of a resistance thermometer.

Unlike thermal pyranometers, solar cell radiation sensors applied to PV monitoring mostly have not a hemispherical but a flat glass cover. The spectral response of reference-cell radiation sensors depends on the applied solar cell material. On the one hand, the flat glass cover leads to increased reflection at high angles of incidence.

Figure (20): Thermal pyranometer with shadow ring.

Figure (21): Single crystalline silicon reference solar cell.

On the other hand, crystalline silicon reference cells tend to overestimate the solar irradiance at low solar elevation angles due to the relatively increased red content of the spectrum at high air mass. The silicon cell is more sensitive in the red than in the blue range of the visible spectrum [30].

Regarding reflection losses and spectral response, reference cells behave exactly like PV modules made of the same material. This is why reference cells are mainly applied for measuring the irradiance in the PV array plane. If the reference cell has been properly chosen, the measured irradiance accounts for the reflection losses and the deviations from the AM 1.5 spectrum of the applied PV modules [42]. The effect of thermal inertia on the measurement of a solar reference cell is negligible.

Although the prices for reference cells vary greatly depending on their precision and robustness, they are still notably cheaper than a secondary-standard pyranometer. For this reason, and because of their higher thermal inertia, the use of thermal pyranometers is mainly limited to high-precision measurements of global and diffuse radiation on the horizontal plane according to meteorological standards. In-plane irradiance values for the energetic evaluation of PV systems are usually measured by a reference cell.

 

Estimation of Tilted surface radiation:

Flat-plate solar collectors absorb both the beam and diffuse components of solar radiation. To use horizontal total radiation data to estimate radiation on the tilted surface plane of a collector of fixed orientation, it is necessary to know R, the ratio of total radiation on a tilted surface to that on the horizontal surface. The amount of solar radiation falling on a tilted surface is the sum of the beam and diffuse radiation falling directly on the surface and the radiation reflected onto the surface from the surroundings. If one knows the tilt factor for a specific tilt angle at a location, then one can easily estimate the radiation on the tilted surface for a Solar Home System. The ratio of the beam radiation falling on a tilted surface to that falling on a horizontal surface is called the tilt factor (Rb) for beam radiation. For the case of a tilted surface facing south in the northern hemisphere, Rb is given by

Rb = cos θ / cos θz = [cos(φ − β) cos δ cos ω + sin(φ − β) sin δ] / [cos φ cos δ cos ω + sin φ sin δ]

where θ is the angle between the beam radiation on a surface and the normal to that surface, θz is the zenith angle, φ is the latitude, β is the tilt angle, δ is the declination for the average day of each month, and ω is the hour angle for the tilted surface for the average day of the month. The tilt factor Rd for diffuse radiation is the ratio of the diffuse radiation falling on the tilted surface to that falling on a horizontal surface. Its value depends upon the distribution of diffuse radiation over the sky and on the portion of the sky dome seen by the tilted surface. Assuming that the sky is an isotropic source of diffuse radiation, we have

Rd = (1 + cos β) / 2

Assuming that the reflection of the beam and diffuse radiation falling on the ground is diffuse and isotropic and that the reflectivity is ρ, the tilt factor for reflected radiation is given by

Rr = ρ (1 − cos β) / 2

where ρ is the surface albedo. The monthly surface albedo values are taken from NASA and lie between 0.12 and 0.16.

 

Thus the hourly tilt factor, R can be given by

R = HT/H = (1 − Hd/H) Rb + (Hd/H) Rd + Rr
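The hourly tilt factor combines the three component tilt factors, weighted by the diffuse fraction. A sketch in Python (the hourly values are illustrative, not measured Dhaka data):

```python
# Hourly tilt factor R = (1 - Hd/H) * Rb + (Hd/H) * Rd + Rr, combining the
# beam, diffuse and ground-reflected tilt factors.
def tilt_factor(h_total, h_diffuse, rb, rd, rr):
    kd = h_diffuse / h_total  # diffuse fraction Hd/H
    return (1.0 - kd) * rb + kd * rd + rr

# Illustrative hour: 40% diffuse, Rb = 1.2, Rd = 0.95, Rr = 0.01.
print(tilt_factor(500.0, 200.0, 1.2, 0.95, 0.01))  # ~1.11
```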

Table: Hourly tilt factors for a latitude-tilted south-facing surface at Dhaka

 

Tilt angles should be chosen so that the solar devices receive the maximum available solar radiation. In summer the sun’s path is short and it shines almost at the zenith at noon, but in winter the sun’s path is long and closer to the horizon at noon. Hence, if we keep the solar device horizontal in summer it will get more sunlight at noon, and if we keep the device tilted from the horizontal in winter it will get more sunlight. One can also track the sun both along the sun’s direction of path and over the time of day. In Bangladesh, a study shows that if one simply changes the tilt angle to 40° for winter (October-February) and 10° for summer (March-September), higher tilt factors can be achieved.

Table: Hourly tilt factors for the 10° and 40° combination south-facing tilted surface at Dhaka

To estimate the monthly average tilt factor, Liu and Jordan proposed the following equation:

R̄ = (1 − H̄d/H̄) R̄b + (H̄d/H̄)(1 + cos β)/2 + ρ (1 − cos β)/2

Here, for a south-facing surface,

R̄b = [cos(φ − β) cos δ sin ω′s + (π/180) ω′s sin(φ − β) sin δ] / [cos φ cos δ sin ωs + (π/180) ωs sin φ sin δ]

where ωs is the sunset hour angle and ω′s the sunset hour angle for the tilted surface for the average day of the month, which is given by

ω′s = min[ωs, cos−1(−tan(φ − β) tan δ)]

where “min” means the smaller of the two items in the brackets.
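The two sunset hour angles can be sketched numerically (Python; the standard relation ωs = cos⁻¹(−tan φ tan δ) is assumed, and the latitude is only an example):

```python
import math

# Sunset hour angle on the horizontal plane (degrees), and its counterpart
# for a south-facing surface tilted by beta, taking the smaller of the two.
def sunset_hour_angle(phi, delta):
    return math.degrees(math.acos(-math.tan(math.radians(phi)) * math.tan(math.radians(delta))))

def tilted_sunset_hour_angle(phi, delta, beta):
    return min(sunset_hour_angle(phi, delta), sunset_hour_angle(phi - beta, delta))

# At the equinox (delta = 0) both angles are 90 degrees everywhere.
print(tilted_sunset_hour_angle(23.7, 0.0, 23.7))  # 90.0
```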

Monthly tilt factors are given in figure (22).

Figure (22): Monthly tilt factor for Dhaka

To find the tilted surface radiation, one has to multiply the GHI data by the tilt factor. From figure (22) it is clear that the total radiation will decrease if one keeps the surface at the latitude tilt angle in the summer season at Dhaka. To get higher yields from the solar system, one may tilt the surface twice a year at the above tilt angles. [43]

In Bangladesh the sunlight falls directly in summer and obliquely in winter. So it is desirable to slant the panel at 45 degrees in summer and 15-20 degrees in winter to get the best result. But it is troublesome to change the panel angle with the seasons. The experts therefore decided to place the panel at a fixed angle, taking the average angle of the sunlight throughout the year, from January to December, to avoid re-positioning the panel at different times while still getting more electricity. This angle is 23 degrees to the south. Care should be taken that no shade falls on the panel, as shade or barriers to sunlight reduce the panel’s efficiency. [44]

 


Assignment on Solar Cells

Historical Overview:

In 1839 a French physicist first discovered the photovoltaic effect while experimenting with an electrolytic cell made up of two metal electrodes [45]. Afterwards, the first intentional PV device was developed by the American inventor Charles Fritts in 1883. He melted selenium into a thin sheet on a metal substrate and pressed a gold-leaf film on top as the contact. Later, in 1954, researchers at Bell Labs accidentally discovered that p-n junction diodes generated a voltage when the room lights were on. Within a year they had produced a 6% efficient silicon p-n junction solar cell. The same efficiency was achieved that year by a group at Wright-Patterson Air Force Base in the USA, only this time using a thin-film heterojunction solar cell based on Cu2S/CdS. By 1960, several documents had described solar cells built using different materials for the p-n junction; key documents by Prince, Loferski, Rappaport and Wysocki, and Shockley and Queisser developed the fundamentals of p-n junction cell operation, including the theoretical relationship between band gap, incident spectrum, temperature, thermodynamics and efficiency [46]. In the years that followed, the US and USSR space programs played an important role in the R&D of solar cells, since they were the main energy source powering their satellites. The year 1973 was very important for PV technological advancement. First, the “violet cell” was developed, having an improved short-wavelength response leading to a 30% relative increase in efficiency over the most advanced silicon cells of the time. The same year, an important event called the Cherry Hill conference took place. During this event a group of PV researchers and heads of US government scientific organizations met to evaluate the scientific merit and potential of photovoltaics.
The outcome was the decision that photovoltaics was worthy of government support, resulting in the formation of the US Energy Research and Development Agency, the world’s first government group set up with a mission including the fostering of research on renewable energy, and which ultimately became the US Department of Energy. Finally, in October, the first oil crisis pressed governments worldwide to encourage the use of renewable sources of energy, especially solar [46]. From this point, solar research had the momentum and funding it needed from fuel providers, electric utilities and other interested parties to make a real impact on the energy industry. However, this did not last long: in 1982 public funding was cut by national governments worldwide. It is this withdrawal of support that has left the impression that solar power cannot succeed without substantial subsidies. Yet progress did not stop; it just switched direction, and rapid changes in the technology, the PV industry and the interested parties began a transformation of the energy industry. All around the world, energy sustainability was getting more attention because of energy security issues and climate change. But the reasons for these sustainable changes should not be attributed only to social and environmental consciousness. The main driving factor, as with almost all emerging industries, is economic sensibility. At the same time that the fossil fuel industry was experiencing problems with supply and cost, China’s economy was developing at incredible rates. As of 2005, for example, China accounted for almost 30% of global growth while the European Community accounted for just 5%. And as China develops, the amount of oil needed for economic expansion is comparatively greater per unit of growth [47]. All of this indicates that even with the most optimistic view of conservation programs, sustainable energy generation will have to increase if development is expected to continue at current rates.
Fortunately, a healthy mix of sustainable energy generation technologies, along with the gradual phasing out of widespread fossil fuel use, is one likely scenario for the future. The most recent expansion of solar power, however, is occurring mainly in Germany and Japan. At first glance this might seem surprising, since neither Germany nor Japan has a large amount of sunlight, but their lack of fossil fuel sources, combined with national governments committed to sustainable energy programs, has enabled solar power to thrive. Together these two countries, with Japan’s Sunshine Program and Germany’s 100,000 Solar Roofs Program along with several government subsidies, accounted for a full 69% of the world market for PV as of 2005. Also, the rate at which this market is expanding is encouraging: from 85 MW in 1995 to 1.1 GW globally in 2005.

 

Technology:

The smallest entity within a PV system is the solar cell. The solar cell is a semiconductor device, more precisely, a special type of diode. Incident light frees electrons, which are separated by an internal electric field arising from the potential difference at the p-n junction. A voltage is generated between the two surface contacts and a connected load draws a current, fig (23) [49, 50].

As its name implies, photovoltaics is a technology that converts light (photo) directly into electricity (voltaic). The individual photovoltaic element is known as a solar cell, which is made of materials called semiconductors. The most used semiconductor material is silicon, which in its naturally occurring state has the unique property of 4 electrons in its outer orbit, allowing it to form perfect covalent bonds with four neighboring atoms and thus create a lattice. The obtained crystalline form is a silvery, metallic-looking substance. In its pure state, crystalline silicon is a poor conductor, because all of the electrons in the outer orbit are bonded and cannot move freely. To change this behavior, pure silicon has to go through a process called doping. In this process some “impurities” (e.g. P, As, B, Ga) are added to the material [48].

A number of different solar cell technologies are currently applied or under development (Table 1). More than 90% of today’s annual solar cell production is made from crystalline silicon (figure 10). However, other semiconductor materials are also applied and several technologies are under investigation [30].

According to the type of material added, the semiconductor receives the P or N classification.

● N-Type: Arsenic or phosphorus is added. Since each of these elements has 5 electrons in its outer orbit, one electron has nothing to bond to and is therefore free to move within the material. By adding several atoms of arsenic or phosphorus, enough electrons are able to move to allow an electrical current to flow through the material. The name “n-type” comes from the electron’s negative charge.

● P-type: Boron or gallium is added. In this case each has only 3 outer-orbit electrons, and when added to pure silicon, there is a hole in the structure where one silicon electron has nothing to bond to and is free to move. The absence of electrons creates the effect of positive charge, hence the “p-type” name.

These electrons occupy a band of energy called the valence band. When some applied energy exceeds a certain threshold, called the band gap, the electrons are free to move in a new energy band called the conduction band, where they can conduct electricity through the material. The energy required for the electrons to migrate to the conduction band can be provided by photons, which are particles of light. Figure 1 shows the idealized relationship between energy (vertical axis) and the spatial boundaries (horizontal axis). When the solar cell is exposed to sunlight, photons hit the electrons in the valence band and give them enough energy to migrate into the conduction band. There, an n-doped semiconductor contact collects the conduction-band electrons and drives them to the external circuit where they can be used to create electricity. Then they are restored to the valence band at a lower (free) energy through the return circuit by a p-doped semiconductor contact.

Schematic of solar cell

Figure (23): Schematic of solar cell [30]

This is all possible because sunlight is a spectrum of photons distributed over a wide range of energies. Photons with greater energy than the band gap can drive electrons from the valence band to the conduction band, and those electrons can travel through the external circuit to produce work. Photons with less energy than the band gap cannot excite the electrons; instead, that energy travels through the solar cell and is absorbed as heat. The voltage at which electrons are delivered to the external circuit is slightly less than the band gap. This voltage is measured in units of electron volts (eV); thus, in a material with a 1 eV band gap, the voltage delivered by a single cell is around 0.7 V. Therefore multiple cells are connected together and encapsulated into units called PV modules, which is the product usually sold to the customer.
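The band-gap cutoff described above can be sketched numerically (Python; the 1.12 eV band gap is a typical figure for crystalline silicon, assumed here for illustration):

```python
# Photon energy in eV from wavelength in nm: E = h*c/lambda ~ 1239.84/lambda_nm.
def photon_energy_ev(wavelength_nm):
    return 1239.84 / wavelength_nm

BAND_GAP_SI = 1.12  # eV, typical value for crystalline silicon (assumed)

# A photon can lift an electron across the band gap only if its energy
# exceeds the gap; lower-energy photons end up as heat.
def absorbed(wavelength_nm, band_gap_ev=BAND_GAP_SI):
    return photon_energy_ev(wavelength_nm) >= band_gap_ev

print(absorbed(500.0))   # green light, ~2.48 eV -> True
print(absorbed(1500.0))  # infrared, ~0.83 eV -> False
```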

 

Wafer-type Crystalline Silicon Cells:

Crystall