Thesis Paper on Wireless Power Transmission

This thesis discusses how Wireless Power Transmission (WPT) works and where it is used. Wireless power transmission is useful where continuous energy transfer is needed but interconnecting wires are inconvenient, hazardous, or impossible. WPT is the transmission of electric power from one place to another through a vacuum, without the use of wires.


Study of Smart Grid And Its Potential

In the present era, rising power demand to meet industrial requirements has produced shortfalls in generation. Attempts to mitigate the gap between supply and demand have led to the development of national grid-connected systems, in which all national power generation sources are connected to the national grid and energy management is carried out on the basis of zonal requirements. An “electricity grid” is not a single entity but an aggregate of multiple networks and multiple power generation companies, with multiple operators employing varying levels of communication and coordination, most of which is manually controlled.

With this concept, the earlier power shortage has to some extent been evened out, transmission losses have been brought under some control, and transmission efficiency has improved somewhat. This contrasts with the 60 percent efficiency quoted for grids based on the latest technology, which may be the solution to the above problem.


Implementing the energy requirements of different zones systematically requires a strategic program of energy distribution. SCADA and other continuous monitoring systems are already in vogue, but quick, effective, and efficient distribution of energy needs a smart system that can take into account the requirements of each zone and the availability of energy from the different sources in those zones, without human interference. Smart grids increase the connectivity, automation, and coordination between these suppliers, consumers, and networks that perform either long-distance transmission or local distribution tasks.

Brief History of Smart Grid

Commercialization of electric power began early in the 20th century. With the light bulb revolution and the promise of the electric motor, demand for electric power exploded, sparking the rapid development of an effective distribution system. At first, small utility companies provided power to local industrial plants and private communities. Some larger businesses even generated their own power. Seeking greater efficiency and distribution, utility companies pooled their resources, sharing transmission lines and quickly forming electrical networks called grids. George Westinghouse boosted the industry with his hydroelectric power plant at Niagara Falls. His was the first to provide power over long distances, extending the range of power plant positioning. He also proved electricity to be the most effective form of power transmission. As the utility business expanded, local grids grew increasingly interconnected, eventually forming the three national grids that provide power to nearly every resident of the continental US. The Eastern Interconnect, the Western Interconnect, and the Texas Interconnect are linked together and form what we refer to as the national power grid. Technological improvements to the power system came largely in the 1950s and 1960s, after World War II. Nuclear power, computer controls, and other developments helped fine-tune the grid's effectiveness and operability. But although today's technology has advanced enormously, the national power grid has not kept pace with modernization; it has evolved little over the past fifty years.

The government is keen on overhauling the current electrical system to 21st century standards. With today’s technology, the power grid can become a smart grid, capable of recording, analyzing and reacting to transmission data, allowing for more efficient management of resources, and more cost-effective appliances for consumers. This project requires major equipment upgrades, rewiring, and implementation of new technology. The process will take time, but improvements have already begun to surface. Miami will be the first major city with a smart grid system. We are witnessing a new stage of technological evolution, taking us into a brighter, cleaner future.

Smart grid technologies have emerged from earlier attempts at using electronic control, metering, and monitoring. In the 1980s, automatic meter reading was used for monitoring loads from large customers; it evolved into the Advanced Metering Infrastructure of the 1990s, whose meters could store how electricity was used at different times of the day. Smart meters add continuous communications so that monitoring can be done in real time, and they can be used as a gateway to demand-response-aware devices and “smart sockets” in the home. Early forms of such demand-side management technologies were dynamic-demand-aware devices that passively sensed the load on the grid by monitoring changes in the power supply frequency. Devices such as industrial and domestic air conditioners, refrigerators, and heaters adjusted their duty cycle to avoid activation while the grid was suffering a peak condition. Beginning in 2000, Italy's Telegestore Project was the first to network large numbers (27 million) of homes using such smart meters connected via low-bandwidth power line communication. Recent projects use Broadband over Power Line (BPL) communications, or wireless technologies such as mesh networking, which is advocated as providing more reliable connections to disparate devices in the home as well as supporting metering of other utilities such as gas and water.
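The dynamic-demand behavior described above can be sketched in a few lines. This is an illustrative model only: the nominal frequency, the under-frequency threshold, and the `should_run` helper are assumptions for a 50 Hz grid, not values taken from the thesis or from any device standard.

```python
# Sketch of a dynamic-demand appliance that passively senses grid frequency
# and defers its duty cycle when the grid appears stressed. Thresholds are
# illustrative assumptions for a 50 Hz system.

NOMINAL_HZ = 50.0
UNDERFREQUENCY_HZ = 49.8  # below this, generation is likely lagging load


def should_run(measured_hz: float, demand_critical: bool = False) -> bool:
    """Return True if the appliance may activate its duty cycle now.

    A sagging frequency indicates generation is struggling to meet load,
    so a non-critical appliance backs off until the frequency recovers.
    """
    if demand_critical:  # e.g. a freezer about to breach its temperature limit
        return True
    return measured_hz >= UNDERFREQUENCY_HZ


# A refrigerator defers during a frequency dip, but still runs when critical.
assert should_run(50.01) is True
assert should_run(49.72) is False
assert should_run(49.72, demand_critical=True) is True
```

The key design point the paragraph makes is preserved here: the device needs no communication channel at all, because the supply frequency itself carries the load signal.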

Monitoring and synchronization of wide-area networks were revolutionized in the early 1990s, when the Bonneville Power Administration expanded its smart grid research with prototype sensors capable of very rapid analysis of anomalies in electricity quality over very large geographic areas. The culmination of this work was the first operational Wide Area Measurement System (WAMS) in 2000. Other countries are rapidly integrating this technology; China will have a comprehensive national WAMS when its current five-year economic plan is completed in 2012.

First Cities with Smart Grids

The earliest, and still largest, example of a smart grid is the Italian system installed by Enel S.p.A. Completed in 2005, the Telegestore project was highly unusual in the utility world because the company designed and manufactured its own meters, acted as its own system integrator, and developed its own system software. The Telegestore project is widely regarded as the first commercial-scale use of smart grid technology in the home, and delivers annual savings of 500 million euro at a project cost of 2.1 billion euro.

In the US, the city of Austin, Texas has been working on building its smart grid since 2003, when its utility first replaced a third of its manual meters with smart meters that communicate via a wireless mesh network. It currently manages 200,000 devices in real time (smart meters, smart thermostats, and sensors across its service area), and expects to be supporting 500,000 devices in real time in 2009, servicing 1 million consumers and 43,000 businesses. Boulder, Colorado completed the first phase of its smart grid project in August 2008. Both systems use the smart meter as a gateway to the home automation network (HAN) that controls smart sockets and devices. Some HAN designers favor decoupling control functions from the meter, out of concern over future mismatches with new standards and technologies arriving from the fast-moving business segment of home electronic devices.

Hydro One, in Ontario, Canada, is in the midst of a large-scale Smart Grid initiative, deploying a standards-compliant communications infrastructure from Trilliant. By the end of 2010, the system will serve 1.3 million customers in the province of Ontario. The initiative won the “Best AMR Initiative in North America” award from the Utility Planning Network. The City of Mannheim in Germany is using real-time Broadband over Power Line (BPL) communications in its Model City Mannheim (“MoMa”) project. Adelaide in Australia also plans to implement a localized green Smart Grid electricity network in the Tonsley Park redevelopment.

InovGrid is an innovative project in Evora that aims to equip the electricity grid with information and devices to automate grid management, improve service quality, reduce operating costs, promote energy efficiency and environmental sustainability, and increase the penetration of renewable energies and electric vehicles. It will be possible to control and manage the state of the entire electricity distribution grid at any given instant, allowing suppliers and energy services companies to use this technological platform to offer consumers information and added-value energy products and services. This project to install an intelligent energy grid places Portugal and EDP at the cutting edge of technological innovation and service provision in Europe.

Smart Grid Definition

A smart grid delivers electricity from suppliers to consumers using two-way digital technology to control appliances in consumers' homes, saving energy, reducing cost, and increasing reliability and transparency. It overlays the electricity distribution grid with an information and net-metering system. Power travels from the power plant to our houses through an amazing system called the power distribution grid. Such modernized electricity networks are being promoted by many governments as a way of addressing energy independence, global warming, and emergency resilience issues. Smart meters may be part of a smart grid, but alone they do not constitute a smart grid.

A smart grid includes an intelligent monitoring system that keeps track of all electricity flowing in the system. It also incorporates the use of superconducting transmission lines for lower power loss, as well as the capability of integrating renewable electricity sources such as solar and wind. When power is least expensive, the user can allow the smart grid to turn on selected home appliances, such as washing machines, or factory processes that can run at arbitrary hours. At peak times it could turn off selected appliances to reduce demand. The smart grid is able to respond appropriately to different types of incidents, such as weather issues or failing equipment. It can identify a piece of failing equipment (or even find a tree branch that has fallen on an electrical line) and alert the provider. Conversely, the smart grid can extend the life of some equipment: today, some providers automatically replace equipment once it reaches a certain age, whether it is worn out or not. With a smart grid, equipment could remain in operation until a computer detects its failure, thereby saving unnecessary replacement costs. In some cases the smart grid can prevent or resolve power outages and other service interruptions. When the smart grid overlays the electrical grid, computerized devices monitor and adjust the quality and flow of power between its sources and its destinations. These devices recognize situations such as peak usage hours, when most people are at home, and can also detect energy-wasting appliances.
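The price-driven scheduling described above (run deferrable appliances when power is least expensive) reduces to a small selection problem. The sketch below is an illustration under assumed day-ahead prices; the `cheapest_hours` helper and the price figures are hypothetical, not part of any utility API.

```python
# Minimal sketch of deferrable-load scheduling: pick the cheapest hours in a
# window to run a load such as a washing machine. Prices are hypothetical.

def cheapest_hours(hourly_prices, hours_needed):
    """Return indices of the cheapest hours, sorted chronologically."""
    ranked = sorted(range(len(hourly_prices)), key=lambda h: hourly_prices[h])
    return sorted(ranked[:hours_needed])


# Illustrative day-ahead prices in cents/kWh for a six-hour window.
prices = [22, 9, 7, 8, 25, 30]

# A two-hour wash cycle lands on the two cheapest hours (indices 2 and 3).
assert cheapest_hours(prices, 2) == [2, 3]
```

A real scheduler would add constraints (the load must run in contiguous hours, or finish by a deadline), but the core idea, shifting consumption toward cheap off-peak hours, is the same.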

In short, the smart grid is the development of a reliable network of transmission and distribution lines that allow new technologies, equipment, and control systems to be easily integrated into an energy grid.

Smart Grid and its Need

Understanding the need for the smart grid requires acknowledging a few facts about our infrastructure. The power grid is the backbone of modern civilization, a complex society with often conflicting energy needs: more electricity but fewer fossil fuels, increased reliability yet lower energy costs, more secure distribution with less maintenance, effective new construction and efficient disaster reconstruction. But while demand for electricity has risen drastically, its transmission infrastructure is outdated and stressed. The bottom line is that we are exacting more from a grid that is simply not up to the task.

Aims of the Smart Grids-the Vision

  • Provide a user-centric approach and allow new services to enter into the market;
  • Establish innovation as an economic driver for the renewal of the electricity networks;
  • Maintain security of supply, ensure integration and interoperability;
  • Provide accessibility to a liberalized market and foster competition;
  • Enable distributed generation and utilization of renewable energy sources;
  • Ensure best use of central generation;
  • Consider appropriately the impact of environmental limitations;
  • Enable demand side participation (DSR, DSM);
  • Inform the political and regulatory aspects;
  • Consider the societal aspects.

Key Features of Smart Grid

  • Intelligent – Capable of sensing system overloads and rerouting power to prevent or minimize a potential outage; of working autonomously when conditions require resolution faster than humans can respond; and of working co-operatively in aligning the goals of utilities, consumers and regulators.
  • Efficient – Capable of meeting increased consumer demand efficiently, without adding infrastructure.
  • Accommodating – Accepting energy from virtually any fuel source, including solar and wind, as easily and transparently as coal and natural gas; capable of integrating any and all better ideas and technologies (energy storage technologies, for example) as they become market-proven and ready to come online.
  • Motivating – Enabling real-time communication between the consumer and the utility, so consumers can tailor their energy consumption based on individual preferences, such as price or environmental concerns.
  • Resilient – Increasingly resistant to attack and natural disasters as it becomes more decentralized and is reinforced with smart grid security protocols.
  • Green – Slowing the advance of global climate change and offering a genuine path towards significant environmental improvement.
  • Load Handling – The total load on the power grid is not stable; it varies over time. In case of heavy load, a smart grid system can advise consumers to temporarily minimize energy consumption.
  • Demand Response Support – Provides users with an automated way to reduce their electricity bills by guiding them to use low-priority electronic devices when rates are lower.
  • Decentralization of Power Generation – A distributed or decentralized grid system allows the individual user to generate onsite power by employing any appropriate method at his or her discretion.
  • It can repair itself.
  • It encourages consumer participation in grid operations.
  • It ensures a consistent and premium-quality power supply that resists power leakages.
  • It allows the electricity markets to grow and do business.

The Key Challenges for Smart Grids

  • Strengthening the grid: ensuring that there is sufficient transmission capacity to interconnect energy resources, especially renewable resources.
  • Moving offshore: developing the most efficient connections for offshore wind farms and for other marine technologies.
  • Developing decentralized architectures: enabling smaller scale electricity supply systems to operate harmoniously with the total system.
  • Communications: delivering the communications infrastructure to allow potentially millions of parties to operate and trade in the single market.
  • Active demand side: enabling all consumers, with or without their own generation, to play an active role in the operation of the system.
  • Integrating intermittent generation: finding the best ways of integrating intermittent generation including residential micro generation.
  • Enhancing intelligence: in generation, in demand and, most notably, in the grid itself.
  • Preparing for electric vehicles: whereas Smart Grids must accommodate the needs of all consumers, electric vehicles are particularly emphasized due to their mobile and highly dispersed character and their possible massive deployment in the coming years, which would pose a major challenge for future electricity networks.


Making the Power Grid Smart

Utilities gain the ability to communicate with and control end-user hardware, from industrial-scale air conditioners to residential water heaters. They use that ability to better balance supply and demand, in part by reducing demand during peak usage hours. Taking advantage of information technology to increase the efficiency of the grid, the delivery system, and the use of electricity all at the same time is itself a smart move. Simply put, a smart grid combined with smart meters enables both electrical utilities and consumers to be much more efficient.

A smart grid not only moves electricity more efficiently in geographic terms, it also enables electricity use to be shifted over time, for example from periods of peak demand to those of off-peak demand. Achieving this goal means working with consumers who have “smart meters” to see exactly how much electricity is being used at any particular time. This facilitates two-way communication between utility and consumer, so they can cooperate in reducing peak demand in a way that is advantageous to both. It also allows two-way metering, so that customers who have a rooftop solar panel or their own windmill can sell surplus electricity back to the utility.
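The two-way metering arrangement just described can be made concrete with a toy billing calculation. The tariff values and the `net_metering_bill` helper below are assumptions for illustration; real net-metering tariffs vary by jurisdiction and often price exports differently from imports.

```python
# Hedged sketch of net-metering billing: a customer with rooftop solar
# offsets imported energy with exported surplus. Rates are assumed values,
# kept in integer cents to avoid floating-point rounding.

def net_metering_bill(imported_kwh, exported_kwh,
                      import_rate_cents=15, export_rate_cents=15):
    """Monthly bill in cents; a negative result is a credit to the customer."""
    return imported_kwh * import_rate_cents - exported_kwh * export_rate_cents


# A month with 300 kWh imported and 100 kWh exported nets out to 3000 cents.
assert net_metering_bill(300, 100) == 3000
# A sunny month with more export than import yields a credit.
assert net_metering_bill(100, 300) == -3000
```

The bidirectional meter is what makes the second case possible at all: without separate import and export registers, the surplus would simply be invisible to the utility.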

Status of the Smart Grid According to the Department of Energy

The DOE has just released a state-of-the-smart-grid report as part of a directive in the Energy Independence and Security Act of 2007, which tells the Secretary of Energy to “report to Congress concerning the status of smart grid deployments nationwide and any regulatory or government barriers to continued deployment.” So, here we have it. The report as a whole is really interesting and worth a full read, but key findings include:

Distributed energy resources

The ability to connect distributed generation, storage, and renewable resources is becoming more standardized and cost effective.

Electricity infrastructure

Those smart grid areas that fit within the traditional electricity utility business and policy model have a history of automation and advanced communication deployment to build upon.

Business and policy

The business cases, financial resources, paths to deployment, and models for enabling governmental policy are only now emerging with experimentation. This is true of the regulated and non-regulated aspects of the electric system.

High-tech culture change

A smart grid is socially transformational. As with the Internet or cell phone communications, our experience with electricity will change dramatically. To successfully integrate high levels of automation requires cultural change.

Components of Smart Grid

The basic components of the Smart Grid are as shown in

Grid-Related Works

  • Integrated Communications – High-speed, fully integrated, two-way communication technologies will make the modern grid a dynamic, interactive platform for real-time information and power exchange. An open architecture will create a plug-and-play environment that allows grid components to talk, listen and interact.
  • Sensing and Measurement – These technologies will enhance power system measurements and detect and respond to problems. They evaluate the health of equipment and the integrity of the grid and support advanced protective relaying; they eliminate meter estimations and prevent energy theft. They enable consumer choice and demand response, and help relieve congestion.
  • Advanced Components – Advanced components play an active role in determining the grid’s behavior. The next generation of devices will apply the latest research in materials, superconductivity, energy storage, power electronics, and microelectronics.  This will produce higher power densities, greater reliability, and improved real-time diagnostics.
  • Advanced Control Methods – New methods will be applied to monitor essential components, enabling rapid diagnosis and timely, appropriate response to any event.  They will also support market pricing and enhance asset management.
  • Improved Interfaces and Decision Support – In many situations, the time available for operators to make decisions has shortened to seconds. Thus, the modern grid will require wide, seamless, real-time use of applications and tools that enable grid operators and managers to make decisions quickly. Decision support with improved interfaces will amplify human decision making at all levels of the grid.

Objective of This Work

  • To know about developing a two-way modernized electric network, replacing the existing network, to manage power so that brownouts (a brownout is an intentional drop in voltage in an electrical power supply system, used for load reduction in an emergency), which are actually caused by a lack of peak capacity rather than a lack of energy, can be resolved.
  • To know about reliably integrating high levels of variable resources—wind, solar, ocean and some forms of hydro—into bulk power system.
  • To know about driving carbon emissions reductions by facilitating renewable power generation, enabling electric vehicles as replacements for conventional vehicles, reducing energy use by customers and reducing energy losses within the grid.
  • To know about demand reductions; savings in overall system reserve-margin costs; line-loss reduction and improved asset management; lower maintenance and servicing costs (e.g. reduced manual inspection of meters) and reduced grid losses; and new customer service offerings.
  • To know about safe work environments by reducing time on the road for meter reading, alerting workers of islanding and allowing for some grid repairs to be performed.
  • To know about promoting off-peak usage, ensuring cyber security, enabling feed-in tariffs (selling excess power back to the grid), and providing demand response services that allow the utility to control usage in real time (for a discount or other benefits) to better manage load.

Introduction to the Thesis

The electric power industry needs to be transformed in order to cope with the needs of the modern digital society. Customers demand higher energy quality, reliability, and a wider choice of extra services, and at the same time they want prices to be lower. In principle, the Smart Grid is an upgrade of 20th-century power grids, which generally “broadcast” power from a few central generation nodes to a large number of users. The Smart Grid will instead be capable of routing power in more optimal ways, responding to a wide range of conditions and charging a premium to those who use energy during peak hours.

By 2020, more than 30 mega-cities will emerge on the Earth. Increased population together with a growing energy-dependence trend will require new technologies that are able to cope with a larger amount of energy resources. A rough estimation shows that by 2050, the world’s electricity supply will need to triple in order to keep up with the growing demand. That will require nearly 10000 GW of new generation capacity.
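As a sanity check on that rough estimate, tripling supply by 2050 implies a modest but sustained growth rate. The calculation below assumes a baseline of roughly 2010 (forty years out), which is my assumption, not stated in the text.

```python
# Back-of-envelope check: what compound annual growth rate (CAGR) does
# "triple by 2050" imply? Baseline year (~2010, giving 40 years) is assumed.

def implied_cagr(multiple: float, years: int) -> float:
    """Annual growth rate r such that (1 + r) ** years == multiple."""
    return multiple ** (1 / years) - 1


rate = implied_cagr(3.0, 40)
# Tripling over 40 years works out to roughly 2.8% growth per year.
assert 0.027 < rate < 0.029
```

Seen this way, the headline figure is less dramatic than it sounds: a steady growth rate under 3% per year, compounded over four decades, is enough to triple total supply.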

Climate change is now more real than ever. The era of fossil fuels will soon come to its end. And our nation is pretty much dependent on finite natural resources for energy generation. We are living in times when significant changes need to be made in the utility industry.

The introduction and basic contents of each chapter are summarized below:

  1. introduces the topic of this thesis and briefly sums up its history, goals, characteristics, necessity, etc.
  2. concentrates on the potential and relevant capabilities of the Smart Grid, as well as how it works and its advantages and disadvantages.
  3. emphasizes the driving factors, development, and progress of the Smart Grid.
  4. discusses current implementations, standardization, and the advancement of the technology.
  5. brings out power-related issues, describing improvements in the transmission, protection, distribution, and supervision of the electric network. Furthermore, the continuing effort towards more sustainable and accommodating seamless interconnection of different power technologies is mentioned.
  6. makes a point about security from cyber attacks and hacking, and about how to control the grid, maintain privacy, and keep the smart grid safe from breakdown.
  7. marks out how to address climate change, reduce the carbon footprint, and improve energy efficiency, while also enhancing the incorporation of decentralized energy sources.
  8. depicts hopes and the outlook for future Smart Grid technology, its usage, and the progression of the power system.
  9. wraps up the paper with an appropriate discussion of the overall work, along with future recommendations.

In the coming years, the industry will experience not only advanced metering infrastructure deployment but also new, improved grid technologies. These new technologies will greatly expand the scale of benefits to both customers and utilities.

But despite the changing environment, there are still some challenges that prevent utilities from rapidly developing the smart grid concept. Decision makers and investors are still skeptical about the benefits of smart grid technologies. Therefore, it is important to present all of these benefits in a clear and understandable way.

Improved grid reliability and power quality rules are gaining more and more attention as more regulators consider applying penalty-reward systems based on performance. Customer satisfaction ratings should also be considered. The introduction of new telecommunication technologies, with encryption and remote inspection of assets, will increase the security of the grid and strengthen it.

The smart grid will give customers the ability to control their energy consumption using demand response. Factors such as peak shifting and overall conservation will shape a demand response system.

The Smart Grid’s Capabilities

The transition to a more automated grid in pursuit of environmental, efficiency and resilience benefits entails changes and enhancements across the grid value chain, from how the electricity supplier operates, to how the network is structured, to how the end user interacts with the grid infrastructure. These changes can be organized into five broad categories, and constitute the smart grid’s key characteristics or “capabilities”.

Demand Response

This capability refers to the capacity of the user or operator to adjust the demand for electricity at a given moment, using real-time data. Demand response can take the form of active customer behavior in response to various signals, generally the price of electricity at the meter, or it can be automated through the integration of smart appliances and customer devices that respond to signals sent from the utility based on system stability and load parameters. For example, a residential hot water heater could be turned off by a utility experiencing high electricity loads on a hot day, or could be programmed by its owner to turn on only at off-peak times. Active demand management can help smooth load curves, which in turn can reduce the required reserve margins maintained by electricity generators. Some pilot projects can already claim results in this respect: the Olympic Peninsula Project, overseen by the Pacific Northwest National Laboratory on behalf of the US Department of Energy, dropped peak power usage by 15 percent. A similar project from Constellation Energy in Baltimore, Maryland, cut peak power demand by at least 22 percent and as much as 37 percent. These capabilities have been rolled out in several Canadian jurisdictions to date; however, the value of this technology depends on a number of factors. The first, of course, is customer take-up. If electricity customers do not sign up for voluntary utility load-control programs or do not purchase the smart appliances and devices required, demand response programs will have little effect. Additionally, if the generating mix in a particular jurisdiction allows it to adapt economically to electricity demand, the value of demand response programs is diminished. In Alberta, for example, the average power divided by the peak power output, or “load factor”, for the province is about 82%, which is quite high.
As such, the value of peak-shaving programs is diminished compared to other Canadian jurisdictions with load factors below 82%. It is important to note that demand response and energy conservation are not one and the same. Successful demand response smooths out consumption levels over a 24-hour period, but does not encourage decreased consumption. Smart grid technologies that promote a reduction in the use of electricity include the Advanced Metering Infrastructure (AMI) and the Home Area Network (HAN), both of which allow for increased customer control over energy use.
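The load factor used above (average power divided by peak power) is easy to compute from a load profile. The hourly values below are illustrative, not Alberta's actual data; the point is simply that a flat profile scores high and a peaky one scores low.

```python
# Load factor as defined in the text: average power divided by peak power.
# Hourly load values are illustrative, not real utility data.

def load_factor(hourly_load_mw):
    """Average load over the period divided by the peak load (0 to 1]."""
    average = sum(hourly_load_mw) / len(hourly_load_mw)
    return average / max(hourly_load_mw)


flat_day = [82, 80, 81, 83]    # little variation -> high load factor
peaky_day = [40, 50, 100, 50]  # sharp peak -> low load factor

assert load_factor(flat_day) > load_factor(peaky_day)
assert load_factor([100, 100]) == 1.0  # perfectly flat load
```

This is why a high provincial load factor weakens the case for peak shaving: when the curve is already nearly flat, there is little peak left to shave.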

Facilitation of Distributed Generation

Some in the industry refer to the combined optimal management of both supply and demand as the “achievement of flow balance.” Traditionally, the grid has been a centralized system with one-way electron flows from the generator, along transmission wires, to distribution wires, to end customers. One component of the smart grid allows for both movement and measurement in both directions, allowing small localized generators to push their unused locally generated power back to the grid and to be accurately paid for it. The wind and the sun, however, generate energy according to their own schedule, not the needs of the system. The smart grid is meant to manage the intermittency of renewable generation through advanced and localized monitoring, dispatch, and storage.

In Ontario, the Energy Board has directed that it is the responsibility of the generator to mitigate any negative effects that connected supply may have on the distribution grid in terms of voltage variances and power quality. The optimal solution set to accomplish this, however, is still being examined. In addition to intermittency challenges, distributed generation can cause instances of “islanding”, in which sections of the grid are electrified even though electricity from the utility is not present. Islanding can be very dangerous for utility workers, who may not know that certain wires have remained live during a power outage. Ideally, real-time information will allow islanded customers to remain in service while posing no risk to utility workers. Again, the automation afforded by the smart grid offers a means to this end. When Louisiana was hit by Hurricane Gustav on September 1, 2008, an island was formed of about 225,000 customers who were disconnected from the main electricity grid. According to Entergy, the responsible utility, “synchrophasors installed on key buses within the Entergy system provided the information needed for the operators to keep the system operating reliably.”[8] This technology saved the utility an estimated $2-$3 million in restoration costs, and kept all customers in service (thereby avoiding economic losses to regional businesses).
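A common simple defense against the unintentional islanding described above is passive anti-islanding: a distributed generator disconnects when local voltage or frequency leaves a protective window, since losing the utility feed usually drags one or both out of band. The window limits below are illustrative assumptions loosely modeled on typical interconnection practice for a 60 Hz system, not values from the thesis or from any specific standard.

```python
# Hedged sketch of passive anti-islanding protection for a distributed
# generator. Window limits are illustrative assumptions, not standard values.

V_MIN, V_MAX = 0.88, 1.10  # per-unit voltage window
F_MIN, F_MAX = 59.3, 60.5  # frequency window in Hz (60 Hz nominal)


def trip_inverter(v_pu: float, freq_hz: float) -> bool:
    """Return True if the generator should disconnect from the grid."""
    in_band = (V_MIN <= v_pu <= V_MAX) and (F_MIN <= freq_hz <= F_MAX)
    return not in_band


assert trip_inverter(1.00, 60.0) is False  # normal operation: stay connected
assert trip_inverter(0.70, 59.1) is True   # likely islanded: disconnect
```

Passive detection alone can miss the rare case where an islanded load happens to match local generation closely, which is why the text emphasizes synchrophasor-based real-time monitoring as the fuller solution.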

Facilitation of Electric Vehicles

The smart grid can enable other beneficial technologies as well. Most notably, it can support advanced loading and pricing schemes for fuelling electric vehicles (EVs). Advanced Metering Infrastructure would allow customers to recharge at off-peak hours based on expected prices and car use patterns, while bidirectional metering could create the option of selling back stored power during on-peak hours. Although significant EV penetration is still a medium- to long-term projection, some cities and regions have started experiments, and the existence of a smart grid is essential to their uptake. This area of the smart grid provides an illustrative example of the potential risk to utilities of getting caught in the middle. Many policy makers and car manufacturers correctly point out that widespread charging infrastructure may help incentivize customers to switch to electric vehicles. While this is true, we must recognize that charging infrastructure alone may not be enough to change customer behavior; until a breakthrough technology is discovered by the automotive industry, electric vehicles will still have relatively high price tags and limited range. As such, prudence dictates that utility investments in EV infrastructure ought to respond to the automotive purchasing patterns of their customers rather than laying the groundwork for a fuel switch that is still largely dependent on technological breakthroughs. If utilities invest in infrastructure now, and the EV market takes longer than promised to develop, customers may not feel well served.

Optimization of Asset Use

Monitoring throughout the full system has the potential to reduce energy losses, improve dispatch, enhance stability, and extend infrastructure lifespan. For example, monitoring enables timely maintenance, more efficient matching of supply and demand from economic, operational and environmental perspectives, and overload detection of transformers and conductors. Or as Miles Keogh, Director of Grants and Research at the National Association of Regulatory Utility Commissioners in the US, argues in a recent paper, system optimization can occur “through transformer and conductor overload detection, volt/var control, phase balancing, abnormal switch identification, and a host of ways to improve peak load management.” Thus, as he concludes, “while the smart meter may have become the ‘poster child’ for the smart grid, advanced sensors, synchrophasors, and distribution automation systems are examples of equipment that are likely to be even more important in harnessing the value of smart grid.”

For example, smart grid monitoring helps utilities assess line-proximity issues as they relate to trees and tree growth, because dense growth results in a significant increase in the number of short voltage blips that occur. Early detection of these short line contacts by trees will assist utilities in their “just in time” tree programs, effectively focusing crews on the correct “problem areas”.

In addition, network enhancements, and in particular improved visualization and monitoring, will enable “operators to observe the voltage and current waveforms of the bulk power system at very high levels of detail.” This capability will in turn “provide deeper insight into the real-time stability of the power system, and the effects of generator dispatch and operation;” and thereby enable operators to “optimize individual generators, and groups of generators, to improve grid stability during conditions of high system stress.”

Problem Detection and Mitigation

Many utility customers do not realize the limited information currently available to grid operators, especially at the distribution level. When a blackout occurs, for example, customer calls are mapped to define the geographic area affected. This, in turn, allows utility engineers to determine which lines, transformers and switches are likely involved, and what they must do to restore service. It is not rare, in fact, for a utility customer care representative to ask a caller to step outside to visually survey the extent of the power loss in their neighborhood. It is a testament to the high levels of reliability enjoyed by electric utility customers that most have never experienced this; however, it is also evidence of an antiquated system. While SCADA and other energy management systems have long been used to monitor transmission systems, visibility into the distribution system has been limited. As the grid is increasingly asked to deliver the above four capabilities, however, dispatchers will require a real-time model of the distribution network capable of delivering three things:

  • Real-time monitoring (of voltage, currents, critical infrastructure) and reaction (refining response to monitored events);
  • Anticipation (or what some industry specialists call “fast look-ahead simulation”);
  • Isolation where failures do occur (to prevent cascades).
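The three capabilities above can be illustrated with a toy dispatcher rule. The feeder names, ratings, and thresholds below are invented; the sketch only shows how a real-time reading might be mapped to a monitor, anticipate, or isolate action:

```python
# Illustrative sketch (names and thresholds invented) of the three capabilities:
# monitor feeder readings, anticipate looming overloads, isolate failed sections.

RATED_AMPS = 400.0

def classify(feeder_amps):
    """Map a real-time current reading to an operator action."""
    if feeder_amps > RATED_AMPS:           # failure: trip and isolate the section
        return "isolate"
    if feeder_amps > 0.9 * RATED_AMPS:     # look-ahead: trending toward overload
        return "anticipate"
    return "monitor"

readings = {"feeder-A": 250.0, "feeder-B": 380.0, "feeder-C": 425.0}
actions = {feeder: classify(amps) for feeder, amps in readings.items()}
print(actions)
```

A real distribution management system would of course use full state estimation rather than a single threshold, but the decision structure is the same.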

On any given day in the United States, roughly “500,000 U.S. customers are without power for two hours or more,” costing the American economy between $70 and $150 billion a year. This significant impact on economic activity provides a strong incentive to develop the smart grid, which is expected to reduce small outages through improved problem detection and isolation, as well as storage integration. It is also expected to reduce the likelihood of big blackouts, such as the infamous 2003 blackout that impacted most of the Eastern seaboard. The 2003 blackout left more than 50 million people without power for up to two days, at an estimated cost of $6 billion, and contributed to at least 11 deaths. A root cause analysis revealed that the crisis could not have begun in a more innocuous way: a power line hit some tree branches in northern Ohio. An alarm failed to sound in the local utility, other lines also brushed against trees, and before long there was a cascade of failures across eight US states and one Canadian province. With proper monitoring, now possible through smart grid innovations, some proponents believe that a cascading blackout mirroring that of 2003 should become so remote a possibility as to be almost inconceivable. Intelligent monitoring on a smarter grid allows for early and localized detection of problems so that individual events can be isolated, and mitigating measures introduced, to minimize the impact on the rest of the system.

The current system of supervisory control and data acquisition (SCADA), much of it developed decades ago, has done a reasonably good job of monitoring and response. But it has its limits: it does not sense or monitor enough of the grid; the process of coordination among utilities in the event of an emergency is extremely sluggish; and utilities often use incompatible control protocols, i.e. their protocols are not interoperable with those of their neighbors.
If Ohio had already had a smart grid in August 2003, history might have taken a different course. To begin with, according to Massoud Amin and Phillip Schewe in a Scientific American article, “fault anticipators… would have detected abnormal signals and redirected the power… to isolate the disturbance several hours before the line would have failed.” Similarly, “look-ahead simulators would have identified the line as having a higher-than-normal probability of failure, and self-conscious software would have run failure scenarios to determine the ideal corrective response.” As a result, operators would have implemented corrective actions. And there would be further defenses: “If the line somehow failed later anyway, the sensor network would have detected the voltage fluctuation and communicated it to processors at nearby substations. The processors would have rerouted power through other parts of the grid.” In short: customers would have seen nothing more than “a brief flicker of the lights. Many would not have been aware of any problem at all.”19 Utility operators stress that the smart grid does not spell the end of power failures; under certain circumstances such as these, however, any mitigation could prove very valuable indeed. A more reliable grid is also a safer grid.
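The difference between an unchecked cascade and a sectionalized fault can be shown with a toy propagation model. The network and line names below are invented; the sketch only contrasts letting an overload spread to neighboring lines with isolating the faulted section early:

```python
# Toy sketch of a cascading failure: when a line fails, its load shifts to its
# neighbors; without isolation the overload propagates, with isolation it stops.
# The network topology and names are invented for illustration.

def simulate(neighbors, isolate):
    failed = {"ohio-line"}                 # initial fault: a line contacts a tree
    frontier = ["ohio-line"]
    while frontier:
        line = frontier.pop()
        if isolate:                        # smart grid: the fault is sectionalized
            break
        for nxt in neighbors.get(line, []):   # shifted load overloads neighbors
            if nxt not in failed:
                failed.add(nxt)
                frontier.append(nxt)
    return failed

grid = {"ohio-line": ["line-2"], "line-2": ["line-3"], "line-3": []}
print(len(simulate(grid, isolate=False)))  # cascade: every line fails
print(len(simulate(grid, isolate=True)))   # isolated: only the faulted line
```

Real cascade models track power flows and thermal limits rather than simple adjacency, but the qualitative effect of early isolation is the same.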

First, as discussed previously, smart grid technology allows for “anti-islanding” when needed. Detection technology can ensure that distributed generators detect islanding and immediately stop producing power.
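A passive anti-islanding check can be sketched as a simple window test: if grid voltage or frequency drifts outside its normal band, the distributed inverter must stop exporting. The limits below are illustrative (loosely resembling common North American interconnection windows), not a quotation of any standard:

```python
# Hedged sketch of passive anti-islanding: a distributed inverter trips when
# grid voltage or frequency leaves its normal window. Limits are illustrative.

NOMINAL_V = 240.0

def should_trip(voltage_v, frequency_hz):
    """Return True if the inverter must stop exporting (possible island)."""
    voltage_ok = 0.88 * NOMINAL_V <= voltage_v <= 1.10 * NOMINAL_V
    frequency_ok = 59.3 <= frequency_hz <= 60.5
    return not (voltage_ok and frequency_ok)

print(should_trip(240.0, 60.0))   # healthy grid: keep producing
print(should_trip(180.0, 57.0))   # grid lost, island forming: trip
```

Production inverters combine such passive windows with active detection methods, since a well-matched island can hold voltage and frequency near nominal.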

Second, power failures can leave vulnerable segments of the population, such as the sick or elderly, exposed to the elements or without power required by vital medical equipment. Third, safety is also enhanced through electricity theft reductions. As BC Hydro points out, “energy diversions pose a major safety risk to employees and the public through the threat of violence, fire and electrocution.”

Working Principle of Smart Grid

Smart grid technology is a new system of monitors that foster communication between the energy company and the end consumer. Electricity is sent from the energy company to a distribution center, from which it can later be sent to different destinations based on need. Power lines run from the distribution center to the consumer, and these lines include sensors that send information back to the energy company, giving it an idea of where electricity is being sent and how much electricity is being sent to a given destination. This allows energy companies to track areas of high use, identify possible outages, and provide the proper service. On the consumer end, businesses install monitors that register how much electricity is coming in and being used. Businesses can store surplus electricity in batteries and later redirect that electricity, through the same lines, back to the energy company. The energy company can use this electricity to provide service at peak times without physically generating more electricity at the plant.
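The store-and-sell-back scheme described above amounts to simple energy accounting. The function name, battery capacity, and kWh figures below are made up; the sketch only shows how surplus energy is banked until the battery is full and the remainder becomes available for export:

```python
# Illustrative energy accounting for the store-and-sell-back scheme: a business
# banks surplus energy in a battery and can export the excess. Numbers in kWh
# are made up for illustration.

def net_exchange(imported, used, battery_kwh, battery_cap):
    """Charge the battery with the surplus; report what is left to sell back."""
    surplus = max(0.0, imported - used)
    stored = min(surplus, battery_cap - battery_kwh)   # respect capacity
    battery_kwh += stored
    sellable = surplus - stored                        # surplus beyond storage
    return battery_kwh, sellable

battery, sellable = net_exchange(imported=120.0, used=90.0,
                                 battery_kwh=5.0, battery_cap=25.0)
print(battery, sellable)   # battery fills to capacity; the rest can be exported
```

With bidirectional metering, the `sellable` amount is what the energy company can draw on at peak times instead of generating more at the plant.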

According to the United States Department of Energy, there are five fundamental technologies that will drive smart grid technology:

  • The cohesion of every part of the system, allowing each part to communicate real-time information and control signals.
  • Communication technology that promotes more accurate information and faster response times. These technologies include remote monitoring, time-of-use pricing, and demand-side management.
  • Research and development in the areas of superconductivity, storage, power electronics, and diagnostics.
  • Advanced control methods that enable better responses, diagnostics, and solutions.
  • Improved interfaces and decision support to amplify human decision-making.

Significance of Smart Grid

The smart grid is a revolutionary technology with a direct impact on the lifestyles of individuals. Smart grids will allow consumers to be more conscious of their power needs, to conserve electricity more easily, and to better manage their electricity use to save costs. Moreover, the smart grid will involve the consumer in the process of power generation by allowing him or her to directly indicate the need for electricity. Customers can also be rewarded by being paid for electricity they save and sell to electric companies.

The two-way communication system will enable generation resources that will allow small entities such as homes and individuals to sell power to their neighbors or send it back to the grid. This will change the competitive landscape of energy companies.

Avoid natural disaster disruptions: The smart-grid will allow operators to bypass a particular area where the power went out and still retain power in the rest of the circuit by reprogramming it. Therefore, a lightning strike on a pole will not result in a power failure in the region around it.

Enabling electricity markets: Bulk transmission of electricity will require more grid management. Better grid management will allow alternative energy sources to be distributed across distances to customers regardless of their location.

Downsides of Smart Grid

Just as with any other technology, smart grid technology has some drawbacks. One of the major disadvantages of smart grids is that the technology is not a single component: it consists of various components such as software, power generators, system integrators, and so on. Not every company is on a level playing field to take the risks necessary to build a smart grid. This is the reason many utility companies refrain from venturing into this area: they want other companies to take the risk so that they can follow later, safely. Infrastructure requirements are another major challenge. In the US, the wall sockets cannot be the basis for grid computing. For smart grids, there is a need for access points that can be identified for data and information transfer between the point of usage and the power generating system. This is very similar to a computer access point, which enables a connection to the internet. This need for a two-way communication mechanism is crucial and investment-intensive.

Distributable power is the key to smart grids. The technology exists for centralized generation and distribution but only in one direction – from the electric provider to the customer. This poses a challenge to establish smart grids that need to distribute power effectively on a platform which is more diverse and easily distributable – not necessarily centralized.

Convenience of Smart Grid

Traditionally, electricity has been delivered via a one-way street: Energy from a big, central station power plant is transmitted along high-voltage lines to a substation, and from there to our house. A smart grid turns those lines into a two-way highway.

Wireless smart meters measure and communicate – in real time – information about how much energy we’re using and what it costs, allowing us to better manage our consumption, carbon footprint and bill.

For example, we’ll be able to use our smart phone to tell our water heater to turn off when we leave the house in the morning, and turn back on a half hour before we arrive home in the evening. This could save a lot of energy and money, given that about one quarter of the electricity we pay for is wasted because our household appliances operate when they’re not needed.

Benefit of Customers due to Smart Grid

Once the Smart Grid is completely built, much of the work historically performed manually, including meter reading and power outage reporting, will be handled through near real-time communications between components on the electric “grid” itself, customers, and service-provider employees.

This means the provider will know instantly when power outages occur, rather than relying on customers to report them. The faster the provider knows there is an outage, the faster it can be fixed, which means that customer inconvenience or loss of production is reduced. The provider will be able to react automatically to some types of power outages, re-routing power and reducing outage restoration time to seconds for many customers rather than minutes or hours. The provider will also be able to help customers manage their energy usage, alerting them to unusual spikes in their power consumption before those spikes result in a higher-than-expected bill.

The cost to generate electric power can vary from season to season, day to day, or even hour to hour. Today most electric customers are unaware of this because we all pay one flat rate for each kilowatt-hour of electricity used, no matter when we use it. With the Smart Grid in place, however, providers will be able to offer customers the option to pay lower electric rates when it is less expensive to generate power and higher rates when it is more expensive. With near real-time data available through smart meters, customers will be able to make choices about when to use electricity, thus better managing their electric use and budgets without sacrificing comfort and convenience.

This communication between components of the electric grid, customers, and service-provider personnel will also allow providers to achieve greater operational efficiencies, which in turn can help them go longer without an imposed rate increase. Measures such as reduced power theft and automatic meter reading go straight to keeping operating costs low.
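The rate options described above can be compared with a small billing calculation. The rates and the usage profile below are illustrative, not any utility's actual tariff; the sketch shows how shifting load into off-peak hours lowers a time-of-use bill relative to a flat rate:

```python
# Minimal time-of-use billing sketch. Rates ($/kWh) and the daily usage
# profile (kWh per period) are illustrative, not a real tariff.

FLAT_RATE = 0.12
TOU_RATES = {"off_peak": 0.07, "mid_peak": 0.11, "on_peak": 0.20}

# A customer who has shifted most consumption into off-peak hours.
usage_kwh = {"off_peak": 18.0, "mid_peak": 8.0, "on_peak": 4.0}

flat_bill = FLAT_RATE * sum(usage_kwh.values())
tou_bill = sum(TOU_RATES[period] * kwh for period, kwh in usage_kwh.items())

print(round(flat_bill, 2), round(tou_bill, 2))  # TOU rewards the shifted load
```

The same arithmetic, run on near real-time smart meter data, is what lets customers see the cost consequences of when they use electricity.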

Smart grid Advantages

Consumer Benefits

  • Better information on how consumers use energy, which will allow them to change their energy use so that they spend less and reduce their energy footprint.
  • It will allow them to generate their own electricity.
  • It will mean that the costs of upgrading our infrastructure to meet the needs of the country are minimized and energy price increases are minimized.

Operational Efficiency

  • Integrate distributed generation
  • Optimize network design
  • Enable remote monitoring and diagnostics
  • Improve asset and resource utilization

Energy Efficiency

  • Reduce system and line losses and reduce the price of electricity
  • Enable DSM offerings
  • Improve load and VAR management
  • Comply with state energy efficiency policies

Smart grid technologies will be able to deliver energy efficiencies through, amongst other things:

  • Energy usage understanding;
  • Peak demand control;
  • Advanced metering infrastructure (AMI);
  • Automated energy system operation.

Smart grid technologies will build a partnership between consumers and the energy supplier to enable the supply and delivery of energy in the most cost efficient manner so that we achieve the growth of the economy that we all need with the smallest impact on the environment.

Customer Satisfaction

  • Reduce outage frequency and duration
  • Improve power quality
  • Enable customer self-service
  • Reduce customer energy costs

“Green” Agenda

  • Reduce GHG emission via DSM and “peak shaving”
  • Integrate renewable generating assets
  • Comply with Carbon/GHG legislation
  • Enable wide adoption of PHEV (plug-in hybrid electric vehicle)


The transition to a more automated grid in pursuit of environmental, efficiency and resilience benefits entails changes and enhancements across the grid value chain, from how the electricity supplier operates, to how the network is structured, to how the end user interacts with the grid infrastructure. These changes can be organized into five broad categories, and constitute the smart grid’s key characteristics or capabilities. Some other capabilities are shown below.

The utility industry across the world is trying to address numerous challenges, including generation diversification, optimal deployment of expensive assets, demand response, energy conservation, and reduction of the industry’s overall carbon footprint. It is evident that such critical issues cannot be addressed within the confines of the existing electricity grid.

The existing electricity grid is unidirectional in nature. It converts only one-third of fuel energy into electricity, without recovering the waste heat. Almost 8% of its output is lost along its transmission lines, while 20% of its generation capacity exists to meet peak demand only (i.e., it is in use only 5% of the time). In addition, due to the hierarchical topology of its assets, the existing electricity grid suffers from domino-effect failures. The next-generation electricity grid, known as the “smart grid” or “intelligent grid,” is expected to address the major shortcomings of the existing grid. In essence, the smart grid needs to provide the utility companies with full visibility and pervasive control over their assets and services. The smart grid is required to be self-healing and resilient to system anomalies. And last but not least, the smart grid needs to empower its stakeholders to define and realize new ways of engaging.
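The figures quoted above can be combined into a quick back-of-envelope check of end-to-end efficiency. The calculation below simply applies the one-third conversion figure and the 8% transmission loss to an arbitrary amount of fuel energy:

```python
# Back-of-envelope check of the figures above: roughly one-third of fuel
# energy becomes electricity, and about 8% of that output is lost along
# transmission lines, so only about 31% of fuel energy reaches the load.

fuel_energy = 100.0                   # arbitrary units of primary fuel energy
generated = fuel_energy * (1 / 3)     # ~one-third converted to electricity
delivered = generated * (1 - 0.08)    # ~8% lost along transmission lines
print(round(delivered, 1))            # units of energy reaching the customer
```

This is, of course, an aggregate simplification; actual conversion and loss figures vary by plant type and network.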

The Evolution of Tomorrow’s Technology

To allow pervasive control and monitoring, the smart grid is emerging as a convergence of information technology and communication technology with power system engineering. Figure 3.1 depicts the salient features of the smart grid in comparison with the existing grid.

Given the fact that the roots of power system issues are typically found in the electrical distribution system, the point of departure for grid overhaul is firmly placed at the bottom of the chain. As Figure 3.2 demonstrates, utilities believe that investing in distribution automation will provide them with increasing capabilities over time.

Within the context of these new capabilities, communication and data management play an important role. These basic ingredients enable the utilities to place a layer of intelligence over their current and future infrastructure, thereby allowing the introduction of new applications and processes in their businesses. Figure 3.3 depicts the convergence of communication technology and information technology with power system engineering.

Smart Grid Drivers

As the backbone of the power industry, the electricity grid is now the focus of assorted technological innovations. Utilities in North America and across the world are taking solid steps towards incorporating new technologies in many aspects of their operations and infrastructure. At the core of this transformation is the need to make more efficient use of current assets. Figure 3.4 shows a typical utility pyramid in which asset management is at the base of smart grid development. It is on this base that utilities build a foundation for the smart grid through a careful overhaul of their IT, communication, and circuit infrastructure.

As discussed, the organic growth of this well-designed layer of intelligence over utility assets enables the smart grid’s fundamental applications to emerge. It is interesting to note that although the foundation of the smart grid is built on a lateral integration of these basic ingredients, true smart grid capabilities will be built on vertical integration of the upper-layer applications. As an example, a critical capability such as demand response may not be feasible without tight integration of smart meters and home area networks.

As such, one may argue that given the size and the value of utility assets, the emergence of the smart grid will be more likely to follow an evolutionary trajectory than to involve drastic overhaul. The smart grid will therefore materialize through strategic implants of distributed control and monitoring systems within and alongside the existing electricity grid. The functional and technological growth of these embryos over time helps them emerge as large pockets of distributed intelligent systems across diverse geographies.

This organic growth will allow the utilities to shift more of the old grid’s load and functions onto the new grid and so improve and enhance their critical services. These smart grid embryos will facilitate the distributed generation and cogeneration of energy. They will also provide for the integration of alternative sources of energy and the management of a system’s emissions and carbon footprint. And last but not least, they will enable utilities to make more efficient use of their existing assets through demand response, peak shaving, and service quality control.

The problem that most utility providers across the globe face, however, is how to get to where they need to be as soon as possible, at the minimum cost, and without jeopardizing the critical services they are currently providing. Moreover, utilities must decide which strategies and what road map they should pursue to ensure that they achieve the highest possible return on the required investments for such major undertakings. As is the case with any new technology, the utilities in the developing world have a clear advantage over their counter- parts in the developed world. The former have fewer legacy issues to grapple with and so may be able to leap forward without the need for backward compatibility with their existing systems.

Evolution of the Smart Grid

The Existing Grid

The existing electricity grid is a product of rapid urbanization and infrastructure developments in various parts of the world in the past century. Though they exist in many differing geographies, the utility companies have generally adopted similar technologies. The growth of the electrical power system, however, has been influenced by economic, political and geographic factors that are unique to each utility company.

Despite such differences, the basic topology of the existing electrical power system has remained unchanged. Since its inception, the power industry has operated with clear demarcations between its generation, transmission, and distribution subsystems and thus has shaped different levels of automation, evolution, and transformation in each silo. As Figure 3.5 demonstrates, the existing electricity grid is a strictly hierarchical system in which power plants at the top of the chain ensure power delivery to customers’ loads at the bottom of the chain. The system is essentially a one-way pipeline where the source has no real-time information about the service parameters of the termination points. The grid is therefore overengineered to withstand the maximum anticipated peak demand across its aggregated load. And since this peak demand is an infrequent occurrence, the system is inherently inefficient.

Moreover, an unprecedented rise in demand for electrical power, coupled with lagging investments in the electrical power infrastructure, has decreased system stability. With the safe margins exhausted, any unforeseen surge in demand or anomalies across the distribution network causing component failures can trigger catastrophic blackouts.


To facilitate troubleshooting and upkeep of the expensive upstream assets, the utility companies have introduced various levels of command-and-control functions. A typical example is the widely deployed system known as supervisory control and data acquisition (SCADA). Although such systems give utility companies limited control over their upstream functions, the distribution network remains outside their real-time control. And the picture hardly varies across the world. For instance, in North America, which has established one of the world’s most advanced electrical power systems, less than a quarter of the distribution network is equipped with information and communications systems, and distribution automation penetration at the system feeder level is estimated to be only 15% to 20%.

Smart Grid Evolution

Given the fact that nearly 90% of all power outages and disturbances have their roots in the distribution network, the move towards the smart grid has to start at the bottom of the chain, in the distribution system. Moreover, the rapid increase in the cost of fossil fuels, coupled with the inability of utility companies to expand their generation capacity in line with the rising demand for electricity, has accelerated the need to modernize the distribution network by introducing technologies that can help with demand-side management and revenue protection.

As Figure 3.6 shows, the metering side of the distribution system has been the focus of the most recent infrastructure investments. The earlier projects in this sector saw the introduction of automated meter reading (AMR) systems in the distribution network. AMR lets utilities read consumption records, alarms, and status from customers’ premises remotely.

As Figure 3.7 suggests, although AMR technology initially proved attractive, utility companies have realized that AMR does not address the major issue they need to solve: demand-side management. Due to its one-way communication system, AMR’s capability is restricted to reading meter data. It does not let utilities take corrective action based on the information received from the meters.

In other words, AMR systems do not allow the transition to the smart grid, where pervasive control at all levels is a basic premise. Consequently, AMR technology was short-lived. Rather than investing further in AMR, utilities across the world moved towards advanced metering infrastructure (AMI). AMI provides utilities with a two-way communication system to the meter, as well as the ability to modify customers’ service-level parameters. Through AMI, utilities can meet their basic targets for load management and revenue protection. Not only can they get instantaneous information about individual and aggregated demand, but they can also impose certain caps on consumption and enact various revenue models to control their costs.

The emergence of AMI heralded a concerted move by stakeholders to further refine the ever-changing concepts around the smart grid. In fact, one of the major criteria that utility companies apply in choosing among AMI technologies is whether or not they will be forward compatible with their yet-to-be-realized smart grid topologies and technologies.
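The one-way versus two-way distinction between AMR and AMI can be captured in a tiny object model. The class and method names below are invented for illustration; the point is that an AMR meter only exposes a read path, while an AMI meter also accepts service-level commands from the utility:

```python
# Sketch (class names invented) contrasting AMR's one-way reads with AMI's
# two-way control, as described in the text above.

class AMRMeter:
    """One-way: the utility can only read what the meter reports."""
    def __init__(self, kwh):
        self.kwh = kwh

    def read(self):
        return self.kwh

class AMIMeter(AMRMeter):
    """Two-way: the utility can also push service-level commands back."""
    def __init__(self, kwh):
        super().__init__(kwh)
        self.cap_kw = None

    def set_demand_cap(self, kw):
        # e.g., load management during a system peak
        self.cap_kw = kw

amr = AMRMeter(412.0)
ami = AMIMeter(512.0)
ami.set_demand_cap(5.0)          # no equivalent exists on the AMR meter
print(amr.read(), ami.read(), ami.cap_kw)
```

It is this extra command path, generalized across all grid components, that the smart grid's distributed command-and-control layer builds on.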

Transition to the Smart Grid

As the next logical step, the smart grid needs to leverage the AMI infrastructure and implement its distributed command-and-control strategies over the AMI backbone. The pervasive control and intelligence that embody the smart grid must reside across all geographies, components, and functions of the system. Distinguishing these three elements is significant, as it determines the topology of the smart grid and its constituent components.

Smart Micro Grids

The smart grid is the collection of all technologies, concepts, topologies, and approaches that allow the silo hierarchies of generation, transmission, and distribution to be replaced with an end-to-end, organically intelligent, fully integrated environment in which the business processes, objectives, and needs of all stakeholders are supported by the efficient exchange of data, services, and transactions. A smart grid is therefore defined as a grid that accommodates a wide variety of generation options, e.g., central, distributed, intermittent, and mobile. It empowers consumers to interact with the energy management system to adjust their energy use and reduce their energy costs. A smart grid is also a self-healing system: it predicts looming failures and takes corrective action to avoid or mitigate system problems. A smart grid uses IT to continually optimize the use of its capital assets while minimizing operational and maintenance costs.

Mapping the above definitions to a practical architecture, one can readily see that the smart grid cannot and should not be a replacement for the existing electricity grid but a complement to it. In other words, the smart grid would and should coexist with the existing electricity grid, adding to its capabilities, functionalities, and capacities by means of an evolutionary path. This necessitates a topology for the smart grid that allows for organic growth, the inclusion of forward-looking technologies, and full backward compatibility with the existing legacy systems.

At its core, the smart grid is an ad hoc integration of complementary components, subsystems, and functions under the pervasive control of a highly intelligent and distributed command-and-control system. Furthermore, the organic growth and evolution of the smart grid is expected to come through the plug-and-play integration of certain basic structures called intelligent (or smart) micro grids. Micro grids are defined as interconnected networks of distributed energy systems (loads and resources) that can function whether they are connected to or separate from the electricity grid.

Micro Grid Topology

A smart micro grid network that can operate in both grid-tied and islanded modes typically integrates the following seven components. It incorporates power plants capable of meeting local demand as well as feeding the unused energy back to the electricity grid. Such power plants are known as cogenerators and often use renewable sources of energy, such as wind, sun, and biomass. Some micro grids are equipped with thermal power plants capable of recovering the waste heat, which is an inherent by-product of fossil-fuel-based electricity generation. Called combined heat and power (CHP), these systems recycle the waste heat in the form of district cooling or heating in the immediate vicinity of the power plant.

  • It services a variety of loads, including residential, office and industrial loads.
  • It makes use of local and distributed power-storage capability to smooth out the intermittent performance of renewable energy sources.
  • It incorporates smart meters and sensors capable of measuring a multitude of consumption parameters (e.g., active power, reactive power, voltage, current, demand, and so on) with acceptable precision and accuracy. Smart meters should be tamper-resistant and capable of soft connect and disconnect for load and service control.
  • It incorporates a communication infrastructure that enables system components to exchange information and commands securely and reliably.
  • It incorporates smart terminations, loads, and appliances capable of communicating their status and accepting commands to adjust and control their performance and service level based on user and/or utility requirements.


  • It incorporates an intelligent core, composed of integrated networking, computing, and communication infrastructure elements, that appears to users in the form of energy management applications that allow command and control on all nodes of the network. These should be capable of identifying all terminations, querying them, exchanging data and commands with them, and managing the collected data for scheduled and/or on-demand transfer to the higher-level intelligence residing in the smart grid.

Figure 3.8 depicts the topology of a smart micro grid.
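The consumption parameters a smart meter reports can be illustrated with the standard AC power relations. The function below is a hypothetical sketch that derives active power, reactive power, and power factor from RMS voltage, RMS current, and the phase angle between them:

```python
# Illustrative computation of smart meter consumption parameters: apparent
# power S = V*I, active power P = S*cos(phi), reactive power Q = S*sin(phi),
# and power factor P/S. Input values are made up for illustration.

import math

def meter_readings(v_rms, i_rms, phase_deg):
    s = v_rms * i_rms                           # apparent power (VA)
    p = s * math.cos(math.radians(phase_deg))   # active power (W)
    q = s * math.sin(math.radians(phase_deg))   # reactive power (var)
    return p, q, p / s                          # power factor = P / S

p, q, pf = meter_readings(v_rms=230.0, i_rms=10.0, phase_deg=30.0)
print(round(p, 1), round(q, 1), round(pf, 3))
```

Values like these, sampled continuously and reported over the micro grid's communication infrastructure, are what the intelligent core aggregates for load and service control.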

Smart Grid Topology

As Figure 3.9 shows, the smart grid is therefore expected to emerge as a well-planned plug-and-play integration of smart micro grids that will be interconnected through dedicated highways for command, data, and power exchange. The emergence of these smart micro grids and the degree of their interplay and integration will be a function of rapidly escalating smart grid capabilities and requirements. It is also expected that not all micro grids will be created equal. Depending on their diversity of load, the mix of primary energy sources, and the geography and economics at work in particular areas, among other factors, micro grids will be built with different capabilities, assets, and structures.


Coexistence of the Two Generations of Electricity Grids

As discussed earlier, utilities require that the AMI systems now being implemented ensure an evolutionary path to the smart grid. The costs associated with AMI rollout are simply too high to permit an overhaul of the installed systems in preparation for an eventual transition to the smart grid. As such, industry pundits believe that for the foreseeable future the old and the new grids will operate side by side, with functionality and load migrated gradually from the old system to the new one over time. In the not-too-distant future, the smart grid will emerge as a system of organically integrated smart micro grids with pervasive visibility and command-and-control functions distributed across all levels. The topology of the emerging grid will therefore resemble a hybrid solution, the core intelligence of which grows as a function of its maturity and extent. Figure 3.10 shows the topology of the smart grid in transition.

Smart Grid Standards

Despite assurances from AMI technology providers, the utilities expect the transition from AMI to the smart grid to be far from a smooth ride. Many believe that major problems could surface when disparate systems, functions, and components begin to be integrated as part of a distributed command-and-control system. Most of these issues have their roots in the absence of the universally accepted interfaces, messaging and control protocols, and standards that would be required to ensure a common communication vocabulary among system components. Others do not share this notion, however, arguing that given all the efforts under way in standardization bodies, the applicable standards will emerge to help with plug-and-play integration of various smart grid system components. Examples of such standards are ANSI C12.22 for smart metering and IEC 61850 for substation automation.

Moreover, to help with the development of the required standards, the power industry is gradually adopting different terminologies for the partitioning of the command-and-control layers of the smart grid. Examples include home area network or HAN (used to identify the network of communicating loads, sensors, and appliances beyond the smart meter and within the customer’s premises); local area network or LAN (used to identify the network of integrated smart meters, field components, and gateways that form the logical network between distribution substations and a customer’s premises); and, last but not least, wide area network or WAN (used to identify the network of upstream utility assets, including but not limited to power plants, distributed storage, substations, and so on).
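The HAN/LAN/WAN partitioning above can be summarized in a small lookup sketch. The tier assignments mirror the text (HAN behind the smart meter, LAN between substation and premises, WAN upstream of the substation); the specific device names are assumptions made for the example.

```python
# Illustrative model of the three command-and-control tiers of the smart
# grid. Device names are invented for this example.

TIERS = {
    "HAN": {"thermostat", "smart_appliance", "in_home_display"},
    "LAN": {"smart_meter", "field_gateway", "distribution_sensor"},
    "WAN": {"power_plant", "distributed_storage", "substation"},
}

def tier_of(device: str) -> str:
    """Return the network tier a device belongs to, per the partitioning above."""
    for tier, members in TIERS.items():
        if device in members:
            return tier
    raise ValueError(f"unknown device: {device}")

print(tier_of("thermostat"))   # HAN
print(tier_of("substation"))   # WAN
```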

As Figure 3.11 shows, the interface between the WAN and LAN worlds consists of substation gateways, while the interface between LAN and HAN is provided by smart meters. The security and vulnerability of these interfaces will be the focus of much technological and standardization development in the near future.

Recent developments in the power industry point to the need to move towards an industry-wide consensus on a suite of standards enabling end-to-end command and data exchange between components of the smart grid. Focused efforts and leadership by NIST (the United States National Institute of Standards and Technology) are yielding good results. The NIST Framework and Roadmap for Smart Grid Interoperability Standards identifies priority areas for standardization and a list of standards that need to be further refined, developed, and/or implemented. Similar efforts in Europe and elsewhere point to the necessity of developing a common information model (CIM) to enable vertical and lateral integration of applications and functions within the smart grid. Among the list of proposed standards, IEC 61850 and its associated standards are emerging as favorites for WAN data communication, supporting TCP/IP, among other protocols, over fiber or a 1.8-GHz flavor of WiMax. In North America, ANSI C12.22 and its associated standards are viewed as the favorite LAN standard, enabling a new generation of smart meters capable of communicating with their peers as well as with their corresponding substation gateways over a variety of wireless technologies. Similarly, the European Community’s recently issued mandate for the development of Europe’s AMI standard, replacing the aging DLMS/COSEM standard, is fueling efforts to develop a European counterpart to ANSI C12.22.

The situation with HANs is a little murkier, as no clear winner has emerged among the proposed standards, although ZigBee with the Smart Energy Profile appears to be the front-runner. This may be due primarily to the fact that, on one hand, the utilities in North America are shying away from encroaching beyond the smart meter into the customer’s premises, while on the other hand the home appliance manufacturers have not yet seen the need to burden their products with anything that would compromise their competitive position in this price-sensitive commodity market. Therefore, expectations are that the burden for creating the standardization momentum in HAN technology will fall on initiatives from consumer societies, local or national legislative assemblies, and/or concerned citizens.

In summary, the larger issue in the process of transitioning to the smart grid lies in the gradual rollout of a highly distributed and intelligent management system with enough flexibility and scalability to not only manage system growth but also to be open to the accommodation of ever-changing technologies in communications, IT, and power systems. What would ensure a smooth transition from AMI to the smart grid would be the emergence of plug-and-play system components with embedded intelligence that could operate transparently in a variety of system integration and configuration scenarios. The embedded intelligence encapsulated in such components is often referred to with the term intelligent agent.

Smart Grid Research, Development, and Demonstration (RD&D)

Utility companies are fully cognizant of the difficulties involved in transitioning their infrastructure, organizations, and processes towards an uncertain future. The fact of the matter is that despite all the capabilities the smart grid promises to yield, the utilities, as providers of a critical service, still see keeping the lights on as their primary concern.

Given that utilities cannot and will not venture into adopting new technologies without exhaustive validation and qualification, one can readily see that one of the major difficulties utilities across the world are facing is the absence of a near-real-world RD&D capability to enable them to develop, validate, and qualify technologies, applications, and solutions for their smart grid programs. The problem most utility providers face is not the absence of technology. On the contrary, many disparate technologies have been developed by the industry (e.g., communication protocols, computing engines, sensors, algorithms, and models) to address utility applications and resolve potential issues within the smart grid.

The problem is that these new technologies have not yet been proven in the context of the utility providers’ desired specifications, configurations, and architecture. Given the huge responsibility utilities have in connection with operating and maintaining their critical infrastructure, they cannot be expected to venture boldly and without proper preparation into new territories, new technologies, and new solutions. As such, utilities are in critical need of a near-real-world environment, with real loads, distribution gear, and diverse consumption profiles, to develop, test, and validate their required smart grid solutions. Such an environment would in essence constitute a smart micro grid.

Similar to a typical smart micro grid, an RD&D micro-grid will incorporate not only the three major components of generation, loads, and smart controls but also a flexible and highly programmable command-and-control overlay enabling engineers to develop, experiment with, and validate the utility’s target requirements. Figure 3.12 depicts a programmable command-and-control overlay for an RD&D micro grid set up on the Burnaby campus of the British Columbia Institute of Technology (BCIT) in Vancouver, British Columbia, Canada.

Sponsored by BC Hydro and funded jointly by the British Columbia government’s Innovative Clean Energy (ICE) Fund and the Canadian government’s Western Diversification Fund, BCIT’s smart micro grid enables utility providers, technology providers, and researchers to work together to facilitate the commercialization of architectures, protocols, configurations, and models of the evolving smart grid. The ultimate goal is to chart a “path from lab to field” for innovative and cost-effective technologies and solutions for the evolving smart electricity grid.

In addition to a development environment, BCIT’s smart micro grid is also a test bed where multitudes of smart grid components, technologies, and applications are integrated to qualify the merits of different solutions, showcase their capabilities, and accelerate the commercialization of technologies and solutions for the smart grid. As an example, Figure 3.13 shows how such an infrastructure may be programmed to enable utilities to develop, test, and validate their front-end and field capabilities in line with their already existing back-office business processes and tools.


Exciting yet challenging times lie ahead. The electrical power industry is undergoing rapid change. The rising cost of energy, the mass electrification of everyday life, and climate change are the major drivers that will determine the speed at which this transformation will occur. Regardless of how quickly various utilities embrace smart grid concepts, technologies, and systems, they all agree on the inevitability of this massive transformation. It is a move that will affect not only their business processes but also their organization and technologies. At the same time, many research centers across the globe are working to ease this transition by developing the next-generation technologies required to realize the smart grid. As a member of the GridWise Alliance, BCIT is providing North American utilities with a state-of-the-art RD&D micro grid that can be used to accelerate the evolution of the smart grid in North America.


Fads and trends have abounded in the electric utility industry. Several times a decade, a concept or catch phrase catches the attention and imagination of people and results in a wave of talk, buzz, papers, presentations, and self-proclaimed experts. Sometimes these concepts validate themselves and are gradually integrated into standard business practices. Sometimes they fade away and make room for the next big thing.

One of the recent frenzies is feeding on the idea of a high-tech and futuristic distribution system. The distribution system of the past is radial and dumb. The distribution system of the future is meshed and intelligent. There are many names for this future system, but the dual concepts of meshed and intelligent make Smart Grid the preferred term of the author.

There are certainly some proven technologies that will have a role – more or less – in distribution systems moving forward. These include advanced digital meters, distribution automation, low-cost communication systems, and distributed energy resources. In fact, there are already many demonstration projects showing the promise of these and other technologies, including the use of broadband communications for distribution applications, closed-loop systems using advanced protection, and many projects using distributed storage and generation. However, these projects tend to use a single technology in isolation, and do not attempt to create an integrated Smart Grid using a variety of technologies. The closest thing to a Smart Grid to date is perhaps the Circuit of the Future at Southern California Edison. Even this effort is more of a test bed for emerging technologies and is limited to a single circuit.

Many of the current research and development activities related to Smart Grids share a common vision as to desired functionality. Technology should not be used for its own sake, but to enhance the ability of the distribution system to address the changing needs of utilities and their customers. Some of these desired functionalities include:

  • Self-healing
  • High reliability and power quality
  • Resistant to cyber attacks
  • Accommodates a wide variety of distributed generation and storage options
  • Optimizes asset utilization
  • Minimizes operations and maintenance expenses

Achieving these functions through the aforementioned technologies poses an important question. Will the Smart Grid impact the way that distribution systems are designed? If so, how should utilities begin implementing these changes now so that, over time, existing distribution systems can be transformed into Smart Grids of the future?

The remainder of this paper discusses current research activities in the area of Smart Grid, and then discusses the potential design implications related to driving technologies and integration of these technologies.

Current Research Activities

There is presently a large amount of research activity related to Smart Grids. This section discusses the major projects in the distribution area (summarized from an NRECA report on industry research efforts).

EPRI IntelliGrid

Founded by EPRI in 2001, the IntelliGrid initiative has the goal of creating a new electric power delivery infrastructure that integrates advances in communications, computing, and electronics to meet the energy needs of the future. Its mission is to enable the development, integration, and application of technologies to facilitate the transformation of the electric infrastructure to cost-effectively provide secure, high-quality, reliable electricity products and services. At present, the IntelliGrid portfolio is composed of five main projects: IntelliGrid architecture; fast simulation and modeling (FSM); communications for distributed energy resources (DER); the consumer portal; and advanced monitoring systems.

EPRI Advanced Distribution Automation (ADA)

The overall objective of ADA is to create the distribution system of the future. The ADA Program envisions distribution systems as highly automated systems with a flexible electrical system architecture operated via open-architecture communication and control systems. As the systems improve, they will provide increased capabilities for capacity utilization, reliability, and customer service options. ADA has identified the following strategic drivers for the program: improved reliability and power quality; reduced operating costs; improved outage restoration time; increased customer service options; integration of distributed generation and storage; and integration of customer systems.

Modern Grid Initiative

Established by the U.S. Department of Energy (DOE) in 2005 through the Office of Electricity Delivery and Energy Reliability (OE) and the National Energy Technology Laboratory (NETL), this program focuses on the modern grid as a new model of electricity delivery that will bring a new era of energy prosperity. It sees the modern grid not as a patchwork of efforts to bring power to the consumer, but as a total system that utilizes the most innovative technologies in the most useful manner. The intent of the Modern Grid Initiative is to accelerate the nation’s move to a modern electric grid by creating an industry-DOE partnership that invests significant funds in demonstration projects. These demonstrations will establish the value of developing an integrated suite of technologies and processes that move the grid toward modernization. They will address key barriers and establish scalability, broad applicability, and a clear path to full deployment for solutions that offer compelling benefits. Each project will involve national and regional stakeholders and multiple funding parties.


GridWise

The GridWise program represents the vision that the U.S. Department of Energy (DOE) has for the power delivery system of the future. The mission of the DOE Distribution Integration program is to modernize distribution grid infrastructure and operations, from distribution substations (69 kV and down) to consumers (members), with two-way flow of electricity and information. The GridWise R&D program is composed of the GridWise Program at DOE, GridWise demonstration projects (with both public and private funding), and the GridWise Architecture Council.

Advanced Grid Applications Consortium (GridApps)

Formed by Concurrent Technologies Corporation in 2005, and sponsored by DOE, the GridApps consortium applies utility technologies and practices to modernize electric transmission and distribution operations. GridApps works on the application of technologies that either have not been implemented by others or need to finish their commercialization into broadly deployed products. Technologies applied by GridApps can be classified into three domains: T&D monitoring and management technologies; new devices; and system integration/system engineering for enhanced performance.


GridWorks

GridWorks is a new program activity in the U.S. Department of Energy’s Office of Electricity Delivery and Energy Reliability (OE). Its aim is to improve the reliability of the electric system through the modernization of key grid components: cables and conductors, substations and protective systems, and power electronics. The plan includes near-term activities to incrementally improve existing power systems and accelerate their introduction into the marketplace. It also includes long-term activities to develop new technologies, tools, and techniques to support the modernization of the electric grid for the requirements of the 21st century. The plan calls for coordinating GridWorks’ activities with those of complementary efforts underway in the Office, including: high temperature superconducting systems, transmission reliability technologies, electric distribution technologies, energy storage devices, and GridWise systems.

Distribution Vision 2010 (DV2010)

The goal of DV2010 is to make feeders virtually “outage proof” through a combination of high-speed communications, switching devices, intelligent controllers, and reconfigured feeders. This will enable customers to avoid interruptions for most feeder faults. DV2010 concepts would not be applied to all feeders. Rather, the concepts would be used to create “Premium Operating Districts” (PODs) serving customers that require, and would be willing to pay extra for, such high-quality service.

California Energy Commission – Public Interest Energy Research (PIER) Program

The CEC-PIER program was established in 1997 as part of electricity restructuring. The PIER program is designed to enable sustainable energy choices for utilities, state and local governments, and large and small consumers in California. The PIER program provides advanced energy innovations in hardware, software systems, exploratory concepts, and supporting knowledge, along with a balanced portfolio of near-, mid-, and long-term energy options for a sustainable energy future in California. The program is divided into six program areas plus an innovation small grant program. The most relevant program for Smart Grid is the Energy Systems Integration (ESI) program. Ongoing work in the ESI program is currently focused on distributed energy resource integration, valuation of distribution automation, and pilots of distributed energy resources and demand response.

Impact of Technologies on Design

With all of the Smart Grid research activity, it is desirable to investigate whether Smart Grid technologies will have any design implications for distribution systems. Will the basic topology and layout of a Smart Grid be similar to what is seen today? Alternatively, will the basic topology and layout of a Smart Grid look different? To answer these questions, the design implications associated with the major technological drivers will be examined. After this, the next section will examine the design implications of all of these technologies considered together.

Advanced Metering Infrastructure (AMI)

A Smart Grid will utilize advanced digital meters at all customer service locations. These meters will have two-way communication, be able to remotely connect and disconnect service, record waveforms, monitor voltage and current, and support time-of-use and real-time rate structures. The meters will be in the same locations as present meters, and therefore will not have any direct design implications. However, these meters will make a large amount of data available to operations and planning, which can potentially be used to achieve better reliability and better asset management. Perhaps the biggest change that advanced meters will enable is in the area of real-time rates. True real-time rates will tend to equalize distribution system loading patterns. In addition, these meters will enable automatic demand response by interfacing with smart appliances. From a design perspective, peak demand is a key driver. If peak demand per customer is reduced, feeders can be longer, voltages can be lower, and wire sizes can be smaller. Most likely, advanced metering infrastructure will result in longer feeders.
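The link between peak demand and feeder length can be illustrated with a back-of-the-envelope sketch: for a fixed allowable voltage drop, a lower peak demand permits a longer feeder. The linear voltage-drop model and all numbers below are simplifying assumptions for illustration only, not actual design values.

```python
# Toy feeder-length calculation. Voltage drop is modeled as proportional
# to load times length; the coefficients are invented for this example.

def max_feeder_length_km(peak_kw, drop_volts_per_kw_km=0.05, allowed_drop_volts=6.0):
    """Voltage drop ~ load * length; solve for the longest permissible feeder."""
    return allowed_drop_volts / (drop_volts_per_kw_km * peak_kw)

before = max_feeder_length_km(peak_kw=10.0)   # without demand response
after = max_feeder_length_km(peak_kw=8.0)     # 20% peak reduction via real-time rates
print(round(before, 2), round(after, 2))      # 12.0 15.0
```

Even in this crude model, a 20% reduction in peak demand per customer extends the permissible feeder length by 25%, which is the design effect the paragraph above describes.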

Distribution Automation

Distribution automation (DA) refers to monitoring, control, and communication functions located out on the feeder. From a design perspective, the most important aspects of distribution automation are in the areas of protection and switching (often integrated into the same device). There are DA devices today that can cost-effectively serve as an “intelligent node” in the distribution system. These devices can interrupt fault current, monitor currents and voltages, communicate with one another, and automatically reconfigure the system to restore customers and achieve other objectives.

The ability to quickly and flexibly reconfigure an interconnected network of feeders is a key component of Smart Grid. This ability, enabled by DA, also (1) requires distribution components to have enough capacity to accept the transfer, and (2) requires the protection system to be able to properly isolate a fault in the reconfigured topology. Both of these issues have an impact on system design. Presently, most distribution systems are designed around a three-phase main trunk feeder with single-phase laterals. The main trunk carries most power away from the substation through the center of the feeder service territory. Single-phase laterals are used to connect the main trunk to customer locations. Actual distribution systems have branching, normally-open loops, and other complexities, but the overarching philosophy remains the same.

A Smart Grid does not just try to connect substations to customers for the lowest cost. Instead, a Smart Grid is an enabling system that can be quickly and flexibly reconfigured. Therefore, future distribution systems will be designed more as an integrated Grid of distribution lines, with the Grid connected to multiple substations. Design, therefore, shifts from a focus on feeders to a focus on a system of interconnected feeders.
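The reconfiguration idea can be sketched as a graph problem: buses and switches are nodes, line segments are edges, and a normally-open tie switch can be closed to restore customers from an adjacent substation after a fault. The small topology below is an assumption made for the example.

```python
from collections import deque

def energized(edges, sources):
    """Return the set of nodes reachable from any substation source."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, queue = set(sources), deque(sources)
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Two substations (S1, S2) feeding buses A-D; D-C is a normally-open tie.
closed = [("S1", "A"), ("A", "B"), ("B", "C"), ("S2", "D")]
print("C" in energized(closed, {"S1", "S2"}))    # True

# A fault removes segment A-B; customers at B and C lose supply...
faulted = [("S1", "A"), ("B", "C"), ("S2", "D")]
print("C" in energized(faulted, {"S1", "S2"}))   # False

# ...until the tie switch D-C is closed and S2 picks them up.
restored = faulted + [("D", "C")]
print("C" in energized(restored, {"S1", "S2"}))  # True
```

This is the essence of designing a system of interconnected feeders rather than isolated radial feeders: restoration paths exist because the Grid is connected to multiple substations.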

Traditional distribution systems use time-current coordination for protection devices. These devices assume that faster devices are topologically further from the substation. In a Smart Grid, topology is flexible and this assumption is problematic. From a design perspective, system topology and system protection will have to be planned together to ensure proper protection coordination for a variety of configurations.
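A minimal check makes the coordination problem above concrete: under traditional time-current grading, each protective device further from the substation must operate faster than the one upstream of it. The device operating times and orderings below are invented for illustration.

```python
# Sketch of the time-grading assumption in traditional protection.

def coordinated(path_times):
    """path_times: operating times (s) along a path from substation outward.
    Traditional coordination requires strictly decreasing times."""
    return all(t_up > t_down for t_up, t_down in zip(path_times, path_times[1:]))

normal_path = [1.0, 0.6, 0.3]        # substation breaker -> recloser -> fuse
print(coordinated(normal_path))      # True

# After a Smart Grid reconfiguration, the same devices can appear in a
# different topological order, breaking the fixed-time-grading assumption:
reconfigured_path = [0.6, 1.0, 0.3]
print(coordinated(reconfigured_path))  # False
```

This is why the text argues that topology and protection must be planned together: a check like this would have to pass for every permissible configuration, which in practice means adaptive protection settings.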

Distributed Energy Resources

Distributed energy resources (DER) are small sources of generation and/or storage that are connected to the distribution system. For low levels of penetration (about 15% of peak demand or less), DER do not have a large effect on system design as long as they have proper protection at the point of interconnection.
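The ~15%-of-peak rule of thumb stated above reduces to a one-line screening check. The threshold comes from the text; the function name and example numbers are illustrative assumptions.

```python
# Screening check for whether DER penetration is high enough to become a
# system-design concern, per the rough 15%-of-peak rule of thumb above.

def der_is_design_concern(der_kw, peak_demand_kw, threshold=0.15):
    """True when DER penetration exceeds the ~15%-of-peak rule of thumb."""
    return der_kw / peak_demand_kw > threshold

print(der_is_design_concern(der_kw=100, peak_demand_kw=1000))  # False (10%)
print(der_is_design_concern(der_kw=300, peak_demand_kw=1000))  # True (30%)
```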

A Smart Grid has the potential to have large and flexible sources of DER. In this case, the distribution system begins to resemble a small transmission system and needs to consider similar design issues, such as non-radial power flow and increased fault current duty. Other design issues relate to the ability of a distribution system to operate as an electrical island, the ability of a distribution system to relieve optimal power flow constraints, and the ability of DER units to operate together as a virtual power plant.

An Integrated Smart Grid

Consider a distribution system with pervasive AMI, extensive DA, and high levels of DER. As mentioned in the previous section, each of these technologies has certain implications for system design. However, a true Smart Grid will not treat these technologies as separate issues. Rather, a Smart Grid will integrate the functions of AMI, DA, and DER so that the total benefits are greater than the sum of the parts. Much of the integration of functions relates to communication systems, IT systems, and business processes. These are not the focus of this paper. Rather, the question is what the system design of a distribution system will look like when it can take full advantage of AMI, DA, and DER working together.

A Smart Grid will increasingly look like a mesh of interconnected distribution backbones. This Grid will likely be operated radially with respect to the transmission system, but non-radially with respect to DER. Protection on this backbone will therefore have to be “smart,” meaning protection settings can adapt to topology changes to ensure proper coordination. Radial taps will still be connected to the backbone, but lateral protection will gradually move away from fused cutouts. DA on laterals will become more common, and laterals will increasingly be laid out in loops and more complex network structures.

Currently, distribution systems are designed to deliver power to customers within certain voltage tolerances without overloading equipment. In a Smart Grid, these criteria are taken for granted. The driving design issues for Smart Grid will be cost, reliability, generation flexibility, and customer choice.


In twenty years, many distribution systems will not resemble the distribution systems of today. These systems will have advanced metering, robust communications capability, extensive automation, distributed generation, and distributed storage. Through the integrated use of these technologies, Smart Grids will be able to self-heal, provide high reliability and power quality, be resistant to cyber-attacks, operate with multi-directional power flow, increase equipment utilization, operate with lower cost, and offer customers a variety of service choices.

If a Smart Grid were designed from scratch, design issues would be complicated but manageable. Of course, there is already an existing distribution infrastructure that was not designed with Smart Grid in mind. This creates the following situation: first, Smart Grid is significantly different from distribution systems today from a design perspective; second, modifying the existing system into a Smart Grid will take decades. In this situation, the only viable way to realize an extensive Smart Grid is to develop a vision for the ultimate design of a Smart Grid and then make short-term decisions that incrementally transform existing distribution systems into this future vision. Within a utility culture of annual budget cycles, functional silos, and hard-to-change standards, this is a tall order.


There is at the moment no consistent definition of “a smart grid” or “the smart grid”. Different people use different definitions, and the definitions develop with time. In this paper, we will simply limit ourselves to a description, and not worry about a precise definition. The term “smart grid” refers to a way of operating the power system using communication technology, power electronic technologies, and storage technologies to balance production and consumption at all levels, i.e. from inside the customer premises all the way up to the highest voltage levels. An alternative way of defining the concept is as the set of technologies, whatever they may be, that are needed to allow new types of production and new types of consumption to be integrated into the electric power system.

The concept of the smart grid grew out of a number of technology innovations in the power industry. It is a result of new technologies applied in power systems, including renewable energy generation, distributed generation, and the latest information and communication technology. With the (technical and regulatory) developments of renewable energy generation technologies, the penetration level of wind power in particular has become very high in some parts of the system. Similar developments are expected for solar power and domestic combined heat and power. However, the increase in intermittent, non-predictable, and non-dispatchable generation places stringent requirements on power balance control, from primary control through operational planning. The traditional control and communication system needs to be improved to accommodate a high penetration of renewable energy sources.

The term “micro grid” is used to describe a customer-owned installation containing generation as well as consumption, where there is large controllability of the exchange of power between the micro grid and the rest of the grid.
Such micro grids provide the possibility of load shifting and peak shaving through demand-side management. Consumers could use the electricity from their own sources or even sell electricity to the grid during peak periods, thereby increasing energy efficiency and deferring investments in transmission and distribution networks. To perform demand response in the most efficient way, the market and system operating conditions need to be known. Smart meters / advanced metering infrastructure (AMI) and two-way communication technologies can provide consumers and operators with the information needed for decision making.

The automation system of the traditional power system is still based on the design and operation of the system as it was decades ago. The latest developments in information and communication technologies have found only very limited implementation in power system automation. One of the objectives of the smart grid is to update power system automation (including transmission, distribution, substations, individual feeders, and even individual customers) using the latest technology.

Besides technology innovations, another important reason for smart grid is to improve the services in power supply to consumers. Through AMI (also known as “smart meters”), consumers are no longer passive consumers. They can monitor their own voltage and power and manage their energy consumption for example based on the electricity prices. Feedback on consumption is also seen as an important tool for energy saving.

Balancing Production and Consumption

Any amount of production or consumption can be connected at any location in the power system provided the difference between the two remains within a certain band. The imbalance between production and consumption at a certain location is covered by the transfer capacity from the rest of the system. The situation can be more complicated in meshed systems, but this is the basic rule. Traditionally, production capacity and consumption demand have been seen as independent of each other, so the traditional grid has been designed to cope with the maximum amount of production and also with the maximum amount of consumption. This approach sets hard limits on both production and consumption. A “smart grid” that can control, or influence, both production and consumption would allow more of both to be integrated into the power system. To accomplish this goal, communication technology may be used to inform or encourage changes in production (i.e. generator units) and consumption (i.e. customers or devices). Most published studies propose some kind of market mechanism to maintain balance between production and consumption, but more direct methods are also possible, with either the network operator or an independent entity taking control.
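A toy version of the market mechanism mentioned above can be sketched as an iterative price adjustment: the price rises while demand exceeds supply and falls otherwise, until flexible production and consumption balance. The linear supply and demand curves, gains, and units are illustrative assumptions, not a real market design.

```python
# Iterative price-based balancing of production and consumption.
# All curve parameters are invented for this example.

def clearing_price(supply_slope=2.0, demand_base=100.0, demand_slope=1.5,
                   price=10.0, step=0.05, iterations=200):
    """Production = supply_slope * price; consumption = demand_base - demand_slope * price.
    Raise the price when consumption exceeds production, lower it otherwise."""
    for _ in range(iterations):
        production = supply_slope * price
        consumption = demand_base - demand_slope * price
        price += step * (consumption - production)
    return price

p = clearing_price()
# Analytic balance: 2p = 100 - 1.5p  ->  p = 100/3.5, about 28.57
print(round(p, 2))
```

The more direct alternatives the text mentions would replace the price loop with explicit set-points issued by the network operator or an independent entity.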

Different methods are available to balance consumption and production while at the same time optimizing energy efficiency, reliability and/or power quality.

  • Physical energy storage, for example in the form of batteries or pumped-storage hydro. Such storage could be owned and operated by a customer (an end-user or a generation company), owned by a customer and operated by the network operator, or owned and operated by the network operator.
  • Virtual energy storage, by shifting of energy consumption to a later or earlier moment in time. Charging of car batteries is often mentioned, but this method of virtual storage can also be used for cooling or heating loads. It is important to realize that this approach does not result in energy saving, but in more efficient use of the generation facilities and the power system transport capacity. The total energy consumption may be reduced somewhat, for example by reduced losses, reduced average temperatures with heating systems (increased with cooling), and the ability to use more efficient forms of energy, but these are minor effects and they should not be seen as the main reason for introducing the new technology.
  • Load shedding, where load is removed from the system when all other methods fail. This method is available now but is rarely used in most countries.
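As a toy illustration of the virtual-storage idea, the sketch below shifts a block of deferrable load (say, car charging) from the peak period into the valley period of an invented consumption profile. The hypothetical shift_load helper only moves energy; it does not save any:

```python
# Toy peak-shaving via load shifting. The profile values are invented
# MW-per-period figures, not data from the text.

def shift_load(profile, deferrable):
    """Move `deferrable` MW of load from the peak period into the valley
    period; energy is shifted in time, not saved."""
    profile = list(profile)
    peak = profile.index(max(profile))
    valley = profile.index(min(profile))
    profile[peak] -= deferrable
    profile[valley] += deferrable
    return profile

before = [30, 25, 20, 40, 50, 35]   # invented demand profile (MW)
after = shift_load(before, 10)
print(before, "->", after)           # peak drops from 50 to 40 MW
```

Note that the totals of `before` and `after` are equal, matching the point above that this approach does not by itself save energy.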

Accepting the occasional small amount of load shedding may, in some cases, save large investments in the power system. (In some developing countries, uncontrolled and inadvertent load shedding often occurs automatically during grid or generator overload, but this is a poor example of load shedding, and hopefully only a temporary situation.) Under-frequency load shedding, as used in almost all systems, can be seen as an extreme case of reserve capacity in the form of load shedding. This is not the kind of application that is normally considered in the discussion on smart grids.

Curtailment of production: for renewable sources like sun and wind, the primary energy is usually transformed into electricity whenever it is available. But if generation exceeds consumption, renewable sources may be turned off, or curtailed. The term “spilled wind” is sometimes used to express this concept.

Shifting of production: for sources like natural gas (for combined heat and power) or hydro power, the primary energy source can be temporarily stored and then used at a later time. Not using the primary energy source now makes it available later.

Power Quality

In the ongoing discussions about smart grids, power quality is an important aspect and should not be neglected. Adequate power quality guarantees the necessary compatibility between all equipment connected to the grid. It is therefore an important issue for the successful and efficient operation of existing as well as future grids. However, power quality issues should not form an unnecessary barrier against the development of smart grids or the introduction of renewable sources of energy. The “smart” properties of future grids should rather be seen as a challenge for new approaches to efficient power quality management. In particular, advanced communication technologies can establish new ways of selective power quality management.

Power quality covers two groups of disturbances: variations and events. While variations are continuously measured and evaluated, events generally occur unpredictably and require a trigger in order to be measured. Important variations are slow voltage changes, harmonics, flicker and unbalance. Important events are rapid voltage changes, dips, swells and interruptions.
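The variation/event distinction can be illustrated with a minimal dip detector: the rms voltage is evaluated continuously, and an event is only recorded when a trigger threshold is crossed. The 90%-of-nominal threshold is a common dip trigger; the per-cycle rms samples below are invented:

```python
# Minimal sketch of event triggering: record a voltage dip whenever the
# rms voltage stays below 90% of nominal. Sample values are made up.

NOMINAL = 230.0
DIP_THRESHOLD = 0.9 * NOMINAL   # common dip trigger level

def detect_dips(rms_samples):
    """Return (start, end) index pairs where the rms stays below threshold."""
    events, start = [], None
    for i, v in enumerate(rms_samples):
        if v < DIP_THRESHOLD and start is None:
            start = i                       # trigger: event begins
        elif v >= DIP_THRESHOLD and start is not None:
            events.append((start, i))       # recovery: event ends
            start = None
    if start is not None:
        events.append((start, len(rms_samples)))
    return events

samples = [230, 229, 150, 140, 160, 228, 230, 231]
print(detect_dips(samples))   # one dip spanning sample indices 2..4
```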

The actual power quality (i.e. the disturbance levels) results from the interaction between the network and the generating and consuming equipment connected to it.

All three areas are expected to see significant changes in the future. This means that power quality issues will also change with the further development of future grids. The following comments give some examples of possible future developments in power quality.

Generating Equipment

The penetration of micro generation (typically defined as generation with a rated current of less than 16 A per phase) in low-voltage networks is expected to increase continuously. In domestic installations this will mainly be single-phase equipment based on self-commutating inverters with switching frequencies in the range of several kHz. Emissions in the range of low-order harmonics can usually be neglected. The emissions shift into the range of higher frequencies, possibly between 2 and 9 kHz, where a serious discussion is needed on the choice of appropriate limits.

Furthermore, micro-generation equipment will often be connected single-phase. This could increase the negative-sequence and zero-sequence voltages in the low-voltage grid. In weak distribution networks, existing limits could be exceeded rather quickly. Reconsidering the limits for negative-sequence voltages and introducing limits for zero-sequence voltages may prove necessary.

Consumer Equipment

The introduction of new and more efficient technologies is the main driver for changes in consumer equipment. One widely discussed example is the change from incandescent lamps to energy-saving lamps. Compact fluorescent lamps are at the moment the main replacement for incandescent lamps, but they are probably only an intermediate step before LED technology becomes widely accepted. Seen from the network, each of the new lamp technologies replaces a resistive load by a rectifier load. The fundamental current is reduced significantly whereas the harmonic currents are increased. High penetration together with a high coincidence of operation may lead to an increase of low-order harmonics. Several network operators fear an increase of the fifth harmonic voltage in particular above the compatibility levels. Discussion is ongoing in IEC working groups about the need for additional emission requirements for new types of low-wattage lighting. The same would hold for other improved (energy-efficient drives) or new (photovoltaics, battery chargers for electric and hybrid cars) equipment.
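The effect described above can be made concrete with a total-harmonic-distortion (THD) calculation. The current spectra below are invented, merely typical-looking values, not measurements from the text:

```python
# Illustration of why replacing a resistive load by a rectifier load raises
# distortion: THD computed from harmonic current magnitudes (amps).
# The spectra are invented, typical-looking values.
import math

def thd(fundamental, harmonics):
    """THD = sqrt(sum of squared harmonic magnitudes) / fundamental."""
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental

incandescent = thd(0.26, [0.001, 0.001])   # near-resistive load: tiny harmonics
cfl = thd(0.08, [0.06, 0.04, 0.02])        # rectifier-type lamp: rich harmonics
print(f"incandescent THD {incandescent:.1%}, CFL THD {cfl:.1%}")
```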

As mentioned before, such limits should however not result in unnecessary barriers against the introduction of new equipment. Alternative paths, like an increase of the compatibility levels for some higher harmonics, should at least be considered.

Distribution Network

The short-circuit power is an important factor in power quality management. For constant emission, a higher short-circuit power results in a better voltage quality. Today the short-circuit power is mainly determined by the upstream network. In the IEC electromagnetic-compatibility standards, a reference impedance is used as the link between compatibility levels (voltages) and emission limits (currents). In future grids with a high penetration of generation, significantly differing supply scenarios may be possible, from supply by a strong upstream network to islanded (self-balanced) operation. This may lead to a significantly higher variability in short-circuit power than today. Thus the approach based on fixed reference impedances may be inadequate, or the use of high-emitting loads may only be acceptable for certain operational states of the network, or only in conjunction with power quality conditioners (owned by a customer, by the network operator, or by a third party).

Due to the continuous decrease of resistive loads providing damping, stability issues may become important for low-voltage networks too. In conjunction with increasing capacitive load (the EMC filters of electronic equipment), resonance points with decreasing resonant frequencies as well as lower damping can appear.
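A rough sketch of the link between short-circuit power and voltage quality: for a given emission current I_h, the harmonic voltage is approximately V_h = I_h · Z_h, and the fundamental network impedance scales inversely with short-circuit power (Z ≈ U²/S_sc). Treating the harmonic impedance as h times the fundamental impedance is a crude, purely inductive assumption, and all numbers are illustrative:

```python
# Sketch of why short-circuit power matters: the same emission current
# produces a larger harmonic voltage on a weaker (islanded) network.
# Assumptions: Z1 = U**2 / S_sc, harmonic impedance = h * Z1 (crude,
# purely inductive scaling). All numbers are invented.

def harmonic_voltage_pct(i_h_amps, h, u_volts, s_sc_va):
    z1 = u_volts ** 2 / s_sc_va       # fundamental network impedance (ohm)
    v_h = i_h_amps * h * z1           # V_h = I_h * Z_h with Z_h = h * Z1
    return 100.0 * v_h / u_volts

strong = harmonic_voltage_pct(5.0, 5, 400.0, 10e6)    # strong upstream grid
islanded = harmonic_voltage_pct(5.0, 5, 400.0, 1e6)   # weak / islanded case
print(f"5th harmonic voltage: {strong:.2f}% vs {islanded:.2f}%")
```

The tenfold reduction in short-circuit power translates directly into a tenfold increase of the harmonic voltage for the same emission, which is why a fixed reference impedance may not fit all operational states.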

Power-Quality Monitoring

Growing service quality expectations and reduced possibilities for grid reinforcement make advanced distribution automation (ADA) an increasingly necessary development for network operators and the next large step in the evolution of power systems towards smart grids. The management of the distribution system is mainly based on the information collected on the power flows by an integrated monitoring system. This enables real-time monitoring of grid conditions for the power system operators. It also enables automatic reconfiguration of the network to optimize the power delivery efficiency and to reduce the extent and duration of interruptions. The basic part of the monitoring system infrastructure consists of sensors, transducers, intelligent electronic devices (IEDs) and (revenue) meters collecting information throughout the distribution system.

A number of network operators have already proposed that the smart grid of the future should include:

  • Network monitoring to improve reliability
  • Equipment monitoring to improve maintenance
  • Product (power) monitoring to improve PQ

In order to achieve these goals, the actual distribution system infrastructure (especially meters and remotely controlled IEDs) should be used to gather as much information as possible related to network, equipment and product (i.e. power quality and reliability) to improve the distribution system overall performance.

Among the most important ADA operating systems that a smart grid will include are:

  • Volt and var control (VVC)
  • Fault location (FL)
  • Network reconfiguration or self-healing

Network operators with an ambitious energy efficiency program have focused on two targets:

  • Capacitor banks installation
  • Voltage control

There is also another important goal: to reduce the duration of interruptions. To address these challenges, pilot projects are being conducted on conservation voltage reduction and on fault location based on power-quality-related measurements provided by IEDs and revenue meters.

The VVC system requires permanent monitoring of the voltage magnitude (averaged over 1 to 5 min) at the end of the distribution feeder and the installation of switched capacitor banks. Besides that, the monitoring allows the detection of power quality disturbances such as long-duration undervoltages and overvoltages, and voltage and current unbalance.

Basically, the voltage regulation system at the substation is replaced with an intelligent system that uses network measurements to maintain the voltage magnitude for all customers within the acceptable upper and lower limits. The VVC system also analyzes the reactive-power requirements of the network and orders the switching of capacitor banks when required. An important goal is to prevent potential power quality problems due to the switching operations of capacitor banks. Another goal is to evaluate the joint impact of the VVC system and voltage dips occurring on the grid.
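The control logic sketched above can be illustrated as follows. The per-unit limits, the readings and the single-bank decision rule are all simplifying assumptions, not the behavior of an actual VVC system:

```python
# Hedged sketch of one VVC control cycle: feeder-end voltage magnitudes
# (averaged over a few minutes) are checked against limits and a capacitor
# bank is switched when reactive support is needed. All values are invented.

LOW, HIGH = 0.95, 1.05   # assumed per-unit voltage limits

def vvc_action(avg_feeder_end_pu, cap_bank_on):
    """Return the switching order for one control cycle."""
    if avg_feeder_end_pu < LOW and not cap_bank_on:
        return "switch capacitor bank ON"
    if avg_feeder_end_pu > HIGH and cap_bank_on:
        return "switch capacitor bank OFF"
    return "no action"

readings_pu = [0.958, 0.947, 0.942]       # invented 1..5-min averages
avg = sum(readings_pu) / len(readings_pu)
print(vvc_action(avg, cap_bank_on=False))  # low average triggers the bank
```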

The results of the study indicate that the impact can be quantified by two effects:

  • An increasing number of shallow voltage dips is expected. A voltage reduction of 2 to 4% is obtained due to the VVC system. Added to this is the voltage drop due to the fault: drops of 6 to 15% (not counted as dips) become drops of 8 to 19% (which are counted as dips).
  • Equipment malfunctioning or tripping: the joint contribution of the VVC system and the disturbance brings the residual voltage below a critical threshold, around 75% of the nominal voltage for many devices.

Fault location is based either on a voltage-drop fault-location technique that uses waveforms from distributed power quality measurements along the feeder, or on a fault-current technique based on the measurement of the fault current at the substation. Reported results indicate that the average error in locating the fault with the first technique was less than 2% of the average main feeder length. An accurate fault-location technique results in a significant reduction in the duration of (especially) the longer interruptions.
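As a hedged illustration of the substation fault-current technique, a single-phase, bolted-fault simplification estimates the distance to the fault from the measured fault-loop impedance and a known per-km line impedance. All values are invented:

```python
# Impedance-based fault-location sketch: distance = (V / I) / z_per_km,
# i.e. the measured fault-loop impedance divided by the line impedance per
# km. Single-phase bolted-fault simplification; every number is assumed.

def fault_distance_km(v_measured, i_fault, z_per_km):
    """Estimate distance from substation to fault along the feeder."""
    return (v_measured / i_fault) / z_per_km

# Assumed: 6.35 kV phase voltage, 2 kA fault current, 0.35 ohm/km line:
d = fault_distance_km(v_measured=6350.0, i_fault=2000.0, z_per_km=0.35)
print(f"estimated fault distance: {d:.2f} km")
```

A real algorithm must also handle fault resistance, load current and multiple laterals, which is why the text reports an average error rather than an exact result.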

The information collected by the fault-location system can also be used for calculating dip-related statistics and helps to better understand the grid behavior. The third application, network reconfiguration or self-healing, is based either on local intelligence (belonging to major distribution equipment controllers) or on decisions taken at the power system control center, which remotely controls and operates the equipment used for network reconfiguration (reclosers and switches).

The impact of these applications on the distribution network and its customers is permanently evaluated. The infrastructure belonging to ADA systems can be shared with a power-quality monitoring system capable of real-time monitoring. Depending on the type of ADA application or system, the monitoring can be done either at low-voltage or at medium-voltage level. In the first case the monitoring devices may belong to an Advanced Metering Infrastructure (AMI); in the second case they may belong to the major distribution equipment itself.

The smart grid will allow continuous power-quality monitoring, which will not directly improve voltage quality but will detect quality problems and help to mitigate them.

Different Power-Quality Issues

Emission by New Devices

When smart grids are introduced, we expect growth both in production at lower voltage levels (distributed generation) and in new types of consumption (for example, charging stations for electric vehicles, expanded high-speed railways, etc.). Some of these new types of consumption will emit power-quality disturbances, for example harmonic emission. Preliminary studies have shown that harmonic emission due to distributed generation is rather limited. Most existing end-user equipment (computers, televisions, lamps, etc.) emits almost exclusively at the lower odd integer harmonics (3, 5, 7, 9, etc.), but there are indications that modern devices, including certain types of distributed generators, emit a broadband spectrum. Using the standard methods of grouping into harmonic and interharmonic groups and subgroups below 2 kHz will result in high levels for even harmonics and interharmonics. For frequencies above 2 kHz, high levels have been observed for the 200-Hz groups. An example is shown in Fig. 5.3: the spectrum of the emission by a group of three full-power-converter wind turbines, where 1 A is about 1% of the rated current.


The emission is low over the whole spectrum, being at most 5.5% of the nominal current. The combination of a number of discrete components at the characteristic harmonics (5 and 7, 11 and 13, 17 and 19, etc.) together with a broadband spectrum over a wide frequency range is also emitted by other equipment like energy-efficient drives, micro generators, and photovoltaic installations. The levels are not always as low as in the example shown here. The existing compatibility levels are very low for some frequencies, as low as 5.2%. Harmonic resonances are more common at these higher frequencies, so any reference impedance linking emission limits to compatibility levels should be set rather high. Keeping strictly to existing compatibility limits and existing methods of setting emission limits could place excessive demands on new equipment. The measurement of these low levels of harmonics at higher frequencies will be more difficult than in the existing situation with higher levels and lower frequencies. This might require the development of new measurement techniques, including a closer look at the frequency response of existing instrument transformers. The presence of emission at higher frequencies than before also calls for better insight into the source impedance at these frequencies: at the point of connection with the grid as well as at the terminals of the emitting equipment.
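The grouping arithmetic mentioned above can be sketched as follows: narrow spectral bins are rms-combined into wider bands, such as the 200-Hz groups used above 2 kHz. The bin magnitudes are invented and only serve to show the computation:

```python
# Sketch of rms-combining spectral bins into wider frequency groups, as is
# done when evaluating emission above 2 kHz. Bin values (amps) are invented.
import math

def band_rms(bins_hz_to_amps, f_start, f_stop):
    """rms-combine all bins with f_start <= f < f_stop."""
    return math.sqrt(sum(a * a for f, a in bins_hz_to_amps.items()
                         if f_start <= f < f_stop))

bins = {2100: 0.03, 2150: 0.04, 2250: 0.12, 2350: 0.05}   # assumed bins (A)
g1 = band_rms(bins, 2000, 2200)   # first 200 Hz group
g2 = band_rms(bins, 2200, 2400)   # second 200 Hz group
print(f"group 2.0-2.2 kHz: {g1:.3f} A, group 2.2-2.4 kHz: {g2:.3f} A")
```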

Interference between Devices and Power-Line-Communication

Smart grids will depend to a large extent on the ability to communicate between devices, customers, distributed generators, and the grid operator. Many types of communication channels are possible. Power-line communication might seem an obvious choice due to its ready availability, but choosing power-line communication could introduce new disturbances into the power system, resulting in a further reduction in power quality. Depending on the frequency chosen for power-line communication, it may also result in radiated disturbances, possibly interfering with radio broadcasting and communication. It is also true that modern devices can interfere with power-line communication, either by creating a high disturbance level at the frequency chosen for power-line communication or by creating a low-impedance path, effectively shorting out the power-line communication signal. The latter seems to be the primary challenge to power-line communication today. So far, there have been no reports of widespread interference with sensitive equipment caused by power-line communication, but its increased use calls for a detailed study.

Allocation of Emission Limits

When connecting a new customer to the power system, an assessment is typically made of the amount of emission that would be acceptable from this customer without resulting in unacceptable levels of voltage disturbance for other customers. For each new customer a so-called emission limit is allocated. The total amount of acceptable voltage distortion is divided over all existing and future customers. This assumes, however, that it is known how many customers will be connected in the future. With smart grids, the amount of consumption has no fixed limit, provided it is matched by a similar growth in production. This continued growth in both production and consumption could lead to the harmonic voltage distortion becoming unacceptably high. Also, the number of switching actions will keep increasing and might reach unacceptable values. One may say that production and consumption are in balance at the power-system frequency, but not at harmonic frequencies. Another way of looking at this is that the system strength is no longer determined by the maximum amount of consumption and/or production connected downstream, but by the total amount of harmonic emission coming from downstream equipment. This will require a different way of planning the distribution network.

Improving Voltage Quality

One aim of smart grids is to improve the performance of the power system (or to prevent deterioration) without the need for large investments in lines, cables, transformers, etc. From a customer viewpoint, the improvements can be in terms of reliability, voltage quality or price. All other improvements (e.g. in loading of cables or transformers, protection coordination, operational security, efficiency) are secondary to the customer.

Improvements in reliability and price are discussed in detail in several other papers and are beyond the scope of this paper. The only voltage-quality improvement expected to be made by smart grids in the near future is a reduction in longer-term voltage-magnitude variations. In theory, both undervoltages and overvoltages might be mitigated by keeping the correct local balance between production and consumption. For rural networks, overvoltages and undervoltages are the main limitation on increasing consumption and production; these networks should therefore be addressed first. The same balance between “production” and “consumption” can in theory also be used for the control of harmonic voltages. When the harmonic voltage becomes too large, either an emitting source could be turned off, a harmonic filter could be turned on, or a device could be turned on that emits in opposite phase (the difference between these solutions is actually not always easy to see). Smart grid communication and control techniques, similar to those used to balance consumption and production (including market rules), could be set up to reduce harmonic emissions. This could be a solution for the growing harmonic emission that comes with growing amounts of production and consumption. Micro grids with islanding capability can, in theory, mitigate voltage dips by switching very quickly from grid-connected operation to island operation. The presence of generator units close to the loads allows the use of these units to maintain the voltage during a fault in the grid.

Immunity of Devices

Simultaneous tripping of many distributed generators due to a voltage-quality disturbance (like a voltage dip) is the subject of active discussion. This problem is far from solved. As a smart grid attempts to maintain a balance between production and consumption, mass tripping of consumption could have similar adverse consequences. This should be further investigated.

Weakening of the Transmission Grid

The increased use of distributed generation and of large wind parks will result in a reduction of the amount of conventional generation connected to the transmission system. The fault level will consequently be reduced, and power-quality disturbances will spread further. This will worsen voltage dips, fast voltage fluctuations (flicker) and harmonics. The severity of this has been studied for voltage dips. The conclusion from the study is that even with 25% wind power there is no significant increase in the number of voltage dips due to faults in the transmission system.


The new technology associated with smart grids offers the opportunity to improve the quality and reliability as experienced by the customers. It will, however, also increase disturbance levels in several cases and thereby introduce a number of new challenges. These new challenges should definitely not be used as arguments against the development of smart grids; rather, they should draw attention to the importance of power quality for the successful and reliable operation of smart grids. New developments need new approaches and perspectives from all parties involved (network operators, equipment manufacturers, customers, regulators, standardization bodies, and others).

Towards a Smarter Grid

Existing mains power grids are increasingly being modernized worldwide by the introduction of digital systems. Those systems promise better electricity utilization planning for Electricity Service Providers (PROVIDERs) on the one hand and lower prices for consumers on the other hand. The enabling technology behind this so-called Smart Grid is primarily an Advanced Metering Infrastructure (AMI). The next step towards “smart homes” is the incorporation of this technology in conjunction with Building Automation Systems (BASs) that make use of the provided information in a demand-response fashion.

In the past, every household had an electromechanical analog meter that displayed the electricity consumption. The actual values were typically reported to the PROVIDER once a year in order for the PROVIDER to charge the customers. The manipulation of meters, and thus electricity theft, was prevented by tamper-evident seals and locks. The widespread availability of digital embedded devices and low-cost communication has made the deployment of smart meters possible. Berg Insight estimates that by 2015, 362.5 million of those devices will be installed worldwide. Policy is also driving the deployment of smart meters. In Germany, measuring point providers are obligated by law (Section 21b, Subsection 3a of the Energiewirtschaftsgesetz (EnWG)) to provide smart meters in newly built private houses and in private homes renovated since January 2010. PROVIDERs, in turn, are obligated to provide customers with a tariff that stimulates energy conservation or the control of energy consumption by the end of 2010. PROVIDERs hope to benefit from cost reductions, as they no longer have to send technicians to the households to read the meters but let the smart meters report their current consumption values periodically (automated meter reading). Knowing the customers' current electricity consumption can also help the PROVIDERs to better plan their electricity load distribution. On the one hand, there are certain peak times when many households demand more electricity; on the other hand, PROVIDERs are facing supply fluctuations. During those peak times, PROVIDERs mostly have to resort to non-renewable energy resources. As collapses of electricity infrastructures (e.g. the U.S. blackouts of 2003) have shown, PROVIDERs have to plan their distribution carefully to sustain a high availability and reliability of electricity provisioning. Smart meters provide another benefit in the context of demand-response electricity utilization.
PROVIDERs can provide their customers with up-to-date prices and thus influence the customers' electricity consumption behavior, as customers are expected to use electricity when prices are low. BASs can make good use of smart grids and automate the electricity utilization of households via smart appliances.
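The demand-response idea can be sketched as a small scheduling problem: given the provider's (hypothetical) day-ahead hourly prices, a smart appliance picks the cheapest contiguous window for its run. Prices below are invented cents/kWh values; a real tariff interface would differ:

```python
# Toy demand-response scheduler: find the cheapest contiguous window in a
# price signal. The hourly prices are invented example values.

def cheapest_window(prices, hours_needed):
    """Return the start hour of the cheapest contiguous run of given length."""
    costs = [sum(prices[h:h + hours_needed])
             for h in range(len(prices) - hours_needed + 1)]
    return costs.index(min(costs))

prices = [30, 28, 22, 18, 17, 19, 26, 31]   # assumed hourly prices (ct/kWh)
start = cheapest_window(prices, hours_needed=3)
print(f"run the appliance from hour {start}")   # hours 3-5 are cheapest
```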

However, smart meters involve some severe security and privacy challenges. From a security point of view, electricity theft is one of the major concerns of the PROVIDERs. As smart meters are basically commodity embedded devices that use “standard” communication technology to report consumption values to the PROVIDERs, they are vulnerable to a wide range of attacks. Our focus in this context is on preserving the integrity of the devices. Furthermore, the authenticity and confidentiality of data must be preserved. From a privacy point of view, on the other hand, we focus on how customers can anonymously report their up-to-date electricity consumption to their PROVIDER. We require that PROVIDERs not be able to gain information about their customers' habits based on their electricity utilization patterns. The risk that the electricity consumption profile can be used to draw conclusions about a customer's habits was pointed out in a report to the Colorado Public Utilities Commission and by LISOVICH ET AL. as well. According to a survey report of the ULD (Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein), the data collected by smart meters are personal and allow for a disclosure of personal and factual living conditions of users.

Electricity Market Architecture

In 1996, the foundation for a liberalized European electricity market was laid by Directive 96/92/EC of the European Union. The goal was to break down the monopoly positions of the PROVIDERs and let customers choose their PROVIDER freely instead. This led to a separation between grid operators and PROVIDERs. The grid operator, which cannot be chosen by the customer, operates the grid within a regional area. For the provisioning of its infrastructure, the grid operator is paid by the PROVIDER. In order to prevent unequal charges, the European Union requested the establishment of national regulatory authorities that regulate those charges; in Germany, this is the Bundesnetzagentur. Moreover, Section 21b of the EnWG also allows the customer to choose a third-party measuring point provider. However, this is not common today, and thus, in the remainder of this paper, we assume that the grid operator is also the measuring point provider, as was the case before the liberalization of the electricity market.

The grid infrastructure provided by the grid operator consists in particular of the site current transformers (CTs) and the switchyards. A site CT typically supplies some tens or hundreds of households. The site CTs are connected to a switchyard; a switchyard serves dozens of site CTs, i.e. it is typically responsible for a city. Furthermore, the switchyards are connected to the high-voltage switchboards.

Trusted Computing

In this paper, we present a concept for smart metering that takes both security and privacy into account. As digital systems are vulnerable to software attacks that cannot be prevented by hardware seals or solely by means of software, our concept is based on Trusted Computing. The grid operator as well as the PROVIDER can build their trust on a Trusted Platform Module (TPM) as a tamper-resistant device. The grid operator needs assurance that the code executed on the smart meter is authentic and has not been tampered with by a customer or by any other party. This can be achieved by storing a cryptographic hash value of the executed software within one of the TPM's so-called platform configuration registers (PCRs). When the smart meter is challenged by an external verifier to attest its integrity, the hash value is signed within the TPM and sent to the verifier. This process is called remote attestation and is explained in more detail later.
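A heavily simplified model of the PCR-extend and quote flow may help. A real TPM signs the quote with an attestation identity key (RSA or ECC); an HMAC over a shared demo key stands in for that signature here, and all names and values are illustrative:

```python
# Simplified model of TPM remote attestation. HMAC with a shared demo key
# stands in for the TPM's asymmetric attestation signature; every name and
# value below is illustrative.
import hashlib
import hmac

def pcr_extend(pcr, measurement):
    """new_PCR = H(old_PCR || H(measurement)) -- the TPM extend operation."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def quote(pcr, nonce, key):
    """Sign the PCR value together with the verifier's nonce (anti-replay)."""
    return hmac.new(key, pcr + nonce, hashlib.sha256).digest()

ATTESTATION_KEY = b"demo-key"        # stands in for the TPM's signing key
pcr = b"\x00" * 32                   # PCRs start at all zeros
pcr = pcr_extend(pcr, b"meter-firmware-v1")   # boot measures the firmware

# Verifier side: recompute the expected PCR from the known-good firmware
# image and compare the signed quotes.
expected = pcr_extend(b"\x00" * 32, b"meter-firmware-v1")
nonce = b"fresh-nonce"
assert hmac.compare_digest(quote(pcr, nonce, ATTESTATION_KEY),
                           quote(expected, nonce, ATTESTATION_KEY))
print("attestation verified")
```

Any change to the firmware yields a different PCR value, so the quote no longer matches the verifier's expectation.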

Related Work

There are several research papers that focus on security in the context of smart metering. Most authors assume that the customers are the attackers that want to steal electricity.

MCLAUGHLIN, PODKUIKO, AND MCDANIEL present a thorough security analysis of the AMI. They point out that smart meters are vulnerable to (software) manipulations and that the network links constitute particular points of attack.

LEMAY, GROSS, GUNTER, AND GARG were the first to propose employing TPMs within smart meters. The main purpose of the TPM in their concept is the authentic report to the PROVIDER that the software executed on a smart meter has not been tampered with. The PROVIDER needs assurance of this fact, as the smart meter's software is responsible for the calculation of the customer's monthly bill. However, the authors point out that TPMs are not ideally suited for their purpose, as those devices' power consumption is too high in idle mode, under the assumption that the TPM is used for remote attestation once a month. Thus, they came up with another approach towards building trust in embedded devices.

In the context of privacy preservation, BOHLI, SORGE, AND UGUS were the first to present a solution where the PROVIDERs are not aware of up-to-date information about the electricity consumption of individual customers but only of groups of customers, thus preserving the individuals' privacy. The main difference from their paper is that we neither require a trusted third party acting as an aggregation proxy (one that is involved in each meter reading and has to keep track of those data, together with the identity, to be able to bill the utilization at the end of a year), nor do we add random values to the meter readings.
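A toy version of such group-based aggregation: each meter adds a random mask, and the masks are constructed to cancel across the group, so the PROVIDER learns only the group total. This is purely illustrative and omits how real schemes distribute the masks securely among the meters:

```python
# Toy privacy-preserving aggregation: masks sum to zero across the group,
# so individual readings are hidden but the group total is exact.
# Readings and masks are invented; real schemes exchange masks securely.
import random

def masked_readings(readings, rng):
    """Each meter adds a mask; masks are constructed to sum to zero."""
    masks = [rng.randint(1, 1000) for _ in readings[:-1]]
    masks.append(-sum(masks))           # last mask cancels all the others
    return [r + m for r, m in zip(readings, masks)]

rng = random.Random(42)
readings = [12, 7, 30, 5]               # invented kWh per household
reported = masked_readings(readings, rng)
assert sum(reported) == sum(readings)   # provider sees the correct total...
assert reported != readings             # ...but not the individual values
print(f"group total: {sum(reported)} kWh")
```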

Another privacy-preserving approach has been suggested by GARCIA AND JACOBS. They suggest using homomorphic encryption to prevent the PROVIDER from gaining consumption data of individual households.

Requirements Concerning the Smart Grid

In this section we work out the security and privacy requirements that have to be met by a smart grid. We first cover the requirements from a technical point of view, then point out non-functional requirements, and finally the security and privacy requirements.

Functional Requirements

Smart meters constitute the main components in the smart grid. Beyond data collection and data processing, we primarily focus on the communication of smart meters with different parties, which are particularly the PROVIDER, the grid operator, and the customer in this examination.

Smart Meter – PROVIDER

Periodically reporting the electricity consumption data to the PROVIDER is a major functional requirement for smart meters. In state-of-the-art implementations, the typical interval is a quarter of an hour. Those data allow the PROVIDER to better plan the electricity load balancing. Furthermore, the PROVIDER needs data from the smart meter to bill the customer for the electricity provisioning. Note that billing on a monthly basis (rather than per annum) is preferred by customers and is also supported by law.

Another important requirement is up-to-date price information provided by the PROVIDER. In conjunction with a smart appliance, this enables the customer to save money, as electricity may be primarily consumed when the price is low. The policies for the smart appliance have to be specified by the customer, e.g. via a web interface. On the other hand, PROVIDERs could demand that the approach is not customer-centric but rather controlled by them. Therefore, the PROVIDER needs a feedback channel towards the customers’ households to be able to put devices that draw a lot of energy, e.g. air conditioning systems, out of operation. The customers’ smart appliances have to incorporate the ability to receive and execute such commands provided by the PROVIDER.
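A customer-specified policy could, for instance, defer a flexible load into the cheapest announced price slots. The following is a minimal sketch under assumed quarter-hour prices; the function name and price values are hypothetical.

```python
from typing import List

def cheapest_slots(prices: List[float], slots_needed: int) -> List[int]:
    """Pick the indices of the cheapest quarter-hour slots for a
    deferrable load (e.g. a washing machine), in chronological order."""
    by_price = sorted(range(len(prices)), key=lambda i: prices[i])
    return sorted(by_price[:slots_needed])

# Five announced quarter-hour prices (EUR/kWh); the load needs two slots.
prices = [0.30, 0.10, 0.05, 0.25, 0.08]
assert cheapest_slots(prices, 2) == [2, 4]
```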

Smart Meter – Grid Operator

The grid operator needs the consumption data from customers to charge the PROVIDER for the provision of its infrastructure. Thus, the smart meter has to provide the grid operator with those data.

Moreover, the grid operator must have the possibility to remotely update the smart meters’ software. For example, if a bug in the software is found, a quick update of the software is needed in order to prevent the exploitation of the security vulnerability.

Smart Meter – Customer

The smart meter should also provide an interface that allows the customer to get an overview of the current electricity consumption. Matching these data with the currently running devices allows the customer to keep track of how much electricity is drawn by each device. The resolution of the utilization data should be in the range of a second to enable a meaningful live analysis.

Such a solution already exists: Google PowerMeter. Customers have to send their consumption data to Google and are presented with a graphical visualization of the data that allows them to keep track of the current electricity utilization of their devices. We, however, do not want to rely on a third party to provide that service.

Non-Functional Requirements

Smart meters have to be permanently available and reliable, as the PROVIDER depends on the up-to-date electricity utilization data and on the correct computation of the monthly bill. The smart grid is expected to keep growing very fast and thus it should be scalable as well. In particular, authorities that are needed, e.g. the trusted third party (TTP), must not constitute a bottleneck.

Security and Privacy Requirements

Security and privacy requirements can be split up according to the different parties in the smart grid. The PROVIDER requires the current consumption data for the utilization planning, as well as the monthly bill, to be authentic. Those data originate from the customer, who needs to stay anonymous at the same time. Moreover, the grid operator takes a particular position in terms of trustworthiness.

Security From the PROVIDER’s Point of view

For the PROVIDER, the most important protection goal is the authenticity of the monthly bill that is computed by the customer. However, the customer is not trustworthy from the PROVIDER’s point of view: the customer is assumed to manipulate the meter readings or the computed bill. Analog meters could be attacked by mechanical manipulations, e.g. through meter inversion. Smart meters do not allow such attacks, but they are vulnerable to more problematic ones, i.e. software manipulations and the modification of consumption data by means of network attacks. An attacker who is able to reprogram the software that is executed on the smart meter, e.g. by employing the remote update mechanism, can modify the code that is used for the calculation of the monthly bill. The challenge for the PROVIDER is that there is no inherent reason to trust the computation performed by the smart meter, as this commodity device does not constitute a trustworthy device; after all, it is not even certain whether the computation is performed on the smart meter or on a standard PC. Thus, software integrity is a major requirement on the part of the PROVIDER. Furthermore, the calculation of the bill within the smart meter requires the price information provided by the PROVIDER to be authentic and not to originate from the customer, who could lower the price this way.

Another threat that targets the smart grid is terrorism. Smart appliances that accept unauthenticated price information could be exploited to create an excess demand for electricity by providing a minimal price to a large number of customers, thus causing a breakdown of the grid. Non-authenticated commands sent to smart meters constitute an even more severe problem in the context of keeping the grid available.

Privacy From the Customer’s Point of view

From a customer’s point of view, the protection goal anonymity is the most important one. We require that no party — not even the PROVIDER — may be able to link consumption data to any individual customers. Moreover, we require the PROVIDER not to be able to create an electricity utilization profile under a pseudonym. This would allow for a linking of a pseudonym to a customer at the end of a month when the PROVIDER receives the bill, which bears identity information of the customer. For a PROVIDER to be able to better plan the needed electricity, it is crucial — and sufficient — to have utilization statistics about a coarse-grained group of households, e.g. within a certain regional area.

As we have pointed out, we require the user to be able to graphically visualize the current power consumption of devices in operation. The state of the art service hosted by Google that provides this functionality entails the potential to violate the privacy as the customer cannot know what Google uses those data for. For example, in conjunction with a Google Calendar used by the customer, Google could map electricity consumption data to the information stored in the calendar, allowing for a better derivation of customers’ current activities. Thus, we require the processing of consumption data to be done within the customer’s premises.

Trustworthy Grid Operator

Customers and the PROVIDER both have to trust the grid operator. The customer cannot appear anonymously towards the grid operator but rather appears under a pseudonym: the grid operator should not know the full identity but only the household of the customer. The customer needs to trust the grid operator to withhold the customer’s pseudonym when forwarding consumption data towards the PROVIDER. At the same time, the PROVIDER needs to trust the grid operator as well, namely that the grid operator has checked the authenticity of the data received from customers. Moreover, the PROVIDER has to count on software integrity checks performed by the grid operator in order to know that the bill computation has been done correctly by the customer.


In this section we present our concept of a smart grid in which the primary goal is the preservation of the user’s privacy. We propose a smart grid architecture and cover the initialization phase that is needed to set up a smart meter according to our concept. We then present privacy-preserving data provisioning and an approach to electricity consumption control, and discuss the integrity attestation of the smart meter as well as the bill computation.

Smart Grid Architecture

Each household is equipped with a smart metering device whose purpose is the collection of electricity consumption data for the provisioning of up-to-date data towards the PROVIDER in short-term intervals, e.g. every quarter of an hour, and the local computation of the monthly bill within the device. By employing trusted platform modules (TPMs) within the smart meters, we can make use of software integrity attestation on the one hand, and allow for a unique identification as well as pseudonymisation during the provision of electricity consumption data on the other hand. A unique identification of a TPM is provided by an endorsement key (EK) certificate and pseudonymisation can be achieved by the utilization of a pseudonymous credential issued by a trusted third party (TTP).

The architecture we propose for an integration of the smart meters into the smart grid is shown in Figure 6.1. All the data from the smart meters, consumption data as well as bills, have to be sent to the PROVIDER in some way. However, a direct connection, e.g. based on the Internet Protocol (IP), would disclose address information (the IP address) to the PROVIDER and allow for identification again, in spite of the pseudonymisation applied at the application level. We propose the following network architecture. The smart meters of the households are connected to the site current transformer (CT) in a star-topology network using Power Line Communication (PLC) as a shared broadcast medium. Note that we suggest PLC mainly for reasons of practicability; DSL or WiMAX, for example, would also constitute possible network access technologies, however requiring (more expensive) equipment that might not be present, e.g. DSL lines or WiMAX base stations. The site CTs are furthermore connected to a switchyard, and the switchyard is in turn connected to the Internet backbone. Thus, the switchyards act as proxies between the households and the PROVIDER, which is also connected to the Internet backbone. Thereby, we can prevent the PROVIDER from identifying a household based on its IP address. We propose collectors, which are part of the switchyards, that forward the data from the households with their own IP address as source address; thus, the PROVIDER can relate the received data only to a certain regional area. We further propose that a TTP, which is also connected to the Internet backbone, is managed by the national electricity regulatory authority. We pointed out that the grid operator and the PROVIDER are generally independent parties, and we require the grid operator, more precisely the collector node operated by the grid operator, to be a trustworthy party.
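The collector’s role can be sketched as follows. Here an HMAC under a per-pseudonym key merely stands in for the verification of the pseudonymous credential; the key material, field names, and region tag are our own illustrative assumptions.

```python
import hashlib
import hmac
import json

# Stand-in for the pseudonymous-credential verification material.
CREDENTIAL_KEYS = {"pseudo-42": b"key-42"}

def collector_forward(message: dict, region: str):
    """Check the authenticity of a household's report, then forward only
    the consumption payload tagged with the collector's region, dropping
    the pseudonym so the PROVIDER cannot link the data to a household."""
    key = CREDENTIAL_KEYS[message["pseudonym"]]
    payload_bytes = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(key, payload_bytes, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["tag"]):
        return None  # reject inauthentic data
    return {"region": region, "payload": message["payload"]}

payload = {"interval": 0, "kwh": 0.35}
tag = hmac.new(b"key-42", json.dumps(payload, sort_keys=True).encode(),
               hashlib.sha256).hexdigest()
forwarded = collector_forward(
    {"pseudonym": "pseudo-42", "payload": payload, "tag": tag}, "city-north")
assert forwarded is not None and "pseudonym" not in forwarded
```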

Next, we cover the tasks that are executed in order to realize the requirements as stated. The initialization is performed when a new customer takes control of a smart meter. Data Provisioning, integrity attestation, and bill computation are periodically performed tasks. All of the tasks mentioned so far are initiated by the smart meter. Electricity Consumption Control, on the other hand, is initiated by the PROVIDER and performed non-periodically.


Initialization

The smart meters are provided by the grid operator. As each smart meter is equipped with a unique EK certificate, the grid operator has to keep track of which device is supplied to which household. The grid operator also has to provide the TTP with all the valid EK certificate serial numbers in order for the TTP to issue credentials only for valid smart meters. The initialization phase is shown in.

Data Provisioning For the Grid Operator

The grid operator needs the information about how much electricity is conveyed through its infrastructure in order to be able to charge the PROVIDER for the provision. For that purpose, the smart meter sends its consumption data signed with the endorsement key to the grid operator once a year. The grid operator can only use this value to charge the PROVIDER — it is not possible to draw any conclusions about the user’s habits from this single value.
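A sketch of this yearly report, with an HMAC standing in for a signature under the endorsement key (the key and function names are illustrative assumptions):

```python
import hashlib
import hmac

EK_KEY = b"endorsement-key"  # stand-in for signing under the TPM endorsement key

def yearly_report(quarter_hour_kwh):
    """Aggregate a year's quarter-hour readings into the single total
    sent to the grid operator, authenticated under the meter identity."""
    total = round(sum(quarter_hour_kwh), 3)
    tag = hmac.new(EK_KEY, repr(total).encode(), hashlib.sha256).hexdigest()
    return total, tag

# Only this aggregate, not the individual readings, leaves the meter.
total, tag = yearly_report([0.25] * 8)
assert total == 2.0
```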

Electricity Consumption Control

As the PROVIDER knows about the consumption on a city scale, it can only tell the corresponding switchyard to broadcast a control message within its domain. For example, such a control message could prohibit any household within a city from charging electric vehicles right now. The smart meter forwards this message to the smart appliance and it is the smart appliance’s task to stop charging the vehicle if one is present and charging right now.
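The appliance-side handling of such a broadcast can be sketched as follows; the command name and the state representation are hypothetical.

```python
def handle_control(message: dict, appliance_state: dict) -> str:
    """React to a control message broadcast by the switchyard: stop EV
    charging only if a vehicle is actually charging right now."""
    if message.get("command") == "stop_ev_charging":
        if appliance_state.get("ev_charging"):
            appliance_state["ev_charging"] = False
            return "charging stopped"
    return "no action"

state = {"ev_charging": True}
assert handle_control({"command": "stop_ev_charging"}, state) == "charging stopped"
# A second broadcast finds no vehicle charging and does nothing.
assert handle_control({"command": "stop_ev_charging"}, state) == "no action"
```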

Integrity Attestation

Smart meters employ software to control the measurement and to process the measurement values. The main advantage of a software implementation, in contrast to the use of dedicated hardware, is that the functionality of the smart meter can be extended via (remote) updates without having to exchange all the devices. We have to ensure that only authentic updates from the grid operator are accepted by the smart meter. Therefore, we can implement an update mechanism that only accepts software updates that are digitally signed by the TTP. Having the updates signed by the TTP, and not by the grid operator, simplifies the certificate management within the smart meter on the one hand, and allows for an easier certification of smart metering software by an independent authority on the other hand. However, software implementations are always prone to attacks, e.g. due to programming errors. Thus, an attacker may manage to circumvent the update mechanism and thereby manipulate the software within the smart meter. MCDANIEL ET AL. call this the Billion-Dollar Bug in this context. The successful compromise of a smart meter can help customers save a lot of money or, on a grand scale, can give terrorists the opportunity to shut down whole cities by sending malicious commands to the smart meters. We rely on remote attestation, as covered earlier, to detect any manipulations of the smart meter software. The grid operator could generate the proper attestation identity key (AIK) credential and implement it within the smart meter in advance of its delivery. The AIK credential must also include the address of the household so that further investigations can be initiated in case an integrity violation is noticed.
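The update acceptance check can be sketched as follows. A keyed MAC stands in for the asymmetric TTP signature a real meter would verify; the key value is an illustrative assumption.

```python
import hashlib
import hmac

TTP_KEY = b"ttp-signing-key"  # stand-in for the TTP's verification key

def accept_update(firmware: bytes, signature: str) -> bool:
    """Accept a software update only if its tag verifies under the TTP
    key; anything else is rejected, closing the remote-update attack path."""
    expected = hmac.new(TTP_KEY, firmware, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

firmware = b"meter-firmware-v2"
good_sig = hmac.new(TTP_KEY, firmware, hashlib.sha256).hexdigest()
assert accept_update(firmware, good_sig)
assert not accept_update(b"tampered-firmware", good_sig)
```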

Bill Computation

For the bill computation to yield correct results, not only the software that performs the computation has to be authentic, but also the actual price information provided by the PROVIDER. This can be achieved by allowing only digitally signed price data for the computation. As the TPM does not provide a sufficient amount of data storage for all the price data and the consumption data, some storage facility within the smart meter, i.e. flash memory, has to be employed. It is crucial that those data are integrity-protected by using a message authentication code with a key that is protected by the TPM and only released for the logging and bill calculation application. We do not require those data to be encrypted, as the web server application running on the smart meter should be able to access those data as well, in order to present the customer live information about the electricity consumption and pricing. In order to keep the communication overhead at a reasonable level, we can assume that the PROVIDER provides the customers with price updates every quarter of an hour. At the end of a month, only a single computed result value is transmitted towards the PROVIDER. However, the customer can check the bill on a daily basis via (local) web access to the smart meter.
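The bill calculation over the MAC-protected log can be sketched as follows; the key, the record layout, and the prices are illustrative assumptions.

```python
import hashlib
import hmac
import struct

LOG_KEY = b"log-mac-key"  # released by the TPM only to logging and billing

def entry_mac(interval: int, kwh: float, price: float) -> str:
    """MAC over one quarter-hour log entry stored in flash."""
    data = struct.pack(">Idd", interval, kwh, price)
    return hmac.new(LOG_KEY, data, hashlib.sha256).hexdigest()

def monthly_bill(log) -> float:
    """Sum price * consumption over all quarter-hour entries, verifying
    each entry's MAC so that tampered flash contents are detected."""
    total = 0.0
    for interval, kwh, price, tag in log:
        if not hmac.compare_digest(entry_mac(interval, kwh, price), tag):
            raise ValueError("log entry %d failed integrity check" % interval)
        total += kwh * price
    return round(total, 2)

# Four quarter-hour entries of 0.5 kWh at 0.20 EUR/kWh -> 0.40 EUR.
log = [(i, 0.5, 0.20, entry_mac(i, 0.5, 0.20)) for i in range(4)]
assert monthly_bill(log) == 0.40
```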


With our concept presented in this paper we meet all the requirements as stated. As we have focused on the privacy of the smart grid, our most important contribution is a solution that introduces anonymity into the provisioning of up-to-date customers’ consumption data towards a PROVIDER. Thus, those data, which are crucial for the PROVIDER for a more effective utilization planning, can no longer be linked to individuals. Moreover, the PROVIDER cannot even create a profile under a pseudonym based on the periodic customers’ utilization values. At the same time, we achieve this up-to-date provisioning of data without having to increase the intervals between transmissions, as demanded by data protection specialists.

We achieve privacy protection from the PROVIDER as the PROVIDER does not receive the consumption values directly from the smart meters but rather from the grid operator. The grid operator’s switchyard acts as a data collector that, on the one hand, checks the authenticity of the data and, on the other hand, forwards the data without the signatures, authenticated with its own source address, towards the PROVIDER. As the data can only be linked to a city and the PROVIDER receives only a bill at the end of a month from each customer, the PROVIDER is not able to sum up the single data values and compare them to the monthly bills.

The grid operator, which we assume to be trustworthy, does not have the chance to create a profile under a pseudonym either. The collector node receives the data under a pseudonym directly from the smart meters, but as the data are encrypted, the grid operator does not see them. Thus, even if it receives an aggregated value of the consumption at the end of a year, it cannot draw any conclusions from this single value.


The main point of critique in the survey report of the ULD against smart metering was that smart meters collect personal information. We came up with a solution that prevents linking the consumption data collected by smart meters to either a certain individual or a certain pseudonym. As we do not make any unrealistic assumptions, we have come up with a practical solution that should be taken into account when grid operators expand the smart grid. To further emphasize the practicability of our solution, we plan to come up with a prototypical implementation of our concept in the near future.


Most of the world’s electricity system was built when primary energy was relatively inexpensive. Grid reliability was mainly ensured by having excess capacity in the system, with unidirectional electricity flow to consumers from centrally dispatched power plants. Investments in the electric system were made to meet increasing demand, not to change fundamentally the way the system works. While innovation and technology have dramatically transformed other industrial sectors, the electric system, for the most part, has continued to operate the same way for decades. This lack of investment, combined with an asset life of 40 or more years, has resulted in an inefficient and increasingly unstable system. Climate change, rising fuel costs, outdated grid infrastructure, and new power generation technologies have changed the mindset of all stakeholders:

  • Electric power causes approximately 25 percent of global greenhouse gas emissions, and utilities are rethinking what the electricity system of the future should look like. Renewable and distributed power generation will play a more prominent role in reducing greenhouse gas emissions.
  • Demand-side management promises to improve energy efficiency and reduce overall electricity consumption.
  • Real-time monitoring of grid performance will improve grid reliability and utilization, reduce blackouts, and increase financial returns on investments in the grid.

These changes on both the demand and supply side require a new, more intelligent system that can manage the increasingly complex electric grid.

Recognizing these challenges, the energy community is starting to marry information and communications technology (ICT) with electricity infrastructure. Technology enables the electric system to become smart. Near-real-time information allows utilities to manage the entire electricity system as an integrated framework, actively sensing and responding to changes in power demand, supply, costs, quality, and emissions across various locations and devices. Similarly, better information enables consumers to manage energy use to meet their needs. According to former U.S. Vice President Al Gore, “Just as a robust information economy was triggered by the introduction of the Internet, a dynamic, new, renewable energy economy can be stimulated by the development of an electranet, or Smart Grid.”

The potential environmental and economic benefits of a Smart Grid are significant. A recent Pacific Northwest National Laboratory study provided homeowners with Smart Grid technologies to monitor and adjust the energy consumption in their homes. The average household reduced its annual electric bill by 10 percent. If widely deployed, this approach could reduce peak loads on utility grids up to 15 percent annually, which equals more than 100 gigawatts, or the need to build 100 large coal-fired power plants over the next 20 years in the United States alone. This could save up to $200 billion in capital expenditures on new plant and grid investments, and take the equivalent of 30 million autos off the road.

Opportunities for Improvement

A technology-enabled electric system will be more efficient, enable applications that can reduce greenhouse gas emissions, and improve power reliability. Specifically, a Smart Grid can:

  • Reduce peaks in power usage by automatically turning down selected appliances in homes, offices, and factories.
  • Reduce waste by providing instant feedback on how much energy we are consuming.
  • Encourage manufacturers to produce “smart” appliances to reduce energy use.
  • Sense and prevent power blackouts by isolating disturbances in the grid.

The main applications of a Smart Grid include:

  • Smart Grid Platform: Automating the core electricity grid

Connecting all relevant nodes in the grid is important to collecting information on grid conditions. Whereas in the past, information was gathered only in the high-voltage grid and parts of the medium-voltage grid, a comprehensive view of grid status now is becoming increasingly important. Grid losses in all areas can be identified, and renewable generation sources that often feed electricity into previously unmonitored areas can be better managed. The increasing complexity of managing the system efficiently also requires integration of decentralized decision-making mechanisms; in other words, integrating intelligence into the grid. As a result, grid management can be optimized and outages can be significantly reduced.

  • Grid Monitoring and Management: Using collected information

Expensive power outages can be avoided if proper action is taken immediately to isolate the cause. Utilities are installing sensors to monitor and control the grid in near real time (seconds to milliseconds) to detect faults early. These monitoring and control systems are being extended from the point of transmission down to the distribution grid. Grid performance information is integrated into utility companies’ supervisory control and data acquisition (SCADA) systems to provide automatic, near-real-time electronic control of the grid.

  • Integrated Maintenance: Optimizing the lifetime of assets

Middle to long term, collected information can optimize the maintenance strategy of grid assets. Depending on utilization, age, and many other factors, the condition of assets can differ significantly. The traditional maintenance strategy, based on defined cycles, is no longer appropriate. Assets can be monitored continuously, and critical issues can be identified in advance. Combined with new communication technologies, information on critical asset conditions can be provided to field technicians to make sure problems are fixed in time. This new way of doing maintenance can significantly increase the lifetime of assets and avoid expensive outages.

  • Smart Metering: Real-time consumption monitoring

Today’s electricity prices on the wholesale market are extremely volatile, driven by demand-and-supply situations based on capacity, fuel prices, weather conditions, and demand fluctuations over time. On average, off-peak prices at night are 50 percent lower than daytime prices. Consumers, however, typically see a flat price for energy regardless of time period. Driven by the regulator, some utilities are now starting to replace traditional mechanical electric meters with “smart meters,” allowing customers to choose variable-rate pricing based on time of day. By seeing the real cost of energy they are consuming at that moment, consumers can respond accordingly, shifting their energy consumption from high-price to low-price time periods by turning off appliances. This load shifting and load shedding has the joint benefit of reducing consumer costs and demand peaks for utilities.
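The effect of load shifting under a simple two-period tariff can be illustrated with a minimal calculation; the prices and consumption figures below are hypothetical.

```python
def daily_cost(day_kwh: float, night_kwh: float,
               day_price: float = 0.30, night_price: float = 0.15) -> float:
    """Cost of one day's consumption under a two-period tariff in which
    the night rate is half the day rate."""
    return day_kwh * day_price + night_kwh * night_price

before = daily_cost(day_kwh=8.0, night_kwh=2.0)  # daytime-heavy usage
after = daily_cost(day_kwh=5.0, night_kwh=5.0)   # 3 kWh shifted to night
assert after < before  # shifting load to off-peak hours lowers the bill
```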

  • Demand-side Management: Reducing electricity consumption in homes, offices, and factories

Demand-side management works to reduce electricity consumption in homes, offices, and factories by continually monitoring electricity consumption and actively managing how appliances consume energy. It consists of demand-response programs, smart meters and variable electricity pricing, smart buildings with smart appliances, and energy dashboards. Combined, these innovations allow utility companies and consumers to manage and respond to the variances in electricity demand more effectively.

Demand Response: During periods of peak energy usage, utility companies send electronic messages to alert consumers to reduce their energy consumption by turning off (or down) non-essential appliances. In the future, alert signals will be sent automatically to appliances, eliminating the need for manual intervention. If enough consumers comply with this approach, utility companies will not need to dispatch an additional power plant, the most expensive asset they operate. To increase the number of consumers who comply, utility companies may offer cash payments or reduce consumers’ electric bills.

Smart Buildings with Smart Appliances: Buildings are becoming smarter in their ability to reduce energy usage. Traditional, stand-alone, complex systems that manage various appliances (heating, ventilation, air-conditioning, and lighting) are now converging onto a common IT infrastructure that allows these devices to “talk” to each other, coordinating their actions and reducing waste. For example, a manager of 500 commercial buildings reduced energy consumption nearly 20 percent simply by ensuring heaters and air conditioners were not running simultaneously.

Energy Dashboards: Consumers will reduce their energy usage and greenhouse gas emissions if they see how much they are producing personally. Online energy dashboards provide real-time visibility into individuals’ energy consumption while offering suggestions on how to reduce consumption. Recent university studies have found that simple dashboards can encourage occupants to reduce energy usage in buildings by up to 30 percent.

  • Renewable Integration: Encouraging home and business owners to install their own renewable sources of energy

Micro generation: Some homes and offices are finding it more cost-effective to produce electricity locally, using small-scale energy-generation equipment. These devices include renewable devices such as photovoltaic, and solar thermal as well as non-renewable devices, such as oil- or natural-gas-fired generators with heat reclamation.

Micro generation technologies are becoming more affordable for residential, commercial, and industrial customers. Depending on the technology type and the operating environment (location, utilization, government or state subsidies), they can be competitive against conventional generation, and at the same time reduce greenhouse gas emissions. Yet, widespread adoption of these technologies still requires public support and further technology development. Micro generation technologies, combined with a Smart Grid, will help consumers become an “active part of the grid,” rather than being separate from it—and will integrate with, not replace, central generation. In addition, a Smart Grid would allow utilities to integrate distributed generation assets into their portfolios as “virtual power plants.”

  • Vehicle-to-Grid: Until recently, pumped water storage was the only economically viable option for storing electricity on a large scale. With the development of plug-in hybrid electric vehicles (PHEVs) and electric cars, new opportunities will change the market. For example, car batteries can be used to store energy when it is inexpensive and sell it back to the grid when prices are higher. For drivers, their vehicles would become a viable means to arbitrage the cost of power, while utility companies could use fleets of PHEVs to supply power to the grid in response to peaks in electricity demand.
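The arbitrage idea can be sketched with a minimal calculation; the prices, battery capacity, and round-trip efficiency are hypothetical.

```python
def v2g_profit(hourly_prices, capacity_kwh: float, efficiency: float = 0.9) -> float:
    """Buy a full charge at the cheapest hour and sell it back at the most
    expensive one; efficiency accounts for round-trip battery losses."""
    buy = min(hourly_prices)
    sell = max(hourly_prices)
    return capacity_kwh * (sell * efficiency - buy)

# Night price 0.10, evening peak 0.30 (EUR/kWh), 20 kWh usable battery.
profit = v2g_profit([0.10, 0.30, 0.20], capacity_kwh=20.0)
assert profit > 0  # arbitrage pays off despite the 10 percent losses
```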

Potential Impact

Worldwide demand for electric energy is expected to rise 82 percent by 2030. This demand will primarily be met by building many new coal and natural gas electricity generation plants. Not surprisingly, global greenhouse gas emissions are estimated to rise 59 percent by 2030 as a result.

Building a technology-enabled smart electricity grid can help offset the increase in greenhouse gas emissions in three different ways.

Reduce Growth in Demand for Electricity

Consumers:

  • Enable consumers to monitor their own energy consumption, with a goal of becoming more energy-efficient.
  • Provide more accurate and timely information to consumers on electricity variable-pricing signals, allowing them to invest in load-shedding and load-shifting solutions, and to shift dynamically among several competing energy providers based on greenhouse gas emissions or social goals.

Power Utility Companies and Regulators:

  • Broadcast demand-response alerts to reduce peak energy demand and the need to start reserve generators.
  • Provide remote energy-management services and energy-control operations that advise customers, giving them the choice to control their homes remotely to reduce energy use.
  • Enable utility companies to increase their focus on creating “Save-a-Watt” or “Nega-Watt” programs instead of producing power. These programs are effective because offsetting a watt of demand through energy efficiency can be more cost-effective and CO2-efficient than generating an extra watt of electricity.

Equipment Manufacturers:

  • Encourage building-control systems companies to standardize data communications protocols across systems, eliminating proprietary and nonstandard protocols that inhibit integration and management.
  • Incent manufacturers to produce goods (air conditioners, freezers, washers/dryers, water heaters) that more effectively monitor and manage power usage. For example, a refrigerator and air-conditioner compressor could communicate to ensure they don’t start at the same time, thus reducing peak electricity demand.
  • Enable and encourage electrical equipment manufacturers to build energy-efficiency, management, and data-integration capabilities into their equipment.

Building Architects & Owners:

Take an integrated approach to new building construction, incorporating smart, connected building communication technologies to manage and synchronize operation of appliances, to turn off lighting in rooms not in use, to turn on reserve generation when price-effective, and to manage overall energy use.

Accelerate Adoption of Renewable Electricity-Generation Sources

  1. Encourage home and building owners to invest in highly efficient, low-emissions micro generation technologies to supply some of their own energy and offset peak demand on the electric grid—thereby reducing the need for new, large-scale power plants
  2. Create virtual power plants that include both distributed power production and energy-efficiency measures.
  3. Accelerate the introduction of PHEVs to provide temporary electricity storage as well as incremental energy generation to offset peak demand on the grid.

Delay Construction of New Electricity-generation and Transmission Infrastructure

It is estimated that by 2030, the cost to renew and expand the world’s aging transmission/distribution grid and its power-generation assets will exceed $6 trillion and $7.5 trillion, respectively. Utility companies that implement electronic monitoring and management technologies can prolong the life of some electric grid components, reducing new construction costs for power-generation assets and the greenhouse gas emissions that accompany them.

Options for Closing Future Capacity Gap (Scenario Based on German Electricity Market)

Current Initiatives

Practically speaking, most of the technologies required to create a Smart Grid are available today. Forward-looking utility companies are already offering demand-response technologies that, for example, detect the need for load shedding, communicate the demand to participating users, automate load shedding, and verify compliance with demand-response programs. Many utility companies are also implementing large numbers of smart electric meters to offer variable pricing to consumers and to reduce manual meter-reading costs.
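A minimal sketch of the demand-response cycle described above (detect the need for load shedding, then communicate shed requests to participating users). Every name and number here is hypothetical; real programs add telemetry, baselining, and compliance verification on top of this skeleton:

```python
def demand_response(load_mw, capacity_mw, participants):
    """Toy demand-response dispatch. participants maps a participant name
    to its enrolled sheddable load in MW (illustrative values only)."""
    if load_mw <= capacity_mw:
        # No shortfall detected: nothing to shed.
        return {"shed_needed_mw": 0.0, "requests": {}}
    deficit = load_mw - capacity_mw
    total_enrolled = sum(participants.values())
    # Ask each participant to shed in proportion to its enrolled load.
    requests = {name: deficit * enrolled / total_enrolled
                for name, enrolled in participants.items()}
    return {"shed_needed_mw": deficit, "requests": requests}

# 1050 MW of demand against 1000 MW of capacity: shed 50 MW, split 30/70.
plan = demand_response(1050.0, 1000.0, {"mall": 30.0, "factory": 70.0})
```

A utility's real system would follow this with per-meter verification that each participant actually reduced load by the requested amount.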

Major building automation companies, such as Johnson Controls, Siemens, and Honeywell, all have smart building solutions that integrate their various HVAC systems. Several competing communication protocols (BACnet, LONnet, oBIX), however, are still vying to become the standard through which all building devices can intercommunicate. This inability to agree upon a common industry standard has delayed the vision of connecting every electric device and spawned several middleware and gateway companies, such as Cimetrics, Gridlogix, Richards Zeta, and Tridium. As expected, many white goods manufacturers, including GE, Whirlpool, and Siemens, are making appliances that can connect to a building’s network.

In addition, several public and private organizations have implemented energy consumption dashboards. Typically, these are custom-designed internally or provided by small software integrators. Oberlin College has a good example of an online energy dashboard showing energy consumption at its college dormitories.

A variety of companies, ranging from Honda Motor Company and GE Energy to micro generation Ltd. and Blue Point Energy, are developing micro-generation devices. A host of technology companies provide the technology required to make the Smart Grid “smart,” including Current Technologies and BPL Global for broadband-over-power-line, Silver Spring Networks and Cellnet for RF wireless communications, and many other small and specialized companies.

So far, however, nobody has been able to define an industry architecture that spans the entire Smart Grid from high-voltage transformers at the power plant down to the wall sockets in homes and offices.

Role of Utility Companies

Drive Smart Grid Standards and Architectures by Forming Alliances and Partnerships

Many utility companies are now reaching out to other utility companies to learn from their findings and share ideas. In addition, strategic partnerships, both within and outside the utility industry, are being formed. Utility companies should also partner more closely with energy regulators to determine their current position on recapturing costs through tariff increases, while at the same time evaluating how to influence policies to accelerate their own Smart Grid investment plans.

Evaluate Smart Grid Solutions and Vendors

Utility companies should start by understanding the costs related to developing the Smart Grid, including carbon pricing, grid upgrades, raw energy, and the indirect cost of competition from other utility companies offering energy-efficient services. Once these costs are understood, utility companies should estimate the economic impact Smart Grid solutions could have on their profits. This exercise will help utility companies quantify the effect of the Smart Grid on their bottom line.

Role of Government

While the technologies for Smart Grid solutions are mainly available today, the real challenge to accelerating adoption stems from the various industries that need to work together to create a viable, integrated system. For example, Smart Grid requires utility companies to work with IT companies, and building owners to work with energy technology companies. Bringing together their various perspectives to design and build complex systems often proves difficult. Given this complexity, the role of government is to create working organizations and policies to incentivize open partnerships. Government can play four key roles to accelerate Smart Grid adoption:

1.  Develop cost-recovery mechanisms that allow utilities to include investments in their regulated asset base. Some European countries already incentivize new investments by increasing the return on regulated asset base by 1 to 2 percent above the standard return in the grid tariff.

2. Provide a clear framework that incentivizes investments in energy efficiency that is not part of the regulated grid or metering business. Solutions for demand-side management decrease energy consumption and, therefore, CO2 emissions. Just as utilities must pay for CO2 emissions in some countries, there should be a system in place for receiving CO2 credits based on investments in energy efficiency. Similar frameworks are already in place in Italy and France (“White Certificates”).

3.  Quickly develop critical communication standards. The connected building industry, in particular, has battled over competing standards for years. In today’s electricity grids, hundreds of different protocols are unable to communicate with each other. A well-crafted, government-led standards body could have ended this issue years ago.

4. Increase transparency and flexibility in the electricity market, giving consumers the ability to purchase electricity from the most efficient provider.
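The cost-recovery uplift mentioned in point 1 can be illustrated with a small calculation; the asset base and rates below are invented purely for illustration, not drawn from any actual tariff:

```python
def annual_return(rab_musd, base_rate, uplift=0.0):
    """Annual allowed return on a regulated asset base (RAB).
    rab_musd: asset base in millions; rates are fractions.
    All figures here are hypothetical."""
    return rab_musd * (base_rate + uplift)

# A 500 M asset base at a 6 % base rate earns 30 M per year;
# a 2-percentage-point Smart Grid uplift raises that to 40 M.
base = annual_return(500.0, 0.06)
smart = annual_return(500.0, 0.06, uplift=0.02)
```

The extra 10 M per year is the regulator's incentive signal: it makes qualifying Smart Grid investments more attractive than ordinary grid spending of the same size.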

Role of the ICT Industry

There are several imperatives for the ICT industry to help accelerate adoption of the Smart Grid:

• Partnering for Systems Integration: From an ICT perspective, building the Smart Grid is a fairly straightforward technical challenge; most of the core technologies exist and have been proven. The real challenge is integrating the various technologies into a single, working solution. It is a significant systems integration challenge to tie various devices, constituencies, and telecommunications protocols together seamlessly. No single company has the capabilities to implement the Smart Grid; each industry brings a piece of the solution. The challenge, especially for ICT companies, is to stop operating as “islands.” Rather, they need to build the alliances and partnerships required to ensure their technology fits into the larger, cross-industry ecosystem that constitutes the Smart Grid.

• Increase Risk-taking: In a recent discussion with technology companies, Jim Rogers, CEO of Duke Energy, said that because Smart Grid ideas are evolving so quickly, technology companies must become more comfortable with taking risks and applying their technologies to new applications. Rather than wait for the perfect IT solution or comprehensive standard to be developed, companies should expedite taking their solutions to market for testing and vetting.

• Companies Make Markets; Markets Don’t Make Companies: Large, successful, established companies often pursue a “fast follower” strategy, waiting for the market to be proven and many customers to be identified. This often makes sense before investing significant R&D resources. The Smart Grid, however, may evolve in a way that makes the fast-follower strategy undesirable. The core technology and communications standards that will enable widespread Smart Grid adoption are currently being developed. Once protocols are established, they will be built into a capital infrastructure (power plants, substations, buildings, power lines) whose useful life spans decades, much longer than the traditional ICT solution lifecycle. Once Smart Grid standards are set, they will be around for a while. Woe to the company that finds itself on the wrong end of that solution.


Rising fuel costs, underinvestment in aging infrastructure, and climate change are all converging to create a turbulent period for the electricity industry. To make matters worse, it’s becoming more expensive to expand power-generation capacity, and public opposition to new fossil-fuel stations, particularly coal-fired stations, is increasing. As a consequence, reserve margins for system stability have reached a critical level in many countries. As utility companies prepare to meet growing demand, greenhouse gas emissions from electricity generation may soon surpass those from all other energy sources. Fortunately, the creation of a Smart Grid will help solve these challenges.

A Smart Grid can reduce the amount of electricity consumed by homes and buildings, significantly reduce peak demand, and accelerate adoption of distributed, renewable energy sources, all while improving the reliability, security, and useful life of electrical infrastructure.

Despite its promise and the availability of most of the core technologies needed to develop the Smart Grid, implementation has been slow. To accelerate development, state, county, and local governments, electric utility companies, public electricity regulators, and IT companies must all come together and work toward a common goal.

The suggestions in this paper will help the Smart Grid become a reality that will ensure we have enough power to meet demand, while at the same time reducing greenhouse gases that cause global warming.


The vision and enhancement strategy for future electricity networks is set out in the “Smart Grids” program, which was developed within the European Technology Platform (ETP) of the EU in its preparation of the 7th Framework Programme. Features of such a future “Smart Grid” can be outlined as follows:

  • Flexible: fulfilling customers’ needs whilst responding to the changes and challenges ahead.
  • Accessible: granting connection access to all network users, particularly for RES and high efficiency local generation with zero or low carbon emissions.
  • Reliable: assuring and improving security and quality of supply.
  • Economic: providing best value through innovation, efficient energy management and ‘level playing field’ competition and regulation.

It is worth mentioning that the Smart Grid vision is equally applicable to system developments in other regions of the world. Smart Grids will help achieve sustainable development. Links will be strengthened across Europe and with other countries where different but complementary renewable resources are to be found. For the interconnections, innovative solutions to avoid congestion and to improve stability will be essential. HVDC (High Voltage Direct Current) provides the necessary features to avoid technical problems in the power systems. It also increases transmission capacity and system stability very efficiently and helps prevent cascading disturbances. HVDC can also be applied as a hybrid AC-DC solution in synchronous AC systems, either as a Back-to-Back link for grid power flow control (elimination of congestion and loop flows) or as a long-distance point-to-point transmission.

An increasingly liberalized market will encourage trading opportunities to be identified and developed. Smart Grids is a necessary response to the environmental, social and political demands placed on energy supply.

In what follows, the global trends in power markets and the prospects of system developments are depicted, and the outlook for Smart Grid technologies for environmental sustainability and system security is given.

Global Trends in Power Markets

In the near future we will have to face two mega-trends. The first is demographic change. The world’s population is developing asymmetrically: on the one hand, dramatic population growth is seen in developing and emerging countries; on the other hand, the population in highly developed countries is stagnating. Despite these differences, life expectancy is increasing everywhere.

This increase in population (and in the number of elderly people in particular) poses great challenges to the worldwide infrastructure. Water, power supply, health services, mobility – these are only some of the notions which come to mind directly. The second mega-trend is urbanization, with its dramatic growth worldwide. In less than two years, more people will be living in cities than in the countryside. Megacities keep on growing. Already today they are the driving force of the world’s economy: Tokyo, for example, is the largest city in the world, with a population of 35 million, and is responsible for over 40 % of the Japanese economic performance. Other examples are Los Angeles, with its 16 million citizens and an 11 % share of the US economy, and Paris, with its 10 million citizens and 30 % of the French gross domestic product.

Both of these mega-trends make the demand for worldwide infrastructure grow. Fig. 8.1 depicts the development of world population and power consumption up to 2020. The figure shows that the increase is concentrated particularly in developing and emerging countries.

This development goes hand in hand with a continuous reduction in non-renewable energy resources. The resources of conventional as well as non-conventional oil are gradually coming to an end, and other energy sources are also running short. So the challenge is as follows: for a dramatically growing world population, and with a simultaneous reduction in fossil energy sources, a proper way must be found to provide reliable and clean power. This must be done in the most economical way, since many economies, particularly in emerging regions, cannot afford expensive environmentally compatible technologies.

Consequently, we have to deal with a field of tension between reliability of supply, environmental sustainability and economic efficiency. The combination of these three tasks can be solved with the help of ideas, intelligent solutions and innovative technologies, which is today’s and tomorrow’s challenge for planning engineers worldwide.

This is exactly what Siemens has been doing over the last 160 years. In the field of power supply, the founder of the company, Werner von Siemens, launched electrical engineering with his invention of the dynamo-electric principle in 1866. Since that time, electric power supply has established itself on all continents, although with an unequal degree of distribution. Depending on their degree of development and power consumption, different regions have very different system requirements.

In developing countries, the main task is to provide local power supply, e.g. by means of developing small isolated networks.

Emerging countries see a dramatic growth in power demand. Enormous amounts of power must be transmitted to large industrial regions, partly over long distances (for example, from large hydro power plants inland to coastal regions), which involves high investments. Higher voltage levels are needed, as well as long-distance transmission by means of FACTS and HVDC.

During the transition, newly industrialized countries need energy automation and lifetime extension of system components such as transformers and substations. Higher investments in distribution systems are essential as well. Decentralized power supplies, e.g. wind farms, are coming up.

Industrialized countries, in their turn, have to struggle against transmission bottlenecks caused, among other factors, by an increase in power trading. At the same time, the demand for high reliability of power supply, high power quality and, last but not least, clean energy increases in these countries. In spite of all the different requirements, one challenge remains the same for all: sustainability of power supply must be ensured. Our resources on the Earth are limited, as shown in Fig. 8.2, and the global climate is very sensitive to environmental influences. Global industrialization, with its ongoing CO2 production, is causing dramatic changes in the climate.


There is no ready-made solution to this problem. The situation in different countries and regions is too complex. An appropriate approach is, however, obvious: power generation, transmission, distribution and consumption must be organized efficiently. The approach of the EU’s “Smart Grid” vision is an important step in the direction of environmental sustainability of power supply, and new transmission technologies can effectively help reduce losses and CO2 emissions.

Prospects of Power System Development

The development of electric power supply began more than one hundred years ago. Residential areas and neighboring establishments were at first supplied with DC via short lines. At the end of the 19th century, AC transmission was introduced, using higher voltages to transmit power from remote power stations to the consumers.

In Europe, 400 kV became the highest voltage level, in Far-East countries mostly 550 kV, and in America 550 kV and 765 kV. The 1150 kV voltage level was anticipated in some countries in the past, and some test lines have already been built. Fig. 8.5 and 8.6 depict these developments and prospects.

Due to increased demand for energy and the construction of new generation plants, first built close to and later at locations remote from the load centers, the size and complexity of power systems all over the world have grown. Power systems have been extended by interconnections to neighboring systems in order to achieve technical and economic advantages. Large systems, covering parts of or even whole continents, came into existence to gain well-known advantages, e.g. the possibility to use larger and more economical power plants, reduction of reserve capacity in the systems, utilization of the most efficient energy resources, and an increase in system reliability.


In the future of liberalized power markets, the following advantages will become even more important: pooling large power generation stations, sharing spinning reserve and using most economic energy resources, and considering ecological constraints, such as the use of large nuclear and hydro power stations at suitable locations, solar energy from desert areas and embedding big offshore wind farms.

Examples of large AC interconnections are systems in North America, Brazil, China and India, as well as in Europe (UCTE – installed capacity 530 GW) and Russia (IPS/UPS – 315 GW), which are planned to be interconnected in the future.

It is, however, a crucial issue that with increasing size of the interconnected systems, the advantages diminish. There are both technical and economic limitations to interconnection if energy has to be transmitted over extremely long distances through interconnected synchronous AC systems. These limitations are related to problems with low-frequency inter-area oscillations, voltage quality and load flow. This is, for example, the case in the UCTE system, where the 400 kV voltage level is in fact too low for large cross-border and inter-area power exchange. Bottlenecks have already been spotted and, for an increase in power transfer, advanced solutions must be applied.

In deregulated markets, the loading of existing power systems will further increase, leading to bottlenecks and reliability problems. System enhancement will be essential to balance the load flow and to get more power out of the existing grid. Large blackouts in America and Europe confirmed clearly that the favorable close electrical coupling of the neighboring systems might also include the risk of uncontrollable cascading effects in large and heavily loaded synchronous AC systems.

Security of Supply – Lessons Learned From the Blackouts

The Québec system in Canada was not affected, thanks to its DC interconnections to the US, whereas Ontario (synchronous interconnection) fully joined the cascade.

The reasons why Québec “survived” the Blackout are very clear:

  • Québec’s major interconnections to the affected areas are DC links.
  • These DC links act like a firewall against cascading events.
  • They split the system at the right point at the right time, whenever required.
  • Therefore, Québec was “saved”.
  • Furthermore, the DC links assisted the US system restoration by means of “power injection”.

It can be seen that the load flow in the system does not match the design criteria well (refer to the “hot lines”, shown in red). In the upper right-hand corner of the figure, one of the later blackout events, with “giant” loop flows, is attached; it occurred one year later in the very area under investigation. Fig. 8.8 shows that the probability of large blackouts is much higher than calculated by mathematical modeling, particularly when the related amount of power outage is very large. The reasons for this result are indicated in the figure. This means that once a cascading sequence has started, it is usually difficult or even impossible to stop, unless the direct causes are eliminated by means of investments in the grid and enhanced training of system operators for better handling of emergency situations.

For these reasons, further blackouts occurred in the same year. The largest was the Italian blackout, six weeks after the US-Canada events. It was initiated by a line trip in Switzerland. Reconnection of the line after the fault was not possible due to a very large phase-angle difference (about 60 degrees, which led to blocking of the synchro-check device). Twenty minutes later a second line tripped, followed by a fast trip sequence of all interconnecting lines to Italy due to overload. During this sequence, the frequency in Italy ramped down to 47.5 Hz within 2.5 minutes, and the whole country blacked out.
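As a rough arithmetic check, the reported numbers imply an average frequency decline of about 1 Hz per minute, assuming the nominal 50 Hz starting point (which the account above does not state explicitly):

```python
# Average rate of frequency decline during the Italian blackout sequence.
# The 50 Hz nominal starting frequency is an assumption; the 47.5 Hz
# endpoint and 2.5-minute duration are taken from the blackout account.
f_nominal_hz = 50.0
f_final_hz = 47.5
duration_min = 2.5

decline_hz_per_min = (f_nominal_hz - f_final_hz) / duration_min
```

Under-frequency load-shedding schemes exist to arrest exactly this kind of decline before generating units trip on their own under-frequency protection and the collapse becomes unrecoverable.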

Several reasons were reported: wrong actions by the operators in Italy (insufficient load rejection) and a very high power import from the neighboring countries in general. Indeed, during the night from Saturday to Sunday, the scheduled power import was 6.4 GW – 24 % of the total consumption at that time (27 GW; EURELECTRIC Task Force Final Report 06-2004). The real power import was even higher (6.7 GW, possibly due to the country-wide celebration of what is known as “White Night”).

A summary of the root causes of the Italian blackout is given. It can be concluded that the existing power systems, by their topology, are not designed for wide-area energy trading. The grids are close to their limits. Restructuring will be essential, and the grids must acquire “Smart” features, as stated before. This is also confirmed by the large blackout of 4 November 2006, which affected eight EU countries. It highlighted the fact that Continental Europe is already behaving in some respects as a single power system, but with a network not designed accordingly. Europe’s power system (including its network infrastructure) has to be planned, built and operated for the consumers it will serve. Identifying, planning and building this infrastructure in liberalized markets is an ongoing process that requires regular monitoring and coordination between market actors.

Electric power supply is as essential for the life of a society as blood is for the body. Without power supply there are devastating consequences for daily life: breakdown of public transportation systems, traffic jams, computer outages, as well as standstill in factories, shopping malls, hospitals, etc.

Use of Smart Grid Technologies for System Enhancement and Grid Interconnection

In the second half of the last century, high-power HVDC transmission technology was introduced, offering new dimensions for long-distance transmission. This development started with the transmission of power in the range of a few hundred MW and was continuously increased. Transmission ratings of several GW over large distances with only one bipolar DC line are state-of-the-art in many grids today. The world’s first 800 kV DC project, in China, has a transmission rating of 5 GW, and further projects with 6 GW or even higher ratings are at the planning stage. In general, for transmission distances above 700 km, DC transmission is more economical than AC transmission (≥ 1000 MW).
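The 700 km rule of thumb falls out of a simple cost model: DC pays a fixed premium for converter stations but has a lower per-kilometre line and loss cost, so there is a break-even distance beyond which DC wins. The cost figures below are placeholders chosen only to reproduce a 700 km crossover, not real project data:

```python
def breakeven_km(ac_fixed, ac_per_km, dc_fixed, dc_per_km):
    """Distance at which total DC cost (fixed converter premium plus
    per-km line cost) drops below total AC cost. All cost units are
    arbitrary; only the ratio matters. Illustrative model only."""
    # Solve ac_fixed + ac_per_km*d == dc_fixed + dc_per_km*d for d.
    return (dc_fixed - ac_fixed) / (ac_per_km - dc_per_km)

# Hypothetical numbers: DC terminals cost 350 units more, but DC lines
# cost 0.5 units/km less, giving a 700 km break-even distance.
d_breakeven = breakeven_km(ac_fixed=100.0, ac_per_km=1.0,
                           dc_fixed=450.0, dc_per_km=0.5)
```

Below the break-even distance the converter premium dominates and AC is cheaper; above it, the per-kilometre savings of DC accumulate faster than the premium.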

Power transmission of up to 600–800 MW over distances of about 300 km has already been achieved with submarine cables, and cable transmission lengths of up to about 1,000 km are at the planning stage. Due to these developments, HVDC became a mature and reliable technology. During the development of HVDC, different kinds of applications were carried out; they are shown schematically in Fig. 8.10. The first commercial applications were HVDC sea-cable transmissions, because AC cable transmission over more than 80–120 km is technically not feasible due to reactive power limitations. Then, long-distance HVDC transmissions with overhead lines were built, as they are more economical than transmissions with AC lines. To interconnect systems operating at different frequencies, Back-to-Back (B2B) schemes were applied. B2B converters can also be connected to long AC lines. A further application of HVDC transmission, which is very important for the future, is its integration into complex interconnected AC systems; the reasons for these hybrid solutions are basically lower transmission costs as well as the possibility of bypassing heavily loaded AC systems.
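The 80-120 km AC-cable limit stems from the cable's shunt capacitance: charging power grows with the square of the voltage and linearly with length, eventually consuming the cable's entire current rating. A sketch of the standard formula Q = 2πf·C·V², where the capacitance-per-kilometre value is an assumed, illustrative figure:

```python
import math

def cable_charging_mvar(v_kv, c_uf_per_km, length_km, f_hz=50.0):
    """Reactive (charging) power generated by an AC cable:
    Q = 2*pi*f * C * V^2. The capacitance per km passed in by the
    caller is an illustrative assumption, not a datasheet value."""
    c_farads = c_uf_per_km * 1e-6 * length_km
    q_var = 2 * math.pi * f_hz * c_farads * (v_kv * 1e3) ** 2
    return q_var / 1e6  # convert VAr to MVAr

# A 400 kV cable at an assumed 0.2 uF/km: after 100 km the cable already
# generates on the order of a gigavar of charging power.
q_100km = cable_charging_mvar(400.0, 0.2, 100.0)
```

Because Q scales linearly with length, doubling the cable doubles the charging power, which is why long AC submarine links are impractical and HVDC took over that application.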

Typical configurations of HVDC are depicted. The major benefit of HVDC, both in B2B and long-distance transmission, is its built-in ability to block fault currents, which serves as an automatic firewall for blackout prevention in case of cascading events; this is not possible with synchronous AC links.

HVDC PLUS is the preferred technology for interconnection of islanded grids, such as offshore wind farms, to the power system. This technology provides the “Black-Start” feature by means of self-commutated voltage-sourced converters (VSC). Voltage-sourced converters do not need a “driving” system voltage; they can build up a 3-phase AC voltage at the cable end via the DC voltage, supplied from the converter at the main grid. Siemens uses an innovative Modular Multilevel Converter (MMC) technology for HVDC PLUS with low switching frequencies. Therefore, only small filters, or even none at all, are required on the AC side of the converter transformers. Fig. 8.12 summarizes the advantages in a comprehensive way. The specific features of MMC are explained in detail in the literature.

Since the 1960s, Flexible AC Transmission Systems (FACTS) have been developed into a mature, highly reliable technology with high power ratings, proven in various applications. FACTS, based on power electronics, were developed to improve the performance of weak AC systems and to make long-distance AC transmission feasible. FACTS can also help solve technical problems in interconnected power systems. FACTS are applicable in parallel connection (SVC, Static VAR Compensator – STATCOM, Static Synchronous Compensator), in series connection (FSC, Fixed Series Compensation – TCSC/TPSC, Thyristor Controlled/Protected Series Compensation – S³C, Solid-State Series Compensator), or in a combination of both (UPFC, Unified Power Flow Controller – CSC, Convertible Static Compensator) to control load flow and to improve dynamic conditions. Fig. 8.14 shows the basic configurations of FACTS.

GPFC is a special DC back-to-back link designed for fast power and voltage control at both terminals. In this manner, GPFC is a “FACTS B2B”, which is less complex and less expensive than the UPFC. Ratings of SVCs can go up to 800 MVAr; series FACTS devices are installed at the 550 and 735 kV levels to increase line transmission capacity by up to several GW. Recent developments are the TPSC (Thyristor Protected Series Compensation) and the Short-Circuit Current Limiter (SCCL), both innovative solutions using special high-power thyristor technology. The world’s biggest FACTS project with series compensation (TCSC/FSC) is at Purnea and Gorakhpur in India, with a total rating of 1.7 GVAr.

Bulk-power UHV AC and DC transmission schemes over distances of more than 2000 km are currently under planning for the connection of various large hydropower stations in China. Ultra-high-voltage DC (up to 800 kV) and ultra-high-voltage AC (1000 kV) are the preferred voltage levels for these applications, to keep the transmission losses as low as possible.

In India, there are similar prospects for UHV DC as in China due to the large extension of the grid. India’s energy growth is about 8-9 % per annum, with an installed generation capacity of 124 GW in 2006 (92 GW peak load demand). The installed generation capacity is expected to increase to 333 GW by 2017.
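A quick compound-growth check, using the upper 9 % end of the stated rate, shows the 333 GW figure is broadly consistent with the growth assumption; this is purely arithmetic on the numbers above, not an independent forecast:

```python
# Compound growth of India's installed capacity from the 2006 base,
# at the upper end (9 %) of the stated 8-9 % annual growth rate.
capacity_2006_gw = 124.0
annual_growth = 0.09
years = 2017 - 2006  # 11 years

projected_gw = capacity_2006_gw * (1 + annual_growth) ** years
```

At 9 % per annum the projection lands near 320 GW, close to the 333 GW target, so the target implicitly assumes growth at or slightly above the top of the stated range.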

China plans to interconnect its Central and Southern systems via three bulk power corridors, which will build up a redundant “backbone” for the whole grid. Each corridor is planned for about 20 GW of transmission capacity, to be implemented with both AC and DC transmission lines with ratings of 4–10 GW each (at ±800 kV DC and 1000 kV AC). Therefore, each corridor will have a set-up with 2–3 systems for redundancy reasons. With these plans, China envisages a total of about 900 GW of installed generation capacity by 2020. For comparison, UCTE and IPS/UPS together amount to 850 GW today.

The benefits of hybrid power system interconnections as large as these are clear:

•   Increase in transmission distance and reduction in losses – with UHV

•   HVDC serves as stability booster and firewall against large blackouts

•   Use of the most economical energy resources – far from load centers

•   Sharing of loads and reserve capacity

•   Renewable energy resources, e.g. large wind farms and solar fields can be integrated much more easily

However, the 1000 kV AC lines also bring some stability constraints: if, for example, an AC line with up to 10 GW of transmission capacity is lost during a fault, large inter-area oscillations might occur. For this reason, additional FACTS controllers for power oscillation damping and stability support are under discussion.

The idea of embedding huge amounts of wind energy in the German grid by using HVDC, FACTS and GIL (Gas Insulated Lines) is depicted. The goal is a significant CO2 reduction through the replacement of conventional energy sources by renewable energies, mainly offshore wind farms. The power output of wind generation can vary quickly over a wide range, depending on the weather conditions. Therefore, a sufficiently large amount of controlling power from the network is required to substitute the positive or negative deviation of the actual wind power infeed from the scheduled wind power amount. Fig. 8.14 shows a typical example of the conditions, as measured in 2003. Wind power infeed and the regional network load during a week of maximum load in the E.ON control area are plotted, and the relation between consumption and supply in this control area is illustrated in the figure. In the northern areas of the German grid, the transmission capacity is already at its limits, especially during times with low load and high wind power generation.
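The balancing requirement described above reduces to a simple sign convention: the network must supply the shortfall when actual wind infeed runs below schedule and absorb the surplus when it runs above. A trivial sketch, with all megawatt figures invented for illustration:

```python
def balancing_power_mw(scheduled_mw, actual_mw):
    """Controlling power the network must supply (positive) or absorb
    (negative) to cover deviations of actual wind infeed from the
    schedule. Figures passed in are illustrative, not measured data."""
    return scheduled_mw - actual_mw

# Wind came in 400 MW below schedule: the grid must supply 400 MW.
shortfall = balancing_power_mw(scheduled_mw=2000.0, actual_mw=1600.0)
# Wind came in 300 MW above schedule: the grid must absorb 300 MW.
surplus = balancing_power_mw(scheduled_mw=2000.0, actual_mw=2300.0)
```

The operational difficulty is not the arithmetic but holding enough fast-responding reserve to cover the worst plausible deviation, which grows with the installed wind capacity.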

An efficient alternative for the connection of offshore wind farms is the integration of HVDC long-distance transmission links into the synchronous AC system, as shown schematically.


Deregulation and privatization are posing new challenges to high-voltage transmission systems. System elements are going to be loaded up to their thermal limits, and wide-area power trading with fast-varying load patterns will lead to increasing congestion.

Environmental constraints, such as energy saving, loss minimization and CO2 reduction, will play an increasingly important role. The loading of existing power systems will further increase, leading to bottlenecks and reliability problems. As a consequence of “lessons learned” from the large blackouts in 2003, advanced transmission technologies will be essential for the system developments, leading to Smart Grids with better controllability of the power flows.

HVDC and FACTS provide the necessary features to avoid technical problems in the power systems; they increase the transmission capacity and system stability very efficiently, and they assist in the prevention of cascading disturbances. They effectively support grid access for renewable energy resources and reduce transmission losses by optimization of the power flows. Bulk-power UHV AC and DC transmission will be applied in emerging countries such as India and China to serve their booming energy demand in an efficient way.


This thesis tries to define the smart grid concept and where it is going as infrastructure. It does so by providing an outlook on the electricity market and its players and by explaining the main smart grid drivers, applications, challenges and benefits. As part of this enterprise, power engineers are investigating efficient and intelligent ways of energy distribution and load management; computer scientists are researching cyber-security issues for reliable sharing of information across the grid; the signal-processing community is looking into advanced instrumentation for detailed grid monitoring; wind engineers are studying renewable energy integration; business administrators are reframing power-system market policies to adapt to these changes; and IT systems control the smart grid to ensure a seamless operational environment. Making a power system smart requires modeling, identification, estimation, robustness, optimal control and decision-making over networks.

Future Suggestion of Smart Grid

While it is not yet clear what the smart grid will become in the future, its great potential to save energy and costs for utilities and consumers alike makes it an extremely important technology. One clear-cut goal of the smart grid, however, is to give consumers more control over, and interaction with, their energy usage. With this newfound connection, utilities and consumers alike will know more about how energy is being used in their area and, most importantly, will have the ability to do something about it. Similar to what email did for the internet, many believe that it may take something as small as an iPhone application to make the smart grid the next big technology sensation. The biggest barrier is, as usual, cost: the utility companies must build the infrastructure and then rely on consumers to make the right energy choices to make the investment worthwhile. Perhaps consumers need to get out there and make the commitment to show utility companies that we are serious about energy conservation and savings, both for the environment and our wallets!

The power grid of the future will be a more internet-like grid, with multi-directional flows from central and dispersed generation sources (distributed energy resources, DER). This will enable generation and load matching, which can further facilitate energy management or support local "islanding" microgrids. The smart grid will also include multi-directional flows of information and communications via central and dispersed intelligence, enabling fully integrated network management through smart materials and power electronics. Increased two-way communication throughout a combination of large- and small-scale mesh-like systems will help to engage end users through the availability of real-time information and participation technology.


This paper has dealt with the evolution of the smart power grid system. It is still in its nascent stage. The whole power community is now busy understanding and developing the smart power grid, which is no longer a theme of the future. This introductory paper is a small but very vital step towards achieving the ultimate goal of making a "National Grid" a reality.



Online Cultivation and Information System

There are over 700 crore (7 billion) people in the world, and everyone needs food for survival. It is the rule of the world that we must meet an effectively infinite demand using finite assets. Everyone needs to fulfill the essential balanced diet that is required to maintain a healthy body.

So we need to learn the right cultivation systems so that we can grow hygienic food, provide better food for our bodies, fulfill the food demand of people all over the world, and plant many trees for a green world. From this site a visitor can learn about all parts of agriculture, such as fishing: not only fish but also their names (sea fish, deshi fish, carp, dry fish), cultivation methods, hatcheries, fishing projects, their pictures, food, etc. In the same way they can learn about food crops, economic crops, poultry, fruit trees, wood trees, agricultural news, and pictures of fields, ponds, fish, poultry, fruits and foods, which are very important for children. Visitors can give their opinions and can become members of the site. They can get updated news, learn the mission and vision, and get services from this site.

Explanation of Project necessity:

Many people are interested in gardening but have no proper knowledge about it, because:

  • They don’t know how to prepare the field.
  • How much fertilizer should be mixed?
  • Which field is appropriate for which seeds?
  • Do the plants need sun or shade?
  • Which fertilizer is appropriate for them?
  • Which insects are beneficial and which are not?
  • Which insecticides are useful and which are not?
  • Which ponds are appropriate for fish farming?
  • How can they prepare them?
  • Which fish are appropriate for those ponds?
  • Dairy farming is also part of my concept.
  • Tree plantation and its importance, etc.

Initial Statement of Proposed System:

In the proposed site, I am going to establish a site where a farmer, a student, a researcher, a teacher, and all kinds of people can fulfill their needs. I keep an option here for those who want to become members, along with a search option, a feedback option, and a comment option. Most importantly, I am developing this site in both Bengali and English versions so that every person in the world can visit it. The most common information I handle here:

  • Member Information: I am going to store all the members’ information in the database, including id, first_name, email, login, password, city, etc.
  • Cultivation Information: I am going to put all the necessary information about crops on the site, divided into parts such as food crops and economic crops.
  • Poultry Information: I am going to put all the necessary information about poultry on the site, divided into parts such as hens and ducks.
  • Fishing Information: I am going to put all the necessary information about fish on the site, divided into parts such as country fish, carp, sea fish, and dry fish.
  • Dairy Information: I am going to put all the necessary information about the dairy industry and cow farms on the site.
  • Tree Plantation Information: I am going to put all the necessary information about trees on the site, divided into parts such as fruit and wood.
  • Animal Planet Information: I am going to put all the necessary information about animals on the site, divided into parts such as pets, wild animals, and others.
  • Mission & Vision Information: I also state my mission and vision, and why I established the site.
  • Photo Gallery Information: I am going to collect all the necessary photos here. This helps all kinds of people, especially children, to learn about their culture and food.
  • Data & Research Information: Here anyone can get all the information they need if they want to do research on our agricultural site.
  • News Information: Here a person gets news updates about agriculture.
  • Opinion Information: Here a person can express his or her opinion.
  • Service: A farmer can get services from this site.

Development Specification:

The development specification of the proposed system is initiated. The specifications of its parts are given below:

  • Project Language: I am going to build this system in PHP, in order to apply and practice my knowledge of PHP.
  • Methodology: To complete this system, I have chosen RAD (Rapid Application Development), because it is one of the most widely used structured development methods. Although I will use an object-oriented language (PHP) in this system, I have proper knowledge of RAD, so I think it is very suitable for my project development.

The aim of this site, the Online Cultivation and Information System, is to establish a paperless educational area with maximum utilization of time and money. The motto of this project is "Minimum Input and Maximum Output".


After my project proposal was approved, I started the fact-finding stage of the project development. In the fact-finding segment of this project, I have considered three methods for finding the required information about this organization. The methods are:

  • Background Reading;
  • Observation;
  • Interview

Background Reading:

In the fact-finding part, I first completed background reading on agriculture and visited the existing agricultural sites. The main findings are:

  • About 70% of the country’s people make their livelihood from this occupation.
  • Many businesses depend on it.
  • A farmer works every day in his field.
  • Some people work in this area part time as laborers.
  • Poultry is now one of the most important sectors in our country.
  • Much of our demand for meat and eggs is met from this field.
  • In our country it has recently become a kind of revolution.
  • Many unemployed people work here, so the unemployment rate is going downward.
  • Women often grow vegetables in their gardens.
  • Dairy farming is very important for our country’s development and for people’s livelihoods; we get meat and milk from it.
  • Oxygen is life, because we cannot survive without oxygen, and our environment fully depends on trees. So tree plantation is very important for our environment.

Description of the Current Site:

There are some agricultural sites about Bangladeshi agriculture, but none has complete information about anything. They do not describe the climate, soil, and weather appropriate for our agriculture. There is no site with a photo gallery that has an educational section about agriculture with photos, and this is very important for children, especially city children. There is no single site that covers poultry, fishing, tree plantation, and cultivation together. Very few sites describe ways of gardening on the roof, in tubs, etc.

current site


This site discusses the National Agricultural Technology Project and the agricultural technology transfer project.


  • Home
  • About BARC
  • Organization
  • Management
  • Publications
  • Project
  • Newsletter
  • Mail
  • Press Releases

Design Specification:

  • Resolution: The site supports a minimum resolution of 800 × 600 pixels.
  • Color Combination: The color combination is very good.
  • Layout: The site uses a two-column liquid layout with a left sidebar, header, and footer.
  • Navigation Menu: The site maintains proper navigation.


  • They discuss agricultural technology.
  • They use some reports.


  • The text is line-justified.

Color Combination:

  • The color combination is good.


This site discusses the Bangladesh Agricultural Development Corporation (BADC) and its services.


  • Home
  • About BADC
  • Ongoing Activities
  • Seed Price And Sales Center
  • Fertilizer And Sales Stock
  • Irrigation Equipment
  • Publications
  • Newsletter
  • Forms
  • Contacts
  • Notice Boards

Design Specification:

  • Resolution: The site supports a minimum resolution of 800 × 600 pixels.
  • Color Combination: The color combination is very good.
  • Layout: The site uses a two-column liquid layout with a left sidebar, header, and footer.
  • Navigation Menu: The site maintains proper navigation.


  • They discuss agricultural technology.
  • They use some reports.
  • They use a polling system.


  • The text is line-justified.

Color Combination:

  • The color combination is good.

agriculture site


This site discusses agricultural projects, problems, the revenue budget, and development patterns.


  • Home
  • Statics
  • Marketing
  • Policy
  • Projects
  • Crop
  • Fertilizer
  • Projects

Design Specification:

  • Resolution: The site supports a minimum resolution of 800 × 600 pixels.
  • Color Combination: The color combination is very good.
  • Layout: The site uses a two-column liquid layout with a left sidebar, header, and footer.
  • Navigation Menu: The site maintains proper navigation.


  • They discuss agricultural technology.
  • They use some reports.
  • They use some slogans.


  • The text is line-justified.
  • They use some pictures.

Color Combination:

  • The color combination is good.

Analysis of the Current Sites:

After investigating the current sites, I have recognized that they have many shortcomings. The problems are described below:

  • There is no site where every crop has its own description;
  • Some sites have pictures, but they are not sufficient;
  • They do not discuss all parts of agriculture;
  • Some sites discuss cultivation while others discuss only their own activities;
  • No site is complete.


After finding the problems, I have tried to find ways in which they can be improved. To fix these problems, my comments on those sites are given below:

  • Every site needs to describe our climate;
  • They should discuss every part of our agriculture;
  • In this competitive world they should introduce modern technological methods so that farmers can produce more crops;
  • Every site should publish information about diseases and how the farmer or visitor can prevent them;
  • They should give updated news about farming, fishing, poultry, trees, fruits, market price advice, and agriculture in general, and raise awareness about our agricultural sector;
  • They should give real pictures of every step of cultivating, ponds, fish, fish feeding, etc.

Requirement Specification:

Module Name: Member Model


  • Add Member record;
  • Delete Member record;
  • Edit Member record;
  • View Member details;
  • Update Member details;

Module Name: Admin Model


  • Add Member record;
  • Delete Member record;
  • Edit Member record;
  • View Member details;
  • Update Member details;

Description of the New Site:

Here I want to establish a site where a person can get everything he needs on the agricultural side: cultivation, fishing, poultry, trees, animals, their diseases, timing, etc. Here a person can also become a member.

context diagram of the site

The entities of the system are:

  • Student
  • Teacher
  • Housewife
  • Farmer
  • Researcher
  • Member
  • Poultry Farmer
  • Interested Person etc.

High Level Implementation:

In this section I have produced-

Structural Design

  • Logical Design
  • Physical Design

Data Design


Check Email from Database Flowchart

email from data base flowchart

Physical Design:

Index Page Design:

index page design

Data Input Form

data input form

login and feedback form

Data Design:

In this section I have produced-

  • Data Dictionary;
  • Entity Relationship Diagram;

Data dictionary:

Table: Member Information

Field Type Format Key Extra Constraints
Id Int(10) 99999 PRI auto_increment Not duplicate, auto increment and primary key
first_name Varchar(60) xxxxxxxx     Must be valid and cannot be longer than the field size
middle_name Varchar(60) xxxxxxxx     Must be valid and cannot be longer than the field size
Last_name Varchar(60) xxxxxxxx     Must be valid and cannot be longer than the field size
Address Varchar(60) xxxxxxxx     Must be valid and cannot be longer than the field size
Phone Varchar(60) xxxxxxxx     Must be valid and cannot be longer than the field size
Gender Varchar(60) xxxxxxxx     Must be valid and cannot be longer than the field size
Ip Varchar(60) 99999     Must be valid and cannot be longer than the field size
City Varchar(60) xxxxxxxx     Must be valid and cannot be longer than the field size
Country Varchar(60) xxxxxxxx     Must be valid and cannot be longer than the field size
Post_code Varchar(60) xxxxxxxx     Must be valid and cannot be longer than the field size
Log_in Varchar(60) xxxxxxxx     Must be valid and cannot be longer than the field size
Email Varchar(60) xxxxxxxx Uni   Not duplicate, must be valid and not null
Password Varchar(60) xxxxxxxx     Must be valid and cannot be longer than the field size
Facebook_id Varchar(60) xxxxxxxx     Must be valid and cannot be longer than the field size
Activation_code Varchar(60) 99999     Must be valid and cannot be longer than the field size
Inactive Tinyint(10) 99999     Must be valid and cannot be longer than the field size
Created_on Datetime xxxxxxxx     Must be valid and cannot be longer than the field size

 Attributes: Member Information table

  • File Name         : Member Information.txt
  • File Type           : Binary
  • Record Name    : Member Information
  • Record Size       : (65 × 2) + 4 = 134 bytes

Functions: Member information. txt

  • Add record
  • Edit record
  • Delete record
  • List
  • Search
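The functions listed above (add, edit, delete, list, search) can be sketched against the member table. This is an illustrative sketch only: the actual project uses PHP and MySQL, so SQLite stands in here, only a subset of the fields from the data dictionary is shown, and the sample member data is invented.

```python
import sqlite3

# In-memory SQLite database stands in for the project's MySQL database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE member (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        first_name VARCHAR(60),
        email VARCHAR(60) UNIQUE NOT NULL,
        password VARCHAR(60),
        city VARCHAR(60)
    )
""")

def add_member(first_name, email, password, city):
    """'Add record' from the functions list above."""
    conn.execute(
        "INSERT INTO member (first_name, email, password, city) VALUES (?, ?, ?, ?)",
        (first_name, email, password, city),
    )

def search_members(city):
    """'Search' from the functions list above: find members by city."""
    cur = conn.execute(
        "SELECT first_name, email FROM member WHERE city = ?", (city,)
    )
    return cur.fetchall()

add_member("Rahim", "rahim@example.com", "secret", "Dhaka")
print(search_members("Dhaka"))
```

The UNIQUE NOT NULL constraint on email mirrors the "Uni, not null" constraint in the data dictionary; edit, delete, and list would be analogous UPDATE, DELETE, and unfiltered SELECT statements.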

Table: admin Information

Field Type Format Key Extra Constraints
Id Int(10) 9999999 PRI auto_increment Not Duplicate, auto increment and primary key
Name Varchar(60) xxxxxxxx     must be valid and    Cannot be more than size
login Varchar(60) xxxxxxxx Uni   must be valid and    Cannot be more than size
password Varchar(60) xxxxxxxx     must be valid and    Cannot be more than size

 Attributes: Admin Information table

  •      File Name         : Member Comments Information.txt
  •      File Type           : Binary
  •      Record Name    : Member Comments Information
  •      Record Size       : (65 × 2) + 4 = 134 bytes
  •      File size             :

Functions: Member Comments Information. txt

  •      Add record
  •      Edit record
  •      Delete record
  •      List

Entity Relationship Diagram:

entry diagram

Low Level Implementation

In this section I have produced-

  • Coding Standard;
  • Coding Sampling.

 Coding Standard:

  • Programming Language: I have used PHP and MySQL for this system development; this is a standard approach to system development.
  • Components: I have used many useful components of JavaScript, CSS, Ajax, HTML, etc.
  • Variables: I have used meaningful variable names throughout the system code.
  • Comments: I have written the necessary comments throughout the system code.

Coding Sampling:

For a coding sample, I have produced the main screen code below:


In this section I have produced-

  • Testing Plan;
  • Test Log Sample;
  • Analysis of Test results.

Test Plan

I have a plan to test the system properly. This plan contains seven steps:

  • Step 1: First, decide what I want to see as output;
  • Step 2: Determine what data should be input to get the required output;
  • Step 3: Try the input data;
  • Step 4: Check the result;
  • Step 5: Check that all the input forms are OK;
  • Step 6: If there is any error, find it and solve it;
  • Step 7: Show the testing status.
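The steps above can be sketched as a minimal, generic test loop. This is illustrative only: the real forms are PHP pages, so the function names and the toy email check below are assumptions, not the project's actual code.

```python
def run_test(case_input, expected, run):
    """Steps 3-7 of the plan: try the input, check the result, report status."""
    actual = run(case_input)
    return "PASS" if actual == expected else "FAIL"

# Toy stand-in for a form check: an email field must be non-empty and contain '@'.
def validate_email(email):
    return bool(email) and "@" in email

print(run_test("user@example.com", True, validate_email))
print(run_test("not-an-email", False, validate_email))
```

Each row of the test log corresponds to one (input, expected output) pair fed through this loop.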

Analysis of Test Result:

  • Testing 1: Save operation of Visitor Sign_up Form
  • Given input: Any row of data from test log.
  • Required Output: The form can save data in the Database.

 sign up

Final Documentation:

Critical Appraisal:

I have developed the site to help others, and I have tried my best to make it user- and visitor-friendly. But it has some critical limitations, which are described below:

Documentation: This documentation is very dense, so a user may not easily understand the development of this project, though it is not impossible.

Testing: Testing is very expensive, so it is not possible to test everything. The main parts have been tested.


I have made this site with the kind help of our instructor, our project coordinator, and those people who have always helped me with information about agriculture. I have tried my best to make this site easy and user-friendly. I hope that this site will give users the best help to do their jobs properly and will provide visitors with their essentials.

Sign_up new_mail_account


Noise Equivalent Count Rate Nuclear Camera

PET stands for Positron Emission Tomography. It is the latest and most powerful radiotracer imaging technique in medical science, used to diagnose various diseases such as tumors, cancer, and Parkinson’s disease. It is also used in monitoring the response after therapy, in the study of new pharmaceutical drugs, and in studying engineering processes.

A positron emission tomography (PET) scan produces a three-dimensional image of the functional processes of organs in the body. This functional imaging technique has been used for the last few decades in many clinical applications. It is also used in studying engineering processes and in the study of new pharmaceutical drugs.

This chapter outlines the various physical and instrumental features such as PET principles, PET radionuclides, beta decay, annihilation photons, interaction with matter, coincidence detection of annihilation photons, detector selection, acquisition mode, etc.

PET   Principles

In clinical applications, a very small amount of labeled compound (called radiopharmaceutical or radiotracer) is introduced into the patient usually by intravenous injection and after an appropriate uptake period, the concentration of tracer in tissue is measured by the scanner.

At some point in time, the isotope will decay with the emission of a positron. The proton-rich isotope achieves stability through positron decay by converting a proton to a neutron. During this decay process, the radionuclide emits a positron, which loses its kinetic energy through interactions with the surrounding atoms of the imaged tissue.

When the positron reaches thermal energy after traveling an extremely tortuous path of 3-5 mm, it encounters an electron from the surrounding atoms, annihilation occurs [Fig. 1.1], and a meta-stable intermediate species called positronium may form.


The two particles (electron and positron) combine and “annihilate” each other, resulting in the emission of two gamma rays of 511 keV each in opposite directions.

Thus, after annihilation, two photons having equal energies of 511 keV are generally produced and emitted nearly back-to-back. When the annihilation photons hit opposite detectors, they deposit energy through scatter and absorption within the detectors. The source (the nuclear decay point) is assumed to lie on the straight line (known as the line of response) joining the two detector positions. By detecting a large number of coincidence events, the distribution of the tracer can be determined. Two important fundamental degrading factors in resolution correspond to positron range and annihilation photon acollinearity.

Line of Response (LOR)

A line joining the centres of two opposite individual detectors is known as a line of response (LOR). It is the line linking the points of detection of the two gamma rays.

Since the detectors have a finite size, the LOR is in practice a tube of similar radius. Different LORs must traverse different thicknesses of tissue.

A PET scanner records counts as coincidence events (simultaneous detection of two annihilation gamma rays) between opposite pairs of crystals. Unscattered photon pairs are located for a specific LOR within a thin volume centred on the LOR. The shape of this volume is an elongated parallelepiped, called a tube of response (TOR). The TOR is a function of crystal size. Wide TORs result in lower-resolution reconstructions but with a lower noise level. On the other hand, thin TORs enable the reconstruction to recover higher frequencies, which comes with an increase in the noise level.
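The coincidence recording described above can be illustrated with a small sketch. The text only says the two gamma rays are detected "simultaneously"; real scanners use a finite timing window, so the 10 ns window and the event format below are assumptions for illustration.

```python
def find_coincidences(events, window_ns=10.0):
    """Pair single detection events into coincidences.

    events: list of (time_ns, detector_id) tuples.
    Two events on different detectors within window_ns of each other
    form one coincidence, i.e. one LOR between those detectors.
    """
    events = sorted(events)  # order by detection time
    pairs = []
    i = 0
    while i + 1 < len(events):
        t1, d1 = events[i]
        t2, d2 = events[i + 1]
        if t2 - t1 <= window_ns and d1 != d2:
            pairs.append((d1, d2))  # coincidence: one LOR
            i += 2
        else:
            i += 1  # unpaired "single"; discard and move on
    return pairs

# Detectors 3 and 17 fire 4 ns apart (a coincidence); detector 8 is a single.
print(find_coincidences([(0.0, 3), (4.0, 17), (500.0, 8)]))  # → [(3, 17)]
```

Events that fall outside the window are discarded as singles; in a real scanner the window width trades sensitivity against the rate of accidental coincidences.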

Effect of Positron Range

Before undergoing annihilation, the positron travels a finite distance. The spatial resolution of PET imaging is limited by the positron range of the isotope of interest, among other factors such as off-axis detector penetration and the linear and angular sampling used in the image reconstruction process. The positron range contributes an offset between the LOR of the two unscattered photons and the positron decay point. The range of positrons in tissue is an important limitation on the ultimate spatial resolution achievable in positron emission tomography.

Uncertainty in the localization of the decaying nucleus arises from the positron range, which increases with increasing initial positron energy. This uncertainty arises because we wish to determine the location of the positron decay point, not the annihilation point. The average distance of the LORs from the decay point due to the positron range in water/tissue is about 0.2 mm for positrons emitted from 18F, whereas this range increases to 1.2 mm for positrons emitted from 15O [11].
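Independent blur sources such as positron range are commonly combined in quadrature when estimating overall resolution. That quadrature rule is a standard approximation rather than something stated in the text above; the positron-range figures of 0.2 mm and 1.2 mm are taken from the paragraph above, while the additional ~1 mm blur term is an assumed placeholder for the other contributions.

```python
import math

def combined_fwhm_mm(*blurs_mm):
    """Combine independent blur contributions in quadrature (common approximation)."""
    return math.sqrt(sum(b * b for b in blurs_mm))

# Positron-range blur (mm) plus an assumed ~1 mm of other blur:
print(combined_fwhm_mm(0.2, 1.0))  # small range adds almost nothing
print(combined_fwhm_mm(1.2, 1.0))  # a larger range starts to dominate
```

Because the terms add quadratically, the 0.2 mm range of 18F is nearly negligible next to other blur sources, whereas a 1.2 mm range contributes substantially.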


Annihilation photon acollinearity is a fundamental but little-investigated problem in positron emission tomography (PET) [12]. The angle between the paths of the annihilation photons is not always exactly 180° but is typically around 179.5°. This angular deviation of about 0.5° [13] causes a positional error in locating the true annihilation point [Fig. 1.4]. At the time of annihilation, the positron and electron have not come to a complete rest; they have some non-zero momentum before undergoing annihilation. To conserve the momentum of the positron-electron pair, the annihilation photons are not emitted exactly in opposite directions.

The angular deviation causes a separation (Dx) of the LOR from the true annihilation point, which blurs the PET images. This blurring is an increasing function of the detector ring diameter (D), and its effect is greatest at the centre of the field of view. The blurring coefficient relating Dx to the ring diameter (D) in calculations of the PET spatial resolution was experimentally determined: the value of Dx was estimated to be (0.00243 ± 0.00014) × D for the human subject [12]. The acollinearity introduces an uncertainty into the image resolution of about 1 mm for a 50 cm and about 2 mm for a 100 cm inner-ring-diameter PET tomograph.
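The experimentally determined coefficient quoted above, Dx ≈ 0.00243 × D, gives a quick way to estimate the acollinearity blur for a given ring diameter; a minimal sketch:

```python
def acollinearity_blur_mm(ring_diameter_mm, coeff=0.00243):
    """Acollinearity blur Dx = coeff * D, with coeff = 0.00243 from [12]."""
    return coeff * ring_diameter_mm

print(acollinearity_blur_mm(500))   # 50 cm ring: about 1.2 mm
print(acollinearity_blur_mm(1000))  # 100 cm ring: about 2.4 mm
```

These values match the roughly 1 mm and 2 mm uncertainties cited above, and show why small-bore (e.g. animal) scanners suffer less from acollinearity than whole-body rings.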

PET Radionuclides

A nuclide is a general term applicable to all atomic forms of an element. Nuclides are characterized by the number of protons and neutrons in the nucleus, as well as by the amount of energy contained within the atom. A radionuclide is an unstable form of a nuclide. They may occur naturally, but can also be artificially produced.

The PET technique has developed on the basis of positron-emitting radionuclides, usually produced in a cyclotron. Radionuclides used in PET scanning are typically isotopes with short half-lives, such as carbon-11 (T1/2 ≈ 20 min), nitrogen-13 (T1/2 ≈ 10 min), oxygen-15 (T1/2 ≈ 2 min), and fluorine-18 (T1/2 ≈ 110 min). Positron emitters such as 18F, 15O, 13N and 11C can be attached to biological molecules of interest and injected intravenously. That is, the radionuclides are incorporated either into compounds normally used by the body, such as glucose (or glucose analogues), water, or ammonia, or into molecules that bind to receptors or other sites of drug action.
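The short half-lives quoted above directly constrain how long a tracer remains usable. A minimal sketch of the exponential decay law, N(t)/N0 = 2^(-t/T1/2), using the half-lives from the text:

```python
def remaining_fraction(t_min, half_life_min):
    """Fraction of the initial activity remaining after t_min minutes."""
    return 0.5 ** (t_min / half_life_min)

# Half-lives (minutes) from the text
half_lives = {"C-11": 20, "N-13": 10, "O-15": 2, "F-18": 110}

# Activity remaining one hour after production
for isotope, t_half in half_lives.items():
    print(isotope, round(remaining_fraction(60, t_half), 4))
```

This is one reason 18F dominates clinical use: an hour after production, roughly 69% of its activity remains, while 15O (with its 2-minute half-life) is essentially gone, forcing an on-site cyclotron for the shorter-lived isotopes.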

Currently, 18F is widely used in PET as the labeled compound FDG (fluorodeoxyglucose) [15]. Glucose has an increased uptake in malignant cells. FDG is a biologically active molecule that does not affect normal biological processes in the body when administered in tracer quantities.

Table 1.1: The mean ranges and energies of positrons from the major positron-emitting radionuclides.

68Ga is the daughter nuclide of 68Ge and is often eluted from a 68Ge/68Ga generator system; 68Ge has a half-life of 270 days.

Such labelled compounds are known as radiotracers. It is important to recognize that PET technology can be used to trace the biological pathway of any compound in living humans (and many other species as well), provided it can be radiolabeled with a PET isotope. Some commonly used PET isotopes and their associated properties are shown in Table 1.1, and the process of positron emission from unstable radionuclides is described in the following three sections.

Nuclear Decay

An atom consists of a nucleus (made of protons and neutrons) and a cloud of electrons around the nucleus. A proton has a positive electrical charge, an electron has an equal negative charge, and a neutron is neutral. Protons and neutrons have almost exactly the same mass (weight). An atom generally has an equal number of protons and electrons, and is therefore neutral in charge. The electrons determine the chemical properties and color of the atom. The nucleus determines the mass of the atom, and whether or not it is radioactive.

Let’s take a normal nucleus, carbon-12. This nucleus is made up of 6 protons and 6 neutrons. Together they give this nucleus a mass of 12. Carbon-12 is abbreviated as 12C. Another kind of carbon nucleus is carbon-14 (14C). This one has 6 protons and 8 neutrons. Since an atom normally has an equal number of protons and electrons, these two nuclei have the same number of electrons, and have the same chemical properties (they both act like carbon). 14C has too many neutrons. An excess of neutrons or protons in a nucleus causes nuclear instability.

This nucleus tends toward stability through the emission of radiation; i.e., 14C is radioactive and it decays. There are three main ways that a nucleus can decay. It can lose an alpha particle (a helium nucleus, 4He), losing two protons and two neutrons. It can lose a negative beta particle, which is the same as an electron, when a neutron converts into a proton. Or it can lose a positive beta particle, also called a positron, which is the same as a positive electron, when a proton converts into a neutron.

The nucleus tends to stability through a transition to the lowest possible state for its nucleon number, and the process is known as nuclear beta decay. Beta decay is relevant to the PET technology, and some of its associated features are described below.

Beta Decay

Beta decay is a type of radioactive decay in which a beta particle (an electron or a positron) is emitted from an atom. There are two types of beta decay, positive beta decay (β+) and negative beta decay (β−); both are examples of the weak interaction [16]. Beta decay does not change the nucleon number but changes the charge only, as detailed in Eqs. 1.1 and 1.2.

β− decay process

The process in which a neutron is converted into a proton and an electron, with the emission of an antineutrino, is called the β− decay process:

n → p + e− + ν̄   (1.1)

β− decay generally occurs in neutron-rich nuclei.

β+ decay process

The process in which a proton is converted into a neutron and a positron, with the emission of a neutrino, is called the β+ decay process:

p → n + e+ + ν   (1.2)

This process is also known as “positron emission” decay. In both cases the transformation of a neutron into a proton, or vice versa, proceeds through what is called the “weak nuclear force” or “weak interaction”.


Positron Emission

Positron emission (a type of beta decay, i.e. β+ decay) occurs when the neutron-to-proton ratio in the nucleus is too small, causing nuclear instability. Positron emission, or beta-plus decay (β+ decay), is a particular type of radioactive decay in which a proton is converted to a neutron, releasing a positron and a neutrino. As an example, the beta-plus decay of carbon-11 to boron-11 emits a positron and a neutrino:

11C → 11B + e+ + ν

In β+ decay, a proton is converted into a neutron via the weak nuclear force. Beta emission is accompanied by the emission of a neutrino, which shares the missing energy and momentum of the decay.

The neutrino is an electrically neutral particle having no appreciable mass, and is produced in some nuclear reactions such as beta decay, but it is very hard to detect. This light particle interacts with other particles very weakly, and can easily pass through matter without any appreciable interaction. The neutrino is not directly relevant to PET imaging, but its presence in the positron decay makes the energy of the positron variable instead of a fixed energy for a particular isotope.

 In beta decay the change in binding energy appears as the mass energy and kinetic energy of the beta particle, the energy of the neutrino and the kinetic energy of the recoiling daughter nucleus. The energy of an emitted beta particle from a particular decay can take on a range of values because the energy can be shared in many ways among the three particles while still obeying energy and momentum conservation .

 In principle, positron decay is always accompanied by electron capture. The nucleus captures an electron which basically converts a proton to a neutron.  Electron capture (EC) is a decay mode for proton-rich isotopes and often occurs when there is not enough energy to emit a positron.

The energy emitted depends on the isotope that is decaying; e.g. 0.96 MeV applies only to the decay of carbon-11. Isotopes which would increase in mass under the conversion of a proton to a neutron, or which would decrease in mass by less than 2me, do not spontaneously decay by positron emission.

Nuclei which decay by positron emission may also decay by electron capture. Electron capture is energetically favored over positron emission by 2mec2 = 1.022 MeV; when the decay energy is below this value, positron emission is forbidden and the only decay mode is electron capture. In the electron capture process, the nuclide changes into a new element because a proton converts into a neutron; for example, 83Rb decays to 83Kr only by electron capture. A proton in the nucleus captures an electron (usually from the K- or L-shell), forming a neutron and a neutrino.

When an inner-shell electron is captured by the nucleus, the atom loses an electron and is left in an excited state. To fill the vacancy, an electron in an outer shell falls into the inner-shell gap, releasing energy as an x-ray.

Branching Ratio or Branching Fraction

In general, the branching ratio (BR) for a particular decay mode is defined as the ratio of the number of atoms decaying by that decay mode to the number decaying in total:

BR1 = K1/K,  BR2 = K2/K

where K = K1 + K2 is the total decay constant.

In particle physics and nuclear physics, the branching fraction for a decay is the fraction of particles which decay by an individual decay mode with respect to the total number of particles which decay.

It is equal to the ratio of the partial decay constant to the overall decay constant. Sometimes a partial half-life is given, but this term is misleading: due to competing modes, it is not true that half of the particles will decay through a particular decay mode after its partial half-life. The partial half-life is merely an alternate way to specify the partial decay constant λi, the two being related through:

T(1/2,i) = ln 2 / λi = T1/2 / BRi

For example, for spontaneous decays of 132Cs, 98.1% are electron capture or β+ decays, and 1.9% are β− decays.

A PET scanner detects only 511 keV photons. Different isotopes used in PET have a variable fraction of other decay events, which are not detected by PET. This has to be taken into account when calibrating the scanner and the blood samples, by dividing the measured values by the branching ratio.
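As a minimal sketch (in Python, purely illustrative), the two relations above can be combined: the partial half-life is the total half-life divided by the branching ratio, and a measured activity is corrected by dividing by the β+ branching ratio. The numerical values used here (an approximate F-18 half-life of 109.8 min and β+ branching ratio of 0.967) are assumed literature values, not taken from this thesis.

```python
# Branching-ratio arithmetic for PET isotopes (illustrative values).
# Partial decay constant: lambda_i = BR_i * lambda_total, so the
# "partial half-life" is T_half / BR_i; measured activity is divided
# by the beta+ branching ratio during scanner/blood-sample calibration.

def partial_half_life(total_half_life, branching_ratio):
    """Partial half-life for one decay mode: T_half / BR."""
    return total_half_life / branching_ratio

def correct_for_branching(measured_value, branching_ratio):
    """Divide a measured activity/count value by the beta+ branching ratio."""
    return measured_value / branching_ratio

# Assumed approximate values for F-18: T_half ~ 109.8 min, BR(beta+) ~ 0.967.
t_half_f18 = 109.8
print(partial_half_life(t_half_f18, 0.967))  # partial beta+ half-life, min
print(correct_for_branching(1000.0, 0.967))  # branching-corrected counts
```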

Branching Ratios for PET Isotopes


However, in vendor data a branching factor of 0.174 is used.

Interaction of Photons with Matter

When a highly penetrating photon interacts with matter, the possible interaction processes that occur, depending on the energy of the photon, are:

  1. Coherent (Rayleigh) scattering (predominates in human tissue, ~< 50 keV);
  2. Photoelectric effect (< 100 keV);
  3. Compton effect (~100 keV to ~2 MeV);
  4. Pair production (≥ 1.022 MeV);
  5. Triplet production (≥ 2.044 MeV); and
  6. Photonuclear production (≥ 10 MeV).

In the PET detection process, annihilation photons are completely absorbed by the detectors, mainly through Compton scattering followed by photoelectric absorption. The mechanisms of these interactions with matter are briefly outlined below.

Photoelectric Absorption

The absorption of an X-ray or γ-ray photon by an atom of the absorbing material causes the emission of an electron from one of its bound shells. This process is known as photoelectric absorption.

In this process, the incident photon completely disappears and its energy is transferred to the orbital electron; the electron gains some kinetic energy because the photon energy is higher than the electron binding energy. Most of the energy is required to overcome the binding energy of the orbital electron, and the remainder is imparted to the electron upon its ejection.

The ejected free electron is called a photoelectron; it travels a short distance in tissue and is absorbed. The atom's resultant charge is +1 due to the electron vacancy.

Subsequently, electrons from outer orbitals drop inward to fill the vacancy, such as from the L-shell to the K-shell and from the M-shell to the L-shell, etc.

As electrons drop into inner shells, they give up energy in the form of an x-ray photon called a characteristic x-ray.

The energy of this photon is the difference in binding energies between the upper (higher-energy) shell and the lower shell. If this excess energy is greater than or equal to the binding energy of another electron, it can be used to free that second electron. The secondary electron thus ejected is called an Auger electron.

The three products of the P.E. effect are:

          1) A negative ion (the photoelectron),

          2) Characteristic radiation, and

          3) An Auger electron; the atom is left as a positive ion (deficient by one electron).

There are four simple rules governing the probability of the photoelectric effect occurring:

The incident photon must have sufficient energy to overcome the binding energy of the electron. For example, if a K-shell electron has a binding energy of 70 keV and the incident photon has an energy of 68.5 keV, it cannot eject that electron from its orbital.

A photoelectric interaction is most likely to happen when the energy of the incident photon exceeds, but is relatively close to, the binding energy of the electron it strikes. Using our example of a K-shell electron with a binding energy of 70 keV, a photoelectric interaction is more likely to occur when the incident photon energy is just above 70 keV than if it were 120 keV. This is because the photoelectric effect is inversely proportional to approximately the third power of the photon energy.

The probability of photoelectric absorption increases sharply with increasing Z (atomic number of the target material) and decreases sharply with increasing incident photon energy.

The more tightly an electron is bound to its atom, the more likely it is to be involved in a P.E. interaction. Atoms with high atomic number bind their electrons more tightly than atoms with low Z, so these high-Z elements are more likely to undergo P.E. reactions. The probability of the P.E. effect is nearly proportional to the third power of the atomic number.

The fact that the P.E. interaction probability varies as 1/E3 explains why image contrast decreases when higher x-ray energies are used in imaging. At energies < 50 keV, the P.E. effect plays an important role in imaging soft tissue. It can be used to amplify differences in attenuation between tissues with slightly different atomic numbers, improving image contrast, e.g.: 1) different targets and filters in mammography, and 2) the use of phosphors containing rare-earth elements (lanthanum and gadolinium) in intensifying screens.

The ejection of the electron occurs closer to the nucleus (usually from the K-shell) because the nucleus is involved in conserving momentum. The benefit of photoelectric absorption in x-ray transmission imaging is that there are no additional non-primary photons to degrade the image.
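The Z-cubed and inverse-energy-cubed scaling rules described above can be sketched as follows (Python, illustrative only; the proportionality constant is unknown here, so only ratios of the returned values are meaningful):

```python
# Approximate photoelectric scaling law: probability ~ Z^3 / E^3.
# Only relative comparisons are meaningful in this sketch.

def pe_relative_probability(Z, E_keV):
    """Relative (unnormalized) photoelectric interaction probability."""
    return Z**3 / E_keV**3

# Halving the photon energy raises the relative probability eightfold,
# which is why lower-energy beams give higher soft-tissue contrast:
ratio = pe_relative_probability(53, 50) / pe_relative_probability(53, 100)
print(ratio)  # ~8.0
```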

Compton Scattering

Compton scattering is an interaction of a high-energy incident photon with a loosely bound orbital electron. The binding potential of the loosely bound electron is small compared to the incident photon energy.

An incident photon with relatively high energy strikes a free outer-shell electron, ejecting it from its orbit (an inelastic interaction). The photon is deflected by the electron so that it travels in a new direction as scatter radiation. In this interaction, the photon transfers some of its energy to the electron and deviates from its initial path, and the electron (known as a Compton recoil electron) becomes completely free from the atom.

The kinetic energy of the "recoil" electron equals the energy lost by the photon, under the assumption that the binding energy of the electron is negligible. After the interaction, the photon energy is reduced and can be calculated by the well-known Compton formula:

E' = E / [1 + (E/mec2)(1 − cos θ)]

where E is the initial photon energy, E' the scattered photon energy, θ the scattering angle, and mec2 = 511 keV the electron rest energy. Two factors determine the amount of energy that the photon retains: its initial energy and its angle of deflection.

Compton scattering is a type of scattering that X-rays and gamma rays undergo in matter. The inelastic scattering of photons in matter results in a decrease in energy (increase in wavelength) of an X-ray or gamma-ray photon, called the Compton effect. The change of wavelength is given by the following equation:

Δλ = (h/mec)(1 − cos θ)

The interaction between electrons and high-energy photons (comparable to the rest energy of the electron, 511 keV) results in the electron being given part of the energy (making it recoil), and a photon containing the remaining energy being emitted in a different direction from the original, so that the overall momentum of the system is conserved. If the photon still has enough energy left, the process may be repeated. Almost all scattered radiation in X- and γ-rays between 30 keV and 30 MeV interacts in soft tissue by Compton scattering.

Qualitatively, this equation shows that the scattered x-ray energy becomes smaller as the scattering angle increases, and at higher incident energies this effect is amplified. Compton-scattered x-rays can deleteriously affect image quality by reducing contrast and are implicated in environmental radiation protection concerns.
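The angular dependence of the Compton formula can be sketched numerically (Python, illustrative); this is the standard relation E' = E / (1 + (E/mec²)(1 − cos θ)), evaluated for 511 keV annihilation photons:

```python
import math

# Compton formula: E' = E / (1 + (E / m_e c^2) * (1 - cos(theta))),
# with the electron rest energy m_e c^2 = 511 keV.

ME_C2_KEV = 511.0

def compton_scattered_energy(E_keV, theta_deg):
    """Energy (keV) of the scattered photon after Compton scattering."""
    theta = math.radians(theta_deg)
    return E_keV / (1.0 + (E_keV / ME_C2_KEV) * (1.0 - math.cos(theta)))

# A 511 keV photon scattered through 30 degrees keeps ~88% of its energy,
# which is why small-angle scatter is hard to reject by energy windowing:
print(compton_scattered_energy(511.0, 30.0))   # ~450 keV
print(compton_scattered_energy(511.0, 180.0))  # ~170 keV (backscatter)
```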

Coincidence Events in PET Detection Process

If two photons are detected within the resolving time τ of the system, the scanner records the event, which is referred to as a coincidence event [Fig. 1.6]. Since the order of the photons is irrelevant, this gives a coincidence timing window of length 2τ.

In a PET camera, each detector generates a timed pulse when it registers an incident photon. These pulses are then combined in coincidence circuitry, and if the pulses fall within a short time window, they are deemed to be coincident. A conceptualized diagram of this process is shown in the figure.

All of the coincidence events detected during an imaging period are recorded by the PET computer system as a raw data set; the coincidence data are then reconstructed by a computer to produce cross-sectional images in the axial and coronal planes.

Four types of coincidence events are observed in PET, as described below:

True Events

True coincidences occur when both photons from an annihilation event are detected by detectors in coincidence, neither photon undergoes any form of interaction prior to detection, and no other event is detected within the coincidence time-window.

Equivalently, a true event corresponds to detecting two photons arising from an annihilation event which have not appreciably interacted with the imaging object before reaching the detectors.

A true coincidence is an event that derives from a single positron–electron annihilation. The two annihilation photons both reach detectors on opposing sides of the tomograph without interacting significantly with the surrounding atoms, and they are recorded within the coincidence timing window.

The sensitivity of a tomograph is determined by a combination of the radius of the detector ring, the axial length of the active volume for acquisition, the total axial length of the tomograph, the stopping power of the scintillation detector elements, the packing fraction of the detectors, and other operator-dependent settings (e.g., the energy window). However, in general terms the overall sensitivity for true events scales approximately as

T ∝ Z2/D

where Z is the axial length of the acquisition volume and D is the radius of the ring.

Random Events

A random (or accidental) coincidence occurs when two nuclei decay at approximately the same time. This event arises mistakenly from two separate positron annihilations. After annihilation of both positrons, four photons are emitted. Two of these photons, one from each annihilation, are counted within the timing window and are considered to have come from the same positron, while the other two are lost. Thus two photons from two unrelated decays are detected within the scanner resolving time, and the scanner produces a false coincidence event.

In 2D mode, coincidences are only recorded between detectors within the same ring or very closely neighboring rings. Random events are initially regarded as valid, prompt events, but are spatially uncorrelated with the distribution of tracer. The random rate is clearly a function of the number of disintegrations per second, and the random event count rate (Rab) between two detectors a and b is given by the following equation:

Rab = 2τ Na Nb

where Na and Nb are the singles event rates incident upon the detectors a and b, and 2τ is the coincidence window width. Usually Na = Nb, so that the random event rate increases approximately proportionally to N2.

This relation is true provided that the singles rate is much larger than the rate of coincidence events, and that the singles rates are small compared to the reciprocal of the coincidence resolving time τ, so that dead-time effects can be ignored.
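The random-rate relation above can be sketched directly (Python; the window width and singles rates are made-up illustrative values), showing the quadratic growth with singles rate:

```python
# Random coincidence rate between detectors a and b:
# R_ab = (2*tau) * N_a * N_b, where 2*tau is the coincidence window
# width (s) and N_a, N_b are the singles rates (counts/s).

def random_rate(singles_a, singles_b, window_2tau_s):
    """Random coincidence rate (counts/s) for one detector pair."""
    return window_2tau_s * singles_a * singles_b

# With an assumed 12 ns window (2*tau) and 1e5 counts/s singles on each
# detector, doubling the singles rate quadruples the randoms:
print(random_rate(1e5, 1e5, 12e-9))  # ~120 counts/s
print(random_rate(2e5, 2e5, 12e-9))  # ~480 counts/s
```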

There are two common methods for removing random events:

  • estimating the random event rate from measurements of the single event rates using the above equation, or
  • employing a delayed coincidence timing window.

The number of random coincidences detected depends on the volume and attenuation characteristics of the object being imaged, and on the geometry of the camera. The distribution of random coincidences is fairly uniform across the FOV and will cause isotope concentrations to be overestimated if not corrected for; random coincidences also add statistical noise to the data.

The random coincidence contribution may also be estimated by introducing a delay in one of the coincidence channels. In this technique, timing signals from one detector are delayed by a time significantly greater than the coincidence resolving time of the circuitry. Activity outside the field of view (FOV) can also give rise to random coincidences, so the random rate can be reduced by shielding out of field activity and by reducing the resolving time of the scanner.

Scattered Events

A scattered coincidence is one in which at least one of the detected photons has undergone at least one Compton scattering event prior to detection. Scattered events arise when one or both of the photons from a single positron annihilation detected within the coincidence timing window have undergone a Compton interaction.

In practice, most scattered photons are scattered out of the field of view and are never detected. Those annihilations for which one or both gamma rays are scattered but still detected are referred to as scattered events.

Since the direction of the photon is changed during the Compton scattering process, it is highly likely that the resulting coincidence event will be assigned to the wrong LOR. The LOR assigned to the event is uncorrelated with the original annihilation event, i.e., an incorrect LOR is formed because the photons' paths are no longer collinear.

Scattered coincidences add a background to the true coincidence distribution which changes slowly with position, decreasing contrast and causing the isotope concentrations to be overestimated. They also add statistical noise to the signal. The number of scattered events detected depends on the volume and attenuation characteristics of the object being imaged, and on the geometry of the camera. The contribution of scattered events is described by the scatter fraction:

SF = S / (S + T)

where S and T are the scattered and true coincidence count rates, respectively.
As in Fig. 1.6.3, the event is incorrectly positioned on the detectors, and hence the effect is to add a broad background to the images. This causes errors in the radiotracer concentration by misplacing events during reconstruction. Therefore, scattered coincidences degrade both image quality (due to loss of contrast) and quantitative accuracy.

Scattering in PET can arise from three major sources: the object itself, the detector, and the gantry and surrounding environment. It also depends on other factors such as object size, density, acceptance angle, energy discriminator settings, radiotracer distribution, etc. Scattered events can be reduced by using inter-plane septa and by applying a simple energy threshold.
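The contribution of scatter is commonly quantified by the scatter fraction, SF = S / (S + T), where S and T are the scattered and true coincidence count rates. A minimal sketch (Python, with made-up count rates):

```python
# Scatter fraction: the proportion of scattered events among all
# accepted (scattered + true) coincidences.

def scatter_fraction(scattered_rate, true_rate):
    """SF = S / (S + T), dimensionless."""
    return scattered_rate / (scattered_rate + true_rate)

# Illustrative: 30 kcps scattered against 70 kcps true gives SF = 0.30.
print(scatter_fraction(30e3, 70e3))  # 0.3
```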

Multiple Events 

These are mainly triple-detection events. Multiple (or triple) events are similar to random events, except that three events from two annihilations are detected within the coincidence timing window. Due to the ambiguity in deciding which pair of events arises from the same annihilation, the event is disregarded. Again, the multiple-event detection rate is a function of the count rate.

Multiple coincidences occur when more than two photons are detected in different detectors within the coincidence resolving time. In this situation, it is not possible to determine the LOR to which the event should be assigned, and the event is rejected. Multiple coincidences can also cause event mis-positioning.

Historical Background of PET

The idea of PET technology first evolved when the nature of the positron and of positron decay became known. The positron was predicted theoretically by Paul A.M. Dirac and first observed experimentally by Carl Anderson in 1932. For their contributions to the discovery of the positron, they won Nobel Prizes.

PET is a relatively recent technology, but the principle has been understood for half a century. Many scientists from various disciplines have been involved in the development of PET technology for more than 50 years. The first application of PET technology in medicine was made at Massachusetts General Hospital (MGH) in 1951, using a simple brain probe and two opposed NaI(Tl) detectors.

Image reconstruction techniques for single-photon tomography were developed in the early 1960s, and about a decade later Chesler of the MGH physics group developed the filtered back projection (FBP) technique. The first commercial PET scanners were developed in the late 1960s and used analog electronics to generate tomographic (sliced) images.

In 1961, James Robertson and his associates at Brookhaven National Laboratory built the first single-plane PET scanner, nicknamed the "head-shrinker".

The RAH (Royal Adelaide Hospital) investigated this technology in 1968 for bone scanning, but found it too slow and inefficient for clinical use, and it was difficult to source the isotope.

In the 1970s, PET scanning was formally introduced to the medical community, and it soon became clear to many of those involved in PET development that a circular or cylindrical array of detectors was the logical next step in PET instrumentation; more sensitive detectors and tomographic capabilities began to appear. The first commercial PET scanner, the ECAT II, introduced in the late 1970s, was capable of brain imaging and could accommodate the torso of a narrow patient.

Although many investigators took this approach, James Robertson and Z.H. Cho were the first to propose a ring system that has become the prototype of the current shape of PET. At that time it was seen as an exciting new research modality that opened doors through which medical researchers could watch, study, and understand the biology of human disease. These scanners were still limited to single regions, but improvements continued, with better   resolution, and  movement from the research area to clinical use.

The first ring tomograph was built at Brookhaven National Laboratory in 1973 (Robertson JS 1973), and Michael E. Phelps built his first PET tomograph, known as PETT I, at Washington University in the same year. These first two efforts were unsuccessful in producing proper reconstructed images due to the lack of various correction techniques. Later on, Phelps and Hoffman presented their design of a hexagonal array of 24 NaI(Tl) detectors with some correction techniques and image reconstruction algorithms.

In 1973–74, Phelps and his team built a scanner with a diameter of 50 cm which had 24 NaI(Tl) detectors, known as PETT II. In late 1974, Mike Phelps and Ed Hoffman at Washington University built another scanner with the same diameter for human studies, known as PETT III, using 48 NaI(Tl) detectors. The scanner provided all the correction techniques, an image reconstruction algorithm, and a dedicated computer with all of these capabilities including gantry movement. After the development of PETT III, the first commercial PET scanner was constructed, named ECAT, using 96 NaI(Tl) detectors with a diameter of 3705 mm, and was first delivered to the University of California, Los Angeles in 1976.

In 1976, the radiopharmaceutical fluorine-18-2-fluoro-2-deoxyglucose (FDG), a marker of sugar metabolism with a half-life of 110 minutes, enabled tracer doses to be administered safely to the patient with low radiation exposure. The compound was first administered to two normal human volunteers by Abass Alavi in August 1976 at the University of Pennsylvania.

Brain images obtained with an ordinary (non-PET) nuclear scanner demonstrated the concentration of FDG in that organ. Later, the substance was used in dedicated positron tomographic scanners to yield the modern procedure.

During the 1980s the technology that underlies PET advanced greatly. Commercial PET scanners were developed with more precise resolution and images. As a result, many of the steps required for producing a PET scan became automated and could be performed by a trained technician and experienced physician, thereby reducing the cost and complexity of the procedure. Smaller, self-shielded cyclotrons were developed, making it possible to install cyclotrons at more locations.

NaI(Tl) is still widely used in nuclear medicine gamma cameras as the standard scintillator to detect the 140 keV gamma photons from 99mTc decay. For the detection of 511 keV annihilation photons, however, NaI(Tl) has some disadvantages due to its low stopping power, and manufacturers face construction problems due to its hygroscopic nature. Scintillators with higher density and hence greater stopping power were investigated, including bismuth germanate (BGO), gadolinium oxyorthosilicate (GSO), and barium fluoride (BaF2), among others. To overcome the problem, BGO, which has high stopping power and hence high detection efficiency for 511 keV photons, was first introduced as the scintillator of choice for PET. The evolution of BGO for use in PET was reported in the late 1970s; a commercial PET brain tomograph using a high-density scintillator (BGO), with at least a tentative intended clinical market niche, was introduced in 1978. Commercial BGO block-based PET scanners have been manufactured since 1981. The recent advance of new, faster scintillators such as GSO and LSO blocks, faster electronics, and statistically based reconstruction algorithms has significantly improved the performance of PET scanners for clinical studies.

The recent development of combined PET/CT scanners providing functional and anatomical information has further reduced the lengthy scanning time by avoiding the need for a transmission  scan.

Over the last several years, the major advance in this technology has been the combining of a CT scanner and a PET scanner in one device. The modern PET/CT scanner allows a study to be done in a shorter amount of time while providing more diagnostic information. By the mid-1990s, PET had become an important diagnostic tool. In the early 1990s, a new generation of full-ring BGO commercial tomographs was introduced. Australia established its first PET facilities, including cyclotrons, in 1992 at the Royal Prince Alfred Hospital, Sydney, and the Austin Hospital, Melbourne.

The Royal Adelaide Hospital's facility is the sixth dedicated PET facility in Australia and saw its first PET patient in September 2000. The PET/CT scanner, attributed to Dr David Townsend and Dr Nutt, was named by TIME Magazine as the medical invention of the year in 2000.

In March 2005 a PET scanner with integral CT was installed at the RAH. The Philips Gemini PET/CT combines a 16-slice computerised tomography scanner with a high-resolution PET scanner to allow image fusion and improved localization of lesions. PET and PET/CT are widely available today. The technology is robust and provides high-quality images. Some of the earlier roadblocks to having or using a PET or PET/CT device, such as the availability of particular radiopharmaceuticals, are no longer present.

Importance of PET Scan

A PET scan is an integral part of the diagnosis, management, and treatment of serious disease. A single PET scan can provide information that once would have required many medical studies, and it can do so without the surgery or other invasive procedures that might otherwise have been required.

PET scans often reveal disease before it can be seen with other tests. Often, in addition to imaging the disease, it can provide information used to determine the most promising treatment methods. PET scans are also used to evaluate how well treatments are working and can often show significant changes far sooner than other tests.

Scintillation Detectors in PET

Scintillation detectors are the most common and successful mode for detection of 511 keV photons in PET imaging due to their good stopping efficiency and energy resolution. These detectors consist of an appropriate choice of crystal (scintillator) coupled to a photo-detector for detection of the visible light. This process is outlined in further detail in the next two sections:

  1. Scintillation Process and
  2. Crystals Used in PET.

Scintillation Process

The electronic energy states of an isolated atom consist of discrete levels as given by the Schrödinger equation. In a crystal lattice, the outer levels are perturbed by mutual interactions between the atoms or ions, and so the levels become broadened into a series of allowed bands. The bands within this series are separated from each other by forbidden bands. Electrons are not allowed to fill any of these forbidden bands. The last filled band is labelled the valence band, while the first unfilled band is called the conduction band. The energy gap, Eg, between these two bands is a few electron volts in magnitude.

Electrons in the valence band can absorb energy by the interaction of the photoelectron or the Compton scatter electron with an atom, and get excited into the conduction band. Since this is not the ground state, the electron de-excites by releasing scintillation photons and returns to its ground state.

Normally, the value of Eg is such that the scintillation is in the ultraviolet range. By adding impurities to a pure crystal, such as adding thallium to pure NaI (at a concentration of ~1%), the band structure can be modified to produce energy levels in the prior forbidden region. Adding an impurity or an activator raises the ground state of the electrons present at the impurity sites to slightly above the valence band, and also produces excited states that are slightly lower than the conduction band.

Keeping the amount of activator low also minimizes the self-absorption of the scintillation photons. The scintillation process now results in the emission of visible light that can be detected by an appropriate photo-detector at room temperature. Such a scintillation process is often referred to as luminescence. The scintillation photons produced by luminescence are emitted isotropically from the point of interaction. For thallium-activated sodium iodide (NaI(Tl)), the wavelength of the maximum scintillation emission is 415 nm, and the photon emission rate has an exponential distribution with a decay time of 230 ns. Sometimes the excited electron may undergo a radiation-less transition to the ground state. No scintillation photons are emitted here and  the process is called quenching.

Crystals Used in PET

There are various types of detectors, such as:

  1. Sodium iodide doped with thallium (NaI(Tl)),
  2. Bismuth germanate, Bi4Ge3O12 (BGO),
  3. Lutetium oxyorthosilicate doped with cerium, Lu2SiO5:Ce (LSO),
  4. Yttrium oxyorthosilicate doped with cerium, Y2SiO5:Ce (YSO),
  5. Gadolinium oxyorthosilicate doped with cerium, Gd2SiO5:Ce (GSO), and
  6. Barium fluoride (BaF2).

There are four main properties of a scintillator which are crucial for its application in a PET detector. They are:

  1. The stopping power for 511 keV photons,
  2. Light output,
  3. Signal decay time, and
  4. The intrinsic energy resolution.

 The stopping power of a scintillator is characterized by the mean distance (attenuation length = 1/μ) travelled by the photon before it deposits its energy within the crystal.

A short attenuation length provides maximum efficiency in counting 511 keV photons. A scintillator with high effective atomic number (Zeff) and density (ρ) provides increased stopping power. High Z is good because it increases the probability of photoelectric interactions within the crystal, absorbing all the energy. Bismuth germanate (BGO) has been the PET detector of choice for the last few decades because it possesses a relatively high Zeff and density, and hence high stopping power. Approximately 95% of the annihilation photons undergo interaction within a 3-cm-thick BGO block detector, whereas only 36% of the photons undergo interaction within a 3-cm-thick NaI detector.
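The interaction fraction quoted above follows from simple exponential attenuation: a crystal of thickness x stops the fraction 1 − exp(−x/L), where L = 1/μ is the attenuation length. A minimal sketch (Python); the BGO attenuation length of ~1.04 cm at 511 keV is an assumed literature value, not taken from this thesis:

```python
import math

# Fraction of 511 keV photons interacting within a crystal of thickness
# x (cm), given its attenuation length L = 1/mu (cm): 1 - exp(-x / L).

def interaction_fraction(thickness_cm, attenuation_length_cm):
    """Fraction of incident photons that interact within the crystal."""
    return 1.0 - math.exp(-thickness_cm / attenuation_length_cm)

# A 3-cm BGO block (assumed L ~ 1.04 cm at 511 keV) stops roughly 94-95%
# of incident annihilation photons, consistent with the text above:
print(interaction_fraction(3.0, 1.04))  # ~0.944
```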

The detector should be capable of producing high light output so as to have better energy resolution (ΔE/E). Better energy resolution gives a sharper photopeak and hence increases the ability to discriminate against scattered events. However, in practice it is usually impossible to identify small-angle scatters in this way: in scattering through 30°, a 511 keV photon loses just 12% of its energy, so it is not possible to discriminate scatter through angles much less than 30° using a scintillator such as BGO. Lutetium oxyorthosilicate (LSO)-based block detectors provide high light output compared to BGO and hence allow improved energy resolution.

A high light-output scintillator affects a PET detector design in two ways:

  1. It helps achieve good spatial resolution with a high encoding ratio (ratio of number of resolution elements, or crystals, to number of photo-detectors) and
  2. attain good energy resolution.

Good energy resolution is needed to efficiently reject events which may Compton scatter in the patient before entering the detector. The energy resolution (ΔE/E) achieved by a PET detector depends not only upon the scintillator light output but also on the intrinsic energy resolution of the scintillator. The intrinsic energy resolution of a scintillator arises due to inhomogeneities in the crystal growth process as well as non-uniform light output for interactions within it.

The energy resolution values given in this table are for single crystals. In a full PET system, variations between crystals and other factors, such as light readout due to block geometry, contribute to a significant worsening of the energy resolution. Typically, NaI(Tl) detectors in a PET scanner achieve a 10% energy resolution for 511 keV photons, while BGO scanners have a system energy resolution of more than 20%. NaI(Tl) provides very high light output, leading to good energy and spatial resolution with a high encoding ratio.

A slow decay time leads to increased detector dead time and a high rate of random coincidences, whereas a shorter scintillation decay time enables faster production of the signal after complete absorption of the photon. Fast scintillators allow the use of a narrow coincidence time window, which reduces the probability of detecting random events. Another consideration in a PET detector is dead time, which is a main limiting factor at high count rates; a short decay is required to process each pulse individually at a high activity level. The light output of BGO is fairly poor compared to the NaI detector, and BGO has a longer decay time constant and hence increased dead time, which limits the count-rate performance of the scanner. LSO has a shorter decay time constant, giving reduced dead time and allowing the use of a shorter coincidence time window, which is appropriate to reduce the random events at high count rates.

However, the excellent stopping power of BGO gives it high sensitivity for photon detection in PET scanners. Currently, commercially produced whole-body scanners have developed along the lines of advantages and disadvantages of these two individual scintillators. The majority of scanners employ BGO and, when operating in 2D mode, use tungsten septa to limit the amount of scatter by physically restricting the axial field-of-view imaged by a detector area. This results in a reduction of the scanner sensitivity due to absorption of some photons in the septa.

The low light output of BGO also requires the use of small photomultiplier tubes to achieve good spatial resolution, thereby increasing system complexity and cost. Although LSO has a much higher light output than BGO, its overall energy resolution is not as good as that of NaI(Tl); this is due to intrinsic properties of the crystal.

Lanthanum bromide (LaBr3) has recently been developed at Delft University and can be used as a cost-effective scintillator, but its main drawback is low stopping power. LaBr3 provides better energy resolution than LSO, CsF and BaF2, and hence allows the use of a narrow energy window to reduce scatter events. In addition, it has improved timing characteristics that allow the use of a short coincidence timing window to efficiently reduce random events.

Gadolinium oxyorthosilicate (GSO) scintillator has been used by some research groups for whole-body, brain and animal scanner designs. GSO has some useful physical properties for PET detectors. One advantage of GSO over LSO, in spite of its lower stopping power and light output, is its better energy resolution and more uniform light output. Commercial systems are now being developed with GSO detectors.

Extremely fast scintillators, such as BaF2, have found use in time-of-flight PET scanners during the last few years. Since BaF2 has a very low stopping power, such time-of-flight scanners have reduced sensitivity, leading to lower SNR. Time-of-flight positron emission tomography (TOF PET) using the CsF scintillator was developed to a limited extent in the 1980s, but faster scintillators such as LSO and GSO have recently renewed interest in TOF PET scanners among the PET research community. Table [1.3] summarizes the physical properties of some common scintillators currently used in various PET scanners.

Acquisition Mode

A PET scanner is usually operated in 2D or 3D acquisition mode; some scanners can be operated in both modes by extending or retracting septa. When the septa are extended, the scanner collects data only for direct and adjacent planes. Septa reject most scatter events and also reduce the photon flux from outside the field of view (FOV), but they block many true events and hence limit the scanner sensitivity. Septa reduce the sensitivity of the detector by 40% or more, as they have a significant shadowing effect on the detectors.

There are 2n − 1 image planes in 2D acquisition mode, where n is the number of crystal rings.

When the septa are retracted in 3D mode, the PET detection efficiency increases because all coincidence events are collected: all possible LORs within the FOV are acquired. This improves image quality even when the administered dose is relatively low, and also reduces the overall scanning time.

The large contribution of scattered events in 3D PET is the major factor degrading image quality. Scatter from outside the field of view also contributes to the PET raw data, increasing the detector dead time and the random events. In 3D PET the scatter events may contribute 20-50% or more, whereas in 2D their contribution is usually less than 15% in a 16-ring tomograph. Also, the 3D acquisition mode gives a non-uniform axial sensitivity profile, about 30 times higher in the centre of the scanner than at the end planes, whereas the 2D mode gives a nearly flat profile following a bimodal (direct/cross plane) pattern. In multi-ring tomographs, direct planes are formed between detectors in the same ring.

Objective of this thesis work

PET is the most recent nuclear imaging technology, with which a 3D functional image of a living organ can be constructed. It is commonly used to detect malignant cells. At present cancer is a threat to people all over the world, and a large number of people suffer from it. No available treatment can yet cure cancer completely, and the disease is usually detected only at an advanced stage; a PET scan, however, can detect malignant cells at a very early stage. Besides this, PET scanners are used in the industrial sector for product quality examination and in pharmaceutical research. PET scanning is therefore very important in modern life.

The image quality of a PET scanner depends on several of its characteristics, such as spatial resolution, sensitivity and noise equivalent count rate (NECR). When a PET scanner is installed, it is necessary to check these characteristics. NECR is the most important measure of PET scanner image quality and is frequently used in the acceptance testing of a PET machine.

The objective of this work is to study the NECR of a PET camera. Since PET was introduced in Bangladesh only in the last two years, there is renewed interest in using it fully and safely. As it uses radioactive tracers, nuclear safety must be maintained properly, and this work was undertaken with that in mind. It should help impart proper knowledge about the safe use of PET scanning, and it should also encourage new researchers toward further developments in safety, which will greatly support cancer treatment in Bangladesh.

The most sophisticated part of a PET or PET-CT installation is the cyclotron, a type of charged-particle accelerator. PET imaging requires proton-rich isotopes such as 18F and 15O, and the cyclotron is used to produce such radioactive isotopes, which in turn are used to make functional images of the body. A cyclotron is therefore mandatory for a PET system.

This chapter covers various types of accelerators, circular or cyclic accelerators, cyclotron physics, the history of the cyclotron, the principle of operation of the cyclotron, the production of the 18F isotope, etc.


A particle accelerator is a device that uses electromagnetic fields to propel charged particles to high speeds and to contain them in well-defined beams.

There are two basic classes of accelerators:

  1. Electrostatic Accelerators and
  2. Oscillating field Accelerators.

Electrostatic Accelerators

Electrostatic accelerators use static electric fields to accelerate particles. A small-scale example of this class is the cathode ray tube in an ordinary old television set. Other examples are the Cockcroft–Walton generator and the Van de Graaff generator. The achievable kinetic energy of particles in these devices is limited by electrical breakdown.

Oscillating field Accelerators

Oscillating field accelerators, on the other hand, use radio-frequency electromagnetic fields and circumvent the breakdown problem. This class, whose development started in the 1920s, is the basis for all modern accelerator concepts and large-scale facilities. Rolf Widerøe, Gustav Ising, Leó Szilárd, Donald Kerst and Ernest Lawrence are considered pioneers of this field, having conceived and built the first operational linear particle accelerator [2], the betatron, and the cyclotron [59]. Because colliders can give evidence on the structure of the subatomic world, accelerators were commonly referred to as "atom smashers" in the 20th century. Although most accelerators (except ion facilities) actually propel subatomic particles, the term persists in popular usage when referring to particle accelerators in general.

Linear particle accelerators

In a linear accelerator (linac), particles are accelerated in a straight line with a target of interest at one end. They are often used to provide an initial low-energy kick to particles before they are injected into circular accelerators. The longest linac in the world is the Stanford Linear Accelerator, SLAC, which is 3 km (1.9 mi) long. SLAC is an electron-positron collider.


A modern superconducting, multicell linear accelerator component.

Circular or cyclic accelerators

In a circular accelerator, particles move in a circle until they reach sufficient energy. The particle track is typically bent into a circle using electromagnets. Examples of circular accelerators are cyclotrons, betatrons and synchrotrons.

The advantage of circular accelerators over linear accelerators (linacs) is that the ring topology allows continuous acceleration, as the particle can transit indefinitely. Another advantage is that a circular accelerator is smaller than a linear accelerator of comparable power (i.e. a linac would have to be extremely long to have the equivalent power of a circular accelerator).

Depending on the energy and the particle being accelerated, circular accelerators suffer a disadvantage in that the particles emit synchrotron radiation. When any charged particle is accelerated, it emits electromagnetic radiation and secondary emissions. Since a particle traveling in a circle is always accelerating towards the centre of the circle, it continuously radiates tangentially to the circle. This radiation is called synchrotron light and depends highly on the mass of the accelerated particle.

Introduction to Cyclotrons

A circular particle accelerator in which charged atomic or subatomic particles generated at a central source are accelerated spirally outward in a plane perpendicular to a fixed magnetic field by an alternating electric field is known as cyclotron. A cyclotron is capable of generating particle energies between a few million and several tens of millions of electron volts [65]. It is a machine for accelerating charged nuclear particles, commonly protons, so that they may be used to probe the nuclei of target atoms. Such “atom smashers” are considered the microscopes of nuclear physics.

Cyclotrons have a single pair of hollow 'D'-shaped plates (dees) to accelerate the particles, and a single large dipole magnet to bend their path into a circular orbit. An alternating electric field between the dees continuously accelerates the particles from one dee to the other, while the magnetic field guides them in a circular path. Because the revolution period of a non-relativistic particle in a uniform field is independent of its speed, the accelerating dees of a cyclotron can be driven at a constant frequency by a radio-frequency (RF) power source.

As the speed of the particles increases, so does the radius of their path, and the particles spiral outward. In this manner, a cyclotron can accelerate protons to energies of up to 25 million electron volts.

The modern cyclotron uses two hollow D-shaped electrodes held in a vacuum between poles of an electromagnet. A high frequency AC voltage is then applied to each electrode.

In the space between the electrodes, an ion source produces either positive or negative ions depending on the configuration. These ions are accelerated into one of the electrodes by electrostatic attraction, and when the alternating voltage changes polarity, the ions are accelerated into the other electrode. Because of the strong magnetic field, the ions travel in a circular path. Each time the ions pass from one electrode to the other they gain energy, their orbital radius increases, and they trace a spiral path. This acceleration continues until they escape from the electrode.

The accelerated particles are extracted from the cyclotron when they reach the end of the spiral acceleration path. This beam of accelerated subatomic particles can be used to bombard a variety of target materials to produce radioactive isotopes.

Various isotopes are used in medicine as tracers that are injected into the body and in radiation treatments for certain types of cancers. Cyclotrons are also used for research purposes in academic and industrial settings, and for positron emission tomography (PET).

The particles are injected in the centre of the magnet and are extracted at the outer edge at their maximum energy.

Cyclotrons reach an energy limit because of relativistic effects: the particles effectively become more massive, so their revolution frequency drops out of synchronism with the accelerating RF. Simple cyclotrons can therefore accelerate protons only to an energy of around 15 million electron volts (15 MeV, corresponding to a speed of nearly 18% of c), because the protons get out of phase with the driving electric field.
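
The frequency slip described above can be made concrete with the relativistic revolution frequency f = qB/(2πγm). A minimal sketch using standard constants; the 1.5 T field is an assumed example value, not a figure from this thesis.

```python
import math

M_P = 1.67262192e-27   # proton rest mass, kg
Q   = 1.60217663e-19   # elementary charge, C
C   = 2.99792458e8     # speed of light, m/s

def cyclotron_frequency(B, kinetic_MeV=0.0):
    """Proton revolution frequency f = qB / (2*pi*gamma*m) in hertz,
    including the relativistic factor gamma at the given kinetic energy."""
    gamma = 1.0 + kinetic_MeV * 1e6 * Q / (M_P * C**2)
    return Q * B / (2.0 * math.pi * gamma * M_P)

B = 1.5  # tesla (assumed example field)
f0 = cyclotron_frequency(B)         # at injection
f15 = cyclotron_frequency(B, 15.0)  # at 15 MeV

# At 15 MeV the revolution frequency has dropped by about 1.6%, so a
# fixed-frequency RF drive slips out of phase with the beam.
print(f"slip: {(1 - f15 / f0) * 100:.2f}%")
```

This is exactly the slip that the isochronous cyclotron compensates by raising the magnetic field at larger radii.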

If accelerated further, the beam would continue to spiral outward to a larger radius, but the particles would no longer gain enough speed to complete the larger circle in step with the accelerating RF. To accommodate relativistic effects, the magnetic field must be increased at larger radii, as is done in isochronous cyclotrons.

A magnet in the synchrocyclotron at the Orsay proton therapy centre.

An example of an isochronous cyclotron is the PSI Ring cyclotron, which provides protons at an energy of 590 MeV, corresponding to roughly 80% of the speed of light. The advantage of such a cyclotron is the maximum achievable extracted proton current, currently 2.2 mA. The energy and current correspond to 1.3 MW of beam power, the highest of any existing accelerator.

History of Cyclotron

In the nineteenth century, some physicists still laboured under the theory (really, the dream of alchemists for centuries) that elements could be made to transmute into other elements through chemical processes.

In 1902, Ernest Rutherford and Frederick Soddy explained the new phenomenon of radioactivity as a “transformation” of one element into another, occurring spontaneously in nature; and in 1919, Rutherford succeeded in deliberately causing transmutations by bombarding light elements with the alpha particles emitted from naturally decaying radio-elements.

Since very few of the projectile alpha particles collided with nuclei of the target atoms, the number of transmutations was relatively small. Scientists therefore sought new ways to increase the number of projectile particles and to accelerate them to higher energies. The copious production of charged particles was the easier task; the high-voltage engineering required for acceleration proved far more difficult.

Scientists tried a number of different approaches to the acceleration problem, including a voltage multiplier circuit (Sir John Douglas Cockcroft and Ernest Walton) and an electrostatic generator (Robert J. Van de Graaff), both linear accelerators.

E. O. Lawrence and his graduate students at the University of California, Berkeley tried many different configurations of the cyclotron before they met with success in 1929.

In 1930, Ernest O. Lawrence, with the help of one of his students, M. Stanley Livingston, designed and constructed the first of many magnetic resonance accelerators. Lawrence's accelerator operated at voltages much lower than other machines, yet imparted as much or more energy to its projectiles.

 Lawrence won the Nobel Prize for Physics for his work on the cyclotron in 1939.

The dees of Lawrence's first cyclotron were only about 4 inches (10 cm) in diameter. The accelerating chamber of the first cyclotron measured 5 in (12.7 cm) in diameter and boosted hydrogen ions to energies of 5-45 MeV depending on the settings. One mega-electron-volt (MeV) is 1.602 × 10⁻¹³ joule.

Subsequent models of 9, 11, 27, 37, and 60 inches followed, with a new model built almost every other year. These larger machines surpassed an early goal of one million electron volts of projectile energy; many different types of atoms were split, and scores of new radioisotopes were identified, including the first transuranium elements.

The first European cyclotron was constructed in Leningrad in the physics department of the Radium Institute, headed by Vitali Khlopin. The instrument was first proposed in 1932 by George Gamow and Lev Mysovskii, and was subsequently installed and brought into operation.

Principle of operation of a Cyclotron

The cyclotron, destined to become the chief tool of nuclear physics, worked on the principle that charged particles, accelerated across a voltage gap, travel in a circular path under the influence of a magnetic field. It is used to accelerate positive ions so that they acquire energy large enough to induce nuclear reactions.

If confined to a hollow disk-shaped chamber built in two D-shaped halves (called "dees"), and subjected to a radio-frequency voltage alternation as it passes from one half to the other, the particle receives two accelerations per cycle and travels at higher velocities in ever-larger circles. In a cyclotron, the positive ions cross the same alternating electric field again and again, and thereby gain energy.

An alternating electric field attracts the particles from one side of the cyclotron to the other. The cyclotron's magnetic field, generated by the two electromagnets, bends each particle's path into a horizontal spiral, forcing it to accelerate in order to keep up with the alternating electric field. When the particle reaches its peak energy it is released to collide with the desired target, producing observable nuclear reactions. The cyclotron is thus based on the principle that a positive ion can acquire sufficiently large energy from a comparatively small alternating potential difference by crossing the same electric field time and again, with the help of a strong magnetic field.

A positively charged particle is released at the centre of the gap at time t = 0. It is attracted towards the dee that is at a negative potential at that moment. It enters the uniform magnetic field between the dees perpendicularly and performs uniform circular motion. As there is no electric field inside the dees, it moves on a circular path whose radius depends on its momentum, and emerges from the dee after completing a half circle.

As the frequency of the alternating voltage (f_A) is equal to the cyclotron frequency (f_c), the potential of the opposite dee becomes negative just as the particle emerges from one dee, attracting it with a force that increases its momentum. The particle then enters the other dee with a larger velocity and hence moves on a circular path of larger radius.

This process keeps repeating; the particle gains momentum, and the radius of its circular path increases while the frequency remains the same. The charged particle thus goes on gaining energy, which becomes maximum on reaching the circumference of the dee. When the particle is at the edge, it is deflected with the help of another magnetic field, brought out, and allowed to hit the target. Such accelerated particles are used in the study of nuclear reactions, the preparation of artificial radioactive substances, the treatment of cancer, and ion implantation in solids.
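
The energy gained in this way is fixed by the field strength and the extraction radius rather than by the accelerating voltage. A minimal sketch of the standard non-relativistic relation E = (qBR)²/2m, with assumed example values for B and R (not figures from this thesis):

```python
Q   = 1.60217663e-19   # elementary charge, C
M_P = 1.67262192e-27   # proton rest mass, kg

def max_kinetic_energy_MeV(B, R):
    """Maximum kinetic energy (MeV) of a proton extracted at radius R (m)
    from a cyclotron with magnetic field B (T): at the edge v = qBR/m,
    so E = m*v^2 / 2 = (q*B*R)^2 / (2*m)."""
    energy_joule = (Q * B * R) ** 2 / (2.0 * M_P)
    return energy_joule / (Q * 1e6)

# assumed example: 1.5 T field, 40 cm extraction radius
print(f"{max_kinetic_energy_MeV(1.5, 0.4):.1f} MeV")
```

Doubling either the field or the radius quadruples the extracted energy, which is why higher-energy machines need larger magnets.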

Types of Cyclotron

Commercially available biomedical cyclotrons can be divided into four categories depending on their energy:

Sub-Low Energy:

  • < 12 MeV
  • Siemens Eclipse 11 MeV (RDS 111)
  • IBA Molecular Cyclone 10, 10 MeV
  • GE Healthcare Minitrace, 9.5 MeV

Low Energy:

  • 15 to 20 MeV
  • Advanced Cyclotron Systems TR-19, variable energy 13 to 19 MeV
  • GE Healthcare PETtrace, 16.5 MeV
  • IBA Molecular Cyclone 18, 18 MeV

Medium Energy:

  • 20 to 25 MeV
  • Advanced Cyclotron Systems TR-24, 24 MeV

High Energy:

  • 25 to 30 MeV
  • Advanced Cyclotron Systems TR-30, 31 MeV
  • IBA Molecular Cyclone 30, 30 MeV

Cyclotron for PET

Medium-energy cyclotrons are much better than low- or sub-low-energy cyclotrons for clinical and research applications, as they are much more efficient at producing PET isotopes other than fluorine. Low-energy machines were basically designed to produce only fluorine. Above 20 MeV, a cyclotron can be used for both PET and SPECT isotopes.

Properties of a Cyclotron

The properties of a cyclotron depend on the following parameters:


Medium and variable energy for PET, SPECT and research isotopes:

  1. Ions extracted                          :  H+, D+
  2. Ions accelerated                       :  H+, D+
  3. Ion source, external                  :  ECR, polarized IS
  4. Ion source, internal                   :  hot-cathode hooded arc
  5. Norm. emittance hor./ver.          :  204/1.2 π mm mrad
  6. Phase width                             :  16°-40°
  7. Energy spread (FWHM)             :  ~0.3%
  8. Beam current                            :  ~200 µA or higher

High beam transmission at the high-current limit.

Magnetic Structure

  1. Number of symmetry periods (sectors)   :  4-12
  2. Radially varying sector angle                  :  20-60 degrees
  3. Hill field                                                  :  ~2 tesla
  4. Valley field                                              :  ~0.2 tesla

Dee Structure

  1. Number of dees                          :  2
  2. Position                                      :  inside the system
  3. Dee voltage (nominal)                 :  6 kV

RF System

  • Max. accelerating voltage           :  2 × 70 kV
  • Frequency                                  :  5-17 MHz, 520 MHz
  • Powerful vacuum system for beam transmission.

Vacuum System

  1. Low operating vacuum pressure at the high-current limit.
  2. Vacuum system including oil-free pumps and low operating vacuum.

Cyclotron-produced isotopes

Cyclotrons produce various types of isotopes such as 18F, 11C and 13N. These isotopes are used in compound radiotracers; for example, the fluorine-18 radiotracer is fluoro-deoxy-glucose, and a carbon-11 radiotracer is methionine. The different isotopes, their tracer compounds, the physiological processes they follow and their applications are given in the following.

Production of 18F isotope


Naturally occurring fluorine is monoisotopic. However, a number of radioactive isotopes of fluorine, such as 17F, 18F, 20F and 21F, have been produced since the 1930s; all radioactive isotopes of fluorine are man-made. 17F and 18F have a deficit of neutrons and decay by positron emission, while 20F and 21F have a surfeit of neutrons and decay by negative beta emission. Only 18F and 20F have been used as radiotracers in chemical studies.

Isotopes of fluorine

The half-life of the 20F radioisotope is 10 s, whereas the half-life of 18F is 110 minutes. Because of its longer half-life, 18F is very useful as a radiotracer. However, a major limitation is that the production of 18F and the subsequent experimental work must normally be completed within 3 or 4 half-lives. The 511 keV γ-photons resulting from the annihilation of the 0.64 MeV positrons emitted during the decay of 18F can be easily detected using nuclear detectors.
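
The 3-4 half-life working window quoted above follows directly from the decay law A(t) = A₀ · 2^(−t/T½). A small sketch using the 110-minute half-life quoted in the text:

```python
def remaining_fraction(t_min, half_life_min=110.0):
    """Fraction of the initial 18F activity remaining after t minutes."""
    return 0.5 ** (t_min / half_life_min)

# After 3 and 4 half-lives only ~12.5% and ~6.3% of the activity remains,
# which is why production and the experiment must fit inside that window.
print(remaining_fraction(330.0))  # 3 half-lives
print(remaining_fraction(440.0))  # 4 half-lives
```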

Fluorine-18 is an important isotope in the radiopharmaceutical industry and is primarily synthesized into fluorodeoxyglucose (FDG) for use in positron emission tomography (PET). It is a fluorine radioisotope that is an important source of positrons. In FDG it substitutes for a hydroxyl group and serves as a tracer in the scan. Its significance is due to both its short half-life and its emission of positrons when decaying.

Production of 18F

Several methods have been used to produce the 18F isotope, and the most commonly used are given in the following.

In the radiopharmaceutical industry, 18F is made using either a cyclotron or a linear particle accelerator to bombard a target, usually of pure or enriched oxygen-18 water, with protons, producing 18F via the 18O(p,n)18F reaction.

The ECAT 951/31 system, a PET scanner manufactured by Siemens (Siemens-CTI, Knoxville, TN, USA), is designed to produce cross-sectional pictures of radioactive distributions inside patients. The scanner has two bucket rings, each having 16 detector modules (four blocks in a module). All the buckets are housed in a gantry arranged in rings with 100 cm inner diameter giving a 60 cm diameter transaxial field of view (FOV) and 10.8 cm axial FOV.

The even-numbered alternate detector modules from an original standard ECAT PET system were used to build the miniPET system, with a diameter 50% of the original. The size of the new geometry was thus reduced by about half, but the physical shape remained the same as the original standard system.

The miniPET system was designed with 2 bucket rings, each having 8 detector modules, whereas the original system had 2 bucket rings each with 16 modules. Unlike the standard system, the miniPET included no septa or rod sources. The facility of gantry movement was also not available, as there were no gantry drives to control the new system. To avoid problems due to the absence of the gantry and its movement, the original ECAT software was simplified to control data acquisition directly.

This chapter includes various components of the ECAT 951 PET system and the design architecture of the miniPET.

Description to ECAT (951 series) and its various components

Various components and parameters

The scanner consists of two bucket rings, each ring having 16 detector modules, with four blocks per detector module. Each BGO block, viewed by four PMTs, is cut into an 8×8 array of segmented crystal elements, so the scanner has 8192 individual crystal detectors (each 6.25 mm transverse, 6.75 mm axial centre-to-centre and 30 mm in radial thickness/crystal depth). The BGO block provides high photopeak efficiency for the detection of 511 keV gamma rays, and in the ECAT scanner the detection rate is up to 2×10⁶ coincidence events per second.

The spatial resolution of the ECAT 951 system is about 6 mm in the transaxial direction and 7 mm in the axial direction. All the buckets are housed in a gantry arranged in rings with a 100 cm inner diameter, giving a 60 cm diameter transaxial field of view (FOV) and a 10.8 cm axial FOV. The gantry allows the scanner to move, tilting up to ±90° from the horizontal axis and rotating ±45° [7] about a vertical axis. The software allows investigators to develop new algorithms for data acquisition, processing and clinical applications.

The Gantry of the Scanner

The gantry is the 'donut'-shaped part of the scanner that houses the buckets and the associated detector electronics, septal ring assemblies, patient alignment lasers, connecting cables, air-cooling systems and three rotating rod sources used for testing and calibrating the scanner and performing attenuation measurements. The main job of the gantry, however, is to support and position the detectors precisely. The 32 detector modules (BGO blocks plus associated bucket electronics) are mounted on the detector support plate, which is joined through two trunnion bearings to the main frame. The trunnion bearings allow the gantry to tilt about the horizontal axis for suitable alignment. The gantry also provides a quiet and comfortable environment in which to position patients accurately inside the scanner field of view (FOV).

Motions of Gantry

The gantry can be tilted on its bearings about the horizontal axis by up to 30° in both directions (±30°), as seen in Fig. 3.1, and can also be pivoted on its base about the vertical axis by ±45°. These flexible motions accommodate demanding imaging situations; the gantry allows the scanner to tilt up to ±90° from the horizontal axis. The motions are driven by motors and maintained under operator control rather than computer control, by pressing the switches located at the front of the scanner.

The patient lies on a couch (also known as a table), which moves both horizontally and vertically during the examination. The patient couch was designed for comfortable patient support and accurate, precise positioning. The couch top must be capable of moving at least 1800 mm, so that the patient can be scanned from head to toe without having to be repositioned. The couch has an adjustable head holder with tilt capability. The patient platform's movement is under computer control, with a DC motor used for the movements. The couch can move up to 129 cm in the horizontal direction and from 75 cm to 104 cm in the vertical direction.

Architecture of the miniPET

The architecture of the miniPET is almost the same as that of the standard ECAT 951 system, but the number of components used in the miniPET is half that of the ECAT 951: for example, 16 detector modules are used instead of 32. This section concentrates on the necessary changes for the new scanner design, the internal cable connections between the various hardware components, the operating mode, and the coincidence combinations among the detector modules.

The miniPET Scanner Design 

The complete miniPET scanner was designed using 16 detector modules (“buckets”) arranged in two rings. The detector modules were mounted on a horizontal wooden table. The wooden table was used to support the modules as the system had no gantry.

Four blocks, each block (4 PMTs plus a crystal matrix) having 64 distinct crystal elements, together with the associated readout electronics, comprise one detector module. The miniPET system was therefore a 16-ring tomograph with 4096 individual crystal detectors. The axial and transaxial fields of view (FOV) of the camera were 10.8 cm and 20 cm in diameter, respectively.
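
The detector totals follow directly from the block structure described above; a quick arithmetic sketch:

```python
# Detector arithmetic for the miniPET geometry described in the text.
modules = 16             # "buckets", arranged as two rings of 8
blocks_per_module = 4
crystals_per_block = 64  # each block is an 8x8 crystal array
pmts_per_block = 4

crystals = modules * blocks_per_module * crystals_per_block
pmts = modules * blocks_per_module * pmts_per_block
print(crystals, pmts)  # 4096 crystals read out by 256 PMTs
```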

The figure shows the layout of the new miniPET camera, excluding the acquisition and processing units. The connecting cables are clearly visible in the figure, and their functions are described in the following section. The technical characteristics of the scanner are summarised in the accompanying table.

Many cables are required to complete the complicated PET system, and various cables are used for different purposes. Four cables are connected to each detector module; these are described below.

Clock cable – used by the detector module to communicate with the clock and the computer: to receive commands, return status information and obtain timing information.

Data cable – the data cable must be connected to the correct connector on the ring receiver. In the miniPET only even-numbered buckets are used, so the cables are connected to alternate inputs of the ring receiver. The data cable also provides the connection between ring receiver 2 and the upper bucket ring of the system.

PS cable – the power supply (PS) cable is used to supply +5V/-5V to the electronics circuits in the bucket.

HV cable – the high voltage (HV) cable supplies 1500V to the PMTs to accelerate the photoelectrons to the anodes.

Acquisition Mode 

Data acquisition in 2D or 3D mode is possible in the miniPET system by setting the allowed ring differences in the software. The miniPET scanner performed all experiments in 2D mode without any septa, with the ring difference smaller than or equal to three (ring difference ≤ 3). Thirty-one image planes are formed in total in the scanner: 16 direct planes formed from LORs with ring difference equal to 0 or 2, and 15 cross planes formed from LORs with ring difference equal to 1 or 3.
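
The plane count follows the 2n − 1 rule quoted earlier for an n-ring tomograph; a quick check for the 16-ring miniPET:

```python
def image_planes(n_rings):
    """Total 2D image planes for an n-ring tomograph:
    n direct planes plus (n - 1) cross planes = 2n - 1."""
    direct = n_rings
    cross = n_rings - 1
    return direct + cross

print(image_planes(16))  # 31 planes, as stated for the miniPET
```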

Coincidence Issues among the Buckets

In the miniPET camera the same coincidence combinations as in the original scanner are used, so that each detector module is in coincidence with the 6 opposite modules: 3 in the same bucket ring and the remaining 3 in the other bucket ring. Examples of coincidence combinations recorded among the detector modules of the camera are as follows:

Lower Ring Combinations 

Bucket 0 is in coincidence combination with the detector buckets 6, 8, 10 and bucket 4 with the buckets

Upper Ring Combinations

Bucket 16 is in coincidence combination with the detector buckets 22, 24, 26 and bucket 20 with the buckets.

Combinations for Cross Rings

Bucket 0 in the lower ring is in coincidence combination with detector buckets 22, 24, 26 in the upper ring. Similarly, bucket 4 (in the lower ring) is in coincidence with the buckets 26, 28, 30 (in the upper ring).

The detector bucket 16 in the upper ring is in coincidence combination with buckets 6, 8, 10 (in the lower ring), and bucket 20 (in the upper ring) with buckets 10, 12, 14 in the lower ring. The number of coincidence combinations within one ring is therefore (8×3)/2 = 12, and the total number of coincidence combinations in the miniPET camera, over the four combinations of rings, is

                                     12×4 = 48.
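
The combination count above can be verified by enumerating the bucket pairs. The sketch below reconstructs the coincidence map from the examples given in the text (even-numbered buckets, each facing 3 opposite buckets per ring); the exact indexing scheme is an assumption chosen to be consistent with those examples.

```python
lower = list(range(0, 16, 2))    # lower-ring buckets 0, 2, ..., 14
upper = list(range(16, 32, 2))   # upper-ring buckets 16, 18, ..., 30

def facing(i):
    """Indices of the 3 buckets opposite bucket index i in an 8-bucket ring."""
    return [(i + k) % 8 for k in (3, 4, 5)]

pairs = set()
for ring_a in (lower, upper):
    for ring_b in (lower, upper):
        for i, bucket in enumerate(ring_a):
            for j in facing(i):
                pairs.add(frozenset((bucket, ring_b[j])))

# 12 pairs within each of the two rings plus 24 cross-ring pairs -> 48 total
print(len(pairs))
```

The enumeration reproduces the stated examples (bucket 0 with 6, 8, 10 and with 22, 24, 26) and confirms the 48-combination total.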

 The miniPET Detector System

The miniPET scanner has 4096 individual small crystal detectors, which yields 256 detectors in each of the 16 axial crystal rings; the 64 BGO blocks are grouped into 16 detector modules arranged in two bucket rings on the system gantry, with 256 photomultiplier tubes (PMTs). The detailed parameters of the detector system are described separately in the following sections.

BGO Crystal Detector

In the miniPET scanner, bismuth germanate (BGO) crystal is used as a scintillation detector in the form of blocks which are cut into an 8×8 array of crystal elements [Fig. 3.2]. The dimensions of each distinct crystal element are 6.25 mm transverse × 6.75 mm axial (center-to-center spacing) × 30 mm radial (crystal depth).  The slots between crystals are approximately 0.6 mm wide. The detailed characteristics of the BGO scintillation crystal are shown in

Electronics of the Bucket

 The electric signals from the photomultipliers in each block are analyzed for position, time and energy by the bucket electronics, and eventually transferred to the ring receiver for further processing. Three types of “Board” are assembled in each detector module: 1) Analog Signal Processor, 2) Position Energy Processor, and 3) Bucket Controller.

The complete miniPET scanner was designed using 16 detector modules (“buckets”) arranged in two rings. The detector modules were mounted on a horizontal wooden table, which was used to support the modules because the system had no gantry.

NEC was first introduced by Strother et al. and was derived from the noise-equivalent-quanta concept, which originates from conventional photographic imaging. Briefly, noise-equivalent quanta describe the equivalent number of quanta or counts required by an ideal imaging system to produce the same noise characteristics as an actual system that is degraded by noise.

In positron emission tomography (PET), the noise equivalent count rate (NEC) is a measure of image quality. NEC describes the equivalent coincidence counting rate that would have the same noise properties as the net true counting rate, corrected for spurious coincidences arising from two particular sources: random (accidental) coincidences and scattered events.

NEC is fairly straightforward to measure and has become a standard metric for scanner performance, provided by manufacturers and determined as part of acceptance testing for new equipment. NEC is most frequently (or invariably, in the case of acceptance testing) computed using a standard test object. The NEC rate is used to assess scanner performance under a wide range of imaging situations, and it enables comparisons of count rate that take into account the statistical noise due to scattered and random events. The axial extent of the test object has a large impact on NEC, which varies considerably with scanner design and acquisition mode. According to Watson et al., and previously Lartizien et al. and Townsend et al., the NEC varies for a single patient at different axial positions during a whole-body scan.

Noise Equivalent Count Rate, NECR (or NEC rate), is one of the important characteristics of a PET/CT instrument. A higher NEC value indicates better PET image quality. It has been shown that the signal-to-noise ratio in the images reflects the global signal-to-noise ratio, which is related to the NEC. Count rate performance can be determined by calculating the noise equivalent count rate (NEC). Due to scatter and randoms in the measured PET data, it is not easy to compare different scanners, or the same system operating under different acquisition modes. The noise equivalent count rate is defined as the rate of true coincidences which would (in the absence of randoms and scatters) give rise to the same noise level in the data. Before the optimum use of a new system it is necessary to measure the scanner characteristics as a function of count rate. The general formula to calculate the NEC rate is

NEC = T² / (T + S + kR)                                    (4.1)

where T, S and R are the true, scatter and random rates respectively. The parameter k is the randoms correction factor with a value of 1 or 2 depending on the randoms correction method.

 The value k=1 is used when the randoms contribution is estimated from the singles count rates. A value of 2 is used when randoms are measured using a delayed coincidence window, as in the present work, to account for the additional noise due to the subtraction.

In this work, true, scatter and random events were measured with the aim of testing the camera's NECR characteristics, taking into account the statistical noise. After designing a PET camera, it is necessary to measure its various performance characteristics.

Materials and Method

To measure the noise equivalent count rate of the system, the same 10 cm diameter polyethylene cylindrical phantom was considered as used previously for the scatter fraction measurement.

The phantom was filled with 1000 ml of water mixed with 18F having a radioactivity of 240 MBq. The cylinder was shaken to mix the 18F radioisotope uniformly into the water. Then the phantom was placed at the centre of the scanner. A set of 300-second scans was acquired every half hour. These scans were continued for almost 16 hrs.
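Because 18F decays appreciably over a 16-hour series of half-hourly scans, the activity at each scan time follows the usual exponential decay law. A short sketch, using the nominal 240 MBq starting activity from the measurement and the standard 109.77-minute half-life of 18F:

```python
import math

T_HALF_F18 = 109.77 * 60      # half-life of F-18 in seconds
A0 = 240e6                    # initial activity in Bq (240 MBq)

def activity(t_seconds, a0=A0):
    """A(t) = A0 * exp(-ln2 * t / T_half): activity remaining after t seconds."""
    return a0 * math.exp(-math.log(2.0) * t_seconds / T_HALF_F18)

# half-hourly scan times over ~16 h, as in the measurement series above
scan_times = [n * 1800 for n in range(33)]
activities = [activity(t) for t in scan_times]
```

This is why a single fill can probe a wide range of count rates: after 16 hours (almost nine half-lives) less than 1% of the initial activity remains.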

Randoms were estimated by introducing an extra 100 ns time delay into the coincidence circuit. Multiples were recorded simultaneously with the prompt and delayed events. The unscattered (true) plus scattered events were calculated as the difference between the prompt and the (random + multiple) events using the formula T* = [P – (R + M)], where T* is the unscattered plus scattered coincidence rate. Scattered and true coincidences were then separated using the formulae S = T* × SF and T = T* × (1 – SF). From the scatter fraction analysis of the sinograms from the scatter phantom, SF = 20.63% of (S + T), so S = 0.2063 × T* and T = 0.7937 × T*. Finally, the noise equivalent count rate was calculated from the NEC formula above (Eqn. 4.1).

Data Acquisition and Analysis

After positioning the phantom at the centre of the scanner, the experiment was repeated 19 times. The NECR (or NEC) was calculated from the file named June26cyl1, acquired for 300 s at 11:45 am.
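The calculation chain used here (T* = P − (R + M), the scatter-fraction split, then the NEC formula of Eqn. 4.1 with k = 2 for delayed-window randoms) can be sketched as follows; the count rates in the example call are illustrative, not measured values:

```python
SF = 0.2063   # scatter fraction measured with the scatter phantom
K = 2         # randoms estimated with a delayed coincidence window

def nec_rate(prompts, randoms, multiples, sf=SF, k=K):
    """NEC = T^2 / (T + S + k*R), with T and S split from T* by the scatter fraction."""
    t_star = prompts - (randoms + multiples)   # T* = P - (R + M)
    scatter = sf * t_star                      # S = T* x SF
    trues = (1.0 - sf) * t_star                # T = T* x (1 - SF)
    return trues ** 2 / (trues + scatter + k * randoms)
```

For example, `nec_rate(100e3, 10e3, 1e3)` evaluates Eqn. 4.1 for prompt, random and multiple rates of 100, 10 and 1 kcps.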


All the calculations were repeated 19 times, giving nineteen NEC values and the corresponding true, scattered, random and multiple count rates. All the values are given in the following table.



LAN System and Security System in China-Bangla Group

This internship report is an exclusive study of the work of the LAN System & Security Solution department in China-Bangla Group. The main purpose of this report is to present the practical experience gained by observing the system operations work at China-Bangla Group.

These three months of internship at China-Bangla have helped me a lot to understand the practical working environment of a networking system. This report describes the backbone LAN along with the implementation, infrastructure and functioning of the various network elements in general. The report also focuses on the work of the intelligent network. Thus my report is a combination of different aspects of client dealings and operations under general working activities, together with a comprehensive view of the IT division. An effort has been made, as an intern, to identify some problems and prospects and to provide recommendations for solving those problems. I will be describing my duties, activities and the knowledge gained from these experiences.

All the observations made during the internship period are reflected in this report, with proper analysis based on theory and practical experience.


The objective of this internship was to gain experience and knowledge regarding the existing cellular services and to correlate the theoretical background learned in university courses with the practical working environment. In the field of engineering, concepts at the application level provide a useful supplement to relevant theories and understandings. This program also brings out more information on official decorum and fulfils the academic requirement of the undergraduate program. The internship gives us the opportunity to learn how to cope with the real corporate world and also to learn teamwork.

Report objective

The objective of this report is to present the experience and outcomes gathered during the internship in a formal context. The report reflects the fact that the writer performed his internship in the Regional Operation Division of China-Bangla Group. It is arranged in chapters laid out so that the reader is given sufficient background information on the work of all the wings of system operation before the more advanced discussions. Further discussions, along with supporting figures, are also given in places so that they complement the text and aid its understanding.

Company Profile

China-Bangla Group is a real estate company that started with a few plots of land at INANI, on the world-famous longest unbroken sea beach in Cox’s Bazar. Buying land in a good location is one of the best decisions you can make, and to avoid the problems and risks that come with buying land, we are the right people at the right time. We are looking for those hardworking people who work for tourists in various standard hotels on a small salary and dream of building a hotel of their own at a certain point in their lives. For them it is easy to buy a plot of 1 bigha or more and build up a tourist entertainment project.

Your small investment today will yield smart returns in the near future.

Introduction to WLAN

Despite the productivity, convenience and cost advantages that WLAN offers, the radio waves used in wireless networks create a risk that the network can be hacked. This section explains three examples of important threats: Denial of Service, Spoofing, and Eavesdropping.

WLAN Components

A WLAN is built from two main hardware components: access points, which bridge the wireless medium to the wired backbone, and network interface cards (client adapters), which connect end-user devices to the network. Each is described below.

Access Points

An Access Point (AP) is essentially the wireless equivalent of a LAN hub. It is typically connected to the wired backbone through a standard Ethernet cable, and communicates with wireless devices by means of an antenna. An AP operates within a specific frequency spectrum and uses the modulation techniques specified by the 802.11 standard. It also informs the wireless clients of its availability, and authenticates and associates wireless clients to the wireless network.

Network Interface Cards (NICs)/Client Adapters

Wireless client adapters connect a PC or workstation to a wireless network, either in ad hoc peer-to-peer mode or in infrastructure mode with APs (discussed in the following section). Available as PCMCIA (Personal Computer Memory Card International Association) cards and PCI (Peripheral Component Interconnect) cards, they connect desktop and mobile computing devices wirelessly to all network resources. The NIC scans the available frequency spectrum for connectivity and associates the device with an access point or another wireless client. It is coupled to the PC/workstation operating system using a software driver. The NIC enables new employees to be connected instantly to the network and enables Internet access in conference rooms.

WLAN Architecture

The WLAN components mentioned above are connected in certain configurations. There are three main types of WLAN architecture: Independent, Infrastructure, and Micro Cells and Roaming.

Independent WLAN

The simplest WLAN configuration is an independent (or peer-to-peer) WLAN. It is a group of computers, each equipped with one wireless LAN NIC/client adapter. In this type of configuration, no access point is necessary and each computer in the LAN is configured on the same radio channel to enable peer-to-peer networking. Independent networks can be set up whenever two or more wireless adapters are within range of each other. Figure 1 shows the architecture of an Independent WLAN.

Infrastructure WLAN

An Infrastructure WLAN consists of wireless stations and access points. Access points combined with a distribution system (such as Ethernet) support the creation of multiple radio cells that enable roaming throughout a facility. The access points not only provide communications with the wired network but also mediate wireless network traffic in the immediate neighborhood. This configuration satisfies the needs of large-scale networks of arbitrary coverage size and complexity. Figure 2 shows the architecture of an Infrastructure WLAN.

Figure 2: Infrastructure WLAN

Micro cells and Roaming

The area of coverage for an access point is called a “micro cell”. The installation of multiple access points is required in order to extend the WLAN range beyond the coverage of a single access point. One of the main benefits of WLAN is user mobility. Therefore, it is very important to ensure that users can move seamlessly between access points without having to log in again and restart their applications. Seamless roaming is only possible if the access points have a way of exchanging information as a user connection is handed off from one access point to another. In a setting with overlapping micro cells, wireless nodes and access points frequently check the strength and quality of transmission. The WLAN system hands roaming users off to the access point with the strongest and highest-quality signal, accommodating roaming from one micro cell to another. Figure 3 shows the architecture of micro cells and roaming.

Figure 3: Micro cells and Roaming
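The hand-off rule described above, roam to the access point offering the strongest and highest-quality signal, amounts to a simple selection. A toy sketch, with hypothetical RSSI readings in dBm (higher, i.e. less negative, is stronger):

```python
def hand_off(rssi_readings):
    """Return the access point with the strongest signal (highest RSSI in dBm)."""
    return max(rssi_readings, key=rssi_readings.get)

# the station re-evaluates periodically as it moves between micro cells
readings = {'ap_lobby': -72, 'ap_floor2': -48, 'ap_lab': -61}
print(hand_off(readings))  # -> ap_floor2
```

A real implementation would also apply hysteresis so the station does not flap between two access points of similar strength.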

 Security Threats of WLAN

Despite the productivity, convenience and cost advantages that WLAN offers, the radio waves used in wireless networks create a risk that the network can be hacked. This section explains three examples of important threats: Denial of Service, Spoofing, and Eavesdropping.

Denial of Service

In this kind of attack, the intruder floods the network with either valid or invalid messages, affecting the availability of the network resources. Due to the nature of radio transmission, WLANs are very vulnerable to denial of service attacks. The relatively low bit rates of WLANs can easily be overwhelmed, leaving them open to denial of service attacks. By using a powerful enough transceiver, radio interference can easily be generated that would prevent a WLAN from communicating over the radio path.

Spoofing and Session Hijacking

This is where the attacker gains access to privileged data and resources in the network by assuming the identity of a valid user. This happens because 802.11 networks do not authenticate the source address, which is the Medium Access Control (MAC) address of the frames. Attackers may therefore spoof MAC addresses and hijack sessions. Moreover, 802.11 does not require an access point to prove it is actually an AP. This facilitates attackers who may masquerade as APs. To eliminate spoofing, proper authentication and access control mechanisms need to be put in place in the WLAN.


Eavesdropping

This involves an attack against the confidentiality of the data being transmitted across the network. By their nature, wireless LANs intentionally radiate network traffic into space. This makes it impossible to control who can receive the signals in any wireless LAN installation. In a wireless network, eavesdropping by third parties is the most significant threat because the attacker can intercept the transmission over the air from a distance, away from the premises of the company.

Wired Equivalent Privacy

Wired Equivalent Privacy (WEP) is a standard encryption for wireless networking. It is a user authentication and data encryption system from IEEE 802.11 used to overcome the security threats. Basically, WEP provides security to WLAN by encrypting the information transmitted over the air, so that only the receivers who have the correct encryption key can decrypt the information. The following section explains the technical functionality of WEP as the main security protocol for WLAN.

How WEP Works

When deploying a WLAN, it is important to understand the ability of WEP to improve security. This section describes how WEP functions to accomplish the level of privacy of a wired LAN. WEP uses a pre-established shared secret key called the base key, the RC4 encryption algorithm, and the CRC-32 (Cyclic Redundancy Code) checksum algorithm as its basic building blocks. WEP supports up to four different base keys, identified by KeyIDs 0 through 3. Each of these base keys is a group key called a default key, meaning that the base keys are shared among all the members of a particular wireless network. Some implementations also support a set of nameless per-link keys called key-mapping keys. However, this is less common in first-generation products, because it implies the existence of a key management facility, which WEP does not define. The WEP specification does not permit the use of both key-mapping keys and default keys simultaneously, and most deployments share a single default key across all of the 802.11 devices.

WEP tries to achieve its security goal in a very simple way. It operates on MAC Protocol Data Units (MPDUs), the 802.11 packet fragments. To protect the data in an MPDU, WEP first computes an integrity check value (ICV) over the MPDU data. This is the CRC-32 of the data. WEP appends the ICV to the end of the data, growing this field by four bytes. The ICV allows the receiver to detect whether data has been corrupted in flight or the packet is an outright forgery. Next, WEP selects a base key and an initialization vector (IV), which is a 24-bit value. WEP constructs a per-packet RC4 key by concatenating the IV value and the selected shared base key. WEP then uses the per-packet key with RC4 to encrypt both the data and the ICV. The IV and the KeyID identifying the selected key are encoded as a four-byte string and prepended to the encrypted data.
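The encapsulation just described (CRC-32 ICV appended to the data, RC4 under the per-packet key IV || base key, and a 4-byte IV/KeyID header prepended) can be sketched as a toy implementation. This is for illustration only: WEP is thoroughly broken, and the byte layout here is simplified relative to a real 802.11 frame:

```python
import struct
import zlib

def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: key scheduling, then XOR the keystream into the data."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encapsulate(base_key: bytes, iv: bytes, data: bytes, key_id: int = 0) -> bytes:
    """ICV = CRC-32(data); encrypt data||ICV with RC4(IV || base_key)."""
    assert len(iv) == 3, "WEP uses a 24-bit IV"
    icv = struct.pack('<I', zlib.crc32(data) & 0xFFFFFFFF)
    ciphertext = rc4(iv + base_key, data + icv)
    return iv + bytes([key_id & 0x03]) + ciphertext   # 4-byte IV/KeyID header

def wep_decapsulate(base_key: bytes, frame: bytes) -> bytes:
    """Recover and verify the payload of a frame built by wep_encapsulate."""
    iv, plaintext = frame[:3], rc4(frame[:3] + base_key, frame[4:])
    data, icv = plaintext[:-4], plaintext[-4:]
    assert icv == struct.pack('<I', zlib.crc32(data) & 0xFFFFFFFF), "bad ICV"
    return data
```

Since RC4 simply XORs a keystream into the data, decryption reuses `rc4` with the same per-packet key.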

How to Use WEP Parameters:

WEP data encryption is used when the wireless devices are configured to operate in Shared Key authentication mode. There are two shared key methods implemented in most commercially available products: 64-bit and 128-bit WEP data encryption. Before enabling WEP on an 802.11 network, you must first consider what type of encryption you require and the key size you want to use. Typically, there are three WEP encryption options available for 802.11 products:

• Do Not Use WEP: The 802.11 network does not encrypt data. For authentication purposes, the network uses Open System Authentication.

• Use WEP for Encryption: A transmitting 802.11 device encrypts the data portion of every packet it sends using a configured WEP key. The receiving 802.11 device decrypts the data using the same WEP key. For authentication purposes, the network uses Open System Authentication.

 • Use WEP for Authentication and Encryption: A transmitting 802.11 device encrypts the data portion of every packet it sends using a configured WEP key. The receiving 802.11 device decrypts the data using the same WEP key. For authentication purposes, the 802.11 network uses Shared Key Authentication.

The IEEE 802.11 standard defines the WEP base key size as 40 bits, so the per-packet key consists of 64 bits once it is combined with the IV. Many in the 802.11 community once believed that this small key size was a security problem, so some vendors modified their products to support a 104-bit base key as well. This difference in key length does not make any difference to the overall security: an attacker can compromise WEP's privacy goals with comparable effort regardless of the key size used. This is due to the vulnerability of the WEP construction, which will be discussed in the next section.

WEP Authentication

The 802.11 standard defines several services that govern how two 802.11 devices communicate. The following events must occur before an 802.11 station can communicate with an Ethernet network through an access point such as the one built in to the NETGEAR product:

1. Turn on the wireless station.
2. The station listens for messages from any access points that are in range.
3. The station finds a message from an access point that has a matching SSID.
4. The station sends an authentication request to the access point.
5. The access point authenticates the station.
6. The station sends an association request to the access point.
7. The access point associates with the station.
8. The station can now communicate with the Ethernet network through the access point.

An access point must authenticate a station before the station can associate with the access point or communicate with the network. The IEEE 802.11 standard defines two types of WEP authentication: Open System and Shared Key.

• Open System Authentication allows any device to join the network, assuming that the device SSID matches the access point SSID. Alternatively, the device can use the “ANY” SSID option to associate with any available access point within range, regardless of its SSID.

WPA Wireless Security

Wi-Fi Protected Access (WPA) is a specification of standards-based, interoperable security enhancements that increase the level of data protection and access control for existing and future wireless LAN systems. The IEEE introduced WEP as an optional security measure to secure 802.11 (Wi-Fi) WLANs, but inherent weaknesses in the standard soon became obvious. In response to this situation, the Wi-Fi Alliance announced a new security architecture in October 2002 that remedies the shortcomings of WEP. This standard, formerly known as Safe Secure Network (SSN), is designed to work with existing 802.11 products and offers forward compatibility with 802.11i, the new wireless security architecture being defined in the IEEE. WPA offers the following benefits:

• Enhanced data privacy
• Robust key management
• Data origin authentication
• Data integrity protection

Starting in August of 2003, all new Wi-Fi certified products had to support WPA, and all existing Wi-Fi certified products had one year to comply with the new standard or lose their Wi-Fi certification. NETGEAR has implemented WPA on client and access point products. As of August 2004, all Wi-Fi certified products must support WPA.

What are the Key Features of WPA Security?

The following security features are included in the WPA standard:

• WPA Authentication
• WPA Encryption Key Management
  – Temporal Key Integrity Protocol (TKIP)
  – Michael message integrity code (MIC)
  – AES Support
• Support for a Mixture of WPA and WEP Wireless Clients

These features are discussed below. WPA addresses most of the known WEP vulnerabilities and is primarily intended for wireless infrastructure networks as found in the enterprise. This infrastructure includes stations, access points, and authentication servers (typically Remote Authentication Dial-In User Service servers, called RADIUS servers). The RADIUS server holds (or has access to) user credentials (for example, user names and passwords) and authenticates wireless users before they gain access to the network. The strength of WPA comes from an integrated sequence of operations that encompass 802.1X/EAP authentication and sophisticated key management and encryption techniques. Its major operations include:

Network security capability determination. This occurs at the 802.11 level and is communicated through WPA information elements in Beacon, Probe Response, and (Re) Association Requests. Information in these elements includes the authentication method (802.1X or Pre-shared key) and the preferred cipher suite (WEP, TKIP, or AES, which is Advanced Encryption Standard). The primary information conveyed in the Beacon frames is the authentication method and the cipher suite. Possible authentication methods include 802.1X and Pre-shared key. Pre-shared key is an authentication method that uses a statically configured passphrase on both the stations and the access point. This removes the need for an authentication server, which in many home and small office environments is neither available nor desirable. Possible cipher suites include: WEP, TKIP, and AES. We say more about TKIP and AES when addressing data privacy below.

Authentication. EAP over 802.1X is used for authentication. Mutual authentication is gained by choosing an EAP type supporting this feature and is required by WPA. The 802.1X port access control prevents full access to the network until authentication completes. The 802.1X EAPOL-Key packets are used by WPA to distribute per-session keys to those stations successfully authenticated.

Weaknesses of WEP

WEP has undergone much scrutiny and criticism because it may be compromised. What makes WEP vulnerable? The major WEP flaws can be summarized into three categories.

No forgery protection

There is no forgery protection provided by WEP. Even without knowing the encryption key, an adversary can change 802.11 packets in arbitrary, undetectable ways, deliver data to unauthorized parties, and masquerade as an authorized user. Even worse, an adversary can also learn more about the encryption key with forgery attacks than with strictly passive attacks.

No protection against replays

WEP does not offer any protection against replays. An adversary can create forgeries without changing any data in an existing packet, simply by recording WEP packets and then retransmitting them later. Replay, a special type of forgery attack, can be used to derive information about the encryption key and the data it protects.

Reusing initialization vectors

By reusing initialization vectors, WEP enables an attacker to decrypt the encrypted data without the need to learn the encryption key or even resort to high-tech techniques. While often dismissed as too slow, a patient attacker can compromise the encryption of an entire network. Reports by a team at the University of California’s computer science department presented the insecurity of WEP, which exposes WLANs to several types of security breach. The ISAAC (Internet Security, Applications, Authentication and Cryptography) team, which released the report, identifies two types of weakness in WEP. The first weakness concerns the limitations of the initialization vector (IV). The value of the IV often depends on how the vendor chose to implement it, because the original 802.11 protocol did not specify how this value is derived. The second weakness concerns RC4's Integrity Check Value (ICV), a CRC-32 checksum that is used to verify whether the contents of a frame have been modified in transit. At the time of encryption, this value is added to the end of the frame. As the recipient decrypts the packet, the checksum is used to validate the data. Because the ICV is not cryptographically protected, however, it is theoretically possible to change the data payload as long as you can derive the appropriate bits to change in the ICV as well. This means data can be tampered with and falsified.
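The ICV weakness follows from CRC-32 being linear over XOR: for equal-length messages, crc(a ⊕ b) = crc(a) ⊕ crc(b) ⊕ crc(0…0). An attacker who flips bits in the encrypted payload can therefore compute exactly which bits to flip in the encrypted ICV so that the checksum still validates, without ever knowing the key. A quick check of the linearity property, with illustrative messages:

```python
import zlib

def crc(data: bytes) -> int:
    return zlib.crc32(data) & 0xFFFFFFFF

a = b'transfer $0100 to Alice'
b = b'transfer $9100 to Mal!!'          # same length, a few bytes changed
delta = bytes(x ^ y for x, y in zip(a, b))

# linearity holds up to the length-dependent constant crc(0...0)
print(crc(delta) == crc(a) ^ crc(b) ^ crc(bytes(len(a))))  # -> True
```

In the WEP setting, `delta` is the bit-flip mask applied to the encrypted payload, and `crc(delta) ^ crc(zeros)` gives the mask to apply to the encrypted ICV.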

 Practical Solutions for Securing WLAN

Changing Default SSID

The Service Set Identifier (SSID) is a unique identifier attached to the header of packets sent over a WLAN; it acts as a password when a mobile device tries to connect to a particular WLAN. The SSID differentiates one WLAN from another, so all access points and all devices attempting to connect to a specific WLAN must use the same SSID. In fact, it is the only security mechanism that the access point requires to enable association in the absence of activating optional security features. Not changing the default SSID is one of the most common security mistakes made by WLAN administrators. It is equivalent to leaving a default password in place.

Utilize VPN

A VPN is a much more comprehensive solution in that it authenticates users coming from an untrusted space and encrypts their communication so that someone listening cannot intercept it. In a typical wireless implementation, the wireless AP is placed behind the corporate firewall. This type of implementation opens up a big hole within the trusted network space. A secure method of implementing a wireless AP is to place it behind a VPN server. This type of implementation provides high security for the wireless network without adding significant overhead for the users. If there is more than one wireless AP in the organization, it is recommended to run them all into a common switch and then connect the VPN server to the same switch. The desktop users will then not need multiple VPN dial-up connections configured on their desktops. They will always authenticate to the same VPN server no matter which wireless AP they are associated with. Figure 5 shows a secure method of implementing a wireless AP.

Figure 5: Securing a wireless AP

Utilize Static IP

By default, most wireless LANs utilize DHCP (Dynamic Host Configuration Protocol) to assign IP addresses to user devices automatically and efficiently. A problem is that DHCP does not differentiate a legitimate user from a hacker. With a proper SSID, anyone implementing DHCP will obtain an IP address automatically and become a genuine node on the network. By disabling DHCP and assigning static IP addresses to all wireless users, you can minimize the possibility of a hacker obtaining a valid IP address. This limits their ability to access network services. On the other hand, someone can use an 802.11 packet analyzer to sniff the exchange of frames over the network and learn what IP addresses are in use. This helps the intruder guess an IP address that falls within the range of those in use. Thus, the use of static IP addresses is not foolproof, but at least it is a deterrent. Also keep in mind that the use of static IP addresses in larger networks is very cumbersome, which may prompt network managers to use DHCP to avoid support issues.

Access Point Placement

WLAN access points should be placed outside the firewall to prevent intruders from accessing corporate network resources. The firewall can be configured to enable access only by legitimate users based on MAC and IP addresses. However, this is by no means a final or perfect solution, because MAC and IP addresses can be spoofed, even though this makes things more difficult for a hacker.

Minimize radio wave propagation in non-user areas

Try orienting antennas to avoid covering areas outside the physically controlled boundaries of the facility. By steering clear of public areas, such as parking lots, lobbies, and adjacent offices, the ability of an intruder to participate on the wireless LAN can be significantly reduced. This will also minimize the impact of someone disabling the wireless LAN with jamming techniques.

LAN Security:

Hubs & Switches

LAN equipment, such as hubs, bridges, repeaters, routers and switches, will be kept in secure hub rooms. Hub rooms will be kept locked at all times. Access to hub rooms will be restricted to I.T. Department staff only. Other staff and contractors requiring access to hub rooms will notify the I.T. Department in advance so that the necessary supervision can be arranged.


Users must log out of their workstations when they leave them for any length of time. Alternatively, Windows workstations may be locked.

All unused workstations must be switched off outside working hours.


All network wiring will be fully documented.

All unused network points will be de-activated when not in use.

All network cables will be periodically scanned and readings recorded for future reference.

Users must not place or store any item on top of network cabling.

Redundant cabling schemes will be used where possible.

Monitoring Software

The use of LAN analyzer and packet sniffing software is restricted to the I.T. Department.

LAN analyzers and packet sniffers will be securely locked up when not in use.

Intrusion detection systems will be implemented to detect unauthorized access to the network.


All servers will be kept securely under lock and key.

Access to the system console and server disk/tape drives will be restricted to authorized I.T. Department staff only.

Electrical Security

All servers will be fitted with UPSs that also condition the power supply.

All hubs, bridges, repeaters, routers, switches and other critical network equipment will also be fitted with UPSs.

In the event of a mains power failure, the UPSs will have sufficient power to keep the network and servers running until the generator takes over.

Software will be installed on all servers to implement an orderly shutdown in the event of a total power failure.

All UPSs will be tested periodically.

Inventory Management

The I.T. Department will keep a full inventory of all computer equipment and software in use throughout the Company.

Computer hardware and software audits will be carried out periodically via the use of a desktop inventory package. These audits will be used to track unauthorized copies of software and unauthorized changes to hardware and software configurations.

New Standards for Improving WLAN Security

Apart from all of the actions for minimizing attacks on WLANs mentioned in the previous section, we will also look at some new standards that intend to improve the security of WLANs. There are two important standards that will be discussed in this paper: 802.1x and 802.11i.


One of the standards is 802.1x, which was originally designed for wired Ethernet networks. This standard is also part of the 802.11i standard that will be discussed later. The following discussion of 802.1x is divided into three parts, starting with the concept of the Point-to-Point Protocol (PPP), followed by the Extensible Authentication Protocol (EAP), and continuing with the understanding of 802.1x itself. IEEE 802.1x relates to EAP in that it is a standard for carrying EAP over a wired LAN or WLAN. There are four important entities that explain this standard.


The Point-to-Point Protocol (PPP) originally emerged as an encapsulation protocol for transporting IP traffic over point-to-point links. PPP also established a standard for the assignment and management of IP addresses, asynchronous (start/stop) and bit-oriented synchronous encapsulation, network protocol multiplexing, link configuration, link quality testing, error detection, and option negotiation for such capabilities as network-layer address negotiation and data-compression negotiation. By any measure, PPP is a good protocol. However, as PPP usage grew, people quickly found its limitations in terms of security. Most corporate networks want to do more than simple usernames and passwords for secure access. This led to the design of a new authentication protocol, called the Extensible Authentication Protocol (EAP).


The Extensible Authentication Protocol (EAP) is a general authentication protocol defined in IETF (Internet Engineering Task Force) standards. It was originally developed for use with PPP. It is an authentication protocol that provides a generalized framework for several authentication mechanisms [15]. These include Kerberos, public key, smart cards and one-time passwords. With a standardized EAP, interoperability and compatibility across authentication methods become simpler. For example, when a user dials a remote access server (RAS) and uses EAP as part of the PPP connection, the RAS does not need to know any of the details about the authentication system. Only the user and the authentication server have to be coordinated. By supporting EAP authentication, the RAS server does not actively participate in the authentication dialog. Instead, the RAS just re-packages EAP packets to hand off to a RADIUS server to make the actual authentication decision. How does EAP relate to 802.1x? The next section will explain the relation.

i. Authenticator

The Authenticator is the entity that requires the entity on the other end of the link to be authenticated. An example is a wireless access point.

ii. Supplicant

The Supplicant is the entity being authenticated by the Authenticator and desiring access to the services of the Authenticator.

iii. Port Access Entity (PAE)

The PAE is the protocol entity associated with a port. It may support the functionality of Authenticator, Supplicant or both.

iv. Authentication Server

The Authentication Server is an entity that provides authentication service to the Authenticator. It may be co-located with the Authenticator, but it is most likely an external server, typically a RADIUS (Remote Authentication Dial-In User Service) server. The Supplicant and Authentication Server are the major parts of 802.1x.

General topology

EAP messages are encapsulated in Ethernet LAN packets (EAPOL) to allow communications between the supplicant and the authenticator. The following are the most common modes of operation in EAPOL.

i. The authenticator sends an "EAP-Request/Identity" packet to the supplicant as soon as it detects that the link is active.

ii. Then, the supplicant sends an "EAP-Response/Identity" packet to the authenticator, which is then passed to the authentication (RADIUS) server.

iii. Next, the authentication server sends back a challenge to the authenticator, such as with a token password system. The authenticator unpacks this from IP, repackages it into EAPOL and sends it to the supplicant. Different authentication methods will vary this message and the total number of messages. EAP supports both client-only authentication and strong mutual authentication; only strong mutual authentication is considered appropriate for the wireless case.
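The exchange above can be sketched as a simple message transcript. This is a toy model only; the entity names and message strings are illustrative, not a real EAP implementation:

```python
def eapol_exchange():
    """Return the ordered messages of a simplified 802.1x/EAPOL authentication."""
    transcript = []
    # i. The authenticator detects an active link and asks for the identity.
    transcript.append(("authenticator -> supplicant", "EAP-Request/Identity"))
    # ii. The supplicant answers; the authenticator relays it to the RADIUS server.
    transcript.append(("supplicant -> authenticator", "EAP-Response/Identity"))
    transcript.append(("authenticator -> radius", "EAP-Response/Identity (relayed)"))
    # iii. The server challenges the supplicant via the authenticator,
    #      which re-packages the message into EAPOL.
    transcript.append(("radius -> authenticator", "EAP-Request/Challenge"))
    transcript.append(("authenticator -> supplicant", "EAP-Request/Challenge (in EAPOL)"))
    return transcript

for hop, msg in eapol_exchange():
    print(f"{hop}: {msg}")
```

Real deployments add further round trips depending on the EAP method chosen, as the text notes.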


In addition to the 802.1x standard created by the IEEE, one up-and-coming 802.11 specification, 802.11i, provides replacement technology for WEP security. 802.11i is still in the development and approval processes. In this paper, the key technical elements that have been defined by the specification will be discussed. While these elements might change, the information provided will give insight into some of the changes that 802.11i promises to deliver to enhance the security features provided in a WLAN system.

The 802.11i specification consists of three main pieces organized into two layers. On the upper layer is 802.1x, which has been discussed in the previous section. As used in 802.11i, 802.1x provides a framework for robust user authentication and encryption key distribution. On the lower layer are improved encryption algorithms, in the form of TKIP (Temporal Key Integrity Protocol) and CCMP (Counter mode with CBC-MAC Protocol). It is important to understand how all three of these pieces work together to form the security mechanisms of the 802.11i standard. Since the concept of 802.1x has been discussed in the previous section, the following section of this paper will only look at TKIP and CCMP. Both of these encryption protocols provide enhanced data integrity over WEP, with TKIP being targeted at legacy equipment, while CCMP is targeted at future WLAN equipment. However, a true 802.11i system uses either the TKIP or CCMP protocol for all equipment.



The Temporal Key Integrity Protocol (TKIP), which was initially referred to as WEP2, was designed to address all the known attacks and deficiencies in the WEP algorithm. According to 802.11 Planet [6], the TKIP security process begins with a 128-bit temporal key, which is shared among clients and access points. TKIP combines the temporal key with the client machine's MAC address and then adds a relatively large 16-octet initialization vector to produce the key that will encrypt the data. Similar to WEP, TKIP also uses RC4 to perform the encryption. However, TKIP changes the temporal key every 10,000 packets. This difference provides a dynamic distribution method that significantly enhances the security of the network. TKIP fixes weaknesses in WEP security, especially the reuse of encryption keys. The following are the four new algorithms, and their functions, that TKIP adds to WEP:

i. A cryptographic message integrity code, or MIC, called Michael, to defeat forgeries.

ii. A new IV sequencing discipline, to remove replay attacks from the attacker's arsenal.

iii. A per-packet key mixing function, to de-correlate the public IVs from weak keys.

iv. A re-keying mechanism, to provide fresh encryption and integrity keys, undoing the threat of attacks stemming from key reuse.
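The per-packet key idea can be illustrated with a short sketch. Note this is not the real TKIP mixing function (TKIP uses a specific two-phase S-box construction, not a hash); the demo key and MAC address are invented. It only demonstrates that combining the temporal key, the MAC address and an incrementing IV yields a different encryption key for every packet:

```python
import hashlib

def per_packet_key(temporal_key: bytes, mac: bytes, iv: int) -> bytes:
    """Toy per-packet key: mix temporal key, MAC address and IV (not real TKIP)."""
    iv_bytes = iv.to_bytes(16, "big")   # the text above describes a 16-octet IV
    return hashlib.sha256(temporal_key + mac + iv_bytes).digest()[:16]

temporal_key = bytes(16)                      # 128-bit temporal key (demo value)
mac = bytes.fromhex("0013a9001122")           # client MAC address (demo value)

# Each packet increments the IV, so every packet is encrypted under a new key,
# removing the key-reuse weakness of WEP.
k1 = per_packet_key(temporal_key, mac, 1)
k2 = per_packet_key(temporal_key, mac, 2)
print(k1 != k2)   # True: consecutive packets use different keys
```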


As explained previously, TKIP was designed to address deficiencies in WEP; however, TKIP is not viewed as a long-term solution for WLAN security. In addition to TKIP encryption, the 802.11i draft defines a new encryption method based on the Advanced Encryption Standard (AES). The AES algorithm is a symmetric block cipher that can encrypt and decrypt information. It is capable of using cryptographic keys of 128, 192 and 256 bits to encrypt and decrypt data in blocks of 128 bits. More robust than TKIP, the AES algorithm would replace WEP and RC4. AES-based encryption can be used in many different modes or algorithms. The mode that has been chosen for 802.11 is the counter mode with CBC-MAC protocol (CCMP). The counter mode delivers data privacy while the CBC-MAC delivers data integrity and authentication. Unlike TKIP, CCMP is mandatory for anyone implementing 802.11i.

TCP/IP & Internet Security

Permanent connections to the Internet will be via the means of a firewall to regulate network traffic.

Permanent connections to other external networks, for offsite processing etc., will be via the means of a firewall to regulate network traffic.

Where firewalls are used, a dual homed firewall (a device with more than one TCP/IP address) will be the preferred solution.

Network equipment will be configured to close inactive sessions.

Where modem pools or remote access servers are used, these will be situated on the DMZ or non-secure network side of the firewall.

Workstation access to the Internet will be via the Organization's proxy server and website content scanner.

All incoming e-mail will be scanned by the Organization’s e-mail content scanner.

Voice System Security

DISA port access (using inbound 0800 numbers) on the PBX will be protected by a secure password.

The maintenance port on the PBX will be protected with a secure password.

The default DISA and maintenance passwords on the PBX will be changed to user defined passwords.

Call accounting will be used to monitor access to the maintenance port, DISA ports and abnormal call patterns.

DISA ports will be turned off during non-working hours.

Internal and external call forwarding privileges will be separated, to prevent inbound calls being forwarded to an outside line.

The operator will endeavour to ensure that an outside call is not transferred to an outside line.

Use will be made of multilevel passwords and access authentication where available on the PBX.

Voice mail accounts will use a password with a minimum length of six digits.

The voice mail password should never match the last six digits of the phone number.

The caller to a voice mail account will be locked out after three attempts at password validation.

Dialing of calling-party-pays numbers will be prevented.

Telephone bills will be checked carefully to identify any misuse of the telephone system.
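The voice-mail password rules above (a minimum of six digits, not matching the last six digits of the phone number) can be checked with a short sketch; the function name and example numbers are invented for illustration:

```python
def valid_voicemail_password(password: str, phone_number: str) -> bool:
    """Policy sketch: at least six digits, and not the number's last six digits."""
    digits = "".join(c for c in phone_number if c.isdigit())
    if len(password) < 6 or not password.isdigit():
        return False
    return password != digits[-6:]

print(valid_voicemail_password("493817", "02-7158-493817"))  # False: matches the number
print(valid_voicemail_password("260149", "02-7158-493817"))  # True
print(valid_voicemail_password("1234", "02-7158-493817"))    # False: too short
```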

Tools for Protecting WLAN

There are some products that can minimize the security threats of WLAN such as:

Air Defense™

It is a commercial wireless LAN intrusion protection and management system that discovers network vulnerabilities, detects and protects a WLAN from intruders and attacks, and assists in the management of a WLAN. Air Defense also has the capability to discover vulnerabilities and threats in a WLAN, such as rogue APs and ad hoc networks. Apart from securing a WLAN, it provides management functionality that allows users to understand their network, monitor network performance and enforce network policies.

Isomair Wireless Sentry

This product from Isomair Ltd. automatically and continuously monitors the air space of the enterprise, using unique and sophisticated analysis technology to identify insecure access points, security threats and wireless network problems. It is a dedicated appliance employing an Intelligent Conveyor Engine (ICE) to passively monitor wireless networks for threats and inform the security managers when these occur. It is a completely automated system, centrally managed, and will integrate seamlessly with existing security infrastructure. No additional man-time is required to operate the system.

Wireless Security Auditor (WSA)

It is an IBM research prototype of an 802.11 wireless LAN security auditor, running on Linux on an iPAQ PDA (Personal Digital Assistant). WSA helps network administrators close any vulnerability by automatically auditing a wireless network for proper security configuration. While other 802.11 network analyzers such as Ethereal, Sniffer and Wampum aim at protocol experts who want to capture wireless packets for detailed analysis, WSA is intended for the more general audience of network installers and administrators, who want a way to easily and quickly verify the security configuration of their networks, without having to understand any of the details of the 802.11 protocols.

Wireless Channels

IEEE 802.11g/b wireless nodes communicate with each other using radio frequency signals in the ISM (Industrial, Scientific, and Medical) band between 2.4 GHz and 2.5 GHz. Neighboring channels are 5 MHz apart. However, due to the spread spectrum effect of the signals, a node sending signals using a particular channel will utilize frequency spectrum 12.5 MHz above and below the center channel frequency. As a result, two separate wireless networks using neighboring channels (for example, channel 1 and channel 2) in the same general vicinity will interfere with each other. Applying two channels that allow the maximum channel separation will decrease the amount of channel cross-talk and provide a noticeable performance increase over networks with minimal channel separation. The preferred channel separation between the channels in neighboring wireless networks is 25 MHz (five channels). This means that you can apply up to three different channels within your wireless network. In the United States, only 11 usable wireless channels are available, so we recommend that you start using channel 1, grow to use channel 6, and add channel 11 when necessary, because these three channels do not overlap.
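The channel arithmetic above can be sketched as follows, assuming the standard mapping in which channel 1 is centered at 2412 MHz (consistent with the 5 MHz spacing described):

```python
def channel_center_mhz(channel: int) -> int:
    """Center frequency of an 802.11b/g channel (channel 1 = 2412 MHz)."""
    return 2412 + (channel - 1) * 5

def channels_overlap(a: int, b: int, half_width_mhz: float = 12.5) -> bool:
    """Two channels interfere when their +/-12.5 MHz spreads overlap."""
    return abs(channel_center_mhz(a) - channel_center_mhz(b)) < 2 * half_width_mhz

# Channels 1, 6 and 11 are 25 MHz apart, so their spreads just avoid overlapping.
print(channels_overlap(1, 2))   # True: neighboring channels interfere
print(channels_overlap(1, 6))   # False: 25 MHz separation
```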

Wide Area Network Security

Wireless LAN’s will make use of the most secure encryption and   authentication facilities available

Users will not install their own wireless equipment under any circumstances.

Dial-in modems will not be used if at all possible. If a modem must be used dial-back modems should be used. A secure VPN tunnel is the preferred option.

Modems will not be used by users without first notifying the I.T. Department and obtaining their approval.

Where dial-in modems are used, the modem will be unplugged from the telephone network and the access software disabled when not in use.

Modems will only be used where necessary, in normal circumstances all communications should pass through the Organization’s router and firewall.

Where leased lines are used, the associated channel service units will be locked up to prevent access to their monitoring ports.

All bridges, routers and gateways will be kept locked up in secure areas.

Unnecessary protocols will be removed from routers.

The preferred method of connection to outside Organizations is by a secure VPN connection, using IPSEC or SSL.

All connections made to the Organization’s network by outside organizations will be logged.

WEP Wireless Security

The absence of a physical connection between nodes makes the wireless links vulnerable to eavesdropping and information theft. To provide a certain level of security, the IEEE 802.11 standard has defined two types of authentication methods, Open System and Shared Key. With Open System authentication, a wireless computer can join any network and receive any messages that are not encrypted. With Shared Key authentication, only those computers that possess the correct authentication key can join the network. By default, IEEE 802.11 wireless devices operate in an Open System network. Recently, Wi-Fi (the Wireless Ethernet Compatibility Alliance) developed Wi-Fi Protected Access (WPA), a new, strongly enhanced Wi-Fi security. WPA will soon be incorporated into the IEEE 802.11 standard. WEP (Wired Equivalent Privacy) is discussed below, and WPA is discussed later.

WEP Authentication

The 802.11 standard defines several services that govern how two 802.11 devices communicate. The following events must occur before an 802.11 station can communicate with an Ethernet network through an access point such as the one built in to the NETGEAR product:

1. Turn on the wireless station.
2. The station listens for messages from any access points that are in range.
3. The station finds a message from an access point that has a matching SSID.
4. The station sends an authentication request to the access point.
5. The access point authenticates the station.
6. The station sends an association request to the access point.
7. The access point associates with the station.
8. The station can now communicate with the Ethernet network through the access point.

An access point must authenticate a station before the station can associate with the access point or communicate with the network. The IEEE 802.11 standard defines two types of WEP authentication: Open System and Shared Key.

• Open System Authentication allows any device to join the network, assuming that the device SSID matches the access point SSID. Alternatively, the device can use the "ANY" SSID option to associate with any available access point within range, regardless of its SSID.
• Shared Key Authentication requires that the station and the access point have the same WEP key to authenticate.

These two authentication procedures are described below.
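The authentication step can be sketched as a small check covering both methods described above. The SSID values and keys are invented examples; a real station performs this over 802.11 management frames:

```python
def authenticate(station_ssid, ap_ssid, station_wep_key=None, ap_wep_key=None):
    """Open System: SSID must match (or the station uses 'ANY').
    Shared Key: the WEP keys must also match."""
    if station_ssid not in (ap_ssid, "ANY"):
        return False                         # step 3 fails: no matching SSID
    if ap_wep_key is not None and station_wep_key != ap_wep_key:
        return False                         # Shared Key check fails
    return True

print(authenticate("office-net", "office-net"))                  # True
print(authenticate("ANY", "office-net"))                         # True: any AP in range
print(authenticate("office-net", "office-net", "key1", "key2"))  # False: WEP keys differ
```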

How Does WPA Compare to WEP?

WEP is a data encryption method and is not intended as a user authentication mechanism. WPA user authentication is implemented using 802.1x and the Extensible Authentication Protocol (EAP). Support for 802.1x authentication is required in WPA. In the 802.11 standard, 802.1x authentication was optional. For details on EAP specifically, refer to IETF RFC 2284. With 802.11 WEP, all access points and client wireless adapters on a particular wireless LAN must use the same encryption key. A major problem with the 802.11 standard is that the keys are cumbersome to change. If you do not update the WEP keys often, an unauthorized person with a sniffing tool can monitor your network for less than a day and decode the encrypted messages. Products based on the 802.11 standard alone offer system administrators no effective method to update the keys. For 802.11, WEP encryption is optional. For WPA, encryption using the Temporal Key Integrity Protocol (TKIP) is required. TKIP replaces WEP with a new encryption algorithm that is stronger than the WEP algorithm, but that uses the calculation facilities present on existing wireless devices to perform encryption operations. TKIP provides important data encryption enhancements, including a per-packet key mixing function, a message integrity check (MIC) named Michael, an extended initialization vector (IV) with sequencing rules, and a re-keying mechanism. Through these enhancements, TKIP addresses all known WEP vulnerabilities.


WLAN vulnerabilities are mainly caused by WEP as its security protocol. However, these problems can be solved with new standards, such as 802.11i, which is planned to be released later this year. For the time being, WLAN users can protect their networks by practicing the suggested actions mentioned in this paper, based on the cost and the level of security that they wish. However, there will be no complete fix for the existing vulnerabilities. All in all, the very best way to secure a WLAN is to have the security knowledge, proper implementation, and continued maintenance.

How Does WPA Compare to IEEE 802.11i?

WPA is forward-compatible with the IEEE 802.11i security specification currently under development. WPA is a subset of the current 802.11i draft and uses certain pieces of the 802.11i draft that were ready to bring to market in 2003, such as 802.1x and TKIP. The main pieces of the 802.11i draft that are not included in WPA are secure IBSS (Ad-Hoc mode), secure fast handoff (for specialized 802.11 VoIP phones), as well as enhanced encryption protocols such as AES-CCMP. These features are either not yet ready for market or will require hardware upgrades to implement.

What are the Key Features of WPA Security?

The following security features are included in the WPA standard:

• WPA Authentication
• WPA Encryption Key Management
  – Temporal Key Integrity Protocol (TKIP)
  – Michael message integrity code (MIC)
  – AES Support
• Support for a Mixture of WPA and WEP Wireless Clients

These features are discussed below. WPA addresses most of the known WEP vulnerabilities and is primarily intended for wireless infrastructure networks as found in the enterprise. This infrastructure includes stations, access points, and authentication servers (typically Remote Authentication Dial-In User Service servers, called RADIUS servers). The RADIUS server holds (or has access to) user credentials (for example, user names and passwords) and authenticates wireless users before they gain access to the network. The strength of WPA comes from an integrated sequence of operations that encompass 802.1X/EAP authentication and sophisticated key management and encryption techniques. Its major operations include:

Network security capability determination.

This occurs at the 802.11 level and is communicated through WPA information elements in Beacon, Probe Response, and (Re) Association Requests. Information in these elements includes the authentication method (802.1X or Pre-shared key) and the preferred cipher suite (WEP, TKIP, or AES, which is Advanced Encryption Standard). The primary information conveyed in the Beacon frames is the authentication method and the cipher suite. Possible authentication methods include 802.1X and Pre-shared key. Pre-shared key is an authentication method that uses a statically configured passphrase on both the stations and the access point. This removes the need for an authentication server, which in many home and small office environments is neither available nor desirable. Possible cipher suites include: WEP, TKIP, and AES. We say more about TKIP and AES when addressing data privacy below.

• Authentication. EAP over 802.1X is used for authentication. Mutual authentication is gained by choosing an EAP type supporting this feature and is required by WPA. The 802.1X port access control prevents full access to the network until authentication completes. The 802.1X EAPOL-Key packets are used by WPA to distribute per-session keys to those stations successfully authenticated.


I was selected to work with the IT department at Sonagon Plaza (3rd Floor), West Panthapath, Dhanmondi, Dhaka-1215. I reported to the concerned manager and was sent to K. Kalam, who is the Admin Director of the China-Bangla Group.

This internship introduced me to the rules and regulations of an office, being attached to LAN issues and security. Most of the critical problems are handled by the LAN issues and security team together with a third party or local vendor in a joint venture. So the main duty of system operation is to solve the problem and supervise the vendor's work. The LAN issues and security team should be very careful about their work, because if a vendor makes any mistake the responsibility falls on them. I also did some work with the payment schedule and deed-of-agreement team, whose work is to handle the general problems.

Before my internship I did not have any practical knowledge about a real estate company and its work. The outcomes of this internship ranged from learning the very basic concepts of LAN issues and security solutions to analyzing the operational state of affairs; I got the opportunity to observe LAN fault and management procedures and also gathered knowledge about setup and maintenance.

The work environment of the office was very friendly. I did not feel like an outsider working there, because all of them treated me as one of them. Throughout this period of my internship I asked a lot of questions of many of them, and they answered generously. In particular, my supervisor helped me a lot and gave me ideas about my future working life. At last I want to say that I am very fortunate to have had such a great opportunity to work with such a good organization.


The general idea of WLAN was basically to provide a wireless network infrastructure comparable to the wired Ethernet networks in use. It has since evolved, and is still currently evolving very rapidly, towards offering fast connection capabilities within larger areas. However, this extension of physical boundaries provides expanded access to both authorized and unauthorized users, which makes it inherently less secure than wired networks.

WLAN vulnerabilities are mainly caused by WEP as its security protocol. However, these problems can be solved with the new standards, such as 802.11i, which is planned to be released later this year. For the time being, WLAN users can protect their networks by practicing the suggested actions that are mentioned in this paper based on the cost and the level of security that they wish.



Study on Multiple Antenna Techniques


This chapter gives an overview of cellular networks and their development, and an introduction to WiMAX. The WiMAX system includes many standards. A basic WiMAX architecture supports both point-to-point and point-to-multipoint connections. Advanced antenna techniques are used to improve system performance; advanced antenna systems support a variety of multi-antenna solutions, such as transmit diversity, beamforming, and spatial multiplexing, all of which are used in WiMAX. A brief overview of modulation techniques is also given. For simulation we use Binary Phase Shift Keying (BPSK); BPSK uses two phases, which are separated by 180 degrees, and is also known as two-phase shift keying.
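The BPSK scheme mentioned above (two phases 180 degrees apart, i.e. the baseband symbols +1 and -1) can be sketched in a few lines; the noise offset used here is an arbitrary demo value, not a channel model:

```python
def bpsk_modulate(bits):
    """Bit 1 -> phase 0 (symbol +1); bit 0 -> phase 180 degrees (symbol -1)."""
    return [1.0 if b else -1.0 for b in bits]

def bpsk_demodulate(symbols):
    """Decide each bit by the sign of the received sample."""
    return [1 if s >= 0 else 0 for s in symbols]

bits = [1, 0, 1, 1, 0]
# A small additive offset stands in for channel noise in this toy example.
received = [s + 0.2 for s in bpsk_modulate(bits)]
print(bpsk_demodulate(received) == bits)   # True: all bits recovered
```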

Historical Background   

Cellular communications has experienced explosive growth in the past two decades. Today millions of people around the world use cellular phones. Cellular phones allow a person to make or receive a call from almost anywhere. Likewise, a person is allowed to continue the phone conversation while on the move. Cellular communications is supported by an infrastructure called a cellular network, which integrates cellular phones into the public switched telephone network. Cellular networks are divided into different generations; currently four generations are available.

In the 1970s, the First Generation, or 1G, mobile networks were introduced. These systems were referred to as cellular, which was later shortened to "cell", due to the method by which the signals were handed off between towers. Cell phone signals were based on analog system transmissions, and 1G devices were comparatively less heavy and expensive than prior devices. Some of the most popular standards deployed for 1G systems were the Advanced Mobile Phone System (AMPS), Total Access Communication System (TACS) and Nordic Mobile Telephone (NMT). The global mobile phone market grew from 30 to 50 percent annually with the appearance of the 1G network, and the number of subscribers worldwide reached approximately 20 million by 1990.

The Second Generation (2G) cellular networks started in the 1990s. The first system was introduced in Europe to provide facilities for roaming between different countries. One system in the second generation is the Global System for Mobile communication (GSM). The GSM standard uses Time Division Multiple Access (TDMA) combined with slow frequency hopping. The Personal Communication Services (PCS) use the IS-136 and IS-95 standards. The IS-136 standard uses TDMA, while IS-95 uses Code Division Multiple Access (CDMA). GSM and PCS IS-136 use a data rate of 9.6 kbps. The 2G systems are Non Line of Sight (NLOS).

The 3G revolution allowed mobile telephone customers to use audio, graphics and video applications. Over 3G it is possible to watch streaming video and engage in video telephony, although such activities are severely constrained by network bottlenecks and over-usage.

One of the main objectives behind 3G was to standardize on a single global network protocol instead of the different standards adopted previously in Europe, the U.S. and other regions. 3G phone speeds deliver up to 2 Mbps, but only under the best conditions and in stationary mode. Moving at high speed can drop 3G bandwidth to a mere 145 Kbps.

3G cellular services, also known as UMTS, sustain higher data rates and open the way to Internet style applications. 3G technology supports both packet and circuit switched data transmission, and a single set of standards can be used worldwide with compatibility over a variety of mobile devices. UMTS delivers the first possibility of global roaming, with potential access to the Internet from any location.

The current generation of mobile telephony, 4G, has been developed with the aim of providing transmission rates up to 20 Mbps while simultaneously accommodating Quality of Service (QoS) features. QoS will allow you and your telephone carrier to prioritize traffic according to the type of application using your bandwidth and adjust between your different telephone needs at a moment's notice.

Only now are we beginning to see the potential of 4G applications. They are expected to include high-performance streaming of multimedia content. The deployment of 4G networks will also improve video conferencing functionality. It is also anticipated that 4G networks will deliver wider bandwidth to vehicles and devices moving at high speeds within the network area.

WiMAX (Worldwide Interoperability of Microwave Access)

In the mid-1990s, telecommunication companies developed the idea of using fixed broadband wireless networks as a potential last-mile solution to provide an alternate means to deliver Internet connectivity to businesses and individuals. Their aim was to produce a network with the speed, capacity, and reliability of a hardwired network, while maintaining the flexibility, simplicity, and low costs of a wireless network. This technology would also act as a versatile system for corporate or institutional backhaul distribution networks and would attempt to compete with the leading Internet carriers.

The huge potential of this flexible, low-cost network drew much attention to two types of fixed wireless broadband technologies: Local Multipoint Distribution Service (LMDS) and Multichannel Multipoint Distribution Service (MMDS). LMDS was primarily intended to speed up and bridge Metropolitan Area Networks in larger corporations and on university campuses. Fixed WiMAX (IEEE 802.16d) and Mobile WiMAX (IEEE 802.16e) are the commonly used variants.

WiMAX IEEE Standards

The IEEE 802.16-2001 standard was published in September 2001. It has a frequency range of 10-66 GHz to provide fixed broadband wireless connectivity. Single-carrier modulation is used in the physical layer and Time Division Multiplexing (TDM) in the MAC layer. The standard supports different Quality of Service (QoS) techniques and operates under LOS conditions.


IEEE 802.16a

This standard is an amendment of the basic IEEE 802.16. The frequency range is 2-11 GHz, including both licensed and license-free bands. NLOS communication is possible at frequencies below 11 GHz. Orthogonal Frequency Division Multiplexing (OFDM) is used as the modulation technique.


IEEE 802.16c

The standard IEEE 802.16c was published in January 2003 as an amendment to IEEE 802.16a. It uses the frequency range of 10-66 GHz.

IEEE 802.16d-2004

The IEEE 802.16d-2004 is also called fixed WiMAX. It was designed as a fixed Broadband Wireless Access (BWA) system to support multiple services and uses the 10-66 GHz frequency band. The bandwidth of IEEE 802.16-2004 is 1.25 MHz.


IEEE 802.16e

The IEEE 802.16e is also called the Mobile BWA system. This standard standardizes two layers, the physical layer (PHY) and the Medium Access Control (MAC) layer. The 802.16e uses Scalable Orthogonal Frequency Division Multiple Access (SOFDMA). It provides higher-speed Internet access and can be used for Voice over IP (VoIP) service. VoIP technologies may provide new services, such as voice chatting and multimedia chatting. The IEEE 802.16e provides support for MIMO antennas for good NLOS characteristics and Hybrid Automatic Repeat Request (HARQ) for good error-correction performance.

802.16 Protocol Stack: The 802.16 standard covers the MAC and PHY layers of the Open System Interconnection (OSI) reference model. The MAC layer is responsible for determining which Subscriber Station (SS) can access the network. The MAC layer is subdivided into three sub-layers: the service-specific convergence sublayer (CS), the MAC Common Part Sublayer (CPS), and the security sublayer. The CS transforms the incoming data into MAC data packets and maps the external network information into IEEE 802.16 MAC information.

The CPS provides support for access control functionality, bandwidth allocation, and connection establishment. Control, data, and management information are exchanged between the MAC CPS and the PHY layer. The security sublayer controls authentication, key exchange, and encryption. The PHY layer is responsible for data transmission and reception using the 10-66 GHz frequency range.

                     802.16                802.16a                              802.16e
Spectrum             10-66 GHz             2-11 GHz                             < 6 GHz
Configuration        Line of Sight         Non-Line of Sight                    Non-Line of Sight
Bit Rate             32 to 134 Mbps        ≤ 70 or 100 Mbps                     Up to 15 Mbps
                     (28 MHz channel)      (20 MHz channel)
Modulation           QPSK, 16-QAM          256 sub-carrier OFDM using QPSK,     Same as 802.16a
                                           16-QAM, 64-QAM, 256-QAM
Mobility             Fixed                 Fixed                                ≤ 75 MPH
Channel Bandwidth    20, 25, 28 MHz        Selectable 1.25 to 20 MHz            5 MHz (planned)
Typical Cell Radius  1-3 miles             3-5 miles                            1-3 miles
Completed            Dec. 2001             Jan. 2003                            2nd half of 2005

Table: WiMAX standards comparison

Technical Information

Technical information about WiMAX is given below.

MAC layer

In the MAC layer, all SSs pass data through a wireless access point. The 802.16 MAC layer uses a scheduling algorithm in which a subscriber station needs to complete only one initial entry into the network. After network entry, the subscriber station is allocated an access slot by the base station. The time slots assigned to a subscriber station can enlarge and contract. In addition to being stable under overload and over-subscription, the scheduling algorithm can also be more bandwidth-efficient. It allows the base station to control QoS parameters by balancing the time slots according to the needs of the subscriber stations.

 Physical layer

The PHY layer is responsible for slot allocation. Slots have one sub-channel and one, two, or three OFDM symbols, depending upon which channel scheme is used. The channel schemes Frequency Division Duplex (FDD) and Time Division Duplex (TDD) are used. The PHY layer supports MIMO antennas and provides NLOS operation. In the WiMAX PHY layer the data rate varies based on the operating parameters; the OFDM guard time and oversampling rate have a significant impact. By using multiple antennas at the transmitter and receiver we can further increase the peak rate in multipath channels.

Physical Layer Interfaces: PHY Layer of IEEE 802.16 has the following interfaces:

1. WirelessMAN-SC2: The WirelessMAN-SC2 uses a single-carrier modulation technique and has a frequency range of 10-66 GHz.

2. WirelessMAN-OFDM: It is based on OFDM modulation with a 256-point Fast Fourier Transform (FFT) and TDMA channel access, providing NLOS transmission in the 2-11 GHz frequency band.

3. WirelessMAN-OFDMA: It uses the licensed 2-11 GHz frequency band. It supports NLOS operation using a 2048-point FFT.

4. WirelessHUMAN: The WirelessHUMAN uses license-free frequency bands below 11 GHz. It can also use any air interface in the 2-11 GHz frequency band.

PHY Interface       Duplexing Techniques   Frequency Band   Modulation       Propagation Mode
WirelessMAN-SC2     FDD and TDD            10-66 GHz        Single carrier   LOS
WirelessMAN-OFDMA   TDD and FDD            2-11 GHz         2048-point FFT   NLOS

Table: WiMAX PHY Layer Interface Characteristics

WiMAX Network Architectures

The MAC layer supports two modes, mesh and PMP (Point to Multipoint). In PMP mode, the BS (Base Station) communicates with several SSs and shares uplink and downlink channel information; all the SSs need a clear LOS to the BS. In mesh mode, the network consists of the BS, Relay Stations (RSs), Subscriber Stations, and Mobile Stations (MSs). Mesh mode supports multi-hop operation: stations access the Internet through the BS, and RSs forward traffic to other RSs.

OFDM Basics

OFDM belongs to the multi-carrier modulation techniques that provide high data rates. In high data rate systems the symbol length is short, so the delay spread can be greater than the symbol length. In NLOS systems the delay spread will also be large, and the wireless broadband system will suffer Inter-Symbol Interference (ISI). To overcome this problem, multi-carrier modulation divides the transmitted bit stream into lower-rate sub-streams. The individual sub-streams are sent over parallel sub-channels. The data rate of a sub-channel is less than the total data rate, so the sub-channel bandwidth is less than the total system bandwidth. Thus the ISI of each sub-channel is small.
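The sub-channel idea can be sketched numerically. The following minimal sketch (not from the thesis; the sub-carrier count, prefix length, and channel taps are illustrative assumptions) uses an IFFT to place symbols on orthogonal sub-carriers and a cyclic prefix longer than the delay spread, so each sub-channel needs only a one-tap equalizer:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub = 64    # number of sub-carriers (illustrative)
cp_len = 16   # cyclic prefix length; must exceed the channel delay spread

# One OFDM symbol: one QPSK symbol per sub-carrier
bits = rng.integers(0, 2, 2 * n_sub)
qpsk = (1 - 2.0 * bits[0::2]) + 1j * (1 - 2.0 * bits[1::2])

# Modulation: the IFFT places the symbols on orthogonal sub-carriers,
# then the last cp_len samples are prepended as a cyclic prefix.
time_sig = np.fft.ifft(qpsk)
tx = np.concatenate([time_sig[-cp_len:], time_sig])

# A 3-tap multipath channel (delay spread shorter than the prefix), no noise
h = np.array([0.8, 0.5, 0.3])
rx = np.convolve(tx, h)[: len(tx)]

# Demodulation: drop the prefix, FFT back, then equalize each sub-carrier
# with a single complex division (a one-tap equalizer per sub-channel)
rx_freq = np.fft.fft(rx[cp_len:])
est = rx_freq / np.fft.fft(h, n_sub)

print(np.allclose(est, qpsk))  # True: the sub-channels do not interfere
```

The cyclic prefix turns the linear channel convolution into a circular one, which the FFT diagonalizes; this is why each sub-channel sees no ISI.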


OFDM Features

The narrowband sub-channel signals are less sensitive to ISI and frequency-selective fading.

In OFDM, the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) operations ensure that the sub-channels do not interfere with each other.

OFDM provides robustness against burst error.

OFDM supports less complex equalization compared with single-carrier systems, and effective robustness is gained in multipath environments.


Mobility Support

Mobility support is available in the IEEE 802.16e WiMAX standard. In BWA, four scenarios support mobility.


Nomadic access

In this scenario the user is allowed to take a fixed subscriber station and reconnect from a different point of attachment.


Portability

This scenario supports portable devices such as PC cards.

 Simple mobility

In this scenario, the subscriber moves at speeds up to 60 km/h, and interruptions during handoff are less than 1 s.

Full mobility

The subscriber moves at speeds up to 120 km/h and packet loss is less than 1 percent. Seamless handoff is supported and latency is less than 50 ms.


Quality of Service (QoS)

QoS is a measure of how successfully the signals are transmitted from the BS. The following four parameters are used to describe QoS.


Throughput

The PHY layer is a pipe between the BS and the client terminal in WiMAX. The active clients operate in parallel and share the overall system bandwidth.


Latency

Latency is the end-to-end packet transmission time, and it arises along the physical layer chain. In IEEE 802.16 systems the latency is almost 5 ms. Latency is affected by how packet queuing, the different QoS protocols, and user characterization are implemented.


Jitter

Jitter is the variation of latency across different packets and can be limited by packet buffering. A mobile terminal has little jitter control in wireless networks, so it falls on the base station to ensure that different packets are handled at their appropriate priority.


Reliability

Reliability leads to more complications in wireless networks than in fixed-line ones. The problem arises specifically in mobile networks, where the radio wave propagates to a mobile terminal with a small antenna and low power in urban areas.

WiMAX Features for Performance Enhancement

WiMAX supports advanced features to improve performance, including multiple antenna techniques, hybrid-ARQ, and enhanced frequency reuse.

Advanced Antenna Systems

The WiMAX standard supports multi-antenna solutions to improve system performance. Advanced antenna systems support a variety of multi-antenna solutions, such as transmit diversity, beamforming, and spatial multiplexing, all of which are used in WiMAX.

 Diversity Schemes

In telecommunications, a diversity scheme refers to a method for improving the reliability of a message signal by using two or more communication channels with different characteristics. Diversity plays an important role in combating fading and co-channel interference and avoiding error bursts. It is based on the fact that individual channels experience different levels of fading and interference, so multiple versions of the same signal may be transmitted and/or received and combined in the receiver.

Beam Forming

Beamforming is a signal processing technique used in sensor arrays for directional signal transmission or reception. This is achieved by combining elements in the array in such a way that signals at a particular angle experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmit and receive sides to achieve spatial selectivity. The improvement compared with omnidirectional reception/transmission is known as the receive/transmit gain (or loss). Beamforming can be used for both radio and sound waves. It has found numerous applications in radar, sonar, seismology, wireless communications, radio astronomy, speech, acoustics, and biomedicine. Adaptive beamforming is used to detect and estimate the signal of interest at the output of a sensor array by means of data-adaptive spatial filtering and interference rejection.

In this technique the antenna elements focus the transmitted beam in the direction of the receiver to improve the received Signal to Interference plus Noise Ratio (SINR). Beamforming provides greater coverage area, capacity, and reliability, and supports both uplink and downlink.

Spatial Multiplexing

In spatial multiplexing, multiple independent streams are transmitted across multiple antennas. Spatial multiplexing is used to increase the data rate or capacity of the system. WiMAX also supports spatial multiplexing in the uplink. Coding across multiple users in the uplink to support spatial multiplexing is called multi-user collaborative spatial multiplexing.

Modulation Techniques

There are three main classes of modulation schemes, which are used to transmit the data.

Amplitude Shift Keying (ASK)

Frequency Shift Keying (FSK)

Phase Shift Keying (PSK)

All these techniques are used in digital communication to convey data. In phase shift keying, the phase is changed to represent the data signal. There are two fundamental ways in which phase shift keying is used.

Binary Phase Shift Keying (BPSK) 

BPSK is sometimes called Phase Reversal Keying; it is the simplest form of phase shift keying. BPSK uses two phases, separated by 180 degrees, and is also known as two-phase shift keying. This modulation is the most robust of all PSK modulation schemes due to its low probability of error. However, it gives lower data rates compared with other modulation schemes, so for higher data rates we use QPSK and 16-QAM.

 s1(t) = √(2Eb/Tb) cos(2πfct + π) = −√(2Eb/Tb) cos(2πfct)   for binary 0

 s2(t) = √(2Eb/Tb) cos(2πfct)   for binary 1

where fc is the carrier frequency.

So, the signal can be represented by a single basis function

 Ф(t) = √(2/Tb) cos(2πfct)

where 1 is represented by √Eb Ф(t) and 0 is represented by −√Eb Ф(t).
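As a numerical illustration of these expressions (a baseband sketch with Eb = 1 and an illustrative Eb/N0 of 4 dB, not taken from the thesis), the simulated BER of BPSK in AWGN can be compared against the theoretical value 0.5 erfc(√(Eb/N0)):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n_bits = 200_000
EbN0_dB = 4                      # illustrative operating point
EbN0 = 10 ** (EbN0_dB / 10)

# BPSK baseband: bit 1 -> +sqrt(Eb), bit 0 -> -sqrt(Eb), with Eb = 1
bits = rng.integers(0, 2, n_bits)
symbols = 2.0 * bits - 1.0

# AWGN: per-sample noise standard deviation sqrt(N0/2)
received = symbols + rng.normal(0, np.sqrt(1 / (2 * EbN0)), n_bits)

# Hard decision on the sign, then count bit errors
decided = (received > 0).astype(int)
ber_sim = np.mean(decided != bits)

# Theoretical BER for coherent BPSK: Q(sqrt(2 Eb/N0)) = 0.5 erfc(sqrt(Eb/N0))
ber_theory = 0.5 * math.erfc(math.sqrt(EbN0))
print(ber_sim, ber_theory)       # both close to 1.25e-2 at 4 dB
```

The agreement between the simulated and theoretical curves is the usual sanity check before moving on to fading channels.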

Related Work

There is some related work, such as BER for BPSK modulation with 1 Tx, 2 Rx Alamouti STBC (Rayleigh channel). There are three receive diversity schemes: Selection Combining, Equal Gain Combining, and Maximal Ratio Combining. All three approaches use an antenna array at the receiver to improve demodulation performance. There are a few receiver structures for the 2×2 MIMO channel, such as Zero Forcing (ZF) equalization, Minimum Mean Square Error (MMSE) equalization, Zero Forcing equalization with Successive Interference Cancellation (ZF-SIC), and ZF-SIC with optimal ordering. But Minimum Mean Square Error (MMSE) equalization with optimally ordered Successive Interference Cancellation gave the best performance.

Objective of this Thesis

In MIMO, the information is spread across multiple antennas at both the transmitter and the receiver. We discuss a popular transmit diversity scheme called Alamouti Space Time Block Coding (STBC), assuming the channel is a flat-fading Rayleigh multipath channel and the modulation is BPSK. The capacity of a MIMO channel with nt transmit and nr receive antennas is analyzed; the results relate capacity (bit/s/Hz) to SNR (dB). In the simulation we use an initial SNR of 2 dB and obtain capacity results for 2×2, 3×3, and 4×4 MIMO systems. We also consider a MIMO receiver structure, Maximum Likelihood (ML) decoding, which gives even better performance, again assuming a flat-fading Rayleigh multipath channel and BPSK modulation.

Introduction to this thesis paper

describes multiple antenna techniques such as diversity schemes and smart antennas. An overview discussion of Multiple Input Multiple Output (MIMO) systems is also included.

explains the Alamouti Space Time Block Code, which is used for MIMO spatial multiplexing; the Rayleigh fading channel, since we assume the channel is a flat-fading Rayleigh multipath channel; and Alamouti STBC with two receive antennas.

illustrates the capacity of MIMO vs. Signal to Noise Ratio (SNR) (dB).

contains the limitations of the work, discussion, and suggestions for future work.

Multiple Antenna Techniques


The Multiple Input Multiple Output (MIMO) technique uses an array of antennas at both the transmitting and receiving ends. Using MIMO techniques we can obtain better wireless communication compared with other techniques. The electromagnetic waves transmitted from the antennas bounce around the environment, and the receiver receives these electromagnetic waves from multiple directions with varying delays. The delay varies since the different paths have different lengths. Line of Sight (LOS) between the Base Station (BS) and the Subscriber Station (SS) is often very difficult to achieve, because the SS may be located indoors.

Multi-path Environment

 Multiple antenna techniques

Multiple antenna techniques are divided into three subclasses.

1.  Diversity Schemes

2.  Smart Antenna Systems (SAS)

3.  MIMO Systems

Diversity Scheme

A diversity scheme is a technique used to improve signal strength. In a diversity scheme, two or more communication channels are used. Diversity plays an important role in reducing fading and eliminating error bursts. In diversity schemes, multiple versions of the signal are transmitted and received, and a Forward Error Correction (FEC) code is added to different parts of the message. Different classes of diversity are given below.

Time Diversity

Multiple versions of the same signal are transmitted at different time instants. An FEC code is added to the message, and the message is then spread in time before transmission. In other words, the elements of a radio signal are transmitted at the same moment in time but arrive at the receiver at different moments, because the signals use different physical paths; they are exploited through receiving antenna technologies such as rake receivers and multiple input multiple output (MIMO).

Frequency diversity

In frequency diversity, the signal is transmitted on different frequency channels, for example using spread spectrum. OFDM modulation with sub-carriers and FEC is also used.

 Space diversity

In space diversity the signal is sent over different propagation paths. In wireless communication, transmit diversity is used to transmit the signal and reception diversity is used to receive the signal. If the antennas are much more than one wavelength away from each other, this is called macro diversity; if the antenna spacing is on the order of one wavelength, it is called micro diversity.

Polarization diversity

In this technique multiple versions of signal are transmitted and received. It is used to minimize the effects of selective fading of the horizontal and vertical components of a radio signal. It is usually accomplished through the use of separate vertically and horizontally polarized receiving antennas.

Multi user diversity

Multi-user diversity supports opportunistic user scheduling at either the transmitter or receiver end. Using scheduling, the transmitter selects the best user from the candidates according to the quality of each channel.

Smart Antenna System

The smart antenna system is also called an adaptive antenna system (AAS). In a smart antenna system, signal processing techniques are used to attain channel knowledge and steer the beam towards the desired subscriber while steering nulls towards interferers. Null steering cancels the undesired portion of the signal and reduces the gain of the radiation pattern of the adaptive array antenna in that direction. The process of combining the radiated signals and focusing them in the desired direction is called beamforming.

Smart Antenna

Multiple Input Multiple Output (MIMO) system

In the MIMO technique, the BS and SS both have a minimum of two transmitters and receivers per channel, as shown in figure 2.2.

Generally in 802.16, for diversity schemes, the following three techniques are considered [3].

1.  Space time coding

2.  Antenna Switching

3.  Maximum ratio combining

Multiple Input

Space time coding technique 

The 802.16 standard supports the Alamouti scheme. In space time coding, the information is sent on two transmit antennas, consecutively in time; that is, the information is transmitted in both time and space. A space-time code (STC) is a method employed to improve the reliability of data transmission in wireless communication systems using multiple transmit antennas. STCs rely on transmitting multiple, redundant copies of a data stream to the receiver in the hope that at least some of them may survive the physical path between transmission and reception in a good enough state to allow reliable decoding.

Space time codes may be split into two main types:

Space–time trellis codes (STTCs) distribute a trellis code over multiple antennas and multiple time-slots and provide both coding gain and diversity gain.

Space–time block codes (STBCs) act on a block of data at once (similarly to block codes) and provide only diversity gain, but are much less complex in implementation terms than STTCs.

STC may be further subdivided according to whether the receiver knows the channel impairments. In coherent STC, the receiver knows the channel impairments through training or some other form of estimation. These codes have been studied more widely because they are less complex than their non-coherent counterparts. In non-coherent STC, the receiver does not know the channel impairments but knows the statistics of the channel. In differential space-time codes, neither the channel nor the statistics of the channel are available.

Antenna Switching

Antenna switching is a technique used to capture diversity gains. Its purpose is not to combine signals from the multiple available antennas but simply to select the single antenna with the best channel gain at any given time. This is applicable to both downlink and uplink transmission.

Maximum Ratio Combining (MRC)

Maximum Ratio Combining (MRC) is a diversity technique that estimates the channel characteristics for multiple antennas. MRC obtains diversity and array gain but does not involve spatial multiplexing in any way. In maximum ratio combining, the signals of the individual channels are added together, and the gain of each channel is made proportional to the Root Mean Square (RMS) signal level and inversely proportional to the mean-square noise level of that channel. Each channel has a different proportionality constant; the scheme is also known as ratio-squared combining. [7]
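A minimal sketch of MRC with one transmit and two receive antennas over independent flat Rayleigh branches (the Eb/N0 value and sample count are illustrative assumptions, not from the text): with equal noise power on both branches, weighting each branch by the conjugate of its channel gain realizes the gain proportionality described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_rx = 100_000, 2
EbN0 = 10 ** (10 / 10)    # 10 dB, illustrative

# BPSK symbols sent through two independent flat Rayleigh branches
x = 2.0 * rng.integers(0, 2, n) - 1.0
h = (rng.normal(size=(n_rx, n)) + 1j * rng.normal(size=(n_rx, n))) / np.sqrt(2)
noise = (rng.normal(size=(n_rx, n)) + 1j * rng.normal(size=(n_rx, n))) * np.sqrt(1 / (2 * EbN0))
y = h * x + noise

# MRC: weight each branch by the conjugate of its channel gain (gain
# proportional to the signal level and, with equal branch noise powers,
# inversely proportional to the noise level), then sum and decide.
combined = np.sum(np.conj(h) * y, axis=0)
decided = np.where(combined.real > 0, 1.0, -1.0)
ber = np.mean(decided != x)
print(ber)   # far below the single-branch Rayleigh BER at the same Eb/N0
```

The combined SNR is the sum of the branch SNRs, which is what gives MRC its diversity and array gain.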


Multiple antenna techniques are essential for improving performance in the telecommunications field; nowadays it is hard to imagine data communication without them. The Multiple Input Multiple Output (MIMO) technique uses an array of antennas at both the transmitting and receiving ends. Diversity schemes, smart antennas, and MIMO are the multiple antenna techniques, and MIMO is the best of the three. In space time coding, the information is sent on two transmit antennas. In maximum ratio combining, the signals of the individual channels are added together, with each channel's gain proportional to its Root Mean Square signal level and inversely proportional to its mean-square noise level.

 ALAMOUTI Space Time Block Code for MIMO System


In order that MIMO spatial multiplexing can be utilized, it is necessary to add coding to the different channels so that the receiver can detect the correct data.

There are various forms of terminology used, including Space-Time Block Code (STBC), MIMO precoding, MIMO coding, and Alamouti codes. Space-time block codes are used in MIMO systems to enable the transmission of multiple copies of a data stream across a number of antennas and to exploit the various received versions of the data to improve the reliability of data transfer. Space-time coding combines all the copies of the received signal in an optimal way to extract as much information from each of them as possible. Space time block coding uses both spatial and temporal diversity and in this way enables significant gains to be made. Space-time coding involves the transmission of multiple copies of the data. This helps to compensate for channel problems such as fading and thermal noise. Although there is redundancy in the data, some copies may arrive less corrupted at the receiver.

When using space-time block coding, the data stream is encoded in blocks prior to transmission. These data blocks are then distributed among the multiple antennas (which are spaced apart to decorrelate the transmission paths) and the data is also spaced across time.

A space time block code is usually represented by a matrix in which each row represents a time slot and each column represents one antenna's transmissions over time:

     [ s11  s12  …  s1,nT ]
 S = [ s21  s22  …  s2,nT ]
     [  ⋮    ⋮        ⋮   ]
     [ sT1  sT2  …  sT,nT ]

Within this matrix, sij is the modulated symbol to be transmitted in time slot i from antenna j. There are T time slots, nT transmit antennas, and nR receive antennas. This block is usually considered to be of "length" T.

MIMO Alamouti coding

A particularly elegant scheme for MIMO coding was developed by Alamouti. The associated codes are often called MIMO Alamouti codes or just Alamouti codes. The MIMO Alamouti scheme is an ingenious transmit diversity scheme for two transmit antennas that does not require transmit channel knowledge. The MIMO Alamouti code is a simple space time block code that he developed in 1998. The Alamouti code is a so-called Space-Time Block Code (STBC). A block code is a code that operates on a "block" of data at a time, and the output depends only on the current input bits. There are other codes, such as convolutional codes, whose output depends on the current input and also on previous inputs. These codes may not necessarily produce the same output for a given input, depending on what the previous input bits were. The main reason for using a block code is that it typically requires much less processing power to decode a block code than a convolutional code.

ALAMOUTI Space Time Block Code

In a space time block code, the output depends only on the current input bits. In convolutional codes, the output depends on the current input bits and on previous inputs, so a convolutional code may not produce the same output for a given input. A block code requires less processing power to decode than a convolutional code.

The Alamouti coding is described by the following matrix, where Y is the encoder output, x1 and x2 are the input symbols, and "*" denotes the complex conjugate:

 Y = [ x1  −x2* ]
     [ x2   x1* ]

The figure is a block diagram of the transmitter module in a MIMO system using the Alamouti code. The binary bits enter a modulator and are converted to symbols, each represented by a complex number. Such a symbol can be transmitted directly in a single-antenna, Single Input Single Output (SISO), system. In a MIMO system the complex symbols are fed into the Alamouti encoder, which maps the symbols onto the transmitters using the matrix given above. In this matrix, rows represent the transmit antennas and columns represent time. Each element of the matrix tells what symbol is to be transmitted from a particular antenna. The Alamouti code works with pairs of symbols at a time, and it takes two time periods to transmit the two symbols.
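The mapping performed by the Alamouti encoder can be sketched as follows (the function name is our own; the layout follows the convention above, rows = antennas, columns = time slots):

```python
import numpy as np

def alamouti_encode(symbols):
    """Map pairs of symbols onto 2 antennas over 2 time slots.

    Returns an array of shape (2, len(symbols)): rows are antennas,
    columns are time slots, matching the matrix convention in the text.
    """
    x1, x2 = symbols[0::2], symbols[1::2]
    tx = np.empty((2, symbols.size), dtype=complex)
    tx[0, 0::2], tx[1, 0::2] = x1, x2                      # slot 1: x1, x2
    tx[0, 1::2], tx[1, 1::2] = -np.conj(x2), np.conj(x1)   # slot 2: -x2*, x1*
    return tx

# Example with one pair of symbols:
# antenna 1 sends x1 then -x2*; antenna 2 sends x2 then x1*
s = np.array([1 + 1j, 1 - 1j])
print(alamouti_encode(s))
```

Each pair of input symbols occupies two time slots, so the code transmits at the full rate of one symbol per slot.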

Alamouti space

Rayleigh Fading Channel

The Rayleigh fading model is particularly useful in scenarios where the signal may be considered to be scattered between the transmitter and receiver. In this form of scenario there is no single signal path that dominates and a statistical approach is required to the analysis of the overall nature of the radio communications channel.

Rayleigh fading is a model that can be used to describe the form of fading that occurs when multi path propagation exists. In any terrestrial environment a radio signal will travel via a number of different paths from the transmitter to the receiver. The most obvious path is the direct, or line of sight path.

However, there will be very many objects around the direct path. These objects may serve to reflect, refract, or diffract the signal. As a result, there are many other paths by which the signal may reach the receiver.

When the signals reach the receiver, the overall signal is a combination of all the signals that have reached the receiver via the multitude of different paths that are available. These signals will all sum together, the phase of the signal being important. Dependent upon the way in which these signals sum together, the signal will vary in strength. If they were all in phase with each other, they would all add together. However this is not normally the case, as some will be in phase and others out of phase, depending upon the various path lengths, and therefore some will tend to add to the overall signal, whereas others will subtract.

The Rayleigh fading model applies to tropospheric and ionospheric signal propagation. It is most applicable when there is no distinct dominant LOS path between the transmitter and receiver. In this regard, Jakes introduced a model for Rayleigh fading based on summing sinusoids. The Jakes model works equally well whether a single-path channel is being modeled or a multipath frequency-selective channel is required. The Jakes model also popularized the Doppler spectrum associated with Rayleigh fading, and as a result this Doppler spectrum is often termed the Jakes spectrum.

Rayleigh fading is a reasonable model when there are many objects in the environment that scatter the radio signal before it arrives at the receiver. The central limit theorem holds that, if there is sufficiently much scatter, the channel impulse response will be well-modeled as a Gaussian process irrespective of the distribution of the individual components. If there is no dominant component to the scatter, then such a process will have zero mean and phase evenly distributed between 0 and 2π radians. The envelope of the channel response will therefore be Rayleigh distributed. Calling this random variable R, it will have the probability density function

 pR(r) = (2r/Ω) exp(−r²/Ω),  r ≥ 0

where Ω = E(R²).
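This distribution can be checked numerically: summing two i.i.d. zero-mean Gaussian processes as real and imaginary parts gives an envelope whose second moment matches Ω and whose mean matches √(πΩ)/2 (the sample size and seed below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
sigma = 1 / np.sqrt(2)   # per-component std, chosen so Omega = E(R^2) = 1

# Real and imaginary parts: i.i.d. zero-mean Gaussian processes
z = rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)
r = np.abs(z)            # the envelope is Rayleigh distributed

omega = np.mean(r ** 2)  # estimate of Omega = E(R^2)
mean_r = np.mean(r)      # for a Rayleigh envelope, E(R) = sqrt(pi * Omega) / 2
print(omega, mean_r)     # close to 1 and 0.8862 respectively
```

This is exactly how the flat Rayleigh channel taps are generated in the BER simulations later in the thesis.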

Often, the gain and phase elements of a channel's distortion are conveniently represented as a complex number. In this case, Rayleigh fading is exhibited by the assumption that the real and imaginary parts of the response are modeled by independent and identically distributed zero-mean Gaussian processes, so that the amplitude of the response is the modulus of the sum of two such processes.

Alamouti STBC with two receive antenna

The principle of space time block coding with 2 transmit antennas and one receive antenna was explained above for the Alamouti STBC. With two receive antennas, the system can be modeled as shown in the figure below.

The received signal in the first time slot is

 y1¹ = h11 x1 + h12 x2 + n1¹
 y2¹ = h21 x1 + h22 x2 + n2¹

Assuming that the channel remains constant for the second time slot, the received signal in the second time slot is

 y1² = −h11 x2* + h12 x1* + n1²
 y2² = −h21 x2* + h22 x1* + n2²

where

 y1¹, y2¹ are the received information at time slot 1 on receive antennas 1 and 2 respectively,
 y1², y2² are the received information at time slot 2 on receive antennas 1 and 2 respectively,
 hij is the channel from the ith receive antenna to the jth transmit antenna,
 x1, x2 are the transmitted symbols,
 n1¹, n2¹ are the noise at time slot 1 on receive antennas 1 and 2 respectively, and
 n1², n2² are the noise at time slot 2 on receive antennas 1 and 2 respectively.

Combining the equations at time slots 1 and 2,

 [ y1¹  ]   [ h11    h12  ]            [ n1¹  ]
 [ y1²* ] = [ h12*  −h11* ]  [ x1 ]  + [ n1²* ]
 [ y2¹  ]   [ h21    h22  ]  [ x2 ]    [ n2¹  ]
 [ y2²* ]   [ h22*  −h21* ]            [ n2²* ]

Let us define

 H = [ h11    h12  ]
     [ h12*  −h11* ]
     [ h21    h22  ]
     [ h22*  −h21* ]

To solve for [x1, x2]ᵀ, we need the pseudo-inverse of H. The term

 HᴴH = (|h11|² + |h12|² + |h21|² + |h22|²) I₂

so the estimate of the transmitted symbols is (HᴴH)⁻¹ Hᴴ y.

Simulation Model

1. Generate a random binary sequence of +1's and −1's.
2. Group them into pairs of two symbols.
3. Code them per the Alamouti space time code.
4. Multiply the symbols with the channel and then add white Gaussian noise.
5. Equalize the received symbols.
6. Perform hard-decision decoding and count the bit errors.
7. Repeat for multiple values of Eb/N0 and plot the simulation and theoretical results.
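The steps above can be sketched in code for the 2-transmit, 1-receive BPSK case over a flat Rayleigh channel (the Eb/N0 value and bit count are illustrative assumptions; the equalizer is the standard Alamouti combiner):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits = 100_000               # must be even; symbols are processed in pairs
EbN0 = 10 ** (10 / 10)         # 10 dB, illustrative

# Steps 1-2: random +/-1 sequence, grouped into pairs (x1, x2)
bits = rng.integers(0, 2, n_bits)
x = 2.0 * bits - 1.0
x1, x2 = x[0::2], x[1::2]

# Steps 3-4: Alamouti over a flat Rayleigh channel that stays constant for
# two slots; total transmit power is split between the two antennas.
cplx = lambda m: rng.normal(size=m) + 1j * rng.normal(size=m)
h1, h2 = cplx(n_bits // 2) / np.sqrt(2), cplx(n_bits // 2) / np.sqrt(2)
n_std = np.sqrt(1 / (2 * EbN0))
y1 = (h1 * x1 + h2 * x2) / np.sqrt(2) + n_std * cplx(n_bits // 2)                     # slot 1
y2 = (-h1 * np.conj(x2) + h2 * np.conj(x1)) / np.sqrt(2) + n_std * cplx(n_bits // 2)  # slot 2

# Step 5: equalize with the Alamouti combiner
x1_hat = np.conj(h1) * y1 + h2 * np.conj(y2)
x2_hat = np.conj(h2) * y1 - h1 * np.conj(y2)

# Step 6: hard decision and bit-error count
est = np.empty(n_bits)
est[0::2] = np.where(x1_hat.real > 0, 1.0, -1.0)
est[1::2] = np.where(x2_hat.real > 0, 1.0, -1.0)
ber = np.mean(est != x)
print(ber)   # small at 10 dB, reflecting the diversity order of 2
```

Sweeping EbN0 over a range of values and recording `ber` at each point reproduces the kind of BER-vs-Eb/N0 curve discussed in the observations below.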

Observation

Alamouti Space Time Block Coding with 2 transmitters and 1 receiver has around 3 dB poorer performance than the 1-transmitter, 2-receiver Maximal Ratio Combining case. In Figure 3.3 the theoretical Alamouti SNR is approximately 24 dB, while in the simulation the SNR decreases to approximately 11 dB. Compared with the BER plot for nTx=1, nRx=2 Maximal Ratio Combining, the two-receive-antenna Alamouti Space Time Block Coding has around 5 dB better performance.


With 2 transmitters and 2 receivers, the BER performance is much better than in the 1-transmitter, 2-receiver MRC case. The effective channel, concatenating the information from 2 receive antennas over two symbols, results in a diversity order of 4. In general, with m receive antennas, the diversity order of the 2-transmit-antenna Alamouti STBC is 2m.

ALAMOUTI Space Time Block Code for MIMO System


This chapter gives an overview of channel capacities under different scenarios. The single-input single-output (SISO) wireless channel capacity is considered first, then multi-channel capacity studies are inspected, and finally MIMO capacity under different conditions is considered. A multi-input multi-output communication system consists of the following entities: s, the input symbols to be transmitted; Nt, the number of transmit antennas; Nr, the number of receive antennas; y, the received symbol vector; H, the Nr x Nt MIMO channel matrix; and n, noise with covariance matrix E{nnH} = N0 INr.

Capacity in AWGN

Capacity studies in communication started with Claude Shannon's pioneering theorems. He defined capacity as the maximum mutual information between the input and output of a communication channel. He then stated his coding theorem: a code exists that can achieve a data rate close to capacity with a negligible probability of error. Shannon's studies were related to the AWGN channel and wireline channels. These channels do not fade the signal. For wireless transmission, the maximum supportable capacity is expected to be smaller than that of wireline channels, due to the multi-path and fading effects of wireless communication channels. Capacity studies were initially done for flat-fading channels.

The capacity of the AWGN channel, given the bandwidth B and SNR γ, is given by the well-known formula

C = B log2(1 + γ)

Shannon capacity can be considered an upper limit for real systems. In a wireless communication channel, the fading information is important to know at the transmitter or receiver side for better communication.
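The AWGN capacity formula above is straightforward to evaluate; a minimal Python sketch, with the bandwidth and SNR values chosen purely for illustration:

```python
import math

def awgn_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity of the AWGN channel: C = B * log2(1 + SNR), in bit/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a 1 MHz channel at 20 dB SNR (gamma = 100)
print(awgn_capacity(1e6, 100) / 1e6, "Mbit/s")  # about 6.66 Mbit/s
```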

Capacity of Multi-channel Communications

Consider a Multi-Input Multi-Output communication system consisting of the following identities.

 s input signal to be transmitted,

Nt the number of transmit antennas,

Nr the number of receiver antennas

y the received signal vector,

H the Nr x Nt MIMO channel matrix,

n noise with covariance matrix, E{nnH}=N0INr


The relation between input and output is given by

y = sqrt(Es/Nt) H s + n

where Es is the average transmit signal energy. If we choose Ts = 1, then Es turns out to be the transmit power.

Deterministic MIMO channel capacity

Telatar took on the role of Shannon for MIMO channel capacity studies; he defined the deterministic MIMO channel capacity as

C = max over Rss with tr(Rss) = Nt of log2 det(INr + (Es/(Nt N0)) H Rss HH)

where Rss is the covariance matrix of the input vector s.


Channel knowledge available only at receiver

The channel capacity in the absence of channel knowledge at the transmitter is obtained by setting Rss = INt:

C = log2 det(INr + (Es/(Nt N0)) H HH)

Inserting HHH = QDQH in the above equation, an equivalent and simpler expression is obtained as in (13):

C = ∑i=1..r log2(1 + (Es/(Nt N0)) λi)


where r is the rank of the channel and the λi are the positive eigenvalues of HHH.

In the case of a normalized orthogonal channel satisfying Nr = Nt = N, the capacity of the MIMO channel is given as

                 C = N log2 (1+ (Es/No))

That is, the capacity of the MIMO channel is N times that of the SISO channel.
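The eigenvalue form of the capacity and the orthogonal-channel result above can be checked numerically. A sketch in Python (NumPy), where the orthogonal channel is built from a random orthogonal matrix for illustration:

```python
import numpy as np

def mimo_capacity_rx_csi(H, es_no):
    """C = sum_i log2(1 + (Es/(Nt*N0)) * lambda_i), with lambda_i the
    eigenvalues of H H^H -- channel known only at the receiver."""
    nt = H.shape[1]
    lam = np.linalg.eigvalsh(H @ H.conj().T)
    return float(np.sum(np.log2(1 + (es_no / nt) * np.maximum(lam, 0.0))))

# Normalized orthogonal channel with Nr = Nt = N: H H^H = N * I,
# built here from a random orthogonal matrix scaled by sqrt(N)
N, es_no = 4, 10.0
Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((N, N)))
H = np.sqrt(N) * Q
print(mimo_capacity_rx_csi(H, es_no))   # equals N * log2(1 + Es/No)
print(N * np.log2(1 + es_no))
```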

Channel knowledge available at the transmitter side:

The SVD of H yields H = UΣVH. Inserting this into (11) and rearranging the parameters, we obtain a set of parallel channels,

y'i = σi s'i + n'i,  i = 1..r,

where r is the rank of HHH and the σi are the singular values of H.

The capacity of the MIMO channel is then the sum of the capacities of these parallel SISO channels.

Considering the transmit power constraint, the capacity becomes a maximization of this sum over the powers allocated to the individual channels.

The capacity maximization problem can be solved using Lagrangian methods, leading to the following result.


The optimal power allocation to the individual channels is found through an algorithm known as the water-pouring (water-filling) algorithm.
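The water-pouring allocation can be sketched as follows: each sub-channel k is filled with power Pk = D - 1/λk when that quantity is positive, with the common level D chosen so the powers sum to the total budget. A minimal Python sketch, assuming the eigenvalues are given (the example values are illustrative only):

```python
import numpy as np

def water_filling(eigvals, p_total):
    """Water-pouring: fill each sub-channel up to a common level D,
    P_k = D - 1/lambda_k when positive, else P_k = 0, with sum(P_k) = p_total."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]   # strongest first
    inv = 1.0 / lam                                         # ascending 1/lambda
    for k in range(lam.size, 0, -1):          # try the k strongest sub-channels
        level = (p_total + inv[:k].sum()) / k # candidate common water level D
        if level > inv[k - 1]:                # weakest used channel stays above water
            p = np.maximum(level - inv, 0.0)
            p[k:] = 0.0
            return lam, p

lam, p = water_filling([4.0, 1.0, 0.1], p_total=1.0)
print(p)                              # strongest mode gets the most power
print(np.sum(np.log2(1 + lam * p)))  # resulting capacity in bit/s/Hz
```

Note how the weakest mode (λ = 0.1) receives no power at all: its "floor" 1/λ = 10 lies above the water level, which is exactly the Pk = 0 case described in the text.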

As can be seen from these formulas, the capacity of the MIMO channel increases when channel knowledge is available at the transmitter, compared to the case in which only the receiver has the channel information.

Capacity of the multiple-input single-output (MISO) and single-input multiple-output (SIMO) channels

Random MIMO channel capacity

Since the channel is random, the capacity is random as well. In order to understand the capacity of fading channels, some definitions are needed: ergodic capacity, outage capacity, and outage probability.

Ergodic capacity is the expected value of the capacity over the distribution of the channel elements. It can be thought of as the mean information rate.

Outage probability is the probability that, given a transmission rate R and a transmit power constraint, the channel capacity cannot support reliable communication:

Pout = P(C ≤ R)
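The outage probability Pout = P(C ≤ R) can be estimated by Monte Carlo simulation. A Python sketch (NumPy), assuming an i.i.d. Rayleigh channel known only at the receiver; the rate and SNR values are hypothetical example numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

def outage_probability(rate_r, snr, nt, nr, trials=10_000):
    """Monte Carlo estimate of Pout = P(C <= R) for an i.i.d. Rayleigh
    MIMO channel with channel knowledge at the receiver only."""
    outages = 0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt)) +
             1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        lam = np.linalg.eigvalsh(H @ H.conj().T)
        c = float(np.sum(np.log2(1 + (snr / nt) * np.maximum(lam, 0.0))))
        if c <= rate_r:
            outages += 1
    return outages / trials

# Hypothetical numbers: target rate 4 bit/s/Hz at 10 dB SNR (snr = 10)
print("1x1:", outage_probability(4.0, 10.0, 1, 1))
print("2x2:", outage_probability(4.0, 10.0, 2, 2))
```

Adding antennas adds diversity, so for the same target rate the 2x2 channel is in outage far less often than the 1x1 channel.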

Outage Capacity:

Given an outage probability, the maximum information rate that can be supported by the communication channel is called the outage capacity of the channel.

The ergodic capacity when the channel is known only to the receiver, and when it is known to the transmitter, are the expected values of the corresponding deterministic capacities of the previous section, taken over the channel distribution. A lower bound for the capacity is defined in [1].

MIMO channel capacity in the presence of antenna correlation effects

When there is correlation between antennas at the receiver or transmitter side, the channel matrix is expressed as

H = Rr1/2 Hw Rt1/2

where Rr and Rt are positive definite matrices describing the correlation between the antenna elements at the receiver and transmitter sides, and Hw is the uncorrelated (i.i.d.) channel matrix.

At high SNR, the capacity of the correlated MIMO channel can be written as the capacity of the i.i.d. channel plus log2 det(Rr) + log2 det(Rt); since these determinants are at most one for normalized correlation matrices, antenna correlation reduces the capacity.

Frequency selective MIMO channel capacity

If the channel is frequency selective, then it is divided into frequency-flat sub-channels and the capacity of each sub-channel is calculated using the frequency-flat capacity formulas. The capacity of the frequency-selective channel is the sum of the capacities of the individual frequency-flat channels. When channel knowledge is available at the transmitter side, the water-pouring algorithm distributes the power over the channel modes; in frequency-selective channels, power must be distributed across both space and frequency for more reliable communication. For deterministic frequency-selective channels the capacity is given by this sum over the flat sub-channels.


If the channel is random, the capacity of the frequency-selective MIMO channel is defined by taking the expectation of this sum over the channel distribution.


Mutual Information and Shannon Capacity

Channel capacity was established by Claude Shannon in the 1940s, using the mathematical theory of communication. The capacity of a channel, denoted by C, is the maximum rate at which reliable communication can be performed, without any constraints on transmitter and receiver complexity. Shannon showed that for any rate R < C, there exist rate-R channel codes with arbitrarily small block error probability.

Thus, for any rate R < C and any desired non-zero probability of error ρe, there exists a rate-R code that achieves ρe. The code may have a very long block length, and its encoding and decoding complexity may be extremely large. The required block length may increase as the desired ρe is decreased or the rate R is increased towards C. Shannon also showed that codes operating at rates R > C cannot achieve an arbitrarily small error rate: the error probability of a code operating above capacity is bounded away from zero. Therefore, the Shannon channel capacity is truly the fundamental limit to communication.

Capacity of MIMO system

A MIMO system consists of multiple transmit and receive antennas interconnected by multiple transmission paths. MIMO increases the capacity of a system by utilizing multiple antennas at both the transmitter and receiver without increasing the bandwidth.

In the situation where the channel is known at both the transmitter (Tx) and receiver (Rx) and is used to compute the optimum weights, the power gain in the kth sub-channel is given by the kth eigenvalue, i.e., the SNR for the kth sub-channel equals

SNRk = Pk λk / σ2N

where Pk is the power assigned to the kth sub-channel, λk is the kth eigenvalue and σ2N is the noise power. For simplicity, it is assumed that σ2N = 1. According to Shannon, the maximum capacity of K parallel sub-channels equals

C = ∑k=1..K log2(1 + Pk λk)


Given the set of eigenvalues {λk}, the power Pk allocated to each sub-channel k is determined to maximize the capacity using Gallager's water-filling theorem, such that each sub-channel is filled up to a common level D, i.e.

Pk + 1/λk = D

with a constraint on the total Tx power such that

∑k Pk = PTX

where PTX is the total transmitted power. This means that the sub-channel with the highest gain is allocated the largest amount of power. In the case where 1/λk > D, Pk = 0.

When the uniform power allocation scheme is employed, the power Pk is adjusted according to

P1 = ⋯ = PK

Thus, in the situation where the channel is unknown, a uniform distribution of the power over the antennas is applicable, so the power is equally distributed between the N elements of the array at the Tx, i.e.,

Pn = PTX/N, n = 1,…,N

Capacity Analysis using Shannon Capacity

As discussed above, a MIMO system uses multiple antennas at both the transmitter (Tx) and receiver (Rx) to increase capacity without increasing the bandwidth. When the channel is known at both ends and is used to compute the optimum weights, the SNR of the kth sub-channel equals Pk λk (with σ2N = 1).


The capacity of the MIMO system is given as follows:

C = log2 det(INr + (SNR/Nt) H HH)

By implementing the above equation using MATLAB, the following result is obtained.

Figure 4.3: Capacity Analysis of MIMO

Observation

Figure 4.3 shows an analysis of the capacity of a system having multiple transmitters and receivers. The capacity of the MIMO system is plotted against SNR in dB. It is clear that with an increase in the number of antennas on both sides the capacity increases linearly, i.e. with nt=4 and nr=4 we achieve the highest capacity among the simulated MIMO systems.


A MIMO system transmits two or more data streams in the same channel at the same time. MIMO systems are also used to evaluate the capacity of a system using N transmit and M receive antennas. Increasing the number of antennas on either side of the MIMO system has a similar effect in raising the capacity.
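The capacity-versus-antennas behaviour described above can be reproduced with a short Monte Carlo average of the log-det formula. A Python sketch (NumPy), assuming i.i.d. Rayleigh channels; the thesis's own Figure 4.3 was produced in MATLAB:

```python
import numpy as np

rng = np.random.default_rng(2)

def ergodic_capacity(nt, nr, snr_db, trials=2_000):
    """Average of C = log2 det(I + (SNR/Nt) H H^H) over i.i.d. Rayleigh channels."""
    snr = 10 ** (snr_db / 10)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt)) +
             1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        # slogdet avoids overflow and returns log|det| directly (natural log)
        sign, logdet = np.linalg.slogdet(np.eye(nr) + (snr / nt) * (H @ H.conj().T))
        total += logdet / np.log(2)
    return total / trials

for nt, nr in [(2, 2), (2, 3), (3, 2), (4, 4)]:
    print(f"nt={nt}, nr={nr}: {ergodic_capacity(nt, nr, 20.0):.2f} bit/s/Hz")
```

The averages grow roughly in proportion to min(nt, nr), with nt=4, nr=4 clearly the largest, matching the ordering reported for Figure 4.3.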

Discussion and Suggestion for future work


Modern wireless systems require high data rates. MIMO techniques are physical-layer based and an essential part of IEEE 802.16e-2005. The simulation result with 2 transmit and 2 receive antennas using the Alamouti STBC is better than the theoretical value with 2 transmit antennas and 1 receive antenna using the Alamouti code. The MRC case with 1 transmitter and 2 receivers is much better than the theoretical 1-transmit, 1-receive case. This is because the effective channel, concatenating the information from 2 receive antennas over two symbols, results in a diversity order of 4; with m receive antennas, the diversity order of the 2-transmit-antenna Alamouti STBC is 2m. Moreover, the MIMO capacity results show that as the number of antennas on both sides increases, the capacity increases linearly. Figure 4.3 shows that nt=4 and nr=4 achieves the highest capacity, compared with nt=2, nr=3 and nt=3, nr=2 respectively. Increasing the number of antennas at either side of a MIMO system has the same effect of raising the capacity. MATLAB 6.5 was used for the simulations.

Suggestion for future work

Multiple antennas are indeed a very vast topic, and myriad scopes exist for research. This thesis is rather an introductory treatment of multiple antennas and an overview and performance analysis of MIMO. There are three main techniques for MIMO; this thesis works with Alamouti space-time block coding. The BER performance for BPSK modulation with nTx, nRx Alamouti STBC can be improved. In future, the simulated bit error rate results can be tested and verified practically. Further research can help make the system more efficient and reliable in performance.


Wireless cellular networks are usually interference-limited, and different data streams on different antennas are equivalent to more users in the system. If Ntx ≤ Nrx, multiple antennas at the receiver allow separation of all desired data streams; if Ntx > Nrx, the receiver cannot separate all of the desired data streams. The capacity of MIMO with increasing numbers of transmitters and receivers can be tested using MATLAB simulations; here 4 transmitters and 4 receivers are used for simulation.

Multiple Antenna System


Design and Implementation of Battery Charge Controller


This is the final report for the design of a charge controller for a solar system using the IC SG3524. The circuit controls the charge flowing from the solar panel to the battery, and is connected between the two. A charging limit for the battery is selected by the user. If the battery loses charge while supplying the load under normal conditions, it is charged from the solar panel through the controller circuit until the user-selected limit is reached. If the battery is fully charged, the controller circuit isolates the battery from the solar panel. This report covers the construction and electrical aspects of the design, and also includes test procedures and reliability methods to assure a successful design, with an economic analysis of the design covered as well.
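The connect/isolate behaviour described above is a simple hysteresis rule, which can be sketched in software for clarity. This is only an illustration of the control logic, not the SG3524 hardware implementation, and the voltage thresholds are hypothetical values, not taken from the actual design:

```python
def charge_controller_state(battery_v, charging, v_full=14.4, v_resume=13.2):
    """Hysteresis charge control: charge the battery from the panel until the
    user-selected full-charge voltage is reached, then isolate it until the
    voltage drops back to a resume level. Thresholds are illustrative only."""
    if charging and battery_v >= v_full:
        return False          # fully charged: isolate battery from panel
    if not charging and battery_v <= v_resume:
        return True           # discharged under load: reconnect the panel
    return charging           # otherwise keep the current state (hysteresis)

# Walk through a charge/discharge cycle
state = True
for v in [12.8, 13.9, 14.5, 14.0, 13.0]:
    state = charge_controller_state(v, state)
    print(v, "->", "charging" if state else "isolated")
```

The gap between the two thresholds prevents the controller from rapidly switching (chattering) when the battery voltage hovers near the full-charge limit.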


Electricity is an essential foundation for the economic growth of a country and constitutes one of the vital infrastructural inputs in socio-economic development. The world faces a surge in demand for electricity driven by such powerful forces as population growth, extensive urbanization, industrialization and the rise in the standard of living.

Bangladesh has 160 million people in a land mass of 147,570 sq km. In 1971, just 3% of Bangladesh's population had access to electricity. Today that number has increased to around 50% of the population, still one of the lowest in the world, but access often amounts to just a few hours each day. Bangladesh has the lowest per-capita consumption of commercial energy in South Asia, and there is a significant gap between supply and demand. Bangladesh's power system depends on fossil fuels supplied by both private-sector and state-owned plants. After system losses, the country's installed capacity can generate 3,900-4,300 megawatts of electricity per day; however, daily demand is near 6,000 megawatts. In general, rapid industrialization and urbanization have propelled the increase in demand for energy by 10% per year. What further exacerbates Bangladesh's energy problems is the fact that the country's power generation plants are dated and may need to be shut down sooner rather than later.

There was no institutional framework for renewable energy before 2008; therefore a renewable energy policy was adopted by the government. According to the policy, an institution, the Sustainable & Renewable Energy Development Authority (SREDA), was to be established as a focal point for the promotion and development of sustainable energy, renewable energy, energy efficiency and energy conservation. Establishment of SREDA is still in process; the Power Division is to facilitate the development of renewable energy until SREDA is formed.

While the power sector in Bangladesh has witnessed many success stories in the last couple of years, the road ahead is dotted with innumerable challenges resulting from the gaps between what is planned and what the power sector has been able to deliver. There is no doubt that the demand for electricity is increasing rapidly with the improvement of living standards, the increase of agricultural production, the progress of industry, and the overall development of the country.

Power Generation Scenario in Bangladesh

A severe power crisis compelled the Government to enter into contractual agreements for high-cost temporary solutions, such as rental power and small IPPs, on an emergency basis, much of it diesel or liquid-fuel based. This has imposed tremendous fiscal pressure. With a power sector almost entirely dependent on natural-gas-fired generation (89.22%), the country is confronting simultaneous shortages of natural gas and electricity. Nearly 400-800 MW of power could not be obtained from the power plants due to shortage of gas supply. Other fuels for generating low-cost base-load energy, such as coal, or renewable sources like hydropower, are not readily available, and the Government has no option but to pursue fuel diversity for power generation.

When the present Government assumed charge, power generation was 3200-3400 MW against a national demand of 5200 MW. In its election manifesto, the government had declared specific power generation commitments of 5000 MW by 2011 and 7000 MW by 2013.

Overview of Electricity over the Last Couple of Years

To achieve this commitment, in spite of the major deterrents of the energy crisis and gas supply shortage, the government has taken several initiatives to generate 6000 MW by 2011, 10,000 MW by 2013 and 15,000 MW by 2016, far beyond the commitment in the election manifesto. 2944 MW of power (as of January 2012) has already been added to the grid within three years. The government has already developed the Power System Master Plan 2010. According to the Master Plan, the forecast demand will be 19,000 MW in 2021 and 34,000 MW in 2030. To meet this demand, the generation capacity should be 39,000 MW in 2030. The plan suggests a fuel-mix option: domestic coal 30%, imported coal 20%, natural gas (including LNG) 25%, liquid fuel 5%, and nuclear, renewable energy and power import 20%. In line with the Power System Master Plan 2010, an interim generation plan up to 2016 has been prepared, as follows:

Table: Plants Commissioned During 2009-2011
(Columns: Power Generation Sector; 2009 (MW); 2010 (MW); 2011 (MW). Rows include Q. Rental; the table values were not recovered.)

  *In 2011, 1763 MW commissioned against a plan for 2194 MW

Power Generation Units (Fuel Type Wise)

Table: Installed Capacity of BPDB Power Plants as on April 2012

Plant Type | Total Capacity (in MW) | Percentage of total installed power
— | 5086.00 MW | 75.99 %
— | 335.00 MW | 5.01 %
— | 230.00 MW | 3.44 %

(Plant type labels were not recovered.)

Table: Derated Capacity of BPDB Power Plants as on April 2012

Plant Type | Total Capacity (in MW) | Percentage of total derated power
— | 4651.00 MW | 76.74 %
— | 248.00 MW | 4.09 %
— | 220.00 MW | 3.63 %

(Plant type labels were not recovered.)

Table: Daily Generation on 25/04/2012

Owner Name | Derated Capacity (MW) | Day Peak (MW) | Eve. Peak (MW)
Rental (3 years) | — | — | —
Q. Rental (3 years) | — | — | —
Rental (15 years) | — | — | —

(The remaining rows and values of this table were not recovered.)

 Electricity Demand and Supply

Per capita generation of electricity in Bangladesh is now about 252 kWh. In view of the prevailing low consumption base in Bangladesh, a high growth rate in energy and electricity is indispensable for facilitating a smooth transition from a subsistence economy to the development threshold. The average annual growth in peak demand on the national grid over the last three decades was about 8.5%, and the growth is believed to still be suppressed by the shortage of supply. Desired growth in generation is hampered, in addition to financial constraints, by inadequate supply of primary energy resources. The strategy adopted during the energy crisis was to reduce dependence on imported oil by replacing it with indigenous fuel; thus almost all plants built after the energy crisis were based on natural gas, a preference further motivated by its comparatively low tariff for power generation.

Power Demand Forecasts (2010-2030)

The adopted scenarios of the power demand forecast in this Master Plan are shown in the figure below.

The figure indicates three scenarios: (i) a GDP 7% scenario and (ii) a GDP 6% scenario, both based on the energy intensity method, and (iii) a government policy scenario.



Power is a precondition for social and economic development. But currently consumers cannot be provided with uninterrupted, quality power supply due to inadequate generation compared to the national demand. To fulfill the commitment declared in the Election Manifesto and to implement the Power Sector Master Plan 2010, the Government has already undertaken massive generation, transmission and distribution plans. The generation target up to 2016 is given below:

Table: Power generation addition from 2009-11

     *2894 MW Power Generation addition from January 2009 to December 2011

Government's Upcoming Plans

The Government has taken short, medium and long term plans. Under the short term plan, quick rental power plants will be installed using liquid fuels/gas, capable of producing electricity within 12-24 months. Nearly 1753 MW is planned to be generated from rental and quick rental power plants.

Under the medium term plan, initiatives have been taken to set up power plants with a total generation capacity of 7919 MW, implementable within 3 to 5 years. The plants are mainly coal based; some are gas and oil based. In the long term plan, some big coal-fired plants will be set up, one in Khulna South and the other in Chittagong, each having a capacity of 1300 MW. Some 300-450 MW plants will be set up in Bibiana, Meghnaghat, Ashugonj, Sirajgonj and Ghorashal. If implementation of the plan goes smoothly, it will be possible to minimize the demand-supply gap by the end of 2012.

The Government has already started implementation of the plan. A total of 31,355 million kilowatt-hours (MkWh) of net energy was generated during 2010-11. Public sector power plants generated 47% while the private sector generated 53% of total net generation. The shares of gas, hydro, coal and oil based generation were 82.12%, 2.78%, 2.49% and 12.61% respectively. In FY 2009-10, 29,247 million kilowatt-hours (MkWh) of net energy was generated, i.e. the electricity growth rate in FY 2011 was 7.21% (and in FY 2012 (Jul-Dec 2011), 13.2%).

Why Did We Select This Project?

Fuel crises are increasing day by day worldwide, which impacts the energy sector's ability to generate electricity. A large share of our country's total fuel reserves is used to generate electricity, so these reserves will eventually be exhausted. Analysts believe a strong energy sector must rely on renewable energy, and solar power is one of the major renewable sources of electricity; that is why we have chosen the solar energy system.

A solar system is constructed from various components, but the battery is the heart of the system. Solar energy is not used directly; it is used with the help of the battery, because the solar panel supplies only a low DC voltage, which must be stored in the battery. In a solar system, about 50% of the total cost is spent on the battery. Since the battery is a major part of the solar system, it must be charged properly by a controller circuit; if it is not charged properly, its charge capacity decreases within a very short time, and it can also be damaged by overcharging.

We have chosen the battery charge controller system for the above reasons.

An Introduction to Solar Energy

Interest in renewable energy has revived over the last few years, especially after global awareness of the ill effects of fossil fuel burning. Energy is the source of growth and the mover of economic and social development of a nation and its people. Development and poverty alleviation will not come until lights are provided for our people to see, read and work by.

Natural resources and energy sources such as fossil fuels (oil, natural gas, etc.) are being used up or economically depleted. We are rapidly exhausting our non-renewable resources, degrading potentially renewable resources and even threatening perpetual resources. This demands immediate attention, especially in third world countries, where only scarce resources are available for an enormous population. Civilization is dependent on electric power, and there is a relationship between the GDP growth rate and the electricity growth rate of a country.

Clearly, the present gas production capacity in Bangladesh cannot support both domestic gas needs and wider electricity generation for the country. On September 15th, 2009, the Power Division of the Ministry of Power, Energy and Mineral Resources of Bangladesh pushed for urgent action to improve the country's energy outlook. The Power Division made recommendations such as ceasing gas supply to gas-fired power plants after 2012 to conserve gas reserves for domestic use.

The Government of Bangladesh is actively engaged in energy crisis management. The National Energy Policy has the explicit goal of supplying the whole country with electricity by 2020. Since 1996, the government has allowed private, independent power producers to enter the Bangladeshi market. It is already importing 100 megawatts of power from India and has negotiated with private companies renting plants to buy power at higher rates.

It is impossible to conceive of the development of civilization without energy. A densely populated country like Bangladesh can only sustain itself and progress if the latest energy technologies are used efficiently. The Government of Bangladesh is working towards achieving "Power, i.e. Electricity, for All" by the year 2020. Bangladesh is one of the countries most severely affected by climate change and global warming.

   What Is Solar Energy?

Solar energy is energy that comes from the sun. Every day the sun radiates, or sends out, an enormous amount of energy. The sun radiates more energy in one second than people have used since the beginning of time!

Where does all this energy come from? It comes from within the sun itself. Like other stars, the sun is a big gas ball made up mostly of hydrogen and helium. The sun generates energy in its core in a process called nuclear fusion. During nuclear fusion, the sun’s extremely high pressure and hot temperature cause hydrogen atoms to come apart and their nuclei (the central cores of the atoms) to fuse or combine. Four hydrogen nuclei fuse to become one helium atom. But the helium atom weighs less than the four nuclei that combined to form it. Some matter is lost during nuclear fusion. The lost matter is emitted into space as radiant energy.

It takes millions of years for the energy in the sun’s core to make its way to the solar surface, and then just a little over eight minutes to travel the 93 million miles to earth. The solar energy travels to the earth at a speed of 186,000 miles per second, the speed of light. Only a small portion of the energy radiated by the sun into space strikes the earth, one part in two billion. Yet this amount of energy is enormous. Every day enough energy strikes the United States to supply the nation’s energy needs for one and a half years!

Where does all this energy go? About 15 percent of the sun's energy that hits the earth is reflected back into space. Another 30 percent is used to evaporate water, which, lifted into the atmosphere, produces rainfall. Solar energy is also absorbed by plants, the land, and the oceans. The rest could be used to supply our energy needs.

History of Solar Energy

People have harnessed solar energy for centuries. As early as the 7th century B.C., people used simple magnifying glasses to concentrate the light of the sun into beams so hot they would cause wood to catch fire. Over 100 years ago in France, a scientist used heat from a solar collector to make steam to drive a steam engine.

In the beginning of this century, scientists and engineers began researching ways to use solar energy in earnest. One important development was a remarkably efficient solar boiler invented by Charles Greeley Abbott, an American astrophysicist, in 1936.

The solar water heater gained popularity at this time in Florida, California, and the Southwest. The industry started in the early 1920s and was in full swing just before World War II. This growth lasted until the mid-1950s, when low-cost natural gas became the primary fuel for heating American homes. The public and world governments remained largely indifferent to the possibilities of solar energy until the oil shortages of the 1970s. Today people use solar energy to heat buildings and water and to generate electricity.

Utilization of solar Energy

Solar energy, radiant light and heat from the sun, has been harnessed by humans since ancient times using a range of ever-evolving technologies. Solar radiation, along with secondary solar-powered resources such as wind and wave power, hydroelectricity and biomass, account for most of the available renewable energy on earth. Only a minuscule fraction of the available solar energy is used.

Solar powered electrical generation relies on heat engines and photovoltaic. Solar energy’s uses are limited only by human ingenuity. A partial list of solar applications includes space heating and cooling through solar architecture, potable water via distillation and disinfection, day lighting, solar hot water, solar cooking, and high temperature process heat for industrial purposes. To harvest the solar energy, the most common way is to use solar panels.

Solar technologies are broadly characterized as either passive solar or active solar depending on the way they capture, convert and distribute solar energy. Active solar techniques include the use of photovoltaic panels and solar thermal collectors to harness the energy. Passive solar techniques include orienting a building to the Sun, selecting materials with favorable thermal mass or light dispersing properties, and designing spaces that naturally circulate air.

There are main two ways we can produce electricity from the sun:

  1. Photovoltaic Electricity – This method uses photovoltaic cells that absorb the direct sunlight just like the solar cells you see on some calculators.
  2. Solar Thermal Electricity – This also uses a solar collector: it has a mirrored surface that reflects the sunlight onto a receiver that heats up a liquid. This heated liquid is used to make steam that produces electricity.

Solar System Descriptions

In today’s climate of growing energy needs and increasing environmental concern, alternatives to the use of non-renewable and polluting fossil fuels have to be investigated. One such alternative is solar energy.

Solar energy is quite simply the energy produced directly by the sun and collected elsewhere, normally the Earth. The sun creates its energy through a thermonuclear process that converts about 650,000,000 tons of hydrogen to helium every second. The process creates heat and electromagnetic radiation. The heat remains in the sun and is instrumental in maintaining the thermonuclear reaction. The electromagnetic radiation (including visible light, infra-red light, and ultra-violet radiation) streams out into space in all directions.

Only a very small fraction of the total radiation produced reaches the Earth. The radiation that does reach the Earth is the indirect source of nearly every type of energy used today. The exceptions are geothermal energy, and nuclear fission and fusion. Even fossil fuels owe their origins to the sun; they were once living plants and animals whose life was dependent upon the sun.

Much of the world’s required energy can be supplied directly by solar power. More still can be provided indirectly. The practicality of doing so will be examined, as well as the benefits and drawbacks. In addition, the uses solar energy is currently applied to will be noted.

Due to the nature of solar energy, two components are required to have a functional solar energy generator. These two components are a collector and a storage unit. The collector simply collects the radiation that falls on it and converts a fraction of it to other forms of energy (either electricity and heat or heat alone). The storage unit is required because of the non-constant nature of solar energy; at certain times only a very small amount of radiation will be received. At night or during heavy cloud cover, for example, the amount of energy produced by the collector will be quite small. The storage unit can hold the excess energy produced during the periods of maximum productivity, and release it when the productivity drops. In practice, a backup power supply is usually added, too, for the situations when the amount of energy required is greater than both what is being produced and what is stored in the container.
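
The collector–storage–backup arrangement described above can be sketched as a simple hourly energy balance. This is a minimal illustration; the production and demand figures below are invented for the example, not taken from any measured system:

```python
# Minimal sketch of a solar generator with a collector, a storage unit,
# and a backup supply. All numbers are illustrative.

def dispatch(produced_kwh, demand_kwh, store_kwh, store_cap_kwh):
    """Serve demand from production first, then storage, then backup.
    Surplus production charges the store up to its capacity.
    Returns (new_store_kwh, backup_kwh)."""
    surplus = produced_kwh - demand_kwh
    if surplus >= 0:
        store_kwh = min(store_cap_kwh, store_kwh + surplus)
        return store_kwh, 0.0
    deficit = -surplus
    from_store = min(store_kwh, deficit)
    store_kwh -= from_store
    backup_kwh = deficit - from_store   # backup covers what storage cannot
    return store_kwh, backup_kwh

# One illustrative day: strong sun at midday, none at night.
production = [0, 0, 1.5, 3.0, 3.0, 1.5, 0, 0]   # kWh per 3-hour slot
demand     = [1, 1, 1.0, 1.0, 1.0, 1.0, 2, 2]
store, cap = 0.0, 4.0
backup_total = 0.0
for p, d in zip(production, demand):
    store, backup = dispatch(p, d, store, cap)
    backup_total += backup
print(round(store, 2), round(backup_total, 2))
```

Note how the backup is only drawn on in the pre-dawn slots, before the store has received any charge; the midday surplus carries the evening load.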

Methods of collecting and storing solar energy vary depending on the uses planned for the solar generator. In general, there are three types of collectors and many forms of storage units.

The three types of collectors are flat-plate collectors, focusing collectors, and passive collectors.

Flat-plate collectors are the most commonly used type of collector today. They are arrays of solar panels arranged in a simple plane. They can be of nearly any size, and have an output that is directly related to a few variables including size, facing, and cleanliness. These variables all affect the amount of radiation that falls on the collector. Often these collector panels have automated machinery that keeps them facing the sun; the additional energy gained by correcting the facing more than compensates for the energy needed to drive the extra machinery.
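
The effect of facing on collector output is essentially a cosine law: the beam radiation a flat plate intercepts falls off with the cosine of the angle between the sun and the panel normal. A small sketch (the misalignment angles are illustrative, not from the text):

```python
import math

def intercepted_fraction(misalignment_deg):
    """Fraction of beam radiation a flat-plate panel intercepts when its
    normal is misaligned from the sun by the given angle (cosine law)."""
    return max(0.0, math.cos(math.radians(misalignment_deg)))

# A fixed panel drifts out of alignment as the sun moves across the sky;
# a tracked panel stays near zero degrees of misalignment all day.
for angle in (0, 30, 60):
    print(angle, round(intercepted_fraction(angle), 2))
```

At 60 degrees of misalignment the panel collects only half the beam radiation, which is why sun-tracking machinery pays for its own energy cost.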

Focusing collectors are essentially flat-plate collectors with optical devices arranged to maximize the radiation falling on the focus of the collector. These are currently used only in a few scattered areas. Solar furnaces are examples of this type of collector. Although they can produce far greater amounts of energy at a single point than flat-plate collectors can, they lose some of the radiation that flat-plate panels do not. Radiation reflected off the ground will be used by flat-plate panels but usually will be ignored by focusing collectors (in snow-covered regions, this reflected radiation can be significant). One other problem with focusing collectors in general is due to temperature. The fragile silicon components that absorb the incoming radiation lose efficiency at high temperatures, and if they get too hot they can even be permanently damaged. The focusing collectors by their very nature can create much higher temperatures and need more safeguards to protect their silicon components.

Passive collectors are completely different from the other two types of collectors. The passive collectors absorb radiation and convert it to heat naturally, without being designed and built to do so. All objects have this property to some extent, but only some objects (like walls) will be able to produce enough heat to make it worthwhile. Often their natural ability to convert radiation to heat is enhanced in some way or another (by being painted black, for example) and a system for transferring the heat to a different location is generally added.

 People use energy for many things, but a few general tasks consume most of the energy. These tasks include transportation, heating, cooling, and the generation of electricity. Solar energy can be applied to all four of these tasks with different levels of success.

Heating is the business for which solar energy is best suited. Solar heating requires almost no energy transformation, so it has a very high efficiency. Heat energy can be stored in a liquid, such as water, or in a packed bed. A packed bed is a container filled with small objects that can hold heat (such as stones) with air space between them. Heat energy is also often stored in phase-change or heat-of-fusion units. These devices will utilize a chemical that changes phase from solid to liquid at a temperature that can be produced by the solar collector. The energy of the collector is used to change the chemical to its liquid phase, and is as a result stored in the chemical itself. It can be tapped later by allowing the chemical to revert to its solid form. Solar energy is frequently used in residential homes to heat water. This is an easy application, as the desired end result (hot water) is the storage facility. A hot water tank is filled with hot water during the day, and drained as needed. This application is a very simple adjustment from the normal fossil fuel water heaters.
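
The hot-water storage described above is easy to quantify: the sensible heat held in the tank is mass × specific heat × temperature rise. A brief sketch (the tank size and temperatures are assumed for illustration, not drawn from the text):

```python
# Sensible-heat storage in a water tank: Q = m * c * dT.
# Tank size and temperature rise below are illustrative.
SPECIFIC_HEAT_WATER = 4186.0   # J/(kg*K)

def stored_energy_kwh(mass_kg, delta_t_k):
    """Energy stored when a mass of water is raised by delta_t_k kelvin."""
    joules = mass_kg * SPECIFIC_HEAT_WATER * delta_t_k
    return joules / 3.6e6        # 1 kWh = 3.6e6 J

# A 200-litre (200 kg) tank heated from 20 C to 60 C:
print(round(stored_energy_kwh(200, 40), 2))
```

Roughly 9 kWh in a household tank illustrates why hot water is such a convenient storage facility: the desired end product and the store are the same thing.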

Swimming pools are often heated by solar power. Sometimes the pool itself functions as the storage unit, and sometimes a packed bed is added to store the heat. Whether or not a packed bed is used, some method of keeping the pool’s heat for longer than normal periods (like a cover) is generally employed to help keep the water at a warm temperature when it is not in use.

Solar energy is often used to directly heat a house or building. Heating a building requires much more energy than heating a building’s water, so much larger panels are necessary. Generally a building that is heated by solar power will have its water heated by solar power as well. The type of storage facility most often used for such large solar heaters is the heat-of-fusion storage unit, but other kinds (such as the packed bed or hot water tank) can be used as well. This application of solar power is less common than the two mentioned above, because of the cost of the large panels and storage system required to make it work. Often if an entire building is heated by solar power, passive collectors are used in addition to one of the other two types. Passive collectors will generally be an integral part of the building itself, so buildings taking advantage of passive collectors must be created with solar heating in mind.

These passive collectors can take a few different forms. The most basic type is the incidental heat trap. The idea behind the heat trap is fairly simple: allow the maximum amount of light possible inside through a window (which should face toward the equator for this to be achieved) and allow it to fall on a floor made of stone or another heat-holding material. During the day, the area will stay cool as the floor absorbs most of the heat, and at night, the area will stay warm as the stone re-emits the heat it absorbed during the day. Another major form of passive collector is thermosiphoning walls and/or roofs. With this passive collector, the heat normally absorbed and wasted in the walls and roof is re-routed into the area that needs to be heated.

The last major form of passive collector is the solar pond. This is very similar to the solar heated pool described above, but the emphasis is different. With swimming pools, the desired result is a warm pool. With the solar pond, the whole purpose of the pond is to serve as an energy regulator for a building. The pond is placed either adjacent to or on the building, and it will absorb solar energy and convert it to heat during the day. This heat can be taken into the building, or if the building has more than enough heat already, heat can be dumped from the building into the pond.

Solar energy can be used for other things besides heating. It may seem strange, but one of the most common uses of solar energy today is cooling. Solar cooling is far more expensive than solar heating, so it is almost never seen in private homes. Solar energy is used to cool things by phase changing a liquid to gas through heat, and then forcing the gas into a lower pressure chamber. The temperature of a gas is related to the pressure containing it, and all other things being held equal, the same gas under a lower pressure will have a lower temperature. This cool gas will be used to absorb heat from the area of interest and then be forced into a region of higher pressure where the excess heat will be lost to the outside world. The net effect is that of a pump moving heat from one area into another, and the first is accordingly cooled.

Besides being used for heating and cooling, solar energy can be directly converted to electricity. Most of our tools are designed to be driven by electricity, so if you can create electricity through solar power, you can run almost anything with solar power. The solar collectors that convert radiation into electricity can be either flat-plate collectors or focusing collectors, and the silicon components of these collectors are photovoltaic cells.

Photovoltaic cells, by their very nature, convert radiation to electricity. This phenomenon has been known for well over half a century, but until recently the amounts of electricity generated were good for little more than measuring radiation intensity. Most of the photovoltaic cells on the market today operate at an efficiency of less than 15%; that is, of all the radiation that falls upon them, less than 15% of it is converted to electricity. The maximum theoretical efficiency for a photovoltaic cell is only 32.3%, but at this efficiency, solar electricity is very economical. Most of our other forms of electricity generation are at a lower efficiency than this.
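
The efficiency figures above translate directly into deliverable power: output = irradiance × area × efficiency. A hedged sketch (the irradiance value and one-square-metre panel are assumed for illustration; only the 15% and 28.2% efficiencies come from the text):

```python
def pv_output_watts(irradiance_w_m2, area_m2, efficiency):
    """Electrical output of a photovoltaic panel, taken as the fraction
    of incident radiation converted to electricity."""
    return irradiance_w_m2 * area_m2 * efficiency

# 1000 W/m^2 is the customary test-condition irradiance; 1 m^2 of panel.
print(pv_output_watts(1000, 1.0, 0.15))    # typical market cell (<15%)
print(pv_output_watts(1000, 1.0, 0.282))   # 28.2% laboratory cell
```

The laboratory cell nearly doubles the power from the same panel area, which is the whole case for its economic promise.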

Unfortunately, reality still lags behind theory, and a 15% efficiency is not usually considered economical by most power companies, even if it is fine for toys and pocket calculators. Hope for bulk solar electricity should not be abandoned, however, for recent scientific advances have created a solar cell with an efficiency of 28.2% in the laboratory. This type of cell has yet to be field-tested. If it maintains its efficiency in the uncontrolled environment of the outside world, and if it does not have a tendency to break down, it will be economical for power companies to build solar power facilities after all.

Of the main types of energy usage, the least suited to solar power is transportation. While large, relatively slow vehicles like ships could power themselves with large onboard solar panels, small constantly turning vehicles like cars could not. The only possible way a car could be completely solar powered would be through the use of battery that was charged by solar power at some stationary point and then later loaded into the car. Electric cars that are partially powered by solar energy are available now, but it is unlikely that solar power will provide the world’s transportation costs in the near future.

Solar power has two big advantages over fossil fuels. The first is in the fact that it is renewable; it is never going to run out. The second is its effect on the environment.

While the burning of fossil fuels introduces many harmful pollutants into the atmosphere and contributes to environmental problems like global warming and acid rain, solar energy is completely non-polluting. While many acres of land must be destroyed to feed a fossil fuel energy plant its required fuel, the only land that must be destroyed for a solar energy plant is the land that it stands on. Indeed, if solar energy systems were incorporated into every business and dwelling, no land would have to be destroyed in the name of energy. This ability to decentralize solar energy is something that fossil fuel burning cannot match.

Since silicon, the primary element used in constructing solar panels, is the second most common element on the planet, there is very little environmental disturbance caused by the creation of solar panels. In fact, solar energy only causes environmental disruption if it is centralized and produced on a gigantic scale. Solar power certainly can be produced on a gigantic scale, too. Among the renewable resources, only in solar power do we find the potential for an energy source capable of supplying more energy than is used.

Suppose that of the 4.5×10¹⁷ kWh per annum that is used by the earth to evaporate water from the oceans we were to acquire just 0.1%, or 4.5×10¹⁴ kWh per annum. Dividing by the hours in the year gives a continuous yield of 2.90×10¹⁰ kW. This would supply 2.4 kW to 12.1 billion people.

This translates to roughly the amount of energy used today by the average American, made available to over twelve billion people. Since this is greater than the estimated carrying capacity of the Earth, this would be enough energy to supply the entire planet regardless of the population.
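
The per-capita division in the passage can be checked directly, using the figures as stated:

```python
# Check the per-person figure quoted in the text.
continuous_supply_kw = 2.90e10   # continuous yield quoted above
population = 12.1e9              # 12.1 billion people
per_person_kw = continuous_supply_kw / population
print(round(per_person_kw, 1))
```

The quotient is indeed about 2.4 kW of continuous power per person.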

Unfortunately, at this scale, the production of solar energy would have some unpredictable negative environmental effects. If all the solar collectors were placed in one or just a few areas, they would probably have large effects on the local environment, and possibly have large effects on the world environment. Everything from changes in local rain conditions to another Ice Age has been predicted as a result of producing solar energy on this scale. The problem lies in the change of temperature and humidity near a solar panel; if the energy producing panels are kept non-centralized, they should not create the same local, mass temperature change that could have such bad effects on the environment.

Of all the energy sources available, solar has perhaps the most promise. Numerically, it is capable of producing the raw power required to satisfy the entire planet’s energy needs. Environmentally, it is one of the least destructive of all the sources of energy. Practically, it can be adjusted to power nearly everything except transportation with very little adjustment, and even transportation with some modest modifications to the current general system of travel. Clearly, solar energy is a resource of the future.

Advantages of Solar Energy:

  1. The technology is simple
  2. Affordable cost
  3. Within the reach of the poor
  4. Essentially no maintenance cost
  5. The only energy input is sunshine
  6. The energy source is cost free
  7. Little environmental pollution
  8. No emissions
  9. Very few materials are required

Theory of the Solar Cell Charge Circuit

Equivalent circuit of a solar cell

To understand the electronic behavior of a solar cell, it is useful to create a model which is electrically equivalent, and is based on discrete electrical components whose behavior is well known. An ideal solar cell may be modeled by a current source in parallel with a diode; in practice no solar cell is ideal, so a shunt resistance and a series resistance component are added to the model. The resulting equivalent circuit of a solar cell is shown on the left. Also shown, on the right, is the schematic representation of a solar cell for use in circuit diagrams.


Characteristic equation

From the equivalent circuit it is evident that the current produced by the solar cell is equal to that produced by the current source, minus that which flows through the diode, minus that which flows through the shunt resistor:

I = IL − ID − ISH                      


  • I = output current (amperes)
  • IL = photo generated current (amperes)
  • ID = diode current (amperes)
  • ISH = shunt current (amperes).

     The current through these elements is governed by the voltage across them:

Vj = V + I·RS


  • Vj = voltage across both diode and resistor RSH (volts)
  • V = voltage across the output terminals (volts)
  • I = output current (amperes)
  • RS = series resistance (Ω).

     By the Shockley diode equation, the current diverted through the diode is:

ID = I0 {exp[q·Vj / (n·k·T)] − 1}

  • I0 = reverse saturation current (amperes)
  • n = diode ideality factor (1 for an ideal diode)
  • q = elementary charge
  • k = Boltzmann’s constant
  • T = absolute temperature
  • At 25°C, kT/q ≈ 0.0257 volts.
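
The thermal voltage kT/q appearing in the diode equation follows directly from the constants listed above:

```python
# Thermal voltage kT/q at 25 C (298.15 K), from CODATA constants.
BOLTZMANN = 1.380649e-23             # J/K
ELEMENTARY_CHARGE = 1.602176634e-19  # C

def thermal_voltage(temp_k):
    """kT/q in volts at the given absolute temperature."""
    return BOLTZMANN * temp_k / ELEMENTARY_CHARGE

print(round(thermal_voltage(298.15), 4))
```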

          By Ohm’s law, the current diverted through the shunt resistor is:

ISH = Vj / RSH

  • RSH = shunt resistance (Ω).

Substituting these into the first equation produces the characteristic equation of a solar cell, which relates solar cell parameters to the output current and voltage:

I = IL − I0 {exp[q(V + I·RS) / (n·k·T)] − 1} − (V + I·RS) / RSH

An alternative derivation produces an equation similar in appearance, but with V on the left-hand side. The two alternatives are identities; that is, they yield precisely the same results.

In principle, given a particular operating voltage V, the equation may be solved to determine the operating current I at that voltage. However, because I appears on both sides inside a transcendental function, the equation has no general analytical solution. Even without a solution it is physically instructive, and it is easily solved using numerical methods. (A general analytical solution is possible using Lambert’s W function, but since Lambert’s W must itself generally be evaluated numerically, this is a technicality.) Since the parameters I0, n, RS, and RSH cannot be measured directly, the most common application of the characteristic equation is nonlinear regression to extract the values of these parameters on the basis of their combined effect on solar cell behavior.
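
Because I appears on both sides of the characteristic equation, it is solved numerically in practice. Below is a minimal sketch using bisection; the cell parameters IL, I0, n, RS and RSH are assumed, typical-looking values chosen for illustration, not measured data:

```python
import math

def cell_current(v, il=5.0, i0=1e-9, n=1.5, rs=0.02, rsh=100.0, vt=0.0257):
    """Solve I = IL - I0*(exp((V + I*RS)/(n*VT)) - 1) - (V + I*RS)/RSH
    for I at terminal voltage V, by bisection. vt is kT/q at 25 C."""
    def residual(i):
        vj = v + i * rs
        return il - i0 * (math.exp(vj / (n * vt)) - 1.0) - vj / rsh - i
    # residual is strictly decreasing in i, so bracket the root widely:
    lo, hi = -il, il
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# At short circuit (V = 0) the current should be close to IL,
# reduced only slightly by the series/shunt losses:
print(round(cell_current(0.0), 3))
```

In a real parameter-extraction workflow this forward solve would sit inside a nonlinear regression loop, as the text describes; a library root-finder would also do, but bisection keeps the sketch dependency-free.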

 Basic Battery Charging Methods

  • Constant voltage: A constant voltage charger is basically a DC power supply, which in its simplest form may consist of a step-down transformer from the mains with a rectifier to provide the DC voltage to charge the battery. Such simple designs are often found in cheap car battery chargers. The lead-acid cells used for cars and backup power systems typically use constant voltage chargers. In addition, lithium-ion cells often use constant voltage systems, although these are usually more complex, with added circuitry to protect both the batteries and the user.
  • Constant current: Constant current chargers vary the voltage they apply to the battery to maintain a constant current flow, switching off when the voltage reaches the level of a full charge. This design is usually used for nickel-cadmium and nickel-metal hydride cells or batteries.
  • Taper current: This is charging from a crude unregulated constant voltage source. It is not a controlled charge as in V Taper above. The current diminishes as the cell voltage (back EMF) builds up. There is a serious danger of damaging the cells through overcharging; to avoid this, the charging rate and duration should be limited. Suitable for SLA batteries only.
  • Pulsed charge: Pulsed chargers feed the charge current to the battery in pulses. The charging rate (based on the average current) can be precisely controlled by varying the width of the pulses, typically about one second. During the charging process, short rest periods of 20 to 30 milliseconds between pulses allow the chemical actions in the battery to stabilize by equalizing the reaction throughout the bulk of the electrode before recommencing the charge. This enables the chemical reaction to keep pace with the rate of inputting the electrical energy. It is also claimed that this method can reduce unwanted chemical reactions at the electrode surface, such as gas formation, crystal growth and passivation. (See also Pulsed Charger below.) If required, it is also possible to sample the open-circuit voltage of the battery during the rest period.
  • Burp charging (also called reflex or negative pulse charging): Used in conjunction with pulse charging, it applies a very short discharge pulse, typically 2 to 3 times the charging current for 5 milliseconds, during the charging rest period to depolarize the cell. These pulses dislodge any gas bubbles which have built up on the electrodes during fast charging, speeding up the stabilization process and hence the overall charging process. The release and diffusion of the gas bubbles is known as “burping”. Controversial claims have been made for improvements in both the charge rate and the battery lifetime, as well as for the removal of dendrites, made possible by this technique. The least that can be said is that it does not damage the battery.
  • IUI charging: This is a recently developed charging profile used for fast charging standard flooded lead-acid batteries from particular manufacturers. It is not suitable for all lead-acid batteries. Initially the battery is charged at a constant current (I) rate until the cell voltage reaches a preset value, normally a voltage near that at which gassing occurs. This first part of the charging cycle is known as the bulk charge phase. When the preset voltage has been reached, the charger switches into the constant voltage (U) phase, and the current drawn by the battery gradually drops until it reaches another preset level. This second part of the cycle completes the normal charging of the battery at a slowly diminishing rate. Finally the charger switches again into constant current mode (I), and the voltage continues to rise up to a new, higher preset limit, at which point the charger is switched off. This last phase is used to equalize the charge on the individual cells in the battery to maximize battery life. See Cell Balancing.
  • Trickle charge: Trickle charging is designed to compensate for the self-discharge of the battery: continuous, long-term constant current charging for standby use. The charge rate varies according to the frequency of discharge. It is not suitable for some battery chemistries, e.g. NiMH and lithium, which are susceptible to damage from overcharging. In some applications the charger is designed to switch to trickle charging when the battery is fully charged.
  • Float charge: The battery and the load are permanently connected in parallel across the DC charging source and held at a constant voltage below the battery’s upper voltage limit. Used for emergency power backup systems, mainly with lead-acid batteries.
  • Random charging: All of the above applications involve controlled charging of the battery; however, there are many applications where the energy to charge the battery is only available, or is delivered, in some random, uncontrolled way. This applies to automotive applications, where the energy depends on the continuously changing engine speed. The problem is more acute in EV and HEV applications which use regenerative braking, since this generates large power spikes during braking which the battery must absorb. More benign examples are solar panel installations, which can only charge when the sun is shining. These all require special techniques to limit the charging current or voltage to levels the battery can tolerate.
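
The constant-current and constant-voltage methods above are often combined into a CC-CV profile: constant current until a voltage limit is reached, then constant voltage while the current tapers. A toy simulation against an idealized battery model (every parameter below, including the linear open-circuit voltage, is invented for illustration):

```python
# Toy CC-CV charge profile against an idealized battery: open-circuit
# voltage rising linearly with state of charge, plus an internal
# resistance. All numbers are illustrative, not real cell data.

def simulate_cccv(capacity_ah=2.0, i_cc=1.0, v_limit=4.2,
                  r_int=0.1, dt_h=0.01, i_cutoff=0.05):
    soc = 0.0                        # state of charge, 0..1
    profile = []                     # (terminal voltage, current) samples
    while True:
        emf = 3.0 + 1.3 * soc        # idealized open-circuit voltage
        i = i_cc                     # CC phase current
        if emf + i * r_int > v_limit:
            i = (v_limit - emf) / r_int   # CV phase: hold terminal voltage
        if i <= i_cutoff:
            break                    # taper current reached: charge done
        soc = min(1.0, soc + i * dt_h / capacity_ah)
        profile.append((round(emf + i * r_int, 3), round(i, 3)))
    return soc, profile

soc, profile = simulate_cccv()
print(round(soc, 2), profile[0])
```

The simulated current holds steady in the bulk phase and then tapers once the voltage limit is hit, which is exactly the behavior the constant-voltage bullet describes for lithium-ion chargers.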

Charge controller

 A charge controller, charge regulator or battery regulator limits the rate at which electric current is added to or drawn from electric batteries. It prevents overcharging and may protect against overvoltage, which can reduce battery performance or lifespan and may pose a safety risk. It may also prevent completely draining (“deep discharging”) a battery, or perform controlled discharges, depending on the battery technology, to protect battery life. The terms “charge controller” or “charge regulator” may refer either to a stand-alone device or to control circuitry integrated within a battery pack, battery-powered device, or battery recharger.

Charge controllers are sold to consumers as separate devices, often in conjunction with solar or wind power generators, for uses such as RV, boat, and off-the-grid home battery storage systems.   In solar applications, charge controllers may also be called solar regulators.  

A series charge controller or series regulator disables further current flow into batteries when they are full. A shunt charge controller or shunt regulator diverts excess electricity to an auxiliary or “shunt” load, such as an electric water heater, when batteries are full.  

Simple charge controllers stop charging a battery when it exceeds a set high voltage level, and re-enable charging when the battery voltage drops back below that level. Pulse width modulation (PWM) and maximum power point tracker (MPPT) technologies are more electronically sophisticated, adjusting charging rates depending on the battery’s level to allow charging closer to its maximum capacity. Charge controllers may also monitor battery temperature to prevent overheating. Some charge controller systems also display data, transmit data to remote displays, and log data to track electric flow over time.
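
The simple high/low-voltage behavior just described is a hysteresis (on/off) control. A minimal sketch, assuming typical-looking 12 V lead-acid thresholds (the threshold values and readings are illustrative):

```python
class SimpleChargeController:
    """On/off (hysteresis) charge controller: stop charging above the
    high threshold, resume once voltage falls below the low threshold."""
    def __init__(self, v_high=14.4, v_low=13.2):
        self.v_high = v_high
        self.v_low = v_low
        self.charging = True

    def update(self, battery_voltage):
        if self.charging and battery_voltage >= self.v_high:
            self.charging = False          # disconnect the panel
        elif not self.charging and battery_voltage <= self.v_low:
            self.charging = True           # reconnect the panel
        return self.charging

ctrl = SimpleChargeController()
readings = [12.8, 13.9, 14.5, 14.0, 13.5, 13.1, 13.8]
states = [ctrl.update(v) for v in readings]
print(states)
```

The gap between the two thresholds prevents rapid on/off chatter around a single set point; PWM and MPPT controllers replace this crude switching with continuous adjustment.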

 Circuitry that functions as a charge regulator may consist of several electrical components, or may be encapsulated in a single microchip, an integrated circuit (IC) usually called a charge controller IC or charge control IC.

Charge controller circuits are used for rechargeable electronic devices such as cell phones, laptop computers, portable audio players, and uninterruptible power supplies, as well as for larger battery systems found in electric vehicles and orbiting space satellites. Charge controller circuitry may be located in the battery-powered device, in a battery pack for either wired or wireless (inductive) charging, in line with the wiring, or in the AC adapter or other power supply module.

Benefits of Solar: SUMMARY

  • Extends the Workday

It is dark by 6:30 year round in the equatorial latitudes. Electric lighting allows families to extend their workday into the evening hours. Many villages where SELF has installed solar lights now boast home craft industries.

  • Improves Health

Fumes from kerosene lamps in poorly ventilated houses are a serious health problem in much of the world where electric light is unavailable. The World Bank estimates that 780 million women and children breathing kerosene fumes inhale the equivalent of smoke from 2 packs of cigarettes a day.

  • Stems Urban Migration

Improving the quality of life through electrification at the rural household and village level helps stem migration to mega-cities. Also, studies have shown a direct correlation between the availability of electric light and lower birth rates.

  • Reduces Fire Hazards

Kerosene lamps are a serious fire hazard in the developing world, killing and maiming tens of thousands of people each year. Kerosene, diesel fuel and gasoline stored for lamps and small generators are also a safety threat, whereas solar electric light is entirely safe.

  • Improves Literacy

Electric light improves literacy, because people can read after dark more easily than they can by candle or lamplight. Schoolwork improves and eyesight is safeguarded when children study by electric light. With the advent of television and radio, people previously cut off from electronic information, education, and entertainment can become part of the modern world without leaving home.

  • Conserves Foreign Exchange

As much as 90% of the export earnings of some developing countries are used to pay for imported oil, most of it for power generation. Capital saved by not building additional large power plants can be used for investment in health, education, economic development, and industry. Expanding solar rural electrification creates jobs and business opportunities based on an appropriate technology in a decentralized marketplace.

  • Conserves Energy

Solar electricity for the Third World is clearly the most effective energy conservation program because it conserves costly conventional power for urban areas, town market centers, and industrial and commercial uses, leaving decentralized PV-generated power to provide the lighting and basic electrical needs of the majority of the developing world’s rural populations.

  • Reduces Maintenance

Use of a SHS rather than gensets or kerosene lamps reduces the time and expense of refueling and maintenance. Kerosene lamps and diesel generators must be filled several times per day. In rural areas, purchasing and transporting of kerosene or diesel fuel is often both difficult and expensive. Diesel generators require periodic maintenance and have a short lifespan. Car batteries, used to power TVs must often be transported miles for recharging. SHS, however, require no fuel, and will last for 20 years with minimal servicing.

Benefits of Solar: HEALTH

  • Reduces kerosene-induced fires

Kerosene lamps are a serious fire hazard in the developing world, killing and maiming tens of thousands of people each year. Kerosene, diesel fuel and gasoline stored for lamps and small generators are also a safety threat, whereas solar electric light is entirely safe.

  • Improves indoor air quality

Fumes from kerosene lamps in poorly ventilated houses are a serious health problem in much of the world where electric light is unavailable. The World Bank estimates that 780 million women and children breathing kerosene fumes inhale the equivalent of smoke from 2 packs of cigarettes a day.

  • Increases effectiveness of health programs

Use of solar electric lighting systems by rural health centers increases the quality of health care provided. Solar electric systems improve patient diagnoses through brighter task lighting and use of electrically-lit microscopes. Photovoltaics can also power televisions and VCRs to educate health workers and patients about preventative care, medical procedures, and other health care provisions. Finally, solar electric refrigerators have a higher degree of temperature control than kerosene units, leading to lower vaccine spoilage rates and increased immunization effectiveness.

  • Allows telemedicine

Telemedicine is the use of telecommunications technology to provide, enhance, or expedite health care services, by accessing off-site databases, linking clinics or physicians’ offices to central hospitals, or transmitting x-rays or other diagnostic images for examination at another site. Deep in the Brazilian Amazon, SELF demonstrated the feasibility of telemedicine in remote areas by using a combination of solar power and satellite communications. Within moments of plugging in the new telemedicine device, local Caboclo Indians can have measurements of blood pressure, body temperature, pulse, and blood-oxygen uploaded via satellite to the University of Southern Alabama for remote diagnosis.

Benefits of Solar: ENVIRONMENT

  • Reduces local air pollution

Use of solar electric systems decreases the amount of local air pollution. With a decrease in the amount of kerosene used for lighting, there is a corresponding reduction in the amount of local pollution produced. Solar rural electrification also decreases the amount of electricity needed from small diesel generators.

  • Offsets greenhouse gases

Photovoltaic systems produce electric power with no carbon dioxide (CO2) emissions. Carbon emission offset is calculated at approximately 6 tons of CO2 over the twenty-year life of one PV system.

  • Conserves energy

Solar electricity for the Third World is an effective energy conservation program because it conserves costly conventional power for urban areas, town market centers, and industrial and commercial uses, leaving decentralized PV-generated power to provide the lighting and basic electrical needs of the majority of the developing world’s rural populations.

  • Reduces need for dry-cell battery disposal

Small dry-cell batteries for flashlights and radios are used throughout the unelectrified world. Most of these batteries are disposable cells which are not recycled. Heavy metals from disposed dry-cells leach into the ground, contaminating the soil and water. Solar rural electrification dramatically decreases the need for disposable dry-cell batteries. Over 12 billion dry-cell batteries were sold in 1993.

Benefits of Solar: EDUCATIONAL

  • Improves literacy

Solar rural electrification improves literacy by providing high quality electric reading lights. Electric lighting is far brighter than kerosene lighting or candles. Use of solar electric light aids students in studying during evening hours.

  • Increases access to news and information

Photovoltaics give rural areas access to news and educational programming through television and radio broadcasts. With the advent of television and radio, people previously cut off from electronic information, education, and entertainment can become part of the modern world without leaving home.

  • Enables evening education classes

Ongoing education classes and adult literacy classes can be held during the evening in solar-lit community centers. SELF has electrified community centers and schools in many countries, and has witnessed the development of adult literacy and professional classes possible with the introduction of solar electric lighting systems in community centers.

  • Facilitates wireless rural telephony

Solar electricity, when coupled with wireless communications, makes it possible to introduce rural telephony and data communication services to remote villages.

Solar Home Systems: ROLE

Rural households currently using kerosene lamps for lighting and disposable or automotive batteries for operating televisions, radios, and other small appliances are the principal market for the SHS. Solar PV is affordable to an increasing segment of the Third World’s off-grid rural populations. For home lighting, the cost of an SHS is comparable to a family’s average monthly expenditure on candles, kerosene, or dry-cell batteries. Besides providing lighting, an SHS can also power a small TV. In addition, families with an SHS no longer need to purchase expensive dry-cell batteries to operate their radio-cassette player, which nearly every family has. Solar PV is competitive with its alternatives: kerosene, dry-cell batteries, candles, battery recharging from the grid, gensets, and grid extension.

Approximately 400,000 families in the developing world are already using small, household solar PV systems to power fluorescent lights, radio-cassette players, 12 volt black-and-white TVs, and other small appliances. These families, living mostly in remote rural areas, already constitute the largest group of domestic users of solar electricity in the world. For them, there is no other affordable or immediately available source of electric power. These systems have been sold mostly by small entrepreneurs applying their working knowledge of this proven technology to serve rural families who need small amounts of power for electric lights, radios and TVs.

The success of SHS implementation has been determined largely by the quality of the components and the availability of ongoing service and maintenance. When well-designed systems have received regular maintenance, they have performed successfully over many years. However, when poorly designed components were used, or when no after-sales service was available, systems often failed. Past failures of these systems have undermined local confidence: fly-by-night salespeople sold thousands of substandard SHS in South Africa, for example, which failed shortly after installation. Well-designed components and after-sales service and maintenance have become recognized as essential parts of a successful PV program.

Many of these SHS were provided by non-governmental organizations (like SELF) or through government-sponsored programs with international donor support, such as in Zimbabwe, where 10,000 SHS are being installed on a financed, full-cost-recovery basis (in a program designed by SELF for the United Nations in 1991). In Bolivia, 2,500 SHS are being leased to users by a cooperative “utility.” In Kenya, over 20,000 SHS have been installed since the mid-1980s by independent businessmen on a strictly cash basis. The World Bank estimates that 50,000 SHS have been installed in China, 40,000 in Mexico, and 20,000 in Indonesia.

According to the United Nations Development Programme, 400 million families (nearly two billion people) have no access to electricity. The European Union’s renewable energy organization EuroSolar estimates the global market for solar photovoltaic home lighting systems at 200 million families. Based on market studies in India, China, Sri Lanka, Zimbabwe, South Africa, and Kenya conducted by various international development agencies over the past 5 years, the consensus is that approximately 5% of most rural populations can pay cash for an SHS, 20 to 30% can afford an SHS with short- or medium-term credit, and another 25% could afford an SHS with long-term credit or leasing.
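The market-segment percentages above can be turned into rough family counts against the 200-million-family market estimate. The midpoint of the 20–30% credit range is assumed for illustration; all other figures come from the text:

```python
# Rough sizing of the SHS market segments described above (figures from the text).
shs_market = 200_000_000   # EuroSolar estimate of the SHS lighting market (families)

segments = {
    "cash buyers": 0.05,
    "short/medium-term credit": 0.25,   # assumed midpoint of the 20-30% range
    "long-term credit or leasing": 0.25,
}

for name, share in segments.items():
    print(f"{name}: ~{int(shs_market * share):,} families")

reachable = sum(segments.values())
print(f"Total addressable share: ~{reachable:.0%} of the rural market")
```

On these assumptions, just over half of the rural market (roughly 110 million families) could afford an SHS under some payment arrangement.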

Utility-scale solar energy environmental considerations include land disturbance/land use impacts, visual impacts, impacts associated with hazardous materials, and potential impacts on water and other resources, depending on the solar technology employed.

Solar power plants reduce the environmental impacts of combustion in fossil fuel power generation, such as greenhouse gas and other air pollutant emissions. However, concerns have been raised over land disturbance, visual impacts, and the use of potentially hazardous materials in some systems. These and other concerns associated with solar energy development are discussed below and will be addressed in the Solar Energy Development Programmatic EIS.

·         Land Disturbance/Land Use Impacts

All utility-scale solar energy facilities require relatively large areas for solar radiation collection when used to generate electricity at a commercial scale, and the large arrays of solar collectors may interfere with natural sunlight, rainfall, and drainage, which could have a variety of effects on plants and animals. Solar arrays may also create avian perching opportunities that could affect both bird and prey populations. Land disturbance could also affect archeological resources. Solar facilities may interfere with existing land uses, such as grazing. Proper siting decisions can help to avoid land disturbance and land use impacts.

·         Visual Impacts

Because they are generally large facilities with numerous highly geometric and sometimes highly reflective surfaces, solar energy facilities may create visual impacts; however, being visible is not necessarily the same as being intrusive. Aesthetic issues are by their nature highly subjective. Proper siting decisions can help to avoid aesthetic impacts to the landscape.

·         Hazardous Materials

Photovoltaic panels may contain hazardous materials, and although they are sealed under normal operating conditions, there is the potential for environmental contamination if they are damaged or improperly disposed of upon decommissioning. Concentrating solar power systems may employ liquids such as oils or molten salts that can be hazardous and present spill risks. In addition, various fluids are commonly used in most industrial facilities, such as hydraulic fluids, coolants, and lubricants. These fluids may in some cases be hazardous and present a spill-related risk. Proper planning and good maintenance practices can minimize impacts from hazardous materials.

·         Impacts to Water Resources

Parabolic trough and central tower systems typically use conventional steam plants to generate electricity, which commonly consume water for cooling. In arid settings, the increased water demand could strain available water resources. If the cooling water was contaminated through an accident, pollution of water resources could occur, although the risk would be minimized by good operating practices.

·         Other Concerns

Concentrating Solar Power (CSP) systems could potentially cause interference with aircraft operations if reflected light beams become misdirected into aircraft pathways. Operation of solar energy facilities and especially concentrating solar power facilities involves high temperatures that may pose an environmental or safety risk. Like all electrical generating facilities, solar facilities produce electric and magnetic fields. Construction and decommissioning of utility-scale solar energy facilities would involve a variety of possible impacts normally encountered in construction/decommissioning of large-scale industrial facilities. If new electric transmission lines or related facilities were needed to service a new solar energy development, construction, operation, and decommissioning of the transmission facilities could also cause a variety of environmental impacts.

Cost of Components

Table: Component costs for the inverter circuit.

| Name of Equipment | Unit Cost (TK) | Cost (TK) |
|---|---|---|
| Resistors (1K, 10K, 100K, 47K, 22K, 150K) | | |
| Diode 1N5408 | | |
| Diode 1N4007 | | |
| Transistor (NPN) BC547 | | |
| Capacitor 1 µF, 100 V | | |
| Capacitor 1 µF, 50 V | | |
| Capacitor 10 µF, 50 V | | |
| IC SG3524 | | |
| IC LM358 | | |
| PCB board | per inch | |
| Heat sink | per inch | |
| Relay, DC (12 V) | | |
| Zener diode 5.1 V | | |
| Variable resistor 10K | | |
| | 1 meter | |
| Panel board | | |
| Soldering iron | | |
| Total cost | | 3102 |

Protection System:

We have included several protection measures in this circuit:

 Back-EMF protection:

  1. A rectifier diode is connected across the relay coil to protect the circuit from the relay's back EMF.
  2. A rectifier diode is connected across the battery input port to protect the circuit from the back EMF of the load.

 Temperature protection:

A heat sink is used to keep the temperature of the MOSFET down.

 Overcharge protection:

A relay is used to disconnect the battery and protect it from overcharging.
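The overcharge cut-off above can be sketched as simple control logic. The threshold voltages below are illustrative assumptions for a 12 V lead-acid battery, not values taken from this thesis; a hysteresis band between the two thresholds keeps the relay from chattering as the voltage hovers near the cut-off point:

```python
# Hedged sketch of relay-based overcharge protection (threshold values assumed).
DISCONNECT_V = 14.4   # open the relay: battery considered fully charged (assumed)
RECONNECT_V = 13.2    # close the relay again once voltage drops back (assumed)

def relay_state(battery_v: float, charging: bool) -> bool:
    """Return True if the charge relay should be closed (charging allowed).

    The gap between DISCONNECT_V and RECONNECT_V is a hysteresis band that
    prevents rapid relay on/off cycling near the threshold.
    """
    if battery_v >= DISCONNECT_V:
        return False          # fully charged: cut the charging path
    if battery_v <= RECONNECT_V:
        return True           # voltage has fallen: resume charging
    return charging           # inside the band: keep the previous state

# Walk a charge/rest cycle through the controller
state = True
for v in (12.8, 13.8, 14.5, 14.0, 13.0):
    state = relay_state(v, state)
    print(f"{v:.1f} V -> relay {'closed' if state else 'open'}")
```

In the traced cycle the relay stays closed while charging, opens at 14.5 V, remains open through the hysteresis band at 14.0 V, and closes again once the battery settles to 13.0 V.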


The main source of electricity generation in Bangladesh is natural gas (about 82.69%; in the fiscal year 2008-09 this amounted to 4542 MW). Natural gas produces the heat required to drive the turbines that generate electricity. The reserve of natural gas is shrinking day by day. Because of the inadequate electricity supply, the Government has shut down production at some industries (the Ghorasal and Polash fertilizer factories, for example) to reduce natural gas consumption. Since the remaining natural gas reserve is inadequate, an alternative should be employed, and solar energy is a very good option.
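The two figures quoted above imply a total generation level, which can be backed out directly:

```python
# Back out total generation from the figures above: natural gas supplied
# about 82.69% of electricity, equal to 4542 MW in FY 2008-09.
gas_share = 0.8269
gas_mw = 4542

total_mw = gas_mw / gas_share
print(f"Implied total generation: {total_mw:.0f} MW")
print(f"From all other sources:   {total_mw - gas_mw:.0f} MW")
```

That is, total generation in FY 2008-09 was on the order of 5,500 MW, with only about 950 MW coming from sources other than natural gas.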

Bangladesh receives enough solar radiation to sustain SHS. With currently available technology, this radiation could meet the full demand for electricity. However, both PV and solar thermal systems are very expensive. Because this cost is high for consumers, the government should take steps to set up solar energy plants.

At present, solar home systems are not cost-competitive with conventional fossil-fuel-based, grid-connected power sources because of their high initial capital cost. However, the need to meet consumers' basic energy needs, together with ongoing improvements in alternative energy technologies, gives such systems good potential for widespread use.

The feasibility of the proposed system may be a cost issue in the context of Bangladesh. However, this can be overcome through incentives offered by the government and utility companies. The system can also be implemented in commercial buildings, in the telecommunications sector, and in water pumping for irrigation.

The Government of the People’s Republic of Bangladesh is trying to meet the national electricity demand in various ways, including installing solar systems. PV solar energy conversion is the only renewable energy source currently in operation in our country.

Solar thermal systems are currently a popular technology for producing electricity at megawatt scale. With the latest technology, a solar thermal plant can rival a nuclear plant (the Mojave Solar Park, at about 220,000 megawatt-hours per year) without the radioactive hazards or the giant cooling towers clogging up the skyline. It is costly, but the cost can be recovered within about 10 years, since it requires no fuel. The government should therefore consider it.

If solar cells could be produced in our country, the cost of a PV system would fall to about 60% of its current level. Some private-sector organizations have already started assembling solar panels to produce electricity, but the Government should take further steps toward solar cell production within the country.

Figure: Solar panel.