Among the measures that enable electricity use to be optimized, improving the power factor of electrical systems is undoubtedly one of the most important. A unity power factor of 1.0 (100%) can be considered ideal. For most users of electricity, however, the power factor is less than 100%, which means the electrical power is not used effectively. This inefficiency can increase the user's electricity costs, as the energy or electrical utility company passes its own excess operational costs on to the user. Billing of electricity is computed by various methods, which may also affect costs.
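Concretely, power factor is the ratio of real power (kW, the power that performs useful work) to apparent power (kVA, the power the utility must actually deliver). A minimal Python sketch, using purely illustrative figures rather than numbers from this text:

    # Power factor = real power (kW) / apparent power (kVA).
    # Both figures below are illustrative placeholders.
    real_power_kw = 80.0        # power doing useful work
    apparent_power_kva = 100.0  # power the utility must deliver

    power_factor = real_power_kw / apparent_power_kva
    print(f"Power factor: {power_factor:.2f}")  # 0.80, i.e. 80%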

From the electric utility's point of view, raising the average operating power factor of the network from 0.70 to 0.90 means:

1. Reducing costs due to losses in the network

2. Increasing the available capacity for generation and distribution across the network

This means saving hundreds of thousands of tons of fuel (and the associated emissions), freeing up hundreds of transformers, and avoiding the construction of new power plants and their support systems. Thus, where the power factor is low, utility companies charge higher rates to cover the additional costs imposed by the inefficient use of the energy they supply.
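The scale of these savings follows from basic circuit behavior: for a fixed real power and voltage, line current varies inversely with power factor, and resistive losses in the network vary with the square of the current. A back-of-the-envelope sketch (the I²R loss model is a standard approximation, not a figure taken from this text):

    # For fixed real power P and voltage V, current I = P / (V * pf),
    # so I^2 * R network losses scale as 1 / pf^2.
    pf_before = 0.70
    pf_after = 0.90

    loss_ratio = (pf_before / pf_after) ** 2
    print(f"Losses at pf {pf_after}: {loss_ratio:.1%} of losses at pf {pf_before}")
    print(f"Loss reduction: {1 - loss_ratio:.1%}")  # roughly 40%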

Without power factor correction, equipment draws more apparent power (and therefore more current) than is actually needed to perform the work. With power factor correction, less total apparent power is drawn. Payback justification of power factor correction can be established by comparing the installed cost with the immediate monthly savings. In many cases, the initial cost is recovered within 12 to 18 months, with electricity savings continuing each month thereafter.
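The payback arithmetic is a simple division of cost by savings; a minimal sketch with hypothetical figures (the 12 to 18 month range quoted above is the typical outcome reported in the text, not an output of this calculation):

    # Simple payback = installed cost / monthly savings.
    # Both figures below are hypothetical placeholders.
    installed_cost = 9_000.0   # one-time cost of the correction equipment
    monthly_savings = 600.0    # reduction in the monthly electricity bill

    payback_months = installed_cost / monthly_savings
    print(f"Simple payback: {payback_months:.0f} months")  # 15 months here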

[Figure: Touchscreen Interface]

[Figure: Power Factor Panel]