Thursday, June 26, 2008

Interaction energy



In physics, interaction energy is the contribution to the total energy that is caused by an interaction between the objects being considered.

The interaction energy usually depends on the relative position of the objects. For example, Q1Q2 / (4πε0Δr) is the electrostatic interaction energy between two objects with charges Q1, Q2.
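
As an illustrative sketch, this expression can be evaluated numerically; the charge values and the 1 nm separation in the following Python snippet are arbitrary assumptions, not data from the text.

```python
# Illustrative sketch: electrostatic interaction energy of two point charges,
# E = Q1*Q2 / (4*pi*eps0*r). Charge and distance values are arbitrary examples.
import math

EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m

def coulomb_interaction_energy(q1, q2, r):
    """Interaction energy in joules for charges q1, q2 (C) separated by r (m)."""
    return q1 * q2 / (4 * math.pi * EPSILON_0 * r)

# Two elementary charges 1 nm apart (a hypothetical example).
e = 1.602176634e-19  # elementary charge, C
print(coulomb_interaction_energy(e, e, 1e-9))  # ~2.3e-19 J
```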

Supermolecular interaction energy

A straightforward approach for evaluating the interaction energy is to calculate the difference between the energies of the isolated objects and their assembly. In the case of two objects, A and B, the interaction energy can be written as:

ΔEint = E(A,B) − E(A) − E(B)

where E(A) and E(B) are the energies of the isolated objects (monomers), and E(A,B) the energy of their interacting assembly (dimer).

For a larger system, consisting of N objects, this procedure can be generalized to provide a total many-body interaction energy:

ΔEint = E(A1, A2, …, AN) − [E(A1) + E(A2) + … + E(AN)]

By calculating the energies for monomers, dimers, trimers, etc., in an N-object system, a complete set of two-, three-, and up to N-body interaction energies can be derived.
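
A minimal sketch of this bookkeeping, assuming a set of placeholder monomer, dimer, and trimer energies rather than values from any actual calculation, might look like the following:

```python
# Sketch of the supermolecular approach: interaction energies are differences
# of total energies of the assembly and its isolated fragments.
# The energy values below are placeholders, not computed quantities.

def two_body(E_AB, E_A, E_B):
    """Two-body interaction energy: E(A,B) - E(A) - E(B)."""
    return E_AB - E_A - E_B

def three_body(E_ABC, E_AB, E_AC, E_BC, E_A, E_B, E_C):
    """Three-body term: total trimer interaction minus all pairwise terms."""
    total_interaction = E_ABC - E_A - E_B - E_C
    pairwise = (two_body(E_AB, E_A, E_B)
                + two_body(E_AC, E_A, E_C)
                + two_body(E_BC, E_B, E_C))
    return total_interaction - pairwise

# Hypothetical monomer/dimer/trimer energies (arbitrary units):
E_A, E_B, E_C = -10.0, -12.0, -11.0
E_AB, E_AC, E_BC = -22.5, -21.3, -23.2
E_ABC = -34.1
print(two_body(E_AB, E_A, E_B))                             # pair interaction A-B
print(three_body(E_ABC, E_AB, E_AC, E_BC, E_A, E_B, E_C))   # non-additive term
```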

The supermolecular approach has an important disadvantage: the final interaction energy is usually much smaller than the total energies from which it is calculated, and therefore carries a much larger relative uncertainty.

Thermodynamic free energy



In thermodynamics, the term thermodynamic free energy is a measure of the amount of mechanical (or other) work that can be extracted from a system, and is helpful in engineering applications. It is obtained by subtracting from the total energy the product of absolute temperature and entropy (the "useless energy"), yielding a thermodynamic state function which represents the "useful energy".

Overview

In short, free energy is that portion of any First-Law energy that is available for doing thermodynamic work; i.e., work mediated by thermal energy. Since free energy is subject to irreversible loss in the course of such work and First-Law energy is always conserved, it is evident that free energy is an expendable, Second-Law kind of energy that can make things happen within finite amounts of time.

In solution chemistry and biochemistry, the Gibbs free energy change (denoted by ΔG) is commonly used merely as a surrogate for (−T times) the entropy produced by spontaneous chemical reactions in situations where there is no work done; or at least no "useful" work; i.e., other than PdV. As such, it serves as a particularization of the second law of thermodynamics, giving it the physical dimensions of energy, even though the inherent meaning in terms of entropy would be more to the point.

The free energy functions are Legendre transforms of the internal energy. For processes involving a system at constant pressure P and temperature T, the Gibbs free energy is the most useful because, in addition to subsuming any entropy change due merely to heat flux, it does the same for the PdV work needed to "make space for additional molecules" produced by various processes. (Hence its utility to solution-phase chemists, including biochemists.) The Helmholtz free energy has a special theoretical importance since it is proportional to the logarithm of the partition function for the canonical ensemble in statistical mechanics. (Hence its utility to physicists; and to gas-phase chemists and engineers, who do not want to ignore PdV work.)

The (historically earlier) Helmholtz free energy is defined as A = U − TS, where U is the internal energy, T is the absolute temperature, and S is the entropy. Its change is equal to the amount of reversible work done on, or obtainable from, a system at constant T. Thus its appellation "work content", and the designation A from Arbeit, the German word for work. Since it makes no reference to any quantities involved in work (such as P and V), the Helmholtz function is completely general: its decrease is the maximum amount of work which can be done by a system, and it can increase at most by the amount of work done on a system.

The Gibbs free energy is given by G = H − TS, where H is the enthalpy. (H = U + PV, where P is the pressure and V is the volume.)
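
As a small numerical sketch of these two definitions (with arbitrary assumed values for U, S, V, and the conditions), one might write:

```python
# Minimal sketch of the definitions A = U - T*S and G = H - T*S = U + P*V - T*S.
# All input values below are arbitrary illustrations, not real material data.

def helmholtz(U, T, S):
    """Helmholtz free energy A = U - T*S (J)."""
    return U - T * S

def gibbs(U, P, V, T, S):
    """Gibbs free energy G = (U + P*V) - T*S (J)."""
    H = U + P * V  # enthalpy
    return H - T * S

U = 5.0e3      # internal energy, J (assumed)
S = 10.0       # entropy, J/K (assumed)
T = 298.15     # temperature, K
P = 101325.0   # pressure, Pa
V = 0.024      # volume, m^3 (assumed)

print(helmholtz(U, T, S))    # A
print(gibbs(U, P, V, T, S))  # G = A + P*V
```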

There has been historical controversy:
Among physicists, “free energy” most often refers to the Helmholtz free energy, denoted by F.
Among chemists, “free energy” most often refers to the Gibbs free energy, also denoted by F.

Since both fields use both functions, a compromise has been suggested, using A to denote the Helmholtz function, with G for the Gibbs function. While A is preferred by IUPAC, F is sometimes still in use, and the correct free energy function is often implicit in manuscripts and presentations.

Application

The experimental usefulness of these functions is restricted to conditions where certain variables (T, and V or external P) are held constant, although they also have theoretical importance in deriving Maxwell relations. Work other than PdV may be added, e.g., for electrochemical cells, or f·dx work in elastic materials and in muscle contraction. Other forms of work which must sometimes be considered are stress-strain, magnetic, as in adiabatic demagnetization used in the approach to absolute zero, and work due to electric polarization. These are described by tensors.

In most cases of interest there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which create entropy. Even for homogeneous "bulk" materials, the free energy functions depend on the (often suppressed) composition, as do all proper thermodynamic potentials (extensive functions), including the internal energy.

Wednesday, June 25, 2008

Entropy



In physics, entropy, symbolized by S (from the Greek μετατροπή (metatropi), meaning "transformation"),[3][4] is a measure of the unavailability of a system's energy to do work.[5] Entropy is central to the second law of thermodynamics and the combined law of thermodynamics, which deal with physical processes and whether they occur spontaneously. Spontaneous changes, in isolated systems, occur with an increase in entropy. Spontaneous changes tend to smooth out differences in temperature, pressure, density, and chemical potential that may exist in a system, and entropy is thus a measure of how far this smoothing-out process has progressed. In short, entropy is a function of a quantity of heat which shows the possibility of conversion of that heat into work. The increase in entropy is small when heat is added at high temperature and is greater when heat is added at lower temperature. Thus at maximum entropy there is minimum availability for conversion into work, and at minimum entropy there is maximum availability for conversion into work.

The concept of entropy was developed in the 1850s by German physicist Rudolf Clausius, who described it as the transformation-content, i.e. dissipative energy use, of a thermodynamic system or working body of chemical species during a change of state.[3] In contrast, the first law of thermodynamics, formalized through the heat-friction experiments of James Joule in 1843, deals with the concept of energy, which is conserved in all processes; the first law, however, lacks the ability to quantify the effects of friction and dissipation. Entropy change has often been defined as a change to a more disordered state at a molecular level. In recent years, entropy has been interpreted in terms of the "dispersal" of energy. Entropy is an extensive state function that accounts for the effects of irreversibility in thermodynamic systems.



Ice melting - a classic example of entropy increasing described in 1862 by Rudolf Clausius as an increase in the disgregation of the molecules of the body of ice.


Quantitatively, entropy is defined by the differential quantity dS = δQ / T, where δQ is the amount of heat absorbed in an isothermal and reversible process in which the system goes from one state to another, and T is the absolute temperature at which the process is occurring.[6] Entropy is one of the factors that determines the free energy of the system. This thermodynamic definition of entropy is only valid for a system in equilibrium (because temperature is defined only for a system in equilibrium), while the statistical definition of entropy (see below) applies to any system. Thus the statistical definition is usually considered the fundamental definition of entropy.

When a system's energy is defined as the sum of its "useful" energy, (e.g. that used to push a piston), and its "useless energy", i.e. that energy which cannot be used for external work, then entropy may be (most concretely) visualized as the "scrap" or "useless" energy whose energetic prevalence over the total energy of a system is directly proportional to the absolute temperature of the considered system. (Note the product "TS" in the Gibbs free energy or Helmholtz free energy relations).

In terms of statistical mechanics, the entropy describes the number of the possible microscopic configurations of the system. The statistical definition of entropy is the more fundamental definition, from which all other definitions and all properties of entropy follow. Although the concept of entropy was originally a thermodynamic construct, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics, and evolution.

History

The short history of entropy begins with the work of French mathematician Lazare Carnot who in his 1803 work Fundamental Principles of Equilibrium and Movement postulated that in any machine the accelerations and shocks of the moving parts all represent losses of moment of activity. In other words, in any natural process there exists an inherent tendency towards the dissipation of useful energy. Building on this work, in 1824 Lazare's son Sadi Carnot published Reflections on the Motive Power of Fire in which he set forth the view that in all heat-engines whenever "caloric", or what is now known as heat, falls through a temperature difference, that work or motive power can be produced from the actions of the "fall of caloric" between a hot and cold body. This was an early insight into the second law of thermodynamics.

Carnot based his views of heat partially on the early 18th century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partially on the more recent 1789 views of Count Rumford, who showed that heat could be created by friction, as when cannon bores are machined. Accordingly, Carnot reasoned that if the body of the working substance, such as a body of steam, is brought back to its original state (temperature and pressure) at the end of a complete engine cycle, then "no change occurs in the condition of the working body." This latter comment was amended in his footnotes, and it was this comment that led to the development of entropy.



Rudolf Clausius - originator of the concept of "entropy".


In the 1850s and 60s, German physicist Rudolf Clausius gravely objected to this latter supposition, i.e. that no change occurs in the working body, and gave this "change" a mathematical interpretation by questioning the nature of the inherent loss of usable heat when work is done, e.g., heat produced by friction. This was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Later, scientists such as Ludwig Boltzmann, Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. Carathéodory linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability.

Definitions and descriptions

In science, the term "entropy" is generally interpreted in three distinct, but semi-related, ways: from a macroscopic viewpoint (classical thermodynamics), a microscopic viewpoint (statistical thermodynamics), and an information viewpoint (information theory). Entropy in information theory is a fundamentally different concept from thermodynamic entropy. However, at a philosophical level, some argue that thermodynamic entropy can be interpreted as an application of the information entropy concept to a highly specific set of physical questions.

The statistical definition of entropy (see below) is the fundamental definition because the other two can be mathematically derived from it, but not vice versa. All properties of entropy (including second law of thermodynamics) follow from this definition.

Entropy in chemical thermodynamics

Thermodynamic entropy is central in chemical thermodynamics, enabling changes to be quantified and the outcome of reactions predicted. The second law of thermodynamics states that entropy in the combination of a system and its surroundings (or in an isolated system by itself) increases during all spontaneous chemical and physical processes. Spontaneity in chemistry means "by itself, or without any outside influence", and has nothing to do with speed. The Clausius equation δqrev/T = ΔS introduces the measurement of entropy change, ΔS. Entropy change describes the direction and quantifies the magnitude of simple changes such as heat transfer between systems – always from hotter to cooler spontaneously. Thus, when a mole of substance at 0 K is warmed by its surroundings to 298 K, the sum of the incremental values of qrev/T constitutes each element's or compound's standard molar entropy, a fundamental physical property and an indicator of the amount of energy stored by a substance at 298 K.[14][15] Entropy change also measures the mixing of substances as a summation of their relative quantities in the final mixture.

Entropy is equally essential in predicting the extent of complex chemical reactions, i.e. whether a process will go as written or proceed in the opposite direction. For such applications, ΔS must be incorporated in an expression that includes both the system and its surroundings, ΔS(universe) = ΔS(surroundings) + ΔS(system). This expression becomes, via some steps, the Gibbs free energy equation for reactants and products in the system: ΔG [the Gibbs free energy change of the system] = ΔH [the enthalpy change] − TΔS [the entropy change].
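
A brief sketch of how ΔG = ΔH − TΔS is used to judge spontaneity at constant temperature and pressure is given below; the enthalpy and entropy changes are assumed illustrative values, not measured data:

```python
# Sketch: using ΔG = ΔH - T*ΔS to judge whether a reaction is spontaneous
# at constant T and P. The thermochemical values are illustrative assumptions.

def gibbs_change(delta_H, T, delta_S):
    """ΔG in J/mol from ΔH (J/mol), T (K), ΔS (J/(mol*K))."""
    return delta_H - T * delta_S

delta_H = -92.0e3   # J/mol, assumed exothermic reaction
delta_S = -199.0    # J/(mol*K), assumed entropy decrease
for T in (298.0, 500.0, 700.0):
    dG = gibbs_change(delta_H, T, delta_S)
    print(f"T = {T:5.0f} K: ΔG = {dG/1000:7.1f} kJ/mol ->",
          "spontaneous" if dG < 0 else "non-spontaneous")
```

As the loop shows, an exothermic reaction with a negative entropy change stops being spontaneous above a crossover temperature where TΔS outweighs ΔH.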

The second law

An important law of physics, the second law of thermodynamics, states that the total entropy of any isolated thermodynamic system tends to increase over time, approaching a maximum value; and so, by implication, the entropy of the universe (i.e. the system and its surroundings), assumed as an isolated system, tends to increase. Two important consequences are that heat cannot of itself pass from a colder to a hotter body: i.e., it is impossible to transfer heat from a cold to a hot reservoir without at the same time converting a certain amount of work to heat. It is also impossible for any device that can operate on a cycle to receive heat from a single reservoir and produce a net amount of work; it can only get useful work out of the heat if heat is at the same time transferred from a hot to a cold reservoir. This means that an isolated "perpetual motion" machine is impossible. Also, it follows that a reduction in the increase of entropy in a specified process, such as a chemical reaction, means that the process is energetically more efficient.

In general, according to the second law, the entropy of a system that is not isolated may decrease. An air conditioner, for example, cools the air in a room, thus reducing the entropy of the air. The heat, however, involved in operating the air conditioner always makes a bigger contribution to the entropy of the environment than the decrease of the entropy of the air. Thus the total entropy of the room and the environment increases, in agreement with the second law.



During steady-state continuous operation, an entropy balance applied to an open system accounts for system entropy changes related to heat flow and mass flow across the system boundary.


Entropy in quantum mechanics (von Neumann entropy)

In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as "von Neumann entropy". Von Neumann established the correct mathematical framework for quantum mechanics with his work Mathematische Grundlagen der Quantenmechanik. He provided in this work a theory of measurement, where the usual notion of wave collapse is described as an irreversible process (the so called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix he extended the classical concept of entropy into the quantum domain.

It is well known that a Shannon-based definition of information entropy leads in the classical case to the Boltzmann entropy. It is tempting to regard the von Neumann entropy as the corresponding quantum mechanical definition, but the latter is problematic from a quantum information point of view. Consequently, Stotland, Pomeransky, Bachmat and Cohen have introduced a new definition of entropy that reflects the inherent uncertainty of quantum mechanical states. This definition allows one to distinguish between the minimum uncertainty entropy of pure states and the excess statistical entropy of mixtures.
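
For the standard von Neumann entropy itself, S = −Tr(ρ ln ρ), a minimal numerical sketch (using simple assumed qubit states) is:

```python
# Sketch of the von Neumann entropy S = -Tr(rho ln rho) for a density matrix,
# computed from its eigenvalues. The example states are simple assumptions.
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in nats of a density matrix rho (Hermitian, trace 1)."""
    eigenvalues = np.linalg.eigvalsh(rho)
    eigenvalues = eigenvalues[eigenvalues > 1e-12]  # drop zero eigenvalues
    return float(-np.sum(eigenvalues * np.log(eigenvalues)))

pure_state = np.array([[1.0, 0.0],
                       [0.0, 0.0]])          # |0><0|, a pure state
mixed_state = np.array([[0.5, 0.0],
                        [0.0, 0.5]])         # maximally mixed qubit

print(von_neumann_entropy(pure_state))   # 0.0
print(von_neumann_entropy(mixed_state))  # ln 2 ≈ 0.693
```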

Standard textbook definitions

Entropy – energy broken down in irretrievable heat.
Boltzmann's constant times the logarithm of a multiplicity; where the multiplicity of a macrostate is the number of microstates that correspond to the macrostate.
the number of ways of arranging things in a system (times the Boltzmann's constant).
a non-conserved thermodynamic state function, measured in terms of the number of microstates a system can assume, which corresponds to a degradation in usable energy.
a direct measure of the randomness of a system.
a measure of energy dispersal at a specific temperature.
a measure of the partial loss of the ability of a system to perform work due to the effects of irreversibility.
an index of the tendency of a system towards spontaneous change.
a measure of the unavailability of a system’s energy to do work; also a measure of disorder; the higher the entropy the greater the disorder.
a parameter representing the state of disorder of a system at the atomic, ionic, or molecular level.
a measure of disorder in the universe or of the availability of the energy in a system to do work.

Energy dispersal

The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature. Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels.

Ambiguities in the terms disorder and chaos, which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students. As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures will tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics. Physical chemist Peter Atkins, for example, who previously wrote of dispersal leading to a disordered state, now writes that "spontaneous changes are always accompanied by a dispersal of energy", and has discarded 'disorder' as a description.

Ice melting example

The illustration for this article is a classic example in which entropy increases in a small 'universe', a thermodynamic system consisting of the 'surroundings' (the warm room) and 'system' (glass, ice, cold water). In this universe, some heat energy δQ from the warmer room surroundings (at 298 K or 25 °C) will spread out to the cooler system of ice and water at its constant temperature T of 273 K (0 °C), the melting temperature of ice. The entropy of the system will change by the amount dS = δQ/T, in this example δQ/273 K. (The heat δQ for this process is the energy required to change water from the solid state to the liquid state, and is called the enthalpy of fusion, i.e. the ΔH for ice fusion.) The entropy of the surroundings will change by an amount dS = −δQ/298 K. So in this example, the entropy of the system increases, whereas the entropy of the surroundings decreases.

It is important to realize that the decrease in the entropy of the surrounding room is less than the increase in the entropy of the ice and water: the room temperature of 298 K is larger than 273 K and therefore the ratio, (entropy change), of δQ/298 K for the surroundings is smaller than the ratio (entropy change), of δQ/273 K for the ice+water system. To find the entropy change of our 'universe', we add up the entropy changes for its constituents: the surrounding room, and the ice+water. The total entropy change is positive; this is always true in spontaneous events in a thermodynamic system and it shows the predictive importance of entropy: the final net entropy after such an event is always greater than was the initial entropy.
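
A short numerical sketch of this bookkeeping, taking δQ to be the molar enthalpy of fusion of ice (about 6.01 kJ/mol) as an illustrative value, is given below:

```python
# Sketch of the ice-melting bookkeeping: dS_system = +δQ/273 K,
# dS_surroundings = -δQ/298 K, with δQ taken as the enthalpy of fusion
# for one mole of ice (a standard value, used here only as an illustration).

delta_Q = 6.01e3   # J per mole of ice melted (enthalpy of fusion, approx.)
T_system = 273.15  # K, melting ice
T_room = 298.15    # K, warm room

dS_system = delta_Q / T_system          # entropy gained by ice + water
dS_surroundings = -delta_Q / T_room     # entropy lost by the room
dS_universe = dS_system + dS_surroundings

print(f"ΔS(system)       = {dS_system:6.2f} J/K")
print(f"ΔS(surroundings) = {dS_surroundings:6.2f} J/K")
print(f"ΔS(universe)     = {dS_universe:6.2f} J/K (positive, as the second law requires)")
```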

As the temperature of the cool water rises to that of the room and the room further cools imperceptibly, the sum of the δQ/T over the continuous range, at many increments, in the initially cool to finally warm water can be found by calculus. The entire miniature "universe", i.e. this thermodynamic system, has increased in entropy. Energy has spontaneously become more dispersed and spread out in that "universe" than when the glass of ice water was introduced and became a "system" within it.

Topics in entropy

Entropy and life

For nearly a century and a half, beginning with Clausius' 1863 memoir "On the Concentration of Rays of Heat and Light, and on the Limits of its Action", much writing and research has been devoted to the relationship between thermodynamic entropy and the evolution of life. The argument that life feeds on negative entropy or negentropy as put forth in the 1944 book What is Life? by physicist Erwin Schrödinger served as a further stimulus to this research. Recent writings have utilized the concept of Gibbs free energy to elaborate on this issue. Tangentially, some creationists have argued that entropy rules out evolution.

In the popular 1982 textbook Principles of Biochemistry by noted American biochemist Albert Lehninger, for example, it is argued that the order produced within cells as they grow and divide is more than compensated for by the disorder they create in their surroundings in the course of growth and division. In short, according to Lehninger, "living organisms preserve their internal order by taking from their surroundings free energy, in the form of nutrients or sunlight, and returning to their surroundings an equal amount of energy as heat and entropy."

Evolution related definitions:
Negentropy - a shorthand colloquial phrase for negative entropy.
Ectropy - a measure of the tendency of a dynamical system to do useful work and grow more organized.
Syntropy - a tendency towards order and symmetrical combinations and designs of ever more advantageous and orderly patterns.
Extropy – a metaphorical term defining the extent of a living or organizational system's intelligence, functional order, vitality, energy, life, experience, and capacity and drive for improvement and growth.
Ecological entropy - a measure of biodiversity in the study of biological ecology.

The arrow of time

Entropy is the only quantity in the physical sciences that "picks" a particular direction for time, sometimes called an arrow of time. As we go "forward" in time, the Second Law of Thermodynamics tells us that the entropy of an isolated system can only increase or remain the same; it cannot decrease. Hence, from one perspective, entropy measurement is thought of as a kind of clock.

Entropy and cosmology

We have previously mentioned that a finite universe may be considered an isolated system. As such, it may be subject to the Second Law of Thermodynamics, so that its total entropy is constantly increasing. It has been speculated that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy, so that no more work can be extracted from any source.

If the universe can be considered to have generally increasing entropy, then - as Roger Penrose has pointed out - gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which collapse eventually into black holes. Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size. This makes them likely end points of all entropy-increasing processes, if they are totally effective matter and energy traps. Hawking has, however, recently changed his stance on this aspect.

The role of entropy in cosmology remains a controversial subject. Recent work has cast extensive doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general. Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly - thus entropy density is decreasing with time. This results in an "entropy gap" pushing the system further away from equilibrium. Other complicating factors, such as the energy density of the vacuum and macroscopic quantum effects, are difficult to reconcile with thermodynamical models, making any predictions of large-scale thermodynamics extremely difficult.

Miscellaneous definitions
Entropy unit - a non-S.I. unit of thermodynamic entropy, usually denoted "e.u." and equal to one calorie per kelvin per mole
Gibbs entropy - the usual statistical mechanical entropy of a thermodynamic system.
Boltzmann entropy - a type of Gibbs entropy, which neglects internal statistical correlations in the overall particle distribution.
Tsallis entropy - a generalization of the standard Boltzmann-Gibbs entropy.
Standard molar entropy - is the entropy content of one mole of substance, under conditions of standard temperature and pressure.
Black hole entropy - is the entropy carried by a black hole, which is proportional to the surface area of the black hole's event horizon.
Residual entropy - the entropy present after a substance is cooled arbitrarily close to absolute zero.
Entropy of mixing - the change in the entropy when two different chemical substances or components are mixed.
Loop entropy - is the entropy lost upon bringing together two residues of a polymer within a prescribed distance.
Conformational entropy - is the entropy associated with the physical arrangement of a polymer chain that assumes a compact or globular state in solution.
Entropic force - a microscopic force or reaction tendency related to system organization changes, molecular frictional considerations, and statistical variations.
Free entropy - an entropic thermodynamic potential analogous to the free energy.
Entropic explosion – an explosion in which the reactants undergo a large change in volume without releasing a large amount of heat.
Entropy change – a change in entropy dS between two equilibrium states is given by the heat transferred dQrev divided by the absolute temperature T of the system in this interval.
Sackur-Tetrode entropy - the entropy of a monatomic classical ideal gas determined via quantum considerations.

Other relations

Other mathematical definitions
Kolmogorov-Sinai entropy - a mathematical type of entropy in dynamical systems related to measures of partitions.
Topological entropy - a way of defining entropy in an iterated function map in ergodic theory.
Relative entropy - is a natural distance measure from a "true" probability distribution P to an arbitrary probability distribution Q.
Rényi entropy - a generalized entropy measure for fractal systems.

Sociological definitions

The concept of entropy has also entered the domain of sociology, generally as a metaphor for chaos, disorder or dissipation of energy, rather than as a direct measure of thermodynamic or information entropy:
Entropology – the study or discussion of entropy or the name sometimes given to thermodynamics without differential equations.
Psychological entropy - the distribution of energy in the psyche, which tends to seek equilibrium or balance among all the structures of the psyche.
Economic entropy – a semi-quantitative measure of the irrevocable dissipation and degradation of natural materials and available energy with respect to economic activity.
Social entropy – a measure of social system structure, having both theoretical and statistical interpretations, i.e. society (macrosocietal variables) measured in terms of how the individual functions in society (microsocietal variables); also related to social equilibrium.
Corporate entropy - energy waste as red tape and business team inefficiency, i.e. energy lost to waste.


Quotes

“ Any method involving the notion of entropy, the very existence of which depends on the second law of thermodynamics, will doubtless seem to many far-fetched, and may repel beginners as obscure and difficult of comprehension. ”

--Willard Gibbs, Graphical Methods in the Thermodynamics of Fluids (1873)

“ My greatest concern was what to call it. I thought of calling it ‘information’, but the word was overly used, so I decided to call it ‘uncertainty’. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, ‘You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.’ ”

--Conversation between Claude Shannon and John von Neumann regarding what name to give to the “measure of uncertainty” or attenuation in phone-line signals (1949)

Enthalpy


Enthalpy is a measure of the heat content of a chemical or physical system; it is a quantity derived from the heat and work relations studied in thermodynamics. As a system changes from one state to another the enthalpy change, ΔH, is equal to the enthalpy of the products minus the enthalpy of the reactants. If heat is given off during a transformation from one state to another, then the final state will have a lower heat content than the initial state, the enthalpy change ΔH will be negative, and the process is said to be exothermic. If heat is absorbed during the transformation, then the final state will have a higher heat content, ΔH will be positive, and the process is said to be endothermic. The enthalpy change accompanying a chemical reaction is called the heat of the reaction. For a reaction in which a compound is formed from its composite elements, the enthalpy increase or decrease is called the heat of formation of the compound. Changes of state, or phase, of matter are also accompanied by enthalpy changes; the change associated with the solid-liquid transition is called the heat of fusion and the change associated with the liquid-gas transition is called the heat of vaporization (see latent heat). The enthalpy change for a given reaction often may be used to tell how favorable the reaction is; an exothermic reaction involves a loss of heat and a consequent lower final energy and thus tends to be favorable, while an endothermic reaction tends to be unfavorable because it involves an increase in energy. However, there are other factors, such as entropy changes, which must also be taken into account in determining whether or not a given process can occur.

Tuesday, June 24, 2008

Activation energy



In chemistry, activation energy, also called threshold energy, is a term introduced in 1889 by Svante Arrhenius that is defined as the energy that must be overcome in order for a chemical reaction to occur. Activation energy may otherwise be denoted as the minimum energy necessary for a specific chemical reaction to occur. The activation energy of a reaction is usually denoted by Ea, and given in units of kilojoules per mole.

Basically, the activation energy is the height of the potential barrier (sometimes called the energy barrier) separating two minima of potential energy (of the reactants and of the products of reaction). For a chemical reaction to proceed at a noticeable rate, there should be a noticeable number of molecules with energy equal to or greater than the activation energy.



The sparks generated by striking steel against a flint provide the activation energy to initiate combustion in this Bunsen burner. The blue flame will sustain itself after the sparks are extinguished because the continued combustion of the flame is now energetically favorable.


Overview
Main article: Collision theory

In what is known as the "collisional model", there are three necessary requirements for a reaction to take place:
1. the molecules must collide to react.

If two molecules simply collide, however, they will not always react; therefore, the occurrence of a collision is not enough. The second requirement is that:
2. there must be enough energy (energy of activation) for the two molecules to react.

This is the idea of a transition state; if two slow molecules collide, they might bounce off one another because they do not contain enough energy to reach the energy of activation and overcome the transition state (the highest energy point). Lastly, the third requirement is:
3. the molecules must be orientated with respect to each other correctly.

For the reaction to occur between two colliding molecules, they must collide in the correct orientation, and possess a certain, minimum, amount of energy. As the molecules approach each other, their electron clouds repel each other. Overcoming this repulsion requires energy (activation energy), which is typically provided by the heat of the system; i.e., the translational, vibrational, and rotational energy of each molecule, although sometimes by light (photochemistry) or electrical fields (electrochemistry). If there is enough energy available, the repulsion is overcome and the molecules get close enough for attractions between the molecules to cause a rearrangement of bonds.



Reaction coordinate showing the relationship between enzyme kinetics and activation energy.


At low temperatures for a particular reaction, most (but not all) molecules will not have enough energy to react. However, there will nearly always be a certain number with enough energy at any temperature because temperature is a measure of the average energy of the system — individual molecules can have more or less energy than the average. Increasing the temperature increases the proportion of molecules with more energy than the activation energy, and consequently the rate of reaction increases. Typically the activation energy is given as the energy in kilojoules needed for one mole of reactants to react.

Mathematical formulation

The Arrhenius equation gives the quantitative basis of the relationship between the activation energy and the rate at which a reaction proceeds. From the Arrhenius equation, the activation energy can be expressed as

Ea = −RT ln(k/A)

where k is the rate constant, A is the frequency factor for the reaction, R is the universal gas constant, and T is the temperature (in kelvin). The higher the temperature, the more likely the reaction will be able to overcome the energy of activation. A is a steric factor, which expresses the probability that the molecules have a favorable orientation and will be able to proceed in a collision. In order for the reaction to proceed and overcome the activation energy, the temperature, orientation, and energy of the molecules must be substantial; this equation manages to sum up all of these things. Because Ea for most chemical reactions is in the few-electronvolt range (as chemical reactions only involve the exchange of outermost electrons between atoms), raising the temperature by 10 kelvins (at room temperature kT ≈ 0.026 eV) approximately doubles the rate of a reaction (in the absence of any other temperature-dependent effects) due to an increase in the number of molecules that have the activation energy (as given by the Boltzmann distribution).
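
A brief sketch of the Arrhenius relation k = A·exp(−Ea/RT), with assumed values for the frequency factor and activation energy, shows both the roughly-doubling rule of thumb and how Ea can be recovered from k, A, and T:

```python
# Sketch of the Arrhenius relation k = A*exp(-Ea/(R*T)) and of the rule of thumb
# that a ~10 K rise roughly doubles the rate when Ea is around 50 kJ/mol.
# The frequency factor and activation energy below are assumed example values.
import math

R = 8.314  # J/(mol*K), universal gas constant

def rate_constant(A, Ea, T):
    """Arrhenius rate constant for frequency factor A, activation energy Ea (J/mol)."""
    return A * math.exp(-Ea / (R * T))

A = 1.0e13       # 1/s, assumed frequency factor
Ea = 50.0e3      # J/mol, assumed activation energy

k_298 = rate_constant(A, Ea, 298.0)
k_308 = rate_constant(A, Ea, 308.0)
print(k_308 / k_298)                       # roughly 1.9: about a doubling per 10 K
print(-R * 298.0 * math.log(k_298 / A))    # recovers Ea from k, A, and T
```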

Method of Reducing Activation Energy

Ways in which enzymes can lower the activation energy include: destabilizing the substrate (ground state), or stabilization of the transition state. The enzyme can interact with the substrate to destabilize it via:
geometric interactions
electrostatic (charge-charge) interactions
desolvation

Transition states



The relationship between activation energy (Ea) and enthalpy of formation (ΔH) with and without a catalyst. The highest energy position (peak position) represents the transition state. With the catalyst, the energy required to enter transition state decreases, thereby decreasing the energy required to initiate the reaction.

The transition state along a reaction coordinate is the point of maximum free energy, where bond-making and bond-breaking are balanced. Transition states exist only for extremely brief periods of time (on the order of 10⁻¹⁵ s). The energy required to reach the transition state is equal to the activation energy for that reaction. Multi-stage reactions involve a number of transition points; in such cases the overall activation energy is equal to the one requiring the most energy. After this time either the molecules move apart again with the original bonds reforming, or the bonds break and new products form. This is possible because both possibilities result in the release of energy (shown on the enthalpy profile diagram at right, as both positions lie below the transition state). A substance that modifies the transition state to lower the activation energy is termed a catalyst; a biological catalyst is termed an enzyme. It is important to note that a catalyst increases the rate of reaction without being consumed by it. In addition, while the catalyst lowers the activation energy, it does not change the energies of the original reactants or products. Rather, the reactant energy and the product energy remain the same and only the activation energy is altered (lowered). To further illustrate this idea, see the image to the right.

Negative activation energy

In some cases rates of reaction decrease with increasing temperature. When the rate still follows an approximately exponential relationship, so that the rate constant can be fit to an Arrhenius expression, this results in a negative value of Ea. Reactions exhibiting these negative activation energies are typically barrierless reactions, in which the reaction proceeding relies on the capture of the molecules in a potential well. Increasing the temperature leads to a reduced probability of the colliding molecules capturing one another (with more glancing collisions not leading to reaction, as the higher momentum carries the colliding particles out of the potential well), expressed as a reaction cross section that decreases with increasing temperature. Such a situation no longer lends itself to direct interpretation as the height of a potential barrier.

Energy



Concept

As with many concepts in physics, energy—along with the related ideas of work and power—has a meaning much more specific, and in some ways quite different, from its everyday connotation. According to the language of physics, a person who strains without success to pull a rock out of the ground has done no work, whereas a child playing on a playground produces a great deal of work. Energy, which may be defined as the ability of an object to do work, is neither created nor destroyed; it simply changes form, a concept that can be illustrated by the behavior of a bouncing ball.

How It Works

In fact, it might actually be more precise to say that energy is the ability of "a thing" or "something" to do work. Not only tangible objects (whether they be organic, mechanical, or electromagnetic) but also non-objects may possess energy. At the subatomic level, a particle with no mass may have energy. The same can be said of a magnetic force field.

One cannot touch a force field; hence, it is not an object—but obviously, it exists. All one has to do to prove its existence is to place a natural magnet, such as an iron nail, within the magnetic field. Assuming the force field is strong enough, the nail will move through space toward it—and thus the force field will have performed work on the nail.

Work: What It Is and Is Not

Work may be defined in general terms as the exertion of force over a given distance. In order for work to be accomplished, there must be a displacement in space—or, in colloquial terms, something has to be moved from point A to point B. As noted earlier, this definition creates results that go against the common-sense definition of "work."

A person straining, and failing, to pull a rock from the ground has performed no work (in terms of physics) because nothing has been moved. On the other hand, a child on a playground performs considerable work: as she runs from the slide to the swing, for instance, she has moved her own weight (a variety of force) across a distance. She is even working when her movement is back-and-forth, as on the swing. This type of movement results in no net displacement, but as long as displacement has occurred at all, work has occurred.




Lightning is the electric breakdown of air by strong electric fields, producing a plasma, which causes an energy transfer from the electric field to heat, mechanical energy (the random motion of air molecules caused by the heat), and light.



Similarly, when a man completes a full push-up, his body is in the same position—parallel to the floor, arms extended to support him—as it was before he began, yet he has accomplished work. If, on the other hand, he is at the end of his energy, with his chest on the floor, straining but failing to complete just one more push-up, then he is not working. The fact that he feels as though he has worked may matter in a personal sense, but it does not in terms of physics.

Calculating Work

Work can be defined more specifically as the product of force and distance, where those two vectors are exerted in the same direction. Suppose one were to drag a block of a certain weight across a given distance of floor. The amount of force one exerts parallel to the floor itself, multiplied by the distance, is equal to the amount of work exerted. On the other hand, if one pulls up on the block in a position perpendicular to the floor, that force does not contribute toward the work of dragging the block across the floor, because it is not parallel to distance as defined in this particular situation.

Similarly, if one exerts force on the block at an angle to the floor, only a portion of that force counts toward the net product of work—a portion that must be quantified in terms of trigonometry. The line of force parallel to the floor may be thought of as the base of a triangle, with a line perpendicular to the floor as its second side. Hence there is a 90°-angle, making it a right triangle with a hypotenuse. The hypotenuse is the line of force, which again is at an angle to the floor.

The component of force that counts toward the total work on the block is equal to the total force multiplied by the cosine of the angle. A cosine is the ratio between the leg adjacent to an acute (less than 90°) angle and the hypotenuse. The leg adjacent to the acute angle is, of course, the base of the triangle, which is parallel to the floor itself. Sizes of triangles may vary, but the ratio expressed by a cosine (abbreviated cos) does not. Hence, if one is pulling on the block by a rope that makes a 30°-angle to the floor, then force must be multiplied by cos 30°, which is equal to 0.866.

Note that the cosine is less than 1; hence when multiplied by the total force exerted, it will yield a figure 13.4% smaller than the total force. In fact, the larger the angle, the smaller the cosine; thus for 90°, the value of cos = 0. On the other hand, for an angle of 0°, cos = 1. Thus, if total force is exerted parallel to the floor—that is, at a 0°-angle to it—then the component of force that counts toward total work is equal to the total force. From the standpoint of physics, this would be a highly work-intensive operation.
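
A small sketch of the rule W = F·d·cos θ, using assumed values for the force and distance, makes the role of the angle concrete:

```python
# Sketch of W = F * d * cos(theta): only the force component parallel to the
# displacement does work. Force, distance, and angle values are assumptions.
import math

def work(force, distance, angle_deg):
    """Work in joules for a force (N) applied at angle_deg to a displacement (m)."""
    return force * distance * math.cos(math.radians(angle_deg))

F = 100.0   # N, assumed pull on the block
d = 5.0     # m, assumed distance dragged

for angle in (0.0, 30.0, 90.0):
    print(f"{angle:4.0f} degrees: W = {work(F, d, angle):6.1f} J")
# 0 degrees gives the full 500 J, 30 degrees gives about 433 J (cos 30 = 0.866),
# and 90 degrees gives essentially zero work.
```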

Gravity and Other Peculiarities of Work

The above discussion relates entirely to work along a horizontal plane. On the vertical plane, by contrast, work is much simpler to calculate due to the presence of a constant downward force, which is, of course, gravity. The force of gravity accelerates objects at a rate of 32 ft/sec² (9.8 m/sec²). The mass (m) of an object multiplied by the rate of gravitational acceleration (g) yields its weight, and the formula for work done against gravity is equal to weight multiplied by height (h) above some lower reference point: mgh.

Distance and force are both vectors—that is, quantities possessing both magnitude and direction. Yet work, though it is the product of these two vectors, is a scalar, meaning that only the magnitude of work (and not the direction over which it is exerted) is important. Hence mgh can refer either to the upward work one exerts against gravity (that is, by lifting an object to a certain height), or to the downward work that gravity performs on the object when it is dropped. The direction of h does not matter, and its value is purely relative, referring to the vertical distance between one point and another.

The fact that gravity can "do work"—and the irrelevance of direction—further illustrates the truth that work, in the sense in which it is applied by physicists, is quite different from "work" as it understood in the day-to-day world. There is a highly personal quality to the everyday meaning of the term, which is completely lacking from its physics definition.

If someone carried a heavy box up five flights of stairs, that person would quite naturally feel justified in saying "I've worked." Certainly he or she would feel that the work expended was far greater than that of someone who had simply allowed the elevator to carry the box up those five floors. Yet in terms of work done against gravity, the work done on the box by the elevator is exactly the same as that performed by the person carrying it upstairs. The identity of the "worker"—not to mention the sweat expended or not expended—is irrelevant from the standpoint of physics.
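
A tiny sketch of the calculation, with an assumed box mass and an assumed 3 m per flight, shows that the route taken makes no difference to the work done against gravity:

```python
# Sketch of work done against gravity, W = m*g*h: whether the box rides the
# elevator or is carried up the stairs, the work against gravity is the same.
# Mass and height are assumed example values.

g = 9.8  # m/s^2, gravitational acceleration

def work_against_gravity(mass, height):
    """Work in joules to raise a mass (kg) through a vertical height (m)."""
    return mass * g * height

mass = 20.0          # kg, the heavy box (assumed)
height = 5 * 3.0     # m, five flights at an assumed 3 m each

print(work_against_gravity(mass, height))  # 2940 J, regardless of the route taken
```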

Measurement of Work and Power

In the metric system, a newton (N) is the amount of force required to accelerate 1 kg of mass by 1 meter per second squared (m/s²). Work is measured by the joule (J), equal to 1 newton-meter (N · m). The British unit of force is the pound, and work is measured in foot-pounds, or the work done by a force of 1 lb over a distance of one foot.

Power, the rate at which work is accomplished over time, is the same as work divided by time. It can also be calculated in terms of force multiplied by speed, much like the force-multiplied-by-distance formula for work. However, as with work, the force and speed must be in the same direction. Hence, the formula for power in these terms is F · cos θ · v, where F=force, v=speed, and cos θ is equal to the cosine of the angle θ (the Greek letter theta) between F and the direction of v.

The metric-system measure of power is the watt, named after James Watt (1736-1819), the Scottish inventor who developed the first fully viable steam engine and thus helped inaugurate the Industrial Revolution. A watt is equal to 1 joule per second, but this is such a small unit that it is more typical to speak in terms of kilowatts, or units of 1,000 watts.

Ironically, Watt himself—like most people in the British Isles and America—lived in a world that used the British system, in which the unit of power is the foot-pound per second. The latter, too, is very small, so for measuring the power of his steam engine, Watt suggested a unit based on something quite familiar to the people of his time: the power of a horse. One horsepower (hp) is equal to 550 foot-pounds per second.
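
As a short sketch combining these units (with assumed force and speed values), power can be computed from F·cos θ·v and then expressed in horsepower:

```python
# Sketch of P = F*cos(theta)*v and of converting watts to horsepower
# (1 hp = 745.7 W, i.e. 550 ft-lb/s). Force, angle, and speed are assumptions.
import math

WATTS_PER_HP = 745.7

def power(force, speed, angle_deg=0.0):
    """Power in watts for a force (N) at angle_deg to a velocity of magnitude speed (m/s)."""
    return force * math.cos(math.radians(angle_deg)) * speed

P = power(force=400.0, speed=2.5)   # 1000 W = 1 kW
print(P, "W")
print(P / WATTS_PER_HP, "hp")       # about 1.34 hp
```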




Thomas Young - the first to use the term "energy" in the modern sense.




Sorting Out Metric and British Units

The British system, of course, is horridly cumbersome compared to the metric system, and thus it long ago fell out of favor with the international scientific community. The British system is the product of loosely developed conventions that emerged over time: for instance, a foot was based on the length of the reigning king's foot, and in time, this became standardized. By contrast, the metric system was created quite deliberately over a matter of just a few years following the French Revolution, which broke out in 1789. The metric system was adopted ten years later.

During the revolutionary era, French intellectuals believed that every aspect of existence could and should be treated in highly rational, scientific terms. Out of these ideas arose much folly—especially after the supposedly "rational" leaders of the revolution began chopping off people's heads—but one of the more positive outcomes was the metric system. This system, based entirely on the number 10 and its exponents, made it easy to relate one figure to another: for instance, there are 100 centimeters in a meter and 1,000 meters in a kilometer. This is vastly more convenient than converting 12 inches to a foot, and 5,280 feet to a mile.

For this reason, scientists—even those from the Anglo-American world—use the metric system for measuring not only horizontal space, but volume, temperature, pressure, work, power, and so on. Within the scientific community, in fact, the metric system is known as SI, an abbreviation of the French Système International d'Unités—that is, "International System of Units."

Americans have shown little interest in adopting the SI system, yet where power is concerned, there is one exception. For measuring the power of a mechanical device, such as an automobile or even a garbage disposal, Americans use the British horsepower. However, for measuring electrical power, the SI kilowatt is used. When an electric utility performs a meter reading on a family's power usage, it measures that usage in terms of electrical "work" performed for the family, and thus bills them by the kilowatt-hour.

Three Types of Energy

Kinetic and Potential Energy Formulae

Earlier, energy was defined as the ability of an object to accomplish work—a definition that by this point has acquired a great deal more meaning. There are three types of energy: kinetic energy, or the energy that something possesses by virtue of its motion; potential energy, the energy it possesses by virtue of its position; and rest energy, the energy it possesses by virtue of its mass.

The formula for kinetic energy is KE = ½mv². In other words, for an object of mass m, kinetic energy is equal to half the mass multiplied by the square of its speed v. The actual derivation of this formula is a rather detailed process, involving reference to the second of the three laws of motion formulated by Sir Isaac Newton (1642-1727). The second law states that F = ma; in other words, force is equal to mass multiplied by acceleration. In order to understand kinetic energy, it is necessary, then, to understand the formula for uniform acceleration. The latter is vf² = v0² + 2as, where vf is the final speed of the object, v0 its initial speed, a the acceleration, and s the distance. By substituting values within these equations, one arrives at the formula of ½mv² for kinetic energy.

The above is simply another form of the general formula for work—since energy is, after all, the ability to perform work. In order to produce an amount of kinetic energy equal to ½mv² within an object, one must perform an amount of work on it equal to Fs. Hence, kinetic energy also equals Fs, and thus the preceding paragraph simply provides a means for translating that into more specific terms.
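
A small numerical sketch of this chain of reasoning, with assumed values for the mass, force, and distance, confirms that the work Fs equals the kinetic energy ½mv² gained:

```python
# Sketch of the link described above: accelerating a mass from rest over a
# distance s with force F does work F*s, which matches (1/2)*m*v^2 at the end.
# The mass, force, and distance are assumed example values.

m = 2.0     # kg
F = 10.0    # N
s = 4.0     # m

a = F / m                      # Newton's second law: F = m*a
v_squared = 0.0 + 2 * a * s    # uniform acceleration from rest: v^2 = v0^2 + 2*a*s
kinetic_energy = 0.5 * m * v_squared

print(F * s)            # work done: 40 J
print(kinetic_energy)   # kinetic energy gained: 40 J, as expected
```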

The potential energy (PE) formula is much simpler, but it also relates to a work formula given earlier: that of work done against gravity. Potential energy, in this instance, is simply a function of gravity and the distance h above some reference point. Hence, its formula is the same as that for work done against gravity, mgh or wh, where w stands for weight. (Note that this refers to potential energy in a gravitational field; potential energy may also exist in an electromagnetic field, in which case the formula would be different from the one presented here.)



A Calorimeter - An instrument used by physicists to measure energy



Rest Energy and Its Intriguing Formula

Finally, there is rest energy, which, though it may not sound very exciting, is in fact the most intriguing—and the most complex—of the three. Ironically, the formula for rest energy is far, far more complex in derivation than that for potential or even kinetic energy, yet it is much more well-known within the popular culture.

Indeed, E = mc² is perhaps the most famous physics formula in the world—even more so than the much simpler F = ma. The formula for rest energy, as many people know, comes from the man whose Theory of Relativity invalidated certain specifics of the Newtonian framework: Albert Einstein (1879-1955). As for what the formula actually means, that will be discussed later.
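
Although the derivation is beyond the present scope, the formula itself is easy to evaluate; the one-gram mass below is simply an illustrative assumption:

```python
# Sketch of the rest-energy formula E = m*c^2 for a one-gram mass
# (the mass is an arbitrary illustration).

c = 2.998e8          # speed of light, m/s
mass = 1.0e-3        # kg (one gram)

rest_energy = mass * c**2
print(rest_energy)   # ~9.0e13 J locked up in a single gram of matter
```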



Heat, a form of energy, is partly potential energy and partly kinetic energy.



Real-Life Applications

Falling and Bouncing Balls

One of the best—and most frequently used—illustrations of potential and kinetic energy involves standing at the top of a building, holding a baseball over the side. Naturally, this is not an experiment to perform in real life. Due to its relatively small mass, a falling baseball does not have a great amount of kinetic energy, yet in the real world, a variety of other conditions (among them inertia, the tendency of an object to maintain its state of motion) conspire to make a hit on the head with a baseball potentially quite serious. If dropped from a great enough height, it could be fatal.

When one holds the baseball over the side of the building, potential energy is at a peak; once the ball is released, potential energy begins to decrease in favor of kinetic energy. The relationship between these, in fact, is inverse: as the value of one decreases, that of the other increases in exact proportion. The ball falls until its potential energy (measured from the ground) reaches zero; at that point its kinetic energy is at its maximum value, equal to the potential energy the ball possessed at the beginning. Thus the sum of kinetic energy and potential energy remains constant, reflecting the conservation of energy, a subject discussed below.
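
A short sketch of this trade-off for a freely falling ball (with an assumed mass and drop height, and air resistance ignored) shows the sum of the two energies holding constant:

```python
# Sketch of the inverse trade-off described above: for a ball falling freely
# from an assumed height, potential energy decreases while kinetic energy
# increases, and their sum stays constant (air resistance ignored).

g = 9.8        # m/s^2
m = 0.145      # kg, roughly a baseball (assumed)
h0 = 45.0      # m, assumed drop height

for fraction_fallen in (0.0, 0.25, 0.5, 0.75, 1.0):
    h = h0 * (1.0 - fraction_fallen)           # current height above the ground
    v_squared = 2 * g * (h0 - h)               # speed^2 after falling (h0 - h)
    pe = m * g * h
    ke = 0.5 * m * v_squared
    print(f"h = {h:5.1f} m  PE = {pe:6.1f} J  KE = {ke:6.1f} J  total = {pe + ke:6.1f} J")
```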

It is relatively easy to understand how the ball acquires kinetic energy in its fall, but potential energy is somewhat more challenging to comprehend. The ball does not really "possess" the potential energy: potential energy resides within an entire system comprised by the ball, the space through which it falls, and the Earth. There is thus no "magic" in the reciprocal relationship between potential and kinetic energy: both are part of a single system, which can be envisioned by means of an analogy.

Imagine that one has a 20-dollar bill, then buys a pack of gum. Now one has, say, $19.20. The positive value of dollars has decreased by $0.80, but now one has increased "non-dollars" or "anti-dollars" by the same amount. After buying lunch, one might be down to $12.00, meaning that "anti-dollars" are now up to $8.00. The same will continue until the entire $20.00 has been spent. Obviously, there is nothing magical about this: the 20-dollar bill was a closed system, just like the one that included the ball and the ground. And just as potential energy decreased while kinetic energy increased, so "non-dollars" increased while dollars decreased.



As a ball falls freely under the influence of gravity, it accelerates downward, its initial potential energy converting into kinetic energy. On impact with a hard surface the ball deforms, converting the kinetic energy into elastic potential energy. As the ball springs back, the energy converts back firstly to kinetic energy and then as the ball re-gains height into potential energy. Energy losses due to inelastic deformation and air resistance cause each successive bounce to be lower than the last.



Bouncing Back

The example of the baseball illustrates one of the most fundamental laws in the universe, the conservation of energy: within a system isolated from all other outside factors, the total amount of energy remains the same, though transformations of energy from one form to another take place. An interesting example of this comes from the case of another ball and another form of vertical motion.

This time instead of a baseball, the ball should be one that bounces: any ball will do, from a basketball to a tennis ball to a superball. And rather than falling from a great height, this one is dropped through a range of motion ordinary for a human being bouncing a ball. It hits the floor and bounces back—during which time it experiences a complex energy transfer.

As was the case with the baseball dropped from the building, the ball (or more specifically, the system involving the ball and the floor) possesses maximum potential energy prior to being released. Then, in the split-second before its impact on the floor, kinetic energy will be at a maximum while potential energy reaches zero.

So far, this is no different from the baseball scenario discussed earlier. But note what happens when the ball actually hits the floor: it stops for an infinitesimal fraction of a moment. What has happened is that the impact on the floor (which in this example is assumed to be perfectly rigid) has dented the surface of the ball, and this saps the ball's kinetic energy just at the moment when that energy had reached its maximum value. In accordance with the energy conservation law, that energy did not simply disappear: rather, most of it was briefly stored in the dented ball as elastic potential energy.

Meanwhile, in the wake of this conversion, the ball is motionless. An instant later, however, the elastic potential energy stored in its dented surface is released: the ball springs back to its original shape and rebounds. As it flies upward, its kinetic energy begins to diminish, but potential energy increases with height. Assuming that the person who released it catches it at exactly the same height at which he or she let it go, then potential energy is at the level it was before the ball was dropped.

When a Ball Loses Its Bounce

The above, of course, takes little account of energy "loss"—that is, the transfer of energy from one body to another. In fact, a part of the ball's kinetic energy will be lost to the floor because friction with the floor will lead to an energy transfer in the form of thermal, or heat, energy. The sound that the ball makes when it bounces also represents a slight energy loss; but friction—a force that resists motion when the surface of one object comes into contact with the surface of another—is the principal culprit where energy transfer is concerned.

Of particular importance is the way the ball responds in that instant when it hits bottom and stops. Hard rubber balls are better suited for this purpose than soft ones, because the harder the rubber, the greater the tendency of the molecules to experience only elastic deformation. What this means is that the spacing between molecules changes, yet their overall position does not.

If, however, the molecules change positions, this causes them to slide against one another, which produces friction and reduces the energy that goes into the bounce. Once the internal friction reaches a certain threshold, the ball is "dead"—that is, unable to bounce. The deader the ball is, the more its kinetic energy turns into heat upon impact with the floor, and the less energy remains for bouncing upward.
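To put a number on the idea that each bounce returns less energy, here is a rough Python sketch; the 70% energy-return fraction and the 1.5 m drop height are illustrative assumptions, not measured values.

# Sketch of successive bounce heights, assuming a fixed fraction of the ball's
# kinetic energy is lost to deformation, friction, and sound at each impact.
# The 0.7 energy-return fraction and 1.5 m drop height are assumed for illustration.

energy_return = 0.7    # fraction of kinetic energy kept after each bounce (assumed)
height = 1.5           # initial drop height, m (assumed)

for bounce in range(1, 6):
    height *= energy_return          # rebound height scales with the retained energy
    print(f"bounce {bounce}: rebounds to about {height:.2f} m")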

Varieties of Energy in Action

The preceding illustration makes several references to the conversion of kinetic energy to thermal energy, but it should be stressed that there are only three fundamental varieties of energy: potential, kinetic, and rest. Though heat is often discussed as a form unto itself, this is done only because the topic of heat or thermal energy is complex: in fact, thermal energy is simply the result of the kinetic energy of molecules in random motion.

To draw a parallel, most languages permit the use of only three basic subject-predicate constructions: first person ("I"), second person ("you"), and third person ("he/she/it.") Yet within these are endless varieties such as singular and plural nouns or various temporal orientations of verbs: present ("I go"); present perfect ("I have gone"); simple past ("I went"); past perfect ("I had gone.") There are even "moods," such as the subjunctive or hypothetical, which permit the construction of complex thoughts such as "I would have gone." Yet for all this variety in terms of sentence pattern—actually, a degree of variety much greater than for that of energy types—all subject-predicate constructions can still be identified as first, second, or third person.

One might thus describe thermal energy as a manifestation of energy, rather than as a discrete form. Other such manifestations include electromagnetic (sometimes divided into electrical and magnetic), sound, chemical, and nuclear. The principles governing most of these are similar: for instance, the positive or negative attraction between two electromagnetically charged particles is analogous to the force of gravity.

Mechanical Energy

One term not listed among manifestations of energy is mechanical energy, which is something different altogether: the sum of potential and kinetic energy. A dropped or bouncing ball was used as a convenient illustration of interactions within a larger system of mechanical energy, but the example could just as easily have been a roller coaster, which, with its ups and downs, quite neatly illustrates the sliding scale of kinetic and potential energy.

Likewise, the relationship of Earth to the Sun is one of potential and kinetic energy transfers: as with the baseball and Earth itself, the planet is pulled by gravitational force toward the larger body. When it is relatively far from the Sun, it possesses a higher degree of potential energy, whereas when closer, its kinetic energy is highest. Potential and kinetic energy can also be illustrated within the realm of electromagnetic, as opposed to gravitational, force: when a nail is some distance from a magnet, its potential energy is high, but as it moves toward the magnet, kinetic energy increases.

Energy Conversion in a Dam

A dam provides a beautiful illustration of energy conversion: not only from potential to kinetic, but from energy in which gravity provides the force component to energy based in electromagnetic force. A dam big enough to be used for generating hydroelectric power forms a vast steel-and-concrete curtain that holds back millions of tons of water from a river or other body. The water nearest the top—the "head" of the dam—thus has enormous potential energy.

Hydroelectric power is created by allowing controlled streams of this water to flow downward, gathering kinetic energy that is then transferred to powering turbines. Dams in popular vacation spots often release a certain amount of water for recreational purposes during the day. This makes it possible for rafters, kayakers, and others downstream to enjoy a relatively fast-flowing river. (Or, to put it another way, a stream with high kinetic energy.) As the day goes on, however, the sluice-gates are closed once again to build up the "head." Thus when night comes, and energy demand is relatively high as people retreat to their homes, vacation cabins, and hotels, the dam is ready to provide the power they need.
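For a rough sense of scale, the sketch below estimates the gravitational potential energy given up by water dropping through a dam's head, using E = mgh. The 50 m head and the one thousand tonnes of water are assumed figures chosen only for illustration.

# Rough sketch: energy released by water falling through a dam's "head".
# The 50 m head and 1,000,000 kg of water are illustrative assumptions.

g = 9.81                 # m/s^2
head = 50.0              # height of the water above the turbines, m (assumed)
mass_of_water = 1.0e6    # kg, about 1,000 cubic metres (assumed)

energy_joules = mass_of_water * g * head        # E = m * g * h
energy_kwh = energy_joules / 3.6e6              # 1 kWh = 3.6e6 J

print(f"About {energy_kwh:.0f} kWh of gravitational potential energy")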

Other Manifestations of Energy

Thermal and electromagnetic energy are much more readily recognizable manifestations of energy, yet sound and chemical energy are two forms that play a significant part as well. Sound, which is essentially nothing more than the series of pressure fluctuations within a medium such as air, possesses enormous energy: consider the example of a singer hitting a certain note and shattering a glass.

Contrary to popular belief, the note does not have to be particularly high: rather, it must match the natural frequency at which the glass itself vibrates. When this occurs, sound energy is transferred directly to the glass, which is shattered by this sudden net intake of energy. Sound waves can be much more destructive than that: not only can the sound of very loud music cause permanent damage to the eardrums, but focused sound waves of sufficient frequency and intensity can even be used to cut and drill hard materials such as steel. Indeed, sound is not just a by-product of an explosion; it is part of the destructive force.

As for chemical energy, it is associated with the pull that binds together atoms within larger molecular structures. The formation of water molecules, for instance, depends on the chemical bond between hydrogen and oxygen atoms. The combustion of materials is another example of chemical energy in action.

With both chemical and sound energy, however, it is easy to show how these simply reflect the larger structure of potential and kinetic energy discussed earlier. Hence sound, for instance, is potential energy when it emerges from a source, and becomes kinetic energy as it moves toward a receiver (for example, a human ear). Furthermore, the molecules in a combustible material contain enormous chemical potential energy, which becomes kinetic energy when released in a fire.

Rest Energy and Its Nuclear Manifestation

Nuclear energy is similar to chemical energy, though in this instance, it is based on the binding of particles within an atom and its nucleus. But it is also different from all other kinds of energy, because its force component is neither gravitational nor electromagnetic, but based on one of two other known varieties of force: strong nuclear and weak nuclear. Furthermore, nuclear energy—to a much greater extent than thermal or chemical energy—involves not only kinetic and potential energy, but also the mysterious, extraordinarily powerful, form known as rest energy.

Throughout this discussion, there has been little mention of rest energy; yet it is ever-present. Kinetic and potential energy rise and fall with respect to one another; but rest energy changes little. In the baseball illustration, for instance, the ball had the same rest energy at the top of the building as it did in flight—the same rest energy, in fact, that it had when sitting on the ground. And its rest energy is enormous.

Nuclear Warfare

This brings back the subject of the rest energy formula: E = mc2, famous because it made possible the creation of the atomic bomb. The latter, which fortunately has been detonated in warfare only twice in history, brought a swift end to World War II when the United States unleashed it against Japan in August 1945. From the beginning, it was clear that the atom bomb possessed staggering power, and that it would forever change the way nations conducted their affairs in war and peace.

Yet the atom bomb involved only nuclear fission, or the splitting of an atom, whereas the hydrogen bomb that appeared just a few years after the end of World War II used an even more powerful process, the nuclear fusion of atoms. Hence, the hydrogen bomb upped the ante to a much greater extent, and soon the two nuclear superpowers—the United States and the Soviet Union—possessed the power to destroy most of the life on Earth.

The next four decades were marked by a superpower struggle to control "the bomb" as it came to be known—meaning any and all nuclear weapons. Initially, the United States controlled all atomic secrets through its heavily guarded Manhattan Project, which created the bombs used against Japan. Soon, however, spies such as Julius and Ethel Rosenberg provided the Soviets with U.S. nuclear secrets, ensuring that the dictatorship of Josef Stalin would possess nuclear capabilities as well. (The Rosenbergs were executed for treason, and their alleged innocence became a celebrated cause among artists and intellectuals; however, Soviet documents released since the collapse of the Soviet empire make it clear that they were guilty as charged.)

Both nations began building up missile arsenals. It was not, however, just a matter of the United States and the Soviet Union. By the 1970s, there were at least three other nations in the "nuclear club": Britain, France, and China. There were also other countries on the verge of developing nuclear bombs, among them India and Israel. Furthermore, there was a great threat that a terrorist leader such as Libya's Muammar al-Qaddafi would acquire nuclear weapons and do the unthinkable: actually use them.

Though other nations acquired nuclear weapons, however, the scale of the two super-power arsenals dwarfed all others. And at the heart of the U.S.-Soviet nuclear competition was a sort of high-stakes chess game—to use a metaphor mentioned frequently during the 1970s. Soviet leaders and their American counterparts both recognized that it would be the end of the world if either unleashed their nuclear weapons; yet each was determined to be able to meet the other's ever-escalating nuclear threat.

United States President Ronald Reagan earned harsh criticism at home for his nuclear buildup and his hard line in negotiations with Soviet President Mikhail Gorbachev; but as a result of this one-upmanship, he put the Soviets into a position where they could no longer compete. As they put more and more money into nuclear weapons, they found themselves less and less able to uphold their already weak economic system. This was precisely Reagan's purpose in using American economic might to outspend the Soviets—or, in the case of the proposed multi-trillion-dollar Strategic Defense Initiative (SDI or "Star Wars")—threatening to outspend them. The Soviets expended much of their economic energy in competing with U.S. military strength, and this (along with a number of other complex factors), spelled the beginning of the end of the Communist empire.

E = mc2

The purpose of the preceding historical brief is to illustrate the epoch-making significance of a single scientific formula: E = mc2. It ended World War II and ensured that no war like it would ever happen again—but brought on the specter of global annihilation. It created a superpower struggle—yet it also ultimately helped bring about the end of Soviet totalitarianism, thus opening the way for a greater level of peace and economic and cultural exchange than the world has ever known. Yet nuclear arsenals still remain, and the nuclear threat is far from over.

So just what is this literally earth-shattering formula? E stands for rest energy, m for mass, and c for the speed of light, which is about 186,000 mi (300,000 km) per second. Squaring that figure, and multiplying it by even a modest mass, yields an almost unbelievably large amount of energy.

Hence, even an object of insignificant mass possesses an incredible amount of rest energy. The baseball, for instance, weighs only about 0.333 lb, which—on Earth, at least—converts to 0.15 kg. (The latter is a unit of mass, as opposed to weight.) Yet when factored into the rest energy equation, it yields about 3.75 billion kilowatt-hours—enough to provide an American home with enough electrical power to last it more than 156,000 years!

How can a mere baseball possess such energy? It is not the baseball in and of itself, but its mass; thus every object with mass of any kind possesses rest energy. Often, mass energy can be released in very small quantities through purely thermal or chemical processes: hence, when a fire burns, an almost infinitesimal portion of the matter that went into making the fire is converted into energy. If a stick of dynamite that weighed 2.2 lb (1 kg) exploded, the portion of it that "disappeared" would be equal to 6 parts out of 100 billion; yet that portion would cause a blast of considerable proportions.
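The figures quoted above can be checked directly from E = mc2. The short Python sketch below reproduces the baseball's roughly 3.75 billion kilowatt-hours and the dynamite blast implied by a mass loss of 6 parts in 100 billion; the rounded value of c and the joules-per-kilowatt-hour conversion are standard constants rather than figures from the text.

# Sketch checking the rest-energy arithmetic in the text: E = m * c**2,
# converted to kilowatt-hours where convenient.

c = 3.0e8                      # speed of light, m/s (rounded)
kwh_per_joule = 1 / 3.6e6      # 1 kWh = 3.6e6 J

baseball_mass = 0.15           # kg, as given in the text
baseball_energy_kwh = baseball_mass * c**2 * kwh_per_joule
print(f"Baseball rest energy: about {baseball_energy_kwh:.2e} kWh")   # ~3.75e9 kWh

dynamite_mass = 1.0            # kg, as given in the text
converted_fraction = 6e-11     # "6 parts out of 100 billion", as given in the text
blast_energy_joules = dynamite_mass * converted_fraction * c**2
print(f"Energy from the converted mass: about {blast_energy_joules/1e6:.1f} MJ")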

As noted much earlier, the derivation of Einstein's formula—and, more to the point, how he came to recognize the fundamental principles involved—is far beyond the scope of this essay. What is important is the fact, hypothesized by Einstein and confirmed in subsequent experiments, that matter is convertible to energy, a fact that becomes apparent when matter is accelerated to speeds close to that of light.

Physicists do not possess a means for propelling a baseball to a speed near that of light—or of controlling its behavior and capturing its energy. Instead, atomic energy—whether of the wartime or peacetime varieties (that is, in power plants)—involves the acceleration of mere atomic particles. Nor is one type of atom as good as another: physicists typically use uranium and other scarce, expensive materials, and often they further process these materials in highly specialized ways. It is the rarity and expense of those materials, incidentally—not the difficulty of actually putting atomic principles to work—that has kept smaller nations from developing their own nuclear arsenals.

Monday, June 23, 2008

Hydropower plant

Hydropower plant

Hydropower or hydraulic power is the force or energy of moving water. It may be captured for some useful purpose.

Prior to the widespread availability of commercial electric power, hydropower was used for irrigation, and operation of various machines, such as watermills, textile machines, and sawmills. The energy of moving water has been exploited for millennia. In India, water wheels and watermills were built; in Imperial Rome, water powered mills produced flour from grain, and in China and the rest of the Far East, hydraulically operated "pot wheel" pumps raised water into irrigation canals. In the 1830s, at the peak of the canal-building era, hydropower was used to transport barge traffic up and down steep hills using inclined plane railroads. Direct mechanical power transmission required that industries using hydropower locate near the waterfall. For example, during the last half of the 19th century, many grist mills were built at Saint Anthony Falls, utilizing the 50 foot (15 metre) drop in the Mississippi River. The mills contributed to the growth of Minneapolis. Today the largest use of hydropower is for electric power generation, which allows low cost energy to be used at long distances from the water source.



Natural manifestations of hydraulic power

In hydrology, hydropower is manifested in the force of the water on the riverbed and banks of a river. It is particularly powerful when the river is in flood. The force of the water results in the removal of sediment and other materials from the riverbed and banks of the river, causing erosion and other alterations.

Types of water power

There are several forms of water power:
Waterwheels, used for hundreds of years to power mills and machinery
Hydroelectricity, usually referring to hydroelectric dams or run-of-the-river setups.
Tidal power, which captures energy from the vertical rise and fall of the tides behind a barrage
Tidal stream power, which captures energy from horizontal tidal currents
Wave power, which uses the energy in waves



Hydroelectric power

Hydroelectric power now supplies about 715,000 MWe or 19% of world electricity (16% in 2003). Large dams are still being designed. Apart from a few countries with an abundance of it, hydro power is normally applied to peak load demand because it is readily stopped and started. Nevertheless, hydroelectric power is probably not a major option for the future of energy production in the developed nations because most major sites within these nations are either already being exploited or are unavailable for other reasons, such as environmental considerations.

Hydropower produces essentially no carbon dioxide or other harmful emissions, in contrast to burning fossil fuels, and is not a significant contributor to global warming through CO2.

Hydroelectric power can be far less expensive than electricity generated from fossil fuels or nuclear energy. Areas with abundant hydroelectric power attract industry. Environmental concerns about the effects of reservoirs may prohibit development of economic hydropower sources.




The chief advantage of hydroelectric dams is their ability to handle seasonal (as well as daily) high peak loads. When electricity demand drops, the dam simply stores more water, building up head for later release. Some electricity generators use water dams to store excess energy (often during the night) by using the electricity to pump water up into a basin; that water can then generate electricity again when demand increases. In practice the utilization of stored water in river dams is sometimes complicated by demands for irrigation, which may occur out of phase with peak electrical demands.
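A back-of-the-envelope sketch of such a pumped-storage round trip is shown below; the head, water mass, and pump and turbine efficiencies are illustrative assumptions, not data for any particular dam.

# Sketch of a pumped-storage round trip: electricity spent pumping water uphill
# at night versus electricity recovered when the water is released at peak demand.
# All numbers below are assumed for illustration.

g = 9.81                  # m/s^2
head = 100.0              # m, height the water is pumped (assumed)
mass = 5.0e6              # kg of water moved (assumed)
pump_eff = 0.85           # pumping efficiency (assumed)
turbine_eff = 0.9         # turbine/generator efficiency (assumed)

energy_stored = mass * g * head                   # ideal potential energy, J
energy_in = energy_stored / pump_eff              # electricity used to pump, J
energy_out = energy_stored * turbine_eff          # electricity recovered, J
print(f"Round-trip efficiency: about {energy_out / energy_in:.0%}")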

Not all hydroelectric power requires a dam; a run-of-river project only uses part of the stream flow and is a characteristic of small hydropower projects.

Tidal power

Harnessing the tides in a bay or estuary has been achieved in France (since 1966), Canada and Russia, and could be achieved in other areas with a large tidal range. The trapped water turns turbines as it is released through the tidal barrage in either direction. One drawback is that the system generates electricity most efficiently in bursts roughly every six hours, as each tide floods and ebbs. This limits the applications of tidal energy.

Tidal stream power

A relatively new technology, tidal stream generators draw energy from currents in much the same way that wind generators do. The higher density of water means that a single generator can provide significant power. This technology is at the early stages of development and will require more research before it becomes a significant contributor.

Several prototypes have shown promise. In the UK in 2003, a 300 kW Periodflow marine current propeller type turbine was tested off the coast of Devon, and a 150 kW oscillating hydroplane device, the Stingray, was tested off the Scottish coast. Another British device, the Hydro Venturi, is to be tested in San Francisco Bay.

The Canadian company Blue Energy has plans for installing very large arrays of tidal current devices, mounted in what it calls a 'tidal fence', in various locations around the world, based on a vertical axis turbine design.

Wave power

Harnessing power from ocean surface wave motion might yield much more energy than tides. The feasibility of this has been investigated, particularly in Scotland in the UK. Generators either coupled to floating devices or turned by air displaced by waves in a hollow concrete structure would produce electricity. Numerous technical problems have frustrated progress.

A prototype shore based wave power generator is being constructed at Port Kembla in Australia and is expected to generate up to 500 MWh annually. The Wave Energy Converter has been constructed (as of July 2005) and initial results have exceeded expectations of energy production during times of low wave energy. Wave energy is captured by an air driven generator and converted to electricity. For countries with large coastlines and rough sea conditions, the energy of waves offers the possibility of generating electricity in utility volumes. Excess power during rough seas could be used to produce hydrogen.

Small scale hydro power

Small scale hydro or micro-hydro power has been increasingly used as an alternative energy source, especially in remote areas where other power sources are not viable. Small scale hydro power systems can be installed in small rivers or streams with little or no discernible environmental effect on things such as fish migration. Most small scale hydro power systems make no use of a dam or major water diversion, but rather use water wheels.

There are several considerations in a micro-hydro system installation:
Flow, the amount of water available on a consistent basis, since lack of rain can affect plant operation
Head, the vertical drop between the intake and the exit; the more head, the more power can be generated (illustrated in the sketch below)
Legal and regulatory issues, since most countries, cities, and states have regulations about water rights and easements
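The usual estimate combines flow and head in a single formula, P = ρgQH times an efficiency factor. The Python sketch below applies it with assumed values for the flow, head, and efficiency; none of these numbers come from the text.

# Sketch of the standard micro-hydro estimate: power grows with both flow and head.
# P = rho * g * Q * H * efficiency. The flow, head, and efficiency values below
# are illustrative assumptions.

rho = 1000.0        # density of water, kg/m^3
g = 9.81            # m/s^2
flow = 0.2          # flow rate Q, m^3/s (assumed)
head = 10.0         # vertical drop H between intake and exit, m (assumed)
efficiency = 0.7    # combined turbine/generator efficiency (assumed)

power_watts = rho * g * flow * head * efficiency
print(f"Roughly {power_watts/1000:.1f} kW of electrical output")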

Micro-hydro power can be used directly as "shaft power" for many industrial applications. Alternatively, the preferred option for domestic energy supply is to generate electricity with a generator or a reversed electric motor which, while less efficient, is likely to be available locally and cheaply.

Combined cycle

Combined cycle


A combined cycle is characteristic of a power producing engine or plant that employs more than one thermodynamic cycle. Heat engines are only able to use a portion of the energy their fuel generates (usually less than 50%). The remaining heat from combustion is generally wasted. Combining two or more "cycles" such as the Brayton cycle and Rankine cycle results in improved overall efficiency.

In a combined cycle power plant (CCPP), or combined cycle gas turbine (CCGT) plant, a gas turbine generator generates electricity and the waste heat is used to make steam to generate additional electricity via a steam turbine; this last step enhances the efficiency of electricity generation. Most new gas power plants in North America and Europe are of this type. In a thermal power plant, high-temperature heat as input to the power plant, usually from burning of fuel, is converted to electricity as one of the outputs and low-temperature heat as another output. As a rule, in order to achieve high efficiency, the temperature difference between the input and output heat levels should be as high as possible (see Carnot efficiency). This is achieved by combining the Rankine (steam) and Brayton (gas) thermodynamic cycles. Such an arrangement used for marine propulsion is called COmbined Gas (turbine) And Steam (turbine) (COGAS).

Design principle



In a steam power plant water is the working medium. High pressure steam requires strong, bulky components. High temperatures require expensive alloys made from nickel or cobalt, rather than inexpensive steel. These alloys limit practical steam temperatures to 655 °C while the lower temperature of a steam plant is fixed by the boiling point of water. With these limits, a steam plant has a fixed upper efficiency of 35 to 40%.

For gas turbines these limitations do not apply. Gas cycle firing temperatures above 1,200 °C are practicable. So, a combined cycle plant has a thermodynamic cycle that operates between the gas-turbine's high firing temperature and the waste heat temperature near the boiling point of water.
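The quoted temperatures translate into ideal (Carnot) limits, which real plants fall well short of. The sketch below takes the cold side to be roughly the boiling point of water, per the description above; the specific hot-side temperatures are those quoted in this section.

# Sketch of the Carnot limit implied by the temperatures quoted above.
# Temperatures are converted to kelvin; the cold side is taken near the boiling
# point of water, as the text describes.

def carnot_efficiency(t_hot_c, t_cold_c):
    """Maximum possible efficiency of a heat engine between two temperatures (in deg C)."""
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return 1 - t_cold / t_hot

print(f"Steam cycle limit (655 C hot, 100 C cold): {carnot_efficiency(655, 100):.0%}")
print(f"Combined-cycle limit (1,200 C hot, 100 C cold): {carnot_efficiency(1200, 100):.0%}")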

A gas turbine has a compressor, burner and turbine. The input temperature to the turbine is relatively high (900 to 1,350 °C) but the output temperature of the flue gas is also high (450 to 650 °C).

The temperature of a gas turbine's flue gas is therefore high enough to make steam for a second steam cycle (a Rankine cycle), with a live steam temperature between 420 and 580 °C. The condenser is usually cooled by water from a lake, river, sea or cooling towers.

The output heat of the gas turbine's flue gas is utilized to generate steam by passing it through a heat recovery steam generator (HRSG).

By combining both processes, high input temperatures and low output temperatures can be achieved, and the overall efficiency is higher than either cycle could reach on its own: the steam cycle is driven by heat that the gas turbine would otherwise reject, so the same fuel does useful work twice.
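A common way to see this is to feed the gas turbine's rejected heat into the steam cycle and add up the work. The sketch below does so with assumed single-cycle efficiencies of 38% and 33%; the resulting combined figure of roughly 58% is consistent with the efficiencies quoted in the next section.

# Sketch of how the two cycles combine, assuming the steam (Rankine) cycle is
# driven only by heat rejected from the gas (Brayton) cycle. The 38% and 33%
# figures are illustrative assumptions, not plant data from the text.

gas_cycle_eff = 0.38     # fraction of fuel energy converted by the gas turbine (assumed)
steam_cycle_eff = 0.33   # fraction of the remaining heat converted by the steam cycle (assumed)

combined_eff = gas_cycle_eff + steam_cycle_eff * (1 - gas_cycle_eff)
print(f"Combined-cycle efficiency: about {combined_eff:.0%}")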

Efficiency of CCGT plants

The thermal efficiency of a combined cycle power plant is the net power output of the plant divided by the heating value of the fuel. If the plant produces only electricity, efficiencies of up to 59% can be achieved. In the case of combined heat and power generation, the efficiency can increase to 85%.

Supplementary firing

The HRSG can be designed with supplementary firing of fuel after the gas turbine in order to increase the quantity or temperature of the steam generated. Without supplementary firing, the efficiency of the combined cycle power plant is higher, but supplementary firing lets the plant respond to fluctuations of electrical load. Supplementary burners are also called duct burners.

More fuel is sometimes added to the turbine's exhaust. This is possible because the turbine exhaust gas (flue gas) still contains some oxygen. Temperature limits at the gas turbine inlet force the turbine to use excess air, above the optimal stoichiometric ratio to burn the fuel. Often in gas turbine designs part of the compressed air flow bypasses the burner and is used to cool the turbine blades.

Fuel for combined cycle power plants

Combined cycle plants are usually powered by natural gas, although fuel oil, synthetic gas or other fuels can be used. The supplementary fuel may be natural gas, fuel oil, or coal.

Integrated Gasification Combined Cycle (IGCC)

An Integrated Gasification Combined Cycle, or IGCC, is a power plant using synthetic gas (syngas).



The gasification process can produce syngas from high-sulfur coal, heavy petroleum residues and biomass.

The plant is called "integrated" because its syngas is produced in a gasification unit in the plant which has been optimized for the plant's combined cycle. The gasification process produces heat, and this is reclaimed by steam "waste heat boilers". The steam is utilized in steam turbines.

There are currently (2007) only two IGCC plants generating power in the U.S.; however, several new IGCC plants are expected to come online in the U.S. in the 2012-2020 time frame.

The first generation of IGCC plants polluted less than contemporary coal-based technology, but also polluted water: for example, the Wabash River Plant "routinely" violated its water permit because it emitted arsenic, selenium and cyanide. The Wabash River Generating Station is now wholly owned and operated by the Wabash River Power Association, and currently operates as one of the cleanest solid fuel power plants in the world.

New IGCC plants based on these demonstration projects can achieve low NOx emissions, greater than 90-95% mercury removal, and greater than 99% sulfur dioxide (SO2) removal.

IGCC is now touted as "capture ready" and could capture and store carbon dioxide. IGCC plants can be outfitted for carbon capture much more easily and cheaply than conventional and supercritical pulverized coal plants because the carbon can be removed in the gasifier, before the fuel is combusted. Even without carbon capture, the high thermal efficiency of IGCC plants means that they release less carbon while producing the same amount of energy.

The main problem for IGCC is its extremely high capital cost, upwards of $3,593/kW[1]. Official US government figures give more optimistic estimates [2] of $1,491/kW installed capacity (2005 dollars) versus $1,290 for a conventional clean coal facility. This is roughly 15 to 20% more than a conventional pulverized coal plant, but the U.S. Department of Energy and many states offer subsidies for clean coal technology projects that could help to bridge the cost gap.

However, the projected per-megawatt-hour cost of an IGCC plant coming online in 2010 is $56, versus $52 for a pulverized coal plant. IGCC becomes even more attractive once the costs of carbon capture and sequestration are included: roughly $79 per megawatt-hour for IGCC versus $95 for pulverized coal. [3]

The DOE Clean Coal Demonstration Project helped construct 3 IGCC plants: Wabash River in Indiana, Polk in Tampa, Florida (online 1996), and Pinon Pine in Reno, Nevada. In the Reno demonstration project, researchers found that then-current IGCC technology would not work more than 300 feet (100m) above sea level[4]. The plant failed. [5]

The power generation industry has yet to show that IGCC is reliable. Of five demonstration facilities, none had availabilities comparable to conventional CCGTs or coal-fired power plants. Wabash River was down repeatedly for long stretches due to gasifier problems, and those problems have not been remedied; subsequent projects, such as Excelsior's Mesaba Project, have a third gasifier and train built in. However, the past year has seen Wabash River running reliably, with availability comparable to or better than other technologies.

General Electric is currently designing an IGCC model plant that should introduce greater reliability. GE's model features advanced turbines optimized for coal syngas. Eastman's industrial gasification plant in Kingsport, TN uses a GE Energy solid-fed gasifier. Eastman, a Fortune 500 company, built the facility in 1983 without any state or federal subsidies and turns a profit. [6][7]



There are several refinery-based IGCC plants in Europe that have demonstrated good availability (90-95%) after initial shakedown periods. Several factors help this performance:

First, none of these facilities use advanced technology ("F" type) gas turbines.

Second, all refinery-based plants use refinery residues, rather than coal, as the feedstock. This eliminates coal handling and coal preparation equipment and its problems. Also, there is a much lower level of ash produced in the gasifier, which reduces cleanup and downtime in its gas cooling and cleaning stages.

Third, these non-utility plants have recognized the need to treat the gasification system as an up-front chemical processing plant, and have reorganized their operating staff accordingly.

Another IGCC success story has been the 250 MW Buggenum plant in The Netherlands. It also has good availability. This coal-based IGCC plant currently uses about 30% biomass as a supplemental feedstock. The owner, NUON, is paid an incentive fee by the government to use the biomass. NUON has begun site preparation for a much larger plant - about 1200 MW. Although not confirmed, it is expected that they will specify "F" class advanced gas turbines.

A new generation of IGCC-based coal-fired power plants has been proposed, although none is yet under construction. Projects are being developed by AEP, Duke Energy, and Southern Company in the US, and in Europe, by E.ON and Centrica (both UK), RWE (Germany) and NUON (Netherlands). In Minnesota, the state's Dept. of Commerce analysis found IGCC to have the highest cost, with an emissions profile not significantly better than pulverized coal. In Delaware, the Delmarva and state consultant analysis had essentially the same results.

The high cost of IGCC is the biggest obstacle to its integration in the power market; however, most energy executives recognize that carbon regulation is coming soon. Bills requiring carbon reduction are being proposed again in both the House and the Senate, and with the Democratic majority it seems likely that there will be a greater push for carbon regulation under the next President. The Supreme Court decision requiring the EPA to regulate carbon (Commonwealth of Massachusetts et al. v. Environmental Protection Agency et al.)[8] also speaks to the likelihood of future carbon regulations coming sooner rather than later. With carbon capture, the cost of electricity from an IGCC plant would increase approximately 30%. For a natural gas combined cycle plant, the increase is approximately 33%. For a pulverized coal plant, the increase is approximately 68%. This potential for less expensive carbon capture makes IGCC an attractive choice for keeping low-cost coal an available fuel source in a carbon-constrained world.

Automotive use

Combined cycles have traditionally only been used in large power plants. BMW, however, has proposed that automobiles use exhaust heat to drive steam turbines.[9] It may be possible to use the pistons in a reciprocating engine for both combustion and steam expansion.[10]

Aeromotive use

Some versions of the Wright R-3350 were produced as "Turbo-compound" engines. Three turbines driven by exhaust gases, known as "Power recovery turbines", provided nearly 600 hp at takeoff. These turbines added power to the engine crankshaft through bevel gears and fluid couplings.