Born: 22 September 1791
Birthplace: Newington, Surrey, England
Died: 25 August 1867
Best Known As: Inventor of the first dynamo
Although he had little formal education, Michael Faraday became one of the most influential scientists in the field of electricity. He spent his professional career in the laboratory of the Royal Institution in London (1813-62), where he began in 1813 as an assistant to Sir Humphry Davy. By 1825 he had worked his way up to director of the laboratory, and in 1833 he was made a professor of chemistry. In the lab he had great success with electrochemistry, and he even has an electrical unit named after him (the faraday, the quantity of electric charge transferred in electrolysis per mole of electrons). Faraday built the first dynamo, a copper disk that rotated between the poles of a permanent magnet and produced an electromotive force (a voltage that drives an electric current). His work in electromagnetic induction led to the development of modern dynamos and generators. Faraday also discovered the compound benzene.
The English physicist and chemist Michael Faraday (1791-1867) discovered benzene and the principles of current induction.
One of a blacksmith's 10 children, Michael Faraday was born on Sept. 22, 1791, in Newington, Surrey. The family soon moved to London, where young Michael picked up the rudiments of reading, writing, and arithmetic. At the age of 14 he was apprenticed to a bookbinder and bookseller. He read ravenously and attended public lectures, including some by Sir Humphry Davy.
Faraday's career began when Davy, temporarily blinded in a laboratory accident, appointed Faraday as his assistant at the Royal Institution. With Davy as a teacher in analytical chemistry, Faraday advanced in his scientific apprenticeship and began independent chemical studies. By 1825 he had discovered benzene and had become the first to describe compounds of chlorine and carbon. He adopted the atomic theory to explain that chemical qualities were the result of attraction and repulsion between united atoms. This proved to be the theoretical foundation for much of his future work.
Faraday had already done some work in magnetism and electricity, and it was in this field that he made his most outstanding contributions. His first triumph came when he found a solution to the problem of producing continuous rotation by use of electric current, thus making electric motors possible. Hans Oersted had discovered the magnetic effect of a current, but Faraday grasped the fact that a conductor at rest and a steady magnetic field do not interact and that to get an induced current either the conductor or the field has to move. On Aug. 29, 1831, he discovered electromagnetic induction.
During the next 10 years Faraday explored and expanded the field of electricity. In 1834 he announced his famous two laws of electrolysis. Briefly, they state that for a given quantity of electricity (electric charge) passed through an electrochemical cell, chemical substances are released at the electrodes in the ratio of their chemical equivalents. He also invented the voltameter, a device for measuring electric charge, which was the first step toward the later standardization of electrical quantities.
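In modern notation the two laws are usually compressed into a single formula; the statement below is the standard textbook form rather than Faraday's own wording:

\[ m = \frac{Q}{F}\cdot\frac{M}{z}, \qquad F \approx 96{,}485\ \mathrm{C/mol} \]

Here m is the mass of substance liberated at an electrode, Q the charge passed through the cell, M the molar mass, z the number of electrons transferred per ion, and F the Faraday constant. The ratio M/z is the chemical equivalent referred to above.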
Faraday continued to work in his laboratory, but his health began to deteriorate and he had to stop work entirely in 1841. Almost miraculously, however, his health improved and he resumed work in 1844. He began a search for an interaction between magnetism and light and in 1845 turned his attention from electrostatics to electromagnetism. He discovered that an intense magnetic field can rotate the plane of polarized light, a phenomenon known today as the Faraday effect. In conjunction with these experiments he showed that the magnetic line of force is conducted by all matter. Substances that conducted the force well he called paramagnetics, while those that conducted it poorly he named diamagnetics. From this he concluded that the energy of a magnet is in the space around it, not in the magnet itself. This is the fundamental idea of field theory.
Faraday was a brilliant lecturer, and through his public lectures he did a great deal to popularize science. Shortly after he became head of the Royal Institution in 1825, he inaugurated the custom of giving a series of lectures for young people during the Christmas season. This tradition has been maintained, and over the years the series have frequently been the basis for fascinating, simply written, and informative books.
On Aug. 25, 1867, Faraday died in London.
The admiration of physicists for Faraday has been demonstrated by naming the unit of capacitance the farad and a unit of charge, the faraday. No other man has been doubly honored in this way. His name also appears frequently in connection with effects, laws, and apparatus. These honors are proper tribute to the man who was possibly the greatest experimentalist who ever lived.
Further Reading
Much has been written about Faraday, but the student should first read the account by his successor at the Royal Institution, John Tyndall, Faraday as a Discoverer (1961). The sketch of Faraday in James Gerald Crowther, Men of Science (1936), is also recommended. Leslie Pearce Williams, Michael Faraday: A Biography (1965), appraises Faraday's work in relation to modern science and contains many previously unpublished manuscripts.
Famous British physicist, born in Newington, Surrey (now part of London), on September 22, 1791. He became an assistant to Sir Humphry Davy and later became celebrated for his brilliant discoveries relating to electricity and chemistry. Faraday's well-known saying, "Nothing is too amazing to be true," apparently was not meant to cover table turning. It was, for him, too amazing to be true. His noted theory that table movements were caused by unconscious muscular pressure was first advanced in a letter to the Times of June 30, 1853. To prove it, he prepared two small flat boards a few inches square, placed several glass rollers between them and fastened the whole together with a couple of rubber bands so that the upper board would slide under lateral pressure to a limited extent over the lower one. A light index fastened to the upper board would betray the least amount of sliding.
During experiments this is just what happened. The upper board always moved first, which demonstrated that the fingers moved the table and not the table the fingers. Faraday also found that when the sitters learned the meaning of the index and kept their attention fixed on it, no movement took place. When it was hidden from their sight it kept on wavering, although the sitters believed that they always pressed directly downward. However, the pressure of the hands was trifling and was practically neutralized by the absence of unanimity in the direction. The sitters never made the same movement at the same moment.
For this reason, and for the weightier one that tables moved without contact as well, his theory was soon found inadequate. According to Charles Richet, it was Michel Chevreul, the famous French chemist, who originally evolved the theory of unconscious muscular pressure. Chevreul's book, however, did not appear until 1854, a year after Faraday's explanation was published.
In later years many attempts were made to prove to Faraday the reality of psychic phenomena, but he was too obstinate. "They who say these things are not competent witnesses of facts," he wrote in 1865. To an invitation to attend the first séance of the Davenport brothers he returned the answer, "If spirit communications, not utterly worthless, should happen to start into activity, I will trust the spirits to find out for themselves how they can move my attention. I am tired of them."
Faraday was a member of the Sandemanians, an obscure religious sect holding rigid biblical views. When Sir William Crookes inquired of Faraday how he reconciled science with religion, he received the reply that he kept his science and religion strictly apart.
At the time of the Home-Lyon trial (see D. D. Home), a Professor Tyndall, in a letter in Pall Mall Gazette (May 5, 1868), wrote that, years before, Faraday had accepted an invitation to examine Home's phenomena, but his conditions were not met and the investigation fell through. When the original correspondence on the subject between Faraday and Sir Emerson Tennent was published, it appeared that one of Faraday's conditions was, "If the effects are miracles, or the work of spirits, does he (Home) admit the utterly contemptible character, both of them and their results, up to the present time, in respect either of yielding information or instruction or supplying any force or action of the least value to mankind?" Robert Bell, the intermediary for the proposed séance, found Faraday's letter so preposterous that, without consulting Home, he declined his intervention. Home, when he learned about it, was duly indignant.
Professor Tyndall, as an arch skeptic, commended Faraday's attitude, but those interested in psychical research assumed the contrary position. "The letter," writes Frank Podmore in Modern Spiritualism (1902), "was, of course, altogether unworthy of Faraday's high character and scientific eminence, and was no doubt the outcome of a moment of transient irritation. The position taken was quite indefensible. To enter upon a judicial inquiry by treating the subject-matter as a chose jugée was surely a parody of scientific methods."
Faraday died August 25, 1867. In a series of séances between 1888 and 1910 in Spring Hall, Kansas, the presiding spirit claimed to be Faraday, and his communications were published in four books by A. Aber: Rending of the Veil, Beyond the Veil, The Guiding Star, and The Dawn of Another Life. A second set of communications reportedly from Faraday was received by an anonymous medium who called herself (or himself) the "Mystic Helper." The messages were received sporadically beginning in 1874 and were finally published in 1924.
Magnetic flux
magnetic flux, in physics, term used to describe the total amount of magnetic field in a given region. The term flux was chosen because the power of a magnet seems to “flow” out of the magnet at one pole and return at the other pole in a circulating pattern, as suggested by the patterns formed by iron filings sprinkled on a paper placed over a magnet or a conductor carrying an electric current. These patterns are called lines of induction. Although there is no actual physical flow, the lines of induction suggest the correct mathematical description of magnetism in terms of a field of force. The lines of induction originate on the north pole of the magnet and end on the south pole; their direction at any point is the direction of the magnetic field, and their density (the number of lines passing through a unit area) gives the strength of the field. Near the poles where the lines converge, the field and the force it produces are large; away from the poles where the lines diverge, the field and force are progressively weaker.
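In symbols, the standard modern definition of the magnetic flux through a surface S is

\[ \Phi_B = \int_S \mathbf{B}\cdot d\mathbf{A}, \]

which for a uniform field reduces to \( \Phi_B = BA\cos\theta \), where \( \theta \) is the angle between the field and the normal to the area A. The SI unit is the weber (1 Wb = 1 T·m²).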
Lenz's law
A law of electromagnetism which states that, whenever there is an induced electromotive force (emf) in a conductor, it is always in such a direction that the current it would produce would oppose the change which causes the induced emf. If the change is the motion of a conductor through a magnetic field, the induced current must be in such a direction as to produce a force opposing the motion. If the change causing the emf is a change of flux threading a coil, the induced current must produce a flux in such a direction as to oppose the change.
Definition
Lenz's law states that induced e.m.f. opposes the change in flux producing it.
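Quantitatively, Lenz's law appears as the minus sign in Faraday's law of induction, given here in its standard form for a coil of N turns:

\[ \mathcal{E} = -N\,\frac{d\Phi_B}{dt} \]

The induced e.m.f. \( \mathcal{E} \) is proportional to the rate of change of the flux \( \Phi_B \), and its sign is such that the current it drives opposes that change.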
Connection with law of conservation of energy
Lenz's Law is one consequence of the principle of conservation of energy. To see why, move a permanent magnet towards the face of a closed loop of wire (e.g., a coil or solenoid). An electric current is induced in the wire, because the electrons within it are subjected to an increasing magnetic field as the magnet approaches. This produces an emf (electromotive force) that acts upon them. The direction of the induced current depends on whether the north or south pole of the magnet is approaching: an approaching north pole will produce an anticlockwise current (from the perspective of the magnet), and an approaching south pole will produce a clockwise current.
To understand the implications for conservation of energy, suppose that the induced currents' directions were opposite to those just described. Then the north pole of an approaching magnet would induce a south pole in the near face of the loop. The attractive force between these poles would accelerate the magnet's approach. This would make the magnetic field increase more quickly, which in turn would increase the loop's current, strengthening the magnetic field, increasing the attraction and acceleration, and so on. Both the kinetic energy of the magnet and the rate of energy dissipation in the loop (due to Joule heating) would increase. A small energy input would produce a large energy output, violating the law of conservation of energy.
This scenario is only one example of electromagnetic induction. Lenz's Law states that the magnetic field of any induced current opposes the change that induces it.
For a rigorous mathematical treatment, see electromagnetic induction and Maxwell's equations.
Practical Demonstrations
A brief video demonstrating Lenz's Law is at EduMation.
A neat device made by William J. Beaty levitates a magnet above two spinning rollers.
A dramatic demonstration of the effect with an aluminum block in an MRI, falling very slowly.
A demonstration that even a small child can try:
- Find a small electric motor.
- Spin its shaft.
- Connect its wires together (with a paper clip or alligator clip), and spin the shaft again.
- This time, the motor resists turning, because current can flow through its wires.
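The same braking effect can be shown numerically. Below is a minimal sketch (Python), assuming the common idealization that the eddy-current drag on a magnet falling through a conducting tube is proportional to its speed; the mass and drag constant are invented illustrative values, not measured data:

```python
# Lenz-law braking sketch: integrate m*dv/dt = m*g - k*v with Euler steps.
# k (drag constant) and m (magnet mass) are illustrative, made-up values.

def fall(k, m=0.05, g=9.81, dt=1e-3, t_end=2.0):
    v, t = 0.0, 0.0
    while t < t_end:
        v += (g - (k / m) * v) * dt   # drag force -k*v opposes the motion
        t += dt
    return v

print("speed after 2 s, with eddy braking: %.2f m/s" % fall(k=2.0))
print("speed after 2 s, in free fall:      %.2f m/s" % fall(k=0.0))
print("terminal speed m*g/k:               %.2f m/s" % (0.05 * 9.81 / 2.0))
```

With braking the magnet settles almost immediately at a small terminal speed; without it, the speed keeps growing. This is the slow-falling-magnet demonstration in quantitative form.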
Semiconductor
A solid crystalline material whose electrical conductivity is intermediate between that of a metal and an insulator. Semiconductors exhibit conduction properties that may be temperature-dependent, permitting their use as thermistors (temperature-dependent resistors), or voltage-dependent, as in varistors. By making suitable contacts to a semiconductor or by making the material suitably inhomogeneous, electrical rectification and amplification can be obtained. Semiconductor devices, rectifiers, and transistors have replaced vacuum tubes almost completely in low-power electronics, making it possible to save volume and power consumption by orders of magnitude. In the form of integrated circuits, they are vital for complicated systems. The optical properties of a semiconductor are important for the understanding and the application of the material. Photodiodes, photoconductive detectors of radiation, injection lasers, light-emitting diodes, solar-energy conversion cells, and so forth are examples of the wide variety of optoelectronic devices. See also Integrated circuits; Laser; Light-emitting diode; Photodiode; Photoelectric devices; Semiconductor diode; Semiconductor rectifier; Transistor; Varistor.
Conduction in semiconductors
The electrical conductivity of semiconductors ranges from about 10³ to 10⁻⁹ ohm⁻¹ cm⁻¹, as compared with a maximum conductivity of 10⁷ ohm⁻¹ cm⁻¹ for good conductors and a minimum conductivity of 10⁻¹⁷ ohm⁻¹ cm⁻¹ for good insulators. See also Electric insulator.
The electric current is usually due only to the motion of electrons, although under some conditions, such as very high temperatures, the motion of ions may be important. The basic distinction between conduction in metals and in semiconductors is made by considering the energy bands occupied by the conduction electrons. See also Band theory of solids; Ionic crystals.
At absolute zero temperature, the electrons occupy the lowest possible energy levels, with the restriction that at most two electrons with opposite spin may be in the same energy level. In semiconductors and insulators, there are just enough electrons to fill completely a number of energy bands, leaving the rest of the energy bands empty. The highest filled energy band is called the valence band. The next higher band, which is empty at absolute zero temperature, is called the conduction band. The conduction band is separated from the valence band by an energy gap, which is an important characteristic of the semiconductor. In metals, the highest energy band that is occupied by the electrons is only partially filled. This condition exists either because the number of electrons is not just right to fill an integral number of energy bands or because the highest occupied energy band overlaps the next higher band without an intervening energy gap. The electrons in a partially filled band may acquire a small amount of energy from an applied electric field by going to the higher levels in the same band. The electrons are accelerated in a direction opposite to the field and thereby constitute an electric current. In semiconductors and insulators, the electrons are found only in completely filled bands, at low temperatures. In order to increase the energy of the electrons, it is necessary to raise electrons from the valence band to the conduction band across the energy gap. The electric fields normally encountered are not large enough to accomplish this with appreciable probability. At sufficiently high temperatures, depending on the magnitude of the energy gap, a significant number of valence electrons gain enough energy thermally to be raised to the conduction band. These electrons in an unfilled band can easily participate in conduction. Furthermore, there is now a corresponding number of vacancies in the electron population of the valence band. These vacancies, or holes as they are called, have the effect of carriers of positive charge, by means of which the valence band makes a contribution to the conduction of the crystal. See also Hole states in solids.
The type of charge carrier, electron or hole, that is in largest concentration in a material is sometimes called the majority carrier and the type in smallest concentration the minority carrier. The majority carriers are primarily responsible for the conduction properties of the material. Although the minority carriers play a minor role in electrical conductivity, they can be important in rectification and transistor actions in a semiconductor.
Intrinsic semiconductors
A semiconductor in which the concentration of charge carriers is characteristic of the material itself rather than of the content of impurities and structural defects of the crystal is called an intrinsic semiconductor. Electrons in the conduction band and holes in the valence band are created by thermal excitation of electrons from the valence to the conduction band. Thus an intrinsic semiconductor has equal concentrations of electrons and holes. The carrier concentration, and hence the conductivity, is very sensitive to temperature and depends strongly on the energy gap. The energy gap ranges from a fraction of 1 eV to several electronvolts. A material must have a large energy gap to be an insulator.
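A rough numerical illustration of that sensitivity follows, using approximate textbook band parameters for silicon and germanium at room temperature (the values of Eg, Nc, and Nv below are round numbers, not precise data):

```python
import math

k_eV = 8.617e-5  # Boltzmann constant in eV/K

def n_i(Eg, Nc, Nv, T=300.0):
    """Intrinsic carrier concentration: n_i = sqrt(Nc*Nv) * exp(-Eg/2kT)."""
    return math.sqrt(Nc * Nv) * math.exp(-Eg / (2.0 * k_eV * T))

print("Si (Eg ~ 1.12 eV): n_i ~ %.1e cm^-3" % n_i(1.12, 2.8e19, 1.0e19))
print("Ge (Eg ~ 0.66 eV): n_i ~ %.1e cm^-3" % n_i(0.66, 1.0e19, 6.0e18))
```

Roughly halving the energy gap raises the room-temperature carrier concentration by three to four orders of magnitude, which is why the gap decides whether a material behaves as a semiconductor or an insulator.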
Extrinsic semiconductors
Typical semiconductor crystals such as germanium and silicon are formed by an ordered bonding of the individual atoms to form the crystal structure. The bonding is attributed to the valence electrons which pair up with valence electrons of adjacent atoms to form so-called shared pair or covalent bonds. These materials are all of the quadrivalent type; that is, each atom contains four valence electrons, all of which are used in forming the crystal bonds. See also Crystal structure.
Atoms having a valence of +3 or +5 can be added to a pure or intrinsic semiconductor material with the result that the +3 atoms will give rise to an unsatisfied bond with one of the valence electrons of the semiconductor atoms, and +5 atoms will result in an extra or free electron that is not required in the bond structure. Electrically, the +3 impurities add holes and the +5 impurities add electrons. They are called acceptor and donor impurities, respectively. Typical valence +3 impurities used are boron, aluminum, indium, and gallium. Valence +5 impurities used are arsenic, antimony, and phosphorus.
Semiconductor material “doped” or “poisoned” by valence +3 acceptor impurities is termed p‐type, whereas material doped by valence +5 donor material is termed n-type. The names are derived from the fact that the holes introduced are considered to carry positive charges and the electrons negative charges. The number of electrons in the energy bands of the crystal is increased by the presence of donor impurities and decreased by the presence of acceptor impurities. See also Acceptor atom; Donor atom.
At sufficiently high temperatures, the intrinsic carrier concentration becomes so large that the effect of a fixed amount of impurity atoms in the crystal is comparatively small and the semiconductor becomes intrinsic. When the carrier concentration is predominantly determined by the impurity content, the conduction of the material is said to be extrinsic. Physical defects in the crystal structure may have similar effects as donor or acceptor impurities. They can also give rise to extrinsic conductivity.
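The division of labor between majority and minority carriers follows from the mass-action law, n·p = n_i² (a standard equilibrium result, not stated explicitly above). A minimal sketch for n-type silicon, assuming full donor ionization and the common approximate value n_i ≈ 10¹⁰ cm⁻³:

```python
# Mass-action law in a doped semiconductor: n * p = n_i**2.
n_i = 1.0e10   # cm^-3, approximate intrinsic concentration for Si at 300 K
N_d = 1.0e16   # cm^-3, donor doping level (illustrative value)

n = N_d              # majority electrons: essentially all donors ionized
p = n_i ** 2 / n     # minority holes, suppressed by the doping

print("electrons (majority carriers): %.1e cm^-3" % n)   # 1.0e+16
print("holes (minority carriers):     %.1e cm^-3" % p)   # 1.0e+04
```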
Materials
The group of chemical elements which are semiconductors includes germanium, silicon, gray (crystalline) tin, selenium, tellurium, and boron. Germanium, silicon, and gray tin belong to group 14 of the periodic table and have crystal structures similar to that of diamond. Germanium and silicon are two of the best-known semiconductors. They are used extensively in devices such as rectifiers and transistors.
A large number of compounds are known to be semiconductors. A group of semiconducting compounds of the simple type AB consists of elements from columns symmetrically placed with respect to column 14 of the periodic table. Indium antimonide (InSb), cadmium telluride (CdTe), and silver iodide (AgI) are examples of III–V, II–VI, and I–VII compounds, respectively. The various III–V compounds are being studied extensively, and many practical applications have been found for these materials. Some of these compounds have the highest carrier mobilities known for semiconductors. The compounds have zincblende crystal structure which is geometrically similar to the diamond structure possessed by the elemental semiconductors, germanium and silicon, of column 14, except that the four nearest neighbors of each atom are atoms of the other kind. The II–VI compounds, zinc sulfide (ZnS) and cadmium sulfide (CdS), are used in photoconductive devices. Zinc sulfide is also used as a luminescent material. See also Luminescence; Photoconductivity.
The properties of semiconductors are extremely sensitive to the presence of impurities. It is therefore desirable to start with the purest available materials and to introduce a controlled amount of the desired impurity. The zone-refining method is often used for further purification of obtainable materials. The floating zone technique can be used, if feasible, to prevent any contamination of molten material by contact with the crucible. See also Zone refining.
For basic studies as well as for many practical applications, it is desirable to use single crystals. Various methods are used for growing crystals of different materials. For many semiconductors, including germanium, silicon, and the III–V compounds, the Czochralski method is commonly used. The method of condensation from the vapor phase is used to grow crystals of a number of semiconductors, for instance, selenium and zinc sulfide. See also Crystal growth.
The introduction of impurities, or doping, can be accomplished by simply adding the desired quantity to the melt from which the crystal is grown. When the amount to be added is very small, a preliminary ingot is often made with a larger content of the doping agent; a small slice of the ingot is then used to dope the next melt accurately. Impurities which have large diffusion constants in the material can be introduced directly by holding the solid material at an elevated temperature while this material is in contact with the doping agent in the solid or the vapor phase.
A doping technique, ion implantation, has been developed and used extensively. The impurity is introduced into a layer of semiconductor by causing a controlled dose of highly accelerated impurity ions to impinge on the semiconductor. See also Ion implantation.
An important subject of scientific and technological interest is amorphous semiconductors. In an amorphous substance the atomic arrangement has some short-range but no long-range order. The representative amorphous semiconductors are selenium, germanium, and silicon in their amorphous states, and arsenic and germanium chalcogenides, including such ternary systems as Ge-As-Te. Some amorphous semiconductors can be prepared by a suitable quenching procedure from the melt. Amorphous films can be obtained by vapor deposition.
Rectification in semiconductors
In semiconductors, narrow layers can be produced which have abnormally high resistances. The resistance of such a layer is nonohmic; it may depend on the direction of current, thus giving rise to rectification. Rectification can also be obtained by putting a thin layer of semiconductor or insulator material between two conductors of different material.
A narrow region in a semiconductor which has an abnormally high resistance is called a barrier layer. A barrier may exist at the contact of the semiconductor with another material, at a crystal boundary in the semiconductor, or at a free surface of the semiconductor. In the bulk of a semiconductor, even in a single crystal, barriers may be found as the result of a nonuniform distribution of impurities. The thickness of a barrier layer is small, usually 10⁻³ to 10⁻⁵ cm.
A barrier is usually associated with the existence of a space charge. In an intrinsic semiconductor, a region is electrically neutral if the concentration n of conduction electrons is equal to the concentration p of holes. Any deviation in the balance gives a space charge equal to e(p − n), where e is the charge on an electron. In an extrinsic semiconductor, ionized donor atoms give a positive space charge and ionized acceptor atoms give a negative space charge.
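Including the ionized dopants, the space-charge density is conventionally written

\[ \rho = e\,(p - n + N_D^{+} - N_A^{-}), \]

where \( N_D^{+} \) and \( N_A^{-} \) are the concentrations of ionized donors and acceptors. This is the standard generalization of the intrinsic expression e(p − n) above.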
Surface electronics
The surface of a semiconductor plays an important role technologically, for example, in field-effect transistors and charge-coupled devices. Also, it presents an interesting case of two-dimensional systems where the electric field in the surface layer is strong enough to produce a potential well which is narrower than the wavelengths of the charge carriers. In such a case, the electronic energy levels are grouped into subbands, each of which corresponds to a quantized motion normal to the surface, with a continuum for motion parallel to the surface. Consequently, various properties cannot be trivially deduced from those of the bulk semiconductor. See also Charge-coupled devices; Surface physics.
Harmonic
1 Physical term describing the vibration in segments of a sound-producing body (see sound). A string vibrates simultaneously in its whole length and in segments of halves, thirds, fourths, etc. These segments form what is known in algebra as a harmonic series or progression, since the rate of vibration of each segment is an integral multiple of the frequency of the whole string, i.e., each segment vibrates respectively twice, three times, four times, etc., as fast as the whole string. The vibration of the whole string produces the fundamental tone, and the segments produce weaker subsidiary tones. A similar phenomenon occurs in an air column in a pipe. At most the first 16 tones in such a series can be heard by the human ear; the character or timbre of a fundamental tone is determined by the number of its subsidiary tones heard and their relative intensity. The subsidiary tones have been loosely called harmonics (as a noun), but they are properly called partials, the fundamental tone being the first partial. They are also called overtones (a synonym for “upper partials”), although this term includes a number of sounds that do not fit in with the harmonic series, and are therefore not considered musical.
2 Term describing the silvery sound produced separately when the fundamental and possibly more partial tones are damped by touching a string at a nodal point. Similarly harmonics are produced separately in an air column by overblowing or in brass wind instruments by the use of valves.
In acoustics and telecommunication, the harmonic of a wave is a component frequency of the signal that is an integer multiple of the fundamental frequency. For example, if the frequency is f, the harmonics have frequency 2f, 3f, 4f, etc. The harmonics have the property that they are all periodic at the signal frequency. Also, due to the properties of Fourier series, the sum of the signal and its harmonics is also periodic at that frequency.
Many oscillators, including the human voice, a bowed violin string, or a Cepheid variable star, are more or less periodic, and thus can be decomposed into harmonics.
Most passive oscillators, such as a plucked guitar string or a struck drum head or struck bell, naturally oscillate at several frequencies known as overtones. When the oscillator is long and thin, such as a guitar string, a trumpet, or a chime, the overtones are still integer multiples of the fundamental frequency. Hence, these devices can mimic the sound of singing and are often incorporated into music. Overtones whose frequency is not an integer multiple of the fundamental are called inharmonic and are often perceived as unpleasant.
The untrained human ear typically does not perceive harmonics as separate notes; instead, they are heard collectively as the timbre of the tone. In a musical context, overtones that are not exactly integer multiples of the fundamental are known as inharmonic. Partials is the general term for all the component frequencies of a tone, harmonic and inharmonic alike. Bells have more clearly perceptible inharmonic partials than most instruments. Antique singing bowls are well known for their unique quality of producing multiple harmonic overtones or multiphonics.
The tight relation between overtones and harmonics in music often leads to their being used synonymously in a strictly musical context, but they are counted differently, leading to some possible confusion. This chart demonstrates how they are counted:

1f   440 Hz   fundamental frequency   first harmonic
2f   880 Hz   first overtone          second harmonic
3f  1320 Hz   second overtone         third harmonic
4f  1760 Hz   third overtone          fourth harmonic
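The counting rule is mechanical enough to regenerate the chart in a few lines of code; this sketch (Python) simply prints the same table for any chosen fundamental:

```python
# Reproduce the harmonic/overtone chart above for an arbitrary fundamental.
def ordinal(n):
    return {1: "first", 2: "second", 3: "third", 4: "fourth"}.get(n, "%dth" % n)

f0 = 440.0  # Hz, concert A, as in the chart
for n in range(1, 5):
    overtone = "fundamental frequency" if n == 1 else ordinal(n - 1) + " overtone"
    print("%df  %6.0f Hz  %-21s %s harmonic" % (n, n * f0, overtone, ordinal(n)))
```

Note the off-by-one naming the chart illustrates: the n-th harmonic is the (n − 1)-th overtone.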
In many musical instruments, it is possible to play the upper harmonics without the fundamental note being present. In a simple case (e.g. recorder) this has the effect of making the note go up in pitch by an octave; but in more complex cases many other pitch variations are obtained. In some cases it also changes the timbre of the note. This is part of the normal method of obtaining higher notes in wind instruments, where it is called overblowing. The extended technique of playing multiphonics also produces harmonics. On string instruments it is possible to produce very pure sounding notes, called harmonics by string players, which have an eerie quality, as well as being high in pitch. Harmonics may be used to check at a unison the tuning of strings that are not tuned to the unison. For example, lightly fingering the node found half way down the highest string of a cello produces the same pitch as lightly fingering the node 1/3 of the way down the second highest string. For the human voice see Overtone singing, which uses harmonics.
Harmonics may be either used or considered as the basis of just intonation systems. Composer Arnold Dreyblatt is able to bring out different harmonics on the single string of his modified double bass by slightly altering his unique bowing technique halfway between hitting and bowing the strings. Composer Lawrence Ball uses harmonics to generate music electronically.
The fundamental frequency is the reciprocal of the period of the periodic phenomenon.
Conservation of energy
The principle of conservation of energy states that energy cannot be created or destroyed, although it can be changed from one form to another. Thus in any isolated or closed system, the sum of all forms of energy remains constant. The energy of the system may be interconverted among many different forms—mechanical, electrical, magnetic, thermal, chemical, nuclear, and so on—and as time progresses, it tends to become less and less available; but within the limits of small experimental uncertainty, no change in total amount of energy has been observed in any situation in which it has been possible to ensure that energy has not entered or left the system in the form of work or heat. For a system that is both gaining and losing energy in the form of work and heat, as is true of any machine in operation, the energy principle asserts that the net gain of energy is equal to the total change of the system's internal energy.
There are many ways in which the principle of conservation of energy may be stated, depending on the intended application. Of particular interest is the special form of the principle known as the principle of conservation of mechanical energy which states that the mechanical energy of any system of bodies connected together in any way is conserved, provided that the system is free of all frictional forces, including internal friction that could arise during collisions of the bodies of the system.
J. P. Joule and others demonstrated the equivalence of heat and work by showing experimentally that for every definite amount of work done against friction there always appears a definite quantity of heat. The experiments usually were so arranged that the heat generated was absorbed by a given quantity of water, and it was observed that a given expenditure of mechanical energy always produced the same rise of temperature in the water. The resulting numerical relation between quantities of mechanical energy and heat is called the Joule equivalent, or is also known as mechanical equivalent of heat.
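In symbols, if W is the work done against friction and Q the heat that appears, the experiments established a fixed proportionality:

\[ W = J\,Q, \qquad J \approx 4.19\ \mathrm{J/cal} \]

The numerical value quoted is the modern one; Joule's own measurements came close to it.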
In view of the principle of equivalence of mass and energy in the restricted theory of relativity, the classical principle of conservation of energy must be regarded as a special case of the principle of conservation of mass-energy. However, this more general principle need be invoked only when dealing with certain nuclear phenomena or when speeds comparable with the speed of light (1.86 × 10⁵ mi/s or 3 × 10⁸ m/s) are involved.
Maxwell's equations
Four equations, formulated by James Clerk Maxwell, that together form a complete description of the production and interrelation of electric and magnetic fields. The statements of these four equations are (1) electric field diverges from electric charge, (2) there are no isolated magnetic poles, (3) electric fields are produced by changing magnetic fields, and (4) circulating magnetic fields are produced by changing electric fields and by electric currents. Maxwell based his description of electromagnetic fields on these four statements.
In electromagnetism, Maxwell's equations are a set of four equations that were first presented as a distinct group in 1884 by Oliver Heaviside in conjunction with Willard Gibbs. These equations had appeared throughout James Clerk Maxwell's 1861 paper entitled On Physical Lines of Force.
Those equations describe the interrelationship between electric field, magnetic field, electric charge, and electric current. Although Maxwell himself was the originator of only one of these equations (by virtue of modifying an already existing equation), he derived them all again independently in conjunction with his molecular vortex model of Faraday's "lines of force".
Although Maxwell's equations were known before special relativity, they can be derived from Coulomb's law and special relativity if one assumes invariance of electric charge.[1][2]
History
Maxwell's equations are a set of four equations originally appearing separately in Maxwell's 1861 paper On Physical Lines of Force as equation (54) Faraday's law, equation (56) div B = 0, equation (112) Ampère's law with Maxwell's correction, and equation (113) Gauss's law. They express respectively how changing magnetic fields produce electric fields, the experimental absence of magnetic monopoles, how electric currents and changing electric fields produce magnetic fields (Ampère's circuital law with Maxwell's correction), and how electric charges produce electric fields.
Maxwell introduced an extra term into Ampère's circuital law, proportional to the time derivative of the electric field and known as Maxwell's displacement current. This modification is the most significant aspect of Maxwell's work in electromagnetism.
In Maxwell's 1865 paper, A Dynamical Theory of the Electromagnetic Field, the modified version of Ampère's circuital law enabled him to derive the electromagnetic wave equation, hence demonstrating that light is an electromagnetic wave.
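In modern notation (a standard result, not reproduced in the source), the corrected equations imply that in vacuum both fields satisfy a wave equation whose propagation speed equals the measured speed of light:

$$\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}, \qquad c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^8\ \mathrm{m/s}.$$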
Apart from Maxwell's amendment to Ampère's circuital law, none of these equations was original. Maxwell, however, uniquely re-derived them hydrodynamically and mechanically using his vortex model of Faraday's lines of force.
In 1884 Oliver Heaviside, in conjunction with Willard Gibbs, grouped these equations together and restated them in modern vector notation. It is important to note, however, that in doing so Heaviside used partial time derivative notation, as opposed to the total time derivative notation used by Maxwell at equation (54). The consequence is that the v × B term that appeared in Maxwell's follow-up equation (77) is lost. Nowadays the v × B term sits beside the group known as Maxwell's equations and bears the name Lorentz force.
The matter is confused by the fact that the term Maxwell's equations is also used for a set of eight equations in Maxwell's 1865 paper, A Dynamical Theory of the Electromagnetic Field; the confusion is compounded because six of those eight equations are each written as three separate equations for the x, y, and z axes, which allowed Maxwell himself to refer to them as twenty equations in twenty unknowns.
The two sets of Maxwell's equations are nearly physically equivalent, although the v × B term at equation (D) of the original eight is absent from the modern Heaviside four. The Maxwell-Ampère equation in Heaviside's restatement is an amalgamation of two equations in the set of eight that Maxwell published in his 1865 paper.
Summary of the modern Heaviside versions
Symbols in bold represent vector quantities, whereas symbols in italics represent scalar quantities.
The equations are given here in SI units. Unlike the equations of mechanics (for example), Maxwell's equations do not retain the same form in other unit systems. Though the general form remains the same, various definitions change and different constants appear in different places. For example, the electric field and the magnetic field have the same unit (gauss) in the Gaussian system. Besides SI (used in engineering), the units commonly used are Gaussian units (based on the cgs system and considered to have some theoretical advantages over SI[3]), Lorentz-Heaviside units (used mainly in particle physics), and Planck units (used in theoretical physics).
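The equations themselves evidently did not survive in this copy. For reference, the standard modern (Heaviside) differential forms in SI units are as follows (textbook statements, supplied here rather than recovered from the original):

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0} \quad \text{(Gauss's law)}$$
$$\nabla \cdot \mathbf{B} = 0 \quad \text{(no magnetic monopoles)}$$
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \quad \text{(Faraday's law)}$$
$$\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \quad \text{(Ampère's law with Maxwell's correction)}$$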
Conduction
The passage of electric charges due to a force exerted on them by an electric field. Conductivity is the measure of the ability of a conductor to carry electric current; it is defined as the amount of charge passing through unit area of the conductor (perpendicular to the current direction) per second, divided by the electric field intensity (the force on a unit charge). Conductivity is the reciprocal of resistivity and is therefore commonly expressed in units of siemens per meter, abbreviated S/m. See also Electrical resistivity.
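In symbols (standard notation, not given in the passage above), with current density $J$ and electric field intensity $E$:

$$\sigma = \frac{J}{E} = \frac{1}{\rho} \quad [\mathrm{S/m}],$$

where $\rho$ is the resistivity. Copper, for example, has $\sigma \approx 5.96 \times 10^7$ S/m at room temperature.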
In metals and semiconductors (such as silicon, of which transistors are made) the charges that are responsible for current are free electrons and holes (which, as missing electrons, act like positive charges). These are electrons or holes not bound to any particular atom and therefore able to move freely in the field. Conductivity due to electrons is known as n-type conductivity; that due to holes is known as p-type. See also Hole states in solids; Semiconductor.
The conductivity of metals is much higher than that of semiconductors because they have many more free electrons or holes. The free electrons or holes come from the metal atoms. Semiconductors differ from metals in two important respects. First, the semiconductor atoms do not contribute free electrons or holes unless thermally excited, and second, free electrons or holes can also arise from impurities or defects.
An exception to some of the rules stated above has been found in conjugated polymers. Polyacetylene, for example, although a semiconductor with extremely high resistance when undoped, can be doped so heavily with certain nonmetallic impurities (iodine, for example) that it attains a conductivity comparable to that of copper. See also Organic conductor.
In metals, although the number of free carriers does not vary with temperature, an increase in temperature decreases conductivity. The reason is that increasing temperature causes the lattice atoms to vibrate more strongly, impeding the motion of the free carriers in the field. This effect also occurs in semiconductors, but there the increase in the number of free carriers with temperature is usually a stronger effect. At low temperatures the thermal vibrations are weak, and the impediment to the motion of free carriers comes instead from imperfections and impurities, an effect which in metals usually does not vary with temperature. At the lowest temperatures, close to absolute zero, certain metals become superconductors, possessing infinite conductivity. See also Superconductivity.
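For metals near room temperature, the temperature dependence described above is often approximated by a linear resistivity model. The following minimal Python sketch illustrates it; the coefficient values for copper are common textbook approximations, not figures from the text.

# Minimal sketch: linear temperature dependence of metal resistivity,
# rho(T) = rho0 * (1 + alpha * (T - T0)), a good approximation for
# metals near room temperature. Values for copper are assumed textbook figures.

RHO0_COPPER = 1.68e-8    # ohm·m at T0 = 20 °C (assumed value)
ALPHA_COPPER = 0.00393   # per °C, temperature coefficient (assumed value)

def resistivity(t_celsius, rho0=RHO0_COPPER, alpha=ALPHA_COPPER, t0=20.0):
    """Resistivity (ohm·m) at the given temperature in °C."""
    return rho0 * (1.0 + alpha * (t_celsius - t0))

def conductivity(t_celsius):
    """Conductivity (S/m), the reciprocal of resistivity."""
    return 1.0 / resistivity(t_celsius)

if __name__ == "__main__":
    # Conductivity falls as temperature rises, as the text explains.
    for t in (0, 20, 100):
        print(f"{t:>4} °C: sigma ≈ {conductivity(t):.3e} S/m")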
Electrolytes conduct electricity by means of the positive and negative ions in solution. In ionic crystals, conduction may also take place by the motion of ions. This motion is much affected by the presence of lattice defects such as interstitial ions, vacancies, and foreign ions. See also Electrolytic conductance; Ionic crystals.
Electric current can flow through an evacuated region if electrons or ions are supplied. In a vacuum tube the current carriers are electrons emitted by a heated filament. The conductivity is low because only a small number of electrons can be “boiled off” at the normal temperatures of electron-emitting filaments. See also Electron emission; Vacuum tube.
Ground loop (electricity)
In an electrical system, ground loop refers to a current, generally unwanted, in a conductor connecting two points that are supposed to be at the same potential, that is, ground, but are actually at different potentials. Ground loops can be detrimental to the intended operation of the electrical system.
Description
A ground loop in a medium connecting circuits that are designed to be at the same potential but are actually at different potentials can be hazardous, or can cause problems in the electrical system. For example, the electrical potential at different points on the surface of the Earth can vary by thousands of volts, primarily from the influence of charged clouds. Such an occurrence can be hazardous, for example, to personnel working on long metal conductors.
In a floating ground system, that is, one not connected to Earth, the voltages will probably be unstable, and if some of the conductors that constitute the return circuit to the source have a relatively high resistance, or have high currents flowing through them that produce a significant voltage (I·R) drop, they can be hazardous.
Low-current wiring is particularly susceptible to ground loops. If two pieces of audio equipment are plugged into different power outlets, there will often be a difference in their respective ground potentials. If a signal is passed from one to the other via an audio connection with the ground wire intact, this potential difference causes a spurious current to flow through the cables, creating an audible buzz at the AC mains base frequency (50 or 60 Hz) and its harmonics (120 Hz, 240 Hz, and so on), called mains hum. Performers sometimes remove the grounding pin from the cord connecting an appliance to the power outlet; however, this creates an electrocution risk.

The first solution is to ensure that all metal chassis are interconnected, and then connected to the electrical distribution system at a single point. The next is to use shielded cables for the low-current signals, with the shield connected only at the source end. Another solution is to use isolation transformers, opto-isolators, or baluns to avoid a direct electrical connection between the different grounds, although the bandwidth of such devices must be considered; the better isolation transformers have grounded shields between the two sets of windings. In circuits handling high frequencies, such as computer monitors, chokes are placed at the ends of the cables just before the termination to the next appliance (e.g., a computer). These chokes are most often called ferrite core devices.
In video, a ground loop can be seen as hum bars (bands of slightly different brightness) scrolling vertically up the screen. These are frequently seen with video projectors, where the display device has its case grounded via a 3-prong plug while the other components have a floating ground connected to the CATV coax. In this case the video cable is grounded at the projector end to the home electrical system, and at the other end to the cable TV's ground, inducing a current through the cable which distorts the picture. As with audio ground loops, this problem can be solved by placing an isolation transformer on the cable TV coax. Alternatively, one can use a surge protector that includes coax protection. If the coax is routed through the same surge protector as the 3-prong device, both will be regrounded to the surge protector.
Ground loop issues with television coaxial cable can also affect any connected audio devices, such as a receiver. Even if all of the audio and video equipment in, for example, a home theater system is plugged into the same power outlet, and thus all share the same ground, the coaxial cable entering the TV is actually grounded at the cable company. The potential of this ground is likely to differ slightly from the potential of the house's ground, so a ground loop occurs, causing undesirable mains hum in the system's speakers.
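The size of the spurious current follows directly from Ohm's law (the figures here are illustrative assumptions, not measurements from the text): a ground potential difference of 0.5 V across a cable shield with a resistance of 0.1 Ω drives a current of $I = V/R = 0.5/0.1 = 5$ A through the shield, ample to couple audible hum into the signal conductors.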
Grounding and ground loops are also important in circuit design. In many circuits, large currents may flow through the ground plane, producing voltage differences in the ground reference in different parts of the circuit and leading to hum and other problems. Several techniques should be used to avoid ground loops and otherwise guarantee good grounding:
The external shield, and the shields of all connectors, should be connected together. This external ground should be connected to the ground plane of the PCB at only one point; this avoids large currents flowing through the ground plane of the PCB. If the connectors are mounted on the PCB, the outer perimeter of the PCB should contain a strip of copper connecting to the shields of the connectors, with a break in the copper between this strip and the main ground plane of the circuit, the two being connected at only one point. This way, if a large current flows between connector shields, it does not pass through the ground plane of the circuit.
A star topology should be used for ground distribution, avoiding loops.
Power devices should be placed closest to the power supply, while low-power devices can be placed further from it.
Signals, wherever possible, should use differential signaling.
Ohm's law
The law stating that the direct current flowing in a conductor is directly proportional to the potential difference between its ends. It is usually formulated as V = IR, where V is the potential difference, or voltage, I is the current, and R is the resistance of the conductor.
Following are the formulas for computing voltage, current, resistance, and power. Traditionally, E (for electromotive force) is used for voltage, but V is often substituted.
V or E = voltage (E = electromotive force)
I = current in amps (I=intensity)
R = resistance in ohms
P = power in watts
(using V)        (using E)
V = I * R        E = I * R
I = V / R        I = E / R
R = V / I        R = E / I
P = V * I        P = E * I
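A minimal Python sketch applying the relations above; the function name and interface are illustrative, not part of any standard library.

# Minimal sketch of the Ohm's law relations tabulated above:
# given any two of voltage (V), current (A), and resistance (ohms),
# compute the third, plus the power (W) from P = V * I.

def ohms_law(v=None, i=None, r=None):
    """Return (v, i, r, p), filling in the one missing value."""
    if v is None:
        v = i * r
    elif i is None:
        i = v / r
    elif r is None:
        r = v / i
    return v, i, r, v * i

# Example: a 120 V supply across a 60-ohm load draws 2 A and dissipates 240 W.
print(ohms_law(v=120.0, r=60.0))   # -> (120.0, 2.0, 60.0, 240.0)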
Electric motor
An electric motor is a machine that converts electrical energy into mechanical energy. When an electric current is passed through a wire loop that is in a magnetic field, the loop will rotate, and the rotating motion is transmitted to a shaft, providing useful mechanical work. The traditional electric motor consists of a conducting loop mounted on a rotatable shaft. Current fed in by carbon blocks, called brushes, enters the loop through two slip rings. The magnetic field around the loop, supplied by an iron-core field magnet, causes the loop to turn when current is flowing through it.

In an alternating current (AC) motor, the current flowing in the loop is synchronized to reverse direction at the moment when the plane of the loop is perpendicular to the magnetic field and there is no magnetic force exerted on the loop. Because the momentum of the loop carries it around until the current is again supplied, continuous motion results. In AC induction motors the current passing through the loop does not come from an external source but is induced as the loop passes through the magnetic field. In a direct current (DC) motor, a device known as a split-ring commutator switches the direction of the current each half rotation to maintain the same direction of motion of the shaft.

In any motor the stationary parts constitute the stator, and the assembly carrying the loops is called the rotor, or armature. Because it is easy to control the speed of direct-current motors by varying the field or armature voltage, these are used where speed control is necessary. The speed of AC induction motors is set roughly by the motor construction and the frequency of the current; a mechanical transmission must therefore be used to change speed. In addition, each different design fits only one application. However, AC induction motors are cheaper and simpler than DC motors. To obtain greater flexibility, the rotor circuit can be connected to various external control circuits. Most home appliances with small motors have a universal motor that runs on either DC or AC. Where the expense is warranted, the speed of AC motors is controlled by employing special equipment that varies the power-line frequency, which in the United States is 60 hertz (Hz), or 60 cycles per second.

Brushless DC motors are constructed in a reverse fashion from the traditional form: the rotor contains a permanent magnet and the stator carries the conducting coil of wire. By eliminating brushes, these motors offer reduced maintenance, no spark hazard, and better speed control. They are widely used in computer disk drives, tape recorders, CD drives, and other electronic devices. Synchronous motors turn at a speed exactly proportional to the frequency. The very largest motors are synchronous motors with DC passing through the rotor.
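The proportionality mentioned for synchronous motors is conventionally expressed (a standard formula, not stated in the passage above) as

$$n_{\text{sync}} = \frac{120\,f}{p}\ \text{rpm},$$

where $f$ is the line frequency in hertz and $p$ is the number of magnetic poles. A four-pole machine on a 60 Hz supply, for example, turns at exactly 1800 rpm.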
Power quality
Power quality is a term used to describe electric power that drives an electrical load, and the load's ability to function properly with that electric power. Without the proper power, an electrical device (or load) may malfunction, fail prematurely, or not operate at all. There are many ways in which electric power can be of poor quality, and many more causes of such poor-quality power.
The electric power industry is in the business of electricity generation (AC power), electric power transmission, and ultimately electricity distribution to a point often located near the electricity meter of the end user of the electric power. The electricity then moves through the distribution and wiring system of the end user until it reaches the load. The complexity of the system required to move electric energy from the point of production to the point of consumption, combined with variations in weather, electricity demand, and other factors, provides many opportunities for the quality of the power delivered to be compromised.
While "power quality" is a convenient term for many, it is actually the quality of the voltage, rather than power or electric current, that is actual topic described by the term. Power is simply the flow of energy and the current demanded by a load is largely uncontrollable. Nevertheless the relationship between the concepts of "voltage quality" and energy quality is unknown.
Introduction
It is often useful to think of power quality as a compatibility problem: is the equipment connected to the grid compatible with the events on the grid, and is the power delivered by the grid, including the events, compatible with the equipment that is connected? Compatibility problems always have at least two solutions: in this case, either clean up the power, or make the equipment tougher.
Ideally, electric power would be supplied as a sine wave with the amplitude and frequency given by national standards (in the case of mains) or system specifications (in the case of a power feed not directly attached to the mains), with an impedance of zero ohms at all frequencies.
No real-life power feed will ever meet this ideal. It can deviate from it in the following ways (among others):
Variations in the peak or RMS voltage are both important to different types of equipment.
When the RMS voltage exceeds the nominal voltage by 10 to 80% for 0.5 cycle to 1 minute, the event is called a "swell".
A "dip" (in British English) or a "sag" (in American English - the two terms are equivalent) is the opposite situation: the RMS volage is below the nominal voltage by by 10 to 90% for 0.5 cycle to 1 minute.
Random or repetitive variations in the RMS voltage between 90 and 110% of nominal can produce a phenomenon known as "flicker" in lighting equipment. Flicker is the impression of unsteadiness of visual sensation induced by a light stimulus on the human eye. A precise definition of the voltage fluctuations that produce flicker has been the subject of ongoing debate in more than one scientific community for many years.
Abrupt, very brief increases in voltage, called "spikes", "impulses", or "surges", generally caused by large inductive loads being turned off, or more severely by lightning.
"Undervoltage" occurs when the nominal voltage drops below 90% for more than 1 minute. The term "brownout" in common usage has no formal definition but is commonly used to describe a reduction in system voltage by the utility or system operator to decrease demand or to increase system operating margins.
"Overvoltage" occurs when the nominal voltage rises above 110% for more than 1 minute.
Variations in the frequency
Variations in the wave shape - usually described as harmonics
Nonzero low-frequency impedance (when a load draws more power, the voltage drops)
Nonzero high-frequency impedance (when a load demands a large amount of current, then stops demanding it suddenly, there will be a dip or spike in the voltage due to the inductances in the power supply line)
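As a rough illustration of the event definitions above, the following Python sketch classifies an RMS voltage reading against those thresholds. The nominal voltage and the 60 Hz cycle length are assumptions for the example, not values from the text.

# Minimal sketch: classify an RMS voltage reading using the thresholds
# defined in the list above. NOMINAL_V and the 60 Hz assumption are
# illustrative choices, not part of the source text.

NOMINAL_V = 120.0           # assumed nominal RMS voltage
HALF_CYCLE_S = 0.5 / 60.0   # 0.5 cycle at an assumed 60 Hz

def classify_rms(v_rms, duration_s):
    ratio = v_rms / NOMINAL_V
    short = HALF_CYCLE_S <= duration_s <= 60.0   # 0.5 cycle to 1 minute
    if short and 1.10 <= ratio <= 1.80:
        return "swell"
    if short and 0.10 <= ratio <= 0.90:
        return "sag (dip)"
    if duration_s > 60.0 and ratio < 0.90:
        return "undervoltage"
    if duration_s > 60.0 and ratio > 1.10:
        return "overvoltage"
    return "normal"

print(classify_rms(100.0, 0.5))    # -> sag (dip)
print(classify_rms(135.0, 90.0))   # -> overvoltage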
Power conditioning
Power conditioning refers to modifying the power to improve its quality.
An uninterruptible power supply can be used to switch a load off mains power when there is a transient (temporary) condition on the line. However, cheaper UPS units themselves create poor-quality power, akin to imposing a higher-frequency, lower-amplitude sawtooth wave atop the sine wave.
A surge protector or simple capacitor or varistor can protect against most overvoltage conditions, while a lightning arrestor protects against severe spikes.
Electronic filters can remove harmonics.
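The harmonic content that such filters target is commonly quantified as total harmonic distortion (THD). The following Python sketch estimates THD from waveform samples via an FFT; it assumes NumPy is available and that the fundamental frequency is known, and is an illustration rather than anything from the original text.

# Minimal sketch: estimate total harmonic distortion (THD) of a sampled
# waveform, i.e. the RMS of harmonics 2..9 relative to the fundamental.
import numpy as np

def thd(samples, sample_rate, fundamental_hz):
    spectrum = np.abs(np.fft.rfft(samples))
    bins_per_hz = len(samples) / sample_rate
    fund_bin = int(round(fundamental_hz * bins_per_hz))
    fundamental = spectrum[fund_bin]
    harmonics = [spectrum[k * fund_bin]
                 for k in range(2, 10) if k * fund_bin < len(spectrum)]
    return np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

# Example: a 60 Hz sine with a 5% third harmonic gives THD ≈ 0.05.
t = np.arange(0, 1.0, 1.0 / 6000.0)
wave = np.sin(2 * np.pi * 60 * t) + 0.05 * np.sin(2 * np.pi * 180 * t)
print(f"THD ≈ {thd(wave, 6000, 60):.3f}")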