It all started in a laboratory in France in 1839, where physicist Alexandre-Edmond Becquerel was working with metal electrodes in an electrolyte solution. What he noted, but couldn’t explain, was that faint electric currents were produced whenever the metals were exposed to light. He thus became the first scientific observer of the ‘photovoltaic effect’.
In 1865, a large Gutta Percha Company ship moved across the Atlantic laying the first successful transatlantic telegraph line. Its chief electrical engineer, Willoughby Smith, had developed a method for continually testing the cable for defects as it was being submerged, using a particular material, selenium, chosen for its semi-conductive properties and high electrical resistance. Smith’s selenium rods worked well, but only at night. So he set out to discover the cause of the anomaly.
Smith designed a box with a sliding cover to hold the rods. When the box was closed and light kept out, the bars’ resistance (how much they impede electrical flow) was high and constant. But removing the cover made the bars’ conductivity (how readily they carry electrical flow) increase suddenly, in proportion to the amount of sunlight.
To find out whether light or heat was affecting the selenium, he submerged a bar of it in water. He covered and uncovered it, found the results the same as before, and concluded that “the resistance [of the selenium] was altered…according to the intensity of light”. He published his groundbreaking work in the 1873 Journal of the Society of Telegraph Engineers.
A few years later, in 1876, Alexander Graham Bell patented his telephone technologies. With the resulting influx of investment funds, he went on to establish the American Telephone and Telegraph Company (AT&T) and its prominent Bell Laboratories, famous for research and development (R&D). More on them later, though.
Meanwhile, two British scientists, Professor William Adams and his student Richard Day, took up Smith’s mantle, performing their own experiments on some of the same selenium rods. They lit a candle near one of the rods and noticed that the needle on their measuring device reacted immediately. Since heat effects build only gradually, this proved that light alone was causing a “flow of electricity” through a solid material. They termed the electrical current produced by light “photoelectric” and published their research in the Proceedings of the Royal Society of London in 1877. “Dilly dilly”, if you’re still reading, B.T.W.
But solar science would remain theoretical until the late, great Charles Fritts, an inventor from New York, approached the problem. Coming from an electrical engineering background like Smith, Fritts was well acquainted with the shortcomings of coal-powered electricity: the fuel had to be transported, and it was messy. His idea was to spread a wide, thin layer of selenium onto a broad metal plate, cover it with an ultra-thin layer of semi-transparent gold leaf, and place a glass pane atop. In doing so, Fritts engineered the first photovoltaic cell in 1883.
He tested his selenium solar ‘modules’, in the world’s first solar array, on a NYC rooftop in 1884. It worked, though at less than half a percent efficiency (the share of sunlight converted to electricity). Still, Fritts proudly claimed that his ‘modules’ generated a current “that is continuous, constant, and of considerable force[,]…not only by exposure to sunlight, but also to dim diffused daylight, and even lamplight.” Fritts published his work, “On a New Form of Selenium Photocell”, in the American Journal of Science 26, 1883.
Fritts also sent a panel to Werner von Siemens, an inventor on par with Edison at the time, who said that it “presented to us, for the first time, the direct conversion of the energy of light into electrical energy”. He was right, of course, and he also said that photoelectricity was “scientifically, of the most far-reaching importance”, though he, like other notable scientists of the time, understood very little of it.
Photoelectricity was largely ignored in its infancy because it relied on light, not heat, to generate electricity, which set it against every other powering device of the time. Even a simple black box could collect more heat energy from sunlight than photoelectricity could produce. Then Albert Einstein changed how we look at light, showing that it is made of discrete units of light energy, now called photons, whose energy varies with the wavelength of light along the electromagnetic spectrum. A photon of violet light, for example, carries nearly twice the energy of a photon of red light.
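The inverse relation between photon energy and wavelength can be sketched numerically. The wavelengths below are illustrative choices at the violet and red ends of the visible spectrum, not values from the text:

```python
# Photon energy scales inversely with wavelength: E = h * c / lambda.
PLANCK_H = 6.626e-34   # Planck constant, J·s
LIGHT_C = 2.998e8      # speed of light, m/s
EV = 1.602e-19         # joules per electron-volt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon of the given wavelength, in electron-volts."""
    wavelength_m = wavelength_nm * 1e-9
    return PLANCK_H * LIGHT_C / wavelength_m / EV

violet = photon_energy_ev(400)   # short visible wavelength
red = photon_energy_ev(700)      # long visible wavelength
print(f"violet: {violet:.2f} eV, red: {red:.2f} eV, ratio: {violet / red:.2f}")
# prints: violet: 3.10 eV, red: 1.77 eV, ratio: 1.75
```

The shorter the wavelength, the more energetic each photon, which is why the color of light, and not just its brightness, matters to a photovoltaic material.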
The turn of the twentieth century saw these discoveries, along with others such as the electron, followed by floods of research into their behavior. That research allowed photoelectricity to be scientifically understood for the first time:
In some materials, such as selenium, the energy from more powerful photons can knock loosely bound electrons out of their atomic orbits. If wires are attached to the material, the freed electrons can flow out as direct current (DC) electricity.
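A minimal sketch of that condition: a photon frees an electron only when its energy exceeds the material’s energy threshold. The 1.8 eV threshold used here is an assumed, illustrative value for a selenium-like material, not a figure from the text:

```python
# Sketch of the photoelectric condition: a photon frees a loosely bound
# electron only if its energy exceeds the material's threshold.
# The 1.8 eV default is an assumption for illustration only.
HC_EV_NM = 1239.84  # h*c in eV·nm, so E(eV) = 1239.84 / wavelength(nm)

def can_free_electron(wavelength_nm: float, threshold_ev: float = 1.8) -> bool:
    """True if a photon of this wavelength can eject an electron."""
    return HC_EV_NM / wavelength_nm > threshold_ev

print(can_free_electron(450))   # blue light (~2.76 eV): True
print(can_free_electron(900))   # near-infrared (~1.38 eV): False
```

This is the step-change Einstein explained: below the threshold, more light of the same wavelength frees no electrons at all, no matter how bright.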
Nineteenth-century scientists had termed the process ‘photoelectrics’, but by the 1920s it had come to be known as ‘photovoltaics’. Yet the consensus remained that the technology would find no practical application until its efficiency could be increased many times over.
Around the middle of the twentieth century, Gerald Pearson, an experimental scientist, and Calvin Fuller, a chemist, worked together at the aforementioned Bell Laboratories. There they solved the crux of the photovoltaic problem while developing the first silicon transistor, the principal component of every electronic device today. Fuller had worked out how to introduce impurities into a piece of silicon to change it from a poor conductor of electricity into a superlative one. He gave Pearson a piece of silicon treated with a small concentration of gallium, which gave the silicon a positive charge. When Pearson dipped the silicon into hot lithium, as per Fuller’s formula, the portion immersed in the lithium became negatively charged. The area where the two poles met developed a permanent electrical field, and became better known as the positive-negative (p-n) junction, the center of electronic activity in both the solar cell and the transistor. All that was needed was lamplight to supply the small amount of energy the modified silicon required to set its electricity flowing.
Around the same time, Bell Labs tasked Daryl Chapin with finding a way to provide small amounts of occasional power in remote, humid areas. What he quickly found was that traditional dry-cell batteries failed in high humidity; he needed an alternative energy source. Although it was suggested that he investigate other options, such as thermoelectric, wind, and steam machines, Chapin decided on solar cells. His selenium solar cells measured only 0.5 percent efficiency, however. When word of Chapin’s struggles made it back to Bell Labs, Pearson told him to try his and Fuller’s silicon cells instead, which measured 2.3 percent efficiency. So Chapin gave up on selenium in favor of improving silicon solar cells.
Many trials followed for Chapin and his silicon cells. As primary engineer, Chapin tested thousands of configurations with different materials. With many modifications, and more help from Fuller, Chapin reached his goal of 6 percent efficiency. Bell Labs and the trio presented The Bell Solar Battery to the press on April 25th, 1954, powering a tiny Ferris wheel.
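Efficiency, the yardstick running through this story, is simply electrical power out divided by solar power in. A minimal sketch with assumed, illustrative numbers (the cell area and sunlight intensity below are my assumptions, not figures from the text):

```python
# Cell efficiency = electrical power out / solar power in.
# Bright sunlight delivers roughly 1000 W per square meter at the surface.

def cell_efficiency(power_out_w: float, irradiance_w_m2: float, area_m2: float) -> float:
    """Fraction of incident solar power converted to electricity."""
    return power_out_w / (irradiance_w_m2 * area_m2)

# A hypothetical 100 cm^2 (0.01 m^2) cell delivering 0.6 W in full sun:
eff = cell_efficiency(0.6, 1000.0, 0.01)
print(f"{eff:.1%}")   # prints: 6.0%
```

By this measure, Fritts’s half-percent selenium array and Chapin’s 6 percent silicon cell are a twelvefold leap, which is what finally made the technology look practical.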
U.S. News & World Report published an article titled “Fuel Unlimited” praising the silicon that “may provide more power than all the world’s coal, oil and uranium.” The New York Times agreed, on its front page, that the work of Chapin, Pearson, and Fuller “may mark the beginning of a new era, leading eventually to the realization of one of mankind’s most cherished dreams— the harnessing of the almost limitless energy of the sun for the uses of civilization.”